Industry Insights with Michiel Prins, Co-founder of HackerOne

The Hacker Perspective on Generative AI and Cybersecurity

Unveiling the Risks and Insights: A Hacker's Take on Generative AI and Cybersecurity

Generative AI has undergone incredibly fast adoption, with fresh launches of the latest large language model (LLM) coming every day. As with any new technology, however, we often don’t understand the risk implications before rushing to build it into our applications.

See Also: 5 Requirements for Modern DLP

Ethical hackers understand the ins and outs of the security issues inherent in Generative AI, and they’ve been exploring the common mistakes made by organizations rushing to leverage the technology. Who better to learn from when it comes to preventing and managing risks than the hackers who know how to exploit them?

We’ve spoken with several experienced hackers in the space to get their perspectives on the most important considerations for Generative AI and cybersecurity.

Future Risk Predictions

In a recent presentation at Black Hat 2023, HackerOne Founder, Michiel Prins, and hacker, Joseph Thacker aka @rez0, discussed some of the most impactful risk predictions related to Generative AI and LLMs, including:

  • Increased risk of preventable breaches
  • Loss of revenue and brand reputation
  • Increased cost of regulatory compliance
  • Diminished competitiveness
  • Reduced ROI on development investments

The Top Generative AI and LLM Risks According to Hackers

According to hacker Gavin Klondike, “We’ve almost forgotten the last 30 years of cybersecurity lessons in developing some of this software.” The haste of GAI adoption has clouded many organizations’ judgment when it comes to the security of artificial intelligence. Security researcher Katie Paxton-Fear aka @InsiderPhD, believes, “this is a great opportunity to take a step back and bake some security in as this is developing and not bolting on security 10 years later.”

Prompt Injections

The OWASP Top 10 for LLM defines prompt injection as a vulnerability during which an attacker manipulates the operation of a trusted LLM through crafted inputs, either directly or indirectly.Thacker uses this exampleto help understand the power of prompt injection:

“If an attacker uses prompt injection to take control of the context for the LLM function call, they can exfiltrate data by calling the web browser feature and moving the data that are exfiltrated to the attacker’s side.”

Ethical hacker, Roni Carta aka @arsene_lupin, points out that if developers are using ChatGPT to help install prompt packages on their computers, they can run into trouble when asking it to find libraries. Carta says, “ChatGPT hallucinates library names, which threat actors can then take advantage of by reverse-engineering the fake libraries.”

Agent Access Control

“LLMs are only as good as their data,” says Thacker. “The most useful data is often private data.”

According to Thacker, this creates an extremely difficult problem in the form of agent access control. Access control issues are very common vulnerabilities found through the HackerOne platform every day. Where access control goes particularly wrong regarding AI agents is the mixing of data. Thacker says AI agents have a tendency to mix second-order data access with privileged actions, exposing the most sensitive information to potentially be exploited by bad actors.

The Evolution of the Hacker in the Age of Generative AI

Naturally, as new vulnerabilities emerge from the rapid adoption of Generative AI and LLMs, the role of the hacker is also evolving. During a panel featuring security experts from Zoom and Salesforce, hacker Tom Anthony predicted the change in how hackers approach processes with AI:

“At a recent Live Hacking Event with Zoom, there were easter eggs for hackers to find — and the hacker who solved them used LLMs to crack it. Hackers are able to use AI to speed up their processes by, for example, rapidly extending the word lists when trying to brute force systems.”

There are even new tools for the education of hacking LLMs — and therefore for identifying the vulnerabilities created by them. Anthony uses an online game for prompt injection where you work through levels, tricking the GPT model to give you secrets. It’s all developing so quickly.”

Use the Power of Hackers for Secure Generative AI

Even the most sophisticated security programs are unable to catch every vulnerability. HackerOne is committed to helping organizations secure their GAI and LLMs and to staying at the forefront of security trends and challenges. With HackerOne, organizations can:

About the Author

Michiel Prins, Co-founder of HackerOne

Michiel Prins, Co-founder of HackerOne

Co-founder of HackerOne

Prins is a co-founder and senior director of professional services at HackerOne, the leader in attack resistance. He is an information security expert, researcher, hacker and developer who has been finding critical software vulnerabilities in technology for over 10 years. Prior to founding HackerOne, Prins co-founded a successful penetration testing company that worked on projects for trusted organizations from governments institutions to top technology companies, including Twitter, Facebook, Evernote and Airbnb, among others. He regularly presents on vulnerability disclosure and security research projects regarding security management, privacy and web application infrastructure.

Around the Network

Our website uses cookies. Cookies enable us to provide the best experience possible and help us understand how visitors use our website. By browsing, you agree to our use of cookies.