Euro Security Watch with Mathew J. Schwartz


Yes, Virginia, ChatGPT Can Be Used to Write Phishing Emails

But for All AI Malicious Use Cases, Better Alternatives Abound - At Least So Far

"That's all well and good, but what does it mean for cybersecurity?" So goes the refrain with all new things technology, and ChatGPT is no exception.


Without question, 2023 is the year of large language models, or LLMs. Microsoft is investing billions in ChatGPT's creator, OpenAI, to add "AI" capabilities to its products, from images generated on demand for pasting into PowerPoint to better Bing search engine results. Google introduced its rival offering, Bard, earlier this month, and China's Baidu announced that its approach, Wenxin Yiyan - aka Ernie Bot - will soon go live.

As with any new type of technology, criminals are also exploring how AI chatbots might make their schemes easier and more profitable. Script kiddies in particular have been asking whether ChatGPT might help them build better malware for free.

Results have been extremely mixed. "Right now, I think it's a novelty," says John Kindervag, creator of zero trust and senior vice president of cybersecurity strategy at ON2IT Group. But as AI gets better, he says, "probably it will allow the attackers to craft more sophisticated attacks, and it will toast everybody who is not paying attention."

So far, at least, the fervor over AI chatbots being used to build a better cybercrime mousetrap is claptrap, says security researcher Marcus Hutchins, aka MalwareTech.

As an example, he instructed ChatGPT to generate cookie-stealing malware for Chrome and found that it created nonfunctional, error-riddled code that required advanced skills to debug. This highlights how programming isn't just a technical exercise; it also requires making design choices and understanding a language's limits. "Only then can you begin translating ideas into code," Hutchins says in a blog post.

Criminals needn't bother to use AI chatbots, which are trained on publicly available code. Instead, they can go to the source. "If someone with zero coding ability wants malware, there are thousands of ready-to-go examples available on Google" and GitHub, Hutchins says.

Another rising concern is that criminals will use AI chatbots to craft better phishing email lures, especially outside their native language.

Google Translate, of course, already does this for free. But the proven approach is to instead pay "a small fee" to someone fluent in multiple languages, which Hutchins says "cybercriminals have been doing for decades." In addition, ChatGPT is unable to design phishing emails that look like the real thing. But criminals can do that simply by copying and pasting the HTML code from a legitimate email.

So far, the main cybercrime threat posed by ChatGPT is the same one posed by the World Cup, the Olympics, natural disasters and other high-profile events: criminals crafting lures that reference it. Already, multiple fake OpenAI social media pages are being used to spread malware, malicious Android apps are sporting the ChatGPT icon and name, and phishing sites pretending to be official ChatGPT subscription pages are stealing credit card data, reports threat intelligence firm Cyble.

Use of LLMs also poses unknown risks to some types of data security, privacy and integrity. Imagine the chaos that could ensue if an AI chatbot's predictive capabilities were used to summarize test results for a patient's medical chart or to recommend a course of treatment, when such tools will argue incorrectly with users about what year it is. That's one reason why AI chatbot terms and conditions prohibit their use for healthcare purposes.

Trust or Distrust?

Too often, new technology is little more than a set of engineering capabilities in search of a problem to solve. To borrow the words of speculative fiction writer William Gibson: When it comes to AI chatbots, should we "distrust that particular flavor"?

Certainly, ChatGPT is far from perfect, and its attempts to synthesize facts often result in factual inaccuracies. All LLMs also suffer from the "garbage in, garbage out" problem, as demonstrated when Bard recently delivered an incorrect answer in a promotional video, sending Google's stock tumbling.

Researchers have found ChatGPT has the potential to spread misinformation at a massive scale, because that's part of what it's been trained on. Knowing technology firms, they'll likely try to delegate the task of separating fact from fiction to their customers, just as using CAPTCHAs to block bots has helped Google make use of humans to refine its image recognition tools.

As LLMs become better trained, AI error rates will arguably plummet.

In the interim, Palo Alto Networks CEO Nikesh Arora says ChatGPT's humanlike ability to amuse and inform, as well as to summarize large data sets, is "the best thing that's happened to security" because it has raised customers' expectations for what their tools should be able to do. Vendors who can deliver clean, comprehensive and real-time results - via AI - are arguably set to thrive.

Cybercrime Business Case

For cybercriminals, however, the business case, at least in the near term, remains less clear. Hutchins says perhaps criminals will automate tasks, such as prepping troll farm content or running tech support scams.

Whatever business model criminals try to innovate will still need to show a return on investment. Last fall, for example, former members of the Conti ransomware group laid off 45 employees at a call center they were using to trick victims into installing remote control software on their PCs, Yelisey Bohuslavskiy, chief research officer at New York-based threat intelligence firm Red Sense, tells The Wall Street Journal.

The problem, he says, is that the criminals' call center was losing money. "It wasn't producing enough revenue, as its effectiveness dropped over time," he tells me. After its spring 2022 launch, "when it was novel, the methodology was crazy effective, but then it began to be recognized."

Using AI chatbots wouldn't magically make such problems disappear, never mind lead to trusting them with the delicate task of committing search engine optimization fraud or negotiating ransom payments - both of which remain big earners. "Nobody would ever automate such an important thing as negotiations or other social engineering tasks," Bohuslavskiy says. "These tasks take a very limited time to complete but are extremely important, so they are all handmade."

But as the continuing scourge that is ransomware demonstrates, many criminals excel at creating innovative business models. Who knows what uses they might find for AI?

That's why zero trust creator Kindervag says the writing is on the wall: Organizations should put strategies such as his in place - strategies designed to repel any type of attack, whether human- or machine-generated. "We're just focused on protecting a protect surface" - critical data, assets, applications and services - and with the right policies in place to do that, the attack type doesn't matter, he says.

In other words, with the right approach, who needs to fear AI, whatever it might someday be asked to do?



About the Author

Mathew J. Schwartz


Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor for DataBreachToday and European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.




