
ChatGPT: The Good, the Bad and the Ugly

How Generative AI Models Can Be Used for Cyberattacks and Cybersecurity

The cat-and-mouse game between cybercriminals and security solutions providers has just escalated to the next level. With AI and machine learning now a staple of enterprise applications, both cybercriminals and security researchers are exploring ways to leverage these technologies. Security practitioners say it is crucial for government, private enterprise and academia to collaborate and share threat intelligence, and they recommend a proactive threat intelligence approach.


According to a 2023 BlackBerry survey of 1,500 IT decision-makers in North America, the U.K. and Australia, more than half (51%) of the respondents predict that ChatGPT will be used in a successful cyberattack this year. Seventy-four percent acknowledge ChatGPT is a potential cybersecurity threat. ChatGPT's ability to help hackers craft legitimate-sounding phishing emails is the top global concern, cited by 53% of respondents, and 49% believe technologies like ChatGPT will enable less experienced hackers to improve their technical knowledge and develop specialized skills for spreading misinformation.

Sven Krasser, SVP and chief scientist at CrowdStrike, says lower-skilled adversaries can get "rudimentary support authoring malicious code." He believes future advances in large language models could shorten the time from vulnerability disclosure to exploit creation.

OpenAI just released GPT-4, which powers its ChatGPT bot. The latest version is more accurate and can accept image inputs - though that feature has not yet been released - a capability that could potentially be misused for deepfakes.

AI Aiding Attackers

Bad actors are sure to leverage the extended capabilities of ChatGPT and other generative AI models such as Google's Bard, unless there is some governance around this technology.

Steve Povolny, principal engineer and director at Trellix, says cybercriminals are already looking at unique ways to leverage the AI tool for nefarious purposes. "It isn't hard to create hyper-realistic phishing emails or exploit code, for example, simply by changing the user input or slightly adapting the output generated," he says.

Povolny fears threat actors may look to build data processing engines that emulate ChatGPT while removing its restrictions, and even enhance the tool's ability to create malicious output.

Because generative AI technology such as GPT can produce text with human-like expression, hackers could readily use it to automate spear-phishing and social engineering attacks.

Aaron Bugal, field CTO APJ at Sophos, believes ChatGPT will help "entry-level cybercriminals" hone their skills to write convincing copy for phishing campaigns. He believes advanced persistent threat actors could also use this technology to up their game.

"[With] ChatGPT, those telltale signs of a poorly formed spam email will simply blend in better, making them harder to spot," Bugal says.

Further, there are apprehensions that ChatGPT technology could be misused to spread false information on social media. Past social media misinformation campaigns by propagandists have had real-life impact, most prominently around the 2016 U.S. presidential election.

An OpenAI blog warns about the misuse of large language models, or LLMs, for misinformation campaigns. OpenAI partnered with Georgetown University's Center for Security and Emerging Technology and the Stanford Internet Observatory to investigate how these models might be misused for disinformation purposes.

A Helping Hand for Cybersecurity

At one of ISMG's panel discussions, Zaid Bin Hamzah, executive education fellow at the National University of Singapore, questions the "balance of power" between hackers and defenders.

"Our biggest challenge is to stay ahead of the curve so that we can continue to create a much safer and more secure environment," Hamzah says.

Despite their potential pitfalls, generative AI and LLMs can also be used to boost cybersecurity. Because these models can scan through large swaths of text, they can assist SOC analysts with log analysis - combing SIEM logs for unusual behavior, for instance - as sketched below.
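To make that idea concrete, here is a minimal sketch of batching log lines into a prompt and asking a model to flag anomalies. It assumes the OpenAI Python SDK (v1 or later) with an OPENAI_API_KEY environment variable set; the model name, prompt wording and log format are illustrative, not a prescribed workflow.

```python
# Minimal sketch: LLM-assisted triage of SIEM log lines.
# Assumes the openai Python SDK (v1+) with OPENAI_API_KEY set in the
# environment; the model name, prompt and log format are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_log_batch(log_lines: list[str]) -> str:
    """Ask the model to flag suspicious entries in a batch of log lines."""
    prompt = (
        "You are assisting a SOC analyst. Review these SIEM log lines and "
        "list any that look anomalous (unusual source, odd hours, privilege "
        "escalation), with a one-line reason for each:\n\n"
        + "\n".join(log_lines)
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any capable chat model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep triage output deterministic
    )
    return response.choices[0].message.content

# Example usage with a few synthetic authentication log lines:
logs = [
    "2023-03-20 03:12 login success user=admin src=203.0.113.7",
    "2023-03-20 09:05 login success user=jsmith src=10.0.0.14",
    "2023-03-20 03:14 sudo su - root by admin",
]
print(triage_log_batch(logs))
```

Any entries the model flags would still need analyst verification; the output is a triage aid, not a verdict.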

"Overall, AI is a critical tool to stem the volume of attacks. First, it can handle large amounts of attacks in an automated fashion, reducing the load on human reviewers. Second, it can analyze significantly more data to come to a more precise decision than a human can," Krasser says.

The volume and sophistication of cyberattacks are putting enormous stress on security analysts, causing burnout and even resignations. They could certainly use a helping hand, or "co-pilot," to assist with voluminous tasks such as log analysis.

Dane Sherrets, senior solutions architect at HackerOne, concurs. "AI does have a place in security teams' toolboxes," he says. "Ethical hackers and security teams can use AI to increase efficacy and save time on repetitive tasks that are less impactful toward the accuracy of security testing outcomes."

He feels narrow AI can help write vulnerability reports, generate code samples and identify trends in large data sets, arming ethical hackers and security teams with insights that help them arrive at solutions faster.

Speed is of the essence in detecting breaches before attackers spread laterally.

Chetan Anand, AVP of information security and CISO at Profinch Solutions and an ISACA global mentor, believes ChatGPT can be used for basic malware analysis "as an input for risk mitigation plans, and it has the potential to be used for threat intelligence."

Anand says these models can analyze the contents of malicious emails and text messages and predict whether they are social engineering attempts.

"This could be helpful for internal audits, such as the creation of checklists for a novice cybersecurity professional or for summarizing an audit report. ChatGPT could also be used for penetration tests by providing a list of plausible passwords for automated tooling to execute brute force attacks," he says.

The Way Forward

BlackBerry's research revealed that 82% of IT decision-makers plan to invest in AI-driven cybersecurity in the next two years, and 48% plan to invest before the end of 2023. This reflects growing concern that signature-based protection solutions are no longer effective against increasingly sophisticated threats.

While IT decision-makers are optimistic that ChatGPT will enhance cybersecurity for businesses, the survey also revealed that 95% believe governments have a responsibility to regulate advanced technologies.

The priority is to "develop proactive threat intelligence" and "intelligence sharing and collaboration" between governments, industry and universities, Hamzah says. "If we can increase our data sharing, especially our trained model sharing, it will lead to optimization of resources."

There is no doubt that only a collective and collaborative approach can defeat threat actors, especially those planning to use advanced AI and machine learning technologies against organizations in the future. For now, it is crucial for cybersecurity researchers to stay ahead of the game through a proactive, threat intelligence-led approach.


About the Author

Brian Pereira

Sr Executive Editor - CIO.inc, ISMG

Pereira has three decades of journalism experience. He is the former editor of CHIP, InformationWeek and CISO MAG. He has also written for The Times of India and The Indian Express. He has developed and curated content for major technology conferences, including CeBIT and INTEROP India.



