Cybercriminals are using an evil twin of OpenAI's generative artificial intelligence tool ChatGPT. It's called FraudGPT, it's available on criminal forums, and it can be used to write malicious code and create convincing phishing emails. A similar tool called WormGPT is also available.
A startup led by former AWS and Oracle AI executives completed a Series A funding round to strengthen security around ML systems and AI applications. Seattle-based Protect AI plans to use the $35 million investment to expand its AI Radar tool and research unique threats in the AI and ML landscape.
Threat actors are shifting from encryption to data exfiltration, exerting pressure on enterprises by demanding ransom for the protection of valuable data and reputations. CIOs and CISOs must tilt the playing field in their favor by deploying deception technology and embracing other innovative defenses.
Despite the threats surrounding AI, the emerging technology can also help defenders formulate effective policies and controls to prevent and mitigate BEC scams. As the threat landscape evolves, harnessing AI becomes crucial to defense, said Johan Dreyer, CTO at Mimecast.
Compromised chatbot credentials are being bought and sold by criminals on underground marketplaces for stolen data, warns cybersecurity firm Group-IB, as global use of ChatGPT, rival AI chatbots, and AI services newly baked into existing products continues to surge.
Artificial intelligence poses a risk of extinction on par with nuclear war and pandemics, says a who's who of artificial intelligence executives in an open letter that evokes danger without suggesting how to mitigate it. Among the signatories are Sam Altman and Geoffrey Hinton.
Cybersecurity expert Mikko Hypponen was recently sent "LL Morpher," a new piece of malware that uses OpenAI's GPT to rewrite its Python code with every new infection. While it is more proof of concept than active threat, "the whole AI thing right now feels exciting and scary at the same time," he said.
Generative AI tools such as ChatGPT will undoubtedly change the way clinicians and healthcare cybersecurity professionals work, but the use of these technologies comes with security, privacy and legal concerns, says Lee Kim of the Healthcare Information and Management Systems Society.
The cat-and-mouse game between cybercriminals and security solutions providers has just escalated to the next level. With AI and machine learning now staples of enterprise applications, both cybercriminals and security researchers are exploring ways to leverage these technologies.
Technologists were quick to point out that the popular AI-based chatbot ChatGPT could lower the bar for attackers in phishing campaigns and even write malware code, but Cato Networks' Etay Maor advises taking these predictions "with a grain of salt" and explores the pros and cons of ChatGPT.
Low-level hackers are probing ChatGPT's capacity to generate scripts that could serve criminal ends, such as stealing files or maliciously encrypting them. One poster on a hacking forum likened the process to writing pseudo-code. More sophisticated abuse is likely only a matter of time.
ChatGPT, an AI-based chatbot that specializes in dialogue, is raising concern among security professionals about how criminals could use cheap, accessible natural language AI to write convincing phishing emails and pull off nefarious deepfake scams. Peter Cassidy discusses the implications.
Anything that can write software code can also write malware, and the latest AI technology can do it in seconds. Even worse, it could open the door to rapid innovation for hackers with little or no technical skill, or help them overcome language barriers to write the perfect phishing email.