Various "dark" generative artificial intelligence tools purportedly help criminals more quickly amass victims. Guess what? They've all gone bust, if they weren't simply outright scams - in part because legitimate tools can be "jailbroken" to achieve similar results. What are they really achieving?
Adversaries use artificial intelligence to obtain explosives, advance sextortion schemes and propagate malware through malicious websites that appear legitimate. Intelligence officials grapple with emboldened criminals who use AI for nefarious purposes, as well as with nation-state actors such as China.
Cybercriminals are using an evil twin of OpenAI's generative artificial intelligence tool ChatGPT. It's called FraudGPT, it's available on criminal forums, and it can be used to write malicious code and create convincing phishing emails. A similar tool called WormGPT is also available.
Artificial intelligence poses a global risk of extinction tantamount to nuclear war and pandemics, say a who's who of artificial intelligence executives in an open letter that evokes danger without suggesting how to mitigate it. Among the signatories are Sam Altman and Geoffrey Hinton.
ChatGPT, an AI-based chatbot that specializes in dialogue, is raising concern among security professionals about how criminals could use cheap, accessible natural language AI to write convincing phishing emails and pull off nefarious deepfake scams. Peter Cassidy discusses the implications.
Anything that can write software code can also write malware. The latest AI technology can do it in seconds. Even worse, it could open the door to rapid innovation for hackers with little or no technical skill, or help them overcome language barriers to writing the perfect phishing email.