Generative AI offers fantastic productivity gains, but if deployed without due security precautions, it can also open up the organization to new cybersecurity threats.
Adversaries are using artificial intelligence to help obtain explosives, advance sextortion schemes and spread malware through malicious websites that appear legitimate. Intelligence officials are grappling both with emboldened criminals who use AI for nefarious purposes and with nation-state actors such as China.
Cybercriminals are using an evil twin of OpenAI's generative artificial intelligence tool ChatGPT. It's called FraudGPT, it's available on criminal forums, and it can be used to write malicious code and create convincing phishing emails. A similar tool called WormGPT is also available.
A startup led by former AWS and Oracle AI executives completed a Series A funding round to strengthen security around ML systems and AI applications. Seattle-based Protect AI plans to use the $35 million investment to expand its AI Radar tool and research unique threats in the AI and ML landscape.
Shifting away from encryption, threat actors are now resorting to data exfiltration to exert pressure on enterprises, demanding ransom in exchange for protecting valuable data and reputations. CIOs and CISOs must level the playing field by deploying deception and embracing other innovative defenses.
Despite the threats surrounding AI, the emerging technology can also help defenders formulate effective policies and controls to prevent and mitigate business email compromise scams. With the threat landscape evolving, harnessing AI for defense has become crucial, said Johan Dreyer, CTO at Mimecast.
Organizations need to adopt a creative approach when building policies around the legal, commercial and reputational risks raised by generative AI tools - such as with privacy, consumer protection and contractual obligations, said legal expert Anna King of Markel.
Compromised chatbot credentials are being bought and sold by criminals who frequent underground marketplaces for stolen data, warns cybersecurity firm Group-IB. The warning comes as the use of ChatGPT, rival AI chatbots and AI services newly baked into existing products continues to surge across the globe.
Artificial intelligence poses a global risk of extinction tantamount to nuclear war and pandemics, says a who's who of AI executives in an open letter that evokes the danger without suggesting how to mitigate it. Among the signatories are Sam Altman and Geoffrey Hinton.
The French data protection authority on Tuesday signaled increased concerns over the privacy impacts of generative artificial intelligence and said issues such as data scraping raise data protection questions. Data scraping by AI companies is a flashpoint in the technology's rollout.
As organizations increasingly look to use artificial intelligence to boost cybersecurity, Kroll's Alan Brill discusses how sound legal counsel and compliance officers can ensure caution and assist with due diligence for the effective implementation of the technology.
Cybersecurity expert Mikko Hypponen recently got sent "LL Morpher," a new piece of malware that uses OpenAI's GPT to rewrite its Python code with every new infection. While more proof-of-concept than current threat, "the whole AI thing right now feels exciting and scary at the same time," he said.
Generative AI has revolutionized the way people interact with chatbots. Ruby Zefo, chief privacy officer and associate general counsel for privacy and cybersecurity at Uber Technologies, cited ChatGPT as an example of the need to conduct an "environmental scan" of both the external and internal risks associated with such tools.
The potential use cases for generative AI technology in healthcare appear limitless, but they're weighted with an array of potential privacy, security and HIPAA regulatory issues, says privacy attorney Adam Greene of the law firm Davis Wright Tremaine.
Generative AI tools such as ChatGPT will undoubtedly change the way clinicians and healthcare cybersecurity professionals work, but the use of these technologies comes with security, privacy and legal concerns, says Lee Kim of the Healthcare Information Management and Systems Society.