As Bugcrowd helps OpenAI keep pace with the inevitable cybersecurity risks that accompany the massive popularity of its applications, the bug bounty firm's CEO discusses what makes hunting for vulnerabilities in OpenAI's products unique, the program's impact and the journey so far.
Most healthcare workers don't check security protocols before trying out new generative AI tools such as ChatGPT, putting patient and other sensitive data at risk, said Sean Kennedy of software vendor Salesforce, which recently conducted research on potential security gaps in healthcare settings.
Everyone needs to have a security-first mindset for identity because as much as it is a defender's shield, it is also an attacker's target, said Rohit Ghai, CEO at RSA. In fact, identities are the most attacked part of enterprises, yet too little energy is spent on monitoring them.
With the growing dominance of AI and concerns over its responsible use, is it time to move toward AI ethics by design? Sameer Ahirrao, founder of Ardent Privacy, shared how privacy and regulatory verticals should - and will - shape the future of AI.
As organizations increasingly look to use artificial intelligence to boost cybersecurity, Kroll's Alan Brill discusses how sound legal counsel and compliance officers can ensure caution and assist with due diligence for the effective implementation of the technology.
Artificial intelligence can solve really old problems around data wrangling and data protection that are essential to many security investigations, said Norwest Ventures' Rama Sekhar. The VC firm is looking at emerging companies that use large language models to automatically clean up data.
Cybersecurity expert Mikko Hypponen was recently sent "LL Morpher," a new piece of malware that uses OpenAI's GPT to rewrite its Python code with every new infection. While it is more proof-of-concept than current threat, "the whole AI thing right now feels exciting and scary at the same time," he said.
Artificial intelligence and machine learning are used extensively for detecting threats, but their use in other areas of security operations is less explored. One of the biggest opportunities for AI and ML in cyber is around investigating potential security incidents, said Forrester's Allie Mellen.
AI is a tool for augmenting humans rather than replacing them, and AI is far from surpassing human capabilities on a scalable level. Although AI can generate realistic images and believable text, it still has a long way to go in detecting anomalies, said Kyle Hanslovan, CEO of Huntress.
Artificial intelligence and machine-learning technology is vulnerable to cyberattacks due to a lack of security around the models themselves, said Mark Hatfield, founder and general partner at Ten Eleven Ventures. How do we identify and fix the potential risks of misuse that come with AI?
Generative AI has revolutionized the way people interact with chatbots. Ruby Zefo, chief privacy officer and associate general counsel for privacy and cybersecurity at Uber Technologies, cited ChatGPT as an example of the need to conduct an "environmental scan" of both the external and internal risks associated with such tools.
There is a growing need for "citizen data scientists," such as engineers and programmers, to better understand the inner workings of AI and ML as those technologies become more ubiquitous, said Tom Scanlon, technical manager of the CERT data science team at Carnegie Mellon University.
ChatGPT is "amazing" and "has reformed the way we interact with computing," says Nikesh Arora, chairman and CEO of Palo Alto Networks. But to get value from AI and to use it to make the SOC more proactive, we need to have a lot of data - and pay attention to what it's telling us, he says.
The potential use cases for generative AI technology in healthcare appear limitless, but they're weighted with an array of potential privacy, security and HIPAA regulatory issues, says privacy attorney Adam Greene of the law firm Davis Wright Tremaine.
The U.S. weapons arsenal, much of it developed without a zero trust architecture, is at growing risk from cyberattacks, lawmakers heard today during a panel dedicated to how artificial intelligence can simultaneously help and hurt efforts to protect warfighters from digital attacks.