Most healthcare workers don't check security protocols before trying out new generative AI tools such as ChatGPT, putting patient and other sensitive data at risk, said Sean Kennedy of software vendor Salesforce, which recently conducted research on potential security gaps in healthcare settings.
Everyone needs to have a security-first mindset for identity because as much as it is a defender's shield, it is also an attacker's target, said Rohit Ghai, CEO at RSA. In fact, identities are the most attacked part of enterprises, yet too little energy is spent on monitoring them.
While AI is presenting intriguing opportunities for productivity and innovation, the tech world must grapple with serious regulatory, legal and related policy considerations, said privacy, security and legal experts Behnam Dayanim, Patricia Titus and Heather West in this CyberEdBoard talk.
With the growing dominance of AI and concerns over its responsible use, is it time to move toward AI ethics by design? Sameer Ahirrao, founder of Ardent Privacy, shared how privacy and regulatory verticals should - and will - shape the future of AI.
As organizations increasingly look to use artificial intelligence to boost cybersecurity, Kroll's Alan Brill discusses how sound legal counsel and compliance officers can ensure caution and assist with due diligence for the effective implementation of the technology.
Artificial intelligence can solve really old problems around data wrangling and data protection that are essential to many security investigations, said Norwest Ventures' Rama Sekhar. The VC firm is looking at emerging companies that use large language models to automatically clean up data.
Cybersecurity expert Mikko Hypponen was recently sent "LL Morpher," a new piece of malware that uses OpenAI's GPT to rewrite its Python code with every new infection. While it is more proof of concept than active threat, "the whole AI thing right now feels exciting and scary at the same time," he said.
Generative AI tools such as ChatGPT have created quite a buzz. Cybersecurity defenders are excited about the prospect of simplifying coding but are concerned about security and privacy issues. SentinelOne’s Milad Aslaner said security teams should get to know emerging AI - before the criminals do.
Artificial intelligence and machine learning are used extensively for detecting threats, but their use in other areas of security operations is less explored. One of the biggest opportunities for AI and ML in cyber is around investigating potential security incidents, said Forrester's Allie Mellen.
AI is a tool for augmenting humans rather than replacing them, and AI is far from surpassing human capabilities on a scalable level. Although AI can generate realistic images and believable text, it still has a long way to go in detecting anomalies, said Kyle Hanslovan, CEO of Huntress.
Artificial intelligence and machine-learning technology is vulnerable to cyberattacks due to a lack of security around the models themselves, said Mark Hatfield, founder and general partner at Ten Eleven Ventures. How do we identify and fix the potential risks of misuse that come with AI?
Early-stage startups interested in the implementation of artificial intelligence are often concerned about the policies surrounding AI use. While some startups are looking at automating policies, others are building platforms to test the accuracy, integrity and robustness of AI models.
Generative AI has revolutionized the way people interact with chatbots. Ruby Zefo, chief privacy officer and associate general counsel for privacy and cybersecurity at Uber Technologies, cited ChatGPT as an example of why organizations need to conduct an "environmental scan" of both the external and internal risks associated with such tools.
Pre-RSA social media gaming predicted it. Many predicted they would loathe it. And it happened: Discussions at this year's RSA conference again and again came back to generative artificial intelligence - but with a twist. Even some of the skeptics professed their conversion to the temple of AI.
There is a growing need for "citizen data scientists," such as engineers and programmers, to better understand the inner workings of AI and ML as those technologies become more ubiquitous, said Tom Scanlon, technical manager of the CERT data science team at Carnegie Mellon University.