ISMG's Cybersecurity Pulse Report Black Hat 2024 Edition delivers a deep dive into the most critical security challenges discussed at this year's conference. This report, created through advanced AI-driven analysis, compiles expert insights on topics ranging from AI technology to critical infrastructure.
The Dutch data regulator is the latest agency to fine artificial intelligence company Clearview AI over its facial data harvesting and other privacy violations of GDPR rules, joining regulatory agencies in France, Italy, Greece and the United Kingdom.
Telegram deleted 25 videos that the South Korean Communications Standards Commission said depicted sex crimes, and regulators reported that the site's administrators pledged to build a "relationship of trust." The agency said it intends to establish a hotline to ensure urgent action on deepfakes.
While the criminals may have an advantage in the AI race, banks and other financial services firms are responding with heightened awareness and vigilance, and a growing number of organizations are exploring AI tools to improve fraud detection and response to AI-driven scams.
AI models are increasing efficiency, but they come with new hidden vulnerabilities that organizations can struggle to track and safeguard against. Malicious attacks can lead to business disruption and breaches of highly sensitive data.
Identifying your AI model and its vulnerabilities is crucial. Many...
HackerOne has tapped F5's longtime product leader as its next chief executive to continue expanding its portfolio beyond operating vulnerability disclosure programs. The firm tasked Kara Sprague with building on existing growth in areas including AI red teaming and penetration testing as a service.
AI holds great promise for healthcare but ethical concerns abound. David Hoffman, assistant professor of bioethics at Columbia University, shares insights on balancing AI's benefits and risks, the need for informed consent by patients and ethical considerations in AI implementations.
ISMG's Virtual AI Summit brought together cybersecurity leaders to explore the intersection of AI and security. Discussions ranged from using AI for defense to privacy considerations and regulatory frameworks and provided organizations with valuable insights for navigating the AI landscape.
Cisco announced its intent to acquire Robust Intelligence to fortify the security of AI applications. With this acquisition, Cisco aims to address AI-related risks, incorporating advanced protection to guard against threats such as jailbreaking, data poisoning and unintentional model outcomes.
AI-assisted coding tools can speed up code production but often replicate existing vulnerabilities when built on poor-quality code bases. Snyk's Randall Degges discusses why developers must prioritize code base quality to maximize the benefits and minimize the risks of using AI tools.
From skill shortages to cultural shifts and security risks, implementing AIOps presents significant challenges. By effectively addressing these challenges, CIOs can improve their organizations' IT operations management, foster scalability and enable predictive maintenance.
AI's integration into cybersecurity demands a strong foundational approach. Many companies seek advanced AI solutions but struggle with basic cybersecurity practices such as managing assets and patching vulnerabilities, said Michael Thiessmeier, executive director of U.S. NAIC-ISAO.
Shachar Menashe, senior director of security research at JFrog, discusses critical security risks in MLOps platforms - including code execution vulnerabilities in machine learning models - and why organizations must treat ML models as potentially malicious code to mitigate these inherent risks.
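The risk Menashe describes is concrete: common model serialization formats such as Python's pickle can execute arbitrary code the moment a file is loaded. The sketch below is illustrative, not from the report; it builds a hypothetical booby-trapped "model" payload and then shows one defensive idea, scanning the pickle opcode stream for imported globals before ever deserializing (the approach taken by open-source scanners such as picklescan).

```python
import pickle
import pickletools

# A "model" that smuggles code into its serialized form. Unpickling it
# would call eval("1+1") automatically; a real attacker would run
# something far worse. We only dump it here -- we never load it.
class MaliciousModel:
    def __reduce__(self):
        return (eval, ("1+1",))

payload = pickle.dumps(MaliciousModel())

def find_globals(data: bytes) -> list[str]:
    """Statically list every module.attribute a pickle would import,
    WITHOUT executing it. Works for GLOBAL (older protocols) and
    STACK_GLOBAL (protocol 4+) opcodes."""
    ops = list(pickletools.genops(data))
    found = []
    for i, (opcode, arg, _pos) in enumerate(ops):
        if opcode.name == "GLOBAL":
            found.append(arg.replace(" ", "."))
        elif opcode.name == "STACK_GLOBAL":
            # Module and attribute were pushed as the two preceding strings.
            strings = [a for _op, a, _ in ops[:i] if isinstance(a, str)]
            if len(strings) >= 2:
                found.append(f"{strings[-2]}.{strings[-1]}")
    return found

print(find_globals(payload))  # → ['builtins.eval']
```

A scanner like this can flag dangerous imports (`builtins.eval`, `os.system`, `subprocess.*`) and quarantine the file before `pickle.load()` is ever called, which is exactly the "treat models as untrusted code" posture Menashe advocates.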
The ability to create real-time deepfakes of trusted figures has transformed the landscape of corporate security threats. Brandon Kovacs, senior red team consultant at Bishop Fox, details how attackers can now clone voices and video in real-time, enabling new forms of social engineering and fraud.
Many cybersecurity organizations hope generative artificial intelligence and large language models will help them secure the enterprise and comply with the latest regulations. But to date, commercial LLMs have big problems - hallucinations and a lack of timely data, said NYU professor Brennan Lodge.