Welcome to Information Security Media Group's Black Hat and DEF CON 2024 Compendium, featuring the latest insights from the industry's top cybersecurity researchers and ethical hackers, as well as perspectives from CEOs, CISOs and government officials on emerging trends in cybersecurity and AI.
The Dutch data regulator is the latest agency to fine artificial intelligence company Clearview AI over its harvesting of facial data and other violations of GDPR privacy rules, joining regulatory agencies in France, Italy, Greece and the United Kingdom.
HackerOne has tapped F5's longtime product leader as its next chief executive to continue expanding its portfolio beyond operating vulnerability disclosure programs. The firm tasked Kara Sprague with building on existing growth in areas including AI red teaming and penetration testing as a service.
AI holds great promise for healthcare but ethical concerns abound. David Hoffman, assistant professor of bioethics at Columbia University, shares insights on balancing AI's benefits and risks, the need for informed consent by patients and ethical considerations in AI implementations.
ISMG's Virtual AI Summit brought together cybersecurity leaders to explore the intersection of AI and security. Discussions ranged from using AI for defense to privacy considerations and regulatory frameworks and provided organizations with valuable insights for navigating the AI landscape.
AI-assisted coding tools can speed up code production but often replicate existing vulnerabilities when built on poor-quality code bases. Snyk's Randall Degges discusses why developers must prioritize code base quality to maximize the benefits and minimize the risks of using AI tools.
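A hypothetical illustration of the kind of flaw such tools can replicate: the insecure query pattern below, a classic SQL injection, is exactly what an assistant completing against a low-quality code base tends to reproduce, shown alongside the parameterized form that avoids it. The function names and schema are invented for the sketch.

```python
import sqlite3

# Hypothetical illustration (invented names and schema): the insecure
# pattern an AI assistant completing against a low-quality code base
# might reproduce, next to the parameterized form that avoids it.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_insecure(name: str):
    # Vulnerable: user input is concatenated straight into the SQL string,
    # so a crafted value such as "' OR '1'='1" changes the query's meaning.
    query = "SELECT name, role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Safer: a parameterized query keeps the input as data, not SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

if __name__ == "__main__":
    payload = "' OR '1'='1"
    print("insecure:", find_user_insecure(payload))  # returns every row
    print("safe:    ", find_user_safe(payload))      # returns nothing
```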
From skill shortages to cultural shifts and security risks, implementing AIOps presents significant challenges. By addressing them effectively, CIOs can improve their organizations' IT operations management, foster scalability and enable predictive maintenance.
The European Union's AI Act went into force this week, marking a significant step in AI regulation. Effective Aug. 1, 2024, the law imposes strict rules on high-risk AI systems and prohibits harmful practices, with the aim of ensuring transparency and protecting fundamental rights.
It's been nearly 18 months since ChatGPT paved the way for rapid generative AI adoption, but enterprises are just beginning to implement basic cybersecurity strategies and use blocking controls, DLP tools and live coaching to mitigate gen AI risks, according to security firm Netskope.
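As a rough sketch of the blocking and DLP-style controls the report describes, and not Netskope's actual product or policy language, the snippet below scans an outbound gen AI prompt against a few invented sensitive-data patterns and blocks it, which is the point at which a real tool would also surface a live coaching message.

```python
import re

# Minimal sketch, not Netskope's product: a DLP-style blocking control
# that scans an outbound gen AI prompt for sensitive-data patterns
# before it leaves the enterprise. Pattern names are illustrative only.

SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk[-_][A-Za-z0-9]{16,}\b"),
}

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_pattern_names) for an outbound prompt."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    return (not hits, hits)

if __name__ == "__main__":
    allowed, hits = check_prompt(
        "Summarize this contract for the customer with SSN 123-45-6789"
    )
    if not allowed:
        # A real DLP tool would block the request here and show the user
        # a coaching message explaining the policy instead.
        print("Blocked prompt; matched:", hits)
```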
Aboitiz Data Innovation faced a unique challenge: Design a wholesale architecture for a generative AI lab for a bank while ensuring accurate responses and maintaining strict information security protocols, said Guy Sheppard, chief commercial officer at Aboitiz Data Innovation.
An official from the U.S. Department of Defense Chief Digital and Artificial Intelligence Office said Thursday the department is testing generative AI tools to help streamline its contracting and management operations and free up time for federal employees.
Generative AI has advanced rapidly over the past year, and organizations are recognizing its potential across business functions. But businesses have now taken a cautious stance on gen AI adoption due to steep implementation costs and concerns about hallucinations.
HR experts at Lattice had a vision of treating AI bots as human employees, with a place on the org chart, training, key performance metrics - and a boss. But the workforce may not be ready for that, and the firm learned the hard way. Lattice scrapped its plan three days after announcing it.
The Gartner CDAO playbook contains 10 plays that define a structure for the AI journey. This article outlines crucial steps for successful AI project execution. From building ecosystems to governance, Plays 6 to 10 address key challenges in scaling AI initiatives and managing associated risks.
Whistleblowers from OpenAI have reportedly complained to the Securities and Exchange Commission that the company unlawfully restricted employees from alerting regulators to the potential risks its artificial intelligence technology poses to humanity.