While AI transforms business operations, it also enables cybercriminals to develop sophisticated impersonation techniques such as deepfakes and voice synthesis, posing new challenges for corporate security, said Surinder Lall, senior vice president of global information security risk management at Paramount.
This week, CyberEdBoard members Jon Staniforth and Helmut Spöcker joined ISMG editors to unpack the hot topics at ISMG's London Cybersecurity Summit 2024, including ransomware lessons learned, AI trends and the growing importance of continuous learning and resilience in the cybersecurity industry.
At the annual Cybersecurity Summit: London, Information Security Media Group recently brought together top cybersecurity professionals, executives and thought leaders to find solutions to the latest threats, identity-related weaknesses and emerging risks posed by AI technology.
BlackCloak’s $17 million Series B funding round will help the company triple its engineering and product teams, enhancing cybersecurity for executives and high-net-worth individuals. The funding will also help BlackCloak address emerging issues such as deepfakes and strengthen its threat intelligence and modeling capabilities.
Welcome to Information Security Media Group's Black Hat and DEF CON 2024 Compendium, featuring the latest insights from the industry's top cybersecurity researchers and ethical hackers, as well as perspectives from CEOs, CISOs and government officials on the latest trends in cybersecurity and AI.
Telegram deleted 25 videos the South Korean Communications Standards Commission said depicted sex crimes, and regulators reported that site administrators pledged to build a "relationship of trust." The agency said it intends to establish a hotline to ensure urgent action on deepfakes.
While the criminals may have an advantage in the AI race, banks and other financial services firms are responding with heightened awareness and vigilance, and a growing number of organizations are exploring AI tools to improve fraud detection and response to AI-driven scams.
The ability to create real-time deepfakes of trusted figures has transformed the landscape of corporate security threats. Brandon Kovacs, senior red team consultant at Bishop Fox, details how attackers can now clone voices and video in real time, enabling new forms of social engineering and fraud.
Banks need to make changes to fraud programs to tackle mule accounts in the age of AI. Organizations need to move away from having one control to handle all suspicious accounts, said Anthony Hope, group head of AML, counter-terrorist financing, and fraud risk at NAB.
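The layered approach Hope describes could, in principle, look something like the sketch below: score an account against several independent mule indicators and route it to different controls, rather than applying one catch-all rule. This is a hypothetical illustration only; the signal names, weights and thresholds are assumptions, not NAB's actual controls.

```python
# Hypothetical sketch of layered mule-account controls: each signal is
# scored independently and combined, rather than relying on a single rule.
from dataclasses import dataclass


@dataclass
class AccountActivity:
    account_age_days: int
    rapid_in_out_transfers: int          # funds received then forwarded within hours
    new_payees_last_week: int
    device_shared_with_other_accounts: bool


def mule_risk_score(a: AccountActivity) -> float:
    """Combine independent risk signals into a single score in [0, 1]."""
    score = 0.0
    if a.account_age_days < 30:
        score += 0.25                                   # newly opened account
    score += min(a.rapid_in_out_transfers, 5) * 0.10    # pass-through behaviour
    score += min(a.new_payees_last_week, 4) * 0.05      # sudden payee growth
    if a.device_shared_with_other_accounts:
        score += 0.20                                   # device reuse across accounts
    return min(score, 1.0)


def route_account(a: AccountActivity) -> str:
    """Route to a different control depending on how many signals fired."""
    score = mule_risk_score(a)
    if score >= 0.7:
        return "freeze_and_review"
    if score >= 0.4:
        return "step_up_verification"
    return "monitor"


if __name__ == "__main__":
    suspect = AccountActivity(account_age_days=12, rapid_in_out_transfers=3,
                              new_payees_last_week=5,
                              device_shared_with_other_accounts=True)
    print(route_account(suspect))  # -> freeze_and_review
```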
As deepfakes evolve, they pose significant cybersecurity risks and require adaptable security measures. In this episode of "Proof of Concept," Sam Curry of Zscaler and Heather West of Venable discuss strategies for using advanced security tactics to outpace deepfake threats.
Deepfake cases in Germany have seen a 142% year-on-year increase, driven by various contributing factors. But what can be done to properly prepare for a threat that is already here?
In this webinar, we explore:
Current capabilities of deepfake technology, from both an attack and a defence perspective;
Recent case...
Like security practitioners, cybercriminals want AI too. But in the AI-versus-AI cyber battle, the barrier for malicious actors "keeps getting lower and lower, while the barrier for defenders is getting more complex and more difficult," said Rick Holland, field CISO, ReliaQuest.
Pope Francis, during a speech at the G7 summit in Italy, called for a ban on autonomous weapons and urged world leaders to keep humans and ethics at the forefront of the artificial intelligence revolution, making him the first pope to address the annual meeting of the world's wealthy democracies.
While AI has spurred the growth of authentication controls, it has also enabled voice cloning and video deepfakes to become much more convincing. Fraud fighters are looking at adopting a multifactor authentication system using multimodal biometrics to fight against deepfakes.
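At a high level, the multimodal approach described here fuses independent match and liveness scores from several biometric channels before granting access, so spoofing one channel is not enough. The sketch below is a minimal, hypothetical illustration; the score sources, weights and thresholds are assumptions, not any vendor's actual system.

```python
# Hypothetical sketch of multimodal biometric fusion: face, voice and
# behavioural scores are combined so that spoofing a single channel
# (e.g. a cloned voice) cannot carry the authentication decision alone.

def fuse_scores(face: float, voice: float, behaviour: float) -> float:
    """Weighted fusion of per-channel match/liveness scores in [0, 1]."""
    weights = {"face": 0.40, "voice": 0.35, "behaviour": 0.25}
    return (face * weights["face"]
            + voice * weights["voice"]
            + behaviour * weights["behaviour"])


def authenticate(face: float, voice: float, behaviour: float,
                 fused_threshold: float = 0.75,
                 per_channel_floor: float = 0.3) -> bool:
    """Require both a strong fused score and a minimum on every channel."""
    if min(face, voice, behaviour) < per_channel_floor:
        return False
    return fuse_scores(face, voice, behaviour) >= fused_threshold


if __name__ == "__main__":
    # A convincing voice clone (0.95) paired with weak face liveness (0.2) fails.
    print(authenticate(face=0.2, voice=0.95, behaviour=0.6))   # False
    # A genuine user with consistent scores across channels passes.
    print(authenticate(face=0.85, voice=0.8, behaviour=0.7))   # True
```

The per-channel floor is the key design choice in this sketch: a near-perfect score on one deepfaked modality cannot compensate for a failed liveness check on another.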
Election security threats are real, and attacks will come from sophisticated nation-state threat actors who will hack victims and leak sensitive information paired with AI-generated deepfakes as part of disinformation campaigns across Western nations, social media companies told the U.K. government.