The U.S. presidential election is still eight months away, but the FBI is already seeing its share of cyberattacks, nation-state threats and AI-generated deepfakes. According to FBI Agent Robert K. Tripp, "We're no longer considering threats as a what-if situation; it's happening now."
In the latest "Proof of Concept," Jeff Brown, CISO for the state of Connecticut, and Lester Godsey, CISO for Maricopa County, Arizona, join ISMG editors to discuss AI-related threats to election security, safeguarding against cyber and physical threats, and coordinating efforts for comprehensive security.
In the latest weekly update, Jeremy Grant of Venable LLP joins editors at ISMG to discuss the state of secure identity in 2024, the challenges in developing next-generation remote ID proofing systems, and the potential role generative AI can play in both compromising and protecting identities.
The escalating adoption of generative AI has introduced concerns regarding data privacy, fake data and bias amplification. Ashley Casovan, managing director of the IAPP AI Governance Center, discusses the need to develop governance models and standardize AI systems.
Fraudsters used deepfake technology to trick an employee at a Hong Kong-based multinational company into transferring $25.57 million to their bank accounts. Hong Kong Police said Sunday that the fraudsters had created deepfake likenesses of top company executives in a video conference to fool the worker.
In the latest "Proof of Concept," Sam Curry of Zscaler and Heather West of Venable assess how vulnerable AI models are to potential attacks, offer practical measures to bolster the resilience of AI models and discuss how to address bias in training data and model predictions.
South Korea's intelligence agency has reported that North Korean hackers are using generative AI to conduct cyberattacks and search for hacking targets. Experts believe North Korea's AI capabilities are robust enough for more precise attacks on South Korea.
Alex Zeltcer, CEO and co-founder at nSure.ai, believes more companies are using AI and gen AI to create synthetic data that will be used to identify fraudulent groups who target online shoppers and gamers. He also observes machine-driven social engineering at scale being used to conduct fraud.
Machine learning systems are vulnerable to cyberattacks that could allow hackers to evade security and prompt data leaks, scientists at the National Institute of Standards and Technology warned. There is "no foolproof defense" against some of these attacks, researchers said.
In the future, deepfake technology will have a significant impact on newer forms of authentication such as voice and facial recognition and pose new challenges to defenders, said Ofer Friedman, chief business development officer at AU10TIX, an Israel-headquartered identity verification company.
By the numbers, who has implemented GenAI in their organization? Who has a dedicated budget? And who understands the AI regulations for their industry? An expert panel discusses the findings of ISMG's First Annual Generative AI Study: Business Rewards vs. Security Risks.
According to a recent pulse poll from ISACA on generative AI, only 6% of respondents' organizations are providing training to all staff on AI, and more than half - 54% - say that no AI training is provided at all, even to teams directly affected by AI.
As Congress weighs potential legislative and regulatory guardrails for the use of AI in healthcare, issues such as human oversight, privacy and security risk need close attention, said healthcare industry experts who testified during a House Energy and Commerce subcommittee hearing on Wednesday.
Make no mistake: beyond all the hype, the widespread availability of generative AI is a revolution affecting us all, fundamentally changing how we do business and even how we communicate. In every revolution there are winners and losers, and there is no opt-out if we want to avoid being left behind by our...
AI-generated attacks can be faster and more adaptable than human-led attacks. Organizations can defend against AI-powered attacks by educating their users, creating policies and using AI-powered security tools, said Vlad Brodsky, chief information security officer at OTC Markets Group.