New York Financial Regulator Publishes AI Safety Guidance
Agency Details AI Cybersecurity Risks, Prevention, Mitigation Strategies

Financial regulators with the state of New York on Wednesday published guidance to help organizations identify and mitigate cybersecurity threats related to artificial intelligence.
The New York State Department of Financial Services said it is not imposing new requirements but responding to inquiries about how regulated financial institutions can mitigate AI-related risks.
AI can enhance threat detection and incident response strategies, said DFS Superintendent Adrienne A. Harris, but it also creates "new opportunities for cybercriminals to commit crimes at greater scale and speed."
The guidance calls out AI-specific risks such as social engineering, theft of non-public information and increased vulnerabilities due to supply chain dependencies. It highlights "enhanced" cyberattacks, the potency, scale and speed of which are amplified by AI.
Due to AI's ability to scan and quickly ingest information, "threat actors can use AI quickly and efficiently to identify and exploit security vulnerabilities, often allowing threat actors to access more information systems at a faster rate," it said.
Once inside a system, threat actors can use AI to conduct reconnaissance and determine how best to deploy malware, access data and exfiltrate it, the guidance said. AI can also accelerate the development of new malware variants and alter ransomware so that it bypasses defensive security controls.
Theft of nonpublic information is a potential byproduct of using AI, since products that deploy the technology may collect and process such data. Organizations that maintain nonpublic information in large quantities and also deploy AI make bigger targets, since threat actors have a "greater incentive to target these entities in an attempt to extract NPI for financial gain or other malicious purposes."
AI-driven authentication solutions may additionally store biometric data, the theft of which could give hackers access to internal data, regulators warn. Attackers could also use stolen biometric data to generate realistic deepfakes.
The agency recommends institutions implement multiple layers of cybersecurity controls to ensure that if one control fails, others can mitigate the impact of an attack. Institutions should have monitoring processes to detect vulnerabilities and establish strong data management practices. Third-party vendor management, access controls and cybersecurity training must feature prominently on the organization's security agenda, New York regulators say.
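To make the layered-controls recommendation concrete, the sketch below shows one way the idea can look in practice: access to nonpublic information is granted only when several independent checks all pass, so a single compromised control does not expose the data. The control names, fields and threshold are illustrative assumptions, not drawn from the DFS guidance.

```python
# Hypothetical defense-in-depth check for access to nonpublic information (NPI).
# Each field represents an independent control layer; all values and names are
# illustrative, not taken from the DFS guidance.

from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_id: str
    password_ok: bool      # layer 1: credential verification
    mfa_ok: bool           # layer 2: multi-factor authentication
    device_trusted: bool   # layer 3: device posture / allow-list
    anomaly_score: float   # layer 4: behavioral monitoring (0.0 = normal)


ANOMALY_THRESHOLD = 0.8  # hypothetical cut-off for the monitoring layer


def grant_npi_access(req: AccessRequest) -> bool:
    """Return True only if every independent control layer passes."""
    layers = [
        req.password_ok,
        req.mfa_ok,
        req.device_trusted,
        req.anomaly_score < ANOMALY_THRESHOLD,
    ]
    return all(layers)  # one failed layer is enough to deny access


if __name__ == "__main__":
    # A stolen password alone is not enough: the MFA and monitoring layers still block access.
    stolen_creds = AccessRequest("analyst-42", password_ok=True, mfa_ok=False,
                                 device_trusted=True, anomaly_score=0.9)
    print(grant_npi_access(stolen_creds))  # False
```

The point of the example is the failure mode it prevents: even if an AI-assisted attacker phishes or cracks a credential, the remaining layers, such as MFA and monitoring, can still mitigate the impact, which is the outcome the guidance asks institutions to design for.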