
New York Financial Regulator Publishes AI Safety Guidance

Agency Details AI Cybersecurity Risks, Prevention, Mitigation Strategies
New York financial regulators published guidance urging controls against cybersecurity risks from using AI tools. (Image: Shutterstock)

New York state financial regulators on Wednesday published guidance to help organizations identify and mitigate cybersecurity threats related to artificial intelligence.

The New York State Department of Financial Services said it's not imposing new requirements, but responding to inquiries on how regulated financial institutions can mitigate AI risks.

AI can enhance threat detection and incident response strategies, said DFS Superintendent Adrienne A. Harris, but it also creates "new opportunities for cybercriminals to commit crimes at greater scale and speed."

The guidance calls out AI-specific risks such as social engineering, theft of nonpublic information and increased vulnerabilities stemming from supply chain dependencies. It also highlights "enhanced" cyberattacks, the potency, scale and speed of which are amplified by AI.

Due to AI's ability to scan and quickly ingest information, "threat actors can use AI quickly and efficiently to identify and exploit security vulnerabilities, often allowing threat actors to access more information systems at a faster rate," it said.

Once inside a system, threat actors can use AI to conduct reconnaissance, determine how best to deploy malware, and access and exfiltrate data, it said. AI can also accelerate the development of new malware variants and modify ransomware so it can bypass defensive security controls.

Theft of nonpublic information is another potential byproduct of AI adoption, since products that deploy the technology may collect and process such data. Organizations that maintain nonpublic information in large quantities and deploy AI make bigger targets, since threat actors have a "greater incentive to target these entities in an attempt to extract NPI for financial gain or other malicious purposes."

AI-enabled logon solutions may additionally store biometric data, the theft of which could give hackers access to internal data, regulators warned. Attackers could also use stolen biometric data to generate realistic deepfakes.

The agency recommends institutions implement multiple layers of cybersecurity controls so that if one control fails, others can mitigate the impact of an attack. Institutions should have monitoring processes in place to detect vulnerabilities and establish strong data management practices. Third-party vendor management, access controls and cybersecurity training must feature prominently on the organization's security agenda, the regulator said.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



