AI Governance, AI Technologies, Generative AI
Managing AI Risks in Corporate Workflows
Hearst's Andres Andreu on the Need to Form Partnerships With AI Providers

Integrating generative AI into corporate workflows requires careful attention to security. Corporate policies alone can't govern user behavior. Organizations need proactive measures, such as browser plug-ins and network-level controls, to intercept data before it leaves the organization's perimeter, said Andres Andreu, deputy CISO at Hearst.
Generative AI is "tricky" as users sometimes bypass corporate controls, which could lead to potential data leaks, Andreu said. For example, he said, employees "could take corporate documents home and get on a personal machine and leak them to a public LLM."
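The interception approach Andreu describes can be illustrated with a minimal sketch: a check that a browser plug-in or network proxy might run on outbound text before it reaches a public LLM. The patterns and the `allow_outbound` function here are hypothetical examples, not Hearst's implementation; production data loss prevention tools use far richer classifiers and policy engines.

```python
import re

# Hypothetical markers of sensitive content; illustrative only.
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    re.compile(r"\bINTERNAL USE ONLY\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like number
]

def allow_outbound(text: str) -> bool:
    """Return False if the text appears to contain sensitive markers,
    signaling the plug-in or proxy to block the request to the LLM."""
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)
```

For example, `allow_outbound("What is the capital of France?")` would pass, while a prompt pasted from a document stamped "CONFIDENTIAL" would be blocked at the perimeter rather than after the data has already left.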
Building relationships with AI providers, Andreu said, is crucial to managing data and identifying leaked proprietary information.
In this video interview with Information Security Media Group at the Fraud, Security and Risk Management Summit, Andreu also discussed:
- The need for a balanced global policy approach to ensure user privacy and data protection;
- How industry collaboration will help in setting AI standards;
- The need for adaptive cybersecurity strategies to change user behavior.
Andreu is responsible for reviewing and optimizing software development processes to ensure consistent and predictable delivery. His expertise spans information security management and cyber and web application security. Andreu has nearly 30 years of experience, including service at the U.S. Drug Enforcement Administration. He is a member of the CyberEdBoard.