Cybersecurity Ethics: Artificial Intelligence Imperatives
Machine-Learning Models Must Work for All, Warns Industry Veteran Diana Kelley

A long-time practitioner who has done everything from building and managing large enterprise networks to advising executives at some of the world's largest technology companies, Diana Kelley continues to track the top trends and issues in cybersecurity.
Ethics remains one of her chief concerns, particularly in the design of new technology, including the machine-learning models that underpin so-called artificial intelligence capabilities.
"You need to take into account how you train these systems, who's going to use them and make sure you deploy them with inclusion and ethics in mind," says Kelley, who serves as both the chief strategy officer and chief security officer at career accelerator firm Cybrize.
In a video interview with Information Security Media Group at RSA Conference 2022, Kelley also discusses:
- The ethical implications of artificial intelligence and machine learning;
- The resilience imperative and how to communicate the "when, not if" message to CEOs;
- How to bring people into cybersecurity and keep them there.
Kelley is the CSO² - chief strategy officer and chief security officer - and co-founder of Cybrize. She also serves on the boards of Cyber Future Foundation, WiCyS and The Executive Women's Forum. Previously, she served as cybersecurity field CTO for Microsoft, global executive security adviser at IBM Security, general manager at Symantec, vice president at Burton Group - now Gartner - and manager at KPMG, as well as CTO and co-founder of SecurityCurve and chief vCISO at SaltCybersecurity.