Law enforcement officials investigating the murder of Brian Thompson, CEO of UnitedHealthcare, are examining potential motives. Shell casings reportedly found at the crime scene point to one of the most widely speculated motives: anger over the company's alleged coverage denial practices. Where does AI fit in?
The U.S. Department of Homeland Security is reportedly expanding its use of emerging surveillance tools, including drones and artificial intelligence, without proper safeguards, as experts warn of potential privacy violations and risks involving facial recognition and third-party data usage.
Boeing, NASA and Pfizer have established chief artificial intelligence officer positions to lead ethical deployment and innovation in 2023. Federal requirements are pushing agencies to create CAIO roles, accelerating enterprisewide adoption across a variety of industries.
Large language models developed by Meta and Mistral AI are among a dozen artificial intelligence models that fail to meet the cybersecurity and fairness requirements of the European Union AI Act, which went into effect on Aug. 1, said developers of a new open-source AI evaluation tool.
Welcome to Information Security Media Group's Black Hat and DEF CON 2024 Compendium, featuring the latest insights from the industry's top cybersecurity researchers and ethical hackers, as well as perspectives from CEOs, CISOs and government officials on the latest trends in cybersecurity and AI.
From interlinked navigation and electric vehicles to smart voice assistants and autonomous vehicles, Mercedes-Benz aims to harness AI using its vast data pools. The company's AI policy is governed by four principles: responsible data use, explainability, privacy and safety.
ISO/IEC 42001, launched in late 2023, is the world's first AI management system standard, offering a framework to ensure responsible AI practices. Craig Civil, director of data science and AI at BSI, discusses the importance of AI policies and BSI's plans to implement the standard.
HR experts at Lattice had a vision of treating AI bots as human employees, with a place on the org chart, training, key performance metrics - and a boss. But the workforce may not be ready for that, and the firm learned the hard way. Lattice scrapped its plan three days after announcing it.
Whistleblowers from OpenAI have reportedly complained to the Securities and Exchange Commission that the company unlawfully restricted employees from alerting regulators to the risks its artificial intelligence technology could pose to humanity.
CISOs Shefali Mookencherry and Kenneth Townsend examine the implications of AI for copyright infringement and consent. They discuss the need for clear governance and responsible use of data and the evolving landscape of AI privacy issues in both the healthcare and non-healthcare sectors.
The International Monetary Fund suggested that governments consider a fiscal approach to remedy the damage artificial intelligence has caused to the environment and the economy. The agency proposed imposing a green tax on AI-related carbon emissions and taxing excess profits.
Pope Francis, during a speech at the G7 summit in Italy, called for a ban on autonomous weapons and urged world leaders to keep humans and ethics at the forefront of the artificial intelligence revolution, making him the first pope to address the annual meeting of the world's wealthy democracies.
When using AI, the origin of data matters as much as its application. Nathan Shaffer, partner in intellectual property litigation with Orrick, Herrington & Sutcliffe, advocates for forward-looking approaches to AI risk management to anticipate emerging risks.
Bias lurks everywhere in generative artificial intelligence: in the data, in the model, in the human interpreting the output of a model. That's why one of the biggest emerging security threats is relying on generative AI for important business decisions, said Vice President and CISO Rick Doten.
Greg Touhill, director of the Carnegie Mellon University Software Engineering Institute's CERT Division, detailed the creation of the Artificial Intelligence Security Incident Response Team (AISIRT) to address the increase in vulnerabilities in materials, products and services, including in software that is used to create machine learning models.