Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.
The Irish data regulator launched an investigation into whether Google complied with European privacy law while developing its PaLM 2 artificial intelligence model. Google launched the multilingual generative AI model last year.
The Dutch data regulator is the latest agency to fine artificial intelligence company Clearview AI over its facial data harvesting and other privacy violations of GDPR rules, joining regulatory agencies in France, Italy, Greece and the United Kingdom.
Microsoft says it fixed a security flaw in artificial intelligence chatbot Copilot that enabled attackers to steal multifactor authentication code using a prompt injection attack. Security researcher Johann Rehberger said he discovered a way to invisibly force Copilot to send data.
Social media platform X faces the prospect of more legal scrutiny in Europe over its decision to feed customer data into its Grok artificial intelligence system, even after it agreed Thursday to suspend harvesting tweets as training data. NOYB said the company is still likely violating privacy law.
The Irish data regulator sued social media platform X, accusing the service of wrongfully harvesting users' personal data for its artificial intelligence model Grok. During a hearing on Tuesday, regulators told the High Court of Ireland that X violated GDPR rules.
The world's first-ever binding regulation on artificial intelligence came into force on Thursday. The law's requirements are set to be enforced in a phased manner. The ban on AI systems posing unacceptable risk is set to be actionable first, six months after the enforcement date.
A widely used generative artificial intelligence framework is vulnerable to a prompt injection flaw that could enable sensitive data to leak. Security researchers at Palo Alto Networks uncovered two arbitrary code execution flaws in the open-source library LangChain.
Artificial intelligence researchers say they came up with a new way to trick chatbots into circumventing safeguards and dispensing information that otherwise goes against their programming. They tell the bots that the information is for educational purposes and ask them to append warnings.
Apple said in a Friday statement it will delay the rollout of artificial intelligence-powered features on smartphones in Europe, citing a European law meant to rein in the power of large tech companies. The smartphone giant said continental customers won't have access to Apple Intelligence this year.
Social media giant Meta will delay plans to train artificial intelligence with data harvested from European Instagram and Facebook users weeks after a rights group lodged a complaint against the company with 11 European data regulators. A Meta spokesperson said the delay is temporary.
Meta's plan to train artificial intelligence with data generated by Facebook and Instagram users faces friction in Europe after a rights group alleged it violates continental privacy law. Austrian privacy organization NOYB said it lodged complaints against Meta with 11 European data regulators.
The European AI Office, which is tasked with implementing the AI Act, the first-ever binding regulation on artificial intelligence, is set to begin operating next month. The office will be headed by Lucilla Sioli, previously an official at the Directorate-General CONNECT at the European Commission.
Artificial intelligence has a limited impact on the outcome of specific elections, says the U.K.'s Alan Turing Institute, but evidence suggests its application in campaign settings creates second-order risks such as polarization and eroded trust in online sources.
Instant messaging app Snapchat brought its artificial intelligence-powered tool into compliance after the U.K. data regulator said it violated the privacy rights of individual Snapchat users. The agency concluded its probe, stating that the company has brought its privacy measures into compliance.
Election security threats are real, and attacks will come from sophisticated nation-state threat actors who will hack victims and leak sensitive information paired with AI-generated deepfakes as part of disinformation campaigns across Western nations, social media companies told the U.K. government.