An attempt by the California statehouse to tame the potential catastrophic risks of artificial intelligence hit a roadblock when Governor Gavin Newsom vetoed the measure late last month. One obstacle is the lack of a widely accepted definition of "catastrophic" AI risks.
Foreign threat actors are using generative artificial intelligence to influence U.S. elections, but their impact so far is limited, OpenAI said. Threat actors from China, Russia, Iran, Rwanda and Vietnam have maliciously used AI tools to support influence operations.
A U.S. federal judge on Wednesday blocked most of a newly enacted California law restricting the use of election-related deepfakes, ruling that the statute likely violates free speech guarantees. The legislation "acts as a hammer instead of a scalpel," the judge wrote.
OpenAI claims its new artificial intelligence model, designed to "think" and "reason," can solve linguistic and logical problems that stump existing models. Officially called o1 and nicknamed Strawberry, the model can also deceive users and help make weapons capable of obliterating the human race.
California Gov. Gavin Newsom on Sunday vetoed a hotly debated AI safety bill that would have pushed developers to implement measures to prevent "critical harms." The bill "falls short of providing a flexible, comprehensive solution to curbing the potential catastrophic risks," Newsom said.
Wednesday brought more turmoil in the top ranks of OpenAI after three senior executives quit the company at a time when the AI giant seeks to convert itself into a for-profit entity. The new structure may affect how the company prioritizes and addresses AI risks.
More than 100 tech companies, including OpenAI, Microsoft and Amazon, on Wednesday made voluntary commitments to the trustworthy and safe development of artificial intelligence in the European Union, with a few notable exceptions, including Meta, Apple, Nvidia and Mistral.
Training artificial intelligence models on factual, real-world data is getting tougher as companies run out of it. AI-generated synthetic data is touted as a viable replacement, but experts say it may exacerbate hallucinations, already a big pain point for machine learning models.
LinkedIn this week joined its peers in using social media posts as training data for AI models, raising concerns about trustworthiness and safety. The question for AI developers is not whether companies use the data, or even whether it is fair to do so - it is whether the data is reliable.
If the bubble isn't popping already, it'll pop soon, say many investors and close observers of the AI industry. If past bubbles are a benchmark, the burst will filter out companies with no solid business models and pave the way for more sustainable growth for the industry in the long term.
California enacted regulations to crack down on the misuse of artificial intelligence, as Gov. Gavin Newsom on Tuesday signed five bills focused on curbing the impact of deepfakes. The Golden State has been at the national forefront of tech regulation.
The U.S. federal government is preparing to collect reports from developers of foundational artificial intelligence models, including details about their cybersecurity defenses and red-teaming efforts. The Department of Commerce said it wants input on how the data should be safely collected and stored.
The underground market for illicit large language models is a lucrative one, said academic researchers, who called for better safeguards against artificial intelligence misuse. "This laissez-faire approach essentially provides a fertile ground for miscreants to misuse the LLMs."
A three-month-old startup promising safe artificial intelligence raised $1 billion in seed funding in an all-cash deal. Co-founded by former OpenAI Chief Scientist Ilya Sutskever, Safe Superintelligence will reportedly use the funds to acquire computing power.
While criminals may have an advantage in the AI race, banks and other financial services firms are responding with heightened awareness and vigilance, and a growing number of organizations are exploring AI tools to improve fraud detection and response to AI-driven scams.