Google asserts that platformization and consolidation can help contain today's sophisticated threats. Embedding generative AI into security is also essential as the industry moves from assisted AI to semi-autonomous and, eventually, autonomous security, with the goal of security by default.
OpenAI’s new $6.6 billion round of funding has nearly doubled its valuation to $157 billion. With investments from Thrive Capital, Microsoft, SoftBank and Nvidia, OpenAI plans to expand its AI research while facing pressures around executive turnover and its transition away from a nonprofit model.
OpenAI claims its new artificial intelligence model, designed to "think" and "reason," can solve linguistic and logical problems that stump existing models. Officially called o1 and nicknamed Strawberry, the model can also deceive users and could help create weapons capable of obliterating the human race.
ISC2’s 2024 Cybersecurity Workforce Study warns of a stagnant workforce, a growing skills gap and a shortage of 4.8 million cybersecurity professionals worldwide. Despite increasing demand, many organizations struggle to fill critical roles, hindered by budget constraints and skills shortages.
With its advanced and evolving capabilities, AI is integrated into most business processes and tasks, becoming nearly indispensable across industries. Its impact on the workforce is therefore unsurprising and raises a familiar question: Can the technology take over jobs?
Logpoint acquires Muninn to integrate its AI-based NDR technology, enhancing threat detection and response capabilities in its SIEM platform. This move supports Logpoint's mission to defend OT and ICS systems against ransomware attacks by combining visibility from networks and applications.
A group of leading organizations across industries and technology giants is calling on lawmakers in the United States to develop focused regulations around artificial intelligence that limit the risks associated with emerging technologies while allowing innovation to flourish.
While AI transforms business operations, it helps cybercriminals develop sophisticated impersonation techniques such as deepfakes and voice synthesis, posing new challenges for corporate security, said Surinder Lall, senior vice president of global information security risk management at Paramount.
In the latest "Proof of Concept," Troy Leach of the Cloud Security Alliance and Avani Desai of Schellman discuss the risks of AI hallucinations. As AI models advance, hallucinations pose serious threats to security, especially when quick and accurate decision-making is essential.
Wednesday brought more turmoil in the top ranks of OpenAI after three executives in leadership positions quit the company at a time when the AI giant seeks to convert itself into a for-profit entity. The new structure may affect how the company prioritizes and addresses AI risks.
More than 100 tech companies including OpenAI, Microsoft and Amazon on Wednesday made voluntary commitments to conduct trustworthy and safe development of artificial intelligence in the European Union, with a few notable exceptions, including Meta, Apple, Nvidia and Mistral.
The U.S. Federal Trade Commission on Wednesday announced a series of law enforcement actions targeting companies the commission said are using deceptive artificial intelligence practices to defraud consumers, from promising "AI lawyers" to generating countless fake reviews.
Using facts to train artificial intelligence models is getting tougher, as companies run out of real-world data. AI-generated synthetic data is touted as a viable replacement, but experts say this may exacerbate hallucinations, which are already a big pain point of machine learning models.
Integrating AI and automation into DevOps improves development efficiency, software quality, scalability and resilience. By handling routine tasks, AI allows developers to focus on complex, creative work for faster development and higher-quality software releases.
LinkedIn this week joined its peers in using social media posts as training data for AI models, raising concerns about trustworthiness and safety. The question for AI developers is not whether companies use the data, or even whether it is fair to do so, but whether the data is reliable.