In the latest weekly update, ISMG editors discussed the implications of the U.S. investigation into Chinese hackers targeting telecom wiretap systems, the catastrophic risks of AI and the recent veto of an AI safety bill in the U.S., and the latest global ransomware response guidance.
An attempt by the California Legislature to rein in the catastrophic risks of artificial intelligence hit a roadblock when Governor Gavin Newsom vetoed the measure late last month. One obstacle is the lack of a widely accepted definition of "catastrophic" AI risk.
Foreign threat actors are using generative artificial intelligence to influence U.S. elections, but their impact is limited, said OpenAI. Threat actors from China, Russia, Iran, Rwanda and Vietnam maliciously used AI tools to support influence operations.
The U.S. Department of Justice is drafting new guidelines for law enforcement on the use of artificial intelligence and facial recognition tools to enhance public safety while safeguarding civil rights and ensuring ethical deployment, a senior official said Wednesday.
A U.S. federal judge on Wednesday blocked most of a newly enacted California law restricting the use of election-related deepfakes, ruling that the statute likely violates U.S. free speech guarantees. The legislation "acts as a hammer instead of a scalpel," the judge wrote.
Formula 1, one of the most data-driven sports, processes massive data using advanced technology and agile digital infrastructure. Continuous innovation is key to its success. Amie Smith, head of IT systems and support at F1, shares how her team ensures smooth operations.
This webinar session will equip you with the tools and strategies needed to build a persuasive business case for cyber resilience that resonates with executives and board members.
During this session, we'll share how Rubrik can help you successfully recover from cyberattacks without paying a ransom and give you a blueprint for turning a potential disaster into a testament to cyber preparedness.
Join us for this immersive webinar and equip yourself with the tools, strategies and collaborative mindset needed to master cyber recovery and crisis management in the face of cyberattacks.
Google asserts that platformization and consolidation can help contain today's sophisticated threats. Embedding generative AI into security is also required as the industry moves from assisted AI to semi-autonomous and, eventually, to autonomous security, with the goal of security by default.
While the number of ransomware attacks stayed about the same in the past year, cybercriminals are using more effective tactics such as weaponizing breach disclosure deadlines to extract higher ransoms, according to ENISA's 2024 Threat Landscape report.
OpenAI’s new $6.6 billion round of funding has nearly doubled its valuation to $157 billion. With investments from Thrive Capital, Microsoft, SoftBank and Nvidia, OpenAI plans to expand its AI research while facing pressures around executive turnover and its transition away from a nonprofit model.
OpenAI claims its new artificial intelligence model, designed to "think" and "reason," can solve linguistic and logical problems that stump existing models. The company also acknowledged that the model, officially called o1 and nicknamed Strawberry, can deceive users and could assist in the creation of weapons capable of destroying the human race.
ISC2’s 2024 Cybersecurity Workforce Study warns of a stagnant workforce, a growing skills gap and a shortage of 4.8 million cybersecurity professionals worldwide. Despite increasing demand, many organizations struggle to fill critical roles, hindered by budget constraints and skills shortages.
The European Commission appointed a 13-member team to draft the general-purpose artificial intelligence code of practice mandated by the AI Act. The commission on Monday announced four working groups that will oversee the drafting of the rules.