Network detection and response delivers ground truth in cybersecurity, giving organizations crucial visibility into attacker behavior before, during and after ransomware attacks. Corelight CEO Brian Dye explains how NDR helps security teams verify threats and contain incidents effectively.
Artificial intelligence is transforming cybersecurity on both offensive and defensive fronts. Attackers use AI to iterate and modify exploits rapidly, making malicious code harder to detect, said Tim Gallo, head of global solutions architects at Google.
AI-assisted coding tools can speed up code production but often replicate existing vulnerabilities when built on poor-quality code bases. Snyk's Randall Degges discusses why developers must prioritize code base quality to maximize the benefits and minimize the risks of using AI tools.
Data integrity, collection and analytics are all essential for compliance reporting, yet each remains a challenge for enterprises across business sectors. Siva Vrs of Wipro discusses the pain points of compliance in the cloud era and Wipro's partnership with AWS to alleviate them.
As artificial intelligence technology continues to evolve, security professionals have become involved in areas that traditionally weren't their concern, such as preventing biases in decision-making, said Nathan Hamiel, senior director of research at Kudelski Security.
Generative AI tools boost developer productivity, but they also generate code with vulnerability rates similar to those of human developers. Chris Wysopal, co-founder and CTO of Veracode, explains why enterprises must treat AI-generated code with caution and automate security testing.
AI's influence on social engineering and election security has become a focal point at Black Hat. ISMG editors discuss how advanced technologies are making it easier to manipulate people and compromise security systems and offer key insights on machine learning vulnerabilities.
Artificial intelligence, much like when the internet became public, is simultaneously the most overhyped and underhyped technology in history, said Sam Curry, vice president and CISO at Zscaler. Its application in cyber defense is still evolving.
AI systems acting autonomously bring risks of large-scale mistakes that current human defenses can't match, says Matt Turek, deputy director at DARPA. He discusses AI agents, adversarial attacks and the need for provable AI safety in both offensive and defensive capacities.
AI's integration into cybersecurity demands a strong foundational approach. Many companies seek advanced AI solutions but struggle with basic cybersecurity practices such as managing assets and patching vulnerabilities, said Michael Thiessmeier, executive director of U.S. NAIC-ISAO.
Shachar Menashe, senior director of security research at JFrog, discusses critical security risks in MLOps platforms - including code execution vulnerabilities in machine learning models - and why organizations must treat ML models as potentially malicious code to mitigate these inherent risks.
The ability to create real-time deepfakes of trusted figures has transformed the landscape of corporate security threats. Brandon Kovacs, senior red team consultant at Bishop Fox, details how attackers can now clone voices and video in real time, enabling new forms of social engineering and fraud.
Brandon Pugh of R Street Institute discusses Congress' struggle to balance AI innovation and regulation, the U.S. approach compared to the EU, and the urgent need for privacy laws to protect AI-driven data. He emphasizes education on AI technologies and the ongoing challenge of defining key terms.
Many cybersecurity organizations hope generative artificial intelligence and large language models will help them secure the enterprise and comply with the latest regulations. But to date, commercial LLMs have big problems - hallucinations and a lack of timely data, said NYU professor Brennan Lodge.
In the latest weekly update, ISMG editors discussed the Trump campaign's leaked documents and the many hacker groups targeting the U.S. presidential election, the potential for OpenAI's new voice feature to blur the line between AI and human relationships, and insights from the Black Hat Conference.