
Feds Call for Certifying, Assessing Veracity of AI Systems

Biden Administration Wants to Ensure AI Tech Works as Intended Without Undue Bias

The Biden administration has launched a potential precursor to regulation of artificial intelligence tools such as ChatGPT amid mounting concern about the technology's unintended effects.


In a document soliciting public input over the next 60 days, the administration floats a range of possible mechanisms to ensure an "AI accountability ecosystem" that includes audits, reporting, testing and evaluation.

"Accountability policies will help us shine a light on these systems and verify whether they are safe, effective, responsible and lawful," said Alan Davidson, assistant secretary of commerce for communications and information, during a Tuesday speech at the University of Pittsburgh.

The announcement by the head of the National Telecommunications and Information Administration comes days after President Joe Biden told reporters that "it remains to be seen" whether AI is dangerous and weeks after a slew of top tech executives called for a minimum half-year pause on advanced artificial intelligence systems development (see: Tech Luminaries Call for Pause in AI Development).

Hackers have been quick to use tools such as the natural language model behind ChatGPT for nefarious ends, even as some cybersecurity experts say the technology can help defenders as well.

Even before ChatGPT's record-setting growth, the Biden administration openly worried that AI could limit opportunities by encoding existing biases.

Davidson said the NTIA is particularly looking for responses that address certifications AI systems might need ahead of deployment, the data sets needed to conduct AI audits and assessments, the designs developers should choose to prove their AI systems are safe, and the level of testing and assurance the public should expect before systems are released.

"People will benefit immensely from a world in which we reap the benefits of responsible AI while minimizing or eliminating the harms," Davidson said. "We have to move fast because these AI technologies are moving very fast in some ways," he told The Guardian.

Nat Beuse, chief safety officer at self-driving technology vendor Aurora, told attendees of the University of Pittsburgh event that anomaly resolution must be coded into AI systems so technology that doesn't perform up to expectations can be adjusted.

Regulators will likely be more interested in how AI systems are trained, Beuse said during a panel following Davidson's keynote, while customers typically want to learn more about uptime or issues in the field.

How much, or whether, to regulate AI is a question that has consumed increasing levels of Washington's attention. Eric Schmidt, former CEO of Google and an AI proponent, told a House Oversight Committee hearing in early March that innovation should be the primary concern.

"Let American ingenuity, American scientists, the American government and American corporations invent this future and we’ll get something pretty close to what we want. Then [the government] can work on the edges where you have misuse," he said. Schmidt earlier this month told the Australian Financial Review he's against a six-month pause in AI development "because it will simply benefit China."

In Pittsburgh, Mozilla Fellow Deborah Raji sounded another concern. "Of course, there are concerns about fairness," she said. "But before all of that, we should actually be questioning whether what's being put on the market is really AI. Is it false marketing for a company to say its AI technology does certain things if the product doesn't live up to the claims?"


About the Author

Michael Novinson

Managing Editor, Business, ISMG

Novinson is responsible for covering the vendor and technology landscape. Prior to joining ISMG, he spent four and a half years covering all the major cybersecurity vendors at CRN, with a focus on their programs and offerings for IT service providers. He was recognized for his breaking news coverage of the August 2019 coordinated ransomware attack against local governments in Texas as well as for his continued reporting around the SolarWinds hack in late 2020 and early 2021.



