US Senate Leader Champions More AI Security, Explainability
Guardrails Needed to Stop AI Misuse by Autocratic Governments, Rogue Domestic Actors
Senate Majority Leader Chuck Schumer unveiled a framework for artificial intelligence development focused on security, accountability, explainability and minimizing foreign interference.
Schumer, D-N.Y., urged lawmakers to put guardrails in place to ensure autocratic governments and rogue domestic actors can't use artificial intelligence for illicit purposes such as financial extortion or political upheaval. He worries that as soon as next year, political campaigns could deploy fabricated yet convincing footage of opposing candidates saying things they didn't actually say (see: Feds Call for Certifying, Assessing Veracity of AI Systems).
"If we don't set the norms for AI's proper uses, others will," Schumer said during an address Wednesday at the Center for Strategic and International Studies. "The Chinese Communist Party, which has little regard for the norms of democratic governance, could leap ahead of us and set the rules for the game for AI. Democracy could enter an era of steep decline."
Schumer said advances in artificial intelligence mean chatbots can be deployed at great scale, targeting millions of individual voters for political persuasion. Once the fabricated information is sent to 100 million homes, Schumer said, it's hard to "put the genie back in the bottle." He said lawmakers must move quickly to ensure U.S. citizens can engage in democracy without outside interference.
"We should develop the guardrails that align with democracy and encourage the nations of the world to use them," Schumer said. "Without taking steps to make sure AI preserves our country's foundations, we risk the survival of our democracy."
Transparency Needed on How AI Generates Answers
Schumer focused many of his remarks on explainability, or how an artificial intelligence system arrives at a particular answer. Users must know where an artificial intelligence system sources its information and why that information was chosen over other possibilities, he said. Without explainability, the framework's other goals - security, accountability and protecting democratic foundations - won't be possible, he said.
"Even the experts don't always know why these algorithms produce the answers they do. It's often a black box," Schumer said. "No everyday user of AI will understand the complicated, ever-evolving algorithms that determine what AI systems produce in response to a question or a task."
Schumer acknowledged that algorithms represent the highest level of intellectual property for developers of artificial intelligence. Forcing companies to reveal these algorithms in full would stifle innovation and empower adversaries to use them for malicious purposes, he said. He called on companies and outside experts to come up with a fair solution that lawmakers can use to break open AI's "black box."
"If we can put this together in a very serious way, I think the rest of the world will follow."
– Senate Majority Leader Chuck Schumer, D-N.Y.
"The average person does not need to know the inner workings of these algorithms," Schumer said. "But we do need to require companies to develop a system where - in simple and understandable terms - users understand why the system produced a particular answer and where that answer came from. This is very complicated but very important work."
Preventing Malicious Use, Ensuring Fair Competition for All
From an accountability standpoint, Schumer said, Congress must prevent companies from using artificial intelligence to track the movement of minors and inundate them with harmful advertising that damages their self-worth or mental health. He also said lawmakers should ensure that nefarious businesses can't use artificial intelligence to exploit adults with addiction issues, financial problems or serious mental illnesses.
"Without guardrails in place regulating how AI is developed, audited and deployed - and without making clear that certain practices should be out of bounds - we risk living in a total free-for-all, which nobody wants," Schumer said.
Beyond that, Schumer said, lawmakers should consider the proper balance between collaboration and competition among entities developing artificial intelligence, as well as how much federal intervention is needed to encourage innovation. Congress also should examine the right balance between private and open AI systems, as well as how to ensure AI innovation remains open to everyone.
"We certainly don't want a situation where two or three companies dominate and everyone else is closed out," Schumer said.
Schumer said he has looked at artificial intelligence policies coming out of the United Kingdom, the European Union, Singapore and Brazil and found that they fail to "capture the imagination of the world." Many of the proposals, he said, are already being withdrawn or modified. Given the size of the U.S. economy and the number of big corporations headquartered in the U.S., Schumer believes America can set the agenda for AI regulation.
"Most of the world would like to see one system," Schumer said. "If we can put this together in a very serious way, I think the rest of the world will follow and we can set the direction of how we ought to go in AI."