Don Beyer’s automobile dealerships were among the first in the United States to launch a website. As a representative, the Virginia Democrat leads a bipartisan group focused on promoting fusion energy. He reads books about geometry for fun.
So when questions arose about regulating artificial intelligence, Beyer, 73, took what seemed to him an obvious step: He enrolled at George Mason University to earn a master’s degree in machine learning. In an era when lawmakers and Supreme Court justices sometimes admit they don’t understand emerging technology, Beyer’s coursework is an outlier, but it highlights a broader effort by members of Congress to educate themselves about artificial intelligence as they consider laws that could shape its development.
Frightening for some, exciting for others, baffling for many: Artificial intelligence has been described as a transformative technology, a threat to democracy or even an existential risk to humanity. It will be up to members of Congress to figure out how to regulate the industry in a way that encourages its potential benefits while mitigating the worst risks.
But first they have to understand what AI is and what it is not.
“I tend to be optimistic about AI,” Beyer told The Associated Press after a recent afternoon class on George Mason’s campus in suburban Virginia. “We can’t even imagine how different our lives will be in five, 10 or 20 years thanks to AI. … There won’t be red-eyed robots chasing us anytime soon. But there are other, deeper existential risks we need to pay attention to.”
Risks such as massive job losses in industries made obsolete by AI, programs that produce biased or inaccurate results, or deepfake images, videos and audio that could be used for political disinformation, scams or sexual exploitation. On the other side of the equation, burdensome regulations could stifle innovation, leaving the United States at a disadvantage as other nations seek to harness the power of AI.
Striking the right balance will require input not only from technology companies but also from the industry’s critics, as well as the sectors that AI could transform. While many Americans may have formed their ideas about AI from science fiction movies like The Terminator or The Matrix, it’s important for policymakers to have a clear understanding of the technology, said Rep. Jay Obernolte, R-Calif., chairman of the House AI Task Force.
When lawmakers have questions about AI, Obernolte is one of the people they turn to. He studied engineering and applied science at the California Institute of Technology and earned a master’s degree in artificial intelligence at UCLA. The California Republican also founded his own video game company. Obernolte said he has been “very pleasantly impressed” by how seriously his colleagues on both sides of the aisle are taking their responsibility to understand AI.
That shouldn’t be surprising, Obernolte said. After all, lawmakers regularly vote on bills that address complicated legal, financial, health, and scientific issues. If you think computers are complicated, check out the rules governing Medicaid and Medicare.
Keeping up with the pace of technology has challenged Congress since the steam engine and the cotton gin transformed the nation’s industrial and agricultural sectors. Nuclear power and weapons are another example of a highly technical subject that lawmakers have had to grapple with in recent decades, said Kenneth Lowande, a University of Michigan political scientist who has studied expertise and policymaking in Congress.
Federal lawmakers have created several offices (the Library of Congress, the Congressional Budget Office, etc.) to provide specialized resources and expert input when needed. They also hire staffers with expertise in particular topics, including technology.
Then there is another, more informal form of education that many members of Congress receive.
“They have interest groups and lobbyists knocking on their doors to give them information,” Lowande said.
Beyer said he has always had an interest in computers, and when AI emerged as a topic of public interest he wanted to learn more. Much more. Almost all of his classmates are decades younger, and most don’t seem all that fazed when they find out their classmate is a congressman, Beyer said.
He said the classes, which he fits around his busy congressional schedule, are already bearing fruit. He has learned about the development of AI and the hurdles facing the field, and the coursework has helped him understand both the pitfalls (biases, unreliable data) and the possibilities, such as better cancer diagnostics and more efficient supply chains.
Beyer is also learning to write computer code.
“I’m finding that learning to code, which is thinking in this kind of step-by-step mathematical algorithm, is helping me think differently about a lot of other things: how to set up an office, how to work on a piece of legislation,” Beyer said.
While a computer science degree is not required, it is imperative that policymakers understand the implications of AI for the economy, national defense, healthcare, education, personal privacy and intellectual property rights, according to Chris Pierson, CEO of cybersecurity firm BlackCloak.
“AI is neither good nor bad,” said Pierson, who previously worked in Washington for the Department of Homeland Security. “It’s how you use it.”
The work of safeguarding AI has already begun, although so far it is the executive branch that is leading the way. Last month, the White House unveiled new rules requiring federal agencies to demonstrate that their use of AI does not harm the public. Under an executive order issued last year, AI developers must provide information about the safety of their products.
When it comes to more substantive action, the United States is playing catch-up with the European Union, which recently enacted the world’s first comprehensive rules governing the development and use of AI. The rules prohibit some uses (routine AI-enabled facial recognition by law enforcement, for example) and require the makers of other programs to disclose information about safety and public risks. The landmark law is expected to serve as a model for other nations as they contemplate their own AI laws.
As Congress begins that process, the focus should be on “mitigating the potential harm,” said Obernolte, who added that he is optimistic lawmakers from both parties can find common ground on ways to prevent the worst risks of AI.
“Nothing substantial will be done that is not bipartisan,” he said.
To help guide the conversation, lawmakers created a new AI task force (Obernolte is a co-chair), as well as an AI Caucus made up of lawmakers with particular expertise or interest in the topic. They have invited experts to brief policymakers on the technology and its impacts: not only computer scientists and technology gurus but also representatives from sectors that see their own risks and rewards in AI.
Rep. Anna Eshoo is the caucus’s Democratic chair. She represents part of California’s Silicon Valley and recently introduced legislation that would require technology companies and social media platforms like Meta, Google or TikTok to identify and label AI-generated deepfakes to ensure the public isn’t misled. She said the caucus has already proven its value as a “safe place” where lawmakers can ask questions, share resources and begin building consensus.
“There is no bad or stupid question,” she said. “You have to understand something before you can accept or reject it.”