Senators are meeting with Silicon Valley elites to learn how to deal with artificial intelligence. But can Congress get a handle on this rapidly emerging technology before it outpaces them?

When it comes to artificial intelligence, U.S. senators are looking to Silicon Valley giants to solve a problem in the Senate — one that today’s political class is making more partisan by the day, and one that generative AI now threatens to leverage as it rewrites our common future.

Today, the Senate is hosting an unprecedented closed-door forum on artificial intelligence featuring Elon Musk, Mark Zuckerberg, Bill Gates, and more than 17 others, including ethicists and academics. Although the guests will be on the senators’ turf for roughly six hours, it is they who will hold the microphones while the nation’s elected leaders sit muzzled.

“All senators are encouraged to participate in this important discussion, but please note that this format will not provide an opportunity for senators to comment or ask questions of the speakers,” Majority Leader Chuck Schumer wrote in a notice.

But there’s a problem: Schumer may be convening the wrong conversation. With generative AI poised to flood the internet with ever more convincing disinformation and misinformation, many AI experts say the Senate’s first goal should be restoring confidence in the Senate itself.

“It seems to me that the underpinning of government is that process is more important than outcome – if the process is fair, we will accept the outcome, whether we agree with it or not,” said Dan Mintz, chair of the Department of Information Technology at the University of Maryland Global Campus. “But now people don’t trust the process and they don’t trust the results.”

Fact is increasingly becoming a quaint concept, fading in our collective rearview mirror. Over the past few elections, whatever truth we wanted, be it myth, reality, or a hodgepodge of both, was just a few clicks away in the depths of the web. Generative AI only makes it easier for politicians to create believable fictions that appeal to our basest biases, and then to deploy the same technology to push those dark creations into our social media feeds.

But many politicians haven’t gotten the memo, which is why most lawmakers praised Google’s recent announcement that it will require disclosure of AI-generated “synthetic” content in political ads.

“This is a real concern. We have to have a way for people to easily verify whether what they are seeing is reality,” said Sen. Gary Peters of Michigan, chairman of the Democratic Senatorial Campaign Committee.

But can new technology do what today’s political leaders have failed to do and restore faith in the U.S. political system? Doubtful. Americans, with the invisible help of the algorithms that now run our digital lives, increasingly live in different political worlds. Currently, about 69% of Republicans believe President Joe Biden did not legitimately win in 2020, and more than 90% of Republicans believe the news media deliberately publishes lies. On the other side, 85% of Democrats believe former President Donald Trump is guilty of interfering in the 2020 election.


“We now live in a world where people believe facts can be changed, so the ability to move people is becoming increasingly difficult. So I think the biggest problem with deepfakes is not the direct impact they will have on elections, but the larger contribution they will make to lowering people’s trust in institutions,” Mintz said.

Congress could force all tech companies to watermark AI-generated content, as many on Capitol Hill support, but that would amount to window dressing in today’s political climate.

“Honestly, I don’t think it’s going to solve the problem,” said Chinmayi Arun, executive director of the Information Society Project and a research scholar at Yale Law School. “It’s about rebuilding trust, but new technologies are also what make this disruptive version possible. That’s why it may be necessary to label them so people know.”

At least one senator seems to agree. Ohio Republican Sen. J.D. Vance said it might be a good thing for all of us not to trust what we see online. “I’m actually very optimistic that in the long run it’s just going to make people not believe everything they see on the internet, but I think in the interim it could actually do some real damage,” Vance said.

In 2016 and 2020, misinformation and disinformation became synonymous with American politics, but we have now entered the deepfake era, marked by the democratization of the tools of deception, however subtle they may be: a realistic voiceover here, a fake photo polished with precision there.

Not only does generative AI help effortlessly reshape the world to fit one person’s political fantasies, its power lies in its ability to deliver these fakes precisely to the most ideologically vulnerable communities, where they are most likely to ignite a digital wildfire. Vance believes no legislation can address these complex, intertwined issues.

“Maybe, on the fringes, there are things you can do to help, but I don’t think you can really control these viral things unless there’s a widespread level of suspicion, and I do think we’ll get there,” Vance said.

“Scripted” political drama

This summer, Schumer and a bipartisan group of senators hosted three private, full-Senate AI briefings, which have now given way to these new technology forums.

For a chamber packed with 100 camera-loving and notoriously talkative politicians, the briefings were a departure. In normal committee hearings, senators have become experts at raising money, and sometimes eliciting knowledge, by asking questions specially crafted for YouTube clips, but not this time. While they won’t be able to question the assembled tech experts this week, Schumer and the other hosts will play puppet masters offstage.

“This is intended to be a guided conversation. It’s a scripted set of questions that are all designed to elicit a myriad of different ideas on a range of policy areas for the benefit of staff and legislators,” said Sen. Todd Young, R-Ind.


Young is part of Schumer’s bipartisan group of four senators, along with Democratic Sen. Martin Heinrich of New Mexico and Republican Sen. Mike Rounds of South Dakota, who have been spearheading these private Senate AI study sessions.

While there’s no official timetable, Young doesn’t expect the Senate AI forums to wrap up until this winter or early next spring.

The meetings may be bipartisan, but the two parties remain worlds apart when it comes to underlying policy. As always, Democrats are calling for new regulations, while Republicans are slamming the brakes on the idea.

“In most of these areas, existing regulations prohibit conduct that we would like to continue to prohibit,” Young said. “The policy challenge therefore becomes ensuring that within government, our existing regulatory and enforcement mechanisms are adapted to an AI-driven world.”

While many Democrats have called for a new AI agency, Republicans are unlikely to vote for one, making it increasingly likely the president will be forced to install an “AI czar” within the government, without the formal nomination process that requires Senate approval.

“I think you probably need someone to coordinate policy development activities between different government agencies, which may be located within the White House, [and] it could be similar to the national security adviser,” Young said.

The national security adviser is not elected, which is why former President Barack Obama was able to put Susan Rice in the White House even as she became the Republican Party’s favorite political piñata. It’s also how Trump was able to put conspiracy theory peddler Michael Flynn in the White House – where he stayed for 22 days before he was forced out for lying.

Other senators are also looking for ways to get around the narrowly divided Senate, not to mention the ever-warring Republican-controlled House.

“One thing we can do is clarify that the FEC [Federal Election Commission] has the jurisdiction to take on this issue and study it,” said Sen. Martin Heinrich, D-N.M. “I think they may, but I’m not sure all members share that perspective. So we should make this very clear.”

While the two parties are increasingly divided the more they study AI, some are looking for ways to combine traditional concerns from both sides into an overarching argument for action.

“I think if you can build a coalition between those who want to protect elections and those who want to protect confidence in the public markets, the chances are great. All of a sudden you have strange bedfellows coming together,” said Democratic Sen. Mark Warner of Virginia, who is also chairman of the Intelligence Committee.


Warner spent decades in the tech industry, co-founding the company that became Nextel, before overseeing the past few elections as Intelligence Committee chairman and witnessing foreign incursions firsthand. While he praised Google for taking a first step toward protecting the public from inflammatory AI-generated nonsense, he said it doesn’t go far enough.

“My concern is whether individual platforms will decide for themselves what is good or bad. We’ve seen this happen in the past,” Warner said. “That won’t work.”

It may not have worked in the past, but that hasn’t spurred Congress to do anything about it. That is how Twitter (now X) went from banning political ads in the 2022 midterm elections to announcing it would allow them in 2024. Other platforms also change their policies at will.

Follow the money

While the millionaires and billionaires Schumer is calling in are flush with cash, their own or their investors’, the government is not. Or, at least, lawmakers haven’t earmarked billions of dollars in the emerging field of generative AI to try to counter the private sector.

“We’ve seen very little investment in this direction. Just compare how much money OpenAI makes and how much investment it attracts with the meager resources at DARPA [the Defense Advanced Research Projects Agency],” said Siwei Lyu, SUNY Empire Innovation Professor in the Department of Computer Science and Engineering at the University at Buffalo, State University of New York.

“These numbers are an astronomical imbalance, so we need governments to pay more attention and invest in counter-technology,” Lyu said.

While Lyu and other scholars have been calling for investment in these counter-technologies for years, Congress has balked. Now, Schumer is handing his chamber’s microphone to wealthy CEOs. Lyu, who has worked in media forensics for two decades, has seen this before.

“This is the classic conflict between capitalism (making money, profit) and social welfare,” Lyu said. “Everything calls for the government to be more actively involved in the process.”

Once a chamber of digital naifs, most senators today, after a summer spent studying artificial intelligence, feel they know enough about the topic to utter a few grumbles at Silicon Valley’s titans. But this week, the senators known for their excruciating ability to fill dead air with their own voices will once again be asked to sit down and listen to an artificial discussion about artificial intelligence.

And when they do finally speak, generative AI will be listening, ready to reconstruct our real world in its hyperpartisan image, a problem party leaders have yet to address. Because as of now, AI may be plenty disruptive, but it has yet to disrupt business-as-usual politics in Washington.
