When an image showing what looked to be a bombing at the Pentagon started to spread online last week, the stock market dipped momentarily. Kayla Tausche, who covers the White House for CNBC, quickly started fact checking. Popping into Lower Press — the cluster of desks and offices behind the briefing room where many press aides work — she found principal deputy press secretary Olivia Dalton and asked about the reports.
“There was initially confusion about where it was coming from (I said ‘RT-style unconfirmed viral accounts’) and then exasperation,” Tausche told West Wing Playbook.
Dalton moved quickly, connecting with the Pentagon and National Security Council before telling Tausche there did not appear to have been a bombing. Once additional tweets suggested the phony image had been generated by artificial intelligence, Tausche followed up with Dalton to apologize for the diversion.
“She said, with visible frustration, that she is dealing with these types of inquiries on a daily basis, with greater and greater frequency,” Tausche added.
The White House press shop has found itself on one of the many front lines of the AI battles. Aides there, who collectively handle hundreds of media inquiries a day, have already been briefed by experts on the potential national security risks posed by images and videos that have been altered using AI, according to an administration official.
Outside the press shop, the White House has scaled up its efforts to assess and manage AI’s risks, impressing on AI companies during meetings on campus that it’s their responsibility to ensure their products are safe. It updated the strategic plan for AI research and development for the first time in four years and last week launched a process to work toward developing an AI bill of rights.
“Everyone is trying very hard to be sensitive, to issue these warnings but without predicting what could happen, and that's because they don't know,” said Kara Swisher, a prominent tech-focused journalist. “Most people, if they're being honest, would tell you they don't know what's going to happen.”
The administration’s knockdown of reports of the Pentagon bombing — backed by a tweet from Arlington, Va., first responders — was part of a swift debunking that helped the market recover after the S&P fell 0.3 percent, a momentary loss of some $500 billion in value.
But days later, another AI-generated deep fake popped up in the form of a video showing a purported Microsoft Teams call between anti-Russia activist Bill Browder and former Ukraine President Petro Poroshenko arguing for the easing of sanctions against Russian oligarchs. Both fakes were easy enough to spot for those familiar with AI. But as the technology develops and improves, AI-generated text, audio and video could quickly become indistinguishable from that produced by human beings.
On Tuesday, prominent industry officials, including OpenAI CEO Sam Altman, issued a succinct but jarring statement aimed at seizing the attention of global leaders: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.
When asked about the statement, White House press secretary Karine Jean-Pierre wouldn’t say if the president shares the belief that AI, if mismanaged, could lead to extinction. She only acknowledged that AI is “one of the most powerful technologies that we see currently in our time” and that the administration takes risk mitigation seriously.
There are various proposals floating around Capitol Hill for regulating AI — and Big Tech more broadly — including legislation released earlier this month by Sen. Michael Bennet (D-Colo.) to create a new federal agency to oversee the technology.
“We remain concerned about an uptick in deepfake videos and manipulated images spreading on social media platforms,” said White House assistant press secretary Robyn Patterson. “As the technology for creating fake videos and images improves, it’s important for the media and the public to be aware of this trend, which we expect to grow, if not exponentially.”
While AI’s huge potential upsides are already triggering a global arms race to harness and capitalize on the technology, the unanticipated bumps could be severe, especially amid the coming presidential election.
“It’s not that one piece of content is going to be devastating; it’s the collective, scaled approach to inauthenticity that’s the problem. People can do this at scale now,” said Sarah Kreps, a professor at Cornell University’s Brooks School Tech Policy Institute and one of three AI researchers invited to speak to Biden’s new working group on the matter within the President’s Council of Advisors on Science and Technology. “It can look like massive numbers of citizens are supporting a particular issue when they’re not.”
In a country where sectarian partisanship has already given rise to misinformation and the spread of conspiracy theories, AI may only deepen the public’s growing mistrust of facts. “It just creates this ecosystem of distrust in a democracy where trust is such a foundational pillar,” said Kreps.