With the announcement that the big names of Silicon Valley will be joining forces to “discuss with government agencies [how] to secure the November election” comes the question: Does it matter?
As reported by The New York Times, the group of Facebook, Twitter, Microsoft, and Google will meet with the Cybersecurity and Infrastructure Security Agency and the Department of Justice’s National Security Division. LinkedIn (which is owned by Microsoft), Pinterest, Reddit, Verizon Media, and the Wikimedia Foundation are also reportedly involved.
Today we joined other technology companies and U.S. government agencies for another regular meeting on our election security efforts. You can read our joint statement here: pic.twitter.com/zO6GD8RHX6
— Google Public Policy (@googlepubpolicy) August 12, 2020
With fewer than 100 days to the election and problems over political ads and misinformation swirling on all the major social media platforms, some of these platforms have individually chosen to take action: Facebook and Twitter, at least, have publicly begun flagging and fact-checking fake news posts, even when they come from President Donald Trump. But at this point, how effective could a big coalition actually be?
“There’s still an opportunity to make an impact,” said Ben Goodman, senior vice president at cybersecurity firm ForgeRock. “But I think realistically it’s probably too late. This is not something they’re going to fix between now and November.”
Rick Forno, the assistant director of the University of Maryland Baltimore County’s Center for Cybersecurity, echoed Goodman’s view.
“I don’t think it’ll hurt,” Forno said. “But I’m not sure how significantly effective it’ll be. I think the tech companies are trying to implement a lot of the lessons from [the] 2016 [election], but this is not the solution now.”
Misinformation also knows no season — it doesn’t just go away when an election is over. Tech and social media companies need to have their election-related war rooms operating every day of the year, Forno said. It isn’t enough to ramp up enforcement six months beforehand and then ramp it down a week after.
“You need to have the planning in place to be able to surge everything at the appropriate time,” Forno said. “You can’t just turn everything off on November 10th and say you’re done until the midterms. You have to keep refining processes on how to recognize fake news, fake audio and video, disinformation, all of that.”
Both experts likened the coalition to the way U.S. banks currently communicate about fraud: pooling resources on recent scams without sharing intellectual property. Goodman said he believed the coalition could be the start of something good, but with the election so close, there isn’t much time left to fix all the problems rampant on these platforms.
“It makes a lot of sense for these different services to start signal sharing,” said Goodman, pointing out that de-platforming a problematic figure often results in that figure just hopping from platform to platform to keep peddling their disinformation. “If they can band together, that could be really effective,” he said. “But that would require a level of maturity that I don’t think we’ve achieved yet.”
A spokesperson for the Wikimedia Foundation told Digital Trends in an email, “this is a long-term effort focused more broadly on disinformation that will continue beyond the U.S. election.”
Google referred Digital Trends to a statement that said, in part: “For the past several years, we have worked closely to counter information operations across our platforms.” Google also said in its statement that it will “stay vigilant” against misinformation.
Reddit and Microsoft said they did not have a comment on the coalition. The other companies involved have not yet responded to a request for comment.