On Wednesday, CNN reporter Jim Acosta had a pointed exchange with the president over immigration during a press conference, resulting in the Trump administration banning him from the White House. During the exchange, a Trump aide attempted to wrestle his microphone away from him. Today, a partisan war broke out over what a video of that incident really showed — and in so doing, seemed to herald the arrival of an era in which manipulated videos further erode the boundaries between truth and fiction.
Aaron Rupar sets the stage at Vox:
When Trump insulted Acosta at the press conference, a White House intern approached him and tried to physically remove a microphone from his hands. Their arms touched as the woman reached across Acosta’s body to grab the microphone he was holding in his hand.
Looking back at the video, it does not in fact show Acosta “placing his hands” on the woman. But about 90 minutes after [Sanders] posted her string of tweets, Infowars editor Paul Joseph Watson tweeted out a video of the incident that was doctored to make it look like Acosta chopped the woman’s arm with his hand.
Less than an hour later, [Press Secretary Sarah] Sanders tweeted out the doctored video, writing, “We will not tolerate the inappropriate behavior clearly documented in this video.”
Vox’s headline calls the footage in question a “fake Infowars video.” Was it? Charlie Warzel messaged Watson, who told him that he had simply zoomed in on one section of the footage but otherwise left it as is. That led to a debate over whether a simple change to the video’s frame rate transformed it, making Acosta appear to be the aggressor. As Warzel notes, it’s complicated:
Watson’s defense is an issue of semantics — that he altered the video but did not “doctor” it to show something that wasn’t there. Unfortunately, establishing just how the video was changed is complicated. The original video file was created by Watson from a gif file that the Daily Wire tweeted. It’s not out of the realm of possibility that the image was distorted by that process. More importantly, the process of converting videos to gifs often results in losing frames from the original video file (in the case of the Daily Wire gif, that means there [are] likely frames missing from the original CSPAN video it was made from).
It’s all confusing. There’s even a scenario in which all parties are mostly correct. Watson’s clip differs from the CSPAN clip because it was taken from a gif and is thus missing frames, which could make Acosta’s movement look faster than it actually was. In that case, one can argue that the video was sped up. If so, there’s also an argument that Watson is telling the truth: he didn’t personally speed up the video; he just took a clip that was missing frames.
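Warzel’s frame-loss argument comes down to simple arithmetic. Here is a rough sketch with made-up numbers — nothing below comes from the actual clip — showing why a clip that loses frames but plays back at the original frame rate appears sped up:

```python
def apparent_speedup(original_frames: int, kept_frames: int) -> float:
    """If a converter keeps only `kept_frames` of `original_frames`
    but the clip still plays at the same frames-per-second, the same
    motion is squeezed into fewer frames, so it looks faster by
    exactly this ratio."""
    return original_frames / kept_frames

# Hypothetical example: a 2-second, 60-frame broadcast clip that a
# gif conversion reduces to 40 frames, then plays at the same rate:
print(apparent_speedup(60, 40))  # → 1.5 (motion appears 1.5x faster)
```

Which is why, under this reading, no one needed to “doctor” anything for the motion to look more aggressive than it was.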
Meanwhile, Shane Raymond, a journalist at “social media intelligence” company Storyful, does a frame-by-frame analysis and concludes that Sanders shared footage that was altered to make certain frames repeat. The Washington Post’s Drew Harwell, citing various other analyses, also wrote that the footage had been doctored. Paris Martineau, who also went frame by frame, smartly noted that the video makes the incident seem more dramatic than it was primarily by repeating it three times.
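For the curious: the kind of repeated-frame check Raymond and Martineau describe can be approximated by hashing each decoded frame and flagging runs of identical frames. A toy sketch — the byte strings below are stand-ins for decoded frame buffers, not the actual footage:

```python
import hashlib

def repeated_frame_runs(frames):
    """Return (start_index, run_length) for every run of identical
    consecutive frames -- the repetition analysts look for when
    checking whether footage was altered to linger on a moment.
    `frames` is any sequence of bytes-like frame buffers."""
    runs = []
    i = 0
    while i < len(frames):
        h = hashlib.sha256(frames[i]).digest()
        j = i + 1
        while j < len(frames) and hashlib.sha256(frames[j]).digest() == h:
            j += 1
        if j - i > 1:  # more than one copy in a row
            runs.append((i, j - i))
        i = j
    return runs

# Toy clip: frame 1 repeated three times in a row.
clip = [b"f0", b"f1", b"f1", b"f1", b"f2"]
print(repeated_frame_runs(clip))  # → [(1, 3)]
```

Compression can produce near-duplicate frames that hash differently, so real analyses compare frames perceptually rather than byte-for-byte; exact hashing is just the simplest version of the idea.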
Whatever the case, Warzel worries that today marked a milestone on the road to a dystopia in which everyone “chooses their own reality” based in part on doctored videos.
It’s a concern that has accelerated in recent months with the arrival of “deepfakes,” which could eventually show people doing things they are not actually doing, in perfectly crisp detail. The mere existence of these perfect deepfakes, of course, will cast doubt on the truth of all legitimate video clips.
But this election has shown that it’s real videos, not fake ones, that are likely to cause us the most problems. Jane Lytvynenko wrote this week about a misleading clip that purported to show voter fraud. In reality, it showed a paper jam. Facebook and Instagram removed the video for violating their rules; Twitter left it up, and it has been viewed more than 95,000 times.
Recently departed Facebook security chief Alex Stamos says mislabeled videos are likely to be a much bigger problem than doctored ones for the foreseeable future. “Deepfakes get too much play as a risk compared to mis-framing videos that don’t have technical indicators of falsity,” he tweeted. “There is no [machine-learning] algorithm to find videos that are intentionally mislabeled.”
Nor is there an algorithm that can settle the case of Acosta versus the White House. Many intelligent people, looking at the same footage, walked away from it with very different conclusions. Those conclusions largely reflected their political views. In other words, they chose their own reality.
Over here in my reality, an aide attempting to wrest a microphone out of the hand of a journalist doing his job is an assault on democracy. But on this day, that seemed to be more or less beside the point, even if the fact that it had happened was not in dispute.
Kurt Wagner notes that, however well Election Day seemed to go for Facebook, its real problems may not materialize until months from now, after the company and journalists have had time to dig into the results. (Sal Rodriguez makes a similar point here.)
The problem, of course, is that Facebook appeared to be fine the day after the 2016 election, too. CEO Mark Zuckerberg even dismissed the idea that so-called fake news was a real problem. It wasn’t until months later that people, Facebook included, fully realized the extent to which Russian trolls were using the service to try to sow political discord among U.S. voters.
Like me, Kevin Roose thinks that one of the greatest risks to Facebook coming out of Election Day is that it will grow complacent:
Facebook has shown, time and again, that it behaves responsibly only when placed under a well-lit microscope. So as our collective attention fades from the midterms, it seems certain that outsiders will need to continue to hold the company accountable, and push it to do more to safeguard its users — in every country, during every election season — from a flood of lies and manipulation.
I have officially lost count of the investigations and potential investigations of Facebook now percolating in Europe, so if anyone wants to create an updated spreadsheet, I am happy to link to it here. In the meantime, the European Union’s competition chief is considering a new tax investigation, Thibault Larger reports:
EU competition chief Margrethe Vestager is weighing up whether there are grounds to open a probe into Facebook’s European tax arrangements as she deepens her multinational investigation into sweetheart tax deals, two people close to the case said Wednesday.
Countries giving preferential tax deals to big companies — particularly in the tech sector — have been a European Commission priority since 2014, and Vestager ruled in 2016 that Ireland would have to claw back €13 billion in unpaid taxes from Apple. In a novel crackdown against tax avoidance, the EU has started to treat preferential tax arrangements as a form of state aid — essentially declaring that countries are giving illegal subsidies to businesses.
In a significant victory for the organizers of the Google walkout, Google will no longer require arbitration for sexual harassment claims. Adi Robertson:
One of Google’s key changes is making arbitration optional for individual sexual harassment and sexual assault claims, so employees could take misconduct claims to court instead of privately settling them. Pichai also promises to provide “more granularity” in internal reports about harassment at Google. Google will also update and expand its mandatory sexual harassment training, and it will start docking the performance review scores of employees who don’t complete the training.
Doug MacMillan writes up new data from the Pew Research Center about kids and YouTube:
Amid concern from children’s advocacy groups that the Google-owned video website is profiting from advertisements targeted at minors, the survey from the Pew Research Center shows that more than four out of five parents with children 11 and younger have given them permission to watch a YouTube video. More than one-third of those parents let their children watch videos on the site regularly, according to the results of the survey published Wednesday.
The survey also showed that the majority of parents whose children watch YouTube say their children have seen disturbing content on the site.
YouTube released some new stats about what it’s paying out to owners of copyrighted works. Here’s Paul Sawers:
Arguably the most interesting figure from the company’s latest How Google Fights Piracy report relates to YouTube’s Content ID. Indeed, Google revealed that it has spent more than $100 million on the technology since its inception, including computing resources and staffing, up from $60 million two years ago.
And it has also now doled out more than $3 billion to rightsholders, up from “over $2 billion” in 2016 and $1 billion two years before that.
Russia is rapidly approaching the logical conclusion of its effort to monitor all of its citizens’ communications, reports David Meyer:
When someone signs up for a messaging service, the operator of that service will need to verify their registration data through their mobile operator.
The mobile operator will have all of 20 minutes to respond to each request for information and will have to record information about the messaging apps that each customer uses.
China’s state-run news agency, Xinhua, says it’s using AI to create “virtual AI anchors” for its newscasts. James Vincent suggests this is… not great:
The technology has its limitations. In the videos above and below of the English-speaking anchor, it’s obvious that the range of facial expressions are limited, and the voice is clearly artificial. But machine learning research in this area is making swift improvements, and it’s not hard to imagine a future where AI anchors are indistinguishable from the real thing.
This will strike many as a disturbing prospect, especially as the technology is being deployed in China. There, the press is constantly censored, and it is nearly impossible to get clear reports of even widespread events like the country’s suppression of the Muslim Uighur community. Creating fake anchors to read propaganda sounds chilling.
WeChat now offers 1 million lightweight apps to its user base. Here’s an interesting example of constraints spurring creativity:
As their names imply, mini programs allow files up to only 2MB. They load faster than native apps — which means users may tend to reinstall them in the future — but they also compromise certain features, which could undermine user experience.
Of the hundreds of app verticals, games take up 28 percent of all mini programs, followed by life services and e-commerce, according to QuestMobile.
Dan Seifert reviews the Facebook Portal video phone, which goes on sale today. He and other reviewers say that the Portal is good at its intended purpose of making video calls, but otherwise, it doesn’t do enough to make it worth buying, especially given the privacy risks:
Facebook is saying all the right things about privacy, but I’m not sure that will be enough to convince the skeptics. There’s already been mixed messaging from Facebook on whether it will be able to use data from the Portal for advertising purposes, so people are right to be skeptical.
Beyond that, unless you are a heavy user of Facebook’s Messenger calling, the Portal doesn’t currently do enough to justify its existence.
Facebook hardware chief Andrew Bosworth gives Sarah Frier the hard sell for Portal. It “isn’t a data-gathering operation,” he says. (Good blurb for the packaging!) But it will be used for advertising purposes:
“If there’s an ad-targeting cluster on Facebook for people interested in video calling, that might be a cluster that now I’m going to be a part of,” he said.
Before meeting your Facebook date in person, may I recommend you first chat with your prospective partner on a Facebook Portal?
Facebook and Google are going crazy building data centers and other buildings, Kevin McLaughlin reports:
Facebook and Alphabet, the parent company of Google, boosted their capital expenditures the most of any of their peers, more than doubling such investments during the first nine months of the year from the same period in 2017, according to an analysis by The Information. Microsoft and Intel also increased their spending significantly during the period. The growth is a sign that the biggest players in tech are sufficiently bullish on future growth opportunities that they are willing to plow cash from their booming businesses, along with savings from corporate tax cuts, into infrastructure.
Shira Ovide is annoyed that Facebook keeps comparing its efforts to shore up global democracy to its effort in 2012 to build a successful mobile app:
Facebook’s 2012 smartphone reboot was a cinch compared with its current challenges. Facebook now wants to protect elections around the world, weed out misinformation and encourage online behavior that unifies people. Nothing in Facebook’s history shows it’s up to this set of challenges.
The stakes are simply higher for Facebook today. This moment is different from the time when users initially revolted against Facebook’s news feed in 2006, or when people grumbled about a separate app for chats. And this time is different from Facebook’s reboot in response to the smartphone threat. Facebook was right in those moments, but that doesn’t make it infallible.
Kyle Russell, for one, is having a good time on Twitter dot com. He writes about his decision to post about every book he read this year, and everything that has happened as a result:
By framing my interaction on the platform around something I consider good for me, I’ve been able to have that rush compel me not toward starting fights but to deepen my understanding of the world and the history leading up to its current state. I can both see and feel the compounding of this effect: as the thread gets longer and the included books more diverse, I get more eyes on the entire thread with each new book, and more likes on all previous posts, and so I am rewarded for the new, latest book and all the work I’ve done so far. This effect is scary when it leads to the radicalization of someone giving into the effects of having outrageous, combative, misinformed behavior receive systemic incentivization, but it’s deeply appreciated when it’s simply keeping me from slowing down something I’m proud of and want to do more of despite my personal tendency toward procrastination and letting projects fall to the wayside as I focus on professional matters.
One of my pet interests at the moment is whether a decentralized social network, a la Mastodon, could avoid some of the problems that the big tech platforms have encountered. A federated content moderation policy, for example, might better balance speech and security. Here Aviv Ovadya writes that decentralization alone won’t be enough to solve the problem:
What decentralization does is re-distribute power. That can sometimes be exactly what is needed—but in other cases it can exacerbate the original problem! In the case of misinformation and harassment, it re-distributes power from platform governance (e.g., Facebook rules and algorithms) to “publishers”—in this case, misinformers and harassers.
That Myanmar report. Google in China. Facebook’s war on ISIS. The Internet Research Agency. And TikTok!
As a young man I would often wonder at what age men start feeling compelled to hide or lie about their age on dating sites. But even now, I have yet to reach the stage where I would be willing to sue Tinder in order to represent myself as two decades younger than I am. Points for a novel argument, though:
Emile Ratelband, 69, argues that if transgender people are allowed to change sex, he should be allowed to change his date of birth because doctors said he has the body of a 45-year-old.
I also have the body of a 45-year-old. Unfortunately, I’m 38.
Talk to me
Send me tips, comments, questions, and clearly faked videos: email@example.com.