On October 14, the New York Post published a story claiming that Hunter Biden (son of presidential candidate Joe Biden) dropped off his laptop at a repair shop, never picked it up, and that emails discovered on the laptop showed an offer to introduce a Ukrainian businessman to Hunter’s father (who was then Vice President). The Post described the story as a “smoking gun”. It was salacious and exciting. But other reporters immediately began to push back, saying there was no real proof behind the story’s claims. Twitter and Facebook quickly took action against this information.
Almost immediately, Twitter wouldn’t allow users to post the link to the article. If they tried, a message popped up saying, “We can't complete this request because this link has been identified by Twitter or our partners as being potentially harmful." If users attempted to share the link via direct message, Twitter warned them that the link was “potentially unsafe”. But why did Twitter make these moves? Twitter said the article violated its policy against distributing hacked materials and its private personal information policy: the screenshots in the New York Post article exposed private contact information, and the materials were taken from the computer without Hunter’s permission. Twitter received a lot of backlash for this move, even from the President himself. President Trump tweeted, “So terrible that Facebook and Twitter took down the story of 'Smoking Gun' emails." Twitter then went so far as to lock The Post’s Twitter account until The Post stopped attempting to share the article on the platform, another controversial move.
Facebook also acted immediately, greatly reducing the reach the article could have on its platform: the article would not pop up as suggested content, and if users shared it, it would live only on their personal timelines, not in the feed their friends scroll through. Twitter gave more of an explanation for its actions, citing actual company policies. Facebook left things less clear, saying only that it wanted third-party fact-checkers to look at the article before it was given free rein on the platform; Facebook had never made that move so quickly before. Honestly, both companies have very vague language about what can and cannot be on their platforms. We have to admit, Facebook and Twitter hold an enormous amount of control over what we see online, and if they don't clear up their policies, the platforms could easily tip too far in either direction: over-policing speech or letting misinformation run wild.
Even with Facebook's and Twitter's efforts to keep possible misinformation off their platforms, the article still racked up 2.59 million interactions across the two sites. We've said it before and we'll say it again: don't use social media as a news source. It's too murky, and when you want to know what's going on in the world, you need clarity.