Over the past year, social media companies and even reputable publications have come under increasing scrutiny for the content posted on their platforms. During the 2016 US elections, false news stories were regularly debunked throughout the online world. Some social media companies reacted by sacking the human editors who curated trending news stories and replacing them with an algorithm, often without the desired effect: just days later, the sites once again displayed a fake news headline as trending.
A couple of weeks ago, a video-sharing site drew criticism when some of its “Trusted Flaggers” (a special network of volunteers who help identify worrisome posts and comments on the network) complained that the company had a huge backlog of reports, some months old, and that it responds to only a small fraction of complaints from the public about child endangerment and suspected child grooming.
Other companies bank on artificial intelligence programs to solve the problem by spotting and flagging violent videos without any human involvement. Unfortunately, these tools could not tell the difference between genuinely harmful content and legitimate material, and began flagging and removing war documentaries as well.
It seems clear from the examples above that while AI can take on much of the work of identifying disturbing posts, photos and videos, human intervention is still needed to verify what the tools are flagging. As argued by Mandy Jenkins, head of news at Storyful, “Machines think in black and white. I don’t think verification can be automated yet. A judgement call has to happen. It’s about asking questions and seeing how a story adds up against other facts we know. What is the background of the source or site? Who is the person who wrote this story? Where does it come from? These are too many questions for a robot to answer on its own.”
What is needed is a combination of mind+machine: technology and human minds complementing each other for greater efficiency and quality.
On the other hand, humans alone simply cannot deal with the volume of posts that any social network handles. Time is the issue here. Facebook, for example, now has 2 billion users, and users upload an estimated 400 hours of content to YouTube every minute. Those statistics are staggering, and it is clear that it is impossible to monitor and control the amount of data posted on these platforms with human minds alone.
How do fake news and artificial intelligence relate to our business?
As I have argued before, the end user of an analytics solution wants to receive the insight, because that allows them to make the necessary business decisions. You will remember from our previous blog post about the Ring of Knowledge that every analytics solution must go through a number of steps to reach a stage where these decisions can be consistently successful. Unfortunately, neither minds nor machines can complete all the steps on their own. The right combination of mind+machine is needed – the coming together of human ingenuity and skill with state-of-the-art digital machines.
In the context of social media, this means that it is absolutely necessary to have powerful algorithms to trawl through mountains of data and flag potential malicious content, but it is also just as necessary to have trained human minds who can verify the results and complete the process.
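To make the division of labour concrete, here is a minimal, purely illustrative sketch of that two-step flow: a cheap machine pass flags suspect posts at scale, and a human reviewer makes the final call on each flag. The keyword list stands in for what would really be a trained classifier, and all names (`Post`, `machine_flag`, `human_review`, `SUSPECT_TERMS`) are hypothetical, not any platform’s actual API.

```python
# Hypothetical sketch of the mind+machine moderation flow:
# the machine flags candidates, a human confirms before removal.

from dataclasses import dataclass
from typing import Callable, List

# Stand-in for a real ML classifier (assumption for illustration).
SUSPECT_TERMS = {"fake", "violence", "scam"}

@dataclass
class Post:
    text: str
    flagged: bool = False   # set by the machine pass
    removed: bool = False   # set only after human review

def machine_flag(posts: List[Post]) -> List[Post]:
    """Machine step: scan every post cheaply and flag suspects."""
    for post in posts:
        if any(term in post.text.lower() for term in SUSPECT_TERMS):
            post.flagged = True
    return [p for p in posts if p.flagged]

def human_review(flagged: List[Post], confirm: Callable[[Post], bool]) -> None:
    """Human step: a reviewer decides the final outcome of each flag."""
    for post in flagged:
        post.removed = confirm(post)

posts = [Post("Breaking: totally fake headline"), Post("Cute cat video")]
queue = machine_flag(posts)          # machine narrows two posts to one
human_review(queue, lambda p: True)  # reviewer confirms the single flag
```

The point of the design is that `removed` can never become true without passing through `human_review`; the algorithm only narrows the queue, exactly the complementary roles described above.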
Let me know what you think in the comments below, but beware, I will check and remove any malicious posts.