by Tanay Patni
“Fake news” was named word of the year in 2017. Was it just another hyped term, or do we need to be worried about it? The spread of fake news and hate speech is a real problem, and arguably one of the biggest we face. Misinformation has led to unrest in many parts of the world, and hate speech has become a major factor contributing to poor mental health. The mishandling of both has put social media companies under heavy scrutiny. The question arises: what actions should these companies take to tackle the issue?
Before we dive into the roots of the problem and ways to tackle it, it is important to look at some real-world consequences that were a direct result of the spread of fake news and misinformation.
1) A man entered and opened fire in a pizzeria in Washington D.C. because, as a result of fake news spread about it, he believed it was a front for human trafficking.
2) Military leaders in Myanmar used Facebook to spread fake news about the Rohingya, an ethnic minority group, to incite violence against them, which eventually led to genocide.
3) During and after the 2016 election, Russian agents created social media accounts to spread fake news that stirred protests, favoured presidential candidate Donald Trump, and discredited candidate Hillary Clinton and her associates.
4) Many unverified home remedies, fake advisories and conspiracy theories about COVID-19 were spread in India during the pandemic. This was a serious issue, as it directly affected the health of millions of people.
It is clear from these examples that we can no longer ignore the problem. Fake news is being used to incite violence and hate, manipulate elections, and spread misinformation that harms people.
To tackle any issue successfully, we first have to analyse it. One of the main problems is that social media companies claim they are not responsible for content posted on their platforms. This would be true if they did not have algorithms running at the back end that control the reach of every piece of content. It is common knowledge that platforms use these algorithms to keep you on the platform as long as possible. Because fake news and hate speech grab a person's attention quickly, they spread fast under such algorithms. This is exploited by people who use fake accounts and bots to manipulate the algorithm, buying paid likes, views and comments to increase the reach of their content. To gauge the gravity of the situation, the NATO Strategic Communications Centre of Excellence ran an experiment in which it bought fake engagement from three high-quality Russian social media manipulation service providers. For just 300 euros, roughly 26,000 rupees, it was able to buy 1,150 comments, 9,690 likes, 323,202 views and 3,726 shares across Facebook, Instagram, YouTube, Twitter and TikTok. That is not a huge amount of money for the engagement it produced.
The other problem is the sheer number of texts, images and videos posted online every day. Take Twitter as an example: an estimated 500 million (50 crore) tweets are posted in a single day. It is humanly impossible to filter and check such a huge volume of content. And since fake news and hate speech cannot be defined precisely, it is difficult to use computers to filter them. In the Myanmar incident, the language mostly used was Burmese. At the time, Burmese text online had not yet standardized on Unicode (much of it used the non-Unicode Zawgyi encoding), which made it extremely difficult for the engineers at Facebook to build a system that could distinguish innocent content from harmful content.
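To illustrate the encoding problem, here is a minimal sketch (the function name is invented for this example) of a codepoint check against the Unicode Myanmar block. Zawgyi-encoded text reuses many of the same codepoints with different meanings, so even text that passes this check may not be readable as standard Unicode Burmese, which is part of what made automated filtering so hard:

```python
# Sketch: detect whether a string contains characters from the
# Unicode Myanmar block (U+1000..U+109F). Zawgyi, a widespread
# non-Unicode Burmese encoding, reuses these same codepoints with
# different meanings, so this check alone cannot tell well-formed
# Unicode Burmese apart from Zawgyi text.
def uses_myanmar_block(text: str) -> bool:
    return any(0x1000 <= ord(ch) <= 0x109F for ch in text)

print(uses_myanmar_block("\u1019\u103C\u1014\u103A\u1019\u102C"))  # → True
print(uses_myanmar_block("hello"))  # → False
```

A real classifier would first need to detect the encoding and convert everything to proper Unicode before any language-level filtering could even begin.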
Social media companies are aware of the problem and are trying to tackle it, investing both resources and time to rid their platforms of it. Although no complete solution can be produced immediately, the companies are introducing policies to improve the situation.
· Facebook is working with more than 60 fact-checking organizations, which have been reviewing and rating content in over 50 languages. According to Facebook's blog, this helps it identify, take down and restrict fake news.
· Google announced in a blog post on 2 April 2020 that it would provide 6.5 million dollars in funding to fact-checkers and non-profits fighting misinformation around the world, with an immediate focus on the coronavirus.
· Instagram, in a blog post dated December 16, 2019, announced that it would allow fact-checking organizations around the world to assess and rate misinformation on its platform. It further added that it would reduce such content's visibility and label it clearly to inform people seeing the post.
· Twitter is actively working on identifying and removing bots from its platform. It suspended more than 70 million accounts in May and June of 2018 and has become more efficient since then at removing malicious accounts.
These steps have helped somewhat, but they are clearly not enough. There are further steps these companies could take to improve the situation.
First of all, the companies should take some, though not all, responsibility for the content shared on their platforms. This would give them an incentive to act more effectively. They could use the control they already have over their platforms to better manage the flow and reach of content.
Since this is a common problem faced by all social media companies, one major step they could take is to share their resources. Together they can invest more money, time, talent and experience, which would help them reach an effective solution faster. Pooling resources would also give them more data to work with, allowing them to train better machine learning models. This would eventually lead to stronger filtering algorithms that can cope with the volume of content being posted.
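As a rough illustration of the machine-learning angle, here is a minimal sketch of a text-filtering model trained on labelled posts. The texts, labels and scikit-learn pipeline are purely illustrative, not any platform's actual system; pooling data across companies would simply mean many more training examples of this kind:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = flag for fact-checking, 0 = benign.
# A real system would train on millions of labelled examples.
texts = [
    "miracle cure doctors don't want you to know",
    "shocking secret the government is hiding from you",
    "city council approves new park budget",
    "local team wins the championship final",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new post: a higher probability means it is more likely
# to be routed to human fact-checkers before it spreads widely.
prob = model.predict_proba(["secret miracle cure revealed"])[0][1]
print(f"probability of being flagged: {prob:.2f}")
```

With only four training examples the scores are meaningless, of course; the point is that shared, well-labelled data is what turns this toy into something usable at scale.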
It would also help to bring in experts for their opinions and suggestions; their expertise would lead to better decisions on complicated issues. The only way to tackle such a big issue is to choose which battles to fight. Focusing on the more immediate and larger threats would reduce the overall impact. For example, prioritising the curbing of anti-vaccination content over flat-earth content would prevent far more damage to society.
Fighting misinformation with information is an effective way to ensure that fake news is not powerful enough to sway people. Companies should promote trusted and verified news sources to keep their users informed, so that users always have a credible source of information to refer to.
The battle against the spread of fake news is a long one, and it will take time to gain control of the situation. It is not just the responsibility of these big companies to fight it. We, as ordinary people, should also keep ourselves informed and not be easily swayed by everything we read online. We should be responsible about what we read, believe and share. This war can be won only if we fight it together.