Back in February 2019, an employee at Facebook (yes, that is what the company was still called back then) ran a test to understand what a new user in India would see if all they did was follow pages and groups recommended by Facebook’s algorithms. What the test revealed was horrifying: “a near constant barrage of polarising nationalist content, misinformation, and violence and gore,” according to the tester. This included Islamophobic posts, an image of a man holding a severed head wrapped in a Pakistani flag, unverified images of Indian retaliatory strikes in Pakistan, and pictures of battered bodies. India joins Brazil and the US in an exclusive club, “Tier zero,” for hate speech and disinformation on Facebook. For these countries, the company had set up pre-election “war rooms” and dashboards to alert officials to emerging problems. Yet, despite this elite status, India has been left out in the cold, not just by Facebook but by its critics as well.
Thanks to the revelations tumbling out with the Facebook Papers, we now learn that despite the elevated risk status of countries such as India and Brazil, and of Indonesia and Iran, which bagged spots in the next tier of monitoring priority, 87 per cent of Facebook’s global budget for identifying misinformation was dedicated solely to the US, leaving a measly 13 per cent for everywhere else. Notably, 52.5 per cent of Facebook’s revenues in the first quarter of 2021 came from outside the US and Canada. To make things worse, thanks to the lack of Hindi and Bengali classifiers, much of the problematic content out of India never got flagged. This is a head-scratcher, given that much of Facebook’s content moderation is outsourced to, drumroll please, cheap labour from India.