Merriam-Webster defines harassment as creating an unpleasant or hostile situation, especially through uninvited and unwelcome verbal or physical conduct. Simply put, it is behaviour that is persistently annoying. In this article, we’ll discuss verbal harassment online. We spend around 3.5 hours on our phones daily. Owing to the pandemic, the average Indian spent 4.3 hours a day on a smartphone after the first lockdown began in March 2020.
The internet as a whole is a service provided to us, the consumers. The very essence of being a consumer is the ‘power of participation’. Imagine if we consumers stopped bothering to purchase the things we need, the things we want and the things we’re convinced we would need – thanks to marketing!
As consumers, we are involved in the business activity of a firm. A firm is born when an idea is given tangibility.
Take a moment and look at the other open tabs and apps on your device as you read this. Every app serves a particular purpose: to satisfy the expectations you have as a member of the digital space. The demographic of users differs from one platform to another.
Every platform on the internet provides you with a sense of gratification – mainly of being informed, being seen and being heard. However, in the last few years, we’ve seen these platforms duplicating each other’s features.
Snapchat was the first social media service to have the ‘Snap story’, a feature which allows the user to post content with a 24-hour lifespan. This feature was subsequently duplicated by Instagram, Facebook and Twitter.
What’s that got to do with harassment?
It’s funny how competing social media brands, with such variation in their respective demographics, are chasing after what the others have to offer. What’s the Unique Selling Proposition (USP) then?
It is you. Social networking sites are all about finding and building your communities.
The platforms which cater to the largest number of people who think like you are the ones you’re most likely to spend your time on.
Yet, here lies the problem. Every person with an internet presence is entitled to their communities. When it comes to using these platforms, there’s no divide between you and the celebrity with 50 million followers. To the platform, you and the celebrity are equal consumers.
The impact of social media on consumer behaviour runs deeper than we know; we’ll try to prove it empirically and write about it soon. As I’m writing this article, a bunch of people belonging to the Reddit community r/Wallstreetbets have blown up the share price of an almost-bankrupt GameStop. Elon Musk, CEO of SpaceX, tweeted about Dogecoin and the cryptocurrency’s price jumped by a whopping 50%. When a bunch of Redditors can come together and shake all of Wall Street in a matter of hours, it is pretty clear that the power of the internet is unmatched. I mean, it’s just too difficult to keep up!
Also, to give you some Indian context: Rihanna just tweeted about the farmers’ protest and the rest is history. Countless people are verbally abused online every passing minute. It is almost as if freedom of speech comes at a confusing cost. Yes, I said confusing.
Harassment has been part and parcel of being online since the dawn of the internet itself.
First of all, it is necessary to distinguish between verbal harassment online and verbal harassment in the real world. In the digital realm, you don’t exist. It’s another you, created by you, who lacks the trauma, the empathy, the memories and the experiences. To be more precise, the “internet you” lacks the humanness your real self possesses.
The “real you” might physically react to a lewd comment, a racial slur or catcalling. On the internet, your abuser enjoys anonymous autonomy. The next step is to report the abuse.
In the real world, law enforcement systems function by relying on evidence. Evidence is the aftermath of abuse: a wound, a scar, damage to life or property. One can file a complaint and demand legal action against the abuser, provided there is sufficient evidence. Online, by contrast, there is ample documentation of the abuse – screenshots, the viewership of the people who visited the website or the post.
However, law enforcement has an inadequate online footprint. An FIR merely creates an avenue for further investigation of the incident. Platforms like Instagram are ambiguous in their privacy settings: you can either have a public profile, giving access to a whopping one billion users, or keep your profile private, allowing it to be viewed only by the set of people you approve of. There is no in-between.
Social media platforms boast about their community guidelines and their ardent efforts to safeguard the sanity of the internet. They use machine learning and AI to identify words which could be offensive or triggering, and pause the posting process. Ex: On Instagram, any content which includes the word ‘pandemic’ or ‘COVID-19’ triggers a Public Service Announcement (PSA) which pops up to educate the user about the chosen words. Twitter did this with hashtags long ago.
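The keyword-screening step described above can be sketched in a few lines. This is a minimal illustration of the idea, not any platform’s real pipeline; the term list and PSA messages are made-up placeholders.

```python
# Toy sketch of keyword-based content screening with a PSA prompt.
# The flagged terms and messages below are illustrative assumptions,
# not Instagram's actual configuration.

FLAGGED_TERMS = {
    "pandemic": "For verified COVID-19 information, visit official health sources.",
    "covid-19": "For verified COVID-19 information, visit official health sources.",
}

def screen_post(text: str) -> dict:
    """Check a draft post against the flagged-term list.

    Returns the post status and any PSA to show before publishing.
    """
    lowered = text.lower()
    for term, psa in FLAGGED_TERMS.items():
        if term in lowered:
            # Pause posting and surface the educational pop-up.
            return {"status": "paused", "psa": psa}
    return {"status": "ok", "psa": None}

result = screen_post("Staying home during the pandemic")
# the post is paused and the PSA pop-up is shown to the user
```

The real systems are far more elaborate, but the core mechanic – match a term, interrupt the post, show a PSA – is this simple.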
Anyone on the internet is prone to abuse: 8 out of 10 Indians have experienced cyberbullying. As of March 2020, cyberbullying of women and children had risen by 36%.
In Bihar, 4 out of 5 women haven’t even used the internet according to a National Family Health Survey (NFHS) report.
The spectrum of users is so vast that there is a need for a neutral reporting mechanism. When a user identifies abuse, they report it directly to the platform. The complaint is addressed 4-5 days later. In most cases, the offending account is blocked or even deleted. And in a matter of hours, the abuser creates another account. This is the perpetual pattern of reporting that social media houses provide to their users.
What happens if the insult or abuse is never identified by the software installed to restrict such content?
This is called “abuse in disguise”. Abusers will always prowl their way into your messages to attack you. It needn’t be with hate speech, cuss words or deepfake pornography. They’re conscious of the content that would alert the detection mechanisms on these platforms. Hence, they use context-based photos and texts that pass through the filter and continue to harass people online. Ex: A pro-choice activist is sent a photo of cracked eggs soaked in blood.
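A tiny self-contained demo shows why keyword filters miss abuse in disguise: the message is built entirely from harmless words, so a term-matching check waves it through. The banned-term list here is a placeholder.

```python
# Why "abuse in disguise" slips past keyword filters: the message
# contains no flagged term, yet is abusive in context.
# The banned-term list is an illustrative placeholder.

BANNED_TERMS = {"slur1", "slur2"}  # stand-ins for real flagged words

def passes_filter(text: str) -> bool:
    """Return True if no banned term appears in the text."""
    lowered = text.lower()
    return not any(term in lowered for term in BANNED_TERMS)

# Harmless words, harmful intent: a keyword check cannot see context.
print(passes_filter("Here's a photo of cracked eggs for you."))  # True
```

Context-aware moderation needs signals beyond the words themselves – who is sending, to whom, and in what history – which is exactly what term-matching cannot provide.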
Here’s an idea: what if there were a physical regulatory body in every local law enforcement unit which oversees the complaints filed by people in its locale?
An active IT Cell which can address these complaints faster and hold these abusers accountable?
Decentralising the process of grievance redressal will speed up the rate at which complaints are addressed. It means having personnel exclusively appointed to perform the following functions:
- Acknowledging the complaint and categorising it by the severity of the situation.
- Running through a database to check the frequency of complaints received against the same harasser/bully.
- Identifying the IP address, notifying the email address linked to the billing address, sending an automated memo or warning, then throttling internet speed as a penalty, and in the worst case, cutting off access. Further, a terminated connection cannot be re-registered at the same address for 2-3 months.
- Alerting the given platform about the abuse, so that the account is removed.
- Sending out a disclaimer from local law enforcement alerting people to strictly avoid any online interaction with the given username/ID.
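The triage workflow in the bullets above could be sketched as follows. The severity tiers, escalation threshold, and penalty actions are illustrative assumptions drawn from the list, not a worked-out legal procedure.

```python
# Sketch of the proposed complaint-triage workflow: categorize by
# severity, check repeat offences against a database, and pick an
# action. Tiers, thresholds, and actions are illustrative assumptions.

from collections import Counter

SEVERITY_ACTIONS = {
    "low": "automated warning memo",
    "medium": "throttle internet speed",
    "high": "cut off access; notify platform; issue public disclaimer",
}

# Stand-in for the database of complaints per harasser ID/IP.
repeat_offences = Counter()

def triage(complaint_id: str, harasser_id: str, severity: str) -> str:
    """Log the complaint and return the recommended penalty,
    escalating one tier for repeat offenders (3+ complaints)."""
    repeat_offences[harasser_id] += 1
    tiers = ["low", "medium", "high"]
    level = tiers.index(severity)
    if repeat_offences[harasser_id] > 2 and level < 2:
        level += 1  # repeat offenders move up a tier
    return SEVERITY_ACTIONS[tiers[level]]
```

A first-time low-severity complaint gets a warning memo; the same harasser’s third complaint escalates to a throttled connection.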
Millions of users report abuse to social media platforms, only to be ignored or denied help. This is because the detection mechanism is software, aided by AI. There has always been a lack of humanness at the receiving end of these complaints.
This can be done with APIs – Application Programming Interfaces. When the government intervenes, platforms can be assured of credibility and a sense of stakeholdership. Every district can have a “ROAR” – Regional Online Abuse Report.
- Every platform should be required to feed its “abuse reports”, and the actions taken to resolve them, into ROAR. This should be organised by geography, jurisdiction as per local law, etc.
- All platforms should allow users to export abuse reports automatically to ROAR via APIs designed to generate “logs” identifying both the abused and the abuser’s IP address details, encrypted in a format only internal ROAR systems can access. These logs should include the “verbal interaction records” – disclosed with the consent of one of the parties – along with a “timestamp” of when what happened.
- The API is merely the access point that social media companies have to give ROAR to legally pull that data out. For that, legal approval has to be obtainable online as well.
- Say I report that X abused me. I can take a screenshot and share it, but a screenshot can be doctored or photoshopped. So the actual proof has to come from this API pull from the platform.
- In this case, when I report, I send a request to the social media platform to report to ROAR. The platform bundles all the necessary evidence, encrypts the data exchanged between the two parties, and sends it to ROAR. ROAR can decrypt it with an adequate legal pass. Once the legal check clears online, the report is submitted.
- Another alternative is to go via ROAR: log in with your social media account, then have the Facebook API hand the bundle over to be uploaded to ROAR.
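The evidence bundle the steps above describe might look something like this. ROAR and all its field names are the article’s proposal, not an existing system; SHA-256 hashing stands in here for the public-key encryption a real deployment would use so that only internal ROAR systems can recover the IP details.

```python
# Sketch of the evidence bundle a platform's API might assemble for
# ROAR. All field names are assumptions based on the proposal above.
# SHA-256 hashing is a stand-in for real public-key encryption that
# only ROAR's internal systems could reverse.

import hashlib
import json
from datetime import datetime, timezone

def build_roar_bundle(reporter_ip: str, abuser_ip: str,
                      interaction_log: list) -> str:
    """Bundle evidence pulled directly from the platform - unlike a
    screenshot, this cannot be doctored by either party."""
    def protect(ip: str) -> str:
        # Placeholder for encryption only ROAR can undo.
        return hashlib.sha256(ip.encode()).hexdigest()

    bundle = {
        "reporter_ip": protect(reporter_ip),
        "abuser_ip": protect(abuser_ip),
        "interaction_log": interaction_log,   # shared with reporter's consent
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "consent": {"reporter": True},
    }
    return json.dumps(bundle, indent=2)
```

The point of the sketch is the shape of the data: identities are protected in transit, the interaction records carry explicit consent, and every event is timestamped so ROAR can act on verifiable evidence rather than screenshots.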
One of the major concerns was the lack of empathy at the receiving end of these abuse complaints. The ROAR system paves the way for a much more inclusive form of reporting. By breaking the function of surveillance down into smaller and more accessible groups, we can make the internet a much safer space, one ROAR at a time.
Your thoughts?