
Facebook, YouTube, and Twitter have agreed on the first steps to restrain harmful content online, big advertisers announced on Wednesday, following a boycott of the social media platforms, which they had accused of tolerating hate speech. Under the deal, announced by the World Federation of Advertisers, common definitions would be adopted for forms of harmful content such as hate speech and bullying, and the platforms would adopt harmonized reporting standards. The deal comes less than six weeks before a polarising US presidential election.
The social media platforms have agreed to have some of their practices reviewed by external auditors and to give advertisers more control over what content appears alongside their advertisements. “This is an important milestone in the journey to rebuild trust online,” said Luis Di Como, executive vice president of global media at Unilever, one of the world’s biggest advertisers. “Whilst change doesn’t happen overnight, today marks an essential step in the right direction.”
Three months ago, major advertisers boycotted Facebook in the wake of anti-racism demonstrations that followed the death of George Floyd, a Black American man, in police custody in Minneapolis. Advertisers have complained for years that the big social media companies do too little to prevent advertisements from appearing alongside hate speech, fake news, and other harmful content. The big technology companies have since started taking steps to fend off calls for more rules and regulation.
Carolyn Everson, Facebook’s vice president for global marketing solutions, said the agreement “has aligned the industry on the brand safety floor and suitability framework, giving us all unified language to move forward on the fight against online hate”.
Campaigners argue that these commitments must be followed in a timely and complete manner to ensure they are not the kind of empty promises they say they have seen too often from Facebook, and that they will continue to press Facebook and the other platforms to make meaningful changes in the weeks and months ahead.
Skeptical
Campaigners who want more regulation of social media companies have been sceptical of voluntary measures like those announced on Wednesday.
“Any progress in lessening harmful online content is to be welcomed. However, up to now voluntary action from social media companies has rarely lived up to its initial promises. Time will tell how much of a difference this latest industry-led initiative will make,” David Babbs of the UK-based group Clean Up the Internet told Reuters by email.
The Stop Hate for Profit campaign behind the Facebook boycott is backed by the Anti-Defamation League and the NAACP, two of the oldest and biggest anti-racism campaign groups in the US. The campaign did not immediately respond to an email seeking comment.
In an earlier statement, it said: “Facebook’s failures lead to real-life violence and sow division, and we are calling on the company to improve its policies. We need to urge people to vote and demand that Facebook stop undermining our democracy. Enough is enough.”
The reason behind taking this step
The recent violence in Bengaluru, triggered by a message circulated on a social media platform, has once again underlined the need to strengthen the regulatory system. The internet serves as a space for public communication and opinion formation, which is why the neutrality of intermediaries matters; yet it is a characteristic of search engines and social networks that they filter, personalise, and present content to users based on the personal data they have collected about them. To support the overarching business model, algorithms are programmed to ensure that users spend as much time as possible on the respective platforms. In recent times there have been numerous cases of hate speech, fake videos, and disinformation being posted on social media platforms and resulting in communal violence, and whenever such incidents have taken place there has been a noticeable increase in the harmful messages being circulated.
Circulation of harmful content on social media platforms calls for a holistic policy to strengthen the regulatory system
Several factors are driving nations the world over to toughen their procedures and surveillance systems. First, new techniques and methodologies are being used to harm society and the economy and to sow doubt about the system of government, often resulting in unexpected violence of high intensity. Second, not all intermediaries may be neutral, and they can exacerbate the harmful impact simply by permitting rapid circulation, possibly to targeted persons.
A look at other countries that have introduced, or are introducing, new systems for regulating internet intermediaries such as social media platforms is useful for formulating an effective system. In 2017, Germany passed the Network Enforcement Act, which requires social media platforms with more than 2 million registered users in Germany to keep processes in place to remove harmful and illegal content within 24 hours. Enforcement lies with the 14 competent state media authorities.
The state media authorities have far-reaching information and investigative powers over media intermediaries, which can be punished with fines of up to EUR 500,000. In Australia, the Criminal Code Amendment Act 2019 imposes obligations on intermediaries to inform the authorities of abhorrent violent material being circulated through their services and to remove such content expeditiously.
The act goes beyond social media platforms and covers any internet site that enables users to communicate with other users, as well as any electronic service through which users can interact with others.
