Facebook is planning to ban all graphic images of self-harm from its platforms as it tightens its policies on suicide-related content, amid growing criticism of how social media companies moderate violent and potentially dangerous material.
On Tuesday, the company said it will make self-harm content harder to find on Instagram and will ensure it is not recommended in the photo-sharing app's Explore section.
Facebook, which issued the statement on World Suicide Prevention Day, said self-harm content will no longer be flagged as abusive, in an effort to reduce the stigma around suicide.
According to the World Health Organization (WHO), nearly 800,000 people die by suicide every year, or one person every 40 seconds.
Facebook currently has a team of moderators who screen for content such as live streams of violent acts and suicides.
In February, Reuters reported that the company works with at least five outsourcing vendors in at least eight countries on content review.
Governments around the world are grappling with how to regulate content on social media platforms, which are often blamed for facilitating abuse, spreading online pornography, and influencing or manipulating voters.
Amazon told Reuters last month that it plans to offer helpline phone numbers to customers who query its site about suicide, after searches on the site suggested users were looking for nooses and other potentially harmful products.
Facebook, Twitter and Alphabet's Google already display helpline numbers in response to user queries involving the term "suicide."