Is Meta acting in good faith in its attempts to curtail the spread of self-harm content to teenagers?
Rather than take Meta at its word, Danish researchers set up their own self-harm network on Instagram & ratcheted up the explicitness of the content to see at what point Meta would start taking the images down…
Guess what: all Meta's algorithm did was amplify & spread the posts.
So, can we trust Meta with children's wellbeing? Most certainly not!
#socialmedia
https://www.theguardian.com/technology/2024/nov/30/instagram-actively-helping-to-spread-of-self-harm-among-teenagers-study-suggests
@ChrisMayLA6 This, but could we please do away with the framing of "harm to teenagers"? It implies that treating teenagers in discriminatory ways, forcing proof of age and identity, etc., are valid solutions, when the only real solution is to REMOVE THE VECTOR OF HARM, regardless of whether the user harmed is a young child, a teen, or an adult.
Yes, fair comment; the harms are not limited by age (or any other social designation).