Digital Innovation & Entrepreneurship

RESPONSIBLE AI AND SOCIAL MEDIA

Can AI protect children online before it’s too late?

by Shweta Singh

“Every day, millions of parents fear the bleak impact of social media upon their kids”

It is nearly six years since 14-year-old Molly Russell took her own life after being exposed to graphic online material about self-harm. Her father criticised the response from social media firms as “underwhelming”. He also warned that he had no confidence that planned legislation would protect other children from the kind of toxic content his daughter experienced, raising the risk of similar tragedies in future.

If there is currently no clear regulatory answer, perhaps technology can offer a solution. We are seeking to build a tool that can keep children safer and prevent online harm. This technology is driven by Responsible AI that can oversee and understand the subtleties of both language and images in a range of settings. Essentially, it is an AI that can understand context. It could provide an extra layer of insight to shine a spotlight on potentially damaging communications and flag when children are at risk.

Content without context doesn’t mean much. Tech giants know this. Yet they are still using systems that flag words and images without understanding who is saying what, and to whom. Knowing what goes through the mind of an individual who is feeling suicidal, for instance, is crucial.
If technology could make sense of behaviour patterns – what a person is searching for online, who they are talking to and how often, what they are watching, what advice they are not seeking (for example, ignoring suicide helpline links despite numerous recommendations) – it would provide an extra layer of oversight. These are the tools we are building; a sketch of the idea appears below.

Harmful or inappropriate content on social media is already weeded out by AI-informed algorithms. But, as we have seen, this level of moderation can be a blunt tool if it is wielded without the necessary insight. One such system flagged a concerned father as a potential abuser after he sent an image of his son’s genitals to his doctor, yet failed to sound the alarm when another man searched 20 times in three days for information on how to kill someone and dispose of a body, before murdering his wife.
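The tool itself is still at the design stage, so the following is only a minimal, hypothetical sketch of how behavioural signals might be combined into a single risk flag. Every signal name, weight, and threshold here is an illustrative assumption, not a description of the system we are building.

```python
# Hypothetical sketch: aggregating behavioural signals into a risk flag.
# The signals, weights, and threshold are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class BehaviourWindow:
    """Activity observed over a recent time window (e.g. 72 hours)."""
    self_harm_searches: int        # searches matching a self-harm lexicon
    helpline_prompts_shown: int    # suicide-helpline links recommended
    helpline_prompts_clicked: int  # how many of those were followed
    late_night_sessions: int       # sessions between midnight and 5am

def risk_score(w: BehaviourWindow) -> float:
    """Weighted combination of signals, scaled to the range [0, 1]."""
    # Advice offered but never taken is the pattern highlighted above:
    # helpline links repeatedly recommended yet ignored.
    ignored = max(w.helpline_prompts_shown - w.helpline_prompts_clicked, 0)
    return (0.5 * min(w.self_harm_searches / 10, 1.0)
            + 0.3 * min(ignored / 5, 1.0)
            + 0.2 * min(w.late_night_sessions / 7, 1.0))

def should_alert(w: BehaviourWindow, threshold: float = 0.6) -> bool:
    """Flag the account for human review when the score crosses a threshold."""
    return risk_score(w) >= threshold

window = BehaviourWindow(self_harm_searches=12, helpline_prompts_shown=6,
                         helpline_prompts_clicked=0, late_night_sessions=5)
print(round(risk_score(window), 2), should_alert(window))  # ~0.94, True
```

The point of the sketch is the design choice, not the numbers: no single signal triggers an alert on its own, but several weak signals occurring together – repeated searches, ignored helplines, disturbed sleep patterns – add up to a flag a human moderator can act on.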
Without technology, it is impossible to oversee the vast traffic of images and words exchanged on social media every day. After all, 500 hours of content are uploaded to YouTube every minute. Human moderators cannot keep up with what is often grim and distressing work.

Our proposed Responsible AI could provide a helping hand, and could be ready as a prototype in a matter of months, offering the first step towards keeping children safer online. We are now sifting through vast quantities of language and images to compile ‘dictionaries’ of insights around each of the most harmful areas that threaten children and young people. These include hate speech, cyberbullying, suicide, eating disorders, child violence, and child sexual abuse. We are infusing Google’s natural language processing model BERT (Bidirectional Encoder Representations from Transformers) with an additional layer of knowledge, allowing the technology to understand language more as humans do.
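To make the BERT step concrete, here is a minimal sketch, assuming the Hugging Face transformers library and PyTorch, of how a pre-trained BERT model can be given a multi-label classification head over harm categories like those listed above. The label set and the harm_scores helper are illustrative; this shows the standard fine-tuning pattern, not our actual architecture, and the additional ‘layer of knowledge’ we describe is not shown.

```python
# A minimal sketch (not our actual system) of multi-label harmful-content
# classification with BERT. Assumes: pip install torch transformers.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Illustrative label set mirroring the harm categories named in the article.
HARM_LABELS = [
    "hate_speech", "cyberbullying", "suicide_self_harm",
    "eating_disorders", "child_violence", "child_sexual_abuse",
]

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(HARM_LABELS),
    problem_type="multi_label_classification",  # sigmoid per label, not softmax
)
model.eval()

def harm_scores(text: str) -> dict[str, float]:
    """Hypothetical helper: per-category risk scores for one message."""
    inputs = tokenizer(text, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)
    return {label: round(float(p), 3) for label, p in zip(HARM_LABELS, probs)}

# An untrained head gives arbitrary scores; the model would first be
# fine-tuned on labelled examples drawn from the curated 'dictionaries'.
print(harm_scores("You should just disappear, nobody would miss you"))
```

Because each label gets its own sigmoid rather than competing in a softmax, a single message can score highly for several harms at once – cyberbullying and suicide-related content often co-occur – which is closer to how a human moderator reads it.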