YouTube and TikTok Algorithms “Brazenly” Suggest Harmful Content to Kids, Study Finds
According to a recent analysis by the non-profit group Fairplay, social media platforms TikTok, YouTube, and Instagram “brazenly” recommend videos that promote harmful and even fatal behavior to children.
The study found that when a young user searches for a video involving a hazardous activity, all three platforms surface videos promoting unsafe behavior.
The study’s researchers created profiles posing as a 14-year-old boy on all three platforms, then searched for videos of “car surfing” and “train surfing,” dangerous stunts in which participants ride or stand atop a moving vehicle or cling to its sides.
On TikTok, 80% of the results for “car surfing” were videos of people surfing on top of cars, and 60% of the results for “train surfing” were videos of people surfing on top of trains.
On YouTube, 60% of the results for “car surfing” were videos related to that stunt, and train surfing videos made up 90% of the results for that search.
On Instagram, the figures were 28% for car surfing and 84% for train surfing.
“Algorithms across various platforms openly propose swarms of videos that celebrate dangerous conduct,” Fairplay said. Each such recommendation violates all three services’ own codes of conduct, which state that they will not promote content that promotes violence. Worse still, many young people have been injured or even killed attempting these stunts.
The report cited Ms. Bogard, whose son was 15 when he died in 2019 after attempting the so-called “choking game,” as a parent concerned about the prevalence of this content.
“No kid should ever be harmed because an algorithm directed them to risky and damaging videos,” Bogard said.
To better safeguard children online, Fairplay is advocating for passage of the Kids Online Safety Act (S. 3663). According to Fairplay, the bill would obligate platforms to “act in children’s best interest,” “mitigate against harms arising from the promotion of self-harm and other matters that pose a physical threat to a minor,” and “make dangerous challenges easier to avoid by allowing minors to opt out of algorithms that recommend them.”
Fairplay argued that regulation was necessary to guarantee that platforms effectively protected young people who used their services.