Safeguarding children’s rights online is a priority. A recent investigation by The New York Times found that tech companies report more than 45 million images of sexually exploited children annually.
In 2018, there were more than 18.4 million reports of online child abuse, a 6,000-fold increase in only 20 years. In the last decade alone, the number of files containing CSAM (child sexual abuse material) has grown almost a hundredfold.
The Internet Watch Foundation reports that 39% of CSAM features children under the age of 10 and 43% includes acts of extreme sexual violence.
Online communities and chat rooms enable 99% of CSAM to go undetected. The growth of closed online communities where abusers and molesters can gather, communicate, and share images and videos presents a significant challenge for law enforcement and relevant NGOs. And the continuing growth of the dark web and the use of virtual currencies is another hurdle that must be overcome.
Some of those figures may seem shocking, but what is even more disturbing is that only 1% of children trafficked for sex are rescued.
Are Tech Companies Doing Enough?
Law enforcement’s resources are at a breaking point due to the sheer volume of data that must be examined to monitor and shield users from tech-savvy online predators. A recent paper by the NCMEC (National Center for Missing and Exploited Children) highlighted the problems law enforcement is experiencing.
Legislators have been slow to pass laws that hold tech companies accountable and compel them to take positive action.
While most traditional media platforms face stringent regulations and punishments for the content they publish or broadcast, far fewer rules govern online content.
• Facebook: Reported nearly 12 million CSAM cases in 2018 through its content moderators, yet its proposals for stronger encryption on Messenger could help abusers avoid detection.
• Apple: Does not scan material kept in its cloud storage, and its encrypted messaging app also helps predators avoid detection.
• Amazon: Does not scan or check images in its cloud storage.
• Microsoft/Google/Dropbox: Scan for CSAM images and videos only when they are shared, never at upload.
Can Artificial Intelligence Help?
The United Nations Interregional Crime and Justice Research Institute is helping develop AI tools that identify online grooming and abuse and locate abusive images and videos. There are several ways AI can do this.
• Computer vision: These tools scan online and stored images for factors such as the subject’s age and whether an image contains nudity. They can also analyze body motion in video to identify explicit content, and they can apply facial recognition to spot known abusers or trafficked children and retain that data for later investigations. Matching uploads against previously identified images is commonly done with perceptual hashing; a minimal sketch follows this list.
• Predictive AI: Predictive models can be built to support complex, global investigations, including monitoring, collecting, and analyzing data from known sex sites to identify potential abusers and victims.
• Language processing and text analysis: AI programs that analyze text can identify suspicious and sexually abusive conversations (a simple classifier sketch also appears after this list). They can also mimic the language used by groomers and abusers so that law enforcement can engage them in conversation.
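To make the computer-vision point concrete: one widely used building block for finding known abusive images is perceptual hashing, where an image is reduced to a short fingerprint that survives resizing and re-encoding and is then compared against a vetted database of fingerprints of previously identified material. This is the general idea behind industry tools such as Microsoft’s PhotoDNA, though the production algorithms are far more robust. The Python sketch below is a minimal illustration only; the average-hash scheme, the Hamming-distance threshold, and the known_hashes database are all simplified assumptions, not any vendor’s actual implementation.

```python
from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit fingerprint

def average_hash(path: str) -> int:
    """Shrink to an 8x8 grayscale grid, then set one bit per pixel
    brighter than the mean. Small edits (resizing, re-encoding)
    barely change the bits, so near-duplicates hash close together."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known(candidate: int, known_hashes: set, threshold: int = 5) -> bool:
    """Flag an image whose fingerprint is within `threshold` bits of
    any entry in a vetted database of known material. Both the
    database and the threshold are hypothetical placeholders here."""
    return any(hamming(candidate, h) <= threshold for h in known_hashes)
```

Because the comparison works on fingerprints rather than raw images, a platform can check uploads without storing or viewing the material itself, which is part of why hash matching scales to the volumes described above.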
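On the text side, a common baseline for the language-analysis approach is a supervised classifier trained on conversations that human moderators have already labeled. The scikit-learn sketch below is a minimal, hypothetical illustration: the stand-in strings, the labels, and the idea of ranking messages by a risk score are assumptions for demonstration, not a description of any deployed system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus: in practice these would be real conversations
# labeled by trained moderators (1 = flagged as grooming-style
# language, 0 = benign). Neutral stand-ins are used here.
messages = [
    "stand-in for a conversation moderators flagged",
    "stand-in for another flagged conversation",
    "stand-in for an everyday benign chat message",
    "stand-in for another benign chat message",
]
labels = [1, 1, 0, 0]

# TF-IDF over word unigrams and bigrams feeding a logistic
# regression: a deliberately simple pipeline that outputs a
# probability rather than a hard verdict.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(messages, labels)

# Score new, unlabeled messages and surface the riskiest for human
# review; the model prioritizes cases, people make the decisions.
incoming = ["stand-in for a new, unseen message"]
risk_scores = model.predict_proba(incoming)[:, 1]
print(risk_scores)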
The Takeaway
Combating CSAM is expensive, an operational nightmare, and a PR headache, but additional legislation regulating technology companies could help win the online war against the sexual exploitation of minors. Combining AI with human intelligence can help protect vulnerable children who might otherwise become victims of nefarious online behavior by sexual predators.
If you suspect a family member has been affected by sexual abuse, we may be able to help. Contact us for a confidential discussion.