eSafety Commissioner Julie Inman Grant has stated that platforms are “doing shockingly little” to detect child abuse material.
On 18 November last year, eSafety received draft industry codes from associations including the Digital Industry Group, which represents platforms such as Meta, Twitter and Google.
Then in December, eSafety released a damning report [pdf] on platforms’ technical limitations in detecting and responding to child abuse content.
“Some of the largest cloud-hosted [services], like iCloud and OneDrive, were not scanning for child sexual abuse imagery,” Inman Grant told the committee.
“And so it really suggests to us when you think about all of the devices and handsets that are out there, and all the potential storage, that we don't even know the scale and the scope of child sexual abuse [material] that's existing on these mainstream services.
“The major companies that do have access to advanced technology — AI, video matching technologies, imaging clusters and other technologies — should be putting investment into these tools to make them more efficacious,” she said.
Last Monday, the Commissioner asked the associations to resubmit their draft codes covering class 1A and 1B “harmful content,” and to address “areas of concern.”
The content moderation watchdog has “a strong expectation that industry commit, through the codes, [to] a strong stance in relation to detection of that kind of material proactively,” eSafety acting chief operating officer Toby Dagg told Senate Estimates last week.