The Social Media Crackdown Is Coming


The Australian government scolded representatives from the major tech companies at a meeting in Queensland on Tuesday, calling on giants like Facebook and Twitter to convince regulators that they can monitor and crack down on violent content livestreamed on their platforms.

Reports circulated earlier this morning that the government has begun drafting legislation that could see criminal penalties levelled against companies like Facebook – with Google, Twitter and other livestreaming platforms such as Twitch also affected – if they fail to remove violent content like the footage aired from Christchurch.

The ABC reported that individual executives of social media companies could also be prosecuted, should the platform holder fail to act immediately upon being notified that violent terrorist footage was hosted on their platform. The ABC understands the laws are being modelled on existing regulation that forces social media companies to pull child exploitation posts, images or video, and to notify police after finding such material.

On Tuesday, executives from the major tech companies met with the Australian government in Queensland at a meeting convened by Prime Minister Scott Morrison, according to the AFR. Representatives from Telstra, Optus, Vodafone, TPG, Facebook, Twitter and Google reportedly attended the meeting.

In the meeting, Federal Home Affairs minister Peter Dutton asked the tech companies how it was possible the Christchurch livestream could have remained on their network without them being alerted to the content, according to Nine.

Following the meeting, Prime Minister Scott Morrison told a press conference on Tuesday afternoon: “If you can write an algorithm to make sure that the ads they want you to see can appear on your mobile phone, then I’m quite confident they can write an algorithm to screen out hate content on these social platforms.”

While the technical angle from local politicians is unique, the general rhetoric is aligned with what most regulators have been saying over the past week. New Zealand, which classified the terrorist’s manifesto as “objectionable” content – making it illegal to possess or distribute – has led the strongest attack against the tech giants.

“We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published,” New Zealand Prime Minister Jacinda Ardern said last week.

New Zealand has already arrested an 18-year-old for sharing a video of the attack, with a second person charged for posting a photo of one of the mosques attacked featuring the words “target acquired”.

Facebook, where the livestream was hosted, said that fewer than 200 people saw the original livestream, with the company claiming that none of the viewers reported the material during the live broadcast. One journalist pointed out otherwise, although he later noted that his report didn’t go through Facebook’s systems. And part of the problem there is that Facebook’s reporting tools let users raise flags over content featuring suicide, but not murder – something Facebook admitted was a flaw in its systems.

But as Facebook noted in a statement, relying on AI or machine learning to detect this kind of content in real time isn’t as easy as it sounds. Because machine learning models are trained against reference images and video, the only effective way to improve the model is to feed it more examples of content like the massacre so it can learn to tell the difference:

This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems. However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare.

Facebook added that false positives are a likely problem. If the settings are dialled up too high, Facebook’s systems could end up accidentally flagging livestreams of popular first-person shooter games like Counter-Strike or Apex Legends, wasting moderators’ time and slowing down the process of alerting authorities.
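As a rough sketch of that trade-off – a toy example only, with invented “violence scores” and stream names rather than anything resembling Facebook’s actual classifiers – a single detection threshold behaves like this:

```python
# Toy illustration of the false-positive trade-off described above.
# The scores below are invented; real moderation systems are far more complex.

def flag_streams(scores, threshold):
    """Return the indices of streams whose score meets the threshold."""
    return [i for i, score in enumerate(scores) if score >= threshold]

# Hypothetical classifier scores: real violence scores high, but
# first-person shooter footage (Counter-Strike, Apex Legends) can
# score almost as high, which is exactly Facebook's stated concern.
streams = ["real_attack", "counterstrike_stream", "apex_stream", "cooking_show"]
scores  = [0.92,          0.88,                   0.85,          0.10]

# Dialled "too high" (a permissive threshold): catches the attack, but
# also floods moderators with both game streams as false positives.
print([streams[i] for i in flag_streams(scores, 0.80)])
# -> ['real_attack', 'counterstrike_stream', 'apex_stream']

# A stricter threshold avoids the games, but would miss the attack
# entirely if its score dipped below 0.90.
print([streams[i] for i in flag_streams(scores, 0.90)])
# -> ['real_attack']
```

Real systems replace the single scalar threshold with per-category models and human review queues, but the tuning dilemma is the same: fewer missed attacks means more wasted moderator time, and vice versa.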

Still, it’s the numbers that are going to be the real problem. The social media giant said it removed 1.5 million videos featuring the attack, with 1.2 million of those blocked at the point of upload.

That means around 300,000 videos got through Facebook’s distribution platform, and it’s that inability to detect some objectionable content – but not all of it – that regulators are using to push the tech giants into a corner.

“There’s nothing within your algorithms or network that advises you there is a 17 minute video livestream of people being murdered on your platform … you’re still not aware at the 29 minute mark,” Home Affairs Minister Peter Dutton reportedly said during Tuesday’s meeting.

And that’s the complexity all social media giants face with livestreaming in general. The point of livestreaming is to give users an unfettered broadcast channel to the internet. Platform holders see themselves as the postman, not the publisher, with the Communications Decency Act in the United States (where these companies are based) offering them legal cover for what is aired on their platforms.

The industry, at least, is aware of what it’s facing. Microsoft president Brad Smith has argued that tech companies aren’t doing enough as a whole, suggesting a more collaborative approach between companies to help respond more quickly when attacks like those in Christchurch happen, and proposing that safe search-style functionality be built into web browsers to block the viewing of such content at the point of search.

But their actions haven’t been good enough for governments so far, and it’s hard to see the industry unifying fast enough to prevent the imminent crackdown. Livestreaming is well and truly in the government spotlight, and while politicians are waiting for the companies to come back with an equitable solution, it’s going to be hard to convince them to put the bats down – especially this close to a federal election.


  • When are folks gonna realise that they are actually really effective at both spreading and stopping information? You just assume they are doing a bad job…

    You don’t spend billions learning how to manipulate folks and then suck at it.

  • “If you can write an algorithm to make sure that the ads they want you to see can appear on your mobile phone, then I’m quite confident they can write an algorithm to screen out hate content on these social platforms,” There we go! There’s the “Just write an algorithm to stop the bad people doing something while letting the good people still do it” suggestion politicians love.

    Honestly the government should be embracing social media and live streaming, it’s basically a ready made global surveillance system. I’m honestly a little worried about where this is going to end up though because we could end up being like China if it goes in the wrong direction.

    • What I want to know is how this algorithm will differentiate between the hate content they’re talking about, and bacon?

      Or how it works when keywords aren’t used. It’s surprisingly easy to be offensive without using trigger words.

  • This will fail, the government will look stupid, and nothing will be accomplished. This is a shitty knee jerk reaction which they probably know won’t work, but makes it look like they’re being tough and doing something.

    • Given how badly their previous attempts have gone (Conroy’s attempt, the fact you can bypass current pirate site blocks with a DNS change), I think their attempts will amount to nothing.

      • I worry that sooner or later they’ll accidentally achieve what they’re trying to do. Infinite monkeys and all that.

        As a side issue, the block efforts so far can easily be seen as a step in that direction for them. There ARE ways to seriously block them beyond DNS blocks; it only needs the courts to force them to do it. Which basically means going back to the start, and through the courts all over again, talking about cost (thousands per site, not dollars), etc etc. But that’s administration, and not necessarily a permanent deterrent.

        The RSPs know it too, which is why I think they bent over and did the mickey mouse block. It set a precedent of the content owners paying for it – now, it’s a bait and switch solution… The REAL block costs each of them hundreds of thousands of dollars, not $50, and it’s hard for them to argue they aren’t willing to pay for it.

        I applauded the RSPs for how they pulled it off myself. They flipped all the important costs back onto the content owners, while obeying the court instructions, AND making the content owners set their own precedent that will work against what they wanted.

        • I always kinda thought their half-assed attempt at blocking was a sort of a balanced approach because they know high usage plans are probably indulging in a bit of torrenting – “Okay, it’s blocked. What, it can be bypassed? Cheeky buggers, oh well, we tried.”

          • The moment that convinced me was when they revealed it would cost $50 per site to block. At that moment the content owners stopped bickering over who was going to pay for it, and happily agreed to do so. The RSPs knew 100% that the $50 option was just a DNS block, and was trivial to bypass, but it set that precedent that THEY didn’t have to pay for it.

            Add in how many were fundamentally against it in the first place, and you got that added benefit of “oh well, we tried” while showing they had blocked it. It took effort to get around that block, and while it was only a few seconds of effort, still effort. So not their problem. If people go out of their way to circumvent the blocks, what can they do? It was brilliant entrapment.

            Before that, the cost argument was that it would be around $1000 per site. One or two of the RSPs were maintaining that when the $50 claim was made, but that changed very quickly. So you now have a situation where hundreds or thousands of sites were ‘blocked’, for $50 a site.

            The cost to the content owners was in the thousands, a number they were happy to pay. But if you REALLY want to block it with packet sniffing, etc, suddenly it’s in the hundreds of thousands or even millions. That will be a fun discussion.

    Let me post on 4chan already, stupid australia. I NEED to shitpost!
    I love how they blocked 4chan/8chan but didn’t do anything to Facebook, the place where the shooting was originally livestreamed, totally not retarded at all.

    • By their own admission the video (and permutations of it) had been removed 1.5 million times from Facebook, while it’s impossible for 4chan to have come even close to those numbers.
      It’s some real shoddy distribution of work.

    • A direct link to the video on Facebook’s CDN was still accessible for hours after the shooting. This was one of the first links shared on 4chan within 20 minutes of news getting out.

      Morrison’s suggestion is absurd, but it’s also absurd that this video was available for hours, when it should have been one of the first taken down.

      My guess is they ‘removed’ the videos from timelines, etc, but didn’t remove them from their CDN.

  • If Facebook just said it’s too expensive to follow the new legislation and pulled out of Australia how quickly would the pro censorship boomers backflip?

    • Given how aggressively they’re all trying to get into China, I don’t see this ever happening. It’s been a long time since these tech companies took a principled stand, and we should expect them to never do it again.

  • Ah yes, such an easy thing to fix.

    * open fb app, hit live
    * somehow stop someone from hitting live and streaming their violent crimes automagically

    Solution: get rid of FB live.

  • The problem is the government – and I do mean the Liberals – are complicit in spreading false information and fake news, directly through press statements and divisive hate speech, and through online social media platforms. There have been numerous cases of Liberal MPs sharing memes and fake stories from fake news organisations set up by lobbyists. It suits them to keep us divided and misinformed. Pot, kettle, black.

  • Sentencing people to 14 years in jail for sharing videos and pictures is probably one of those precursor indicators for the complete utter collapse of society. Get those mad max outfits ready!

  • Do it. Do it. Do it. Go on… do it. Do it. I dare you. I double-dare you. Do it. Be the government responsible for getting Australia banned from Facebook and YouTube. Do it. Do it. Do it now. Do it like right now. Do it.
