A Government Crackdown Could Make Life Harder For New Streamers


In the wake of the Christchurch terrorist attack, the Australian federal government has continued to push for more restrictions around social media, and livestreaming platforms in particular. We're now getting a clearer idea of what those restrictions might look like, with a government taskforce outlining a series of recommendations around the creation of new accounts and limits on audience growth for new streamers.

The document, "Report of the Australian Taskforce to Combat Terrorist And Extreme Violent Material Online", was published on the Department of the Prime Minister and Cabinet's website over the weekend. Unlike the ACCC's investigation into Facebook, Google and other digital platforms, the taskforce's report focuses specifically on methods for regulating social media and livestreaming to prevent the spread of content like the footage streamed during the Christchurch attacks.

The report comes after a joint industry and government summit at the end of March, which included representatives from Google, Facebook and other major social media platforms, ISPs, departmental heads, and various ministers including Prime Minister Scott Morrison, Peter Dutton and then-Minister for Communications Mitch Fifield.

"It identifies actions and recommendations that fall into one of five streams: prevention; detection and removal; transparency; deterrence; and capacity building," the executive summary reads.

The first major proactive measure recommended is the expansion of a hash database and a URL database, the former designed as a cross-company database that allows "member companies to systematically add newly identified terrorist content" and enables faster identification and blocking of such material.
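
The report doesn't spell out the mechanics, but the basic idea of a shared hash database is straightforward: each member company fingerprints material it has identified, and every company checks new uploads against the pooled fingerprints. Here's a minimal Python sketch of that flow; the names are illustrative, and real systems tend to use perceptual hashes that survive re-encoding rather than the exact cryptographic hash used here:

    import hashlib

    # Hypothetical in-memory stand-in for the cross-company database;
    # in practice this would be a shared service populated by all
    # member companies.
    SHARED_HASH_DB = set()

    def fingerprint(path):
        """Return the SHA-256 hex digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def add_identified_content(path):
        """A member company adds newly identified material to the pool."""
        SHARED_HASH_DB.add(fingerprint(path))

    def should_block_upload(path):
        """Check a new upload against the pooled fingerprints."""
        return fingerprint(path) in SHARED_HASH_DB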

But the recommendations that would impact most users, and livestreaming in particular, revolve around restrictions on "the ability of new users to livestream until they have established a pattern of behaviour" that is deemed acceptable.

The section on livestreaming controls recommends:

  • Strengthening account validation processes, particularly for account creation
  • 'Cooling off periods' before a new user can livestream (such as 24 hours)
  • Limiting audience size or the capacity to monetise livestreamed content for new users
  • Implementing streamer ratings or scores, and/or
  • Monitoring account activity

The most impactful change would be the limits on audience size and monetisation, although they would only affect new users. The idea isn't to target livestreaming platforms per se, but to combat malicious actors trying to spread content quickly through the creation of new accounts. The taskforce also called for platforms like Twitch, YouTube et al. to restrict automated account creation, saying that "persistent bad actors may find a way to circumvent" suspensions and bans.
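
The report stops at the level of recommendations, so any enforcement logic would be up to the platforms. Still, the gating it describes is easy to picture; here's a minimal Python sketch in which every threshold, field and function name is a hypothetical illustration, not anything the taskforce actually specified:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # All thresholds are hypothetical: the report floats a 24-hour
    # cooling-off period but leaves concrete limits to the platforms.
    COOLING_OFF = timedelta(hours=24)
    NEW_USER_VIEWER_CAP = 50    # illustrative audience cap for new users
    TRUSTED_HOURS = 10.0        # illustrative "pattern of behaviour" bar

    @dataclass
    class Account:
        created_at: datetime
        validated: bool        # passed strengthened account validation
        streamed_hours: float  # history of acceptable streaming
        trust_score: float     # streamer rating/score, 0.0 to 1.0

    def can_livestream(acc, now):
        """New accounts can't stream until validated and past cooling-off."""
        return acc.validated and (now - acc.created_at) >= COOLING_OFF

    def viewer_cap(acc):
        """Cap audience size (and with it, reach and monetisation) until
        the account has established an acceptable pattern of behaviour."""
        established = (acc.streamed_hours >= TRUSTED_HOURS
                       and acc.trust_score >= 0.8)
        return None if established else NEW_USER_VIEWER_CAP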

Towards the end, the report does draw a distinction between actual terrorist acts and animated or fictionalised depictions. That's particularly important for gaming and entertainment: combating terrorists and the horrors of war have been video game tropes for a fair while, and there's always a concern that a new algorithm or database-driven detection system would erroneously flag gameplay or some other kind of entertainment as violent content.

Platforms like YouTube and Facebook have already applied stricter standards to livestreaming and content in the wake of the Christchurch attacks. Responding to a statement from the G20 Summit, where world leaders agreed to act jointly against social media firms that fail to satisfactorily respond to livestreamed violent content, a Facebook spokesperson welcomed the Australian government's approach.

"Since the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate," the spokesperson told The Australian. "We also co-developed a nine-point industry plan in partnership with Microsoft, Twitter, Google and Amazon, which outlines the concrete steps we’re taking to address the abuse of technology to spread terrorist content."

The taskforce's full recommendations can be read over on the PMC website.


Comments

    This makes no sense. You could totally get around this by streaming to your PC and having it auto upload torrents or whatever. The data still gets out.

      Enh. They're not trying to stop the data from existing. As you pointed out, that's pretty much impossible: there will always be a workaround.

      What they're trying to do instead is minimise the ability to go viral. Limit the vectors, slow the spread, give major platforms time for damage control and targeted prevention.

    As Red Skull said to Steve Rogers back in Captain America:

    "Knee-jerk reactions may not be a uniquely Australian Government trait, but I must say you do it better than anyone."

    The taskforce also called for platforms like Twitch, YouTube et al. to restrict automated account creation, saying that "persistent bad actors may find a way to circumvent" suspensions and bans.

    Huh. I thought those platforms already attempt to prevent automated account creation, through captcha and the like? Or do they not do that anymore?

      Make a new account and find out? I would assume they do, but the Government probs want something that's more complex, like face recognition so they can track us all. In the end, this is just one more move in their game. We are simply playing chess while they play the grand game of 5D chess.

    So ... do they have any suggestions that would have applied to the New Zealand shooter at all?

      It's from the minds who gave us the refined NBN... so of course not.

      The limiter on viewers for a new streamer would have prevented the footage from going viral (which is the point of the recommendations, to stop those streams from blowing up like that one did before the service could stop it). But only if he were a new streamer; if he'd built up a reputable enough history by just doing, like, Minecraft streams or something, it wouldn't work.

      So it does something, but arguably not enough to have the intended effect, while also making things a little more inconvenient. Chances are new streamers on Twitch or something won't be blowing the hell up right out the gate, so they don't have anything to worry about, but I could see it as a problem for stuff like new politicians trying to engage with voters on Facebook Live.

    The tech giants need to take a page out of game publishers' books. The ESRB was industry-made self-regulation, put in place to keep government happy. The new tech empires seem to try to get away with as much as possible, which leaves governments with no choice but to impose regulations on them.

      The tech giants need to call the governments' bluff and cease operations in Australia, turning the heavy-handed lawmakers into the ones who got Australia banned from Google, Facebook and YouTube.

      We'll see how long the laws last after that happens.

    If anyone thinks these proposals are about actually stopping terrorism you are sadly misinformed.

    Ha, video game streamers.
    No love lost here.

    They might have to (god forbid) get a real job like the rest of us.


      It must be exhausting being bitter at people who simply don't adhere to your ideals by going out and getting a 'real job', and instead make money off something they enjoy. How dare they!

      Oh, I'd wager there's something you like that people get paid quite well for, something you'd approve of and defend as being a 'real job', yet others can't possibly fathom why anyone is paid to do it.

      To be crystal clear also... I'm not a streamer, I have no desire to be one, I don't know anyone who is, and I've personally never seen the value in donating money to streamers. But then I'll never see the value in the likes of athletes making the money they do playing football, basketball, etc, either.

    None of this would have prevented the NZ terror attack, nor have stopped the initial live stream and subsequent distribution. It’s simply impossible to monitor the sheer volume of content that gets streamed - which is why they rely on reports. Hashing and matching content is fine when it’s identified but proposing some sort of magical system that correctly identifies objectionable content on the fly is... well, magical thinking I guess.

      Actually, like I said in another comment, there is one recommendation that actually would have stopped the initial livestream and distribution: the limiting of viewers for new broadcasters. Presuming he was a new broadcaster, that would've given them more of a chance to lock down the situation before it went everywhere. Not a foolproof plan, and also not one that couldn't be circumvented by someone who knew it was there and specifically wanted to do this (all you'd have to do is enough legitimate streaming for a moderator to look at that and go 'okay, you're fine', which probably wouldn't have to be very much or even very good) but a precaution that hypothetically would have helped.
