In the wake of the Christchurch terrorist attack, the Australian federal government has continued to push for tighter restrictions on social media and livestreaming platforms in particular. We're now getting a clearer idea of what those restrictions might look like, with a government taskforce outlining a series of recommendations around the creation of new accounts and limits on audience growth for new streamers.
The document, "Report of the Australian Taskforce to Combat Terrorist and Extreme Violent Material Online", was published on the Department of Prime Minister and Cabinet's website over the weekend. Unlike the ACCC's broader investigation into Facebook, Google and other online publishers and platforms, the taskforce's report focuses specifically on methods for regulating social media and livestreaming to prevent the spread of content like that streamed during the Christchurch attacks.
The report comes after a joint industry and government summit at the end of March, which included representatives from Google, Facebook and other major social media platforms, ISPs, departmental heads, and various ministers, including Prime Minister Scott Morrison, Peter Dutton and then-Minister for Communications Mitch Fifield.
"It identifies actions and recommendations that fall into one of five streams: prevention; detection and removal; transparency; deterrence; and capacity building," the executive summary reads.
The first major proactive measure recommended is the expansion of shared hash and URL databases. The hash database is designed as a cross-company resource that allows "member companies to systematically add newly identified terrorist content", enabling faster identification and blocking of such material across platforms.
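The report doesn't spell out the hashing scheme, but the basic idea of a shared hash database can be illustrated with a minimal sketch: one platform fingerprints known violating material, and every member company checks new uploads against the shared set. Note this is purely illustrative — real systems use perceptual hashes (fingerprints that survive re-encoding and minor edits), whereas the SHA-256 digest below only matches byte-identical files, and all names here are hypothetical.

```python
import hashlib

# Hypothetical shared database: a set of hex digests of known violating files.
shared_hash_db = set()

def fingerprint(data: bytes) -> str:
    # Exact-match fingerprint. Production systems use perceptual hashing
    # so that re-encoded or slightly edited copies still match.
    return hashlib.sha256(data).hexdigest()

def report_content(data: bytes) -> None:
    # A member company systematically adds newly identified material.
    shared_hash_db.add(fingerprint(data))

def is_known_violation(data: bytes) -> bool:
    # Any platform can screen an upload against the shared database.
    return fingerprint(data) in shared_hash_db

report_content(b"example violating payload")
print(is_known_violation(b"example violating payload"))  # True
print(is_known_violation(b"a different upload"))         # False
```

The appeal of this design is that companies share only fingerprints, never the material itself, which is why the taskforce frames it as a cross-company database rather than a central content archive.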
But the recommendations that would impact most users, and livestreaming in particular, revolve around restrictions on "the ability of new users to livestream until they have established a pattern of behaviour" that is deemed acceptable.
The section on livestreaming controls recommends:
- Strengthening account validation processes, particularly for account creation
- 'Cooling off periods' before a new user can livestream (such as 24 hours)
- Limiting audience size or the capacity to monetise livestreamed content for new users
- Implementing streamer ratings or scores, and/or
- Monitoring account activity
The most impactful change would be the limits on audience size and monetisation, although they would only affect new users. The idea isn't to target livestreaming platforms per se, but to combat malicious actors trying to spread content quickly through newly created accounts. The taskforce also called for platforms like Twitch, YouTube et al. to restrict automated account creation, saying that "persistent bad actors may find a way to circumvent" suspensions and bans.
Towards the end, the report does draw a distinction between actual terrorist acts and animated or fictionalised ones. That's particularly important for gaming and entertainment: combating terrorists and the horrors of war have been video game tropes for a fair while, and there's always a concern that any new algorithmic or database-driven detection system would erroneously flag gameplay or some other kind of entertainment as violent content.
Platforms like YouTube and Facebook have already applied stricter standards to livestreaming and content in the wake of the Christchurch attacks. A Facebook spokesperson, responding to a statement from the G20 Summit where world leaders agreed to jointly act against social media firms that fail to satisfactorily respond to livestreamed violent content, welcomed the Australian government's approach.
"Since the horrific terrorist attacks in New Zealand, we've been reviewing what more we can do to limit our services from being used to cause harm or spread hate," the spokesperson told The Australian. "We also co-developed a nine-point industry plan in partnership with Microsoft, Twitter, Google and Amazon, which outlines the concrete steps we're taking to address the abuse of technology to spread terrorist content."
The taskforce's full recommendations can be read over on the PMC website.