Intel, A ‘White Nationalism’ Slider Ain’t It

Image: Intel

Intel gave a presentation at GDC a few weeks back, but I’m guessing nobody actually watched it, because it took until this week for anyone to notice this absolutely absurd pitch from the company: it wants to use AI to monitor and censor “hate speech” in your online voice chat, and to let users toggle just how much hate they want to hear.

It’s a service called Bleep, launching later this year, which “is a user-facing application that uses AI to detect and redact audio based on user preferences”. That basically means it will monitor audio as it comes out of your system and mute or bleep your speakers or headphones when it detects bad words.
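To make the slider idea concrete, here’s a minimal, purely hypothetical sketch of how a Bleep-style filter might turn per-category sliders into redaction decisions. Intel hasn’t published how Bleep actually works, so every name, category, confidence score and threshold below is invented for illustration:

```python
# Purely hypothetical sketch of a Bleep-style redaction filter.
# Intel has not published how Bleep works; the categories, scores
# and threshold logic here are invented for illustration only.

# Per-category "slider": 0.0 = hear everything, 1.0 = redact everything.
SLIDERS = {
    "name-calling": 0.5,
    "misogyny": 1.0,
}

# Stand-in for the AI model: maps a word to (category, confidence).
# A real system would run speech-to-text plus a classifier on live audio.
FAKE_MODEL = {
    "insult": ("name-calling", 0.9),
    "slur": ("misogyny", 0.8),
}

def redact(words, sliders):
    """Replace a word with *BLEEP* when the classifier's confidence
    clears the threshold implied by that category's slider."""
    out = []
    for word in words:
        category, confidence = FAKE_MODEL.get(word, (None, 0.0))
        slider = sliders.get(category, 0.0) if category else 0.0
        # A higher slider lowers the confidence needed to redact.
        if slider > 0 and confidence >= 1.0 - slider:
            out.append("*BLEEP*")
        else:
            out.append(word)
    return out

print(redact(["you", "insult", "slur"], SLIDERS))
# With the settings above: ['you', '*BLEEP*', '*BLEEP*']
```

Under this reading, “some” and “most” would just be coarse values of these thresholds; whether that’s anything like what Intel actually means is anyone’s guess.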

It’ll detect those words using AI, but it’s the user preferences part of this that is the most hilarious/horrifying. Here’s a look at Bleep’s backend settings, and it is a technological hellscape.

Image: Intel

Feel free to fine-tune! Would you like to hear people body-shaming you? Oh, you would? But only a little? OK, sure!

Image: Intel

So many choices! I deeply appreciate the fact I can only hear someone screaming white nationalist taunts down a microphone “most” of the time — sometimes you need a break, after all! — and will be thinking very hard about whether I want to toggle that N-word switch to its “on” or “off” position.

It’s ghastly that something like this ever left a whiteboard, let alone made it all the way into a major presentation, but then we’re years past the point where we should expect companies like Intel to think about anything except ways they can waste millions trying to use their own technology to combat deeply human problems.

Maybe there was a good intention here at some point. Letting people enjoy a safer online experience is, after all, a very good thing! But this, this is not the way to do it. Hateful speech is something that needs to be educated against and fought, not toggled on a settings screen.

You can watch the presentation at around the 29:30 mark in the video below (though it should autoplay at that point if you click on it anyway).


  • Which is hilarious, since you just know the country Niger, Nigeria and other variants are going to be censored. “What country are you from?” “Oh, I’m from *BLEEP*”

    Not to mention all the rappers out there who will be bleeped once parents discover the settings.

    Bring it on, the great Censorship tide!

      • Dude, there was a Nigerian dude in a university class with my brother who pronounced it exactly as akeashar stated when he was asked by the teachers. Intel’s demonstrating some profound racism of their own by not checking out all of the different countries that have words that sound similar to all the words that make Americans cry. That alone is enough reason to not touch the idea with a barge pole, because the world shouldn’t be subject to America screwing up their own dialect of English and enforcing it on everyone else.

        There was already a Japanese player banhammered in the last week from Apex Legends for saying “nigero” which means “run away” in Japanese. You can’t even speak your own native language without a group of racist morons in America telling you how your language should work, while they run around using terms like “spastic” like that’s fine to other English speakers. Intel can take their slider and shove it.

  • I get where you’re coming from, Luke, but I can see sliders like this being appropriate for black gamers who don’t want to have their voice chats censored for their day-to-day language, or for voice chats where people discuss the hate speech they experienced in a safe, supportive space.

    • The problem is the slider, not giving users the option to censor comments or not.

      I mean, seriously, what exactly do I get if I allow “some” “white nationalism” and “most” “misogyny”? Surely the only real options here are to allow or to disallow?

      • I think the labels are probably misleading. Really, the sliders are about how aggressive the AI is. If you can imagine how effective the AI is likely to be, you probably do want to control how ‘sensitive’ it is (and it may work more or less effectively for certain types of content, thus the multiple sliders), otherwise it could be blocking everything.

        I also disagree with Luke. I think some of the behaviour you see online is attention-seeking behaviour, and blocking others from hearing it may help lessen some of that behaviour. It’s obviously not a cure-all, but I think more tools are to be applauded.

        That being said, everyone is exactly right that, as presented, it looks like a hellscape dystopian future. But that doesn’t mean it should have stayed on the whiteboard.

        • Yeah, if they were AI sensitivity sliders that would make sense and I don’t think anyone would have a problem. But that’s not what Intel wrote on the sliders. What they wrote was “Do you want some, most, or all LGBTQ+ Hate?”

          Speculating here, but it could be that a sensitivity slider implies the AI isn’t perfect and some human intervention is required. But Intel would rather suggest that their AI is perfect and it’s the humans who need adjustment … and this is what happens.

  • Luke…. buddy… champ…

    You’re assuming the AI works 100% perfectly all the time.

    Maybe leave the innovation to those who can create.

  • This is genius on Intel’s part.

    They are gonna get a bunch of personal data they normally wouldn’t be able to get. Think about it: if Intel just said they wanted to record your conversations, everyone would tell them to stick it where the sun don’t shine.

    But now people are more than glad to hand over this data. Genius!

    • Indeed. How else can they censor on the fly? Live TV is often a few seconds behind so producers can actually censor or cut the feed if need be. This will either delay voice chat at critical moments or steal massive amounts of data to sell to other people.
