AI in games cheats, often out of necessity to keep up with human players. Even so, no one enjoys copping impossible head shots, or being spotted kilometres away through solid concrete. But I don’t think anyone’s sat back after being snapped by a Counter-Strike bot and thought “it’s about bloody time we had guidelines for ethical AI behaviour in video games!” Unless you’re Unity, that is.
When we think of artificial intelligence, we picture sentient robots, or computers communicating in natural language about pod bay doors. We never think about what we have now, which could be considered rudimentary in comparison.
That’s until you realise that AI — in the form of machine learning and neural networks — is very much doing work today in a wide variety of fields, from choosing the best time to auto-update Windows to making the world’s creepiest porn. Or even predicting which Game of Thrones characters will die next.
So, the problem isn’t so much Skynet killing us all (or pesky aimbots), but setting racist Twitter bots loose on the internet, however unintentionally. We really need to stop and think about how we deploy AI, rather than just doing it and hoping for the best.
While video games might not be in the same league just yet, VR, AR and similar technologies will push the boundaries, and being proactive about how AI is used isn’t such a bad idea.
This week, Unity decided to get ahead of the game — so to speak — releasing its “Guide to Ethical AI”. Far from being Asimov’s three laws, the guidelines are “meant as a blueprint” for developer behaviour:
These principles are meant as a blueprint for the responsible use of AI for our developers, our community, and our company. We expect to develop these principles more fully and to add to them over time as our community of developers, regulators, and partners continue to debate best practices in advancing this new technology.
And, if you’re wondering, here are those guidelines:
Design AI tools to complement the human experience in a positive way. Consider all types of human experiences in this pursuit. Diversity of perspective will lead to AI complementing experiences for everybody, as opposed to a select few.
Consider the potential negative consequences of the AI tools we build. Anticipate what might cause potential direct or indirect harm and engineer to avoid and minimize these problems.
Do not knowingly develop AI tools and experiences that interfere with normal, functioning democratic systems of government. This means saying no to product development aimed at the suppression of human rights, as defined by the Universal Declaration of Human Rights, such as the right to free expression.
Develop products responsibly and do not take advantage of your products’ users by manipulating them through AI’s vastly more predictive capabilities derived from user data.
Trust the users of the technology to understand the product’s purpose so they can make informed decisions about whether to use the product. Be clear and be transparent.
Guard the AI derived data as if it were handed to you by your customer directly in trust to only be used as directed under the other principles found in this guide.
As you can see, they apply to developers, rather than to AI itself. And, while not iron-clad rules, hopefully they’ll steer not only Unity and its developers, but the industry at large, in the right direction.