It frequently seems, when you look at academic research on one side and 'real world' applications on the other, that never the twain shall meet - academics are notorious for floating about in their own little world (it's practically part of the job description some days ...), while everyone else wants to know how academic research can be applied to real-world situations. There's an interesting post up on Game/AI - especially the comments - talking about the problems of bridging the industry/academic gap in gaming. Are we all too beholden to our institutional obligations? Is there any way to bridge the gap? Will academics ever get their heads out of the theoretical clouds? Will designers ever start thinking academics have something to offer besides star gazing?
The problem is compounded by the fact that it's very much a chicken-and-egg situation: design and AI very often go hand-in-hand. The current state of the art in game AI is limited by the fact that so many game designers intentionally avoid using AI because they don't understand what's possible ... or because they watched other designers make wildly unrealistic promises about AI and took the wrong lessons from that experience ... or because they mistakenly believe we're still stuck in the 1980s and only heavily scripted AI can work. We need to grow out of all that.
Part of the challenge of developing AI is going to involve working on the design side, and pushing designers out of the narrow "comfort zones" they build for themselves. Too many designers are still perfectly happy making zombie games, and that holds us all back.
The original post has spawned some responses on GrandTextAuto and elsewhere - worth a read through (especially the comments on the original post!) if you, like me, are interested in the confluence of academic research and actual game design.