Could This Australian Company Revolutionise Video Game Visuals?

Euclideon is an Australian company based in Brisbane. It's also the creator of 'Unlimited Detail' - a technology the company claims will change the way developers create visuals in video games. I'm not a developer and have only a layman's knowledge of how this technology works, so I can't say how legit it actually is - but it's definitely worth checking out the demonstration video and deciding for yourself.

We're a little cynical - the kind of processing power required to run this kind of tech, and have games running at a stable framerate, is an unknown quantity - but we're wondering if this tech could be the foundation for something spectacular.

I'm keen to hear the thoughts of people more informed in these processes - is this a hoax? Is this technology viable? Let us know in the comments below.

Thanks to Adam Mathew for sending it in!


    Reading the reddit thread about this, it looks like it's a hoax.

    There were similar videos released a year ago on a different YouTube channel and a lot of the comments from people who seem to understand the technology basically translate to "it couldn't work."

      Yeah, wasn't sure - I thought maybe someone who knew more about things would be able to confirm.

        I hope it's not a hoax... it would be nice to see some innovation...

      Hope it's not a hoax. These guys were apparently given $2 million in government commercialisation grants this year.

      Not a hoax and too bad that Trjn is the first comment.

      It's like reading that the NBN got hacked last week* :P

      * which it didn't ffs -_-

        I didn't say it was a hoax, but that it looks like a hoax. There's a big difference.

        Also, hoax was the wrong word. Unfeasible seems much more appropriate.

      I know from other developers who have been approached, and from sources within the company, that the tech is legit.
      What you saw there was software rendering, but they are working on making it work with hardware rendering (as in, rendering directly on your video card, which is faster).
      This is also an early preview! It's about getting hype generated. The key thing this tech demo was showing off was "CHECK OUT OUR TRILLIONS OF POLYGONS".
      Which, even with the poor lighting in that test level, is pretty huge.
      So before dismissing it as a hoax, just remember it's a mid-stage demo, not a finished version.

    With how far technology has come within the past 30 years, I don't think this is unattainable. In the space of 30 years we've gone from the likes of Space Invaders and Asteroids to the likes of Crysis. Hell, look at the CGI in movies - we went from the CGI in the 1989 Batman movie to full-on Toy Story in under 10 years.

    He is pretty obviously not Australian - he sounds like a constipated limey.

    I think it's a hoax.

      Dude. He's Australian. I have many a friend who sounds like that. (I'm Aussie and I'm assuming you are too)

      For some reason, many comp. sci. students/grads sound like that... :)

    Worst. Voice-over. Ever.

    But yeah I have no idea if it's real or not. I'll just say that things that 'are impossible' are often possible.


      That voice alone seemed to validate any notion that it might be a hoax.

      Yeah - the guy's accent is far out. Sounds like a radio DJ, but perhaps that's just his 'presenter voice', like we all have our 'telephone voice' or 'business voice'.

      Definitely Aussie though - NSW/Qld?

    It's voxels, but advanced to the point where it doesn't look blocky or blobby, which is where the real innovation is. In terms of storage and the core principle, you can view some similar stuff here: (By the guy who made the non-voxel Duke3D engine 'build')

    The problem as I understand it is how current GPUs render graphics: they work with polygons/vector data as described in the vid, then use rasterisation to display a 3D model in 2D space. Is this new method a form of ray tracing or tessellation? I'll be interested to see what sort of equipment or power you'll need to render these scenes, or whether it will be a software-rendered method. Either way it looks pretty awesome.

    I dunno how fake or real it is, but the guy doing the reading sounds hilarious.

    I don't really know much about the technology itself, but the video sure as hell seemed dodgy. Unprofessional in every possible way. That guy (aside from coming across as stupidly arrogant and having one of the most annoying voices known to mankind) just kept making excuse after excuse for why things didn't look a certain way. Seemed less like a tech demo and more this guy desperately trying to make it seem legit.

    It would be really cool if this was possible but I'm with Trjn, looks like a hoax to me.

    Hmm, interesting theory on where the medium's visuals are heading.

    Honestly, I'm a bigger fan of clever and effective art and aesthetic design than absolute visual realism. That said, I'm sure this type of technological approach could be used to satisfy my desire for newer, more stylish iterations of the former.

    Mark, get an interview. Face to face. I reckon you'll be able to spot if it's BS or not.

      There's a lot of comments on the youtube vid asking how it would work with moving or animated objects/models.

      Curious to see what their answer on that one is.

        Yeah, that could be a different story again.

        Although I think people would be willing to settle for an awesome world, even if the monsters were still constrained to common polygons. Considering that's where most of them tend to disappear to.

        I'm actually curious how much effort has to go into painting the world then.

        Because in my mind that's what they would be doing - painting the atoms as if they were a canvas - since I assume traditional textures don't work, because they generally assume they're a picture on a polygon surface.

    I'd say I know more than most about rendering technology, but I'm hardly an expert. I've programmed shaders and studied GPU architecture and underpinnings at QUT.

    Nothing stated in the video gives it away as being in any way fake or fraudulent.
    I can find no info on Bruce Roberts Dell, the CEO - however, that is definitely a Brisbane accent.

    In any case, assuming this is real, this is a game changer (pun intended). There is no hardware available to accelerate this, other than GPGPU, which is hardly a mature technology. So, all of the results there are going to have been done through software. The equivalent of trying to run Crysis without a dedicated graphics card.

    Also, all of this was essentially done with flat shading, and even so the detail is obvious.

    Another, less obvious aspect is that now that structures can have DENSITY as well as VOLUME, advanced rendering techniques such as refraction or sub-surface scattering might be accomplished much more reliably.

      This guy is real. I've actually met him in RL, and this project is 100% true.

    I love how he says ComPUter graphics.

    . . . *shudder*

    This has been going on for over a year now, I don't think it's a hoax. Unfortunately all we have to go off of are these terrible videos, Serrels, you should try to get in contact with the company for an interview and see what they have to say.

    Either put the doubters at bay, or confirm that it is indeed a hoax. I remember id was looking at this kind of technology, as are others. It's not that far off now; it makes sense to change from multiple 2D polygons to atoms once we have the technology for it. As he said, hospitals and such already use this kind of technology.

    The physics processing required to power so much "volume" would surely be out of reach for current tech?
    Besides isn't this the same as the voxel tech used in Delta Force ages ago?
    Looks good if legit though - I've always thought that "assets" such as rocks, trees, buildings etc should be shared amongst developers to help reduce dev time and cost...

    This is a fairly old vid, and anyone in the industry can easily tell you it ain't gonna happen.

      I don't mean tech-wise either - I mean no one in their right mind would model an elephant prop at half a million polys when they can drop it to about 10k and sculpt the details into the normal map in a day.

        What do you mean? To make a normal map you have to model the high resolution elephant anyway - the normal map is generated from the difference between the high res model and the low res one. The high resolution model has to be made either way, which is a very quick thing to do with programs like zbrush or mudbox.

          I mean, you make the 10k-poly prop, sculpt it in zbrush or mudbox, export the MAP (not the high-res model) and there you go. Why would you ever model a 500,000-poly prop using polys for all the detail? It's wasteful, and a bitch if it needs to be rigged and animated as well.

            Not to mention that a lot of the time they don't make the high-end sculpts anyway. The use of a baked normal is considered a "next-gen" technique (read: this gen), but not all developers do it that way. For the majority of objects you can get a passable representation using an automated process, and if you only want to push certain details of models (see SC2 as an example) then it's wasteful to make a full high-res model.

            I still don't really know what you mean when you say "why would you ever model a 500,000 poly prop" when you can just use a normal map out of zbrush... When you sculpt it in zbrush, you're making a high res model - it takes the same amount of time to make it and export a map as it would to make it and export the high res version - it's the same process just a different button you press at the end. You have to spend the time to make the high resolution model either way don't you?

            I don't know if this system would ever be able to rig and animate the model though, all they've shown us is static scenes. In theory you could use the same techniques for rigging it and animating it as a poly mesh, with bones and weighted envelopes... if your computer doesn't explode first.

              That was one of the first things I thought of - the cost of asset creation is already astronomical with all these pretty engines these days. If something like this happens in games (which it probably will eventually), imagine the fact that a lot of destructible entities could have internal detail as well, like bones and muscles inside monsters, and structural supports inside buildings.

                (that was meant to be in reply to "Blaghman, patron saint of doing it wrong"'s comment above sorry)
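      The baking workflow being debated above can be made concrete. Below is a toy sketch in plain Python with invented data - not how zbrush or mudbox actually work internally - deriving a tangent-space normal map from a sculpted height field by finite differences, which is the general idea behind storing sculpted detail in a map instead of in polygons.

```python
import math

def height_to_normal_map(height, strength=1.0):
    """Turn a 2D height field (rows of floats) into tangent-space
    normals via central differences - a crude stand-in for baking."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Slope in x and y, clamped at the borders.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            # The surface normal leans against the slope.
            nx, ny, nz = -dx, -dy, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx / length, ny / length, nz / length))
        normals.append(row)
    return normals

# A completely flat sculpt yields straight-up normals everywhere.
flat = [[0.5] * 4 for _ in range(4)]
nm = height_to_normal_map(flat)
```

      The low-poly prop then carries only this map, and the renderer fakes the sculpted detail at lighting time - no 500,000-poly mesh ever ships.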

    This is strange to say the least.

    Maybe this company is just staying hush-hush to protect its intellectual property. It could just be waiting for the right time to bring this out and make some serious cash.

    It's not unreasonable to think that a company could have thought of a new way of doing things that produces results not seen in video games before.

    Apparently the company got a $2 million grant from the government to develop this technology. So they are obviously working on something.

    I think the reason so many people are calling it a hoax is that the only paper trail to CEO Bruce Roberts Dell on the Internet leads right back to this company. Weird.

      Thanks for saying this! That's what I was thinking. They probably won't release it until they get a patent cleared, to avoid copy-cats.

    Looks really interesting, and somewhat promising, but there are still unanswered questions:
    -Animations/moving backgrounds
    -Destructible environments/objects
    -Collision detection
    -Combining cloud data with polygons
    -Memory requirements
    -Performance on standard hardware

    They do little to answer any of these practicality questions, they just show nice looking rocks.

      - Render times
      - Load times
      - Impact of dynamic lights
      etc etc.

      Frankly, it's a big list of unanswered questions. I guess we'll see in "a couple of months", when the SDK is supposedly going to be released.

      To quote Futurama "Interesting if true".

      This is exactly it. As weird as it sounds, the best solution may well be to combine the two current systems of rendering, although that would be a logistical nightmare, if it's possible at all.

    Don't know much about graphics rendering and stuff... but I have done research into fractal systems (mainly for surface chemistry), so I know some basics of surface geometry...

    What I don't get is... yes, you can populate a grid using points instead of polygons... but joining those points still means the rendering process is going to output polygons, which is what creates a surface (points = 1D... surface = 2D).

    Looking more at what he claims - making everything so small that the detail is infinite - that's basic fractalisation of points. We already do it with animations (e.g. water splashes or lava flows), so it's not new...

      From reading their site, it seems that the core of the tech's innovation is not so much in how the system creates what you see (making surfaces out of the points), but in how it ignores what you don't see.

      It sounds like they are adopting algorithms and methodologies developed for high speed search logic (aka google and bing), to ignore every data point not needed to generate the current frame at the exact resolution you are using. In other words, no wasted processing power on detail that isn't being displayed anyway.

      No idea how they do this given that for refractive lighting and object interaction you really do need the full object defined in the system. Maybe that's why we are not seeing more complex lighting effects and moving objects in the demo yet.

      In theory the principle seems sound. Find a way to drastically increase the allocation efficiency of 3D processing resources, allowing far more detail in the remaining elements being displayed. The question is whether it is flexible enough for the real world demands of gaming today.
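      That search-and-skip principle can be sketched in a few lines of plain Python. This is purely a toy stand-in with invented data - Euclideon hasn't published its algorithm - but it shows the payoff: points are bucketed into depth-sorted nodes, and once every pixel is filled, the remaining nodes are skipped without their points ever being read.

```python
from collections import defaultdict

def render(points, width, height):
    """Front-to-back search over a point cloud of (x, y, z, colour)
    tuples. Coarse depth slabs stand in for octree nodes."""
    nodes = defaultdict(list)
    for x, y, z, colour in points:
        nodes[int(z)].append((x, y, z, colour))
    frame = [[None] * width for _ in range(height)]
    filled = 0
    tested = 0
    for slab in sorted(nodes):           # nearest slab first
        if filled == width * height:     # screen full: skip all farther nodes
            break
        for x, y, z, colour in nodes[slab]:
            tested += 1
            px, py = int(x), int(y)
            if 0 <= px < width and 0 <= py < height and frame[py][px] is None:
                frame[py][px] = colour   # first (nearest) point per pixel wins
                filled += 1
    return frame, tested

# A near wall covering the whole 2x2 screen, with 100 points hidden behind it.
near_wall = [(0, 0, 0.0, 'a'), (1, 0, 0.0, 'b'), (0, 1, 0.0, 'c'), (1, 1, 0.0, 'd')]
far_rocks = [(0, 0, 5.0, 'x')] * 100
frame, tested = render(near_wall + far_rocks, 2, 2)
```

      Here the 100 occluded points cost nothing; only the 4 visible ones are ever tested. Scale the buckets up to a real octree and that is the claimed efficiency win.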

    This technology is interesting, to say the least. What they've done is step away from rasterisation, which is how we currently convert the data of a 3D scene into a "2D" image displayed on your screen. Assuming this is real, what this does is a ray-trace - or what would probably be called a reverse ray-trace (in current computer graphics, a ray-trace traces the rays and then "bakes" them onto the model) - whereby the main limiting factor is the number of pixels you are attempting to render. If I remember correctly, John Carmack stated a few years back that this kind of tech was the future, though I'm not sure whether that was on his blog, and I don't have a source for that.

    Another point, based on my memory, is that this kind of technology has problems with "dynamic" objects. So, for instance, you could have a pre-set animation of someone running, but, if you tried to rag-doll them, it would just fail horrendously. Again though, that's stuff that I think I'm remembering I've heard.

    In terms of tech, this is entirely possible, so far as my limited knowledge goes.

    I think there would also be problems related to shaders, lighting, and other things, which can be seen in that video.
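    The "limiting factor is the number of pixels" point can be sketched with a toy orthographic ray-march in plain Python (invented grid; a real renderer would use perspective rays and a sparse structure). One ray per screen pixel steps through a voxel grid until it hits something, so per-frame cost tracks screen resolution rather than scene complexity.

```python
def ray_march(grid, width, height):
    """March one ray per pixel straight along +z through a dense boolean
    voxel grid; record the depth of the first filled voxel, or None."""
    hits = []
    for py in range(height):
        row = []
        for px in range(width):
            hit = None
            for z in range(len(grid)):   # step cell by cell away from the eye
                if grid[z][py][px]:      # first filled voxel stops the ray
                    hit = z
                    break
            row.append(hit)
        hits.append(row)
    return hits

# A 2x2x2 grid with a single filled voxel at depth 1.
grid = [
    [[False, False], [False, False]],   # depth 0: empty
    [[True,  False], [False, False]],   # depth 1: one filled voxel
]
hits = ray_march(grid, 2, 2)
# hits[0][0] is 1 (the ray hit at depth 1); the other three pixels are None
```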

      In regards to dynamic objects - that does seem like it would be a problem. I imagine they would actually have to have a low resolution polygon version of the world running invisibly underneath the voxels, and use that to calculate dynamic interactions.

      I think the problems with shaders and lighting in that video are just down to it being a new and incomplete engine; I can't think of any natural limitations that would stop that stuff from working.

        I guess it comes into question how they calculate the advanced stuff...

        Hmm, how can I phrase this...

        Okay, typically to make a shadow map, we take the light, cast "rays" based on the position of the light source (and the resolution of the map, etc.), work out when something's in the way, and then shade the areas that are blocked out (okay, that's a terrible way of describing it, but I'm going to assume you know what I mean, as you seem quite knowledgeable). I'm just curious how the shading part would work for this stuff (I'll assume they'll have written their own shadow system for it).
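        For what it's worth, that classic shadow-map test boils down to very little code. A toy sketch in plain Python (orthographic light looking down +z, made-up occluders - how this would map onto per-atom shading is exactly the open question):

```python
def build_shadow_map(occluders, size):
    """Record the nearest occluder depth per (x, y) cell, as seen from
    the light (orthographic, light looking along +z)."""
    smap = [[float('inf')] * size for _ in range(size)]
    for x, y, z in occluders:
        if 0 <= x < size and 0 <= y < size:
            smap[y][x] = min(smap[y][x], z)
    return smap

def in_shadow(point, smap, bias=0.01):
    """A point is shadowed if something sits between it and the light;
    the small bias avoids self-shadowing from depth precision."""
    x, y, z = point
    return smap[y][x] < z - bias

# One occluder at depth 1 hanging over cell (0, 0).
smap = build_shadow_map([(0, 0, 1.0)], size=2)
print(in_shadow((0, 0, 3.0), smap))  # True: the occluder blocks the light
print(in_shadow((1, 1, 3.0), smap))  # False: nothing above this point
```

        The render-from-the-light pass records depth; the shading pass then darkens any point farther from the light than the recorded depth at its cell.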

        Now that I think about it, I think I remember something about them being able to apply multiple shaders to each "atom" or whatever they're calling them, which would allow for more complex attributes (specularity for instance).

        In terms of the rigging thing, I'll admit I don't know why it would actually be impossible to have a dynamic object (other than the fact that you'd be dealing with the movements of millions of points dynamically, which would be a complete bitch for hardware). I just remember an explanation for why it wouldn't work from back when I first saw this stuff, but it's entirely possible that that has changed by now.

        Also, in relation to what you said above about destructible things having internals - I imagine that would be horrendous for whoever makes the stuff, although I can imagine a future where someone designs a procedural system to handle all of it (like Spore but less bad, and I guess the generation wouldn't be in real time). The only way we can keep up the current increase in graphical fidelity is through procedural generation; otherwise everything just gets too expensive (in my opinion, anyway).

    It's real, it's not a hoax. Whether they can get the fps working at a decent rate is another question...

    The best thing about that tech demo was the brilliant Loyd Grossman impression the V/O guy was doing.

    I kept waiting for him to ask "Who would live on an island like this?"

    It's not a hoax.

    From what I understand it's just voxels, but MUCH further developed.

    I'm going to use it to remake Tetris.

    The most realistic Tetris ever.

Join the discussion!