I have known two of the guys involved for a while now (Dan and Bran). Both strike me as honest people who are legitimately passionate about graphics tech, so I don't think they're running a scam (only mentioning this because of other comments). Having delved into the voxel world myself, I have my own doubts about voxels as a silver bullet for everything, but there are many areas worth pursuing (particularly medical, which they are pursuing). From an aesthetics standpoint alone, voxels are great for games (voxel art now has a serious following on Twitter). Anyhow, I've been amazed by Siles' work over the years. I'm curious to see what they cook up. :)
Volume and voxel rendering are essential for medical visualization. But there should already be a lot of competition in that area.
Voxel graphics for games is mostly restricted to non-deforming bodies. Games like Comanche and Outcast made use of an optimized variant of voxel rendering for their terrains with great results in the 90s. But I have yet to see an animated face rendered in real time using voxels or points. I would love to see that done, but it's a hard problem.
Nice, but this implementation should extend the voxels down to the lowest neighbor. This would close the gaps in the steep hillsides.
If you're doing it in a pure software implementation with no tilted camera, you can optimize this heavily. You start at the bottom of the screen, and for each vertical column of pixels you march along the heightmap in 2D and project the texels to the screen. You also remember the highest Y coordinate you've drawn so far. If a new texel projects higher than that, you draw another vertical line up to that position in the texel's color and remember the new position. So you're filling up your screen from bottom to top in an extremely efficient way.
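For anyone curious, that column-march with a height cutoff can be sketched like this. This is a toy Python version of the classic Comanche/VoxelSpace approach; the function name and parameters (horizon, scale, draw distance) are all made up for illustration:

```python
# Toy sketch of the column-march heightmap renderer described above
# (the classic "VoxelSpace" technique from Comanche). All names and
# parameter values here are hypothetical.

import math

def render_terrain(heightmap, colormap, cam_x, cam_y, cam_h,
                   screen_w, screen_h, horizon=60, scale=120.0, draw_dist=300):
    """March the heightmap front-to-back per screen column, tracking the
    topmost pixel drawn so far (a per-column 'y-buffer')."""
    size = len(heightmap)
    # Frame buffer: screen[y][x] holds a color value (None = sky, row 0 = top).
    screen = [[None] * screen_w for _ in range(screen_h)]

    for col in range(screen_w):
        # Ray direction for this column (simple 90-degree FOV, no camera tilt).
        angle = (col / screen_w - 0.5) * (math.pi / 2)
        dx, dy = math.sin(angle), math.cos(angle)
        y_buffer = screen_h              # top of the filled region in this column
        for z in range(1, draw_dist):
            mx = int(cam_x + dx * z) % size   # wrap around the map edges
            my = int(cam_y + dy * z) % size
            # Project terrain height to screen space; nearer samples loom larger.
            h = horizon + int((cam_h - heightmap[my][mx]) / z * scale)
            if h < y_buffer:
                # Only draw the part poking above everything drawn so far.
                for y in range(max(h, 0), y_buffer):
                    screen[y][col] = colormap[my][mx]
                y_buffer = max(h, 0)
            if y_buffer <= 0:            # column is full; stop marching early
                break
    return screen
```

Because each column only ever draws pixels above its y-buffer, every screen pixel is written at most once, which is where the efficiency comes from.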
The video in this article shows an animated face and body, unless you mean something else. Jon Olick did some work with deformation/animation of voxels, IIRC. But agreed, it is still a challenging and not very well solved problem.
I must have missed this video then. I'll have another look.
edit: found the video (I find myself skipping the teaser quite often these days). They show recordings, not animations. This may seem like splitting hairs, but it's not. An animated rig of some kind can be adjusted through parameters that are entered by an animator to create or alter the movement. This is also the basis for technology like animation blending and inverse kinematics which (in a nutshell) are some of the techniques that make movement in current games so plausible. If this is just a static stream of volumes that is played back like a flipbook, then no easy adjustments are possible.
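To make that distinction concrete, here's a minimal Python sketch (with made-up joint names) of why a parameterized rig matters: two keyframe poses can be blended at runtime with a single parameter, which a pre-baked flipbook of volume frames can't do.

```python
# Hypothetical illustration of runtime animation blending on a rig: two
# keyframe poses (joint angles in degrees) are interpolated joint by joint.
# A static stream of baked volumes offers no such parameters to adjust.

def blend_poses(pose_a, pose_b, t):
    """Linearly interpolate two poses, joint by joint (t in [0, 1])."""
    return {joint: (1 - t) * pose_a[joint] + t * pose_b[joint]
            for joint in pose_a}

walk = {"hip": 20.0, "knee": 45.0, "ankle": 10.0}
run  = {"hip": 35.0, "knee": 70.0, "ankle": 15.0}

# Halfway between walking and running:
half = blend_poses(walk, run, 0.5)  # {'hip': 27.5, 'knee': 57.5, 'ankle': 12.5}
```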
Atomontage has been around for many years now without any release. The two Achilles heels of finely detailed voxel engines have historically been animation and lighting. Based on the video, it looks like they have animation now but the lighting doesn't seem to be any better.
All of the shading that I've seen in past Atomontage videos has been screen space ambient occlusion, which is a great effect but is no substitute for real lighting with dynamic shadows. I've also seen them use AO baked directly into the voxel models.
I just wish that they would actually release something instead of posting countless tech demo videos. If the technology is so great, why aren't they trying to put it in the hands of real game developers right now? The whole thing smells of vaporware...
Before the development of Voxel Quest (https://www.voxelquest.com/) was stopped, it had shadows and animated agents. However, I'm not knowledgeable enough about game renderers to say what kind of technique was used or what its limitations are.
Yup, all that lighting in the video is baked. If you look at some of their other videos where they start destroying models, you can tell it's baked because the shadows don't change. Modern video games still heavily use baked lighting, but anything that moves needs dynamic shadows.
Anyone else getting flashbacks to that "Unlimited Detail" vaporware/propagandaware with this?
"Exciting New Graphics Technology!", light on technical details and heavy on marketing speak.
Wow, that brings back memories. The company behind that, Euclideon, is apparently now in the "hologram" business. Watch their video, it's exactly what you would expect from their earlier ones: https://www.youtube.com/watch?v=GjPWk0UhKDQ
Definitely. There could absolutely be good tech here, but getting through the vaporware-like marketing speak is too hard. It hit so many buzzwords and tropes that by the time they got to "like Minecraft but better" I was just laughing.
Yeah, that was my first thought too. Maybe it's just because they're both voxels.
The problem with voxels is always the memory requirements. Even at only one byte per voxel at 1cm resolution, you're talking 1GB per 10m cube for a naive implementation. You can limit your model to just the visible surfaces of your world, but then you're basically just using fat triangles.
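The arithmetic checks out; a quick back-of-envelope in Python (function names are just for illustration) comparing the dense grid with a surface-only representation:

```python
# Back-of-envelope check of the memory claim above: a dense 1-byte-per-voxel
# grid at 1cm resolution versus keeping only a surface shell.

def dense_bytes(side_m, res_m=0.01, bytes_per_voxel=1):
    """Memory for a naive dense grid filling a cube of side_m meters."""
    n = int(side_m / res_m)               # voxels along one edge
    return n ** 3 * bytes_per_voxel

def shell_bytes(side_m, res_m=0.01, bytes_per_voxel=1):
    """Memory if only the cube's outer surface voxels are stored."""
    n = int(side_m / res_m)
    return 6 * n ** 2 * bytes_per_voxel   # six faces, ignoring edge overlap

print(dense_bytes(10))   # 1_000_000_000 bytes = 1 GB, matching the 10m-cube figure
print(shell_bytes(10))   # 6_000_000 bytes = 6 MB for the surface alone
```

The roughly 150x gap between the two is exactly why engines drop to sparse, surface-only structures, and also why the result starts to resemble "fat triangles".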
While I totally get that this isn't going to quite "change everything", what really stuck out to me is their use of geometry shaders to do the animations with voxels. Very interesting. I have a fork of Google Draco where I've been working on getting compressed animated point clouds into Unity, then adding geometry shaders to them. I think there are very unique experiences coming for us with point cloud and voxel tech, but they probably won't replace our polys any time soon for gaming or anything.
Attn. vaporware naysayers: some smart people think this is worth pursuing. Miguel Cepero has been doing this forever with Voxel Farm. Sony almost made an EverQuest with it. John Carmack once believed a future id engine would be voxel-based. It hasn't been proven yet with a big hit, but there is probably something here worth noting.
I don't think most are saying "voxels aren't worth pursuing", only "every voxel implementation we've seen has been un-marketable because of major limitations they were misleadingly silent about, new ones inherit that doubt".
So Sparse Voxel Octrees (SVOs) have been around for a while now. NVIDIA invested heavily in research on them. However, the existing workflow for game/movie assets is polygonal in nature, making this a really niche technology. They've instead focused on Bounding Volume Hierarchies (BVHs), which allow for fast ray tracing into traditional models with normal materials.
I don’t think voxel engines need to work with only voxels. The environment could be, but animated models should probably still be polygons.
So the challenge is you don’t just need fast voxel rendering, you need fast 3D rasterization into an optimized data structure (like sparse voxel octrees).
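As a toy illustration of what "rasterizing into a sparse voxel octree" means, here's a minimal Python sketch where points are inserted by descending to the containing octant, allocating nodes only along occupied paths. Everything here is made up for illustration, nothing Atomontage-specific:

```python
# Toy sparse voxel octree: a point is inserted by repeatedly picking the
# child octant that contains it, so nodes exist only where geometry does.
# Purely illustrative; real engines pack nodes into flat GPU-friendly arrays.

class SVONode:
    __slots__ = ("children",)
    def __init__(self):
        self.children = [None] * 8   # one slot per octant; None = empty space

def insert(root, x, y, z, max_depth):
    """Insert a point in the unit cube [0,1)^3, subdividing max_depth times."""
    node = root
    for _ in range(max_depth):
        # Choose the octant from each coordinate's half, then zoom into it.
        ix, iy, iz = int(x >= 0.5), int(y >= 0.5), int(z >= 0.5)
        idx = ix | (iy << 1) | (iz << 2)
        if node.children[idx] is None:
            node.children[idx] = SVONode()   # allocate only along this path
        node = node.children[idx]
        x = (x - 0.5 * ix) * 2               # remap octant to the unit cube
        y = (y - 0.5 * iy) * 2
        z = (z - 0.5 * iz) * 2
    return node

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children if c is not None)

root = SVONode()
insert(root, 0.1, 0.1, 0.1, 3)
insert(root, 0.9, 0.9, 0.9, 3)
print(count_nodes(root))  # 7: the root plus two disjoint 3-level paths
```

The catch the comment above points at: doing this insertion fast enough, for moving geometry, every frame, is the hard part, not the data structure itself.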
I don’t think this will pay off until game engines do absolutely massive amounts of physics and lighting simulation. But I think it’s inevitable. You’re already seeing indirect lighting engines using low resolution sparse voxel representations of the world.
But isn't your comment just a list of failed endeavors? I've seen them as well, which is why I am a vaporware naysayer - I think it's prudent from a Bayesian approach.
Why something failed matters. Was it a bad idea, or a good idea that was too far ahead of its time (and thus, a 'bad' idea in its time, but only pragmatically)?
Oh absolutely, I'm not saying the general technology is worthless.
Just that this implementation in particular, and a similarly advertised one in the past both stink of marketing bullshit with no actual substance behind it.
The volumetric video rendering they showed seems particularly compelling for me, having played a bit more with VR recently.
What're the current methods for doing reasonable volumetric video capture? I had a set up of kinects a few years ago, but it was quite tedious to programmatically interact with in real-time.
Generally structured light (i.e. Kinect or a laser scanner) is still gonna be your only option, I think. Stereo 3D reconstruction is getting quite good and might be usable, though.
This again? Remember the Euclideon engine? The main problem is that you can't animate models composed of voxels. Every one of these "atom-based engine" projects runs into that wall and rides the funding until the hype runs out. Bummer.
Of course you can animate voxel models - there are game engines that do it now - the trouble has been doing it on a large number of voxels in real time.
This is why existing voxel games tend to look super blocky and pixelated.
That said - they seem to have solved that for non-organic forms at least (their video forms are very cool, but are basically frame-by-frame volumetric animation - a little different).
Lighting will be the big issue, though not always a problem in all games.
> but are basically frame-by-frame volumetric animation
I'm assuming by this you mean each frame using a distinct voxel model.
I may be wrong, but I think what's being demoed at 0:47 in the video is animation by deformation of a static model. If true, this is something I've not seen in voxel graphics before.
Unfortunately, the article is very light on the details about what is technologically novel about what Atomontage is doing, versus previous endeavours.
That's certainly what they're doing with the buildings. The dancing man and face animations look like frame-by-frame, though (which is why I'd imagine they are likely video).
That said if they can do just non-organic forms well and leave organics to polygons - that would be pretty incredible.