Unreal Engine 5 and Nanite virtualized geometry

What Does It Mean For Content Creators?

Epic Games/Unreal Engine

Since the very beginning of commercial 3D gaming in the early ’90s, developers have been hampered by polycount restrictions. As hardware and software have evolved, we have been able to push these limits to achieve remarkable levels of detail and quality, using per-pixel shading techniques such as normal mapping to simulate higher resolution geometry, and building LOD (level of detail) systems to swap between higher and lower resolution models based on their size on screen.
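
To make the traditional approach concrete, here is a minimal LOD-selection sketch in Python. The thresholds and the distance-based screen-size proxy are purely illustrative; real engines typically derive screen size from the projected bounds of the model.

```python
def select_lod(mesh_lods, distance, fov_scale=1.0):
    """Pick a LOD based on approximate screen coverage.

    mesh_lods: list of meshes ordered from highest to lowest detail.
    distance:  camera-to-object distance in world units.
    The thresholds here are illustrative, not engine-accurate.
    """
    # Rough proxy for on-screen size: larger when close, smaller when far.
    screen_size = fov_scale / max(distance, 0.001)

    if screen_size > 0.5:      # fills a large portion of the screen
        return mesh_lods[0]    # full-resolution model
    elif screen_size > 0.1:
        return mesh_lods[1]    # medium-resolution model
    else:
        return mesh_lods[-1]   # lowest-resolution model
```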

With the advent of Unreal Engine 5, Epic is hoping to make these restrictions a thing of the past — streamlining the real-time content development pipeline and allowing developers to focus on the content itself, and not the technical hurdles that we’ve historically had to overcome.

In this article, we’ll be focusing on one of the major new features of Unreal Engine 5 — Nanite virtualized geometry — and how this feature could impact our workflows, for better or for worse.

Nanite (Virtualized Geometry)

Epic Games/Unreal Engine

The way we optimise scene polycounts has gone largely unchanged over the last few decades: we author our model content at a variety of resolutions, or LODs, and simulate high-resolution detail with textures in shaders, whether through per-pixel lighting or dynamic mesh tessellation.

In the Unreal Engine 5 reveal, Epic disclosed that the engine is capable of drawing scenes with billions of triangles of source geometry. In essence, this means that the engine will have access to the full resolution geometry, and will optimise it in real-time for the target display.

By building each frame of the scene geometry from a bucket of triangles, we can have a more predictable scene cost for our geometry budget, and adjust headroom for other budgets by scaling the output resolution — either at a fixed resolution or using dynamic resolution scaling.
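
As a rough illustration of that idea, the sketch below estimates a per-frame triangle budget from the output resolution. The assumption of roughly one rasterized triangle per output pixel is the intuition behind virtualized geometry, but the exact figure here is illustrative, not engine-accurate.

```python
def triangle_budget(width, height, tris_per_pixel=1.0, resolution_scale=1.0):
    """Estimate a per-frame triangle budget from the output resolution.

    Assumes roughly one rasterized triangle per output pixel; the
    exact ratio is an illustrative assumption.
    """
    pixels = int(width * resolution_scale) * int(height * resolution_scale)
    return int(pixels * tris_per_pixel)

# Dropping the resolution scale frees budget for other frame costs.
print(triangle_budget(3840, 2160))                         # native 4K
print(triangle_budget(3840, 2160, resolution_scale=0.75))  # dynamic res
```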

Using traditional techniques, you would throw a model into the scene and every triangle of that model would be drawn, regardless of whether a triangle fell outside the camera’s viewing frustum, or was so small that it shared the same pixel space as its neighbours. With Nanite, these considerations are smartly handled for you on the fly. This means you get uncompromised quality and generally only pay the cost for what makes a valid contribution to the fully resolved scene.
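
The two rejection criteria can be sketched as a toy filter. Note that this per-triangle linear scan is only for illustration; Nanite performs this kind of culling hierarchically, on clusters of triangles, on the GPU.

```python
def visible_triangles(triangles, in_frustum, projected_area):
    """Toy per-triangle visibility filter.

    in_frustum(tri)     -> bool: is the triangle inside the view frustum?
    projected_area(tri) -> float: its on-screen area in pixels.
    Only the two rejection criteria from the text are modelled here.
    """
    MIN_PIXEL_AREA = 1.0  # sub-pixel triangles add no useful detail
    return [
        tri for tri in triangles
        if in_frustum(tri) and projected_area(tri) >= MIN_PIXEL_AREA
    ]
```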

Of course, how many triangles an engine is capable of rendering is not necessarily a useful metric when you consider how many other things factor into a frame’s cost. But when you consider that you can mitigate some of these other costs (1-bit alpha, overdraw, shader-driven per-pixel lighting) by throwing more geometry at the problem, it becomes a much more notable statistic.

Goodbye Normal Maps / LODs, Hello Sculpting All The Things?

Just as Epic is re-thinking the way it approaches rendering geometry, as content creators we should reconsider our approach to creating the source geometry in order to best take advantage of this new technology.

Many game engine assets already have a high-resolution counterpart — be it a several-million-triangle ZBrush sculpt, or a highly tessellated subdivision mesh — it’s just that these rarely make their way into the game engine. Instead, these high-resolution source models are used to bake normal maps onto the game-ready meshes.

Left to right: Low-poly plane, vertex normals only. High-resolution source mesh. Low-poly plane with per-pixel normal lighting. The differences are inconceivable!

By encoding a surface’s direction, or ‘normal’, at a per-pixel level, we can give the illusion of higher detail. This has been the industry standard for nearly 20 years, and the tools and processes involved with authoring normal maps have been evolving steadily.
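
As a rough sketch of the idea, the snippet below unpacks a normal-map texel using the standard 0..255 to -1..1 encoding and applies a simple Lambertian (N dot L) lighting term. A real shader does this per pixel on the GPU; this is only the arithmetic.

```python
def shade_from_normal_map(texel_rgb, light_dir):
    """Decode a tangent-space normal from an 8-bit texel and apply
    simple Lambertian (N dot L) lighting.

    texel_rgb: (r, g, b) in 0..255, as stored in a normal map.
    light_dir: normalized (x, y, z) light direction in tangent space.
    """
    # Unpack 0..255 -> -1..1 (the standard normal-map encoding).
    nx, ny, nz = ((c / 255.0) * 2.0 - 1.0 for c in texel_rgb)

    # Diffuse term: surfaces facing the light are brighter.
    n_dot_l = nx * light_dir[0] + ny * light_dir[1] + nz * light_dir[2]
    return max(n_dot_l, 0.0)

# The flat geometry never changes; only the lighting response does,
# which is why silhouettes and shadows are unaffected.
print(shade_from_normal_map((128, 128, 255), (0.0, 0.0, 1.0)))  # ~1.0
```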

Although normal maps have been a worthy stand-in, they are no match for fully realised geometry. Normal maps only simulate fluctuations in the surface direction — they do not handle occlusion, shadowing, or changes to the silhouette. There have been techniques such as parallax occlusion mapping that have found favour in some modern games, but these can be quite expensive and/or have visible rendering artefacts.

Note that from a more oblique angle, the normal mapped plane does not convey the silhouette of the higher resolution source geometry.

Normal maps also do not cast shadows.

With Nanite virtualized geometry, we can bypass this whole process and simply import the full-resolution source geometry. This sounds like a huge time-saver, and it is! Anyone who has experienced baking normal maps will be well-versed in the joys of adjusting cages, exploding parts, smoothing groups, UVs, and different coordinate spaces, not to mention simply sitting around waiting for bakes.

We also don’t need to worry about creating LODs for each asset: Nanite takes a more sophisticated and less wasteful approach to rendering our content. This means content looks more consistent (no ‘popping’ between LOD states), artists spend less time creating and checking LODs, and the asset’s memory footprint shrinks, since we only need to store the highest level of detail.

Of course, we can expect the memory saved by dropping the normal map and extra LODs to be more than absorbed by the higher resolution source asset. Still, this represents a more unified approach to model creation and promotes consistency across your asset library: if everything is authored at the highest possible resolution, there should be no visible disparity between assets placed in a scene together.
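
A back-of-envelope comparison makes the trade-off visible. All of the figures below are illustrative assumptions rather than measurements, and they ignore the aggressive compression engines apply to mesh data:

```python
# Back-of-envelope memory comparison (all figures illustrative).

# Traditional asset: game mesh + LOD chain + 4K BC5 normal map.
game_mesh_tris = 50_000
lod_chain_tris = 50_000 // 2 + 50_000 // 4 + 50_000 // 8  # 3 extra LODs
bytes_per_tri  = 60           # rough, uncompressed; real figures vary widely
normal_map     = 4096 * 4096  # BC5 compresses to ~1 byte per texel

traditional = (game_mesh_tris + lod_chain_tris) * bytes_per_tri + normal_map

# Nanite-style asset: the full-resolution source mesh, no bake, no LOD chain.
source_mesh_tris = 1_500_000
nanite = source_mesh_tris * bytes_per_tri

print(f"traditional: {traditional / 1e6:.1f} MB")  # ~22 MB
print(f"nanite:      {nanite / 1e6:.1f} MB")       # ~90 MB
```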

However, not all assets are created equal. It has become increasingly common for assets to be created at a mid-poly level, with surface detail authored purely at the texture level and no baking from a high-resolution source asset. This is a more efficient workflow for some artists — particularly those working on hard-surface props — as sculpted detail is usually limited to high-frequency surface scratches or dents, which are generally easier to stamp on during the texturing process.

When it comes to these types of assets in the ‘land of Nanite’, artists will either need to adopt the high-resolution model workflow or adapt their tools to generate high-resolution meshes from their mid-poly ones — perhaps by exporting a tessellated version of their mesh with the heightmap data from their texture-authoring tool applied at the geometry level. However, normal maps are still likely to play a part in high-frequency surface detail, as it is inconvenient to author this in the source geometry.
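
As a sketch of that second option, the snippet below displaces a tessellated grid by a heightmap. A flat grid stands in for the tessellated mid-poly mesh here; a real tool would subdivide the actual asset and sample the heightmap through its UVs.

```python
def displace_grid(heightmap, grid_size, amplitude=1.0):
    """Build a displaced vertex grid from a heightmap.

    heightmap: 2D list of values in 0..1, as exported from a
               texture-authoring tool.
    grid_size: number of quads per side of the tessellated grid.
    """
    rows, cols = len(heightmap), len(heightmap[0])
    vertices = []
    for j in range(grid_size + 1):
        for i in range(grid_size + 1):
            u, v = i / grid_size, j / grid_size
            # Nearest-neighbour sample of the heightmap at (u, v).
            h = heightmap[min(int(v * rows), rows - 1)][min(int(u * cols), cols - 1)]
            # Push the vertex along its (up) normal by the sampled height.
            vertices.append((u, v, h * amplitude))
    return vertices
```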

The Not-So-Distant Future

It’s time to see what’s next! — Epic Games/Unreal Engine

In the same way that cars have been making the transition from internal combustion to electric, I expect that we will see games adopt a ‘hybrid’ approach — using Nanite virtualized geometry to handle static or rigid environmental assets, and falling back to more traditional methods for anything more dynamic, such as characters, soft-body physics assets or anything requiring ambient vertex deformation such as grass in the wind.
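
In pipeline terms, this hybrid approach amounts to a simple routing rule per asset. The attribute names below are hypothetical stand-ins for whatever metadata a pipeline actually tracks:

```python
def rendering_path(asset):
    """Route an asset to a rendering path under the 'hybrid' approach.

    The dictionary keys are hypothetical pipeline metadata, not
    engine-defined flags.
    """
    if asset.get("skinned") or asset.get("soft_body") or asset.get("vertex_anim"):
        return "traditional"   # characters, cloth, wind-blown foliage
    return "nanite"            # static or rigid environmental geometry

print(rendering_path({"skinned": True}))    # traditional
print(rendering_path({"name": "rock_01"}))  # nanite
```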

In an ideal world, Nanite would be the one triangle-rasterization solution to rule them all, and there would be no need for traditional techniques to handle skinned meshes or translucency. Epic is likely looking into ways to support this type of content with Nanite, although skinned meshes present a much more difficult technical challenge, from both a content creation and a rendering perspective.

Unreal Engine 5 signifies a giant leap forward in real-time rendering technology. With Nanite virtualized geometry, Lumen real-time global illumination, Chaos next-generation physics, and Niagara particle systems, Epic is leading the way into the next generation of real-time graphics. I cannot wait to see what amazing content will result.
