Ray tracing is a technology that should be familiar to any engineer who works in AR/VR/XR, product design, or simulation. It is one of the most important advances in graphics technology since the birth of three-dimensional (3D) graphics, and it is about to move from the high-end realms of film and advertising to the embedded realms of mobile, wearables, and cars as new, more efficient ray-tracing methods enter the market.
Look at any 3D scene and you will find that its fidelity depends heavily on the lighting. In traditional graphics rendering (rasterization), lightmaps and shadowmaps are precomputed and then applied to the scene to approximate how it would be lit. While beautiful results can be achieved this way, they are always limited by the fact that these techniques only simulate lighting. Ray tracing instead models how light behaves in the real world, producing more accurate renderings of reflections, transparency, glow, and materials.
In the real world, a beam of light travels from a light source and strikes an object. The light interacts with the object and is reflected onto other surfaces in a way that depends on the object's surface properties. It continues to bounce from surface to surface, creating light and shadow.
Ray tracing in a computer, or more precisely "path tracing", follows this real-world path in reverse. Rays are traced outward from the camera's point of view into the scene; when a ray hits an object, the algorithm calculates how light would interact with that surface based on its material properties. The ray then continues bouncing from object to object until it reaches a light source. The result is a scene that looks as if it were lit by the sun in the real world, with realistic reflections and shadows.
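The core of the camera-outward tracing described above can be sketched in a few lines. The following is a minimal illustration, not a production renderer: the scene (a single unit sphere and one point light), the function names, and the single-bounce Lambertian shading are all assumptions made for the example.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance t along the ray, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2; `direction`
    is assumed to be unit length, so the quadratic's leading term is 1.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

# Illustrative scene: a unit sphere at the origin, one point light above.
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 0.0), 1.0
LIGHT_POS = (0.0, 5.0, -5.0)

def shade(origin, direction):
    """Trace one camera ray; return a grayscale intensity in [0, 1]."""
    t = ray_sphere(origin, direction, SPHERE_CENTER, SPHERE_RADIUS)
    if t is None:
        return 0.0  # background: no surface hit
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = normalize(tuple(h - c for h, c in zip(hit, SPHERE_CENTER)))
    to_light = normalize(tuple(l - h for l, h in zip(LIGHT_POS, hit)))
    # Lambertian (diffuse) term: how directly the surface faces the light.
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))
```

A real path tracer does this per pixel, with many rays per pixel and many bounces per ray, which is exactly where the computational cost comes from.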
Traditionally, this could not be done in real time on embedded devices (or even on high-end workstations) because the computational load is too high.
While ray tracing is still very new in gaming, many people are already familiar with it from 3D animated films; the light reflected in a rain puddle in the opening shot of Toy Story 4 is a recent example. However, such scenes take months to render on dedicated server clusters, which is impractical for games, where frames must be generated in real time at a minimum of 30 frames per second, and preferably twice that or more.
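Those frame rates translate into a very small time budget per frame, which makes the contrast with offline film rendering concrete. A quick illustrative calculation:

```python
def frame_budget_ms(fps: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / fps

# A 30 fps game leaves ~33.3 ms per frame; 60 fps leaves ~16.7 ms.
# An offline film frame, by contrast, may render for hours.
for fps in (30, 60, 90):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
```

Every ray traced, shaded, and bounced has to fit inside that budget, alongside everything else the game is doing.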
Real-time ray tracing in games was previously unattainable due to the enormous amount of computation involved, but that has changed thanks to hybrid approaches that combine the speed of rasterization with the visual accuracy of ray tracing. How awesome would it be to run a ray-traced game on your phone? That will become a reality in the near future.
The problem is that while real-time ray tracing solutions for mobile have been around for a while, there is still no ecosystem to support them – but that is changing. In 2018, NVIDIA released hardware with hybrid real-time ray tracing for the desktop market, aimed especially at gamers. Yet even NVIDIA shipped that hardware before any games could use it, underscoring how difficult it is to build an ecosystem for a new technology. Now that a number of game developers such as Bethesda and Unity have made their move, the new generation of consoles arriving in 2020 will also include some ray-tracing capabilities. Before long, ray tracing will start appearing in other markets as well – such as the AR/VR market.
As ray tracing returns to the agenda and people start experiencing it on their computers, they will increasingly want – and indeed expect – it on their mobile devices, VR headsets, and consoles. To keep up, engineers need to start familiarizing themselves with the latest advances in this industry-changing technology.