Ray Tracing: How NVIDIA Solved the Impossible!
Aug 15, 2023
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers
📝 The showcased papers are available here:
https://research.nvidia.com/publicati …
https://research.nvidia.com/publicati …
https://graphics.cs.utah.edu/research …
https://users.cg.tuwien.ac.at/zsolnai …
Link to the talk at GTC: https://www.nvidia.com/en-us/on-deman …
If you wish to learn more about light transport, I have a course that is free for everyone, no strings attached: https://users.cg.tuwien.ac.at/zsolnai …
❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://www.patreon.com/TwoMinutePapers
- / @twominutepapers
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O’Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers
Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu
Károly Zsolnai-Fehér’s links:
Instagram: https://www.instagram.com/twominutepa …
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/
Content
0.48 -> Dear Fellow Scholars, this is Two Minute Papers
with Dr. Károly Zsolnai-Fehér. Or not quite. To
7.2 -> be more exact, I have had the honor to hold a talk
here at GTC, and today we are going to marvel at
14.7 -> a seemingly impossible problem, and 4 miracle
papers from scientists at NVIDIA. Why these 4?
23.16 -> Could these papers solve the impossible?
Well, we shall see together in a moment.
29.1 -> If we wish to create a truly gorgeous
32.1 -> photorealistic scene, in computer graphics,
we usually reach out to a light transport
37.8 -> simulation algorithm, and then, this happens.
Oh yes, concept number one. Noise! This is
46.68 -> not photorealistic at all, not yet anyway. Why is
that? Well, during this process, we have to shoot
54.54 -> millions and millions of light rays into the scene
to estimate how much light is bouncing around,
61.08 -> and before we have simulated enough rays, the
inaccuracies in our estimations show up as noise
68.04 -> in these images. This clears up over time, but it
may take from minutes to days for this to happen,
75.3 -> even for a smaller scene. For instance, this one
took us 3 full weeks to finish. 3 weeks! Yes,
84.48 -> really. I am not kidding. Ouch.
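To get a feel for where that noise comes from, here is a tiny Monte Carlo sketch in Python. This is my own toy example, not NVIDIA's code: it estimates a simple one-dimensional integral instead of real light transport, but the principle is the same, too few random samples means a noisy estimate, and the noise averages out as we add more.

```python
import random

def mc_estimate(f, n_samples, seed=0):
    """Estimate the integral of f over [0, 1] by averaging random samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += f(rng.random())
    return total / n_samples

# Integrate f(x) = 3x^2 over [0, 1]; the exact answer is 1.
f = lambda x: 3.0 * x * x

coarse = mc_estimate(f, 100)      # few "rays": a noisy estimate
fine = mc_estimate(f, 100_000)    # many "rays": the noise averages out

print(abs(coarse - 1.0), abs(fine - 1.0))
```

The error of such an estimator shrinks only with the square root of the sample count, which is why clearing up an image can take from minutes to days.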
88.41 -> Solving this problem in real time seems absolutely
impossible, which has been the consensus in the
95.4 -> light transport research community for a long
while. So much so, that at the most prestigious
101.1 -> computer graphics conference, SIGGRAPH, there
was even a course by the name: Ray tracing is
107.22 -> the future and ever will be. This was a bit of
wordplay, yes, but I hope you now have a feel for
115.02 -> how impossible this problem seems.
118.56 -> When I was starting out as a first year PhD
student, I was wondering whether real-time
124.32 -> light transport will be a possibility within
my lifetime. It was such an outrageous idea,
131.34 -> I usually avoided even bringing up the question in
conversation. And boy, if only I had known what we were
139.8 -> going to be talking about today. Wow.
142.56 -> So, we are still at the point where these images
take from hours to weeks to finish. And now,
149.76 -> I have good news and bad news. Let’s go with
the good news first. If you overhear some
156.96 -> light transport researchers talking, you will
hear the phrase “importance sampling”
162.18 -> a great deal. It means choosing where to
shoot these rays in the scene. For instance,
168.54 -> you see one of those smart algorithms here,
called Metropolis Light Transport. This is one
175.68 -> of my favorites. It typically allocates these rays
much smarter than previous techniques, especially
on difficult scenes. But, let’s go even smarter!
This is my other favorite, Wenzel Jakob’s Manifold
190.56 -> Exploration paper, at work here. This algorithm is
absolutely incredible, and the way it develops an
198.06 -> image over time is one of the most beautiful
sights in all of light transport research.
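Importance sampling is easy to demonstrate on a toy integral. The sketch below is my own illustration, not any of the papers' methods: both estimators compute the same integral without bias, but the one that concentrates samples where the integrand is large needs fewer of them for the same accuracy.

```python
import random

def uniform_estimate(n, seed=0):
    """Shoot samples uniformly: average f(x) with x ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    return sum(rng.random() ** 4 for _ in range(n)) / n

def importance_estimate(n, seed=0):
    """Shoot more samples where f is large: x ~ p(x) = 2x on [0, 1],
    then weight each sample by f(x) / p(x) to stay unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.random() ** 0.5      # inverse-CDF sample of p(x) = 2x
        total += x ** 4 / (2.0 * x)  # divide by the pdf
    return total / n

# Both estimate the integral of x^4 over [0, 1], which is exactly 0.2,
# but the importance-sampled estimator has lower variance per sample.
print(uniform_estimate(10_000), importance_estimate(10_000))
```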
203.28 -> So if we understand correctly,
205.8 -> the more complex these algorithms are, the smarter
they can get, however, at the same time, due to
213 -> their complexity, they cannot be implemented so
well on the graphics card. That is a big bummer.
220.08 -> So, what do we do?
222.15 -> Do we use a simpler algorithm and take
advantage of the ever-improving graphics
226.8 -> cards in our machines, or write something
smarter and miss out on all of that?
232.92 -> So now, I can’t believe I am saying this,
236.52 -> but let’s see how NVIDIA solved the impossible
through 4 amazing papers. And that is, how they
244.98 -> created real-time algorithms for light transport.
248.64 -> Paper number one. Voxel Cone Tracing. Oh my, this
is an iconic paper that was one of the first signs
257.4 -> of something bigger to come. Now, hold on to your
papers, and look at this. Oh my goodness. That is
266.82 -> a beautiful, real-time light transport simulation
program. And it gets better, because this
273.6 -> paper is from 2011, more than 10 years
old, and it could already do all this. Wow! How is that
283.14 -> even possible? We just discussed that we’d be
lucky to have this in our lifetimes, and it seems
289.14 -> that it was already here, 10 years ago!
292.14 -> So, what is going on here? Is light transport
suddenly solved? Well, not quite. This does not
300.3 -> solve the full light transport simulation
problem; instead, it makes the problem a little simpler.
305.94 -> How? Well, it takes two big shortcuts. Shortcut
number one: it subdivides the space into voxels,
314.76 -> small little boxes, and it runs the light
simulation program on this reduced representation.
321.66 -> Shortcut number two: it only computes 2 bounces
326.46 -> for each light ray for the illumination. That is
pretty good, but not nearly as great as a full
334.44 -> solution with potentially infinitely many bounces.
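The first shortcut, the voxel grid, can be sketched in a few lines. This is a simplified stand-in of my own, not the paper's implementation: it just buckets lighting samples into boxes and averages them, which is the kind of reduced representation a voxel-based lighting pass would then work on.

```python
from collections import defaultdict

def voxelize(samples, voxel_size):
    """Bucket (position, radiance) samples into a sparse voxel grid,
    averaging the radiance stored in each voxel."""
    grid = defaultdict(list)
    for (x, y, z), radiance in samples:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid[key].append(radiance)
    return {key: sum(vals) / len(vals) for key, vals in grid.items()}

samples = [((0.2, 0.1, 0.3), 1.0), ((0.4, 0.2, 0.1), 3.0), ((1.6, 0.0, 0.0), 2.0)]
grid = voxelize(samples, voxel_size=1.0)
print(grid)  # two voxels: (0, 0, 0) averages to 2.0, (1, 0, 0) holds 2.0
```

Running the light simulation on a coarse grid like this is much cheaper than on the full geometry, which is exactly the trade-off the paper makes.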
338.4 -> It also uses tons of memory, so, plenty of things
to improve here, but my goodness, if this was not
346.68 -> a quantum leap in light transport simulation, I
really don’t know what is. This really shows that
353.34 -> scientists at NVIDIA are not afraid of completely
rethinking existing systems to make them better,
359.76 -> and boy, isn’t this a marvelous example of that.
And remember, all this in 2011. 2011! More than 10
370.2 -> years ago. Absolutely mind-blowing. And one more
thing: this is the culmination of software and
378.48 -> hardware working together. Designing them for
each other. This would not have been possible
385.08 -> without it.
386.1 -> But, once again, this is not the full light
transport. So, can we be a little more ambitious,
392.64 -> and hope for a real-time solution for
the full light transport problem? Well,
398.64 -> let’s have a look together and find out! And here
is where paper number two comes to the rescue. In
406.26 -> this newer work of theirs, they presented an
amusement park scene that contains a total of
412.2 -> over 20 million triangles, and it truly is
a sight to behold. So let’s see, and! Oh,
420.66 -> goodness! This does not take from minutes to days
to compute; each of these images was produced in
426.96 -> a matter of milliseconds! Wow.
429.9 -> And, it gets better. It can also render this
scene with 3.4 million light sources, and this
438.42 -> method can really render not just an image, but
an animation of it interactively. What’s more,
445.2 -> the more detailed comparisons in the paper reveal
that this method is 10 to 100 times faster than
452.76 -> previous techniques, and it also maps really well
onto our graphics cards. Okay, but what is behind
460.68 -> all this wizardry? How is this even possible?
464.4 -> Well, the magic behind all this is a smarter
allocation of these ray samples that we have to
471 -> shoot into the scene. For instance, this technique
does not forget what we did just a moment ago when
478.32 -> we move the camera a little and advance to the
next image. Thus, lots of information that is
484.74 -> otherwise thrown away can now be reused as we
advance the animation. Now note that there are
492.12 -> so many papers out there on how to allocate
these rays properly, and this field is so mature,
498.18 -> that it truly is a challenge to create something
that is just a few percentage points better
503.82 -> than previous techniques. It is very hard to
make even the tiniest difference. And to be able
510.78 -> to create something that is 10 to 100 times
better in this environment? That is insanity.
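The idea of not forgetting previous samples can be illustrated with a toy temporal accumulator. This is my own much-simplified sketch, not the paper's algorithm: it blends each new noisy frame into a running history instead of throwing the old samples away, while a real renderer would also reproject pixels to follow camera motion.

```python
def temporal_accumulate(frames, alpha=0.2):
    """Blend each new noisy frame into a running history.
    alpha controls how much we trust the newest frame."""
    history = None
    for frame in frames:
        if history is None:
            history = list(frame)
        else:
            history = [alpha * new + (1.0 - alpha) * old
                       for new, old in zip(frame, history)]
    return history

# Five noisy 1-pixel "frames" whose true value is 1.0:
noisy = [[1.4], [0.7], [1.2], [0.8], [1.1]]
print(temporal_accumulate(noisy))  # closer to 1.0 than most single frames
```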
517.92 -> And, this proper ray allocation
520.56 -> has one more advantage. What is that? Well, have a
look at this. Imagine that you are a good painter,
528.66 -> and you are given this image. Now your task is
to finish it. Do you know what this depicts?
536.7 -> Hmm…maybe. But knowing all the details of this
image is out of the question. Now, look, we don’t have
544.5 -> to live with these noisy images, we have denoising
algorithms tailored for light simulations. This
551.88 -> one does some serious legwork with this noisy
input, but even this one cannot possibly know
558.12 -> exactly what is going on because there is so much
information missing from the noisy input. And now,
565.44 -> if you have been holding on to your papers so far,
squeeze that paper, because, look. This technique
572.04 -> can produce this image in the same amount of
time. Now we’re talking! Now, let’s give it
579.06 -> to the denoising algorithm, and…yes! We get a
much sharper, more detailed output. Actually,
587.58 -> let’s compare it to the clean reference image.
Yes, yes yes! This is much closer. This really
595.56 -> blows my mind. We are now, one step closer
to proper, interactive light transport!
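To make the denoising step concrete, here is the simplest possible denoiser, a box filter, sketched on a one-dimensional scanline. This is my own illustration, far simpler than the denoisers tailored for light simulations, which also use depth, normals and albedo to keep edges sharp.

```python
def box_denoise(pixels, radius=1):
    """Replace each pixel with the average of its neighborhood,
    trading a little blur for a lot less noise."""
    out = []
    n = len(pixels)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = pixels[lo:hi]
        out.append(sum(window) / len(window))
    return out

noisy = [1.3, 0.6, 1.2, 0.9, 1.0]  # a noisy scanline; the true value is 1.0
print(box_denoise(noisy))          # every value lands much closer to 1.0
```

The cleaner the input the denoiser gets, the more faithful the result, which is why better sample allocation and denoising work so well together.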
601.38 -> Now note that I used the
604.08 -> word interactively twice here. I did not say real
time. And that is not by mistake. These techniques
612.18 -> are absolutely fantastic, one of the bigger leaps
in light transport research, but, they still cost
618.96 -> a little more than what production systems can
shoulder. They are not quite real time yet.
625.5 -> And, I hope you know what’s coming.
628.62 -> Oh yes! Paper number three. Check this out!
633.36 -> This is their more recent result on the Paris
Opera House scene, which is quite detailed,
639.24 -> there is a ton going on here. And, you are all
experienced Fellow Scholars now, so when you
646.5 -> see them flicking between the raw, noisy and the
denoised results, you now know exactly what is
653.4 -> going on. And, hold on to your papers, because
all this takes about 12 milliseconds per frame.
660.9 -> That is over 80 frames per second. Yes yes yes! My
goodness! That is finally in the real time domain
670.32 -> and then some! What a time to be alive!
673.56 -> Okay, so where is the catch? Our keen
eyes see that this is a static scene.
679.74 -> It probably can’t deal with dynamic movements
and rapid changes in lighting, can it? Well,
686.22 -> let’s have a look. Wow! I cannot believe my
eyes. Dynamic movement, checkmark. And here,
694.98 -> this is as much change in the lighting as
we would ever want, and it can do this too.
701.1 -> And, we are still not done yet. At this point,
705.12 -> real time is fantastic, I cannot overstate how
huge of an achievement that is. However, we need
713.34 -> a little more to make sure that the technique
works on a wide variety of practical cases.
719.64 -> For instance, look here. Oh yes! That is a ton of
noise, and it’s not only noise, it is the worst
728.64 -> kind of noise! High-frequency noise. The bane of
our existence. What does that mean? It means these
736.98 -> bright fireflies. If you show that to a light
transport researcher, they will scream and run
743.34 -> away. Why is that? It is because these are light
paths that are difficult to get to, and hence,
750.96 -> take a ton more ray samples to clean up.
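To see why fireflies are so painful, here is a classic production trick sketched in Python, clamping rare, extremely bright path samples to a ceiling. To be clear, this is not the method from paper number four; it is the crude baseline that trades a little bias for far lower variance, which the paper improves upon.

```python
def clamp_fireflies(samples, max_value=10.0):
    """Clamp outlier path samples so one lucky, extremely bright
    light path cannot dominate the pixel estimate."""
    return [min(s, max_value) for s in samples]

# One hard-to-reach light path returned a huge value and dominates the pixel:
samples = [0.8, 1.1, 0.9, 500.0, 1.0]
print(sum(samples) / len(samples))   # ~100.8: a bright firefly pixel
clamped = clamp_fireflies(samples)
print(sum(clamped) / len(clamped))   # 2.76: much closer to the neighbors
```

Clamping throws energy away, which is exactly why smarter sampling of these difficult light paths is such a big deal.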
754.38 -> And, you know what is coming! Of course, here
is paper number four. Let’s see what it can
761.64 -> do for us. Am I seeing correctly? That is so much
better! This seems nearly hopeless to clean up in
769.98 -> a reasonable amount of time, and this…this might
be ready to go as is with a good noise filter.
777.78 -> How cool is that!
780.03 -> Now, talking about difficult light paths,
let’s have a look at this beautiful caustics
785.52 -> pattern here. Do you see it? Well, of course
you don’t! This region is so undersampled,
792.12 -> we not only can’t see it, it is hard to
even imagine what should be there. So,
799.08 -> let’s see if this new method can accelerate
progress in this region. That is not true. That
807 -> just cannot be true. When I first saw this paper,
I could not believe this, and I had to recheck the
813.96 -> results over and over again. This is at the very
813.96 -> least 100 times more developed caustic pattern.
821.94 -> Once again, with a good noise filter, probably
ready to go as is. I absolutely love this one.
829.92 -> Now, note that there are still shortcomings. None
of these techniques are perfect. Artifacts can
837.3 -> still appear here and there, and around specular
and glossy reflections, things are still not as
843.78 -> clear as the reference simulation. However, we now
have real time light transport and not only that,
850.92 -> but the direction we are heading to is truly
incredible, and amazing new papers are popping
858 -> up what feels like every single month. Don’t
forget to apply The First Law Of Papers,
864.24 -> which says that research is a process. Do not look
at where we are, look at where we will be two more
871.2 -> papers down the line.
872.82 -> Also, NVIDIA is amazing at democratizing
these tools and putting them into the
878.76 -> hands of everyone. Their tech transfer
track record is excellent. For instance,
884.46 -> their marbles demo is already out there,
and not many know that they already have
890.52 -> a denoising technique that is online and
ready to use for all of us. This one is a
897.42 -> professional grade tool right there. Many
of the papers that you have heard about
902.52 -> today may share the same fate. So, real-time light
transport for all of us? Sign me up right now!
966.18 -> Thanks for watching and for your generous
support, and I'll see you next time!
Source: https://www.youtube.com/watch?v=NRmkr50mkEE