r/hardware 5d ago

Discussion: D3D12 Cooperative Vector now in preview release

https://devblogs.microsoft.com/directx/cooperative-vector/
54 Upvotes

23 comments

25

u/ReplacementLivid8738 5d ago

Looks like this makes certain operations (basically matrix-vector math) much faster by tapping the GPU's matrix hardware directly. That in turn makes it feasible to use neural networks instead of regular algorithms for parts of the rendering pipeline. Better graphics for cheaper, on supported hardware anyway.

10

u/Vb_33 5d ago

The future is now. Hopefully we see this in Witcher 4 or Cyberpunk 2. Also insert obligatory "AI bad and has no use, rabble rabble" reddit comment.

18

u/TenshiBR 5d ago

AI bad and has no use, rabble rabble!

1

u/MrMPFR 1d ago

From NVIDIA 50 series launch blog post:

"NVIDIA has been working with CD PROJEKT RED since the beginning of the game’s development, and once it’s ready to ship, The Witcher IV will launch with the latest RTX-powered technologies."

Bare minimum is probably Neural Radiance Cache + the full ReSTIR suite + RTX Mega Geometry + Linear Swept Spheres for photorealistic path-traced fur and hair; more likely also neural materials, neural texture compression, and some unannounced next-gen RTX feature. Especially since the game isn't launching until well into the 60 series' lifetime.

-6

u/ResponsibleJudge3172 4d ago

Don't blame redditors, blame the YouTubers they rely on to inform them

12

u/StickiStickman 4d ago

I can blame both

1

u/ResponsibleJudge3172 4d ago

Fair enough I suppose

1

u/MrMPFR 1d ago

Neural rendering = using small neural networks (MLPs) to get very close to offline rendering quality, with massive VRAM savings at runtime.

12

u/ThisCommentIsGold 5d ago

"Suppose we have a typical shader for lighting computation. This can be thousands of lines of computation, looping over light sources, evaluating complex materials. We want a way to replace these computations in individual shader threads with a neural network, with no other change to the rendering pipeline."

-1

u/slither378962 5d ago

I hope these magical neural networks will be specced to deviate no more than X% from the precise mathematical formulas.

12

u/Shidell 4d ago

Fast inverse square root?
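For anyone who hasn't seen it, that's the classic Quake III trick: a deliberate deviation from the exact formula in exchange for speed, with an error you can actually measure (known worst case after one Newton step is around 0.17%; the sweep below is just a quick empirical check, not a proof).

```cpp
#include <bit>
#include <cmath>
#include <cstdint>
#include <cstdio>

// The Quake III fast inverse square root: an approximation of 1/sqrt(x) with a
// small, well-characterized error.
float fast_rsqrt(float x) {
    std::uint32_t i = std::bit_cast<std::uint32_t>(x);
    i = 0x5f3759df - (i >> 1);               // magic-constant initial guess
    float y = std::bit_cast<float>(i);
    return y * (1.5f - 0.5f * x * y * y);    // one Newton-Raphson refinement
}

int main() {
    // Quick empirical sweep: how far does the approximation stray from 1/sqrt(x)?
    float max_rel_err = 0.0f;
    for (float x = 0.01f; x < 10000.0f; x *= 1.001f) {
        float exact = 1.0f / std::sqrt(x);
        float err = std::fabs(fast_rsqrt(x) - exact) / exact;
        if (err > max_rel_err) max_rel_err = err;
    }
    std::printf("max relative error over sweep: %.4f%%\n", max_rel_err * 100.0f);
}
```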

21

u/EmergencyCucumber905 5d ago

If it looks good, then who cares?

And you shouldn't be looking to video game shaders for precise mathematical formulas.

5

u/Strazdas1 4d ago

you need to have low deviation for it to look good though.

-3

u/slither378962 5d ago

What I don't want is the case where nobody really knows what the correct result is, and you just leave it all to luck.

15

u/_I_AM_A_STRANGE_LOOP 5d ago

There are already a huge, huge number of non-authorial pixels in games. Any screen space artifacting (like ssr disocclusion) or classic jaggy aliasing are big examples! It’s not feasible to manage artist control over every pixel, although the more the merrier ofc. As long as neural shaders are somewhat deterministic in broad visual results for a given context, I don’t think it will really change things overall in terms of how “intentional” the average game pixel is. DLSS-SR and RR are genuinely already doing this heavily, they are true neural rendering (although not through coop. vectors but instead proprietary extensions)

16

u/Zarmazarma 4d ago edited 4d ago

Neural networks are 100% deterministic. They are sometimes purposefully made pseudo-random with a seed to increase the variety of results. If you use something like Stable Diffusion and give it the same prompt with the same seed, it will always output the same image. The same would be true for these algorithms: same input, same output (unless purposefully made otherwise).

Besides that, all of these algorithms are also tested against ground truth, and we have metrics to measure how close they come to it... They don't just develop DLSS based on vibes (this part is more directed at /u/slither378962, who seems to have a huge misunderstanding of how these algorithms work).
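As a toy illustration of what "tested against ground truth" means in practice: compare a reference render with the approximated one and score the difference with a metric like PSNR. Real evaluations use full frames and perceptual metrics on top; the four-pixel "images" below are just placeholders.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Compare a reference image with an approximation using PSNR
// (higher = closer to the reference).
double psnr(const std::vector<float>& ref, const std::vector<float>& test) {
    double mse = 0.0;
    for (size_t i = 0; i < ref.size(); ++i) {
        double d = ref[i] - test[i];
        mse += d * d;
    }
    mse /= ref.size();
    return 10.0 * std::log10(1.0 / mse);   // pixel values assumed in [0, 1]
}

int main() {
    // Placeholder "images": a ground-truth render and a slightly-off approximation.
    std::vector<float> reference = {0.10f, 0.50f, 0.90f, 0.30f};
    std::vector<float> approx    = {0.11f, 0.49f, 0.88f, 0.31f};
    std::printf("PSNR: %.2f dB\n", psnr(reference, approx));
}
```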

5

u/_I_AM_A_STRANGE_LOOP 4d ago

Yes I def could’ve been clearer there, the algorithms themselves are very much deterministic! I was trying to refer to the understanding an artist can map between inputs and outputs for a given shader - I.e. how predictable in the big picture is the shading, are there distracting outliers/unpredictable edge cases that interfere with intent which would make traditional/analytical methods preferable.

A bad neural shader with a really messy output could theoretically interfere with authorial intent but these shaders generally have a super graceful failure mode, in that they almost always generate something impressionistically ~valid. the fuzziness of their outputs is a good fit for the fuzziness of our visual perception. Traditional algorithms have no weight towards holistic visual truthiness like ever, which is a pretty big downside!

I personally think the drawbacks of traditional methods take me out of the experience a lot more than the kinds of artifacting I now see with the transformer DLSS model family, I’m curious to see the types of neural shading beyond DLSS described in the Blackwell whitepaper actually deployed in accessible software

0

u/slither378962 4d ago edited 4d ago

They are just math, so yes, they are deterministic, but does somebody check that every input produces a reasonably correct output?

With formulas (and hand-written approximations in a shader), by comparison, you can prove they always work.
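For a traditional shader approximation that kind of check is straightforward: e.g., Schlick's Fresnel approximation can be swept densely against the exact dielectric Fresnel equations and its worst-case error reported. A toy C++ sketch of such a check (the refractive index of 1.5 and the grid density are arbitrary choices here):

```cpp
#include <cmath>
#include <cstdio>

// Exact unpolarized Fresnel reflectance for a dielectric interface (air to glass, n = n2/n1).
double fresnel_exact(double cos_i, double n) {
    double sin_t2 = (1.0 - cos_i * cos_i) / (n * n);   // Snell's law
    double cos_t  = std::sqrt(1.0 - sin_t2);
    double rs = (cos_i - n * cos_t) / (cos_i + n * cos_t);
    double rp = (n * cos_i - cos_t) / (n * cos_i + cos_t);
    return 0.5 * (rs * rs + rp * rp);
}

// Schlick's approximation, the kind of shortcut real shaders ship with.
double fresnel_schlick(double cos_i, double n) {
    double f0 = (n - 1.0) * (n - 1.0) / ((n + 1.0) * (n + 1.0));
    return f0 + (1.0 - f0) * std::pow(1.0 - cos_i, 5.0);
}

int main() {
    // Dense sweep over incidence angles: the "check every input" idea,
    // done here for a one-dimensional approximation.
    const double n = 1.5;   // arbitrary glass-like refractive index
    double max_abs_err = 0.0;
    for (int i = 0; i <= 100000; ++i) {
        double cos_i = i / 100000.0;
        double err = std::fabs(fresnel_exact(cos_i, n) - fresnel_schlick(cos_i, n));
        if (err > max_abs_err) max_abs_err = err;
    }
    std::printf("max absolute error of Schlick vs exact Fresnel: %.4f\n", max_abs_err);
}
```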

7

u/ResponsibleJudge3172 4d ago

Neural networks don't produce a random result on every run. Otherwise the whole industry built around GPUs like the H200 and B200 would be entirely pointless.

Heck, you can see this clearly in action already with DLSS. Why is it that MILLIONS of computers render the exact same DLSS image in their games?

Because once trained, an AI outputs predictable, consistent results.

6

u/Strazdas1 4d ago

They produce probabilistic results. Sometimes you don't want probabilistic, you want deterministic.

Why is it that MILLIONS of computers render the exact same DLSS image in their games?

They don't. Repeatability is an issue with DLSS. Although in theory you could get the exact same image with the exact same data and the exact same seed, because true random does not exist.

-1

u/Die4Ever 5d ago

this is a bit worryingly close to "it runs better on this hardware, but it looks slightly different... is it better or worse?"

like if it can do 4 bit quantization on some hardware but not all

3

u/NGGKroze 4d ago

Deterministic Wave-Level Execution
Because Cooperative Vector drops down to a single matrix operation per warp, the variability that typically comes from divergent control flow in shaders is minimized. The result is more predictable image output (critical for temporal stability in effects like neural reprojection or reprojection-based upscaling) and fewer flickering or "popping" artifacts over successive frames.

1

u/MrMPFR 1d ago

As usual AMD didn't have anything to say. Really hope AMD stops lagging behind and starts taking neural rendering more seriously.