Nvidia has launched the much-anticipated RTX 30 Series graphics cards - Here is what you need to know

The GeForce RTX 30 Series GPUs deliver up to 2x the performance and 1.9x the power efficiency of the previous-generation RTX 20 Series.
Note that's measured in Control:
[Image: Ampere_PPW.jpg]

That perf/watt metric is a bit iffy, as settings aren't mentioned, and they could be using DLSS, for which the Ampere GPUs should be much better positioned since they have double the tensor cores.

Still a really nice jump in performance this generation (though I do wonder how much of that is due to the large process-node improvement); hope AMD pulls off something similar.

To note: the 3070 is the 2080's successor, and so on; all the numbers moved down one, which is what they were supposed to be if you look at Nvidia's internal code numbers.

From Anandtech:
The immediate oddity here is that power efficiency is normally measured at a fixed level of power consumption, not a fixed level of performance. With power consumption of a transistor increasing at roughly the cube of the voltage, a “wider” part like Ampere with more functional blocks can clock itself at a much lower frequency to hit the same overall performance as Turing. In essence, this graph is comparing Turing at its worst to Ampere at its best, asking “what would it be like if we downclocked Ampere to be as slow as Turing” rather than “how much faster is Ampere than Turing under the same constraints”. In other words, NVIDIA’s graph is not presenting us with an apples-to-apples performance comparison at a specific power draw.


If you actually make a fixed wattage comparison, then Ampere doesn’t look quite as good in NVIDIA’s graph. Whereas Turing hits 60fps at 240W in this example, Ampere’s performance curve has it at roughly 90fps. Which to be sure, this is still a sizable improvement, but it’s only a 50% improvement in performance-per-watt. Ultimately the exact improvement in power efficiency is going to depend on where in the graph you sample, but it’s clear that NVIDIA’s power efficiency improvements with Ampere, as defined by more normal metrics, are not going to be 90% as NVIDIA’s slide claims.

All of which is reflected in the TDP ratings of the new RTX 30 series cards. The RTX 3090 draws a whopping 350 watts of power, and even the RTX 3080 pulls 320W. If we take NVIDIA’s performance claims at their word – that RTX 3080 offers up to 100% more performance than RTX 2080 – then that comes with a 49% hike in power consumption, for an effective increase in performance-per-watt of just 34%. And the comparison for the RTX 3090 is even harsher, with NVIDIA claiming a 50% performance increase for a 25% increase in power consumption, for a net power efficiency gain of just 20%.
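The arithmetic in that excerpt can be checked directly. A minimal sketch, using the claimed uplift figures from the quoted article (not measured values):

```python
def perf_per_watt_gain(perf_gain: float, power_gain: float) -> float:
    """Effective performance-per-watt improvement, given a relative
    performance increase and a relative power-consumption increase
    (both expressed as fractions, e.g. 1.00 for +100%)."""
    return (1 + perf_gain) / (1 + power_gain) - 1

# RTX 3080 vs RTX 2080: claimed +100% performance for +49% power (320W vs 215W)
print(round(perf_per_watt_gain(1.00, 0.49) * 100))  # -> 34 (% improvement)

# RTX 3090 vs RTX 2080 Ti: claimed +50% performance for +25% power (350W vs 280W)
print(round(perf_per_watt_gain(0.50, 0.25) * 100))  # -> 20 (% improvement)
```

Both results match the 34% and 20% figures in the excerpt, a long way short of the 1.9x headline number.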


This is also a pretty huge improvement:
[Image: 1599029330539.png]
Finally, with the launch of the RTX 30 series, NVIDIA is also announcing a new suite of I/O features that they’re calling RTX IO. At a high level this appears to be NVIDIA’s implementation of Microsoft’s forthcoming DirectStorage API, which like on the Xbox Series X console where it’s first launching, allows for direct, asynchronous asset streaming from storage to the GPU. By bypassing the CPU for much of this work, DirectStorage (and by extension RTX IO) can improve both I/O latency and throughput to the GPU by letting the GPU more directly fetch the resources it needs.


The most significant innovation here, besides Microsoft providing a standardized API for the technology, is that Ampere GPUs are capable of directly decompressing assets. Game assets are frequently compressed for storage purposes – lest Flight Simulator 2020 take up even more SSD space – and currently decompressing those assets to something the GPU can use is the job of the CPU. Offloading it from the CPU not only frees it up for other tasks, but ultimately it gets rid of a middleman entirely, which helps to improve asset streaming performance and game load times.

Note all of this is from anandtech: https://www.anandtech.com/show/1605...re-for-gaming-starting-with-rtx-3080-rtx-3090
 
I was kinda shocked at the performance for the price, if it's true.

If the 3080 really outperforms the 2080 Ti for $699...
 
I was kinda shocked at the performance for the price, if it's true.

If the 3080 really outperforms the 2080 Ti for $699...
You'll need to wait for benchmarks first, given the memory bus width difference. Still, way better price/performance.
 
Still rocking my GTX 1660 Ti for 1080p gaming; I'm all good for a few years. I feel bad for the people who bought an RTX 2080, etc.
 
The 3070 for $499 is the new card to buy.

Better than a 2080, for $499.

Although in good old RSA I don't see it coming in cheaper than R15k on launch.
 
I'm in the market for a 3080. Currently using a 1060, which has been fine thus far, but I'm gaming a bit more these days and it's struggling on my monitor. Can't wait.
 
How many times have you finished HL2?

:ROFL:
Funny you mention that, I just installed Half-Life 2 over the weekend :D It runs very smoothly, and so does Call of Duty 4: Modern Warfare, so I'm super happy at the moment.
 
The kicker is you'll probably need PCIe Gen 4 to utilise the full potential. Can anyone out there confirm?
 
Great news for the second-hand graphics card market. I picked up a GTX 1070 over the weekend for a steal, and I'm sure prices are just going to keep plummeting.

Did the same a while back.
Insane how old the 1070 is and it's still more than enough for my needs. I don't even use it much, but for the R2.6K I paid it was a steal.
I think most of the reason people are impressed with the perf/price of the new cards is that people are so used to getting shafted.
 
I'm in the market for a 3080. Currently using a 1060, which has been fine thus far, but I'm gaming a bit more these days and it's struggling on my monitor. Can't wait.
It's going to cost around R16K locally
 