Just as the title says (taken from DSOG), Nvidia now admits RTX sales are lower than expected. No shit, Sherlock! What, you're telling me only a handful of people bought the super expensive RTX GPUs? And that's somehow shocking? Not to mention, your stock has lost half of its value since the RTX launched.
For example, Shadow of the Tomb Raider has not yet received its RTX real-time ray tracing effects, and almost none of the games that NVIDIA claimed would support DLSS actually support it. Right now, only Battlefield 5 supports real-time ray tracing effects and only Final Fantasy XV supports DLSS. Oh, there is also the new path-traced version of Quake 2. Other than these games though, there is nothing currently on the market that can take advantage of both the real-time ray tracing effects and the DLSS tech.
And this doesn't help either, which is why I'm keeping my 1080Ti for a long time, until ray tracing becomes mainstream. They pushed their luck with the prices; I would have been all over the 2080Ti if it had the 1080Ti's launch price, but nope.
RTX 2080 Ti is the only one worth buying (way overpriced though) and that is only if you are pushing 3440x1440 high refresh rate or 4K/60fps or higher. Otherwise, the 10 series cards are much better value and make a lot more sense.
I am waiting for a huge discount on the RTX Titan. That card has 24GB of GDDR6 RAM; it's a monster of a video editing machine with all the bells and whistles. Compared to a Quadro it's cheaper, offers the same RAM size, and is faster. Quadro is essentially useless for VFX art, so the Titan RTX is the only RTX card that is interesting to me.
I sold my 2080 TI because it was just too much. I really wanted the power, but I just couldn't justify it after I thought about it for a while.
I knew going in that the ray tracing would be bullshit. It always takes a few generations for things like that to mature... that is, assuming they survive at all.
Expected. We're well into launch and Battlefield V is still the only ray-traced game we've got. Shadow of the Tomb Raider ray tracing is nowhere to be found. DLSS support is still confined to Final Fantasy. Not good, Nvidia!
I sold a 1080Ti for the price of an RTX 2080 and went back to 1440p. I've been happy with the decision so far. I'm out of the $1K+ PC component game, so the 2080 Ti and RTX Titan are a no-no. I spent the extra cash on RGB fans, an RGB mousepad, an RGB headset, and an RGB headset stand instead. Best $350 I've ever spent...
NVIDIA confirms DirectML support on Turing GPUs. https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf
@ronvalencia: What the hell does your response have to do with anything said in this thread?
When DirectML arrives and games start using the new rapid packed math API from MS, the gap between Turing and gaming Pascal GPUs will widen.
Atm, rapid packed math hardware features are accessed through vendor-specific APIs. That vendor-specific access will end with Microsoft's DirectML API standard.
I wish Nvidia would make non-RTX versions of the 2070, 2080, and 2080 Ti cards.
Even if every single game from this point forward used RTX, I still wouldn't enable it because I want high framerates. Low framerates, for reflections I can only notice in screenshot comparisons with it on and off, are not worth the extra GPU cost and performance hit.
Not advisable given Microsoft's DirectML road map. RTX is needed to match Microsoft's new PC DirectX API road map.
Adam Kozak has explained that AMD is experimenting with an evaluation version of the DirectML SDK and that the upcoming Radeon VII is “showing excellent results in that experiment.”
Because of the success of the GCN architecture, when it comes to compute-related workloads, the red team seems confident that it would be able to create some sort of super sampling effect using Microsoft’s own Windows-based machine learning code. And that could create a typically AMD open ecosystem for boosting the overall fidelity of our games without drastically impacting frame rate performance, and all without the same level of dedicated silicon that Nvidia is filling its Turing GPUs with.
This is why I still have my 980ti from years ago. This is good news for me though, because hopefully they release some wallet-friendly GPUs soon.
Edit:
I just looked up prices for the 1080ti and they average around $1000!??? Why the hell are these so damn expensive?
@ronvalencia: Source that the RE2 remake uses rapid packed math? Also, the 2080 Ti's performance there is 31% faster than the 1080 Ti; that's completely standard for the majority of games.
I think they just didn't sell a lot of people with their demo. Real-time ray tracing looks good, but it seems to be in the details, and it ups the price of already expensive cards (in the eyes of people who probably upgraded like a year ago) by like 2x? That's a steep ask. Meanwhile, the only thing clearly driving hardware purchases right now is people buying higher resolution monitors, because games aren't really becoming that much more demanding. So I can understand slower sales.
Nvidia has discontinued the GTX 1080Ti because they want everyone to go buy their super expensive RTX 2080Ti, and since 1080Ti production has stopped, the prices have increased. And given they cost over $1000, you might as well save $200 more and go buy a 2080Ti, which would be the smart choice.
Your 980Ti is still good; if you are only gaming in 1080p, there's no point in upgrading. And if you really want to upgrade, go buy the RTX 2060 for only $350; it's a good upgrade over the 980Ti.
Capcom has fundamentally changed Resident Evil 2, creating what the game would have been if it were created today, not what the original would look like with enhanced visuals, forging a game that will surpass the original for many. On PC we also get to see the game push beyond the other versions of the remake on a technological level, supporting advanced HBAO+ ambient occlusion, AMD's Rapid Packed Math acceleration tech, FP16 compute, and other graphical settings that can push past all of the game's console versions.
The DirectML API enables rapid packed math and machine learning instruction set hardware access to be uniform across multiple GPU vendors.
Turing CUDA has full rapid packed math in addition to Tensor matrix math cores. AMD has merged the machine learning instruction set into GCN's CUs.
Both NVIDIA and AMD are following Microsoft's DirectX12 evolution road maps.
The RTX 2080 Ti's tensor cores and rapid packed math feature set are enabled, which can overlap with workstation GPU cards; a similar argument applies for the VII vs the MI50/MI60.
@davillain-: Ahhhh, ok, that all makes sense. Yeah, I plan to stick with my 980ti for at least another year. It can still max out anything and get 60fps no problem, so I won't be upgrading until I go all-out 4K, when it makes more financial sense. Hoping within the next year or two we see some decently priced Nvidia products, because I always prefer them over AMD. Just right now they have their heads up their asses lol.
Ray-Tracing needs its Crysis. Also, PC gamers need to get over their pissing match with consoles about framerates and support games that push graphical envelopes, again. The issues with these RTX cards are as much cultural as they are technical.
I'm still rocking my RX570. I just bought a GTS450 in case my 570 flops lol (cost me $20); in the meantime, I'll order a 1070ti (they're around $500) when the 570 dies. I hope Nvidia learns their lesson; their pricing is ridiculous.
@Enragedhydra: I have a Vega64 and a 3440x1440 monitor, yes, most games play well above 60fps for me. Question for you. Did you ever play Crysis on an 8800?
That is literally the reason why they did so well during the 10-series period, and why those cards literally cost 50%+ more than MSRP.
Anyway, I sold all my Nvidia stock when it started diving, then bought an assload more a few days ago because they will come out on top. AMD won't, and Intel's GPUs are a ways away.
Also not sure if this thread is intended to bash PC, but it should be known that nvidia has products in a lot of different things, especially consoles.
No, I didn't play it; hell, I didn't even get Crysis until a few years ago. Since it appears you have a strong rig, you are obviously aware that 60 FPS should be the minimum we strive for. From the way you worded your post, it sounded as if 60 FPS isn't an important thing to strive for; yes, I treated you like a fanboy because you sort of came off as one, and for that I apologize. I do think FPS matters, and it is something consoles should strive for as well; they are making some strides in that regard. Clearly there is a difference between FPS and pushing graphics, and it only divides players even more; we will have people coming in saying 30 FPS is fine when really it's bottom-of-the-barrel playing. When I turn up some of my games and experience dips below 60 FPS, I can certainly tell the difference.
I think that is horse shit. JRPGs, puzzle games, graphic novels, and tons of other gaming genres aren't served in any way by anything other than smashing you with eye candy. I mean, you didn't qualify your argument with anything sensible. If you said that first-person shooters and fighting games should NEVER go below 60fps, then I would be totally with you. So I reiterate my original argument: PC gamers who insist on stupid frame rates for everything at 4K resolution (or higher) are directly responsible for the slowness with which graphical technology is advancing. Card makers have been throwing more ROPs at DX11 and DX12 for years now. All of a sudden, something new comes around and people bitch about tired-ass shit as if the technology couldn't or shouldn't get better as time goes forward. No excitement for innovation.
Most of the innovation happens at the PC level as well; indie games are a prime example of this, with multiple mods, maintained servers, etc. Seems PC gamers want 60 FPS and innovation already. Not sure what you're bitching about.
Right? Them showing off ray tracing and listing it as coming to many games got me excited for it. I hope Metro has it on launch, but I wouldn't be surprised if it didn't.
At least the card is easily more powerful than the 1080 Ti, so it wasn't all that bad of a purchase except for the price.
Overpriced card with a niche feature that cripples your fps. I'll wait for a GPU that can do 4K, ultra, 100fps. Meanwhile I'll be sticking with my GTX 1080.
I mean, there is certainly that, but I was speaking specifically about graphics technology. That's the part that is seemingly stagnant. I think we are reaching a point where giant leaps are simply not going to happen, just because art budgets are barely manageable in AAA development as it is. If gamers thumb their nose at every new tech because of performance, we're going to have DX12 and ROPs till the end of time. The raw horsepower demands of 4K alone seem to table innovation for the foreseeable future.
I guess, what I'm really bitching about is whether we're just going to settle for what we have because 8K monitors are going to break into the mainstream over the next 4-5 years. Nothing will change if we put the almighty frame-per-second over everything else for a very very long time.
DirectML doesn't exist yet outside of Microsoft R&D. RPM on AMD cards is used via their shader intrinsics, so I'm doubtful it's enabled on NVIDIA GPUs, especially considering benchmarks. AFAIK the only game to expose FP16 on NVIDIA Turing GPUs is Wolfenstein II, which is part of the abnormally large performance increase on Turing. There's currently no standard API way to utilize RPM on AMD and NVIDIA concurrently; it has to be done through each IHV's specific instructions outside of standard API calls.
Also not sure if this thread is intended to bash PC,
I honestly don't know how you came to that conclusion in the first place. This was to show that Nvidia were fools for increasing the prices of the RTX cards, and they know exactly why their new GPUs aren't flying off the shelves.
1. According to Microsoft, DirectML will use NVIDIA's Tensor hardware.
2. DirectML's metacommands expose hardware-specific optimizations. Effectively, Microsoft is building another "Xbox" on Windows PC with vendor-neutral API hardware access.
3. DirectML is expected to perform better than hand-written compute shaders! Shader Model 6 has a short life.
4. DirectML is coming with the next major Windows 10 update.
5. DirectML has been confirmed to run on Radeon VII.
Sources: https://www.guru3d.com/news-story/amd-could-do-dlss-alternative-with-radeon-vii-though-directml-api.html and https://wccftech.com/amd-radeon-vii-excellent-result-directml/
AMD: Radeon VII Has Excellent Results with DirectML; We Could Try a GPGPU Approach for Something NVIDIA DLSS-like
AMD is already working on DirectML SDK for Radeon VII which is outside Microsoft's R&D.
Again, AMD and NVIDIA are following Microsoft's DirectX12 evolution road map.
Turing has a heavy TFLOPS bias relative to its raster power, hence Turing more closely resembles AMD GCN than Pascal in that portions of the GPU aren't fully used. Multi-engine allows better use of both GPUs. Over time, Turing will probably age better than some previous Nvidia cards, unless Nvidia reduces support as it did with Kepler.
Honestly, you can scale most games now with negligible visual difference.
RE2 is a very good example of that. Med and max, get your microscope out.
because it's system wars and the original post focused explicitly on a video card for PC. I was just assuming you were trying to be clever and "backdoor" bash PC is all :D
The truth is there are many many many reasons Nvidia faltered; some were their own doing, some not.
Either way you look at it, though, they're still offering the best performing product, and also have a good range for people concerned with price.
RE2 still won't make use of FP16 on NVIDIA GPUs without Capcom modifying the code. When DML finally releases, it won't retrofit FP16 into existing games where it was used via AMD-specific extensions.
NVIDIA's delta color compression superiority has a similar effect on memory bandwidth conservation.
@ronvalencia: No it doesn't. The biggest bandwidth-related benefit of FP16 comes down to cache, something which delta color compression doesn't affect at all AFAIK.
Yes it does; FP16 reduces memory bandwidth by half per operation. AMD's delta color compression is inferior.
NVIDIA applies memory compression with its L2 cache.
By utilizing a large pattern library, NVIDIA is able to try different patterns to describe these deltas in as few pixels as possible, ultimately conserving bandwidth throughout the GPU, not only reducing DRAM bandwidth needs, but also L2 bandwidth needs and texture unit bandwidth needs (in the case of reading back a compressed render target).
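The quoted mechanism boils down to delta encoding: store the differences between neighboring pixels rather than the pixels themselves. A toy sketch of that general idea (this is an illustration only, not NVIDIA's actual pattern-library scheme):

```python
# Toy delta encoding of a scanline of pixel values. Neighboring pixels
# are usually similar, so the deltas are small and compress well.
def delta_encode(pixels):
    # Keep the first value, then store only the differences.
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def delta_decode(deltas):
    # Rebuild the original values by accumulating the differences.
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

scanline = [100, 101, 101, 102, 104, 104, 103]
deltas = delta_encode(scanline)
print(deltas)  # [100, 1, 0, 1, 2, 0, -1] -- mostly tiny values
assert delta_decode(deltas) == scanline
```

The real hardware schemes pick from a library of delta patterns per tile, but the bandwidth saving comes from the same property: small deltas need fewer bits than raw pixels.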
For example, NVIDIA doesn't need FP16's memory bandwidth saving tricks, but increasing the FLOPS rate with the double-rate FP16 feature is still beneficial.
AMD's FP16 usage, with inferior memory compression, benefits from:
1. Memory bandwidth savings
2. An increased FLOPS rate via rapid packed math (Vega IP).
NVIDIA's FP16 usage, with superior memory compression, benefits from:
1. An increased FLOPS rate via rapid packed math (Turing).
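The bandwidth-savings half of the argument is simple to see: an FP16 value is 2 bytes versus FP32's 4, so the same number of values moves half the data. A quick stdlib demonstration (`'e'` is the half-precision format in Python's `struct` module; the values are chosen to be exactly representable in FP16):

```python
import struct

# Pack the same four values as FP32 ('f', 4 bytes each) and as
# FP16 ('e', 2 bytes each): the FP16 buffer is half the size,
# which is the memory-traffic saving rapid packed math relies on
# in addition to executing two FP16 ops per FP32 lane.
values = [0.5, 1.25, -2.0, 3.75]

fp32_bytes = struct.pack(f"{len(values)}f", *values)
fp16_bytes = struct.pack(f"{len(values)}e", *values)

print(len(fp32_bytes))  # 16
print(len(fp16_bytes))  # 8 -- half the bytes for the same data
```

The trade-off, of course, is precision: FP16 only works where ~10 bits of mantissa are enough, which is why games apply it selectively.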
From Pascal's effective memory bandwidth gain with compression, one can work out the estimated performance gap between the VII, RTX 2080, and RTX 2080 Ti, e.g.
VII estimate: 62 percent effective memory bandwidth from theoretical memory bandwidth x 1.4X memory compression gain, based on Vega 64's memory behavior
(1TBps x 0.62) x 1.4 = 868 GB/s
---
RTX 2080: 73 percent effective memory bandwidth from theoretical memory bandwidth x 2.47X memory compression gain, based on Pascal's memory behavior
(448.0 GBps x 0.73) x 2.47 = 807.79 GB/s (incidentally, BW number is similar to GTX 1080 Ti)
Note why the VII lands around RTX 2080 level, in addition to the common 64 ROPs limit.
RTX 2080 Ti: 73 percent effective memory bandwidth from theoretical memory bandwidth x 2.47X memory compression gain, based on Pascal's memory behavior
(616.0 GBps x 0.73) x 2.47 = 1110.7096 GB/s
Based on the effective memory bandwidth factor, the RTX 2080 Ti has a ~28 percent advantage over the VII and RTX 2080.
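The arithmetic above can be checked with a short script. Note that the 0.62/0.73 efficiency factors and the 1.4x/2.47x compression gains are the poster's own estimates, not vendor-published figures:

```python
# Effective-bandwidth estimate: theoretical GB/s, scaled by an assumed
# achieved-bandwidth fraction and an assumed compression gain. All
# factors are the poster's estimates from observed Vega 64 / Pascal
# behavior, not official numbers.
def effective_bw(theoretical_gbps, efficiency, compression):
    return theoretical_gbps * efficiency * compression

vii       = effective_bw(1000.0, 0.62, 1.40)  # Radeon VII, 1 TB/s HBM2
rtx2080   = effective_bw(448.0,  0.73, 2.47)  # RTX 2080
rtx2080ti = effective_bw(616.0,  0.73, 2.47)  # RTX 2080 Ti

print(round(vii, 2))        # 868.0
print(round(rtx2080, 2))    # 807.79
print(round(rtx2080ti, 2))  # 1110.71
print(round(rtx2080ti / vii - 1, 3))  # 0.28, i.e. ~28% advantage
```

This reproduces the ~28 percent gap claimed above, but it is only as good as the assumed efficiency and compression factors.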