Nvidia takes the L as they admit RTX sales fell short of expectations!

#1 DaVillain  Moderator
Member since 2014 • 56226 Posts

Just as the title says, per DSOG, Nvidia now admits RTX sales are lower than expected. No shit, Sherlock! What, you're telling me only a handful of people bought the super-expensive RTX GPUs? And that's somehow shocking? Not to mention, your stock has lost half of its value since the RTX launch.

For example, Shadow of the Tomb Raider still hasn't received its RTX real-time ray tracing effects, and almost none of the games NVIDIA claimed would support DLSS actually support it. Right now, only Battlefield 5 supports real-time ray tracing effects and only Final Fantasy XV supports DLSS. Oh, there is also the new path-traced version of Quake 2. Other than these games, though, there is nothing currently on the market that can take advantage of both the real-time ray tracing effects and the DLSS tech.

This doesn't help either, which is why I'm keeping my 1080 Ti for a long time, until ray tracing becomes mainstream. They pushed their luck with the prices; I would have been all over the 2080 Ti if it had the 1080 Ti's launch price, but nope.

#2 R4gn4r0k
Member since 2004 • 46444 Posts

As soon as I saw how disappointing the RTX announcement was, I went out and bought a 1080 Ti.

#3  Edited By BassMan
Member since 2002 • 17835 Posts

RTX 2080 Ti is the only one worth buying (way overpriced though) and that is only if you are pushing 3440x1440 high refresh rate or 4K/60fps or higher. Otherwise, the 10 series cards are much better value and make a lot more sense.

#4 GarGx1
Member since 2011 • 10934 Posts

Keeping my GTX 1080's at least until the 3080 is out. No point waiting on AMD though with top end Navi looking like it'll match 1080 performance.

#5 cdragon_88
Member since 2003 • 1841 Posts

Why would I buy it when ray tracing is its most attractive feature, but enabling ray tracing makes my fps go to shit?

#6 deactivated-5ebd39d683340
Member since 2005 • 4089 Posts

I am waiting for a huge discount on the RTX Titan. That card has 24GB of GDDR6 RAM; it's a monster of a video editing machine with all the bells and whistles. Compared to the Quadro, it's cheaper, offers the same RAM size, and is faster. The Quadro is essentially useless for VFX work, so the Titan RTX is the only RTX card that interests me.

#7 xantufrog  Moderator
Member since 2013 • 17875 Posts

@GarGx1: Leaked benches show the Radeon VII beating the 2080 at 4K, but we'll have to see if that holds up.

#8  Edited By schu
Member since 2003 • 10191 Posts

I sold my 2080 TI because it was just too much. I really wanted the power, but I just couldn't justify it after I thought about it for a while.

I knew going in that the ray tracing would be bullshit. It always takes a few generations for things like that to mature... and that's assuming they survive at all.

#9 deactivated-5d78760d7d740
Member since 2009 • 16386 Posts

Expected. We're well into the launch and Battlefield V is still the only ray-traced game we've got. Shadow of the Tomb Raider ray tracing is nowhere to be found, and DLSS support is still confined to Final Fantasy. Not good, Nvidia!

#10 with_teeth26
Member since 2007 • 11511 Posts

they made sticking with my 1080 a pretty easy decision

we'll see what the next gen of GPUs bring

#11  Edited By sakaiXx
Member since 2013 • 15947 Posts

Personally, I think the 2060 is a good buy, especially since it can easily compete with a 1070 Ti, but the higher-end RTX cards are a waste of money.

#12  Edited By deactivated-63d2876fd4204
Member since 2016 • 9129 Posts

I sold a 1080 Ti for the price of an RTX 2080 and went back to 1440p. I've been happy with the decision so far. I'm out of the $1K+ PC component game, so the 2080 Ti and RTX Titan are a no-no. I spent the extra cash on RGB fans, an RGB mousepad, an RGB headset, and an RGB headset stand instead. Best $350 I've ever spent...

#13 schu
Member since 2003 • 10191 Posts

@XVision84 said:

Expected. We're well into the launch and Battlefield V is still the only ray-traced game we've got. Shadow of the Tomb Raider ray tracing is nowhere to be found, and DLSS support is still confined to Final Fantasy. Not good, Nvidia!

Not to mention where is the DLSS support? lul

#14 Blackhairedhero
Member since 2018 • 3231 Posts

Creating proprietary technologies like HairWorks, RTX ray tracing, etc., then making sure your high-end GPUs are the only ones that run them.

Thanks, Nvidia!

#15 JasonOfA36
Member since 2016 • 3725 Posts

I'm impressed at how Nvidia still won't blame themselves.

#16  Edited By ronvalencia
Member since 2008 • 29612 Posts

@davillain- said:

Just as the title says, per DSOG, Nvidia now admits RTX sales are lower than expected. No shit, Sherlock! What, you're telling me only a handful of people bought the super-expensive RTX GPUs? And that's somehow shocking? Not to mention, your stock has lost half of its value since the RTX launch.

For example, Shadow of the Tomb Raider still hasn't received its RTX real-time ray tracing effects, and almost none of the games NVIDIA claimed would support DLSS actually support it. Right now, only Battlefield 5 supports real-time ray tracing effects and only Final Fantasy XV supports DLSS. Oh, there is also the new path-traced version of Quake 2. Other than these games, though, there is nothing currently on the market that can take advantage of both the real-time ray tracing effects and the DLSS tech.

This doesn't help either, which is why I'm keeping my 1080 Ti for a long time, until ray tracing becomes mainstream. They pushed their luck with the prices; I would have been all over the 2080 Ti if it had the 1080 Ti's launch price, but nope.

https://www.techspot.com/review/1784-resident-evil-2-benchmarks/

Resident Evil 2 Remake supports Rapid Packed Math.

------

NVIDIA confirms DirectML support on Turing GPUs. https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf

------

https://www.pcgamesn.com/amd/amd-gpu-nvidia-dlss-radeon-vii

AMD: an “Nvidia DLSS-like thing can be done with our GPU”

https://www.overclock3d.net/news/gpu_displays/amd_s_radeon_vii_supports_directml_-_an_alternative_to_dlss/1

AMD confirms the Radeon VII will support DirectML (DirectX's machine learning API), Microsoft's alternative to Nvidia's DLSS technology.

#17 deactivated-63d2876fd4204
Member since 2016 • 9129 Posts

@ronvalencia: What the hell does your response have to do with anything said in this thread?

#19  Edited By ronvalencia
Member since 2008 • 29612 Posts

@goldenelementxl said:

@ronvalencia: What the hell does your response have to do with anything said in this thread?

When DirectML arrives and games use the new rapid packed math API from MS, the gap between Turing and gaming Pascal GPUs will widen.

Atm, rapid packed math hardware features are accessed through vendor-specific APIs. That vendor-specific access ends with Microsoft's DirectML API standard.

Skylake-X's 512-bit AVX = OK

Turing with ML hardware = OK

Both AMD and NVIDIA follow MS's guidance.
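
The rapid packed math idea above boils down to trading precision for throughput: two 16-bit floats ride in the slot of one 32-bit float, so the ALU can do two ops per cycle where it did one. A rough CPU-side sketch of both halves of that trade-off, using Python's stdlib half-precision `struct` format (illustrative only, not DirectML or any vendor's GPU code):

```python
import struct

# Conceptual sketch of the fp16 trade-off behind "rapid packed math".
# This is plain Python, not GPU code: struct's 'e' format gives us
# IEEE-754 half precision to demonstrate the precision cost, and byte
# counts demonstrate the packing that doubles arithmetic throughput.

def to_fp16(x: float) -> float:
    """Round-trip a float through IEEE-754 half precision ('e' format)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Precision cost: near 1.0 the fp16 grid spacing is 2**-10 (~0.00098),
# so adding 1e-4 is simply rounded away.
assert to_fp16(1.0 + 1e-4) == 1.0
assert 1.0 + 1e-4 != 1.0  # ordinary double precision keeps the term

# Packing benefit: two fp16 values fit exactly where one fp32 value goes,
# which is why the hardware can process them as a pair per cycle.
packed_pair = struct.pack('<2e', 1.5, -2.25)
single_fp32 = struct.pack('<f', 1.5)
assert len(packed_pair) == len(single_fp32) == 4
```

This is why FP16 is fine for lighting and post-processing math that tolerates ~3 decimal digits, but not for everything.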

#20  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Yams1980 said:

I wish nvidia would make non-RTX versions of the 2070, 2080 and 2080 Ti cards.

Even if every single game from this point forward used RTX, I still wouldn't enable it, because I want a high framerate. Low framerates for reflections I can only notice in screenshot comparisons with it on and off are not worth the extra GPU cost and performance hit.

Not advisable given Microsoft's DirectML roadmap. RTX-class hardware is needed to match Microsoft's new PC DirectX API roadmap.

https://www.pcgamesn.com/amd/amd-gpu-nvidia-dlss-radeon-vii

Adam Kozak has explained that AMD is experimenting with an evaluation version of the DirectML SDK and that the upcoming Radeon VII is “showing excellent results in that experiment.”

Because of the success of the GCN architecture, when it comes to compute-related workloads, the red team seems confident that it would be able to create some sort of super sampling effect using Microsoft’s own Windows-based machine learning code. And that could create a typically AMD open ecosystem for boosting the overall fidelity of our games without drastically impacting frame rate performance, and all without the same level of dedicated silicon that Nvidia is filling its Turing GPUs with.
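
For context on what a DLSS-like super-sampling pass has to beat: the naive baseline is rendering at lower resolution and upscaling with a dumb filter. A toy nearest-neighbour upscaler makes that baseline concrete (`upscale_nearest` is a made-up illustrative helper in pure Python, not AMD, Nvidia, or DirectML code; a learned super-sampler replaces exactly this step with a trained network):

```python
# Toy baseline for DLSS-style reconstruction: render low-res, then upscale.
# Nearest-neighbour is the crudest possible filter -- every source pixel is
# simply duplicated into a factor x factor block, which is why it looks
# blocky and why a learned upscaler can do so much better.

def upscale_nearest(img, factor):
    """Upscale a 2D grid (list of rows of pixel values) by an integer factor."""
    out = []
    for row in img:
        # widen the row: each pixel repeated `factor` times horizontally
        wide = [px for px in row for _ in range(factor)]
        # then repeat the widened row `factor` times vertically
        out.extend([list(wide) for _ in range(factor)])
    return out

low_res = [[1, 2],
           [3, 4]]
high_res = upscale_nearest(low_res, 2)
assert high_res == [[1, 1, 2, 2],
                    [1, 1, 2, 2],
                    [3, 3, 4, 4],
                    [3, 3, 4, 4]]
```

Whether the replacement filter is a tensor-core network (DLSS) or a compute-shader model (the DirectML route AMD describes), the pipeline position is the same: low-res frame in, reconstructed high-res frame out.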

#21 JasonOfA36
Member since 2016 • 3725 Posts

@Yams1980: They wouldn't, as it would put a price on RTX.

#22  Edited By Vaidream45
Member since 2016 • 2116 Posts

This is why I still have my 980 Ti from years ago. This is good news for me, though, because hopefully they'll release some wallet-friendly GPUs soon.

Edit:

I just looked up prices for the 1080 Ti and they average around $1,000!? Why the hell are these so damn expensive?

#23  Edited By m3dude1
Member since 2007 • 2334 Posts

@ronvalencia: Source that the RE2 remake uses Rapid Packed Math? Also, the 2080 Ti's performance there is 31% faster than the 1080 Ti; that's completely standard with the majority of games.

#24  Edited By KungfuKitten
Member since 2006 • 27389 Posts

I think they just didn't sell a lot of people with their demo. Real-time ray tracing looks good, but the gains seem to be in the details, and it ups the price of already expensive cards (in the eyes of people who probably upgraded like a year ago) by like 2x? That's a steep ask. Meanwhile, the only clear thing driving hardware purchases right now, as far as I can tell, is people buying higher-resolution monitors, because games aren't really becoming that much more demanding. So I can understand slower sales.

#25 djoffer
Member since 2007 • 1856 Posts

So you're saying people weren't willing to pay $1,500 for a small upgrade to their existing card? Shocker...

#26 DaVillain  Moderator
Member since 2014 • 56226 Posts

@ronvalencia: uh okay.

@vaidream45 said:

This is why I still have my 980 Ti from years ago. This is good news for me, though, because hopefully they'll release some wallet-friendly GPUs soon.

Edit:

I just looked up prices for the 1080 Ti and they average around $1,000!? Why the hell are these so damn expensive?

Nvidia has discontinued the GTX 1080 Ti because they want everyone to go buy their super-expensive RTX 2080 Ti, and since 1080 Ti production has stopped, prices have increased. And given they now cost over $1,000, you might as well save up $200 more and buy a 2080 Ti, which would be the smarter choice.

Your 980 Ti is still good. If you're only gaming at 1080p there's no point in upgrading, and if you really want to upgrade, the RTX 2060 at $350 is a good step up from the 980 Ti.

#27 uninspiredcup
Member since 2013 • 59086 Posts

Very happy with a mid-range card, always have been. Paying ridiculous amounts for an incremental improvement is an indulgence.

#28  Edited By ronvalencia
Member since 2008 • 29612 Posts

@m3dude1 said:

@ronvalencia: Source that the RE2 remake uses Rapid Packed Math? Also, the 2080 Ti's performance there is 31% faster than the 1080 Ti; that's completely standard with the majority of games.

https://www.overclock3d.net/reviews/software/resident_evil_2_remake_pc_performance_review/13

Capcom has fundamentally changed Resident Evil 2, creating what the game would have been if it were created today, not what the original would look like with enhanced visuals, forging a game that will surpass the original for many. On PC we also get to see the game push beyond the other versions of the remake on a technological level, supporting advanced HBAO+ ambient occlusion, AMD's Rapid Packed Math acceleration tech, FP16 compute and other graphical settings that can push past all of the game's console version.

The DirectML API makes Rapid Packed Math and machine-learning instruction set hardware access uniform across multiple GPU vendors.

Turing's CUDA cores have full Rapid Packed Math support in addition to Tensor matrix math cores. AMD has merged its machine-learning instruction set into GCN's CUs.

Both NVIDIA and AMD are following Microsoft's DirectX 12 evolution roadmap.

The RTX 2080 Ti's tensor cores and Rapid Packed Math feature set are enabled, which overlaps with workstation GPU cards; a similar argument applies to the Radeon VII vs the MI50/MI60.

#29 Vaidream45
Member since 2016 • 2116 Posts

@davillain-: Ahhhh, ok, that all makes sense. Yeah, I plan to stick with my 980 Ti for at least another year. It can still max out anything and get 60fps no problem, so I won't be upgrading until I go all-out 4K, when it makes more financial sense. Hoping within the next year or two we see some decently priced Nvidia products, because I always prefer them over AMD. Right now they just have their heads up their asses lol.

#30 Shewgenja
Member since 2009 • 21456 Posts

Ray-Tracing needs its Crysis. Also, PC gamers need to get over their pissing match with consoles about framerates and support games that push graphical envelopes, again. The issues with these RTX cards are as much cultural as they are technical.

#31 Enragedhydra
Member since 2005 • 1085 Posts

I'm still rocking my RX 570, and I just bought a GTS 450 (cost me $20) in case my 570 flops, lol. In the meantime, I'll order a 1070 Ti, they're around $500, when the 570 dies. I hope Nvidia learns their lesson; their pricing is ridiculous.

#32  Edited By Enragedhydra
Member since 2005 • 1085 Posts

@Shewgenja: Have you played anything at 60 fps or above? I'm pretty sure Shadow of the Tomb Raider does push this card.

#33 Shewgenja
Member since 2009 • 21456 Posts

@Enragedhydra: I have a Vega64 and a 3440x1440 monitor, yes, most games play well above 60fps for me. Question for you. Did you ever play Crysis on an 8800?

#34  Edited By mrbojangles25
Member since 2005 • 58398 Posts

Nvidia didn't fail, the mining craze just died.

That is literally the reason they did so well during the 10__ time period, and why those cards cost 50%+ more than MSRP.

Anyway, I sold all my Nvidia stock when it started diving, then bought an assload more a few days ago because they will come out on top. AMD won't, and Intel's GPUs are a ways away.

Also not sure if this thread is intended to bash PC, but it should be known that Nvidia has products in a lot of different markets, especially consoles.

#35 Enragedhydra
Member since 2005 • 1085 Posts

@Shewgenja said:

@Enragedhydra: I have a Vega64 and a 3440x1440 monitor, yes, most games play well above 60fps for me. Question for you. Did you ever play Crysis on an 8800?

No, I didn't play it; hell, I didn't even get Crysis until a few years ago. Since it appears you have a strong rig, you're obviously aware that 60 FPS should be the minimum we strive for. The way you worded your post made it sound as if 60 FPS isn't an important target; yes, I treated you like a fanboy because you sort of came off as one, and for that I apologize. I do think FPS matters and it's something consoles should strive for as well, and they are making some strides in that regard. Pushing graphics over FPS only divides players even more; we'll have people coming in saying 30 FPS is fine when really it's bottom-of-the-barrel playing. When I turn up settings in some of my games and experience dips below 60 FPS, I can certainly tell the difference.

#36 Shewgenja
Member since 2009 • 21456 Posts

@Enragedhydra said:
@Shewgenja said:

@Enragedhydra: I have a Vega64 and a 3440x1440 monitor, yes, most games play well above 60fps for me. Question for you. Did you ever play Crysis on an 8800?

No, I didn't play it; hell, I didn't even get Crysis until a few years ago. Since it appears you have a strong rig, you're obviously aware that 60 FPS should be the minimum we strive for. The way you worded your post made it sound as if 60 FPS isn't an important target; yes, I treated you like a fanboy because you sort of came off as one, and for that I apologize. I do think FPS matters and it's something consoles should strive for as well, and they are making some strides in that regard. Pushing graphics over FPS only divides players even more; we'll have people coming in saying 30 FPS is fine when really it's bottom-of-the-barrel playing. When I turn up settings in some of my games and experience dips below 60 FPS, I can certainly tell the difference.

I think that is horse shit. JRPGs, puzzle games, graphic novels, and tons of other gaming genres aren't served in any way by trying to do anything other than smash you with eye-candy. I mean, you didn't qualify your argument with anything sensible. If you said that first-person shooters and fighting games should NEVER go below 60fps, then I would be totally with you. So, I reiterate my original argument: PC gamers who insist on stupid frame rates for everything at 4K resolution (or higher) are directly responsible for the slowness with which graphical technology is advancing. Card makers have been throwing more ROPs at DX11 and DX12 for years now. All of a sudden, something new comes around and people bitch about tired-ass shit as if the technology couldn't or shouldn't get better as time goes forward. No excitement for innovation.

#37 Enragedhydra
Member since 2005 • 1085 Posts

@Shewgenja said:
@Enragedhydra said:
@Shewgenja said:

@Enragedhydra: I have a Vega64 and a 3440x1440 monitor, yes, most games play well above 60fps for me. Question for you. Did you ever play Crysis on an 8800?

No, I didn't play it; hell, I didn't even get Crysis until a few years ago. Since it appears you have a strong rig, you're obviously aware that 60 FPS should be the minimum we strive for. The way you worded your post made it sound as if 60 FPS isn't an important target; yes, I treated you like a fanboy because you sort of came off as one, and for that I apologize. I do think FPS matters and it's something consoles should strive for as well, and they are making some strides in that regard. Pushing graphics over FPS only divides players even more; we'll have people coming in saying 30 FPS is fine when really it's bottom-of-the-barrel playing. When I turn up settings in some of my games and experience dips below 60 FPS, I can certainly tell the difference.

I think that is horse shit. JRPGs, puzzle games, graphic novels, and tons of other gaming genres aren't served in any way by trying to do anything other than smash you with eye-candy. I mean, you didn't qualify your argument with anything sensible. If you said that first-person shooters and fighting games should NEVER go below 60fps, then I would be totally with you. So, I reiterate my original argument: PC gamers who insist on stupid frame rates for everything at 4K resolution (or higher) are directly responsible for the slowness with which graphical technology is advancing. Card makers have been throwing more ROPs at DX11 and DX12 for years now. All of a sudden, something new comes around and people bitch about tired-ass shit as if the technology couldn't or shouldn't get better as time goes forward. No excitement for innovation.

Most of the innovation happens at the PC level as well; indie games are a prime example of this, along with mods, community-maintained servers, etc. Seems PC gamers want 60 FPS and innovation already. Not sure what you're bitching about.

#38 DragonfireXZ95
Member since 2005 • 26649 Posts
@XVision84 said:

Expected. We're well into the launch and Battlefield V is still the only ray-traced game we've got. Shadow of the Tomb Raider ray tracing is nowhere to be found, and DLSS support is still confined to Final Fantasy. Not good, Nvidia!

Right? Them showing off ray tracing and listing it as coming to many games got me excited for it. I hope Metro has it on launch, but I wouldn't be surprised if it didn't.

At least the card is easily more powerful than the 1080 Ti, so it wasn't all that bad of a purchase except for the price.

#39  Edited By ocinom
Member since 2008 • 1385 Posts

An overpriced card with a niche feature that cripples your fps. I'll wait for a GPU that can do 4K, ultra, 100fps. Meanwhile, I'll be sticking with my GTX 1080.

#40 Shewgenja
Member since 2009 • 21456 Posts

@Enragedhydra said:

Most of the innovation happens at the PC level as well; indie games are a prime example of this, along with mods, community-maintained servers, etc. Seems PC gamers want 60 FPS and innovation already. Not sure what you're bitching about.

I mean, there is certainly that, but I was speaking specifically about graphics technology. That's the part that seems stagnant. I think we are reaching a point where giant leaps are simply not going to happen, if only because art budgets are barely manageable in AAA development as it is. If gamers thumb their nose at every new tech because of performance, we're going to have DX12 and ROPs until the end of time. The raw horsepower demands of 4K alone seem to table innovation for the foreseeable future.

I guess what I'm really bitching about is whether we're just going to settle for what we have as 8K monitors break into the mainstream over the next 4-5 years. Nothing will change if we put the almighty frames-per-second over everything else for a very, very long time.

#41  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Enragedhydra said:

@Shewgenja: Have you played anything at 60 fps or above? I'm pretty sure Shadow of the Tomb Raider does push this card.

From https://wccftech.com/review/msi-geforce-rtx-2080-ti-and-rtx-2080-gaming-x-trio-review/5/

I have a FreeSync monitor to handle sub-60 fps targets.

#42  Edited By m3dude1
Member since 2007 • 2334 Posts

@ronvalencia said:
@m3dude1 said:

@ronvalencia: Source that the RE2 remake uses Rapid Packed Math? Also, the 2080 Ti's performance there is 31% faster than the 1080 Ti; that's completely standard with the majority of games.

https://www.overclock3d.net/reviews/software/resident_evil_2_remake_pc_performance_review/13

Capcom has fundamentally changed Resident Evil 2, creating what the game would have been if it were created today, not what the original would look like with enhanced visuals, forging a game that will surpass the original for many. On PC we also get to see the game push beyond the other versions of the remake on a technological level, supporting advanced HBAO+ ambient occlusion, AMD's Rapid Packed Math acceleration tech, FP16 compute and other graphical settings that can push past all of the game's console version.

The DirectML API makes Rapid Packed Math and machine-learning instruction set hardware access uniform across multiple GPU vendors.

Turing's CUDA cores have full Rapid Packed Math support in addition to Tensor matrix math cores. AMD has merged its machine-learning instruction set into GCN's CUs.

Both NVIDIA and AMD are following Microsoft's DirectX 12 evolution roadmap.

The RTX 2080 Ti's tensor cores and Rapid Packed Math feature set are enabled, which overlaps with workstation GPU cards; a similar argument applies to the Radeon VII vs the MI50/MI60.

DirectML doesn't exist yet outside of Microsoft R&D. RPM on AMD cards is used via their shader intrinsics, so I'm doubtful it's enabled on NVIDIA GPUs, especially considering the benchmarks. AFAIK the only game to expose FP16 on NVIDIA Turing GPUs is Wolfenstein 2, which accounts for part of the abnormally large performance increase on Turing. There's currently no standard API way to utilize RPM on AMD and NVIDIA concurrently; it has to be done through each IHV's specific instructions, outside of standard API calls.

#43 DaVillain  Moderator
Member since 2014 • 56226 Posts

@mrbojangles25 said:

Also not sure if this thread is intended to bash PC,

I honestly don't know how you came to that conclusion in the first place. This was to show how Nvidia were fools to increase RTX prices, and they know exactly why their new GPUs aren't flying off the shelves.

#44  Edited By ronvalencia
Member since 2008 • 29612 Posts

@m3dude1 said:
@ronvalencia said:
@m3dude1 said:

@ronvalencia: Source that the RE2 remake uses Rapid Packed Math? Also, the 2080 Ti's performance there is 31% faster than the 1080 Ti; that's completely standard with the majority of games.

https://www.overclock3d.net/reviews/software/resident_evil_2_remake_pc_performance_review/13

Capcom has fundamentally changed Resident Evil 2, creating what the game would have been if it were created today, not what the original would look like with enhanced visuals, forging a game that will surpass the original for many. On PC we also get to see the game push beyond the other versions of the remake on a technological level, supporting advanced HBAO+ ambient occlusion, AMD's Rapid Packed Math acceleration tech, FP16 compute and other graphical settings that can push past all of the game's console version.

The DirectML API makes Rapid Packed Math and machine-learning instruction set hardware access uniform across multiple GPU vendors.

Turing's CUDA cores have full Rapid Packed Math support in addition to Tensor matrix math cores. AMD has merged its machine-learning instruction set into GCN's CUs.

Both NVIDIA and AMD are following Microsoft's DirectX 12 evolution roadmap.

The RTX 2080 Ti's tensor cores and Rapid Packed Math feature set are enabled, which overlaps with workstation GPU cards; a similar argument applies to the Radeon VII vs the MI50/MI60.

DirectML doesn't exist yet outside of Microsoft R&D. RPM on AMD cards is used via their shader intrinsics, so I'm doubtful it's enabled on NVIDIA GPUs, especially considering the benchmarks. AFAIK the only game to expose FP16 on NVIDIA Turing GPUs is Wolfenstein 2, which accounts for part of the abnormally large performance increase on Turing. There's currently no standard API way to utilize RPM on AMD and NVIDIA concurrently; it has to be done through each IHV's specific instructions, outside of standard API calls.

1. According to Microsoft, DirectML will use NVIDIA's Tensor hardware.

2. DirectML's metacommands expose hardware-specific optimizations. Effectively, Microsoft is building another "Xbox" on the Windows PC, with vendor-neutral API hardware access.

3. DirectML is expected to perform better than hand-written compute shaders! Shader Model 6 has a short life.

4. DirectML is coming with the next major Windows 10 update.

5. DirectML has been confirmed to run on Radeon VII.

Reference

1,2,3,4, http://on-demand.gputechconf.com/siggraph/2018/video/sig1814-2-adrian-tsai-gpu-inferencing-directml-and-directx-12.html

5, https://www.guru3d.com/news-story/amd-could-do-dlss-alternative-with-radeon-vii-though-directml-api.html and https://wccftech.com/amd-radeon-vii-excellent-result-directml/

AMD: Radeon VII Has Excellent Results with DirectML; We Could Try a GPGPU Approach for Something NVIDIA DLSS-like

AMD is already working with an evaluation version of the DirectML SDK for the Radeon VII, which is outside Microsoft's R&D.

Again, AMD and NVIDIA are following Microsoft's DirectX 12 evolution roadmap.

Turing has a heavy TFLOPS bias relative to its raster power, hence Turing more closely resembles AMD GCN than Pascal in that portions of the GPU aren't fully used. Multi-engine lets games make better use of both GPU designs. Over time, Turing will probably age better than some previous Nvidia cards, unless Nvidia reduces support as it did with Kepler.

Avatar image for uninspiredcup
uninspiredcup

59086

Forum Posts

0

Wiki Points

0

Followers

Reviews: 86

User Lists: 2

#45 uninspiredcup
Member since 2013 • 59086 Posts

@ocinom said:

overpriced card with a niche feature that cripples your fps. I'll wait for a gpu that can do 4k, ultra, 100fps. Meanwhile I'll be sticking with my gtx1080

Honestly, with most games now you can scale the settings down with negligible visual difference.

RE2 is a very good example of that. Medium vs max: get your microscope out.

Avatar image for mrbojangles25
mrbojangles25

58398

Forum Posts

0

Wiki Points

0

Followers

Reviews: 11

User Lists: 0

#46 mrbojangles25
Member since 2005 • 58398 Posts

@davillain- said:
@mrbojangles25 said:

Also not sure if this thread is intended to bash PC,

I honestly don't know how you came to this conclusion in the first place. This was to show how foolish Nvidia were to increase RTX prices, and that they know exactly why their new GPUs aren't flying off the shelves.

Because it's System Wars and the original post focused explicitly on a video card for PC. I was just assuming you were trying to be clever and "backdoor" bash PC is all :D

The truth is there are many many many reasons Nvidia faltered; some were their own doing, some not.

Either way you look at it, though, they're still offering the best performing product, and also have a good range for people concerned with price.

Avatar image for m3dude1
m3dude1

2334

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#47 m3dude1
Member since 2007 • 2334 Posts

@ronvalencia said:
@m3dude1 said:
@ronvalencia said:
@m3dude1 said:

@ronvalencia: Source that the RE2 remake uses rapid packed math? Also, the 2080 Ti's performance there is 31% faster than the 1080 Ti; that's completely standard across the majority of games.

https://www.overclock3d.net/reviews/software/resident_evil_2_remake_pc_performance_review/13

Capcom has fundamentally changed Resident Evil 2, creating what the game would have been if it were created today, not what the original would look like with enhanced visuals, forging a game that will surpass the original for many. On PC we also get to see the game push beyond the other versions of the remake on a technological level, supporting advanced HBAO+ ambient occlusion, AMD's Rapid Packed Math acceleration tech, FP16 compute and other graphical settings that can push past all of the game's console version.

The DirectML API enables rapid packed math and machine-learning instruction set hardware access to be uniform across multiple GPU vendors.

Turing's CUDA cores have full rapid packed math in addition to the Tensor matrix-math cores. AMD has merged its machine-learning instruction set into GCN's CUs.

Both NVIDIA and AMD are following Microsoft's DirectX 12 evolution road map.

The RTX 2080 Ti's tensor core and rapid packed math feature set is enabled, which overlaps with workstation GPU cards; a similar argument applies to the VII vs the MI50/MI60.

DirectML doesn't exist yet outside of Microsoft R&D. RPM on AMD cards is used via their shader intrinsics, so I'm doubtful it's enabled on NVIDIA GPUs, especially considering benchmarks. AFAIK the only game to expose FP16 on NVIDIA Turing GPUs is Wolfenstein II, which is part of the abnormally large performance increase on Turing. There's currently no standard API way to utilize RPM on AMD and NVIDIA concurrently; it has to be done through each IHV's specific instructions outside of standard API calls.

1. According to Microsoft, DirectML will use NVIDIA's Tensor hardware.

2. DirectML's metacommands expose hardware-specific optimizations. Effectively, Microsoft is building another "Xbox" on Windows PC with vendor-neutral API hardware access.

3. DirectML is claimed to perform better than hand-written compute shaders! Shader Model 6 has a short life.

4. DirectML is coming with the next major Windows 10 update.

5. DirectML has been confirmed to run on Radeon VII.

Reference

1,2,3,4, http://on-demand.gputechconf.com/siggraph/2018/video/sig1814-2-adrian-tsai-gpu-inferencing-directml-and-directx-12.html

5, https://www.guru3d.com/news-story/amd-could-do-dlss-alternative-with-radeon-vii-though-directml-api.html and https://wccftech.com/amd-radeon-vii-excellent-result-directml/

AMD: Radeon VII Has Excellent Results with DirectML; We Could Try a GPGPU Approach for Something NVIDIA DLSS-like

AMD is already working on DirectML SDK for Radeon VII which is outside Microsoft's R&D.

Again, AMD and NVIDIA are following Microsoft's DirectX 12 evolution road map.

Turing has a heavy TFLOPS bias relative to its raster power, hence Turing more closely resembles AMD's GCN than Pascal in that portions of the GPU aren't fully used. Multi-engine (async compute) allows better utilization of both GPU architectures. Over time, Turing will probably age better than some previous NVIDIA cards, unless NVIDIA reduces support as it did with Kepler.

RE2 still won't make use of FP16 on NVIDIA GPUs without Capcom modifying the code. When DML finally releases, it won't retrofit FP16 into existing games where it was used via AMD-specific extensions.

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#48 ronvalencia
Member since 2008 • 29612 Posts

@m3dude1 said:
@ronvalencia said:
@m3dude1 said:
@ronvalencia said:
@m3dude1 said:

@ronvalencia: Source that the RE2 remake uses rapid packed math? Also, the 2080 Ti's performance there is 31% faster than the 1080 Ti; that's completely standard across the majority of games.

https://www.overclock3d.net/reviews/software/resident_evil_2_remake_pc_performance_review/13

Capcom has fundamentally changed Resident Evil 2, creating what the game would have been if it were created today, not what the original would look like with enhanced visuals, forging a game that will surpass the original for many. On PC we also get to see the game push beyond the other versions of the remake on a technological level, supporting advanced HBAO+ ambient occlusion, AMD's Rapid Packed Math acceleration tech, FP16 compute and other graphical settings that can push past all of the game's console version.

The DirectML API enables rapid packed math and machine-learning instruction set hardware access to be uniform across multiple GPU vendors.

Turing's CUDA cores have full rapid packed math in addition to the Tensor matrix-math cores. AMD has merged its machine-learning instruction set into GCN's CUs.

Both NVIDIA and AMD are following Microsoft's DirectX 12 evolution road map.

The RTX 2080 Ti's tensor core and rapid packed math feature set is enabled, which overlaps with workstation GPU cards; a similar argument applies to the VII vs the MI50/MI60.

DirectML doesn't exist yet outside of Microsoft R&D. RPM on AMD cards is used via their shader intrinsics, so I'm doubtful it's enabled on NVIDIA GPUs, especially considering benchmarks. AFAIK the only game to expose FP16 on NVIDIA Turing GPUs is Wolfenstein II, which is part of the abnormally large performance increase on Turing. There's currently no standard API way to utilize RPM on AMD and NVIDIA concurrently; it has to be done through each IHV's specific instructions outside of standard API calls.

1. According to Microsoft, DirectML will use NVIDIA's Tensor hardware.

2. DirectML's metacommands expose hardware-specific optimizations. Effectively, Microsoft is building another "Xbox" on Windows PC with vendor-neutral API hardware access.

3. DirectML is claimed to perform better than hand-written compute shaders! Shader Model 6 has a short life.

4. DirectML is coming with the next major Windows 10 update.

5. DirectML has been confirmed to run on Radeon VII.

Reference

1,2,3,4, http://on-demand.gputechconf.com/siggraph/2018/video/sig1814-2-adrian-tsai-gpu-inferencing-directml-and-directx-12.html

5, https://www.guru3d.com/news-story/amd-could-do-dlss-alternative-with-radeon-vii-though-directml-api.html and https://wccftech.com/amd-radeon-vii-excellent-result-directml/

AMD: Radeon VII Has Excellent Results with DirectML; We Could Try a GPGPU Approach for Something NVIDIA DLSS-like

AMD is already working on DirectML SDK for Radeon VII which is outside Microsoft's R&D.

Again, AMD and NVIDIA are following Microsoft's DirectX 12 evolution road map.

Turing has a heavy TFLOPS bias relative to its raster power, hence Turing more closely resembles AMD's GCN than Pascal in that portions of the GPU aren't fully used. Multi-engine (async compute) allows better utilization of both GPU architectures. Over time, Turing will probably age better than some previous NVIDIA cards, unless NVIDIA reduces support as it did with Kepler.

RE2 still won't make use of FP16 on NVIDIA GPUs without Capcom modifying the code. When DML finally releases, it won't retrofit FP16 into existing games where it was used via AMD-specific extensions.

NVIDIA's delta color compression superiority has a similar effect on memory bandwidth conservation.

Avatar image for m3dude1
m3dude1

2334

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#49 m3dude1
Member since 2007 • 2334 Posts

@ronvalencia: No it doesn't. The biggest bandwidth-related benefit of FP16 comes down to cache, which delta color compression doesn't affect at all, AFAIK.

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#50  Edited By ronvalencia
Member since 2008 • 29612 Posts

@m3dude1 said:

@ronvalencia: No it doesn't. The biggest bandwidth-related benefit of FP16 comes down to cache, which delta color compression doesn't affect at all, AFAIK.

Yes it does: FP16 reduces memory bandwidth by half per operation. AMD's delta color compression is inferior.

NVIDIA applies memory compression within its L2 cache.

https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/8

By utilizing a large pattern library, NVIDIA is able to try different patterns to describe these deltas in as few pixels as possible, ultimately conserving bandwidth throughout the GPU, not only reducing DRAM bandwidth needs, but also L2 bandwidth needs and texture unit bandwidth needs (in the case of reading back a compressed render target).

For example, NVIDIA doesn't need FP16's memory-bandwidth-saving tricks, but increasing the FLOPS rate with the double-rate FP16 feature is still beneficial.

AMD's FP16 usage, with its inferior memory compression, benefits from:

1. Memory bandwidth savings

2. An increased FLOPS rate via rapid packed math (Vega IP).

NVIDIA's FP16 usage, with its superior memory compression, benefits from:

1. An increased FLOPS rate via rapid packed math (Turing).

From Pascal's effective memory bandwidth gain with compression, one can work out the estimated performance gap between the VII, RTX 2080, and RTX 2080 Ti, e.g.

VII estimate: 62 percent effective memory bandwidth from theoretical, times a 1.4X memory compression gain, based on Vega 64's memory behavior:

(1TBps x 0.62) x 1.4 = 868 GB/s

---

RTX 2080 estimate: 73 percent effective memory bandwidth from theoretical, times a 2.47X memory compression gain, based on Pascal's memory behavior:

(448.0 GBps x 0.73) x 2.47 = 807.79 GB/s (incidentally, this number is similar to the GTX 1080 Ti's)

Note why the VII lands around RTX 2080 level, in addition to their common 64-ROPS limit.

RTX 2080 Ti estimate: 73 percent effective memory bandwidth from theoretical, times a 2.47X memory compression gain, based on Pascal's memory behavior:

(616.0 GBps x 0.73) x 2.47 = 1110.7096 GB/s

Based on the effective memory bandwidth factor, the RTX 2080 Ti has a ~28 percent advantage over the VII (and ~37 percent over the RTX 2080).
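The estimates above can be reproduced directly. Note that the efficiency and compression factors are the poster's assumptions, not measured values:

```python
def effective_bw(theoretical_gbps, efficiency, compression):
    """Effective bandwidth = theoretical * efficiency * compression gain."""
    return theoretical_gbps * efficiency * compression

vii        = effective_bw(1000.0, 0.62, 1.40)  # ~868.0 GB/s
rtx_2080   = effective_bw(448.0,  0.73, 2.47)  # ~807.79 GB/s
rtx_2080ti = effective_bw(616.0,  0.73, 2.47)  # ~1110.71 GB/s

# 2080 Ti vs VII works out to ~28%; vs the RTX 2080 it is ~37.5%.
print(round(100 * (rtx_2080ti / vii - 1)))  # 28
```

Since the RTX 2080 and RTX 2080 Ti share the same efficiency and compression factors in this model, their ratio reduces to the ratio of theoretical bandwidths (616/448 = 1.375).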