The PC gaming world was eagerly awaiting the reveal of Nvidia’s GeForce RTX 40 series, only to let out a frustrated sigh at the end of Jensen Huang’s GTC presentation. The cheapest of the new cards is $900, a $200 increase over its predecessor… and thanks to some ambiguity in the branding, arguably a lot more. Despite an effort to position the new RTX 4080 and 4090 graphics cards as merely the top of a new “GeForce family” that still includes RTX 30 series offerings, gamers are facing serious sticker shock.
This is an opportunity for Nvidia’s competition: AMD and the underdog Intel. While AMD’s Radeon lineup has made incredible gains in performance and competitiveness over the past few years, it remains a distant second in the market with around 20 percent of discrete GPU sales. Intel, having missed the perfect entry point for its long-overdue Arc graphics cards during the pandemic’s GPU shortage, now has a second chance to present itself as a viable alternative to the duopoly.
Nvidia’s RTX 40 series price is a “new normal”
But let’s examine Nvidia’s position more closely. If PC gamers are complaining about Nvidia’s aggressive pricing, who could blame them? After two years of pandemic shortages, GPU prices have finally started to normalize. With chip shortages easing around the world and the cryptocurrency bubble bursting (taking supposedly “dead” GPU mining with it), retailers and resellers are now massively oversupplied with high-end cards. Anyone looking for a powerful GPU won’t have to look too far or spend too much, as long as they’re fine with core designs that are a bit long in the tooth.
Nvidia executives claim that the costs of building these new GeForce RTX 40-series cards are rising, that new technologies like the exclusive DLSS 3.0 make them more valuable, and that Moore’s law is dead… yet again. But these are arguments more likely to win over engineers and investors than consumers.
It doesn’t help that the new cards’ branding seems intentionally designed to obscure some hidden compromises. The new RTX 4080 comes in two varieties: a 12GB version for “only” $900 and a 16GB version for $1,200. But despite the shared name, these aren’t Nvidia’s usual memory-only tiers. The $1,200 card is effectively a different, higher-end Lovelace GPU, with 76 streaming multiprocessors instead of 60, nearly 2,000 extra CUDA cores, and a slight increase in memory speed.
Early observers are pointing out that Nvidia could have called the 12GB card the RTX 4070 if it wanted to. But there are obvious branding perks to giving the design a bit more prominence… and a price bump to match. If you take the cynical view that the RTX 4080 12GB is an RTX 4070 under a more aspirational moniker, that works out to an effective $400 retail price increase over the RTX 3070 from 2020. That might have seemed reasonable a year ago; now it just seems greedy.
Nvidia may present spec sheets and benchmarks until the cows come home (and all those “DLSS ON” numbers showing miraculous gains aren’t especially convincing, by the way), but the market is telling us that graphics cards should now be a hell of a lot cheaper than they were over the last two years.
Nvidia’s answer to this is a branding change: the new “GeForce family.” While older cards have always had a place in the budget market, Nvidia is now explicitly marketing the RTX 30 series as a cheaper alternative to the RTX 4080 and 4090, even though some of the existing “Ti” variant cards carry retail prices as high as (or higher than) the new RTX 40-series GPUs.
The pandemic proved that people are willing to pay a lot more for graphics cards, at least in extraordinary circumstances. From a consumer perspective, it’s hard to see Nvidia’s pricing approach as anything but an attempt to keep that gravy train moving. Nvidia appears to be using its dominant position to keep prices artificially high, hardening the GPU market into a “new normal” just as we were getting comfortable with the return to sanity. No wonder the peasants are revolting.
An opportunity to strike back
But with restlessness comes opportunity. Nvidia isn’t the only company with a shiny new series of graphics cards on the horizon. As usual, AMD is prepping its next-gen offerings at more or less the same time – in fact, the company announced the reveal date of its next-gen RDNA 3 Radeon GPU family just hours before Nvidia’s CEO showed off his own latest cards. AMD has also indicated it’s thinking beyond the usual arms race, targeting performance-per-watt gains that could be welcome as energy prices rise even faster than everything else. More intriguingly, AMD is expected to switch to a chiplet-based approach with RDNA 3, eschewing the typical “giant monolithic die” design, after successfully revitalizing the CPU market with Ryzen processors built around its own chiplet designs. GPUs and CPUs are very different beasts, but there is an opportunity for RDNA 3 chiplets to radically redefine the graphics card market, depending on how they perform (and how much they cost).
And don’t forget about Intel. The company has been hyping its entry into the discrete GPU market for over a year now, but frequent delays and unimpressive benchmarks for the debut Arc A380 graphics card have us wondering when we’ll see anything beyond budget cards. Intel has been quite upfront about its problems entering this incredibly competitive market, not least its lack of experience developing complex drivers. A full-range debut six months ago would have been perfect, but a mid-range entry with the A770 next month is better late than never.
Intel knows it is facing a huge gap in experience and a market that is reluctant to accept a newcomer. And it seems to be making the smart choice: competing on price. According to interviews with the Arc development team, Intel plans to price its GPUs based on their worst-case performance in game tests. After Nvidia’s claims of triple and quadruple performance gains using proprietary rendering tricks, that’s a bit of refreshing honesty – assuming it actually bears out in shelf prices.
Right now, creating an unmistakable contrast with Nvidia is the smartest thing AMD and Intel can do. With consumers experiencing near-unprecedented price fatigue, a potential recession looming to slash disposable income, and sentiment on the verge of cracking against Nvidia’s attempts to keep prices high, there’s an incredible opportunity to take the swagger out of the market leader.
The AMD Radeon RDNA 3 announcement in November is sure to showcase cards that are competitive with Nvidia’s designs, whether or not they can keep up with the RTX 40 series in terms of raw power. (However much raw power matters in the real world, beyond idealized DLSS and RTX benchmarks.) Wherever the chips fall, AMD should absolutely hammer Nvidia on price, especially in the mid-range.
Imagine the goodwill AMD could gain if it unveiled a theoretical Radeon RX 7800 that competes with the RTX 4080 12GB on paper and beats a would-be RTX 4070 on price, coming in at around $550 – a short step up from the RX 6800’s 2020 retail price. Positioning itself as an undeniable value against Nvidia’s pricing would be an almost guaranteed way to quickly and dramatically increase market share. It might even be worth positioning these cards at a loss-leading price, at least while Nvidia insists that quadruple digits are the new normal.
Brad Chacos/IDG
Meanwhile, Intel could double down on its promise to deliver value, owning the $150–$250 segment with cards that can run most new games at 60fps without the bells and whistles of ray tracing. Intel seems to understand that it simply can’t compete with Nvidia and AMD at the top of the market – which is why its “flagship” GPU costs less than $350. Intel’s partnerships with OEMs (and, to be blunt, its track record of strong-arm business tactics) could come in handy here, filling shelves with cheap, pre-built “gaming desktops” in retail stores around the world.
Selling competitive but not exorbitant cards to the budget-conscious doesn’t make you billions, but it does give you a seat at the table and the chance to take bigger swings in the GPU market once your presence is established. Intel probably has the most to lose here; without some immediate and visible gains, whether in market share or sheer profit, its investors might get cold feet and tell the company to stick to tried-and-true CPUs – though it presumably wouldn’t have created an entire generation of discrete cards if it weren’t committed.
A fight for the future of the GPU market
Will AMD and Intel be aggressive enough to make the most of this situation and tip the scales against Nvidia for the first time in decades? Who knows. I’m not telling these companies anything they haven’t already figured out for themselves, and I’m not privy to the kind of data that would inform those decisions. Even with the chip market normalizing, it may not be economically feasible to undercut Nvidia and remain profitable. And frankly, these companies may simply not have the business chutzpah to sacrifice short-term profitability for the chance of a better position in the future.
But this kind of chance, this confluence of market circumstances and consumer turmoil, doesn’t come along very often. If there was ever a time to knock Nvidia from its comfortable position at the top of the GPU stack, it’s now. Even under ideal circumstances, AMD and Intel are unlikely to erase Nvidia’s huge market lead. But they won’t have a better chance of winning over new customers anytime soon.