Tag Archives: rumors

Rumors Suggest Nvidia Might Re-Launch RTX 2060, RTX 2060 Super

This site may earn affiliate commissions from the links on this page. Terms of use.

Nvidia announced its RTX 3060 during CES last week, but according to one report, the company has actually restarted production of its RTX 2060 and RTX 2060 Super. If true, it would mean Nvidia doesn’t think it can alleviate the graphics card shortage quickly enough if it relies solely on its newer 8nm Ampere GPUs.

The rumor comes from French site Overclocking.com, which claims to have gotten confirmation from several brands. Reportedly, Nvidia shipped out a new set of RTX 2060 and 2060 Super GPUs to re-enable the manufacture of these cards. If true, Nvidia could potentially alleviate the GPU shortage by relying on TSMC’s older (and presumably less-stressed) 12nm product line.

Nvidia showed the following slide during the RTX 3060 launch. It gives some idea of how the two cards compare, though DLSS does not appear to be enabled for the RTX 2060, and there’s no 2060 Super on the chart.

Nvidia’s published claims about RTX 3060 versus 2060 performance. Remember, DLSS is enabled on some RTX 3060 benchmarks.

Either way, there should be some room in the product market beneath the RTX 3060 to carve out space for the 2060, 2060 Super, or both.

How’d We Get Here, Anyway?

We’re in this position today because Nvidia wanted to avoid a repeat of Turing’s disastrous launch. Back in 2018, Nvidia repeatedly told investors that the huge spike in GPU sales through 2017 and into 2018 was being driven by gamers, not by cryptocurrency mining. It’s never been clear how true that was — and Nvidia has been sued by shareholders over the idea that the firm knew full well where its demand was coming from. But whether the company misread the market or not, it appears to have been genuinely caught off-guard when the crypto market cooled off. This left a lot of Pascal GPUs on shelves that had to be moved.

Turing’s second problem was its pricing. Nvidia raised prices significantly across its GPU lineup with Turing. That proved unwise when Pascal cards were hitting some of the best prices of their lives, and sales of the new cards suffered.

Turing’s third problem was that its major feature wasn’t supported in any shipping titles yet. This is not unusual when major new features are introduced to gaming — hardware support has to precede software support, because the arrow of time is annoying and inconvenient — but it still counts as a drag on the overall launch.

This time around, Nvidia wanted to avoid these issues. Turing production was discontinued well before Ampere launched. The end-user community was deeply unhappy with Nvidia’s Turing pricing, and Nvidia, to its credit, adjusted its prices. The non-availability of ray tracing, similarly, is not a problem here. While the number of ray-traced games remains small, there’s now a small collection — including AAA titles — with RTX / DXR support integrated.

Nvidia did everything right, in terms of building appeal for gamers. The one thing it didn’t count on was the impact of the COVID-19 epidemic on semiconductor demand. Bringing back the RTX 2060 and 2060 Super could give Nvidia a way to respond to this problem without sabotaging its new product lineup.

Frankly, it’d be nice to see the RTX 2060 and 2060 Super back in-market, if only to bring a little stability to it. Here are Newegg’s current top-selling GPUs as of 1/20/2021:

It’s not unusual for the Top 10 to have a few cheap cards in it, but every GPU with any horsepower whatsoever is far above retail price.

Newegg’s best-selling GPUs are bottom-end Pascal cards. The last-gen RX 580 and the GTX 1660 Super are the only two consumer cards selling for under $500. Both of them are terrible deals at this price point.

There’s always a bunch of low-end garbage stuffed into the GPU market. Typically, these parts live below the $100 price point, where you’ll find a smorgasbord of ancient AGP cards, long-vanished GPU micro-architectures, and rock-bottom performance that almost always costs too much. Today, the garbage has flooded into much higher price points. Want a GTX 960? That’ll be $150. How about a GTX 460 for $145 or an HD 7750 for $155? There’s a GTX 1050 Ti for $170, which is only $40 more than the GPU cost when new, over four years ago.

Right now, it’s impossible to buy any GPU for anything like MSRP. If bringing the RTX 2060 and RTX 2060 Super back to market actually provides some stability and some kind of modern GPU to purchase, I’m in favor of it. At this point, it wouldn’t be the worst thing in the world if AMD threw the old Polaris family back into the market, either. While Polaris cards wouldn’t ordinarily be a great value at this point, the cheapest RX 5500 XT at Newegg is $397. Under these circumstances, any midrange GPU manufactured in the last four years that can ship for less than $300 would be an improvement.

The past five years have been the worst sustained market for GPUs in the past two decades. GPU prices have been well above MSRP for 24 of the past 56 months, dating back to the launch of Pascal in late May 2016. This isn’t expected to change until March or April at the earliest. When cards aren’t available at MSRP for nearly half the time they’ve been on the market, across five years and two full process node deployments, it raises serious questions about whether we can trust MSRPs when making GPU recommendations. Right now, the best price/performance ratio you can get in the retail market might be an RX 550 for $122.

The GPU market in its current form is fundamentally broken. Manufacturer MSRPs have the same authority as any random number you might pick out of a hat. There are a lot of factors playing a part in the current situation, including manufacturing yields and COVID-19, but this problem started four years before the pandemic.

AMD and Nvidia need to find a better way to ensure that customers can actually buy the cards they want to purchase, or they need to delay their launches long enough to build up a meaningful stockpile of hardware, sufficient to supply launch demand for a matter of days, not seconds. Alternately, they may need to delay launches until yield percentages and availability are high enough to ensure a constant stream of equipment to buyers.

Right now, we have launch days that sell out instantly and interminable delays between new shipments. If these rumors are true, and we hope they are, Nvidia bringing back the RTX 2060 and 2060 Super will help a little in the short term, but what we obviously need is for AMD and Nvidia to take a fundamentally different approach to product inventory management. As things stand, these aren’t product launches. They’re product teases.


ExtremeTechGaming – ExtremeTech

Rumors Point Towards Remarkable Gains for AMD’s Upcoming ‘Big Navi’ GPUs


There’s been a lot of debate in the past 12 months over whether RDNA2 would deliver a huge improvement over RDNA. The Radeon 5700 and 5700 XT were significant leaps forward for AMD’s products, but they failed to cleanly beat Turing on absolute power efficiency, and while they challenged Nvidia’s RTX GPUs, they weren’t enough to deliver knockout blows. RDNA was important because it demonstrated that after years of iterating on GCN, AMD was still capable of delivering significant advances in GPU technology.

AMD raised eyebrows when it claimed RDNA2 would offer a 1.5x performance-per-watt improvement over RDNA, in the same way RDNA had improved dramatically over GCN. Generally speaking, such dramatic improvements only come from node shrinks, not additional GPUs built on the same node. Nvidia’s Maxwell is probably the best example of a GPU family that improved over its predecessor without a node change, and even then, the gap between Maxwell and Kepler was smaller than the gap between Pascal and Maxwell in terms of power efficiency and performance gains.

If you increase something by 1.5x twice, your gain over baseline is 2.25x. AMD’s graph conforms to that relative improvement if you measure the heights of the graph bars in pixels.

There are rumors going around that Big Navi might be dramatically faster than expected, with performance estimated at 1.95x to 2.25x that of the 5700 XT. This would be an astonishing feat, to put it mildly. The slideshow below shows our test results from the 5700 XT and 5700. The 5700 XT matched the RTX 2070 (and sometimes the 2080) well, while the 5700 was modestly faster than the RTX 2060 for a slightly higher price. A 1.95x – 2.25x speed improvement would catapult Big Navi into playable frame rates even on the most demanding settings we test; 18fps in Metro Exodus at Extreme Detail and 4K becomes 35-41fps depending on which multiplier you choose. I have no idea how Big Navi would compare against Ampere at that point, but it would handily blow past the RTX 2080 Ti.
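The arithmetic behind these projections is simple enough to sanity-check in a few lines. A quick sketch — the baseline frame rate is from our own testing, but the multipliers are rumor, not confirmed specifications:

```python
# Back-of-the-envelope check of the rumored Big Navi performance claims.
# The 1.95x-2.25x multipliers come from rumors, not official data.

# Two consecutive 1.5x perf-per-watt gains compound to 2.25x over baseline:
print(f"1.5 * 1.5 = {1.5 * 1.5:.2f}x")  # 2.25x

# Projected Metro Exodus frame rates (Extreme detail, 4K; 5700 XT baseline):
baseline_fps = 18.0
for multiplier in (1.95, 2.25):
    print(f"{multiplier}x -> {baseline_fps * multiplier:.1f} fps")
```

That compounding is why a second 1.5x generational gain would be such a big deal: it stacks on top of RDNA's own 1.5x gain over GCN.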

Evaluating the Chances of an AMD Surge

Let’s examine the likelihood of AMD delivering a massive improvement of the sort contemplated by these rumors. On the “Pro” side:

  • AMD has openly declared that it’s trying to deliver a Ryzen-equivalent improvement on the GPU side of its business. As I noted back when RDNA debuted, it’s not fair to judge GCN-RDNA the same way we judged Bulldozer-Ryzen. AMD had five years to work on Ryzen, while the gap from RX Vega 64 to RDNA wasn’t even two.
  • AMD claims to have improved power efficiency by 1.5x with RDNA, and our comparisons between the Radeon RX 5700 and the Radeon Vega 64 back up this claim. The Radeon 5700 delivers 48fps at 1080p in Metro Exodus and draws an average of 256W during the fixed-duration workload. The Radeon Vega 64 hit 43fps and drew an average of 347W. That works out to an overall performance-per-watt improvement of ~1.5x.
  • Rumors around Big Navi have generally pointed to a GPU with between 72-80 CUs. That’s a 1.8x – 2x improvement over the 5700 XT’s 40 CUs, and it makes the claim of 1.95x – 2.25x more likely on the face of it. Nvidia has not been increasing its core counts by this much generation over generation. The 980 Ti had 2,816 GPU cores, the 1080 Ti packed 3,584, and the 2080 Ti has 4,352. Nvidia has been increasing its GPU core count by about 1.2x per cycle.
  • The PlayStation 5’s GPU core clocks remarkably high for a GPU, at over 2GHz. If we assume that the specified 2.23GHz boost clock for the PS5 is equivalent to the boost clock for RDNA2’s top-end GPU, with the game clock a little lower, we’d be looking at a 1755MHz game clock on the 5700 XT versus a 2.08GHz game clock on the Radeon RX Next. That’s a 1.18x gain. A 1.18x gain in clock speed plus a 1.8x gain in CU count = 2.124x improved performance. Pretty much bang on the estimated target. A 1.18x IPC improvement without any clock increase (or a mix of the two) could also deliver this benefit.
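The scaling math in these bullets can be reproduced in a few lines. A rough sketch — every input here is either our own measurement or a rumored figure, and the naive multiply assumes performance scales linearly with CUs and clocks, which real GPUs rarely achieve:

```python
# Perf-per-watt comparison from the Metro Exodus measurements above:
rdna_ppw = 48 / 256   # Radeon RX 5700: 48 fps at ~256W average draw
vega_ppw = 43 / 347   # Radeon Vega 64: 43 fps at ~347W average draw
print(f"RDNA perf/W gain over Vega: {rdna_ppw / vega_ppw:.2f}x")  # ~1.51x

# Rumored Big Navi scaling, using the lower 72-CU figure:
cu_gain = 72 / 40            # rumored 72 CUs vs. the 5700 XT's 40
clock_gain = 2080 / 1755     # assumed 2.08GHz game clock vs. 1755MHz
print(f"Combined scaling (naive): {cu_gain * clock_gain:.2f}x")  # ~2.13x
```

The naive product lands squarely inside the rumored 1.95x – 2.25x window, which is what makes the rumor at least arithmetically plausible.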

And the cons?

A 1.5x performance-per-watt improvement is the kind of gain we typically associate with new process nodes. Nvidia pulled off this level of improvement once with Maxwell. The GTX 980 Ti was an average of 1.47x faster than the GTX 780 Ti at the same power draw. AMD never delivered this kind of performance-per-watt leap with GCN over the seven years that architecture drove its GPUs, though GCN absolutely became more power-efficient over time.

Running GPUs at high clock speeds tends to blow out their power curves, as the Radeon Nano illustrated against the Radeon Fury five years ago. In order for RDNA2 to deliver the kind of improvements contemplated, it needs to be 1.8x – 2x the size while simultaneously increasing clock without destroying its own power efficiency gains. That’s a difficult, though not impossible, trick.

Promising a 1.5x improvement in performance per watt — the one piece of information AMD has confirmed — doesn’t tell us whether that gain is coming from the “performance” side of the equation or the “wattage” side. For example, the GTX 980 Ti and the GTX 780 Ti have virtually the same power consumption under load. In that case, the 1.47x improvement came entirely from better performance in the same power envelope. If AMD delivered a successor to the 5700 XT that drew 197W instead of 295W but offered exactly the same performance, it could also claim a 1.5x improvement in performance-per-watt without having improved its actual real-world performance at all. I don’t think this is actually likely, but it illustrates that improvements to performance per watt don’t necessarily require any performance improvements at all.
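A short illustration of that ambiguity, using the hypothetical figures from the paragraph above (the 5700 XT's 295W board power is real; the 197W counterpart is invented for the example):

```python
# A "1.5x perf-per-watt" claim can be satisfied by two very different products.
# Performance is normalized to 1.0 for the 5700 XT; figures are illustrative.
def perf_per_watt(perf, watts):
    return perf / watts

baseline = perf_per_watt(1.0, 295)

# Scenario A: same power draw, 1.5x the performance (the GTX 980 Ti route):
scenario_a = perf_per_watt(1.5, 295)
# Scenario B: identical performance at ~2/3 the power (295W -> 197W):
scenario_b = perf_per_watt(1.0, 197)

print(f"A: {scenario_a / baseline:.2f}x perf/W")  # 1.50x
print(f"B: {scenario_b / baseline:.2f}x perf/W")  # 1.50x
```

Both scenarios hit the headline number, but only one of them would move AMD up the performance stack.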

I haven’t addressed the question of IPC at all, but I want to touch on it here. When Nvidia launched Turing, it paid a significant penalty in die size and power consumption relative to a GPU with an equivalent number of cores, TMUs, and ROPs but without the tensor cores and RT cores. What does that mean for AMD? I don’t know.

The Nvidia and AMD / ATI GPUs of any given generation almost always prove to respond differently to certain types of workloads in at least a few significant ways. In 2007, I wrote an article for Ars Technica that mentioned how the 3DMark pixel shader test could cause Nvidia power consumption to surge.

Certain feature tests could cause one company’s GPU power consumption to spike but not the other’s. Image by Ars Technica.

I later found a different 3DMark test (I can’t recall which one, and it may have been in a different version of the application) that caused AMD’s power consumption to similarly surge far past Nvidia.

Sometimes, AMD and Nvidia implement more-or-less the same solution to a problem. Sometimes they build GPUs with fundamental capabilities (like asynchronous computing or ray tracing) that their competitor doesn’t support yet. It’s possible that AMD’s implementation of ray tracing in RDNA2 will look similar to Nvidia’s in terms of complexity and power consumption penalty. It’s also possible that it’ll more closely resemble whatever Nvidia debuts with Ampere, or be AMD’s unique take on how to approach the ray tracing efficiency problem.

The point is, we don’t know. It’s possible that RDNA2’s improvements over RDNA consist of much better power efficiency, higher clocks, more CUs, and ray tracing as opposed to any further IPC gains. It’s also possible AMD has another IPC jump in store.

The tea leaves and indirect rumors from sources suggest, at minimum, that RDNA2 should sweep past the RTX 2000 family in terms of both power efficiency and performance. I don’t want to speculate on exactly what those gains or efficiencies will be or where they’ll come from, but current scuttlebutt is that it’ll be a competitive high-end battle between AMD and Nvidia this time around. I hope so, if only because we haven’t seen the two companies truly go toe-to-toe at the highest end of the market since ~2013.


Machine Gun Kelly Says He’s ‘In Love’ Amid Megan Fox Romance Rumors


Nvidia Ampere Rumors Point to 300W+ TDP, Up to 24GB of VRAM


Rumors about Nvidia’s Ampere and the proposed family of consumer GPUs based on it have been circulating for months. The latest leaks claim Nvidia is prepping some significant improvements at the top end, along with higher power consumption.

According to Igor’s Lab, which also leaked supposed images of the new Ampere GPUs this week, the upcoming top-end cards will be the RTX 3090 (possibly branded as Ti or Super), followed by the RTX 3080 (Ti/Super), followed by the RTX 3080. All three of these GPUs would be based on the GA102 die, and all of them would carry 300W+ TDPs — 350W for the 3090, and 320W for both of the 3080 models below it. RAM would be GDDR6X in all cases, with a 384-bit, 352-bit, and 320-bit interface on each GPU respectively. The top-end 3090 would carry NVLink, but the lower-end GPUs wouldn’t.


Image by TechSpot

The framing of these GPUs makes me wonder if Nvidia is launching its absolute top-end market stack first. With Pascal, Nvidia led with the GTX 1080 and 1070, with the 1080 Ti debuting months later. For Turing, Nvidia launched the RTX 2080 Ti, 2080, and 2070 simultaneously, but used a different GPU for each. This sounds like Nvidia will lead with what we’d typically have called a “Titan / xx80 Ti / xx80” stack as opposed to “xx80 Ti / xx80 / xx70.”

The RAM loadout is also interesting. With consoles now packing 16GB of unified RAM and some high-end GPUs like the Radeon VII already featuring 16GB, I think there’s been a certain amount of assumption that 16GB would be the RAM capacity of choice next generation. This data suggests otherwise. The 24GB of VRAM on the 3090 Ti/Super is a nod to the card’s datacenter/workstation roots, not an attempt to move the market towards higher VRAM loadouts.

Typically Nvidia and AMD match each other on RAM capacity fairly closely, though it isn’t unusual for AMD to offer a bit more memory bandwidth and capacity depending on the SKU. Team Red has been warning about small VRAM buffers of 4GB and below, which both plays to its strengths in the market and aligns with trends showing that lower RAM levels can now meaningfully impact 1080p play at a high detail level.

If the TDPs are to be believed, Nvidia is also finally leaving the 250W TDP point behind at the high end. Both Nvidia and AMD have flirted with higher-power GPUs before, but 250W has been an anchor point for GPUs in much the same way that 125W TDPs were an anchor for consumer CPUs for many years. Intel and AMD have both exceeded that mark in recent years, and if these rumors are accurate, we should expect GPUs to do so as well. This would free AMD to essentially pursue the same path.

An increased TDP isn’t necessarily surprising. Nvidia may have chosen to maximize performance at the top end, gambling that high-end gamers who would consider these cards in the first place have systems powerful enough to handle them. If you have an 850W – 1.2kW PSU and adequate cooling, a 250W CPU and a 350W GPU are nothing you can’t handle.

No word on pricing, but the one thing you can bet these cards won’t be is cheap. Nvidia may position them competitively relative to where Turing or Pascal debuted if it feels AMD is a threat or if it’s worried about the impact of coronavirus on GPU sales, but I’d expect the company to hold the line on pricing to the greatest degree possible.


Sophie Turner and Joe Jonas Shop for Kids Clothes Amid Pregnancy Rumors


Issa Rae Shuts Down ‘Set It Off’ Remake Rumors: ‘I Would Never Remake a Classic’ (Exclusive)


Jeffree Star Posts Cryptic Message Amid Nathan Schwandt Breakup Rumors


Cody Simpson’s Sister Shuts Down Rumors the Singer and Miley Cyrus Split


Megan Thee Stallion Shuts Down Tristan Thompson Dating Rumors: ‘They Literally Made Up a Whole LIE’
