Tag Archives: Radeon

AMD Radeon 6700 XT Review: A Great GPU at a Tough Price

Earlier this week, we examined the Radeon RX 6700 XT’s IPC and power consumption improvements against its predecessor, the RDNA-based 5700 XT. Our tests revealed that the Radeon 6700 XT is significantly more power-efficient than the Radeon 5700 XT when both cards are measured at 1.85GHz. Now we’re taking a fuller look at the RX 6700 XT as compared with the RTX 3070, as well as Nvidia’s previous-generation RTX 2080 and the 5700 XT.

The 6700 XT is based on the Navi 22 GPU core. Its performance against the 5700 XT has been of particular interest, as it’s a near-identical replacement for that GPU as far as core resource allocation goes. Our tests earlier this week showed that RDNA2 is only slightly faster than RDNA when measured clock-for-clock, but that AMD’s L3 cache and smaller memory bus have paid huge dividends in power efficiency. Here’s how the 6700 XT stacks up against the 5700 XT, as well as the competition from Team Green:

The relationship between the RTX 2080 and RTX 3070 (mostly) mirrors the relationship between the RX 5700 XT and 6700 XT. The RTX 3070 has far more cores than the RTX 2080, and its tensor core and ray tracing performance are higher overall. But the two GPUs share the same number of texture mapping units, render outputs, and ray tracing cores. Memory bandwidth on both GPUs is the same at 448GB/s, they run at nearly identical clock speeds, and they both have 8GB frame buffers.

AMD’s $479 positioning on the RX 6700 XT looks pretty optimistic at first glance. GPUs from different families can’t be directly compared on the basis of core counts, ROPs, or TMUs, but more of these resources still tends to be better, and the RTX 3070 packs more of everything the RX 6700 XT has to offer (except VRAM and clock). The 6700 XT’s 2325MHz base clock is no less than 1.54x the base clock of the RTX 3070, and the 6700 XT offers 12GB of RAM, compared with just 8GB on the other cards. We’ll run some tests today aimed at measuring how much this additional VRAM matters.

Our RTX 3070 GPU is an MSI Gaming X Trio — we reviewed this card last year if you’re looking for more model-specific information.

Test Setup and Configuration

We’re switching to a new graphing engine here at ExtremeTech, so let us know what you think of the new design when you check it out below. The graph below shows our results for four video cards. You can select or de-select which cards you want to see by clicking on the color buttons next to each card.

Game results were combined for the three Total War: Troy benchmark maps (Battle, Campaign, and Siege), leading to the “Combined” score. Similarly, results from Hitman 2’s Miami and Mumbai maps were averaged to produce a single result. Gaps between the cards in these maps were proportional, so this averaging does not distort the overall comparison between the cards in those titles.

This presentation method prevents us from giving per-game detail settings in the graph body, so we’ll cover those below:

Ashes of the Singularity: Escalation: Crazy Detail, DX12.

Assassin’s Creed: Origins: Ultra Detail, DX11.

Borderlands 3: Ultra Detail, DX12.

Deus Ex: Mankind Divided: Very High Detail, 4x MSAA, DX12.

Far Cry 5: Ultra Detail, High Detail Textures enabled, DX11.

Godfall (RT-only): We only tested Godfall with ray tracing enabled, in Epic Detail. Grats to the Godfall developers for coming up with a credible name for a preset above “Ultra” that isn’t “Extreme.”

Hitman 2 Combined: Ultra Detail, but performance measured by the “GPU” frame rate reported via the benchmarking tool. This maintains continuity with the older Hitman results, which were reported the same way. Miami and Mumbai test results combined. Tested in DX12.

Metro Exodus: Tested at Extreme Detail, with Hairworks and Advanced Physics disabled. Extreme Detail activates 2x SSAA, effectively rendering the game at 4K, 5K, and 8K when testing 1080p, 1440p, and 4K. Tested in DX12.

Metro Exodus (RT): Ultra Detail, with Ultra ray tracing enabled. The only difference between Ultra and Extreme Detail in Metro Exodus is that Extreme enables 2x SSAA, effectively rendering the game at double the resolution. Hairworks and Advanced Physics disabled.

Shadow of the Tomb Raider: Tested at High Detail, with SMAATx2 enabled. Uses DX12.

Strange Brigade: Ultra Detail, Vulkan.

Total War: Troy Combined: Ultra Detail, DX12.

Total War: Warhammer II: Ultra Detail, Skaven benchmark, DX12.

Watch Dogs Legion (RT-only): Tested at Ultra Detail, with Ultra ray tracing enabled and disabled.

Our test settings are aggressive and put a heavy load on GPUs, especially Metro Exodus and Deus Ex: Mankind Divided. Testing these GPUs at non-playable speeds can help expose differences in the underlying architectures.

All games were tested using an AMD Ryzen 9 5900X on an MSI X570 Godlike equipped with 32GB of DDR4-3200 RAM. AMD’s Radeon RX 6700 XT launch driver was used for all AMD GPUs, and Nvidia’s 461.92 driver handled the NV cards. Smart Access Memory / Resizable BAR was enabled for the Radeon 6700 XT but disabled for the 5700 XT, RTX 2080, and RTX 3070.

Performance Results and Analysis

We’ll talk about rasterization results first, then switch over and chat on ray tracing.

In 1080p, in aggregate, the RTX 2080 is 15 percent faster than the RX 5700 XT, and the 6700 XT is 7 percent faster than the RTX 2080. The RTX 3070, in turn, is 11 percent faster than the RX 6700 XT. AMD refers to the 6700 XT as an enthusiast’s 1440p GPU, and the data once again bears out this positioning: in 1440p, the 6700 XT widens its lead over Turing to 14 percent. The RTX 3070 is still faster overall, but by just 6 percent. The gaps widen again in 4K, with the RTX 2080 beating the 5700 XT by 20 percent, the 6700 XT once again 7 percent faster than the RTX 2080, and the RTX 3070 beating the 6700 XT by 16 percent.
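
If you want to chain those aggregates back to a common baseline, the arithmetic is simple multiplication. A quick sketch, using only the 1080p ratios quoted above (illustrative arithmetic, not raw benchmark data):

```python
# Compounding the aggregate 1080p ratios quoted above.
rtx_2080 = 1.15              # vs the RX 5700 XT
rx_6700xt = rtx_2080 * 1.07  # vs the RX 5700 XT: ~1.23x
rtx_3070 = rx_6700xt * 1.11  # vs the RX 5700 XT: ~1.37x

print(f"RX 6700 XT: ~{rx_6700xt:.2f}x the 5700 XT at 1080p")
print(f"RTX 3070: ~{rtx_3070:.2f}x the 5700 XT at 1080p")
```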

There are some benchmarks where the 6700 XT pulls ahead of the 5700 XT by a larger-than-expected margin in 1440p, including Ashes of the Singularity: Escalation, Shadow of the Tomb Raider, Strange Brigade, and Total War: Troy, the last one in particular. Total War: Troy is an interesting example of a game that responds extremely well to AMD’s L3 cache at one specific resolution. Performance craters in 4K, but it craters for both AMD GPUs.

Our ray tracing results look much as we’d expect, but the 4K data, specifically, is worth your attention:

There’s some evidence to suggest that Nvidia’s decision to equip the RTX 3070 with just 8GB of VRAM really could be a limiting factor in games going forward. There’s evidence of this in both Godfall and Watch Dogs Legion, particularly WDL.

1080p and 1440p show similar patterns of performance between the three cards. Ultra detail is extraordinarily hard on both the RTX 2080 and the 6700 XT, with or without ray tracing enabled. Once we hit 4K, however, things change. Both the RTX 2080 and the RTX 3070 fall off a cliff in Godfall, where the RX 6700 XT outperforms them by over 3x.

In Watch Dogs Legion, the RTX 3070 is no less than 2.91x faster than the 6700 XT in 1440p, but loses to it in 4K. While none of the GPUs turns in a playable frame rate, the wholesale collapse of the Nvidia cards at high resolution is indicative of one thing: an insufficient VRAM buffer.

This is concerning in the case of the RTX 3070, which really ought to have enough horsepower to step up to 4K with ray tracing enabled, but can’t do so in titles you can already buy today. While overall ray tracing performance on the RTX 3070 is higher than the 6700 XT in both 1080p and 1440p (and by a significant margin), the RX 6700 XT makes a potent argument in favor of its own 12GB VRAM buffer at 4K and scores a few points in the process.

Power Consumption

Power consumption was measured in Metro Exodus and Metro Last Light Redux on the third loop of a three-loop benchmark run. I threw Last Light Redux back into the mix when I noticed Exodus stressed the 5700 XT a bit differently. I’ve also shown the low-power result from running the Radeon 6700 XT at 1.85GHz (to match the 5700 XT). We discussed these results in more detail in our IPC comparison earlier this week.

There’s a much more efficient chip hiding inside the 6700 XT. Matched clock for clock against the 5700 XT, the 6700 XT is quite power efficient. At 1.85GHz, it’s actually more efficient in terms of power consumption per frame drawn than the RTX 3070. Cranking up the clock to compete with Nvidia reduces the 6700 XT’s efficiency, and the RTX 3070 is more efficient when both GPUs are run at full speed.

Compared with the 5700 XT, the 6700 XT offers a 1.20x increase in performance in Exodus at a slight increase in power consumption. This is a bit worse than its 1440p performance overall, where it offers 1.34x better performance than the 5700 XT. This is a very significant degree of uplift for a GPU that’s a near-mirror of its predecessor, and it speaks to AMD’s work optimizing RDNA2’s clock scheme and overall efficiency.

Here’s one more tidbit. The Radeon VII (not shown) hits around 1.8GHz maximum and draws about 402W in Metro Last Light. The 5700 XT pulls about 350W in this test at a 1.85GHz clock, while the 6700 XT draws 267W at 1.85GHz. Performance between all three GPUs is similar at this clock, with the 6700 XT leading modestly. This means AMD has drawn down its 7nm power consumption from 402W at the launch of Radeon VII to a hypothetical 267W today, at least in this specific test. That data point doesn’t have any direct bearing on our review, since the Radeon 6700 XT is not a 1.85GHz card, but it helps illustrate the long arc of AMD’s RDNA2 efficiency gains.

Conclusion

The 6700 XT has some genuine strong points. AMD’s efforts to bring some Ryzen DNA to RDNA2 have clearly paid off. There is no sign of untoward memory bandwidth pressure on the 6700 XT except in Metro Exodus and Deus Ex: Mankind Divided, two titles we benchmark in configurations that put egregious pressure on memory bandwidth. There’s no sign of a problem in any title at settings that yield playable frame rates. AMD may be marketing this GPU as a 1440p solution, but it’s perfectly capable of driving 4K frame rates.

If you read other reviews around the net, you’ll find that the relative gap between the 6700 XT and RTX 3070 ranges from 0 to 10 percent depending on which titles reviewers tested. In our test suite, the 6700 XT never quite matches the performance of the RTX 3070, even at its target 1440p resolution. The RTX 3070 theoretically costs just 4 percent more than the 6700 XT, and it’s more than 4 percent faster at every resolution.

Normally, this would be an open-and-shut case, but the high-end ray tracing hit we saw from both the RTX 2080 and RTX 3070 gives us pause. AMD recommends the 6700 XT be used for 1080p ray tracing, so it’s not clear how much ray tracing fans will be doing in 4K anyway. But in some circumstances, the 8GB buffers on the RTX 3070 and RTX 2080 are not large enough for ultra detail settings with RT ladled on top.

The strongest argument in favor of the 6700 XT is the GPU’s 12GB VRAM buffer. The additional VRAM capacity clearly benefits the 6700 XT in Godfall at 4K in a way that isn’t explained by it being an AMD-friendly title, and it allows the 6700 XT to eke out a narrow win in Watch Dogs Legion after losing the first two resolutions. It is possible that the 6700 XT is a rare example of a GPU whose larger VRAM capacity will deliver meaningfully better scaling down the line, once cards with less VRAM are forced to disable features like ray tracing at lower resolutions.

I’m really hoping the next Big Navi GPUs AMD announces find a way to take advantage of the architecture’s power efficiency rather than relying so heavily on clock. If the Radeon 6700 follows the 5700’s lead, it’ll feature 36 CUs instead of 40, plus a reduced clock. It would be nice to see AMD keep more CUs at lower clocks to play up the power consumption angle; wider, slower GPUs tend to be more power-efficient than narrower, higher-clocked ones.

If the market were currently normal, I’d argue that the RTX 3070 is the better value if you replace your GPU fairly often or game at 1440p and below, while the 6700 XT might be the better option if you’re concerned about the VRAM issue longer-term. I tend to weight the here-and-now more heavily than the long-term future of any feature, and the RTX 3070’s generally faster performance in both ray tracing and rasterization more than covers its small additional cost.

A $30 to $50 price cut would do the 6700 XT a world of good, and is closer to where the card ought to be priced given its overall competitive performance against the RTX 3070. At $429, the 6700 XT would be an easy recommendation for anyone wanting to save a bit of cash over the RTX 3070 or to step up from an older AMD or Nvidia GPU.

But market prices aren’t normal, and they aren’t expected to be normal until 2022, which makes this talk of hypothetical price comparisons a bit silly. The actual best GPU you can buy right now is the GPU you can get for something approaching MSRP, the Radeon 6700 XT very much included. With six-year-old cards like the R9 390X going for over $400 on eBay, a $479 price tag on this latest GPU, should you ever see one, is an absolute steal.


AMD Radeon 6700 XT vs. 5700 XT: Putting RDNA2 to the Test

AMD’s 6700 XT launch last week gives us an unusual opportunity. Typically, generational GPU comparisons are somewhat limited because core counts, texture units, and ROP configurations don’t align perfectly between the families. AMD’s Radeon RX 6700 XT is an exception to this rule, and it allows for a tighter comparison of RDNA versus RDNA2 than would otherwise be possible.

The Radeon 6700 XT and 5700 XT both feature 40 CUs, 160 texture mapping units, and 64 render outputs (2560:160:64). The 5700 XT has the wider memory pipeline, with a 256-bit memory bus and 14Gbps GDDR6, which works out to 448GB/s of main memory bandwidth. The 6700 XT, in contrast, has a 192-bit memory bus, 16Gbps GDDR6, 384GB/s of memory bandwidth, and a 96MB L3 cache. Today, we’ll be examining the 5700 XT against the 6700 XT at the same clock speed to measure the performance and efficiency impacts of the new architecture, smaller memory bus, and L3 cache.
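
Those bandwidth figures fall straight out of bus width and per-pin data rate; here’s the arithmetic as a minimal sketch:

```python
def gddr6_bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin data rate
    (Gbps), divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(gddr6_bandwidth(256, 14))  # RX 5700 XT: 448.0 GB/s
print(gddr6_bandwidth(192, 16))  # RX 6700 XT: 384.0 GB/s
```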


According to AMD, switching to a huge L3 cache allowed it to shrink the memory bus while improving performance. There have been concerns from readers that this limited memory bus could prove a liability in gaming, given that a 192-bit memory bus on a $479 card is highly unusual.

Comparing both GPUs at the same clock allows us to look for any additional IPC (instructions per clock cycle) improvements between the 5700 XT and 6700 XT. RDNA is capable of issuing one instruction per clock cycle, compared with one instruction every four cycles for GCN. This allowed AMD to claim a 1.25x IPC improvement from GCN to RDNA, and while the company hasn’t claimed an equivalent increase from RDNA to RDNA2, we may see signs of low-level optimizations or just the overall impact of the L3 itself.

We’re comparing the performance of the 5700 XT and 6700 XT today, with both cards approximately locked to a 1.85GHz clock speed. We’ll also compare against the 6700 XT at full speed (SAM disabled) to see the card’s native performance and power consumption. A full review of this card, with Nvidia comparison data, will be arriving shortly.

Test Setup, Configuration, and a New Graphing Engine

We’re shifting to a new, more capable graphing engine here at ET. The graph below shows our results in 11 titles for the 5700 XT (1.85GHz). Clicking on any of the color buttons next to a given card will remove that card from the results, allowing you to focus on the others. Click on the button again to restore the data. Data is broken up by tabs, with one resolution per tab.

Game results were combined for the three Total War: Troy benchmark maps (Battle, Campaign, and Siege), leading to the “Combined” score. Similarly, results from Hitman 2’s Miami and Mumbai maps were averaged to produce a single result. Gaps between the cards in these maps were proportional, so this averaging does not distort the overall comparison in those titles. We’ve still used our classic graphs for a few results that didn’t map neatly into the specific result format used in this article, but the new engine is spiffier (a technical term), so we plan to use it for most projects going forward.

This presentation method prevents us from giving per-game detail settings in the graph body, so we’ll cover those below:

Ashes of the Singularity: Escalation: Crazy Detail, DX12.

Assassin’s Creed: Origins: Ultra Detail, DX11.

Borderlands 3: Ultra Detail, DX12.

Deus Ex: Mankind Divided: Very High Detail, 4x MSAA, DX12.

Far Cry 5: Ultra Detail, High Detail Textures enabled, DX11.

Hitman 2 Combined: Ultra Detail, but performance measured by the “GPU” frame rate reported via the benchmarking tool. This maintains continuity with the older Hitman results, which were reported the same way. Miami and Mumbai test results combined. Tested in DX12.

Metro Exodus: Tested at Extreme Detail, with Hairworks and Advanced Physics disabled. Extreme Detail activates 2x SSAA, effectively rendering the game at 4K, 5K, and 8K when testing 1080p, 1440p, and 4K. Tested in DX12.

Shadow of the Tomb Raider: Tested at High Detail, with SMAATx2 enabled. Uses DX12.

Strange Brigade: Ultra Detail, Vulkan.

Total War: Troy Combined: Ultra Detail, DX12.

Total War: Warhammer II: Ultra Detail, Skaven benchmark, DX12.

Of the games we test, Deus Ex: Mankind Divided and Metro Exodus put the heaviest load on the GPUs, by far. DXMD’s multisample antialiasing implementation carries a very heavy penalty and Exodus is effectively rendering in 8K due to the use of supersampled antialiasing.

All games were tested using an AMD Ryzen 9 5900X on an MSI X570 Godlike equipped with 32GB of DDR4-3200 RAM. AMD’s Radeon RX 6700 XT launch driver was used to test both the 5700 XT and 6700 XT. ReBAR / SAM was disabled throughout: AMD doesn’t support the feature on the 5700 XT, so we disabled it for the 6700 XT IPC comparison, and for the 6700 XT “full clock” results as well, to ensure an apples-to-apples comparison. We’ll have results with SAM enabled in our full 6700 XT review.

The 1.85GHz clock speed is approximate. In-game clocks remain near the minimum value, but this is not absolute. The 6700 XT was allowed to run between 1.85GHz and 1.95GHz and remained near 1.85GHz. The 5700 XT’s clock ranges from 1.75GHz to 1.95GHz, but it mostly remains between 1.8GHz and 1.9GHz. AMD’s 6700 XT requires a 100MHz GPU clock range, and the 5700 XT didn’t respond to our attempts to manually adjust its clock, so we tuned the 6700 XT to match the 5700 XT’s default clock behavior.

These tests will show any high-resolution / high-detail bottleneck that appears on the 6700 XT versus the 5700 XT. If the 6700 XT’s L3 can’t compensate for the increased memory pressure, the 5700 XT should outperform it. The 6700 XT’s default base clock is 2325MHz, or ~1.26x the 1.85GHz minimum value we defined. Low scaling between the 1.85GHz Radeon 6700 XT and the stock-clocked version may mean memory bandwidth pressure is limiting performance.
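
In other words, the test doubles as a crude bandwidth-limitation detector. Here’s a sketch of that logic; the 0.9 threshold and the frame rates are illustrative choices of our own, not measured data:

```python
CLOCK_RATIO = 2325 / 1850  # stock vs locked clock, ~1.26x

def bandwidth_suspect(fps_locked: float, fps_stock: float) -> bool:
    """Flag a possible memory-bandwidth limit when observed scaling
    falls well short of the clock ratio. Illustrative, not rigorous."""
    observed = fps_stock / fps_locked
    return (observed / CLOCK_RATIO) < 0.9

print(bandwidth_suspect(80.0, 97.0))  # hypothetical: False, scales with clock
print(bandwidth_suspect(80.0, 85.0))  # hypothetical: True, barely scales
```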

We’ll also check power efficiency between the cards, because AMD claimed a 1.5x performance-per-watt increase for RDNA2 over and above RDNA.

Performance Test Results & Analysis

Here’s the good news: There’s no sign that the L3 cache + 192-bit memory bus chokes the 6700 XT in realistic workloads. Only two games show evidence of memory pressure: Metro Exodus and Deus Ex: Mankind Divided. The benchmark settings we use in those two titles make them maximally difficult to render: Deus Ex: Mankind Divided’s MSAA implementation is very expensive, on all GPUs. Metro Exodus’ “Extreme” benchmark preset renders at 2xSSAA. The game may still be output at 4K, but internally the GPU is rendering 8K worth of pixels. Reducing either of these settings to a sane value would immediately resolve the problem.

There is no sign of a memory bottleneck in the 6700 XT versus the 5700 XT in any other game we tested. On the contrary, one game, Far Cry 5, shows sharply improved results at 4K for the 6700 XT compared with the 5700 XT. We confirmed these gains with repeated testing. Either the L3 cache or some other aspect of RDNA2 seems to improve FC5 at 4K in particular.

AMD has said the 6700 XT is intended as a 1440p GPU, and our test results agree: 1440p shows the greatest gap between the 5700 XT and 6700 XT when the two are normalized clock-for-clock. Compared against itself, the gap between the 1.85GHz and stock-clocked 6700 XT was also widest at 1440p.

We’ve also included a few quick results in two benchmarks that use different scales than our tests above and were graphed on older templates: Final Fantasy XV and Neon Noir, the latter as a rare ray tracing game that can run on the 5700 XT.

There is no dramatically different data in either set of results. The gap in FFXV is largest at 1440p, and in 4K the 1.85GHz 5700 XT catches (but doesn’t pass) the 6700 XT at the same clock. In Neon Noir, the Crytek ray tracing benchmark, the two GPUs hold a steady gap.

There’s only limited evidence for IPC gains between RDNA and RDNA2. Allowing for a 2-3 percent margin of error on the basis of GPU clock alone, and 2-3 percent for benchmark-to-benchmark variance, most of the gaps between RDNA and RDNA2 disappear. There are three exceptions at 1440p: Ashes of the Singularity (6700 XT is 15 percent faster), Assassin’s Creed: Origins (10 percent faster), and Total War: Warhammer II (8 percent faster).

The aggregate data across all games shows the 6700 XT is 3 percent faster than the 5700 XT at 1080p, 6 percent faster at 1440p, and 5 percent faster in 4K when the two GPUs are compared clock-for-clock. When tested at full speed (with SAM disabled), the full-speed 6700 XT is 1.23x faster than the 5700 XT at 1080p, 1.3x faster at 1440p, and 1.28x faster at 4K.
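
We don’t spell out the aggregation method in the graph bodies; one common way to combine per-game ratios into a single figure is the geometric mean, sketched below with hypothetical per-game values (not our measured data):

```python
from math import prod

def geomean(ratios):
    """Geometric mean: the standard way to average performance ratios."""
    return prod(ratios) ** (1 / len(ratios))

# Hypothetical per-game ratios (6700 XT vs 5700 XT, clock-for-clock, 1440p).
ratios = [1.15, 1.10, 1.08, 1.02, 1.01, 1.00, 1.03, 1.05, 1.04, 1.06, 1.07]
print(f"Aggregate: {geomean(ratios):.2f}x")
```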

These clock-for-clock performance results don’t look great for RDNA2 versus RDNA, but we haven’t checked power consumption data. AMD claimed a 1.5x improvement in performance per watt for RDNA2 versus RDNA, and we don’t have much evidence for performance improvements yet. We measured full-load power consumption during the third run of a three-loop Metro Exodus benchmark at 1080p in Extreme Detail.

This is all sorts of interesting. Clock for clock, RDNA2 is much more power-efficient than RDNA. The 5700 XT and 6700 XT perform virtually identically in Exodus at 1080p, and the 6700 XT draws nearly 100W less power to do it, while fielding 12GB of RAM (up from 8GB) and a 16Gbps RAM clock (1.14x the 14Gbps of the 5700 XT).

The 5700 XT draws 1.37x as much power as the 6700 XT when they’re measured at the same clock and approximate performance level. That’s an impressive achievement for an iterative new architecture without a new process node involved. Unfortunately, it all goes out the window when the clock turns up. At stock clock in Metro Exodus, the 6700 XT with SAM disabled is 1.21x faster than the 5700 XT, but it uses about 3 percent more power. Clearly, AMD has a fairly power-efficient chip at lower clocks, but it’s tapping 100 percent of the available clock room to compete more effectively.

RDNA2 unquestionably offers AMD better clock scaling than the company’s GPUs have previously enjoyed, but with a heavy impact on power consumption. AMD pays for a 1.24x performance improvement with a 1.41x increase in power consumption. That’s not far from a 2:1 ratio, and it speaks to the importance of keeping efficiency high and clocks low in GPU architectures. Clock-for-clock, RDNA2 is capable of offering substantial power advantages over RDNA, but AMD has tuned the 6700 XT for performance, not power consumption. A hypothetical 6700 at lower clock could offer substantially better power consumption, but might not compete effectively with down-market Nvidia cards.
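
The underlying tradeoff reduces to frames per watt. Using the multipliers above:

```python
perf_gain = 1.24   # stock 6700 XT vs the same card locked to 1.85GHz
power_gain = 1.41  # power increase paid for that performance

efficiency_change = perf_gain / power_gain
print(f"Perf-per-watt at stock clock: {efficiency_change:.2f}x")  # ~0.88x, a ~12% drop
```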

When AMD launched RDNA back in 2019, we noted that the company’s efforts to transform its GPUs would take time, and that not nearly enough of it had passed for an equivalent, Ryzen-like transformation of the product family. Looking at RDNA2 versus RDNA, we definitely see evidence of the increased power efficiency AMD was chasing when the 5700 XT and 6700 XT are compared clock-for-clock. The smaller memory bus and large L3 cache do indeed appear to pay dividends. AMD is still aggressively tuning its GPUs for competitive purposes, but it has found new efficiencies, first with RDNA compared with GCN and now with RDNA2 compared with RDNA, that enable it to further boost performance.

We’ll examine the competitive and efficiency situation vis-à-vis Nvidia later this week.


AMD Launches the Radeon RX 6700 XT


Today, AMD is launching the RX 6700 XT, a new high-end GPU that at least dips below a theoretical $500 price point. Just as the 6800 XT and 6800 filled in the top ranks of the RDNA2 family, the 6700 XT fills in the next-lowest slot in the price bracket, with a smaller die, fewer cores, and much faster clocks. Please keep in mind that all discussion of GPU pricing below is, by necessity, theoretical. The impact of tariffs between the US and China, plus the cryptocurrency boom, has rendered GPUs priced at MSRP a rarity.

ET will have a review of this GPU up in the immediate future, but we’re currently porting data to a new graphing engine and the process took me a bit longer than anticipated.

At $479, the RX 6700 XT splits the difference between the RTX 3070 and the RTX 3060 Ti, though it’s closer to the former ($500) than the latter ($400). It’s also the replacement for the RDNA-based 5700 XT, which debuted in mid-2019 at $400.


The 5700 XT versus 6700 XT comparison is going to be particularly interesting, as those GPUs share a common core configuration (2560:160:64). The 6700 XT has less raw memory bandwidth than the 5700 XT, but it offers something that card doesn’t have: a 96MB Infinity Cache.


We’re borrowing this slide from the 6800 XT launch for the point about the power cost of Infinity Cache accesses versus conventional GDDR6. AMD is saving power here.

If you’re familiar with the 128MB L3 Infinity Cache on the 6800 and 6800 XT, this is its younger, smaller brother. According to AMD, the smaller cache provides the same approximate hit rate at 1440p that its 128MB big brother provides at 4K. AMD is similarly recommending you use ray tracing at 1080p if you intend to use it, due to the performance hit.

I’m in the odd position of having run a fair number of tests on the card already, so I’ll just add this: You can see some evidence in benchmark data for why the 6700 XT is positioned as a 1440p card. It dips a bit in 4K compared with the 6800 XT, though in all honesty, you could call it a 4K card, too — just not with the same leeway for future titles or the same guarantee of high performance with settings cranked up to max.

The 6700 XT’s high clock speeds definitely help it compared with cards like the 6800 XT. On paper, the 6800 XT packs 1.8x the cores and texture units, with 2x the ROPs. The 6700 XT compensates partly by cranking up the clock. The GPU’s base clock is 2.32GHz, compared with 1.825GHz on the 6800 XT. That 1.27x clock advantage offsets a solid chunk of the resource gap, though the 6800 XT remains meaningfully faster than its smaller rival at 1440p.
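
A back-of-the-envelope way to see how much of the gap the clock closes is to multiply shader count by clock as a crude throughput proxy (this ignores cache, bandwidth, and every other architectural effect):

```python
def fp32_tflops(cores: int, clock_ghz: float) -> float:
    """Crude FP32 proxy: cores x clock x 2 ops per FMA, in TFLOPS."""
    return cores * clock_ghz * 2 / 1000

rx_6700xt = fp32_tflops(2560, 2.32)   # ~11.9 TFLOPS at base clock
rx_6800xt = fp32_tflops(4608, 1.825)  # ~16.8 TFLOPS at base clock
print(f"6800 XT raw-throughput advantage: {rx_6800xt / rx_6700xt:.2f}x")  # ~1.42x
```

On that crude math, the 6800 XT’s 1.8x core advantage shrinks to roughly 1.42x once clocks are factored in.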

We’ll have our full coverage for you in the not-so-distant future. AMD has not clarified whether it intends to launch a full suite of RDNA2 GPUs to replace its RDNA stack. Nvidia has replaced its cards down to the nominal $329 price point; AMD currently stops at $479. Presumably, that means we’ll see an RX 6700 at some point in the future.


AMD Announces New Radeon 6700 XT, Theoretically Priced at $479


AMD has announced its upcoming 6700 XT. As the name implies, it’s intended as the lower-end sibling of the 6800 and 6800 XT series and as the generational, drop-in replacement for the 5700 XT.

The 6700 XT will hit store shelves on March 18 at a price of $479. This represents a price increase relative to its predecessor, which debuted at $400. Of course, given current GPU prices, anyone able to score a new card at anything adjacent to MSRP will likely feel as if they’ve won the lottery.

How Will This GPU Compare Against the 5700 XT?

There’s some good news regarding how this new card will compare with its less-expensive predecessor, at least. The older Radeon 5700 XT is a 40 CU card with a 1.75GHz/1.905GHz base and boost clock, a 256-bit memory bus, and an 8GB frame buffer. The Radeon 6700 XT is a 40 CU card with a 2.4GHz base clock, a 2.58GHz boost clock, 16Gbps GDDR6, and a 192-bit memory bus. It also has a 96MB L3 cache, trimmed down slightly from the 128MB on the 6800 and 6800 XT.

On paper, this card looks like a solid step down from AMD’s high-end, with no issues to speak of. The shift to a 192-bit memory bus would typically be concerning, but the 96MB of L3 cache should offset that change. The entire purpose of RDNA2’s Infinity Cache (and caches in general) is to relieve pressure on the main memory bus. A 192-bit bus is very small for a higher-end card, but it may be just fine for a GPU backed by a 96MB L3 and intended for 1440p gaming. If the 6700 XT uses 16Gbps RAM instead of the 14Gbps memory on the Radeon 5700 XT, that gives the card a 1.14x per-pin bandwidth advantage. Bandwidth should be between 336GB/s and 384GB/s, representing 75 percent to 86 percent of the 5700 XT’s raw memory bandwidth. The L3 cache worked well for the 6800 / 6800 XT, so we expect it to work well here.

The good thing about clock speed gains (when combined with only modest changes to an architecture) is that they tell us a great deal about what to expect. If the 6700 XT has exactly the same core count as the 5700 XT and ~1.38x the clock speed, the GPU should be roughly 1.30x – 1.4x faster than its predecessor. We could see some variation in those figures based on the impact of the L3 cache and the smaller memory bus, but that’s the size of the overall expected improvement.

These gains should reduce the sting of higher prices somewhat — a 1.19x price increase ought to be “paid” for, in this instance, with a 1.3x – 1.4x performance improvement, which means AMD is objectively delivering a better value at that price point than it did 18 months ago. This is all for the good.
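
That value claim is simple division: performance-per-dollar improves whenever the performance multiplier outruns the price multiplier. A quick check using the figures above:

```python
price_ratio = 479 / 400           # 6700 XT vs 5700 XT MSRP: ~1.20x
perf_low, perf_high = 1.30, 1.40  # expected uplift range from above

print(f"Perf-per-dollar change: {perf_low / price_ratio:.2f}x "
      f"to {perf_high / price_ratio:.2f}x")  # ~1.09x to ~1.17x
```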

These clock boosts will also help offset the difference between the 6700 XT and its larger cousins. While the 6800 has a full 3840 cores, the base clock is just 1.8GHz. The 6700 XT has just 67 percent of the cores of the 6800 and 55 percent of the 6800 XT, but its base clock is 1.35x higher than the 6800 and 1.2x faster than the 6800 XT. This will help to close the performance gaps in compute-bound workloads.

Partner cards will be available at the same time as reference cards in an attempt to boost overall channel availability. AMD has not announced any plans to limit mining, and it’s not clear how much of the current supply problem cryptocurrency mining is responsible for. The GPU will focus on the 1440p segment, probably with some drops to 1080p for gamers who want smooth frame rates and added effects like ray tracing, and are willing to drop resolution to get there.

Availability is likely to be minimal, despite AMD’s efforts. This is not a knock on AMD. Nvidia has had no success keeping GPUs on store shelves, either.



Biggest Navi: AMD Launches the Radeon RX 6900 XT, at $999


AMD technically launched its Radeon RX 6900 XT today, though stocks of the GPU show every indication of being severely limited. Priced at $999, the new Radeon card is intended to be the crown jewel of the Navi stack, and to put AMD on a more competitive footing against Nvidia.

This is a major moment for AMD in several regards. It’s the first time the company has fielded a high-end GPU intended to compete at the top of the market since the launch of the Fury X back in 2015. 2017’s Vega 64 competed roughly against the GTX 1080 at a time when Nvidia already had the 1080 Ti in-market, while the RX 6900 XT is supposed to land in-between the RTX 3080 and the RTX 3090.

Specs on the 6900 XT compared to previous AMD launches. Image by Hot Hardware.

AMD definitely hits its price point: the $1,000 RX 6900 XT is $500 less expensive than the RTX 3090. But it offers relatively few additional features to customers who step up to the card. Because the 6900 XT is a fully-enabled 6800 XT, customers get an additional 8 compute units, or 512 more stream processors (5120, versus 4608). It also has 1.1x the ray accelerators and TMUs, but the same base and boost clocks, the same VRAM, and the same TDP. The price gap between the RTX 3090 and the RTX 3080 is much larger than the gap between the 6900 XT and the 6800 XT, but the RTX 3090 adds a wider memory bus, a larger jump in total shader cores, and more than double the VRAM. Of course, Nvidia also wants $1,500 for that GPU, so it rather obviously needed to give people a reason to pick it.

AMD versus Nvidia at 4K. Data by THG.

According to Tom’s Hardware, the 6900 XT falls behind the RTX 3090 in 4K, with an average of 85fps in 13 games compared with the 3090 at 93.6fps. That makes the RTX 3090 just 1.1x faster than the 6900 XT, for 1.5x more money.

Of course, this chart also shows that the 6900 XT is only 1.06x faster than the RX 6800 XT, while costing 1.53x as much (based on MSRPs, lol). Altogether, the RTX 3090 is 1.18x faster than the 6800 XT, but costs 2.2x more. That’s the kind of price/performance ratio you’re buying into if you choose to buy at the tip-top of the market.
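
To put those ratios in fps-per-dollar terms, here’s a quick sketch using THG’s 4K averages and the theoretical MSRPs; the 6800 XT figure is back-calculated from the 1.06x ratio quoted above rather than taken from THG directly:

```python
# (4K average fps, theoretical MSRP in USD)
cards = {
    "RX 6800 XT": (80.2, 649),   # ~85 fps / 1.06, per the ratio above
    "RX 6900 XT": (85.0, 999),
    "RTX 3090":   (93.6, 1499),
}

for name, (fps, price) in cards.items():
    print(f"{name}: {1000 * fps / price:.0f} fps per $1,000")
# Diminishing returns: ~124, ~85, and ~62 fps per $1,000, respectively.
```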

THG writes that the RX 6900 XT “is slightly faster than the RTX 3080, and it can beat the 3090 in a few cases.” Ray tracing performance between AMD and Nvidia is currently very difficult to analyze. The games already on-market (with extensive Nvidia GPU optimizations and limited AMD optimization, if any) favor Nvidia, a lot. The couple of tests AMD distributed before launch of the 6900 XT favor AMD. With so few tests and such a lopsided optimization situation, it’s difficult to tell how things shake out.

The expectation that I’ve seen, which still seems true, is that Big Navi’s ray tracing performance is better than Turing, but not as good as Ampere. AMD’s newest GPUs seem to hit ~2080 Ti ray tracing levels and they offer the performance at <2080 Ti pricing, but they’re strongest against Nvidia in rasterized workloads so far. This could change with future optimizations and patches.

Hot Hardware and THG reach somewhat different conclusions regarding the overall performance of the card. THG notes: “Overall, however, the RX 6900 XT fails to impress relative to the RX 6800 XT. It’s such an incremental bump in performance that it hardly seems worth the trouble.” While it’s fast, it lags in ray tracing workloads, and the $1,500 RTX 3090 is viewed as offering more features.

Hot Hardware, in contrast, writes: “The AMD Radeon RX 6900 XT rocks. Is it the fastest card across the board? No. But it is an immensely powerful and capable GPU, with a beefy 16GB of memory, a leading-edge feature set, and obvious synergy with current-gen game console architectures, which should bode well for game development and optimizations moving forward.”

My own take (I’ve reviewed the 6800 XT but not the 6900 XT) is that the 6900 XT is AMD’s way of signaling it intends to compete in the high end of the graphics market once more, but that the company is still playing catch-up in some regards. This is not automatically a bad thing. If we look back to 2015, we see AMD nearly match the GTX 980 Ti, only to fall short of the mark with the Vega 64 in 2017. From 2016 to 2019, AMD’s most competitive positioning was between $100 and $300. In mid-2019, Navi debuted at higher prices with the 5700 and 5700 XT, and demonstrated that AMD was still capable of competing with Turing. With Big Navi in 2020, AMD has demonstrated that it can compete with Nvidia in the upper market once again, but Biggest Navi is still a bit of a reach.

Part of the reason for this, it should be said, is that AMD chose to emphasize high VRAM loadouts and relatively high clocks for its lower-end cards. AMD weakened the RX 6900 XT’s positioning by improving the RX 6800 and RX 6800 XT, and while that makes its top-end solution a somewhat underwhelming step up, it looks this way for the best possible reason.

Most Radeon gamers will, I suspect, be best-served by either the 6800 or the 6800 XT. Nevertheless, the 6900 XT sends a message to investors and enthusiasts that AMD intends to compete robustly in GPUs as well.

When you’ll actually be able to buy one of these cards is anyone’s guess. A recent PR from Swiss retailer Digitec revealed that the company had received just 35 cards for launch, implying that this GPU is going to be extremely difficult to find. In that sense, the entire discussion is academic, since you won’t really be able to buy a card until 2021 unless you want to pay 1.5x to 2.5x over list price. There are RTX 3090s going on eBay for $2,000 to $2,500, and some that list for even more, so the chances you can buy a new RDNA2 GPU before Christmas are small, no matter what.


AMD Radeon 6800 XT Review: Big Navi Battles the RTX 3080

AMD’s RDNA2 GPUs land this morning, in the form of the Radeon RX 6800 ($579) and Radeon RX 6800 XT ($649). These two graphics cards are based on AMD’s RDNA2 GPU architecture, also known as “Big Navi.” Big Navi is an extension and improvement of the RDNA architecture that AMD debuted last year. RDNA2 is the GPU technology inside the Xbox Series S, Series X, and Sony’s new PlayStation 5, all of which launched this month.

On the PC side of things, AMD’s RDNA2 is facing off against Ampere. Nvidia’s latest RTX 3000 series debuted a few months ago, and the only serious complaint about the Nvidia GPUs has been their limited availability. AMD is fighting uphill in this comparison, but the company has promised it can deliver equal or better performance than the Ampere GPUs it’ll face off against.

We’ll be checking that hypothesis, at least at the high end of the market.

Let’s Talk RDNA2

RDNA2 is a 7nm GPU design built on the same process node as RDNA, AMD’s first-generation Navi architecture. AMD has built multiple generations of chips on the same process node, but RDNA2 was billed as a significant architectural leap over and above RDNA, not just a widening of the original architecture.

According to AMD, RDNA2 is more efficient than RDNA, hits higher clocks, and incorporates first-of-their-kind features for a GPU, like a 128MB onboard cache. We’re still exploring what kind of performance it enables. One thing AMD told us: Running at low resolution isn’t expected to deliver a huge performance gain, because the point of the Infinity Cache is to reduce pressure on the 256-bit memory bus, and running at 720p or below doesn’t put enough pressure on that bus to meaningfully stretch the cache. Anyone with ideas on how to copy the entirety of Quake II or Quake III’s working data set into a 128MB GPU-mounted cache is invited to get in touch with me. Bonus points if we can guarantee loading the executable into a Zen 3’s 32MB L3.

Architecturally, the 6800 XT we’ll be reviewing today is a 4608:288:128 core with 72 ray accelerator units (one per CU). This is a bit below the Biggest Billy Goat Gruff Navi, the RX 6900 XT at 5120:320:128 and 80 ray accelerator units. We’ve previously discussed how RDNA improved performance over GCN when that architecture debuted last year — here’s a slide breaking down how AMD boosted RDNA2 performance over and above RDNA:

Design Frequency Increase means AMD leveraged resources from the Ryzen side of things when it came to designing for higher clock frequencies. It deployed transistors and other logic optimized for high performance in certain places, and it found ways to reduce latency and allow for higher clocks across the chip.

Improvements to power and gating improve overall consumption and, by association, thermals. The goal is to propagate clock signals across the GPU as efficiently as possible and to allow parts of the GPU to power back down when they don’t need to be turned on. Improving how efficiently power moves across the chip is critical to overall GPU efficiency.

Finally, there’s straight performance per clock. AMD suggests that 21 percent of its 54 percent uplift over RDNA came from general efficiency improvements within the GPU itself.

The Infinity Cache in particular has an interesting role to play here because data found within the cache will make it back to the GPU far faster than anything retrieved from GDDR6. During our engineering round table, AMD acknowledged that it didn’t have any new memory compression deployed within RDNA2, but that it had improved the implementation and applicability of its existing technology.

The new Infinity Cache has an impact on the overall cache architecture, as you’d expect. The 4MB L2 in Big Navi is the same size as in RDNA, and while we can’t really talk about saving die size or power with an extra 128MB of cache as opposed to an 8MB L2, AMD would have had to add additional L2 cache if it hadn’t come up with this unusual solution.

According to AMD, its compute units have been optimized to deliver 1.3x more throughput in the same power envelope.

Infinity Cache

AMD’s “Infinity Cache” is a reference to how it connects to the GPU over the Infinity Fabric, not an attempt to claim AMD’s latest graphics card violates space-time principles. According to AMD, the cache distribution in the RDNA GPU wasn’t ideal:

AMD’s solution — well, one of AMD’s solutions — was to slap 128MB of cache down beside Big Navi, wire it up with 16x64B channels, and clock it at 1.94GHz. The cache provides ~2TB/s of bandwidth or nearly 4x the peak of GDDR6. Infinity Cache, according to AMD, is the secret sauce that allows it to compete so effectively with Nvidia and its RTX family.
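
That ~2TB/s figure falls straight out of the channel math; here’s the arithmetic, assuming the 6800 XT’s 256-bit, 16Gbps GDDR6 configuration for the comparison:

```python
channels = 16          # Infinity Cache channels
bytes_per_clock = 64   # bytes transferred per channel per clock
clock_ghz = 1.94       # Infinity Cache clock

cache_bw = channels * bytes_per_clock * clock_ghz  # GB/s
gddr6_bw = 256 * 16 / 8                            # 256-bit bus at 16Gbps

print(f"Infinity Cache: ~{cache_bw:.0f} GB/s")              # ~1987 GB/s, ~2TB/s
print(f"Advantage over GDDR6: {cache_bw / gddr6_bw:.1f}x")  # ~3.9x
```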

There seems to be a periodic law of computing that allows us to slap cache down in new and exciting places every few years to good results. We’ve done L1, L2, L3, flirted with L4, built GPUs with L1, L2, and L3 caches of their own, and now AMD has gone and slapped 128MB of it directly on the graphics card. That’s how you know you’ve made it to the big leagues in the PC industry: somebody finally comes along and slaps a huge amount of cache on you, just to see if anything interesting happens when they do.

The GPUs

Today, AMD is launching the Radeon 6800 and Radeon 6800 XT. Both GPUs stake out an aggressive position against Nvidia, with the RX 6800 debuting at $579 versus a theoretical $499 for the RTX 3070, and the RX 6800 XT at $649 against the Nvidia RTX 3080 at $700. All of these prices are theoretical because of ongoing problems with bots, scalpers, and COVID-19.

The larger VRAM buffers on the 6800 and 6800 XT are an easy point of superiority over the Nvidia competition, but we’ll have to wait and see if those pay dividends in non-gaming workloads. For now, nothing we’ve tested is going to push those kinds of frame buffers.

Smart Access Memory

One feature we’ll be testing today is Smart Access Memory, which marks the first time AMD has claimed any ability to tune performance between CPU and GPU by virtue of owning the entire platform. The feature allows the CPU to address more than 256MB of GPU memory at a time, which can improve performance in some games.

Enabling SAM isn’t difficult (it requires flipping two UEFI switches), and the performance boost from doing so is modest but real. It’s not a reason to buy an AMD card in and of itself, and Nvidia claims it can deliver an equivalent capability in an update, but it’s a good option to have.
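
Conceptually, Resizable BAR swaps many small windowed transfers for direct addressing of the whole frame buffer. A toy illustration of the difference (not how drivers actually implement it):

```python
import math

def mappings_needed(payload_mb: int, bar_window_mb: int) -> int:
    """How many window-sized mappings a payload must be split across."""
    return math.ceil(payload_mb / bar_window_mb)

payload = 2048  # hypothetical 2GB of textures to upload
print(mappings_needed(payload, 256))    # legacy 256MB BAR: 8 remappings
print(mappings_needed(payload, 16384))  # BAR spanning 16GB VRAM: 1
```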

Test Setup, GPU Configuration Issues

A large chunk of the time I’d set aside for working on the 6800 and 6800 XT review was focused on troubleshooting the GPUs instead. Initially, my testbed would not POST. Three days, 20 hours of work, one weekend, four motherboards, three UEFI updates, four CPUs, two CPU manufacturers, three SSDs, five sets of RAM, and four video cards later, I finally figured out the problem: The 6800 and 6800 XT are incompatible with the Dell UP2414Q.

I’ve used the UP2414Q as a testbed monitor ever since I wrote about it here. While it’s small, it offers support for three simultaneous display inputs (useful for a multi-system monitor), and its color reproduction is first-rate. A lot of my comparative DS9 work has been done on this panel. Unfortunately, it’s not compatible with the 6800 or 6800 XT: the monitor will not activate at POST.

When I brought this to AMD, the company assured me it’s virtually unheard of, affecting just my Dell panel and one other Acer model. Obviously, this isn’t an absolute guarantee — everyone is working off their own best knowledge — and I’m willing to believe that an early 4K panel could have some random bugs of its own. There are also some known issues with the LG CX OLED family, though AMD is working with LG to resolve them and expects the manufacturer to issue its own firmware update in the future.

We tested the 6800 XT on an Acer XB280HK and had no further problems.

There are several benchmarks where AMD’s 1080p and 1440p results are on top of each other. This is a known issue and the company is working on a solution. The implication here is that 1080p results aren’t as fast as they could be.

This review compares the RTX 3080 against the Radeon RX 6800 XT. Comparisons between the RTX 3070 and the RX 6800 will come in the near future. I wanted to test specifically on an AMD platform to evaluate SAM and the value proposition of an all-AMD platform — and due to the various issues I encountered, I wound up with less time to benchmark alternate cards than intended.

The RTX 2080 and Radeon VII data included here has been transcluded from our RTX 3070 review and the listed scores derive from those configurations. This is unlikely to have much practical impact — the RTX 2080 and Radeon VII are both last-generation GPUs, and I’m only providing them here for reference in the first place. The reason for not including the RTX 3070 the same way is that I’m specifically comparing that GPU against the 6800 in an all-AMD solution, and I didn’t want to clutter the situation any further than it already was.

Look for a follow-up on that comparison — with more lower-end GPUs — in the near future.

The RTX 3080 and Radeon RX 6800 XT were both tested on an AMD Ryzen 5900X running in an MSI X570 Godlike motherboard flashed with a beta UEFI provided by AMD. RAM was 32GB of Crucial DDR4-3600 memory in four DIMMs with a Corsair 2TB MP600 PCIe 4.0 SSD for storage.

Ashes of the Singularity: Escalation

Ashes of the Singularity: Escalation began life as a big AMD win but has flipped back and forth between Nvidia and AMD more recently. Interestingly, it’s not a win for SAM — overall performance falls back somewhat with Smart Access Memory enabled — but it does showcase the 6800 XT, slugging it out with the RTX 3080. AMD wins all three resolutions.

Assassin’s Creed: Origins

AC:O has always been an Nvidia-friendly title, as evidenced by the RTX 2080’s decisive tromping of the Radeon VII. In 1080p, this remains the case. Above 1080p, the RTX 3080 and 6800 XT are tied. SAM provides a modest improvement here.

Borderlands 3

BL3 is a new addition to our testing suite. Here, AMD again edges past Nvidia at all three resolutions, sweeping the test. SAM provides very small improvements. There’s not much new here as far as our performance understanding goes, but AMD wasn’t kidding when it said it could take on Nvidia on its own turf.

Deus Ex: Mankind Divided

In Mankind Divided, the 6800 XT finally manages to shove its head above the 100fps mark in 1080p (running 4x MSAA in this application incurs a wicked frame rate penalty). There’s no real impact from SAM, but we do see AMD holding the lead.

Far Cry 5

Far Cry 5 doesn’t put much load on the GPU at 1080p, as evidenced by how the scores bunch up. This is with the game’s HD textures loaded as well, so the game is tuned to maximum overall detail. Larger gaps appear at higher resolutions. Interestingly, SAM actually incurs a small performance hit in this game.

Final Fantasy XV

Nvidia has always led in Final Fantasy XV, and it’s not surprising to see that pattern hold up here. AMD can at least claim to have slashed the gap — at 4K, the RTX 2080 is 1.2x faster than the Radeon VII, while the RTX 3080 is only 1.1x faster than the RX 6800 XT. SAM is not very helpful to AMD in this benchmark either, but the performance rankings come out in the wash.

Hitman II

We have two Hitman II graphs: Miami and Mumbai. These benchmarks are basically a party for Smart Access Memory, which boosts the 6800 XT into a leadership position against the RTX 3080. It will be very interesting to see if Nvidia and Intel can copy AMD’s new trick for setting PCIe BAR size because if they can, Nvidia may regain this performance. Note that our figures for Hitman II reflect the “GPU” average logged in the benchmark report and not the “Overall” benchmark result reported briefly when the test is run.

Metro Exodus

We tested Metro Exodus in two modes: Extreme (no RT) and with ray tracing enabled. This is another area I’ll be visiting more extensively when we talk about the 6800 versus 3070 in the near future.

We test Exodus in Extreme detail, which sets 200 percent supersampling; our effective test resolutions are therefore 4K, 5K, and 8K. One of the two test modes runs with that supersampling enabled, and the other does not. The Radeon VII doesn’t support RT in this title, so we don’t have results for that run.

In standard Metro Exodus, the 6800 XT crushes it against the Radeon VII, with performance up 1.65x. The RTX 3080 still has an edge in 4K here, but it’s matched at lower resolutions.

Ray tracing changes our performance metrics here. Nvidia leads every metric except, oddly, our 4K test result on the 6800 XT (this may have simply been a blip). I only had time to run one ray tracing benchmark for this coverage, so I’m not going to draw any firm conclusions about how Nvidia and AMD stack up on this point — but it looks as though the situation between the two companies in ray tracing may track what happened with tessellation in the early 2010s, when AMD and Nvidia both had solutions in-market, but Nvidia’s was a bit beefier.

AMD’s ray tracing performance put it solidly above Turing by 1.5x – 2x, but behind Ampere on the whole. Interestingly, however, Ampere’s lead collapses at 4K, implying that something beyond sheer RT horsepower is causing a problem.

Shadow of the Tomb Raider

In SoTR, Smart Access Memory proves its worth again. AMD would win the test at 1080p and 1440p no matter what, but the 4K victory thanks to SAM is a nice cherry on the cake.

Strange Brigade

The RTX 3080 ekes out narrow wins in Strange Brigade, without much uplift from SAM for the Radeon family. We’ve normally used Vulkan for this test, but we had trouble with Vulkan for this review, so we fell back to DX12. Not much to see here one way or the other.

Warhammer II

Note: Total War: Warhammer II is tested in DX11 on NV cards and DX12 on AMD cards. I normally don’t split APIs like this, but NV + DX11 is the clear best choice for Nvidia’s cards, while DX12 is the clear best pick for AMD.

Warhammer II shows some interesting patterns. The Radeon VII and the 6800 XT hit the exact same 1080p score while the RTX 3080 leaps ahead, implying something fundamental is blocking AMD’s performance there. The RTX family has no such problem. The 6800 XT doesn’t get much of a boost from SAM here, and Nvidia sweeps the test.

Performance Conclusions

Without SAM, the Radeon RX 6800 XT averages 128.84fps at 1080p, 108.25fps at 1440p, and 67.09fps at 4K. The RTX 3080 averages 134.86, 108.26, and 67.77fps respectively, which gives it a small lead over AMD at 1080p and a tie at 1440p and 4K.
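
Those averages reduce to simple ratios:

```python
amd = {"1080p": 128.84, "1440p": 108.25, "4K": 67.09}  # RX 6800 XT, no SAM
nv  = {"1080p": 134.86, "1440p": 108.26, "4K": 67.77}  # RTX 3080

for res in amd:
    print(f"{res}: RTX 3080 leads by {(nv[res] / amd[res] - 1) * 100:.1f}%")
# 1080p: 4.7%; 1440p and 4K: effectively a tie (0.0% and 1.0%).
```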

SAM changes this situation somewhat. While the boosts are small (roughly 7 percent, 4 percent, and 3 percent, respectively), they make enough of a difference to push the 6800 XT ahead of the RTX 3080 across our suite of tests. If Nvidia can replicate the advantage AMD has introduced, things will shift back towards Team Green. Otherwise, AMD can fairly claim to have met its goals.
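
For readers who want to check the math, here’s a quick sketch that applies those approximate SAM uplifts to the no-SAM averages. The percentages are rounded figures from our suite, so treat the outputs as ballpark numbers:

```python
# Apply the approximate SAM uplifts to the no-SAM suite averages and
# compare against the RTX 3080. All fps figures come from the text above;
# the uplift percentages are rounded, so outputs are ballpark only.
rx6800xt_no_sam = {"1080p": 128.84, "1440p": 108.25, "4K": 67.09}
rtx3080         = {"1080p": 134.86, "1440p": 108.26, "4K": 67.77}
sam_uplift      = {"1080p": 0.07,   "1440p": 0.04,   "4K": 0.03}

for res, fps in rx6800xt_no_sam.items():
    with_sam = fps * (1 + sam_uplift[res])
    delta = (with_sam / rtx3080[res] - 1) * 100
    print(f"{res}: ~{with_sam:.1f} fps with SAM ({delta:+.1f}% vs. RTX 3080)")
# ~137.9 fps (+2.2%) at 1080p, ~112.6 fps (+4.0%) at 1440p, ~69.1 fps (+2.0%) at 4K
```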

The big question for AMD is whether the overall driver and software support situation is equivalent to what Nvidia can offer. Historically, AMD drivers have had more problems than their Nvidia equivalents. Problems like the black screen of death last year caused issues for a lot of gamers, including yours truly.

The RX 6800 XT offers competitive performance with Nvidia’s RTX 3080, along with far more VRAM (very useful in certain circumstances). The tradeoff looks to be in ray tracing, where the card is better than Turing but falls behind Ampere.

If this conclusion sounds a little preliminary, that’s because it is. AMD has done an excellent job delivering a hardware followup to RDNA. The few power consumption measurements I managed to take suggest a lead over the RTX 3080 as well, which isn’t easy to deliver when you’re powering an extra 6GB of RAM. Features like Infinity Cache have interesting long-term potential.

I’ll have more to say about where the card fits once I’ve finished with the 6800 and spent more time with various Radeon software features. For now, I’d say AMD has re-established overall gaming competitiveness with Nvidia while adding its own ray tracing solution, one that offers meaningfully improved performance over Turing even if it isn’t quite up to par with what Ampere brings to the table. My problems bringing my testbed online colored my initial impressions of the card, and I want to poke at everything a bit more. Right now, I’d say Big Navi offers slightly higher rasterization performance on an all-AMD system in enough cases to make it interesting, with some tradeoffs in ray tracing workloads that favor Nvidia.

More to come on this topic, including 6800 results versus the RTX 3070.


AMD’s New Radeon RX 6000 Series Is Optimized to Battle Ampere

Ever since AMD bought ATI, gamers have asked if there was an intrinsic benefit to running an AMD GPU alongside an AMD CPU. Apart from some of the HSA features baked into previous-generation AMD APUs and a brief period of dual graphics support, the answer was always “No.” From 2011-2017, AMD simply wasn’t competitive enough in gaming for the company to invest in that kind of luxury concept.

AMD’s RX 6000 GPUs will be the first cards that can specifically take advantage of platform-level features inside the 500-series chipset. We’re going to talk more about that specific feature and several others later on, but it’s one of the most interesting things AMD discussed today, and I wanted to get it on the board.

Before we go deeper on new features, let’s talk about the new cards. Click on images to enlarge them; all images below are from AMD’s launch event.

Meet the RX 6000 Series

AMD is launching three new GPUs: Radeon RX 6800, Radeon RX 6800 XT, and Radeon RX 6900 XT. Here are the relevant specs on each:

The Radeon RX 6800 is a 3,840-core GPU with an 1815MHz game clock and a 2105MHz boost clock. It features 128MB of Infinity Cache (more on that shortly), 16GB of GDDR6, and, like the other two GPUs announced today, a 256-bit memory bus. Total board power is 250W, including VRAM.

AMD has positioned the 6800 well above the RTX 3070’s $ 499 launch price, so the GPU will need to demonstrate this kind of lead in our own testing to carry the price point. Features like 16GB of VRAM may help with that, though we’ll have to see if the extra RAM is useful at any practical detail levels the GPUs can reach. (It may be useful for AI GPU upscaling, where VRAM is worth its weight in gold.)

Note that this RX 6800 was tested using Smart Access Memory, AMD’s new technology that leverages the 500-series motherboard platform to give the CPU full access to GPU RAM, rather than limiting the window to 256MB. This supposedly improves performance somewhat, even in unoptimized titles, and AMD is leveraging it to compete as effectively as it does against the RTX 2080 Ti. Just something to keep in mind.

Next up, the Radeon RX 6800 XT:

The RX 6800 XT offers 72 compute units (4,608 cores), a 2015MHz game clock, 2250MHz boost clock, 128MB Infinity Cache, 16GB of GDDR6, and 300W of total board power consumption. Performance-wise, it’s expected to compete against the RTX 3080. When you check these numbers, note that AMD is not using Smart Access Memory to show these results:

As for 4K, you can see those figures below:

Eyeballing the graph, the ratios are mostly the same, but Nvidia gains ground on AMD in Doom Eternal, Resident Evil 3, and Wolfenstein: Youngblood for sure. I’m less certain about the others, due to the off-angle comparison, but it’s something we’ll check when we get cards. AMD also took pains to point out that this GPU draws just 300W to Nvidia’s 320W. Price? $ 649.

Finally, there’s the Radeon 6900 XT:

As rumored, the 6900 XT offers 80 CUs (5,120 cores), with the same 2015MHz game clock and 2250MHz boost clock as the 6800 XT. It also packs the same 16GB of VRAM and the same 26.3B transistors (all three cards obviously use the same chip design), with a price tag of $ 999. AMD is claiming an absolute uplift in performance per watt of 1.65x, over and above the 1.5x target it set for RDNA2. This implies AMD is binning these chips aggressively, as it did with the Radeon R9 Nano.
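
As a rough way to compare the three chips on paper, you can compute peak FP32 throughput from the core counts and boost clocks above. These are theoretical ceilings, not benchmark results:

```python
# Peak FP32 throughput at boost clock: cores x 2 FLOPs (one FMA) per clock.
# Theoretical ceilings only; real games never hit these numbers.
cards = {
    "RX 6800":    (3840, 2.105),  # (shader cores, boost clock in GHz)
    "RX 6800 XT": (4608, 2.250),
    "RX 6900 XT": (5120, 2.250),
}

for name, (cores, ghz) in cards.items():
    tflops = cores * 2 * ghz / 1000  # GFLOPS -> TFLOPS
    print(f"{name}: {tflops:.2f} TFLOPS FP32 peak")
# RX 6800: 16.17 | RX 6800 XT: 20.74 | RX 6900 XT: 23.04
```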

Note that in this set of comparison figures, AMD is explicitly activating both Smart Access Memory and a one-touch overclocking feature called Rage Mode. With Rage Mode enabled and on its preferred platform, the RX 6900 XT can pace the RTX 3090, even outperforming it in spots. Without these features, the performance gaps would presumably be larger. The flip side, of course, is that the RTX 3090 has an MSRP of $ 1,500, where the Radeon 6900 XT has an MSRP of $ 1,000.

AMD’s Special Features

If you’re familiar with high-end GPU design, you’re probably wondering why AMD is building its highest-end chips with just 256-bit memory buses. The answer is a new feature AMD built into RDNA2 dubbed Infinity Cache. We don’t have much detail yet on how the large cache structure is allocated, but the company did show some information on how it compares with using a wider memory bus:

Evidently, it’s more efficient to deploy a large cache backed by a smaller VRAM bus than to simply deploy more GDDR6.
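
To see why that can be true, consider a crude effective-bandwidth model. The cache bandwidth and hit rate below are illustrative assumptions on my part, not AMD-published figures:

```python
# Crude model: effective BW = hit_rate * cache_bw + (1 - hit_rate) * dram_bw.
# CACHE_BW and HIT_RATE are assumptions for illustration, not AMD figures.
GDDR6_256BIT = 512   # GB/s: 16Gbps GDDR6 on a 256-bit bus
GDDR6_384BIT = 768   # GB/s: the same memory on a 384-bit bus
CACHE_BW     = 1600  # GB/s: assumed on-die Infinity Cache bandwidth
HIT_RATE     = 0.58  # assumed fraction of traffic served from the cache

effective = HIT_RATE * CACHE_BW + (1 - HIT_RATE) * GDDR6_256BIT
print(f"256-bit bus + cache: ~{effective:.0f} GB/s effective")
print(f"384-bit bus, no cache: {GDDR6_384BIT} GB/s")
# The narrower bus wins on these assumptions, while driving fewer
# power-hungry GDDR6 devices.
```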

The company credits Infinity Cache, along with fine-grain clock gating, pipeline rebalancing, and redesigned data paths, for delivering a total 1.54x uplift in performance per watt over the original RDNA. Sustained clocks have supposedly improved ~1.3x as well.

Smart Access Memory is a feature that only works with 500-series chipsets, but allows the Ryzen CPU to access all 16GB of GPU VRAM, rather than being limited to the standard 256MB aperture size. This reportedly allows for more efficient data allocation in VRAM and improves overall performance.
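
If you’re curious whether a system actually exposes the full VRAM aperture, you can inspect the GPU’s PCI BAR sizes. Here’s a minimal Linux sketch; the PCI address is a placeholder you’d replace with your own card’s (find it with `lspci | grep VGA`):

```python
# Minimal sketch: read a GPU's BAR sizes from Linux sysfs. With SAM /
# Resizable BAR active, one BAR should span all of VRAM instead of the
# traditional 256MB window. PCI_ADDR is a placeholder.
from pathlib import Path

PCI_ADDR = "0000:0a:00.0"  # replace with your GPU's address

resource = Path(f"/sys/bus/pci/devices/{PCI_ADDR}/resource")
for i, line in enumerate(resource.read_text().splitlines()[:6]):
    start, end, _flags = (int(x, 16) for x in line.split())
    if end > start:  # unused BARs read as all zeros
        print(f"BAR{i}: {(end - start + 1) / 2**20:.0f} MB")
```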

Rage Mode, referenced above, is a one-click overclocking option that will need to be a great deal better than any previous one-click overclocking option I’ve ever tested in order to pay dividends. Between Rage Mode and Smart Access Memory, AMD believes it can boost the baseline performance of the 6800 XT by a fair bit.

Game speed improvements range from 2 percent to 13 percent, with an average performance uplift of around 6.4 percent.

AMD is also continuing to expand its library of FidelityFX features:

Those are the major announcements from the event. Obviously, AMD has thrown down something of a gauntlet here. The RX 6900 XT and RX 6800 XT are both priced below their Nvidia counterparts, while the RX 6800 comes in somewhat above the RTX 3070. AMD clearly believes it’s got a strong position with this part.

The stage is set for two major showdowns in the next few weeks in both the CPU and GPU markets. This is going to be downright interesting. As always, take all manufacturer benchmarks with a grain of salt, though AMD’s performance claims do broadly line up with where we expected the company to fall versus Ampere. The big question, of course, is whether these cards will actually ship to consumers in significant numbers, or if they’ll all end up on eBay.


AMD Wants to Prevent Bots and Scalpers From Wrecking Ryzen 5000, Radeon Launches

Bots and scalpers have been a problem at all of the major tech launches this year, to the general dismay of people who want to buy products at the price they’re actually supposed to sell for. AMD hasn’t said much publicly about what it plans to do about the bot problem, but a leaked document sheds light on what the company is doing behind the scenes.

AMD has apparently sent a letter to multiple retail partners with a list of suggested best practices, including bot detection, CAPTCHA implementations, and purchase limits (AMD suggests customers be limited to one GPU per user, and that the store monitor all orders for duplicate names, addresses, or email addresses). It also recommends that stores switch to manual order processing and verification on the day of launch to prevent automated systems from letting bots through previously undetected gaps.
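
To make the duplicate-order suggestion concrete, here’s an illustrative sketch of that kind of screening. The field names and normalization rules are my own assumptions, not anything from AMD’s letter:

```python
# Illustrative duplicate-order screening: group orders whose email
# addresses collapse to the same normalized form. Field names and rules
# are assumptions for the sake of example.
from collections import defaultdict

def normalize_email(email: str) -> str:
    """Strip +tags and dots from the local part to defeat common aliasing."""
    local, _, domain = email.lower().partition("@")
    local = local.split("+", 1)[0].replace(".", "")
    return f"{local}@{domain}"

def find_duplicates(orders):
    """Return groups of orders that share a normalized email address."""
    groups = defaultdict(list)
    for order in orders:
        groups[normalize_email(order["email"])].append(order)
    return [g for g in groups.values() if len(g) > 1]

orders = [
    {"name": "A. Buyer", "email": "a.buyer+rx6800@gmail.com"},
    {"name": "A Buyer",  "email": "abuyer@gmail.com"},
]
for group in find_duplicates(orders):
    print("Possible duplicate:", [o["email"] for o in group])
```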

In addition to these steps, AMD recommends not allocating a GPU in inventory until an order has actually been submitted, or setting a time limit on how long a customer can hold an item in their cart before it is made available to the general pool. Stores should also limit the number of cards they sell to commercial resellers over this time period, to keep more stock available in-channel. The letter ends with a request that stores reach out to their AMD representatives in order to review which solutions will work for their business.

These are exactly the kinds of security practices stores should deploy to fend off bots and scalpers. Now that bots are proliferating across e-commerce, we need companies to adopt validation procedures that maximize the chance that limited launch hardware winds up in the hands of those who intend to use it, rather than those attempting to make a quick buck. What was previously a tolerable irritant has become a problem that threatens to swamp the entire market. This isn’t good for anyone.

Nvidia has taken steps to improve its own store as a result of the RTX 3080 launch disaster and we can see that AMD is working with store owners to lock things down before its own launch day. Any vendor planning to sell the Xbox Series X or PlayStation 5 had best be paying attention to this problem unless they want to see their entire stock of hardware vanish in seconds, only to reappear on eBay next week for 3x the original price.

Will these measures be enough? That’s anyone’s guess. A lot of these bots aren’t free, and they aren’t a one-and-done purchase, either. There’s a monthly fee if you want access to a bot, and it isn’t cheap ($ 75 per month was quoted in connection with the RTX 3080 issues). These developers don’t want to see their cash cow evaporate, and the people who have earned themselves a pretty penny this way don’t want to be prevented from ripping more people off. Bots are likely here to stay. The good news is, a lot of the best solutions are pretty low-tech. Manual order verification and order limits aren’t a guaranteed solution to this problem, but they’re options that ought to be available to any company on relatively short notice.

It’s in the best interest of companies to deal with this problem. If the issue gets bad enough, companies with a retail channel presence such as AMD, Intel, Nvidia, Sony, and Microsoft will feel they have no choice but to preferentially deal with those stores willing to invest in anti-bot security. Would-be buyers are well aware, at this point, of how the gouging game is played — and that there are ways for stores to protect their inventories from predation. The stores that put product in the hands of those who want to use it will be the stores that win business, long term.


The Supposed ‘AMD Radeon 6900 XT’ Slides Are Completely Fake

There’s a new set of slides being passed around that supposedly showcase AMD’s upcoming Radeon 6900 XT. They’re completely fake. Here’s how you can tell:

[Image: Radeaint-1]

In the first slide, the branding has been updated at the upper right, but the branding on the actual GPU hasn’t been. Also, that’s a Radeon 5700 cooler with a Vega water-cooler next to it, and there’s a clear flaw in the image where the radiator attaches to the card.

The specs themselves are pretty reasonable. I’m not saying how accurate I think they are, but they’re the only part of the slide that isn’t instantly identifiable as fake. At the very least, I’d have to get out a calculator and run some numbers first.

The fact that there’s a price on the card is another way you know this slide is fake. Price is always the last thing a company decides on.

[Image: Radeaint-3]

This slide made me laugh out loud when I saw it. Whoever created this work of art has never, ever, talked to anyone in marketing.

Marketing, my friends, is all about optimism. You might note, for example, that when AMD declared Ryzen would have an IPC 1.4x higher than Excavator, they did not do so with a giant slide labeled NO MORE BULLDOZER.

[Image: AMD-Zen-02]

If AMD was ever going to whip out the “No More Compromises” play, you’d think they’d do it here.

There’s no way in hell AMD would ever advertise RDNA2 in a manner that implied RDNA or any previous GPU was “compromised.” AMD is still shipping Vega silicon in its APUs and as part of its compute business.

AMD is unlikely to call its ray tracing implementation “RXRT,” and the estimated performance impact of enabling the feature is hilarious, to put it mildly.

I was not particularly thrilled with Turing when it came out and I expect both Ampere and RDNA2 to offer superior performance in ray tracing workloads, but there’s no chance whatsoever that AMD takes a 5-9 percent penalty for enabling ray-tracing effects. If real-time ray tracing only carried a 5-10 percent performance penalty relative to rasterization, we’d have integrated RTRT a long time ago.

The typical performance hit for using ray tracing is more along the lines of 50-80 percent. Because we only have one generation of hardware from one company, we don’t know how much that number can be improved — but it beggars belief to think AMD has cut it by nearly an order of magnitude. It’s not even clear which RTX games will support RDNA2 ray tracing out of the gate.
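
A little arithmetic shows how far apart those two worlds are. The baseline here is illustrative:

```python
# What a given RT penalty does to an illustrative 100fps rasterized baseline.
# 0.07 and 0.65 are midpoints of the ranges discussed above.
baseline = 100  # fps, rasterization only (illustrative)
for label, penalty in [("fake slide's claimed hit", 0.07),
                       ("typical real-world RT hit", 0.65)]:
    print(f"{label}: ~{baseline * (1 - penalty):.0f} fps")
# Claimed: ~93 fps. Realistic: ~35 fps. Those are very different products.
```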

[Image: Radeaint-2]

The “Ultimate 4K Gaming Experience” is spot-on for a marketing slide, but AMD doesn’t put game detail settings in the bars of its slides, and it doesn’t rotate the y-axis label so far that you have to twist your neck like an owl to read it.

Also, it’s good to see top-notch performance in Red Dead Redemption, a game that never received a PC release. I’ve never heard of “Witcher 3,” either, since the actual name of the series is “The Witcher.”

This is what it looks like when people with more aspiration than Photoshop try to troll AMD fans. There’s a consistent design language to AMD’s slide decks and a consistent way that companies communicate about their products. These slides fail at both.

I declare this GPU a new and different product under the sun. Presenting the AMD Radeaint 6900 XT, launching September 2020.

Featured image is the AMD Radeon 5700 XT.
