Tag Archives: hardware

Sony Will Ship New VR Hardware for PS5, but Not in 2021

Up until now, Sony has been pretty quiet about its plans for VR on the PlayStation 5. While the company assured gamers that the PS4-era PSVR would still work with the PlayStation 5, it hasn’t said much about how far it intended to advance or extend the VR capabilities of the new console. The company has now announced a new PSVR headset, one specifically tuned for the PlayStation 5.

Sony is only teasing its design for now, but the company claims the new headset will improve on both the PS4-era PSVR’s field of view and its resolution. The first-generation PSVR had a 1920×1080 display (960×1080 per eye) and a 100-degree FoV. Competing headsets like the first-generation Oculus Rift claimed a 110-degree FoV (evaluations measured less) and the recent, high-end Valve Index claims a 135-degree FoV.

Sony also plans to bring “some” of its DualSense technologies to its new VR controllers. This means the company will finally be retiring the PlayStation Move controllers that it relied on for PSVR. The DualSense has been widely praised for its haptic feedback and variable resistance triggers, and we can safely assume any new control mechanisms will be more accurate than the 2010-era PS Move. The new PSVR system will connect to the console via a single cable. There is no mention of a wireless option.

Just how dedicated Sony is to this platform remains to be seen. In an October 29 interview with The Washington Post, Sony Interactive Entertainment president and CEO Jim Ryan poured cold water on the idea that VR would get a big boost on the PlayStation 5: “I think we’re more than a few minutes from the future of VR,” Ryan said. “PlayStation believes in VR. Sony believes in VR, and we definitely believe at some point in the future, VR will represent a meaningful component of interactive entertainment. Will it be this year? No. Will it be next year? No. But will it come at some stage? We believe that.”

When the company building the product isn’t willing to commit to more than “We think this could be big, at some distant point in the future,” it’s not unrealistic to ask just what kind of plans Sony is making, and in what time frame. For now, all Sony is saying is that the updated PS5 version of PSVR won’t launch in 2021.

We did it. We finally found a semiconductor-based product that didn’t sell well in 2020. Data and graph by IDC.

As of last January, Sony had moved 5 million headsets, but IDC reports total VR shipments from all vendors absolutely fell off a cliff last year. Sales fell 43.3 percent in Q1 2020, 43.7 percent in Q2, and 60.1 percent in Q3. SuperData Research, a Nielsen company, estimates that the PSVR moved just 125K units in Q4 2020. Since Q4 is usually the high-water mark for shipments, it’s possible Sony moved fewer than 500K PSVR kits in total last year. Five and a half million lifetime sales wouldn’t be nothing, but the company has shipped 115M PS4s and PS4 Pros to date. That puts PSVR adoption under 5 percent.
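The attach-rate arithmetic is worth making explicit. A quick sketch using only the estimates cited above:

```python
# Attach-rate math from the figures above (estimates, not official numbers).
psvr_lifetime = 5_500_000     # ~5M as of last January, plus <500K in 2020
ps4_installed = 115_000_000   # PS4 and PS4 Pro units shipped to date

print(f"PSVR attach rate: {psvr_lifetime / ps4_installed:.1%}")  # -> ~4.8%
```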

The problem is, Sony has done very little to improve this situation. The PS5 is only backward compatible with the PSVR if you get a (free) adapter from Sony. Sony’s pointed comments about VR not being the future of gaming for now, and the fact that we won’t see a new PSVR until sometime in 2022, don’t send a strong message of faith in the platform. That’s unfortunate for anyone who doesn’t want to be part of Facebook’s VR ecosystem, as Sony is one of the few companies offering a relatively low-cost headset with motion controllers that connects to a larger system for increased rendering horsepower. The Valve Index wins a lot of rave reviews, but it also costs $1,000.

Right now, VR is stuck in the liminal zone between “big enough to attract the mass market” and “too small to care about.” It’s great for Sony to support the capability, but it’s hard not to think that more attention and support from the company would bring about the VR future it predicts a little more quickly. It’s completely understandable that getting the console out the door was Sony’s first and largest priority. But the company is still sending mixed signals on what kind of long-term support potential customers should expect.

Hardware Accelerators May Dramatically Improve Robot Response Times

(Credit: onurdongel/Getty Images)
New work in robotics research at MIT suggests that long-term bottlenecks in robot responsiveness could be alleviated through the use of dedicated hardware accelerators. The research team also suggests it’s possible to develop a general methodology for programming robot responsiveness to create specific templates, which would then be deployed into various robot models. The researchers envision a combined hardware-software approach to the problem of motion planning.

“A performance gap of an order of magnitude has emerged in motion planning and control: robot joint actuators react at kHz rates,” according to the research team, “but promising online techniques for complex robots e.g., manipulators, quadrupeds, and humanoids (Figure 1) are limited to 100s of Hz by state-of-the-art software.”

Optimizing existing models and the code for specific robot designs has not closed the performance gap. The researchers write that some compute-bound kernels, such as calculating the gradient of rigid body dynamics, take 30 to 90 percent of the available runtime processing power in emerging nonlinear Model Predictive Control (MPC) systems.

The specific field of motion planning has received relatively little focus compared with collision detection, perception, and localization (a robot’s ability to orient itself in three-space relative to its environment). In order for a robot to function effectively in a 3D environment, it has to first perceive its surroundings, map them, localize itself within the map, and then plan the route it needs to take to accomplish a given task. Collision detection is a subset of motion planning.

The long-term goal of this research isn’t just to perform motion planning more effectively; it’s also to create a template for hardware and software that can be generalized to many different types of robots, speeding both development and deployment. The paper makes two key claims: that per-robot software optimization techniques can be implemented in hardware through specialized accelerators, and that these techniques can be distilled into a design methodology for building such accelerators. That opens the door to a new field of robot-optimized hardware the researchers dub “robomorphic computing.”

The team’s methodology relies on creating a template that implements an existing control algorithm once, exposing both parallelism and matrix sparsity. The template parameters are then programmed with values that correspond to the capabilities of the underlying robot. Zero values within the matrices correspond to motions that a given robot is incapable of performing. For example, a humanoid bipedal robot would store non-zero values in the areas of the matrices that govern the proper motion of its arms and legs. A robot with a reversible elbow joint that can bend freely in either direction would be programmed with different values than a robot with a more human-like elbow. Because these specific models are derived from a common movement-planning template, the evaluation code for all conditions can be implemented in a specialized hardware accelerator.
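As a rough illustration of the idea (this is not code from the paper, and the masks below are invented), the per-robot specialization amounts to baking a sparsity pattern into an otherwise generic matrix computation:

```python
import numpy as np

# Illustrative only: a generic "template" computation specialized per robot
# by a sparsity mask. The masks below are invented, not taken from the paper.

def specialize(template: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out entries for motions this robot cannot perform.

    In robomorphic computing the zeros would be baked into the accelerator
    itself (the corresponding multiply-accumulate units are never
    synthesized), rather than multiplied out at runtime as this sketch does.
    """
    return template * mask

template = np.random.rand(3, 3)  # a toy 3x3 block of a dynamics matrix

# Human-like elbow: bends in one direction, so some couplings are zero.
humanlike = np.array([[1, 1, 0],
                      [1, 1, 0],
                      [0, 0, 1]])
# Reversible elbow: bends freely both ways, so the full block stays live.
reversible = np.ones((3, 3), dtype=int)

print(specialize(template, humanlike))
print(specialize(template, reversible))
```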

The researchers report that implementing their proposed structure in an FPGA as opposed to a CPU or GPU reduces latency by 8x to 86x and improves response rates by an overall 1.9x – 2.9x when the FPGA is deployed as a co-processor. Improving robot reaction times could allow them to operate effectively in emergency situations where quick responses are required.

A key trait of robots and androids in science fiction is their faster-than-human reflexes. Right now, the kind of speed displayed by an android such as Data is impossible. But part of the reason is that today’s software can’t even push robotic actuators to their limits. Improve how quickly the machine can “think,” and we improve how quickly it can move.

Lenovo’s New Legion Gaming Laptops Sport Latest AMD, Nvidia Hardware

There’s no in-person CES this year, but companies are still rolling out new products for the virtual event. Among them is Lenovo, which has unveiled a complete revamp of its Legion gaming laptops. The new machines combine AMD’s latest CPUs with Nvidia’s new mobile GPUs. The price tags won’t be in the budget range, but they’re much lower than some competing gaming laptops. 

The Lenovo Legion 7 (above) is at the top of Lenovo’s new lineup. This computer has a 16-inch display with a less-common 16:10 ratio. That gives you a little more vertical space compared with 16:9 displays. The IPS panel is 2560 x 1600 with a 165Hz refresh rate, 3ms response time, 500 nits of brightness, HDR 400 with Dolby Vision, and Nvidia G-Sync. 

The display alone puts it in the upper echelon of gaming laptops, but the Legion 7 doesn’t stop there. It will also have the latest 5000-series AMD Ryzen mobile processors. On the GPU side, the laptop will have RTX 3000 cards, but Lenovo hasn’t specified which models. The Legion 7 comes with up to 32GB of RAM and 2TB of NVMe storage. Lenovo expects to launch the Legion 7 in June with a starting price of $1,699.99. That’s an expensive laptop, but we regularly see gaming laptops that cost much more.

If you’re looking to keep your mobile gaming machine a little more mobile, there’s the Legion Slim 7. This laptop will weigh just 4.2 pounds, making it the thinnest and lightest Legion laptop ever. This laptop will have both 4K and 1080p display options, but the 165Hz refresh rate is only available on the 1080p model. Again, this laptop will have the latest AMD and Nvidia parts. However, we don’t have a price or release date yet. 

The Legion 5 Pro.

The next step down is the Legion 5, which comes in three variants: a 16-inch Legion 5 Pro, a 17-inch Legion 5, and a 15-inch Legion 5. The Legion 5 Pro will start at just $1,000 with a 16-inch 165Hz LCD at 2560 x 1600 (another 16:10 ratio). This computer will max out with a Ryzen 7 CPU (instead of the Ryzen 9 in the Legion 7) and 16GB of RAM, but it’ll still have an RTX 3000 GPU.

The non-pro versions of the Legion 5 will start at just $770, but the cost will depend on which of numerous screen and CPU configs you choose. The displays are stuck at 1080p, but you can get a super-fast refresh rate. All these devices will have next-gen Ryzen CPUs and Nvidia 3000-series GPUs as well. That could make even the base model an appealing gaming machine.

AMD Will Bring Smart Access Memory Support to Intel, Nvidia Hardware

When AMD announced its Smart Access Memory, it sounded as if the company had finally designed a method of allowing Ryzen CPUs and Radeon GPUs to work together to deliver higher performance than either could achieve alone. Our performance tests confirmed that SAM worked fairly well, but it hasn’t been clear whether the feature would remain restricted to AMD-AMD CPU/GPU configurations.

Thanks to a recent PCWorld interview, we have an answer. According to AMD, it has people on the Ryzen team working to get SAM working on Nvidia GPUs, while there are people on the Radeon team working with Intel to get the feature functional with Intel CPUs and chipsets. If AMD is comfortable making this kind of announcement, it implies that there’s reciprocity in these arrangements, meaning we’ll see cross-platform, cross-vendor support, though we haven’t heard anything about Nvidia/Intel cooperation. It only makes sense for the two companies to work together, however, since the alternative amounts to giving AMD a free performance advantage.

This confirms that SAM isn’t an AMD-specific technology as such, though AMD has done the work of enabling the feature before anyone else did. Resizable BAR Capability (the PCIe specification’s name for SAM) was initially baked into the PCIe 2.0 standard in 2008 before being modified in revisions to PCIe 3.0 in 2016. Microsoft added support for the feature in Windows 10 when it introduced Windows Display Driver Model 2.0, but evidently, no GPU vendor supported it until now.
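If you’re curious whether your own GPU advertises the capability, Linux systems expose it through lspci. A rough sketch (lspci output formats vary by version, so treat the string match as an assumption rather than a guaranteed interface):

```python
import subprocess

# Rough sketch for Linux: lspci lists PCIe capabilities per device (run as
# root, or the capability list may be hidden). The "Resizable BAR" string
# match below is an assumption; output varies between lspci versions.
out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

for block in out.split("\n\n"):  # lspci separates devices with blank lines
    if "VGA compatible controller" in block or "3D controller" in block:
        device = block.splitlines()[0]
        status = "yes" if "Resizable BAR" in block else "not listed"
        print(f"{device}\n  Resizable BAR capability: {status}")
```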

If this were an AMD-specific technology, one might suspect that the company had to design Zen 3 and/or RDNA2 to use it. The fact that support can apparently be extended to Intel and Nvidia hardware implies the feature either wasn’t viewed as being worth the trouble or that the companies in question weren’t aware it could deliver a real uplift in performance until someone actually tested it. The latter would be rather droll.

According to AMD, there’s some work required to support the feature appropriately, implying we may not see it enabled immediately on Intel and Nvidia platforms. It’ll be interesting to see what kind of performance uplift other platforms and hardware pick up from enabling this capability — Intel might benefit more than AMD (or vice-versa), and AMD GPUs might benefit more than Nvidia cards, or the reverse.

Sony May Be Overselling Aspects of the PS5’s Hardware Performance

Sony is in a tricky position with the PlayStation 5. While it heads into the next console generation as the unquestioned winner of the current cycle, it looks as though the PS5 will be markedly less powerful than the Xbox Series X.

When Sony unveiled the PS5 last week, Mark Cerny told viewers that the PS5 wouldn’t be at a disadvantage against the XSX because a higher-clocked smaller GPU like the PS5’s could still outperform the wider, slower GPU on the Xbox Series X:

About the only downside is that system memory is 33 percent further away in terms of cycles, but the large number of benefits more than counterbalance that. As a friend of mine says, a rising tide lifts all boats. Also, it’s easier to fully use 36 CUs in parallel than it is to fully use 48 CUs – when triangles are small, it’s much harder to fill all those CUs with useful work.

We spoke to Dan Baker, Graphics Architect of Oxide Games, about the efficiency question and whether smaller GPUs would be a better fit for modern graphics workloads than larger ones.

“Small triangles are indeed inefficient,” said Baker, “because you have to partially shade fragments that are ultimately discarded. However, this inefficiency is largely in CU execution because the CUs are being asked to compute more work, so you’d want more CUs to offset the inefficiency.

“However,” he continued, “this is specific to the type of renderer. In deferred renderers, which make up most of the market today, most of the shading computation is done in screen space, where the small triangle problem is minimized. Only the material setup really pays the cost for small triangles. For Oxide’s decoupled shading rendering technology, neither the setup nor the shading efficiency is affected by the size of the triangle, so we are impacted even less.”
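The quad math behind Baker’s first point is easy to sketch: GPUs shade fragments in 2x2 quads, so a triangle that only partially covers a quad still pays for all four lanes. A toy calculation (the coverage numbers are idealized, for illustration only):

```python
# Idealized numbers, for illustration only: GPUs shade fragments in 2x2
# quads, so a triangle that only partially covers a quad still pays for
# all four lanes in that quad.

def shading_efficiency(pixels_covered: int, quads_touched: int) -> float:
    """Useful fragments as a fraction of fragments actually shaded."""
    return pixels_covered / (quads_touched * 4)

# Large triangle: ~1,000 pixels across ~260 quads -> ~96% useful work.
print(f"large triangle: {shading_efficiency(1000, 260):.0%}")
# Tiny triangle: 3 pixels straddling 3 separate quads -> 25% useful work.
print(f"small triangle: {shading_efficiency(3, 3):.0%}")
```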

According to Baker, the increase in memory latency that Cerny mentions is indeed a negative that can make smaller, high-clocked parts a bit less efficient than their wider, slower-clocked brethren.

What About Storage Performance?

Both Sony and Microsoft are delivering dramatic storage performance in their next-gen solutions, with Sony claiming ~2x the performance Microsoft does in terms of sustained streaming bandwidth. Sony has detailed a number of changes these improvements will deliver, including more efficient data loads, since objects don’t need to be duplicated dozens or hundreds of times in files across the game install.

There’s absolutely no question that upgrading from the HDD solutions inside the Xbox One X / PS4 Pro to PCIe-based SSDs will be an enormous improvement for both consoles. Swapping an HDD for an SSD is still one of the all-time best ways to improve performance, even in an old rig. When Sony talks about a 100x performance improvement compared with the PS4, that’s almost certainly true when measured against HDD latency, while storage bandwidth has improved nearly as much.
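The back-of-the-envelope math bears this out. Using ballpark figures for a console-class 5,400 RPM drive and Sony’s quoted 5.5GB/s raw SSD bandwidth:

```python
# Ballpark figures only; actual drive behavior varies.
hdd_seek_s  = 10e-3    # ~10 ms average access time for a 5,400 RPM HDD
nvme_read_s = 100e-6   # ~100 us random read on a fast NVMe SSD
print(f"latency improvement:   ~{hdd_seek_s / nvme_read_s:.0f}x")  # ~100x

hdd_bw_mbs = 100       # ~100 MB/s sustained for the old console HDDs
ps5_bw_mbs = 5500      # Sony's quoted 5.5 GB/s raw sequential figure
print(f"bandwidth improvement: ~{ps5_bw_mbs / hdd_bw_mbs:.0f}x")   # ~55x
```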

The question is, what aspect of gaming is this additional performance going to improve? Baker believes the high-speed SSD will be used as a giant page file. Texture data can be selectively streamed in and out of system RAM to eliminate things like load times and texture pop-in. What it probably isn’t going to be used for — not as such — is simply making the game world bigger.

To be clear, I’m not saying open-world maps won’t get bigger next generation, just that the use of ultra-fast SSDs probably won’t be the reason why they do. No open-world title loads the entire game world into RAM at once. Rather than attempting to cache an entire title in RAM, a PCIe SSD serves as a giant, texture-y RAMDisk. There are a lot of improvements developers can make behind the scenes to how assets are handled in order to boost performance, but they’re also likely to be tied to complex new methods of handling storage and data loads.
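A minimal sketch of that RAMDisk-style residency idea, assuming a simple LRU eviction policy (real engines use far more sophisticated heuristics, and every name and size below is invented):

```python
from collections import OrderedDict

# Minimal sketch of the "texture-y RAMDisk" idea: an LRU cache of texture
# tiles in RAM, refilled from a fast SSD on demand. Names and sizes are
# invented; real engines use far more elaborate residency heuristics.

class TextureStreamer:
    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.used = 0
        self.resident = OrderedDict()  # tile_id -> size, in LRU order

    def request(self, tile_id: str, size: int) -> None:
        if tile_id in self.resident:
            self.resident.move_to_end(tile_id)  # hit: refresh recency
            return
        while self.resident and self.used + size > self.budget:
            _, evicted_size = self.resident.popitem(last=False)  # evict cold tile
            self.used -= evicted_size
        # Miss: on a PCIe 4.0 SSD the load can hide behind a frame or two
        # of camera movement, which is what eliminates visible pop-in.
        self.used += size
        self.resident[tile_id] = size

streamer = TextureStreamer(budget_bytes=8 * 2**30)  # pretend 8GB texture pool
streamer.request("city_block_12/albedo_mip0", 64 * 2**20)
```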

To some extent, this is par for the course. During the PS3 era, Sony even declared that the PS3 was deliberately difficult to program for because this ensured it took developers longer to unlock the full potential of the system.

“We don’t provide the ‘easy to program for’ console that (developers) want, because ‘easy to program for’ means that anybody will be able to take advantage of pretty much what the hardware can do, so then the question is, what do you do for the rest of the nine-and-a-half years?” explained Kaz Hirai back in 2009.

But this type of thinking has been less common of late, and console manufacturers now provide more help than they once did. Microsoft’s Xbox Series X, for example, offers 8 threads at 3.8GHz or 16 threads at 3.6GHz, and the manufacturer has predicted at least some developers will opt for higher clocks and fewer threads due to the difficulty of parallelizing effectively.

The net effect is that it isn’t clear the PS5 will get a different benefit from its faster storage than the Xbox Series X will, while the XSX is likely to be faster on the whole thanks to its wider GPU. The storage improvements on both platforms are more likely to improve data load times and things like texture loading than to make game worlds larger in absolute terms. We don’t know how these factors will play into customer purchases, however, because we don’t know the price of either platform or how the worldwide pandemic will affect launch schedules. On the whole, it looks like the PS5 is set to be a bit less powerful — though not necessarily less popular — than the Xbox Series X.

Sony Reportedly Struggling to Keep PS5 Hardware Costs Down

Both Sony and Microsoft have started teasing their 2020 game console releases, but there are precious few confirmed details. A new report claims that anxious gamers could be in for sticker shock when they try to pick up a PlayStation 5 later this year. The more powerful hardware in Sony’s upcoming console could amount to a hefty $470 price tag.

Sony has talked in general terms about the hardware it plans to use in the PS5. The device will have a new Ryzen-based CPU, nearly instant game loads, 8K video decoding, and ray tracing graphics. Sony has said it will have about as much raw power as a low-end gaming PC. Game consoles are often far less powerful than contemporary PCs, but developers can wring every ounce of performance out of a console. That’s why the PS4 and Xbox One X can look as good as more powerful PCs. A game console that stands on equal footing with PCs could do amazing things. 

Sony’s desire to push the envelope with the PS5 has led to ballooning costs, according to Bloomberg. The cost to build each console is reportedly hovering around $450, and Sony is looking at the possibility of charging $470 when the device launches. Small profits on the hardware are par for the course with consoles — the games make much more over the long term.

Sony is reportedly scrambling to secure DRAM and flash memory in the quantities necessary to mass-produce the PS5. The more powerful hardware also requires a more elaborate (and expensive) cooling system than past Sony consoles. The leaked developer hardware (above) sure does have a lot of grilles on it, but the final device probably won’t look exactly like this.

More than a decade ago, Sony released the PS3 with a whopping $499.99 starting price. The version with more storage (60GB) added $100 to that. Early sales were sluggish, and that put Sony at a disadvantage for that entire console generation. Are people more willing to accept a nearly $500 game console now? The Xbox One X launched at $500, and Microsoft has expressed confidence in its sales. However, you could play the same titles on the much cheaper Xbox One variants. There wouldn’t be a cheaper version of Sony’s console, though. $500 today is also less money than it was in 2006, due to inflation (a $500 console in November 2006 would cost $640 today, while the $600 version would cost $768).
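For the curious, the inflation adjustment is just a CPI ratio; the figures above imply a multiplier of about 1.28 for November 2006 dollars:

```python
# CPI multiplier implied by the figures above (Nov 2006 -> early 2020).
multiplier = 640 / 500   # = 1.28; note 600 * 1.28 = 768, matching the text
for price_2006 in (499.99, 599.99):
    print(f"${price_2006:,.2f} in 2006 is roughly ${price_2006 * multiplier:,.0f} today")
```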

Sony’s experience with the PS3 might even prompt it to price the PS5 below cost, just so it can build up a large player base and earn more on game sales and subscriptions like PS Plus. Sony outsold Microsoft in the current console generation, and it might want to keep that momentum going even if it takes a small hit at launch.

Beyond Hardware: AMD’s Planned Software Improvements For Navi, GCN

We’ve been covering AMD’s Ryzen and Navi announcements at E3 throughout the week, with one more aspect of the situation left to discuss. While we’ve discussed Navi and its RDNA architecture, we haven’t talked about any of the software improvements AMD intends to offer with its next GPUs. Some of these gains will be available to GCN cards as well.

Let’s talk about some features and improvements.

First up, there are the general quality-of-life gains baked into AMD’s Radeon Software. With Navi, the system will automatically drop your TV into its low-latency game mode if the display supports one. You’ll be able to save settings to a separate file and re-import them if you need to install the driver completely from scratch or are reinstalling your entire OS. There are also some improvements baked into how WattMan reports its results.

AMD’s Link streaming application now supports streaming to TV boxes, including Apple and Android TV. Wireless VR streaming is now supported as well. These improvements are not gated to any specific GPU.

Radeon Chill is AMD’s technology for reducing GPU power consumption while gaming. The software can now set frame rate caps on 60Hz displays to reduce the number of frames rendered when you’re AFK and not actively controlling your character.

Click to enlarge. Don’t try to read this as-is. What is WRONG with you?

AMD’s footnote on Radeon Chill is worth reading. Under the right circumstances, it can significantly reduce GPU power consumption, though this does impact frame rate, and the total size of the gain varies from title to title. Any GPU that previously used Radeon Chill can take advantage of these improvements.
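Conceptually, Chill behaves like a dynamic frame cap keyed to input activity. A toy sketch of the idea, with made-up thresholds (this is not AMD’s actual driver logic):

```python
import time

ACTIVE_FPS, IDLE_FPS, IDLE_AFTER_S = 60, 30, 2.0  # invented thresholds

def frame_budget(last_input_time: float) -> float:
    """Seconds per frame: full rate while active, a lower cap once idle."""
    idle = (time.perf_counter() - last_input_time) > IDLE_AFTER_S
    return 1.0 / (IDLE_FPS if idle else ACTIVE_FPS)

def run_frame(render, last_input_time: float) -> None:
    start = time.perf_counter()
    render()
    sleep_for = frame_budget(last_input_time) - (time.perf_counter() - start)
    if sleep_for > 0:
        time.sleep(sleep_for)  # fewer frames rendered -> less GPU power drawn

# Player idle for 5 seconds: the cap drops to 30fps for this frame.
run_frame(lambda: None, last_input_time=time.perf_counter() - 5.0)
```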

Next up: Radeon Anti-Lag. According to AMD, it has invented a method of reducing the length of time between when you hit a button in a game and when you see the result on screen. This is accomplished by delaying some CPU work so that it occurs alongside the GPU’s work rather than being completed in advance.
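A toy latency model makes the mechanism clearer. In a GPU-bound game the CPU runs ahead, so finished frames (and the input baked into them) queue up behind the GPU; delaying the CPU work so it completes just as the GPU frees up removes that queue wait. This is our simplified reading of the technique, not AMD’s implementation:

```python
def input_latency_ms(cpu_ms: float, gpu_ms: float, anti_lag: bool) -> float:
    """Toy input-to-display latency model for a GPU-bound frame."""
    if anti_lag:
        # CPU work starts late, samples input late, and finishes just as
        # the GPU becomes free: the frame never waits in the queue.
        return cpu_ms + gpu_ms
    # CPU runs a frame ahead; the finished frame waits out the previous
    # GPU frame before its own rendering starts.
    return cpu_ms + max(0.0, gpu_ms - cpu_ms) + gpu_ms

print(input_latency_ms(3, 12, anti_lag=False))  # 24.0 ms
print(input_latency_ms(3, 12, anti_lag=True))   # 15.0 ms
```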

I cannot honestly say that I observed a difference between having Radeon Anti-Lag enabled versus disabled. AMD demonstrated the effect using custom-built latency monitors clipped to displays, and I believe the company’s claim that the monitor I tested showed slightly better latency. But I’m at an age where motor reflexes have already begun to decline, and if I’m being honest, I was never a particularly good twitch gamer to start with.

Best-case, this feature shaves a few milliseconds off your total latency. If you’re a good enough gamer to compete in these spaces to begin with, that might genuinely be worth something. It’s not something I feel capable of commenting on.

Anti-lag is supported in DX11 on all AMD GPUs. Support for DX9 games is a Navi-only feature. DX12 games are not currently supported due to dramatically different implementation requirements in that API.

Radeon Image Sharpening is a feature that pairs contrast adaptive sharpening with the use of GPU upscaling techniques to improve overall image quality above baseline without requiring the penalty of native 4K rendering. The following slides compare RIS on versus RIS off.

RIS is off in the slide above.

RIS is on in this slide. The effect is very subtle. You may want to open both of the images above in separate tabs, zoom in carefully, and then compare the final product. While there’s a definite IQ improvement in the “ON” image, it’s a small one.

Still, small improvements to IQ are generally welcome. RIS was designed by Timothy Lottes, who created FXAA at Nvidia. The performance impact of the feature is expected to be negligible (an estimated hit of 1 percent or less). RIS is a Navi-only feature and is only supported in DX12 and DX9.

Finally, there’s FidelityFX.

FidelityFX is AMD’s new addition to GPUOpen, a capability the company is releasing to any developer who wants to take advantage of it. The Contrast Adaptive Sharpening tool can be used on any GPU.
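AMD has published the actual CAS shader source on GPUOpen; the numpy sketch below is our much-simplified grayscale approximation of the idea, with illustrative constants that are not AMD’s:

```python
import numpy as np

# Simplified, grayscale-only take on contrast adaptive sharpening, in the
# spirit of the CAS shader AMD published on GPUOpen. Constants here are
# illustrative; the real HLSL source is the reference.

def cas(img: np.ndarray, sharpness: float = 0.8) -> np.ndarray:
    """img: float32 grayscale in [0, 1]. Returns a sharpened copy."""
    p = np.pad(img, 1, mode="edge")
    n = p[:-2, 1:-1]; s = p[2:, 1:-1]; w = p[1:-1, :-2]; e = p[1:-1, 2:]
    lo = np.minimum.reduce([img, n, s, w, e])
    hi = np.maximum.reduce([img, n, s, w, e])
    # Adaptive weight: sharpen hard where there is headroom on both sides
    # of the local range, back off near already high-contrast edges.
    amp = np.sqrt(np.clip(np.minimum(lo, 1.0 - hi) / np.maximum(hi, 1e-5), 0, 1))
    k = -amp * (0.125 + 0.075 * sharpness)        # negative cross-tap weight
    out = (img + k * (n + s + w + e)) / (1.0 + 4.0 * k)
    return np.clip(out, 0.0, 1.0)

sharpened = cas(np.random.rand(64, 64).astype(np.float32))
```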

A Few More Navi Details

A few more hardware details on Navi that didn’t make it into earlier stories but probably should’ve (blame a frenetic briefing schedule and some jumbled note-taking):

AMD plans to keep GCN GPUs in-market to handle HPC workloads. The AMD engineer we spoke with compared GCN to a broadsword: enormously effective if swung properly, but relatively cumbersome to use. RDNA, by contrast, is more of a lightsaber, built around elegance and economy of motion. GPUs like the MI50 and MI60 also offer far more memory bandwidth and larger memory pools than any of the Navi cards coming to market in the near future.

RDNA is expected to eventually replace GCN in this space and has some fixes for slow-path quirks that GCN suffered from. Irregular performance with certain texture formats has been fixed, for example, and RDNA has larger caches to prevent pipeline bubbles. Overall performance should be more predictable with RDNA-derived GPUs than it was with GCN.

There’s nothing super-new in these details, but I thought I’d include them for completeness’ sake. This concludes our E3 coverage.

Nintendo: 34.74M Switches Sold, No Plans for New Hardware at E3

Nintendo’s Switch has been a bona fide smash hit for the company, and its sales figures continue to impress two years after launch. Nintendo announced lifetime sales of 34.74M Switches for the fiscal year ended March 31, with 16.95M of those units sold in the past 12 months. That’s a 12 percent increase over the Switch’s launch year and pegs the Switch as Nintendo’s bestselling platform since the Wii — which is to say, likely the second-best-selling Nintendo platform ever.

With this latest report, the Switch has now officially outsold the Nintendo 64, at 32.93M units. The next target is the SNES, at 49.1M systems, which the Switch should handily outpace this year. Nintendo has stated that it expects to sell 18M Switches in FY 2020 (CY 2019), which would put it above Nintendo’s classic 16-bit platform.

In other sales news, the 3DS moved 2.55M units for the year, while the NES and SNES Classic Editions sold 5.95M units collectively. Put differently, Nintendo’s yearly sales figures for the 3DS, NES, and SNES are an appreciable percentage of Microsoft’s yearly Xbox One sales, which… ow. Continued sales of the Xbox One mean Nintendo probably won’t surpass that platform before the Xbox Next is presumably introduced in 2020, but it ought to come close.

Total software sales have also been excellent. So far, 23 Switch games have surpassed the 1-million-copies-sold mark, with Super Smash Bros Ultimate and Pokemon: Let’s Go racking up 13.81M and 10.63M units, respectively. Nintendo’s overall operating profit for the year was $2.2B, up from $1.6B last year.

No New Switch Announcements at E3

Nintendo CEO Shuntaro Furukawa shot down rumors that the company would demo a new Switch model at E3 this year. “As a general rule, we’re always working on new hardware and we will announce it when we are able to sell it,” Furukawa told reporters in Osaka. “But we have no plans to announce that at this year’s E3 in June.”

Hope you like the hardware, because it doesn’t seem to be going anywhere.

An earlier report from Bloomberg had said that Nintendo could release a new low-cost Switch model as early as June. That same report said that the Switch would also receive a ‘modest upgrade’ but that a more powerful version was not currently in the works.

A lower-cost version of the Switch could capture the 3DS’s market when that console inevitably toddles off the mortal coil, while a higher-performance or longer-lived variant could take advantage of the considerable performance improvements available to mobile devices since the 2015-era, 20nm Tegra X1 CPU inside the Switch debuted. Nintendo may not feel the higher product development costs are justified given the Switch’s lower price compared with its console rivals, and the company has never embraced the Sony and Microsoft trend of offering periodic console upgrades. The significant improvement in mobile power consumption and performance has fueled speculation — including our own — that the company might break from its previous approach this time around.

Strategic Timing

One possibility is that Nintendo might be timing its own performance improvement announcements against the unveiling of new consoles from Sony and Microsoft. This would be the opposite of the strategy the company used with the Wii U, which launched in 2012, a year before the PS4 and Xbox One.

There’s no chance that a 7nm Switch would match the performance of the PS5 or Xbox Next, but the performance and processing power leap from 20nm to 7nm would give Nintendo a significant set of upgrades to discuss in its own right, with potential resolution improvements and detail level increases. This might be seen as a better strategy for keeping consumer interest high during its rivals’ launch season. This is, however, strictly speculation on our part — Nintendo has given no sign that it will go head-to-head with Sony or Microsoft in 2020 with its own refresh cycle.

Steam Hardware Survey Shows GPU Gains for AMD, Mixed Turing Results

We’ve been tracking the monthly updates of the Steam Hardware Survey to create a more detailed window into the GPU replacement cycle than we’ve typically provided in the past. There are a number of interesting trends currently playing out in the GPU market. Nvidia is in the midst of an all-hands-on-deck effort to convince gamers that ray tracing is the Next Big Thing and that its Turing GPUs represent a worthwhile investment, even considering their increased costs relative to previous generations. AMD has aggressively positioned its lower-end GPUs to combat Pascal and Turing, with a lot of market buzz around 7nm and its upcoming Navi family. So how are consumers responding to these arguments?

Probably not as well as Nvidia would like, though the company remains the overwhelming player in the gaming GPU market. According to Steam, Nvidia’s overall market share is ~75 percent, with 10 percent of gamers on Intel solutions, and 14.7 percent using AMD. There’s a little good news for AMD in these results that we’ll discuss as well. First, though, let’s check out the state of Turing versus Pascal.

The slideshow below compares the percentage of Steam users with a given GPU, measured in the months after that GPU launched. There’s a 0 percent period in the graphs to show the time lag between when cards debut and when they actually appear in the Steam Hardware Survey. If a GPU launches in May, May is considered to be Month 1 of launch. The RTX 2080 and 2070 use a 7-month period to reflect the time since launch, while the RTX 2060 uses a three-month window (it launched in January).

I’ve dropped the 1080 Ti from these comparisons because the Steam Hardware Survey suffered a major discontinuity in terms of the data set back in August 2017, and we’re now bumping into that period relative to the 1080 Ti’s launch window.

Because Turing GPUs sell at higher prices than their Pascal counterparts, we’ve also included price-matched comparisons that compare cards based on their actual price rather than branding. In these cases, the GTX 1080 is compared against the RTX 2070 and the GTX 1070 takes on the RTX 2060.
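For anyone who wants to reproduce this kind of comparison, the key step is re-indexing each card’s share series by months since its own launch. A pandas sketch with placeholder numbers (only the RTX 2060’s 0.27 percent debut and the RTX 2070’s 0.17 percent debut come from the survey figures discussed below; the rest are invented):

```python
import pandas as pd

# Steam share (percent) indexed by months since each card's own launch.
# Placeholder values except where noted in the text.
share = pd.DataFrame({
    "GTX 1070": [0.0, 0.0, 0.18, 0.45, 0.80, 1.10, 1.40],
    "RTX 2070": [0.0, 0.0, 0.00, 0.17, 0.25, 0.33, 0.41],
    "RTX 2060": [0.0, 0.0, 0.27, None, None, None, None],
}, index=pd.RangeIndex(1, 8, name="months_since_launch"))

# Price-matched snapshot: every card at the same point in its life cycle.
print(share.loc[3])
```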

The entire data table is shown below:

Turing versus Pascal adoption data, March update.

So, what do we see in aggregate? Mixed results. The gap between the 1080 and 2080 widened by a fraction, but scarcely enough to notice. The gap between the 1070 and the 2070, on the other hand, exploded. Adoption of the GTX 1070 surged once the cards were widely available in-market, while the RTX 2070 has yet to benefit from an equivalent leap. The GTX 1080 versus RTX 2070 comparison shows improvement, with the RTX 2070 gaining on the GTX 1080 as far as current adoption at the same place in their respective life cycles. This is good news for Nvidia.

The RTX 2060 similarly shows mixed results. Steam appears to have a cutoff at roughly 0.15 percent when it comes to whether a GPU rates being included on the survey. The RTX 2060 hits this adoption rate more quickly than any other RTX card, appearing in our survey in the third month post-launch. As you can see, none of the other Turing GPUs hit this point until Month 4. It also enters the survey at the highest adoption rate — 0.27 percent, compared with 0.22 percent for the 2080 and 0.17 percent for the RTX 2070. Again, this is a sign of increased uptake and better sales.

But while the RTX 2060 has had the best introduction of any Turing GPU judged on SHS adoption, it doesn’t hold a candle to either the original GTX 1060 or the GTX 1070. The availability of multiple GTX 1060 SKUs complicates this story, which is another reason why the GTX 1070 may be the better RTX 2060 comparison. Even here, however, the GTX 1070 is decisively ahead.

Nvidia has said that Turing drove far more revenue than Pascal during the early days of launch, and that may be true. Nevertheless, the best public data source available suggests that Turing has not been as widely adopted by the gaming community as Pascal was at the same point in its life cycle.

Modest Gains for AMD

AMD has been aggressively positioning its Radeon GPUs for months, and those price cuts are paying off. The RX 580 was the third-largest mover on Steam this month, jumping 0.16 percent for a total market share of 1.1 percent. To put that in perspective, however, the RX 580 is currently listed as the most popular AMD GPU on Steam.

Not actually all that popular.

The RX 570 grew modestly, at 0.06 percent, for a total market penetration of 0.34 percent. RX Vega launched in August 2017 but only appeared on the Steam Hardware Survey in January 2019, at 0.16 percent. Now the two Vega GPUs are up to a combined 0.22 percent share. It is unclear whether this figure includes the Radeon VII.

One point these comparisons hammer home is just how lopsided the GPU market currently is. It’s absolutely fair to compare Turing and Pascal or to discuss the overall GPU market in 2016 versus 2019, but right now, Nvidia doesn’t have much competition. It’s going to take more than price cuts for AMD to reverse its current market share.
