Interesting that when limited to 15W, this Ryzen 9 5980HS is faster than the Apple M1 in Geekbench multithread (and even faster at 35W), but slower in single thread.
As an engineer, I love examining the details. I think the M1 is a great processor, but the performance advantages have been greatly exaggerated.
As an end-user, I just don't care if my CPU is 4 threads or 16 threads or 5nm or 7nm or 15W or 20W. I only care about the price, how fast it handles my workloads, and how long the battery lasts. I think AMD understands this angle well.
I think some of the exaggeration has come from people who only use Apple products finally seeing what the cutting edge actually is, e.g. "we've historically been the same performance or worse, now our tribe is the fastest", spurred on by much faster benchmark numbers.
Anyway I look forward to websites slowing down as M1 becomes more popular amongst Devs who shouldn't need anything that fast in the first place (ideally)
>Anyway I look forward to websites slowing down as M1 becomes more popular amongst Devs who shouldn't need anything that fast in the first place (ideally)
Yup. The "new" Reddit redesign is totally unusable on all my computers except the M1 Air. Not looking forward to more web developers getting their hands on these things.
The redesign has absolutely no resource management. It doesn't load and unload media as it leaves the viewport, whether from scrolling or from opening a thread, and the redesign infinitely scrolls.
Plus, thread view is a modal over the feed unless you navigate directly to its URL, so you'll often have media loading and running in the thread view while other media may still be playing back in the feed.
I don't think that's the only source of the insane memory leak in Safari in particular (you will get into the GBs if you leave a media-heavy feed open past 10 minutes or so), but it's gotta be a big contributor.
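For what it's worth, the viewport-based load/unload being described isn't exotic; here's a minimal sketch of the idea using IntersectionObserver (assuming plain HTML5 <video> elements in an infinite feed, not Reddit's actual components):

```typescript
// A minimal sketch (not Reddit's actual code): pause and unload media that
// scrolls out of view, assuming HTML5 <video> elements in an infinite feed.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    const video = entry.target as HTMLVideoElement;
    if (entry.isIntersecting) {
      // Back in view: restore the saved source so it can play again.
      if (!video.src && video.dataset.src) video.src = video.dataset.src;
    } else if (video.src) {
      // Out of view: pause, remember the source, and release the buffer.
      video.pause();
      video.dataset.src = video.src;
      video.removeAttribute("src");
      video.load();
    }
  }
}, { rootMargin: "200px" }); // small buffer zone around the viewport

document.querySelectorAll("video").forEach((v) => observer.observe(v));
```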
That doesn't sound so good, but I can do one better. The 4 FP execution units per core on the M1 (x64 cores have 2) that likely lead to this result might also lead other languages to use more floats and doubles.
It is, but the Firestorm cores can do 2x more scalar FP operations per clock than Zen 3. Still, for a full browser benchmark like Speedometer 2, I would've liked to see better results from the 5800H/U given their large cache bumps over last gen.
Basically, you have to remember how slow and power-hungry Mac laptops were before the M1. They were not using the latest available technology, and the M1 switched to using it. So for Mac laptop users, it's a giant leap. For everyone else, not as amazing a leap, but still a very good chip.
The M1 still has a clear lead in single-core performance, and at a much lower clock speed than the competition. That is the big leap in performance. There are still far too many things which depend on single-core performance; this is why a Threadripper with 64 cores really shines only for very compute-intensive parallel workloads. It will be very interesting to see how Apple Silicon performs once they go to 8 or 16 high-performance cores.
A big part of this is that Apple's best Intel laptop (the 16") is still on a 9th-gen Intel chip. So comparing a 2020 M1 on 5nm against a 14nm 2018-era chip isn't fair. It's real if you're an Apple user and upgrade from a 16" MBP to an M1 machine, but it's not a fair comparison.
Plus, Intel isn't even the x86 leader now. 7nm Zen 2 is a much better comparison.
Another part of this is that I think the intel line will probably be on "maintenance mode" at Apple for now; the best and brightest engineers are probably all working on Apple Silicon products...
In this case, I think Apple does as well. It seems extremely focused on Mac/iOS user workloads and by all accounts handles them extremely well. That it matches or beats x86 general-purpose CPUs in synthetic benchmarks is an added bonus.
Interestingly enough, I think the M1 does this with much of the die devoted to neural-engine widgets.
Also the fact that it works with existing software out of the box.
For as much as MacBooks are touted as developer laptops, they should have Homebrew, Python, and all the other dev tools available natively from launch day.
> Note that AMD has been beating Apple on single core performance since day 1
On EEMBC CoreMark. But CoreMark is not a meaningful benchmark outside of embedded devices.
From your linked article: "CoreMark focuses solely on the core pipeline functions of a CPU, including basic read/write, integer, and control operations. This specifically avoids most effects of system differences in memory, I/O, and so forth."
Memory & I/O are integral to CPU design, which is why AMD's own marketing material emphasizes them so much.
So you cherry-pick one single-threaded benchmark out of many and decide that AMD wins in single thread, lol. In most benchmarks, the M1 is much faster than the 4800U in ST.
They didn't enter any market they weren't already in. They've been selling laptops and SFF desktops for years, and they certainly haven't started selling CPUs and aren't going to. Not to say that the competition isn't good; they just haven't entered any new markets.
If you look at the fourth table, the 15W config allows for 6 seconds of turbo at 42W, which probably explains why the benchmark isn't very different between the 35W and 15W configurations: the processor probably just draws 42W for the duration of the benchmark in both cases. I don't know whether comparisons to the Apple M1, which tops out at around 20-25W, are apples-to-apples (no pun intended).
That said, AMD's offering in this space is very impressive and really makes you question what Intel has been doing the last three years.
Well, several things. The M1 does not have 8 equally performant cores: only 4 of them are the fast ones that lead in single-core performance. Also, it runs at around 10W, not 15W, and does so without a fan (in the case of the MacBook Air). Hence I don't know if your comparison makes sense.
In this case the 15W was actual measured power draw, not made-up marketing numbers (the actual measurement was 12W, though I assume AnandTech opted to round up to 15W in the other slides). The M1's actual multicore power draw is more like 20W, at least in the Mac mini being compared against here.
> Also it performs at 10W, not at 15W and does so without a fan (in the case of the MacBook Air)
The comparison is with a Mac Mini that uses a fan.
15W is not far from 10W, considering Ryzen is still on 7nm.
IMO even 35W is not much at all, for 18 hours on battery and that kind of performance (especially multi-core).
The difference is negligible for users: assuming 8 hours a day for 365 days straight, the 15W APU would consume roughly 15 kWh more, which in Italy would cost about one euro more.
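Quick sanity check of that figure (a rough sketch, assuming a 5W difference in average draw between the two chips):

```typescript
// Extra energy from a 5W higher average draw, 8 hours/day for a year.
// Cost then depends on the local electricity tariff.
const extraWatts = 5;                        // assumed: ~15W vs ~10W average draw
const hoursPerYear = 8 * 365;                // 2920 hours
const extraKWh = extraWatts * hoursPerYear / 1000;
console.log(extraKWh.toFixed(1));            // "14.6", roughly the 15 kWh cited above
```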
For those who didn't notice: ASUS is using liquid metal instead of regular paste! It's not mentioned in the article, but you can tell from the header picture. It will be very interesting to see whether it holds up after 5 years or whether it will "dry up" (e.g. amalgamate with the heat sink).
I guess that would make changing the heat sink an issue, but as long as you don't do that, shouldn't the amalgamated metal give just as good thermal conductivity overall?
Do you (or anyone else reading) have any idea what that white stuff is? And I mean specifically. It's probably some kind of setting rubber-like material. I ask because I want to apply liquid metal to a CPU and want to create just such a seal.
Planned obsolescence. I think manufacturers should be required to clearly display how long they think the product will last, and suddenly ownership turns into a subscription (divide the price by how many months the product will last). It should also be clearly marked that you won't really own the product. Manufacturers should be required to attach a bill of materials and a schematic to the product, and it should be illegal to ask a chip manufacturer to withhold sales from consumers willing to repair on their own.
We need to nip these shady practices in the bud!
You are jumping straight from speculation about the longevity of liquid metal to speculation that it is done for planned obsolescence. Cynicism is warranted in this world, but not quite this much. I've seen Gamers Nexus do some testing on old liquid metal and not find any aging. With laptops, thermal throttling is a huge issue, and liquid metal helps cool things better. It's nice to see the effort here.
I don't buy this narrative. Don't thermal pastes (at least the old ones) also dry up? "Repasting" old laptops/GPUs seems to be common advice on tech forums, at least it was a few years ago. If they were doing it for planned obsolescence, they really didn't need to go with expensive liquid metal; they could just use shitty thermal paste.
It is planned obsolescence, but not because of the liquid metal thermal interface. Liquid metal is not only more expensive to buy but also to apply (it is very difficult to apply correctly, and they also need to take care not to spill it on exposed parts of the processor or it may short-circuit the CPU; that is probably the reason for the white barrier around the die). Also, liquid metal only forms alloys with certain metals, and Asus presumably knows this and avoided using them.
But it is still planned obsolescence because, like in most notebooks, parts like fans are not meant to be replaced at all. They probably can be replaced, but it is still expensive enough that most people will prefer to buy another notebook rather than replace a fan or battery.
I repasted my Acer Predator laptop with liquid metal. It went from 80°C idle / 95°C load to 65-70°C idle and 75°C load.
When I first turned it on I thought I hadn't plugged the fans back in all the way because it was so quiet. Fans didn't kick on till I was in Windows and it was loading apps, even then it was way quieter. Liquid metal is a game changer for certain computers.
If you purchased this laptop in Norway you would get a 5 year warranty by law. Maybe the US should do something similar.
I do remember watching a news clip a few years ago where TV manufacturers used a cheap capacitor that would burn out in around 5-6 years. There was a repair shop that would swap the cap out for a long-life one that would easily last 10 years. The repairman was a bit annoyed at the obvious planned obsolescence.
Here in the UK you even get 6 years. The problem is that when the producer knows the limit, they can design the product to die roughly just after the warranty expires.
When I repaired some of my own equipment, I was amazed that even 40-year-old capacitors were just fine, and that there was a service manual with all the schematics, a BOM, and a calibration manual. Something unimaginable today.
I agree, however I think it would be simpler to require manufacturers to offer 10-year warranties (or perhaps longer) in order to sell their products. The market would be forced to respond with better engineering and also ensure that spare parts are readily available, if only to serve manufacturers' own interest in making warranty repairs easier.
ASUS is using Thermal Grizzly Conductonaut. [1][2] The metals it contains are tin, gallium and indium. This is the market leader in liquid metal thermal solutions and in high power desktop uses can reduce the CPU temperature by 20C compared to bog standard solutions, and 10C compared to high end pastes.
Since then, I've read several reviews and watched videos. In a lot of cases, the update from Ryzen 4000 to Ryzen 5000 mobile is not super impressive (and certainly less so than Ryzen 3000 to Ryzen 4000 mobile, but more so than most of Intel's previous generational improvements), and I do worry there's not a big improvement in battery life, despite what AMD claims in this article.
I've also looked at some RTX 3070 results, and they are not blowing me away either. If anything, the best things to come out of 2021 laptops will probably be screen improvements, and in some cases (ahem, Asus) better cooling solutions (though many Asus laptops are still lacking webcams!).
I'm surprised that moving from 2 CCX to 1 CCX, doubling the L3 cache, and other Zen 3 improvements aren't having a bigger effect on Ryzen mobile, but as seen here ( https://www.youtube.com/watch?v=RjFc5ZRtwP4 ) in this direct comparison of Ryzen 7 5800H to Ryzen 7 4800H, some things that Intel was previously excelling at (like Excel ;)!) will now be better handled by AMD.
I know there must be some reason, but I can't put my finger on why AMD is continuing to use Vega for their mobile/APU range. Vega seems decent within the power range, but I would have assumed Navi was more than mature enough for it by now; potentially it's just economics, and any performance upside from a Navi iGPU isn't worth the extra cost.
A few helpful excerpts from different parts of the article (the rest of each paragraph is also useful):
> Users may be upset that the new processor range only features Vega 8 graphics, the same as last year’s design, however part of the silicon re-use comes in here enabling AMD to come to market in a timely manner.
> As mentioned on the previous page, one of the criticisms leveled at this new generation of processors is that we again get Vega 8 integrated graphics, rather than something RDNA based. The main reason for this is AMD’s re-use of design in order to enable a faster time-to-market with Zen 3. The previous generation Renoir design with Zen 2 and Vega 8 was built in conjunction with Cezanne to the point that the first samples of Cezanne were back from the fab only two months after Renoir was launched.
> With AMD’s recent supply issues as well, we’re of the opinion that AMD has been stockpiling these Ryzen 5000 Mobile processors in order to enable a strong Q1 and Q2 launch of the platform with stock for all OEMs.
One reason for deprioritizing it might be that they realized iGPU performance isn't as relevant as they had hoped. AMD spent years betting heavily on concepts like the APU, HSA, tight coupling with coherent virtual memory access from the GPU and CPU side, etc.
They tried to do a "build it and they will come", but the software didn't come, and Intel ate their lunch with low-effort iGPUs. Advances in GPU programming slowed industry-wide, especially on the software, dev-experience and programming-language side, and the GPU world remained far from CPU programming productivity. (And you can still get two orders of magnitude of parallelism staying on the CPU, from SIMD, multicore and SMT.)
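Rough illustration of that parenthetical, with assumed figures (a mainstream 8-core part with SMT and 256-bit AVX2), counting hardware threads times SIMD lanes rather than realized speedup:

```typescript
// Parallelism budget of a mainstream x86 CPU vs. scalar, single-threaded code.
// Assumed figures; real-world speedups are lower.
const cores = 8;       // mainstream 8-core part
const smtThreads = 2;  // hardware threads per core
const simdLanes = 8;   // 256-bit AVX2 with 32-bit floats
console.log(cores * smtThreads * simdLanes); // 128, roughly two orders of magnitude
```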
They learned their lesson, and this time around came back with a product that can beat Intel at their own game.
memory bandwidth as spqr0a1 said is probably a good reason, but also, the design pipeline probably made it hard to do Zen2 with Navi. Both of those came out in 2019; if Navi had gotten delayed, it would have delayed Renoir as well.
Now for the Zen3 APUs, AMD is emphasizing pin for pin compatibility; possibly switching the GPU would have needed different pins; or they just wanted to make sure they got Zen 3 APUs out quickly to keep making inroads into the laptop market.
There's a leaked AMD roadmap floating around that has a Van Gogh chip with Zen 2 + Navi coming out sometime this year. But that roadmap didn't show the Lucienne Zen 2 + Vega chips AMD recently announced (Ryzen 5{3,5,7}00U) to ensure model numbers stay confusing.
The article touches on this. AnandTech assumes that it's because it was faster to go to market with Vega, something about the modular design of the chip. They also mention that AMD would likely upgrade the GPU in a next version without upgrading the CPU, maybe a minor MHz bump of the CPU together with a new GPU.
Memory bandwidth is but one aspect of GPU performance, and memory bandwidth is also substantially upgraded in this APU: it supports LPDDR4X-4267 at 68.2 GB/s, up from DDR4-3200's 51.2 GB/s.
But the Vega 8 in this is definitely not the peak that you can squeeze out of DDR4. If that was the case then we wouldn't have had things like the Vega 11 in the 3400G. Also RDNA2's "Infinity Cache" helps reduce memory bandwidth requirements, which would also be a relevant upgrade.
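For reference, both headline bandwidth numbers fall straight out of the transfer rate times the bus width, assuming the usual 128-bit memory bus on these APUs:

```typescript
// Peak theoretical bandwidth = transfers/s per pin × bus width in bytes.
const peakGBs = (megaTransfers: number, busBits: number) =>
  (megaTransfers * 1e6 * (busBits / 8)) / 1e9;

console.log(peakGBs(3200, 128)); // 51.2   — DDR4-3200
console.log(peakGBs(4267, 128)); // 68.272 — LPDDR4X-4267 (quoted as 68.2 GB/s)
```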
This was just a time-to-market strategy to reduce risk. Not an optimal engineering decision. This let them avoid trying to make a power-optimized version of RDNA2 at the same time they were trying to release any version of RDNA2.
Navi improves delta compression for memory, which improves effective memory bandwidth for a given "raw" bandwidth. So switching to Navi would have alleviated the memory bandwidth bottleneck to some degree and improved performance.
It is a puzzling decision and perhaps the "wasn't ready when they taped out" explanation is the correct one. Cezanne seems to have been ready for a while now and just waiting for fab capacity - meaning it would have had to have been taped out in parallel with the RDNA2 architecture chips. So a design flaw in RDNA2 might have blown the Cezanne launch.
They could have used RDNA1 though and it still would have improved compression over GCN/Vega. Or ported over just the delta compression. I guess maybe they just wanted to port Zen3 over straight, use the memory controller they'd already proven with Renoir, and not take any risks?
It's definitely puzzling and I haven't heard an explanation I would consider 100% satisfactory.
Citation needed. Doesn't seem to be borne out by benchmarks comparing chips with varying iGPU resources and same memory setup.
Sure, they're going to be memory bound some of the time, and it depends on what kind of bandwidth/compute balance the GPU code is tuned for... but we didn't stop putting more cores and wider SIMD on chips just because then-current software didn't fully utilize them.
Also, the article clearly mentions that time to market using Vega was much shorter than reinventing the wheel, which I'm guessing was important since AMD is still small compared to Intel in the mobile space and has struggled in this segment in the past.
Yes, though I'm pretty sure the same rationale was used for the 4000 series APUs as well.
I have a feeling that, for the particular market these chips are aimed at, improving the iGPU by even something like 50% is not that meaningful; it's still not going to be competitive with discrete graphics, and it will nearly always be paired with a discrete GPU if gaming is an option on the particular laptop.
And so I suppose the rationale might be that it's just easier to stick with Vega, and the current power draw for the iGPU is acceptable for the required graphics horsepower. Maybe one day we'll see an APU powerful enough to remove the need for a discrete GPU in laptops, but not for a while yet :).
Unlike DDR, GDDR is optimised for high bandwidth (over latency), which is why dedicated GPUs can be so much faster. HBM even more so. DDR5 does 6.4 Gbps per pin, whereas HBM2E does about 2.5 Tbps per stack.
Console APUs don't need to be particularly power-optimized. If the RDNA 2 GPU in the PS5 can't idle below 10W, nobody will care. If the RDNA 2 GPU in your laptop can't idle below 10W, that's unshippable. A decent laptop doesn't draw 10W total at idle, including the screen and all that.
And I'm sure at some point there will be an RDNA-something APU from AMD too, but there also aren't any existing power-optimized RDNA cores sitting around to point at and ask, "but why didn't you use that instead?"
And still no desktop APU since Zen+ (except Renoir, which is OEM-only). Maybe AMD wants you to buy non-APU CPUs so that you also buy a GPU from them.
APUs and GPUs are relatively low-margin for AMD on a per-wafer basis: it takes a lot of silicon to produce a product which sells for not a lot of money.
Consoles exercised options to increase their orders by 50% from the expected amounts during 2020, and reportedly consoles are now consuming up to 80% of AMD's wafer allocation. So they are in a defensive mode to maximize their returns on their remaining wafers.
Monolithic dies need to have IO for memory/etc, which doesn't shrink well on modern nodes, so it consumes a lot of space. The iGPU also consumes a pretty large amount of space. On chiplet based CPUs, all of that is pushed off to a separate, older process which is not in shortage (GloFo 12nm/14nm), so only the CCDs themselves are on 7nm. This is a very efficient way to use silicon compared to the monolithic APU dies.
GPUs are not a great situation for AMD either. NVIDIA is using a cheaper, worse process, but as is customary they have much better silicon engineering and have matched AMD's core efficiency despite the worse process - the only real advantage AMD has is that they use less memory which saves them about 30 watts out of a 300W budget. And at the top end they are exceeding their performance, plus they have tensor cores to accelerate DLSS and provide additional performance, plus more raytracing performance. So AMD is paying more for an expensive process and hasn't managed to beat NVIDIA. NVIDIA is essentially forcing AMD to cut their own throats on margins in the GPU segment, if they want to remain competitive. There is very much a reason that AMD is no longer wildly undercutting NVIDIA like they used to and it has nothing to do with mindshare/etc, AMD has been using more expensive technology just to try and stay performance competitive for years now.
As such, APUs and GPUs have been the products that get the shit end of the stick. To wit: as far as cutdowns go, a 5600X die is 80mm2, they sell that for $300 MSRP, and they keep most of that for themselves (let's say $50 chip cost/manufacturing/packaging, $50 for retailer/distribution, they make $200 a chip). A 6800 die is 520mm2, the MSRP is $580, and they have a significant BOM cost, and they have to incorporate a profit margin (even if slim, it's not great being an AMD partner). So let's say $100 chip cost/manufacturing/packaging, $300 BOM, $50 for the partner, $50 for the retailer, they make $80 per card. But it uses 6.5 times as much silicon per chip, so they make the equivalent of $12 on this GPU versus $200 on the CPU! It's literally an order of magnitude more profitable to make the chiplets than to make a GPU.
Or an APU, Cezanne is 175mm2 for an 8-core die, they can sell that for maybe $50-75 more than the iGPU-less chip, so $350-375. The chip costs them let's say $25 more, so they make $250, but it's 2.2x as big as the chiplet.
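Putting those made-up numbers on one scale, profit per mm² of 7nm silicon (same purely illustrative figures, not real AMD economics):

```typescript
// Back-of-envelope profit per mm² of 7nm silicon, using the illustrative
// numbers above — not real AMD figures.
const products = [
  { name: "5600X chiplet", dieMm2: 80,  profitUsd: 200 },
  { name: "RX 6800",       dieMm2: 520, profitUsd: 80  },
  { name: "Cezanne APU",   dieMm2: 175, profitUsd: 250 },
];
for (const p of products) {
  console.log(`${p.name}: ~$${(p.profitUsd / p.dieMm2).toFixed(2)}/mm²`);
}
// 5600X ≈ $2.50/mm², RX 6800 ≈ $0.15/mm², Cezanne ≈ $1.43/mm²
```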
Just made up numbers pulled out of my ass, but you can see the math heavily disfavors making APUs and GPUs. Which is why AMD has actually been shunting production away from those throughout 2020, it is not a coincidence that they were announced in January and didn't show up until maybe May in actual products, and then you had reports from partners that they were getting their allocated (basically confirmed and accepted) orders pushed back a quarter or more.
(this eventually ended up with the orders from August getting pushed back into November if you read the series of threads from XMG)
TSMC just doesn't have enough capacity to go around, and right now they are an effective monopoly, people basically pre-write the obituary of products that choose processes that aren't TSMC. And unfortunately the situation is only getting worse this year and next year - there are products that have been waiting for consoles that will immediately soak up all that capacity. Intel is launching GPUs and potentially laptop CPUs (?) on TSMC now. Automakers have been clamoring for more and have had to shut down lines because they can't get enough chips. NVIDIA is reportedly moving a few of their high-end consumer products to TSMC late this year to supplement their Samsung capacity and allow a "refresh" like 3000 Super with higher performance at the top. Apple is moving all of their laptop and desktop products from Intel products (made at Intel foundries) to TSMC, so I expect they will continue to squat the 5nm node for a longer period of time (desktop/laptop will probably stay 5nm and phones/tablets move to 3nm once that's available), and this will echo right down the conga line, products from other brands that would have transitioned to 5nm will be stuck consuming 7nm instead.
Everyone thought it was some snub from TSMC that NVIDIA switched to Samsung, like there is a persistent narrative (which nobody credible will actually get behind) that NVIDIA tried to demand a discount from TSMC and got told to pound sand. It is now very clear that TSMC was not going to be able to deliver the capacity (and certainly not at a price anyone liked) and it's now looking like NVIDIA switching to Samsung was a masterstroke. They've been able to ramp Ampere faster than Pascal (3080 is at the same % marketshare in Steam Hardware Survey as 1080 was at the equivalent point in its lifecycle and the market is >2x as big) and AMD can't get product on shelves. And now NVIDIA can supplement their Samsung capacity with a few higher-performing chips on TSMC at the top of their stack to help broaden their production capacity.
It really says it all that AMD hasn't officially launched Milan yet. That's a chiplet-based product (so, good utilization of their wafers) but higher-margin than consumer products. So basically strictly better margins than consumer dies in all respects. But they just can't deliver enough capacity yet to be credible. That is the next thing on AMD's docket, it'll be launching in March. So far hyperscalers have samples and probably small production runs and that's it. Unlike NVIDIA, unlike Intel, AMD has no second-source to run to, it's TSMC or bust (or go back to shitty 12nm/14nm Zen+ chips from GloFo).
(which, btw, long story short, is why "only Zen+ APUs on desktop"! those APUs are manufactured at GloFo on an older node and thus do not compete for precious TSMC capacity.)
This article is just a marketing teaser to try and divert attention from the Tiger Lake-H launch. AMD had a core count advantage over Tiger Lake-U which they could point to (and which gave them a performance and efficiency advantage in highly-threaded tasks), Tiger Lake was faster per core but AMD had more of them. Well, now Intel has 8 cores coming up in March, and they're still faster cores than Zen2-based Renoir. AMD has been sitting on these Zen3-based Cezanne chips as long as possible so they could preserve their margins on chiplet products, with Tiger Lake-H coming they don't have a choice but to at least flash them around to show what they can do.
The APUs launching in March are gonna be a paper launch. Milan is where all their wafers are going in Q2, it's way higher margin and they need to hit that market (especially if they can beat out Ice Lake-SP since they have been making good inroads in server). But they need to show the flag in mobile because Intel is about to surpass Renoir in the mobile-workstation market, not just the ultrabook market. They have to show their successor now and hope to keep people on the hook long enough. It will probably be Q3 if not Q4 before these Cezanne processors show up in any number - there are just too many other companies who booked TSMC's capacity and too many products within AMD that need to go first before APUs.
It's no coincidence these articles are showing up 2 months before the launch event. Hardware Unboxed had one too. It's AMD's last chance to try and get benchmarks against 14nm 8-core chips and 10nm 4-core chips that they can make an argument against. This is "look at the shiny, please pay no attention to that Intel product launch that reviewers will have in-hand within a month or so". And Intel laptops have hard-launched to a much greater extent than AMD.
I wonder what Sony and MS are paying AMD for console chips to get them to push off Milan and their APUs, it's got to be considerable. Milan probably would have at latest launched alongside Vermeer if not before, so they have delayed it 6+ months. It must have been an absolute pile, Milan is a high-margin high-volume product.
tl;dr: fab capacity is fucked, maybe things will be better in 2022
That money is supposed to build a new US-based fab, among other things. I’m very glad to see that, but it does seem hard to scale up manufacturing capacity at the highest performance levels.
Perhaps there will always be a single “best” fab globally, which makes all the top-end products for everyone and commands way higher prices than everything else, while inspiring hand-wringing about risk concentration.
Huge thanks for such a comment. This is why I love HN.
As for consoles, AMD is likely just losing a lot of money on them, simply because both Sony and Microsoft most likely have decade-long contracts that heavily limit AMD's margins. There are reasons why Nvidia doesn't try to compete in this market.
> There are reasons why Nvidia doesn't try to compete in this market.
Maybe not at the high-performance end, but the Nintendo Switch is powered by an Nvidia Tegra X1, and they won that contract largely by proving the concept in their own Nvidia Shield “microconsole”.
It really seems AMD's problem right now is fab capacity - as they can't sell their existing inventory fast enough. The new gen GPUs are double-whammied by the resurgence in cryptomining but it's telling that even the 5000-series CPUs still seem to be selling out (even the lowest tier 5600X).
From AMD's earnings report (released yesterday), their margins are healthy and the enterprise segment more than doubled revenue quarter over quarter. That certainly indicates Milan is their top priority, given its high profit margins.
https://www.anandtech.com/show/16455/amd-reports-q4fy-2020-e...
Will fab capacity really be better in 2022? It can't be worse than this year, but with Apple buying up all the next-gen 3nm capacity and Intel also outsourcing to TSMC, it seems like it will be a persistent issue until more fabs come online (which takes years).
I'm sure it's coming. This market timing is frustrating, though, especially as it gives mobo vendors (looking at you, ASRock) more slack on ever getting the BIOS/UEFI of last-gen models updated to support them.
The most probable explanation is that APUs are low-margin products, and if they released an 8-core desktop APU like the mobile ones, it would have cannibalized the non-APU parts.
You're almost certainly going to find this paired with something like an RTX 3060 Max-Q in a bunch of laptops (similar to last year's Zephyrus G14), which would be a much better option for SteamVR or any gaming.
VR at 20 fps is going to be nauseating to say the least. Reprojection helps, but not that much. For example Oculus won't let your game into the store at all unless the minimum spec system can sustain 45 FPS, and the recommended must be 90 FPS.