AMD Reveals Next-Gen Desktop Processors for Extreme PC Gaming (amd.com)
202 points by doener on Jan 11, 2024 | hide | past | favorite | 236 comments


Not a great headline when the distinguishing feature of these chips is building a complete system without a dGPU. You can make an all-around solid computer with these, like the well-regarded new 13" Framework laptops but in a desktop, but they're not "extreme" or "gaming" devices the way buyers see those things.

Perhaps there just isn't enough of a market, but ASRock made a series of nice little machines they called DeskMinis with 5x5in motherboards that were a bit larger than NUCs, but in the extra space could fit a socketed 65W chip and quiet low-profile cooler (NUCs I've used were noisy under load), two full-length NVMe M.2 disks, and two 2.5" SATA drives.

An updated AM5 version could potentially ditch SATA to be smaller, since there isn't a huge premium on NVMe capacity now. I wonder if they could also handle some non-G chips, since those now have very basic iGPUs (though power and/or chipset requirements could still limit that).

One issue is that AM5 is still the relatively expensive option; AMD is still adding new AM4 5xxx SKUs because there's still seemingly a market for them. The DeskMini systems could be economical, and if a similar AM5 system couldn't be, due to the mobo itself needing to be pricy or the 8xxxG + DDR5 SODIMM cost, maybe it's hard to sell enough of them to get it to make sense. Still, it's a fun idea, and if you're listening, ASRock, I can promise you'd sell at least one.


Maybe they mean “extreme gaming,” for example if you are going to game while jumping out of a helicopter or something you can’t have a dGPU slowing you down.


Sounds like next-stage capitalism: feel the immersion of Mario jumps due to actual free fall.


It's how you avoid VR nausea from the disconnect between your eyes and your inner ear.


The ASRock DeskMini X600 was presented one week ago. ASRock blamed AMD for the lack of affordable mainboards delaying it. https://www.asrock.com/news/index.asp?iD=5353


Oh, thank you! Looks like they did keep the DeskMini very much like the A300/X300, including keeping SATA.

Some other stories like https://videocardz.com/newz/asrock-announces-deskmeet-deskmi... suggest the DeskMini and DeskMeet can take a non-G 7000 CPU up to 65W; the 12-core 7900 would fit, though there are some substantial tradeoffs to doing that.


> Not a great headline when the distinguishing feature of these chips is building a complete system without a dGPU.

Any reason to think integrated graphics will be the future of top-end gaming hardware?


I do not really expect that, but it's possible to have higher-performance iGPUs than PCs have now; the bandwidth of two memory channels is a big constraint here. Apple's M3 Pro and Max appear to use 3 to 8(!) channels (and of course a lot of die area) to achieve good GPU performance.
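
To put rough numbers on the bandwidth point (back-of-envelope only; the Apple figure assumes a 512-bit LPDDR5-6400 bus for the Max-class parts, and real-world bandwidth is lower than peak):

    # peak theoretical bandwidth: transfer rate (MT/s) x bus width
    def peak_bandwidth_gbs(mt_per_s, bus_bits):
        return mt_per_s * (bus_bits / 8) / 1000

    am5_dual_channel = peak_bandwidth_gbs(6000, 128)  # 2 x 64-bit DDR5-6000
    m3_max_class     = peak_bandwidth_gbs(6400, 512)  # ~512-bit LPDDR5-6400

    print(f"AM5 dual channel: ~{am5_dual_channel:.0f} GB/s")  # ~96 GB/s
    print(f"M3 Max class:     ~{m3_max_class:.0f} GB/s")      # ~410 GB/s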

More channels may not even be physically possible in AM5. And economically, any big push to build a super-iGPU has to compete with simply not taping out a bigger APU and letting folks who want lots of oomph go discrete.

The other thing that can maybe help is cache. AMD uses cache dies in their datacenter APUs (very unlike laptop APUs) and of course their dGPUs. Way back, Intel also made a product with eDRAM to boost effective memory bandwidth (Crystalwell). For an integrated chip, that could also act as an "L4" for the CPU and help with its bandwidth needs, though the impact there will be lower than for the GPU.


Might be that AMD wants most gamers on these so they can sell their discrete GPU wares to the AI folks at double the revenue.

Graphically, all the games market has been chasing is diminishing returns in terms of pumping out rasterized polygons. 4k gaming is kind of stupid.

Now the real time raytracing they are starting to do is interesting, but I noticed that has faded from the headlines as NVidia's stock exploded due to AI coprocessors.


These aren't Zen 5 CPUs, they're Zen 4 chips with better integrated graphics (which gamers likely won't care about anyway) and some AI accelerators (which probably won't see wide adoption for at least another year).


Well, look at the Steam hardware survey; you're going to be surprised.


People playing on laptops and the steamdeck? All legit uses for the iGPU (I have one myself)

https://store.steampowered.com/hwsurvey/videocard/


I think they meant people playing on really old dedicated GPUs, which are likely slower than the 8700G, to the point where an 8700G might be a good replacement if their setup breaks or similar.


That's obviously not-legit. /s


The 5700G had for some time been one of the best options for budget gaming.

The 8700G should be seen as a successor to that, I think.

Enough to play a huge amount of games nicely under 1080p.

But in a certain way at the lowest end of desktop gaming.

Still better than the low-mid range of gaming on a laptop, tho.

But for a lot of people that is exactly the right balance of what they get to what they pay. Whether that's because they don't have a lot of money or they don't do a lot of gaming and don't want to waste money.

The main "problem" the 8700G might run into is I guess that the cost of AM5 motherboards and RAM.

Also, AMD (and I think Intel, too) has in recent years slowly worked to take advantage of integrated graphics even if you also have dedicated graphics (from the same vendor). If that sees some more improvements, the 8700G could also become interesting in some use cases you wouldn't expect today. We will see.


A quick check puts a Ryzen 5 5600 and a Radeon RX 6600 at $360 combined, where the 8700G is set to $330. And an RX 6600 will deliver around double the graphics performance. So even without factoring in motherboard and memory, the 8700G is hard to justify for a cheap gaming rig.


Source:

https://www.newegg.com/amd-ryzen-5-5600-ryzen-5-5000-series/... $150

https://www.newegg.com/p/pl?N=100007709%20601394871&Order=1 starts at $210 indeed.

People might argue the 5600 is Zen 3 where the 8700G is Zen 4.


The 8600G is $299 and comes with stock cooler.

And 8500G is $176 also with stock cooler.

At least if the info in this interview with someone from AMD holds up until market release: https://www.youtube.com/watch?v=GNrOY9YCRSs (2min in)


Wrong prices. Anyway, you'd get even lower graphics performance. The 8700G is 12 CUs, and those two models come with 8 and 4 CUs, making them even less compelling alternatives to a dedicated graphics card.


> wrong price

the 8600G seems to be $229 instead of $299, which seems to have been a mistake by the interviewer in the video

the 8500G seems to be $179 instead of $176, which was a typo on my side

Like OP mentioned they have only 8 and 4 CUs respectively.

In comparison, the Steam Deck has 8 (RDNA2) CUs, so while you can't really call an 8600G a "desktop gaming" APU, it still can somewhat run a lot of games, enough for many very casual gamers.

For comparison, the minimal GPUs in the 7x00X CPUs have 2 (RDNA2) CUs, which is still enough for most office PCs and can still run stuff like Dead Cells at 1080p reasonably well. Now Dead Cells is highly optimized, but what I'm trying to say is: if you get yourself a cheap office PC, or an upgrade of one, and want to sometimes play some simpler older games, an 8500G can still be all you need.


$30 more ignores cost of electricity.

That combination uses more than a little extra power, relative to the APU.


Generally speaking, you can set the power target of a high-spec GPU below 100% to get most of the performance at significantly reduced power draw. So if you're cost sensitive it's still probably better to get a dGPU, just cap the power.

The dGPUs have more cores and wider memory buses/dedicated memory which delivers higher performance than the space constrained integrated GPUs.
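
If anyone wants to try that, here's a minimal sketch of capping the power limit from a script; it assumes an NVIDIA card with the nvidia-smi CLI installed and admin rights (AMD cards expose a similar knob via rocm-smi):

    # cap the board power limit of GPU 0 (hypothetical 120 W target on a bigger card)
    import subprocess

    def set_power_limit_watts(watts, gpu_index=0):
        # "nvidia-smi -pl" sets the power limit; needs admin rights
        subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)], check=True)

    set_power_limit_watts(120)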


A discrete GPU can be added later (upgrade path), while the CPU side is definitely "fast enough".


I used to play WoW on a Llano APU. Plenty of gamers are addicted to something like that and just want a serviceable cheap machine. Pretty sure my current 3070 cost more than the entire computer I was killing Deathwing with.


I played healer so I could just point my camera at the floor zoomed in as far as possible and suffer the least bad performance possible. And scramble to get back in-game after guaranteed disconnects on big events, e.g. every time Nefarian spawned a wave of adds. What a struggle.


Is that extreme gaming though?


Maybe extreme in the other direction lol


Do these AI accelerators even support common implementations for LLM inference or stable diffusion models?

Last time I checked, AMD's support was terrible.


AMD's support is terrible, if you believe their documentation. ROCm/hip works just fine (well, with Torch - Tensorflow is shamelessly an NVIDIA shill) on many "unsupported" GPUs if you enable an override envar.
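
For reference, a minimal sketch of what that looks like with a ROCm build of PyTorch (the GFX version to spoof depends on the card; 10.3.0 is the usual pick for RDNA2-class parts):

    import os
    # must be set before the HIP runtime initializes, i.e. before importing torch
    os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

    import torch  # ROCm builds reuse the torch.cuda namespace

    print(torch.version.hip)          # non-None on a ROCm build
    print(torch.cuda.is_available())  # True if the override worked
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))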


To add a bit more context: I remember reading somewhere (may have been in the Phoronix forum) from an official AMD engineer that "supported" means validated and actively tested in their pipeline, while "unsupported" generally just means "we do minimal or no testing on these, they should work and we don't explicitly prevent them from working, but we don't guarantee anything" (at least when it comes to same die/gen cards).

In the same post they also wrote that they are gonna look at integrating more of those "unsupported" cards into their suite.

Honestly, I hope they change their wording for this to something like "validated", "supported" and "unsupported" with actual explanations of what each of these means (fully tested, works theoretically, does not work even theoretically).

Edit: I actually found the post I was talking about https://www.phoronix.com/forums/forum/linux-graphics-x-org-d...


ROCm is also incredibly fragile and buggy at the best of times, so anything not actively tested by them stands a good chance of not working. Hell, I remember that a while back people were having problems with machine learning code giving garbage results on one of the few consumer GPUs that was officially listed as supported, and AMD eventually replied to the bug report declaring that actually, no, they weren't going to support it and they'd remove it from the list rather than try and fix the issue. I think this was back when newer consumer GPUs were genuinely unsupported, as in the code simply wasn't there, too. Integrated GPUs have also always had a lot of problems.

There's also questionable OS compatibility. ROCm is Linux-based and has extremely limited and rather experimental Windows support. Their fancy new neural processing unit is Windows-only, tied in with a Microsoft framework, and they don't seem to have any kind of definite plan for supporting it elsewhere. So there's quite possibly no single OS where all the hardware that theoretically could be used for machine learning and AI in these chips actually has even vaguely functioning code that works on that OS to make it work that way.


> Their fancy new neural processing unit is Windows-only, tied in with a Microsoft framework, and they don't seem to have any kind of definite plan for supporting it elsewhere. So there's quite possibly no single OS where all the hardware that theoretically could be used for machine learning and AI in these chips actually has even vaguely functioning code that works on that OS to make it work that way.

What they did was embed their Vitis IP in their CPUs. Those IP modules have been available in ASIC accelerator cards from AMD or their Xilinx FPGA offerings for a while now and have Linux support.

So they are releasing these APUs (which are primarily focused at a windows audience) with Windows support via ONNX. And then once there are chips in circulation, AMD can get Linux Vitis support for these chips set up with members of the Linux community being able to review and validate those patchsets.


Tensorflow is open source on GitHub. What’s stopping AMD engineers from contributing with Pull Requests? A casual search through the open PRs shows nothing of interest being submitted by team red.


AMD is not at the edge of bankruptcy anymore.

They should be able to afford testers, doing both manual and automated testing on a wide range of cards.

ROCm's "supported hardware" list is small to the point of concerning.

More cards also means more testers outside of AMD itself.


Does ROCm even work on XDNA? I don't think it does?


ah yes, the good old HSA_OVERRIDE_GFX_VERSION=10.3.0 switcheroo


I'm guessing no, as these newly announced processors are the first with those accelerators.

> AMD is also bringing the power of a dedicated AI neural processing unit (NPU) to desktop PC processors for the first time with the introduction of Ryzen™ AI


They're merely the first desktop processors from AMD with these NPUs. This is last year's laptop silicon repackaged for their desktop socket. They also re-branded the laptop version with new model numbers, because laptop OEMs basically demand new model numbers on an annual cadence. This year their marketing for these chips is heavily emphasizing the NPUs, because they actually have some tooling for them now, but they were basically dead weight when the silicon first shipped.


as long as PyTorch supports it, does it matter?


for as long as I'll need to recompile PyTorch for it to adequately support my card, yes, it matters.


The problem there is with PyTorch. They've got bugs in their AMD build flow that people keep pointing out to them, but because it's people with consumer cards which are "unsupported" (which just means AMD won't personally solve your problems with those cards, not that they aren't intended to work) they close the issues / PRs without fixing the bug.

AMD's wording isn't helping either, of course.


Extreme integrated GPUs? Marketing department going wild


They are enough to run the Ally, and they're closely related to the one in the Steam Deck. So maybe they're extreme in the segment of integrated graphics??


They did not say which of the extremes.


They're just trying to compete with Intel Extreme Graphics from 2002. Must be they're reaching performance parity.

https://en.wikipedia.org/wiki/List_of_Intel_graphics_process...


Parity? AMD integrated GPUs have outperformed Intel's for many years.


Only APUs like the 5700G, 8700G or the recent Steam Deck APU (among reasonable comparison targets).

The integrated GPUs of the 7000 series (not APUs) are maximally minimalist, to the point I think they couldn't have shrunk them more without running into unusual issues (assuming no change to the architecture/design). They are only suited for web browsing, office use-cases and debugging (but then they were also made only for that use-case). Well, I guess Dead Cells and similar "low requirement/highly optimized" games still run nicely on it at 1080p.

Still nice to have them if you only have such use-cases; it saves a ton of money.


APU is a marketing term. The GPUs in them are still integrated GPUs. Intel didn't have anything that held a candle to them until the Iris branded parts, and even those lagged behind.

> integrated GPUs of the 7000 series (not APUs) are maximally minimalist, to the point I think they couldn't have shrunk them more

Right, because these are meant as a backup display output in parts that are aimed at the gaming market, where they will almost always be paired with a GPU. Comparable parts in prior Ryzen generations didn't even have an iGPU. Intel offers you the choice (no iGPU in the "KF" SKUs), which is nice.


> APU is a marketing term.

sure, the point is that not all integrated graphics are better, only the ones that try to be, which happen to be marketed as APUs


> which gamers likely won't care about anyway

Seriously? I'm through with buying expensive space heaters. These CPUs are 65 W, so I might upgrade the CPU in my x86 box to one of them and use the integrated graphics.

And my main entertainment is games.


Steam Deck users would disagree about integrated graphics.

Besides, APUs have really gotten good for gaming, unless you push the resolution high.


I think they were hinting at the fact that the iGPU is more interesting to users of the mobile CPUs (Steam Deck/laptops), since this is a desktop CPU, and desktops usually have dedicated graphics.


Dedicated AMD graphics starts at around 100 €.


>they're Zen 4 chips with better integrated graphics (which gamers likely won't care about anyway)

Integrated GPUs are becoming far more appealing to the general consumer, including (especially?) gamers, with how fucking expensive discrete GPUs are getting these days (AMD is part of that problem).

We might just witness dGPUs becoming the sound cards of this decade.


Extremely unlikely. The types of GPUs these compete against don't cost a lot of money.


If you want to buy new (which I don't think is unreasonable), there's not much in the market that doesn't cost a lot of money.

Maybe a Radeon 6500XT or an Arc A380; a GeForce 1650 is still kind of a lot of money, and you're looking at a GeForce 1030 if you want something for not a lot of money. But why buy a GeForce 1030, unless you really just need something low profile.


>Radeon 6500XT

... has similar performance to the 480/580, or to what's expected from these APUs.

The APU is cheaper overall, uses less power (even cheaper!), and has a clear upgrade path (add beefy discrete GPU or otherwise replace with newer APU).


An RTX 4060 (with a pitiful 8GB vram) is about $300. Something a bit more practical like an RTX 4070 (with 12GB vram) is about $550.

That is bullshit expensive. It's still nowhere near as bad as during the cryptomining craze, but remember a GTX 1080 MSRP'd for about $430 back in the old days. An RTX 4080 for context is around $1,500 to $2,000 right now.

If I'm just looking to game and only have a reasonable budget like most people, I'll just grab some AMD APU and be done with it. Better yet just go and buy a console or game on my phone.


How do the best integrated graphics compare performance-wise though? If they are way behind in performance, you might be able to buy a cheap 2GB VRAM GPU for less than a cutting edge integrated graphics CPU.


The best integrated graphics are about equivalent to a 1650. This assumes that the machine also has decently speedy RAM.


> How do the best integrated graphics compare performance-wise though

A bunch of handhelds (Steam Deck, Asus ROG Ally) that can comfortably run even AAA games at 30fps+ on small screens (as an example, Red Dead Redemption 2 runs at 50fps on my LCD Deck), despite battery and thermal limitations, would imply that desktop versions would be totally acceptable, unless you're looking at 4K on the latest big titles.


> you might be able to buy a cheap 2GB VRAM GPU

Lol?

It's another ~$100 that would not yield you any performance gains.

The older 5700G was ~$300, so it was hardly a part of a budget build, but the Ryzen 5 5500GT is $125 (and would probably drop below $100 in half a year), would clearly offer at least 30fps at Full HD[0], and you would buy it anyway because it's a CPU.

https://www.tomshardware.com/reviews/amd-ryzen-5-5600g-revie...


>and you would buy it anyway because it's a CPU.

This is the part that needs to be emphasized more: the iGPU is practically free because it comes with the CPU we're buying anyway.

It's already hard enough to compete with free, but on top of that dGPUs are asking for Keksimus Maximus monies. The value proposition for gamers, let alone most consumer users, very strongly favors iGPUs.


Nothing is free. That CPU would have been cheaper if it didn't include an iGPU.

Free means you aren't going to pay anything.


Current difference between AMD Ryzen 5 5600 OEM and AMD Ryzen 5 5600G OEM near me is less than $8.


I said practically free.


You can buy remanufactured RX580 8GB from aliexpress for $65. For $100 you can get a used Vega56 or a GTX 1070.


More appropriate comparison point would be something like Arc A580 which retails around $180 and is already dramatically faster than this igpu.


>An RTX 4060 (with a pitiful 8GB vram) is about $300.

Remember that AMD Radeon 7600XT (16GB vram, faster than 4060) is a dramatically better option than that, at $330.


Personally, I do not even acknowledge AMD exists because:

* Their naming scheme is undiluted nonsense.

* Their software is jank, which makes their hardware also jank.

* I don't have time for jank anymore; I'm an ancient geezer in his mid-thirties, not a teenager with acne and far too much time I need to waste.

* They aren't providing any meaningful competition to Nvidia nor Intel with regards to pricing.


They work in Linux.


Your last sentence clearly shows you are not a PC gamer, which is fine, but you cannot use the opinion of a person who is not a PC gamer to speak about... PC gamers.


Lots of PC gamers are very happy with "slightly better than Steam Deck" levels of performance.


Yo! I tolerate relatively poor performance out of my laptop (a several year old model with a Ryzen and iGPU) while having a fairly kickass home desktop[0]. So chalk me in that group.

[0]Which currently sees me playing a whole bunch of Rule the Waves 3..... poor underutilized 3080.


Don't worry, the 3080 Ti in my Kickass Desktop(tm) also spends most of its gaming time playing Princess Connect! Re:Dive and Uma Musume. :V


These are some awesome games with low hardware requirements.

Let's also add Genshin Impact and Honkai Star Rail to the list.


The only reason I didn't list Fate/Grand Order (Japan) is because there is no Windows client. This also means most of my gaming time and money spent is, in fact, not on Windows but on Android. :V


Similarly about the idolm@ster.


Hit the bricks, weaboos. This is strictly a Rimworld and Everspace rig now, sanctioned by yours truly.


>weaboos

Based in Tokyo, actually.

I hope you're not flying anything you cannot afford to lose in Eve Online.


I'm more of a sim fan, and my wallet thanks me for it. I do still feel profoundly bad for my friends who grew into playing Star Citizen though... hope they're making smart financial decisions in the new year (lmao)


I can rest easy knowing my spacewhaling on Fate Grand Order is still a better use of entertainment monies than Star Citizen.

...Probably.

:V


In a relativistic sense, spending money on fake space insurance and gacha pulls is better than an alcohol addiction or a hospital bill. In a pragmatic sense, the opportunity cost of investing in space travel simulation and softcore catgirl pornography is the progress towards real world space travel and actual catgirls.

When humanity eventually collapses in on itself due to resource exhaustion or outright hubris, our heavenly benefactors will zoom into our tech tree and say "Damn! They were just about to unlock interstellar travel and goodwill towards all mankind. What the hell were they speccing into earlier?"


Train Simulator, Train Sim World 4 and Trainz all seem to be resource-hungry.

So does x-plane.

Of course, Rimworld, Prison Architect and such are fortunately lightweight.


> Rimworld ... fortunately lightweight.

That depends entirely upon how deep into the modding rabbit hole you go.

<Typed on my iPhone while Rimworld loads through a couple hundred mods>


My man


PC gamer as a Reddit stereotypical le gaming masterrace? Because most gamers I know don’t give a shit about this stuff. They care about games.


The 4060 is really bad though. I would guess that a 12100F + A750 or 6600 XT would be faster than the 8700G with its "extreme" graphics.


And using the quite expensive AM5 platform.


While these aren't "extreme PC gaming" by any stretch of the imagination, I love how powerful these modern integrated graphics are.

These RDNA iGPUs and the Meteor Lake Arc graphics are going to be the baseline that a lot of kids with hand-me-down laptops are going to end up with.

And when they inevitably go onto whatever replaces Reddit in the next 5 years and ask "can I play X on this", a lot of the time the answer will be "yes, kind of".

I believe these integrated graphics may do more for PC gaming than all those $2000 graphics cards ever did.

They'll make PC gaming accessible to more people, who have been priced out of the market for years now.


Yes, exactly! These new integrated graphics chips are great for 1080p 60hz for the games kids actually play, like Minecraft, Roblox, Fortnite, CSGO, etc.

Integrated graphics will be the default in the future, with discrete GPUs used only by niche gamers and AI developers.

Lowest common denominator consoles like the Series S and Steam Deck are also contributing to this push as gamedevs are forced to optimize for integrated AMD graphics.


Now that is some out-of-touch thinking. The GPU industry is only going to expand more into the AI area, but the gaming area is going to keep making money for the same exact reason that it does now: players bragging, being envious, and the good old FOMO. Ain't no discrete graphics on a CPU going to make a dent in it, until they start selling them with marketing like "it comes with RTX 4080 included".


Disagree.

We are reaching a point where these integrated graphics are "good enough". Something that wasn't possible before.

It is going to change the gaming landscape.


This will still exist for sure, but for a lot of the gaming demographic, buying high end GPUs is simply not realistic anymore.


Unfortunately lots of games, especially indie ones, don't optimize for IGPUs at all.

I had a 13500K, and I've played many not-so-demanding games where I'd expect at least 30fps at the lowest video settings, while in reality I got something like 5fps.


I think this is mostly because iGPUs were quite bad (even if they would have been good enough for some low-end indie games).

But that perception has shifted over the past few generations, especially on the AMD side, but increasingly also on the Intel side (and given the new Arc integrated graphics perform similarly to and sometimes even better than the current top end 780M, this trend should continue).

Things like GPU scarcity and increasing prices have obviously also contributed to this.

The pipelines of developing a videogame are quite long, so we are only now seeing releases of a lot of titles that have been developed against these new realities. And especially for indie titles, they don't see a lot of optimization after release, since the studios typically don't have the budget / staff to improve existing titles and work on new ones at the same time.


Maybe for older titles. But in the last 2 years Indies have been forced to optimize for iGPUs because of the Steam Deck.


>who have been priced out of the market for years now.

Looking in the market for a PC rig (used), and I'm wondering about the state of used GPUs. I recall the consensus years ago was that used GPUs are useless since they were being (over)used for crypto mining -- but I wonder a) if this argument is still valid now and b) shouldn't that substantially reduce GPU costs?


The argument was never particularly valid in the first place: https://www.youtube.com/watch?v=UFytB3bb1P8


I think you’re partially right. However, big AAA engines will want all that $2000 worth of GPU if they have it available. The difference could be Cyberpunk 2077 on RDNA vs Cyberpunk 2077 on RTX.

I think we’ll see more and more RTX style GPU architectures become integrated as NVidia et al push AI cards on us next year. It won’t be crazy to see 512GB GPU ram with 64GPU cores in the next 5-10 years.


Having just recently built a Ryzen 7 7800X3D CPU based desktop, I was worried I had bought just before a new line came out, but from my reading of the press release, none of these CPUs are considered a replacement for that chip. I think?


AMD is fleshing out the $100 to $250 market with this announcement.

Top end $300+ chips (like 7800x3d) are safe.

---------

I'm seriously considering the $250 5700x3d for my nieces, or maybe something cheaper. They don't need the latest-and-greatest PC. But having something "pretty good" (and the 5700x3d will be pretty good) is surely going to be appreciated.


It's a bit funny how the 5800x3d was creeping up on the 7xxx series during its launch.


That is my CPU. A great purchase in hindsight.

Still great against Zen4, just not against an x3d Zen4.


I really want AVX512 on my AMD system (aka: 7xxx series) for maximum irony. So my upgrade will probably be that 7xxx series, probably 7800x3d.

This was a feature I thought would appear on mainstream Intel systems first for the longest time. But no, Intel decided to leave it on higher-end Xeons only...


I can't use my x3d anymore (my family does now), as I moved and left it behind.

But I plan on zen5 x3d (at a minimum), or Zen6 if I can hold for that long.

Zen+ laptop and some old board with 4790k cover the gap. I'll probably get a RX 7600XT for that haswell.


No, the "extreme gaming" line in the press release must have been chosen by someone that have no relation to AMD products and were just told "it does gaming without a GPU pretty good". Which is true, it does do that.

These are less important CPUs in this lifecycle though; usually the G series are pretty good for a small computer where you don't want a dedicated tiny GPU, but all but one of the Ryzen 7000 desktop CPUs already had integrated graphics (though with much fewer CUs).


Literally did the same last month but heard rumblings about what was coming and felt much better. The 7800x3d is really awesome though.


Same with the Ryzen 9 7900X3D. I got a System76 machine, and it is really good. Nvidia card for GPU / local inference.


The 7900X3D is by far the worst value AMD currently offers. It has only 6 cores per CCX, but two of them, so not only do you get the scheduling issues of the 7950X3D where sometimes games don't use the X3D CCX, you also only get 6 cores if it works. And you get a "7600X3D" (not a real thing) if you disable the other CCX, something 7950X3D owners would sometimes do to benchmark the scheduling difference to get what's essentially a 7800X3D.


This is the reason why I went with the 7950X and not the 3D variant. Too much additional work involved to get the other cores to play well.

Also, I read that only one set of 6 cores has full access to the cache while the other doesn't, or has only partial access.


The interesting thing was that in testing it turned out that having 3D cache for all cores didn't increase performance. The sweet spot for heat/performance was for half.

Granted, I don't think I'm running anything so CPU intensive that I'll be able to tell. For gaming, I was more interested in the DDR5 memory than in worrying about the L3 cache. For everything else I do (including running LLMs and compilers) it's beyond adequate.

For a mere $20K I could have had an overall slower gaming experience but be able to locally run things like the Goliath 120B model, but that seemed like a poor trade-off.


The cache is on a single CCX, yes. Which is why the 7800X3D is so good: There is no second CCX.

Another big advantage is that to get them to work, they need to use higher bin chips. CCX overheats easily because the stacked cache acts like a heat shield, so you get the most power efficient ones.


The disadvantage of single CCD is only half of available write memory bandwidth (using at least 1 computing thread on each CCD) [1].

[1] 16 B/cycle write vs 32 B/cycle read per CCD https://www.servethehome.com/amd-ryzen-7000-series-platform-...
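
A quick back-of-envelope using those per-cycle figures (assuming an FCLK around 2 GHz):

    # per-CCD Infinity Fabric link throughput at an assumed ~2.0 GHz FCLK
    fclk_ghz = 2.0
    read_gbs_per_ccd  = 32 * fclk_ghz   # 32 B/cycle read  -> ~64 GB/s
    write_gbs_per_ccd = 16 * fclk_ghz   # 16 B/cycle write -> ~32 GB/s
    print(read_gbs_per_ccd, write_gbs_per_ccd)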


I'd wager the number of workloads that'd actually take advantage of operating on entirely different data on a per-thread basis is much, much lower than ones that benefit from massive L3 cache.

And note that the infinity fabric and memory controller don't run on the same clock. The fabric tops out at ~2200 (though testing shows sweet spot at 2033 usually) and the memory controller usually tops out at ~3100 (DDR5-6200).


Those might not necessarily be 2 threads of one program, but 2 independent programs.


Zen5 is meant to be released this year.

Just not yet.


> “AMD continues to lead the AI hardware revolution by offering the broadest portfolio of processors with dedicated AI engines in the x86 market,” said Jack Huynh, senior vice president and general manager, Computing and Graphics Group at AMD.

I can't believe anyone actually said this with a straight face.


It's not an inaccurate statement... I think that Intel's AI engine (NPU) is only available in one very-recently-released mobile processor, correct? AMD has had their dedicated AI on-chip thing for a year or more on multiple lines of CPU's.


Sure, but it's pretty clear that Nvidia "leads the AI hardware revolution", if such a thing exists.


That's why they diplomatically added "in the x86 market" :)


In what sense though? AMD hardware is very comparable. Unless you call Nvidia's lock-in through CUDA "leading".


There are only two (and a half) companies in the x86 market. They shipped their Phoenix processors with an inference accelerator a year ago. And Intel only now is shipping their equivalent.


Not an expert, so excuse me if this is obvious, but would these integrated graphics be any good for NLP? A GPU with 24GB of video memory costs $2000; you can put one of these in a system with 128 GB or 256 GB of DDR4 or DDR5 and give your neural network training software over 100GB of video memory if you want.

You only have 12 CUs, 768 shading units, 48 texture mapping units, and 32 ROPs, but huge amounts of cheap memory. I'm not sure where the bottleneck is, but at least it won't crash and burn if you ask it to start a neural network training routine that requires 100 GB of RAM, and you don't have to take out a second mortgage for a video card with the requisite amount of graphics memory.


They’re good from the “do it at home” perspective, not from the business or enterprise performance perspective.

One of the ways folks do this now is use the Mac M* chips, since they have so much combined RAM. The raw performance isn’t as high as GPUs, but they can fit substantially larger models in memory.


The bottleneck would most certainly be memory, as you'll quickly overwhelm the on-die cache, without careful optimization.

That said, I think AMD's chiplet strategy might come into play. I could see AMD release a 4 core 8 thread processor with increased on die cache and other chiplets being neural compute units.
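
For token generation specifically (inference rather than training), a rough rule of thumb is that every generated token has to stream roughly all of the model's weights from memory once, so bandwidth sets a hard ceiling; the numbers below are illustrative assumptions, not measurements:

    def max_tokens_per_sec(bandwidth_gb_s, model_size_gb):
        # upper bound when memory-bandwidth-bound: weights streamed once per token
        return bandwidth_gb_s / model_size_gb

    dual_channel_ddr5 = 90.0   # ~ optimistic GB/s an AM5 iGPU can pull from system RAM
    model_70b_q4      = 40.0   # ~ GB for a 70B-parameter model at 4-bit quantization

    print(f"~{max_tokens_per_sec(dual_channel_ddr5, model_70b_q4):.1f} tok/s ceiling")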


People keep reiterating this, but in practice one needs compute and bandwidth, especially outside of tiny context test prompts. On my 4900HS, mlc-llm vulkan is far faster than CPU inference on the same memory bus, with less cache, which wouldn't be the case if it was bandwidth/cache bound (since the CPU has far more cache as well).

My 7800X3D has 96MB of L3 and a golden-bin DDR5 overclock, but it's absolutely dreadful for inference.


I don't disagree here, but the new chips here have special neural compute units, and specifically he's talking about models larger than 24GB.


They're slow, but OK for inference.

In practice no one uses AMD/Intel IGPs because no one knows about the mlc-llm vulkan backend. llama.cpp is en vogue on the desktop, which does not support IGPs outside of Apple, and otherwise people use backends targeted at server GPUs.


The press release has a strange title for new chips targeted at lower-end PCs (those without a dedicated GPU).


>> The press release has a strange title for new chips targeted at lower-end PCs

It's a pretty high-end CPU though, and the graphics support AV1 encode/decode. This would be perfect for software development, video production, or pretty much anything but high-end gaming.


True, I should have said "low-end gaming PCs"


By the looks of how deep they had to dig to get frame rates to show off, this chip struggles at almost any 3D game that's been made in the last few years. It's equivalent to a six year old mid-tier discrete GPU.


> the fastest integrated desktop processor graphics in the world

I'm pretty sure the Mac Pro is a desktop. I'm pretty sure the M2 Ultra configuration includes an integrated graphics processor. And I'm pretty sure it's faster than the 780M in almost every workload.

AMD marketing team needs to be a bit less wrong. At least add a "PC" caveat.


Yeah, this "article" is terrible. "users can expect immense power and dominant performance for intensive workloads including gaming and content creation."


Right, the topic "Extreme PC Gaming" - PC - PC - PC - yeah, just for you, spelled again: PC - is extremely misleading - next time they should include "Not Overpriced" so that people like you know their beloved hardware isn't meant.


It's in certain spots, but not in the text where they make that claim, nor in the footnote. And in any case it's only the best by the technicality of Apple selling desktop personal computers that aren't of IBM PC lineage.


You know Mac is a specific brand of PC, right?


A Mac is a brand of personal computer. The acronym PC comes originally from the IBM PC, and then from IBM-compatible PCs, and is generally understood not to include Macs.


The PC term predates IBM and was also used for the Apple II, Commodore, etc era "micro"computers.


Apple's advertising said otherwise.


Kind of a misleading title. It's not next gen processors (you'd expect Zen 5). It's still Zen 4.


It's the next generation of AMD G desktop processors with significant graphics. (The earlier Zen4 chips have graphics, but not significant graphics)

The last gen of those was Zen3, 5000G series. These are now Zen4 and also RDNA3 instead of Vega. So new generations in cpu and gpu.


Is it actually misleading? AMD never claims that their consumer branding relates to uArch. AMD still has Zen 2 in their 'modern' lineups. Intel also has previous generation uArch in current SKUs, and let's not talk about the 10->11 and 12->13 gen "refreshes".

Maybe the technically inclined of us would expect Zen 5 because we're keeping note of CPU uArches but to the general public they won't care. They will just see it's the biggest numbered AMD chip and expect it to be their best offering - which for integrated graphics on the desktop it is.


I'd expect next generation to imply new architecture.


Doesn't it have a GPU architecture never released into desktop CPUs?


Very misleading.

As I have said before, HN needs a "misleading title" flag.


This looks perfect to me. I have a big desktop, but I also want a tiny NUC-like PC with a decent integrated graphics for playing with Linux and trying gaming on Linux - without having to have a full sized PC. I also don't want to bother dual booting, or running hypervisors with passthroughs.


Folks confused here about the word "extreme" and seeing an iGPU might be missing that Steam Deck exists, and the market for handhelds is huge.

Of course, you won't be putting a 65W CPU into a Steam Deck, but if it scales down nicely, it's amazing news. The Steam Deck already enables playing the vast majority of current gen games (it was my Baldur's Gate 3 machine when BG3 came out); jumping in power so that I can enjoy some better graphics would be excellent.


Sadly, even the detailed specs don't tell how much unified memory (UMA) can be used. Would be quite relevant for running LLMs on the iGPU (like on Mx Macs). https://www.amd.com/en/product/14066 (AMD SmartAccess Memory: Yes)


> New AMD Ryzen™ 5000 Series Desktop Processors Bring More Performance to Legacy Socket AM4 Platforms

Wait, what? New chips for the old platform? I wonder what the driving factor even is to compel them to produce these?


Motherboard prices have increased pretty significantly. First comment on this hardforum post, entitled "AMD and Intel motherboard prices skyrocket past surging inflation rates thanks to 35-40% ASP increases"[1]:

"Motherboard prices is why I haven't upgraded my CPU. More important things going on right now."

[1]https://hardforum.com/threads/amd-and-intel-motherboard-pric...


AM5 mobos are more expensive than Threadripper x399 mobos when they were new...


Hope AM5 is the last such obsolete 2-channel memory platform. Was it 2004-2005 when we got 2-channel memory with single-core or the first dual-core CPUs? Now we have 16 cores with... still 2-channel memory. Looking forward to a 4-channel AM6 socket.

Memory bound software like OpenFOAM saturates memory bandwidth already on 6-8 cores with 2-channel memory [1]. So actually we should have gotten 4-channel memory with the first 16-core CPUs long ago.

[1] https://www.cfd-online.com/Forums/hardware/198378-openfoam-b...


> Looking forward to 4-channel AM6 socket.

More like AM5r2, AM5 has a few ID pins reserved.


Eh. I don't think it's that bad.

Something is always the bottleneck. On these 2-channel modern CPUs, it's usually the concurrency of the software; sometimes the throughput of the CPU cores; and very occasionally memory bandwidth. That sounds like a reasonably balanced system, with the right amount of resources committed to each area.


> Something is always the bottleneck.

Insightful thought.


> Wait, what? New chips for the old platform? I wonder what the driving factor even is to compel them to produce these?

A lot of people are already using AM4 socket and might not want to do a socket upgrade, but want a new CPU.

Also, I seem to remember AMD made a commitment before/at AM4 launch that they would support it for at least N years, but don't remember the details about that. Maybe that could be related?


Nope, AMD promised to support AM4 socket until 2020.

AM4 came out in 2016; supporting it with new CPUs for over 7 years is unheard of in the PC industry.


And it earns them a lot of good will, and sales.


I might buy one for one of my old desktops that's still kicking around as a server. 16 cores isn't something to shake a stick at!


These new models don't include a 16 core part though?


>7 years is unheard of in the PC industry

LOL. Intel's LGA-775, which spanned support for CPUs from the single core Pentium 4, to the dual core Core-2-Duo, to the quad core Core-2-Quad, would like to have a word with you.

And 7 years back in those days was equivalent to an eternity in today's tech progress.

https://en.wikipedia.org/wiki/LGA_775


Not sure if the mainboard chipsets initially released with the Pentium 4 supported the later released Core 2 Quads... Which is actually the case for AM4, where even the cheapest, oldest chipset (A320) can support the newest AM4 CPU, as long as the motherboard maker provides a BIOS update.


Sure, but the technological leaps from Pentium 4 to Core 2 Duo and then to Quad that Intel made in those days over 15 years ago, were massive enough to justify the limitation of the same chipset from the Pentium 4 era not supporting the later multi-core Core CPUs, compared to the Ryzen 2-3-4 jumps that AMD made in a similar timeframe which aren't as radically different to each other.


And since then has an Intel socket supported more than one revolution of a tick-tock CPU generation?

Genuine question because I've now built several desktops in the Intel 'Core iX' era and have never truly had an opportunity to reuse a mobo.


Of course not, but I was only pointing out that 7 years of socket support is not unheard of as grandparent claimed.


> LGA-775 which span support for CPUs from single core Pentium 4, to dual core Core-2-Duo, to quad core Core-2-Quad

Mechanically inserting and actually running is a different things.


Kind of - early socket 775 motherboards don't support later Core CPUs.


Same chips, different binning. Zero design effort and almost zero QA effort went into those new products, and it lets them fine tune their margins.


AM5 is a more expensive upgrade, not only are the motherboards more expensive, they also take DDR5 and the overall performance improvement is not so meaningful. So, for many people who already have an AM4 PC, AM5 is not yet a justifiable upgrade.

There may also be some hesitance since while AMD technically did support AM4 for longer than the promised duration, they tried to pull out of it halfway through and overall it was kind of a mess for a while in terms of compatibility with chipset revision.


They are produced on an older processing node than the AM5 lineup. So if AMD switched to only produce AM5 CPUs and Radeon 7000 graphics chips, the old factories would lack something worthwhile to produce. Therefore AMD can negotiate much lower rates for using those older facilities, making it viable to continue production as long as the products sell. There aren't any truly new chips, it is just new binning and marketing of the old models.


I suspect these old fabs will keep going, and make chipsets instead of the main chip, at some point.


AMD historically supported their sockets a lot longer than Intel's two years. Recently they moved to AM5 and only had a single generation on a threadripper socket. This greatly angered tech media and consumers, despite it still being better than Intel support.

Perhaps releasing new SKUs on the "old" platform is a way to bump up the numbers even more: "see we supported AM4 for 5+ generations!"


>> New chips for the old platform? I wonder what the driving factor even is to compel them to produce these?

I might upgrade my 2400G, but I'd want 8 cores in 65W which is sadly not an option.


> I might upgrade my 2400G, but I'd want 8 cores in 65W which is sadly not an option.

5700G is 8 cores, with graphics, 65w tdp, released in 2021. This release also includes the 5700 (no letters) with no graphics at 65w tdp. Looks like 5700X is also 65w tdp? Also, you can usually set a 65w power limit on a cpu with a higher tdp and get most of the performance.


It is actually an option: just activate the eco mode on your favorite 8 core SKU.


Milking their existing old node fab contracts until they are no longer useful.


What extreme gaming when most discrete GPUs will going to beat them?


I posted that link 3 days ago but nobody commented ... how these things work at HN?


If you consider "Extreme PC Gaming" to be AAA games at 1080p on low settings, sure.

https://www.anandtech.com/show/21208/amd-unveils-ryzen-8000g...


For a desktop without an additional $200-$500 graphics card this is impressive.


Why can't we just acknowledge that "extreme gaming" in a rather absurd term to describe the APU in question? The fact that the performance doesn't suck as much as it used to doesn't make it less absurd.

Extreme gaming is close to maxing out currently available performance oriented hardware regardless the cost so unless those APUs can produce results comparable to top of the line CPU paired with something like RTX4090, it doesn't belong with "extreme gaming" in any way.


This type of language is used everywhere in the gaming hardware segment. I think it is kinda tacky, but it has to be working if the entire industry adopted it.


Intel's 2nd generation UMA GPUs, which were branded "Intel Extreme Graphics", were likewise far from extreme.


I just sold my old GTX 1050 Ti for $35, and it vastly outperforms Vega 11.

iGPUs are useful, they just aren't great value for budget gaming PCs (unless we're talking super-duper low budget, where a $50 used GPU is really going to break the bank). HTPC/NAS that's going to run Jackbox at 4K occasionally? Great use case for an R5 with an iGPU.


Why are you comparing to Vega? These new chips have RDNA iGPUs.

While I agree that buying a used GPU will be probably net you better performance per dollar, the 1050Ti is significantly slower than the 780M and even 760M and uses significantly more power.

Some benchmarks can be seen here https://m.youtube.com/watch?v=n1Cjnep8j-o


A couple reasons: Vega 11 is the best iGPU on a desktop CPU that you can actually buy right now. Preliminary benchmarks on engineering samples are fun to ogle at, but aren't necessarily an accurate representation of the product that will actually land on your local Microcenter's shelves. Then there's the estimated MSRP - only the 8500G's $180 price tag fits the "budget gaming PC" description, and even that's pushing it. You can also buy a better GPU than a 1050Ti for less than $50 used.


That price range might have been accurate 7 years ago, but modern graphics cards for gaming range from $300 to $1000 with the highest level pushing $2000 thanks to the AI craze. The higher prices make quality iGPUs all the more relevent - especially since the "gamer" demographic continues growing and many of the most popular games are not particularly demanding.


Platforms, especially AM5, are very expensive too. Ryzen 8000 is far slower than even the slowest new discrete GPU.

The real value option is buying an older platform (AM4) and a used discrete GPU, from back when prices were more sane.


... if electricity was free.

Power efficiency wise, a discrete GPU won't compare favorably.


Dunno if thats true.

Assume mid range 200w gpu. Assume 20 eurocents per kWh (I live in the northern europe and pay about as much on a flexible contract). Assume a decent PSU. A year of gaming (assume, very optimistically, 365hrs) is circa 17 eur for the electricity. Considering relative perf differences, I'd say a discrete GPU compares favourably.


65W TDP for CPU + GPU in power-constrained environments, or when you want to have less battery storage requirements for renewables/backup power, is quite the boon.

I had a 5700G (I think) last year and the system would go up to 150W max. I could survive about 2-3h of no electricity with my small inverter battery setup if I kept usage minimal (no work, no gaming). Handy performance wise for a mostly work machine with some indie gaming on the side, but these 8000 series are under half that wattage.

Admittedly this isn't the most common need/concern, but it's good to know options are getting better all the time.


It would be more useful to compare idle power, APU vs chiplet-based CPU + discrete GPU combo.


A used graphics card good enough to play 1080p games on low settings would not be very expensive at all.


I actually bought an RX 580 8 GB new for 139 EUR last year (most likely sat around some warehouse in an unopened box for years) - it has no issues running most indie games or even the decently optimized AAA games at 1080p, which was an upgrade over my previous RX 570 4 GB.

The only actual issues I ran into were when I tried running it for PCVR (Quest 2) because the latest drivers actually cause crashes and instability, although I could fix that by rolling back to either the 2020 or 2022 drivers: https://news.ycombinator.com/item?id=38870500

Sometimes hardware from generations ago is not only decently affordable, but also sufficient for what you want to do, as long as your standards aren't super high. Getting some of that hardware with no previous owners also feels a bit interesting, albeit I've also bought used CPUs off of AliExpress for my homelab servers with no issues either.


You can find used 1080ti's for like $100 nowadays. I still use one for 1440p gaming and get excellent frames with most games. Kinda mind blowing value


100% 1080Ti is insane value, I have two of them and zero complaints, the 11 gigs of vram make it great for inference as well.


this GPU was so good at the time that it started cannibalizing sales of higher end data center cards, and Nvidia had to put terms in the EULA for the drivers to try to legally prevent this.


Aren't these the remnants of the Bitcoin gold-rush age?


Please tell me where I can get a used 1080 ti (that works) for $100.



Wow, at that price they even threw in the zip tie?


That's a 1080 (8GB), not a 1080 Ti (11GB).

But, I believe you that they're out there.


Not really, since the APUs themselves are historically quite expensive.


That hasn’t been true for the Zen based models at least. The 5700G and now the 8700G occupy the $350-370 price slot. My comparable 13th gen Intel CPU with lesser graphics was the same price. That’s a great value for a casual gamer.

I’m sure at some point we’ll see Intel based APU equivalents on desktop with some teeth. Intel needs to solve their driver issues first.


> That’s a great value for a casual gamer.

It depends. If you're truly performance insensitive and have to buy new, maybe, but these APUs are extremely slow compared to even older, low end discrete GPUs.

AM5 is an expensive platform. The total cost of a DDR4 platform is peanuts in comparison.


The cpus are currently significantly more expensive, yes, but is the AM5 platform that much more expensive than AM4?

On AM4, pcpartpicker says I can get an A520M board for $70, and 2x16GB DDR4-3600 CL18 for $55; total $125

On AM5, an A620M board is $75, and 2x16GB DDR5-5600 starts at $76; total $151.

The real question is where the Microcenter bundle pricing will end up, if you live near a Microcenter.


Mmm, A620 has come down considerably since I last checked. That's good.


63 fps in Cyberpunk 2077, which when it came out was "unplayable except on the most powerful PCs", is incredibly impressive without a GPU.

This is pretty close to what my 2070 GPU does, which cost me $400+ a couple years ago and uses 215W. My CPU also uses 100W, so about 300W compared to 65W for very roughly similar performance (in some games) is still pretty incredible.

Now GPUs are almost twice that for the xx70 and xx80 cards. I don't know what market this is aimed at, but this is very impressive for an APU. There's a pretty strong budget PC gamer community that could benefit from this. There are a lot of people who can't afford gaming PCs anymore, and this could be a big seller to the budget community. Also, at a 65W TDP, power supply, fan, and ventilation costs will be low, so they can be sold with cheap and modest cases and PSUs.

I'm not sure if these chips translate into laptops, but a laptop that games well is always desirable in the gaming market.


> This is pretty close to my 2070 GPU does

Not even remotely close. It's equivalent to an RX570 or 580, which is roughly 1050 territory. Your 2070 is equivalent roughly to a 1080, plus raytracing.

> Cyberpunk when it came out [..] unplayable but on the most powerful PCs is incredibly impressive without a GPU.

The game has seen numerous patches in the last three years since it was released that have significantly increased its performance.


>It's equivalent to an RX570 or 580

RX 6500, in more recent parlance.

I would however wait for third party reviews on these SoCs. They promise performance that's simply unheard of for an APU. Better to be skeptical than not.


It's a side effect of the laptop efforts... Getting a price and margin that makes sense. Using the same tech in desktop form makes sense for a lot of people. A $600 or so desktop that can game albeit at lower settings is pretty impressive these days.


> 63 fps on Cyberpunk 2077 which when it came out was "unplayable but on the most powerful PCs" is incredibly impressive without a GPU.

Cyberpunk got a ton of (performance) fixes after release, so not exactly relevant.


Cyberpunk scaled pretty well on CPUs and had a lot of graphical options. Digital foundry covered it pretty well


On AMD APUs, 65W thermal design power roughly corresponds to 85W electrical.


For an upgrade to a PC which fits, PSU and all, into a tiny case like this one:

https://store.nfc-systems.com/products/skyreach-4-tiny

with purely vanilla off-the-shelf parts - I definitely do. I will take these APUs every day of the week and so would millions of people for whom this is the new extreme in their PC system configuration space.


If this fits in a mini pc that would be pretty nice. A NUC sized box that plays cyberpunk, not bad


Pretty close to where a lot of the higher end mini PCs land... It's not a bad thing.


The minisforums designs (like UM790 Pro etc) are getting increasingly impressive. They have had a bumpy road but they really have finally figured out getting LM to work in a production product. You can't open it but you don't have to - everything that's user-accessible is in a service bay. UM790 Pro is perpetually ~$519-539 or w/e, and they also have a newer version with an oculink in addition to the 2x usb4/thunderbolt ports.

[LM: liquid metal, gallium-indium alloy with high conductivity but liquid at room temperature, and conductive, which makes it tedious and exacting to apply etc, but it's the only way to keep pushing thermal density upwards to that degree. PS5 used it as well, and that's part of how they pushed form factor down a ton in a fairly high-TDP product.]

Minisforum also had a laptop CPU on a mITX board which frankly is also an underexplored niche; why is that one chinese company (edit: erying) the only one putting tiger lake onto a socket etc? Atoms etc aren't the only way that low-end market can be tapped, and big chipsets with tons of IO don't make sense on that kind of product anyway. Even the ryzen desktop chips provide a "SOC platform" that can do USB, ethernet, and a few other things iirc - it's used in things like the DeskMini/Deskmeet X300 series, look at the addons they need, it does pull the mobo cost way down.

(AMD does not allow X300 to be used in this way: X300 mobos cannot be sold standalone; they have to be integrated into a product which is licensed separately, so AMD doesn't lose the bundled sale. Imagine if those firewall appliances etc. could run a 5700X3D or 7940HS instead of a 1165G7 or 1125G4 or whatever. That would be awful.)

Long-term support is not great (things fall off pretty quickly from what I've heard), and ideally I'd also like to see them support ECC (the 7940HS supports it) to open up the homelab market a little more. But they're doing these really zany, powerful things: they crammed a 7940HS into a NUC form factor and routed the IO out efficiently for people who want to expand, etc. They're cooking, and as the OCuLink, MCIO, and other breakout ecosystems start to mature, things are going to get interesting for the NUC market. They're already interesting for the GENOAD8X-2T and so on, too. People are realizing the need for this to be more modular than PCIe edge cards/ATX allow in a regular tower form factor. The children yearn for PCIe lanes; they just don't know why or how.

https://www.youtube.com/watch?v=1nu-GKq58Og

https://www.youtube.com/watch?v=5IfArhUdAm8

(This can be extended to an interleaved network of 5 nodes, where with only 2 links per node you are tolerant to at most 1 node failing yet need only 2 hops to reach any other node; see the connectivity sketch after these links.) https://i.imgur.com/i56PdsC.png

https://www.youtube.com/watch?v=O08LG64fvqI

https://www.servethehome.com/asrock-rack-genoad8x-2t-bcm-rev...
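That topology claim is easy to sanity-check. Here's a minimal sketch, assuming a plain 5-node ring with two links per node (one possible reading of the linked diagram); it verifies that every node is within 2 hops of every other node and that the survivors stay connected after any single node failure:

    from itertools import combinations

    # Hypothetical 5-node ring: each node links to exactly two neighbors.
    nodes = range(5)
    edges = {frozenset((i, (i + 1) % 5)) for i in nodes}

    def hops(src, dst, alive):
        """BFS hop count between two live nodes, or None if unreachable."""
        frontier, seen, dist = {src}, {src}, 0
        while frontier:
            if dst in frontier:
                return dist
            frontier = {n for f in frontier for n in alive
                        if frozenset((f, n)) in edges and n not in seen}
            seen |= frontier
            dist += 1
        return None

    # Full network: every pair of nodes is within 2 hops (diameter 2).
    assert all(hops(a, b, set(nodes)) <= 2 for a, b in combinations(nodes, 2))

    # Fault tolerance: with any single node down, the rest stay connected.
    for down in nodes:
        alive = set(nodes) - {down}
        assert all(hops(a, b, alive) is not None
                   for a, b in combinations(alive, 2))

(With two nodes down the ring can split, which is why the tolerance is "no more than 1" failure.)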


> X300 mobos cannot be sold standalone

Doesn't seem to be an issue for the X600. You may have seen this review [1]. The AM5D4ID-2T/BCM isn't cheap, but it doesn't have a real chipset either.

[1] https://www.servethehome.com/asrock-rack-am5d4id-2t-bcm-revi...


Yeah, I did see that. Another round of psychedelics for the boys at ASRock; what an absolute crackhead design team, in the absolute best way. The X99E-ITX [0][1][2] and X299E-ITX [3][4][5] were both masterpieces in their own way too, and they've been one of the teams cooking the hardest with both Intel and AMD. Supermicro is there too, but a lot more conservative at times, except [6] (look at the USB, and... the audio...).

Good point, I hadn't considered that board. I do hope they loosen up, but a high-end server-market board is not quite the same thing either.

[0] https://www.tweaktown.com/image.php?image=https://static.twe...

[1] https://www.tweaktown.com/image.php?image=https://static.twe...

[2] https://www.tweaktown.com/image.php?image=https://static.twe...

[3] https://www.tweaktown.com/image.php?image=https://static.twe...

[4] https://www.tweaktown.com/image.php?image=https://static.twe...

[5] https://www.tweaktown.com/image.php?image=https://static.twe...

[6] https://www.supermicro.com/en/products/motherboard/x13sae-f

(Please read the reviews from the site whose effort I've leeched off forever; it's actual old-school, good review content.)

https://www.tweaktown.com/reviews/7112/asrock-x99e-itx-ac-mi...

https://www.tweaktown.com/reviews/8404/asrock-x299e-itx-ac-m...

The ASRock Rack series is really, really good now, and I think both their AM4 and SP3 platforms are really appealing. The ROMED8-2T is everything a motherboard should be. Except for SFP+, of course. ASRock Rack's design teams have a good pulse on what the prosumer/homelab/SOHO market wants; they've been pumping out a ton of great designs lately.

https://www.asrockrack.com/general/productdetail.asp?Model=R...


It's the average frame rate, too.

The 99th percentile (or even the minimum) would be far more interesting.
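To illustrate why: here's a toy sketch with made-up frame times (not benchmark data), showing how a run can post a decent-looking average while the percentile metrics expose severe stutter:

    # Toy data (assumption): 95 smooth frames plus 5 big hitches.
    frame_times_ms = [14.0] * 95 + [120.0] * 5

    # Average fps over the whole run.
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)

    # 99th-percentile frame time: the slowest ~1% of frames; its inverse is
    # roughly what reviewers report as the "1% low" frame rate.
    worst_1pct_ms = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms))]
    low_1pct_fps = 1000 / worst_1pct_ms

    print(f"average: {avg_fps:.0f} fps, 1% low: {low_1pct_fps:.0f} fps")
    # -> average: 52 fps, 1% low: 8 fps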


I'm pretty sure this product line targets people who can't afford a new GPU and aren't aware of the used hardware market. A 60fps average with arbitrarily low 1%-low fps is totally acceptable for this segment.

Recall that a lot of games ran at 30fps in the PS3 era. Back then people would unironically say "the human eye can't see 60fps". Even today, a lot of gamers have never experienced low latency.


I'd agree if AMD's title didn't explicitly say "Extreme PC Gaming"

Extreme PC Gaming in 2024 is not 60fps at 1080p, it wasn't even in 2018


Precisely. This is for when used hardware is not an option.


I just do regular pc gaming - am I allowed to buy these EXTREME processors?


Compared to consoles, that is extreme


It's six-year-old mid-tier performance.

AMD marketing had to dig pretty deep to find decent numbers... 4-5-year-old games. Metro Exodus? Shadow of the Tomb Raider? GTA 5? And some random kid's game nobody has ever heard of?

Given the choices are "buggy as hell" (Intel), "space heater and slightly buggy" (AMD discrete), and "good but stupidly overpriced" (Nvidia), a fourth option for the low end of the market is welcome.

Edit: since I'm being downvoted for claiming the games listed aren't relevant: go look at Steam Charts. GTA 5 and Dota are the only top-25 games; Cyberpunk is #26. The rest aren't even top-100 games.


Kid's game? Tiny Tina/Borderlands is a huge franchise in gaming. I think it's clear these two games were chosen because the cel-shaded style they use is less GPU-demanding, but it's still impressive. And note that things like Cyberpunk 2077 are there too. These are all popular games. I don't think it's the dishonest ploy you're making it out to be.

My 2070 barely handles those games at that fps.

Yes, those are older games because this APU is not going to play modern AAA titles at 4K, but it can handle some pretty hefty games fairly well and might be tempting to budget gamers, especially when mid-tier cards start at $500-600 nowadays.


There are 2,000 people playing Tiny Tina on Steam.

There are a million people playing CS2. Fortnite sees about 2.6 million and peaks at eleven million.

The Finals has 70,000+ people playing.

Tiny Tina has about 1/5th of the player count it would need to be in the top 100. So yes, AMD marketing was pretty fucking desperate when they listed that game.


> There are 2,000 people playing Tiny Tina on Steam.

The game is cross-platform and available on PS4/PS5, Xbox Series X/S, Xbox One, Steam, and the Epic Store. And on top of that, it's a paid game. I'm not even aware of what this game is, but I am aware of the Borderlands franchise, and those are quality games.

Why you would compare its player numbers to top F2P games is beyond me.


That game selection isn't an accident. They're all extremely popular and/or have a benchmark built-in.

Look at any third-party review and you'll find a very similar selection.


With the exception of GTA 5 and Dota 2, none of the games they list are even in the top 50 on Steam Charts.

"and/or have a benchmark built in" doing a whole lot of lifting...


And League of Legends and World of Tanks. Why would you expect single-player games to top the charts? Especially when they're not new.


Crysis was used in benchmarks for how long, exactly? And it wasn't like it was a blockbuster either...

Maybe you should make an actual performance argument instead of a popularity contest.



