
[–]AMD_Botbodeboop[M] [score hidden] stickied comment (3 children)

This post has been flaired as a rumor, please take all rumors with a grain of salt.

[–]vectralsouli7 2600K @ 5GHz | GTX 1080 | 32GB DDR3 1600 CL9 | HAF X | 850W[S] 301 points302 points  (94 children)

TL;DR:

23% faster ST compared to the 5800X and 38% faster than the 5700X;

26% faster MT compared to the 5800X and 39% faster than the 5700X;

19% faster ST compared to the 5950X;

About the same ST as a 12900K;

Important note: this is an ES chip.

Edit: whoops, meant to write ES chip, not ES sample... lol. Thanks, reddit!

[–]wingdingbeautiful 170 points171 points  (34 children)

Engineering Sample Sample lol.

[–]FarAtmosphere 53 points54 points  (15 children)

LED diode.

[–]RedChldRyzen 5900X | RTX 3080 34 points35 points  (14 children)

RAID array

[–]patrulek 40 points41 points  (12 children)

RAM memory

[–]RenderBender_Uranus 29 points30 points  (9 children)

Hold my CPU processor

[–]HorruxR9 5950X - Radeon RX 6750 XT[🍰] 20 points21 points  (8 children)

What about my GPU processing unit?

[–]Pashta_Sauce 14 points15 points  (3 children)

Time to unplug my PSU unit…

[–]laudamodi 4 points5 points  (2 children)

On the pcie express

[–]opmopadop 3 points4 points  (1 child)

At least it's newer than the ISA architecture.

[–]TheDonnARK 2 points3 points  (2 children)

You can buy those after a trip to the ATM machine.

[–]Thepommiesmademedoit 1 point2 points  (1 child)

As long as you can remember your PIN Number of course :)

[–]paganisrockR5 1600& R9 290, Proud owner of 7 7870s, 3 7850s, and a 270X. 13 points14 points  (0 children)

LCD display

[–]choosewisely564 2 points3 points  (0 children)

HIV Virus

[–]alevyish 34 points35 points  (11 children)

Idk don’t know what you’re talking about.

[–]TTechnologyAMD 26 points27 points  (0 children)

SMH my head

[–]opmopadop 1 point2 points  (0 children)

That reminds me of an SMS message..

[–]LiamWRyzen 7 5800X | RX 580 2 points3 points  (0 children)

I mean, it’s a sample benchmark of an engineering sample chip…

So ES Sample could be technically correct, or, you know, the best kind of correct.

[–]cuartas15 56 points57 points  (3 children)

That makes the 5700X look way slower than it actually is vs the 5800X when they're practically identical

[–]Seanspeed 34 points35 points  (1 child)

Well, Cinebench has never been properly representative of gaming performance or anything. Something many of us have been trying to scream at people while AMD vs Intel fanboys kept using these scores to bash each other over the head, depending on who was beating who at the time. lol

[–]sql-journeyman 4 points5 points  (0 children)

Cherry picking will always happen.

Cinebench is just one of the 4-5 tests a reviewer needs to run, and then you compare the results of 3-4 reviewers for an overall aggregate understanding.

With that being said, 99% of people ain't got time for that; those who care will offload it to reviewers to generalize their results, and those who don't will pick a random benchmark and run with it.

At the end of the day, comparing something like a 7700X to a 5700X for a round-about idea of the generational uplift is fine, I think, as long as it's apples to apples.

[–][deleted] 1 point2 points  (0 children)

The ccx latency can make a difference in games and stuff as well, not really much different than the 5800x3d gains by having a lot more cache.

Edit: wait that was the 3700 vs 3800 iirc

[–]Tricky-Row-9699 42 points43 points  (40 children)

As it stands, this is still 12% slower than the 12700K in multicore. Even if AMD ramps this thing up to 5GHz all-core, they still only match the 12700K, a chip which the 13600K should convincingly beat. (These engineering samples also don’t convincingly beat Alder Lake in ST yet.)

I bet AMD knows this too, which is why they’re planning to make it $299.

[–]PantZerman855800X3D, 3600CL16 DR B-die, 6900XT Red Devil 53 points54 points  (20 children)

12700K has 12 cores / 20 threads (8 performance cores with HT + 4 efficient cores without HT) while the 7700X has 8 / 16 so no surprise it might be beaten in MT.

[–]Photonic_Resonance 5 points6 points  (13 children)

Intel has Performance and Efficiency cores now, so I’m not sure you can just make a blanket comparison between them like that. Especially when the R7 and i7 are still intended to compete with each other (I believe this is still true)

Edit: I guess the e-core Intel CPUs sorta fit in between the Ryzen tiers. Maybe that’s how I should be thinking about the hierarchy now, huh.

[–]lioncat555600X | 16GB 3600 | RTX 3080 | 550W 21 points22 points  (3 children)

There is a point to be made that more cores are more cores. Some tasks don't inherently care about the core design too much. There are reasons why 128-core ARM server chips exist.

It would be interesting to see how certain multi core benchmarks score on the performance cores vs the efficiency cores.

[–]Photonic_Resonance 4 points5 points  (2 children)

There's actually been some testing done on this with the 12th gen Intels! Higher CPU frequency led to a greater disparity between the P-cores and the E-cores. If I'm remembering correctly, it was something like 1 P-core could be equivalent to anywhere between 1.2-1.7 E-cores depending on frequency and specific workload. This led me to assume a 2:3 ratio (2 P-cores ≈ 3 E-cores) for 12th gen as my "back-of-the-napkin math" reference, which I've been using since I originally looked into it.

I'm very curious to see what Intel's 13th generation multicore benchmarks will look like. I wonder if it'll follow a similar ratio, and if OS changes since 12th gen have made a difference since I last checked.

[–]lioncat555600X | 16GB 3600 | RTX 3080 | 550W 1 point2 points  (1 child)

If you remember, were the P and E cores at the same frequency, or was the P core clocked higher than the E cores?

E.g. both at 1GHz they are very similar, but both at 3GHz the P core pulled ahead. Or the E core was at 3GHz while the P core was at 4.

Roughly 2P to 3E at the same clock speeds would be quite intriguing.

[–]Photonic_Resonance 1 point2 points  (0 children)

Anandtech referring to single-core performance between core types:

"In the aggregate scores, an E-core is roughly 54-64% of a P-core, however this percentage can go as high as 65-73%. Given the die size differences between the two microarchitectures, and the fact that in multi-threaded scenarios the P-cores would normally have to clock down anyway because of power limits, it’s pretty evident how Intel’s setup with efficiency and density cores allows for much higher performance within a given die size and power envelope."

Anandtech referring to the multi-core performance:

"Focusing on the DDR5 set, the 8 E-cores are able to provide around 52-55% of the performance of 8 P-cores without SMT, and 47-51% of the P-cores with SMT. At first glance this could be argued that the 8P+8E setup can be somewhat similar to a 12P setup in MT performance, however the combined performance of both clusters only raises the MT scores by respectively 25% in the integer suite, and 5% in the FP suite, as we are hitting near package power limits… and there’s diminishing returns... given the shared L3 [cache]"

Extrapolating maybe too much from this article, it sounds like the E core can hit around 3/4 of a P core at a similar frequency, but the P core's higher frequency makes the gap larger. On the other hand, the E core probably can't clock higher because it's designed for efficiency, so all that extra power dedicated to the P cores is being put to use boosting the frequency.

It's also very interesting to see how the main bottlenecks in multi-core performance with the big.LITTLE core format are the power constraints and the shared CPU cache. The article also mentions how the increased bandwidth of DDR5 actually makes a noticeable difference in multi-core performance when the shared cache is oversaturated.

Source: https://www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity
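The ratio math above, as a quick Python sketch (the `effective_p_cores` helper is just something I made up; the ~0.53 E-core ratio is from the AnandTech MT figures quoted above, so treat it as back-of-the-napkin only):

```python
# Express a hybrid P+E core config as an equivalent count of P-cores,
# using the rough "an E-core is ~52-55% of a P-core in MT" figure.
# Illustration only; real scaling hits power and shared-L3 limits.

def effective_p_cores(p_cores: int, e_cores: int, e_ratio: float = 0.53) -> float:
    """Weight E-cores by their approximate MT throughput vs a P-core."""
    return p_cores + e_cores * e_ratio

# A 12900K-style 8P+8E layout vs a hypothetical 12P part:
hybrid = effective_p_cores(8, 8)  # 8 + 8 * 0.53 = 12.24
print(f"8P+8E ~ {hybrid:.2f} P-core equivalents")
```

Which lines up with AnandTech's "somewhat similar to a 12P setup" remark, before the power and cache bottlenecks kick in.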

[–]premell 5 points6 points  (7 children)

You don't mean to say it's misleading to call the 12700 a 12 core chip?

[–]looncrazFlare this. 4 points5 points  (1 child)

It's more of a 12-core chip than the FX-8150 was an 8-core chip, and I consider that to be 8 cores.

AMD used to call their APUs with 4 CUs 8-core APUs... and that's also not technically wrong.

[–]premell 3 points4 points  (0 children)

That was before my time, but I would have called them out on it then as well. I don't have that big of a problem with companies doing it, because it's marketing. But when people present misleading data to serve their purpose, that irks me.

[–]Photonic_Resonance 4 points5 points  (4 children)

It’s technically correct, but it seems misleading to me. Intel e-cores are weaker than p-cores, but not every Intel processor has e-cores. That’s why I would label it as having 8p+4e cores/20 threads. Especially when making a cross-comparison with Ryzen, where every core is essentially a “p” core.

It’s also just my opinion. I only think it’s “misleading” because not everyone on Reddit has the same background knowledge about CPUs (p and e cores are a new concept still), so more accurate descriptions prevent misconceptions. I don’t think it’s malicious. The original comment I replied to actually went and added the p/e-core split, which is cool.

[–]capn_hector 4 points5 points  (1 child)

It’s technically correct, but it seems misleading to me. Intel e-cores are weaker than p-cores, but not every Intel processor has e-cores.

Well, if the e-cores are stronger than a Bulldozer, surely most AMD fans would consider that a "core"? ;)

"Cores" doesn't say anything about performance, or having a FPU! A Bonnell atom from 2008 or whatever is still a core, even if they suck! ;)

And, well, if we're getting into "derating cores" by their performance... aren't some people going to be owners of some mighty fine 4C 1800Xs? ;)

[–]Artoriuz 6 points7 points  (1 child)

It's not really new, mobile SoCs have had big.LITTLE for almost a decade now.

[–]Photonic_Resonance 2 points3 points  (0 children)

It's new in the desktop consumer space, which is the context I was referring to. Even for desktop enthusiasts the idea is a couple years old at this point, but for an average consumer who looks into computer tech once or twice a decade it's a new concept.

But yeah, I'm actually very happy that Intel is taking a cue from the mobile space and trying the big.LITTLE format. I'm very fond of efficient tech, and the big.LITTLE format (or specialized cores on CPUs/GPUs/SoCs) really does seem to be the current method for noticeable growth.

[–]jedimindtriks 1 point2 points  (0 children)

Always go for price, not config. Which is the better $299 CPU?

[–]SirActionhaHAA 4 points5 points  (4 children)

That depends on how well it'd do in gaming. Alder Lake's gaming perf ain't scaling with the ST score in benches such as Cinebench. The ST improvement for the 12900K over the 5900X is 20% and 23% in R20 and R23, but the gaming performance is just 12% ahead on average. It means that only about 60% of the "ST" improvement translates to gaming performance for Alder Lake.

The biggest question is how Zen 4 is gonna scale in gaming. It could be neck and neck with Raptor Lake, but ya gotta remember that the V-Cache SKU is a thing and is rumored to launch real soon after Zen 4.

MT perf used to be important during Kaby Lake and Coffee Lake times, when the i7s had 4 cores / 8 threads. Nowadays anything above 8 cores is kinda excessive for like 90% of the market.

Intel is hurting from selling at its current prices; we're seeing its margins fall from almost 60% over a year ago to 45% last quarter. How much more financial pressure can Intel stand if we assume the 13900K remains at the 12900K price despite being 25+% larger in area, on the back of a client market that's shrinking at a rate of 15+%?
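That "60%" figure, spelled out in Python (the `gaming_scaling` helper name is mine; the 20%/12% uplifts are the ones cited in the comment, not anything I measured):

```python
# What fraction of a CPU's ST-benchmark uplift actually shows up
# in average gaming performance, per the comment's numbers.

def gaming_scaling(st_uplift: float, gaming_uplift: float) -> float:
    """Ratio of gaming gain to single-threaded benchmark gain."""
    return gaming_uplift / st_uplift

# 12900K vs 5900X: ~20% ST uplift (R20), ~12% average gaming uplift
print(f"{gaming_scaling(0.20, 0.12):.0%} of the ST gain reached gaming")
```

Against the R23 figure (23%) it's closer to 52%, so "about 60%" is the optimistic end of the range.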

[–]Tricky-Row-9699 6 points7 points  (1 child)

It really does seem like that, doesn’t it? I think the gaming performance of Alder Lake not scaling with single-threading might be down to the ring bus having to run at the E-core clock instead of the P-core clock, along with some regressions in core-to-core latency that the Rocket Lake chips had still haunting Intel.

This does make me wonder, though, if AMD might somehow pull out a win this gen by the skin of their teeth because Intel raised prices.

[–]SirActionhaHAA 6 points7 points  (0 children)

It's gonna hurt either way because falling behind in both datacenter and mobile (let's be real here phoenix is gonna destroy raptorlake mobile) means that intel's got no other markets to increase prices in. They can take a further margin decrease or increase prices but intel promised investors that it'd increase dividends in the coming quarters

[–]joescalon 1 point2 points  (0 children)

These benchmarks would be after the frequency increase which is rumored at 5.2ghz?

[–]TT_207 6 points7 points  (2 children)

I'm happy to sacrifice a small amount of multicore to stick with big-big architecture. I've got no interest in moving to W11.

Saying that though, I'm hoping they don't include Pluton on Ryzen 7000. If they do, I don't really know what I'm going to do aside from sticking with the same architecture for a real long time, as I don't really like the TPM philosophy and related features.

[–]Guinness 8 points9 points  (0 children)

Worth noting:

For instance, compared to the Ryzen 7 5800X, the Core i9-13900K is 25.5% faster in single-threaded

That’d give Intel maybe 2.5% more performance. Which means the 7950 and 7900 might edge out the 13900k? Guess we shall see.

https://www.notebookcheck.net/Intel-Core-i9-13900K-Geekbench-performance-destroys-the-AMD-Ryzen-7-5800X-and-the-Ryzen-9-5950X.642036.0.html
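Rough check of that "maybe 2.5%" estimate in Python (variable names are mine; both uplifts are quoted against the same 5800X baseline, one from the linked Geekbench article and one from this Cinebench leak, so mixing benchmarks like this is hand-wavy at best):

```python
# Both chips are quoted relative to the same 5800X baseline, so the gap
# between them is the ratio of the two uplift factors.

uplift_13900k = 1.255    # 13900K ST vs 5800X (linked Geekbench article)
uplift_7700x_es = 1.23   # 7700X ES ST vs 5800X (this Cinebench leak)

gap = uplift_13900k / uplift_7700x_es - 1
print(f"Intel ahead by ~{gap:.1%} in ST")
```

That comes out to about 2%, consistent with the comment's ballpark, with the usual caveat that these are leaked engineering-sample numbers from different benchmarks.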

[–]detectiveDollar 1 point2 points  (1 child)

Aren't the 5700X and 5800X a lot closer though?

[–][deleted] 2 points3 points  (0 children)

The difference is clock speed and default power limits. Both are 8c/16t 32MB cache Zen3 chips.

[–]andoke 76 points77 points  (16 children)

So how much compared to a 5800X3D?

[–]Crazy_Asylum 135 points136 points  (6 children)

the 5800X3D was actually lower than the 5800X in cinebench as that benchmark doesn’t leverage the v cache.

[–]andoke 16 points17 points  (0 children)

Thanks

[–]Undeluded 10 points11 points  (1 child)

The 5800X base clock is 400 MHz higher than the 5800X3D.

[–]SirMaster 2 points3 points  (2 children)

But what about compared to an X3D in a heavily single threaded gaming benchmark like an emulator?

[–]KsnNwkr7 5800X3D | RTX 4090 | 2x16GB 3600CL16 | 5TB 5 points6 points  (8 children)

Looking at the HUB video comparing CPUs in Spider-Man, with the DDR5 platform being faster by a significant amount in ray tracing...

I am just taking an educated wild guess here.

Ryzen 7000 should then be way faster in ray tracing titles than the 5800X3D, at least up until 4K. As a recent owner of a 5800X3D, with 32GB 5600CL36 being only 100€ more than 32GB 3600CL16, I am now torn.

In normal non-RT games Ryzen 7000 will be faster, and in non-RT games that utilize cache, like sim racing, performance should be about equal.

Basically, most probably the 7600X, a midrange CPU for a midrange price, will be equal in gaming performance to the 5800X3D that is sold at a premium price.

I am glad at least that I paid only 390€ for it, not release prices. Still, it looks like I overpaid a good 100€ if the 7600X + DDR5 turns out to be its equal.

[–]KaiserGSaw 9 points10 points  (2 children)

On the other hand, you already own it and can make use of that neat little X3D CPU.

Planning on buying it as well, though 500€ is a hefty price for that bugger. I wanna move into an SFF case, and the X3D pretty much strikes the best balance of horsepower and power consumption.

[–]BNSoul 5 points6 points  (3 children)

7600X + motherboard + DDR5, and then good luck selling off all your old computer parts. My 5800X3D + 3090 already max out my 1440p 144Hz monitor in pretty much every game so I couldn't care less about Zen 4... for now, yes big numbers I love innovations I want to have them all but I'm super satisfied with my current setup.

[–]NoSleep323 2 points3 points  (2 children)

Hell yeah. Planning my first ever PC build. Would I be dumb to just build an AM4 rig with a 5800X3D with a 3080ti or would it be worth it to spend extra now for AM5?

[–]BNSoul 3 points4 points  (1 child)

Depends on the performance you need from the CPU, I wouldn't buy any standard Zen 4 considering Zen 4 3D will be much better. Wait for the 5800X3D to drop a bit, what you save on motherboard and DDR5 you can put toward a high end GPU.

[–]NoSleep323 2 points3 points  (0 children)

Ya mostly just gaming and some light work stuff. Torn between the 5800X3D and the 5900X. They seem comparable and the extra cores on the 5900X should be beneficial. But ya I’ve decided to stay with AM4 for now as I don’t need the newest and best. Top it off with a 3080 and I think I’ll have my dream setup. Thanks

[–]MrWeasleR7 5800X3D | 32GB 3600Mhz | MSI RX 6800 XT 1 point2 points  (0 children)

Apparently DDR5-6000 will be the sweet spot for performance on Ryzen 7000: a 1:1 Infinity Fabric ratio at a 3000MHz memory clock, but apparently that will be the max as well. This is probably also why there's no DDR4 compatibility diminishing performance: 1:1 would be too slow and 1:2 exceeds the Infinity Fabric.
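The transfer-rate vs clock arithmetic behind that, sketched in Python (the helper is hypothetical and the 1:1 sweet spot itself is just the rumor):

```python
# DDR memory is double data rate: the actual memory clock is half the
# quoted MT/s transfer rate. A 1:1 ratio means the memory controller /
# fabric side runs at that same memory clock.

def memclk_mhz(mts: int) -> float:
    """Memory clock in MHz for a DDR module rated at `mts` MT/s."""
    return mts / 2

print(memclk_mhz(6000))  # DDR5-6000 -> 3000.0 MHz, the rumored 1:1 target
print(memclk_mhz(3600))  # DDR4-3600 -> 1800.0 MHz at 1:1, the "too slow" case
```

Which is the commenter's point: DDR4 at 1:1 leaves the fabric-side clock far below 3000MHz, and running DDR4 at 1:2 wouldn't help.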

[–]Strange-Scarcity 53 points54 points  (27 children)

Neat!

So it looks like my recent plan of upgrading to the 5800X on my old B350 motherboard, to give me another one to three years on the current system, was a good one.

My next build should be... closer to 50% faster to be worth the dollars.

[–]pittbullblue 33 points34 points  (17 children)

I just upgraded to a 5800x from a 1700 and the difference is INCREDIBLE! Absolutely a fantastic upgrade for anyone upgrading from first gen for sure. Glad they finally made the AGESA version for a lot of the older boards.

[–]Vallkyrie 12 points13 points  (6 children)

I went from a 4790k to a 5900x in March. Probably the biggest cpu jump I've ever done.

[–][deleted] 3 points4 points  (4 children)

I did the 4770k to 5800x... Great jump.

[–]midfield99 2 points3 points  (2 children)

I'm still on a 4770k. I'm going to upgrade to 7000 or Raptor Lake this year.

[–]Strange-Scarcity 2 points3 points  (3 children)

Yeah, I started with a 1700, then a 3700X and now the 5800X. It's been a really nice boost to performance for me.

[–]mistriliasysmic 2 points3 points  (2 children)

How much was the jump from the 3700X to the 5800X? I'm currently on a 3700X and looking into jumping to Zen 4, since my partner got their upgrade last (2600 to 5800X, whew). Hoping for a rough minimum baseline.

[–]sizzwald 2 points3 points  (0 children)

I just jumped from a 3700x to a 5700x. What I noticed most was the single-core performance uplift. I also upgraded my cooling solution/case at the same time, so that probably helped a bit, as I was in an SFF case and was slightly thermally limited with the 3700x. Gaming feels much smoother on my 5700x, with more consistent lows. The biggest difference it has made for me was in Adobe software, which I use frequently. My advice would be to save your money for a larger upgrade if you're happy with your performance now. I don't think it's worth it unless you're in a specific situation.

For example, my 3700x went to a close friend, as I had built him a system out of my spare parts. Though if I'm being completely transparent, I wanted him to game on PC with me and used it as an excuse to upgrade a few things in my system so I would have the 'spare parts'. He got a 3700x and a 1660 Super build out of it lol.

Also, if you don't have the disposable income I wouldn't recommend it; you will be better off saving for upgrades in a couple of years when the performance jump is significantly bigger. Like I said, in my situation I used giving my friend a system as justification. Really, the performance difference is negligible in real-world applications. I think the community often gets caught up in chasing numbers in benchmarks.

Just my 2 cents. Either way enjoy your PC! That's what it's there for!

[–]pittguy578 4 points5 points  (3 children)

I have a 5800X and a 3080 Ti. Definitely not going to upgrade to the 7700X. Unsure if I will be doing the 7900X.

[–]sparkythewildcat 1 point2 points  (2 children)

You're likely good til AM6 or close to it.

[–]Thrashinuva5800x | x570 | 6800xt 4 points5 points  (0 children)

At the rate we're going we might be able to get twice the performance on the next upgrade

[–]ondrejederAMD 304 points305 points  (127 children)

Now imagine a world where Ryzen was a flop and now we would still have 4c/8t Intel i7 as flagship consumer CPUs...

[–]dhylb 113 points114 points  (21 children)

Imagine the Athlon 64 X2 was a flop and Intel was still serving us single-core CPUs.

[–]cosine83 14 points15 points  (17 children)

I mean, single core Intel Pentium 4s had hyperthreading back in 2004. It's unlikely at that point they would've stayed at 1c/1t because of AMD, much less the Athlon X2 having any influence on Intel's trajectory in the late 2000s since they'd already brought out the Core and Core 2 (Conroe, Penryn, and Merom) line by 2008 and leading to market dominance. Unless you mean the Athlon 64 X2 (Brisbane) line that came out in 2005, which was a predecessor to the Athlon X2 (Kuma, Regor, Deneb).

[–]dhylb 13 points14 points  (13 children)

I mean the older Athlon 64 X2 days... which was the first true consumer dual core. At the time Intel was peddling single cores with 2 threads as "dual core", but that wasn't "dual core" at all. My father fell for it, got a Pentium 4 with Hyper-Threading, and it was trash. The only time it beat my AMD Athlon 64 X2 4400 was in encoding/decoding video content like DivX (that old codec used way back when). In every other task my AMD shit on his Pentium.

[–]RealThanny 9 points10 points  (0 children)

Intel never tried to pass SMT off as dual-core.

What they did try to pass off as dual-core was the Pentium D, which was actually two single-core dies on the same package. It was SMP on a single socket. Contrasted with the Athlon 64 X2, which was a single die with two processing cores, and included an on-die memory controller. The separate dies on a Pentium D had to go to the north bridge via the FSB to access memory, and had to use the FSB to communicate with each other. So despite being on the same package, the only way the two cores could communicate was down the package pins, onto the front-side bus, then back up another set of pins. That's where the original "glued-together" taunt came from. It was an accurate description of what Intel did to match AMD's actual dual-core processors.

They also later did the same trick to create the Core 2 Quad, only they used two actual dual-core dies on the same package. Same issues with memory access and inter-core communication.

[–]Big-Construction-938 1 point2 points  (0 children)

AMD gave 64-bit to the mass market. That was the biggest achievement.

Well, that and the Am386, even though it was late.

Also, thankfully, ATI gave us the 9700 Pro.

Now if only Lisa Su had joined AMD earlier; alongside Rory they may have been able to restructure AMD sooner. Imagine if we had 28nm Piledriver in 2011.

[–]Big-Construction-938 0 points1 point  (2 children)

Imagine if itanium was successful... Or ati went bankrupt in 2000

[–]jimbobjames5900X | 32GB | Asus Prime X370-Pro | Sapphire Nitro+ RX 7800 XT 0 points1 point  (1 child)

Itanium was always ripe for being undercut. It was built only for 64-bit and ran any older code in emulation mode. It was slow.

Businesses always need an off-ramp, not a chasm and a parachute.

Classic Intel, trying to just brute-force the market. AMD totally embarrassed them with x86-64.

[–]_Fony_7700X|RX 6950XT 24 points25 points  (12 children)

I think they were waiting for 10nm. The correct assumption though is pricing: 6-core CPUs would absolutely have cost $999.

[–]tpf92Ryzen 5 5600X | A750 12 points13 points  (3 children)

[–]Quackmatici5 4690K - R9 390 3 points4 points  (0 children)

To be fair accounting for inflation that's more like $500 now.

[–]AK-Briani7-2600K@5GHz | 32GB 2133 DDR3 | GTX 1080 | 4TB SSD | 50TB HDD 1 point2 points  (1 child)

Don't forget the 6C/12T i7-8086K, at about $430 in mid 2018. Loved the model number, though, and being able to officially turbo to 5GHz was a long time coming.

[–]TristinMaysisHot 1 point2 points  (0 children)

I will never forget that CPU, because I was one of the lucky people who won it from Intel when they gave like 5k of them away.

[–]vaskemaskine 1 point2 points  (0 children)

I paid almost half that for a 6 core i7 back in 2011, so maybe not.

[–]John_Doexx[🍰] 19 points20 points  (4 children)

Now imagine a world where 8c/16t CPUs were not $450 msrp… wait amd did that

[–]looncrazFlare this. 5 points6 points  (1 child)

Yup, they were $1k when Ryzen came out, AMD sliced the cost per core to less than half even after price increases.

[–]Put_It_All_On_Blck 8 points9 points  (6 children)

5820k was 6 cores for $400 in 2014...

[–]HorruxR9 5950X - Radeon RX 6750 XT[🍰] 3 points4 points  (5 children)

On a $400 motherboard, using $400 RAM. (OK maybe not exactly but the point is, it wasn't mainstream AT ALL.).

[–]raidflex 4 points5 points  (2 children)

I had a 5820K OC to 4.2GHz on a $250 board and $200 memory, so actually it was a good deal. It kept up well up until I replaced it with a 5800X.

Even with normal inflation it's still not a bad deal.

[–]DMozrain 19 points20 points  (71 children)

Intel would have had over 4 cores for "flagship consumer CPUs" regardless of Ryzen, the 10nm catastrophe is what delayed them.

[–]freddyt55555 20 points21 points  (31 children)

the 10nm catastrophe is what delayed them

Way to rewrite history. Their first mainstream 6-core flagship was the 8700K and was fabbed on 14nm++. It was released less than 6 months after Ryzen came out and only 2 quarters after the 7700K. This was clearly in response to AMD. Considering how quickly they whipped out the 8700K after the 7700K, they obviously could have released a 6-core model much earlier than they did and likely would have continued releasing 4-core mainstream flagships indefinitely had Ryzen not been as successful as it was.

[–]RealThanny 1 point2 points  (0 children)

Intel had more than four cores long before that, but charged increasingly ridiculous prices for them. I had a six-core 3930K from 2012 to 2019. I never upgraded because Intel's prices were atrocious past that for all usable platforms (i.e. not the toy "mainstream" platform which lacked enough I/O to actually use expansion cards).

[–]arandomguy111 1 point2 points  (0 children)

There's been information suggesting that Cannon Lake, which was originally supposed to succeed Skylake at 10nm around 2016 (then continually delayed along with 10nm), was planned to feature 6-core and 8-core configurations. There were job listings that hinted at up to 8 cores for Cannon Lake. Engineering samples with these configurations turned up years later as well.

10nm yields, however, were so poor that the only Cannon Lake part ever released was configured with 2 cores, and that was with most of the chip (more than half) disabled, and it still had poor yields (very low availability).

[–]Put_It_All_On_Blck 0 points1 point  (10 children)

Way to rewrite history.

Intel had a 6 core fairly priced CPU back in 2014. The 5820k, for $400.

[–]MoreFeeYouS 4 points5 points  (1 child)

Intel even had a 6 core 12 thread CPU in 2009 already with their first gen Core i7 980x. Obviously ridiculously priced because of no competition.

[–]PantZerman855800X3D, 3600CL16 DR B-die, 6900XT Red Devil 6 points7 points  (2 children)

I don't think motherboards for these were cheap.

[–]Jaidon24PS5=Top Teir AMD Support 5 points6 points  (0 children)

They weren’t B series cheap, but they weren’t the price of HEDT today either.

[–]CALL_ME_ISHMAEBYi7-5820K | RTX 3080 12GB | 144Hz 3 points4 points  (0 children)

I've been running a used one for the past 5 years!

[–]freddyt55555 2 points3 points  (2 children)

Intel had a 6 core fairly priced CPU back in 2014. The 5820k, for $400.

Fairly priced for HEDT. Re-read my post. I'm talking about mainstream desktop CPUs specifically.

[–][deleted] 5 points6 points  (1 child)

Don't hurt yourself with all the mental gymnastics.

Intel had affordable 6-core parts when AMD was still selling construction-based cores. Yet to hear you guys tell it, everything from them that wasn't 4 cores was $9999.

[–]jorgp2 -2 points-1 points  (13 children)

Way to rewrite history.

You people are the only ones trying to rewrite history.

It takes more than 6 months to get a new CPU and platform out the door, yet you people state that it can be done in a matter of weeks.

The 8700K had nothing to do with Ryzen.

[–]MoreFeeYouS 6 points7 points  (0 children)

Of course it had to do with Ryzen.

Do you think that multi billion dollar corporation like Intel had no idea what AMD was working on all those years? Do you think they found out about Ryzen when the rest of us did? Come on now.

[–][deleted] 1 point2 points  (0 children)

I concur. I mean I had both Haswell-e and Broadwell-e 6 core machines before ryzen even launched.

It's a strange reality some people insist on living in.

[–]jorgp2 11 points12 points  (35 children)

Yeah, it's just a nonsense circlejerk.

They even forget that AMD already had 8-core CPUs, and Intel's 4 cores were faster.

[–]blaktroniumAMD 12 points13 points  (22 children)

AMD never had a real 8-core consumer CPU before Ryzen. The shared integer units in the FX series were a big deal because load/store memory ops are integer ops. This barely ever got mentioned until their court case, but access to memory is generally part of what is considered a core, and the FX series topped out at 4 cores, each with 2 FPUs.

Edit: I'm ass-backwards wrong. They had shared FPUs and dual INT units, but shared a memory pipeline because of the shared L1/L2, and that's what slowed it all down.

[–]RealThanny 6 points7 points  (3 children)

You're making a hash of things.

While Bulldozer shared a significant portion of the pipeline before splitting into separate integer cores, they were separate. Load/store operations don't have anything to do with integer execution units, so I'm not sure what you're going on about there.

A four-module Bulldozer design had eight integer cores and four floating point cores. That's the reality. All attempts to describe it otherwise are revisionist nonsense.

[–]blaktroniumAMD 1 point2 points  (2 children)

That's right, I got it wrong; it was the shared L1/L2 that made the modules behave like single cores.

[–]69yuri69Intel® i5-3320M • Intel® HD Graphics 4000 5 points6 points  (0 children)

Wut? Bulldozer shared the then-very-large-FMAC-FPU between two adjacent integer cores. Sharing small and narrow integer cores would be pointless.

[–]jorgp2 2 points3 points  (5 children)

Yeah, no.

Where did you people get this nonsense from anyway?

I've even seen you state that you couldn't disable one core in a module.

[–]dhylb 2 points3 points  (10 children)

Actually, you are factually wrong. The Bulldozer line was true 8-core. Even Windows reported it as 8-core. It wasn't until many years later that Microsoft changed Windows code and it started reporting 8-core Bulldozer as 4-core.

On the court side, AMD settled, because it's easier to pay out X money now than spend 2-10x more money over time to fight it. There is no legal definition of what constitutes a core... it was a never-ending lawsuit. So AMD settled. That doesn't mean they admit guilt; it means they didn't want to waste time arguing with absolute fucking morons.

[–]blaktroniumAMD 3 points4 points  (9 children)

A CPU core that can't access memory isn't a CPU core

[–]Osbios 1 point2 points  (0 children)

Wtf does that even mean? All current AMD/Intel CPUs go through the L3 (and maybe an L4/chiplet cache line directory) before hitting a memory controller. The L3 in Intel's case even runs on a different clock domain. Are you trying to inform us all that we are currently buying 0-core Intel and 0-core AMD chunks of metal?

[–]freddyt55555 5 points6 points  (11 children)

They even forget that AMD already had 8-core CPUs, and Intel's 4 cores were faster.

This still doesn't justify their sandbagging and milking the consumer. Look how quickly they released the 8700K in response to Ryzen.

[–]jorgp2 5 points6 points  (10 children)

This still doesn't justify their sandbagging and milking the consumer. Look how quickly they released the 8700K in response to Ryzen.

They didn't release it in response to Ryzen; they released it because Cannon Lake didn't work.

And they didn't launch the 8700K quickly; just see how they had to refresh the 6700K before they had the 8700K ready.

[–]freddyt55555 4 points5 points  (8 children)

And they didn't launch the 8700K quickly

2 quarters after the 7700K isn't quickly? LOL.

[–]jorgp2 2 points3 points  (7 children)

Lol, another illiterate.

The 7700K was just a refreshed 6700K; they started working on the 8700K before the 7700K launched.

It takes more than two quarters to get from a mask to packaged chips.

Again, why do you people spout this nonsense?

[–]freddyt55555 2 points3 points  (0 children)

It takes more than two quarters to get from a mask to packaged chips.

Look, dingleberry, it doesn't matter what was a refresh of what. There's no way Intel would have released the 8700K only two quarters after the 7700K had it not been for Ryzen. Intel's Skylake-X came out the same year using the same microarchitecture as the 6000 series, so obviously they had the capability of releasing 6-core and 8-core CPUs in the Kaby Lake series too.

And nowhere did I say they rushed any of the pre-production steps for Coffee Lake because of Ryzen. For all we know, Coffee Lake could have been ready for production in Q4 of 2016 and they just held back to keep milking suckers like you by releasing Kaby Lake.

[–]Big-Construction-938 2 points3 points  (0 children)

They would have had what is now the 12600K as a $2000 i9 CPU, imo

[–]tablepennywad 2 points3 points  (0 children)

Seriously. And laptops only had 2-core i7s. That's practically false advertising. They had quad core from the Q6700 Kentsfield in 2006 alllll the way till 2017 with the 7700K Kaby Lake, until Ryzen forced their hand.

[–]John_Doexx[🍰] 1 point2 points  (0 children)

Now imagine a world where 8c/16t CPUs were not $450 msrp… wait amd did that

[–]EmilMR 65 points66 points  (8 children)

That's about as expected for cinebench result.

13900K vs 7950X is only one that's up in the air how it turns out.

[–]freddyt55555 57 points58 points  (7 children)

13900K vs 7950X is only one that's up in the air how it turns out.

Certainly, the 13900K will be up in air with all the hot air it's going to create.

[–]jackmiaw200ge/5600xB450TomaHawkMax 2x16 3600mhz ram r9 380 sapphire 13 points14 points  (1 child)

Code name alaska heater

[–][deleted] 5 points6 points  (0 children)

Hey, I kind of like my 12900K during the winter!

(With an undervolt, it puts out heat comparable to Ryzen).

[–]Ragnaraz690 1 point2 points  (0 children)

Ish, cause there's going to be a 7950X3D, which will be something to watch for gamers.

[–]Poor_And_Needy 37 points38 points  (2 children)

This just in from the editors at userbenchmark: with the new 20% increase in performance, the new 7700X is now 20% more of a waste of money. Sane people are advised to continue using the first-generation 2c/4t i7 from the laptop they've had for 12 years.

/s

[–]riesendulli 4 points5 points  (0 children)

As always MAD marketing campaigns at Team Red. /s

[–]outtokill7 34 points35 points  (50 children)

So it looks like upgrading from a 2700X will give me a sizeable boost

[–]hairycompanion 31 points32 points  (19 children)

Went from a 2700X to a 5600X and the difference was huge. Cyberpunk areas went from 30 fps to 60.

[–]outtokill7 12 points13 points  (14 children)

This is what I was hoping for. A bump in performance on my 2080 would be really nice.

Thought about getting a 5600G but decided to wait for a 7600X/7700X.

The 2700X is still a good chip and does everything I need, but better single threaded performance would be really nice.

[–]DannyzPlayR9 5900X | RTX 3090 | 3733CL16 13 points14 points  (26 children)

If you don't want to invest money for a new board or ram, a 5800X3D would also be an immense upgrade for you.

[–]outtokill7 5 points6 points  (25 children)

I'll get the new board and ram. That's not much of a problem.

[–]anotherwave1 6 points7 points  (5 children)

That's the kicker. The new board + DDR5 RAM are going to cost a lot more than an AM4 board and DDR4 RAM. When these new chips come out we might be seeing a situation where it's cheaper to build a faster, older system (chip + mobo + RAM).

[–]outtokill7 1 point2 points  (4 children)

I upgrade every 5 years. I want the new platform. I'm not going to upgrade my CPU at this point to be stuck on DDR4 3000 and PCIe 3.

[–]anotherwave1 3 points4 points  (3 children)

Alright, but IF it's slower than the current gen for the equivalent price, you'll literally be sitting on a slower machine for the money for the next 5 years.

[–]outtokill7 2 points3 points  (1 child)

Well that's just a risk I'm going to take. I did buy a 2700X after all.

[–]EnderOfGender 15 points16 points  (0 children)

Pretty in line with what AMD showed with their mystery chip

[–]dkizzy 14 points15 points  (0 children)

looks like a good chip nonetheless, but I'm going to wait for the V-Cache version. I love the 5800X3D

[–]BiGkuracc3700X/b450Tomahawk/3070 4 points5 points  (6 children)

Spider-Man Remastered gave me an itch to upgrade my 3-year-old 3700X haha, but deep down I know the 3700X is just fine at 1440p.

[–]BNSoul 2 points3 points  (3 children)

The problem with the 3700X is the non-monolithic design and that it clearly bottlenecks anything similar or above an RTX 2080.

[–]BiGkuracc3700X/b450Tomahawk/3070 1 point2 points  (0 children)

In the majority of games I'm GPU bottlenecked, not CPU bottlenecked, at 1440p. A 3080 would be fine also, but with a 3090 my CPU would be a bottleneck.

[–]SirActionhaHAA 14 points15 points  (9 children)

773/630=1.227

20% faster

Nice math lol
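For reference, the arithmetic in question can be checked in a couple of lines of Python, using the leaked (rumored, not official) Cinebench single-thread scores of 773 for the Zen 4 ES and 630 for the 5800X:

```python
# Leaked Cinebench R23 single-thread scores (rumor, not official numbers)
zen4_es_score = 773   # Zen 4 engineering sample
r7_5800x_score = 630  # Ryzen 7 5800X

# Fractional uplift of the ES over the 5800X
uplift = zen4_es_score / r7_5800x_score - 1
print(f"{uplift:.1%}")  # prints "22.7%", which rounds to 23%, not 20%
```

So the headline's "20%" is a round-down of a 22.7% measured difference.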

[–]ForgottenCrafts 17 points18 points  (5 children)

I don't see what is wrong? It is roughly 20%.

[–]SirActionhaHAA 14 points15 points  (4 children)

It is roughly 20%

That's where it's wrong. Videocardz actually got the 23% number right in the body of the article, but they pulled a random round-down in the title.

[–]RealThanny 3 points4 points  (1 child)

Don't be unnecessarily precise. The numbers are not for a production chip, so giving the rough figure of 20% is perfectly fine. It's going to be different for the released product, but will still be within the orbit of 20%.

[–]Taxxor90 1 point2 points  (0 children)

You'd expect the performance to increase a bit for the final product, so at 22.7% you'd rather "round" up and say "almost 25%".

[–]georgfrankooASRock X570 / R7 5800x / 16 GB Ram 3200 Mhz 2 points3 points  (0 children)

Well , that means my 5800x is safe for the next 2 - 3 years :)

[–]jaKz9 7 points8 points  (4 children)

As a 5800x owner, the probability of me purchasing a CPU without 3D V-Cache in the future is roughly 0.00000001%.

[–]Cankles_of_Fury 2 points3 points  (2 children)

Just curious, but why is that?

[–]jaKz9 5 points6 points  (1 child)

Gaming performance, specifically in 1% and .1% lows (which is all I care about, not avg fps), is simply insane. I can only imagine how smooth games play with that amount of cache.

[–]gunshit 1 point2 points  (0 children)

Noice! Could you share some examples of software running better and/or more stable with 3D V-Cache CPUs?

[–]Firefox72 11 points12 points  (30 children)

That's kinda rough, is it not?

Unless this is a test that favors Intel.

[–]Seanspeed 29 points30 points  (14 children)

A 20% improvement(if it pans out in normal usage) would be a pretty solid gain. Not quite the same improvement as Zen 2 and Zen 3 provided, but not bad by any means. 10% per year improvement essentially.

Zen 3 will probably remain a big outlier in general. ~25% performance increase in only like 16 months on the same process node was pretty crazy.

[–]H1Tzz5950X, X570 CH8 (WIFI), 64GB@3200mt/s - quad rank, RTX 3090 5 points6 points  (12 children)

Hmm, idk, 20% for a brand new platform is decent if you look at it in a vacuum. Unfortunately the competition is there and it's strong; currently Zen 4 has around the same single-core perf as the 12900K in Cinebench, which is not good.

[–]BoltTusk 8 points9 points  (2 children)

Not to mention new RAM too. Intel can have people use DDR4 and H610 motherboards with their 13900KS, and it would probably beat Zen 4 on $/performance. With no leaked benchmarks for the 7950X, I hope it lands high enough to justify the platform adoption tax.

[–]H1Tzz5950X, X570 CH8 (WIFI), 64GB@3200mt/s - quad rank, RTX 3090 5 points6 points  (1 child)

It won't be necessary to go to those extremes like an H610 mobo; even a decent Z690 + DDR4 RAM is going to be a cheaper option than Zen 4. Now with DDR5, hmm, it's going to be pretty similar in terms of platform cost, but I'm not sure about performance.

[–]Seanspeed 3 points4 points  (3 children)

Don't get me wrong, I'm still a bit disappointed by Zen 4. After two full years of waiting, plus the switch to 5nm, I absolutely expected more than this.

But at 20%(again, if), it's hardly anything to scoff at.

[–]BoltTusk 3 points4 points  (1 child)

I’m disappointed too since these motherboard prices by ASUS suggest $670 MSRP for x670E boards

[–]H1Tzz5950X, X570 CH8 (WIFI), 64GB@3200mt/s - quad rank, RTX 3090 2 points3 points  (0 children)

Ehh, I have mixed feelings towards Zen 4. On one hand that clock speed is very impressive; on the other hand they sacrificed power limits. Idk, the biggest issue with Zen 4 as I see it is total platform cost; at least that's an advantage Raptor Lake has in the bag already. But it's going to be an interesting competition, can't wait for benches :)

[–]vectralsouli7 2600K @ 5GHz | GTX 1080 | 32GB DDR3 1600 CL9 | HAF X | 850W[S] 11 points12 points  (2 children)

How so?

AMD showed ">15% Single Thread uplift" in their presentation, as in 15% or greater.

Here with these results we see it's probably gonna be anywhere from 15 to 25 percent, give or take a few.

That's why the ">" in the presentation was so important, yet a lot of people still went out of their way to complain about it being too low, not enough, etc.

[–]Firefox72 11 points12 points  (1 child)

I meant more so the comparison to what Raptor Lake will get, not it being a small improvement. It's definitely a big jump. Wonder if it's enough to match or surpass RL though.

[–]Liddo-kunR5 2600 8 points9 points  (0 children)

It all depends on prices. If AMD loses their minds and prices the 7700X in the same bracket as the 13700K, then this CPU is DOA. Ideally the 7700X should compete with RPL i5 SKUs, not the i7, and the price should reflect that.

[–]EmilMR 6 points7 points  (0 children)

AMD won't be winning any Cinebench contest. Funny how Intel hated Cinebench and AMD was boasting about it a few years ago; now it's a complete role swap. It's just one test that is not relevant to most users anyway, but it's good for comparing single-core results imo. The 7700X being on par with the best of Alder Lake is about as expected from AMD's statement. Upper SKUs have higher boost clocks and should be a bit better, but they probably won't match RPL. It's close really; doesn't matter.

[–]Triggle_E_Puff3900X, 5700XT Pulse 9 points10 points  (5 children)

For the 7700X SKU (an engineering sample at that) this looks pretty good. Cinebench has generally favored Intel for the past few generations, and we'll see increased performance when clock speeds are finalized.

[–]uzzi385950X + 6700XT - 2665mhz@1200mv 11 points12 points  (3 children)

I wouldn't expect higher clock speeds from here lol.

[–]SirActionhaHAA 4 points5 points  (0 children)

There could be a couple % difference if it's an ES like the Chinese channel said, but it ain't gonna be much. The bigger question is how gaming will scale. Is it gonna be like Alder Lake, where gaming improvements were at 60-70% of the CB ST gains, or higher?

[–]Firefox72 1 point2 points  (0 children)

Fair point.

[–]markthelast 1 point2 points  (0 children)

20% is the minimum. This is also a synthetic benchmark. Gaming and productivity benchmarks will be the main challenge. Pricing and availability will be the ultimate determining factors for many buyers.

[–]Tight-Discount-3179 1 point2 points  (0 children)

Oh wow, I'd never expect a new CPU to be faster than the old one.

[–][deleted] 2 points3 points  (7 children)

So a 5800x3D could beat it in many games.

[–]Seanspeed 5 points6 points  (0 children)

Certainly in some games, yes it will.

[–]KuivamaaR9 5900X, Strix 6800XT LC 7 points8 points  (2 children)

Remains to be seen. DDR5 will help a lot.

[–]RenderBender_Uranus 2 points3 points  (2 children)

It doesn't remain to be seen when we look at the HUB data on the 12900K with DDR5 vs DDR4; the difference was insane just from moving to faster memory.

Zen 4 is not just a DDR5 upgrade, it is an upgrade on all metrics

[–]RBImGuy 2 points3 points  (0 children)

the x3D version....well..I buy it

[–]tpf92Ryzen 5 5600X | A750 3 points4 points  (4 children)

So after a year they're finally reaching 12700F scores at a similar price, which is a bit of a joke, especially since it can't even run DDR4. Hopefully gaming performance ends up better than the 12700F so it at least has something. The 7700X is likely going to be competing against the 13600KF, which very likely will end up performing better at most tasks while at a cheaper or similar price (the 13600KF would need roughly a 13.5% price increase from the 12600KF to land at a similar price to the 7700X), although with less L3 cache, which I think usually affects compression/decompression performance and obviously affects certain games more than others.
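The "~13.5% price increase" figure above can be reproduced with rough launch pricing. The 12600KF number below is its $264 launch pricing; the 7700X figure is purely a hypothetical target for illustration, since final 7000-series pricing wasn't public at the time:

```python
# Rough price math behind the "~13.5% increase" claim.
# 12600KF launch pricing is real; the 7700X figure is a hypothetical target.
i5_12600kf = 264       # USD, 12600KF launch pricing
r7_7700x_target = 300  # USD, assumed 7700X price point for illustration

# Increase the 13600KF would need over the 12600KF to match the 7700X
increase_needed = r7_7700x_target / i5_12600kf - 1
print(f"{increase_needed:.1%}")  # about 13.6%, in line with the quoted ~13.5%
```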

[–]Roubbes 1 point2 points  (3 children)

What's the core count going to be in the 7000 series?

[–]vectralsouli7 2600K @ 5GHz | GTX 1080 | 32GB DDR3 1600 CL9 | HAF X | 850W[S] 7 points8 points  (1 child)

As far as we know, it's not gonna be any different than the 5000 series:

R9 7950X -> 16C/32T

R9 7900X -> 12C/24T

R7 7700X -> 8C/16T

R5 7600X -> 6C/12T

There were some rumors going around that the missing 7800X might be a 10C/20T SKU, although I'm not sure how realistic that is, given the possible CCX configurations.

[–]Tommy_Arashikage 1 point2 points  (0 children)

I mean, 2 CCDs with 5 cores each is certainly possible. But on AM4 Zen 3 AMD didn't want to go below 6 cores per CCD, so I agree with you that it's probably unrealistic.

I was thinking a 14-core R9 is more realistic, with the R7 becoming a 12-core processor.

[–]Put_It_All_On_Blck 1 point2 points  (0 children)

Core count stays the same on every model we've seen. AMD even accidentally leaked the core counts on most of the models.

[–]INITMalcanisAMD 1 point2 points  (2 children)

I feel like my 5800X will last me just fine until Zen5 appears.

[–]Seanspeed 1 point2 points  (1 child)

I don't know why anybody would have thought otherwise. Do people generally think that CPUs become obsolete after one generation? :/

This aint the early 90's...

[–]chown-root 1 point2 points  (9 children)

13900K gon beat dey ass.

[–]bbe12345 3 points4 points  (0 children)

By a little bit, yeah. Probably going to be a 5-10% margin between the 13900K and 7950X overall in multicore anyway.

[–]sdcar1985AMD R7 5800X3D | 6950XT | Asrock x570 Pro4 | 32 GB 3200 CL16 2 points3 points  (7 children)

But with how much power draw and heat?

[–]Westify1 3 points4 points  (4 children)

Too much IMO. The rumored "extreme mode" on the 13900K has the PL2 limit raised to 350W.

Considering how hard the 12900K was to cool at only 240W PL2, the 13900K sounds like a nightmare for both cooler requirements and heat output, since it will probably be paired with a 300W+ GPU.

I've purchased Intel CPUs for the last 2 generations, but if their only way to stay competitive on a worse node is to blow the roof off the power limits, then swapping to AMD will be the easiest choice.

[–]mista_r0boto 4 points5 points  (3 children)

350W for a CPU? Wtf.

[–]Maler_Ingo 2 points3 points  (2 children)

There was also a bench run of a 13900K with PL2 at 485W running on a dual loop chiller lmao

[–]mista_r0boto 1 point2 points  (1 child)

Holy shit. I can't even. Add a GPU at 500W and the system overhead and you're basically running a space heater. It's insane.

[–]dmaare 1 point2 points  (0 children)

With an Intel CPU you're forced to do extreme OC?

Is it a problem that the CPU can be overclocked like that?

[–]pittbullblue 0 points1 point  (2 children)

I was going to be sad because I just bought a 5800X like 2 months ago, but I think it was definitely still worth the upgrade from a 1700 🤣