[–]Saianna 173 points174 points  (9 children)

Thanks to Empy's project damage we discovered numerical values in the game are based around int32. -2.147bil to +2.147bil

IIRC deep delvers discovered that years ago.

[–]TritiumNZlolmarauder 86 points87 points  (3 children)

if you need a receipt, here is mr one mana left giving a demo of the cap 2 months or so ago, for about the first 10 minutes.

TL;DW, he fights the same boss twice, but the second time the boss should have 70% more hp (from extra delve levels). Both fights take the same time to complete. Then he breaks down how that's possible, because they're both at the hp cap.

[–]baev_osGladiator 16 points17 points  (2 children)

Wow, that's not a video, that's a thesis. What a piece of research!

[–]TritiumNZlolmarauder 4 points5 points  (1 child)

Dude's got a PhD (pretty high damage) in PoE.

[–]Et_tu__Brute 30 points31 points  (2 children)

This wasn't discovered in delve. It's been known about for a very long time.

Delve was, however, the first place where it actually affected gameplay in a meaningful way. Delve is where an 'interesting factoid' became 'an important game mechanic to understand'.

[–]Obvious-Recording-90 2 points3 points  (1 child)

The first place it was demonstrated well was in delve. We knew it before that, from the game being 32-bit.

[–]Obvious-Recording-90 3 points4 points  (0 children)

Yeah this is a post by someone who has no idea about the history of the game. We have known this for years. This is just stolen content from One Mana Left. If "Empy" had any values he would credit OML. He doesn't; he is just a view bot himself.

[–]Nephalos 310 points311 points  (41 children)

Well I won’t repeat what others have already said, but if you’ve heard about the dot cap this is also the reason why it exists.

Dots are calculated per minute instead of per second, so the reason dots are capped at ~35.8mil dps is because that is equivalent to the integer cap (2^31) divided by 60.
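
A minimal sketch of that arithmetic, assuming the per-minute total really is a signed 32-bit int (which is what the observed cap suggests):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const std::int64_t per_minute_cap = INT32_MAX;                        // 2,147,483,647
    std::printf("%lld\n", static_cast<long long>(per_minute_cap / 60));   // 35791394, i.e. ~35.8M dps
}
```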

[–]you_lost-the_gameAtziri 63 points64 points  (16 children)

Interesting. Never understood why there was a dot limit. It's technical reasons.

[–]cart0graphy 46 points47 points  (15 children)

There's no reason why they couldn't calculate it over any arbitrary time unit. The reasoning is actually just a design decision.

[–]moonias 8 points9 points  (9 children)

I'm going to assume it's for simplicity's sake. Calculating it as a per-second value could mean potentially 60 adjustments per second, while a per-minute basis makes it easy to interpolate over a longer period of time, maybe?

But then they could easily double the dot cap by calculating it based on half a minute instead?

[–]Ornedan 9 points10 points  (2 children)

ΔHP/Δt from DoTs is applied every server tick (33 milliseconds). There are no adjustments or interpolation.

Storing DoTs as an integer per-minute value is just a weird design decision. Possibly left over from before they locked in the server tick length.
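
A rough sketch of what applying a stored per-minute value on a fixed tick could look like (purely illustrative, not GGG's code; the 33 ms tick is taken from the comment above):

```cpp
#include <cstdint>

constexpr std::int64_t tick_ms       = 33;       // server tick length
constexpr std::int64_t ms_per_minute = 60'000;

// Scale the per-minute DoT value down to one tick's worth of damage.
std::int64_t damage_this_tick(std::int32_t dot_per_minute) {
    return static_cast<std::int64_t>(dot_per_minute) * tick_ms / ms_per_minute;
}
```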

[–]Glaiele 1 point2 points  (0 children)

It's likely due to the way ignite and bleed work. It's probably more efficient to normalize the dot dmg into a consistent time period in order to figure out which dmg application will do the most damage and just compare the integer value, rather than do the DPS calculation for each dmg instance. They probably picked a minute because dots aren't likely to ever last longer than that.

[–]Legend_Zector 15 points16 points  (5 children)

You wouldn’t need to adjust it 60 times a second, you’d need a rate and then simply multiply by the fraction of time that the monster is receiving damage for (e.g. if they’re taking damage for 4 seconds, that’s 1/15th of a minute, and so just multiply the DOT’s damage-per-minute by that and gradually interpolate the monster’s HP). Putting it as per-minute seems strange, but if they’re doing it through an integer then it may be required so as to not round off low-DPS dots into stupidly low numbers.

[–]moonias 0 points1 point  (4 children)

But what if they average damage over more than 1 second for example to reduce the computations? Is it possible they don't "exactly" update the dot values every second?

[–]Legend_Zector 0 points1 point  (3 children)

I’m not entirely sure what you mean, but whether or not the rate is per-second or per-minute is irrelevant to how much damage the monster takes - all that changes is what you multiply by to get the damage output. And in order to see the monster taking damage at all, you’d need the damage updated.

[–]GeorgeZ 2 points3 points  (0 children)

Spaghetti code more likely, if I had to guess. Not a slight on GGG. Just nature of the beast.

[–]typoscript 0 points1 point  (1 child)

It's technical reasons, not arbitrary reasons. You can't say that if you don't understand the constraints they're working against.

This could be a design trade-off for performance. For all you know, using a different calculation method or keeping a larger variable in memory could blow the whole game up; the amount of processing and memory currently used by the game definitely isn't arbitrary.

It would be pretty short-sighted to think GGG isn't optimizing the best they can, or that simply changing a column type on a table would suddenly open up drastically better calculations or performance and they somehow haven't thought of that, right?

[–]BMaD_Chillyo -1 points0 points  (0 children)

int256 also exists =P

[–]cXs808 0 points1 point  (20 children)

Does this mean that as power creep inevitably happens - DOT will eventually be phased out as it won't be able to keep up with power creep at some point in the future?

Or is there a workaround for them to increase DOT cap while still being on int32?

[–]ShanderraaJuggernaut 28 points29 points  (12 children)

Technically, DOTs are already phased out. There's a reason the uber-minmaxed builds very rarely go DOT.

[–]moonias 34 points35 points  (5 children)

It's been demonstrated that damage reduction is applied BEFORE the dot cap enters into play. So by using a damage reduction factor on bosses instead of just giving them ridiculously high numbers of life points, the dot cap keeps scaling.

For example, on a monster with 0% damage reduction, you can do 35M dot dps and you'll reach dot cap.

On a monster with 50% damage reduction, you can still do 35M dps AFTER your dot has been reduced, so essentially you are doing 70M dps to reach the dot cap.

So you can still scale your damage up even with dots. 100% delirium monsters have something like 90% damage reduction, but not 10x more life.
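
Spelled out as a tiny helper (hypothetical, just to show the arithmetic): the raw DoT DPS needed to still sit at the cap after mitigation is the cap divided by the damage-taken multiplier.

```cpp
// damage_taken_multiplier is e.g. 0.5 for 50% damage reduction, 0.1 for 90%.
double raw_dps_needed_to_cap(double damage_taken_multiplier) {
    return 35'791'394.0 / damage_taken_multiplier;   // 50% DR -> ~71.6M, 90% DR -> ~358M
}
```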

[–]TheZephyrim 1 point2 points  (1 child)

Still though, 90% would cap your DoT damage at 350million, and 95% would cap your DPS at 700million.

So at the high end DoTs will always be playable, but at the uber high end your DPS will be lower with DoTs than with hits

[–]moonias 3 points4 points  (0 children)

Yea I mean that's correct, the dot cap still exists no matter what, but the OP was asking if DOTs would be phased out. I am merely illustrating that they can still scale higher than just the 35M dps shown in your PoB against regular monsters.

35M dps is plenty good to kill ubers anyway today.

[–]Pokey_Seagulls 7 points8 points  (2 children)

The reason minmaxers don't go Dot is because dots are by nature slower at killing than crit builds. 

Crits deal their damage instantly, dots deal their damage, well, over Time.

Even if damage caps didn't exist people still wouldn't do dot builds over crit builds, because crit builds would still kill things faster.

[–]PointiEar 3 points4 points  (1 child)

because they are zhp. When you are zhp zdefenses, you can scale things much more. Dots have fewer ways of being scaled that hard, so they are better for cheaper/tankier builds

[–]Aldreen 2 points3 points  (0 children)

Stuff having damage reduction rather than increased max HP can let dot builds go higher, as the damage needed to reach the cap is higher

[–]connerconverseHierophant 1 point2 points  (4 children)

This is already the case for several leagues now

[–]stormblindNecronomicon Author[M] 724 points725 points  (29 children)

I mean, it's been known for a while for those super into the weeds technically on the game. But for a lot of folks, this would be new info. So let's be nice about it, no need to be putting the OP down guys.

[–]Mr_big_chill_ 131 points132 points  (6 children)

Yeah and even if you know it you can still upvote it for mildly interesting content...

I would probably upvote someone saying that sharks are older than the rings of Saturn even though I already know it. It's just a cool fact.

[–]Latter_Weakness1771Duelist 67 points68 points  (3 children)

Huh? The rings of Saturn are made up of an integer overflowing amount of sharks?? That's cool as fuck!

[–]Independent_Data365 10 points11 points  (1 child)

Saturn just got bumped up the list of favorite planets with the fact of shark rings.

[–]Latter_Weakness1771Duelist 4 points5 points  (0 children)

Yes, we should go disseminate this fact to the rest of Reddit.

To the front page!

[–]Sceth 2 points3 points  (0 children)

Or that bees play with little balls for fun

[–]stormblindNecronomicon Author 8 points9 points  (0 children)

That's a great secondary point. Folks who don't follow empy directly (I enjoy, and really appreciate what he does. Not a fan of his streams) may not realize he had another project that broke the game to this level.

So it's a neat way of saying "hey, go check this out!"

[–]HiddenForbiddenExile 14 points15 points  (0 children)

Not everybody should be expected to be up to date on everything, but when going out of your way to claim a new discovery (on someone else's behalf), that should warrant a quick search to see if that wasn't already known. It's not a big deal at all and not a reason to even downvote imo, but it's just something I'd think would be commonplace.

[–]odlayrrab 18 points19 points  (14 children)

Hasn't this been known for a while?

[–]_RrezZ_ 27 points28 points  (7 children)

Delvers have known this for ages, as the top ones hit the cap every league.

[–]arbyterOfScales 22 points23 points  (6 children)

There was a video from Conner where he talks about the max HP cap and then demonstrates it.

He made a calculation showing that a delve boss in a party of 6, at depth ~6000 and with some other mods, should have somewhere between 500-700 billion HP.

In other words, it would take a char who can do ubers in 1 second half an hour to kill that boss.

Then he proceeded to demolish the boss

[–]mayo_power 5 points6 points  (5 children)

Some smart fuckers play this game.

[–]edifyingheresy 4 points5 points  (4 children)

I feel dumb every time I watch one of his vids but goddamn if he doesn't make some interesting content.

[–]GentleMathem -1 points0 points  (3 children)

NEVER MIND DISREGARD THIS, I AM DUMB DUMB

If you want to feel a little smarter, look at his vids then his PoB. I like his content too, but his damage calculations are usually off. His 3 comma dmg video for instance has more skills active in the corresponding PoB than possible. When I finally turned off all the extra skills it was far from a billion damage. Still one shots everything in the game, still impressive, and I still couldn't ramp that high on my own, but I do double check his work when I follow it.

[–]Boredy0 12 points13 points  (1 child)

Pretty much, it's also the reason DPS is capped at ~36m since total DoT DPS is stored as damage per minute and can at most be 2^32 / 2.

[–]Aerroon 5 points6 points  (0 children)

2^32 / 2 = 2^31

[–]stormblindNecronomicon Author 9 points10 points  (3 children)

It has. I think I first learned about it from Empy's armor video, funny enough.

But, guy could be newer than that project. Or simply skipped that league.

Not that many projects push things to that level, so it could be real easy to have missed it even if you were playing for 10 years :)

[–]EnergyNonexistantDeadeye 4 points5 points  (2 children)

Not that many projects push things to that level

wasn't it basically known for many many years since Delve caps out? i thought the DPS tooltip also broke many many MANY years ago when HH had no cap....

[–]ssbm_rando 0 points1 point  (0 children)

I think it's been known for a while for anyone who knows any amount of computer science. You really didn't have to be "super into the weeds technically on the game."

The people who didn't know this fact are either fairly new to the game (which is fine, this is a great post for them, I guess), or don't have the technical knowledge to meaningfully understand this post in the first place....

[–]dvolper -2 points-1 points  (0 children)

He will get downvoted anyways. This still is the internet...

[–]flyinGaijin -1 points0 points  (0 children)

Well, the OP put the flashy "information" tag for something that is old for many, and utterly pointless for most so ...

Making it sound like an announcement / a big deal can lead to negative comments, and it seems justified to me

[–]ZaMr0 23 points24 points  (0 children)

Wasn't this known for ages? It's the case in a lot of games. Same reason why max cash in runescape is 2.147bill.

[–]gvieiraSaboteur 32 points33 points  (0 children)

we discovered it...

... like half a decade ago

[–]Lorberry 93 points94 points  (51 children)

This would be an interesting interview question to a developer.

Not really, or at least not to anyone who would have context for the answer.

Using a bigger type for a variable means every instance of that variable in the game takes up more space in memory - the 'box' of the type takes up the whole amount of space every time, even if you're only using a tiny portion of it. Considering how many times that's going to be referenced in a game as fast paced as PoE, and how few people are going to be brushing up against that upper limit (especially back at the start of things when this would have been first implemented), going with a bog standard integer makes perfect sense.
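
A tiny illustration of that point (the stat fields here are made up, not PoE's actual layout):

```cpp
#include <cstdint>

// Widening every stat from 32 to 64 bits doubles the memory footprint of each instance,
// whether or not the extra range is ever used.
struct Stats32 { std::int32_t life, mana, energy_shield, armour; };   // 16 bytes
struct Stats64 { std::int64_t life, mana, energy_shield, armour; };   // 32 bytes

static_assert(sizeof(Stats32) == 16, "four 32-bit fields");
static_assert(sizeof(Stats64) == 32, "the same four fields at twice the size");
```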

[–]Aacron 0 points1 point  (30 children)

My question.

Why are strictly non negative integers signed?

[–]0x00000000 29 points30 points  (3 children)

If you have 100 health, take 150 damage and heal 60, you could temporarily have -50HP in the middle of damage calculations before the values get clamped. Subtraction becomes a footgun with unsigned ints. Why add an entire potential class of bugs?

Yeah yeah you can deal with overflows around zero, but I can see the devs in early days going "eh we're not hitting 2^31, and if we do we'd also hit 2^32 anyway and signed is much less of a pain in the ass to deal with".
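
A minimal sketch of that footgun (hypothetical values, not game code):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    std::int32_t hp_signed = 100;
    hp_signed -= 150;                    // -50: easy to spot and clamp
    if (hp_signed < 0) hp_signed = 0;

    std::uint32_t hp_unsigned = 100;
    hp_unsigned -= 150;                  // wraps around to 4294967246
    // A naive check like (hp_unsigned < 0) can never fire, so the wrapped value sails through.
    std::printf("%d vs %u\n", hp_signed, hp_unsigned);
}
```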

[–]arbyterOfScales 7 points8 points  (0 children)

Interesting question, one that fucked over even the big honcho of the C++ standards body, who now regrets some design decisions about how the C++ STL containers were implemented.


The mathematical reason:

Natural numbers are NOT closed under subtraction. This means that you can subtract 2 natural numbers and get something non-natural.

For example 69 - 89 = -20. Which is a negative integer, ergo not natural.


The programming reason:

The integers are stored in a 32-bit representation, and addition and subtraction operate on that representation. If you use unsigned integers, ANY bit pattern of those bytes is interpreted as a non-negative integer, which will give bogus results due to rollover.

If you had used unsigned integers, 69 - 89 would have rolled over and given you something in the ballpark of 4.3 billion.

[–]OmgIRawr 4 points5 points  (13 children)

Signed means that the first bit is used to show whether the stored value is negative or positive, which means the limits are as shown above. Unsigned integers are strictly non-negative and their values range from 0 to ~4.3b (2 to the power of 32, minus 1).

[–]Aacron -2 points-1 points  (12 children)

Yeah, I do embedded programming for a living.

Why are strictly non negative values signed?

[–]Xeverousfilter extra syntax compiler: github.com/Xeverous/filter_spirit 16 points17 points  (1 child)

There are multiple reasons, some very specific.

  • Because some of these values may become negative in the future (I see a potential with negative Quality - that would be a buff to Forbidden Taste) and such approach reduces required refactoring.
  • Because there might still be some special uses, e.g. using value -1 to indicate absence of the mechanic.
  • Because even if you have 2 values that are non-negative by design, you may sometimes want to compute the difference between them (which can be negative if you don't sort them).
  • Because in C and C++ programming languages signed integer overflow is undefined behavior but unsigned integer overflow is well-defined to wrap around (at the hardware level both can wrap around). Unsigned integers are thus intended for bit operations and readouts of binary representation, and signed integers are intended for general math and are subject to a wider set of optimizations (e.g. x + 1 > x can be simplified to true for signed integers, because the compiler can assume overflow should not happen).
  • Because of conversion rules in C and C++, some interactions of a signed/unsigned mix in one operation can have a very surprising result (-1 < 0u results in converting both to unsigned, which means it is treated as 4294967295u < 0u, which is false) - see the sketch below.
  • Some tools/languages for various reasons do not support unsigned types. JSON is probably the biggest offender because it doesn't even differentiate between integer and floating-point types.

more details: https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#arithmetic
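
A small demo of the conversion surprise from the list above (plain standard C++, nothing PoE-specific):

```cpp
#include <cstdio>

int main() {
    unsigned int zero = 0u;
    // The usual arithmetic conversions turn -1 into 4294967295u before comparing,
    // so the "obviously true" expression is false.
    if (-1 < zero)
        std::printf("never reached\n");
    else
        std::printf("-1 < 0u is false after the implicit conversion\n");
}
```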

[–]Aacron -1 points0 points  (0 children)

Future-proofing and JSON transfers weren't things I had considered, thanks

[–]fluuxx 5 points6 points  (2 children)

I think it's likely that the data storage they've defined is shared across different stats. Negative life regeneration, for example, is something that happens quite commonly; negative damage not so much.

I would gamble the data struct they have for character stats uses signed ints to cover both of these cases

[–]Smellypuce2 9 points10 points  (1 child)

Yeah, generally speaking you don't want to mix signed and unsigned if you can help it (common source of bugs). Also, people have been moving away from the idea of "use unsigned if you don't expect the number to be negative" to "only use unsigned if you really need to", because using unsigned has a number of pitfalls that can make debugging and testing harder. More people have been moving away from using unsigned indexing even (unless an array of char larger than half of memory is needed on a 32-bit system)

[–]arbyterOfScales 4 points5 points  (0 children)

More people have been moving away from using unsigned indexing even(unless an array of char larger than half of memory is needed)

Yep, Stroustrup - the C++ committee chairman - said he regrets making the C++ STL containers return unsigned values for size

[–]lunaticloser 2 points3 points  (3 children)

Presumably to allow for edge cases where you could do negative damage.

In other words so they don't need to constantly worry about underflow issues - which would be way worse than signed overflow. I'm sure it's much easier to reach negative flat added damage, which would underflow to MAX_INT, and allow you to trivialise all content.

[–]Xeverousfilter extra syntax compiler: github.com/Xeverous/filter_spirit 4 points5 points  (1 child)

Technical correction: underflow is a different term. Underflow happens on floating points when the result of the operation is smaller than representable value (e.g. dividing 1 by 2 over and over will sooner or later reach 0).

Going below 0 on unsigned types is still named overflow.

[–]lunaticloser 1 point2 points  (0 children)

Ah true.

That's what you get for using too much JS for too long. Brain forgets.

[–]tetrahedral 0 points1 point  (7 children)

What are you hoping to get out of the answer? I don't know many devs who enjoy being handed riddles in interviews.

[–]Aacron 4 points5 points  (0 children)

There were several constructive and detailed replies, all before you commented, and they contained the exact sort of game design considerations I hadn't considered.

[–]definitelymyrealname 1 point2 points  (5 children)

It's not a riddle. It's an interesting question and the answer is probably interesting too. The top reply about damage calculations sometimes going negative makes sense to me and wasn't something I had ever thought about. I'd be interested to hear a dev's response as well.

[–]zonkgod 145 points146 points  (26 children)

Haven't watched Empy's video, but having int32-limited values on the UI panel does not mean that int32 is used in actual internal calculations or storage; it just means that the UI panel is coded in terms of int32. Casting from a wider (say int64) to a narrower (int32) type is common in programming, often due to laziness of not wanting to update less important code to use wider types consistently.

[–]ThatsALovelyShirt 111 points112 points  (19 children)

It's likely 32-bits internally as well, wherever possible. There are a bunch of CPU optimizations and SIMD operations which can vectorize 32-bit calculations for performance, while the equivalent instructions for 64-bit intrinsics would only be able to perform half as many calculations. I'm sure they're optimizing the hell out of the server-side binaries.

Think about all the damage calculations, recursive interactions, etc., which the server has to compute in realtime. Being able to keep things in 4-byte integers would allow the compiler to make a lot more optimizations. Furthermore, it would significantly reduce inter- and intra-network overhead.
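
As a rough illustration of that point (an AVX2 sketch, not a claim about GGG's actual instruction mix):

```cpp
#include <immintrin.h>  // AVX2 intrinsics; compile with -mavx2 or equivalent

// One 256-bit register holds eight 32-bit lanes but only four 64-bit lanes,
// so the same add instruction covers twice as many values in the 32-bit case.
__m256i add_eight_i32(__m256i a, __m256i b) { return _mm256_add_epi32(a, b); } // 8 adds at once
__m256i add_four_i64 (__m256i a, __m256i b) { return _mm256_add_epi64(a, b); } // 4 adds at once
```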

[–]MyLifeForAiur-69 59 points60 points  (3 children)

oh god, i understood almost all of this. i guess them student loans are paying off

[–]IRefuseToGiveAName 24 points25 points  (1 child)

We paid tens of thousands of dollars (if you're from the States anyway) to be able to follow an esoteric conversation on a subreddit 🤓

[–]JosemiHero_ 0 points1 point  (0 children)

Or about 0 if you don't have pretty good income here. Really makes you see the good things in your country

[–]IridiumIO 1 point2 points  (0 children)

Just wait until you find that YouTube video five years later that does a better job of explaining everything, except for free

[–]mindlesstourist3 7 points8 points  (4 children)

Realistically 64bit should be close:

  • Generally, the 32bit instructions and calculations are not nearly twice as fast because many of them can only use a limited set of registers (or they just use full 64bit registers under the hood anyways); they are also less targeted for optimization nowadays. The only case where they can get close to twice as fast is in case of the AVX instructions.
  • In certain cases, using 32 bit ints is actually even slower because the CPU needs to convert it back and forth to 64bit.
  • Network bandwidth is not that significant of a concern, at least not between player clients and the servers. Games have really low network bandwidth usage compared to basically any other network use.

One interesting thing I found out just now is that there is no efficient x64 CPU instruction to convert unsigned 64-bit ints to 64-bit floats.

Considering that the servers are almost certainly converting to some kind of floating point number for percentage calculations, the lack of that instruction might give some meaningful advantage to 32bit (but I still doubt it's even close to twice as fast).

[–]BitterAfternoon 11 points12 points  (2 children)

It seems unlikely to me they're using floats in inner-loop calculations. Floats are significantly slower than integers in general. Rather, as much as possible you try to use tricks to turn everything into integer arithmetic, such as fixed-point arithmetic. You can always convert to/from your internal representation before/after the nitty-gritty calculations, as that only happens once.
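
For example, a percentage multiplier can stay entirely in integer math, fixed-point style (a sketch with made-up names, not PoE's actual formula):

```cpp
#include <cstdint>

// "40% more damage" as pure integer arithmetic: scale by (100 + pct), then divide back down.
std::int64_t apply_more_multiplier(std::int64_t base_damage, int more_percent) {
    return base_damage * (100 + more_percent) / 100;   // e.g. 1000 with 40% more -> 1400
}
```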

But as to whether 32 or 64 bit int is much different, that would absolutely take benchmarking, as there are a large number of factors in the complexity of an x86 CPU. Performance could easily be determined by branch-prediction failures or cache misses (at varying levels - incidentally another reason besides SIMD to prefer 32 bits as the storage format: it takes less CPU cache space!) more so than by how many multiplications per second could be done with perfect utilization using AVX registers vs normal registers.

IF it came down to the number of mathematical operations per second being the critical issue, AVX (which CAN do twice as many multiplications in 32 bit mode vs 64 bit mode) should be faster for doing large numbers of independent operations in parallel, as that's its entire reason for existence. BUT I think there's an overhead to packing/unpacking AVX registers when the data may be coming from different locations - it's intended to be taking a large continuous bulk of data with many many discrete pieces being treated the same way - so it might be taking a significant overhead hit that it has to pay for before it can pull ahead.

Bottom line is they'd need to benchmark alternative implementations to see how much of a performance gain/hit they take from different approaches and whether or not that's "worth it" for other tradeoffs the approaches might bring with them. It is extremely difficult to predict what optimizations are important with all the moving parts inside a modern CPU's architecture - which is why among other things it's recommended these days to not think too much about optimization until you've got a functional implementation and have a profiler tell you where it's using its time.

[–]qzex 1 point2 points  (0 children)

I agree with mostly everything you're saying especially regarding cache misses and branching being the bottleneck. I do want to point out that on modern architectures, floating point performance is very comparable to integer. In many cases it's even faster. For example if you're computing x * y + z, like for matrix multiplication, the CPU can do it in half a cycle (maximum throughput in a tight loop). For integers it would take 1.25 cycles.

[–]Upset-Range-3777 10 points11 points  (1 child)

32 bit here would not be chosen as an optimization reason. in gameplay code like that, you lose orders of magnitude more processing time to simply fetching a few bytes of memory that is not in cache than you do processing it. the 32 vs 64 bit switch would not make a difference here in computation time (it's also larger in memory, but the reason you lose so much time is latency, not really bandwidth). doing the actual computations in gameplay code is nearly always trivial. getting the data organized in the right way to do it is literally always the hard part.

the reality is probably just that they picked 32 bit uint as a default just like everyone does. and now it's getting a bit spaghetti-ish to change it due to probably many places in the code relying on this kind of assumption.

[–]cart0graphy 1 point2 points  (0 children)

It's not uint but a signed integer, as it overflowed to a negative value.

[–]qzex 1 point2 points  (0 children)

If they use 32 bits internally, I think it's most likely not due to a performance requirement, but rather the person who initially wrote the code decided 32 bits was enough, and it's a lot of work to change now.

I find it very unlikely that using 64 bits where needed is significantly slower. I've worked in multiple HFT firms and we use int64s and doubles all over the place. It's quite rare that you are bound by pure CPU compute and the data is arranged perfectly for you to use SSE/AVX. Instead, cache misses, branch mispredictions, virtual function call overhead are generally much bigger issues. It's true that changing from int32 to int64 would increase cache misses but I doubt it would be that impactful.

On second thought I could be wrong, since there are a lot of damage values going around all the time, it's definitely possible that the penalty to cache locality from using int64 would be substantial.

[–]Adghar 15 points16 points  (0 children)

We already knew actual internal calcs for damage were int32 from the DoT cap being reached a couple years back.

[–]dobrowolskSaboteur 7 points8 points  (2 children)

Also it's not a valid conclusion that monster life is an int32 just because player damage is an int32. It's perfectly possible to subtract one number type from another.

[–]arbyterOfScales 0 points1 point  (0 children)

It is not, but it can be easily demonstrated.

Step 1: Go delving to depth 1k
Step 2: Find a delve boss
Step 3: Bring 5 more friends
Step 4: Kill boss

Step 5: Repeat the same thing but now at depth 6000.

The bosses should have the same amount of HP.

[–]lunaticloser 1 point2 points  (0 children)

While this is true in general, deep delvers have long known mob hp to be capped at max int32.

Otherwise it'd be impossible to clear the current depths.

[–]mbxyzBerserker 80 points81 points  (4 children)

yea this isn't new

[–]L3Thvl 21 points22 points  (1 child)

Connerverse shared a lotta insight on this on one of his recent vids since everything in deep delve is relevant to such numbers.

[–]alexthealex 9 points10 points  (0 children)

Yeah anyone who has been following OneManaOnly/mana stacking builds has known this for a while but until the last league or two those builds were incredibly niche and almost always hidden from Poe.ninja.

[–]VVilkacy 9 points10 points  (0 children)

We already knew that.

[–]connerconverseHierophant 81 points82 points  (2 children)

Ah yes we discovered something that's been known and applicable in game for many many years

[–]Nazgul_Linux 4 points5 points  (0 children)

I mean, deep delver Connor has had this info out for a long time. Empy didn't discover this brand new. He just discovered it for himself and his crew. But, it's a 32-bit game so.. it should have been easy for anyone tech savvy to figure out.

[–]Korrigus 15 points16 points  (1 child)

I don't know how much software experience others in this discussion have, but one thing I rarely see brought up around this is basically the risk vs reward for this type of change.

It's basically a given that making any sort of change to this logic is going to be a HUGE refactor of a lot of different things, and that inherently comes with a high chance of breaking something.

Weigh that against the benefits of making the change (and let's be honest, most people wouldn't be affected by this change much, if at all), and there's not really a good reason for them to change anything.

If it ain't broke, don't fix it.

[–]GreatMacAndCheese -1 points0 points  (0 children)

Agreed, this is the kind of routine analysis and decision making that devs find themselves making every single day when deciding what to tackle/where to put their time. I wouldn't be surprised at all if it was a big reason for GGG making PoE2 a completely separate game -- would be an absolute nightmare understanding and knowing all the touchpoints between the two games.

[–]SubdueNA 9 points10 points  (10 children)

If GGG wanted to change the EHP beyond 2.147b without using less damage taken, they could also add a second life bar. It's not really a limitation.

[–]Xeverousfilter extra syntax compiler: github.com/Xeverous/filter_spirit 2 points3 points  (0 children)

Well many bosses already have multiple phases, each with their own health bar.

[–]warmachine237 3 points4 points  (7 children)

They could also just cut all numbers by 2 and call it a day.

Player damage, hp, enemy damage and hp etc.

[–]raxitronInquisitor 22 points23 points  (0 children)

Rip CI players

[–]beaverusiv 0 points1 point  (2 children)

Not without immense effort. PoE is way too complex, with mechanic interactions and more/increased multipliers; you could never perfectly get to a point where everything relevant was balanced the same as it is now but at half the value

[–]RainbowwDash 3 points4 points  (0 children)

You can cut all base values by 4 and leave all multipliers the same

Probably requires a fair bit of polishing but also nothing at all like you're describing

[–]Monkey_1505 -3 points-2 points  (0 children)

GGG's 'balancing' procedure appears to be throw random things into the game, from amazing to garbage (mostly garbage), see what the players do with it, and if it's too good and hasn't been used for long, nerf it.

Chances are randomly altering the game balances in some manner wouldn't be much better or worse.

[–]MyLifeForAiur-69 0 points1 point  (1 child)

They could just divide all the numbers by 4 and take an early lunch

[–]passatigiPathfinder 1 point2 points  (0 children)

Divide all numbers by 8? Now we cooking!

[–]hermeticpotato 21 points22 points  (0 children)

This has been known for a while?

[–]Dilutional 5 points6 points  (0 children)

No shit

[–]Beneficial-Insect-44 5 points6 points  (0 children)

also DoT cap is a result of the game calculating all damage over time on a "per minute" basis, which is then divided by 60 for display purposes to the player (for example in the character panel).

The game uses a 32-bit sized variable to store and "work" with this "per-minute DoT" value. 32-bit means there are 32 "bits" which can either be 1 or 0. The maximum value of such a field is 11111111111111111111111111111111, which is 4,294,967,295 in decimal. However, the game treats this value as a "signed" value. This means the left-most bit is used to indicate whether the number is positive or negative, effectively cutting the available "absolute" range in half. The value in decimal can now range from -2,147,483,648 to 2,147,483,647.

If we now go ahead and divide the maximum possible (positive) value 2,147,483,647 by 60 (to display the damage "per second" instead of "per minute") we end up at 35,791,394 - the DoT cap.

[–]azkv 1 point2 points  (0 children)

this has been known forever, 0manaleft talks about it in at least 5 videos.

[–]Daedaloose87 1 point2 points  (0 children)

ConnerConverse Aka onemanaleft has a deep dive into this in his awesome deep delve video. You should have a look in his yt channel

[–]Hlidskialf 1 point2 points  (0 children)

Slowpoke

[–]Responsible-Pay-2389 1 point2 points  (0 children)

Yeah we've known that for ages lol

[–]paakoopa 1 point2 points  (0 children)

Learned about that a bit before Legion started, when Connor dropped the deep delve guide where he explained this at length and in detail, since delve gets to a point where every rare has max life

[–]FeedMeYourDelusions 1 point2 points  (0 children)

Delvers knew this for a while tho

[–]ForbaneThe Human Meme 1 point2 points  (0 children)

Yea if you divide that number by 60 you get the DOT DPS limit b/c DOT is capped at that integer per minute...

[–]tenroseUKAtziri 1 point2 points  (0 children)

I didn't know that! Thanks for sharing!

[–]InfiniteNexusDaresso 1 point2 points  (0 children)

At some point big numbers just become meaningless. Better gameplay features and mechanics that work with a lower HP pool is better than just adding more zeros to an oversized number.

[–]richardtrle 1 point2 points  (0 children)

I know this has been known for a while; not using unsigned int for damage is the shock here.

No other mainstream engine uses int for their damage calculations; Warframe and Final Fantasy are the ones that come to mind.

So if there is negative damage, it means that healing could actually use negative damage, which is beyond absurd as well

[–]Zarni22 3 points4 points  (0 children)

The only problem with damage reduction instead of more HP is that leech gets screwed over; it's so annoying.

[–]INeedToQuitRedditFFS 1 point2 points  (1 child)

I mean, yeah. The developers never expected damage numbers to get this high. In the case of total boss HP, even normal builds have had power creep to a point they likely didn't account for when doing initial architecture for damage calculations.

Changing to a larger value type seems like a super simple and low-impact change, but you have to realize just how often these values are being used. Every time you hit an enemy, multiple variables of the type have to be loaded and then calculated on. You hit a lot of enemies very fast in this game; it would absolutely have an impact on server performance.

Even that's assuming that doing so would be a simple variable type change and require no other rearchitecture, which is almost certainly not the case.

In short, they just have no reason to change it. Adding % less damage taken is functionally the same as increasing base health, except for cases like leech (which you are maxing out anyways). Nobody ever hits overflow of damage numbers themselves in any real use case, so that side of things is irrelevant.

[–]Somepotato -1 points0 points  (0 children)

CPUs already calculate on 64+ bits for arithmetic; for 32-bit integers the other 32 bits are just ignored. Also, lots of stuff is precalculated when your state (gear, buffs, etc.) changes

[–]bigtoaster64 1 point2 points  (0 children)

It was already known if you had any technical knowledge of how software works. And you have to remember that the game was created more than a decade ago, meaning that at that time an int32 to hold numerical values might have been the right choice. Using a wider data type would maybe not have made sense if it wasn't useful at that time or in the near future. It's like running mud tires on the road: more grip maybe, but it doesn't make sense for the current situation, it's inefficient. And then 10 years later, maybe int32 is too small now, but changing 10 years of codebase to replace int32 with maybe long is a freaking massive trip. So I wouldn't blame them for having gone the easy path and simply having a multiplier to artificially alter the values.

[–]arbyterOfScales 1 point2 points  (1 child)

One more interesting thing to notice is that we are still on an int32 max HP game, despite more than a decade of power creep. 

Compare that to D3 which required much less than that to get to the cap of an int64. (Which is higher than 10^18).

Keeping absolute power in check via nerfs seems to be much better than just buffing everything 

[–]eViLegion 1 point2 points  (4 children)

Why would this be a surprise? "int" is pretty much the most basic value type (it's usually the first one taught to noobs). And if a field is signed, even though its contents should logically only ever be unsigned, it's probably because the dev was a bit sloppy and just used the most basic one without giving it much thought.

When PoE development started, 32-bit machines were still very common, so it probably wasn't so much a "decision" to use 32 bits as simply maintaining compatibility with most existing machines. Also, many games still use 32 bits for most values unless they specifically need more bits, as certain optimisations can allow 64-bit machines to pipeline 32-bit instructions faster than equivalent 64-bit code, by doing 2 identical 32-bit operations at once within 64-bit registers.

"That also means maximum numerical monster HP is 2.147bil." - I'm afraid that doesn't necessarily follow. It's likely to be true, but not guaranteed. GGG could absolutely use 64 bit integers for hit points, and 32 bit integers for damage, and cast before subtracting. They probably don't, but they could.

[–]poeFUN 3 points4 points  (1 child)

The max monster HP is true, as delve after a certain depth reaches the max HP cap.
https://www.youtube.com/watch?v=nC_En939Ing (for reference)

[–]mjtwelve 0 points1 point  (0 children)

Having watched that video a while back, I was gonna say, not only is this known but it actually has real world impacts and affects the play of a small percentage of high end POE players. The DOT cap is certainly well known.

[–]Whydontname 0 points1 point  (0 children)

This has been known for a long time lol.

[–]Inverno969Necromancer 0 points1 point  (0 children)

In a recent interview one of the devs (can't remember who) mentioned they actually used multiple variables to store the Life values of certain bosses to get past the memory limits of Ints.

[–]Alsaher123 -1 points0 points  (4 children)

Does the use of signed int really make sense tho? Is the reason why values aren't unsigned due to the possibility of self-damage infliction? Someone who understands how the damage works better, help me out

[–]AtmosTekk 2 points3 points  (2 children)

Signed makes more sense just to avoid overflows happening in the damage calculation step. Plus you wouldn't have to make a buffer int just to do negative hp calculations and use up more cpu time if you did go unsigned.

Edit: correction

[–]Xeverousfilter extra syntax compiler: github.com/Xeverous/filter_spirit 2 points3 points  (1 child)

Technical correction: underflow is a different term. Underflow happens on floating points when the result of the operation is smaller than representable value (e.g. dividing 1 by 2 over and over will sooner or later reach 0).

Going below 0 on unsigned types is still named overflow.

[–]AtmosTekk 1 point2 points  (0 children)

Thanks for catching that. I updated my post

[–]stdTrancR -1 points0 points  (1 child)

everyone is an armchair engineer all of a sudden but the truth is POE1 will not be seeing any core changes ever again probably. POE2 is another story though.

[–]OhtaniStanMan 2 points3 points  (0 children)

More so the majority of hardcore redditor players are some sort of it/cs first year or junior and have never had a real world case of int32 limits being met and are like wowwwww I know this! Look how smart I am! 

[–]PeopleCallMeSimon -1 points0 points  (0 children)

Monster hp is not limited to 1 int32.

They could make a system where the hp uses two ints.

First, hp is removed from the first int, and when it's 0, hp is removed from the second int.

They can use this on any number of things and with various amounts of ints.
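
A rough sketch of that idea (names and layout are made up):

```cpp
#include <algorithm>
#include <cstdint>

struct SplitHealth {
    std::int32_t first  = INT32_MAX;   // drained first
    std::int32_t second = INT32_MAX;   // drained once the first pool is empty
};

void take_damage(SplitHealth& hp, std::int32_t dmg) {
    const std::int32_t from_first = std::min(dmg, hp.first);
    hp.first -= from_first;
    hp.second = std::max<std::int32_t>(0, hp.second - (dmg - from_first));
}
```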

It's pretty dumb though that it's possible to scale dmg so high that you get an int32 overflow.

Nerfs are needed.

[–]StuffinYrMuffinR -1 points0 points  (0 children)

I wish D4 noobs would stop making posts like they discovered some breaking news

[–]Palablues -1 points0 points  (0 children)

I mean...thanks for sharing, but that's been known for a while.

[–]ThisShowIsTrash -1 points0 points  (0 children)

Bro is probably still using Internet explorer somehow

[–]IceTop8150 -1 points0 points  (0 children)

All serious delvers have known this for years.

[–]DrFreshtacular -3 points-2 points  (0 children)

I get what you're going after, but integers don't have decimal places.

https://imgur.com/a/BsgkOhJ

This suggests it's something like floats under the hood, and failing some cast / rounding for UI purposes.

[–]J_KTrolling 0 points1 point  (0 children)

Yea, for the same reason there is a dot cap. Nothing new tbh

[–]ZobbyTheMouche 0 points1 point  (0 children)

No.

The fact that character stat displays (and only the displays) are stored in signed 4 bytes doesn't mean that other things in the game are stored in signed 4 bytes too.

[–]tufffffffHalf Skeleton 0 points1 point  (0 children)

It's not thanks to anything this guy did. This has been known for a long time.

[–]XeratasStatue 0 points1 point  (0 children)

Well, the PoE engine is 10 years old and written in C++. They used those variable types for obvious reasons (performance), and when they started PoE they probably didn't think that 2 bil HP might not be high enough one day.

Obviously I don't know the current tech stack, but I'd assume those are just relics from the past that still remain in place.

[–]VadimHInquisitor 0 points1 point  (0 children)

PoE is runescape confirmed

[–]forbiddenknowledg3 0 points1 point  (0 children)

Known this for a while. It's why people can go super deep in delve, at some point it stops scaling.

[–]Monkey_1505 0 points1 point  (0 children)

If someone wants to build 2.147 billion DPS, and the game lets them do that, someone somewhere WANTS to one shot all the content and arguably someone somewhere kind of wants to let them do it.

[–]FeinsX 0 points1 point  (0 children)

Modern processors are better optimized for "int32" computation; any programmer in strictly typed languages should know this.

[–]taittista 0 points1 point  (0 children)

Could you stack 2 dot caps on a monster, e.g. an ignite dot cap and a poison dot cap at the same time, to do 70m dot?

[–]regularPoEplayer 0 points1 point  (0 children)

This explains why all new mechanics (including ubers) apply Less Damage Taken

Ubers have less damage taken because DD (detonate dead) scales with corpse HP, which means that giving them 200% more monster life would effectively be 0% more life from the perspective of DD-like skills.

[–]CrustyToeLover 0 points1 point  (0 children)

Less damage taken isn't the only way to do it, it's just the easiest. They COULD change the entire thing, it just wouldn't be worth the effort at all.

[–]trolledwolf 0 points1 point  (0 children)

Wouldn't it be better, once you get to that high of dps, to start rounding and reduce the number of significant digits? I'm honestly asking, since my understanding is probably lacking, but that would allow for bigger numbers in scientific notation by just storing the amount of times the number was rounded in a smaller variable or inside the number itself.
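
That is roughly how some games sidestep the limit: keep a scaled-down value plus a record of how much it was scaled (a hypothetical sketch, not anything PoE actually does):

```cpp
#include <cstdint>

// Big numbers as (mantissa, exponent): the real value is roughly mantissa * 10^exponent.
struct BigNumber {
    std::int32_t mantissa;   // significant digits, kept inside int32 range
    std::int8_t  exponent;   // how many powers of ten were rounded away
};

// Trade precision for range (assumes a non-negative input).
BigNumber normalize(std::int64_t value) {
    std::int8_t exp = 0;
    while (value > INT32_MAX) { value /= 10; ++exp; }
    return { static_cast<std::int32_t>(value), exp };
}
```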

[–]rip_ap_yiLeague 0 points1 point  (0 children)

This would be an interesting interview question to a developer.

If this was a few years ago multiple devs would probably be in the comments already. I miss that era of PoE reddit

[–]BraveGazan 0 points1 point  (0 children)

PoB?

[–]Myzzreal 0 points1 point  (0 children)

That's why all these asian MMOs have bosses with multiple HP bars, each of them can only store a 32-bit value and the damage in these games is huge!

(this is a joke obv)

[–]DreamWalker01 0 points1 point  (0 children)

Hey, can anyone explain why he does poison damage?

[–]flyinGaijin 0 points1 point  (0 children)

Although this isn't new, it isn't really useful for the vast majority of the players so ....

Plus Empy's team's experiment already has its own thread ...

This would be an interesting interview question to a developer.

How? The answer could (likely?) be very short and absolutely uninteresting.

[–]Prosamis 0 points1 point  (0 children)

Good that this information is being spread despite it already being known

...By only a tiny fraction of the player base

[–]igmasHierophant 0 points1 point  (0 children)

Not a new discovery per se; it's been known for a long time that PoE is using int32, as explained above with the dot cap for example

[–]MonkeyIsBack 0 points1 point  (0 children)

Thanks to the dot cap and deep delve we have known for a long time that this game is int32

[–]Chrikomu 0 points1 point  (0 children)

The server side of POE is an extremely optimized C++ application, where I would be very surprised if GGG doesn't utilize some version of AVX (Advanced Vector Extensions). This means that rewriting the code to 64-bit integers could very likely cut performance in half for the cases where good cache locality has been achieved.

[–]AlbinauricGod 0 points1 point  (0 children)

We knew that years ago, when we could still play self-curse. The damage tooltip would go to 2.147b and then disappear

[–]LapinTade[🍰] 0 points1 point  (0 children)

Less Damage Taken is the only way GGG can change EHP of monsters beyond 2.147bil

No, there are several approaches to getting bigger numbers using only int32. Plenty of incremental games use tricks to keep grinding numbers up.

[–]Rigs79 0 points1 point  (0 children)

We’re not calculating spacecraft course corrections to another star here. No need for ridiculously accurate calculations.

[–]DumbFuckJuice92 0 points1 point  (0 children)

we discovered numerical values in the game are based around int32. -2.147bil to +2.147bil 

Who is this "we" you are talking about? This is known for years.

[–]OddImpression2022 0 points1 point  (0 children)

This is true for a lot of games. And has been known for a long time. It's not the first time he's hit the INT cap on a video.