set foo to 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
Click the decrement button exactly 2 times. Guess what happens. Guess what it does.
I think floating point values are an engineering marvel, like the transistors that make computing possible, and TCP that makes the internet useful. But floating point can’t do miracles: you just can’t exactly represent the entire infinite real number line with 64 bits; you can only represent a finite set of real values. Your example is about the gaps between representable values.
The values that FP can represent are cleverly chosen: packed very close together near 0, and more spread apart the farther you get from 0. The design principle is that if you are doing FP arithmetic to model things in the real world, the accuracy of the computations will be roughly the same no matter what units you choose: light years vs. miles vs. millimeters, even down to the diameter of a proton. It's true: you can use any of these as your unit of length for computations about the length and speed of human-scale things, and the accuracy of the result will be about the same with 64-bit FP, thanks to its design.
But this ability to choose arbitrary units would be broken if you, say, limited your numeric representation to 6 digits before a decimal point and 6 digits after a decimal point. Tragically, to represent small values (even for intermediate values not being displayed), Hopscotch does seem to limit itself to 6 digits after the decimal point:
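Here is a small sketch of why a fixed decimal cutoff breaks the freedom to choose units. It's Python rather than Hopscotch, and `truncate6` is a hypothetical stand-in for the observed behavior, not Hopscotch's actual code:

```python
# Hypothetical model of a system that keeps only 6 digits after the
# decimal point (an assumption about the behavior, not real Hopscotch code).
def truncate6(x: float) -> float:
    return round(x, 6)

# The same human-scale length, 1.5 mm, expressed in different units:
mm = 1.5        # millimeters: survives the cutoff fine
km = 1.4e-6     # kilometers: mangled, ~29% of the value is lost
km2 = 1.5e-7    # a smaller length in kilometers: wiped out entirely

print(truncate6(mm))   # 1.5
print(truncate6(km))   # 1e-06
print(truncate6(km2))  # 0.0
```

So the result of a computation now depends heavily on which unit you picked, which is exactly what FP's design is supposed to prevent.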
Floating point is subtle, and its consequences can be surprising, but it is a good thing, not annoying, and it is the best thing we have, and meddling with it will cause problems (@Nazari can we talk?).
Your 9999999999999999999 is not itself representable in 64-bit FP; it rounds to the nearest representable value, 10000000000000000000. The next representable value after that is 10000000000000002048, and after that is 10000000000000004096. Those are the results of changing a single bit in the FP representation. Going lower, the first representable value below 10000000000000000000 is 9999999999999997952, then 9999999999999995904. These are all separated by 2048 = 2^11. The gap between adjacent representable numbers is always some power of 2 (possibly a negative power). The technical name for the gap is the "unit in last place" or "ulp".
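You can check these neighbors yourself. A sketch in Python, whose `float` is the same IEEE 754 64-bit format, using `math.ulp` and `math.nextafter` from the standard library:

```python
import math

x = float(9999999999999999999)  # rounds to the nearest double, 1e19
print(f"{x:.0f}")                             # 10000000000000000000
print(math.ulp(x))                            # 2048.0, i.e. 2**11
print(f"{math.nextafter(x, math.inf):.0f}")   # 10000000000000002048
print(f"{math.nextafter(x, -math.inf):.0f}")  # 9999999999999997952

# With ulp = 2048, subtracting 1 does nothing: the exact result
# of x - 1 is not representable and rounds right back to x.
print(x - 1 == x)                             # True
```

That last line is the whole decrement-button mystery in one expression.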
The design of floating point is that the ulp is roughly proportional to the value itself: that is how you get many representable values near 0, and they become more spread out with bigger values, so the accuracy is the same over an incredible range of scales (and choice of units). The relative accuracy of a FP representation is the ulp/value, and for 64-bit FP that is 1 part in 2^52=4503599627370496 or about 1 part in 5 quadrillion, which is impressive accuracy, good enough that you can squint and pretend you actually have the entire real number line at your disposal.
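A quick Python sketch of that scale-invariance: the relative gap ulp(x)/x stays pinned between 2^-53 and 2^-52 across wildly different magnitudes:

```python
import math

# ulp(x)/x is the relative spacing of representable values near x.
# For any normal positive double it lies in the interval (2**-53, 2**-52].
for x in (1e-12, 1e-6, 1.0, 1e6, 1e19):
    rel = math.ulp(x) / x
    print(f"x={x:>8.0e}  ulp={math.ulp(x):<12.6g}  relative={rel:.3e}")
    assert 2**-53 < rel <= 2**-52
```

The absolute gap varies by 31 orders of magnitude over that loop; the relative gap barely moves.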
But still, large numbers have large ulps. I'm sure whoever implemented the increment and decrement buttons in HS was not anticipating values with ulp > 1, and you've gone with ulp = 2048. Those buttons are probably trying to do something clever with mixed string-based and numeric representations, which breaks in the case where you found NaN (a special FP value; actually a whole range of FP bit patterns, and there are two different flavors of NaN, ok I'll stop). Maybe you found a bug, but to me it is more like a quirk. The tragedy of breaking FP precision by limiting numbers to 6 decimal places is the bigger problem.
btw the last value with ulp=1 is 2^53=9007199254740992. The next value after that is 2 bigger, but the previous value is 1 smaller.
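That boundary is easy to poke at in Python too:

```python
import math

x = 2.0**53  # 9007199254740992, the last double whose downward gap is 1
print(math.nextafter(x, math.inf))  # 9007199254740994.0 (2 bigger)
print(math.nextafter(x, 0.0))       # 9007199254740991.0 (1 smaller)

# 2**53 + 1 has no double of its own; it rounds back down to 2**53.
print(float(2**53 + 1) == x)        # True
```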
Sorry for the lecture. I love floating point! And I want other people to understand it better.
Big numbers are often a headache to deal with in code. I code in other languages outside Hopscotch too, but I don't have much experience with them, so I don't know much about exactly how numbers are represented in computers. But as I was typing this, someone with a lot more knowledge posted a very detailed explanation above.
Yes. Also, in decimal we can't represent 1/3 exactly either, except as the infinite expansion 0.3333333333…. Humans use base 10, computers use base 2; neither one is more correct. Both have representation gaps once you limit them to finitely many digits.
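A quick Python illustration of that symmetry: base 2 can't finitely represent 1/10, base 10 can't finitely represent 1/3, and an exact rational type sidesteps both:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

# Base 2 can't represent 0.1 exactly, hence the familiar surprise:
print(0.1 + 0.2 == 0.3)         # False
print(0.1 + 0.2)                # 0.30000000000000004

# Base 10 can't represent 1/3 exactly either; Decimal just cuts off
# the infinite 0.333... at its configured precision:
getcontext().prec = 10
print(Decimal(1) / Decimal(3))  # 0.3333333333

# A fraction stores the exact ratio, so no gap at all:
print(Fraction(1, 3) * 3 == 1)  # True
```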
@JonnyGamer is this behavior with the increment/decrement buttons something that got in the way of something you wanted to do in HS?
Sometimes we write it as 0.3 with a bar above the 3 (a repeating decimal). But that makes the arithmetic harder, so math-wise it's way better to represent (1/3) just as it is.
I was in my prime number app. I made a large number, forgot it would be too large, and when I clicked the button something unexpected happened. It's not a big deal, just funny. (I forgot that their mod function works on floats as well.)
But actually, yes: arbitrary-precision arithmetic would be so helpful to have and would let me do way more things. At least being able to choose Int vs. LongInt vs. Float would be helpful.
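For a taste of what arbitrary precision buys you, Python's built-in `int` is exactly that. Here's a sketch of the original decrement experiment with and without the FP gaps:

```python
# Python ints have arbitrary precision, so decrementing a 19-digit
# number actually works, digit for digit:
x = 9999999999999999999
print(x - 1)         # 9999999999999999998
print(x - 1 == x)    # False

# The same number pushed through a 64-bit float loses that ability:
y = float(x)         # rounds to 1e19, where the ulp is 2048
print(y - 1 == y)    # True: the decrement vanishes into the gap
```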