The code uses the numeric constant Number.MIN_VALUE, which corresponds to the numeric constant DBL_MIN in C++. It also uses Number.MAX_VALUE, which corresponds to DBL_MAX in C++.
(Or so I think. Somebody please correct me if I am making the wrong assumption.)
I would now like to edit the code so that it uses the limits for type float instead of type double.
In C++, these limits are held in the machine constants FLT_MIN and FLT_MAX.
I suppose I could declare two constant variables and explicitly assign these values to them, but I would prefer not to use "magic numbers".
For example, is there a way to compute FLT_MIN in terms of Number.MIN_VALUE?
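One idea I have sketched below is to derive the limits from the IEEE-754 single-precision format parameters (a 2^-126 to 2^127 normalized exponent range and a 23-bit fraction) rather than from hard-coded decimal literals. This assumes ES2015's Math.fround, which rounds a double to the nearest single-precision value, and uses it only as a sanity check:

```javascript
// Single-precision limits built from the IEEE-754 format parameters,
// not from "magic" decimal literals. (Sketch; assumes Math.fround.)
const FLT_MIN = Math.pow(2, -126);                          // smallest normalized single
const FLT_MAX = (2 - Math.pow(2, -23)) * Math.pow(2, 127);  // largest finite single

// Both values should survive a round-trip through single precision,
// and doubling FLT_MAX should overflow single precision.
console.log(Math.fround(FLT_MIN) === FLT_MIN); // true
console.log(Math.fround(FLT_MAX) === FLT_MAX); // true
console.log(Math.fround(FLT_MAX * 2));         // Infinity
```

(Note that Number.MIN_VALUE itself is 2^-1074, the smallest positive *denormal* double, so I suspect there is no simple scaling from it to FLT_MIN.)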
Could FLT_MIN be computed by some brute-force method each time it is required?
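If so, I imagine something along these lines (a sketch; it assumes ES2015's Math.fround and takes FLT_MIN to mean the smallest normalized single):

```javascript
// Brute-force sketch of FLT_MIN, assuming Math.fround (ES2015),
// which rounds a double to the nearest single-precision value.
// Idea: keep halving while the candidate still carries a full
// 24-bit significand in single precision; the last such value is
// the smallest normalized single, 2^-126.
function bruteForceFltMin() {
  const eps = Math.pow(2, -23); // single-precision machine epsilon
  let x = 1;
  // (x / 2) * (1 + eps) adds exactly one single-precision ulp while
  // x / 2 is still normalized; once the candidate would be subnormal,
  // the addition is rounded away and the loop stops.
  while (Math.fround((x / 2) * (1 + eps)) !== x / 2) {
    x /= 2;
  }
  return x; // 2^-126
}

console.log(bruteForceFltMin() === Math.pow(2, -126)); // true
```

(The smallest positive single including denormals, 2^-149, would fall out of an even simpler loop: halve while `Math.fround(x / 2) > 0`.)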
A related question:
Do these values vary from machine to machine?
Since the code runs client-side on a user's computer, could the machine it runs on influence the value such code computes?
Say a small code block is written to compute FLT_MIN by brute force. Would it produce a different value on, say, a supercomputer with arbitrary-precision arithmetic than on a simple 32-bit desktop machine?
(If so, that would be another reason to avoid "magic numbers" and try to better customize the code for the machine on which it runs.)