
Machine (Numeric) Constants in JavaScript
I am updating some code in JavaScript which presently uses the numeric constant Number.MIN_VALUE, which I believe represents the smallest number of type double that can be used in JavaScript.
It corresponds to the numeric constant DBL_MIN in C++.
The code also uses the numeric constant Number.MAX_VALUE, which corresponds to the numeric constant DBL_MAX in C++.
(Or so I think. Somebody please correct me if I am making the wrong assumption.)
I would now like to edit the code so that it uses the limits for type float instead of type double.
In C++, values for these machine constants are held in FLT_MIN and FLT_MAX:
Code:
FLT_MIN: 1.17549435082229e-038
FLT_MAX: 3.40282346638529e+038
However, I do not think JavaScript has any built-in corresponding values.
I suppose I could declare two constant variables and explicitly assign these values to them, but I would prefer not to use "magic numbers".
Anybody here have suggestions for computing values in JavaScript that correspond to FLT_MIN and FLT_MAX?
Are there any built-in constants in JavaScript from which these values can be derived?
For example, is there a way to compute FLT_MIN in terms of Number.MIN_VALUE?
Could FLT_MIN be computed by some brute-force method each time it is required?
Anything ...?
A related question:
Do these values vary from machine to machine?
Since the code runs in the client on a user's computer, would that influence the value computed by code that determines these values?
Say a small code block is written to compute FLT_MIN by brute force. Would its value be different on, say, a supercomputer with arbitrary-precision arithmetic than it would be on a simple 32-bit desktop computer?
(If so, that would be another reason to avoid "magic numbers" and try to better customize the code for the machine on which it runs.)

Your 'FLT_MIN' and 'FLT_MAX' values don't match the JS values.
The JS values appear much bigger (or much smaller).
Code:
<script type="text/javascript">
var max = Number.MAX_VALUE;
var min = Number.MIN_VALUE;
alert('Max: '+max+'\nMin: '+min);
/* results:
max = 1.7976931348623157e+308
min = 5e-324
*/
</script>

Originally Posted by JMRKER
Your 'FLT_MIN' and 'FLT_MAX' values don't match the JS values.
The JS values appear much bigger (or much smaller).
Code:
<script type="text/javascript">
var max = Number.MAX_VALUE;
var min = Number.MIN_VALUE;
alert('Max: '+max+'\nMin: '+min);
/* results:
max = 1.7976931348623157e+308
min = 5e-324
*/
</script>
That's correct. Those numbers correspond to the values C++ returns for DBL_MAX and DBL_MIN, the quantities for variables of type double.
What I'd like now are corresponding values in JavaScript for variables of type float (i.e. single precision).
Unfortunately, it looks like such built-in quantities don't exist; I will have to declare and define some constant variables.
I have written a small routine to compute machine epsilon (DBL_EPSILON) by brute force, to confirm the value returned by the built-in C++ constant. The value returned by the JavaScript routine, on various computers, matches the value of the built-in C++ constant. I wonder if a similar small program could be written to confirm FLT_MIN and FLT_MAX?
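One way such a confirmation might be sketched, assuming the engine provides typed arrays (Float32Array); this is an illustration, not part of the original routines:

```javascript
// Sketch (assumes Float32Array support): a Float32Array cell stores its
// value as a 32-bit float, so a write/read round trip rounds any number
// to single precision.
var f32 = new Float32Array(1);

// Textbook definitions of the single-precision limits:
var FLT_MAX = (2 - Math.pow(2, -23)) * Math.pow(2, 127); // 3.4028234663852886e+38
var FLT_MIN = Math.pow(2, -126);                         // 1.1754943508222875e-38

// Both values survive the 32-bit round trip unchanged:
f32[0] = FLT_MAX;
var maxOk = (f32[0] === FLT_MAX);
f32[0] = FLT_MIN;
var minOk = (f32[0] === FLT_MIN);

// ...while nudging FLT_MAX upward overflows the 32-bit format:
f32[0] = FLT_MAX * (1 + Math.pow(2, -24));
var overflowed = (f32[0] === Infinity);
```

The round trip through the typed array is what makes the check meaningful: ordinary JavaScript arithmetic is always double precision, so only the store forces single-precision rounding.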

I don't expect to be all that helpful but there are a few things I wanted to point out.
As it turns out, JavaScript essentially has only one numeric data type. (from MDN)
According to the ECMAScript standard, there is only one number type: the "double-precision 64-bit binary format IEEE 754 value".
So as far as wanting to match something like C++ and its various numeric data type limits, you won't find what you are looking for. The code and values posted above by JMRKER are the only min and max values for numeric javascript values (given that there is only one numeric data type).
And to answer the related question: no, there are not different min or max values depending on the computer the code is run on. The min and max values are dictated by the language specification, not by the client's computer, just as in C++ or any other programming/scripting language. So those min and max values will remain the same regardless of a computer's architecture, processing power, or any other local factors.
"Given billions of tries, could a spilled bottle of ink ever fall into the words of Shakespeare?"

Originally Posted by Sup3rkirby
As it turns out, javascript essentially has only one numeric data type. (from MDN)
While by the spec that is 'correct', you also can't rely upon it... if you run JMRKER's example on different OSes, processors and browsers, you'll find that's actually the low end of the spectrum. Since IE's "JScript" is NOT really even close to ECMAScript compliant, it often has 32-bit limits instead of 64-bit in older versions, and exceeds them in some newer ones.
I can't remember which browser it was, but there's one of them that will actually switch to arbitrary precision when a number gets larger than 64 bits... gah, for the life of me I can't remember which one though. You go arbitrary precision "BCD" style, and concepts like min and max are more a matter of system memory than processor limitations. (admittedly with one heck of a speed penalty)

Okay, this opens up a can of worms for me.
Some of my JavaScript routines use a value of machine epsilon (DBL_EPSILON) that is computed by brute force:
Code:
var temp1, temp2, mchEps;
temp1 = 1.0;
do {
    mchEps = temp1;
    temp1 /= 2;
    temp2 = 1.0 + temp1;
} while (temp2 > 1.0);
// Upon exiting the do-loop, mchEps should hold the value of machine epsilon
This routine uses the fundamental definition of DBL_EPSILON to compute it:
i.e.  it finds the smallest number that can be added to 1.0 such that (1 + DBL_EPSILON) is distinguishable from 1.
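For what it's worth, the same brute-force definition could in principle be applied at single precision by rounding each sum through a 32-bit store; a sketch, assuming the engine supports Float32Array:

```javascript
// Sketch (assumes Float32Array support): the same epsilon loop, but each
// sum is rounded to 32 bits by storing it in a Float32Array cell.
var f = new Float32Array(1);
var temp1 = 1.0, mchEpsF;
do {
    mchEpsF = temp1;
    temp1 /= 2;
    f[0] = 1.0 + temp1;   // round the sum to single precision
} while (f[0] > 1.0);
// mchEpsF should now equal 2^-23, the value FLT_EPSILON has in C
```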
I had done it this way because I do not know ahead of time what kind of browser/computer combination on which the program will be run.
However, if it is a function of the language itself, I would be better off declaring DBL_EPSILON as a global constant in my programs, and not bother computing it within the program.
So, does it matter?
Should I edit my programs to declare DBL_EPSILON as a (constant) variable with an assigned value (e.g. 2.2204460492503131e-16)?
Or am I okay computing it explicitly, as I am presently doing?
Your advice is much appreciated.

You don't really indicate how the value is to be used,
but in almost every case I can think of the predetermined value
will always be faster than running a loop each time you need it.
If you want to compute it one time, OK, but store that into a variable
so that the loop is not reexecuted every time the value is needed.
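A minimal sketch of that compute-once advice (the immediately-invoked wrapper is just one way to arrange it):

```javascript
// Compute machine epsilon once at load time and store the result;
// later code reads the variable instead of re-running the loop.
var MCH_EPS = (function () {
    var eps = 1.0;
    while (1.0 + eps / 2 > 1.0) {
        eps /= 2;
    }
    return eps;
})();
// MCH_EPS === Math.pow(2, -52), i.e. 2.220446049250313e-16
```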

JMRKER has it right that you're better off using the constant in most cases. It will almost always be calculated to the limit of the language implementation's data types. Brute-force recreating something the language provides for you is rarely if ever useful, unless you need a higher precision than the constant, something unlikely to be an issue in an untyped language.
I mean, if you were in a Pascal or C compiler where you needed pi accurate to an 80-bit extended, THEN you brute-force it (or precalc and assign to your own constant) since the built-in one usually stops at 32-bit (single), 64-bit (double), or 48-bit (real) floating-point precision (depending on compiler and libraries used)... but in JavaScript? Not so much.
Though beware that with the differences in JavaScript engine implementations, the precision of floating point math can result in browsers giving slightly different results depending on how 'deep' your math goes.

Originally Posted by JMRKER
You don't really indicate how the value is to be used,
but in almost every case I can think of the predetermined value
will always be faster than running a loop each time you need it.
If you want to compute it one time, OK, but store that into a variable
so that the loop is not reexecuted every time the value is needed.
I use DBL_EPSILON in several numerical routines; some background is provided in a blog post:
Free Math Tools for Science and Engineering Students
For a root-finding routine, a root can theoretically be found to within DBL_EPSILON.
For an optimization routine, a maximum can theoretically be found to within √DBL_EPSILON.
My programs try to get results as close to these theoretical limits as possible; hence the need for DBL_EPSILON.
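As a hypothetical illustration of how such a tolerance enters a root-finder (not the actual code from the blog post), a bisection routine might stop once the bracket shrinks to a few units in the last place of its endpoints:

```javascript
var DBL_EPSILON = 2.220446049250313e-16;

// Hypothetical bisection root-finder: f must change sign on [a, b].
// The loop halves the bracket until it can no longer be resolved
// in double precision.
function bisect(f, a, b) {
    var fa = f(a);
    for (;;) {
        var m = a + (b - a) / 2;
        // Stop when the interval width is at the precision limit.
        if (b - a <= DBL_EPSILON * Math.max(Math.abs(a), Math.abs(b)) ||
            m === a || m === b) {
            return m;
        }
        var fm = f(m);
        if (fm === 0) return m;
        if ((fa < 0) !== (fm < 0)) { b = m; } else { a = m; fa = fm; }
    }
}

// e.g. bisect(function (x) { return x * x - 2; }, 1, 2) approximates Math.SQRT2
```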
At the moment, I am updating the polynomial root-finder.
The underlying algorithm is the same for the 100th power root solver as it is for the quartic, cubic, and quadratic solvers; however, since most people who have gone through Grade 10 know about the quadratic equation, the numerical solver of a quadratic equation is the most popular.
I do, indeed, calculate DBL_EPSILON once per program, and save it in a variable for use throughout the rest of the program, but now I am wondering if even that is too much.
Perhaps I should just explicitly assign it the value I get from my C++ program (2.2204460492503131e-16).
Suggestions?
Explicitly assign DBL_EPSILON the value 2.2204460492503131e-16?
Or leave the brute-force code block in place?

Using this as a test, it looks like you can do it either way, but precalc assignment "should" be faster.
Code:
<script type="text/javascript">
var precalc = 2.2204460492503131e-16;
var temp1, temp2, mchEps;
temp1 = 1.0;
var cnt = 0;
do {
    mchEps = temp1;
    temp1 /= 2;
    temp2 = 1.0 + temp1;
    cnt++;
} while (temp2 > 1.0);
// Upon exiting the do-loop, mchEps should hold the value of machine epsilon
if (precalc === mchEps) { alert('Use either ' + precalc + ' or ' + mchEps); }
else { alert('No match of values'); }
alert('number of loops: ' + cnt);
</script>
The do loop takes 53 passes.
Last edited by JMRKER; 05-31-2014 at 07:17 PM.