casevh wrote:Most modern computer architectures use 2's complement binary format which doesn't have a sign bit.
that's not what I read when I designed the signed-int r/w functions in dev4x of my program, even though I didn't consider it... >_>
oh where is that reference... <.<
(will edit when I find it)
yea, the reason I'm so uptight about these standards is that I'm re-evaluating everything.
dev5 introduces a pointer system which needs all this extra care...
sure I could just make it throw an error, but my rep aims to take things to the next level.
in dev4x, since the input size was known, it was easy to just add a half-size value to the number and break down the binary.
but in this system, the way we need to write the value would require supplying an additional byte_size before we could break the value down properly.
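For reference, here's a minimal sketch of that known-size approach in Python (the function name, big-endian layout, and "add half, mask, subtract half" phrasing are my own reading of the trick, not dev4x's actual API):

```python
def read_signed(data: bytes, byte_size: int) -> int:
    """Decode a two's-complement int of a known byte size.

    Sketch of the "add a half-size value and break down the binary"
    idea: shifting by half the range, masking to the bit-width, then
    shifting back recovers the signed value from the raw bits.
    """
    bits = byte_size * 8
    half = 1 << (bits - 1)            # half-size value, e.g. 128 for s8
    raw = int.from_bytes(data, "big") # unsigned view of the bytes
    return ((raw + half) & ((1 << bits) - 1)) - half
```

With a known byte_size this is all you need; the pointer system's problem is that byte_size isn't known up front.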
so I used the next best approach, which was taking the sign bit and the magnitude (I'd been calling it the mantissa), fitting the magnitude into the new bit-width, and re-applying the sign.
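A sketch of that sign-bit/magnitude refit, with hypothetical names (one wrinkle worth noting: two's complement is asymmetric, so the negative side gets one extra value, e.g. -128..127 for s8):

```python
def refit_signed(value: int, new_bits: int) -> int:
    """Fit a signed int into a new bit-width via sign + magnitude.

    Sketch only: split off the sign, clip the magnitude so it fits
    the target width, then re-apply the sign.
    """
    sign = value < 0
    mag = abs(value)
    # two's complement asymmetry: negatives reach one further than positives
    max_mag = (1 << (new_bits - 1)) - (0 if sign else 1)
    mag = min(mag, max_mag)
    return -mag if sign else mag
```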
I'm gonna need to do this for floating point values as well >_>
I think I'll just take the simple approach and use the breakdown method for writing a float of the required byte-size.
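If the program is Python (an assumption on my part), the simple route for floats is letting `struct` do the IEEE 754 breakdown; this sketch only covers the two sizes `struct` natively supports:

```python
import struct

def write_float(value: float, byte_size: int) -> bytes:
    """Pack a float at the required byte size (sketch).

    struct handles the sign/exponent/mantissa breakdown; only
    4-byte singles and 8-byte doubles are supported natively.
    """
    fmt = {4: ">f", 8: ">d"}[byte_size]  # big-endian single/double
    return struct.pack(fmt, value)
```

Odd sizes (like 2-byte halfs before Python 3.6's `"e"` format) would need the manual breakdown method instead.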
anyways... that's beside the point... we're talking about signed ints... not floats
there's another idea I had about ranging the value (clamping/scaling it) to shrink the binary into the new format...
(Nintendo does this with int vertices: it applies a power-of-two exponent to a two's-complement int, giving a pseudo-float)
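That pseudo-float is essentially fixed-point: a two's-complement int scaled down by a power-of-two exponent. A hedged sketch (the 16-bit width and the names here are illustrative, not Nintendo's actual vertex spec):

```python
def fixed_to_float(raw: int, exp: int, bits: int = 16) -> float:
    """Interpret a raw two's-complement int as a fixed-point 'pseudo-float'.

    Sketch: decode the signed value, then divide by 2**exp so the
    exponent chooses where the implied binary point sits.
    """
    half = 1 << (bits - 1)
    signed = ((raw + half) & ((1 << bits) - 1)) - half
    return signed / (1 << exp)
```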
maybe I'll just use an external library to manage that... heh
my original idea for dealing with overflow was to clamp the value to the type's range, e.g. clamping -250 between -128 and 127 for s8 ints (result would be -128).
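That clamping idea is just a min/max sandwich; a tiny sketch with hypothetical names:

```python
def clamp_s8(value: int) -> int:
    """Clamp an int into the s8 range -128..127 (sketch)."""
    return max(-128, min(127, value))
```

So an out-of-range -250 saturates to -128 instead of wrapping or raising.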