Normalizing a two's complement number


I am working on a Tensilica processor and I don't understand the normalization process.

NSA - Normalized Shift Amount

Usage:
    NSA at, as

NSA calculates the left shift amount that will normalize the two's complement contents of address register as and writes this amount (in the range 0 to 31) to address register at.

If as contains 0 or -1, NSA returns 31. Using SSL and SLL to shift as left by the NSA result yields the smallest value for which bits 31 and 30 differ unless as contains 0.

[figure: the ISA reference's pseudocode for NSA]

So basically NSA calculates a shift amount (0...31) and writes it to the at register.

The question is: how is this amount calculated, and what does normalizing the two's complement value in as mean?

This is not a floating-point instruction. as can hold a 32-bit signed value (31 bits + sign).

Thanks for any clarifications.

EDIT: Should be like this (thanks to Peter Cordes)

In each case NSA counts the bits below the sign bit (bit 31) that have the same value as the sign bit, i.e. the left shift that brings the first differing bit up to bit 30:

value       decimal  binary                                    NSA
0x00000004     4     0000 0000 0000 0000 0000 0000 0000 0100   28
0x00000003     3     0000 0000 0000 0000 0000 0000 0000 0011   29
0x00000002     2     0000 0000 0000 0000 0000 0000 0000 0010   29
0x00000001     1     0000 0000 0000 0000 0000 0000 0000 0001   30
0x00000000     0     0000 0000 0000 0000 0000 0000 0000 0000   31 (special case)
0xFFFFFFFF    -1     1111 1111 1111 1111 1111 1111 1111 1111   31 (special case)
0xFFFFFFFE    -2     1111 1111 1111 1111 1111 1111 1111 1110   30
0xFFFFFFFD    -3     1111 1111 1111 1111 1111 1111 1111 1101   29
0xFFFFFFFC    -4     1111 1111 1111 1111 1111 1111 1111 1100   29
0xFFFFFFFB    -5     1111 1111 1111 1111 1111 1111 1111 1011   28
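
A minimal C model of the count (my own sketch of the semantics as I understand them, not the ISA's pseudocode) reproduces the table:

    #include <stdio.h>
    #include <stdint.h>

    /* Model of NSA: count the bits below the sign bit (bit 31) that
     * match the sign bit. For 0 and -1 every lower bit matches, so
     * the result is 31, as the documentation states. On hardware the
     * shift itself would then be done with SSL + SLL. */
    static int nsa(int32_t x)
    {
        uint32_t u = (uint32_t)x;
        unsigned sign = (u >> 31) & 1u;     /* the sign bit, bit 31 */
        int n = 0;
        for (int bit = 30; bit >= 0; bit--) {
            if (((u >> bit) & 1u) != sign)  /* first bit that differs */
                break;
            n++;
        }
        return n;                           /* 0..31; 31 for 0 and -1 */
    }

    int main(void)
    {
        int32_t tests[] = { 4, 3, 2, 1, 0, -1, -2, -3, -4, -5 };
        for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++)
            printf("%11d (0x%08X) -> NSA = %d\n",
                   tests[i], (unsigned)tests[i], nsa(tests[i]));
        return 0;
    }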
Tags: assembly, binary, normalization, twos-complement, fixed-point
asked on Stack Overflow Feb 19, 2019 by orfruit • edited Nov 3, 2019 by Jav

2 Answers


While this isn't, strictly speaking, a floating-point instruction, its primary use is likely to be in implementing software floating-point operations.

Normalized floats always have the most significant bit of their mantissa set; in fact, that bit is often not stored explicitly. Given the raw result of some math operation, NSA gives you the amount to shift the mantissa by to get it back into normal form. (You'd also adjust the exponent by the same amount, so that the float still represents the same value.)
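
For illustration, a schematic sketch of that renormalization step, reusing the nsa() C model from the question's edit above (the mantissa/exponent handling here is illustrative, not Tensilica's actual soft-float code):

    /* Schematic only: renormalize an intermediate soft-float result
     * whose mantissa may have lost its leading set bit. Assumes the
     * mantissa is positive (bit 31 clear); nsa() is the C model from
     * the edit above, standing in for the NSA instruction. */
    static void renormalize(int32_t *mant, int *exp)
    {
        int shift = nsa(*mant);  /* brings the top set bit up to bit 30 */
        *mant <<= shift;         /* NSA + SSL + SLL on the real hardware */
        *exp  -= shift;          /* compensate: the value is unchanged */
    }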

answered on Stack Overflow Feb 19, 2019 by jasonharper

Sounds like a count-leading-zeros instruction, except that it counts how many leading bits all have the same value and produces that count minus 1.

Or to put it another way, it counts how many bits below the sign bit have the same value as the sign bit (and thus are not part of the significant digits of the input). The pseudocode expresses this bit-scan as a binary search, but the internal implementation could be anything.
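
GCC and Clang expose the same count as __builtin_clrsb ("count leading redundant sign bits"), which likewise returns 31 for 0 and -1, so the values from the question can be checked on a desktop:

    #include <assert.h>

    int main(void)
    {
        /* __builtin_clrsb(x): number of bits following the sign bit
         * that are identical to it -- the same value NSA writes. */
        assert(__builtin_clrsb(4)  == 28);
        assert(__builtin_clrsb(1)  == 30);
        assert(__builtin_clrsb(0)  == 31);
        assert(__builtin_clrsb(-1) == 31);
        assert(__builtin_clrsb(-5) == 28);
        return 0;
    }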


Left-shifting by that amount will "normalize" the value to use the full range -2^31 .. 2^31-1 without overflowing. So the result of x << NSA(x) will be in one of the two ranges -2^31 .. -(2^30+1) or 2^30 .. 2^31-1. (As the docs say, left-shifting by that amount yields a value where the sign bit differs from the bit below it, unless the input was 0; an input of -1 shifts to 0x80000000, where they do differ.)
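
A sketch of that property in C (the shift is done on uint32_t because left-shifting a negative signed value is undefined behaviour in C):

    #include <stdint.h>

    /* Normalize x: after the shift, bit 31 and bit 30 differ for
     * every input except 0, so the result is in -2^31 .. -(2^30+1)
     * or 2^30 .. 2^31-1. */
    int32_t normalize32(int32_t x)
    {
        return (int32_t)((uint32_t)x << __builtin_clrsb(x));
    }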

Presumably the normal use case is to use the same normalization shift value for multiple input values?

answered on Stack Overflow Feb 19, 2019 by Peter Cordes

User contributions licensed under CC BY-SA 3.0