Splitting uint32 to two int16 and reconstructing them again

  • tomagu


    I am trying to save a uint32 value to a modbus implementation where the registers are declared as signed short (which should be the same as int16_t?).

    Code that should represent what I am trying to do:
    Code:
    uint32_t x = 0x8000;
    int16_t a;
    int16_t b;
    
    //Split
    a = x >> 16;
    b = x & 0x0000FFFF;
    
    //Reconstruction
    x = (uint32_t)a << 16 | (uint32_t)b;
    Problem is that when the lower 16 bits of the number to be split (x) are 0x8000 or greater, the highest 16 bits are 0xFFFF after reconstruction.

    I would appreciate it if anybody could help me understand what I have missed.
  • weaknessforcats
    Recognized Expert Expert
    • Mar 2007
    • 9214

    #2
    You are assuming an implementation.

    The only way to do this is by using a typecast.

    There is no way to stuff a 32 bit value into a 16 bit field without the possibility of losing data. The typecast tells the compiler you don't care.


    • tomagu

      #3
      You are right, the assumption that a signed short equals an int16_t is not valid for all systems; it just happens to be so in mine (a bad implementation of a modbus interface, admittedly, and one of my first real C projects).

      Do I understand correctly that you claim there is no way of putting a 32-bit value into two 16-bit values and then reconstructing it without risk of losing data?

      Is it not a question of applying the correct limitations and avoiding uncertainties in the compiler's behavior?

      After more thought and reading selected chapters of the C99 spec, I think this might work. It might be that b was first promoted to a signed 32-bit value (and thus sign-extended) and thereafter interpreted as unsigned.

      Code:
      x = (uint32_t)a << 16 | ((uint32_t)b & 0x0000FFFF);


      • donbock
        Recognized Expert Top Contributor
        • Mar 2008
        • 2427

        #4
        tomagu is correct, your problem is caused by sign-extension.

        After the split step (a = x >> 16; b = x & 0x0000FFFF;) you have what you expect:
        a is 0, and b is 0x8000.

        However, if perchance your implementation uses two's-complement encoding for signed integers, then b is considered to be a negative value.

        Let's split the reconstruction line into several lines so we can look at each term:
        Code:
        uint32_t xa, xb;
        ...
        xa = (uint32_t)a << 16;
        xb = (uint32_t)b;
        x = xa | xb;
        Here xa is 0, but xb is 0xFFFF8000 because the sign bit is extended to the left.

        By the way, are you sure the expression for xa is correct? It is if the cast has a higher precedence than the shift-left. Does it? You don't need to remember the precedence table if you use parentheses.

        You can solve the sign-extension issue if your computations are always performed on unsigned values.
        Code:
        uint32_t x = 0x00008000uL;
        uint16_t ua, ub;
        int16_t a, b;
        
        //Split
        ua = (uint16_t) (x >> 16);
        ub = (uint16_t) (x & 0x0000FFFFuL);
        a = (int16_t)ua;
        b = (int16_t)ub;
          
        //Reconstruction 
        ua = (uint16_t)a;
        ub = (uint16_t)b;
        x = (((uint32_t)ua) << 16) | ((uint32_t)ub);
        Notice also the use of the uL suffix.
