encoding / decoding questions

  • Evangelista Sami

    encoding / decoding questions

    hello

    i have to write a program in which i encode / decode data into a bit
    vector, and i do not really know how to do it.
    my solution is to represent my bits as a char *,
    so extracting the first 32 bits of a vector is done by:
    bits[0] << 24 | bits[1] << 16 | bits[2] << 8 | bits[3]
    or
    bits[0] >> 24 | bits[1] >> 16 | bits[2] >> 8 | bits[3]
    depending on how i encode my data. is this correct?

    so the program makes the assumption that a char takes 8 bits.
    i don't know if this is correct. is this defined in C99?

    any help would be very useful. in particular, is there a more
    efficient way to do this?
  • Christopher Benson-Manica

    #2
    Re: encoding / decoding questions

    Evangelista Sami <evangeli@cnam.fr> spoke thus:

    > so the program makes the assumption that a char takes 8 bits.
    > i don't know if this is correct. is this defined in C99?

    The macro CHAR_BIT, defined in <limits.h>, is what you're looking
    for, although it's usually 8.

    --
    Christopher Benson-Manica | I *should* know what I'm talking about - if I
    ataru(at)cyberspace.org | don't, I need to know. Flames welcome.


    • Jack Klein

      #3
      Re: encoding / decoding questions

      On 12 Apr 2004 07:25:55 -0700, evangeli@cnam.fr (Evangelista Sami)
      wrote in comp.lang.c:
      > hello
      >
      > i have to write a program in which i encode / decode data into a bit
      > vector, and i do not really know how to do it.
      > my solution is to represent my bits as a char *,
      > so extracting the first 32 bits of a vector is done by:
      > bits[0] << 24 | bits[1] << 16 | bits[2] << 8 | bits[3]
      > or
      > bits[0] >> 24 | bits[1] >> 16 | bits[2] >> 8 | bits[3]
      > depending on how i encode my data. is this correct?
      >
      > so the program makes the assumption that a char takes 8 bits.
      > i don't know if this is correct. is this defined in C99?
      >
      > any help would be very useful. in particular, is there a more
      > efficient way to do this?

      As already pointed out, the macro CHAR_BIT defined in <limits.h>
      contains the number of bits in objects of the character types. It is
      often best to just use 8, since CHAR_BIT must be at least 8, and if
      you actually port this code to an architecture where character types
      have more than 8 bits, it will still work.

      There are a few other things you should do, however. First, you
      should use "unsigned char", not "signed char" or plain "char", for
      the elements of your array.

      Second, you should cast the values to unsigned long when assembling
      them. On most platforms, the unsigned chars of your array will
      promote to signed int. If ints have only 16 bits, shifting by 24 or
      16 produces undefined behavior. Even if your ints have 32 bits,
      shifting signed values can produce a trap representation and cause
      undefined behavior.

      When dealing with bits, always use all unsigned types.

      --
      Jack Klein
      Home: http://JK-Technology.Com
      FAQs for
      comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
      comp.lang.c++ http://www.parashift.com/c++-faq-lite/
      alt.comp.lang.learn.c-c++
