Mathew Hendry's macro for binary integer literals

  • Tomás Ó hÉilidhe

    Mathew Hendry's macro for binary integer literals


    Here's a macro that Mathew Hendry posted back in the year 2000 for
    achieving binary integer literals that evaluate to compile-time
    constants:

    #define BIN8(n) \
    (((0x##n##ul&1<< 0)>> 0)|((0x##n##ul&1<< 4)>> 3)\
    |((0x##n##ul&1<< 8)>> 6)|((0x##n##ul&1<<12)>> 9)\
    |((0x##n##ul&1<<16)>>12)|((0x##n##ul&1<<20)>>15)\
    |((0x##n##ul&1<<24)>>18)|((0x##n##ul&1<<28)>>21))

    Now admittedly I don't know how it works mathematically, but still I
    want to perfect it. The first thing I did was made it more readable
    (in my own opinion of course):

    #define BIN8(n) \
    (                                                                 \
            ((0x##n##ul & 1<<0) >>0)    |   ((0x##n##ul & 1<<4) >>3)  \
        |   ((0x##n##ul & 1<<8) >>6)    |   ((0x##n##ul & 1<<12)>>9)  \
        |   ((0x##n##ul & 1<<16)>>12)   |   ((0x##n##ul & 1<<20)>>15) \
        |   ((0x##n##ul & 1<<24)>>18)   |   ((0x##n##ul & 1<<28)>>21) \
    )

    From there, the only flaw I can see is in the expression "1 << 24",
    which I think should be "1lu << 24", so that gives us:

    #define BIN8(n) \
    (                                                                     \
            ((0x##n##ul & 1lu<<0) >>0)    |   ((0x##n##ul & 1lu<<4) >>3)  \
        |   ((0x##n##ul & 1lu<<8) >>6)    |   ((0x##n##ul & 1lu<<12)>>9)  \
        |   ((0x##n##ul & 1lu<<16)>>12)   |   ((0x##n##ul & 1lu<<20)>>15) \
        |   ((0x##n##ul & 1lu<<24)>>18)   |   ((0x##n##ul & 1lu<<28)>>21) \
    )

    Is that perfect now? Or does it need more tweaking? Even if you post
    to say you think it's perfect then that'd be a help.
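    (For reference, the macro can be sanity-checked directly. Here is a
    minimal standalone test of the revised 1lu variant above; the expected
    hex values follow from reading each argument digit as one bit:)

    ```c
    #include <assert.h>

    /* The revised BIN8, with 1lu shift operands as discussed above. */
    #define BIN8(n) \
        (       ((0x##n##ul & 1lu<<0) >>0 )  | ((0x##n##ul & 1lu<<4) >>3 )  \
            |   ((0x##n##ul & 1lu<<8) >>6 )  | ((0x##n##ul & 1lu<<12)>>9 )  \
            |   ((0x##n##ul & 1lu<<16)>>12)  | ((0x##n##ul & 1lu<<20)>>15)  \
            |   ((0x##n##ul & 1lu<<24)>>18)  | ((0x##n##ul & 1lu<<28)>>21)  )

    int main(void)
    {
        /* Each "binary" argument should map to the matching byte value. */
        assert(BIN8(10101010) == 0xaaul);
        assert(BIN8(01010101) == 0x55ul);
        assert(BIN8(11111111) == 0xfful);
        assert(BIN8(00000000) == 0x00ul);
        return 0;
    }
    ```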

  • badc0de4@gmail.com

    #2
    Re: Mathew Hendry's macro for binary integer literals

    Tomás Ó hÉilidhe wrote:
    Here's a macro that Mathew Hendry posted back in the year 2000 for
    achieving binary integer literals that evaluate to compile-time
    constants:
    >
    #define BIN8(n) \
    (((0x##n##ul&1<< 0)>> 0)|((0x##n##ul&1<< 4)>> 3)\
    |((0x##n##ul&1<< 8)>> 6)|((0x##n##ul&1<<12)>> 9)\
    |((0x##n##ul&1<<16)>>12)|((0x##n##ul&1<<20)>>15)\
    |((0x##n##ul&1<<24)>>18)|((0x##n##ul&1<<28)>>21))
    >
    Now admittedly I don't know how it works mathematically
    The 0, 4, 8, ... correspond to the "bit position" when interpreted as
    a hexadecimal value.

    For example, the "1" at '0b00100000' occupies bit 20 in 0x00100000,
    so
    0x00100000 & (1 << 20) isolates that bit, and
    0x00100000 >> 15 moves it (back) to its 'proper' binary
    position.
    , but still I
    want to perfect it. The first thing I did was made it more readable
    (in my own opinion of course):
    >
    ... so that gives us:
    >
    #define BIN8(n) \
    (                                                                     \
            ((0x##n##ul & 1lu<<0) >>0)    |   ((0x##n##ul & 1lu<<4) >>3)  \
        |   ((0x##n##ul & 1lu<<8) >>6)    |   ((0x##n##ul & 1lu<<12)>>9)  \
        |   ((0x##n##ul & 1lu<<16)>>12)   |   ((0x##n##ul & 1lu<<20)>>15) \
        |   ((0x##n##ul & 1lu<<24)>>18)   |   ((0x##n##ul & 1lu<<28)>>21) \
    )
    >
    Is that perfect now?
    Two things come to mind:

    a) it doesn't cope well with usenet (re-)formatting
    b) you have the original "ul" mixed with your "lu". I'd like it better
    if all suffixes were the same.
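    (The isolate-and-shift step described above can be checked on its own.
    This small sketch verifies just the bit-20-to-bit-5 move for the
    0x00100000 example:)

    ```c
    #include <assert.h>

    int main(void)
    {
        /* The '1' in the argument 00100000, read as hex, sits at bit 20. */
        unsigned long n = 0x00100000ul;

        /* Masking with 1 << 20 isolates that bit... */
        unsigned long isolated = n & (1ul << 20);
        assert(isolated == 0x00100000ul);

        /* ...and shifting right by 20 - 5 = 15 lands it at bit 5, its
           proper position in the binary value 00100000. */
        assert((isolated >> 15) == (1ul << 5));
        return 0;
    }
    ```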


    • Kaz Kylheku

      #3
      Re: Mathew Hendry's macro for binary integer literals

      On Jun 17, 6:51 am, badc0de4@gmail.com wrote:
      Tomás Ó hÉilidhe wrote:
      Here's a macro that Mathew Hendry posted back in the year 2000 for
      achieving binary integer literals that evaluate to compile-time
      constants:
      >
        #define BIN8(n) \
          (((0x##n##ul&1<< 0)>> 0)|((0x##n##ul&1<< 4)>> 3)\
          |((0x##n##ul&1<< 8)>> 6)|((0x##n##ul&1<<12)>> 9)\
          |((0x##n##ul&1<<16)>>12)|((0x##n##ul&1<<20)>>15)\
          |((0x##n##ul&1<<24)>>18)|((0x##n##ul&1<<28)>>21))
      >
      Now admittedly I don't know how it works mathematically
      >
      The 0, 4, 8, ... correspond to the "bit position" when interpreted as
      a hexadecimal value.
      >
      For example, the "1" at '0b00100000' occupies bit 20 in 0x00100000,
      so
      0x00100000 & (1 << 20) isolates that bit, and
      0x00100000 >> 15 moves it (back) to its 'proper' binary
      position.
      >
      , but still I
      want to perfect it. The first thing I did was made it more readable
      (in my own opinion of course):
      >
      ... so that gives us:
      >
      >
      >
      #define BIN8(n) \
          (                                                                     \
                  ((0x##n##ul & 1lu<<0) >>0)    |   ((0x##n##ul & 1lu<<4) >>3)  \
              |   ((0x##n##ul & 1lu<<8) >>6)    |   ((0x##n##ul & 1lu<<12)>>9)  \
              |   ((0x##n##ul & 1lu<<16)>>12)   |   ((0x##n##ul & 1lu<<20)>>15) \
              |   ((0x##n##ul & 1lu<<24)>>18)   |   ((0x##n##ul & 1lu<<28)>>21) \
          )
      >
      Is that perfect now?
      >
      Two things come to mind:
      >
      a) it doesn't cope well with usenet (re-)formatting
      b) you have the original "ul" mixed with your "lu". I'd like it better
      if all suffixes were the same.
      - Too much repetition. Adding 0x and UL can be done by a helper macro.

      - Suggest parentheses for awkward precedence of & relative to <<:

      #define HEX_CODED_BIN(N) \
        (((N & 1 <<  0) >> 0)|((N & 1 <<  4) >> 3)\
        |((N & 1 <<  8) >> 6)|((N & 1 << 12) >> 9)\
        |((N & 1 << 16) >>12)|((N & 1 << 20) >>15)\
        |((N & 1 << 24) >>18)|((N & 1 << 28) >>21))


      #define BIN8(BITS) HEX_CODED_BIN(0x ## BITS ## UL)

      Furthermore, the shifting can be done first and then the masking,
      which simplifies the choice of shift values:

      #define HEX_CODED_BIN(N) \
        ((((N >>  0) & 1) << 0) | (((N >> 16) & 1) << 4) | \
         (((N >>  4) & 1) << 1) | (((N >> 20) & 1) << 5) | \
         (((N >>  8) & 1) << 2) | (((N >> 24) & 1) << 6) | \
         (((N >> 12) & 1) << 3) | (((N >> 28) & 1) << 7))

      See? The logic is a lot clearer now, because offsets in hex space
      don't have to be translated into shift amounts in binary space. The 0,
      4, 8, 12 ... values are obvious: we are shifting a hex digit into the
      least significant digit position. The & 1 tells us we are masking out
      a 0 or 1, and the 0, 1, 2, 3 ... shifts are obvious also: shifting a
      bit into the correct position within the byte.

      I transposed the calculation into columns, for further readability.

      - Remark: A BIN32 macro is easy to make:

      #define BIN32(A, B, C, D) \
      (BIN8(A) << 24) | (BIN8(B) << 16) | (BIN8(C) << 8) | (BIN8(D))

      - Complete program:

      #include <stdio.h>

      #define HEX_CODED_BIN(N) \
        ((((N >>  0) & 1) << 0) | (((N >> 16) & 1) << 4) | \
         (((N >>  4) & 1) << 1) | (((N >> 20) & 1) << 5) | \
         (((N >>  8) & 1) << 2) | (((N >> 24) & 1) << 6) | \
         (((N >> 12) & 1) << 3) | (((N >> 28) & 1) << 7))

      #define BIN8(BITS) HEX_CODED_BIN(0x ## BITS ## UL)

      #define BIN32(A, B, C, D) \
      (BIN8(A) << 24) | (BIN8(B) << 16) | (BIN8(C) << 8) | (BIN8(D))

      int main(void)
      {
      unsigned int bin1 = BIN8(10101010);
      unsigned int bin2 = BIN8(01010101);
      unsigned int bin3 = BIN8(11111111);
      unsigned int bin4 = BIN8(00000000);
      unsigned long bin5 = BIN32(10101010, 01010101, 11110000, 00001111);

      printf("bin1 == %x\n", bin1);
      printf("bin2 == %x\n", bin2);
      printf("bin3 == %x\n", bin3);
      printf("bin4 == %x\n", bin4);
      printf("bin5 == %lx\n", bin5);

      return 0;
      }

      Output:

      bin1 == aa
      bin2 == 55
      bin3 == ff
      bin4 == 0
      bin5 == aa55f00f

      Cheers.






      • Tomás Ó hÉilidhe

        #4
        Re: Mathew Hendry's macro for binary integer literals

        On Jun 17, 6:51 pm, Kaz Kylheku <kkylh...@gmail.com> wrote:
        - Complete program:
        >
        #include <stdio.h>
        >
        #define HEX_CODED_BIN(N) \
          ((((N >>  0) & 1) << 0) | (((N >> 16) & 1) << 4) | \
           (((N >>  4) & 1) << 1) | (((N >> 20) & 1) << 5) | \
           (((N >>  8) & 1) << 2) | (((N >> 24) & 1) << 6) | \
           (((N >> 12) & 1) << 3) | (((N >> 28) & 1) << 7))
        >
        #define BIN8(BITS) HEX_CODED_BIN(0x ## BITS ## UL)
        >
        #define BIN32(A, B, C, D) \
          (BIN8(A) << 24) | (BIN8(B) << 16) | (BIN8(C) << 8) | (BIN8(D))
        >
        int main(void)
        {
            unsigned int bin1 = BIN8(10101010);
            unsigned int bin2 = BIN8(01010101);
            unsigned int bin3 = BIN8(11111111);
            unsigned int bin4 = BIN8(00000000);
            unsigned long bin5 = BIN32(10101010, 01010101, 11110000, 00001111);
        >
            printf("bin1 == %x\n", bin1);
            printf("bin2 == %x\n", bin2);
            printf("bin3 == %x\n", bin3);
            printf("bin4 == %x\n", bin4);
            printf("bin5 == %lx\n", bin5);
        >
            return 0;
        >
        }
        >
        Output:
        >
        bin1 == aa
        bin2 == 55
        bin3 == ff
        bin4 == 0
        bin5 == aa55f00f

        Very nice, good stuff.


        • Kaz Kylheku

          #5
          Re: Mathew Hendry's macro for binary integer literals

          On Jun 17, 10:51 am, Kaz Kylheku <kkylh...@gmail.com> wrote:
          On Jun 17, 6:51 am, badc0de4@gmail.com wrote:
          >
          Tomás Ó hÉilidhe wrote:
          Here's a macro that Mathew Hendry posted back in the year 2000 for
          achieving binary integer literals that evaluate to compile-time
          constants:
          >
            #define BIN8(n) \
              (((0x##n##ul&1<< 0)>> 0)|((0x##n##ul&1<< 4)>> 3)\
              |((0x##n##ul&1<< 8)>> 6)|((0x##n##ul&1<<12)>> 9)\
              |((0x##n##ul&1<<16)>>12)|((0x##n##ul&1<<20)>>15)\
              |((0x##n##ul&1<<24)>>18)|((0x##n##ul&1<<28)>>21))
          >
          Now admittedly I don't know how it works mathematically
          >
          The 0, 4, 8, ... correspond to the "bit position" when interpreted as
          a hexadecimal value.
          >
          For example, the "1" at '0b00100000' occupies bit 20 in 0x00100000,
          so
          0x00100000 & (1 << 20) isolates that bit, and
          0x00100000 >> 15 moves it (back) to its 'proper' binary
          position.
          >
          , but still I
          want to perfect it. The first thing I did was made it more readable
          (in my own opinion of course):
          >
          ... so that gives us:
          >
          #define BIN8(n) \
              (                                                                     \
                      ((0x##n##ul & 1lu<<0) >>0)    |   ((0x##n##ul & 1lu<<4) >>3)  \
                  |   ((0x##n##ul & 1lu<<8) >>6)    |   ((0x##n##ul & 1lu<<12)>>9)  \
                  |   ((0x##n##ul & 1lu<<16)>>12)   |   ((0x##n##ul & 1lu<<20)>>15) \
                  |   ((0x##n##ul & 1lu<<24)>>18)   |   ((0x##n##ul & 1lu<<28)>>21) \
              )
          >
          Is that perfect now?
          >
          Two things come to mind:
          >
          a) it doesn't cope well with usenet (re-)formatting
          b) you have the original "ul" mixed with your "lu". I'd like it better
          if all suffixes were the same.
          >
          - Too much repetition. Adding 0x and UL can be done by a helper macro.
          >
          - Suggest parentheses for awkward precedence of & relative to <<:
          >
             #define HEX_CODED_BIN(N) \
               (((N & 1 <<  0) >> 0)|((N & 1 <<  4) >> 3)\
               |((N & 1 <<  8) >> 6)|((N & 1 << 12) >> 9)\
               |((N & 1 << 16) >>12)|((N & 1 << 20) >>15)\
               |((N & 1 << 24) >>18)|((N & 1 << 28) >>21))
          >
             #define BIN8(BITS) HEX_CODED_BIN(0x ## BITS ## UL)
          >
          Furthermore, the shifting can be done first and then the masking,
          which  simplifies the choice of shift values:
          >
             #define HEX_CODED_BIN(N) \
                ((((N >>  0) & 1) << 0) | (((N >> 16) & 1) << 4) | \
                 (((N >>  4) & 1) << 1) | (((N >> 20) & 1) << 5) | \
                 (((N >>  8) & 1) << 2) | (((N >> 24) & 1) << 6) | \
                 (((N >> 12) & 1) << 3) | (((N >> 28) & 1) << 7))
          >
          See? The logic is a lot clearer now, because offsets in hex space
          don't have to be translated into shift amounts in binary space. The 0,
          4, 8, 12 ... values are obvious: we are shifting a hex digit into the
          least significant digit position. The & 1 tells us we are masking out
          a 0 or 1, and the 0, 1, 2, 3 ... shifts are obvious also: shifting a
          bit into the correct position within the byte.
          >
          I transposed the calculation into columns, for further readability.
          >
          - Remark: A BIN32 macro is easy to make:
          >
             #define BIN32(A, B, C, D) \
                (BIN8(A) << 24) | (BIN8(B) << 16) | (BIN8(C) << 8) | (BIN8(D))
          >
          - Complete program:
          >
          #include <stdio.h>
          >
          #define HEX_CODED_BIN(N) \
            ((((N >>  0) & 1) << 0) | (((N >> 16) & 1) << 4) | \
             (((N >>  4) & 1) << 1) | (((N >> 20) & 1) << 5) | \
             (((N >>  8) & 1) << 2) | (((N >> 24) & 1) << 6) | \
             (((N >> 12) & 1) << 3) | (((N >> 28) & 1) << 7))
          Furthermore, this pattern can be easily reduced like this:

          #define BIT(N, K) (((N >> (4*K)) & 1) << K)

          #define HEX_CODED_BIN(N) \
          (BIT(N, 0) | BIT(N, 1) | BIT(N, 2) | BIT(N, 3) | \
          BIT(N, 4) | BIT(N, 5) | BIT(N, 6) | BIT(N, 7))
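
          (A minimal sketch to confirm the reduced BIT-based form produces
          the same results as the longhand macro:)

          ```c
          #include <assert.h>

          /* The reduced form: shift hex digit K down, mask, re-place at bit K. */
          #define BIT(N, K) (((N >> (4*K)) & 1) << K)

          #define HEX_CODED_BIN(N) \
              (BIT(N, 0) | BIT(N, 1) | BIT(N, 2) | BIT(N, 3) | \
               BIT(N, 4) | BIT(N, 5) | BIT(N, 6) | BIT(N, 7))

          #define BIN8(BITS) HEX_CODED_BIN(0x ## BITS ## UL)

          int main(void)
          {
              assert(BIN8(10101010) == 0xaa);
              assert(BIN8(11110000) == 0xf0);
              return 0;
          }
          ```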


          Let's have some fun: how about a decimal version of this? Decimal
          constants give us more digits (without having to go to C99).

          #define POW10_0 1
          #define POW10_1 10
          #define POW10_2 100
          #define POW10_3 1000
          #define POW10_4 10000
          #define POW10_5 100000
          #define POW10_6 1000000
          #define POW10_7 10000000
          #define POW10_8 100000000
          #define POW10_9 1000000000

          #define POW10(K) POW10_ ## K

          #define BIT(N, K) (((N / POW10(K)) % 2) << K)

          #define DEC_CODED_BIN(N) \
          (BIT(N, 0) | BIT(N, 1) | BIT(N, 2) | BIT(N, 3) | \
          BIT(N, 4) | BIT(N, 5) | BIT(N, 6) | BIT(N, 7) | \
          BIT(N, 8) | BIT(N, 9))

          #define BIN10(N) DEC_CODED_BIN(N ## UL)
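
          (As a quick check of the decimal-coded version, deliberately using
          an argument whose leading digit is 1:)

          ```c
          #include <assert.h>

          #define POW10_0 1
          #define POW10_1 10
          #define POW10_2 100
          #define POW10_3 1000
          #define POW10_4 10000
          #define POW10_5 100000
          #define POW10_6 1000000
          #define POW10_7 10000000
          #define POW10_8 100000000
          #define POW10_9 1000000000

          #define POW10(K) POW10_ ## K

          /* Decimal digit K of N, taken mod 2, re-placed at bit K. */
          #define BIT(N, K) (((N / POW10(K)) % 2) << K)

          #define DEC_CODED_BIN(N) \
              (BIT(N, 0) | BIT(N, 1) | BIT(N, 2) | BIT(N, 3) | \
               BIT(N, 4) | BIT(N, 5) | BIT(N, 6) | BIT(N, 7) | \
               BIT(N, 8) | BIT(N, 9))

          #define BIN10(N) DEC_CODED_BIN(N ## UL)

          int main(void)
          {
              /* Ten binary digits: bits 9,7,5,3,1 set = 0x2aa. */
              assert(BIN10(1010101010) == 0x2aa);
              return 0;
          }
          ```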

          But this version has a serious bug---or, at least, a programmer pitfall.
          That bug brings me to the next point: if we switch to octal, we
          portably get 11 digits!

          #define BIT(N, K) (((N >> (3*K)) & 1) << K)

          #define OCT_CODED_BIN(N) \
          (BIT(N, 0) | BIT(N, 1) | BIT(N, 2) | BIT(N, 3) | \
          BIT(N, 4) | BIT(N, 5) | BIT(N, 6) | BIT(N, 7) | \
          BIT(N, 8) | BIT(N, 9) | BIT(N, 10))

          #define BIN11(N) OCT_CODED_BIN(0 ## N ## UL)
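
          (And a corresponding check of the 11-digit octal-coded version:)

          ```c
          #include <assert.h>

          /* Octal digit K of N, masked to 0 or 1, re-placed at bit K. */
          #define BIT(N, K) (((N >> (3*K)) & 1) << K)

          #define OCT_CODED_BIN(N) \
              (BIT(N, 0) | BIT(N, 1) | BIT(N, 2) | BIT(N, 3) | \
               BIT(N, 4) | BIT(N, 5) | BIT(N, 6) | BIT(N, 7) | \
               BIT(N, 8) | BIT(N, 9) | BIT(N, 10))

          /* Prepending 0 makes the pasted constant octal, so every digit
             of the argument is read base-8, one bit per digit. */
          #define BIN11(N) OCT_CODED_BIN(0 ## N ## UL)

          int main(void)
          {
              assert(BIN11(10101010101) == 0x555);
              assert(BIN11(11111111111) == 0x7ff);
              return 0;
          }
          ```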

          :)

