Why in stdint.h have both least and fast integer types?

  • GS

    Why in stdint.h have both least and fast integer types?

    The stdint.h header definition mentions five integer categories,

    1) exact width, eg., int32_t
    2) at least as wide as, eg., int_least32_t
    3) as fast as possible but at least as wide as, eg., int_fast32_t
    4) integer capable of holding a pointer, intptr_t
    5) widest integer in the implementation, intmax_t

    Is there a valid motivation for having both int_least and int_fast?

    --
    TIA
  • Christian Bau

    #2
    Re: Why in stdint.h have both least and fast integer types?

    In article <79062cd3.0411270240.29d3a4f7@posting.google.com>,
    groupstudy2001@yahoo.co.uk (GS) wrote:
    > The stdint.h header definition mentions five integer categories,
    >
    > 1) exact width, eg., int32_t
    > 2) at least as wide as, eg., int_least32_t
    > 3) as fast as possible but at least as wide as, eg., int_fast32_t
    > 4) integer capable of holding a pointer, intptr_t
    > 5) widest integer in the implementation, intmax_t
    >
    > Is there a valid motivation for having both int_least and int_fast?

    Of course. If 16 bit integers are slow in your hardware, and 32 bit
    integers are fast, then you would want int_least16_t to be 16 bit, and
    int_fast16_t to be 32 bit. That covers about every computer that you can
    buy in a shop.


    • James Harris

      #3
      Re: Why in stdint.h have both least and fast integer types?


      "Christian Bau" <christian.bau@cbau.freeserve.co.uk> wrote in message
      news:christian.bau-0813AA.12333027112004@slb-newsm1.svr.pol.co.uk...
      > In article <79062cd3.0411270240.29d3a4f7@posting.google.com>,
      > groupstudy2001@yahoo.co.uk (GS) wrote:
      >
      >> The stdint.h header definition mentions five integer categories,
      >>
      >> 1) exact width, eg., int32_t
      >> 2) at least as wide as, eg., int_least32_t
      >> 3) as fast as possible but at least as wide as, eg., int_fast32_t
      >> 4) integer capable of holding a pointer, intptr_t
      >> 5) widest integer in the implementation, intmax_t
      >>
      >> Is there a valid motivation for having both int_least and int_fast?
      >
      > Of course. If 16 bit integers are slow in your hardware, and 32 bit
      > integers are fast, then you would want int_least16_t to be 16 bit, and
      > int_fast16_t to be 32 bit. That covers about every computer that you can
      > buy in a shop.

      Interesting example but what advantage does int_least16_t really give? If
      we are talking about a few scalars wouldn't it be OK to let the compiler
      represent them as int32s since they are faster? If, on the other hand,
      these were stored in arrays
      int_least16_t fred [10000];
      why not let the compiler choose whether to store as int16 or int32,
      depending on its optimization constraints?

      --
      James



      • Gordon Burditt

        #4
        Re: Why in stdint.h have both least and fast integer types?

        >>> The stdint.h header definition mentions five integer categories,
        >>>
        >>> 1) exact width, eg., int32_t
        >>> 2) at least as wide as, eg., int_least32_t
        >>> 3) as fast as possible but at least as wide as, eg., int_fast32_t
        >>> 4) integer capable of holding a pointer, intptr_t
        >>> 5) widest integer in the implementation, intmax_t
        >>>
        >>> Is there a valid motivation for having both int_least and int_fast?
        >>
        >> Of course. If 16 bit integers are slow in your hardware, and 32 bit
        >> integers are fast, then you would want int_least16_t to be 16 bit, and
        >> int_fast16_t to be 32 bit. That covers about every computer that you can
        >> buy in a shop.
        >
        >Interesting example but what advantage does int_least16_t really give? If

        Space savings.
        >we are talking about a few scalars wouldn't it be OK to let the compiler
        >represent them as int32s since they are faster?

        The programmer asked for memory savings over speed savings by
        using int_least16_t over int_fast16_t. Speed doesn't do much good
        if the program won't fit in (virtual) memory.

        The few scalars might be deliberately made the same type as that
        of a big array (or disk file) used in another compilation unit.
        One example of this is storing data in dbm files using a third-party
        library. When you retrieve data from dbm files, you get back a
        pointer to the data, but it seems like it's usually pessimally
        aligned, and in any case the dbm functions do not guarantee alignment,
        so the way to use it is to memcpy() to a variable/structure of the
        same type, and access it there. This fails if different compilations
        have different sizes for int_least16_t.
        >If, on the other hand,
        >these were stored in arrays
        > int_least16_t fred [10000];
        >why not let the compiler choose whether to store as int16 or int32,
        >depending on its optimization constraints?

        sizeof(int_least16_t) must be the same in all compilation units
        that get linked together to make a program. (of course, array
        subscripting, allocating a variable or array of int_least16_t, and
        pointer incrementing all implicitly use that size) The optimizer
        doesn't get much info on what size to make int_least16_t when the
        only reference to it is:

        void *vp;
        size_t record_count;

        qsort(vp, record_count, sizeof(int_least16_t), compar);

        However, using that information, the compiler *MUST* choose now.
        Perhaps before the part that actually allocates the array vp points
        at is even written.

        Gordon L. Burditt


        • Charlie Gordon

          #5
          Re: Why in stdint.h have both least and fast integer types?

          "James Harris" <no.email.please> wrote in message
          news:41a8a7f2$0$1068$db0fefd9@news.zen.co.uk...
          >
          > "Christian Bau" <christian.bau@cbau.freeserve.co.uk> wrote in message
          > news:christian.bau-0813AA.12333027112004@slb-newsm1.svr.pol.co.uk...
          > > In article <79062cd3.0411270240.29d3a4f7@posting.google.com>,
          > > groupstudy2001@yahoo.co.uk (GS) wrote:
          ...
          > > Of course. If 16 bit integers are slow in your hardware, and 32 bit
          > > integers are fast, then you would want int_least16_t to be 16 bit, and
          > > int_fast16_t to be 32 bit. That covers about every computer that you can
          > > buy in a shop.
          >
          > Interesting example but what advantage does int_least16_t really give? If
          > we are talking about a few scalars wouldn't it be OK to let the compiler
          > represent them as int32s since they are faster? If, on the other hand,
          > these were stored in arrays
          > int_least16_t fred [10000];
          > why not let the compiler choose whether to store as int16 or int32,
          > depending on its optimization constraints?

          That would create incompatibilities between modules compiled with
          different optimisation settings: a horrible side effect that would
          cause unlimited headaches!
          My understanding is that int16_t must be exactly 16 bits.
          int_least16_t should be the practical choice on machines where 16 bit
          ints have to be emulated, for instance, but otherwise would still be
          implemented as 16 bit ints, whereas int_fast16_t would only be 16 bits
          if that's the fastest option.

          There really is more than just the speed/size tradeoff: practical/precise is
          another dimension to take into account.

          --
          Chqrlie.



          • Kevin Bracey

            #6
            Re: Why in stdint.h have both least and fast integer types?

            In message <79062cd3.0411270240.29d3a4f7@posting.google.com>
            groupstudy2001@yahoo.co.uk (GS) wrote:
            > The stdint.h header definition mentions five integer categories,
            >
            > 1) exact width, eg., int32_t
            > 2) at least as wide as, eg., int_least32_t
            > 3) as fast as possible but at least as wide as, eg., int_fast32_t
            > 4) integer capable of holding a pointer, intptr_t
            > 5) widest integer in the implementation, intmax_t
            >
            > Is there a valid motivation for having both int_least and int_fast?

            The point you missed is that the _least types are supposed to be the
            *smallest* types at least as wide, as opposed to the *fastest*, which are
            designated by _fast.

            A typical example might be the ARM, which (until ARMv4) had no 16-bit memory
            access instructions, and still has only 32-bit registers and arithmetic
            instructions. There int_least16_t would be 16-bit, but int_fast16_t might be
            32-bit.

            How you decide what's "fastest" is the tricky bit.

            In a function, code like:

            uint16_t a, b, c;

            a = b + c;

            would be slow on the ARM, because it would have to perform a 32-bit
            addition, and then manually trim the excess high bits off. Using
            uint_fast16_t would have avoided that.[*]

            On the other hand, if you had an array of 2000 such 32-bit int_fast16_ts you
            were working on, having them as 16-bit might actually be faster because they
            fit in the cache better, regardless of the extra core CPU cycles to
            manipulate them.

            That observation is likely to be true for pretty much any cached processor
            where int_fast_XX != int_least_XX, so as a programmer it's probably going to
            be a good idea to always use int_least_XX for arrays of any significant size.

            [*] Footnote - some good ARM compilers have "significant bit tracking" that
            can actually figure out when such narrowing is mathematically
            unnecessary.

            --
            Kevin Bracey, Principal Software Engineer
            Tematic Ltd Tel: +44 (0) 1223 503464
            182-190 Newmarket Road Fax: +44 (0) 1728 727430
            Cambridge, CB5 8HE, United Kingdom WWW: http://www.tematic.com/


            • James Harris

              #7
              Re: Why in stdint.h have both least and fast integer types?


              "Gordon Burditt" <gordonb.4sa97@burditt.org> wrote in message
              news:coaq1f$jc0@library1.airnews.net...
              >>>> The stdint.h header definition mentions five integer categories,
              >>>>
              >>>> 1) exact width, eg., int32_t
              >>>> 2) at least as wide as, eg., int_least32_t
              >>>> 3) as fast as possible but at least as wide as, eg., int_fast32_t
              >>>> 4) integer capable of holding a pointer, intptr_t
              >>>> 5) widest integer in the implementation, intmax_t
              >>>>
              >>>> Is there a valid motivation for having both int_least and int_fast?
              >>>
              >>> Of course. If 16 bit integers are slow in your hardware, and 32 bit
              >>> integers are fast, then you would want int_least16_t to be 16 bit, and
              >>> int_fast16_t to be 32 bit. That covers about every computer that you
              >>> can
              >>> buy in a shop.
              >>
              >>Interesting example but what advantage does int_least16_t really give? If
              >
              > Space savings.

              For scalars?
              >>we are talking about a few scalars wouldn't it be OK to let the compiler
              >>represent them as int32s since they are faster?
              >
              > The programmer asked for memory savings over speed savings by
              > using int_least16_t over int_fast16_t. Speed doesn't do much good
              > if the program won't fit in (virtual) memory.

              You expect to run out of memory? If that is really a problem why not use
              int16_t?

              More to the point, memory constraints are more likely to be a feature of
              PICs or similar. In that case I would want to be able to tell the compiler
              to fit the code in X words but to still optimize to be as fast as possible.
              > The few scalars might be deliberately made the same type as that
              > of a big array (or disk file) used in another compilation unit.
              > One example of this is storing data in dbm files using a third-party
              > library. When you retrieve data from dbm files, you get back a
              > pointer to the data, but it seems like it's usually pessimally
              > aligned, and in any case the dbm functions do not guarantee alignment,
              > so the way to use it is to memcpy() to a variable/structure of the
              > same type, and access it there. This fails if different compilations
              > have different sizes for int_least16_t.

              Agreed but better, surely, to define the interface using int16_t. I expect
              that int_least16_t would be different for different implementations, making
              them incompatible with each other. This is an argument against the presence
              of int_least16_t.
              >>If, on the other hand,
              >>these were stored in arrays
              >> int_least16_t fred [10000];
              >>why not let the compiler choose whether to store as int16 or int32,
              >>depending on its optimization constraints?
              >
              > sizeof(int_least16_t) must be the same in all compilation units
              > that get linked together to make a program. (of course, array
              > subscripting, allocating a variable or array of int_least16_t, and
              > pointer incrementing all implicitly use that size) The optimizer
              > doesn't get much info on what size to make int_least16_t when the
              > only reference to it is:
              >
              > void *vp;
              > size_t record_count;
              >
              > qsort(vp, record_count, sizeof(int_least16_t), compar);
              >
              > However, using that information, the compiler *MUST* choose now.
              > Perhaps before the part that actually allocates the array vp points
              > at is even written.

              Again, perhaps this is better written as int16_t, though I am beginning to
              see there could be benefits to separating int_fast16_t.

              --
              Cheers,
              James






                • James Harris

                  #9
                  Re: Why in stdint.h have both least and fast integer types?


                  "Charlie Gordon" <news@chqrlie.org> wrote in message
                  news:cof17g$omq$1@reader1.imaginet.fr...
                  <snip>
                  >> Interesting example but what advantage does int_least16_t really give?
                  >> If
                  >> we are talking about a few scalars wouldn't it be OK to let the compiler
                  >> represent them as int32s since they are faster? If, on the other hand,
                  >> these were stored in arrays
                  >> int_least16_t fred [10000];
                  >> why not let the compiler choose whether to store as int16 or int32,
                  >> depending on its optimization constraints?
                  >
                  > That would create incompatibilities between modules compiled with
                  > different
                  > optimisation settings : a horrible side effect, that would cause
                  > unlimited
                  > headaches !

                  But isn't that exactly what int_least16_t does? It requires compilation
                  under the same rules for all modules which are to be linked together (and
                  that share data). Otherwise chaos will ensue. Given that the compilation
                  rules must match, why have the three types of 16-bit integer? I can see
                  the need for two,

                  1) an integer that is at least N bits wide but upon which operations are as
                  fast as possible,
                  2) an integer that behaves as if it is exactly N bits wide - for shifts
                  etc.,

                  but I'm not sure about having a third option. This seems a bit baroque and
                  not in keeping with the lean nature that is the essence of C. It also seems
                  to me to confuse the performance vs. space issue with program logic. Is
                  this a set of data types designed by committee? I wonder what Ken Thompson
                  and Dennis Ritchie make of it.
                  > My understanding is that int16_t must be exactly 16 bits.
                  > int_least16_t should be the practical choice on machines where 16 bit
                  > ints have
                  > to be emulated for instance, but otherwise would still be implemented as
                  > 16 bit
                  > ints, whereas int_fast16_t would only be 16 bits if that's the fastest
                  > option.
                  >
                  > There really is more than just the speed/size tradeoff: practical/precise
                  > is
                  > another dimension to take into account.

                  Agreed.



                  • James Harris

                    #10
                    Re: Why in stdint.h have both least and fast integer types?


                    "Kevin Bracey" <kevin.bracey@tematic.com> wrote in message
                    news:cc6f86154d.kbracey@tematic.com...
                    > In message <79062cd3.0411270240.29d3a4f7@posting.google.com>
                    > groupstudy2001@yahoo.co.uk (GS) wrote:
                    >
                    >> The stdint.h header definition mentions five integer categories,
                    >>
                    >> 1) exact width, eg., int32_t
                    >> 2) at least as wide as, eg., int_least32_t
                    >> 3) as fast as possible but at least as wide as, eg., int_fast32_t
                    >> 4) integer capable of holding a pointer, intptr_t
                    >> 5) widest integer in the implementation, intmax_t
                    >>
                    >> Is there a valid motivation for having both int_least and int_fast?
                    >
                    > The point you missed is that the _least types are supposed to be the
                    > *smallest* types at least as wide, as opposed to the *fastest*, which are
                    > designated by _fast.

                    The *smallest* type as least as wide as 16 is of width 16, no? If it is
                    impossible to support an integer of width 16 (18-bit word, for instance)
                    how does the implementation deal with this standard's int16_t?
                    > A typical example might be the ARM, which (until ARMv4) had no 16-bit
                    > memory
                    > access instructions, and still has only 32-bit registers and arithmetic
                    > instructions. There int_least16_t would be 16-bit, but int_fast16_t might
                    > be
                    > 32-bit.
                    >
                    > How you decide what's "fastest" is the tricky bit.

                    Absolutely! There is no point making a data type "fast" if it is to be
                    repeatedly compared with values which are not the same width. Of course,
                    operations are fast or slow, not data values. Is the standard confusing two
                    orthogonal issues?
                    > In a function, code like:
                    >
                    > uint16_t a, b, c;
                    >
                    > a = b + c;
                    >
                    > would be slow on the ARM, because it would have to perform a 32-bit
                    > addition, and then manually trim the excess high bits off. Using
                    > uint_fast16_t would have avoided that.[*]

                    Yes, I think I'm coming round to having one that behaves as if it is
                    exactly 16 bits and another that behaves as if it has at least 16 bits.
                    > On the other hand, if you had an array of 2000 such 32-bit
                    > int_fast16_ts you
                    > were working on, having them as 16-bit might actually be faster because
                    > they
                    > fit in the cache better, regardless of the extra core CPU cycles to
                    > manipulate them.
                    >
                    > That observation is likely to be true for pretty much any cached
                    > processor
                    > where int_fast_XX != int_least_XX, so as a programmer it's probably going
                    > to
                    > be a good idea to always use int_least_XX for arrays of any significant
                    > size.

                    I can see your point here. It's a subtlety. I still wonder, though, if I
                    wouldn't prefer to specify that array as int16_t. Specifying int_least16_t
                    is making me a hostage to the compiler. If I am taking into account the
                    architecture of the underlying machine (in this case, primary cache size)
                    wouldn't I be better writing more precise requirements than int_leastX_t?



                    • Gordon Burditt

                      #11
                      Re: Why in stdint.h have both least and fast integer types?

                      >>>>> The stdint.h header definition mentions five integer categories,
                      >>>>>
                      >>>>> 1) exact width, eg., int32_t
                      >>>>> 2) at least as wide as, eg., int_least32_t
                      >>>>> 3) as fast as possible but at least as wide as, eg., int_fast32_t
                      >>>>> 4) integer capable of holding a pointer, intptr_t
                      >>>>> 5) widest integer in the implementation, intmax_t
                      >>>>>
                      >>>>> Is there a valid motivation for having both int_least and int_fast?
                      >>>>
                      >>>> Of course. If 16 bit integers are slow in your hardware, and 32 bit
                      >>>> integers are fast, then you would want int_least16_t to be 16 bit, and
                      >>>> int_fast16_t to be 32 bit. That covers about every computer that you
                      >>>> can
                      >>>> buy in a shop.
                      >>>
                      >>>Interesting example but what advantage does int_least16_t really give? If
                      >>
                      >> Space savings.
                      >
                      >For scalars?

                      Yes. The compiler can't necessarily tell that there aren't malloc'd
                      arrays of these things also. Space savings might translate into
                      speed savings, too (since you seem to be stuck on fast == GOOD at
                      the expense of everything else) due to the operation of data caches.
                      >>>we are talking about a few scalars wouldn't it be OK to let the compiler
                      >>>represent them as int32s since they are faster?

                      Wouldn't it be OK to let the compiler represent int_fast16_t as
                      an int16_t on a machine which requires shift-and-mask operations,
                      BECAUSE IT TAKES LESS MEMORY? No, because int_fast16_t is supposed
                      to be fast. Likewise, int_least16_t is supposed to be small.
                      >> The programmer asked for memory savings over speed savings by
                      >> using int_least16_t over int_fast16_t. Speed doesn't do much good
                      >> if the program won't fit in (virtual) memory.
                      >
                      >You expect to run out of memory? If that is really a problem why not use
                      >int16_t?

                      int16_t is not guaranteed to exist at all, although it will
                      not be a problem on most current machines. Eventually it might
                      be an issue on machines where (char, short, int, long, long long) are
                      (32, 64, 128, 1024, and 8192) bits, respectively.
                      >More to the point, memory constraints are more likely to be a feature of
                      >PICs or similar. In that case I would want to be able to tell the compiler
                      >to fit the code in X words but to still optimize to be as fast as possible.

                      int_least16_t is a way of telling the compiler to save memory.
                      If you want fast, use int_fast16_t.

                      It is quite possible for there to be a tight memory constraint on
                      some memory but not others in embedded devices, for example, limited
                      space for NONVOLATILE memory (represented, for example, as one
                      struct containing all the nonvolatile parameters) but more generous
                      memory for the program to run.
                      >> The few scalars might be deliberately made the same type as that
                      >> of a big array (or disk file) used in another compilation unit.
                      >> One example of this is storing data in dbm files using a third-party
                      >> library. When you retrieve data from dbm files, you get back a
                      >> pointer to the data, but it seems like it's usually pessimally
                      >> aligned, and in any case the dbm functions do not guarantee alignment,
                      >> so the way to use it is to memcpy() to a variable/structure of the
                      >> same type, and access it there. This fails if different compilations
                      >> have different sizes for int_least16_t.
                      >
                      >Agreed but better, surely, to define the interface using int16_t. I expect

                      int16_t need not exist.
                      >that int_least16_t would be different for different implementations, making
                      >them incompatible with each other. This is an argument against the presence
                      >of int_least16_t.

                      If the data in question is not used outside the program (as would likely
                      be the case with arrays or with temporary disk files used only while
                      this program is running), portability of data between implementations
                      is not an issue.
                      >>>If, on the other hand,
                      >>>these were stored in arrays
                      >>> int_least16_t fred [10000];
                      >>>why not let the compiler choose whether to store as int16 or int32,
                      >>>depending on its optimization constraints?
                      >>
                      >> sizeof(int_least16_t) must be the same in all compilation units
                      >> that get linked together to make a program. (of course, array
                      >> subscripting, allocating a variable or array of int_least16_t, and
                      >> pointer incrementing all implicitly use that size) The optimizer
                      >> doesn't get much info on what size to make int_least16_t when the
                      >> only reference to it is:
                      >>
                      >> void *vp;
                      >> size_t record_count;
                      >>
                      >> qsort(vp, record_count, sizeof(int_least16_t), compar);
                      >>
                      >> However, using that information, the compiler *MUST* choose now.
                      >> Perhaps before the part that actually allocates the array vp points
                      >> at is even written.
                      >
                      >Again, perhaps this is better written as int16_t, though I am beginning to
                      >see there could be benefits to separating int_fast16_t.[/color]

                      int16_t need not exist.

                      I was very disappointed in the standard for not requiring int_least11_t,
                      int_fast37_t, and, if it exists in the implementation, int53_t.
                      (or, in general, int_leastN_t, int_fastN_t, and if present, intN_t
                      for all prime values of N up to the maximum size available, and
                      preferably non-prime values as well). It would at least be clear
                      in arguments over int_fast37_t vs. int_least37_t that there is a
                      good chance that int37_t doesn't exist.

                      Gordon L. Burditt

                      Comment

                      • Kevin Bracey

                        #12
                        Re: Why in stdint.h have both least and fast integer types?

In message <41ae0edd$0$1068$db0fefd9@news.zen.co.uk>
"James Harris" <no.email.please> wrote:

>
> "Kevin Bracey" <kevin.bracey@tematic.com> wrote in message
> news:cc6f86154d.kbracey@tematic.com...
> > The point you missed is that the _least types are supposed to be the
> > *smallest* types at least as wide, as opposed to the *fastest*, which are
> > designated by _fast.
>
> The *smallest* type at least as wide as 16 is of width 16, no? If it is
> impossible to support an integer of width 16 (18-bit word, for instance)
> how does the implementation deal with this standard's int16_t?

                        If an implementation doesn't have a 16-bit type, then int16_t isn't defined.
                        Its presence can be detected by #ifdef INT16_MAX.

int_least16_t and int_fast16_t have to be provided by all implementations;
int16_t only has to be provided if the implementation has 16-bit integers.

> I can see your point here. It's a subtlety. I still wonder, though, if I
> wouldn't prefer to specify that array as int16_t. Specifying int_least16_t
> is making me a hostage to the compiler. If I am taking into account the
> architecture of the underlying machine (in this case, primary cache size)
> wouldn't I be better writing more precise requirements than int_leastX_t?

                        The only point is that, theoretically, using int16_t makes your code less
                        portable. The code wouldn't compile on a system lacking 16-bit types.

                        If you use int_least16_t, then that type *must* be 16-bit on any platform
                        with a 16-bit type. So you're not really a "hostage to the compiler" on
                        mainstream platforms.

                        But the advantage is that the code will also now compile on an odd platform
                        without 16-bit types. The code may still fail, if it can't actually cope with
                        a wider int_least16_t, but at least it will have made the attempt.

                        Use of int_fast16_t is more problematic - that's much more likely to be
                        wider than 16-bit on a mainstream platform, so you'll definitely have to code
                        carefully to make sure you're not relying on it only being 16-bit, if you
                        care about portability. But then, I suppose the same applies to any use of
                        int or long.

                        Personally, I've stuck to using int16_t instead of int_least16_t just to save
                        typing, and because I have no expectation of my code ever going near a system
                        without 8,16,32-bit types.

                        --
                        Kevin Bracey, Principal Software Engineer
                        Tematic Ltd Tel: +44 (0) 1223 503464
                        182-190 Newmarket Road Fax: +44 (0) 1728 727430
                        Cambridge, CB5 8HE, United Kingdom WWW: http://www.tematic.com/

                        Comment

                        • Charlie Gordon

                          #13
                          Re: Why in stdint.h have both least and fast integer types?

"Gordon Burditt" <gordonb.4sa97@burditt.org> wrote in message
news:coaq1f$jc0@library1.airnews.net...

> >If, on the other hand,
> >these were stored in arrays
> > int_least16_t fred [10000];
> >why not let the compiler choose whether to store as int16 or int32,
> >depending on its optimization constraints?
....
> void *vp;
> size_t record_count;
>
> qsort(vp, record_count, sizeof(int_least16_t), compar);

                          Not a very safe way to call qsort().
I would recommend that vp be of the proper type and be used for the sizeof
operation:

                          int_least16_t fred[10000];
                          size_t record_count;
                          ....
                          int_least16_t *vp = fred;

                          qsort(vp, record_count, sizeof(*vp), compar);

It is a pity our favorite language cannot manipulate types with more ease.
This would allow much safer definitions such as:

typedef void T; /* T can be any type */
void qsort(T *, size_t, size_t == sizeof(T), int (*comp)(const T *, const T *));
/* T can be any type, but parameter consistency can be enforced. */

This kind of template would not require any run time support and would generate
generic code, but would allow enforcing type consistency, without opening the
C++ template Pandora's box.

                          --
                          Chqrlie.





                          Comment

                          • Flash Gordon

                            #14
                            Re: Why in stdint.h have both least and fast integer types?

On Wed, 1 Dec 2004 18:35:19 -0000
"James Harris" <no.email.please> wrote:

> "Kevin Bracey" <kevin.bracey@tematic.com> wrote in message
> news:cc6f86154d.kbracey@tematic.com...
> > In message <79062cd3.0411270240.29d3a4f7@posting.google.com>
> > groupstudy2001@yahoo.co.uk (GS) wrote:
> >
> >> The stdint.h header definition mentions five integer categories,
> >>
> >> 1) exact width, eg., int32_t
> >> 2) at least as wide as, eg., int_least32_t
> >> 3) as fast as possible but at least as wide as, eg., int_fast32_t
> >> 4) integer capable of holding a pointer, intptr_t
> >> 5) widest integer in the implementation, intmax_t
> >>
> >> Is there a valid motivation for having both int_least and int_fast?
> >
> > The point you missed is that the _least types are supposed to be the
> > *smallest* types at least as wide, as opposed to the *fastest*,
> > which are designated by _fast.
>
> The *smallest* type at least as wide as 16 is of width 16, no?

Only on implementations *having* a type that is exactly 16 bits wide.

> If it
> is impossible to support an integer of width 16 (18-bit word, for
> instance) how does the implementation deal with this standard's
> int16_t?

That's simple. It does not define int16_t.
> > A typical example might be the ARM, which (until ARMv4) had no
> > 16-bit memory
> > access instructions, and still has only 32-bit registers and
> > arithmetic instructions. There int_least16_t would be 16-bit, but
> > int_fast16_t might be
> > 32-bit.
> >
> > How you decide what's "fastest" is the tricky bit.
>
> Absolutely! There is no point making a data type "fast" if it is to be
> repeatedly compared with values which are not the same width. Of
> course, operations are fast or slow, not data values. Is the standard
> confusing two orthogonal issues?

                            There are generally speed issues which are related to size, such as a
                            system with a 32 bit address bus that can quickly access a 32 bit type
                            but has to either mask or shift the data to access a 16 bit value.

                            <snip>
> I can see your point here. It's a subtlety. I still wonder, though, if
> I wouldn't prefer to specify that array as int16_t. Specifying
> int_least16_t is making me a hostage to the compiler. If I am taking
> into account the architecture of the underlying machine (in this
> case, primary cache size) wouldn't I be better writing more precise
> requirements than int_leastX_t?

Well, what happens when you port the SW to a 32 bit DSP processor that has
absolutely no support for 16 bit data and therefore does not provide int16_t?
Such an implementation can easily provide both int_least16_t and
int_fast16_t, although they would both be identical to int32_t.
                            --
                            Flash Gordon
                            Living in interesting times.
                            Although my email address says spam, it is real and I read it.

                            Comment

                            • Lawrence Kirby

                              #15
                              Re: Why in stdint.h have both least and fast integer types?

On Wed, 01 Dec 2004 18:03:55 +0000, James Harris wrote:

>
> "Gordon Burditt" <gordonb.4sa97@burditt.org> wrote in message
> news:coaq1f$jc0@library1.airnews.net...
>>>>> The stdint.h header definition mentions five integer categories,
>>>>>
>>>>> 1) exact width, eg., int32_t
>>>>> 2) at least as wide as, eg., int_least32_t
>>>>> 3) as fast as possible but at least as wide as, eg., int_fast32_t
>>>>> 4) integer capable of holding a pointer, intptr_t
>>>>> 5) widest integer in the implementation, intmax_t
>>>>>
>>>>> Is there a valid motivation for having both int_least and int_fast?
>>>>
>>>> Of course. If 16 bit integers are slow in your hardware, and 32 bit
>>>> integers are fast, then you would want int_least16_t to be 16 bit, and
>>>> int_fast16_t to be 32 bit. That covers about every computer that you
>>>> can
>>>> buy in a shop.
>>>
>>>Interesting example but what advantage does int_least16_t really give? If
>>
>> Space savings.
>
> For scalars?

                              It probably makes sense to stick with fast variants for scalars. The
                              typical use for least variants would be in arrays and structures.
>>>we are talking about a few scalars wouldn't it be OK to let the compiler
>>>represent them as int32s since they are faster?
>>
>> The programmer asked for memory savings over speed savings by
>> using int_least16_t over int_fast16_t. Speed doesn't do much good
>> if the program won't fit in (virtual) memory.
>
> You expect to run out of memory? If that is really a problem why not use
> int16_t?

                              Because it has no advantages over int_least16_t (except that it is shorter
                              to type, and maybe some minor modulo properties) and has a portability
                              disadvantage.
> More to the point, memory constraints are more likely to be a feature of
> PICs or similar. In that case I would want to be able to tell the compiler
> to fit the code in X words but to still optimize to be as fast as possible.

That depends on whether the code or the data is subject to the memory
constraint (or some combination).
>> The few scalars might be deliberately made the same type as that
>> of a big array (or disk file) used in another compilation unit.
>> One example of this is storing data in dbm files using a third-party
>> library. When you retrieve data from dbm files, you get back a
>> pointer to the data, but it seems like it's usually pessimally
>> aligned, and in any case the dbm functions do not guarantee alignment,
>> so the way to use it is to memcpy() to a variable/structure of the
>> same type, and access it there. This fails if different compilations
>> have different sizes for int_least16_t.
>
> Agreed but better, surely, to define the interface using int16_t. I expect
> that int_least16_t would be different for different implementations, making
> them incompatible with each other. This is an argument against the presence
> of int_least16_t.

                              Using int16_t doesn't fix the representation issues (byte order etc.). To
                              do this properly the external data format should be kept separate from the
                              representation of any internal datatype. You can do this just as well with
                              int_least16_t as int16_t.
>>>If, on the other hand,
>>>these were stored in arrays
>>> int_least16_t fred [10000];
>>>why not let the compiler choose whether to store as int16 or int32,
>>>depending on its optimization constraints?

                              If the compiler has a 16 bit datatype available then this is what it
                              should use for int_least16_t. This means that int_least16_t is equivalent
                              to int16_t where there is a 16 bit type available, and int_least16_t
                              still works where there isn't. Indeed it is difficult to think of a
                              situation where using int16_t is a sensible idea.
>> sizeof(int_least16_t) must be the same in all compilation units
>> that get linked together to make a program. (Of course, array
>> subscripting, allocating a variable or array of int_least16_t, and
>> pointer incrementing all implicitly use that size.) The optimizer
>> doesn't get much info on what size to make int_least16_t when the
>> only reference to it is:
>>
>> void *vp;
>> size_t record_count;
>>
>> qsort(vp, record_count, sizeof(int_least16_t), compar);
>>
>> However, using that information, the compiler *MUST* choose now.
>> Perhaps before the part that actually allocates the array vp points
>> at is even written.

Changing the model of object representation based on optimisation issues
seems a really bad idea even if it could be made to work. Keep it
simple - use the smallest available type. The programmer should be aware
that this saves data space, but not necessarily code space. He can then
make the appropriate judgements rather than having to second-guess the
compiler.
> Again, perhaps this is better written as int16_t, though I am beginning to
> see there could be benefits to separating int_fast16_t.

int16_t doesn't make much sense here: if you are worried about the space
used by the array, use int_least16_t; otherwise use int_fast16_t. C programmers
                              have been doing that for years in a less formal way (the types are called
                              short and int). It is an approach that has proved to work very well.

                              Lawrence

                              Comment
