May the size argument of operator new overflow?

  • James Kanze

    #16
    Re: May the size argument of operator new overflow?

    On Jun 18, 5:40 pm, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
    James Kanze wrote:
    On Jun 18, 11:44 am, "Angel Tsankov" <fn42...@fmi.uni-sofia.bg> wrote:
    Does the C++ standard define what happens when the size
    argument of void* operator new(size_t size) cannot represent
    the total number of bytes to be allocated?
    For example:
    struct S
    {
    char a[64];
    };
    S* allocate(int size)
    {
    return new S[size]; // What happens here?
    }
    int main()
    {
    allocate(0x7FFFFFFF);
    }
    Supposing that all values in an int can be represented in a
    size_t (i.e. that size_t is unsigned int or larger---very, very
    probably), then you should either get the memory, or get a
    bad_alloc exception (which you don't catch). That's according
    to the standard; a lot of implementations seem to have bugs
    here.
    I think, you are missing a twist that the OP has hidden within
    his posting: the size of S is at least 64. The number of S
    objects that he requests is close to
    numeric_limits<size_t>::max().
    It's not on the systems I usually use, but that's not the point.
    So when new S[size] is translated into raw memory allocation,
    the number of bytes (not the number of S objects) requested
    might exceed numeric_limits<size_t>::max().
    And? That's the implementation's problem, not mine. I don't
    see anything in the standard which authorizes special behavior
    in this case.
    I think (based on my understanding of [5.3.4/12]) that in such
    a case, the unsigned arithmetic will just silently overflow
    and you end up allocating a probably unexpected amount of
    memory.
    Could you please point to something in §5.3.4/12 (or elsewhere)
    that says anything about "unsigned arithmetic". I only have a
    recent draft here, but it doesn't say anything about using
    unsigned arithmetic, or that the rules of unsigned arithmetic
    apply for this calculation, or even that there is a calculation. (It is
    a bit vague, I'll admit, since it says "A new-expression passes
    the amount of space requested to the allocation function as the
    first argument of type std::size_t." It doesn't really say
    what happens if the "amount of space" isn't representable in a
    size_t. But since it's clear that the request can't be honored,
    the only reasonable interpretation is that you get a bad_alloc.)
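
    To see the failure mode concretely: with a 32-bit size_t and
    sizeof(S) == 64, naive unsigned arithmetic on the requested size
    silently wraps. A minimal sketch, assuming a 32-bit size_t
    (uint32_t stands in for it):

    // What naive unsigned arithmetic does with the OP's numbers.
    #include <cstdint>
    #include <iostream>

    int main()
    {
        std::uint32_t n = 0x7FFFFFFF; // element count, as in allocate(0x7FFFFFFF)
        std::uint32_t bytes = n * 64; // wraps modulo 2^32
        std::cout << bytes << '\n';   // prints 4294967232, not 2^37 - 64
    }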

    --
    James Kanze (GABI Software) email: james.kanze@gmail.com
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


    • James Kanze

      #17
      Re: May the size argument of operator new overflow?

      On Jun 18, 9:16 pm, Ian Collins <ian-n...@hotmail.com> wrote:
      Angel Tsankov wrote:
      Bo Persson wrote:
      Here is what one compiler does - catch the overflow and
      wrap it back to numeric_limits<size_t>::max().
      int main()
      {
      allocate(0x7FFFFFFF);
      00401000 xor ecx,ecx
      00401002 mov eax,7FFFFFFFh
      00401007 mov edx,40h
      0040100C mul eax,edx
      0040100E seto cl
      00401011 neg ecx
      00401013 or ecx,eax
      00401015 push ecx
      00401016 call operator new[] (401021h)
      0040101B add esp,4
      }
      0040101E xor eax,eax
      00401020 ret
      Yes, the size requested is rounded to the maximum
      allocatable size, but is this standard-compliant behavior?
      If the implementation can be sure that the call to operator
      new[] will fail, it's probably the best solution. (This would
      be the case, for example, if it really was impossible to
      allocate that much memory.)
      And if it is, how is client code notified of the rounding?
      It doesn't have to be.
      Your question has nothing to do with operator new() and
      everything to do with integer overflow.
      His question concerned operator new. Not unsigned integral
      arithmetic.
      The reason some of us answered the way we did is probably
      because we are used to systems where sizeof(int) == 4 and
      sizeof(size_t) == 8, so your original code would simply have
      requested 32GB, not a lot on some systems.
      Or because we take the standard literally.
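
      The disassembly above uses an overflow-and-saturate idiom: mul
      sets the overflow flag, seto/neg turn that flag into an all-ones
      mask, and the final or forces the byte count to the maximum
      size_t value, so the call to operator new[] is guaranteed to
      fail. A rough portable C++ equivalent (a sketch, not the
      compiler's actual code):

      // Saturating size computation: on overflow, return the maximum
      // size_t value so that the subsequent allocation must fail.
      #include <cstddef>
      #include <limits>

      std::size_t saturated_bytes(std::size_t count, std::size_t elem_size)
      {
          std::size_t bytes = count * elem_size;                // may wrap
          if (elem_size != 0 && bytes / elem_size != count)     // did it wrap?
              bytes = std::numeric_limits<std::size_t>::max();  // saturate
          return bytes;
      }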

      --
      James Kanze (GABI Software) email: james.kanze@gmail.com
      Conseils en informatique orientée objet/
      Beratung in objektorientierter Datenverarbeitung
      9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


      • James Kanze

        #18
        Re: May the size argument of operator new overflow?

        On Jun 18, 7:53 pm, Jerry Coffin <jcof...@taeus.com> wrote:
        In article <g3alej$t...@aioe.org>, fn42...@fmi.uni-sofia.bg says...
        Does the C++ standard define what happens when the size
        argument of void* operator new(size_t size) cannot represent
        the total number of bytes to be allocated? For example:
        struct S
        {
        char a[64];
        };
        S* allocate(int size)
        {
        return new S[size]; // What happens here?
        }
        int main()
        {
        allocate(0x7FFFFFFF);
        }
        Chances are pretty good that at some point, you get something
        like:
        void *block = ::operator new[](0x7FFFFFFF*64);
        There are a lot of implementations that do that. Luckily,
        there's nothing in the standard which allows it.

        --
        James Kanze (GABI Software) email: james.kanze@gmail.com
        Conseils en informatique orientée objet/
        Beratung in objektorientierter Datenverarbeitung
        9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


        • James Kanze

          #19
          Re: May the size argument of operator new overflow?

          On Jun 18, 9:24 pm, Paavo Helde <nob...@ebi.ee> wrote:
          Jerry Coffin <jcof...@taeus.com> wrote:
          [...]
          The standard says that for too large allocations
          std::bad_alloc must be thrown. In the user code there is no
          unsigned arithmetic done, thus no wraparound can occur. I
          would say that if the implementation does not check for the
          overflow and silently wraps the result, the implementation
          does not conform to the standard. It is irrelevant if the
          implementation uses unsigned arithmetics inside, or e.g.
          double.
          I have not studied the standard in detail, so this is just my
          opinion how it should work.
          I have studied the standard in some detail, and your analysis is
          clearly correct. Whether this is actually what the authors
          meant to say is another question, but it is clearly what the
          standard says. It is also obviously how it should work, from a
          quality of implementation point of view. Anything else more or
          less makes array new unusable. (On the other hand: who cares?
          In close to twenty years of C++ programming, I've yet to find a
          use for array new.)

          --
          James Kanze (GABI Software) email: james.kanze@gmail.com
          Conseils en informatique orientée objet/
          Beratung in objektorientierter Datenverarbeitung
          9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


          • Kai-Uwe Bux

            #20
            Re: May the size argument of operator new overflow?

            James Kanze wrote:
            On Jun 18, 5:40 pm, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
            >James Kanze wrote:
            On Jun 18, 11:44 am, "Angel Tsankov" <fn42...@fmi.uni-sofia.bg> wrote:
            >Does the C++ standard define what happens when the size
            >argument of void* operator new(size_t size) cannot represent
            >the total number of bytes to be allocated?
            >
            >For example:
            >
            >struct S
            >{
            > char a[64];
            >};
            >
            >S* allocate(int size)
            >{
            > return new S[size]; // What happens here?
            >}
            >
            >int main()
            >{
            > allocate(0x7FFFFFFF);
            >}
            >
            Supposing that all values in an int can be represented in a
            size_t (i.e. that size_t is unsigned int or larger---very, very
            probably), then you should either get the memory, or get a
            bad_alloc exception (which you don't catch). That's according
            to the standard; a lot of implementations seem to have bugs
            here.
            >
            >I think, you are missing a twist that the OP has hidden within
            >his posting: the size of S is at least 64. The number of S
            >objects that he requests is close to
            >numeric_limits<size_t>::max().
            >
            It's not on the systems I usually use, but that's not the point.
            >
            >So when new S[size] is translated into raw memory allocation,
            >the number of bytes (not the number of S objects) requested
            >might exceed numeric_limits<size_t>::max().
            >
            And? That's the implementation's problem, not mine. I don't
            see anything in the standard which authorizes special behavior
            in this case.
            The question is what behavior is "special". I do not see which behavior the
            standard requires in this case.

            >I think (based on my understanding of [5.3.4/12]) that in such
            >a case, the unsigned arithmetic will just silently overflow
            >and you end up allocating a probably unexpected amount of
            >memory.
            >
            Could you please point to something in §5.3.4/12 (or elsewhere)
            that says anything about "unsigned arithmetic".
            I qualified my statement by "I think" simply because the standard is vague
            to me. However, it says for instance

            new T[5] results in a call of operator new[](sizeof(T)*5+x),

            and operator new takes its argument as std::size_t. Now, whenever any
            arithmetic type is converted to std::size_t, I would expect [4.7/2] to
            apply since size_t is unsigned. When the standard does not say that usual
            conversion rules do not apply in the evaluation of the expression

            sizeof(T)*5+x

            what am I to conclude?
            I only have a
            recent draft here, but it doesn't say anything about using
            unsigned arithmetic, or that the rules of unsigned arithmetic
            apply for this calculation, or even that there is a calculation.
            It gives the formula above. It does not really matter whether you interpret

            sizeof(T)*5+x

            as unsigned arithmetic or as plain math. A conversion to std::size_t has to
            happen at some point because of the signature of the allocation function.
            If [4.7/2] is not meant to apply to that conversion, the standard should
            say that somewhere.
            (It is
            a bit vague, I'll admit, since it says "A new-expression passes
            the amount of space requested to the allocation function as the
            first argument of type std::size_t." It doesn't really say
            what happens if the "amount of space" isn't representable in a
            size_t.
            So you see: taken literally, the standard guarantees something impossible
            to happen.
            But since it's clear that the request can't be honored,
            the only reasonable interpretation is that you get a bad_alloc.)
            Hm, that is a mixture of common sense and wishful thinking :-)

            I agree that a bad_alloc is clearly what I would _want_ to get. I do not
            see, however, how to argue from the wording of the standard that I _will_
            get that.


            Best

            Kai-Uwe Bux


            • Jerry Coffin

              #21
              Re: May the size argument of operator new overflow?

              In article <Xns9AC1E3EAF7668nobodyebiee@216.196.97.131>, nobody@ebi.ee
              says...

              [ ... ]
              The standard says that for too large allocations std::bad_alloc must be
              thrown. In the user code there is no unsigned arithmetic done, thus no
              wraparound can occur. I would say that if the implementation does not
              check for the overflow and silently wraps the result, the implementation
              does not conform to the standard. It is irrelevant if the implementation
              uses unsigned arithmetics inside, or e.g. double.
              >
              I have not studied the standard in detail, so this is just my opinion how
              it should work.
              Though it's in a non-normative note, the standard says (§5.3.4/12):

              new T[5] results in a call of operator new[](sizeof(T)*5+x)

              Even though that's a note, I think it's going to be hard to say it's
              _wrong_ for an implementation to do exactly what that says -- and if
              sizeof(T) is the maximum value for size_t, the expression above will
              clearly wrap around...

              --
              Later,
              Jerry.

              The universe is a figment of its own imagination.


              • Ian Collins

                #22
                Re: May the size argument of operator new overflow?

                James Kanze wrote:
                On Jun 18, 9:16 pm, Ian Collins <ian-n...@hotmail.com> wrote:
                >
                >Your question has nothing to do with operator new() and
                >everything to do with integer overflow.
                >
                His question concerned operator new. Not unsigned integral
                arithmetic.
                >
                He asked:

                S* allocate(std::size_t size)
                {
                return new S[size]; // How many bytes of memory must the new operator
                allocate if size equals std::numeric_limits<size_t>::max()?
                }

                Which boils down to: what is N*std::numeric_limits<size_t>::max()?

                --
                Ian Collins.


                • James Kanze

                  #23
                  Re: May the size argument of operator new overflow?

                  On Jun 18, 11:09 pm, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
                  James Kanze wrote:
                  On Jun 18, 5:40 pm, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
                  James Kanze wrote:
                  On Jun 18, 11:44 am, "Angel Tsankov" <fn42...@fmi.uni-sofia.bg> wrote:
                  Does the C++ standard define what happens when the size
                  argument of void* operator new(size_t size) cannot represent
                  the total number of bytes to be allocated?
                  For example:
                  struct S
                  {
                  char a[64];
                  };
                  S* allocate(int size)
                  {
                  return new S[size]; // What happens here?
                  }
                  int main()
                  {
                  allocate(0x7FFFFFFF);
                  }
                  Supposing that all values in an int can be represented in a
                  size_t (i.e. that size_t is unsigned int or larger---very, very
                  probably), then you should either get the memory, or get a
                  bad_alloc exception (which you don't catch). That's according
                  to the standard; a lot of implementations seem to have bugs
                  here.
                  I think, you are missing a twist that the OP has hidden within
                  his posting: the size of S is at least 64. The number of S
                  objects that he requests is close to
                  numeric_limits<size_t>::max().
                  It's not on the systems I usually use, but that's not the point.
                  So when new S[size] is translated into raw memory allocation,
                  the number of bytes (not the number of S objects) requested
                  might exceed numeric_limits<size_t>::max().
                  And? That's the implementation's problem, not mine. I don't
                  see anything in the standard which authorizes special behavior
                  in this case.
                  The question is what behavior is "special". I do not see which
                  behavior the standard requires in this case.
                  I agree that it's not as clear as it could be, but the standard
                  says that "A new-expression passes the amount of space requested
                  to the allocation function as the first argument of type std::
                  size_t." That's clear enough (and doesn't talk about
                  arithmetic; how the compiler knows how much to allocate is an
                  implementation detail, as long as it gets it right). The
                  problem is what happens when the "amount of space" cannot be
                  represented in a size_t; the standard seems to ignore this case,
                  but since it is clear that the requested allocation can't be
                  honored, the only reasonable interpretation is that the code
                  behave as if the requested allocation can't be honored: throw a
                  bad_alloc, unless the operator new function is nothrow, in which
                  case return a null pointer.
                  I think (based on my understanding of [5.3.4/12]) that in such
                  a case, the unsigned arithmetic will just silently overflow
                  and you end up allocating a probably unexpected amount of
                  memory.
                  Could you please point to something in §5.3.4/12 (or elsewhere)
                  that says anything about "unsigned arithmetic".
                  I qualified my statement by "I think" simply because the
                  standard is vague to me. However, it says for instance
                  new T[5] results in a call of operator new[](sizeof(T)*5+x),
                  and operator new takes its argument at std::size_t. Now,
                  whenever any arithmetic type is converted to std::size_t, I
                  would expect [4.7/2] to apply since size_t is unsigned. When
                  the standard does not say that usual conversion rules do not
                  apply in the evaluation of the expression
                  Note that code is part of a non-normative example, designed to
                  show one particular aspect, and not to be used as a normative
                  implementation.
                  sizeof(T)*5+x
                  what am I to conclude?
                  That the example is concerned about showing the fact that the
                  requested space may be larger than simply sizeof(T)*5, and
                  doesn't bother with other issues:-).
                  I only have a recent draft here, but it doesn't say anything
                  about using unsigned arithmetic, or that the rules of
                  unsigned arithmetic apply for this calculation, or even that
                  there is a calculation.
                  It gives the formula above. It does not really matter whether
                  you interpret
                  sizeof(T)*5+x
                  as unsigned arithmetic or as plain math. A conversion to
                  std::size_t has to happen at some point because of the
                  signature of the allocation function. If [4.7/2] is not meant
                  to apply to that conversion, the standard should say that
                  somewhere.
                  (It is a bit vague, I'll admit, since it says "A
                  new-expression passes the amount of space requested to the
                  allocation function as the first argument of type std::
                  size_t." It doesn't really say what happens if the "amount
                  of space" isn't representable in a size_t.
                  So you see: taken literally, the standard guarantees
                  something impossible to happen.
                  More or less. And since the compiler can't honor impossible
                  requests, the request must fail somehow. The question is how:
                  undefined behavior or something defined? In the case of
                  operator new, the language has specified a defined behavior for
                  cases where the request fails.

                  There are two ways to interpret this: at least one school claims
                  that if the system cannot honor your request, you've exceeded
                  its resource limit, and so undefined behavior ensues. While the
                  standard says you must get a bad_alloc, it's not really required
                  because of this undefined behavior. This logic has often been
                  presented as a justification of lazy commit. (Note that from
                  the user point of view, the results of overflow here or lazy
                  commit are pretty much the same: you get an apparently valid
                  pointer back, and then core dump when you try to access the
                  allocated memory.)

                  Note that the problem is more general. Given something like:

                  struct S { char c[ SIZE_MAX / 4 ] ; } ;
                  std::vector< S > v( 2 ) ;
                  v.at( 4 ) ;

                  am I guaranteed to get an exception? (Supposing that I didn't
                  get a bad_alloc in the constructor of v.)
                  But since it's clear that the request can't be honored,
                  the only reasonable interpretation is that you get a bad_alloc.)
                  Hm, that is a mixture of common sense and wishful thinking :-)
                  Maybe:-). I think that the wording of the standard here is
                  vague enough that you have to use common sense to interpret it.

                  In some ways, the problem is similar to that of what happens to
                  the allocated memory if the constructor in a new expression
                  throws. The ARM didn't specify clearly, but "common sense" says
                  that the compiler must free it. Most implementations ignored
                  common sense, but when put to the point, the committee clarified
                  the issue in the direction of common sense.
                  I agree that a bad_alloc is clearly what I would _want_ to
                  get. I do not see, however, how to argue from the wording of
                  the standard that I _will_ get that.
                  The absence of any specific liberty to do otherwise?

                  --
                  James Kanze (GABI Software) email: james.kanze@gmail.com
                  Conseils en informatique orientée objet/
                  Beratung in objektorientierter Datenverarbeitung
                  9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


                  • Greg Herlihy

                    #24
                    Re: May the size argument of operator new overflow?

                    On Jun 18, 1:48 pm, James Kanze <james.ka...@gmail.com> wrote:
                    On Jun 18, 5:40 pm, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
                    >
                    >
                    >
                    James Kanze wrote:
                    On Jun 18, 11:44 am, "Angel Tsankov" <fn42...@fmi.uni-sofia.bg> wrote:
                    >Does the C++ standard define what happens when the size
                    >argument of void* operator new(size_t size) cannot represent
                    >the total number of bytes to be allocated?
                    >For example:
                    >struct S
                    >{
                    > char a[64];
                    >};
                    >S* allocate(int size)
                    >{
                    > return new S[size]; // What happens here?
                    >}
                    >int main()
                    >{
                    > allocate(0x7FFFFFFF);
                    >}
                    Supposing that all values in an int can be represented in a
                    size_t (i.e. that size_t is unsigned int or larger---very, very
                    probably), then you should either get the memory, or get a
                    bad_alloc exception (which you don't catch).  That's according
                    to the standard; a lot of implementations seem to have bugs
                    here.
                    I think, you are missing a twist that the OP has hidden within
                    his posting: the size of S is at least 64. The number of S
                    objects that he requests is close to
                    numeric_limits<size_t>::max().
                    >
                    It's not on the systems I usually use, but that's not the point.
                    >
                    So when new S[size] is translated into raw memory allocation,
                    the number of bytes (not the number of S objects) requested
                    might exceed numeric_limits<size_t>::max().
                    >
                    And?  That's the implementation's problem, not mine.  I don't
                    see anything in the standard which authorizes special behavior
                    in this case.
                    >
                    I think (based on my understanding of [5.3.4/12]) that in such
                    a case, the unsigned arithmetic will just silently overflow
                    and you end up allocating a probably unexpected amount of
                    memory.
                    >
                    Could you please point to something in §5.3.4/12 (or elsewhere)
                    that says anything about "unsigned arithmetic".  I only have a
                    recent draft here, but it doesn't say anything about using
                    unsigned arithmetic, or that the rules of unsigned arithmetic
                    apply for this calculation, or even that there is a calculation.
                    The problem in this case is that the calculated size of the array,
                    sizeof(T) * N, wraps around if the result of the multiplication
                    overflows. And the wrap is silent - because size_t is required to
                    be an unsigned integral type, the overflow has defined, modular
                    behavior.

                    So it can well be the case that the size of the memory request as
                    passed to the allocation function winds up being small enough to be
                    allocated (due to the overflow), even though the size of the needed
                    memory allocation is much larger. So the behavior of a program that
                    attempts to allocate an array of N T objects (when N*sizeof(T)
                    overflows) is undefined.

                    Moreover, the C++ Standards Committee agrees with this interpretation
                    - but has (so far) decided not to require that std::bad_alloc be
                    thrown in this situation. They reasoned:

                    "Each implementation is required to document the maximum size of an
                    object (Annex B [implimits]). It is not difficult for a program to
                    check array allocations to ensure that they are smaller than this
                    quantity. Implementations can provide a mechanism in which users
                    concerned with this problem can request extra checking before array
                    allocations, just as some implementations provide checking for array
                    index and pointer validity. However, it would not be appropriate to
                    require this overhead for every array allocation in every program."

                    See: http://www.open-std.org/JTC1/SC22/WG...n2506.html#256
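
                    For illustration, the user-side check the rationale envisions
                    might look like the sketch below; max_object_size is a
                    hypothetical constant standing in for the Annex B limit, which
                    an implementation documents but does not expose to code:

                    // Sketch of a user-side guard. max_object_size is assumed:
                    // the Annex B limit must be copied by hand from the
                    // implementation's documentation.
                    #include <cstddef>
                    #include <new>

                    struct S { char a[64]; };

                    const std::size_t max_object_size = 0x7FFFFFFF; // assumed

                    S* checked_allocate(std::size_t count)
                    {
                        if (count > max_object_size / sizeof(S))
                            throw std::bad_alloc(); // refuse before the math wraps
                        return new S[count];
                    }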

                    This same issue has since been reopened (#624) with the proposed
                    additional wording:

                    "If the value of the expression is such that the size of the allocated
                    object would exceed the implementation-defined limit, an exception of
                    type std::bad_alloc is thrown and no storage is obtained."

                    See: http://www.open-std.org/JTC1/SC22/WG...n2504.html#624

                    But until and unless Issue #624 is adopted, the behavior of a program
                    that makes an oversized allocation request - is undefined.

                    Greg


                    • James Kanze

                      #25
                      Re: May the size argument of operator new overflow?

                      On Jun 20, 1:03 am, Greg Herlihy <gre...@mac.com> wrote:
                      On Jun 18, 1:48 pm, James Kanze <james.ka...@gmail.com> wrote:
                      On Jun 18, 5:40 pm, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
                      [...]
                      I think (based on my understanding of [5.3.4/12]) that in such
                      a case, the unsigned arithmetic will just silently overflow
                      and you end up allocating a probably unexpected amount of
                      memory.
                      Could you please point to something in §5.3.4/12 (or elsewhere)
                      that says anything about "unsigned arithmetic". I only have a
                      recent draft here, but it doesn't say anything about using
                      unsigned arithmetic, or that the rules of unsigned arithmetic
                      apply for this calculation, or even that there is a calculation.
                      The problem in this case that the calculated size of the array:
                      sizeof(T) * N wraps around if the result of the multiplication
                      overflows. The product is certain to overflow - because size_t is
                      required to be an unsigned integral type.
                      As I said, that's the implementation's problem, not mine:-).
                      So it can well be the case that the size of the memory request as
                      passed to the allocation function winds up being small enough to be
                      allocated (due to the overflow), even though the size of the needed
                      memory allocation is much larger. So the behavior of a program that
                      attempts to allocate an array of N T objects (when N*sizeof(T)
                      overflows) is undefined.
                      Moreover, the C++ Standards Committee agrees with this interpretation
                      - but has (so far) decided not to require that std::bad_alloc be
                      thrown in this situation. They reasoned:
                      "Each implementation is required to document the maximum size of an
                      object (Annex B [implimits]). It is not difficult for a program to
                      check array allocations to ensure that they are smaller than this
                      quantity. Implementations can provide a mechanism in which users
                      concerned with this problem can request extra checking before array
                      allocations, just as some implementations provide checking for array
                      index and pointer validity. However, it would not be appropriate to
                      require this overhead for every array allocation in every program."
                      I thought that there was a DR about this, but I couldn't
                      remember exactly. Thanks for the reference.

                      Regretfully, the rationale is technically incorrect; the user
                      hasn't the slightest way of knowing whether the required
                      arithmetic will overflow. (Remember, the equation is
                      n*sizeof(T)+e, where e is unspecified, and may even vary between
                      invocations of new. And since you can't know e, you're screwed
                      unless the compiler---which does know e---does something about
                      it.)
                      This same issue has since been reopened (#624) with the proposed
                      additional wording:
                      "If the value of the expression is such that the size of the allocated
                      object would exceed the implementation-defined limit, an exception of
                      type std::bad_alloc is thrown and no storage is obtained."
                      But until and unless Issue #624 is adopted, the behavior of a
                      program that makes an oversized allocation request - is
                      undefined.
                      In other words:

                      struct S { char c[2] ; } ;
                      new S[2] ;

                      is undefined, since e could be something outrageously large.

                      Also, while an implementation is required to document the
                      implementation-defined limit of the size of an object (lots of
                      luck finding that documentation), it doesn't make this value
                      available in any standard form within the code, so you can't
                      write any portable checks against it. (Of course, you can write
                      portable checks against std::numeric_limits<size_t>::max(),
                      which would be sufficient if there wasn't that e.)
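
                      Concretely, the strongest portable check a program can make,
                      and the hole that e leaves in it, might look like this
                      sketch:

                      // The most a portable program can verify: n * sizeof(T)
                      // itself fits in size_t. The implementation's unspecified
                      // overhead e can still push n*sizeof(T)+e past the maximum.
                      #include <cstddef>
                      #include <limits>

                      template <class T>
                      bool product_fits(std::size_t n)
                      {
                          return n <= std::numeric_limits<std::size_t>::max()
                                          / sizeof(T);
                      }
                      // Even when product_fits<T>(n) is true, the real request
                      // n*sizeof(T)+e may overflow for an e the program cannot
                      // know.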

                      --
                      James Kanze (GABI Software) email: james.kanze@gmail.com
                      Conseils en informatique orientée objet/
                      Beratung in objektorientierter Datenverarbeitung
                      9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


                      • peter koch

                        #26
                        Re: May the size argument of operator new overflow?

                        On 20 Jun., 18:34, James Kanze <james.ka...@gmail.com> wrote:
                        On Jun 20, 1:03 am, Greg Herlihy <gre...@mac.com> wrote:
                        >
                        [snip]
                        Moreover, the C++ Standards Committee agrees with this interpretation
                        - but has (so far) decided not to require that std::bad_alloc be
                        thrown in this situation. They reasoned:
                        "Each implementation is required to document the maximum size of an
                        object (Annex B [implimits]). It is not difficult for a program to
                        check array allocations to ensure that they are smaller than this
                        quantity. Implementations can provide a mechanism in which users
                        concerned with this problem can request extra checking before array
                        allocations, just as some implementations provide checking for array
                        index and pointer validity. However, it would not be appropriate to
                        require this overhead for every array allocation in every program."
                        See: http://www.open-std.org/JTC1/SC22/WG...n2506.html#256
                        >
                        I thought that there was a DR about this, but I couldn't
                        remember exactly.  Thanks for the reference.
                        >
                        Regretfully, the rationale is technically incorrect; the user
                        hasn't the slightest way of knowing whether the required
                        arithmetic will overflow.  (Remember, the equation is
                        n*sizeof(T)+e, where e is unspecified, and may even vary between
                        invocations of new.  And since you can't know e, you're screwed
                        unless the compiler---which does know e---does something about
                        it.)
                        I believe that turning off error detection here is the wrong
                        direction. C++ does not need one more situation where an
                        entirely reasonable error check is left to the compiler's
                        discretion. Also, how expensive is the check? I cannot
                        imagine any program where checking for overflow would lead
                        to either bloated code or a performance degradation that is
                        at all perceptible.
                        I know that well-written programs rarely (if ever) need
                        new[], but the check should be made precisely for the weaker
                        programmers, who might risk passing a negative value as the
                        size.

                        /Peter
                        >
                        This same issue has since been reopened (#624) with the proposed
                        additional wording:
                        "If the value of the expression is such that the size of the allocated
                        object would exceed the implementation-defined limit, an exception of
                        type std::bad_alloc is thrown and no storage is obtained."
                        See: http://www.open-std.org/JTC1/SC22/WG...n2504.html#624
                        But until and unless Issue #624 is adopted, the behavior of a
                        program that makes an oversized allocation request - is
                        undefined.
                        >
                        In other words:
                        >
                            struct S { char c[2] ; } ;
                            new S[2] ;
                        >
                        is undefined, since e could be something outrageously large.
                        >
                        Also, while an implementation is required to document the
                        implementation-defined limit of the size of an object (lots of
                        luck finding that documentation), it doesn't make this value
                        available in any standard form within the code, so you can't
                        write any portable checks against it.  (Of course, you can write
                        portable checks against std::numeric_limits<size_t>::max(),
                        which would be sufficient if there wasn't that e.)
                        Right - but why should you bother in the first place?

                        /Peter


                        • Greg Herlihy

                          #27
                          Re: May the size argument of operator new overflow?

                          On Jun 20, 9:34 am, James Kanze <james.ka...@gmail.com> wrote:
                          On Jun 20, 1:03 am, Greg Herlihy <gre...@mac.com> wrote:
                          >
                          Moreover, the C++ Standards Committee agrees with this interpretation
                          - but has (so far) decided not to require that std::bad_alloc be
                          thrown in this situation. They reasoned:
                          "Each implementation is required to document the maximum size of an
                          object (Annex B [implimits]). It is not difficult for a program to
                          check array allocations to ensure that they are smaller than this
                          quantity. Implementations can provide a mechanism in which users
                          concerned with this problem can request extra checking before array
                          allocations, just as some implementations provide checking for array
                          index and pointer validity. However, it would not be appropriate to
                          require this overhead for every array allocation in every program."
                          See: http://www.open-std.org/JTC1/SC22/WG...n2506.html#256
                          >
                          I thought that there was a DR about this, but I couldn't
                          remember exactly.  Thanks for the reference.
                          Actually, you deserve credit for filing Issue #256 (back in 2000), and
                          thereby first bringing this problem to the Committee's attention.
                          Regretfully, the rationale is technically incorrect; the user
                          hasn't the slightest way of knowing whether the required
                          arithmetic will overflow.  (Remember, the equation is
                          n*sizeof(T)+e, where e is unspecified, and may even vary between
                          invocations of new.  And since you can't know e, you're screwed
                          unless the compiler---which does know e---does something about
                          it.)
                          The rationale provided is unsatisfactory on any number of levels.
                          Perhaps the most obvious shortcoming with the Committee's solution is
                          that a sizable number of C++ programmers (if the responses on this
                          thread are any indication) believe that this problem does not - or
                          could not - exist. (In fact, I was not aware of its existence either -
                          before I read this thread).
                          This same issue has since been reopened (#624) with the proposed
                          additional wording:
                          "If the value of the expression is such that the size of the allocated
                          object would exceed the implementation-defined limit, an exception of
                          type std::bad_alloc is thrown and no storage is obtained."
                          See: http://www.open-std.org/JTC1/SC22/WG...n2504.html#624
                          But until and unless Issue #624 is adopted, the behavior of a
                          program that makes an oversized allocation request - is
                          undefined.
                          >
                          In other words:
                          >
                              struct S { char c[2] ; } ;
                              new S[2] ;
                          >
                          is undefined, since e could be something outrageously large.
                          In theory, yes. In practice, almost certainly not. The default
                          allocators supplied with g++ and Visual C++ do throw a std::bad_alloc
                          for any outsized memory allocation request - even when the size of the
                          requested allocation has overflowed. So the rationale provided by the
                          Committee seems not only out of touch with most C++ programmers'
                          expectations, but out of touch even with current C++ compiler
                          implementations.

                          Greg


                          • Pete Becker

                            #28
                            Re: May the size argument of operator new overflow?

                            On Jun 20, 9:34 am, James Kanze <james.ka...@gmail.com> wrote:
                            >On Jun 20, 1:03 am, Greg Herlihy <gre...@mac.com> wrote:
                            >>
                            >>Moreover, the C++ Standards Committee agrees with this interpretation
                            >>- but has (so far) decided not to require that std::bad_alloc be
                            >>thrown in this situation. They reasoned:
                            >>"Each implementation is required to document the maximum size of an
                            >>object (Annex B [implimits]). It is not difficult for a program to
                            >>check array allocations to ensure that they are smaller than this
                            >>quantity. Implementations can provide a mechanism in which users
                            >>concerned with this problem can request extra checking before array
                            >>allocations, just as some implementations provide checking for array
                            >>index and pointer validity. However, it would not be appropriate to
                            >>require this overhead for every array allocation in every program."
                            >>See: http://www.open-std.org/JTC1/SC22/WG...08/n2506.html#256
                            >>
                            >
                            >Regretfully, the rationale is technically incorrect; the user
                            >hasn't the slightest way of knowing whether the required
                            >arithmetic will overflow.  (Remember, the equation is
                            >n*sizeof(T)+e, where e is unspecified, and may even vary between
                            >invocations of new.  And since you can't know e, you're screwed
                            >unless the compiler---which does know e---does something about
                            >it.)
                            I think that, properly read, it's right. The object being allocated is
                            the array, and the size of the array is the size of an element times
                            the number of elements. That's the value that has to be compared to the
                            maximum size of an object. Any internal overhead is part of the
                            allocation, but not part of the object. The implementation has to allow
                            for internal overhead when it specifies the maximum size of an object.

                            --
                            Pete
                            Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
                            Standard C++ Library Extensions: a Tutorial and Reference"
                            (www.petebecker.com/tr1book)


                            • James Kanze

                              #29
                              Re: May the size argument of operator new overflow?

                              On Jun 21, 12:09 pm, Pete Becker <p...@versatilecoding.com> wrote:
                              On Jun 20, 9:34 am, James Kanze <james.ka...@gmail.com> wrote:
                              On Jun 20, 1:03 am, Greg Herlihy <gre...@mac.com> wrote:
                              >Moreover, the C++ Standards Committee agrees with this interpretation
                              >- but has (so far) decided not to require that std::bad_alloc be
                              >thrown in this situation. They reasoned:
                              >"Each implementation is required to document the maximum size of an
                              >object (Annex B [implimits]). It is not difficult for a program to
                              >check array allocations to ensure that they are smaller than this
                              >quantity. Implementations can provide a mechanism in which users
                              >concerned with this problem can request extra checking before array
                              >allocations, just as some implementations provide checking for array
                              >index and pointer validity. However, it would not be appropriate to
                              >require this overhead for every array allocation in every program."
                              >See: http://www.open-std.org/JTC1/SC22/WG...08/n2506.html#256
                              Regretfully, the rationale is technically incorrect; the user
                              hasn't the slightest way of knowing whether the required
                              arithmetic will overflow. (Remember, the equation is
                              n*sizeof(T)+e, where e is unspecified, and may even vary between
                              invocations of new. And since you can't know e, you're screwed
                              unless the compiler---which does know e---does something about
                              it.)
                              I think that, properly read, it's right. The object being
                              allocated is the array, and the size of the array is the size
                              of an element times the number of elements. That's the value
                              that has to be compared to the maximum size of an object. Any
                              internal overhead is part of the allocation, but not part of
                              the object. The implementation has to allow for internal
                              overhead when it specifies the maximum size of an object.
                              In other words (if I understand you correctly), an
                              implementation isn't required to check for overflow on the
                              multiplication, but it is required to check on the following
                              addition?

                              --
                              James Kanze (GABI Software) email: james.kanze@gmail.com
                              Conseils en informatique orientée objet/
                              Beratung in objektorientierter Datenverarbeitung
                              9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


                              • Pete Becker

                                #30
                                Re: May the size argument of operator new overflow?

                                On 2008-06-21 06:48:35 -0400, James Kanze <james.kanze@gmail.com> said:
                                On Jun 21, 12:09 pm, Pete Becker <p...@versatilecoding.com> wrote:
                                >>On Jun 20, 9:34 am, James Kanze <james.ka...@gmail.com> wrote:
                                >>>On Jun 20, 1:03 am, Greg Herlihy <gre...@mac.com> wrote:
                                >
                                >>>>Moreover, the C++ Standards Committee agrees with this interpretation
                                >>>>- but has (so far) decided not to require that std::bad_alloc be
                                >>>>thrown in this situation. They reasoned:
                                >>>>"Each implementation is required to document the maximum size of an
                                >>>>object (Annex B [implimits]). It is not difficult for a program to
                                >>>>check array allocations to ensure that they are smaller than this
                                >>>>quantity. Implementations can provide a mechanism in which users
                                >>>>concerned with this problem can request extra checking before array
                                >>>>allocations, just as some implementations provide checking for array
                                >>>>index and pointer validity. However, it would not be appropriate to
                                >>>>require this overhead for every array allocation in every program."
                                >>>>See: http://www.open-std.org/JTC1/SC22/WG...2008/n2506.html#256
                                >
                                >>>Regretfully, the rationale is technically incorrect; the user
                                >>>hasn't the slightest way of knowing whether the required
                                >>>arithmetic will overflow. (Remember, the equation is
                                >>>n*sizeof(T)+e, where e is unspecified, and may even vary between
                                >>>invocations of new. And since you can't know e, you're screwed
                                >>>unless the compiler---which does know e---does something about
                                >>>it.)
                                >
                                >I think that, properly read, it's right. The object being
                                >allocated is the array, and the size of the array is the size
                                >of an element times the number of elements. That's the value
                                >that has to be compared to the maximum size of an object. Any
                                >internal overhead is part of the allocation, but not part of
                                >the object. The implementation has to allow for internal
                                >overhead when it specifies the maximum size of an object.
                                >
                                In other words (if I understand you correctly), an
                                implementation isn't required to check for overflow on the
                                multiplication, but it is required to check on the following
                                addition?
                                That's my reading of it.
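
                                On that reading, the implementation-side logic for
                                new T[n] would be roughly as follows (a sketch;
                                max_object_size and the overhead e are
                                implementation quantities, assumed here for
                                illustration):

                                // The array object (n * elem_size) is measured
                                // against the documented maximum object size, which
                                // the implementation has already chosen small enough
                                // to leave room for its per-allocation overhead e.
                                #include <cstddef>
                                #include <new>

                                void* allocate_array(std::size_t n,
                                                     std::size_t elem_size,
                                                     std::size_t max_object_size,
                                                     std::size_t e)
                                {
                                    if (elem_size != 0 &&
                                        n > max_object_size / elem_size)
                                        throw std::bad_alloc(); // object too big
                                    // Cannot wrap, provided max_object_size + e
                                    // fits in size_t.
                                    return ::operator new[](n * elem_size + e);
                                }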

                                --
                                Pete
                                Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
                                Standard C++ Library Extensions: a Tutorial and Reference"
                                (www.petebecker.com/tr1book)
