C Question

This topic is closed.
  • chang

    C Question

Hi,
I am confused about the storage capacity of our primitive types.
I declared it this way:
#include <stdio.h>
    main()
    {
    int a=0xFFFFFFFF;
    printf("The hex value:%x\n",a);
    printf("The dec value :%d",a);

    }
The output is:
    The hex value:FFFFFFFF
    The Dec value:-1

1) Why does it show -1 in decimal?

2) But if I add the line a=a+0x01; in the program before those two
printfs, then the answer is:
    The hex value:0
    The Dec value:0

3) And if I instead add a=a+0x02;
then the answers are:
    The hex value:1
    The Dec value:1

I will be happy if someone helps me understand these concepts.

    Thanks
    chang
  • Pawel Dziepak

    #2
    Re: C Question

    chang wrote:
    HI,
    I am confused of storing capacity in our primitivetypes.
    I delcared like that way..
#include <stdio.h>
    main()
    {
    int a=0xFFFFFFFF;
    printf("The hex value:%x\n",a);
    printf("The dec value :%d",a);
    >
    }
    Ans is showing like that way:-
    The hex value:FFFFFFFF
    The Dec value:-1
    >
    1)**Why it is showing like that way in Decimal ?*******
    It looks like int size on your architecture is 4 bytes. In printf format
    strings "%d" stands for signed decimals ("%ud" is for unsigned).
    0xFFFFFFFF is in two's complement [1] system a representation of -1.
    2)--->But if i add this line(a=a+0x01;) in programme before those two
    printfs then answer is :
    The hex value:0
    The Dec value:0
    >
    3)--->and if i add like this way :a=a+0x02;
    Then answers are:
    The hex value:1
    The Dec value:1
    If your implementation assumes that int is a signed value, a is equal to
-1. That's why you got those results for adding 1 and 2.
    On your architecture int is 32 bit, what means that the largest value
    unsigned int can contain is 0xFFFFFFFF. When you adds anything to it,
    there is an overflow. The results are the same as in adding to signed int.

    [1] http://en.wikipedia.org/wiki/Two%27s_complement

    Pawel Dziepak

    • Keith Thompson

      #3
      Re: C Question

Pawel Dziepak <pdziepak@quarnos.org> writes:
      chang wrote:
      > I am confused of storing capacity in our primitivetypes.
      >I delcared like that way..
>#include <stdio.h>
      >main()
      >{
      > int a=0xFFFFFFFF;
      > printf("The hex value:%x\n",a);
      > printf("The dec value :%d",a);
      >>
      >}
      >Ans is showing like that way:-
      >The hex value:FFFFFFFF
      >The Dec value:-1
      >>
      >1)**Why it is showing like that way in Decimal ?*******
      >
      It looks like int size on your architecture is 4 bytes.
      To be precise, it looks like int is 32 bits. A byte in C must be at
      least 8 bits, but it can be more; theoretically, int could be a single
      32-bit byte. In practice, a byte is almost certainly going to be
      exactly 8 bits on any system you run into, unless you work with DSPs
      (digital signal processors) or perhaps some other exotic embedded
      system.

      (I'm ignoring padding bits.)
      In printf format
      strings "%d" stands for signed decimals ("%ud" is for unsigned).
      No, "%u" is for unsigned int; "%ud" is valid, but it prints an
      unsigned int value (in decimal) followed by a letter 'd'.
      0xFFFFFFFF is in two's complement [1] system a representation of -1.
      Well, sort of. 0xFFFFFFFF is an integer constant; it denotes a value,
      not a representation, specifically the value 4294967295. C doesn't
have a notation for representations.

      Assuming 32-bit int, 0xFFFFFFFF is of type unsigned int. The maximum
      representable value of type int is 2147483647. So the declaration:

      int a=0xFFFFFFFF;

      attempts to initialize a with a value that isn't of the same type
      as a and that's too big to be stored in a. But both the type
      of 0xFFFFFFFF and the type of a are arithmetic types, so the value
      will be implicitly converted and then stored.

      So what's the result of converting 0xFFFFFFFF (a value of type
      unsigned int) to type int? It's implementation-defined. In practice,
      the vast majority of systems use a 2's-complement representation *and*
      this kind of conversion is defined to copy the bits rather than doing
      anything fancier, so the value stored in a will probably be -1.

      But this depends on several non-portable assumptions, and you should
      probably avoid this kind of thing in real code. If type int is, say,
      64 bits, then a will be assigned the value 4294967295.

      If you want a to have the value -1, just write
      int a = -1;

      If you want a to have the value 0xFFFFFFFF -- well, if int is 32 bits
      you just can't do that.

      Decide what value you want a to have, and just initialize it with that
      value.
      >2)--->But if i add this line(a=a+0x01;) in programme before those two
      >printfs then answer is :
      >The hex value:0
      >The Dec value:0
      >>
      >3)--->and if i add like this way :a=a+0x02;
      >Then answers are:
      > The hex value:1
      >The Dec value:1
      >
      If your implementation assumes that int is a signed value,
      There is no "if"; type int is signed by definition.
      a is equal to
      -1. That's why you got that results for adding 1 and 2.
      a is *probably* equal to -1.
      On your architecture int is 32 bit, what means that the largest value
      unsigned int can contain is 0xFFFFFFFF. When you adds anything to it,
      there is an overflow. The results are the same as in adding to signed int.
      The rules for arithmetic are different for signed and unsigned types.
      If an arithmetic operation on a signed type yields a result that can't
      be represented in that type, the behavior is undefined. (On most
      systems, it will wrap around, but an implementation *could* insert
      range-checking code and crash your program.) For an unsigned type,
      however, the result just quietly wraps around. This may not
      necessarily be what you want, but it's how the language defines it.

      Finally, the "%x" printf format expects an argument of type unsigned
      int; you're giving it an argument of type int. There are some subtle
      rules that let you get away with this, but IMHO it's usually better
      just to use the expected type. For example, you could write:
      printf("a = %x\n", (unsigned int)a);

      --
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
      Nokia
      "We must do something. This is something. Therefore, we must do this."
      -- Antony Jay and Jonathan Lynn, "Yes Minister"


      • Eric Sosman

        #4
        Re: C Question

        Pawel Dziepak wrote:
        chang wrote:
        >HI,
        > I am confused of storing capacity in our primitivetypes.
        >I delcared like that way..
>#include <stdio.h>
        >main()
        >{
        > int a=0xFFFFFFFF;
        > printf("The hex value:%x\n",a);
        > printf("The dec value :%d",a);
        >>
        >}
        >Ans is showing like that way:-
        >The hex value:FFFFFFFF
        >The Dec value:-1
        >>
        >1)**Why it is showing like that way in Decimal ?*******
        >
        It looks like int size on your architecture is 4 bytes. In printf format
        strings "%d" stands for signed decimals ("%ud" is for unsigned).
"%ud" is for unsigned with a "d" after it. "%u!" is for unsigned
with an exclamation point after it. "%u@!#&*@*!" is for unsigned with
a comic-book curse after it. "%u" is for unsigned.
        0xFFFFFFFF is in two's complement [1] system a representation of -1.
        >
        >2)--->But if i add this line(a=a+0x01;) in programme before those two
        >printfs then answer is :
        >The hex value:0
        >The Dec value:0
        >>
        >3)--->and if i add like this way :a=a+0x02;
        >Then answers are:
        > The hex value:1
        >The Dec value:1
        >
        If your implementation assumes that int is a signed value, a is equal to
-1.
        Every C implementation assumes that int is a signed type.
        (Pedant preemption: A bit-field is not an int.)
        That's why you got that results for adding 1 and 2.
        On your architecture int is 32 bit, what means that the largest value
        unsigned int can contain is 0xFFFFFFFF. When you adds anything to it,
        there is an overflow. The results are the same as in adding to signed int.
        There is never an "overflow" in unsigned arithmetic in C.
        There is "wrap-around" or "reduction by the modulus," but never
        "overflow."


        • Flash Gordon

          #5
          Re: C Question

          chang wrote, On 07/11/08 16:45:
          HI,
          I am confused of storing capacity in our primitivetypes.
          I delcared like that way..
#include <stdio.h>
          main()
          {
          int a=0xFFFFFFFF;
On your system int happens to be 32 bits and uses 2s complement; neither
of these things is guaranteed by the standard.

          Now convert 0xFFFFFFFF to binary and write the number down. Then write
          down the binary representation of -1 for a 32 bit 2s complement system.
          printf("The hex value:%x\n",a);
          %x expects an unsigned int, you have passed a signed int. The result of
          this is undefined but on your machine it happens that it just interprets
          the bit pattern of the signed int as if it was an unsigned int.
          printf("The dec value :%d",a);
          >
          }
          Ans is showing like that way:-
          The hex value:FFFFFFFF
          The Dec value:-1
          >
          1)**Why it is showing like that way in Decimal ?*******
          See above.
          2)--->But if i add this line(a=a+0x01;) in programme before those two
          printfs then answer is :
          The hex value:0
          The Dec value:0
          >
          3)--->and if i add like this way :a=a+0x02;
          Then answers are:
          The hex value:1
          The Dec value:1
          For unsigned integers the C standard guarantees that they will wrap from
          the maximum positive number to 0. Sometimes this is called "clock
          arithmetic" and you just need to look at the way a clock (especially an
          analogue one) behaves to understand it.
          --
          Flash Gordon
If spamming me sent it to smap@spam.causeway.com
          If emailing me use my reply-to address
          See the comp.lang.c Wiki hosted by me at http://clc-wiki.net/


          • Pawel Dziepak

            #6
            Re: C Question

            Keith Thompson wrote:
            >0xFFFFFFFF is in two's complement [1] system a representation of -1.
            >
            Well, sort of. 0xFFFFFFFF is an integer constant; it denotes a value,
            not a representation, specifically the value 4294967295. C doesn't
have a notation for representations.
Here I showed how -1 is represented in the *two's complement system*, and
that's the value that you will get when assigning 0xFFFFFFFF to a signed
int on implementations that use the two's complement system.
            So what's the result of converting 0xFFFFFFFF (a value of type
            unsigned int) to type int? It's implementation-defined. In practice,
            the vast majority of systems use a 2's-complement representation *and*
            this kind of conversion is defined to copy the bits rather than doing
            anything fancier, so the value stored in a will probably be -1.
Yes, it is implementation defined, and we can see how it works on the
implementation used by chang in the example given in the first post. I
would like to remind you that he wanted us to explain to him what is
happening. "It's implementation-defined" is not a helpful answer.
            >>2)--->But if i add this line(a=a+0x01;) in programme before those two
            >>printfs then answer is :
            >>The hex value:0
            >>The Dec value:0
            >>>
            >>3)--->and if i add like this way :a=a+0x02;
            >>Then answers are:
            >> The hex value:1
            >>The Dec value:1
            >If your implementation assumes that int is a signed value,
            >
            There is no "if"; type int is signed by definition.
I thought so, but I wasn't able to find the part of the C standard that
states that. Could you point me to such a paragraph?
            > a is equal to
            >-1. That's why you got that results for adding 1 and 2.
            >
            a is *probably* equal to -1.
            On chang's architecture it is equal to -1 *for sure*.

I see that you are basing your answer mainly on the C standard, while I based my
previous post on what we can know about the implementation used by chang.

            Pawel Dziepak




            • Antoninus Twink

              #7
              Re: C Question

              On 7 Nov 2008 at 17:44, Pawel Dziepak wrote:
              Keith Thompson wrote:
              >So what's the result of converting 0xFFFFFFFF (a value of type
              >unsigned int) to type int? It's implementation-defined.
              >
              Yes, it is implementation defined and we can see how it works on the
              implementation used by chang on the example given in the first post. I
              would like to remind you, that he wanted us to explain him what is
              happening. "It's implementation-defined" is not a helpful answer.
              Of course not - KT's aim wasn't to be helpful!
I see that you are basing your answer mainly on the C standard, while I based my
previous post on what we can know about the implementation used by chang.
              You have to bear in mind that your purpose in providing an answer was to
              try to help the OP.

              Thomson, on the other hand, couldn't give a damn about giving an
              appropriate answer to the OP in the light of his current knowledge of C,
              etc. He just wants to play to the gallery, and show the other clc
              peacocks how thorough his knowledge of the "standard" is.

              So it's not surprising that your two responses were quite different.


              • Ben Bacarisse

                #8
                Re: C Question

Pawel Dziepak <pdziepak@quarnos.org> writes:
                chang wrote:
                >HI,
                > I am confused of storing capacity in our primitivetypes.
                >I delcared like that way..
>#include <stdio.h>
                >main()
                >{
                > int a=0xFFFFFFFF;
                > printf("The hex value:%x\n",a);
                > printf("The dec value :%d",a);
                >>
                >}
                >Ans is showing like that way:-
                >The hex value:FFFFFFFF
                >The Dec value:-1
                <snip>
                >2)--->But if i add this line(a=a+0x01;) in programme before those two
                >printfs then answer is :
                >The hex value:0
                >The Dec value:0
                >>
                >3)--->and if i add like this way :a=a+0x02;
                >Then answers are:
                > The hex value:1
                >The Dec value:1
                >
                If your implementation assumes that int is a signed value, a is equal to
                -1. That's why you got that results for adding 1 and 2.
                On your architecture int is 32 bit, what means that the largest value
                unsigned int can contain is 0xFFFFFFFF. When you adds anything to it,
                there is an overflow. The results are the same as in adding to
                signed int.
Something that has got lost in all the details... There is no
overflow in the additions the OP wrote. Overflow is possible (because
the type of 'a' is signed), but if it occurs at all (and it might)
it will happen when you assign 0xFFFFFFFF. After that, if 'a' has the
value -1 then neither 'a = a + 1' nor 'a = a + 2' causes any overflow.
They just add 1 and 2 respectively to -1. The results are 0 and 1
respectively, as one would hope!

                --
                Ben.


                • Keith Thompson

                  #9
                  Re: C Question

Pawel Dziepak <pdziepak@quarnos.org> writes:
                  Keith Thompson wrote:
                  >>0xFFFFFFFF is in two's complement [1] system a representation of -1.
                  >>
                  >Well, sort of. 0xFFFFFFFF is an integer constant; it denotes a value,
                  >not a representation, specifically the value 4294967295. C doesn't
>have a notation for representations.
                  >
                  Here I showed how -1 is represented using *two's complement system* and
                  that's the value that you will get when assigning 0xFFFFFFFF to signed
                  int on implementations that uses two's complement system.
                  My point is that the notation 0xFFFFFFFF is simply an integer
                  constant, denoting a particular value with a particular type.

                  You're also assuming that the conversion from unsigned int to int
                  behaves in a particular way; the assumption is almost universally
                  correct, but it's not guaranteed by the standard. Surely it can't
                  hurt to be explicit about what assumptions we're making.
                  >So what's the result of converting 0xFFFFFFFF (a value of type
                  >unsigned int) to type int? It's implementation-defined. In practice,
                  >the vast majority of systems use a 2's-complement representation *and*
                  >this kind of conversion is defined to copy the bits rather than doing
                  >anything fancier, so the value stored in a will probably be -1.
                  >
                  Yes, it is implementation defined and we can see how it works on the
                  implementation used by chang on the example given in the first post. I
                  would like to remind you, that he wanted us to explain him what is
                  happening. "It's implementation-defined" is not a helpful answer.
                  It might not have been helpful if that had been all I'd said. It
                  really is implementation-defined. I then went on to explain what
                  actually happens on most systems.

                  I added information; why is that a bad thing?
                  >>>2)--->But if i add this line(a=a+0x01;) in programme before those two
                  >>>printfs then answer is :
                  >>>The hex value:0
                  >>>The Dec value:0
                  >>>>
                  >>>3)--->and if i add like this way :a=a+0x02;
                  >>>Then answers are:
                  >>> The hex value:1
                  >>>The Dec value:1
                  >>If your implementation assumes that int is a signed value,
                  >>
                  >There is no "if"; type int is signed by definition.
                  >
I thought so, but I wasn't able to find the part of the C standard that states
that. Could you point me to such a paragraph?
                  C99 6.2.5p4:

                  There are five standard _signed integer types_, designated as
                  signed char, short int, int, long int, and long long int. (These
                  and other types may be designated in several additional ways, as
                  described in 6.7.2.)

                  Note that "int" may also be referred to as "signed int" or "signed" --
                  or, if you're feeling perverse, even as "int signed".

                  C99 5.2.4.2.1:

                  -- minimum value for an object of type int
                  INT_MIN -32767 // -(2**15 - 1)

                  -- maximum value for an object of type int
                  INT_MAX +32767 // 2**15 - 1

(I've used "**" to denote a superscript, i.e., exponentiation.)
                  >> a is equal to
                  >>-1. That's why you got that results for adding 1 and 2.
                  >>
                  >a is *probably* equal to -1.
                  >
                  On chang's architecture it is equal to -1 *for sure*.
                  I don't recall chang mentioning what architecture he's using. Given
                  the results he reported, you're almost certainly right -- but surely
                  it's useful to know what is specific to a particular system and what
                  can vary from one system to another.

                  On one system, "int a = 0xFFFFFFFF;" might set a to -1. On another,
                  it might set a to 4294967295. On yet another, it might raise an
                  implementation-defined signal and cause the program to terminate.
                  I've worked on systems where it's -1 and systems where it's
                  4294967295; I haven't seen one where it raises a signal.
I see that you are basing your answer mainly on the C standard, while I based my
previous post on what we can know about the implementation used by chang.
                  Yes. This is comp.lang.c, where we discuss the C programming
                  language, which is defined by the C standard. I *also* discussed what
actually happens on most implementations.

                  --
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
                  Nokia
                  "We must do something. This is something. Therefore, we must do this."
                  -- Antony Jay and Jonathan Lynn, "Yes Minister"


                  • Pawel Dziepak

                    #10
                    Re: C Question

                    Keith Thompson wrote:
                    C99 6.2.5p4:
                    >
                    There are five standard _signed integer types_, designated as
                    signed char, short int, int, long int, and long long int. (These
                    and other types may be designated in several additional ways, as
                    described in 6.7.2.)
                    >
                    Note that "int" may also be referred to as "signed int" or "signed" --
                    or, if you're feeling perverse, even as "int signed".
                    >
                    C99 5.2.4.2.1:
                    >
                    -- minimum value for an object of type int
                    INT_MIN -32767 // -(2**15 - 1)
                    >
                    -- maximum value for an object of type int
                    INT_MAX +32767 // 2**15 - 1
                    >
(I've used "**" to denote a superscript, i.e., exponentiation.)
                    Thank you very much.

I think that discussing whether our posts were helpful is pointless, and for
sure doesn't help anyone (it was my fault for starting this). I think we both
have better things to do. chang has a good description of what the C standard
tells us and how it is implemented on his architecture - that was the
point of this discussion.

                    Pawel Dziepak

                    • CBFalconer

                      #11
                      Re: C Question

                      Pawel Dziepak wrote:
                      Keith Thompson wrote:
                      >
                      .... snip ...
                      >
                      >So what's the result of converting 0xFFFFFFFF (a value of type
                      >unsigned int) to type int? It's implementation-defined. In
                      >practice, the vast majority of systems use a 2's-complement
                      >representati on *and* this kind of conversion is defined to copy
                      >the bits rather than doing anything fancier, so the value
                      >stored in a will probably be -1.
                      >
                      Yes, it is implementation defined and we can see how it works on
                      the implementation used by chang on the example given in the
                      first post. I would like to remind you, that he wanted us to
                      explain him what is happening. "It's implementation-defined" is
                      not a helpful answer.
Yes it is. Implementation defined means that the OP has to look at
his own system documentation. Here on c.l.c the documentation is
the C standard, which states that it is "implementation defined",
and what it means by that phrase.

                      --
                      [mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
                      Try the download section.


                      • Andreas Eibach

                        #12
                        Re: C Question


"Keith Thompson" <kst-u@mib.org> wrote:
                        Note that "int" may also be referred to as "signed int" or "signed" --
                        or, if you're feeling perverse, even as "int signed".
                        >
                        C99 5.2.4.2.1:
                        >
                        -- minimum value for an object of type int
                        INT_MIN -32767 // -(2**15 - 1)
                        >
                        -- maximum value for an object of type int
                        INT_MAX +32767 // 2**15 - 1
                        >
                        Bingo, also reflecting my thoughts.

                        signed int <= 0x7FFF
                        unsigned int <= 0xFFFF.
                        Everything else beyond that will (mostly) require a long type on modern
                        architectures.
                        On one system, "int a = 0xFFFFFFFF;" might set a to -1.
                        Makes sense, actually ...
                        On another,
                        it might set a to 4294967295.
                        If I'm not completely wrong, on gcc, I would _always_ need a "long" type to
                        represent true decimal 2^32-1.
                        No special treatment, full stop. Or I'd get a shiny warning.
                        And since I'd always test with -c -Wall, this would not escape me.

                        -Andreas


                        • Keith Thompson

                          #13
                          Re: C Question

"Andreas Eibach" <aeibach@mail.com> writes:
                          "Keith Thompson" <kst-u@mib.orgwrote:
                          >Note that "int" may also be referred to as "signed int" or "signed" --
                          >or, if you're feeling perverse, even as "int signed".
                          >>
                          >C99 5.2.4.2.1:
                          >>
                          > -- minimum value for an object of type int
                          > INT_MIN -32767 // -(2**15 - 1)
                          >>
                          > -- maximum value for an object of type int
                          > INT_MAX +32767 // 2**15 - 1
                          >>
                          >
                          Bingo, also reflecting my thoughts.
                          >
                          signed int <= 0x7FFF
                          unsigned int <= 0xFFFF.
                          Everything else beyond that will (mostly) require a long type on modern
                          architectures.
                          Type int must be at least 16 bits, but on most modern systems, at
                          least non-embedded ones, it's likely to be 32 bits.
                          >On one system, "int a = 0xFFFFFFFF;" might set a to -1.
                          Makes sense, actually ...
                          >
                          >On another,
                          >it might set a to 4294967295.
                          >
                          If I'm not completely wrong, on gcc, I would _always_ need a "long" type to
                          represent true decimal 2^32-1.
                          No special treatment, full stop. Or I'd get a shiny warning.
                          And since I'd always test with -c -Wall, this would not escape me.
                          I wasn't talking about gcc, I was talking about C implementations in
                          general. On a system where int is bigger than 32 bits,
                          int a = 0xFFFFFFFF;
will set a to 4294967295. I don't know whether gcc can be configured to
                          use 64-bit ints, but other compilers certainly can.

                          --
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
                          Nokia
                          "We must do something. This is something. Therefore, we must do this."
                          -- Antony Jay and Jonathan Lynn, "Yes Minister"


                          • Pawel Dziepak

                            #14
                            Re: C Question

                            Andreas Eibach wrote:
                            If I'm not completely wrong, on gcc, I would _always_ need a "long" type to
                            represent true decimal 2^32-1.
                            On gcc 4.3.0 running on x86 cpu INT_MAX == LONG_MAX and AFAIK gcc always
                            makes int the same size as the cpu word.

                            Pawel Dziepak

                            • Pawel Dziepak

                              #15
                              Re: C Question

Pawel Dziepak wrote:
                              Andreas Eibach wrote:
                              >If I'm not completely wrong, on gcc, I would _always_ need a "long" type to
                              >represent true decimal 2^32-1.
                              >
                              On gcc 4.3.0 running on x86 cpu INT_MAX == LONG_MAX and AFAIK gcc always
                              makes int the same size as the cpu word.
Sorry, my mistake. On gcc, long is the same size as the cpu word (if the
standard allows it); int is often 32 bits.

                              Pawel Dziepak
