size of an int

  • axneer
    New Member
    • Jun 2007
    • 8

    size of an int

    I am a newbie in C. I want to know what the size of int, char, or float depends on. Does it depend on the compiler only, or on the compiler and OS, or on a combination of all of the above plus the processor or RAM we are using?

    For example, if we are using a 32-bit processor and the OS is 16-bit, will the size of int be 16 bits?
  • Banfa
    Recognized Expert Expert
    • Feb 2006
    • 9067

    #2
    The size of a char is always 1 byte, the C standard guarantees that. The number of bits in a byte is platform dependent.

    The size of int should be the size that is most efficient for the platform to process (16 bits on a 16-bit processor, etc.), but that is not always the case.

    For the purposes of this post the platform is essentially the processor, but the compiler can have an effect too.
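    The sizes on any particular platform are easy to check by printing them; a minimal sketch (the exact numbers printed vary by compiler and platform):

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* %zu is the C99 format specifier for size_t */
        printf("char:  %zu byte(s)\n", sizeof(char));   /* always 1 by definition */
        printf("short: %zu byte(s)\n", sizeof(short));
        printf("int:   %zu byte(s)\n", sizeof(int));
        printf("long:  %zu byte(s)\n", sizeof(long));
        return 0;
    }
    ```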

    Comment

    • Arepi
      New Member
      • Sep 2008
      • 62

      #3
      Hi!
      You can get the size of a variable in your program with the sizeof operator. It gives the size of the actual variable in bytes.
      Its use is simple:

      Code:
      int a,b;
      a=sizeof b;

      Comment

      • Ganon11
        Recognized Expert Specialist
        • Oct 2006
        • 3651

        #4
        With, of course, parentheses around the argument list:

        Code:
        int a, b;
        a = sizeof(b);

        Comment

        • Banfa
          Recognized Expert Expert
          • Feb 2006
          • 9067

          #5
          Actually the parentheses are only required if the argument is a type rather than an expression, so:

          Code:
          int a, b;
          a = sizeof b;       // OK
          a = sizeof(b);      // OK
          a = sizeof(int);    // OK
          a = sizeof int;     // NOT OK
          Personally when using the sizeof operator I try to take the size of an expression rather than a type, because it requires a little less maintenance.
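          One place where taking the size of an expression pays off is dynamic allocation; a small sketch of the idiom (the variable names are just for illustration):

          ```c
          #include <stdlib.h>

          int main(void)
          {
              /* sizeof *p uses the expression, not the type: if p is later
                 changed to point at some other type, this line needs no edit. */
              double *p = malloc(100 * sizeof *p);
              if (p == NULL)
                  return 1;
              p[0] = 3.14;
              free(p);
              return 0;
          }
          ```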

          Comment

          • newb16
            Contributor
            • Jul 2008
            • 687

            #6
            Originally posted by axneer
            like...If we are using a 32 bit processor and OS is 16 bit , then will the size of int be 16 bits ?
            It depends only on the compiler used (and its settings). Old Borland C has a 16-bit int wherever you run it, and you can just as well compile a program with a DOS extender and run it with a 32-bit int on 16-bit DOS.

            Comment

            • Ganon11
              Recognized Expert Specialist
              • Oct 2006
              • 3651

              #7
              Originally posted by Banfa
               Actually the parentheses are only required if the argument is a type rather than an expression, so:

              Code:
              int a, b;
              a = sizeof b;       // OK
              a = sizeof(b);      // OK
              a = sizeof(int);    // OK
              a = sizeof int;     // NOT OK
              Personally when using the sizeof operator I try to take the size of an expression rather than a type, because it requires a little less maintenance.
              What!? Absurd. I didn't know C could look like Perl.

              Comment

              • Banfa
                Recognized Expert Expert
                • Feb 2006
                • 9067

                #8
                Remember sizeof is an operator; it is actually more absurd that it must have parentheses in one case. I mean, you would normally write -b, not -(b), wouldn't you?

                Comment

                • JosAH
                  Recognized Expert MVP
                  • Mar 2007
                  • 11453

                  #9
                  Originally posted by Ganon11
                  What!? Absurd. I didn't know C could look like Perl.
                  No it isn't absurd; the parentheses disambiguate several matters; have a look:

                  Code:
                  #include <stdio.h>

                  typedef int t;
                  
                  int main() {
                  	t u;
                  	char t= 42;
                  	double d= sizeof t * 3.14159;
                  	double e= sizeof (u * 3.14159);
                  	double f= sizeof (u) * 3.14159;
                  	printf("%f\n", d);
                  	printf("%f\n", e);
                  	printf("%f\n", f);
                  	return 0;
                  }
                  Play with the parentheses and see what happens.

                  kind regards,

                  Jos

                  Comment

                  • donbock
                    Recognized Expert Top Contributor
                    • Mar 2008
                    • 2427

                    #10
                    Integer types in Standard C have the following characteristics:
                    The width of 'char' is at least 8 bits;
                    the width of 'short' is at least 16 bits;
                    the width of 'long' is at least 32 bits;
                    the width of 'long long' is at least 64 bits;
                    the width of 'int' is greater than or equal to that of 'short' and less than or equal to that of 'long'.

                    An implementation is expected to select a size for 'int' that has the best performance on that platform and that also meets the preceding width constraints.

                    Refer to <limits.h> for these details for your particular implementation. This header specifies the number of bits in a 'char' (which is also the number of bits in a "storage unit", see below). Other than that, this header specifies the largest and smallest values that can be stored in each integral type rather than the width of the type in bits. Different implementations may use different encoding schemes for integers, so the same width in bits might correspond to a different range of values.

                    The sizeof operator returns the size of a type or data object (in "storage units"). By definition, one storage unit is the amount of memory used to hold a 'char'; that is, sizeof(char) is always one. We like to think that sizeof returns a value in bytes, but that is only typically true. For example, an inefficient compiler implementation might store all integer types (including char) in 64-bit blocks of memory. If so, then sizeof any and all integer types would be one.
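                    Those details can be read straight out of <limits.h>; a minimal sketch (the values printed are implementation-defined):

                    ```c
                    #include <stdio.h>
                    #include <limits.h>

                    int main(void)
                    {
                        /* CHAR_BIT is the number of bits in a char (one "storage unit") */
                        printf("bits per char: %d\n", CHAR_BIT);
                        /* width of int in bits, assuming it has no padding bits */
                        printf("bits per int (if unpadded): %zu\n", sizeof(int) * CHAR_BIT);
                        /* the header gives value ranges, not widths */
                        printf("INT_MIN = %d, INT_MAX = %d\n", INT_MIN, INT_MAX);
                        return 0;
                    }
                    ```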

                    Comment

                    • Banfa
                      Recognized Expert Expert
                      • Feb 2006
                      • 9067

                      #11
                      Which C standard are you talking about, Don? Because what you have said is what I understand for C++, but for C89 my understanding is that

                      char is 1 byte, which has CHAR_BIT (defined in limits.h) bits, and that CHAR_BIT can be < 8 (I have heard of definite cases of 7 (and 9), although I can't put my finger on the platform names)

                      and that

                      sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)

                      (I realise I haven't mentioned long long)

                      Comment

                      • Arepi
                        New Member
                        • Sep 2008
                        • 62

                        #12
                        Originally posted by Ganon11
                        What!? Absurd. I didn't know C could look like Perl.
                        Let me quote from Brian W. Kernighan and Dennis M. Ritchie, "The C Programming Language"

                        url:<URL Removed>
                        On page 111:

                        C provides a compile-time unary operator called sizeof that can be used to compute the size
                        of any object. The expressions
                        sizeof object
                        and
                        sizeof (type name)

                        Comment

                        • donbock
                          Recognized Expert Top Contributor
                          • Mar 2008
                          • 2427

                          #13
                          My comments were intended to apply to C99. Refer to paragraph 5.2.4.2.1 (Sizes of integer types) of the Standard.

                          Code:
                          ... Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
                          
                          CHAR_BIT		8
                          SCHAR_MIN	-127	// -(2^7 - 1)
                          SCHAR_MAX	+127	// 2^7 - 1
                          UCHAR_MAX	255	// 2^8 - 1
                          SHRT_MIN	-32767	// -(2^15 - 1)
                          SHRT_MAX	+32767	// 2^15 - 1
                          USHRT_MAX	65535	// 2^16 - 1
                          LONG_MIN	...	// -(2^31 - 1)
                          LONG_MAX	...	// 2^31 - 1
                          ULONG_MAX	...	// 2^32 - 1
                          LLONG_MIN	...	// -(2^63 - 1)
                          LLONG_MAX	...	// 2^63 - 1
                          ULLONG_MAX	...	// 2^64 - 1
                          This CHAR_BIT constraint permits support for a 9-bit char, but appears to prohibit a 7-bit char. The other constraints refer to minimum and maximum values, not numbers of bits. I can't imagine any integer encoding scheme that satisfies these constraints without requiring at least 16 bits for short, 32 bits for long, and 64 bits for long long.

                          I know C89 also had minimum values for these parameters, but I don't know what they were. I don't have a copy of C89 nearby.

                          Regarding the sizeof hierarchy, I agree that every implementation I've ever seen matches the sequence you listed. Notice that here we are talking about a hierarchy of storage allocation, not a hierarchy of maximum values. Suppose a processor provided support for 8-bit, 16-bit, and 32-bit accesses, but its operation was much faster if all accesses were aligned on 32-bit boundaries. I think the Standard would permit an implementation to allocate 32 bits for 8- and 16-bit variables, even though many of those bits would be unused padding. If so, sizeof would return the same value for char, short, and long. I can't quote chapter and verse from the Standard for this one, but we do know that sizeof a struct includes all pad bytes.
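                          The struct case is easy to demonstrate; a small sketch (the exact value printed is implementation-defined, but it is typically larger than the sum of the members):

                          ```c
                          #include <stdio.h>

                          struct padded {
                              char c;   /* 1 byte */
                              /* the compiler may insert padding here to align i */
                              int  i;
                          };

                          int main(void)
                          {
                              /* often 2 * sizeof(int) on common platforms,
                                 not 1 + sizeof(int), because of padding */
                              printf("sizeof(struct padded) = %zu\n",
                                     sizeof(struct padded));
                              return 0;
                          }
                          ```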

                          Comment

                          • Banfa
                            Recognized Expert Expert
                            • Feb 2006
                            • 9067

                            #14
                            It is possible that the representation of a short or int or long includes bits or bytes not used to hold the actual value (bits used as padding or bits used for trap values).

                            However the story for char (or unsigned char at least) is different since the standard expressly dictates that using this type the CPU can contiguously access all bytes of memory. That precludes use of padding bytes in a char type to align the type to an efficient memory boundary for the processor.
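                            That guarantee is what makes unsigned char the standard tool for inspecting an object's representation byte by byte; a small sketch (the byte order printed depends on the machine's endianness):

                            ```c
                            #include <stdio.h>

                            int main(void)
                            {
                                int value = 1;
                                /* unsigned char can address every byte of any object,
                                   so it can walk the object's representation */
                                const unsigned char *bytes =
                                    (const unsigned char *)&value;
                                size_t i;
                                for (i = 0; i < sizeof value; i++)
                                    printf("byte %zu: %02x\n", i, (unsigned)bytes[i]);
                                /* on a little-endian machine, byte 0 prints as 01 */
                                return 0;
                            }
                            ```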

                            Comment

                            • donbock
                              Recognized Expert Top Contributor
                              • Mar 2008
                              • 2427

                              #15
                              Originally posted by Banfa
                              However the story for char (or unsigned char at least) is different since the standard expressly dictates that using this type the CPU can contiguously access all bytes of memory. That precludes use of padding bytes in a char type to align the type to an efficient memory boundary for the processor.
                              Good point. You're right.

                              Comment
