C# Extra byte in type classes?

  • ShadowLocke
    New Member
    • Jan 2008
    • 116

    C# Extra byte in type classes?

    I'm looking for a better understanding of what's going on with the code I write. I've come across something that I thought I understood, but it's not working as expected. It's hard to explain, so let me show the code first.

    Code:
            [DllImport("activeds.dll", EntryPoint = "ADsOpenObject", CharSet = CharSet.Unicode, ExactSpelling = true)]
            private static extern int IntADsOpenObject(string path, string userName, string password, int flags,ref MyGuid iid, [MarshalAs(UnmanagedType.Interface)] out object ppObject);
    
            struct MyGuid
            {
                public Int32 Data1;
                public Int16 Data2;
                public Int16 Data3;
    
                public Byte Data4_1;
                
                public Byte Data4_2;
                public Byte Data4_3;            
                public Byte Data4_4;
                public Byte Data4_5;            
                public Byte Data4_6;
                public Byte Data4_7;
    
                public Byte Data4_8;
            }
    
            static void Main(string[] args)
            {
                //Guid l = new Guid("00000000-0000-0000-C000-000000000046");
                
                object obj = null;
                MyGuid g = new MyGuid();
    
                g.Data4_1 = 192;
                g.Data4_8 = 70;
    
                int j = IntADsOpenObject("LDAP://thedomain.com", "username", "password", 1, ref g, out obj);
                Console.WriteLine(j.ToString());
            }
    This works as expected (after replacing the strings with a valid domain, username and password).

    When I try to replace the last two bytes in the structure with a short datatype, it no longer works, i.e.:

    Code:
            [DllImport("activeds.dll", EntryPoint = "ADsOpenObject", CharSet = CharSet.Unicode, ExactSpelling = true)]
            private static extern int IntADsOpenObject(string path, string userName, string password, int flags,ref MyGuid iid, [MarshalAs(UnmanagedType.Interface)] out object ppObject);
    
            struct MyGuid
            {
                public Int32 Data1;
                public Int16 Data2;
                public Int16 Data3;
                public Byte Data4_1;
                
                public Byte Data4_2;
                public Byte Data4_3;            
                public Byte Data4_4;
                public Byte Data4_5;            
                public Byte Data4_6;
    
                public Int16 Data4_7_8;//Same number of bytes?
                //public Byte Data4_7;
                //public Byte Data4_8;
            }
    
            static void Main(string[] args)
            {
                //Guid l = new Guid("00000000-0000-0000-C000-000000000046");
                
                object obj = null;
                MyGuid g = new MyGuid();
    
                g.Data4_1 = 192;
                g.Data4_7_8 = 70;
    
                int j = IntADsOpenObject("LDAP://thedomain.com", "username", "password", 1, ref g, out obj);
                //No longer works
    
                Console.WriteLine(j.ToString());
            }
    Am I misunderstanding something? Int16 is the short datatype, which is only two bytes (I thought), so replacing the two bytes with the short should be exactly the same. What is breaking this?

    (Yes, I know I could just pass the Guid type in and it would work, but I'm looking to understand the code, to stop just grabbing solutions off the net.)
  • Plater
    Recognized Expert Expert
    • Apr 2007
    • 7872

    #2
    Well, I don't really know the "why" either, but I had figured it would not work. Managed structs don't work the way unmanaged (C++) structs do, where the bytes were just laid over the struct and "filled it in".
    I think if you fiddle with the [StructLayout] attribute and the packing settings you can make it work.


    • mldisibio
      Recognized Expert New Member
      • Sep 2008
      • 191

      #3
      Not completely sure either, but Byte is unsigned and Int16 is signed, so the leftmost of the 16 bits is reserved for the sign, meaning it is not truly two 8-bit values [1000 0000 0000 0000 = -32768, not +32768], and the MaxValue is [0111 1111 1111 1111: 0x7FFF, not 0xFFFF].

      That said, I cannot explain why the decimal 70 would make any difference.

      Nonetheless, it might theoretically work if you used UInt16, the unsigned version, but that is not CLS-compliant, so I wonder whether it would not also cause interop problems.
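A quick sketch of the signed/unsigned difference described above (nothing here is specific to interop; it just prints the limits):

```csharp
using System;

class SignDemo
{
    static void Main()
    {
        // Byte is unsigned: all 8 bits carry magnitude.
        Console.WriteLine(byte.MaxValue);            // 255 (0xFF)

        // Int16 is signed: the top bit is the sign, so MaxValue is 0x7FFF.
        Console.WriteLine(short.MaxValue);           // 32767 (0x7FFF)

        // The bit pattern 1000 0000 0000 0000 is -32768, not +32768.
        Console.WriteLine(unchecked((short)0x8000)); // -32768
    }
}
```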


      • ShadowLocke
        New Member
        • Jan 2008
        • 116

        #4
        Thanks for the input! I had already tried using unsigned shorts, but it gave the same results. I forgot about the StructLayout attribute, so I played around with it for a little bit (found a great resource here: VSJ | Articles | Mastering structs in C#).

        But I still was not able to get it through. Seeing how the packing worked made me think for sure that was the problem. This is what I came up with:

        Code:
        [StructLayout(LayoutKind.Explicit, Size=16, Pack=1)]
                struct MyGuid
                {
                    [FieldOffset(0)]
                    public Int32 Data1;
                    [FieldOffset(4)]
                    public ushort Data2;
                    [FieldOffset(6)]
                    public ushort Data3;
                    [FieldOffset(8)]
                    public Byte Data4_1;
                    [FieldOffset(9)]
                    public Byte Data4_2;
                    [FieldOffset(10)]
                    public Byte Data4_3;
                    [FieldOffset(11)]
                    public Byte Data4_4;
                    [FieldOffset(12)]
                    public Byte Data4_5;
                    [FieldOffset(13)]
                    public Byte Data4_6;
        
                    [FieldOffset(14)]
                    public ushort Data4_7_8;//Same number of bytes?
                    //public Byte Data4_7;
                    //public Byte Data4_8;
                }
        Still no dice. I wonder, is there some way I can debug and see exactly what is being passed in memory, i.e. look at a hex dump of some kind? (I'm currently using VS2008, if it has the feature I'm unaware of it.)


        • mldisibio
          Recognized Expert New Member
          • Sep 2008
          • 191

          #5
          I don't have VS08 installed, but in VS05, during debugging, there is an option under the main menu -> Debug -> Windows -> Disassembly, Memory or Registers.


          • ShadowLocke
            New Member
            • Jan 2008
            • 116

            #6
            Awesome! Exactly what I was looking for. Doing this let me see in memory what the problem was, though it still doesn't make sense.

            The first example MyGuid goes through as:
            00 00 00 00 00 00 00 00 C0 00 00 00 00 00 00 46

            The second as:
            00 00 00 00 00 00 00 00 C0 00 00 00 00 00 46 00

            It was storing it backwards.
            If i change the line:

            Code:
            g.Data4_7_8 = 70;
            to:
            Code:
            g.Data4_7_8 = 17920;
            (17920 = 0x4600)

            It works. Computers are fun. Anyway, I thought only strings were held in memory backwards; I didn't realize other datatypes were as well.
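The byte order above can also be checked without the Memory window; a minimal sketch using BitConverter (the expected output assumes an x86/x64, i.e. little-endian, machine):

```csharp
using System;

class EndianDemo
{
    static void Main()
    {
        // True on x86/x64: the low-order byte is stored first in memory.
        Console.WriteLine(BitConverter.IsLittleEndian);

        // 70 = 0x0046 is stored as 46 00, so the 0x46 lands one byte too early.
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes((short)70)));    // 46-00

        // 17920 = 0x4600 is stored as 00 46, putting 0x46 in the last byte, as the GUID needs.
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes((short)17920))); // 00-46
    }
}
```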


            • Plater
              Recognized Expert Expert
              • Apr 2007
              • 7872

              #7
              The CLR stores multi-byte values using the platform's native byte order, and on x86/x64 that is little-endian: least-significant byte first (you can check BitConverter.IsLittleEndian).
              Unmanaged code on the same machine is little-endian too; individual bytes just aren't affected by it, which is why the all-Byte version of the struct worked and the short didn't.


              • mldisibio
                Recognized Expert New Member
                • Sep 2008
                • 191

                #8
                So does this work?

                Code:
                // 70 = 0x0046; shifted left 8 it becomes 0x4600, which little-endian memory stores as 00 46
                g.Data4_7_8 = (short)((70 << 8) & 0xFFFF);
                such that you could write:
                Code:
                Int16 lastTwoBytesAsShort = 70;
                g.Data4_7_8 = (short)((lastTwoBytesAsShort << 8) & 0xFFFF);
                I guess since your original was two separate bytes, you are expecting the second to last byte (Data4_7) to always have a value of 0x00 and the actual bits of the short to be in the range of Byte? Otherwise, and this is getting confusing, if you have two shorts each with byte values, you could stuff the final short something like this:
                Code:
                Int16 data7 = 255;
                Int16 data8 = 70;
                // data7 (0x00FF) goes in the low byte, data8 (0x0046) in the high byte: memory bytes ff 46
                g.Data4_7_8 = (short)(data7 | ((data8 << 8) & 0xFFFF));
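For what it's worth, a small check of the two expressions above (values taken from this thread; output assumes a little-endian machine):

```csharp
using System;

class PackDemo
{
    static void Main()
    {
        // The shift version: 70 << 8 == 17920 == 0x4600.
        short shifted = (short)((70 << 8) & 0xFFFF);
        Console.WriteLine(shifted);                                               // 17920
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(shifted))); // 00-46 in memory

        // The two-value version: data7 ends up in the low (first) byte, data8 in the high byte.
        short data7 = 255, data8 = 70;
        short packed = (short)(data7 | ((data8 << 8) & 0xFFFF));
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(packed)));  // FF-46 in memory
    }
}
```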


                • ShadowLocke
                  New Member
                  • Jan 2008
                  • 116

                  #9
                  Originally posted by mldisibio
                  So does this work?

                  Code:
                  // 70 = 0x0046; shifted left 8 it becomes 0x4600, which little-endian memory stores as 00 46
                  g.Data4_7_8 = (short)((70 << 8) & 0xFFFF);
                  such that you could write:
                  Code:
                  Int16 lastTwoBytesAsShort = 70;
                  g.Data4_7_8 = (short)((lastTwoBytesAsShort << 8) & 0xFFFF);
                  I guess since your original was two separate bytes, you are expecting the second to last byte (Data4_7) to always have a value of 0x00 and the actual bits of the short to be in the range of Byte? Otherwise, and this is getting confusing, if you have two shorts each with byte values, you could stuff the final short something like this:
                  Code:
                  Int16 data7 = 255;
                  Int16 data8 = 70;
                  // data7 (0x00FF) goes in the low byte, data8 (0x0046) in the high byte: memory bytes ff 46
                  g.Data4_7_8 = (short)(data7 | ((data8 << 8) & 0xFFFF));

                  Using the shift works (70 << 8 == 17920). Why do you put "& 0xFFFF" in, though?

                  (The last one would not work because we are expecting 0x00 for the second-to-last byte [we're expecting the IUnknown GUID "00000000-0000-0000-C000-000000000046"].)
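Side note: this is also why passing the built-in Guid works without any fiddling; its layout keeps the Data4 portion as eight individual bytes, which have no byte order to worry about. A quick way to see the same 16 bytes the Memory window showed:

```csharp
using System;

class GuidBytes
{
    static void Main()
    {
        var iid = new Guid("00000000-0000-0000-C000-000000000046");

        // Prints the same 16 bytes the debugger showed for the working struct:
        // 00 00 00 00 00 00 00 00 C0 00 00 00 00 00 00 46
        Console.WriteLine(BitConverter.ToString(iid.ToByteArray()));
    }
}
```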


                  • mldisibio
                    Recognized Expert New Member
                    • Sep 2008
                    • 191

                    #10
                    I believe "& 0xFFFF" truncates the result to the low 16 bits before the cast.
                    I have followed the code of others who are much more familiar with bit-shifting than I am, including classes in MS Rotor code where bit shifting was done. However, maybe I missed the true purpose of doing it.


                    • Plater
                      Recognized Expert Expert
                      • Apr 2007
                      • 7872

                      #11
                      0xFFFF is used as a mask, although "& 0xFFFF" does nothing here since the value already fits in 16 bits; using "& 0x00FF" or "& 0xFF00" would mask off one byte (or the other).
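A small illustration of the masking (the value is hypothetical, just to show which byte each mask keeps):

```csharp
using System;

class MaskDemo
{
    static void Main()
    {
        int packed = 0x46FF; // high byte 0x46, low byte 0xFF

        Console.WriteLine((packed & 0x00FF).ToString("X2"));        // FF   - keeps the low byte
        Console.WriteLine(((packed & 0xFF00) >> 8).ToString("X2")); // 46   - keeps the high byte
        Console.WriteLine((packed & 0xFFFF) == packed);             // True - 0xFFFF changes nothing here
    }
}
```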

