What is wrong with this?

  • DumRat
    New Member
    • Mar 2007
    • 93

    What is wrong with this?

    Hi,
    I'm confounded. Can anyone please help me? Here are two pieces of code.

    This is C++.NET code.
    Code:
    // Requires: using namespace System::Collections::Generic;
    void CLRStackTest()
    {
    	// An object created with gcnew must be held through a handle (^)
    	// and its members accessed with ->, not .
    	Stack<int>^ clrstack = gcnew Stack<int>();
    	for(int i = 0; i < 1000000; i++)
    	{
    		clrstack->Push(i);
    	}
    
    	while(clrstack->Count > 0)
    	{
    		clrstack->Pop();
    	}
    }
    This is native C++ code.
    Code:
    // Requires: #include <stack> and using namespace std;
    void StackTest()
    {
    	stack<int> *s = new stack<int>;
    	for(int i = 0; i < 1000000; i++)
    	{
    		s->push(i);
    	}
    	while(!s->empty())
    	{
    		s->pop();
    	}
    	delete s;	// the original leaked the stack: new without delete
    }
    I ran the two functions (in separate projects: one a CLR project, the other a Win32 console app) and timed them.
    I got ~30ms for the CLR implementation and ~660ms for the native C++ implementation (optimized for speed).
    I want to know why the .NET implementation performed faster. Are there any stupid mistakes in my code? Is this a bad example? Or is it something else? Thanks in advance.

    DumRat
  • weaknessforcats
    Recognized Expert Expert
    • Mar 2007
    • 9214

    #2
    How did you do your timing??

    The CLR is already loaded but a Windows console app has to do all the work itself.


    • DumRat
      New Member
      • Mar 2007
      • 93

      #3
      Originally posted by weaknessforcats
      How did you do your timing??

      The CLR is already loaded but a Windows console app has to do all the work itself.
      Timing?

      I used timeGetTime() for the console app, you know, with timeBeginPeriod() at the beginning, and used DateTime::Now.TickCount for the CLR.


      • oler1s
        Recognized Expert Contributor
        • Aug 2007
        • 671

        #4
        Originally posted by DumRat
        I used timeGetTime() for the console app, you know, with timeBeginPeriod() at the beginning.
        From the MSDN page:
        Use the QueryPerformanceCounter and QueryPerformanceFrequency functions to measure short time intervals at a high resolution,
        I would imagine your timing suffers from resolution problems.

        EDIT: Maybe not. If you used it properly, it should be pretty accurate. Let me run some code myself to post something more substantial.
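        For what it's worth, on a modern toolchain there's a portable way to time this kind of loop without worrying about timer resolution at all: std::chrono's steady_clock. A sketch (not what either of us used here; C++11 or later assumed, and the function name is mine):

```cpp
#include <chrono>
#include <stack>

// Times the same push/pop workload as the code above and returns the
// elapsed time in milliseconds. steady_clock is monotonic, so it is
// safe for measuring intervals, unlike wall-clock time.
long long time_stack_workload(int n)
{
    auto start = std::chrono::steady_clock::now();

    std::stack<int> s;
    for (int i = 0; i < n; i++)
        s.push(i);
    while (!s.empty())
        s.pop();

    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(
        stop - start).count();
}
```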


        • oler1s
          Recognized Expert Contributor
          • Aug 2007
          • 671

          #5
          For me, the C++ version was about 55 ms. I don't write Managed C++, but since it compiles down to MSIL, I wrote a C# version, which took about 15-30 ms.

          It's pretty close. In case you're wondering about the timing disparity, the C++ version probably loses time allocating and deallocating memory, while the C# version probably just preallocates the memory at the beginning and cleans up at the end.
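          That preallocation guess is easy to probe on the native side. std::stack has no reserve(), but a std::vector used directly as a LIFO does; with reserve(), one up-front allocation replaces the repeated reallocations, which is roughly what a preallocating runtime would do. A sketch (my own code, not the original poster's):

```cpp
#include <vector>

// Runs the push/pop workload using std::vector as the stack.
// With preallocate == true, a single reserve() call does all the
// allocation up front, so the timed loop is nearly allocation-free.
// Returns the number of pops as a sanity check.
int vector_stack_test(int n, bool preallocate)
{
    std::vector<int> s;
    if (preallocate)
        s.reserve(n);          // one up-front allocation

    for (int i = 0; i < n; i++)
        s.push_back(i);        // push

    int pops = 0;
    while (!s.empty()) {
        s.pop_back();          // pop
        ++pops;
    }
    return pops;
}
```

          Timing the two variants separately should show how much of the native cost is reallocation rather than the pushes and pops themselves.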


          • DumRat
            New Member
            • Mar 2007
            • 93

            #6
            Originally posted by oler1s
            For me, the C++ version was about 55 ms. I don't write Managed C++, but since it compiles down to MSIL, I wrote a C# version, which took about 15-30 ms.

            It's pretty close. In case you're wondering about the timing disparity, the C++ version probably loses time allocating and deallocating memory, while the C# version probably just preallocates the memory at the beginning and cleans up at the end.
            1. The CLR code takes more time to load and initialize - is this where the preallocation is done?
            2. If that is the case, how can I actually test the 'real' performance?
            3. According to you guys, if I were implementing an iterative deepening depth-first search using stacks (say, for the 8-puzzle) in both native C++ and the CLR, which would be faster? I always thought that native C++ is faster - if so, by how much?

            Thanks for your patience.


            • oler1s
              Recognized Expert Contributor
              • Aug 2007
              • 671

              #7
              1. The CLR code takes more time to load and initialize - is this where the preallocation is done?
              Maybe. Both languages significantly abstract away the underlying mechanics, and I'm not experienced enough in .NET to comment confidently on what really happens. Even if I had an idea, a lot of it is implementation-specific. For example, in the C++ code, which has significantly fewer abstractions, there's still the question of how stack is implemented and how new allocates memory.

              But it's a good guess that the .NET version allocates a chunk of memory first, so that when you time the code, you're timing little more than the pushes and pops plus some very limited overhead for keeping track of the memory in use. The actual allocation and freeing of memory is done before and after, by the garbage collector.

              The C++ version is a bit more explicit, though. In a sense, there is a memory "manager", created by your compiler, behind new and delete. This manager allocates memory from the operating system and then hands it out to you as needed. It's not a garbage collector - it's up to you to manage the memory you're given - but it is an implementation-specific abstraction layer.

              In any case, you still have to explicitly new and delete memory at runtime. While responding to this question, I realized that my implementation of the C++ code didn't put the stack code in a separate function - it was all in main. So my timing would have skipped the initial construction and destruction of the stack, which no doubt involves allocating and freeing a chunk of memory, and so on.
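              Incidentally, the native version doesn't need new at all. Putting the stack on automatic storage removes both the explicit heap allocation and the leak (the original code never calls delete). A sketch (the function name and pop-count return are mine):

```cpp
#include <stack>

// Same workload as the original StackTest, but the stack lives on
// automatic storage: no new/delete, and nothing to leak. Returns the
// pop count as a sanity check.
int stack_test_no_new(int n)
{
    std::stack<int> s;       // destroyed automatically at scope exit
    for (int i = 0; i < n; i++)
        s.push(i);

    int pops = 0;
    while (!s.empty()) {
        s.pop();
        ++pops;
    }
    return pops;
}
```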

              If that is the case, how can I actually test the 'real' performance?
              Which is really hard to do, because there are so many underlying mechanics in play. That's partly why experienced programmers take "benchmarks" with a grain of salt.

              If you see any serious language comparisons, they probably involve a whole battery of test code, with each test focused on a different non-trivial problem: something that involves a lot of arithmetic, something that involves a lot of recursion, and so on.

              Comparisons are then made across each "category" of tests.
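              One common way to make a micro-benchmark like this one less noisy is best-of-N: run the workload several times and keep the minimum, which filters out one-off costs such as page faults and cache warm-up. A sketch (C++11 assumed; the helper name is mine):

```cpp
#include <chrono>

// Best-of-N timing: runs `work` several times and returns the minimum
// elapsed time in microseconds. The minimum is a more stable figure
// than any single run, because one-off costs only inflate some trials.
template <typename F>
long long best_of(int trials, F work)
{
    long long best = -1;
    for (int t = 0; t < trials; t++) {
        auto start = std::chrono::steady_clock::now();
        work();
        auto stop = std::chrono::steady_clock::now();
        long long us = std::chrono::duration_cast<
            std::chrono::microseconds>(stop - start).count();
        if (best < 0 || us < best)
            best = us;
    }
    return best;
}
```

              Passing the stack workload in as `work` and comparing best-of-N figures across both implementations would be fairer than a single timed run of each.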

              According to you guys, if I were implementing an iterative deepening depth-first search using stacks (say, for the 8-puzzle) in both native C++ and the CLR, which would be faster? I always thought that native C++ is faster - if so, by how much?
              Maybe the C++ version would be very slightly faster. Properly coded, the differences between C++ and .NET should be minuscule. I'm not entirely sure how to dissuade people from treating speed differences between languages as a sort of one-dimensional "race".

              I would recommend C# over C++ for application development on Windows these days. I don't think it's worth spending time on C++ for .NET. I believe your first code example was Managed C++, which is already obsolete; the current version is C++/CLI, which doesn't get all the benefits that C# does anyway.


              • DumRat
                New Member
                • Mar 2007
                • 93

                #8
                Looks like I've been living a lie. Thanks for clearing things up a little. Anyway, I changed the functions a little so that no "cheat"-type optimizations are possible. Even then, .NET was faster. Thanks.

