Hi,
I'm confounded. Can anyone please help me? Here are two pieces of code, which I ran in separate projects (one a CLR project, the other a Win32 console app) and timed.
This is the C++/CLI (.NET) code.
Code:
void CLRStackTest()
{
    // gcnew returns a handle, so the variable must be Stack<int>^
    // and members are accessed with ->
    Stack<int>^ clrstack = gcnew Stack<int>();
    for (int i = 0; i < 1000000; i++)
    {
        clrstack->Push(i);
    }
    while (clrstack->Count > 0)
    {
        clrstack->Pop();
    }
}
This is the native C++ code.
Code:
void StackTest()
{
    stack<int> *s = new stack<int>;
    for (int i = 0; i < 1000000; i++)
    {
        s->push(i);
    }
    while (!s->empty())
    {
        s->pop();
    }
    delete s; // the original version never freed this allocation
}
I got ~30 ms for the CLR implementation and ~660 ms for the native C++ implementation (this was with optimization for speed turned on).
I want to know why the .NET implementation performed faster. Are there any stupid mistakes in my code? Is this a bad example, or is something else going on? Thanks in advance.
DumRat