On Tue, 03 Jun 2008 16:58:12 -0300, Pau Freixes <pfreixes@milnou.net>
wrote:
So the above code corresponds to the standalone version - what about the
embedded version? Are you sure it is exactly the *same* code? All those
global statements are suspicious, and you don't even need most of them.
Note that looking up a name in the global namespace is much slower than
using a local name.
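The gap is easy to demonstrate with a quick micro-benchmark (an illustrative sketch of my own, not code from your program; the counting loop just stands in for the md5 work):

```python
import timeit

_counter = 0

def bump_global():
    # every iteration does a global-namespace lookup and store
    global _counter
    for i in range(100000):
        _counter = _counter + 1

def bump_local():
    # identical work, but the name is local to the function
    counter = 0
    for i in range(100000):
        counter = counter + 1
    return counter

t_global = timeit.timeit(bump_global, number=10)
t_local = timeit.timeit(bump_local, number=10)
# on CPython, t_local typically comes out well below t_global
```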
Also, you're including the time it takes the OS to *generate* several
megabytes of random data from /dev/urandom (how big is _const_b?).
Usually it's easier (and more accurate) to measure the time it takes to
compute a long task (let's say, how much time it takes to compute 1000000
md5 values). You're doing it backwards instead.
I'd rewrite the test as:

def try_me():
    from md5 import md5
    buff = os.urandom(_const_b)
    for i in xrange(1000000):
        md5(buff).hexdigest()

def main(req):
    t0 = time.clock()
    try_me()
    t1 = time.clock()
    # elapsed time = t1-t0
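If you'd rather not roll the timing by hand, the stdlib timeit module does the same fixed-work measurement; the sketch below uses hashlib (where md5 lives nowadays) and placeholder sizes, since the real value of _const_b isn't shown:

```python
import hashlib
import os
import timeit

BLOCK_SIZE = 1024   # placeholder for _const_b
N_DIGESTS = 100000  # smaller than 1000000 so the sketch finishes quickly

# generate the random data once, outside the timed region
buff = os.urandom(BLOCK_SIZE)

def work():
    md5 = hashlib.md5  # local binding, cheaper to look up in the loop
    for i in range(N_DIGESTS):
        md5(buff).hexdigest()

elapsed = timeit.timeit(work, number=1)
rate = N_DIGESTS / elapsed  # digests per second, comparable across runs
```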
PS: I remember answering this on the Spanish-language Python list, but now
I see that my message never arrived :(
--
Gabriel Genellina
Pau Freixes wrote:
Hi list,
>
First, hello to all; I hope this is my first but not my last message to the list :P
>
These last months I have been writing a C program, similar to mod_python,
that embeds the Python language; it's a middleware for dispatching and
executing Python batch programs on several nodes. Now I'm writing some
Python programs to test how this scales across several nodes, compared
with "standalone" performance.
>
I found a very strange problem with one application named md5challenge.
This application tries to calculate the maximum number of md5 digests in
a given number of seconds; md5challenge uses a simple alarm signal to stop
the program when the time has passed. This is the code of the Python script:
>
def handler_alrm(signum, frame):
    global _signal
    global _nrdigest
    global _f
>
    _signal = True
>
def try_me():
    global _nrdigest
    global _f
    global _signal
>
    _f = open("/dev/urandom", "r")
    while _signal is not True:
        buff = _f.read(_const_b)
        md5.md5(buff).hexdigest()
        _nrdigest = _nrdigest + 1
>
    if _f is not None:
        _f.close()
>
def main(req):
    global _nrdigest
>
    signal.signal(signal.SIGALRM, handler_alrm)
    signal.alarm(req.input['time'])
>
    try_me()
>
    req.output['count'] = _nrdigest
>
    return req.OK
>
if __name__ == "__main__":
>
    # test code
    class test_req:
        pass
>
    req = test_req()
    req.input = { 'time' : 10 }
    req.output = { 'ret' : 0, 'count' : 0 }
    req.OK = 1
>
    main(req)
>
    print "Reached %d digests" % req.output['count']
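[For what it's worth, the alarm-flag pattern this script relies on can be shown in isolation like this (a minimal, Unix-only sketch of my own; the counting loop stands in for the md5 work, and the mutable holder is one way to drop the global statements):]

```python
import signal

stop = {"flag": False}  # a mutable holder avoids the global statements

def handler(signum, frame):
    stop["flag"] = True

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)  # ask the OS to deliver SIGALRM in 1 second

count = 0
while not stop["flag"]:
    count += 1  # stand-in for reading a buffer and hashing it

signal.alarm(0)  # cancel any pending alarm
# count now holds how many iterations fit in the 1-second window
```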
>
>
When I run this program standalone on my Pentium Dual Core, md5challenge
reaches about 1,000,000 keys in 10 seconds, but when I run it in embedded
mode md5challenge reaches about 200,000 more keys!!! I repeated this test
many times and embedded mode always wins!!! What's happening?
>
I also tested removing the read dependency on /dev/urandom, calculating
all the keys from the same buffer. In this case embedded mode also always
wins, and the difference is even bigger!!!
>
Thanks to all, can anybody help me?