Hi,
I am working on a project dealing with a large amount of data. I have
a problem computing the FFT of a very long time trace, a signal with
over 300 million sampling points. After testing on my computer, I
realise that I can store only 2**27 points in memory, which needs
2 GB of RAM. With an array of 2**28 Double_t points the program
crashes ("segmentation violation"). I have tried the CERN ROOT
framework, GSL, and the FFTW3 library; they all need to load the data
into memory. So the question is: is there some mechanism or algorithm
to manage the array in a TTree, or somewhere else on the hard disk,
and then load the data step by step into a cache? Something like a
FileArray. Or do you have a better idea?
This is really urgent. I would be very grateful to hear something from
you!
Thanks!
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]