
large files


All times are UTC - 6 hours


Posted: Wed Feb 28, 2007 1:23 am

Joined: Wed Feb 28, 2007 12:49 am
Posts: 1
hi all,

In the file I have to process, there are about 50000 spectra. Each of them starts with a '#' line and has 6000 rows (with 2 columns). Something like:

# spectrum 1
100 23.456
101 23.435
...
# spectrum 2
100 22.456
101 23.435
...
...
...
# spectrum 50000
100 53.456
101 53.435
...

So I have 50000 * 6001 rows altogether in the file. I wrote a program in C that processes each of these spectra. In a loop, I call awk to extract one spectrum from this large file into a separate file, like:

awk '{if (NR > last) exit} {if (NR >= first && NR <= last) print $0}' spectra.dat > extractedSpectra.dat

where 'first' and 'last' are loop variables. I bail out of awk early to speed up extracting each spectrum. Extraction is fast for the first, say, 100 spectra but then slows down. How can I make the extraction more efficient and faster?
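The slowdown is presumably because each awk invocation rescans spectra.dat from line 1, so the work grows with each spectrum (roughly quadratic overall). One way around that is to split all spectra in a single pass over the file; here is a sketch, where the output names spectrum_1.dat, spectrum_2.dat, ... are my own choice, not anything from the original setup:

```shell
# Demo input in the same layout as spectra.dat (one '#' header per spectrum,
# then data rows); replace this with the real 50000-spectrum file.
printf '# spectrum 1\n100 23.456\n# spectrum 2\n100 22.456\n' > spectra.dat

# Single pass: on each '#' header, close the previous output file and
# start a new one; every data row goes to the current output file.
awk '/^#/ { if (out) close(out); n++; out = "spectrum_" n ".dat"; next }
     { print > out }' spectra.dat
```

After this one-time split, the C program can open spectrum_N.dat directly instead of shelling out to awk per spectrum. The close() call matters here: without it, awk keeps 50000 file descriptors open and will hit the per-process limit.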

thanks in advance
oliver


Powered by phpBB © 2011 phpBB Group
© 2003 - 2011 USA LINUX USERS GROUP