Hello. It would be of great use to me to have a script of the following sort.
- Initially we will have a text file containing URLs. These URLs are yanked from Vimperator and placed into the file manually, one per line, with precisely zero trailing spaces and a newline after each URL so that they can be read consistently. Let this file be known as "bkurlsrc", and let it live in a fixed directory.
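For illustration, bkurlsrc would contain something like this (these URLs are just placeholders):

    http://example.com/an-article.html
    http://example.org/another/page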
Now, bkurlsrc is to be read (by some means of which I am unaware) and taken as input by wget. wget should download those pages only, following no links, saving each page along with its supporting files (images, etc.) in .html format.
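From skimming the wget man page, I imagine that step looks roughly like this, with $HOME/bookmarks standing in for whatever the fixed directory ends up being; please correct me if the flags are wrong:

    #!/bin/bash
    # where bkurlsrc lives -- a placeholder, to be set to the fixed directory
    SRCDIR="$HOME/bookmarks"

    # -i  : read the URLs from the given file, one per line
    # -p  : also grab the files each page needs (images, CSS, etc.)
    # -E  : save pages with an .html extension
    # -nd : keep everything flat in the current directory
    # no recursion option is given, so no links are followed
    wget -i "$SRCDIR/bkurlsrc" -p -E -nd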
Now, based on the date only (up to, say, 01/20/09 specificity), a folder named "012009" (or some similar format) is to be created if it does not already exist, and the downloaded .html files must be moved from the current directory into that folder. Using a tool such as html2text, all of these .html files should then be converted to text, with the original .html files still kept.
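For the folder part, I'm guessing date can produce the name and mkdir -p only creates the directory when it's missing, something like:

    # e.g. "012009" for January 20, 2009
    DATEDIR=$(date +%m%d%y)

    # create the folder only if it does not already exist,
    # then move the downloaded pages into it
    mkdir -p "$DATEDIR"
    mv ./*.html "$DATEDIR"/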
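And for the conversion, continuing with $DATEDIR from above, maybe a loop like this (html2text prints to standard output as far as I can tell, so redirecting it should leave the originals alone):

    # convert each page to text, keeping the original .html next to it
    for f in "$DATEDIR"/*.html; do
        html2text "$f" > "${f%.html}.txt"
    done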
Now, the file "bkurlsrc" should be copied. One copy is to be cleared of all text; the second is to be renamed with the date and hour/minute/second appended to the filename. This dated file should again be copied: one copy should be placed into a folder "LOGS", and the second "log" file should be moved into the folder 012009.
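My rough guess at this step, reusing $SRCDIR and $DATEDIR from above (I take "cleared of all text" to mean the original gets truncated to empty, ready for the next batch):

    # e.g. "012009_153045" -- date plus hour/minute/second
    STAMP=$(date +%m%d%y_%H%M%S)

    # make the dated copy, then empty out the original for next time
    cp "$SRCDIR/bkurlsrc" "bkurlsrc_$STAMP"
    : > "$SRCDIR/bkurlsrc"

    # one copy of the log into LOGS, the other into the date folder
    mkdir -p LOGS
    cp "bkurlsrc_$STAMP" LOGS/
    mv "bkurlsrc_$STAMP" "$DATEDIR"/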
OK, I think that's it. I'd appreciate it if you could help me. I am new to bash, and besides being useful, this will help me learn some of the basic functions.