Let me see if I understood you right: you're downloading that XML file and then want to cut the URL out of it?
# Set the url variable :)
# Download the file to /tmp/$$.url ($$ is the shell's PID, which makes the filename unique)
wget "$url" -O "/tmp/$$.url"
# The sed line looks through what we just grepped, matches something starting with path=",
# buffers whatever is contained between the two quotes, and replaces the whole line with that buffer.
file=$(grep files "/tmp/$$.url" | sed 's/^.*path="\(.*\)">$/\1/')
# Download the url that we got above.
wget -c "$file"
# Remove the temp file so we don't get a lot of crap lying around.
rm -f "/tmp/$$.url"
You were right on the money; the sed line is just what you needed.
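To see the capture-group trick in isolation, here's a quick sketch on a made-up sample line (the real XML will look different, so treat the line below as a hypothetical):

```shell
# Hypothetical sample line, similar to what the XML file might contain
line='<file path="http://example.com/video.avi">'
# \( \) buffers what sits between the quotes; \1 replaces the whole line with that buffer
echo "$line" | sed 's/^.*path="\(.*\)">$/\1/'
# prints http://example.com/video.avi
```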
PS. If you're using cat file | grep "pattern", you can just as well use grep "pattern" file; this way you only need to execute one program instead of two, which saves you some time in large scripts that process a lot of data.
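A tiny demo of that point, using a made-up temp file (the file name is just an example):

```shell
# Create a throwaway file to search in
tmp=/tmp/uuoc-demo.$$
printf 'foo\nbar\nfoobar\n' > "$tmp"
# Two processes: cat feeds grep through a pipe
cat "$tmp" | grep foo
# One process: grep reads the file itself, same output
grep foo "$tmp"
# Clean up
rm -f "$tmp"
```

Both commands print the same two matching lines; the second just skips the extra cat process.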