Sometimes you will end up with a list of URLs whose HTTP response codes you would like to check. You might have 200 pages that are sending Google a 302 redirect header, and you would like to check them all at once.
This very rough example reads a list of URLs from a file, fetches their HTTP response codes and redirect location, and prints them to the screen:
while read inputline
do
    url="$inputline"
    headers="$(lynx -dump -head "$url" | grep -e HTTP -e Location)"
    echo "$url $headers"
    sleep 2
done < filename.txt
It is a rough script because the Location field of the headers returned by Lynx sometimes spans two lines. (I'm going to fix that problem soon.)
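In the meantime, one workaround is to unfold the wrapped header before grepping: an HTTP header continuation line begins with whitespace, so it can be rejoined to the previous line. Here is a rough sketch using awk, run against made-up sample headers (the URL is invented for illustration):

```shell
# Sample wrapped headers, as lynx sometimes prints them:
printf 'HTTP/1.1 302 Found\nLocation: http://example.com/\n some/long/path\n' |
awk '
    /^[ \t]/ { sub(/^[ \t]+/, " "); line = line $0; next }  # continuation line: append to previous
             { if (line) print line; line = $0 }            # new header: flush the previous one
    END      { if (line) print line }                       # flush the last header
' | grep -e HTTP -e Location
```

Dropped into the script above, the awk filter would sit between the lynx call and the grep, so the Location field always comes out on a single line.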
The sleep command tells the script to pause for 2 seconds between requests. It is optional, but if I am requesting a lot of URLs from one site, I usually pause between requests so that it doesn't make the server do too much work at once.
The basic syntax for processing a file line-by-line in the shell is:
while read inputline
do
    [some commands here]
done < [input filename]
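As a quick self-contained illustration of the pattern, this loop reads from a here-document instead of a file (so it runs as-is) and prints the length of each input line:

```shell
# Each iteration, "read" puts one line of input into $inputline.
while read inputline
do
    echo "${#inputline}"   # print the length of that line
done <<EOF
foo
hello
EOF
# prints 3, then 5
```

Swap the here-document for `< filename.txt` and the loop body for your own commands, and you have the structure of the script above.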