This is a very basic introduction to shell programming. A lot of my programs started out as a group of commands executed on the command line that I felt would be useful in the future. If you type a multiple-line command and you think you might want to keep it, then (using ksh) go back to the command using the ESC-k function and, when you have the right command, hit "v" and it will open the command in a vi window which you can then edit and :w to save. Note: you'll have to make the file executable before running it. Warning: be very careful, because the commands will be executed when you exit the vi window whether you get out of it using q! or anything else. So if you accidentally hit v on the rm -rf * command... it may have unintended and destructive consequences.

There are a lot of possibilities. The basic shell script is just a collection of command lines collected to use over and over to save a lot of typing. Beyond that is applying programming to those command lines so that they execute depending on some variable criteria. The most useful things in shell to do that are the for and while loops, the if/then statement, and variable manipulation. A lot of good information is in the man pages for the shell being run (man sh, man ksh). You have to know these. The UNIX regexp man page can be viewed by using:

    man 5 regexp

Just using man regexp gives you the C regex() page. Doh.

Keep in mind that you can assign a variable the output of any command by using the `` or $() syntax. For example:

    output=`ls /`
or
    output=$(ls /)

will assign a list of all of the files and directories in / to $output.

A few variables are special and are used to pass arguments into a program:

    $#      Number of arguments (not including the shell name)
    $$      PID of the process
    $@      All of the arguments
    $*      Same as $@
    $?      Error code of the last command
    $0      Shell (program) name
    $1-$9   The arguments themselves

So if a program is called like this:

    program -a -b -c one two three

then $# would be 6, $0 would be program, $1 would be -a, $2 would be -b, etc.

The for loop takes a line of input and executes some action for every field in that line. For example:

    for number in 1 2 3 4 5 6
    do
        echo $number
    done

will echo 1 through 6, each on a separate line. More interesting is using a variety of techniques mentioned in this document to come up with a for list to execute. For example:

    for dir in `mount -v|awk '{print $3}'`
    do
        bdf $dir
    done

will do a bdf on every mounted directory that is output from the mount -v command.

The if/then and while loops use the "test" arguments (see man test). The most useful are:

    -z s1       True if the length of string s1 is zero.
    -n s1       True if the length of string s1 is non-zero.
    n1 -eq n2   True if the integers n1 and n2 are algebraically
                equal. Any of the comparisons -ne, -gt, -ge, -lt,
                and -le can be used in place of -eq.
    = or !=     The comparisons above are for numbers; use these
                for string comparisons.
    -o          Or operator.
    -a          And operator.

Note: don't use < or > for greater than and less than... they will be interpreted as redirection symbols and you'll end up creating unwanted files. The basic structure is:

    [ "$1" -eq 1 ]
or
    [ "string1" = "string2" ]

with more complicated possibilities:

    if [ $# -lt 1 -o $# -gt 2 -o $# -eq 1 -a `echo $1|grep -c "^-"` -gt 0 ]

which says: if the number of arguments ($#) is less than one, OR the number of arguments ($#) is greater than 2, OR the number of arguments equals one AND the argument has a "-" as its first character. A line like this is used for error handling in a program; if any of these conditions is true, a usage message is output.
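As a sketch of how that error check might sit at the top of a real script (the usage message and the trailing echo are made up for illustration):

    #!/bin/ksh
    # Sketch: print a usage message and quit if the argument count is
    # wrong, or if a single argument starts with a "-".
    if [ $# -lt 1 -o $# -gt 2 -o $# -eq 1 -a `echo $1|grep -c "^-"` -gt 0 ]
    then
        echo "USAGE: $0 <source> [<destination>]"
        exit 1
    fi
    echo "arguments look ok: $@"

Called with no arguments, with three arguments, or with a lone "-x", it prints the usage line and exits; otherwise it falls through to the real work.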
See another section of this document for an explanation of the command line arguments ($# $1). Note: it's a good idea to put all variables in quotes, because if a variable is not set to anything and the quotes aren't there, the test will fail. This is especially true with the -z and -n arguments, which check for the existence of a string: if the string is null and doesn't have quotes, the test collapses to something like [ -n ], which either fails with a syntax error or quietly gives the wrong answer. By the way, I always confuse the -z and -n.

The most important difference between a for loop and a while loop is that the for loop works on the fields of a list while the while loop uses the entire line of a list. This is an important difference, and depending on the input and desired output, it will probably be what determines whether the for or the while loop is used. The most common application of a while loop is to read the lines in a file and process them line by line, for example:

    find / -print|egrep -v "^/home"|while read path
    do
        echo `basename $path`
    done

which will read the find output line by line and assign each line to $path. The while loop is more powerful than the for loop because it can be variable controlled using the test arguments. Note: to get an infinite loop, you can use:

    while :

This can be useful to start an infinite process, such as:

    first=`who|awk '{print $1}'`
    first=doofus
    while :
    do
        when=`date`
        second=`who|awk '{print $1}'`
        if [ -n "$second" -a ! -n "$first" ]
        then
            echo "$second logged on at $when." >> /tmp/log.log
        else
            if [ -n "$first" -a ! -n "$second" ]
            then
                echo "$first logged off at $when." >> /tmp/log.log
            fi
        fi
        first=$second
        sleep 5
    done

which writes a log file of everyone who logs on and off, infinitely.

The if/then statement follows this format:

    if [ ]
    then
        commands
    fi

or, with an else:

    if [ ]
    then
        commands
    else
        commands
    fi

or, as an if/then/else if:

    if [ ]
    then
        commands
    elif [ ]
    then
        commands
    fi

The following example program automatically removes a volume group by first removing all of the logical volumes and then removing the volume group. Note the if/then for the usage statement and the use of if/then/else to process the input:

    if [ $# -ne 1 ]
    then
        echo "USAGE: $0 <volume group>"
        exit
    fi
    echo "This will permanently remove EVERYTHING in $1"
    echo "are you sure [y,n]? "
    reply=`line -t 10`
    if [ "$reply" = "y" ]
    then
        vgdisplay -v /dev/$1|grep "LV Name"|awk '{print $NF}'|xargs lvremove -f
        vgchange -a n /dev/$1
        vgexport /dev/$1
        echo "$1 whacked."
    else
        if [ -z "$reply" ]
        then
            echo "Timeout. Nothing changed."
        else
            echo "nothing changed."
        fi
    fi

This is a pretty good example of a script that uses a bunch of UNIX utilities, including reply=`line -t 10`, which reads from standard input for 10 seconds and assigns whatever is input (or not) to $reply. The UNIX program xargs is also useful because it takes whatever is coming through the pipe and puts it at the end of a command. So whatever comes out of

    vgdisplay -v /dev/$1|grep "LV Name"|awk '{print $NF}'

will be executed as:

    lvremove -f <each LV name>

Math with shell is terrible: you can only do integer operations, and it's slow and not intuitive. But sometimes it's necessary, so here is the syntax:

    count=`expr $count + 1`

Keep in mind that the multiplication symbol (*) is also a shell wildcard, so to multiply you have to escape it with a backslash (\*). Besides +, -, \*, and /, you can use % to get the mod. Note: you need the spaces around the operator, otherwise it fails.
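For example, a few more lines in the same vein (the variable names are made up); note the escaped * and the spaces around every operator:

    total=`expr 6 \* 7`       # multiplication: the * has to be escaped
    third=`expr $total / 3`   # integer division: 14
    left=`expr $total % 5`    # mod: the remainder of 42/5, which is 2
    echo "$total $third $left"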
awk is a powerful language unto itself with a lot of useful built-in functions (see man awk for more information). To begin with, the most basic thing you'll end up using it for is to cut fields from some sort of input that is piped to it. Here's an example:

    mount -v|awk '{print $3}'

That outputs all of the mount points, which is the third field of every line from the mount -v command. For every line that gets piped to awk, $1 is the first field, $2 is the second field, and so on... $0 is the whole line, NF is the number of fields (so $NF is the last field), NR is the line number of the input, and FS is the field separator. For basic operations FS is the only one that is important. Say you want the list of login names on a machine; by doing:

    cat /etc/passwd|awk 'BEGIN{FS=":"}{print $1}'

you will get just the first field, which is the login name. (FS has to be set in a BEGIN block, or with awk -F:, so that it applies to the first line too.) If you want the real name, it would be $5 instead of $1. NF is only important if you have lines that are of variable length and you want a field that you can only get from the end of the line, not the beginning. This line:

    awk '{print $NF}'

will always give you the last field, while this line:

    awk '{print $(NF-1)}'

will give you the second-to-last field.

Note: this function can also be done using the shell command "cut", but awk used in this way is much more intuitive and flexible. Another note: if you don't pipe any input into awk, or the input is (unexpectedly) null, it will just sit there waiting for input (unless you use the BEGIN statement).

The awk for loop is somewhat different than shell; the syntax is:

    for (i=<start>; i<=<end>; i=i+<increment>)
    {
        commands
    }

The following example takes the $1 and $2 from the shell (which is done with the -v option) and prints the numbers from $1 to $2:

    awk -v MIN="$1" -v MAX="$2" 'BEGIN{
    for (i=MIN;i<=MAX;i++){
        print i
    }
    }'

Note: i++ is equivalent to i=i+1, and i-- is equivalent to i=i-1. The previous example is useful in a shell for loop:

    for number in `awk 'BEGIN{
    for (i=1;i<=100;i++){
        print i
    }
    }'`

This will set number to 1-100 without having to type in all 100 numbers.

The format for an awk if/then statement is:

    if (<condition>) {
        commands
    } else {
        commands
    }

The operators are different from the shell operators:

    ==   is equal
    !=   is not equal
    <=   is less than or equal to
    >=   is greater than or equal to
    <    is less than
    >    is greater than
    ||   is or
    &&   is and

There are more operators; see man awk for more information. For example:

    if ((NF/2) == int(NF/2) && NF != 0){
        print NR " " $0
    }

checks to see if the number of fields is even with (NF/2) == int(NF/2), and the && makes sure that the number of fields is not 0 with NF != 0. NOTE: spaces between operators are not necessary with awk, but are recommended for readability. However, the {} brackets are very important, and leaving one out will probably account for 90 percent of your syntax errors.

Math with awk is more powerful than shell in that it is not limited to integers. There are also a lot of built-in functions (like the int function used above). The following example converts a number with a fraction to its decimal equivalent:

    awk -v price="$*" 'BEGIN{
    if (match(price," ") != 0) {
        #It's a fraction greater than one
        split(price,dollar," ")
        split(dollar[2],frac,"/")
        decimal=frac[1]/frac[2]
        print dollar[1]+decimal
    }
    if (match(price," ") == 0) {
        if (match(price,"/") == 0) {
            #It's a whole number only
            print price
        } else {
            #It's a fraction only
            split(price,frac,"/")
            decimal=frac[1]/frac[2]
            print decimal
        }
    }
    }'

The match(<string>,<expression>) function returns the position of <expression> in <string>, or 0 if it is not found.
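For instance, a quick one-liner (the test string is made up) shows what match() returns:

    echo "one 1/2" | awk '{print match($0,"/"), match($0,"x")}'

prints "6 0": the "/" is the sixth character of the line, and there is no "x", so the second match() returns 0.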
The split(<string>,<array>,<separator>) function splits <string> into the array <array> using the field separator <separator>. To access the array elements, use the syntax <array>[1], <array>[2], <array>[3], and so on. See an awk programming book for more information on all of the intricacies of this powerful language.

Sed can get very, very complicated; for more information read man sed. The most useful basic function of sed is cutting parts of strings off of some input or another. For example:

    cat /etc/group|awk 'BEGIN{FS=":"}{print $4}'

will output all the lognames that are listed in the groups found in /etc/group. The list is the fourth field and is comma separated. To use this in a for loop, it would have to be space separated. For example:

    for log in `cat /etc/group|awk 'BEGIN{FS=":"}{print $4}'|sed "s/,/ /g"`

will give a space separated list of lognames which can then be processed with the for loop. For example:

    for log in `cat /etc/group|awk 'BEGIN{FS=":"}{print $4}'|sed "s/,/ /g"|sort -u`
    do
        if [ `grep -c "^${log}:" /etc/passwd` -lt 1 ]
        then
            echo "${log} is in /etc/group but not in /etc/passwd."
        fi
    done

will output all the lognames that are listed in /etc/group but are not in /etc/passwd.

Ed is very complicated (see man ed), but it is the best thing to use for certain applications. It is also the only thing that you can use to join lines of input. A basic function of ed is to output just part of a file. For example:

    ed - <file> <<-!
    5;10 p
    !

will print out lines 5-10. This can be useful with grep -n to find a start and an end line number and use them as variables:

    ed - <file> <<-!
    $start;$end p
    !

or use this shell script, which takes a start and end line and a filename:

    # @(#) This prints lines in a range.
    if [ $# -lt 2 -o $# -gt 2 ]
    then
        cat <<-EOF
        USAGE: $0 [<start>]-[<end>] <file>
        examples:
        $0 10-20 <file>
        $0 10- <file>
        $0 -20 <file>
        $0 20 <file>
        EOF
        exit
    fi
    if [ `echo $1|grep -c "-"` -ne 1 ]
    then
        start=$1
        end=$1
    else
        start=`echo $1|awk 'BEGIN{FS="-"}{print $1}'`
        end=`echo $1|awk 'BEGIN{FS="-"}{print $2}'`
    fi
    if [ -z "$start" ]
    then
        start=1
    elif [ -z "$end" ]
    then
        end="$"
    fi
    ed - $2 <<-!
    $start;$end p
    !

To use the line joining function, use a script like this:

    ed - <file> <<-!
    /$line/ s/$/ /
    j
    w
    !

which finds $line in <file>, tacks a separator (here a space) onto the end of that line, and then joins it with the next line (ed's j command), writing the result back to the file.

So have fun and look at other programs to see what they do and learn from them. Take what you need from them, because if you have a problem, chances are someone has already solved it.