RAID Controllers
We’ve spent a great deal of time examining various RAID levels such as RAID-0, 1, 5, and 6, and Nested RAID levels such as RAID-10, 50, 51, 61, and 60, or even the more complicated RAID-100 and RAID-160. In all of these discussions we have assumed there was a RAID “controller” that performed the various RAID operations: sending chunks of data to the appropriate disks, computing parity, handling hot-swapping and disk fail-over, checking read transactions to determine if the read was successful and, if not, declaring that disk “down”, plus other important tasks related to RAID. All of these tasks require some sort of computation and have to be performed by a RAID controller.
You really have two options for RAID controllers: (1) a dedicated hardware RAID controller, typically on an add-in RAID card but sometimes on the motherboard, or (2) software RAID that uses the CPU for RAID chores. In the first case, the dedicated RAID controller performs all of the necessary RAID computations, typically using a dedicated lower-power processor, often a real-time processor such as a PowerPC. Most commonly these controllers are on an add-in card, and you plug the drives you want in the RAID array into the card.
With software RAID, all the RAID functions run on the CPU. Pretty much all Linux distributions come with software RAID in the form of md (Multiple Device), a driver within Linux that provides virtual devices created from one or more underlying devices (e.g. storage block devices). The great thing is that it comes with almost all Linux distributions, so you have access to the source (plus the price isn’t too bad either, and it has a great deal of functionality). In addition, you can create Nested RAID configurations as needed since all RAID functions are in software (I smell a Triple Lindy coming).
One thing you have to be very careful about is what is commonly called a fakeRAID card or fakeRAID controller. FakeRAID is not hardware RAID because there is no dedicated RAID controller. Rather, these cards use a standard disk controller chip on an add-in card or motherboard, coupled with some specialized firmware and drivers. At boot time they run a special application that allows users to configure the disks attached to the fakeRAID as a RAID group. But the RAID processing is actually handled by the drivers running on the CPU, so the CPU provides the computational power. Consequently, it’s really a software RAID solution and not hardware RAID (hence the name “fakeRAID”).
There is a great deal of discussion and grinding of teeth within the Linux community about fakeRAID. One point of the discussions is that the vendors of fakeRAID don’t tell customers that what they are actually buying is not a RAID card with a dedicated RAID controller, but rather a simple card with a disk controller coupled with drivers that use the CPU for RAID processing (false advertising). Plus there is the additional problem of developing and supporting drivers for Linux to allow these fakeRAID cards to be used. Moreover, there is a strong argument that it is probably better to use software RAID that comes with Linux (md) since it is already part of Linux and can arguably give you better performance.
However, if you want to use software RAID that comes with Linux in the kernel (md), you still need some tools to control/manage/monitor the software RAID arrays. That’s where mdadm comes in. This article will do a brief examination of mdadm and some of its options.
Introduction to mdadm
Mdadm is a software tool, primarily written by Neil Brown, that allows you to create, assemble, report on, grow, and monitor RAID arrays that use software RAID in Linux. According to the documentation there are seven modes of operation:
1. Create
2. Assemble
3. Follow or Monitor
4. Build
5. Grow
6. Manage
7. Misc
We’ll walk through these various modes of operation but the focus of this article is an introduction and not an in-depth HOWTO. You can find those types of articles on the web.
The first step in using mdadm, or any RAID for that matter, is to PLAN your RAID configuration carefully. Personally I like to work backwards starting with the purpose of the storage. Will it be used for a database? Will it be used for /home? Will it be used for high-speed scratch? Will it be used for data that requires a very high degree of reliability? Understanding the intent of the storage is really the key to creating the RAID configuration you need/want. Once you determine the storage use case, you need to develop an idea of how much I/O performance you need (throughput and IOPS) and the general ratio of read and write performance. You should also develop an idea of how much data redundancy you need for the storage.
Once you have an idea of the performance and redundancy of the array you can select the RAID configuration you think you might need. I would suggest you select a few candidate RAID configurations, do some more reading/research on each one, and then select the RAID configuration that seems best. During this research, be sure to examine the redundancy as well as the performance of the various RAID configurations and compare them to your estimations. But also be sure to examine the capacity and storage efficiency of each level. You may love the performance and storage efficiency of RAID-10 but the data redundancy may not be enough for you. Or you may love the data redundancy of RAID-61 but you may not be willing to give up the performance or, perhaps more importantly, you may not be willing to accept such low storage efficiency (especially if this is for your home system).
But just choosing the RAID configuration you want is not the end of your planning. You also need to consider a number of other things. Perhaps the most important is whether you will need to grow or shrink the storage. This is important because you are likely to use LVM (Logical Volume Manager) either on top of Linux software RAID or underneath it. This forces you to carefully consider how to build both LVM and software RAID and how you would expand either one or both. I would recommend walking through the expansion steps to make sure you understand how to do it (you could even pass along your ideas to someone else to have another pair of eyes examine them).
One other thing you should consider before implementing your well formulated and thought-out RAID plan is the file system that sits on top of the storage. Based on your usage model for the storage, select one or two candidate file systems. Then do some research on each one to find out what problems or limitations exist, and also how you can optimize each file system for better performance (we’re all performance junkies at heart). There are a number of articles on the web that discuss tuning file systems with Linux software RAID.
Assuming that you have done your careful planning (including a backup solution), let’s move on to the first “mode” of mdadm, Creating a RAID array.
Creating a RAID array
Mdadm allows you to create a RAID array using Linux block devices. During the creation of the array, per-device superblocks are written for the RAID array (these allow the array to be assembled correctly later). Using the “create” mode is the most common method for building the array and is recommended if you are just starting to use mdadm.
The basic mdadm command for creating a RAID array is fairly simple with the following generic command and typical options.
mdadm --create [md-device] --chunk=X --level=Y --raid-devices=Z [devices]
where the options are as follows:
-c, --chunk= Specify chunk size in kibibytes. The default is 64.
-l, --level= Set RAID level; options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty
-n, --raid-devices= Specify the number of active devices in the array.
Notice that the mdadm command begins with the “--create” option, which tells mdadm that it will operate in “create” mode. I like to then define the specific md-device as well, but be sure you aren’t using an existing md-device name. Specifying the “--chunk” option is up to you. Then you define the RAID level you want with the “--level” option, using one of the values listed above. You also tell mdadm how many block devices you are using with the “--raid-devices=Z” option (Z is the number of devices). Finally, you give mdadm the list of block devices you are using.
An example of using the create option with mdadm is,
% mdadm --create --verbose /dev/md0 --level=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
which creates a RAID-0 configuration that is labeled as /dev/md0 and uses three block devices that are /dev/sda1, /dev/sdb1, and /dev/sdc1.
In the example, I have used the first partition of each of the three drives that are valid block devices as the block devices for mdadm. They could have easily been the entire disk such as /dev/sda or /dev/sdb. The point is that they need to be valid Linux block devices (they could even be network based devices but that’s a different discussion).
Mdadm is smart enough to build the RAID-0 configuration using the smallest common size of each of the three devices. So it is recommended that you check on the size of each block device using fdisk as shown below.
root@amadys-laptop:~/# /sbin/fdisk /dev/sdb
The number of cylinders for this disk is set to 19457.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000bca3e
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1       18704   150239848+  83  Linux
/dev/sdb2           18705       19457     6048472+   5  Extended
/dev/sdb5           18705       19457     6048441   82  Linux swap / Solaris
Be sure to look at the column labeled “Blocks” for the particular partition you are going to use. Do this for all the devices and make sure the number of blocks is the same or very close (otherwise you are wasting space).
You might also notice that I used the “--verbose” option in the mdadm create command. I like to use this option to get more information about what mdadm is doing (“better informed than sorry” is a good motto). This is always a good habit to develop.
At this point, your RAID array should be created and running. An easy way to check this is to look at /proc/mdstat.
% cat /proc/mdstat
Fortunately, the output should be fairly easy to read at first glance. For much more detailed information you can read this article.
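For the three-disk RAID-0 created above, the output would look something like the following (illustrative only; the exact block count depends on your drives):

Personalities : [raid0]
md0 : active raid0 sdc1[2] sdb1[1] sda1[0]
      450719544 blocks 64k chunks

unused devices: <none>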
Immediately after the RAID array is created, it may have to go through a synchronization process. This process performs the necessary RAID functions for the configuration you created. For example, for RAID-1 the blocks on the first drive are copied to the second drive, even if there isn’t any data in those blocks.
Once the array has finished synchronizing and is ready, you can move on to the next step: using the resulting RAID device in LVM or creating a file system on the block device.
There are many options that you can use in “create” mode. You can read the man pages to get a list of them but below are some of the more important ones that haven’t been presented yet in this article.
-x, --spare-devices= This option allows you to specify spare devices in the initial array. These are devices (disks) that are used in the event that a disk in the RAID configuration fails. Mdadm then uses a spare drive immediately and starts restoring the array to the desired configuration. Mdadm also allows spare drives to be added and removed later (they don’t have to be added when the array is created). Note that “--raid-devices” counts only the active devices, so the total number of devices you list on the command line should equal “--raid-devices” plus “--spare-devices” (see the example after this list).
-p, --layout=, --parity= Mdadm gives you remarkable control over your RAID configuration. This option lets you control the fine details of the data layout for RAID-5 and RAID-10 arrays and also controls the failure mode for a faulty or failed disk. Please read the manpages for more detail.
-z, --size= This option is the amount of space to be used from each drive in a RAID-1, RAID-4, RAID-5, or RAID-6 configuration. The size is given in kibibytes and must be a multiple of the chunk size. In addition, you must leave about 128 kibibytes of space at the end of each drive for the RAID superblock. After the array is created you can use the “grow” mode of mdadm (--grow) to increase the size of the RAID configuration.
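As an illustration of spares, here is a hedged example (the device names are placeholders) that creates a three-disk RAID-5 with one hot spare; note that “--raid-devices=3” counts only the active disks while four devices are listed:

% mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1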
Assembling an mdadm RAID array
One of the other “modes” in mdadm is “assemble”. After your RAID array has been created using mdadm, you can stop the array using the following command:
% mdadm --stop /dev/md0
which stops the RAID array /dev/md0 (be sure to unmount the file system that uses the RAID array first). However, restarting the RAID array is not automatic. When you restart the array you have to use mdadm to reassemble it. For example,
% mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
This command assembles the parts of a previously created array into an active array and “restarts” the array (i.e. makes it function). To automate this, you could put this command in the system startup (for example, /etc/rc.d/rc.local), and you could create a simple script for stopping and starting the array (a sketch appears a little further below). But mdadm can do some of the leg work for you with the following command:
% mdadm --assemble --scan
These options allow mdadm to scan the drives and reassemble the RAID array (it looks for the RAID superblocks on the drives). Typically this is done during the init phase of system startup. For example, on my CentOS 5.5 system, there is a line in /etc/rc.d/rc.sysinit that looks like the following.
/sbin/mdadm -A -s
which will scan (“-s”) the drives and assemble (“-A”) the mdadm arrays.
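Putting the stop and assemble commands together, a minimal start/stop wrapper might look like the following sketch (the md device, member partitions, and mount point are all hypothetical and would need to match your system):

#!/bin/sh
# hypothetical start/stop wrapper for a single md array
case "$1" in
  start)
    mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
    mount /dev/md0 /mnt/raid
    ;;
  stop)
    umount /mnt/raid
    mdadm --stop /dev/md0
    ;;
esac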
However, you can get into trouble with scanning and assembling mdadm RAID arrays when you have more than one array. If you want to restart an array by hand, it is recommended that you specify the UUID for the array (a unique “name” for the RAID array). For example,
% mdadm --scan --assemble --uuid=7121b438:7d36f9f6:8aa9c8b3:b5b0d211
Since the UUIDs are unique to each array, this ensures that mdadm can reassemble the array properly. However, mdadm’s scanning and assembling capabilities are quite good. I routinely run two mdadm RAID configurations on my desktop and I’ve never had any confusion about which disks belong to which array (thanks to the superblocks on the devices).
Monitoring/Following an mdadm RAID array
The third mode of operation of mdadm is monitoring or following an mdadm RAID array. This mode monitors one or more arrays and allows action to be taken if the state of an array changes. According to the manpage for mdadm, this mode of operation is really only useful for RAID-1, 4, 5, 6, and 10, or multipath arrays since they have interesting states. Monitoring RAID-0 or linear arrays is not useful because they have no redundancy: a missing or failed drive simply makes the array fail (i.e. non-operational).
The basic option for following or monitoring mdadm controlled arrays is the following:
mdadm --monitor options... devices...
(Note: you can use “-F” or “--follow” in place of “--monitor”). There are several options that can be used for monitoring and following mdadm arrays, as listed below:
-m, --mail This option allows you to define an email address where mdadm alerts are sent.
-p, --program, --alert This option allows mdadm to run a “program” whenever an event is detected (it is recommended to use the full path to the program).
-y, --syslog This option causes all events to be reported through “syslog”.
-d, --delay This option sets the delay (in seconds) between mdadm polls of the arrays (i.e. the polling interval).
-f, --daemonize This option tells mdadm to run as a background daemon if it is monitoring arrays. This causes mdadm to fork, run in the child process, and disconnect from the terminal.
-i, --pid-file This option tells mdadm to write the pid of the daemon process to a specified file when mdadm is run as a daemon (see previous option).
-1, --oneshot This option checks the arrays only once and generates “NewArray” events as well as “DegradedArray” and “SparesMissing” events (these will show up in the logs). According to the manpages, if you run the command “mdadm --monitor --scan -1” from a cron job, it will ensure regular notification of any degraded arrays (always a good thing).
-t, --test This option generates a “TestMessage” alert for every array found at startup. This alert gets mailed and passed to the alert program (if you have defined one). This is very useful for testing that alert messages get through successfully (i.e. they work).
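Putting a few of these options together, a typical (hedged) way to run the monitor as a background daemon is shown below; the email address and delay are placeholders you would adjust:

% mdadm --monitor --scan --daemonize --mail=admin@example.com --delay=300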
As you can see, mdadm gives you several pretty good options for monitoring your arrays. The exact practice of how you monitor really depends upon how you want to function and any tools or processes you have developed. There are several articles on the web that show how to monitor your mdadm array.
In the interest of helping interested people get started I will present a few tips that you can use to get started with monitoring mdadm arrays. I first recommend configuring mdadm to email you in the event of a change in state of the array. There are several articles that discuss how to do this. For example, this blog shows you how to use mdadm to send email to you in the event of a problem.
If you want to be more involved in the monitoring of your arrays there are two primary ways to get more detail: (1) cat /proc/mdstat, and (2) mdadm --detail [device]. The first option gives you a quick overview of the status of the array(s). You can parse this output (perhaps with perl or python), and create a special log or send the output to “syslog” to allow processing by syslog tools. Alternatively, you could create a simple monitoring metric that can be used in conjunction with ganglia or something similar.
The second option gives you more detail than mdstat but again, you can parse this output and then perform some action (logs, syslog, ganglia, etc.). But the details are really up to you since you likely have specific processes or techniques for monitoring.
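As one starting point, here is a hedged awk sketch that scans /proc/mdstat and flags any array whose status field (the [UU_] style marker) shows a missing member; treat it as illustrative rather than a polished monitor:

% awk '/^md/ { dev = $1 } /\[[U_]+\]/ && /_/ { print dev ": degraded (" $NF ")" }' /proc/mdstat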
One last option is to use Munin which is a monitoring tool (somewhat similar to Ganglia). It has plugins that allow you to monitor your mdadm created RAID arrays.
Building an mdadm RAID array
Another mode of operation in mdadm is building an array without per-device superblocks. This means that you cannot have mdadm scan and assemble the devices into an array. Consequently, you have to be careful to differentiate between the initial creation of the array and the assembly of the array (or you can lose data). In addition, if you build an array using “build” mode, none of the checks that would normally be performed between devices are done. Basically, you have to be very careful when using this mode of operation and know exactly what you are doing.
I really don’t recommend using this mode of operation, so I won’t be discussing it further. If you are interested in using “build” mode, then you can read the manpage or search around the Internet for references.
Growing an mdadm RAID array
One of the secret weapons you get with mdadm is the ability to grow, reshape, or even change RAID levels of md arrays. There are some limitations on growing and/or reshaping arrays; however, just having that ability is a pretty major accomplishment for mdadm.
The basic option for growing or reshaping md arrays is “-G” or “--grow”. There are a number of options that can be used and things can become complex fairly quickly, so I won’t go over them in this introductory article. Please read the manpages for the options or you can Google for information about growing md arrays. However, let’s take a high-level look at what the “--grow” option can do.
We can use the “--grow” option to add a third disk to a two-disk RAID-1 configuration or add another disk to an existing RAID-5 configuration. But we can also use the “--grow” option to change RAID levels. For example, we can convert a two-disk RAID-1 md array to a two-disk RAID-5 md array. Direct from the mdadm author’s blog, here is the list of what reshape changes can be done:
* A RAID-1 array can change the number of devices or change the size of individual devices. A 2 drive RAID-1 can be converted to a 2 drive RAID-5.
* A RAID-4 can change the number of devices or the size of individual devices. It cannot be converted to RAID-5 yet (though that should be trivial to implement).
* A RAID-5 can change the number of devices, the size of the individual devices, the chunk size and the layout. A 2 drive RAID-5 can be converted to RAID-1, and a 3 or more drive RAID-5 can be converted to RAID-6.
* A RAID-6 can change the number of devices, the size of the individual devices, the chunk size and the layout. And RAID-6 can be converted to RAID-5 by first changing the layout to be similar to RAID-5, then changing the level.
* A LINEAR array can have a device added to it which will simply increase its size.
* RAID-10 and RAID-0: These arrays cannot be reshaped at all at present.
So you can see that there is a great deal of flexibility in mdadm with respect to changing shape, RAID levels, adding devices, removing devices, etc., to md arrays.
As I mentioned previously, the details of reshaping and changing RAID levels can get complicated. I suggest that before you use this option, you read all the literature you can. Then I would ask some experts who can tell you if your commands are correct or not. And before starting anything, I would definitely make sure you have a backup copy of the data and make sure the storage array isn’t in production.
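As a concrete illustration, a common reshape, adding a fourth disk to an existing three-disk RAID-5, would look something like this hedged sketch (the device names are placeholders, and the reshape itself can take many hours):

% mdadm /dev/md0 --add /dev/sdd1
% mdadm --grow /dev/md0 --raid-devices=4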
Managing an md RAID array
Managing a md RAID array primarily consists of managing the devices within the arrays. This can include adding, removing, or failing disks within an array.
An easy (sort of) way to tell if you are using the manage mode is this: if you give a device before any options on the mdadm command line, or if the first option is “--add”, “--fail”, or “--remove”, then you are using the manage mode of mdadm. The general form of the mdadm command is,
mdadm device options ... devices ...
The options used in manage mode are:
-a, --add [device] This option allows you to add the specified devices to the specified array while the array is running.
--re-add [device] This option allows you to re-add a device that was recently removed from the array.
-r, --remove [device] This option allows you to remove the specified device. The device must not be active, so it must be either failed or a spare device.
-f, --fail [device] This option marks the specified device as faulty (failed).
--set-faulty This option is the same as -f.
You can combine the options in one command but all of the commands must affect the same array.
A simple example of the manage mode of mdadm is,
% /sbin/mdadm /dev/md1 --add /dev/sdc1 --fail /dev/sdb1 --remove /dev/sdb1
In this example, the array /dev/md1 is the “target” of the mdadm command. The first option, “--add /dev/sdc1”, adds a device (/dev/sdc1) to the array. Then the option “--fail /dev/sdb1” marks that device as faulty. Finally, the third option, “--remove /dev/sdb1”, removes the device from the array, at which point it can be removed from the system. Note that you need to fail a device before you can remove it.
One cool feature of mdadm is that if you remove a device (disk) from an array you can add it back (--re-add) to the array, and mdadm can update only the blocks that changed while the disk was removed (this incremental recovery relies on the array having per-device superblocks, i.e. you didn’t use the “build” mode, and a write-intent bitmap; without a bitmap, a full resync is performed).
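For example, using the device names from the manage example above, re-adding the disk would look like this sketch:

% /sbin/mdadm /dev/md1 --re-add /dev/sdb1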
Misc
This last mode of operation is sort of a catch-all for options and commands that don’t fit into the other six modes. In general this mode supports some operations on active arrays, operations on component devices, and the gathering of information about the arrays.
The general form of the mdadm command in “misc” mode is the following.
mdadm options ... devices ...
Notice that for a “misc” command, no array is given before the options. The options for this mode are the following:
-Q, --query This option examines a device to see if it is an md device and if it is a component of an md array. The information discovered by mdadm is presented in the output.
-D, --detail [md-device] This option prints out details of one or more md arrays. If you add the option “--brief” or “--scan”, the amount of detail in the output is reduced but the format is amenable to /etc/mdadm.conf (an optional configuration file for mdadm).
-E, --examine This option prints the content of the md superblock on the device (or all devices if no device is specified). As with the “--detail” option, if you use “--brief” or “--scan” the amount of output is reduced and it is more amenable to /etc/mdadm.conf.
-X, --examine-bitmap This option reports the information about a bitmap file.
-R, --run This option will start (activate) a partially built md array.
-S, --stop This option stops (deactivates) an active md array.
-o, --readonly This option marks the active array as read-only if it is not currently being used.
-w, --readwrite This option marks the array as read-write.
--zero-superblock This option overwrites a valid md superblock on a device with zeros. This is useful when disks (devices) from an old md array are to be reused in a new one.
-t, --test This option, when used with the --detail option, sets the exit status of mdadm to reflect the status of the md device. This can be very useful when scripting monitoring tools. It is also useful if you want to start the md array yourself rather than rely on the kernel to autostart it.
Of all of the “misc” mode operations, the “--query”, “--detail”, and “--examine” options are the most generally used.
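For example, a common idiom is to use “--detail --scan” to capture array definitions for /etc/mdadm.conf; the output below is illustrative (it reuses the UUID from earlier in this article), and your level, device count, and metadata version will differ:

% mdadm --detail --scan
ARRAY /dev/md0 level=raid0 num-devices=3 metadata=0.90 UUID=7121b438:7d36f9f6:8aa9c8b3:b5b0d211

Appending that line to /etc/mdadm.conf gives mdadm a persistent record of the array.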
Summary
We’ve spent some time talking about RAID configurations, both single-level and Nested RAID configurations. In all of those discussions we’ve simply said that the RAID operations are handled by a “RAID controller”. There are two types of RAID controllers: hardware and software. A hardware RAID controller has a dedicated processor, usually on an add-in card, that handles all the RAID computations. In contrast, software RAID uses the system CPU for RAID computations (this includes “fakeRAID” controllers as well).
This article is an introduction to mdadm, the management/admin tool for Linux software RAID that comes with virtually every Linux distribution. The tool is very flexible allowing for the standard RAID levels and Nested RAID configurations, including some specialized RAID-10 configurations we’ve discussed previously. You can even use it to build some “Triple Lindy” Nested-RAID configurations if you want.
Mdadm has seven different “modes” of operation which we discussed. These modes allow you to create and start a RAID array, assemble a RAID array (useful when the system boots), follow or monitor a RAID array, build a RAID array (basically doing everything by hand – not recommended), grow a RAID array (one of the secret weapons of mdadm), manage a RAID array, and a “miscellaneous” category for functions that you may need that didn’t fall into the other categories.
There are some really great features in mdadm that can easily be glossed over in a mad rush to build a RAID configuration. There are two big ones that I want to highlight. The first feature is the set of standard monitoring tools in mdadm that give you a great starting place for watching the status of your md array, including the ability to send out email alerts. Plus you can write fairly simple scripts to parse array status information, which can be used in monitoring tools such as ganglia or munin.
The second feature, which is probably the most significant, is the ability to grow and reshape md arrays. This allows you to add devices to an existing RAID array and grow the array to include the added space. However, the really cool feature is that you can use mdadm to change RAID levels without losing data. For example, you could convert a two-disk RAID-1 into a two-disk RAID-5 configuration. Then you could add disks to grow the RAID-5 configuration. Then you could convert the RAID-5 into a RAID-6 configuration. While I haven’t used this feature, this is pretty nifty if you ask me.
Mdadm is a great tool for Linux that is easy to use and gives you a great deal of control over RAID arrays. If you are thinking about RAID on Linux, be sure to take a look at mdadm.
Learning Awk by example: Part 3
Formatting output
While awk's print statement does do the job most of the time, sometimes more is needed. For those times, awk offers two good old friends called printf() and sprintf(). Yes, these functions, like so many other awk parts, are identical to their C counterparts. printf() will print a formatted string to stdout, while sprintf() returns a formatted string that can be assigned to a variable. If you're not familiar with printf() and sprintf(), an introductory C text will quickly get you up to speed on these two essential printing functions. You can view the printf() man page by typing "man 3 printf" on your Linux system.
Here's some sample awk sprintf() and printf() code. As you can see, everything looks almost identical to C.
x=1
b="foo"
printf("%s got a %d on the last test\n","Jim",83)
myout=sprintf("%s-%d",b,x)
print myout
This code will print:
Jim got a 83 on the last test
foo-1
String functions
Awk has a plethora of string functions, and that's a good thing. In awk, you really need string functions, since you can't treat a string as an array of characters as you can in other languages like C, C++, and Python. For example, if you execute the following code:
mystring="How are you doing today?"
print mystring[3]
You'll receive an error that looks something like this:
awk: string.gawk:59: fatal: attempt to use scalar as array
Oh, well. While not as convenient as Python's sequence types, awk's string functions get the job done. Let's take a look at them.
First, we have the basic length() function, which returns the length of a string. Here's how to use it:
print length(mystring)
This code will print the value:
24
OK, let's keep going. The next string function is called index(), and it returns the position of the first occurrence of a substring in another string, or 0 if the string isn't found. Using mystring, we can call it this way:
print index(mystring,"you")
Awk prints:
9
We move on to two more easy functions, tolower() and toupper(). As you might guess, these functions will return the string with all characters converted to lowercase or uppercase respectively. Notice that tolower() and toupper() return the new string, and don't modify the original. This code:
print tolower(mystring)
print toupper(mystring)
print mystring
....will produce this output:
how are you doing today?
HOW ARE YOU DOING TODAY?
How are you doing today?
So far so good, but how exactly do we select a substring or even a single character from a string? That's where substr() comes in. Here's how to call substr():
mysub=substr(mystring,startpos,maxlen)
mystring should be either a string variable or a literal string from which you'd like to extract a substring. startpos should be set to the starting character position, and maxlen should contain the maximum length of the string you'd like to extract. Notice that I said maximum length; if length(mystring) is shorter than startpos+maxlen, your result will be truncated. substr() won't modify the original string, but returns the substring instead. Here's an example:
print substr(mystring,9,3)
Awk will print:
you
If you regularly program in a language that uses array indices to access parts of a string (and who doesn't), make a mental note that substr() is your awk substitute. You'll need to use it to extract single characters and substrings; because awk is a string-based language, you'll be using it often.
Now, we move on to some meatier functions, the first of which is called match(). match() is a lot like index(), except instead of searching for a substring like index() does, it searches for a regular expression. The match() function will return the starting position of the match, or zero if no match is found. In addition, match() will set two variables called RSTART and RLENGTH. RSTART contains the return value (the location of the first match), and RLENGTH specifies its span in characters (or -1 if no match was found). Using RSTART, RLENGTH, substr(), and a small loop, you can easily iterate through every match in your string. Here's an example match() call:
print match(mystring,/you/), RSTART, RLENGTH
Awk will print:
9 9 3
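Here's one sketch of such a loop (the variable names tmp and pos are mine, not awk built-ins) that steps through every match of /o/ in mystring and prints its absolute position; for our sample string it prints 2, 10, 14, and 20:

tmp = mystring
pos = 0
while ( match(tmp,/o/) > 0 ) {
    print "match at position", pos + RSTART
    pos += RSTART + RLENGTH - 1
    tmp = substr(tmp, RSTART + RLENGTH)
}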
String substitution
Now, we're going to look at a couple of string substitution functions, sub() and gsub(). These guys differ slightly from the functions we've looked at so far in that they actually modify the original string. Here's a template that shows how to call sub():
sub(regexp,replstring,mystring)
When you call sub(), it'll find the first sequence of characters in mystring that matches regexp, and it'll replace that sequence with replstring. sub() and gsub() have identical arguments; the only way they differ is that sub() will replace the first regexp match (if any), and gsub() will perform a global replace, swapping out all matches in the string. Here's an example sub() and gsub() call:
sub(/o/,"O",mystring)
print mystring
mystring="How are you doing today?"
gsub(/o/,"O",mystring)
print mystring
We had to reset mystring to its original value because the first sub() call modified mystring directly. When executed, this code will cause awk to output:
HOw are you doing today?
HOw are yOu dOing tOday?
Of course, more complex regular expressions are possible. I'll leave it up to you to test out some complicated regexps.
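For instance, here's one hedged illustration that uses a character class to squeeze runs of spaces and tabs down to single spaces:

messy="How   are  you     doing today?"
gsub(/[ \t]+/," ",messy)
print messy

....which prints:

How are you doing today?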
We wrap up our string function coverage by introducing you to a function called split(). split()'s job is to "chop up" a string and place the various parts into an integer-indexed array. Here's an example split() call:
numelements=split("Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec",mymonths,",")
When calling split(), the first argument contains the literal string or string variable to be chopped. In the second argument, you should specify the name of the array that split() will stuff the chopped parts into. In the third element, specify the separator that will be used to chop the strings up. When split() returns, it'll return the number of string elements that were split. split() assigns each one to an array index starting with one, so the following code:
print mymonths[1],mymonths[numelements]
....will print:
Jan Dec
Special string forms
A quick note -- when calling length(), sub(), or gsub(), you can drop the last argument and awk will apply the function call to $0 (the entire current line). To print the length of each line in a file, use this awk script:
{
print length()
}
Financial fun
A few weeks ago, I decided to write my own checkbook balancing program in awk. I decided that I'd like to have a simple tab-delimited text file into which I can enter my most recent deposits and withdrawals. The idea was to hand this data to an awk script that would automatically add up all the amounts and tell me my balance. Here's how I decided to record all my transactions into my "ASCII checkbook":
23 Aug 2000 food - - Y Jimmy's Buffet 30.25
Every field in this file is separated by one or more tabs. After the date (field 1, $1), there are two fields called "expense category" and "income category". When I'm entering an expense like on the above line, I put a four-letter nickname in the exp field, and a "-" (blank entry) in the inc field. This signifies that this particular item is a "food expense" :) Here's what a deposit looks like:
23 Aug 2000 - inco - Y Boss Man 2001.00
In this case, I put a "-" (blank) in the exp category, and put "inco" in the inc category. "inco" is my nickname for generic (paycheck-style) income. Using category nicknames allows me to generate a breakdown of my income and expenditures by category. As far as the rest of the records, all the other fields are fairly self-explanatory. The cleared? field ("Y" or "N") records whether the transaction has been posted to my account; beyond that, there's a transaction description, and a positive dollar amount.
The algorithm used to compute the current balance isn't too hard. Awk simply needs to read in each line, one by one. If an expense category is listed but there is no income category (it's "-"), then this item is a debit. If an income category is listed, but no expense category ("-") is there, then the dollar amount is a credit. And, if there is both an expense and income category listed, then this amount is a "category transfer"; that is, the dollar amount will be subtracted from the expense category and added to the income category. Again, all these categories are virtual, but are very useful for tracking income and expenditures, as well as for budgeting.
The code
Time to look at the code. We'll start off with the first line, the BEGIN block and a function definition:
balance, part 1
#!/usr/bin/awk -f
BEGIN {
FS="\t+"
months="Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec"
}
function monthdigit(mymonth) {
return (index(months,mymonth)+3)/4
}
Adding the first "#!..." line to any awk script will allow it to be directly executed from the shell, provided that you "chmod +x myscript" first. The remaining lines define our BEGIN block, which gets executed before awk starts processing our checkbook file. We set FS (the field separator) to "\t+", which tells awk that the fields will be separated by one or more tabs. In addition, we define a string called months that's used by our monthdigit() function, which appears next.
The last three lines show you how to define your own awk function. The format is simple -- type "function", then the function name, and then the parameters separated by commas, inside parentheses. After this, a "{ }" code block contains the code that you'd like this function to execute. All functions can access global variables (like our months variable). In addition, awk provides a "return" statement that allows the function to return a value, and operates similarly to the "return" found in C, Python, and other languages. This particular function converts a month name in a 3-letter string format into its numeric equivalent. For example, this:
print monthdigit("Mar")
....will print this:
3
Now, let's move on to some more functions.
Financial functions
Here are three more functions that perform the bookkeeping for us. Our main code block, which we'll see soon, will process each line of the checkbook file sequentially, calling one of these functions so that the appropriate transactions are recorded in an awk array. There are three basic kinds of transactions, credit (doincome), debit (doexpense) and transfer (dotransfer). You'll notice that all three functions accept one argument, called mybalance. mybalance is a placeholder for a two-dimensional array, which we'll pass in as an argument. Up until now, we haven't dealt with two-dimensional arrays; however, as you can see below, the syntax is quite simple. Just separate each dimension with a comma, and you're in business.
We'll record information into "mybalance" as follows. The first dimension of the array ranges from 0 to 12, and specifies the month, or zero for the entire year. Our second dimension is a four-letter category, like "food" or "inco"; this is the actual category we're dealing with. So, to find the entire year's balance for the food category, you'd look in mybalance[0,"food"]. To find June's income, you'd look in mybalance[6,"inco"].
balance, part 2
function doincome(mybalance) {
mybalance[curmonth,$3] += amount
mybalance[0,$3] += amount
}
function doexpense(mybalance) {
mybalance[curmonth,$2] -= amount
mybalance[0,$2] -= amount
}
function dotransfer(mybalance) {
mybalance[0,$2] -= amount
mybalance[curmonth,$2] -= amount
mybalance[0,$3] += amount
mybalance[curmonth,$3] += amount
}
When doincome() or any of the other functions are called, we record the transaction in two places -- mybalance[0,category] and mybalance[curmonth, category], the entire year's category balance and the current month's category balance, respectively. This allows us to easily generate either an annual or monthly breakdown of income/expenditures later on.
If you look at these functions, you'll notice that the array referenced by mybalance is passed in by reference. In addition, we also refer to several global variables: curmonth, which holds the numeric value of the month of the current record; $2 (the expense category); $3 (the income category); and amount ($7, the dollar amount). When doincome() and friends are called, all these variables have already been set correctly for the current record (line) being processed.
The main block
Here's the main code block that contains the code that parses each line of input data. Remember, because we have set FS correctly, we can refer to the first field as $1, the second field as $2, etc. When doincome() and friends are called, the functions can access the current values of curmonth, $2, $3 and amount from inside the function. Take a look at the code and meet me on the other side for an explanation.
balance, part 3
{
curmonth=monthdigit(substr($1,4,3))
amount=$7
#record all the categories encountered
if ( $2 != "-" )
globcat[$2]="yes"
if ( $3 != "-" )
globcat[$3]="yes"
#tally up the transaction properly
if ( $2 == "-" ) {
if ( $3 == "-" ) {
print "Error: inc and exp fields are both blank!"
exit 1
} else {
#this is income
doincome(balance)
if ( $5 == "Y" )
doincome(balance2)
}
} else if ( $3 == "-" ) {
#this is an expense
doexpense(balance)
if ( $5 == "Y" )
doexpense(balance2)
} else {
#this is a transfer
dotransfer(balance)
if ( $5 == "Y" )
dotransfer(balance2)
}
}
In the main block, the first two lines set curmonth to an integer between 1 and 12, and set amount to field 7 (to make the code easier to understand). Then, we have four interesting lines, where we write values into an array called globcat. globcat, or the global categories array, is used to record all those categories encountered in the file -- "inco", "misc", "food", "util", etc. For example, if $2 == "inco", we set globcat["inco"] to "yes". Later on, we can iterate through our list of categories with a simple "for (x in globcat)" loop.
On the next twenty or so lines, we analyze fields $2 and $3, and record the transaction appropriately. If $2=="-" and $3!="-", we have some income, so we call doincome(). If the situation is reversed, we call doexpense(); and if both $2 and $3 contain categories, we call dotransfer(). Each time, we pass the "balance" array to these functions so that the appropriate data is recorded there.
You'll also notice several lines that say "if ( $5 == "Y" ), record that same transaction in balance2". What exactly are we doing here? You'll recall that $5 contains either a "Y" or a "N", and records whether the transaction has been posted to the account. Because we record the transaction to balance2 only if the transaction has been posted, balance2 will contain the actual account balance, while "balance" will contain all transactions, whether they have been posted or not. You can use balance2 to verify your data entry (since it should match with your current account balance according to your bank), and use "balance" to make sure that you don't overdraw your account (since it will take into account any checks you have written that have not yet been cashed).
Generating the report
After the main block repeatedly processes each input record, we now have a fairly comprehensive record of debits and credits broken down by category and by month. Now, all we need to do is define an END block that will generate a report, in this case a modest one:
END {
bal=0
bal2=0
for (x in globcat) {
bal=bal+balance[0,x]
bal2=bal2+balance2[0,x]
}
printf("Your available funds: %10.2f\n", bal)
printf("Your account balance: %10.2f\n", bal2)
}
This report prints out a summary that looks something like this:
Your available funds: 1174.22
Your account balance: 2399.33
In our END block, we used the "for (x in globcat)" construct to iterate through every category, tallying up a master balance based on all the transactions recorded. We actually tally up two balances, one for available funds, and another for the account balance. To execute the program and process your own financial goodies that you've entered into a file called "mycheckbook.txt", put all the above code into a text file called "balance", "chmod +x balance", and then type "./balance mycheckbook.txt". The balance script will then add up all your transactions and print out a two-line balance summary for you.
Upgrades
I use a more advanced version of this program to manage my personal and business finances. My version (which I couldn't include here due to space limitations) prints out a monthly breakdown of income and expenses, including annual totals, net income and a bunch of other stuff. Even better, it outputs the data in HTML format, so that I can view it in a Web browser :) If you find this program useful, I encourage you to add these features to this script. You won't need to configure it to record any additional information; all the information you need is already in balance and balance2. Just upgrade the END block, and you're in business!
I hope you've enjoyed this series. For more information on awk, check out the resources listed below.
Resources
* Read earlier installments in the awk series: Awk by example, Part 1 and Part 2 on this blog.
* If you'd like a good old-fashioned book, O'Reilly's sed & awk, 2nd Edition is a wonderful choice.
* Be sure to check out the comp.lang.awk FAQ. It also contains lots of additional awk links.
* Patrick Hartigan's awk tutorial is packed with handy awk scripts.
While awk's print statement does do the job most of the time, sometimes more is needed. For those times, awk offers two good old friends called printf() and sprintf(). Yes, these functions, like so many other awk parts, are identical to their C counterparts. printf() will print a formatted string to stdout, while sprintf() returns a formatted string that can be assigned to a variable. If you're not familiar with printf() and sprintf(), an introductory C text will quickly get you up to speed on these two essential printing functions. You can view the printf() man page by typing "man 3 printf" on your Linux system.
Here's some sample awk sprintf() and printf() code. As you can see, everything looks almost identical to C.
x=1
b="foo"
printf("%s got a %d on the last test\n","Jim",83)
myout=("%s-%d",b,x)
print myout
This code will print:
Jim got a 83 on the last test
foo-1
String functions
Awk has a plethora of string functions, and that's a good thing. In awk, you really need string functions, since you can't treat a string as an array of characters as you can in other languages like C, C++, and Python. For example, if you execute the following code:
mystring="How are you doing today?"
print mystring[3]
You'll receive an error that looks something like this:
awk: string.gawk:59: fatal: attempt to use scalar as array
Oh, well. While not as convenient as Python's sequence types, awk's string functions get the job done. Let's take a look at them.
First, we have the basic length() function, which returns the length of a string. Here's how to use it:
print length(mystring)
This code will print the value:
24
OK, let's keep going. The next string function is called index, and will return the position of the occurrence of a substring in another string, or it will return 0 if the string isn't found. Using mystring, we can call it this way:
print index(mystring,"you")
Awk prints:
9
We move on to two more easy functions, tolower() and toupper(). As you might guess, these functions will return the string with all characters converted to lowercase or uppercase respectively. Notice that tolower() and toupper() return the new string, and don't modify the original. This code:
print tolower(mystring)
print toupper(mystring)
print mystring
....will produce this output:
how are you doing today?
HOW ARE YOU DOING TODAY?
How are you doing today?
So far so good, but how exactly do we select a substring or even a single character from a string? That's where substr() comes in. Here's how to call substr():
mysub=substr(mystring,startpos,maxlen)
mystring should be either a string variable or a literal string from which you'd like to extract a substring. startpos should be set to the starting character position, and maxlen should contain the maximum length of the string you'd like to extract. Notice that I said maximum length; if length(mystring) is shorter than startpos+maxlen, your result will be truncated. substr() won't modify the original string, but returns the substring instead. Here's an example:
print substr(mystring,9,3)
Awk will print:
you
If you regularly program in a language that uses array indices to access parts of a string (and who doesn't), make a mental note that substr() is your awk substitute. You'll need to use it to extract single characters and substrings; because awk is a string-based language, you'll be using it often.
Now, we move on to some meatier functions, the first of which is called match(). match() is a lot like index(), except instead of searching for a substring like index() does, it searches for a regular expression. The match() function will return the starting position of the match, or zero if no match is found. In addition, match() will set two variables called RSTART and RLENGTH. RSTART contains the return value (the location of the first match), and RLENGTH specifies its span in characters (or -1 if no match was found). Using RSTART, RLENGTH, substr(), and a small loop, you can easily iterate through every match in your string. Here's an example match() call:
print match(mystring,/you/), RSTART, RLENGTH
Awk will print:
9 9 3
String substitution
Now, we're going to look at a couple of string substitution functions, sub() and gsub(). These guys differ slightly from the functions we've looked at so far in that they actually modify the original string. Here's a template that shows how to call sub():
sub(regexp,replstring,mystring)
When you call sub(), it'll find the first sequence of characters in mystring that matches regexp, and it'll replace that sequence with replstring. sub() and gsub() have identical arguments; the only way they differ is that sub() will replace the first regexp match (if any), and gsub() will perform a global replace, swapping out all matches in the string. Here's an example sub() and gsub() call:
sub(/o/,"O",mystring)
print mystring
mystring="How are you doing today?"
gsub(/o/,"O",mystring)
print mystring
We had to reset mystring to its original value because the first sub() call modified mystring directly. When executed, this code will cause awk to output:
HOw are you doing today?
HOw are yOu dOing tOday?
Of course, more complex regular expressions are possible. I'll leave it up to you to test out some complicated regexps.
We wrap up our string function coverage by introducing you to a function called split(). split()'s job is to "chop up" a string and place the various parts into an integer-indexed array. Here's an example split() call:
numelements=split("Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec",mymonths,",")
When calling split(), the first argument contains the literal string or string variable to be chopped. In the second argument, you should specify the name of the array that split() will stuff the chopped parts into. In the third element, specify the separator that will be used to chop the strings up. When split() returns, it'll return the number of string elements that were split. split() assigns each one to an array index starting with one, so the following code:
print mymonths[1],mymonths[numelements]
....will print:
Jan Dec
Special string forms
A quick note -- when calling length(), sub(), or gsub(), you can drop the last argument and awk will apply the function call to $0 (the entire current line). To print the length of each line in a file, use this awk script:
{
print length()
}
Financial fun
A few weeks ago, I decided to write my own checkbook balancing program in awk. I decided that I'd like to have a simple tab-delimited text file into which I can enter my most recent deposits and withdrawals. The idea was to hand this data to an awk script that would automatically add up all the amounts and tell me my balance. Here's how I decided to record all my transactions into my "ASCII checkbook":
23 Aug 2000 food - - Y Jimmy's Buffet 30.25
Every field in this file is separated by one or more tabs. After the date (field 1, $1), there are two fields called "expense category" and "income category". When I'm entering an expense like on the above line, I put a four-letter nickname in the exp field, and a "-" (blank entry) in the inc field. This signifies that this particular item is a "food expense" :) Here's what a deposit looks like:
23 Aug 2000 - inco - Y Boss Man 2001.00
In this case, I put a "-" (blank) in the exp category, and put "inco" in the inc category. "inco" is my nickname for generic (paycheck-style) income. Using category nicknames allows me to generate a breakdown of my income and expenditures by category. As far as the rest of the records, all the other fields are fairly self-explanatory. The cleared? field ("Y" or "N") records whether the transaction has been posted to my account; beyond that, there's a transaction description, and a positive dollar amount.
The algorithm used to compute the current balance isn't too hard. Awk simply needs to read in each line, one by one. If an expense category is listed but there is no income category (it's "-"), then this item is a debit. If an income category is listed, but no expense category ("-") is there, then the dollar amount is a credit. And, if there is both an expense and income category listed, then this amount is a "category transfer"; that is, the dollar amount will be subtracted from the expense category and added to the income category. Again, all these categories are virtual, but are very useful for tracking income and expenditures, as well as for budgeting.
The code
Time to look at the code. We'll start off with the first line, the BEGIN block and a function definition:
balance, part 1
#!/usr/bin/env awk -f
BEGIN {
FS="\t+"
months="Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec"
}
function monthdigit(mymonth) {
return (index(months,mymonth)+3)/4
}
Adding the first "#!..." line to any awk script will allow it to be directly executed from the shell, provided that you "chmod +x myscript" first. The remaining lines define our BEGIN block, which gets executed before awk starts processing our checkbook file. We set FS (the field separator) to "\t+", which tells awk that the fields will be separated by one or more tabs. In addition, we define a string called months that's used by our monthdigit() function, which appears next.
The last three lines show you how to define your own awk function. The format is simple -- type "function", then the function name, and then the parameters separated by commas, inside parentheses. After this, a "{ }" code block contains the code that you'd like this function to execute. All functions can access global variables (like our months variable). In addition, awk provides a "return" statement that allows the function to return a value, and operates similarly to the "return" found in C, Python, and other languages. This particular function converts a month name in a 3-letter string format into its numeric equivalent. For example, this:
print monthdigit("Mar")
....will print this:
3
Now, let's move on to some more functions.
Financial functions
Here are three more functions that perform the bookkeeping for us. Our main code block, which we'll see soon, will process each line of the checkbook file sequentially, calling one of these functions so that the appropriate transactions are recorded in an awk array. There are three basic kinds of transactions, credit (doincome), debit (doexpense) and transfer (dotransfer). You'll notice that all three functions accept one argument, called mybalance. mybalance is a placeholder for a two-dimensional array, which we'll pass in as an argument. Up until now, we haven't dealt with two-dimensional arrays; however, as you can see below, the syntax is quite simple. Just separate each dimension with a comma, and you're in business.
We'll record information into "mybalance" as follows. The first dimension of the array ranges from 0 to 12, and specifies the month, or zero for the entire year. Our second dimension is a four-letter category, like "food" or "inco"; this is the actual category we're dealing with. So, to find the entire year's balance for the food category, you'd look in mybalance[0,"food"]. To find June's income, you'd look in mybalance[6,"inco"].
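Here's a minimal sketch of that syntax in action, with made-up dollar amounts:
mybalance[0,"food"] -= 7.50        # subtract from the whole year's food balance
mybalance[6,"inco"] += 2001.00     # add to June's income
print mybalance[6,"inco"]          # prints 2001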
balance, part 2
function doincome(mybalance) {
mybalance[curmonth,$3] += amount
mybalance[0,$3] += amount
}
function doexpense(mybalance) {
mybalance[curmonth,$2] -= amount
mybalance[0,$2] -= amount
}
function dotransfer(mybalance) {
mybalance[0,$2] -= amount
mybalance[curmonth,$2] -= amount
mybalance[0,$3] += amount
mybalance[curmonth,$3] += amount
}
When doincome() or one of the other functions is called, we record the transaction in two places -- mybalance[0,category] and mybalance[curmonth,category], the entire year's category balance and the current month's category balance, respectively. This allows us to easily generate either an annual or monthly breakdown of income/expenditures later on.
If you look at these functions, you'll notice that the array referenced by mybalance is passed in by reference. We also refer to several global variables: curmonth, which holds the numeric value of the month of the current record, $2 (the expense category), $3 (the income category), and amount ($7, the dollar amount). When doincome() and friends are called, all these variables have already been set correctly for the current record (line) being processed.
The main block
Here's the main code block that contains the code that parses each line of input data. Remember, because we have set FS correctly, we can refer to the first field as $1, the second field as $2, etc. When doincome() and friends are called, the functions can access the current values of curmonth, $2, $3 and amount from inside the function. Take a look at the code and meet me on the other side for an explanation.
balance, part 3
{
curmonth=monthdigit(substr($1,4,3))
amount=$7
#record all the categories encountered
if ( $2 != "-" )
globcat[$2]="yes"
if ( $3 != "-" )
globcat[$3]="yes"
#tally up the transaction properly
if ( $2 == "-" ) {
if ( $3 == "-" ) {
print "Error: inc and exp fields are both blank!"
exit 1
} else {
#this is income
doincome(balance)
if ( $5 == "Y" )
doincome(balance2)
}
} else if ( $3 == "-" ) {
#this is an expense
doexpense(balance)
if ( $5 == "Y" )
doexpense(balance2)
} else {
#this is a transfer
dotransfer(balance)
if ( $5 == "Y" )
dotransfer(balance2)
}
}
In the main block, the first two lines set curmonth to an integer between 1 and 12, and set amount to field 7 (to make the code easier to understand). Then, we have four interesting lines, where we write values into an array called globcat. globcat, or the global categories array, is used to record all those categories encountered in the file -- "inco", "misc", "food", "util", etc. For example, if $2 == "inco", we set globcat["inco"] to "yes". Later on, we can iterate through our list of categories with a simple "for (x in globcat)" loop.
On the next twenty or so lines, we analyze fields $2 and $3, and record the transaction appropriately. If $2=="-" and $3!="-", we have some income, so we call doincome(). If the situation is reversed, we call doexpense(); and if both $2 and $3 contain categories, we call dotransfer(). Each time, we pass the "balance" array to these functions so that the appropriate data is recorded there.
You'll also notice several lines that say "if ( $5 == "Y" ), record that same transaction in balance2". What exactly are we doing here? You'll recall that $5 contains either a "Y" or a "N", and records whether the transaction has been posted to the account. Because we record the transaction to balance2 only if the transaction has been posted, balance2 will contain the actual account balance, while "balance" will contain all transactions, whether they have been posted or not. You can use balance2 to verify your data entry (since it should match with your current account balance according to your bank), and use "balance" to make sure that you don't overdraw your account (since it will take into account any checks you have written that have not yet been cashed).
Generating the report
Once the main block has processed every input record, we have a fairly comprehensive record of debits and credits broken down by category and by month. Now, all we need to do is define an END block that will generate a report, in this case a modest one:
END {
bal=0
bal2=0
for (x in globcat) {
bal=bal+balance[0,x]
bal2=bal2+balance2[0,x]
}
printf("Your available funds: %10.2f\n", bal)
printf("Your account balance: %10.2f\n", bal2)
}
This report prints out a summary that looks something like this:
Your available funds: 1174.22
Your account balance: 2399.33
In our END block, we used the "for (x in globcat)" construct to iterate through every category, tallying up a master balance based on all the transactions recorded. We actually tally up two balances, one for available funds, and another for the account balance. To execute the program and process your own financial goodies that you've entered into a file called "mycheckbook.txt", put all the above code into a text file called "balance", "chmod +x balance", and then type "./balance mycheckbook.txt". The balance script will then add up all your transactions and print out a two-line balance summary for you.
Upgrades
I use a more advanced version of this program to manage my personal and business finances. My version (which I couldn't include here due to space limitations) prints out a monthly breakdown of income and expenses, including annual totals, net income and a bunch of other stuff. Even better, it outputs the data in HTML format, so that I can view it in a Web browser :) If you find this program useful, I encourage you to add these features to this script. You won't need to configure it to record any additional information; all the information you need is already in balance and balance2. Just upgrade the END block, and you're in business!
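As a starting point, here's a sketch of one possible upgrade -- an END block that prints an annual total for every category before the two summary lines. The layout here is my own invention, not the fancier HTML version mentioned above:
END {
bal=0
bal2=0
printf("%-8s %12s\n", "category", "year total")
for (x in globcat) {
printf("%-8s %12.2f\n", x, balance[0,x])
bal=bal+balance[0,x]
bal2=bal2+balance2[0,x]
}
printf("\nYour available funds: %10.2f\n", bal)
printf("Your account balance: %10.2f\n", bal2)
}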
I hope you've enjoyed this series. For more information on awk, check out the resources listed below.
Resources
* Read earlier installments in the awk series: Awk by example, Part 1 and Part 2 on this blog.
* If you'd like a good old-fashioned book, O'Reilly's sed & awk, 2nd Edition is a wonderful choice.
* Be sure to check out the comp.lang.awk FAQ. It also contains lots of additional awk links.
* Patrick Hartigan's awk tutorial is packed with handy awk scripts.
Learning Awk by example: Part 2
Multi-line records
Awk is an excellent tool for reading in and processing structured data, such as the system's /etc/passwd file. /etc/passwd is the UNIX user database, and is a colon-delimited text file, containing a lot of important information, including all existing user accounts and user IDs, among other things. In my previous article, I showed you how awk could easily parse this file. All we had to do was to set the FS (field separator) variable to ":".
By setting the FS variable correctly, awk can be configured to parse almost any kind of structured data, as long as there is one record per line. However, just setting FS won't do us any good if we want to parse a record that spans multiple lines. In these situations, we also need to modify the RS (record separator) variable. The RS variable tells awk when the current record ends and a new record begins.
As an example, let's look at how we'd handle the task of processing an address list of Federal Witness Protection Program participants:
Jimmy the Weasel
100 Pleasant Drive
Edmonton, AB T6X0R5
Big Tony
200 Incognito Ave.
Toronto, ON M3R1X6
Ideally, we'd like awk to recognize each 3-line address as an individual record, rather than as three separate records. It would make our code a lot simpler if awk would recognize the first line of the address as the first field ($1), the street address as the second field ($2), and the city, province, and postal code as field $3. The following code will do just what we want:
BEGIN {
FS="\n"
RS=""
}
Above, setting FS to "\n" tells awk that each field appears on its own line. By setting RS to "", we also tell awk that each address record is separated by a blank line. Once awk knows how the input is formatted, it can do all the parsing work for us, and the rest of the script is simple. Let's look at a complete script that will parse this address list and print out each address record on a single line, separating each field with a comma.
BEGIN {
FS="\n"
RS=""
}
{
print $1 ", " $2 ", " $3
}
If this script is saved as address.awk, and the address data is stored in a file called address.txt, you can execute this script by typing "awk -f address.awk address.txt". This code produces the following output:
Jimmy the Weasel, 100 Pleasant Drive, Edmonton, AB T6X0R5
Big Tony, 200 Incognito Ave., Toronto, ON M3R1X6
OFS and ORS
In address.awk's print statement, you can see that awk concatenates (joins) strings that are placed next to each other on a line. We used this feature to insert a comma and a space (", ") between the three address fields that appeared on the line. While this method works, it's a bit ugly looking. Rather than inserting literal ", " strings between our fields, we can have awk do it for us by setting a special awk variable called OFS. Take a look at this code snippet.
print "Hello", "there", "Jim!"
The commas on this line are not part of the actual literal strings. Instead, they tell awk that "Hello", "there", and "Jim!" are separate fields, and that the OFS variable should be printed between each string. By default, awk produces the following output:
Hello there Jim!
This shows us that by default, OFS is set to " ", a single space. However, we can easily redefine OFS so that awk will insert our favorite field separator. Here's a revised version of our original address.awk program that uses OFS to output those intermediate ", " strings:
BEGIN {
FS="\n"
RS=""
OFS=", "
}
{
print $1, $2, $3
}
Awk also has a special variable called ORS, the "output record separator". By setting ORS, which defaults to a newline ("\n"), we can control the character that's automatically printed at the end of a print statement. The default ORS value causes awk to output each new print statement on a new line. If we wanted to make the output double-spaced, we would set ORS to "\n\n". Or, if we wanted records to be separated by a single space (and no newline), we would set ORS to " ".
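For example, here's a one-liner sketch that double-spaces whatever you feed it (the file name mytext.txt is just a placeholder):
$ awk 'BEGIN { ORS="\n\n" } { print }' mytext.txt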
Multi-line to tabbed
Let's say that we wrote a script that converted our address list to a single-line-per-record, tab-delimited format for import into a spreadsheet. After using a slightly modified version of address.awk, it would become clear that our program only works for three-line addresses. If awk encountered the following address, the fourth line would be thrown away and not printed:
Cousin Vinnie
Vinnie's Auto Shop
300 City Alley
Vancouver, BC D7E1K6
To handle situations like this, it would be good if our code took the number of fields per record into account, printing each one in order. Right now, the code only prints the first three fields of the address. Here's some code that does what we want:
BEGIN {
FS="\n"
RS=""
ORS=""
}
{
x=1
while ( x < NF ) {
print $x "\t"
x++
}
print $NF "\n"
}
First, we set the field separator FS to "\n" and the record separator RS to "" so that awk parses the multi-line addresses correctly, as before. Then, we set the output record separator ORS to "", which will cause the print statement to not output a newline at the end of each call. This means that if we want any text to start on a new line, we need to explicitly write print "\n".
In the main code block, we create a variable called x that holds the number of the current field that we're processing. Initially, it's set to 1. Then, we use a while loop (an awk looping construct identical to that found in the C language) to iterate through all but the last field, printing each field followed by a tab character. Finally, we print the last field and a literal newline; again, since ORS is set to "", print won't output newlines for us. Program output looks like this, which is exactly what we wanted:
Our intended output. Not pretty, but tab delimited for easy import into a spreadsheet
Jimmy the Weasel 100 Pleasant Drive Edmonton, AB T6X0R5
Big Tony 200 Incognito Ave. Toronto, ON M3R1X6
Cousin Vinnie Vinnie's Auto Shop 300 City Alley Vancouver, BC D7E1K6
Looping constructs
We've already seen awk's while loop construct, which is identical to its C counterpart. Awk also has a "do...while" loop that evaluates the condition at the end of the code block, rather than at the beginning like a standard while loop. It's similar to "repeat...until" loops that can be found in other languages. Here's an example:
do...while example
{
count=1
do {
print "I get printed at least once no matter what"
} while ( count != 1 )
}
Because the condition is evaluated after the code block, a "do...while" loop, unlike a normal while loop, will always execute at least once. On the other hand, a normal while loop will never execute if its condition is false when the loop is first encountered.
for loops
Awk allows you to create for loops, which, like while loops, are identical to their C counterparts:
for ( initial assignment; comparison; increment ) {
code block
}
Here's a quick example:
for ( x = 1; x <= 4; x++ ) {
print "iteration",x
}
This snippet will print:
iteration 1
iteration 2
iteration 3
iteration 4
Arrays
You'll be pleased to know that awk has arrays. However, under awk, it's customary to start array indices at 1, rather than 0:
myarray[1]="jim"
myarray[2]=456
When awk encounters the first assignment, myarray is created and the element myarray[1] is set to "jim". After the second assignment is evaluated, the array has two elements.
Iterating over arrays
Once an array has been defined, awk has a handy mechanism to iterate over its elements, as follows:
for ( x in myarray ) {
print myarray[x]
}
This code will print out every element in the array myarray. When you use this special "in" form of a for loop, awk will assign every existing index of myarray to x (the loop control variable) in turn, executing the loop's code block once after each assignment. While this is a very handy awk feature, it does have one drawback -- when awk cycles through the array indices, it doesn't follow any particular order. That means that there's no way for us to know whether the output of the above code will be:
jim
456
or
456
jim
To loosely paraphrase Forrest Gump, iterating over the contents of an array is like a box of chocolates -- you never know what you're going to get. This has something to do with the "stringiness" of awk arrays, which we'll now take a look at.
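That said, if your indices happen to be consecutive integers starting at 1, a plain counting loop sidesteps the ordering problem entirely. A minimal sketch, assuming a variable called total that you've set to the number of elements:
for ( x = 1; x <= total; x++ ) {
print myarray[x]    # total is assumed to hold the element count
}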
Array index stringiness
In the previous article, I showed you that awk actually stores numeric values in a string format. While awk performs the necessary conversions to make this work, it does open the door for some odd-looking code:
a="1"
b="2"
c=a+b+3
After this code executes, c is equal to 6. Since awk is "stringy", adding strings "1" and "2" is functionally no different than adding the numbers 1 and 2. In both cases, awk will successfully perform the math. Awk's "stringy" nature is pretty intriguing -- you may wonder what happens if we use string indexes for arrays. For instance, take the following code:
myarr["1"]="Mr. Whipple"
print myarr["1"]
As you might expect, this code will print "Mr. Whipple". But how about if we drop the quotes around the second "1" index?
myarr["1"]="Mr. Whipple"
print myarr[1]
Guessing the result of this code snippet is a bit more difficult. Does awk consider myarr["1"] and myarr[1] to be two separate elements of the array, or do they refer to the same element? The answer is that they refer to the same element, and awk will print "Mr. Whipple", just as in the first code snippet. Although it may seem strange, behind the scenes awk has been using string indexes for its arrays all this time!
After learning this strange fact, some of us may be tempted to execute some wacky code that looks like this:
myarr["name"]="Mr. Whipple"
print myarr["name"]
Not only does this code not raise an error, but it's functionally identical to our previous examples, and will print "Mr. Whipple" just as before! As you can see, awk doesn't limit us to using pure integer indexes; we can use string indexes if we want to, without creating any problems. Whenever we use non-integer array indices like myarr["name"], we're using associative arrays. Technically, awk isn't doing anything different behind the scenes than when we use a string index (since even if you use an "integer" index, awk still treats it as a string). However, you should still call 'em associative arrays -- it sounds cool and will impress your boss. The stringy index thing will be our little secret. ;)
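To see associative arrays earn their keep, here's a little word-frequency sketch of my own; it assumes whitespace-separated input and counts how many times each word appears:
{
for ( i = 1; i <= NF; i++ )
wordcount[$i]++    # wordcount is an associative array keyed by word
}
END {
for ( word in wordcount )
print word, wordcount[word]
}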
Array tools
When it comes to arrays, awk gives us a lot of flexibility. We can use string indexes, and we aren't required to have a continuous numeric sequence of indices (for example, we can define myarr[1] and myarr[1000], but leave all other elements undefined). While all this can be very helpful, in some circumstances it can create confusion. Fortunately, awk offers a couple of handy features to help make arrays more manageable.
First, we can delete array elements. If you want to delete element 1 of your array fooarray, type:
delete fooarray[1]
And, if you want to see if a particular array element exists, you can use the special "in" boolean operator as follows:
if ( 1 in fooarray ) {
print "Ayep! It's there."
} else {
print "Nope! Can't find it."
}
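One caution that makes "in" more than a nicety: merely mentioning an element in a test silently creates it. A quick sketch of the gotcha:
if ( fooarray[1] != "" )    # careful -- this test *creates* fooarray[1] as a side effect
print "hmmm..."
if ( 1 in fooarray )        # this test leaves the array untouched
print "Ayep! It's there."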
Next time
We've covered a lot of ground in this article. Next time, I'll round out your awk knowledge by showing you how to use awk's math and string functions and how to create your own functions. I'll also walk you through the creation of a checkbook balancing program. Until then, I encourage you to write some of your own awk programs, and to check out the following resources.
Resources
* Read Awk by example, Part 1 on this blog.
* If you'd like a good old-fashioned book, O'Reilly's sed & awk, 2nd Edition is a wonderful choice.
* Be sure to check out the comp.lang.awk FAQ. It also contains lots of additional awk links.
* Patrick Hartigan's awk tutorial is packed with handy awk scripts.
Wednesday, February 16, 2011
Learning Awk by example: Part 1
In defense of awk
Sure, awk doesn't have a great name. But it is a great language. Awk is geared toward text processing and report generation, yet offers many well-designed features that allow for serious programming. And, unlike some languages, awk's syntax is familiar, and borrows some of the best parts of languages like C, python, and bash (although, technically, awk was created before both python and bash). Awk is one of those languages that, once learned, will become a key part of your strategic coding arsenal.
The first awk
Let's go ahead and start playing around with awk to see how it works. At the command line, enter the following command:
$ awk '{ print }' /etc/passwd
You should see the contents of your /etc/passwd file appear before your eyes. Now, for an explanation of what awk did. When we called awk, we specified /etc/passwd as our input file. When we executed awk, it evaluated the print command for each line in /etc/passwd, in order. All output is sent to stdout, and we get a result identical to catting /etc/passwd. Now, for an explanation of the { print } code block. In awk, curly braces are used to group blocks of code together, similar to C. Inside our block of code, we have a single print command. In awk, when a print command appears by itself, the full contents of the current line are printed.
Here is another awk example that does exactly the same thing:
$ awk '{ print $0 }' /etc/passwd
In awk, the $0 variable represents the entire current line, so print and print $0 do exactly the same thing. If you'd like, you can create an awk program that will output data totally unrelated to the input data. Here's an example:
$ awk '{ print "" }' /etc/passwd
Whenever you pass the "" string to the print command, it prints a blank line. If you test this script, you'll find that awk outputs one blank line for every line in your /etc/passwd file. Again, this is because awk executes your script for every line in the input file. Here's another example:
$ awk '{ print "hiya" }' /etc/passwd
Running this script will fill your screen with hiya's. :)
Multiple fields
Awk is really good at handling text that has been broken into multiple logical fields, and allows you to effortlessly reference each individual field from inside your awk script. The following script will print out a list of all user accounts on your system:
$ awk -F":" '{ print $1 }' /etc/passwd
Above, when we called awk, we used the -F option to specify ":" as the field separator. When awk processes the print $1 command, it will print out the first field that appears on each line in the input file. Here's another example:
$ awk -F":" '{ print $1 $3 }' /etc/passwd
Here's an excerpt of the output from this script:
halt7
operator11
root0
shutdown6
sync5
bin1
....etc.
As you can see, awk prints out the first and third fields of the /etc/passwd file, which happen to be the username and uid fields respectively. Now, while the script did work, it's not perfect -- there aren't any spaces between the two output fields! If you're used to programming in bash or python, you may have expected the print $1 $3 command to insert a space between the two fields. However, when two strings appear next to each other in an awk program, awk concatenates them without adding an intermediate space. The following command will insert a space between both fields:
$ awk -F":" '{ print $1 " " $3 }' /etc/passwd
When you call print this way, it'll concatenate $1, " ", and $3, creating readable output. Of course, we can also insert some text labels if needed:
$ awk -F":" '{ print "username: " $1 "\t\tuid:" $3 }' /etc/passwd
This will cause the output to be:
username: halt uid:7
username: operator uid:11
username: root uid:0
username: shutdown uid:6
username: sync uid:5
username: bin uid:1
....etc.
External scripts
Passing your scripts to awk as a command line argument can be very handy for small one-liners, but when it comes to complex, multi-line programs, you'll definitely want to compose your script in an external file. Awk can then be told to source this script file by passing it the -f option:
$ awk -f myscript.awk myfile.in
Putting your scripts in their own text files also allows you to take advantage of additional awk features. For example, this multi-line script does the same thing as one of our earlier one-liners, printing out the first field of each line in /etc/passwd:
BEGIN {
FS=":"
}
{ print $1 }
The difference between these two methods has to do with how we set the field separator. In this script, the field separator is specified within the code itself (by setting the FS variable), while our previous example set FS by passing the -F":" option to awk on the command line. It's generally best to set the field separator inside the script itself, simply because it means you have one less command line argument to remember to type. We'll cover the FS variable in more detail later in this article.
The BEGIN and END blocks
Normally, awk executes each block of your script's code once for each input line. However, there are many programming situations where you may need to execute initialization code before awk begins processing the text from the input file. For such situations, awk allows you to define a BEGIN block. We used a BEGIN block in the previous example. Because the BEGIN block is evaluated before awk starts processing the input file, it's an excellent place to initialize the FS (field separator) variable, print a heading, or initialize other global variables that you'll reference later in the program.
Awk also provides another special block, called the END block. Awk executes this block after all lines in the input file have been processed. Typically, the END block is used to perform final calculations or print summaries that should appear at the end of the output stream.
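Putting the two together, here's a minimal sketch of the classic BEGIN/END skeleton; it counts input lines using the built-in NR variable (covered later in this article). Assuming you saved it as countlines.awk, you'd run it with "awk -f countlines.awk /etc/passwd":
BEGIN { print "Counting lines..." }
END { print "Processed " NR " lines." }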
Regular expressions and blocks
Awk allows the use of regular expressions to selectively execute an individual block of code, depending on whether or not the regular expression matches the current line. Here's an example script that outputs only those lines that contain the character sequence foo:
/foo/ { print }
Of course, you can use more complicated regular expressions. Here's a script that will print only lines that contain a floating point number:
/[0-9]+\.[0-9]*/ { print }
Expressions and blocks
There are many other ways to selectively execute a block of code. We can place any kind of boolean expression before a code block to control when a particular block is executed. Awk will execute a code block only if the preceding boolean expression evaluates to true. The following example script will output the third field of all lines that have a first field equal to fred. If the first field of the current line is not equal to fred, awk will continue processing the file and will not execute the print statement for the current line:
$1 == "fred" { print $3 }
Awk offers a full selection of comparison operators, including the usual "==", "<", ">", "<=", ">=", and "!=". In addition, awk provides the "~" and "!~" operators, which mean "matches" and "does not match". They're used by specifying a variable on the left side of the operator, and a regular expression on the right side. Here's an example that will print only the third field on the line if the fifth field on the same line contains the character sequence root:
$5 ~ /root/ { print $3 }
Conditional statements
Awk also offers very nice C-like if statements. If you'd like, you could rewrite the previous script using an if statement:
{
if ( $5 ~ /root/ ) {
print $3
}
}
Both scripts function identically. In the first example, the boolean expression is placed outside the block, while in the second example, the block is executed for every input line, and we selectively perform the print command by using an if statement. Both methods are available, and you can choose the one that best meshes with the other parts of your script.
Here's a more complicated example of an awk if statement. As you can see, even with complex, nested conditionals, if statements look identical to their C counterparts:
{
if ( $1 == "foo" ) {
if ( $2 == "foo" ) {
print "uno"
} else {
print "one"
}
} else if ($1 == "bar" ) {
print "two"
} else {
print "three"
}
}
Using if statements, we can also transform this code:
! /matchme/ { print $1 $3 $4 }
to this:
{
if ( $0 !~ /matchme/ ) {
print $1 $3 $4
}
}
Both scripts will output only those lines that don't contain a matchme character sequence. Again, you can choose the method that works best for your code. They both do the same thing.
Awk also allows the use of boolean operators "||" (for "logical or") and "&&" (for "logical and") to allow the creation of more complex boolean expressions:
( $1 == "foo" ) && ( $2 == "bar" ) { print }
This example will print only those lines where field one equals foo and field two equals bar.
Numeric variables!
So far, we've either printed strings, the entire line, or specific fields. However, awk also allows us to perform both integer and floating point math. Using mathematical expressions, it's very easy to write a script that counts the number of blank lines in a file. Here's one that does just that:
BEGIN { x=0 }
/^$/ { x=x+1 }
END { print "I found " x " blank lines. :)" }
In the BEGIN block, we initialize our integer variable x to zero. Then, each time awk encounters a blank line, awk will execute the x=x+1 statement, incrementing x. After all the lines have been processed, the END block will execute, and awk will print out a final summary, specifying the number of blank lines it found.
Stringy variables
One of the neat things about awk variables is that they are "simple and stringy." I consider awk variables "stringy" because all awk variables are stored internally as strings. At the same time, awk variables are "simple" because you can perform mathematical operations on a variable, and as long as it contains a valid numeric string, awk automatically takes care of the string-to-number conversion steps. To see what I mean, check out this example:
x="1.01"
# We just set x to contain the *string* "1.01"
x=x+1
# We just added one to a *string*
print x
# Incidentally, these are comments :)
Awk will output:
2.01
Interesting! Although we assigned the string value 1.01 to the variable x, we were still able to add one to it. We wouldn't be able to do this in bash or python. First of all, bash doesn't support floating point arithmetic. And, while bash has "stringy" variables, they aren't "simple"; to perform any mathematical operations, bash requires that we enclose our math in an ugly $(( )) construct. If we were using python, we would have to explicitly convert our 1.01 string to a floating point value before performing any arithmetic on it. While this isn't difficult, it's still an additional step. With awk, it's all automatic, and that makes our code nice and clean. If we wanted to square and add one to the first field in each input line, we would use this script:
{ print ($1^2)+1 }
If you do a little experimenting, you'll find that if a particular variable doesn't contain a valid number, awk will treat that variable as a numerical zero when it evaluates your mathematical expression.
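Here's a quick sketch demonstrating that behavior:
BEGIN {
x = "banana"
print x + 2    # prints 2, since "banana" evaluates to zero in a math context
}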
Lots of operators
Another nice thing about awk is its full complement of mathematical operators. In addition to standard addition, subtraction, multiplication, and division, awk allows us to use the previously demonstrated exponent operator "^", the modulo (remainder) operator "%", and a bunch of other handy assignment operators borrowed from C.
These include pre- and post-increment/decrement ( i++, --foo ), add/sub/mult/div assign operators ( a+=3, b*=2, c/=2.2, d-=6.2 ). But that's not all -- we also get handy modulo/exponent assign ops as well ( a^=2, b%=4 ).
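Here's a short sketch that exercises a few of these operators, with the expected values noted in the comments:
BEGIN {
a = 10
a += 5       # a is now 15
a %= 4       # a is now 3
a ^= 2       # a is now 9
print a      # prints 9
i = 5
print i++    # prints 5; post-increment returns the old value
print ++i    # prints 7; pre-increment returns the new value
}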
Field separators
Awk has its own complement of special variables. Some of them allow you to fine-tune how awk functions, while others can be read to glean valuable information about the input. We've already touched on one of these special variables, FS. As mentioned earlier, this variable allows you to set the character sequence that awk expects to find between fields. When we were using /etc/passwd as input, FS was set to ":". While this did the trick, FS allows us even more flexibility.
The FS value is not limited to a single character; it can also be set to a regular expression, specifying a character pattern of any length. If you're processing fields separated by one or more tabs, you'll want to set FS like so:
FS="\t+"
Above, we use the special "+" regular expression character, which means "one or more of the previous character".
If your fields are separated by whitespace (one or more spaces or tabs), you may be tempted to set FS to the following regular expression:
FS="[[:space:]+]"
While this assignment will do the trick, it's not necessary. Why? Because by default, FS is set to a single space character, which awk interprets to mean "one or more spaces or tabs." In this particular example, the default FS setting was exactly what you wanted in the first place!
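Incidentally, if you ever need the opposite behavior -- splitting on exactly one space, so that runs of spaces produce empty fields -- you can wrap the space in a character class, which disables the special default handling. A minimal sketch:
BEGIN { FS="[ ]" }    # each individual space now delimits a field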
Complex regular expressions are no problem. Even if your fields are separated by the word "foo" followed by three digits, the following regular expression will allow your data to be parsed properly:
FS="foo[0-9][0-9][0-9]"
Number of fields
The next two variables we're going to cover aren't normally written to; instead, they're read to gain useful information about the input. The first is the NF variable, also called the "number of fields" variable. Awk will automatically set this variable to the number of fields in the current record. You can use the NF variable to display only certain input lines:
NF == 3 { print "this particular record has three fields: " $0 }
Of course, you can also use the NF variable in conditional statements, as follows:
{
if ( NF > 2 ) {
print $1 " " $2 ":" $3
}
}
Record number
The record number (NR) is another handy variable. It will always contain the number of the current record (awk counts the first record as record number 1). Up until now, we've been dealing with input files that contain one record per line. For these situations, NR will also tell you the current line number. NR can be used like the NF variable to print only certain lines of the input:
( NR < 10 ) || ( NR > 100 ) { print "We are on record number 1-9 or 101+" }
Another example:
{
#skip header
if ( NR > 10 ) {
print "ok, now for the real information!"
}
}
Awk provides additional variables that can be used for a variety of purposes, but this brings us to the end of our initial exploration. If you're eager to learn more, check out the resources listed below.
Resources
* If you'd like a good old-fashioned book, O'Reilly's sed & awk, 2nd Edition is a wonderful choice.
* Be sure to check out the comp.lang.awk FAQ. It also contains lots of additional awk links.
* Patrick Hartigan's awk tutorial is packed with handy awk scripts.
* Thompson's TAWK Compiler compiles awk scripts into fast binary executables. Versions are available for Windows and DOS.
* The GNU Awk User's Guide is available for online reference.