
Manipulating tabular output with command-line tools

Introduction

Surprisingly, given that this is a website about programming, this article is going to be about how to avoid programming. Specifically: how to use simple existing command-line tools to process tabular data.

Even for folks who are comfortable writing programs to process tabular data, being able to use command-line tools to slice, dice and generally manipulate data files is a useful skill to have. For very simple tasks it’s often quicker to write a command line to do the job rather than switching mental gears and starting to write a new program. Also, the command-line tools we’ll be discussing here are likely to be faster than Python for many jobs.

Tabular data

By tabular data I mean simple text files where each line stores a single record and consists of a number of different fields separated by a delimiter. The delimiter – i.e. the thing that separates the fields – can be anything, but the most common examples are tabs, spaces, or commas. In this last case, we sometimes refer to the file as being in Comma Separated Value or CSV format.

Let’s look at a few concrete examples. A common type of tabular file that we encounter in biology is BLAST output. Normally when we run BLAST, we get the output in a human-readable format which shows the alignments (or if we’re using a web BLAST server we get a nice webpage with graphics). This format is easy for humans to read but not so easy for computers to deal with; if we need to do some processing of our BLAST results then tabular output is often a better option. We can get tabular output from BLAST by giving it the -m 9 option if we’re using the old BLAST package, or giving it the -outfmt 7 option if we’re using the newer BLAST+ package. If we’re using a web-based BLAST server, we can download our results in tabular format using the download link at the top.

However we get hold of our tabular BLAST results they’re going to look something like this:

# blastp
# Iteration: 0
# Query: my_protein
# RID: BFFAX0Z3015
# Database: nr
# Fields: query id, subject ids, % identity, % positives, alignment length, mismatches, gap opens, q. start, q. end, s. start, s. end, evalue, bit score
# 100 hits found
my_protein gi|60654447|gb|AAX29914.1| 100.00 100.00 599 0 0 1 599 1 599 0.0 1243
my_protein gi|397526517|ref|XP_003833169.1| 99.50 100.00 599 3 0 1 599 1 599 0.0 1240
my_protein gi|18104967|ref|NP_000953.2|;gi|317373262|sp|P23219.2|PGH1_HUMAN 99.83 99.83 599 1 0 1 599 1 599 0.0 1240
my_protein gi|674214810|gb|AIK97765.1| 99.83 99.83 599 1 0 1 599 1 599 0.0 1239
my_protein gi|694936914|ref|XP_009455498.1| 99.33 100.00 599 4 0 1 599 1 599 0.0 1239
...

This is only the first few lines; in reality there will be many more, but they’ll all share the same format. The first few lines are comments which tell us various things – the name of the query, the number of hits, etc. By far the most important comment line is this one:

# Fields: query id, subject ids, % identity, % positives, alignment length, mismatches, gap opens, q. start, q. end, s. start, s. end, evalue, bit score

because it tells us the order of the fields in each of the result lines – in other words, it tells us how to interpret the other lines. From looking at the list of fields we can see that, for example, the second field in each line is the name of the subject sequence, and the fifth field is the length of the match.

We’ll be using BLAST tabular output for the examples in this tutorial, but it’s worth remembering that exactly the same tools can be used for any type of tabular data. For example, here’s the first few lines of a BED file which holds annotation for genomic coordinates:

browser position chr7:127471196-127495720
browser hide all
track name="ItemRGBDemo" description="Item RGB demonstration" visibility=2 itemRgb="On"
chr7    127471196  127472363  Pos1  0  +  127471196  127472363  255,0,0
chr7    127472363  127473530  Pos2  0  +  127472363  127473530  255,0,0
chr7    127473530  127474697  Pos3  0  +  127473530  127474697  255,0,0
chr7    127474697  127475864  Pos4  0  +  127474697  127475864  255,0,0
chr7    127475864  127477031  Neg1  0  -  127475864  127477031  0,0,255
chr7    127477031  127478198  Neg2  0  -  127477031  127478198  0,0,255
chr7    127478198  127479365  Neg3  0  -  127478198  127479365  0,0,255
chr7    127479365  127480532  Pos5  0  +  127479365  127480532  255,0,0
chr7    127480532  127481699  Neg4  0  -  127480532  127481699  0,0,255

The data being stored are different, but the layout is just the same as our BLAST example: a handful of comment lines at the top followed by a bunch of lines with tab-separated fields. One more example: here’s GFF format, also used for annotating sequence data:

X	Ensembl	Repeat	2419108	2419128	42	.	.	hid=trf; hstart=1; hend=21
X	Ensembl	Repeat	2419108	2419410	2502	-	.	hid=AluSx; hstart=1; hend=303
X	Ensembl	Repeat	2419108	2419128	0	.	.	hid=dust; hstart=2419108; hend=2419128
X	Ensembl	Pred.trans.	2416676	2418760	450.19	-	2	genscan=GENSCAN00000019335
X	Ensembl	Variation	2413425	2413425	.	+	.
X	Ensembl	Variation	2413805	2413805	.	+	.

Thinking about the data formats that you encounter in your own work will furnish further examples.

Before we dive in and start looking at the tools we can use to manipulate tabular data, a few points about your computing environment. In this tutorial we’ll be looking at command-line tools – cut, sort, head, tail, uniq, wc, sed and awk. If you’re running Linux, or any UNIX-flavoured operating system, including Mac OS, these tools come as standard and should already be installed. If you’re using Windows, you’ll need to either install them (Cygwin is probably the best way to do this) or find somebody who can give you an account on a Linux server.

All the example command lines we’ll be looking at use the BLAST example from above – if you want to play along at home you can download a copy here. For the command-line examples below, the lines starting with a dollar sign ($) are the ones that we type, and the first few lines of the output are shown below them. The output may look slightly different on your machine depending on your settings.

Getting rid of comments

The first thing we need to do in order to start processing the file is to remove the comment lines at the start. There are seven comment lines, so we need to start looking at the eighth line. The easiest way to do this is with tail. The tail command is normally used for extracting the last few lines of a file, but with the -n option we can tell it to extract all the lines starting from a given line. Here, we want to start at line number eight, so we can use the command

$ tail -n +8 blast_example.txt
my_protein	gi|60654447|gb|AAX29914.1|	100.00	100.00	599	0	0	1	599	1	599	0.0	 1243
my_protein	gi|397526517|ref|XP_003833169.1|	99.50	100.00	599	3	0	1	599	1	599	0.0	 1240
my_protein	gi|18104967|ref|NP_000953.2|;gi|317373262|sp|P23219.2|PGH1_HUMAN	99.83	99.83	599	1	0	1	599	1	599	0.0	 1240
my_protein	gi|674214810|gb|AIK97765.1|	99.83	99.83	599	1	0	1	599	1	599	0.0	 1239
my_protein	gi|694936914|ref|XP_009455498.1|	99.33	100.00	599	4	0	1	599	1	599	0.0	 1239
my_protein	gi|387018|gb|AAA36439.1|	99.50	99.50	599	3	0	1	599	1	599	0.0	 1236

Notice that we have to put a plus sign (+) before the 8. The output of this command will be all the lines of the file from the eighth line onwards, which we can either save by redirecting it into a new file:

$ tail -n +8 blast_example.txt >blast_example_nocomments.txt

or use a pipe to send the output directly to another program:

$ tail -n +8 blast_example.txt | some_other_program

For the rest of the examples we’ll use the file blast_example_nocomments.txt so that we don’t need to worry about the comment lines.
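
As an aside, if we don’t know in advance how many comment lines a file has, we can remove them by pattern rather than by position. Here’s a sketch using sed (a tool we’ll meet again at the end of this tutorial) to delete every line that starts with a hash – this assumes, as in our example, that comments are the only lines starting with #:

$ sed '/^#/d' blast_example.txt >blast_example_nocomments.txt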

Extracting fields

One of the simplest things we can do with tabular data is to just display the fields we’re interested in. The command for doing this is called cut, and we use the -f option to say which field we want to extract. For example, we can extract just the percent identity for each hit, which is stored in the third field:

$ cut -f 3 blast_example_nocomments.txt
100.00
99.50
99.83
99.83
99.33
99.50

To get multiple fields, we just list them separated by commas. Here’s how to get the hit name, number of mismatches, and bitscore:

$ cut -f 2,6,13 blast_example_nocomments.txt
gi|60654447|gb|AAX29914.1|	0	 1243
gi|397526517|ref|XP_003833169.1|	3	 1240
gi|18104967|ref|NP_000953.2|;gi|317373262|sp|P23219.2|PGH1_HUMAN	1	 1240
gi|674214810|gb|AIK97765.1|	1	 1239
gi|694936914|ref|XP_009455498.1|	4	 1239
gi|387018|gb|AAA36439.1|	3	 1236
gi|76885916|gb|ABA60099.1|	1	 1234
...

Notice that in the output from this command, the fields don’t line up nicely because some of the hit names are longer than others. By piping the results to the column -t command we can get the fields to line up:

$ cut -f 2,6,13 blast_example_nocomments.txt | column -t
gi|60654447|gb|AAX29914.1|                                        0  1243
gi|60654447|gb|AAX29914.1|                                        0  1243
gi|60654447|gb|AAX29914.1|                                        0  1243
gi|397526517|ref|XP_003833169.1|                                  3  1240
gi|18104967|ref|NP_000953.2|;gi|317373262|sp|P23219.2|PGH1_HUMAN  1  1240
gi|674214810|gb|AIK97765.1|                                       1  1239
gi|674214810|gb|AIK97765.1|                                       1  1239
gi|694936914|ref|XP_009455498.1|                                  4  1239
gi|387018|gb|AAA36439.1|                                          3  1236
gi|76885916|gb|ABA60099.1|                                        1  1234

Be careful when using column -t, as the fields are effectively padded to the length of the longest value, so if you have a very long value somewhere in the file you’ll end up with a lot of whitespace. Note that if you want to use cut on a file where the delimiter is something other than a tab, you’ll need to use the -d option. For example, on a CSV file use

$ cut -d , -f 1,2,3 myfile.txt

Sorting tabular data

The sort command, by default, sorts alphabetically on entire lines, so if we use it without any options it will effectively sort on the query name (if our BLAST report contains multiple queries) and then on the hit name (since that is the second field):

$ sort blast_example_nocomments.txt
my_protein	gi|109110319|ref|XP_001088270.1|;gi|544492470|ref|XP_005580931.1|;gi|685591740|ref|XP_009186406.1|	98.33	99.33	599	10	0	1	599	1	599	0.0	 1227
my_protein	gi|109110321|ref|XP_001088157.1|	98.32	99.33	596	10	0	4	599	37	632	0.0	 1222
my_protein	gi|130485856|ref|NP_001076150.1|;gi|75039091|sp|O97554.1|PGH1_RABIT;gi|4103591|gb|AAD01796.1|	91.49	95.83	576	49	0	24	599	31	606	0.0	 1121
my_protein	gi|14278642|pdb|1HT5|A;gi|14278643|pdb|1HT5|B;gi|14278644|pdb|1HT8|A;gi|14278645|pdb|1HT8|B	93.83	97.10	551	34	0	32	582	1	551	0.0	 1093
my_protein	gi|157835592|pdb|2OYE|P;gi|157835593|pdb|2OYU|P	92.57	95.85	579	43	0	21	599	22	600	0.0	 1130
...
my_protein	gi|76885914|gb|ABA60098.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1233
my_protein	gi|76885916|gb|ABA60099.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1234
my_protein	gi|999675|pdb|1PRH|A;gi|999676|pdb|1PRH|B	93.48	96.92	552	36	0	32	583	1	552	0.0	 1091

For our example file, where the hit names begin with GI numbers, this effectively sorts by GI number (though notice that the sorting is not numerical: 76885916 comes before 999675 because it starts with a lower digit, even though it’s a higher number).

We can get much more use out of sort if we specify the field we want to sort on. To do this we use the -k option (short for key) and give two numbers (separated by a comma) which are the first and last fields to sort on. This looks a bit weird since most of the time we want to sort on just a single field, so the first and last fields are the same. Here’s how we sort by percent identity:

$ sort -k3,3  blast_example_nocomments.txt
my_protein	gi|60654447|gb|AAX29914.1|	100.00	100.00	599	0	0	1	599	1	599	0.0	 1243
my_protein	gi|507558178|ref|XP_004662827.1|	90.62	94.79	576	54	0	24	599	25	600	0.0	 1103
my_protein	gi|507558180|ref|XP_004662828.1|	90.62	94.79	576	54	0	24	599	25	600	0.0	 1103
my_protein	gi|507558182|ref|XP_004662829.1|	90.62	94.79	576	54	0	24	599	24	599	0.0	 1102
my_protein	gi|586470237|ref|XP_006865713.1|	90.62	96.01	576	54	0	24	599	25	600	0.0	 1094
...
my_protein	gi|674214810|gb|AIK97765.1|	99.83	99.83	599	1	0	1	599	1	599	0.0	 1239
my_protein	gi|76885914|gb|ABA60098.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1233
my_protein	gi|76885916|gb|ABA60099.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1234

Percent identity is the third field, so the first and last fields to sort on are both three. Looking closely at the output above reveals something odd: the first line, where the percent identity is 100, should be last, but instead it is first. The reason for this is that we’re still sorting alphabetically, so 100 comes before 99 because it starts with a 1. To switch to numerical sorting, we use the -n option:

$ sort -k3,3  -n blast_example_nocomments.txt
my_protein	gi|507558178|ref|XP_004662827.1|	90.62	94.79	576	54	0	24	599	25	600	0.0	 1103
my_protein	gi|507558180|ref|XP_004662828.1|	90.62	94.79	576	54	0	24	599	25	600	0.0	 1103
my_protein	gi|507558182|ref|XP_004662829.1|	90.62	94.79	576	54	0	24	599	24	599	0.0	 1102
...
my_protein	gi|76885914|gb|ABA60098.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1233
my_protein	gi|76885916|gb|ABA60099.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1234
my_protein	gi|60654447|gb|AAX29914.1|	100.00	100.00	599	0	0	1	599	1	599	0.0	 1243

and the hit with 100% identity moves to its correct place at the end of the output. If we want to see the highest identity first – i.e. to reverse the sorting order – we can just use the -r option:

$ sort -k3,3  -n -r blast_example_nocomments.txt
my_protein	gi|60654447|gb|AAX29914.1|	100.00	100.00	599	0	0	1	599	1	599	0.0	 1243
my_protein	gi|76885916|gb|ABA60099.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1234
my_protein	gi|76885914|gb|ABA60098.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1233
...
my_protein	gi|507558182|ref|XP_004662829.1|	90.62	94.79	576	54	0	24	599	24	599	0.0	 1102
my_protein	gi|507558180|ref|XP_004662828.1|	90.62	94.79	576	54	0	24	599	25	600	0.0	 1103
my_protein	gi|507558178|ref|XP_004662827.1|	90.62	94.79	576	54	0	24	599	25	600	0.0	 1103

If we want to sort on multiple fields we can use multiple -k options. For example, here’s how we sort by percent identity (field number three) then by hit length (field number five):

$ sort -k3,3  -k5,5 -n -r blast_example_nocomments.txt
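
Note that the -n and -r options here apply to both keys. If we wanted different behaviour for different keys – say, percent identity descending but hit length ascending – the modifiers can instead be attached to the individual key specifications. Here’s a sketch of that syntax (output not shown):

$ sort -k3,3nr -k5,5n blast_example_nocomments.txt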

Head

Often when we sort BLAST hits by some criterion we’re interested in the first or last hits. For example, we might only want to see the five longest hits. To do this, we can take the output of sort and pipe it directly into head. The head command displays just the first few lines of its input – by default, the first ten, though we can specify a different number with the -n option. Here’s how we extract the five longest hits from our BLAST report – we sort by hit length (the fifth field) in reverse order then pipe the results into head and extract the first 5 lines:

$ sort -k5,5 -n -r blast_example_nocomments.txt | head -n 5
my_protein	gi|724957135|ref|XP_010353695.1|	98.33	98.83	599	10	0	1	599	1	599	0.0	 1224
my_protein	gi|694936914|ref|XP_009455498.1|	99.33	100.00	599	4	0	1	599	1	599	0.0	 1239
my_protein	gi|674214810|gb|AIK97765.1|	        99.83	99.83	599	1	0	1	599	1	599	0.0	 1239
my_protein	gi|635071455|ref|XP_007966350.1|	98.33	99.33	599	10	0	1	599	60	658	0.0	 1225
my_protein	gi|60654447|gb|AAX29914.1|	       100.00	100.00	599	0	0	1	599	1	599	0.0	 1243

If we combine this with the cut command that we discussed earlier, we can do more sophisticated things. For example, we can look at just the hit names and bitscores for the five longest hits by piping the results of the above command into cut and extracting the second and thirteenth fields:

$ sort -k5,5 -n -r blast_example_nocomments.txt | head -n 5 | cut -f 2,13
gi|724957135|ref|XP_010353695.1|	 1224
gi|694936914|ref|XP_009455498.1|	 1239
gi|674214810|gb|AIK97765.1|	         1239
gi|635071455|ref|XP_007966350.1|	 1225
gi|60654447|gb|AAX29914.1|	         1243

This is an interesting example because the field that we’re using for sorting – the hit length – doesn’t form part of the final output. Because of this, we have to be careful with the order of commands – the sort has to come before the cut. If we try and do it the other way round:

$ cut -f 2,13 blast_example_nocomments.txt | sort -k5,5 -n -r | head -n 5

it won’t work, because by the time sort sees the data the hit length field is no longer there – it’s been removed by cut.

Filtering with awk

The last example we saw involved grabbing the five longest hits; what if, instead, we wanted to grab all the hits that were longer than a given length? We can’t use the sort/head approach outlined above, because we don’t know in advance how many lines we want to display. Instead, we need a tool that’s capable of filtering lines based on specific criteria, and the one we’re going to use is called awk. The awk command is a very flexible tool for text manipulation and can do lots of things; here we’re only going to use a tiny bit of its capability. To use awk for filtering, we run it with a command string that describes the criterion which lines must pass. Inside this string, we can use $1 to refer to the first field, $2 to refer to the second, and so on. We can also use mathematical comparisons like equals and less-than. A few examples will make it clearer – here’s how we display just the hits that are longer than 590 residues (i.e. just the lines where the fifth field is greater than 590):

$ awk '$5 > 590' blast_example_nocomments.txt
my_protein	gi|60654447|gb|AAX29914.1|	100.00	100.00	599	0	0	1	599	1	599	0.0	 1243
my_protein	gi|397526517|ref|XP_003833169.1|	99.50	100.00	599	3	0	1	599	1	599	0.0	 1240
my_protein	gi|18104967|ref|NP_000953.2|;gi|317373262|sp|P23219.2|PGH1_HUMAN	99.83	99.83	599	1	0	1	599	1	599	0.0	 1240
my_protein	gi|674214810|gb|AIK97765.1|	99.83	99.83	599	1	0	1	599	1	599	0.0	 1239
my_protein	gi|694936914|ref|XP_009455498.1|	99.33	100.00	599	4	0	1	599	1	599	0.0	 1239
my_protein	gi|387018|gb|AAA36439.1|	99.50	99.50	599	3	0	1	599	1	599	0.0	 1236
my_protein	gi|76885916|gb|ABA60099.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1234
my_protein	gi|76885914|gb|ABA60098.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1233
my_protein	gi|109110319|ref|XP_001088270.1|;gi|544492470|ref|XP_005580931.1|;gi|685591740|ref|XP_009186406.1|	98.33	99.33	599	10	0	1	599	1	599	0.0	 1227
my_protein	gi|635071455|ref|XP_007966350.1|	98.33	99.33	599	10	0	1	599	60	658	0.0	 1225
my_protein	gi|724957135|ref|XP_010353695.1|	98.33	98.83	599	10	0	1	599	1	599	0.0	 1224
my_protein	gi|109110321|ref|XP_001088157.1|	98.32	99.33	596	10	0	4	599	37	632	0.0	 1222
my_protein	gi|544492468|ref|XP_005580930.1|;gi|355567462|gb|EHH23803.1|;gi|355753051|gb|EHH57097.1|	98.32	99.33	596	10	0	4	599	37	632	0.0	 1221
my_protein	gi|403266039|ref|XP_003925205.1|	97.16	98.50	599	17	0	1	599	1	599	0.0	 1209
my_protein	gi|667269156|ref|XP_008570258.1|	92.80	95.98	597	43	0	3	599	35	631	0.0	 1123
my_protein	gi|470608129|ref|XP_004314816.1|	90.95	94.81	597	54	0	3	599	4	600	0.0	 1122
my_protein	gi|403310639|ref|NP_001258093.1|	91.82	91.82	599	1	1	1	599	1	551	0.0	 1115
my_protein	gi|194033503|ref|XP_001926164.1|	91.12	95.48	597	53	0	3	599	4	600	0.0	 1102

Here’s how we display just the hits that include the very start of the query sequence (i.e. the lines where the eighth field is equal to one):

$ awk '$8 == 1' blast_example_nocomments.txt
my_protein	gi|60654447|gb|AAX29914.1|	100.00	100.00	599	0	0	1	599	1	599	0.0	 1243
my_protein	gi|397526517|ref|XP_003833169.1|	99.50	100.00	599	3	0	1	599	1	599	0.0	 1240
my_protein	gi|18104967|ref|NP_000953.2|;gi|317373262|sp|P23219.2|PGH1_HUMAN	99.83	99.83	599	1	0	1	599	1	599	0.0	 1240
my_protein	gi|674214810|gb|AIK97765.1|	99.83	99.83	599	1	0	1	599	1	599	0.0	 1239
my_protein	gi|694936914|ref|XP_009455498.1|	99.33	100.00	599	4	0	1	599	1	599	0.0	 1239
my_protein	gi|387018|gb|AAA36439.1|	99.50	99.50	599	3	0	1	599	1	599	0.0	 1236
my_protein	gi|109110319|ref|XP_001088270.1|;gi|544492470|ref|XP_005580931.1|;gi|685591740|ref|XP_009186406.1|	98.33	99.33	599	10	0	1	599	1	599	0.0	 1227
my_protein	gi|635071455|ref|XP_007966350.1|	98.33	99.33	599	10	0	1	599	60	658	0.0	 1225
my_protein	gi|724957135|ref|XP_010353695.1|	98.33	98.83	599	10	0	1	599	1	599	0.0	 1224
my_protein	gi|403266039|ref|XP_003925205.1|	97.16	98.50	599	17	0	1	599	1	599	0.0	 1209
my_protein	gi|403310639|ref|NP_001258093.1|	91.82	91.82	599	1	1	1	599	1	551	0.0	 1115

Here’s how we display just the hits with fewer than four mismatches (i.e. the lines where the sixth field is less than four):

$ awk '$6 < 4' blast_example_nocomments.txt
my_protein	gi|60654447|gb|AAX29914.1|	100.00	100.00	599	0	0	1	599	1	599	0.0	 1243
my_protein	gi|397526517|ref|XP_003833169.1|	99.50	100.00	599	3	0	1	599	1	599	0.0	 1240
my_protein	gi|18104967|ref|NP_000953.2|;gi|317373262|sp|P23219.2|PGH1_HUMAN	99.83	99.83	599	1	0	1	599	1	599	0.0	 1240
my_protein	gi|674214810|gb|AIK97765.1|	99.83	99.83	599	1	0	1	599	1	599	0.0	 1239
my_protein	gi|387018|gb|AAA36439.1|	99.50	99.50	599	3	0	1	599	1	599	0.0	 1236
my_protein	gi|76885916|gb|ABA60099.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1234
my_protein	gi|76885914|gb|ABA60098.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1233
my_protein	gi|530391358|ref|XP_005252162.1|	99.47	99.65	571	3	0	29	599	4	574	0.0	 1186
my_protein	gi|403310639|ref|NP_001258093.1|	91.82	91.82	599	1	1	1	599	1	551	0.0	 1115

You get the idea. We can combine multiple criteria with && – here’s how we display just the hits with fewer than four mismatches whose length is less than 599 residues:

$ awk '$6 < 4 && $5 < 599' blast_example_nocomments.txt
my_protein	gi|76885916|gb|ABA60099.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1234
my_protein	gi|76885914|gb|ABA60098.1|	99.83	99.83	596	1	0	4	599	35	630	0.0	 1233
my_protein	gi|530391358|ref|XP_005252162.1|	99.47	99.65	571	3	0	29	599	4	574	0.0	 1186
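
Just as && means “and”, we can use || to mean “or”. Here’s a sketch (output not shown) that displays the hits which either have fewer than four mismatches or are at least 99% identical:

$ awk '$6 < 4 || $3 >= 99' blast_example_nocomments.txt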

Unsurprisingly, combining awk with the other tools allows us to ask even more specific questions. For example, what are the bitscores (field number 13) of the hits with at least 99% identical positions (field number 3)? To get the answer, we filter the lines using awk then pipe the output to cut to extract just the fields we want:

$ awk '$3 >= 99' blast_example_nocomments.txt | cut -f 13
 1243
 1240
 1240
 1239
 1239
 1236
 1234
 1233
 1186

Summarizing output

All of the examples we’ve seen so far result in multiple lines of output, but sometimes we want to summarize the output. Consider a variation on the example above: how many hits have at least 99% identical positions? We know how to use awk to filter out just the hits that satisfy the criterion, so all we need to do is count the number of lines of output we get. The wc tool (the name is short for word count) will do this if we use the -l (for lines) option:

$ awk '$3 >= 99' blast_example_nocomments.txt | wc -l
9

So rather than getting back a list, we now get back a single number. Another way to summarize output is to take the average of a list of numbers – for example, what’s the average bitscore of all hits with at least 99% identity? A combination of awk and cut will give us the bitscores:

$ awk '$3 >= 99' blast_example_nocomments.txt | cut -f 13
 1243
 1240
 1240
 1239
 1239
 1236
 1234
 1233
 1186

and in fact, awk is also capable of calculating the mean average. The command that does so is ‘{a+=$13} END{print a/NR}’, which roughly translates as “for each line, add the thirteenth field to a variable called a, then at the end of the file divide a by the number of lines and print it”. Any further explanation of this command string would require a massive diversion into the syntax of awk, so we’ll just treat it as a magic string for now and look at how it’s used:

$ awk '$3 >= 99' blast_example_nocomments.txt | awk '{a+=$13} END{print a/NR}'
1232.22

To calculate the average for a different field, we just replace $13 with something else. Notice that in the above example we’re using awk twice: once to filter out the lines and once to calculate the average. We could do the whole thing in a single awk command:

$ awk '$3 >= 99  {a+=$13;b+=1} END{print a/b}' blast_example_nocomments.txt
1232.22

but it’s harder to read and would take even more space to explain.
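
The same END-block pattern can be adapted to other summaries. For example, here’s a sketch for printing the highest bitscore in the file – it relies on awk treating the unset variable max as zero, which is safe here because bitscores are always positive:

$ awk '$13 > max {max=$13} END {print max}' blast_example_nocomments.txt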

An interesting feature of BLAST reports is that we can (depending on the settings) potentially have multiple hits to a single subject sequence, which will show up as multiple lines with identical second columns. To turn a list with duplicates into a unique list, we use the uniq command. For example, to get a list of all unique subject sequence names:

$ cut -f 2 blast_example_nocomments.txt | uniq 
gi|60654447|gb|AAX29914.1|
gi|397526517|ref|XP_003833169.1|
gi|18104967|ref|NP_000953.2|;gi|317373262|sp|P23219.2|PGH1_HUMAN
gi|674214810|gb|AIK97765.1|
...
gi|14278642|pdb|1HT5|A;gi|14278643|pdb|1HT5|B;gi|14278644|pdb|1HT8|A;gi|14278645|pdb|1HT8|B
gi|999675|pdb|1PRH|A;gi|999676|pdb|1PRH|B
gi|507932370|ref|XP_004677426.1|
gi|7245654|pdb|1EBV|A

If we compare the number of lines with and without uniq, we can see how many subject names are duplicated:

$ cut -f 2 blast_example_nocomments.txt | wc -l
102
$ cut -f 2 blast_example_nocomments.txt | uniq | wc -l
98

In this case we get 102 lines but only 98 unique lines, so we know that 4 are duplicated. But we have no way of knowing how the duplicates are distributed; do we have a single subject sequence with five hits (resulting in four duplicates) or four subject sequences each with two hits (also resulting in four duplicates)? To find out, we can add the -c option to uniq, which will prefix each line with the number of times it occurs:

$ cut -f 2 blast_example_nocomments.txt | uniq -c
      3 gi|60654447|gb|AAX29914.1|
      1 gi|397526517|ref|XP_003833169.1|
      1 gi|18104967|ref|NP_000953.2|;gi|317373262|sp|P23219.2|PGH1_HUMAN
      2 gi|674214810|gb|AIK97765.1|
      1 gi|694936914|ref|XP_009455498.1|
      ...
      1 gi|471370351|ref|XP_004375688.1|
      2 gi|545512378|ref|XP_005625066.1|
      ...
      1 gi|507932370|ref|XP_004677426.1|
      1 gi|7245654|pdb|1EBV|A

An important limitation of uniq is that it only identifies duplicated lines that are adjacent in the file. This is fine in the case of subject sequence names, because BLAST groups multiple hits to the same sequence together, but if we try the same trick with a field where duplicated values are spread throughout the file it won’t work. For example, we might want to see how many hits there are of each length (remember, alignment length is the fifth field), but if we try doing this:

$ cut -f 5 blast_example_nocomments.txt | uniq -c
      1 599
      2 123
      3 599
      1 123
      2 599
      ...

we can see that the lines containing 599 have not been correctly grouped together because they are not adjacent. To make this command work we need to sort the alignment lengths numerically before passing them to uniq:

$ cut -f 5 blast_example_nocomments.txt | sort -n | uniq -c
      3 123
      1 309
      2 551
      1 552
      2 553
      1 567
     12 568
      1 569
      2 570
      3 571
      1 573
      5 574
      1 575
     35 576
      9 579
      1 580
      4 582
      4 596
      3 597
     11 599

Sorting the lengths in this way not only gives us the correct counts, but also makes it easier to interpret the results.
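
A related trick worth knowing: piping the counts through a second, numeric sort ranks the lengths by how often they occur, so the most common alignment length ends up on the last line. A sketch (output not shown):

$ cut -f 5 blast_example_nocomments.txt | sort -n | uniq -c | sort -n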

Editing fields

In the last section of this tutorial we’ll look at a few different ways that we can make changes to the fields in a tabular data file. Let’s start with our old friend awk; we already know that inside an awk command we can use $3, for example, to get the value of the third field. We can also use the same notation to set the value of the third field – say, to a specific number:

$ awk '$3=123456789' blast_example_nocomments.txt 
my_protein gi|60654447|gb|AAX29914.1| 123456789 100.00 599 0 0 1 599 1 599 0.0 1243
my_protein gi|60654447|gb|AAX29914.1| 123456789 100.00 123 0 0 1 599 602 866 0.0 1243
my_protein gi|60654447|gb|AAX29914.1| 123456789 100.00 123 0 0 1 599 702 904 0.0 1243
my_protein gi|397526517|ref|XP_003833169.1| 123456789 100.00 599 3 0 1 599 1 599 0.0 1240
...

This by itself is not very useful – let’s look at a more realistic example. Imagine that we want to express the proportion of identical bases for each hit as a decimal fraction rather than as a percentage. To accomplish this, we use the awk command ‘$3=$3/100’ i.e. “set the new value of the third field to be the current value of the third field divided by one hundred”:

$ awk '$3=$3/100' blast_example_nocomments.txt 
my_protein gi|60654447|gb|AAX29914.1| 1 100.00 599 0 0 1 599 1 599 0.0 1243
my_protein gi|60654447|gb|AAX29914.1| 1 100.00 123 0 0 1 599 602 866 0.0 1243
my_protein gi|60654447|gb|AAX29914.1| 1 100.00 123 0 0 1 599 702 904 0.0 1243
my_protein gi|397526517|ref|XP_003833169.1| 0.995 100.00 599 3 0 1 599 1 599 0.0 1240
my_protein gi|18104967|ref|NP_000953.2|;gi|317373262|sp|P23219.2|PGH1_HUMAN 0.9983 99.83 599 1 0 1 599 1 599 0.0 1240
my_protein gi|674214810|gb|AIK97765.1| 0.9983 99.83 599 1 0 1 599 1 599 0.0 1239
my_protein gi|674214810|gb|AIK97765.1| 0.9983 99.83 123 1 0 1 599 600 700 0.0 1239
my_protein gi|694936914|ref|XP_009455498.1| 0.9933 100.00 599 4 0 1 599 1 599 0.0 1239
my_protein gi|387018|gb|AAA36439.1| 0.995 99.50 599 3 0 1 599 1 599 0.0 1236
...

You’ll notice that in the output shown above, awk has separated each field with a space rather than a tab as they were originally. Depending on what we want to do with the data at this point, this may not be a problem, but if we want to get the original tab separator back we just have to set awk’s OFS (short for Output Field Separator) variable to the tab character, which we represent as ‘\t’:

$ awk  '$3=$3/100' OFS='\t' blast_example_nocomments.txt 
my_protein	gi|60654447|gb|AAX29914.1|	1	100.00	599	0	0	1	599	1	599	0.0	1243
my_protein	gi|60654447|gb|AAX29914.1|	1	100.00	123	0	0	1	599	602	866	0.0	1243
my_protein	gi|60654447|gb|AAX29914.1|	1	100.00	123	0	0	1	599	702	904	0.0	1243
my_protein	gi|397526517|ref|XP_003833169.1|	0.995	100.00	599	3	0	1	599	1	599	0.0	1240
my_protein	gi|18104967|ref|NP_000953.2|;gi|317373262|sp|P23219.2|PGH1_HUMAN	0.9983	99.83	599	1	0	1	599	1	599	0.0	1240
my_protein	gi|674214810|gb|AIK97765.1|	0.9983	99.83	599	1	0	1	599	1	599	0.0	1239
my_protein	gi|674214810|gb|AIK97765.1|	0.9983	99.83	123	1	0	1	599	600	700	0.0	1239
...
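
Equivalently, we can set the variable up front using awk’s -v option, which keeps all the settings together before the command string – a sketch that gives the same output as above:

$ awk -v OFS='\t' '$3=$3/100' blast_example_nocomments.txt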

If we want to make multiple edits to each line, we simply separate them with a comma. Imagine that we want to take our list of hits to a protein sequence and translate the start/stop positions to the corresponding DNA sequence by multiplying them both by three. Here’s how we do it (remember that the query start and stop are fields number 8 and 9):

$ awk  '$8=$8*3,$9=$9*3' OFS='\t' blast_example_nocomments.txt 
my_protein	gi|60654447|gb|AAX29914.1|	100.00	100.00	599	0	0	3	1797	1	599	0.0	1243
my_protein	gi|60654447|gb|AAX29914.1|	100.00	100.00	123	0	0	3	1797	602	866	0.0	1243
my_protein	gi|60654447|gb|AAX29914.1|	100.00	100.00	123	0	0	3	1797	702	904	0.0	1243
my_protein	gi|397526517|ref|XP_003833169.1|	99.50	100.00	599	3	0	3	1797	1	599	0.0	1240
...

Notice that the operator for multiplication is an asterisk (*), not an x.

Finally, what if we want to alter just a part of a field, rather than the whole field? For example, all those bar characters (|) in the subject hit names make them difficult to read, so we might want to change them to spaces. We could do this job with awk, but the sed tool is a bit easier to use in this case. When using sed to replace one character with another, we give it a command string that looks like this: ‘s/X/Y/g’, where X is the character we want to change and Y is the replacement. In our example, we want to change all bar characters to spaces:

$ sed 's/|/ /g' blast_example_nocomments.txt
my_protein	gi 60654447 gb AAX29914.1 	100.00	100.00	599	0	0	1	599	1	599	0.0	 1243
my_protein	gi 60654447 gb AAX29914.1 	100.00	100.00	123	0	0	1	599	602	866	0.0	 1243
my_protein	gi 60654447 gb AAX29914.1 	100.00	100.00	123	0	0	1	599	702	904	0.0	 1243
my_protein	gi 397526517 ref XP_003833169.1 	99.50	100.00	599	3	0	1	599	1	599	0.0	 1240

It’s important to note that these last few examples of editing fields do not alter the original file – if we want to save the output then we need to redirect it to a new file.
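
For example, to keep a more readable copy of the report we might write (the output filename here is made up for illustration):

$ sed 's/|/ /g' blast_example_nocomments.txt >blast_example_readable.txt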

Doing more complicated things

The examples in this tutorial just give a very brief overview of the capabilities of command-line tools to deal with tabular data. Each of the tools mentioned here – especially sed and awk – have many more options and are capable of doing much more complicated jobs. However, as command-lines and scripts get larger, they become harder to edit and maintain.

For any job involving tabular data that’s more complex than the examples outlined above, it’s probably best to switch to a “real” programming language – Python and Perl are both excellent for this type of text manipulation. If you’d like to learn how to use Python to tackle more complicated versions of these types of problems, sign up for a mini-course by email below, or check out the Python for Biologists books.

Why readable, documented code is especially important for scientists (and a three-step plan for getting there)

During my most recent teaching engagement I spent some time talking specifically about code readability and documentation. As often happens, presenting these ideas to a roomful of novice programmers helped to crystallize my thoughts on the topic, and made me realize that I’d never written about it – plus I thought that it would be an ideal topic for the first post of the new year, since documentation is something that many programmers constantly resolve to do better at!

There’s no shortage of articles and book chapters explaining the general importance of documenting your code – if you learned to program from a book or an online tutorial (such as Python for Biologists) then it will almost certainly have been mentioned. The arguments in favour of documentation are well-rehearsed: it makes it easier for you to work on your own code over a long period of time, it makes it easier for others to contribute fixes and features, it forces you to think about the purpose of each section, etc. In this post, I want to talk about why documentation is particularly important for you – somebody who is using programming to carry out scientific work. The basis of my argument is that for us as scientists, code serves two important functions over and above simply being executed: it acts as the ultimate supplementary methods, and it’s the medium in which you express your original ideas.

Code as supplementary methods

It’s probably fair to say that most users of most programs don’t care about how they actually work, as long as they do what they’re supposed to. When you fire up an image editing program to brighten up a dull digital photo or rotate one where the horizon isn’t straight, you probably aren’t interested in exactly what transformation is being applied to the RGB values, or what trigonometry is being used to straighten the image – you’re only interested in the end result.

Scientific software is different: the end-users are often extremely interested in how the program works internally, since understanding that is a part of understanding the results. And the ultimate way to resolve questions or disagreements about what a program is doing is to examine the source code. This is a great advantage we have when working in bioinformatics. For wet-lab work, there is only so much information you can give in the pages of a journal about how an experiment was carried out. Using supplementary information can help, but even then you’re limited to what the authors thought important enough to write down. For a bioinformatics experiment, however, one can always see exactly where the data came from and what happened to it, providing one has access to the source code. You can read about a piece of bioinformatics software in a journal, listen to a talk on it, discuss it with the authors, but at the end of the day if you still have questions about how it works, you can always go back to the source code.

The vast majority of programmers don’t have to worry about their users wanting to read the source, but we do – so we should make readability and documentation a priority to make sure that it’s as useful as possible.

Code as a way of expressing original ideas

The vast majority of software projects don’t implement any ideas that are particularly original. This isn’t a problem, it’s just a reflection of the fact that many pieces of software do very similar things to other pieces of software, and do them in similar ways. There are fairly standard ways of writing a blog engine, a stock management program, an image manipulation program etc. We could make an argument, therefore, that for those categories of software it’s not super-important that the code is really well-documented, since it’s unlikely to be doing anything surprising, and a reader can probably work out what’s going on in each section by referring to other software that carries out the same task.

Scientific software is different. Yes, we tend to write scripts to carry out tedious everyday tasks like tweaking file formats and correcting sequence headers, but we also use code to implement entirely new ideas about how to assemble genomes, or how to correct frameshift mutations, or how to pick species for phylogenetic analysis. We’re far more likely than other programmers to write code that does something entirely new. As such our programs (at least the ones that do something interesting) are going to be harder to understand than yet another text editor or chat program.

As programmers, we’re very lucky in that the language we use to implement our original ideas – code – is also an excellent way to communicate them to other researchers. But the usefulness of that language depends on whether we write it in a readable way and document it well.

Three steps to readable, documented code

Documentation is a skill that is learned over the course of a career, but here’s an exercise that I often have my students do. Using a framework like this can make documenting your code less daunting if you’ve no idea where to start.

Step one: make sure your variable and function names are meaningful

Programmers are fond of talking about self-documenting code – i.e. code that doesn’t require external documentation to be understood. A large part of this is using meaningful variable names. Examples of bad variable and function/method names include:

  • Single-letter names e.g. a, b, f (with the exception of variable names that follow common conventions such as x and y for co-ordinates or i for an index)
  • Names that describe the type of data rather than the contents e.g. my_list, dict
  • Names that are extremely generic e.g. process_file(), do_stuff(), my_data
  • Names that come in multiples e.g. file1, file2
  • Names that are excessively shortened e.g. gen_ref_seq_uc
  • Multiple names that are only distinguished by case or punctuation e.g. input_file and inputfile, DNA_seq and dna_seq
  • Names that are misspelled – the computer does not care about spelling but your readers might

Go through your code and look for any instances of the above, and replace them with good names. Good variable names tell us the job of the variable or function. This is also a good opportunity to replace so-called magic numbers – constants that appear in the code with no explanation – with meaningful variable names e.g. 64 might be replaced by number_of_codons.
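
To illustrate the magic number point, here’s a minimal sketch:

# unclear: what does 64 mean here?
codon_counts = [0] * 64

# better: the name explains where the number comes from
number_of_codons = 64
codon_counts = [0] * number_of_codons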

Example: we want to define two variables which hold the DNA sequence for a contig and a frame, then pass them to a function which will carry out protein translation and store the result. Here’s how not to do it, even though the code is perfectly valid Python:

a = 2
b = 'ATGCGATTGGA'
c = do_stuff(a, b)

This is much better:

frame = 2
contig_dna_seq = 'ATGCGATTGGA'
contig_protein_seq = translate(frame, contig_dna_seq)

Step two: write brief comments explaining the reasoning behind particularly important or complex statements

For most programs, it’s probably true to say that the complexity lies in a very small proportion of the code. There tends to be a lot of straightforward code concerned with parsing command-line options, opening files, getting user input, etc. The same applies to functions and methods: there are likely many statements that do things like unpacking tuples, iterating over lists, and concatenating strings. These lines of code, if you’ve followed step one above, are self-documenting – they don’t require any additional commentary to understand, so there’s no need to write comments for them.

This allows you to concentrate your documentation efforts on the few lines of code that are harder to understand – those whose purpose is not clear, or which are inherently difficult to understand. Here’s one such example – this is the start of a function for processing a DNA sequence codon-by-codon (e.g. for producing a protein translation):

for codon_start in range(0, len(dna)-2, 3):
    codon_stop = codon_start+3
    codon = dna[codon_start:codon_stop]
    ...

The first line is not trivial to understand, so we want to write a comment explaining it. Here’s an example of how not to do it:

# iterate over numbers from zero to the length of
# the dna sequence minus two in steps of three
for codon_start in range(0, len(dna)-2, 3):
    ...

The reason that this is a bad comment is that it simply restates what the code does – it doesn’t tell us why. Reading the comment leaves us no better off in knowing why the last start position is the length of the DNA sequence minus two. This is much better:

# get the start position for each codon
# the final codon starts two bases before the end of the sequence
# so we don't get an incomplete codon if the length isn't a multiple of three
for codon_start in range(0, len(dna)-2, 3):
    ...

Now we can see from reading the comment that the reason for the -2 is to ensure that we don’t end up processing a codon which is only one or two bases long in the event that there are incomplete codons at the end of the DNA sequence.

Go through your code and look for lines whose function isn’t obvious just from reading them, and add explanations.

Step three: add docstrings to your functions/methods/classes/modules

Functions and methods are the way that we break up our code into discrete, logical units, so it makes sense that we should also document them as discrete, logical units. Everything in this section also applies to methods, classes and modules, but to keep things readable I’ll just refer to functions below.

Python has a very straightforward convention for documenting functions: we add a triple-quoted string at the start of the function which holds the documentation e.g.

def get_at_content(dna):
  """return the AT content of a DNA string.
     The string must be in upper case.
     The AT content is returned as a float"""
  length = len(dna)
  a_count = dna.count('A')
  t_count = dna.count('T')
  at_content = float(a_count + t_count) / length
  return at_content

This triple-quoted string is called a docstring. The advantage of including function documentation in this way as opposed to in a comment is that, because it uses a standard format, the docstring can be extracted automatically. This allows us to do useful things like automatically generate API documentation from docstrings, or provide interactive help when running the Python interpreter in a shell (take a look at the chapter on testing and documentation in Advanced Python for Biologists for an in-depth look at how this works).
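
For example, in an interactive session the built-in help function will display the docstring – the output looks roughly like this:

>>> help(get_at_content)
Help on function get_at_content in module __main__:

get_at_content(dna)
    return the AT content of a DNA string.
    The string must be in upper case.
    The AT content is returned as a float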

There are various different conventions for writing docstrings. As a rule, useful docstrings need to describe the order and types of the function arguments and the description and type of the return value. It’s also helpful to mention any restrictions on the argument (for instance, as above, that the DNA string must be in upper case). The example above is written in a very unstructured way, but because triple-quoted strings can span multiple lines, we could also adopt a more structured approach:

def get_at_content(dna):
  """return the AT content of a DNA string.

     Arguments: a string containing a DNA sequence.
                The string must be in upper case.

     Returns: the AT content as a float"""
  ...

If you think it’s helpful, you can also give examples of how to use the function in the docstring. Notice that we’re not saying anything in the docstring about how the function works. The whole point of encapsulating code into functions is that we can change the implementation without worrying about how it will affect the calling code!
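
Such an example might look like this – a sketch based on the function above:

def get_at_content(dna):
  """return the AT content of a DNA string.

     Example:
         >>> get_at_content('ATGC')
         0.5
  """
  ...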

Summary

These three steps represent the minimum amount of work that you should do on any code that you plan on keeping around for more than a few weeks, or that you plan on showing to anybody else. As always, if you have questions or suggestions, leave a comment.

What you have in common with the Wright brothers

Warning: vast historical oversimplification below in pursuit of a point 🙂

Famously, the Wright brothers built and flew the first aircraft capable of sustained, powered flight in 1903. Looking at the famous photos with eyes used to seeing modern aircraft, it looks pretty airworthy:

[Image: the 1904 Wright Flyer]

There were plenty of other people working on heavier-than-air flying machines around that time, many with much more money and far more resources. So what was the key to the Wright brothers’ success? Did they invent a new type of engine? A new type of wing? Not really – their greatest invention was this:

[Image: the Wright brothers’ wind tunnel]

This unprepossessing-looking box is a wind tunnel, which the Wright brothers – realizing that it was far too time-consuming to test wing designs by building them full scale – used to test their aeronautical designs using models. The innovation that prompted their breakthrough was not an improvement to aircraft, but an improvement in the process for designing aircraft. By using a wind tunnel, they were simply able to make their mistakes faster than anyone else, and to learn from them. Others had to learn by building, and crashing, full-size aircraft.

This is far from an original observation, but I think it has some connection with programming. The story of the Wright brothers illustrates the power of rapid iterative improvement – their approach would probably be called “agile” if it were being used today. The difference between the Wright brothers and their contemporary rivals mirrors a difference that I often see between my students’ approaches to writing code.

On the one hand, you have people who favour small, incremental improvements when writing a program or a function, testing each bit of code as soon as possible and uncovering bugs and mistakes early. Students who program in this way end up with programs and functions that resemble the Wright Flyer pictured above: crude and primitive, perhaps, but certainly fit-for-purpose and relatively unlikely to result in broken bones.

On the other hand, you have people who try to write an entire program or function all in one go, never testing any bit of it until the whole thing is written. Students who program in this way end up with programs and functions that resemble other products of early aviation:

[Image: a crashed early flying machine]

As the picture above attests, this is a recipe for pain.
