1-DAV-202 Data Management 2023/24
Previously 2-INF-185 Data Source Integration

· Grades from marked homeworks are on the server in file /grades/userid.txt
· Please submit project proposals by Friday, April 12. Topics from potential bachelor thesis supervisors can be found in /tasks/temy.txt (in Slovak).
· Due to the Student Research Conference, the Javascript and Bioinf3 homeworks are due on April 25, 9:00am.


Data Source Integration 2015/16


Contact

Instructors

Schedule

  • Wednesday 14:50-17:10, room M-217

Introduction

Target audience

This course is intended for 2nd-year students of the bachelor program in Bioinformatics and for students of the bachelor and master programs in Computer Science, particularly if they plan to take the state exam track Bioinformatics and Machine Learning in their master studies. Students of other tracks and programs are also welcome, provided they have the required (informal) prerequisites.

We assume that students of this course can already program in some programming language and are not afraid to learn new languages as needed. We also assume basic familiarity with Linux, including running commands on the command line (you should know at least the basic commands for working with files and directories, such as cd, mkdir, cp, mv, rm, chmod etc.). Although most of the technologies covered in this course can be used to process data from many fields, we will often illustrate them on examples from bioinformatics. We will try to explain the necessary concepts, but it helps if you know the basic notions of molecular biology, such as DNA, RNA, protein, gene, genome, evolution, phylogenetic tree etc. We recommend that students of the Bioinformatics and Machine Learning track first take Methods in Bioinformatics and only then this course.

If you want to learn the basics of using the command line, try e.g. this tutorial: http://korflab.ucdavis.edu/bootcamp.html

Course goals

During your studies you will learn many interesting algorithms, models and methods that can be used to process data in bioinformatics and other fields. However, if you want to apply these methods to real data, during your studies or later at work, you will find that considerable effort is usually needed to obtain the data itself, to preprocess it into a suitable form, to test and compare different methods or their settings, and to produce final results in the form of clear tables and plots. These activities often need to be repeated many times for different inputs, different settings and so on. Particularly in bioinformatics you can find jobs in which your main duty will be to process data using existing tools, possibly supplemented by smaller programs of your own. In this course we will introduce programming languages, techniques and technologies suited to these activities. Many of them are applicable to data from various fields, but we will also cover tools specific to bioinformatics.

Basic principles

We recommend the following article with good advice on computational experiments:

Some important principles:

  • Quote from the article Noble 2009: "Everything you do, you will probably have to do over again."
  • Document all steps of your experiments well (what you did, why you did it, what the result was)
    • Even you yourself will not remember these details in a few months
  • Try to maintain a logical structure of directories and files
    • However, if you have many experiments, it may be sufficient to label them by date instead of inventing ever new long names
  • Try to avoid manual modification of intermediate results, which prevents easy repetition of the experiment
  • Try to detect errors in the data
    • Scripts should stop with an error message when something does not go as it should
    • In scripts, check as much as possible that the input data matches your expectations (correct format, reasonable range of values etc.)
    • If you call another program from a script, check its exit code
    • Also check intermediate results of the computation as often as possible (by manual inspection, by computing various statistics etc.) to uncover errors in the data or in your code
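The exit-code check mentioned above can be sketched in Perl as follows (a minimal sketch; the command true is only an illustration):

```perl
#!/usr/bin/perl -w
use strict;

# run an external command; system() returns its raw exit status
my $status = system("true");    # illustrative command that always succeeds

# non-zero status means the command failed to run, exited with a
# non-zero code, or was killed by a signal
die "Command failed (status $status)" if $status != 0;
print "command succeeded\n";
```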

Rules

Grading

  • Homework: 55%
  • Project proposal: 5%
  • Project: 40%

Scale:

  • A: 90 and more, B: 80-89, C: 70-79, D: 60-69, E: 50-59, FX: less than 50%

Course format

  • Three teaching hours each week, of which roughly the first is a lecture and the remaining two are exercises. During the exercises you solve tasks on your own and finish them at home as homework.
  • During the exam period you will submit a project. After the projects are submitted, there will also be a discussion about the project with the instructors, which can influence your points for the project.
  • You will have an account on a Linux server dedicated to this course. Use this account only for the purposes of this course and try not to overload the server with your activity, so that it can serve all students. Any attempts to deliberately disrupt the operation of the server will be considered a serious violation of the course rules.

Homework

  • The deadline for the homework related to the current lecture is always 9:00 on the day of the next lecture (i.e. usually slightly less than a week after it is assigned).
  • We recommend starting homework during the exercises, where we can advise you if needed. If you have questions later, ask the instructors by email.
  • You can do the homework on any computer, preferably under Linux. However, the submitted code or commands should be runnable on the course server, so do not use special software or settings of your own computer.
  • Homework is submitted by copying the required files into the required directory on the server. Specific requirements will be given in the assignment.
  • If file names are specified in the assignment, follow them. If you invent them yourselves, name them reasonably. If needed, create subdirectories, e.g. for individual tasks.
  • Make sure the submitted source code is readable (indentation, reasonable variable names, comments where needed)

Protocols

  • Most homework will require a text document called a protocol as part of the submission.
  • The protocol is also submitted electronically (it should be placed in the submitted directory).

Protocol format

  • The protocol can be in .txt or .pdf format and its name should be HWxx.pdf or HWxx.txt, where xx is the two-digit number of the homework, e.g. 01, 02,...
  • You can create the pdf in any way you like, e.g. in an office suite, in LaTeX, in an iPython notebook etc. Text in the submitted pdf file should be selectable. If you use the txt format with diacritics, encode them in UTF-8; for simplicity you can also write protocols without diacritics.
  • The protocol can be in Slovak or in English.

Protocol header, self-assessment

  • At the top of the protocol state your name, the homework number and your assessment of how well you managed to solve it. The assessment is a clear list of all tasks from the assignment that you at least started to solve, with codes indicating their degree of completion:
    • code HOTOVO (done): you think this task is completely and correctly solved
    • code ČASŤ (part): you did not solve the whole task; in a note after the code briefly state what is done and what is not, or which parts you are unsure about.
    • code MOŽNO (maybe): the task is complete, but you are not sure whether it is correct. Again state in a note what you are unsure about.
    • code NIČ (nothing): you did not even start the task
  • Your assessment helps us in grading. Tasks marked HOTOVO will be checked on a random basis; for tasks marked MOŽNO we will try to give you some feedback, and likewise for tasks marked ČASŤ where the note says you had some problems.
  • In the assessment, try to judge the correctness of your solutions as well as you can; the quality of your self-assessment can influence the total number of points.

Protocol contents

  • List the submitted files. For each file state its purpose and whether you created it by hand, obtained it from external sources or computed it with some program. If you have many files with systematic naming, it is enough to explain the naming scheme in general. Files whose names are specified in the assignment do not need to be listed.
  • Also give the sequence of all executed commands or other steps by which you arrived at the results. List here commands for processing the data and for running your own or other programs. You do not need to list commands related to the programming itself (starting the editor, setting execute permissions etc.) or to copying the homework to the server. Add brief comments on the purpose of a particular command or group of commands.
  • In the protocol also list the sources (webpages etc.) that you used while solving the homework. You do not need to list the course webpage or sources recommended directly in the assignment.
  • Overall the protocol should allow the reader to get oriented in your files and, if interested, to carry out the same computations by which you arrived at your results. You do not need to write essays; understandable and well-organized bullet-point notes are enough.
  • The assignment can specify further items to be included in the protocol.

Projects

The goal of the project is to try out the skills you have learned on a concrete data-processing project. Your task is to obtain data, analyze it using some of the techniques from the lectures, possibly also other technologies, and present the obtained results in clear plots and tables. Ideally you will arrive at interesting or useful conclusions, but we will mainly grade the choice of a suitable approach and its technical difficulty. The extent of the programming or data analysis itself should correspond roughly to two homework assignments, but overall the project will be harder, because unlike the homework you are not given the procedure and the data in advance; you have to come up with them yourself, and the first idea does not always turn out to be the right one. You can also use existing tools and libraries in the project, but if possible use tools run from the command line.

Roughly two thirds into the semester you will submit a project proposal (txt or pdf format, 0.5-1 page). In this proposal state what data you will process, how you will obtain it, what the goal of the analysis is and what technologies you plan to use. You can slightly change the goals and technologies during the work as circumstances require, but you should have an initial idea. We will give you feedback on the proposal; in some cases it may be necessary to change the topic slightly or entirely. For a suitable proposal submitted on time you get 5% of the total grade. We recommend consulting the proposal with the instructors before submitting it.

A project submission deadline will be set during the exam period. As with the homework, submit a directory with the necessary files (omit very large data files) and with a project report in pdf format. This report should contain a text part and a protocol. The text part should contain the following sections:

  • an introduction, in which you explain the goals of the project, any necessary background from the studied field, and what data you had available
  • a brief description of the methods; do not list the individual steps in detail, rather give an overview of the approach used and its justification
  • the results of the analysis (tables, plots etc.) and a description of these results, including what conclusions can be drawn from them (do not forget to explain what the values in tables, axes of plots etc. mean). Besides the final results of the analysis, also include intermediate results by which you tried to verify that the original data and the individual parts of your pipeline behave reasonably.
  • a discussion, in which you state which parts of the project were difficult and what problems you ran into, where, on the other hand, you managed to find a simple solution to a problem, which parts of the project you would in hindsight recommend doing differently than you did, what you learned while working on the project, and so on

The text part should be continuous text in a technical style, similar to e.g. a thesis. You can write in Slovak or in English, but if possible grammatically correctly.

The protocol has a format similar to the homework protocols, i.e. it contains the list of files and the detailed procedure of the data analysis (executed commands), as well as the sources used (data, programs, documentation and other literature etc.). The protocol can be less formal, with brief bullet-point notes, but it should be well organized and understandable.

You can also work on a project in pairs, but then we require a more extensive project and each member should be primarily responsible for a certain part of the project, which you should also state in the report. Pairs submit one report, but after the project submission they meet with the instructors individually.

How to find a project topic:

  • You can process some data that you need for your bachelor or master thesis, or data that you need for another course (in that case, state in the report which course it is and also notify the other instructor that you used the data processing as a project for this course). Particularly for BIN students this course can be a good opportunity to find a bachelor thesis topic and start working on it.
  • You can try to reproduce an analysis done in a scientific article and verify that you obtain the same results. It is also advisable to vary the analysis slightly (run it on different data, change some settings, build another type of plot etc.)
  • You can try to find someone who has data they need processed but does not know how (this could be biologists, scientists from other fields, but also non-profit organizations etc.). If you contact third parties in this way, please work on the project especially responsibly, so that you do not give our faculty a bad name.
  • In the project you can compare several programs for the same task in terms of their speed or the accuracy of their results; the project then consists of preparing the data on which the programs will be run, the runs themselves (suitably scripted) and the evaluation of the results.
  • And of course you can dig up interesting data somewhere on the internet and try to mine something from it.

Plagiarism

  • You are allowed to discuss homework and projects and strategies for solving them with classmates and other people. However, the code, the obtained results and the text you submit must be your own independent work. It is forbidden to show your code or texts to classmates.
  • When solving homework and the project, we expect you to use internet sources, particularly various manuals and discussion forums about the technologies covered. However, do not try to find ready-made solutions of the assigned tasks. List all sources used in your homework and projects.
  • If we find cases of copying or forbidden aids, all students involved get zero points for the homework, project etc. in question (i.e. including those who let classmates copy their work), and the case is further referred to the disciplinary committee of the faculty.

Publishing

The assignments and materials for this course are freely available on this page. However, please do not publish or otherwise distribute your homework solutions, unless the assignment says otherwise. You can publish your projects, as long as this does not conflict with your agreement with the person who suggested the project or provided the data.


Lecture 1: Perl, part 1

Why Perl

  • From Wikipedia: It has been nicknamed "the Swiss Army chainsaw of scripting languages" because of its flexibility and power, and possibly also because of its "ugliness".

Official slogans:

  • There's more than one way to do it
  • Easy things should be easy and hard things should be possible

Advantages

  • Good capabilities for processing text files, regular expressions, running external programs etc.
  • Closer to a general-purpose programming language than shell scripts
  • Perl one-liners on the command line can replace many other tools such as sed and awk
  • Many existing libraries

Disadvantages

  • Quirky syntax
  • It is easy to write very unreadable programs (sometimes jokingly called a write-only language)
  • Quite slow and uses a lot of memory. If possible, do not read the entire input into memory; process it line by line

Warning: we will use Perl 5; Perl 6 is quite a different language

Sources of Perl-related information

  • Man pages (from the package perl-doc):
    • man perlintro introduction to Perl
    • man perlfunc list of standard functions in Perl
    • perldoc -f split describes function split, similarly other functions
    • perldoc -q sort shows answers to commonly asked questions (FAQ)
    • man perlretut and man perlre regular expressions
    • man perl list of other manual pages about Perl
  • The same content on the web http://perldoc.perl.org/
  • Various web tutorials e.g. this one
  • Books
    • Simon Cozens: Beginning Perl [1] freely downloadable
    • Larry Wall et al: Programming Perl [2] classics, Camel book
  • Bioperl [3] big library for bioinformatics
  • Perl for Windows: http://strawberryperl.com/

Hello world

Code given on the command line can be run directly (more on this later):

perl -e'print "Hello world\n"'

Roughly the same as a script stored in a file:

#! /usr/bin/perl -w
use strict;
print "Hello world!\n";
  • The first line is the path to the interpreter
  • The -w switch turns on warnings, e.g. when we manipulate an undefined value (equivalent to use warnings;)
  • The second line, use strict, turns on stricter syntax checks, e.g. every variable must be declared.
  • We strongly recommend using both -w and use strict;
  • Save the program to a file, e.g. hello.pl
  • Set its execute permissions (chmod u+x hello.pl)
  • Run it with ./hello.pl
  • Without the interpreter path and execute permissions it can also be run with perl hello.pl

The first input file for today: sequence repeats

  • In genomes some sequences occur in many copies (often not exactly equal, only similar)
  • We have downloaded a table containing such sequence repeats on chromosome 2L of the fruitfly Drosophila melanogaster
  • It was done as follows: on the webpage http://genome.ucsc.edu/ we select the drosophila genome, then in the main menu select Tools, Table browser, select group: variation and repeats, track: RepeatMasker, region: position chr2L, output format: all fields from selected table and output file: repeats.txt
  • Each line of the file contains data about one repeat in the selected chromosome. The first line contains column names. Columns are tab-separated. Here are the first two lines:
#bin    swScore milliDiv        milliDel        milliIns        genoName        genoStart       genoEnd genoLeft        strand  repName repClass        repFamily       repStart        repEnd  repLeft id
585     778     167     7       20      chr2L   1       154     -23513558       +       HETRP_DM        Satellite       Satellite       1519    1669    -203    1
  • The file can be found at our server under filename /tasks/hw01/repeats.txt (17185 lines)
  • A small randomly selected subset of the table rows is in file /tasks/hw01/repeats-small.txt (159 lines)

A sample Perl program

For each type of repeat (column 11 of the file when counting from 0) we want to compute the number of repeats of this type

#!/usr/bin/perl -w
use strict;

#associative array (hash), with repeat type as key
my %count;  

while(my $line = <STDIN>) {  # read every line on input
    chomp $line;    # delete end of line, if any

    if($line =~ /^#/) {  # skip commented lines
       next;       # similar to "continue" in C, move to next iteration
    }

    # split the input line to columns on every tab, store them in an array
    my @columns = split "\t", $line;  

    # check input - should have at least 17 columns
    die "Bad input '$line'" unless @columns >= 17;

    my $type = $columns[11];

    # increase counter for this type
    $count{$type}++;
}

# write out results, types sorted alphabetically
foreach my $type (sort keys %count) {
    print $type, " ", $count{$type}, "\n";
}

This program does the same thing as the following one-liner (more on one-liners in two weeks)

perl -F'"\t"' -lane 'next if /^#/; die unless @F>=17; $count{$F[11]}++; END { foreach (sort keys %count) { print "$_ $count{$_}" }}' filename

Variables, types

Scalar variables

  • Scalar variables start with $, they can hold undefined value (undef), string, number, reference etc.
  • Perl converts automatically between strings and numbers
perl -e'print((1 . "2")+1, "\n")'
13
perl -e'print(("a" . "2")+1, "\n")'
1
perl -we'print(("a" . "2")+1, "\n")'
Argument "a2" isn't numeric in addition (+) at -e line 1.
1
  • If we switch on strict parsing, each variable must be declared with my; several variables can be created and initialized as follows: my ($a,$b) = (0,1);
  • Usual set of C-style operators; power is **, string concatenation is .
  • Numbers are compared by <, <=, ==, != etc., strings by lt, le, eq, ne, gt, ge
  • Three-way comparison: $a cmp $b for strings, $a <=> $b for numbers; returns -1 if $a<$b, 0 if they are equal, +1 if $a>$b
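A short illustration of the difference between numeric and string comparison (a minimal sketch):

```perl
#!/usr/bin/perl -w
use strict;

# numerically 9 < 10, but as strings "9" sorts after "10"
print 9 <=> 10, "\n";       # prints -1
print "9" cmp "10", "\n";   # prints 1
print "abc" eq "abc" ? "equal" : "different", "\n";
```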

Arrays

  • Names start with @, e.g. @a
  • Access to element 0 in array: $a[0]
    • Starts with $, because the expression as a whole is a scalar value
  • Length of array scalar(@a). In scalar context, @a is the same thing.
    • e.g. for(my $i=0; $i<@a; $i++) { ... }
  • If using non-existent indexes, they will be created, initialized to undef (++, += treat undef as 0)
  • Stack/vector using functions push and pop: push @a, (1,2,3); $x = pop @a;
  • Analogously shift and unshift work at the left end of the array (slower)
  • Sorting
    • @a = sort @a; (sorts alphabetically)
    • @a = sort {$a <=> $b} @a; (sorts numerically)
    • inside the curly braces you can put any comparison code of your own; $a and $b are the two elements being compared.
  • Concatenation: @c = (@a,@b);
  • Swapping the values of two variables: ($x,$y) = ($y,$x);
  • Iterate through values of an array (values can be changed):
perl -e'my @a = (1,2,3); foreach my $val (@a) { $val++; } print join(" ", @a), "\n";'
2 3 4
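The stack operations and the two sorting variants above can be sketched as:

```perl
#!/usr/bin/perl -w
use strict;

my @a = (10, 9, 100);
push @a, 2;                        # @a is now (10, 9, 100, 2)
my $last = pop @a;                 # removes and returns 2

my @alpha = sort @a;               # alphabetical: (10, 100, 9)
my @num   = sort {$a <=> $b} @a;   # numerical:    (9, 10, 100)
print join(" ", @num), "\n";       # prints: 9 10 100
```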

Associative arrays (hashes)

  • Names start with %, e.g. %b
  • Access to the element with key "X": $b{"X"}
  • Printing all elements of associative array %b:
foreach my $key (keys %b) {
  print $key, " ", $b{$key}, "\n";
}
  • Initialization by a constant: %b = ("kluc1"=>"hodnota1","kluc2"=>"hodnota2")
    • instead of => you can also write a comma
  • existence test (does not create the tested key): if(exists $a{"x"}) {...}
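A small self-contained sketch combining initialization, exists and iteration (the keys are illustrative):

```perl
#!/usr/bin/perl -w
use strict;

my %count = ("apple" => 3, "pear" => 1);
$count{"apple"}++;                               # counters work as in the lecture example

print "no plum\n" unless exists $count{"plum"};  # exists does not create the key

foreach my $key (sort keys %count) {             # keys come in arbitrary order, so sort
    print $key, " ", $count{$key}, "\n";
}
```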

Multidimensional arrays, fun with references

  • Reference to a variable: \$a, \@a, \%a
  • Reference to an anonymous array [1,2,3], to an anonymous hash {"kluc1"=>"hodnota1"}
  • Hash of lists:
my %a = ("kluc1"=>[1,2,3], "kluc2"=>[4,5,6]);
$x = $a{"kluc1"}[1];
push @{$a{"kluc1"}}, 4;
my $aref = \%a;
$x = $aref->{"kluc1"}[1];
  • The module Data::Dumper provides a function Dumper, which recursively prints complex structures
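A runnable sketch of the constructs above, including Data::Dumper (the keys are illustrative):

```perl
#!/usr/bin/perl -w
use strict;
use Data::Dumper;

# a hash of lists
my %a = ("key1" => [1, 2, 3]);
push @{$a{"key1"}}, 4;        # dereference the stored array ref and append
my $aref = \%a;               # reference to the whole hash
my $x = $aref->{"key1"}[1];   # $x = 2

print Dumper(\%a);            # recursively prints the whole structure
```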

Strings, regular expressions

Strings

  • Substring: substr($string, $start, $length)
    • can also be used to access individual characters (use length 1)
    • If we omit $length, the substring extends to the end of the string; a negative $start is counted from the end of the string
    • It can also be used to replace a substring by something else: substr($str, 0, 1) = "aaa" (replaces the first character by "aaa")
  • Length of a string: length($str)
  • Splitting a string into parts: split reg_expression, $string, $max_number_of_parts
    • if " " is used instead of a regular expression, splits at whitespace
  • Connecting parts: join($separator, @strings)
  • Other useful functions: chomp (removes end of line), index (finds a substring), lc, uc (conversion to lowercase/uppercase), reverse (mirror image), sprintf (C-style formatting)
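Several of these string functions in one sketch (the coordinate-like string is only an illustration):

```perl
#!/usr/bin/perl -w
use strict;

my $str = "chr2L:1-154";
my $first = substr($str, 0, 1);         # "c" (single character via substr)
my ($name, $range) = split ":", $str;   # "chr2L" and "1-154"
my ($start, $end) = split "-", $range;  # 1 and 154
printf "%s starts with %s, length %d\n", $name, $first, $end - $start;
```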

Regular expressions

$line =~ s/\s+$//;   # delete whitespace at the end of the line
$line =~ s/[0-9]+/X/g;  # replace every run of digits by the character X

# from the name of a fasta sequence (starting with ">") take the string up to the first space
# (\S is a non-whitespace character); it is stored in variable $1, because it is in parentheses
if($line =~ /^\>(\S+)/) { $name = $1; }

perl -le'$X="123 4 567"; $X=~s/[0-9]+/X/g; print $X'
X X X

Conditionals, loops

if(expression) {  # {} and () cannot be omitted
   commands
} elsif(expression) {
   commands
} else {
   commands
}

command if expression;   # here () not necessary
command unless expression;
die "negative value of x: $x" unless $x>=0;

for(my $i=0; $i<100; $i++) {
   print $i, "\n";
}

foreach my $i (0..99) {
   print $i, "\n";
}

$x=1;
while(1) {
   $x *= 2;
   last if $x>=100;
}
  • Undefined value, number 0 and strings "" and "0" evaluate as false, but I would recommend always using explicit tests in conditional expressions, e.g. if(defined $x), if($x eq ""), if($x==0) etc.

Input, output

  • Reading one line from standard input: $line = <STDIN>
  • If no more input data available, returns undef
  • See also [5]
  • Special idiom while(my $line = <STDIN>) equivalent to while (defined(my $line = <STDIN>))
    • iterates through all lines of input
  • chomp $line removes "\n", if any from the end of the string
  • output to stdout through print or printf
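The line-reading idiom can be sketched as follows; to keep the example self-contained it reads from a filehandle opened over an in-memory string instead of STDIN:

```perl
#!/usr/bin/perl -w
use strict;

# the while(<...>) idiom works for any filehandle
my $data = "first line\nsecond line\n";
open my $in, "<", \$data or die "cannot open: $!";

my $lines = 0;
while (my $line = <$in>) {   # <$in> returns undef at end of input
    chomp $line;             # remove the trailing "\n", if any
    $lines++;
    print "read: $line\n";
}
close $in;
```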

The second input file for today: DNA sequencing reads (fastq)

  • DNA sequencing machines can read only short pieces of DNA called reads
  • Reads are usually stored in fastq format
  • Files can be very large (gigabytes or more), but we will use only a small sample from bacteria Staphylococcus aureus, source [6]
  • Each read is on 4 lines:
    • line 1: ID of the read and other description, line starts with @
    • line 2: DNA sequence, A,C,G,T are bases (nucleotides) of DNA, N means unknown base
    • line 3: +
    • line 4: quality string, a string of the same length as the DNA in line 2. Each character represents the quality of one base of the DNA. If p is the probability that this base is wrong, the quality string contains the character with ASCII value 33+(-10 log p), where log is the decimal logarithm. This means that a higher ASCII value means a base of higher quality. Character ! (ASCII 33) means error probability 1, character $ (ASCII 36) means 50% error, character + (ASCII 43) is 10% error, character 5 (ASCII 53) is 1% error.
    • Note that some sequencing platforms represent qualities differently (see article linked above)
  • Our file has all reads of equal length (this is not always the case)
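The quality encoding above can be checked with a short sketch, decoding the character 5 mentioned in the description:

```perl
#!/usr/bin/perl -w
use strict;

# decode one quality character using the 33-based encoding
my $char = "5";               # ASCII 53
my $q = ord($char) - 33;      # quality value, here 20
my $p = 10 ** (-$q / 10);     # error probability, here 0.01
printf "quality %d, error probability %g\n", $q, $p;
```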

The first 4 reads from file /tasks/hw01/reads-small.fastq

@SRR022868.1845/1
AAATTTAGGAAAAGATGATTTAGCAACATTTAGCCTTAATGAAAGACCAGATTCTGTTGCCATGTTTGAATGCCTTAAACCAGTAGCAGAATCAGTATAAA
+
IICIIIIIIIIIID%IIII8>I8III1II,II)I+III*II<II,E;-HI>+I0IB99I%%2GI*=?5*&1>'$0;%'+%%+;#'$&'%%$-+*$--*+(%
@SRR022868.1846/1
TAGCGTTGTAAAATAAATTTCTAGAATGGAAGTGATGATATTGAAATACACTCAGATCCTGAATGAAAGATTTATTAAAGTTAAGACGAGAGTCTCATTAT
+
4CIIIIIIII52I)IIIII0I16IIIII2IIII;IIAII&I6AI+*+&G5&G.@8/6&%&,03:*.$479.91(9--$,*&/3"$#&*'+#&##&$(&+&+

And now start on #HW01

HW01

See Lecture 1

Files

We have 4 input files for this homework. We recommend creating soft links to your working directory as follows:

ln -s /tasks/hw01/repeats-small.txt .  # small version of the repeat file
ln -s /tasks/hw01/repeats.txt .        # full version of the repeat file
ln -s /tasks/hw01/reads-small.fastq .  # smaller version of the read file
ln -s /tasks/hw01/reads.fastq .        # bigger version of the read file

We recommend writing your protocol starting from an outline provided in /tasks/hw01/HW01.txt

Submitting

  • Directory /submit/hw01/your_username will be created for you
  • Copy required files to this directory, including the protocol named HW01.txt or HW01.pdf
  • You can modify these files freely until deadline, but after the deadline of the homework, you will lose access rights to this directory

Task A

  • Consider the program for counting repeat types in the lecture 1, save it to file repeat-stat.pl
  • Extend it to compute the average length of each type of repeat
    • Each row of the input table contains the end and start coordinates of the repeat in columns 7 and 6 (counting from 0). The length is simply the difference of these two values.
  • Output a table with three columns: type of repeat, the number of occurrences, the average length of the repeat.
    • Use printf to print these three items right-justified in columns of sufficient width, print the average length to 1 decimal place.
  • If you run your script on the small file, the output should look something like this (exact column widths may differ):
./repeat-stat.pl < repeats-small.txt
                 DNA         5     377.4
                LINE         4     410.2
                 LTR        13     355.4
      Low_complexity        22      47.2
                  RC         8     236.2
       Simple_repeat       106      39.0
  • Include in your protocol the output when you run your script on the large file: ./repeat-stat.pl < repeats.txt
  • Find out on Wikipedia, what acronyms LINE and LTR stand for. Do their names correspond to their lengths? (Write a short answer in the protocol.)
  • Submit only your script, repeat-stat.pl

Task B

  • Write a script which reformats FASTQ file to FASTA format, call it fastq2fasta.pl
    • fastq file should be on standard input, fasta file written to standard output
  • FASTA format is a typical format for storing DNA and protein sequences.
    • Each sequence occupies several lines of the file. The first line starts with ">" followed by the identifier of the sequence and optionally some further description separated by whitespace
    • The sequence itself is on the second line, long sequences are split into multiple lines
  • In our case, the name of the sequence will be the ID of the read with @ replaced by > and / replaced by _
  • For example, the first two reads of reads.fastq are:
@SRR022868.1845/1
AAATTTAGGAAAAGATGATTTAGCAACATTTAGCCTTAATGAAAGACCAGATTCTGTTGCCATGTTTGAATGCCTTAAACCAGTAGCAGAATCAGTATAAA
+
IICIIIIIIIIIID%IIII8>I8III1II,II)I+III*II<II,E;-HI>+I0IB99I%%2GI*=?5*&1>'$0;%'+%%+;#'$&'%%$-+*$--*+(%
@SRR022868.1846/1
TAGCGTTGTAAAATAAATTTCTAGAATGGAAGTGATGATATTGAAATACACTCAGATCCTGAATGAAAGATTTATTAAAGTTAAGACGAGAGTCTCATTAT
+
4CIIIIIIII52I)IIIII0I16IIIII2IIII;IIAII&I6AI+*+&G5&G.@8/6&%&,03:*.$479.91(9--$,*&/3"$#&*'+#&##&$(&+&+
  • These should be reformatted as follows:
>SRR022868.1845_1
AAATTTAGGAAAAGATGATTTAGCAACATTTAGCCTTAATGAAAGACCAGATTCTGTTGCCATGTTTGAATGCCTTAAACCAGTAGCAGAATCAGTATAAA
>SRR022868.1846_1
TAGCGTTGTAAAATAAATTTCTAGAATGGAAGTGATGATATTGAAATACACTCAGATCCTGAATGAAAGATTTATTAAAGTTAAGACGAGAGTCTCATTAT
  • Submit files fastq2fasta.pl and reads-small.fasta
    • the latter file is created by running ./fastq2fasta.pl < reads-small.fastq > reads-small.fasta

Task C

  • Write a script fastq-quality.pl which for each position in a read computes the average quality
  • Standard input has fastq file with multiple reads, possibly of different lengths
  • As quality we will simply use the ASCII value of each character in the quality string with 33 subtracted, so the quality is -10 log p
    • ASCII value can be computed by function ord
  • Positions in reads will be numbered from 0
  • Since reads can differ by length, some positions are used in more reads, some in fewer
  • For each position from 0 up to the highest position used in some read, print three numbers separated by tabs "\t": the position index, the number of times this position was used in reads, average quality at that position with 1 decimal place (you can again use printf)
  • The last two lines when you run ./fastq-quality.pl < reads-small.fastq should be
99      86      5.5
100     86      8.6
  • Run the following command, which runs your script on the larger file and selects every 10th position. Include the output in your protocol. Do you see any trend in quality values with increasing position? (Include a short comment in protocol.)
./fastq-quality.pl < reads.fastq | perl -lane 'print if $F[0]%10==0'
  • Submit only fastq-quality.pl

Task D

  • Write script fastq-trim.pl that trims low quality bases from the end of each read and filters out short reads
  • This script should read a fastq file from standard input and write the trimmed fastq file to standard output
  • It should also accept two command-line arguments: character Q and integer L
    • We have not covered processing command line arguments, but you can use the code snippet below for this
  • Q is the minimum acceptable quality (characters from quality string with ASCII value >= ASCII value of Q are ok)
  • L is the minimum acceptable length of a read
  • First find the last base in a read which has quality at least Q (if any). All bases after this base will be removed from both sequence and quality string
  • If the resulting read has fewer than L bases, it is omitted from the output

You can check your program by the following tests:

  • If you run the following two commands, you should get tmp identical with input and thus output of diff should be empty
./fastq-trim.pl '!' 101 < reads-small.fastq > tmp  # trim at quality ASCII >=33 and length >=101
diff reads-small.fastq tmp                         # output should be empty (no differences)
  • If you run the following two commands, you should see differences in 4 reads, 2 bases trimmed from each
./fastq-trim.pl '"' 1 < reads-small.fastq > tmp   # trim at quality ASCII >=34 and length >=1
diff reads-small.fastq tmp                        # output should be differences in 4 reads
  • If you run the following commands, you should get empty output (no reads meet the criteria):
./fastq-trim.pl d 1 < reads-small.fastq           # quality ASCII >=100, length >= 1
./fastq-trim.pl '!' 102 < reads-small.fastq       # quality ASCII >=33 and length >=102

Further runs and submitting

  • Run ./fastq-trim.pl '(' 95 < reads-small.fastq > reads-small-filtered.fastq # quality ASCII >= 40
  • Submit files fastq-trim.pl and reads-small-filtered.fastq
  • If you have done task C, run quality statistics on the trimmed version of the bigger file using the command below and include the result in your protocol. Comment in the protocol on the differences between the statistics on the whole file in parts C and D. Are they as you expected?
./fastq-trim.pl 2 50 < reads.fastq | ./fastq-quality.pl | perl -lane 'print if $F[0]%10==0'  # quality ASCII >= 50
  • Note: you have created tools which can be combined, e.g. you can create quality-trimmed version of the fasta file by first trimming fastq and then converting to fasta (no need to submit these files)

Parsing command-line arguments in this task (they will be stored in variables $Q and $L):

#!/usr/bin/perl -w
use strict;

my $USAGE = "
Usage:
$0 Q L < input.fastq > output.fastq

Trim from the end of each read bases with ASCII quality value less
than the given threshold Q. If the length of the read after trimming
is less than L, the read will be omitted from output.

L is a non-negative integer, Q is a character
";

# check that we have exactly 2 command-line arguments
die $USAGE unless @ARGV==2;
# copy command-line arguments to variables Q and L
my ($Q, $L) = @ARGV;
# check that $Q is one character and $L looks like a non-negative integer
die $USAGE unless length($Q)==1 && $L=~/^[0-9]+$/;

L2

Opening files

my $in;
open $in, "<", "cesta/subor.txt" or die;  # open for reading
while(my $line = <$in>) {
  # do something with the line
}
close $in;

my $out;
open $out, ">", "cesta/subor2.txt" or die; # open for writing
print $out "Hello world\n";
close $out;
# to append to the end of the file, use
# open $out, ">>", "cesta/subor2.txt" or die;

# standard filehandles
print STDERR "Hello world\n";
my $line = <STDIN>;
# passing a filehandle as a function argument
citaj_subor($in);
citaj_subor(\*STDIN);

Working with files and directories

Temporary directories (temporary files are also possible). They are deleted automatically when the program finishes.

use File::Temp qw/tempdir/;
my $dir = tempdir("atoms_XXXXXXX", TMPDIR => 1, CLEANUP => 1 ); 
print STDERR "Creating temporary directory $dir\n";
open $out, ">", "$dir/subor.txt" or die;

Copying files

use File::Copy;
copy("file1","file2") or die "Copy failed: $!";
copy("Copy.pm",\*STDOUT);
move("/dev1/fileA","/dev2/fileB");

Perl also has built-in functions chdir, mkdir, unlink, chmod, ...

The function glob finds files matching a wildcard pattern (see also opendir and readdir and the module File::Find)

ls *.pl
perl -le'foreach my $f (glob("*.pl")) { print $f; }'

For further work with filenames, paths etc. there are also the modules File::Spec and File::Basename.

Testing existence of files (more in perldoc -f -X)

if(-r "subor.txt") { ... }  # is it readable?
if(-d "adresar") { ... }    # is it a directory?

Running external programs

my $ret = system("prikaz argumenty");
# returns -1 if the command could not be started,
# otherwise the wait status of the command ($ret >> 8 is its exit code)
my $subory = `ls`;
# captures the command's output (text) into the variable $subory
# the exit status of the command is available in the special variable $?

Piping from another program

open $in, "ls |" or die;
while(my $line = <$in>) { ... }

Piping to another program

perl -e'open $out, "| wc"; print $out "1234\n"; close $out;'
      1       1       5

Command-line arguments

# module for processing dash options
use Getopt::Std;
# usage string
my $USAGE = "$0 [options] length filename

Options:
-l           switch on lucky mode
-o filename  write output to filename
";

# parse dash options and remove them from @ARGV
my %options;
getopts("lo:", \%options);
# two arguments should remain in @ARGV
die $USAGE unless @ARGV==2;
# store the arguments in sensibly named variables
my ($length, $filename) = @ARGV;
# %options now contains the values of the parsed options
if(exists $options{'l'}) { print "Lucky mode\n"; }

See also the module Getopt::Long for long option names.

Custom functions and modules

Defining new functions

sub meno_funkcie {
  # arguments are in the array @_
  my ($prvy, $druhy) = @_;
  # computation...
  return ($vysledok, $druhy_vysledok);
}
  • If return is not used, the function returns the value of the last evaluated expression.
  • Arrays and hashes are passed as references: meno_funkcie(\@pole, \%hash);
  • Long strings (e.g. DNA) can also be passed as references to avoid copying: meno_funkcie(\$sequence);
  • Inside the function they must be dereferenced: substr($$sequence, ...) or $pole->[0] etc.

A module named XXX should be stored in the file XXX.pm.

package shared;

BEGIN {
    use Exporter   ();
    our (@ISA, @EXPORT, @EXPORT_OK);
    @ISA = qw(Exporter);
    # symbols to export by default
    @EXPORT = qw(funkcia1 funkcia2);
}

sub funkcia1 {
...
}

sub funkcia2 {
...
}

# a module must return a true value
1;

Using a module located in the same directory as the .pl file:

use FindBin qw($Bin);  #$Bin is the directory with the script
use lib "$Bin";        # add the script's directory to the library path
use shared;

Bioperl

use Bio::Tools::CodonTable;
sub translate
{
    my ($seq, $code) = @_;
    my $CodonTable = Bio::Tools::CodonTable->new( -id => $code);
    my $result = $CodonTable->translate($seq);

    return $result;
}

HW02

Biological background and overall approach

The task for today will be to build a phylogenetic tree of several species using sequences of several genes.

  • We will use 6 mammals: human, chimp, macaque, mouse, rat and dog
  • A phylogenetic tree is a tree showing evolutionary history of these species. Leaves are target present-day species, internal nodes are their common ancestors.
  • There are methods to build trees by comparing DNA or protein sequences of several present-day species.
  • Our input contains a small selection of gene sequences from each species. In a real project we would start from all genes (about 20,000 per species) and would carefully filter out problematic sequences, but we skip this step here.
  • The first step will be to identify which genes from different species "correspond" to each other. More exactly, we are looking for groups of orthologs. To do so, we will use a simple method based on sequence similarity, see details below. Again, in real project, more complex methods might be used.
  • The result of ortholog group identification will be a set of genes, each gene having one sequence from each of the 6 species
  • Next we will process each gene separately, aligning its sequences and building a phylogenetic tree for this gene using existing methods.
  • The result of the previous step will be several trees, one for every gene. Ideally, all trees would be identical, showing the real evolutionary history of the six species. But it is not easy to infer the real tree from sequence data, so trees from different genes might differ. Therefore, in the last step, we will build a consensus tree.

Technical overview

This task can be organized in different ways, but to practice Perl, we will write a single Perl script which takes as an input a set of fasta files, each containing DNA sequences of several genes from a single species and writes on output the resulting consensus tree.

  • For most of the steps, we will use existing bioinformatics tools. The script will run these tools and do some additional simple processing.

Temporary directory

  • During its run, the script and various tools will generate many files. All these files will be stored in a single temporary directory which can be then easily deleted by the user.
  • We will use Perl library File::Temp to create this temporary directory with a unique name so that the script can be run several times simultaneously without clashing filenames.
  • The library by default creates the directory in /tmp, but we will instead create it in the current directory, so that it is not deleted when the computer restarts and can be more easily inspected for any problems
  • The library by default deletes the directory when the script finishes but again, to allow inspection by the user, we will leave the directory in place

Restart

  • The script will have a command line option for restarting the computation and omitting the time-consuming steps that were already finished
  • This is useful in long-running scripts because during development of the script you will want to run it many times as you add more steps. In real usage the computation can also be interrupted by various reasons.
  • Our restart capabilities will be quite rudimentary: before running a potentially slow external program, the script will check if the temporary directory contains a non-empty file with the filename matching the expected output of the program. If the file is found, it is assumed to be correct and complete and the external program is not run.
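The same check expressed in shell, as a hedged sketch (the file name species.blast and the fake output are placeholders; in the homework script the check would use Perl's -s file test instead):

```shell
# create an illustrative temporary directory for the outputs
tmpdir=$(mktemp -d)
outfile="$tmpdir/species.blast"

# -s tests that a file exists and is non-empty
if [ -s "$outfile" ]; then
  echo "skipping: $outfile already computed"
else
  echo "running the slow step"
  echo "fake blast output" > "$outfile"   # stand-in for the real program
fi

# running the same check again now takes the "skipping" branch
if [ -s "$outfile" ]; then
  echo "skipping: $outfile already computed"
fi
```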

Command line options

  • The script should be named build-tree.pl and as command-line arguments, it will get names of the species
    • For example, we can run the script as follows: ./build-tree.pl human chimp macaque mouse rat dog
    • The first species, in this case human, will be so called reference species (see task A)
    • The script needs at least 2 species, otherwise it will write an error message and stop
    • For each species X there should be a file X.fa in the current directory, this is also checked by the script
  • Restart is specified by command line option -r followed by the name of temporary directory
  • Command-line option handling and creation of temporary directory is already implemented in the script you are given.

Input files

  • Each input fasta X.fa file contains DNA sequences of several genes from one species X
  • Each sequence name on a line starting with > will contain species name, underscore and gene id, e.g. ">human_00008"
  • Species name matches name of the file, gene id is unique within the fasta file
  • Species names and gene ids do not contain underscore, whitespace or any other special characters
  • Sequence of each gene can be split into several lines

Files and submitting

In /tasks/hw02/ you will find the following files:

  • 6 fasta files (*.fa)
  • skeleton script build-tree.pl
    • This script already contains handling of command line options, entire task B, potentially useful functions my_run and my_delete and suggested function headers for individual tasks. Feel free to change any of this.
  • outline of protocol HW02.txt
  • directory example with files for two different groups of genes

Copy the files to your directory and continue writing the script

Submitting

  • Submit the script, protocol HW02.txt or HW02.pdf and temporary directory with all files created in the run of your script on all 6 species with human as reference.
  • Since the commands and names of files are specified in the homework, you do not need to write them in the protocol (unless you change them). Therefore it is sufficient to include in the protocol self-assessment, and any used information sources other than those linked from this assignment or lectures.
  • Submit by copying to /submit/hw02/your_username

Task A: run blast to find similar sequences

  • To find orthologs, we use a simple method by first finding local alignments (regions of sequence similarity) between genes from different species
  • For finding alignments, we will use tool blast (ubuntu package blast2)
  • Example of running blast:
formatdb -p F -i human.fa
blastall -p blastn -m 9 -d human.fa -i mouse.fa -e 1e-5
  • Example of output file:
# BLASTN 2.2.26 [Sep-21-2011]
# Query: mouse_00492
# Database: human.fa
# Fields: Query id, Subject id, % identity, alignment length, mismatches, gap openings, q. start, q. end, s. start, s. end, e-value, bit score
mouse_22930     human_00008     90.79   1107    102     0       1       1107    1       1107    0.0     1386
mouse_22930     human_34035     80.29   350     69      0       745     1094    706     1055    3e-37    147
mouse_22930     human_34035     79.02   143     30      0       427     569     391     533     8e-07   46.1

(note last column - score)

  • For each non-reference species, save the result of blast search in file species.blast in the temporary directory.

Task B: find orthogroups

This part is already implemented in the skeleton file; you don't need to implement or report anything in this task

  • Here, we process all the species.blast files to find ortholog groups.
  • Matches are symmetric, and there can be multiple matches for the same gene. We are looking for reciprocal best hits: pairs of genes human_A and mouse_B, where mouse_B is the match with the highest score in mouse for human_A and human_A is the best-scoring match in human for mouse_B.
  • Some genes in reference species may have no reciprocal best hits in some of the non-reference species.
  • The gene in the reference species and all of its reciprocal best hits constitute an orthogroup. If the size of an orthogroup equals the number of species, we call it a complete orthogroup
  • In the file genes.txt in the temporary directory we will list all orthogroups, one per line.
chimp_94013 dog_84719 human_15749 macaque_34640 mouse_17461 rat_09232
chimp_61053 human_18570 macaque_12627
chimp_41364 human_19217 macaque_88256 rat_82436

Task C: create a file for each orthogroup

  • For each complete orthogroup, we will create a fasta file with corresponding DNA sequences.
  • The file will be located in temporary directory and will be named genename.fa, where genename is the name of the orthogroup gene from reference species.
  • The fasta header of each sequence is the name of the species, NOT the name of the gene.
>human
CTGCGGCTGAGAGAGATGTGTACACTGGGGACGCACTCCGGATCTGCATAGTGACCAAAGAGGGCATCAGGGAGGAAACTGTTTCCTTAAGGAAGGAC
>chimp
TGCGGCTGAGAGAGATGTGTACACTGGGGACGCACTCCGGATCTGCATAGTGACCAAAGAGGGCATCAGGGAGGAGACTGTTTCCTTAAGGAAGGAC
>macaque
CTGCGGCTGAGAGAGACGTGTACACTGGGGACGCGCTCCGGATCTGCATAGTGACCAAAGAGGGCATCAGGGAGGAGACTGTTCCCTTAAGGAAGGAC
>mouse
CAGCCGAGAGGGATGTGTATACTGGAGATGCTCTCAGGATCTGCATCGTGACCAAAGAGGGCATCAGGGAGGAAACTGTTCCCCTGCGGAAAGAC
>rat
CAGCCGAGAGGGATGTGTACACTGGAGACGCCCTCAGGATCTGCATCGTGACCAAAGAGGGCATCAGGGAGGAGACTGTTCCCCTTCGGAAAGAC
>dog
GAGGGATGTGTACACTGGGGATGCACTCAGAATCTGCATTGTGACTAAGGAGGGCATCAGGGAGGAGACTGTTCCCCTGAGGAAGGAT

Task D: build tree for each gene

  • For each orthogroup, we need to build a phylogenetic tree.
  • The result for file genename.fa should be saved in file genename.tree
  • Example of how to do this:
# create multiple alignment of the sequences
muscle -diags -in genename.fa -out genename.mfa
# change format of the multiple alignment
readseq -f12 genename.mfa -o=genename.phy -a
# run phylogenetic inference program
phyml -i genename.phy --datatype nt --bootstrap 0 --no_memory_check
# rename the result
mv genename.phy_phyml_tree.txt genename.tree
  • You can view the multiple alignment (*.mfa and *.phy) by using program seaview
  • You can view the resulting tree (*.tree) by using program njplot

Task E: build consensus tree

  • Trees built on individual genes can differ from each other.
  • Therefore we build a consensus tree: tree that only contains branches present in most gene trees; other branches are collapsed.
  • phylip is an "interactive" program for manipulation of trees. The specific command for building a consensus tree is
phylip consense
  • the input file for phylip needs to contain all the trees from which the consensus should be built, one per line
  • store the output tree from phylip in all_trees.consensus in the temporary directory and also print it to standard output

L3

Today: using command-line tools and Perl one-liners.

  • We will do simple transformations of text files using command-line tools without writing any scripts or longer programs.
  • You will record the commands used in your protocol
    • We strongly recommend making a log of commands for data processing also outside of this course
  • If you have a log of executed commands, you can easily execute them again by copy and paste
  • For this reason, precede any comments in the log with # so that pasted lines remain valid commands
  • If you use some sequence of commands often, you can turn it into a script

Most commands have man pages or are described within man bash

Efficient use of command line

Some tips for bash shell:

  • use tab key to complete command names, path names etc
    • tab completion can be customized [7]
  • use up and down keys to walk through history of recently executed commands, then edit and resubmit chosen command
  • press ctrl-r to search in the history of executed commands
  • at the end of a session, the history is stored in ~/.bash_history
  • the command history -a appends the current history to this file immediately
    • you can then look into the file and copy appropriate commands to your protocol
  • various other history tricks, e.g. special variables [8]
  • cd - goes to previously visited directory, also see pushd and popd
  • ls -lt | head shows 10 most recent files, useful for seeing what you did last

Instead of bash, you can use more advanced command-line environments, e.g. the IPython notebook

Redirecting and pipes

# redirect standard output to file
command > file

# append to file
command >> file

# redirect standard error
command 2>file

# redirect file to standard input
command < file

# do not forget to quote > in other uses
grep '>' sequences.fasta
# (without quotes rewrites sequences.fasta)

# send stdout of command1 to stdin of command2
command1 | command2

# the backtick operator executes a command, removes the trailing \n
# from its stdout and substitutes the result into the command line
# the following commands do the same thing:
head -n 2 file
head -n `echo 2` file

# redirect a quoted string to stdin of the command head (a here-string)
head -n 2 <<< 'line 1
line 2
line 3'

# in some commands, file argument can be taken from stdin if denoted as - or stdin
# the following compares uncompressed version of file1 with file2
zcat file1.gz | diff - file2

Make piped commands fail properly:

set -o pipefail

If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. By default this option is disabled and a pipeline returns the exit status of its rightmost command.
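A quick demonstration of the difference, with false standing in for any failing first stage:

```shell
# without pipefail, the pipeline's status is that of cat, i.e. success
(false | cat) && echo "reported success"

# with pipefail, the failure of false is propagated
( set -o pipefail; false | cat ) || echo "reported failure"
```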

Text file manipulation

Commands echo and cat (creating and printing files)

# print text Hello and end of line to stdout
echo "Hello" 
# interpret backslash combinations \n, \t etc:
echo -e "first line\nsecond\tline"
# concatenate several files to stdout
cat file1 file2

Commands head and tail (looking at start and end of files)

# print 10 first lines of file (or stdin)
head file
some_command | head 
# print the first 2 lines
head -n 2 file
# print the last 5 lines
tail -n 5 file
# print starting from line 100 (line numbering starts at 1)
tail -n +100 file
# print lines 81..100
head -n 100 file | tail -n 20 

Commands wc, ls -lh, od (exploring file stats and details)

# prints three numbers: number of lines (-l), number of words (-w), number of bytes (-c)
wc file

# prints size of file in human-readable units (K,M,G,T)
ls -lh file

# od -a prints file or stdin with named characters 
#   allows checking whitespace and special characters
echo "hello world!" | od -a
# prints:
# 0000000   h   e   l   l   o  sp   w   o   r   l   d   !  nl
# 0000015

Command grep (getting lines in files or stdin matching a regular expression)

# -i ignores case (upper case and lowercase letters are the same)
grep -i chromosome file
# -c counts the number of matching lines in each file
grep -c '^[12][0-9]' file1 file2

# other useful options:
# -v print/count not matching lines (inVert)
# -n show also line numbers
# -B 2 -A 1 print 2 lines before each match and 1 line after match
# -E extended regular expressions (allows e.g. |)
# -F no regular expressions, set of fixed strings
# -f patterns in a file 
#    (good for selecting e.g. only lines matching one of "good" ids)

Commands sort, uniq

# some useful options of sort:
# -g numeric sort
# -k which column(s) to use as key
# -r reverse (from largest values)
# -s stable
# -t field separator

# sort primarily by column 2 numerically; in case of ties, sort by column 1
sort -k 1 file | sort -g -s -k 2,2

# uniq outputs one line from each group of consecutive identical lines
# uniq -c adds the size of each group as the first column
# the following finds all unique lines and sorts them by frequency from most frequent
sort file | uniq -c | sort -gr
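For example, counting how often each line occurs in a small sample piped in with printf:

```shell
# 3x b, 2x a, 1x c; output is sorted from the most frequent line
printf 'b\na\nb\nb\na\nc\n' | sort | uniq -c | sort -gr
```

Each output line has the count first (3 for b, 2 for a, 1 for c), then the line itself.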

Commands diff, comm (comparing files)

diff compares two files, useful for manual checking

  • useful options
    • -b ignore whitespace differences
    • -r compare directories recursively
    • -q only check whether the files are identical
    • -y show differences side by side

comm compares two sorted files

  • writes 3 columns:
    • 1: lines occurring only in the first file
    • 2: lines occurring only in the second file
    • 3: lines occurring in both files
  • some columns can be suppressed with -1, -2, -3
  • good for finding set intersections and differences
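A small illustration on two made-up files (comm requires both inputs to be sorted):

```shell
printf 'a\nb\nc\n' > set1.txt
printf 'b\nc\nd\n' > set2.txt
comm -12 set1.txt set2.txt   # suppress columns 1 and 2: intersection (b, c)
comm -23 set1.txt set2.txt   # suppress columns 2 and 3: lines only in set1.txt (a)
```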

Commands cut, paste, join (working with columns)

  • cut selects only some columns from a file (perl/awk are more flexible)
  • paste puts 2 or more files side by side, separated by tabs or other character
  • join is a powerful tool for making joins and left-joins as in databases on specified columns in two files
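A hedged sketch of all three commands on tiny made-up files:

```shell
printf '1 apple\n2 banana\n3 cherry\n' > fruit.txt
printf '1 red\n3 dark\n' > color.txt

cut -d ' ' -f 2 fruit.txt   # second column only: apple, banana, cherry
paste fruit.txt color.txt   # corresponding lines side by side, tab-separated
join fruit.txt color.txt    # database-style join on the first column
```

join prints only keys present in both files (here lines 1 and 3, e.g. "1 apple red"); both inputs must be sorted by the join column.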

Commands split, csplit (splitting files to parts)

  • split splits into fixed-size pieces (size in lines, bytes etc.)
  • csplit splits at occurrence of a pattern (e.g. fasta file into individual sequences)
csplit sequences.fa '/^>/' '{*}'

Programs sed and awk

Both programs process text files line by line and allow various transformations

  • awk is newer and more advanced
  • several examples below
  • More info on wikipedia: awk, sed
# replace text "Chr1" by "Chromosome 1"
sed 's/Chr1/Chromosome 1/'
# prints first two lines, then quits (like head -n 2)
sed 2q  

# print first and second column from a file
awk '{print $1, $2}' 

# print the line if difference in first and second column > 10
awk '{ if ($2-$1>10) print }'  

# print lines matching pattern
awk '/pattern/ { print }' 

# count lines
awk 'END { print NR }'

Perl one-liners

Instead of sed and awk we will cover Perl one-liners

  • more examples [9], [10]
  • documentation for Perl switches [11]
# -e executes commands
perl -e'print 2+3,"\n"'
perl -e'$x = 2+3; print $x, "\n"';

# -n wraps commands in a loop reading lines from stdin or files listed as arguments
# the following is roughly the same as cat:
perl -ne'print'
# how to use:
perl -ne'print' < input > output
perl -ne'print' input1 input2 > output
# lines are stored in a special variable $_
# this variable is default argument of many functions, 
# including print, so print is the same as print $_

# simple grep-like commands:
perl -ne 'print if /pattern/'
# simple regular expression modifications
perl -ne 's/Chr(\d+)/Chromosome $1/; print'
# // and s/// are applied by default to $_

# -l removes end of line from each input line and adds "\n" after each print
# the following adds * at the end of each line
perl -lne'print $_, "*"' 

# -a splits line into words separated by whitespace and stores them in array @F
# the next example prints difference in numbers stored in the second and first column
# (e.g. interval size if each line coordinates of one interval)
perl -lane'print $F[1]-$F[0]'

# -F allows to set separator used for splitting (regular expression)
# the next example splits at tabs
perl -F'\t' -lane'print $F[1]-$F[0]'

# -i replaces each file with a new transformed version (DANGEROUS!)
# the next example removes empty lines from all txt files in the current directory
perl -lane 'print if length($_)>0' -i *.txt
# the following example replaces sequence of whitespace by exactly one space 
# and removes leading and trailing spaces from lines
perl -lane 'print join(" ", @F)' -i *.txt

# END { commands } is run at the very end, after we finish reading input
# the following example computes the sum of interval lengths
perl -lane'$sum += $F[1]-$F[0]; END { print $sum; }'
# similarly BEGIN { command } before we start

# variable $. contains the line number; $ARGV contains the name of the current file, or - for stdin
# the following prints filename and line number in front of every line
perl -ane'printf "%s.%d: %s", $ARGV, $., $_' file1 file2

# moving files *.txt to have extension .tsv:
#   first print commands 
#   then execute by hand or replace print with system
#   mv -i asks if something is to be rewritten
ls *.txt | perl -lne '$s=$_; $s=~s/\.txt/.tsv/; print("mv -i $_ $s")'
ls *.txt | perl -lne '$s=$_; $s=~s/\.txt/.tsv/; system("mv -i $_ $s")'

HW03

Lecture 1, Lecture 2, Lecture 3

  • In this homework use command-line tools or one-liners in Perl, awk or sed. Do not write any scripts or programs.
  • Each task can be split into several stages and intermediate files written to disk, but you can also use pipelines to reduce the number of temporary files.
  • Your commands should work also for other input files with the same format (do not try to generalize them too much, but also do not use very specific properties of a particular input, such as number of lines etc.)
  • Document all relevant commands in your protocol and add a short description of your approach.
  • Submit the protocol HW03 and required output files.
  • Outline of protocol is in /tasks/hw03/HW03.txt, submit to directory /submit/hw03/yourname

Bonus

  • If you are bored, you can try to write a solution of Task B using as few characters as possible
  • In the protocol, include both normal readable form and the condensed form
  • Winner with the shortest set of commands gets some bonus points

Task A

  • /tasks/hw03/names.txt contains data about several people, one per line.
  • Each line consists of given name(s), surname and email separated by spaces.
  • Each person can have multiple given names (at least 1), but exactly one surname and one email. Email is always of the form username@uniba.sk.
  • The task is to generate file passwords.csv which contains a randomly generated password for each of these users
  • The output file has columns separated by commas ','
  • The first column contains username extracted from email address, second column surname, third column all given names and fourth column the randomly generated password
  • Submit file passwords.csv with the result of your commands.
  • Include commands that warn the user if the input has problems, such as a line with fewer than 3 columns, a missing @ in an email, or a field containing commas.
  • Such checks are not necessary in the other tasks, but you can do them if you want.

Example line from input:

Pavol Országh Hviezdoslav hviezdoslav32@uniba.sk

Example line from output (password will differ):

hviezdoslav32,Hviezdoslav,Pavol Országh,3T3Pu3un

Hints:

  • Passwords can be generated using makepasswd (use option --count); we also recommend using perl, wc, paste.
  • In Perl, function pop may be useful for manipulating @F and function join for connecting strings with a separator.

Task B

File:

  • /tasks/hw03/saccharomyces_cerevisiae.gff contains annotation of the yeast genome
    • Downloaded from http://yeastgenome.org/ on 2016-03-09, in particular from [12].
    • It was further processed to omit DNA sequences from the end of file.
    • The size of the file is 5.6M.
  • For easier work, link the file to your directory by ln -s /tasks/hw03/saccharomyces_cerevisiae.gff yeast.gff
  • The file is in GFF3 format [13]
  • Lines starting with # are comments, other lines contain tab-separated data about one interval of some chromosome in the yeast genome
  • Meaning of the first 5 columns:
    • column 0 chromosome name
    • column 1 source (can be ignored)
    • column 2 type of interval
    • column 3 start of interval (1-based coordinates)
    • column 4 end of interval (1-based coordinates)
  • You can assume that these first 5 columns do not contain whitespace

Task:

  • For each chromosome the file contains a line which has in column 2 string chromosome, and the interval is the whole chromosome.
  • To the file chromosomes.txt print a tab-separated list of chromosomes and their sizes in the same order as in the input
  • The last line of chromosomes.txt should list the total size of all chromosomes combined.
  • Submit file chromosomes.txt
  • Hint: tab is written in Perl as "\t". Command cat may be useful.
  • Your output should start and end as follows:
chrI    230218
chrII   813184
...
...
chrXVI  948066
chrmt   85779
total   12157105

Task C

  • Continue processing file from task B. Print for each type of interval (column 2), how many times it occurs in the file.
  • Sort from the most common to the least common interval types.
  • Hint: commands sort and uniq will be useful. Do not forget to skip comments.
  • Submit file types.txt with the output formatted as follows:
   7058 CDS
   6600 mRNA
...
...
      1 telomerase_RNA_gene
      1 mating_type_region
      1 intein_encoding_region

Task D

Overall goal:

  • Proteins from several well-studied yeast species were downloaded from database http://www.uniprot.org/ on 2016-03-09
  • We have also downloaded proteins from yeast Yarrowia lipolytica. We will pretend that nothing is known about these proteins (as if they were produced by gene finding program in a newly sequenced genome).
  • We have run blast of known proteins vs. Y.lip. proteins.
  • Now we want to find for each protein in Y.lip. its closest match among all known proteins.

Files:

  • /tasks/hw03/known.fa is a fasta file with known proteins from several species
  • /tasks/hw03/yarLip.fa is a fasta file with proteins from Y.lip.
  • /tasks/hw03/known.blast is the result of running blast of yarLip.fa versus known.fa by these commands:
formatdb -i known.fa
blastall -p blastp -d known.fa -i yarLip.fa -m 9 -e 1e-5 > known.blast
  • you can link these files to your directory as follows:
ln -s /tasks/hw03/known.fa .
ln -s /tasks/hw03/yarLip.fa .
ln -s /tasks/hw03/known.blast .

Step 1:

  • Get from known.blast the first (strongest) match for each query.
  • This can be done by printing the lines that are not comments but follow a comment line starting with #.
  • In a perl one-liner, you can create a state variable which remembers whether the previous line was a comment and based on that decide whether to print the current line.
  • Print only the first two columns separated by tab (name of query, name of target), sort the file by second column.
  • Submit file best.tsv with the result
  • File should start as follows:
Q6CBS2  sp|B5BP46|YP52_SCHPO
Q6C8R4  sp|B5BP48|YP54_SCHPO
Q6CG80  sp|B5BP48|YP54_SCHPO
Q6CH56  sp|B5BP48|YP54_SCHPO

Step 2:

  • Submit file known.tsv which contains sequence names extracted from known.fa with leading > removed
  • This file should be sorted alphabetically.
  • File should start as follows:
sp|A0A023PXA5|YA19A_YEAST Putative uncharacterized protein YAL019W-A OS=Saccharomyces cerevisiae (strain ATCC 204508 / S288c) GN=YAL019W-A PE=5 SV=1
sp|A0A023PXB0|YA019_YEAST Putative uncharacterized protein YAR019W-A OS=Saccharomyces cerevisiae (strain ATCC 204508 / S288c) GN=YAR019W-A PE=5 SV=1
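One possible way to produce known.tsv (a sketch; it assumes known.fa is linked in the current directory):

```shell
# Take the fasta header lines, drop the leading ">", sort alphabetically.
grep '^>' known.fa | sed 's/^>//' | sort > known.tsv
```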

Step 3:

  • Use the command join to join files best.tsv and known.tsv so that each line of best.tsv is extended with the text describing the corresponding target in known.tsv
  • Use option -1 2 to use the second column of best.tsv as the key for joining
  • The output of join may look as follows:
sp|B5BP46|YP52_SCHPO Q6CBS2 Putative glutathione S-transferase C1183.02 OS=Schizosaccharomyces pombe (strain 972 / ATCC 24843) GN=SPBC460.02c PE=3 SV=1
sp|B5BP48|YP54_SCHPO Q6C8R4 Putative alpha-ketoglutarate-dependent sulfonate dioxygenase OS=Schizosaccharomyces pombe (strain 972 / ATCC 24843) GN=SPBC460.04c PE=3 SV=1
  • Further reformat the output so that query name goes first (e.g. Q6CBS2), followed by target name (e.g. sp|B5BP46|YP52_SCHPO), followed by the rest of the text, but remove all text after OS=
  • Sort by query name
  • Submit file best.txt with the result
  • The output should start as follows:
B5FVA8  tr|Q5A7D5|Q5A7D5_CANAL  Lysophospholipase
B5FVB0  sp|O74810|UBC1_SCHPO    Ubiquitin-conjugating enzyme E2 1
B5FVB1  sp|O13877|RPAB5_SCHPO   DNA-directed RNA polymerases I, II, and III subunit RPABC5

Note:

  • not all Y.lip. proteins are necessarily included in your final output (some proteins have no blast match)
    • you can think about how to find the list of such proteins, but this is not part of the assignment
  • files best.txt and best.tsv should, however, have the same number of lines

L4

Job Scheduling

  • Some computing jobs take a lot of time: hours, days, weeks,...
  • We do not want to keep a command-line window open the whole time; therefore we run such jobs in the background
  • Simple commands to do it in Linux: batch, at, screen
  • These commands run jobs immediately (screen), at a preset time (at), or when the computer becomes idle (batch)
  • Now we will concentrate on Sun Grid Engine, a complex software for managing many jobs from many users on a cluster from multiple computers
  • Basic workflow:
    • Submit a job (command) to a queue
    • The job waits in the queue until resources (memory, CPUs, etc.) become available on some computer
    • Then the job runs on the computer
    • Output of the job stored in files
    • User can monitor the status of the job (waiting, running)
  • Complex possibilities for assigning priorities and deadlines to jobs, managing multiple queues etc.
  • Ideally all computers in the cluster share the same environment and filesystem
  • We have a simple training cluster for this exercise:
    • You submit jobs to queue on vyuka
    • They will run on computer cpu02 with 8 CPUs
    • This cluster is only temporarily available until next Wednesday

Submitting a job (qsub)

  • qsub -b y -cwd 'command < input > output 2> error'
    • quoting around the command allows us to include special characters, such as <, >, etc., without applying them to the qsub command itself
    • -b y treats the command as a binary, usually preferable for both binary programs and scripts
    • -cwd executes the command in the current directory
    • -N name allows you to set the name of the job
    • -l resource=value requests some non-default resources
    • for example, we can use -l threads=2 to request 2 threads for parallel programs
    • Grid engine will not check whether you use more CPUs or memory than requested; be considerate (and perhaps occasionally watch your jobs by running top on the computer where they execute)
  • qsub will create files for stdout and stderr, e.g. s2.o27 and s2.e27 for the job with name s2 and jobid 27

Monitoring and deleting jobs (qstat, qdel)

  • qstat displays jobs of the current user
job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
     28 0.50000 s3         bbrejova     r     03/15/2016 22:12:18 main.q@cpu02.compbio.fmph.unib     1
     29 0.00000 s3         bbrejova     qw    03/15/2016 22:14:08                                    1
  • qstat -u '*' displays jobs of all users
    • finished jobs disappear from the list
  • qstat -F threads shows how many threads are available
queuename                      qtype resv/used/tot. load_avg arch          states
---------------------------------------------------------------------------------
main.q@cpu02.compbio.fmph.unib BIP   0/2/8          0.03     lx26-amd64
        hc:threads=0
     28 0.75000 s3         bbrejova     r     03/15/2016 22:12:18     1
     29 0.25000 s3         bbrejova     r     03/15/2016 22:14:18     1
  • Command qdel allows you to delete a job (waiting or running)

Interactive work on the cluster (qrsh), screen

  • qrsh creates a job which is a normal interactive shell running on the cluster
  • in this shell you can then manually run commands
  • however, when you close the shell, the job finishes
  • therefore it is a good idea to run qrsh within screen
    • run screen command, this creates a new shell
    • within this shell, run qrsh, then whatever commands
    • by pressing Ctrl-a d you "detach" the screen, so that both shells (local and qrsh) continue running but you can close your local window
    • later by running screen -r you get back to your shells

Running many small jobs

For example, you have tens of thousands of genes and need to run some computation for each gene

  • Have a script which iterates through all genes and processes them sequentially (as in HW02).
    • Problems: does not use parallelism, needs more programming to restart after an interruption
  • Submit processing of each gene as a separate job to the cluster (submitting done by a script/one-liner).
    • Jobs can run in parallel on many different computers
    • Problem: the queue gets very long; it is hard to monitor progress and to resubmit only the unfinished jobs after some failure.
  • Array jobs in qsub (option -t): run sub-jobs numbered 1,2,3,...; the number of the sub-job is stored in an environment variable and used by the script to decide which gene to process
    • Queue contains only running sub-jobs plus one line for the remaining part of the array job.
    • After failure, you can resubmit only unfinished portion of the interval (e.g. start from job 173).
  • Next: using make in which you specify how to process each gene and submit a single make command to the queue
    • Make can execute multiple tasks in parallel using several threads on the same computer (qsub array jobs can run tasks on multiple computers)
    • It will automatically skip tasks which are already finished
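For illustration, the script behind an array job might look like this sketch. The names gene_list.txt and gene_task.sh are hypothetical; Sun Grid Engine sets SGE_TASK_ID for each sub-job:

```shell
# gene_task.sh: process the one gene chosen by the sub-job number.
# SGE sets SGE_TASK_ID to 1, 2, 3, ... for the individual sub-jobs.
GENE=$(sed -n "${SGE_TASK_ID}p" gene_list.txt)
echo "processing gene $GENE"
# submit e.g. as: qsub -b y -cwd -t 1-1000 ./gene_task.sh
```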

Make

  • Make is a system for automatically building programs (running the compiler, linker, etc.)
  • Rules for compilation are written in a Makefile
  • Rather complex syntax with many features, we will only cover basics

Rules

  • The main part of a Makefile are rules specifying how to generate target files from some source files (prerequisites).
  • For example, the following rule generates target.txt by concatenating source1.txt and source2.txt:
target.txt : source1.txt source2.txt
      cat source1.txt source2.txt > target.txt
  • The first line names the target and its prerequisites and starts in the first column
  • The following lines list commands to execute to create the target
  • Each line with a command starts with a tab character
  • If we now have a directory with this rule in Makefile and files source1.txt and source2.txt, running make target.txt will run the cat command
  • However, if target.txt already exists, the command is run only if one of the prerequisites has a more recent modification time than the target
  • This allows us to restart interrupted computations or rerun the necessary parts after some input files are modified
  • Make automatically chains the rules as necessary:
    • if we run make target.txt and some prerequisite does not exist, make checks whether it can be created by some other rule and runs that rule first
    • In general, it first finds all necessary steps and runs them in topological order, so that each rule has its prerequisites ready
    • Running make -n target will show you what commands would be executed to build target (a dry run) - a good idea before running something potentially dangerous

Pattern rules

  • We can specify a general rule for files with a systematic naming scheme. For example, to create a .pdf file from a .tex file, we use pdflatex command:
%.pdf : %.tex
      pdflatex $^
  • In the first line, % denotes some variable part of the filename, which has to agree in the target and all prerequisites
  • In commands, we can use several variables:
    • $^ contains the names of the prerequisites (sources)
    • $@ contains the name of the target
    • $* contains the string matched by %
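A quick way to see these variables in action (a sketch; the Makefile is written with printf so that the recipe line starts with a real tab character):

```shell
# A pattern rule whose recipe just reports the automatic variables.
printf '%%.out : %%.in\n\techo "target=$@ sources=$^ stem=$*" > $@\n' > Makefile
echo hi > a.in
make a.out
cat a.out   # target=a.out sources=a.in stem=a
```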

Other useful tricks in Makefiles

Variables:

  • Store reusable values in variables, then use them several times in the Makefile:
PATH := /projects/trees/bin

target : source
       $(PATH)/script < $^ > $@

The following Makefile automatically creates .png version of each .eps file simply by running make:

EPS := $(wildcard *.eps)
EPSPNG := $(patsubst %.eps,%.png,$(EPS))

all:  $(EPSPNG)

clean:
        rm $(EPSPNG)

%.png : %.eps
        convert -density 250 $^ $@
  • variable EPS contains the names of all files matching *.eps
  • variable EPSPNG contains the desired names of the png files
  • all is a "phony target" which is not really created
    • its rule has no commands, but all png files are its prerequisites, so they are built first
    • the first target in the Makefile (in this case all) is the default when no other target is specified on the command line
  • clean is also a phony target, for deleting the generated png files

Two useful special built-in target names (include these lines in your Makefile if desired)

.SECONDARY:
# prevents deletion of intermediate targets in chained rules

.DELETE_ON_ERROR:
# delete targets if a rule fails

Parallel make

  • running make with option -j 4 will run up to 4 commands in parallel if their dependencies are already finished
  • easy parallelization on a single computer

Snakemake

  • Relatively small open-source project https://bitbucket.org/snakemake/snakemake/wiki/Home
  • Köster, Johannes and Rahmann, Sven. "Snakemake - A scalable bioinformatics workflow engine". Bioinformatics 2012.
  • Create workflows similar to Makefiles
  • Workflows can contain shell commands or Python code
  • Big advantage compared to Make: pattern rules may contain multiple variable portions (in make, only one % per filename)
    • For example, you have several fasta files and several HMMs representing protein families, and you want to run each HMM on each fasta file:
rule HMMER:
     input: "{fasta}.fasta", "{hmm}.hmm"
     output: "{fasta}_{hmm}.hmmer"
     shell: "hmmsearch --domE 1e-5 --noali --domtblout {output} {input[1]} {input[0]}"

HW04

See also Lecture 2, #HW02

In this homework, we will return to the example in homework 2, where we took genes from several organisms, found orthogroups of corresponding genes and built a phylogenetic tree for each orthogroup. This was all done in a single big Perl script. In this homework, we will write a similar pipeline using make and execute it remotely using qsub. We will use proteins instead of DNA and we will use a different set of species. Most of the work is already done, only small modifications are necessary.

  • Submit by copying requested files to /submit/hw04/username/
  • Do not forget to submit protocol, outline of the protocol is in /tasks/hw04/HW04.txt

Task A

  • In this task, you will run a long alignment job (>1 hour)
  • Copy directory /tasks/hw04/large to your home directory
    • ref.fa: all proteins from yeast Yarrowia lipolytica
    • other.fa: all proteins from 8 other yeast species
    • Makefile: run blast on ref.fa vs other.fa (also formats database other.fa before that)
  • run make -n to see what commands will be executed (you should see formatdb and blastall); copy the output to your protocol
  • run qsub with appropriate options to run make (at least -cwd and -b y)
  • then run qstat > queue.txt
    • Submit file queue.txt showing your job waiting or running
  • when your job finishes, also submit the last 100 lines of the output file ref.blast under the name ref-end.blast (use the tool tail -n 100)

Task B

  • In this task, you will finish a Makefile for splitting blast results into orthogroups and building phylogenetic trees for each group
    • This Makefile works with much smaller files, so you can run it many times even on vyuka, without qsub
  • Copy directory /tasks/hw04/small to your home directory
    • ref.fa: 10 proteins from yeast Yarrowia lipolytica
    • other.fa: selected subset of proteins from 8 other yeast species
    • Makefile: longer makefile
      • runs blast as above
      • then splits proteins into orthogroups and creates one directory for each group with file prot.fa containing the protein sequences
      • these steps are done by make ref.brm
      • then by running make phy, alignment prot.phy will be created in each gene directory (after ref.brm is done)
    • brm.pl is a modified part of the Perl script from #HW02 which parses blast output, finds orthogroups and creates a directory with prot.fa for each group

Modify the Makefile to build a phylogenetic tree in each gene directory with two different evolutionary models, WAG and LG (LG is the default):

phyml -i INPUT --datatype aa --bootstrap 0 --no_memory_check >LOG
phyml -i INPUT --model WAG --datatype aa --bootstrap 0 --no_memory_check >LOG
  • Modify INPUT and LOG in the commands to appropriate filenames using make variables $@, $^, $* etc.
  • Write rules similar to the rules for .phy
  • Add variables and targets for creating trees of all genes
  • Note: phyml always writes its output to files named INPUT_phyml_stats.txt and INPUT_phyml_tree.txt. If make runs phyml with the two settings in parallel, they could try to write to the same output file, so either run both phyml commands in the same rule or create a copy of the input under a unique name for each run of phyml.
  • Run your Makefile
  • Submit the whole directory small, including Makefile and all gene directories with tree files.

Further possibilities

Here are some possibilities for further experiments, in case you are interested (do not submit these):

  • You could copy your extended Makefile to directory large and create trees for all orthogroups in the big set
    • This would take a long time, so submit it through qsub, and only after the lecture is over, to allow your classmates to work on task A
    • After ref.brm is done, the programs for individual genes can run in parallel, so you can try running make -j 2 and requesting 2 threads from qsub
  • You can create consensus of all trees with model WAG and all trees with model LG, as in #HW02 and see if there are any differences (this can be done also in the small set)

L5

In this lecture we dive into SQLite3 and Python.

SQLite3

SQLite3 is a simple "database" stored in one file. Think of SQLite not as a replacement for Oracle but as a replacement for fopen(). Documentation: https://www.sqlite.org/docs.html

You can access an sqlite database either from the command line:

usamec@Darth-Labacus-2:~$ sqlite3 db.sqlite3
SQLite version 3.8.2 2013-12-06 14:53:30
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> CREATE TABLE test(id integer primary key, name text);
sqlite> .schema test
CREATE TABLE test(id integer primary key, name text);
sqlite> .exit

Or from the Python interface: https://docs.python.org/2/library/sqlite3.html.
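A minimal sketch of the Python interface (using an in-memory database here so nothing is written to disk; with a file database you would pass a filename instead of ":memory:"):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test(id INTEGER PRIMARY KEY, name TEXT)")
# parametrized insert; "?" placeholders avoid SQL injection and quoting issues
conn.execute("INSERT INTO test(name) VALUES (?)", ("hello",))
conn.commit()  # do not forget to commit when using a file database
for row in conn.execute("SELECT id, name FROM test"):
    print(row)  # (1, 'hello')
conn.close()
```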

Python

Python is a perfect language for almost anything. Here is a cheatsheet: http://www.cogsci.rpi.edu/~destem/igd/python_cheat_sheet.pdf

Scraping webpages

The simplest tool for scraping webpages is urllib2: https://docs.python.org/2/library/urllib2.html Example usage:

import urllib2
f = urllib2.urlopen('http://www.python.org/')
print f.read()

Parsing webpages

We use beautifulsoup4 for parsing html (http://www.crummy.com/software/BeautifulSoup/bs4/doc/). I recommend following examples at the beginning of the documentation and example about CSS selectors: http://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors

Parsing dates

You have two options. Either use datetime.strptime or use dateutil package (https://dateutil.readthedocs.org/en/latest/parser.html).
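For example, with datetime.strptime (the date string and format below are just an illustration; adapt the format string to the actual dates on the pages you scrape):

```python
from datetime import datetime

# parse "day. month. year hour:minute"
d = datetime.strptime("12. 3. 2016 18:05", "%d. %m. %Y %H:%M")
print(d.year, d.month, d.day)  # 2016 3 12
```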

Other useful tips

  • Don't forget to commit to your sqlite3 database (db.commit()).
  • CREATE TABLE IF NOT EXISTS can be useful at the start of your script.
  • Inspect element (right click on an element) in Chrome can be very helpful.
  • Use the screen command for long-running scripts.
  • All packages are installed on the vyuka server. If you are planning to use your own laptop, you need to install them using pip (preferably in a virtualenv).

HW05

  • Submit by copying requested files to /submit/hw05/username/

General goal: Scrape comments of several hundred sme.sk users from the last month and store them in an SQLite3 database.

Task A

Create an SQLite3 "database" with an appropriate schema for storing comments from SME.sk discussions. You will probably need tables for users and comments. You don't need to store which comment replies to which.

Submit two files:

  • db.sqlite3 - the database
  • schema.txt - brief description of your schema and rationale behind it

Task B

Build a crawler which crawls comments in sme.sk discussions. You have two options:

  • For fewer points: a script which gets the url of a user (http://ekonomika.sme.sk/diskusie/user_profile.php?id_user=157432) and crawls their comments from the last month.
  • For more points: a script which gets one starting url (either a user profile or some discussion, your choice) and automatically discovers users and crawls their comments.

This crawler should store comments in the SQLite3 database built in the previous task. Submit the following:

  • db.sqlite3 - the database
  • every python script used for crawling
  • README (how to start your crawler)

L6

In this lecture we will use Flask and simple text processing utilities from ScikitLearn.

Flask

Flask is a simple web framework for Python (http://flask.pocoo.org/docs/0.10/quickstart/#a-minimal-application). You can find a sample flask application at /tasks/hw06/simple_flask. Before running it, change the port number. You can then access your app at vyuka.compbio.fmph.uniba.sk:4247 (change the port number accordingly).

There may be a problem with access to unusual port numbers due to firewall rules. There are at least two ways to circumvent this:

  • Use X forwarding and run web browser directly from vyuka
local_machine> ssh vyuka.compbio.fmph.uniba.sk -XC
vyuka> chromium-browser
  • Create a SOCKS proxy to vyuka.compbio.fmph.uniba.sk and set a SOCKS proxy at that port on your local machine. Then all web traffic goes through vyuka.compbio.fmph.uniba.sk via an ssh tunnel. To create a SOCKS proxy on local machine port 8000 to vyuka.compbio.fmph.uniba.sk:
local_machine> ssh vyuka.compbio.fmph.uniba.sk -D 8000

(keep ssh session open while working)

Flask uses the jinja2 (http://jinja.pocoo.org/docs/dev/templates/) templating language for producing html (you could build html strings in python directly, but it is painful).
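A tiny standalone jinja2 example (a sketch, independent of Flask; in a Flask app you would normally put the template in a file and call render_template instead):

```python
from jinja2 import Template

# render a Python list into an HTML list
t = Template("<ul>{% for u in users %}<li>{{ u }}</li>{% endfor %}</ul>")
print(t.render(users=["anna", "bob"]))  # <ul><li>anna</li><li>bob</li></ul>
```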

Processing text

The main tool for processing text is the CountVectorizer class from ScikitLearn (http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html). It transforms text into a bag of words (for each word we get its count). Example:

from sklearn.feature_extraction.text import CountVectorizer

vec = CountVectorizer(strip_accents='unicode')

texts = [
 "Ema ma mamu.",
 "Zirafa sa vo vani kupe a hneva sa."
]

t = vec.fit_transform(texts).todense()

print(t)

print(vec.vocabulary_)

Useful things

We are working with numpy arrays here (that's array t in the example above). Numpy arrays have lots of nice tricks. First let's create two matrices:

>>> import numpy as np
>>> a = np.array([[1,2,3],[4,5,6]])
>>> b = np.array([[7,8],[9,10],[11,12]])
>>> a
array([[1, 2, 3],
       [4, 5, 6]])
>>> b
array([[ 7,  8],
       [ 9, 10],
       [11, 12]])

We can sum these matrices or multiply them by some number:

>>> 3 * a
array([[ 3,  6,  9],
       [12, 15, 18]])
>>> a + 3 * a
array([[ 4,  8, 12],
       [16, 20, 24]])

We can calculate the sum of all elements in a matrix, or sums along a given axis:

>>> np.sum(a)
21
>>> np.sum(a, axis=1)
array([ 6, 15])
>>> np.sum(a, axis=0)
array([5, 7, 9])

There are a lot of other useful functions; check https://docs.scipy.org/doc/numpy-dev/user/quickstart.html.

This can help you get top words for each user: http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html#numpy.argsort
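For instance, np.argsort can pick out the indices of the largest counts (a sketch; the counts vector is made up for illustration):

```python
import numpy as np

counts = np.array([5, 1, 9, 3, 7])      # e.g. word counts for one user
top3 = np.argsort(counts)[::-1][:3]     # indices of the three largest values
print(top3)  # [2 4 0]
```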

HW06

  • Submit by copying requested files to /submit/hw06/username/

General goal: Build a simple website which lists all crawled users and has a page with simple statistics for each user.

This lesson requires crawled data from the previous lesson; if you don't have your own, you can find it at /tasks/hw06/db.sqlite3 (and thank Baska).

Submit source code (web server and preprocessing scripts) and database files.

Task A

Create a simple flask web application which:

  • Has a homepage with a list of all users (with links to their pages).
  • Has a page for each user with simple information about the user: their nickname, number of posts, and their last 10 posts.

Task B

For each user, preprocess and store a list of their top 10 words and a list of the top 10 words typical for them (words they use much more often than other users do; come up with some simple heuristic). Show this information on their page.

Task C

Preprocess and store a list of the top three most similar users for each user (try to come up with some simple definition of similarity based on the text of posts). Again, show this information on the user page.

Bonus: Try to use some simple topic modeling (e.g. PCA as in TruncatedSVD from scikit-learn) and use it for finding similar users.

L7

In this lesson we make simple javascript visualizations.

Your goal is to take examples from here https://developers.google.com/chart/interactive/docs/ and tweak them for your purposes.

Tips:

  • You can output your data as javascript data structures in a Flask template. It is bad practice, but sufficient for this lesson. (A better way is to load JSON through an API.)
  • Remember that you have to bypass the firewall.

HW07

  • Submit by copying requested files to /submit/hw07/username/

General goal: Extend user pages from previous project with simple visualizations.

Task A

Show a calendar which shows on which days the user was active (like this: https://developers.google.com/chart/interactive/docs/gallery/calendar#overview).

Task B

Show a histogram of comment lengths (like this: https://developers.google.com/chart/interactive/docs/gallery/histogram#example).

Task C

Try showing a word tree for a user (https://developers.google.com/chart/interactive/docs/gallery/wordtree#overview). Try to normalize the text (lowercase, remove accents). CountVectorizer has a method build_analyzer which returns a function that does this for you.

L8

#HW08

Program for today: basics of R (applied to biology examples)

  • very short intro as a lecture
  • tutorial as HW: read a bit of text, try some commands, extend/modify them as requested

In this course we cover several languages popular for scripting in bioinformatics: Perl, Python, R

  • their capabilities overlap; many extensions emulate the strengths of one in another
  • choose a language based on your preference, level of knowledge, existing code for the task, and the rest of the team
  • quickly learn a new language if needed
  • also possibly combine, e.g. preprocess data in Perl or Python, then run statistical analyses in R, automate entire pipeline with bash or make

Introduction

  • R is an open-source system for statistical computing and data visualization
  • Programming language, command-line interface
  • Many built-in functions, additional libraries
  • We will concentrate on useful commands rather than language features

Working in R

  • Run command R, type commands in command-line interface
    • supports history of commands (arrows, up and down, Ctrl-R) and completing command names with tab key
> 1+2
[1] 3
  • Write a script to file, run it from command-line: R --vanilla --slave < file.R
  • Use rstudio to open a graphical IDE [14]
    • Windows with editor of R scripts, console, variables, plots
    • Ctrl-Enter in editor executes current command in console
x=c(1:10)
plot(x,x*x)
  • ? plot displays help for plot command

Suggested workflow

  • work interactively in Rstudio or on command line, try various options
  • select useful commands, store in a script
  • run script automatically on new data/new versions, potentially as a part of a bigger pipeline

Additional information

Gene expression data

  • Gene expression: DNA->mRNA->protein
  • Level of gene expression: Extract mRNA from a cell, measure amounts of mRNA
  • Technologies: microarray, RNA-seq

Gene expression data

  • Rows: genes
  • Columns: experiments (e.g. different conditions or different individuals)
  • Each value is expression of a gene, i.e. relative amount of mRNA for this gene in the sample

We will use microarray data for yeast:

  • Strassburg, Katrin, et al. "Dynamic transcriptional and metabolic responses in yeast adapting to temperature stress." Omics: a journal of integrative biology 14.3 (2010): 249-259. [15]
  • Downloaded from GEO database [16]
  • Data already preprocessed: normalization, log2, etc
  • We have selected only cold conditions, genes with absolute change at least 1
  • Data: 2738 genes, 8 experiments in a time series, yeast moved from normal temperature 28 degrees C to cold conditions 10 degrees C, samples taken after 0min, 15min, 30min, 1h, 2h, 4h, 8h, 24h in cold

HW08

#L8

In this homework, read the text, execute the given commands, and potentially try some small modifications.

  • Then do tasks A-D, submit required files (3x .png)
  • In your protocol, enter commands used in tasks A-D, with explanatory comments in more complicated situations
  • In task B also enter required output to protocol

First steps

  • Type a command, R writes the answer, e.g.:
> 1+2
[1] 3
  • We can store values in variables and use them later on
> # The size of the sequenced portion of cow's genome, in millions of base pairs
> Cow_genome_size <- 2290
> Cow_genome_size
[1] 2290
> Cow_chromosome_pairs <- 30
> Cow_avg_chrom <- Cow_genome_size / Cow_chromosome_pairs
> Cow_avg_chrom
[1] 76.33333

Surprises:

  • dots are used as parts of identifiers, e.g. read.table is the name of a single function (not a method of an object called read)
  • assignment via <- or =
    • careful: a<-3 is an assignment, a < -3 is a comparison

Vectors, basic plots

  • Vector is a sequence of values of the same type (all are numbers or all are strings or all are booleans)
# Vector can be created from a list of numbers by function c
a<-c(1,2,4)
a
# prints [1] 1 2 4

# function c also concatenates vectors
c(a,a)
# prints [1] 1 2 4 1 2 4

# Vector of two strings 
b<-c("hello", "world")

# Create a vector of numbers 1..10
x<-1:10
x
# prints [1]  1  2  3  4  5  6  7  8  9 10

Vector arithmetics

  • Operations applied to each member of the vector
x<-1:10
# Square each number in vector x
x*x
# prints [1]   1   4   9  16  25  36  49  64  81 100

# New vector y: logarithm from a number in x squared
y<-log(x*x)
y
# prints [1] 0.000000 1.386294 2.197225 2.772589 3.218876 3.583519 3.891820 4.158883
# [9] 4.394449 4.605170

# Draw graph of function log(x*x) for x=1..10
plot(x,y)
# The same graph but use lines instead of dots
plot(x,y,type="l")

# Addressing elements of a vector: positions start at 1
# Second element of the vector 
y[2]
# prints [1] 1.386294

# Which elements of the vector satisfy certain condition? (vector of logical values)
y>3
# prints [1] FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE

# write only those elements from y that satisfy the condition
y[y>3]
# prints [1] 3.218876 3.583519 3.891820 4.158883 4.394449 4.605170

# we can also write values of x such that values of y satisfy the condition...
x[y>3]
# prints [1]  5  6  7  8  9 10

Task A

  • Create a plot of the binary logarithm with dots in the graph more densely spaced (from 0.1 to 10 with step 0.1)
  • Store it in file log.png and submit this file
  • Hints:
    • Create x and y by vector arithmetics
    • To compute binary logarithm check help ? log
    • Before running plot, use command png("log.png") to store the result, afterwards call dev.off() to close the file (in rstudio you can also export plots manually)

Data frames and simple statistics

  • Data frame: a table similar to a spreadsheet; each column is a vector, all of the same length
  • We will use a table with the following columns:
    • The size of the sequenced portion of a genome, in millions of base pairs
    • Number of chromosome pairs
    • GC content
    • taxonomic group mammal or fish
  • Stored in a text file, with columns separated by tabs.
  • Data: Han et al Genome Biology 2008 [17]
Species    Size    Chrom   GC      Group
Human      2850    23      40.9    mammal
Chimpanzee 2750    24      40.7    mammal
Macaque    2650    21      40.7    mammal
Mouse      2480    20      41.7    mammal
...
Tetraodon   187    21      45.9    fish
...
# reading a frame from file
a<-read.table("/tasks/hw08/genomes.csv", header = TRUE, sep = "\t");
# the column named Size
a$Size

# Average chromosome length: divide size by the number of chromosomes
a$Size/a$Chrom

# Add average chromosome length as a new column to frame a
a<-cbind(a,AvgChrom=a$Size/a$Chrom)

# scatter plot average chromosome length vs GC content
plot(a$AvgChrom, a$GC)

# compactly display the structure of a
# (good for checking that import worked etc)
str(a)

# display mean, median, etc. of each column
summary(a);

# average genome size
mean(a$Size)
# average genome size for mammals
mean(a$Size[a$Group=="mammal"])
# Standard deviation
sd(a$Size)

# Histogram of genome sizes
hist(a$Size)

Task B

  • Divide frame a to two frames, one for mammals, one for fish. Hint:
    • Try command a[c(1,2,3),]. What is it doing?
    • Try command a$Group=="mammal".
    • Combine these two commands to get rows for all mammals and store the frame in a new variable, then repeat for fish
    • Use a general approach which does not depend on the exact number and ordering of rows in the table.
  • Run the command summary separately for mammals and for fish. Which of their characteristics are different?
    • Write output and your conclusion to the protocol

Task C

  • Draw a graph comparing genome size vs GC content; use different colors for points representing mammals and fish
    • Submit the plot in file genomes.png
    • To draw the graph, you can use one of the options below, or find yet another way
    • Option 1: first draw mammals with one color, then add fish in another color
      • Color of points can be changed by: plot(1:10,1:10, col="red")
      • After plot command you can add more points to the same graph by command points, which can be used similarly as plot
      • Warning: command points does not change the ranges of x and y axes. You have to set these manually so that points from both groups are visible. You can do this using options xlim and ylim, e.g. plot(x,y, col="red", xlim=c(1,100), ylim=c(1,100))
    • Option 2: plot both mammals and fish in one plot command, and give it a vector of colors, one for each point
      • plot(1:10,1:10,col=c(rep("red",5),rep("blue",5))) will plot the first 5 points red and the last 5 points blue
  • Bonus task: add a legend to the plot, showing which color is mammal and which is fish

Expression data and clustering

The data here is bigger, so it is better to use plain R rather than rstudio (the server has limited CPU/memory)

# Read gene expression data table
a<-read.table("/tasks/hw08/microarray.csv", header = TRUE, sep = "\t", row.names=1)
# Visual check of the first row
a[1,]
# plot starting point vs. situation after 1 hour
plot(a$cold_0min,a$cold_1h)
# to better see density in dense clouds of points, use this plot
smoothScatter(a$cold_15min,a$cold_1h)
# outliers away from the diagonal in the plot above are the most strongly differentially expressed genes
# these are easier to see in an MA plot:
# x-axis: average expression in the two conditions
# y-axis: difference between values (they are log-scale, so difference 1 means 2-fold)
smoothScatter((a$cold_15min+a$cold_1h)/2,a$cold_15min-a$cold_1h)
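The M and A coordinates are just the difference and the average of two log values; a tiny plain-Python illustration of why a difference of 1 corresponds to a 2-fold change (assuming the table is on a log2 scale; the values 100 and 200 are made up for this sketch):

```python
import math

# hypothetical raw expression values of one gene in two conditions
e1, e2 = 100.0, 200.0
x, y = math.log2(e1), math.log2(e2)

m = x - y        # M (y-axis of the MA plot): -1.0, i.e. 2-fold lower in condition 1
a = (x + y) / 2  # A (x-axis): average log-expression
print(m, a)
```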

Clustering is a wide group of methods that split data points into groups with similar properties

  • We will group together genes that have a similar reaction to cold, i.e. their rows in gene expression data matrix have similar values

We will consider two simple clustering methods

  • K-means clustering splits points (genes) into k clusters, where k is a parameter given by the user. It finds a center of each cluster and tries to minimize the sum of distances from individual points to the center of their cluster. Note that this algorithm is randomized, so you will get different clusters each time.
  • Hierarchical clustering puts all data points (genes) to a hierarchy so that smallest subtrees of the hierarchy are the most closely related groups of points and these are connected to bigger and more loosely related groups.
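The k-means idea above can be sketched in a few lines of plain Python (a toy one-dimensional version of Lloyd's algorithm with user-supplied starting centers; R's kmeans picks random starting centers, which is why its clusters differ between runs):

```python
def kmeans_1d(points, centers, iterations=10):
    """Toy 1-D k-means: repeatedly assign each point to the nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        # empty clusters keep their old center
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    labels = [min(range(len(centers)), key=lambda j: abs(p - centers[j]))
              for p in points]
    return centers, labels

centers, labels = kmeans_1d([1, 2, 3, 10, 11, 12], [0.0, 5.0])
print(centers, labels)  # [2.0, 11.0] [0, 0, 0, 1, 1, 1]
```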
Example of a heatmap
# Heatmap: creates hierarchical clustering of rows 
# then shows every value in the table using color ranging from red (lowest) to white (highest)
# Computation may take some time
heatmap(as.matrix(a),Colv=NA)
# Previous heatmap normalized each row, the next one uses data as they are:
heatmap(as.matrix(a),Colv=NA,scale="none")
# k means clustering to 7 clusters
k=7
cl <- kmeans(a,k)
# each gene is assigned a cluster (number between 1 and k)
cl$cluster
# draw only cluster number 3 out of k
heatmap(as.matrix(a[cl$cluster==3,]),Rowv=NA, Colv=NA)

# reorder genes in the table according to cluster
heatmap(as.matrix(a[order(cl$cluster),]),Rowv=NA, Colv=NA)

# compare overall column means with column means in cluster 3
# function apply applies mean to every column (or to every row if 2 is changed to 1)
apply(a,2,mean)
# now means within cluster
apply(a[cl$cluster==3,],2,mean)

# clusters have centers which are also computed as means
# so this is the same as previous command
cl$centers[3,]

Task D

Example of a required plot
  • Draw a plot in which x-axis is time and y-axis is expression level and center of each cluster is shown as a line
    • use command matplot(x,y,type="l") which gets two matrices x and y and plots columns of one vs the other
    • matplot(,y,type="l") will use numbers 1,2,3... as columns of the missing matrix x
    • create y from cl$centers by applying function t (transpose)
    • to create an appropriate matrix x, create a vector of times for individual experiments in minutes or hours (do it manually, no need to parse column names automatically)
    • using functions rep and matrix you can create a matrix x in which this vector is used as every column
    • then run matplot(x,y,type="l")
    • since time points are not evenly spaced, it would be better to use logscale: matplot(x,y,type="l",log="x")
    • to avoid log(0), change the first timepoint from 0min to 1min
  • Submit file clusters.png

L9

#HW09

The topic of this lecture is statistical tests in R.

  • Beginners in statistics: listen to lecture, then do tasks A, B, C
  • If you know basics of statistical tests, do tasks B, C, D

Introduction to statistical tests: sign test

  • Two friends A and B have played their favourite game n=10 times, A has won 6 times and B has won 4 times.
  • A claims that he is a better player, B claims that such a result could easily happen by chance if they were equally good players.
  • Player B's hypothesis is called the null hypothesis: the pattern we see (A won more often than B) is simply a result of chance
  • Null hypothesis reformulated: we toss a coin n times and compute the value X, the number of times we see heads. The tosses are independent and each toss comes up heads with probability 1/2
  • Similar situation: comparing programs A and B on several inputs, counting how many times is program A better than B.
# simulation in R: generate 10 pseudorandom bits
# (1=player A won)
sample(c(0,1), 10, replace = TRUE)
# result e.g. 0 0 0 0 1 0 1 1 0 0

# directly compute random variable X, i.e. sum of bits
sum(sample(c(0,1), 10, replace = TRUE))
# result e.g. 5

# we define a function which will repeat m times
# the coin tossing experiment with n tosses
# and return a vector with m values of random variable X
experiment <- function(m, n) {
  x = rep(0, m)     # create vector with m zeroes
  for(i in 1:m) {   # for loop through m experiments
    x[i] = sum(sample(c(0,1), n, replace = TRUE)) 
  }
  return(x)         # return array of values     
}
# call the function for m=20 experiments, each with n tosses
experiment(20,10)
# result e.g.  4 5 3 6 2 3 5 5 3 4 5 5 6 6 6 5 6 6 6 4
# draw histograms for 20 experiments and 1000 experiments
png("hist10.png")  # open png file
par(mfrow=c(2,1))  # matrix of plots with 2 rows and 1 column
hist(experiment(20,10))
hist(experiment(1000,10))
dev.off() # finish writing to file
  • It is easy to realize that X follows the binomial distribution
  • The p-value of the test is the probability that, simply by chance, we would get a value the same or more extreme than in our data.
  • In other words, what is the probability that in 10 tosses we see heads 6 times or more (one-sided test)
  • If the p-value is very small, say smaller than 0.01, we reject the null hypothesis and assume that player A is in fact better than B
# computing the probability that we get exactly 6 heads in 10 tosses
dbinom(6,10,0.5) # result 0.2050781
# the same value from the binomial coefficient formula choose(10,6)/2^10:
7*8*9*10/(2*3*4*(2^10)) # result 0.2050781

# entire probability distribution: probabilities 0..10 heads in 10 tosses
dbinom(0:10,10,0.5)
# [1] 0.0009765625 0.0097656250 0.0439453125 0.1171875000 0.2050781250
# [6] 0.2460937500 0.2050781250 0.1171875000 0.0439453125 0.0097656250
# [11] 0.0009765625

#we can also plot the distribution
plot(0:10,dbinom(0:10,10,0.5))
barplot(dbinom(0:10,10,0.5))

#our p-value is the sum for 6,7,8,9,10
sum(dbinom(6:10,10,0.5))
# result: 0.3769531
# so results this "extreme" are not rare by chance,
# they happen in about 38% of cases

# R can compute the sum for us using pbinom 
# this considers all values greater than 5
pbinom(5, 10, 0.5, lower.tail=FALSE)
# result again 0.3769531

# if probability too small, use log of it
pbinom(9999, 10000, 0.5, lower.tail=FALSE, log.p = TRUE)
# [1] -6931.472
# the probability of getting heads 10000 times is exp(-6931.472) = 2^{-10000}

# generating numbers from binomial distribution 
# - similarly to our function experiment
rbinom(20, 10, 0.5)
# [1] 4 4 8 2 6 6 3 5 5 5 5 6 6 2 7 6 4 6 6 5

# running the test
binom.test(6, 10, p = 0.5, alternative="greater")
#
#        Exact binomial test
#
# data:  6 and 10
# number of successes = 6, number of trials = 10, p-value = 0.377
# alternative hypothesis: true probability of success is greater than 0.5
# 95 percent confidence interval:
# 0.3035372 1.0000000
# sample estimates:
# probability of success
#                   0.6

# to only get p-value run
binom.test(6, 10, p = 0.5, alternative="greater")$p.value
# result 0.3769531
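The same one-sided p-value can also be computed from first principles using binomial coefficients; a small sketch in plain Python (standard library only):

```python
from math import comb

def sign_test_pvalue(k, n):
    """One-sided sign test: probability of seeing k or more heads in n fair tosses."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(sign_test_pvalue(6, 10))  # 0.376953125, matching binom.test above
```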

Comparing two sets of values: Welch's t-test

  • Let us now consider two sets of values drawn from two normal distributions with unknown means and variances
  • The null hypothesis of Welch's t-test is that the two distributions have equal means
  • The test computes the test statistic (in R, for vectors x1, x2):
    • (mean(x1)-mean(x2))/sqrt(var(x1)/length(x1)+var(x2)/length(x2))
  • This test statistic approximately follows Student's t-distribution with the number of degrees of freedom given by
n1=length(x1)
n2=length(x2)
(var(x1)/n1+var(x2)/n2)**2/(var(x1)**2/((n1-1)*n1*n1)+var(x2)**2/((n2-1)*n2*n2))
  • Luckily R will compute the test for us simply by calling t.test
x1 = rnorm(6, 2, 1)
# 2.70110750  3.45304366 -0.02696629  2.86020145  2.37496993  2.27073550

x2 = rnorm(4, 3, 0.5)
# 3.258643 3.731206 2.868478 2.239788
t.test(x1,x2)
# t = -1.2898, df = 7.774, p-value = 0.2341
# alternative hypothesis: true difference in means is not equal to 0
# means 2.272182  3.024529

x2 = rnorm(4, 5, 0.5)
# 4.882395 4.423485 4.646700 4.515626
t.test(x1,x2)
# t = -4.684, df = 5.405, p-value = 0.004435
# means 2.272182  4.617051

# to get only p-value, run 
t.test(x1,x2)$p.value
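The test statistic and degrees-of-freedom formulas above translate directly into plain Python; a sketch using only the standard library (the function name welch_statistic is made up for this illustration):

```python
from statistics import mean, variance  # variance() is the sample variance, like var() in R

def welch_statistic(x1, x2):
    """Welch's t statistic and the Welch-Satterthwaite degrees of freedom."""
    n1, n2 = len(x1), len(x2)
    s1, s2 = variance(x1) / n1, variance(x2) / n2
    t = (mean(x1) - mean(x2)) / (s1 + s2) ** 0.5
    df = (s1 + s2) ** 2 / (s1 ** 2 / (n1 - 1) + s2 ** 2 / (n2 - 1))
    return t, df
```

The p-value then comes from the tail of Student's t-distribution with df degrees of freedom, which t.test computes for us.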

We will apply Welch's t-test to microarray data

  • Data from GEO database [19], publication [20]
  • Abbott et al 2007: Generic and specific transcriptional responses to different weak organic acids in anaerobic chemostat cultures of Saccharomyces cerevisiae
  • gene expression measurements under 5 conditions:
    • reference: yeast grown in normal environment
    • 4 different acids added so that cells grow 50% slower (acetic, propionic, sorbic, benzoic)
  • from each condition (reference and each acid) we have 3 replicates
  • together our table has 15 columns (3 replicates from 5 conditions)
  • 6398 rows (genes)
  • We will test the statistical difference between the reference condition and one of the acids (3 numbers vs. another 3 numbers)
  • See Task B in #HW09

Multiple testing correction

  • When we run t-tests of the reference vs. acetic acid on all 6398 genes, we get 118 genes with p-value<=0.01
  • For a single gene, a p-value this small would occur purely by chance in 1% of cases (by the definition of the p-value)
  • So purely by chance we would expect about 6398 * 0.01 = 64 genes with p-value<=0.01
  • So perhaps roughly half of our detected genes (maybe less, maybe more) are false positives
  • Sometimes false positives may even overwhelm results
  • Multiple testing correction tries to limit the number of false positives among results of multiple statistical tests
  • Many different methods
  • The simplest one is Bonferroni correction, where the threshold on p-value is divided by the number of tested genes, so instead of 0.01 we use 0.01/6398 = 1.56e-6
  • This way the expected overall number of false positives in the whole set is 0.01 and so the probability of getting even a single false positive is also at most 0.01 (by Markov inequality)
  • We could instead multiply all p-values by the number of tests and apply the original threshold 0.01 - such artificially modified p-values are called corrected
  • After Bonferroni correction, we get only 1 significant gene
# the results of the t-tests are in vector pa of length 6398
# manually multiply p-values by length(pa), count those that have value <=0.01
sum(pa * length(pa) <= 0.01)
# in R you can use p.adjust for multiple testing correction
pa.adjusted = p.adjust(pa, method ="bonferroni")
# this is equivalent to multiplying by the length and using 1 if the result > 1
pa.adjusted = pmin(pa*length(pa),rep(1,length(pa)))

# less conservative multiple testing correction methods exist, e.g. Holm's method
# but in this case we get almost the same results
pa.adjusted2 = p.adjust(pa, method ="holm")
  • Another frequently used correction is FDR (false discovery rate), which is less strict and controls the overall proportion of false positives among the results
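The mechanics of the Bonferroni correction (multiply each p-value by the number of tests, cap at 1) fit in a few lines of plain Python:

```python
def bonferroni(pvalues):
    """Bonferroni-corrected p-values: multiply by the number of tests, cap at 1."""
    m = len(pvalues)
    return [min(p * m, 1.0) for p in pvalues]

# with threshold 0.01, only the first of these three tests stays significant
print(bonferroni([0.001, 0.02, 0.9]))  # roughly [0.003, 0.06, 1.0]
```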

HW09

Lecture 9

  • Do either tasks A,B,C (beginners) or B,C,D (more advanced). If you really want, you can do all four for bonus credit.
  • In your protocol write used R commands with brief comments on your approach.
  • Submit required plots with filenames as specified.
  • For each task also include results as required and a short discussion commenting on the results/plots you have obtained. Is the value of interest increasing or decreasing with some parameter? Are the results as expected or surprising?
  • Outline of protocol is in /tasks/hw09/HW09.txt

Task A: sign test

  • Consider a situation in which players played n games, a fraction q of which were won by A (the example in the lecture corresponds to q=0.6 and n=10)
  • Compute a table of p-values for n=10,20,...,90,100 and for q=0.6, 0.7, 0.8, 0.9
  • Plot the table using matplot (n is x-axis, one line for each value of q)
  • Submit the plot in sign.png
  • Discuss the values you have seen in the plot / table

Outline of the code:

# create vector rows with values 10,20,...,100
rows=(1:10)*10
# create vector columns with required values of q
columns=c(0.6, 0.7, 0.8, 0.9)
# create empty matrix of pvalues 
pvalues = matrix(0,length(rows),length(columns))
# TODO: fill in matrix pvalues using binom.test

# set names of rows and columns
rownames(pvalues)=rows
colnames(pvalues)=columns
# careful: pvalues[10,] is now 10th row, i.e. value for n=100, 
#          pvalues["10",] is the first row, i.e. value for n=10

# create x-axis matrix (as in HW08, part D)
x=matrix(rep(rows,length(columns)),nrow=length(rows))
# matplot command
matplot(x,pvalues,type="l",col=c(1:length(columns)),lty=1)
legend("topright",legend=columns,col=c(1:length(columns)),lty=1)

Task B: Welch's t-test on microarray data

  • Read the table with microarray data into R and transform it to log scale, then work with table a
input=read.table("/tasks/hw09/acids.tsv",header=TRUE,row.names=1)
a = log(input)
  • Columns 1,2,3 are reference, columns 4,5,6 acetic acid, 7,8,9 benzoate, 10,11,12 propionate, and 13,14,15 sorbate
  • Write a function my.test which will take as arguments table a and two lists of columns (e.g. 1:3 and 4:6) and will run, for each row of the table, Welch's t-test of the first set of columns vs. the second set. It will return the resulting vector of p-values
  • For example, by calling pa <- my.test(a,1:3,4:6) we will compute p-values for differences between reference and acetic acid (the computation may take some time)
  • The first 5 values of pa should be
> pa[1:5]
[1] 0.94898907 0.07179619 0.24797684 0.48204100 0.23177496
  • Run the test for all four acids
  • Report how many genes were significant with p-value<=0.01 for each acid
  • How many genes are significant for both acetic and propionic acids? (logical and is written as &)

Task C: multiple testing correction

Run the following snippet of code, which works on the vector of p-values pa obtained for acetate in task B

# adjust the vector of p-values from task B using Bonferroni correction
pa.adjusted = p.adjust(pa, method ="bonferroni")
# add this adjusted vector to frame a
a <-  cbind(a, pa.adjusted)
# create permutation ordered by pa.adjusted
oa = order(pa.adjusted)
# select from the table the five rows with the lowest pa.adjusted (using oa)
# and display columns containing reference, acetate and adjusted p-value
a[oa[1:5],c(1:6,16)]

You should get output like this:

            ref1     ref2     ref3  acetate1   acetate2  acetate3 pa.adjusted
SUL1    7.581312 7.394985 7.412040 2.1633230 2.05412373 1.9169226 0.004793318
YMR244W 2.985682 2.975530 3.054001 0.3364722 0.33647224 0.1823216 0.188582576
DIP5    6.943991 7.147795 7.296955 0.6931472 0.09531018 0.5306283 0.253995075
YLR460C 5.620401 5.801212 5.502482 3.2425924 3.48431229 3.3843903 0.307639012
HXT4    2.821379 3.049273 2.772589 7.7893717 8.24446541 8.3041980 0.573813502

Do the same procedure for the benzoate p-values. Comment on the results for both acids.

Task D: volcano plot, test on data generated from null hypothesis

Draw a volcano plot for the acetate data

  • x-axis of this plot is the difference in the mean of reference and mean of acetate.
    • You can compute row means of a matrix by rowMeans.
  • y-axis is -log10 of the p-value (use original p-values before multiple testing correction)
  • You can quickly see genes which have low p-values (high on y-axis) and also big difference in mean expression between the two conditions (far from 0 on x-axis). You can also see if acetate increases or decreases expression of these genes.

Now create a simulated dataset sharing some features of the real data but obeying the null hypothesis that the means of reference and acetate are the same for each gene

  • Compute vector m of means for columns 1:6 from matrix a
  • Compute vectors sr and sa of standard deviations for reference columns and for acetate columns respectively
    • You can compute standard deviation for each row of a matrix by apply(some.matrix, 1, sd)
  • For each i in 1:6398, create three samples from the normal distribution with mean m[i] and standard deviation sr[i], and three samples with mean m[i] and standard deviation sa[i]
    • Use function rnorm
  • On the resulting matrix, apply Welch's t-test and draw the volcano plot.
  • How many random genes have p-value <=0.01? Is it roughly what we would expect under the null hypothesis?

Draw histograms of p-values from the real data (reference vs. acetate) and from the random data. How do they differ? Is this what you would expect?

  • use function hist

Submit plots volcano-real.png, volcano-random.png, hist-real.png, hist-random.png (real for real expression data and random for generated data)

L10

#HW10

In this lecture we'll learn how to use Biopython for sequence data retrieval and processing. I strongly recommend using IPython for this tutorial.

Introduction to Biopython

You can find slides here. If you wish to accomplish some other analysis, have a look at Biopython Tutorial and Cookbook.

Starting with Biopython

# run Python, e.g. IPython
# import Biopython
from Bio import Entrez

# tell NCBI who you are (mandatory!)
Entrez.email="name@domain.com"

# get info about NCBI databases
handle = Entrez.einfo()
record = Entrez.read(handle)
print record

Search protein database for `opsin 1` from human

# define database
db="protein"
# look for `opsin 1` from human
query = '"opsin 1" AND "homo sapiens"[organism]'
handle = Entrez.esearch(db=db, retmax=10, term=query)
record = Entrez.read(handle)
print record

# get genbank ID associated with above query
gid = record['IdList'][0]

# fetch protein sequence in FASTA
handle = Entrez.efetch(db=db, id=gid, rettype="fasta")
fasta = "".join(handle.readlines())
print fasta

Find similar proteins using NCBI BLAST service

# perform blastP search remotely
from Bio.Blast import NCBIWWW, NCBIXML

# limit results to bunch of organisms only
myEntrezQuery = "Homo sapiens[Organism] OR Mus musculus[Organism] OR Gallus gallus[Organism] OR Danio rerio[Organism] OR Drosophila melanogaster[Organism]"

# submit query (may take some minutes)
result_handle = NCBIWWW.qblast("blastp", "swissprot", fasta, expect=1e-25, hitlist_size=100, entrez_query=myEntrezQuery)

# retrieve results
blast_record = NCBIXML.read(result_handle)

Working with BLAST alignments

## you can explore blast_record object using `blast_record.` + TAB

# parse alignments
for alignment in blast_record.alignments:
  for hsp in alignment.hsps:
    print alignment.title, hsp.expect
  
blast_hsp = blast_record.alignments[0].hsps[0]
print blast_hsp

Fetch BLAST matches

from Bio import SeqIO
# get gi ids
gids = [a.hit_id.split('|')[1] for a in blast_record.alignments]

handle = Entrez.efetch(db=db, id=",".join(gids), rettype="fasta")
records = []
for r in SeqIO.parse(handle, "fasta"):
  r.id = r.id.split('|')[-1]
  #r.description = ""
  records.append(r)


Multiple sequence alignment

# MSA using muscle
import subprocess, sys
from Bio.Align.Applications import MuscleCommandline

# save sequences to file
fn="seqs.fa"
with open(fn, "w") as out:
  SeqIO.write(records, out, "fasta")
  
muscle_cmd = MuscleCommandline(clwstrict=True, input=fn)
print(muscle_cmd)

# run MUSCLE subprocess
child = subprocess.Popen(str(muscle_cmd), stdout=subprocess.PIPE, shell=(sys.platform!="win32"))
child.wait()

# read alignments
from Bio import AlignIO
align = AlignIO.read(child.stdout, "clustal")
print(align)

Reconstruct phylogenetic tree

# convert the alignment into PHYLIP format
phylip="seqs.phy"
with open(phylip, 'w') as out:
  AlignIO.write(align, out, 'phylip')


# reconstruct phylogenetic tree
from Bio import Phylo
from Bio.Phylo.Applications import PhymlCommandline, FastTreeCommandline
#cmd = PhymlCommandline(input=phylip, datatype='aa')
cmd = FastTreeCommandline(input=phylip, out=phylip+".nw")
out_log, err_log = cmd()

tree = Phylo.read(phylip+".nw", 'newick') # '_phyml_tree.txt'
Phylo.draw_ascii(tree)

Working with trees efficiently

# ete
import ete2 as ete
t=ete.PhyloTree(phylip+".nw")

# root by mid-point
t.set_outgroup(t.get_midpoint_outgroup())

print(t)
t.show()
t.render(fn+".svg")

Just in case ete is not working, you can install it using virtualenv (or try activating my copy: source /home/lpryszcz/src/venv/py27/bin/activate):

# create Python virtual environment
mkdir -p ~/src/venv && cd ~/src/venv
virtualenv py27

# activate it
source py27/bin/activate

# update pip & install ete2
pip install -U pip
pip install -U ete2

# check if it's installed
python -c "import ete2 as ete; print ete.__version__"

HW10

Evolution of tumor suppressor p53 (TP53) family

Please submit multiple sequence alignment (in phylip format) and tree image (in PDF).

Some questions you should be able to answer:

  1. How many members of this family are present in humans?
  2. Is the number of p53 family members consistent across all vertebrates?
  3. When did the expansion of this family happen?

HINT: If you have problems identifying the right sequence in the database, try looking for `P53_HUMAN` or `P53[Gene Name]`.

L11

#HW11

Biological story: tiny monkeys

  • Common marmoset (Callithrix jacchus, Kosmáč bielofúzý) weighs only about 1/4 kg
  • Most primates are much bigger
  • Which marmoset genes differ from other primates and are related to the small size?
  • A positive selection scan computes for each gene a p-value indicating whether it evolved faster on the marmoset lineage
  • The result is a list of p-values, one for each gene
  • Which biological functions are enriched among positively selected genes? Are any of those functions possibly related to body size?

Gene functions and GO categories

Use mysql database "marmoset" on the server.

  • We can look at the description of a particular gene:
select * from genes where prot='IGF1R';
+----------------------------+-------+-------------------------------------------------+                       
| transcriptid               | prot  | description                                     |                       
+----------------------------+-------+-------------------------------------------------+                       
| knownGene.uc010urq.1.1.inc | IGF1R | insulin-like growth factor 1 receptor precursor |                       
+----------------------------+-------+-------------------------------------------------+
  • In the database, we have stored all the P-values from positive selection tests:
select * from lrtmarmoset where transcriptid='knownGene.uc010urq.1.1.inc';
+----------------------------+---------------------+
| transcriptid               | pval                |
+----------------------------+---------------------+
| knownGene.uc010urq.1.1.inc | 0.00142731425252827 |
+----------------------------+---------------------+
  • Genes are also assigned functional categories based on automated processes (including sequence similarity to other genes) and manual curation. The corresponding database is maintained by Gene Ontology Consortium. We can use on-line sources to search for these annotations, e.g. here.
  • We can also download the whole database and preprocess it into a usable form:
select * from genes2gocat,gocatdefs where transcriptid='knownGene.uc010urq.1.1.inc' and genes2gocat.cat=gocatdefs.cat;
(results in 50 categories)
  • GO categories have a hierarchical structure - see for example category GO:0005524 ATP binding:
select * from gocatparents,gocatdefs where gocatparents.parent=gocatdefs.cat and gocatparents.cat='GO:0005524';
+------------+------------+---------+------------+-------------------------------+
| cat        | parent     | reltype | cat        | def                           |
+------------+------------+---------+------------+-------------------------------+
| GO:0005524 | GO:0032559 | isa     | GO:0032559 | adenyl ribonucleotide binding |
+------------+------------+---------+------------+-------------------------------+
... and continuing further up the hierarchy:
| GO:0032559 | GO:0030554 | isa     | GO:0030554 | adenyl nucleotide binding     |
| GO:0032559 | GO:0032555 | isa     | GO:0032555 | purine ribonucleotide binding |
| GO:0030554 | GO:0001883 | isa     | GO:0001883 | purine nucleoside binding |
| GO:0030554 | GO:0017076 | isa     | GO:0017076 | purine nucleotide binding |
| GO:0032555 | GO:0017076 | isa     | GO:0017076 | purine nucleotide binding |
| GO:0032555 | GO:0032553 | isa     | GO:0032553 | ribonucleotide binding    |
| GO:0001883 | GO:0001882 | isa     | GO:0001882 | nucleoside binding |
| GO:0017076 | GO:0000166 | isa     | GO:0000166 | nucleotide binding |
| GO:0032553 | GO:0000166 | isa     | GO:0000166 | nucleotide binding |
| GO:0001882 | GO:0005488 | isa     | GO:0005488 | binding |
| GO:0000166 | GO:0005488 | isa     | GO:0005488 | binding |
| GO:0005488 | GO:0003674 | isa     | GO:0003674 | molecular_function |
  • What else can be under GO:0032559 adenyl ribonucleotide binding?
select * from gocatparents,gocatdefs where gocatparents.cat=gocatdefs.cat and gocatparents.parent='GO:0032559';
+------------+------------+---------+------------+-------------+
| cat        | parent     | reltype | cat        | def         |
+------------+------------+---------+------------+-------------+
| GO:0005524 | GO:0032559 | isa     | GO:0005524 | ATP binding |
| GO:0016208 | GO:0032559 | isa     | GO:0016208 | AMP binding |
| GO:0043531 | GO:0032559 | isa     | GO:0043531 | ADP binding |
+------------+------------+---------+------------+-------------+
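Collecting the complete set of ancestors of a category means repeatedly following these parent links; a small Python sketch over a hypothetical in-memory edge list (in the database the same edges live in the gocatparents table):

```python
def ancestors(cat, parents):
    """All categories reachable from cat by repeatedly following parent links."""
    found, stack = set(), [cat]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in found:
                found.add(p)
                stack.append(p)
    return found

# a made-up fragment of the hierarchy shown above
parents = {"GO:0005524": ["GO:0032559"],
           "GO:0032559": ["GO:0030554", "GO:0032555"]}
print(sorted(ancestors("GO:0005524", parents)))
# ['GO:0030554', 'GO:0032555', 'GO:0032559']
```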

Mann–Whitney U test

  • also known as Wilcoxon rank-sum test
  • In Lecture 9, we used Welch's t-test to test if one set of expression measurements for a gene is significantly different from a second set
  • That test assumes that both sets come from normal (Gaussian) distributions with unknown parameters
  • The Mann-Whitney U test is called non-parametric, because it does not make this assumption
  • The null hypothesis is that the two sets of measurements were generated by the same unknown probability distribution
  • Alternative hypothesis: for X from the first distribution and Y from the second, P(X>Y) is not equal to P(Y>X)
  • We will use a one-sided version of the alternative hypothesis: P(X>Y) > P(Y>X)
  • Compute the test statistic U:
    • compare all pairs X, Y (X from first set, Y from second set)
    • if X>Y, add 1 to U
    • if X==Y, add 0.5
  • For large sets, U is approximately normally distributed under the null hypothesis
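The definition of the U statistic above is easy to state directly as code; a brute-force sketch in plain Python (in practice we let R's wilcox.test do the work):

```python
def u_statistic(xs, ys):
    """Mann-Whitney U: count pairs with X > Y; ties count as 0.5."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

print(u_statistic([3, 4, 5], [1, 2, 3]))  # 8.5
```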

How to use in R:

# generate 20 samples from exponential distrib. with mean 1
x = rexp(20, 1)  
# generate 30 samples from exponential distrib. with mean 1/2
y = rexp(30, 2)  

# test if values of x greater than y
wilcox.test(x,y,alternative="greater")  
# W = 441, p-value = 0.002336
# alternative hypothesis: true location shift is greater than 0
# W is the U statistic above

# now generate y twice from the same distrib. as x
y = rexp(30, 1)
wilcox.test(x,y,alternative="greater")
# W = 364, p-value = 0.1053
# relatively small p-value (by chance)

y = rexp(30, 1)
wilcox.test(x,y,alternative="greater")
# W = 301, p-value = 0.4961
# now much greater p-value

Another form of the test, potentially useful for HW:

  • have a vector of values x, binary vector b indicating two classes: 0 and 1
  • test if values marked by 0 are greater than values marked by 1
# generate 10 with mean 1, 30 with mean 1/2, 10 with mean 1
x = c(rexp(10,1),rexp(30,2),rexp(10,1))
# classes 10x0, 30x1, 10x0
b = c(rep(0,10),rep(1,30),rep(0,10))
wilcox.test(x~b,alternative="greater")

# the same test by distributing into subvectors x0 and x1 for classes 0 and 1
x0 = x[b==0]
x1 = x[b==1]
wilcox.test(x0,x1,alternative="greater")
# should be the same as above

HW11

Lecture 11

  • In this task, you can use a combination of any scripting languages (e.g. Perl, Python, R) but also SQL, command-line tools etc.
  • Input is in a database
  • Submit required text files (optionally also files with figures in bonus part D)
  • Also submit any scripts you have written for this HW
  • In the protocol, include shell commands you have run
  • Outline of protocol is in /tasks/hw11/HW11.txt

Available data

  • All data necessary for this task is available in the mysql database 'marmoset' on the server
  • You will find password in /tasks/hw11/readme.txt
  • You have read-only access to the 'marmoset' database
  • For creating temporary tables, etc., you can use database 'temp_youruserid' (e.g. 'temp_mrkvicka54'), where you are allowed to create new tables and store data
  • You can address tables in mysql even between databases: to start client with your writeable database as default location, use:
mysql -p temp_mrkvicka54
  • You can then access data in the table 'genes' in the database 'marmoset' simply by using 'marmoset.genes'

Task A: Get GO categories for each gene

  • Extract data from the database and for each gene create a complete list of GO categories it belongs to.
    • These include the categories explicitly listed for this gene in the DB, but also all ancestors of the listed categories
  • For the next task you will also need positive selection p-value for each gene from the database
  • You can try to do this part completely in SQL, building a new table containing pairs: category and its ancestor
    • Such transitive closure can be done by repeated joins until you find no more ancestors
  • Alternatively, you can simply extract data from the database and process them in a language of your choice
  • Submit file GO-sample.txt with a complete list of genes in category GO:0032555, one on each line

Task B: Run the Mann-Whitney U test for each GO category

  • Run the Mann-Whitney U test for each non-trivial category
    • Non-trivial categories are those in which at least one of our genes is inside the category and at least one is outside
    • You should test whether genes in a particular GO category have smaller positive selection p-values than genes outside the category
  • Submit file test.tsv in which each line contains two tab separated values:
    • GO category id
    • p-value from the test

Task C: Report significant categories

  • Submit file report.tsv with 20 most significant GO categories (lowest p-values)
    • For each category list its ID, p-value and description
    • Order them from the most significant
  • To your protocol, write any observations you can make
    • Do any reported categories seem interesting to you, possibly related to size?
    • Are any reported categories likely related to each other?

Task D (bonus): cluster significant categories

  • Some categories in task C appear similar according to their name
  • Try creating k-means or hierarchical clustering of categories
  • Represent each category as a binary vector in which for each gene you have one bit indicating if it is in the category
  • Thus categories with the same set of genes will have identical vectors
  • Try to report results in an appropriate form (table, text, figure), discuss them in the protocol

Note

  • In part B, we have performed many statistical tests; the resulting p-values should be corrected by a multiple testing correction from Lecture 9
    • This is not required in this homework, but should be done in a real study