1-DAV-202 Data Management 2023/24
Previously 2-INF-185 Data Source Integration

· Grades from marked homeworks are on the server in file /grades/userid.txt
· Dates of project submission and oral exams:
Early: submit project May 24, 9:00am; oral exams May 27, 1:00pm (limit 5 students).
Otherwise submit project June 11, 9:00am; oral exams June 18 and 21 (estimated 9:00am-1:00pm; the schedule will be published before the exam).
Sign up for one of the exam days in AIS before June 11.
Remedial exams will take place in the last week of the exam period. Beware, there will not be much time to prepare a better project. Projects should be submitted as homeworks to /submit/project.
· Cloud homework is due on May 20, 9:00am.


Introduction


Target audience

This course is offered at the Faculty of Mathematics, Physics and Informatics, Comenius University in Bratislava for second-year students of the bachelor Bioinformatics study program and for students of the bachelor and master Computer Science study programs. It is a prerequisite of the master-level state exams in Bioinformatics and Machine Learning. However, the course is open to students from other study programs if they satisfy the following informal prerequisites.

We assume that the students are proficient in programming in at least one programming language and are not afraid to learn new languages. We also assume basic knowledge of working on the Linux command line (at least basic commands for working with files and folders, such as cd, mkdir, cp, mv, rm, chmod). Although most technologies covered in this course can be used for processing data from many application areas, we will illustrate some of them on examples from bioinformatics. We will explain the necessary terminology from biology as needed.

The basic use of command-line tools can be learned, for example, from a tutorial by Ian Korf.

Course objectives

Computer science courses cover many interesting algorithms, models and methods that can be used for data analysis. However, when you want to use these methods on real data, you will typically need to make considerable efforts to obtain the data, pre-process it into a suitable form, test and compare different methods or settings, and arrange the final results in informative tables and graphs. Often, these activities need to be repeated for different inputs, different settings, and so on. For example, in bioinformatics it is possible to find a job where your main task will be data processing using existing tools, possibly supplemented by small custom scripts. This course will cover some programming languages and technologies suitable for these activities.

This course is particularly recommended for students whose bachelor or master thesis involves substantial empirical experiments (e.g. experimental evaluation of your methods and comparison with other methods on real or simulated data).

Basic guidelines for working with data

As you know, in programming it is recommended to adhere to certain practices, such as good coding style, modular design, thorough testing, etc. Such practices add a little extra work, but pay off in the long run. Similar good practices exist for data analysis. As an introduction, we recommend the following article by the well-known bioinformatician William Stafford Noble (his advice applies outside of bioinformatics as well):

Several important recommendations:

  • Noble 2009: "Everything you do, you will probably have to do over again."
    • After doing an entire analysis, you often find out that there was a problem with the input data or with one of the early steps, and therefore everything needs to be redone
    • It is therefore better to use techniques that keep all details of your workflow and allow you to repeat it if needed (see the first sketch after this list)
    • Try to avoid manually changing files, because this makes rerunning analyses harder and more error-prone
  • Document all steps of your analysis
    • Note what you have done, why you did it, and what the result was
    • Some of these things may seem obvious to you at present, but you may forget them in a few weeks or months, and you may need them to write up your thesis or to repeat the analysis
    • Good documentation is also indispensable for collaborative projects
  • Keep a logical structure of your files and folders
    • Their names should be indicative of the contents (create a sensible naming scheme)
    • However, if you have too many versions of the experiment, it may be easier to name them by date rather than create new long names (your notes should then detail the meaning of each dated version)
  • Try to detect problems in the data
    • Big files often hide problems in the format, unexpected values, etc. These may confuse your programs and make the results meaningless
    • In your scripts, check that the input data conform to your expectations (format, values in reasonable ranges, etc.)
    • In unexpected circumstances, scripts should terminate with an error message and a non-zero exit code (see the second sketch after this list)
    • If your script executes another program, check its exit code
    • Also check intermediate results as often as possible (by manual inspection, by computing various statistics, etc.) to detect errors in the data and in your code
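
The first sketch below is a minimal Python driver script illustrating how to keep the whole workflow rerunnable instead of editing files by hand: it rebuilds every derived file from the raw input, so the analysis can be repeated with a single command. The step scripts and file names (filter_input.py, raw_data.txt, etc.) are hypothetical placeholders, not part of the course materials.

  #!/usr/bin/env python3
  # Hypothetical driver script: rebuilds all derived files from the raw input,
  # so the whole analysis can be rerun with one command.
  import subprocess
  import sys

  def run(command):
      # Run one step of the workflow; check=True aborts the script
      # if the step exits with a non-zero code.
      print("RUNNING:", " ".join(command), file=sys.stderr)
      subprocess.run(command, check=True)

  def main():
      # Placeholder steps; substitute your own tools and file names.
      run(["./filter_input.py", "raw_data.txt", "filtered.txt"])
      run(["./compute_statistics.py", "filtered.txt", "stats.tsv"])
      run(["./plot_results.py", "stats.tsv", "figure.png"])

  if __name__ == "__main__":
      main()

Because each step is launched with check=True, the driver also demonstrates checking the exit code of every program it runs.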
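
The second sketch shows one way a script can check its input data and terminate with an error message and a non-zero exit code when the data do not match expectations. The two-column input format (a name and a non-negative integer length, separated by a tab) is an assumption made only for this example.

  #!/usr/bin/env python3
  # Sketch of defensive input checking for a hypothetical two-column
  # tab-separated file: name <TAB> non-negative integer length.
  import sys

  def main():
      if len(sys.argv) != 2:
          # sys.exit with a string prints it to stderr and exits with code 1
          sys.exit("usage: check_input.py input.tsv")

      with open(sys.argv[1]) as f:
          for line_number, line in enumerate(f, start=1):
              fields = line.rstrip("\n").split("\t")
              if len(fields) != 2:
                  sys.exit(f"line {line_number}: expected 2 columns, got {len(fields)}")
              name, length = fields
              if not length.isdigit():
                  sys.exit(f"line {line_number}: length '{length}' is not a non-negative integer")

  if __name__ == "__main__":
      main()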