
HWweb

It is 2021. Use python3! The default `python` command on the vyuka server is Python 2.7, and some of the packages do not work with Python 2. If you type `python3`, you will get Python 3.

Sometimes you may be interested in processing data which is available in the form of a website consisting of multiple webpages (for example an e-shop with one page per item or a discussion forum with pages of individual users and individual discussion topics).

In this lecture, we will extract information from such a website using Python and existing Python libraries. We will store the results in an SQLite database. These results will be analyzed further in the following lectures.

Scraping webpages

In Python, the simplest tool for downloading webpages is the requests package:

import requests

# Fetch the page; r.text holds the HTML source as a string
r = requests.get("http://en.wikipedia.org")
print(r.text[:10])

Parsing webpages

When you download one page from a website, it is in HTML format and you need to extract useful information from it. We will use the beautifulsoup4 library for parsing HTML.

  • In your code, we recommend following the examples at the beginning of the documentation and the example of CSS selectors. You can also check out the general syntax of CSS selectors.
  • The information you need to extract is located within the structure of the HTML document.
  • To find out how the document is structured, use the Inspect element feature in Chrome or Firefox (right-click on the text of interest within the website). For example, this text is located within an LI element, which is within a UL element, which is in 4 nested DIV elements, one BODY element and one HTML element. Some of these elements also have a class (matched in a selector with a leading dot) or an ID (matched with a leading #).
  • Based on this information, create a CSS selector (see the sketch after this list).
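
As a minimal sketch of how these pieces fit together (the `ul li` selector is a made-up example; adapt it to the structure you found with Inspect element):

import requests
from bs4 import BeautifulSoup

r = requests.get("http://en.wikipedia.org")
# Parse the downloaded HTML
soup = BeautifulSoup(r.text, "html.parser")
# Select every LI element nested inside a UL element
# and print its text content without surrounding whitespace
for li in soup.select("ul li"):
    print(li.get_text(strip=True))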

Parsing dates

To parse dates written as text, you have two options:
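
Presumably these are the standard library's datetime.strptime (when you know the exact format) and a flexible third-party parser such as the dateutil package (when you don't); a minimal sketch of both, with made-up date strings:

from datetime import datetime
from dateutil import parser  # third-party package python-dateutil

# Option 1: strptime with an explicit format string
d1 = datetime.strptime("16 March 2021", "%d %B %Y")

# Option 2: dateutil guesses the format automatically
d2 = parser.parse("16 March 2021")

print(d1.date(), d2.date())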

Other useful tips

  • Don't forget to commit changes to your SQLite3 database (call db.commit()); see the sketch after this list.
  • The SQL command CREATE TABLE IF NOT EXISTS can be useful at the start of your script.
  • Use the screen command for long-running scripts.
  • All packages are installed on our server. If you use your own laptop, you need to install them using pip (preferably in a virtualenv).
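
A minimal sketch combining the first two tips (the items.db filename and the table schema are made up for illustration):

import sqlite3

# Open (or create) the database file
db = sqlite3.connect("items.db")

# Safe to run repeatedly thanks to IF NOT EXISTS
db.execute("CREATE TABLE IF NOT EXISTS items (name TEXT, price REAL)")

db.execute("INSERT INTO items VALUES (?, ?)", ("example", 9.99))

# Without commit, the inserted rows would be lost when the script ends
db.commit()
db.close()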