1-DAV-202 Data Management 2023/24
Previously 2-INF-185 Data Source Integration

Materials · Introduction · Rules · Contact
· Grades from marked homeworks are on the server in file /grades/userid.txt
· Dates of project submission and oral exams:
Early: submit project May 24 9:00am, oral exams May 27 1:00pm (limit 5 students).
Otherwise submit project June 11, 9:00am, oral exams June 18 and 21 (estimated 9:00am-1:00pm, schedule will be published before exam).
Sign up for one of the exam days in AIS before June 11.
Remedial exams will take place in the last week of the exam period. Beware, there will not be much time to prepare a better project. Projects should be submitted as homeworks to /submit/project.
· Cloud homework is due on May 20 9:00am.


Lcloud

Revision as of 21:28, 2 May 2022 by Usama (talk | contribs)

Today we will work with Google Cloud (GCP), which is a cloud computing platform. GCP contains many services (virtual machines, kubernetes, storage, databases, ...). We are mainly interested in Dataflow and Storage. Dataflow allows highly parallel computation on large datasets. We will use an educational account which gives you a certain amount of resources for free.

Basic setup

You should have received instructions on how to create a GCloud account via MS Teams. You should be able to log in to the Google Cloud console. (TODO picture).

Now:

  • Login to some Linux machine (ideally vyuka)
  • If the machine is not vyuka, install the gcloud command-line package (I recommend installing it via snap: [1]).
  • Run
    gcloud init --console-only
  • Follow the instructions (copy the link to a browser, log in, and then copy the code back to the console).
    
Input files and data storage

Today we will use Gcloud storage (https://cloud.google.com/storage) to store input files and outputs.
Think of it as a limited external disk (more like Google Drive than Dropbox): you can upload and download whole files, but there is no random access to the middle of a file.
    
Run the following two commands to check that you can see the "bucket" (data storage) associated with this lecture:

# the following command should give you a big list of files
gsutil ls gs://mad-2022-public/

# this command downloads one file from the bucket
gsutil cp gs://mad-2022-public/splitaa splitaa

# the following command prints the file in your console
# (no need to do this)
gsutil cat gs://mad-2022-public/splitaa


You should also create your own bucket (storage area). Pick your own name; it must be globally unique:

gsutil mb gs://mysuperawesomebucket

Apache Beam and Dataflow

We will be using Apache Beam in this session (because PySpark stinks).

Running locally

If you want to use your own machine, install the packages with pip install 'apache-beam[gcp]'

You are given a basic template with comments in /tasks/cloud/example_job.py

You can run it locally as follows:

python3 example_job.py --output out

This job downloads one file from cloud storage and writes it to files whose names start with out (you can change the name if you want). Running locally like this is very useful for debugging.
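As a rough illustration of what such a Beam job looks like, here is a minimal word-count sketch. This is NOT the course template (that lives in /tasks/cloud/example_job.py); the helper name, flag defaults, and pipeline steps are illustrative only, and it assumes the 'apache-beam[gcp]' package from the step above is installed:

```python
# Minimal word-count sketch (illustrative, not the course template).
import argparse
import re

def tokenize(line):
    # Split one input line into lowercase words.
    return re.findall(r"[a-z']+", line.lower())

def main(argv=None):
    # Imported here so the helper above works even without Beam installed.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    parser = argparse.ArgumentParser()
    parser.add_argument("--input", default="gs://mad-2022-public/splitaa")
    parser.add_argument("--output", default="out")
    args, beam_args = parser.parse_known_args(argv)

    # Flags the parser does not recognize (--runner, --project, ...) are
    # passed on to Beam; that is how the same script can run either
    # locally or in Dataflow.
    with beam.Pipeline(options=PipelineOptions(beam_args)) as p:
        (p
         | "Read" >> beam.io.ReadFromText(args.input)
         | "Split" >> beam.FlatMap(tokenize)
         | "PairWithOne" >> beam.Map(lambda w: (w, 1))
         | "Count" >> beam.CombinePerKey(sum)
         | "Format" >> beam.MapTuple(lambda w, c: f"{w}\t{c}")
         | "Write" >> beam.io.WriteToText(args.output))

# In a real script: if __name__ == "__main__": main()
```

It would be invoked the same way as the template, e.g. python3 example_job.py --output out.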

Running in Dataflow

First we need to create a service account (an account for a machine rather than a person). Run the following commands:

  • gcloud projects list
  • This will show you the project-id; you will need it.
  • gcloud iam service-accounts create mad-sacc
  • This will create service account named mad-sacc.
  • gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:mad-sacc@PROJECT_ID.iam.gserviceaccount.com" --role=roles/storage.objectAdmin
  • gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:mad-sacc@PROJECT_ID.iam.gserviceaccount.com" --role=roles/dataflow.admin
  • gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:mad-sacc@PROJECT_ID.iam.gserviceaccount.com" --role=roles/editor
  • This will give your service account the needed permissions. Do not forget to change PROJECT_ID to your project-id. Also note that we are quite liberal with permissions; this is not ideal in production.
  • gcloud iam service-accounts keys create key.json --iam-account=mad-sacc@PROJECT_ID.iam.gserviceaccount.com
  • This creates a key for your service account named key.json (in your current directory).
  • export GOOGLE_APPLICATION_CREDENTIALS=/home/jano/hrasko/key.json
  • This sets up an environment variable (change the path to where your key actually is). You will need to set it every time you open a console (or put the export into your .bashrc).
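It is easy to forget the export above in a fresh console. A tiny (hypothetical) helper to check that the variable points at an existing key file before launching a job; the function name is an assumption, not part of any course tooling:

```python
# Sketch: verify GOOGLE_APPLICATION_CREDENTIALS is set and points at an
# existing file, so a Dataflow launch does not fail on missing credentials.
import os

def credentials_path_ok(env=None):
    # Return True only if the variable is set and names an existing file.
    env = os.environ if env is None else env
    path = env.get("GOOGLE_APPLICATION_CREDENTIALS", "")
    return bool(path) and os.path.isfile(path)
```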


Now you can run Beam job in Dataflow on small sample:

python3 wordcount.py --output gs://YOUR_BUCKET/out/outxy --region europe-west1 --runner DataflowRunner --project PROJECT_ID --temp_location gs://YOUR_BUCKET/temp/ --input gs://mad-2022-public/splitaa

You will probably get an error like:

Dataflow API has not been used in project XYZ before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/dataflow.googleapis.com/overview?project=XYZ then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.

Visit the URL (the one from your error, not from this lecture), click Enable API, and then run the command again.

You can then download the output using:

gsutil cp gs://YOUR_BUCKET/out/outxy* .
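Dataflow typically writes the output in several shards (files like outxy-00000-of-00005). A sketch of merging them locally after the download, assuming the job wrote one word and count per line separated by a tab (as in a word-count job); the file pattern is illustrative:

```python
# Sketch: merge sharded Dataflow output downloaded with gsutil.
# Assumes each line holds "word<TAB>count".
import glob
from collections import Counter

def merge_counts(pattern="outxy*"):
    # Sum per-word counts across all shards matching the pattern.
    totals = Counter()
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            for line in f:
                word, count = line.rstrip("\n").split("\t")
                totals[word] += int(count)
    return totals

# Example: print the 10 most common words across all shards.
# for word, count in merge_counts().most_common(10):
#     print(word, count)
```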


Now you can run the job on the full dataset (this is what you should be doing in the homework):

python3 wordcount.py --output gs://YOUR_BUCKET/out/outxy --region europe-west1 --runner DataflowRunner --project PROJECT_ID --temp_location gs://YOUR_BUCKET/temp/ --input gs://mad-2022-public/* --num_workers 5 --worker_machine_type n2-standard-4

If you want to watch progress:

  • Go to the web console. Find Dataflow in the menu (or type it into the search bar), go to Jobs, and select your job.
  • If you want to see the machines being magically created, go to VM Instances.