1-DAV-202 Data Management 2023/24
Previously 2-INF-185 Data Source Integration
Today we will work with Google Cloud Platform (GCP), a cloud computing platform. GCP contains many services (virtual machines, Kubernetes, storage, databases, ...). We are mainly interested in Dataflow and Storage. Dataflow allows highly parallel computation on large datasets. We will use an educational account which gives you a certain amount of resources for free.
Basic setup
You should have received instructions on how to create a GCloud account via MS Teams. You should be able to log in to the Google Cloud console. (TODO picture)
Now:
- Log in to some Linux machine (ideally vyuka)
- If the machine is not vyuka, install the gcloud command-line package (I recommend installing it via snap: [1]).
- Run
gcloud init --console-only
- Follow the instructions (copy the link into a browser, log in, and then copy the code back to the console).
Input files and data storage
Today we will use Google Cloud Storage to store input files and outputs. Think of it as a somewhat limited external disk (more like Google Drive than Dropbox). You can only upload and download whole files; there is no random access to the middle of a file.
Run the following two commands to check if you can see the "bucket" (data storage) associated with this lecture:
# the following command should give you a big list of files
gsutil ls gs://mad-2022-public/
# this command downloads one file from the bucket
gsutil cp gs://mad-2022-public/splitaa splitaa
# the following command prints the file in your console
# (no need to do this).
gsutil cat gs://mad-2022-public/splitaa
You should also create your own bucket (storage area). Pick your own name; it must be globally unique:
gsutil mb gs://mysuperawesomebucket
Apache Beam and Dataflow
We will be using Apache Beam in this session (because PySpark stinks).
Running locally
If you want to use your own machine, please install the required packages with:
pip install 'apache-beam[gcp]'
You are given a basic template with comments in /tasks/cloud/example_job.py
You can run it locally as follows:
python3 example_job.py --output out
This job downloads one file from cloud storage and stores it into a file whose name starts with out. You can change the name if you want. This is very useful for debugging.
The actual job just counts the number of occurrences of each base in the input data (and discards any FASTQ headers).
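To give an idea of what such a pipeline looks like, here is a minimal sketch of a base-counting Beam job. This is a hypothetical illustration, not the actual example_job.py; in particular, the heuristic for skipping non-sequence lines is an assumption.
# base_count_sketch.py -- hypothetical sketch, not the actual example_job.py
import argparse
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def bases(line):
    # Rough heuristic (assumption): keep only lines that look like DNA sequence,
    # skipping FASTQ headers, '+' separator lines and quality strings.
    line = line.strip().upper()
    if not line or set(line) - set('ACGTN'):
        return []
    return list(line)

def run():
    parser = argparse.ArgumentParser()
    parser.add_argument('--input', default='gs://mad-2022-public/splitaa')
    parser.add_argument('--output', required=True)
    args, beam_args = parser.parse_known_args()
    with beam.Pipeline(options=PipelineOptions(beam_args)) as p:
        (p
         | beam.io.ReadFromText(args.input)          # one element per input line
         | beam.FlatMap(bases)                       # emit individual bases
         | beam.Map(lambda b: (b, 1))                # pair each base with a count of 1
         | beam.CombinePerKey(sum)                   # sum the counts per base
         | beam.Map(lambda kv: f'{kv[0]}\t{kv[1]}')  # format as "base<TAB>count"
         | beam.io.WriteToText(args.output))         # writes shards named <output>-00000-of-0000N

if __name__ == '__main__':
    run()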
Running in Dataflow
First we need to create a service account (an account for a machine). Run the following commands:
# This will show you the project-id; you will need it.
gcloud projects list
# This will create a service account named mad-sacc
gcloud iam service-accounts create mad-sacc
# This will give your service account some permissions. Do not forget to change PROJECT_ID to your project-id. Also note that we are quite liberal with the permissions; this is not ideal in production.
gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:mad-sacc@PROJECT_ID.iam.gserviceaccount.com" --role=roles/storage.objectAdmin
gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:mad-sacc@PROJECT_ID.iam.gserviceaccount.com" --role=roles/dataflow.admin
gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:mad-sacc@PROJECT_ID.iam.gserviceaccount.com" --role=roles/editor
# This creates a key for your service account, named key.json (in your current directory).
gcloud iam service-accounts keys create key.json --iam-account=mad-sacc@PROJECT_ID.iam.gserviceaccount.com
# This will set up an environment variable for tools which communicate with Google Cloud (change the path to wherever your key.json is). You will need to set this every time you open a console (or put it into .bashrc).
export GOOGLE_APPLICATION_CREDENTIALS=/home/jano/hrasko/key.json
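If you want to check that the key works before launching anything, the following minimal sketch uses the google-cloud-storage client (installed as a dependency of apache-beam[gcp]) to write and read a small test object; YOUR_BUCKET is a placeholder for the bucket you created earlier.
# check_credentials.py -- minimal sketch; assumes GOOGLE_APPLICATION_CREDENTIALS points to key.json
# and that the google-cloud-storage client is installed (it comes with apache-beam[gcp]).
from google.cloud import storage

client = storage.Client()  # picks up the service-account key from the environment variable
bucket = client.bucket('YOUR_BUCKET')  # placeholder: use your own bucket name
blob = bucket.blob('credentials-test.txt')
blob.upload_from_string('hello from the service account\n')  # needs the storage.objectAdmin role
print(blob.download_as_text())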
Now you can run the Beam job in Dataflow on a small sample:
python3 example_job.py --output gs://YOUR_BUCKET/out/outxy --region europe-west1 --runner DataflowRunner --project PROJECT_ID --temp_location gs://YOUR_BUCKET/temp/ --input gs://mad-2022-public/splitaa
You will probably get an error like:
Dataflow API has not been used in project XYZ before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/dataflow.googleapis.com/overview?project=XYZ then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Visit the URL (from your error message, not from this lecture), click Enable API, and then run the command again.
You can then download the output using:
gsutil cp gs://YOUR_BUCKET/out/outxy* .
Now you can run the job on the full dataset (this is what you should be doing in the homework):
python3 example_job.py --output gs://YOUR_BUCKET/out/outxy --region europe-west1 --runner DataflowRunner --project PROJECT_ID --temp_location gs://YOUR_BUCKET/temp/ --input gs://mad-2022-public/* --num_workers 5 --worker_machine_type n2-standard-4
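The flags after the script name are ordinary Beam pipeline options. As a hedged sketch (PROJECT_ID and YOUR_BUCKET are placeholders), the same configuration can also be built directly in code and passed to the pipeline instead of command-line flags:
# Hypothetical sketch: the command-line flags above expressed as PipelineOptions in code.
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=PROJECT_ID',                     # placeholder
    '--region=europe-west1',
    '--temp_location=gs://YOUR_BUCKET/temp/',   # placeholder
    '--num_workers=5',
    '--worker_machine_type=n2-standard-4',
])

# The options object is then handed to the pipeline:
# with beam.Pipeline(options=options) as p:
#     ...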
If you want to watch progress:
- Go to the web console. Find Dataflow in the menu (or type it into the search bar), go to Jobs, and select your job.
- If you want to see the machines that were magically created, go to VM Instances.