1-DAV-202 Data Management 2023/24
Previously 2-INF-185 Data Source Integration
Today we will work with Amazon Web Services (AWS), a cloud computing platform that allows highly parallel computation on large datasets. We will use an educational account which gives you a certain amount of resources for free.
Credentials
- First you need to create the .aws/credentials file in your home folder with valid AWS credentials (a sketch of the file format is shown after this list).
- Also run `aws configure`. Press Enter to skip the access key ID and secret access key, put in `us-east-1` for the region, and press Enter for the output format.
- Please use the credentials which were sent to you via email and follow the steps shown here (there is a cursor on each screen):
https://docs.google.com/presentation/d/1GBDErp5xhrV2zLF5kKdwnOAjtmDEFN0pw3RFval419s/edit#slide=id.p
- Sometimes these credentials expire. In that case repeat the same steps to get new ones.
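For reference, a .aws/credentials file typically has the format sketched below. The values are placeholders; use the keys from your email, and include the session token line only if a session token was provided with your (temporary) credentials.

[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>
aws_session_token = <your session token, if provided>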
AWS command line
- We will access AWS using the aws command installed on our server.
- You can also install it on your own machine using pip install awscli (a quick check of the installation is shown below).
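To quickly check that the aws command is installed and picks up your credentials, you can run, for example:

# prints the installed version of the AWS CLI
aws --version
# lists the buckets your credentials can see (the list may be empty)
aws s3 ls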
Input files and data storage
Today we will use Amazon S3 cloud storage to store input files. Run the following commands to check if you can see the "bucket" (data storage) associated with this lecture:
# the following command should give you a big list of files
aws s3 ls s3://idzbucket2
# this command downloads one file from the bucket
aws s3 cp s3://idzbucket2/splitaa splitaa
# the following command prints the file in your console
# (no need to do this).
aws s3 cp s3://idzbucket2/splitaa -
You should also create your own bucket (storage area). Pick your own name (it must be globally unique):
aws s3 mb s3://mysuperawesomebucket
MapReduce
We will be using MapReduce in this session. It is a somewhat outdated concept, but it is simple enough for our purposes and runs out of the box on AWS. If you ever want to work with big data in practice, try something more modern like Apache Beam. And avoid PySpark if you can.
For a tutorial on MapReduce, check out PythonHosted.org or TutorialsPoint.com.
Template
You are given a basic template with comments in /tasks/cloud/example_job.py
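To give an idea of what such a job looks like, below is a minimal generic word-count job written with mrjob. It is only an illustrative sketch, not the contents of /tasks/cloud/example_job.py: the mapper is called once per input line and emits (word, 1) pairs, and the reducer sums the counts for each word.

# word_count.py -- illustrative sketch only, not the provided template
from mrjob.job import MRJob

class MRWordCount(MRJob):
    # called once per input line; emits (word, 1) for every word on the line
    def mapper(self, _, line):
        for word in line.split():
            yield word, 1

    # receives all counts emitted for one word and sums them
    def reducer(self, word, counts):
        yield word, sum(counts)

if __name__ == '__main__':
    MRWordCount.run()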
You can run it locally as follows:
python3 example_job.py <input file> -o <output_dir>
You can run it in the cloud on the whole dataset as follows:
python3 example_job.py -r emr s3://idzbucket2 --num-core-instances 4 -o s3://<your bucket>/<some directory>
For testing we recommend using a smaller sample as follows:
python3 example_job.py -r emr s3://idzbucket2/splita* --num-core-instances 4 -o s3://<your bucket>/<some directory>
Other useful commands
You can download output as follows:
# list of files
aws s3 ls s3://<your bucket>/<some directory>/
# download
aws s3 cp s3://<your bucket>/<some directory>/ . --recursive
If you want to watch progress (a command-line alternative is sketched after these steps):
- Click on the AWS Console button in your workbench (Vocareum).
- Set the region (top right) to Oregon.
- Click on Services, then EMR.
- Click on the job which is running, then on Steps, View logs, syslog.
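If you prefer the command line, you can also check on your clusters with the AWS CLI; for example, the following lists clusters that are currently active (assuming your credentials and region are configured as described above):

# list EMR clusters that are currently starting, running or waiting
aws emr list-clusters --active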