
My AWS S3 crash course

This post aims to be your quick-start guide to AWS S3, i.e. Amazon's Simple Storage Service. Follow along and you'll feel like you just had a roller coaster ride through the AWS S3 world.

First of all, get a free AWS account. AWS includes many services, and as mentioned above I will only cover S3 here. You may be wondering what S3 is. In simple words, S3 is Amazon's cloud storage service, commonly used for backups. Its free tier is limited, and any usage beyond the limit will incur charges. So make sure you read through the pricing structure and understand it. Charges can accumulate fast if you don't pay attention or are not careful with your usage. You have been warned. You and only you are responsible for your bills. See the disclaimer at the end of this post. Your reading of this post implies your understanding of and agreement to the disclaimer.

Once you are equipped with an AWS account, log in to it and launch the AWS Management Console.

Create a Bucket

In S3's terminology, a bucket is a top-level container for the files, or objects, that you want to back up to S3.

It is good practice to compress your files into a single archive before you upload to S3. Uploading files individually would quickly consume your free-tier request allowance, and you will start incurring charges for any usage beyond the limit. Remember that S3 pricing is based on the amount of data you store and the number of requests you make, so you want to minimize the number of objects you upload.
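
For example, one way to build such an archive on Windows 10 and later is PowerShell's Compress-Archive cmdlet (the folder and archive names here are just placeholders, not anything this post prescribes):
Compress-Archive -Path C:\Users\YourUser\Documents\* -DestinationPath C:\Users\YourUser\Documents\ImportantDocsApr2017.zip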

Click Create Bucket, enter a bucket name (it must be globally unique across all of S3), pick a region to your liking, and your bucket should be ready to receive files.

Upload File - via web interface

Once you have the bucket ready, you can begin uploading your zip file(s). To upload a file via the web interface, click the Upload button, then click Add Files, and finally click Start Upload. Congrats! You have successfully uploaded your first archive.

Retrieve File - via web interface

To download a file, click the checkbox next to the file you want to download. Click the Actions button, right-click Download, and choose Save Link As.... Click OK to close the menu.

Delete File - via web interface

Similarly, to delete a file, click the same checkbox next to the file, then Actions->Delete. Click OK to confirm.

Using S3 Command Line Interface

You can also access S3 from the command line using Amazon's AWS CLI tool. The CLI installer is available on the AWS website; download and install it. Then, using the AWS Identity and Access Management (IAM) console, create a new user for the command line interface and grant it S3 access per the document here:
https://aws.amazon.com/getting-started/tutorials/backup-to-s3-cli/
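
Once the IAM user exists, you would typically point the CLI at its credentials with aws configure, which prompts for the access key, secret key, default region, and output format. A rough sketch of that session, with placeholder values:
aws configure
AWS Access Key ID [None]: your-access-key-id
AWS Secret Access Key [None]: your-secret-access-key
Default region name [None]: us-east-1
Default output format [None]: json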

Now you should be able to access your S3 buckets from the command line.

To create a new bucket (if you do not have one already):
aws s3 mb s3://your-bucket-name-here
To upload your zip file to your bucket:
aws s3 cp c:\Users\YourUser\Documents\ImportantDocsApr2017.zip s3://your-bucket-name-here/
To download it, just swap the order:
aws s3 cp s3://your-bucket-name-here/ImportantDocsApr2017.zip c:\Users\YourUser\Documents\DownloadedDocsApr2017.zip 
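To double-check what ended up in the bucket, you can list its contents; and to mirror the delete step from the web interface, there is aws s3 rm (bucket and file names below are the same placeholders as above):
aws s3 ls s3://your-bucket-name-here/
aws s3 rm s3://your-bucket-name-here/ImportantDocsApr2017.zip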
It's that simple!
Disclaimer: If you follow the information here, there is no warranty. I am not liable if it deletes your data, gets you hacked, burns your house down, or anything else. If you follow the information contained here, you do so entirely at your own risk. My views and opinions are my own and do not necessarily represent the views of my current or former employers.

© Raheel Hameed and www.raheelhameed.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to author and this website with appropriate and specific direction to the original content.
