Recently I knew for a fact that I would be losing one of my AWS accounts, and I had a specific bucket which held some files that were near and dear to my heart. I had offloaded the files into S3 knowing that they would be safe and sound and would not clutter up any local storage on my PC. All told, these particular files were large video files which came to a total of about 150 GB.
Listing All Available Buckets in the AWS CLI
As a refresher, you can view all the S3 buckets you have by running the following command:
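```
# List every S3 bucket in the account.
aws s3api list-buckets
```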
Viewing the Total Size and Number of Objects in a Bucket
To view the total size and number of objects in a bucket, navigate to the Metrics tab of your specific bucket and you will be able to view this data.
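If you would rather stay in the terminal, a rough equivalent (not part of the console-based approach above) is to have the CLI total the bucket for you. The bucket name here is a placeholder:

```
# Recursively list every object in the bucket and print a summary with the
# total object count and total size. "my-video-bucket" is a placeholder name.
aws s3 ls s3://my-video-bucket --recursive --summarize --human-readable
```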
So I wanted to figure out: how can I get all of these files, spread across several different folders inside a single S3 bucket, to my local machine in the fastest way possible?
The answer is the `aws s3 sync` command. This command takes the remote files in an S3 bucket and writes them to your local file system.
Example
For this example I created a folder on my C:\ drive called Test, which will be the location I pull my files into.
Open your command line, change the working directory to that folder, and execute the `aws s3 sync` command with the target S3 bucket you want to pull files from. The `.` in this context tells the command to pull all remote files into the current directory.
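A minimal sketch of what that looks like, assuming the bucket is named my-video-bucket (a placeholder; substitute your own bucket name):

```
# Move into the local target folder, then pull everything in the bucket
# down into the current directory.
cd C:\Test
aws s3 sync s3://my-video-bucket .
```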
Tip: When you run this command, your command line interface may be flooded with messages. These messages give you an indication of the progress of the sync and exactly which files in the batch are being moved over.
Conclusion
And that's it! When the command is done, you will have all the files from that S3 bucket on your local machine.
In this blog post we learned how to:
- List the available buckets with the `aws s3api list-buckets` command.
- View the total size and number of objects stored in a bucket with the AWS console.
- Sync an entire S3 bucket to your local file system with the `aws s3 sync` command.