Jeremy Zawodny has an excellent article/discussion about the different tools currently available for taking advantage of Amazon Simple Storage Service (S3). After testing many of the tools currently available for S3, I decided to use the Ruby program s3sync to back up my data to S3.
As I explained in an earlier post, I wanted a simple, low-level tool to perform automatic backups to S3. I decided to use s3sync to do the heavy lifting and the jets3t Cockpit GUI to monitor my S3 account. The following explains how I automated my backups to S3 using s3sync and Cockpit.
My server runs Ubuntu Dapper with a Samba server. All the machines in my house use a “Public” share on the Samba server to store files from both Windows and Linux. All of our important files, like photos, home movies, and documents, live on this “Public” share, which simplifies the backup procedure: I don’t have to back up multiple sources.
The following steps describe how I back up my “Public” share to Amazon’s S3 storage service. I decided to post this because I hadn’t found a reasonably simple guide to automating backups to S3 in a way that works like rsync on Linux. This is a follow-up to my original post on choosing a backup solution.
STEP 1: Activate an Amazon S3 account.
Go to http://www.amazon.com/s3 and sign up for an S3 web service account.
Have your Access Key ID and your Secret Access Key handy.
STEP 2: Install a management tool
(Update: I no longer use Cockpit. I now use the command-line tools that come with s3sync, which were not available at the time I wrote this original article; see Option 1.)
Option 1: use the command-line shell tools included with s3sync (my new preferred method).
Here is a sampling of commands from the README for the command-line tool s3cmd.rb, which can be used to create buckets and verify upload success or failure. If you use this option, make sure you have the correct version of Ruby installed on your system and have downloaded the s3sync package (see Step 3).
List all the buckets your account owns:
s3cmd.rb listbuckets
Create a new bucket:
s3cmd.rb createbucket BucketName
Delete an old bucket you don’t want any more:
s3cmd.rb deletebucket BucketName
Find out what’s in a bucket, 10 lines at a time:
s3cmd.rb list BucketName 10
Only look in a particular prefix:
s3cmd.rb list BucketName:startsWithThis
I plan to write a shell script to verify success of backup and run via cron job each night, but I haven’t done it yet. I will update here when I do.
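In the meantime, here is a rough sketch of what such a nightly check could look like, built only on the s3cmd.rb list command above. The bucket name, prefix, and marker file are placeholders of mine, not anything from the README, and the exact output format of s3cmd.rb may vary, so treat it as a starting point rather than a finished tool.
#!/bin/bash
# check_backup.sh -- sketch of a nightly verification run from cron
cd /path/to/s3sync/
export AWS_ACCESS_KEY_ID=yourS3accesskey
export AWS_SECRET_ACCESS_KEY=yourS3secretkey
# list up to 200 keys in the backup bucket and look for a file
# that last night's upload should always contain
if ruby s3cmd.rb list mybucket 200 | grep -q "remotefolder/somefile.txt"; then
    echo "S3 backup check: OK"
else
    echo "S3 backup check: FAILED, somefile.txt not found in mybucket" >&2
    exit 1
fi
Run from cron, a non-zero exit and the message on stderr are enough to get your attention in the nightly mail.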
Option 2 (the original option, which I used before the s3sync command-line shell tools were available).
UPDATE: I have had trouble getting this (or any other GUI) to work for folders containing large numbers of files. If you plan to store thousands of files at Amazon, I suggest Option 1.
Download a GUI tool and make sure you can log into your S3 account, create a bucket, add files, and delete them.
I have tried a lot of them, but I prefer jets3t Cockpit. It is Java-based and open source, and it can read objects uploaded to S3 by other tools. Some tools, like Jungle Disk, create buckets and objects in a proprietary format, so from Jungle Disk you would not be able to see files that other tools uploaded to S3.
Here is a screenshot of Cockpit.
Create a bucket to store your backups in. Give your bucket a unique name, because bucket names must be unique across all users of S3. Many people recommend using your S3 Access Key ID as a prefix, for example fakeaccesskey1234.backups. For the rest of this article, I will assume the bucket name is “mybucket”.
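For example, if you prefer the command line over a GUI, the same bucket could be created with the s3cmd.rb tool from Option 1 (the keys and bucket name below are placeholders):
cd /path/to/s3sync/
export AWS_ACCESS_KEY_ID=fakeaccesskey1234
export AWS_SECRET_ACCESS_KEY=yourS3secretkey
ruby s3cmd.rb createbucket fakeaccesskey1234.backups
ruby s3cmd.rb listbuckets
listbuckets should now show the new bucket.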
Cockpit is a handy tool for monitoring your backups in S3, but the actual uploading and downloading will be done by a shell script using s3sync.
STEP 3: Install s3sync (Ruby)
s3sync is an open source Ruby script that works much like rsync, the Linux file synchronization program. Remember to read the README file that comes with s3sync. All the normal warnings apply: test this on a couple of folders and files you don’t care about, and make sure you understand what you are doing. Put the source and destination in the wrong order while using the --delete option and you could blow away all of your precious data.
Let’s move on.
The following applies to a Debian/Ubuntu-based distribution, but could easily be adapted to your own distro.
First, make sure you have Ruby 1.8.4 or greater and the OpenSSL library for Ruby:
$ sudo apt-get install ruby libopenssl-ruby
Check the Ruby version:
$ ruby -v
ruby 1.8.4 (2005-12-24) [i486-linux]
Change into the directory where you want to install s3sync, such as /home/john/s3sync.
Download and unpack s3sync:
$ wget http://s3.amazonaws.com/ServEdge_pub/s3sync/s3sync.tar.gz
$ tar xvzf s3sync.tar.gz
Clean up:
$ rm s3sync.tar.gz
Make a directory for SSL certificates and download some (important: read the README for information about these SSL certs):
$ mkdir certs
$ cd certs
$ wget http://mirbsd.mirsolutions.de/cvs.cgi/~checkout~/src/etc/ssl.certs.shar
Run this shell archive:
$ sh ssl.certs.shar
Get back into the main s3sync directory:
$ cd ..
Create two files, upload.sh and download.sh, with your favorite editor, using the following contents, and update them to suit your needs. (Important: as with rsync, trailing slashes matter; see the README for examples.)
upload.sh —————————————-
#!/bin/bash
# script to upload local directory up to S3
cd /path/to/yourshellscript/
export AWS_ACCESS_KEY_ID=yourS3accesskey
export AWS_SECRET_ACCESS_KEY=yourS3secretkey
export SSL_CERT_DIR=/your/path/to/s3sync/certs
ruby s3sync.rb -r --ssl --delete /home/john/localuploadfolder/ mybucket:/remotefolder
# copy and modify the line above for each additional folder to be synced
download.sh —————————————-
#!/bin/bash
# script to download from S3 to the local directory
cd /path/to/yourshellscript/
export AWS_ACCESS_KEY_ID=yourS3accesskey
export AWS_SECRET_ACCESS_KEY=yourS3secretkey
export SSL_CERT_DIR=/your/path/to/s3sync/certs
ruby s3sync.rb -r --ssl --delete mybucket:/remotefolder/ /home/john/localdownloadfolder
# copy and modify the line above for each additional folder to be synced
NOTICE: These scripts use the --delete option, which means s3sync will delete any file on the destination that is not on the source. Also, these shell scripts contain your Amazon secret keys, so you will want to make sure they are readable only by you (chmod 700, credit Kelvin below). You can also add the “-v” option to get a verbose account of the changes; I did this after my initial upload so I can monitor activity via the cron job emails.
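If you would rather not embed the keys in both scripts, one possible variation (not from the s3sync README, just a common shell pattern) is to keep them in a small file that only you can read and source it from each script:
# /home/john/.s3keys -- keep this file chmod 600
export AWS_ACCESS_KEY_ID=yourS3accesskey
export AWS_SECRET_ACCESS_KEY=yourS3secretkey
export SSL_CERT_DIR=/your/path/to/s3sync/certs
Then in upload.sh and download.sh, replace the three export lines with a single line:
. /home/john/.s3keys
That way only one file holds any secrets, and the scripts themselves can be shared or versioned safely.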
Create the local upload and download directories and put some test files in the upload folder:
$ mkdir localuploadfolder
$ mkdir localdownloadfolder
Change the permissions on the scripts:
$ chmod 700 upload.sh
$ chmod 700 download.sh
Test upload.sh:
$ ./upload.sh
Use s3cmd.rb or Cockpit to make sure the files made it to Amazon.
Test download.sh:
$ ./download.sh
The files you uploaded to S3 should now be in localdownloadfolder.
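As an extra sanity check on the test run (my own habit, not part of s3sync), you can diff the two local folders. Depending on how the trailing slashes worked out, the downloaded files may sit one directory deeper, so adjust the paths if needed:
$ diff -r /home/john/localuploadfolder /home/john/localdownloadfolder
No output means the files survived the round trip through S3 unchanged.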
Once you are confident everything is working and you understand what you are doing, change the shell scripts to back up your actual folders. Run the scripts manually first to make sure everything works properly. Remember, the upload script is limited by your ISP’s upload speed, which can be very slow. With a typical cable connection upload speed of 384 kbps, 1 GB is roughly 8.6 gigabits, so uploading it takes about 6 hours. Download speeds are usually much faster, roughly 1 GB in 20 minutes, but hopefully you will never need them.
STEP 4: Set up a cron job to run the backup script once a week, month, etc.
Once you are sure the script is working for your uploads, you can automate the task by creating a cron job that runs once a day, week, or month. I run mine once a week, because I already do nightly local backups to my desktop machine using rsync.
$ crontab -e
Add the following line:
30 2 * * sun /path/to/upload.sh
Save and exit.
Obviously, keep monitoring it to make sure everything is working.
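One low-effort way to do that is to let cron email you the script output. This is a standard cron feature rather than anything specific to s3sync, and it assumes your server can send outgoing mail. With the “-v” option mentioned above, the weekly email becomes a small change log:
MAILTO=you@example.com
30 2 * * sun /path/to/upload.sh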
STEP 5: Kick back and relax
Now you can relax. If your laptop battery explodes and burns down your house, you know your data is safe on Amazon’s geo-redundant servers, sitting right between some bits describing a new book from Oprah and a bad review of the latest Ben Affleck movie!
Feel free to leave a comment if you find this useful, incorrect, or just plain uninteresting.
UPDATE 1: One additional step I took was to create one more bucket, where I uploaded all the code/scripts necessary to restore my files using s3sync (minus my S3 credentials).
UPDATE 2: I have changed chmod 755 to chmod 700 so the scripts are not readable by everyone (credit Kelvin below). I have also updated the information about the tools I use: I no longer use Cockpit to verify success, but mostly rely on the s3sync command-line tools that were not available at the time I wrote the original article.
UPDATE 3: I never gave enough credit to the actual author of s3sync. Without him, this entire process would not be possible. Thanks again.
from http://blog.eberly.org/2006/10/09/how-automate-your-backup-to-amazon-s3-using-s3sync/
--------------------------
How I automated my backups to Amazon S3 using rsync and s3fs
The following is how I automated my backups to Amazon S3 in about 5 minutes.
A lot has changed since my original post on automating my backups to S3 using s3sync. There are more mature and easier-to-use solutions now. I am switching because s3fs gives you many more options for using S3, it is easier to set up, and it is faster.
I now use a combination of s3fs, to mount an S3 bucket as a local directory, and rsync, to keep my files up to date. The following directions are geared toward Ubuntu Linux but could be modified for any Linux distribution or Mac OS X.
STEP 1: Install s3fs
The first step is to install the s3fs dependencies (assuming Ubuntu):
sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev libfuse-dev
Next, install the most recent version of s3fs. As of now the most recent is r177, but a quick check of the s3fs downloads page will show the latest.
wget http://s3fs.googlecode.com/files/s3fs-r177-source.tar.gz
tar -xzf s3fs*
cd s3fs
make
sudo make install
sudo mkdir /mnt/s3
sudo chown yourusername:yourusername /mnt/s3
STEP 2: Create a script to mount your Amazon S3 bucket using s3fs and sync files.
The following assumes you already have a bucket created on Amazon S3. If not, you can use a tool like S3Fox to create one.
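Alternatively, if you still have the s3sync command-line tools from the earlier post installed, the bucket can be created from the shell instead (keys and bucket name are placeholders):
cd /path/to/s3sync/
export AWS_ACCESS_KEY_ID=yourS3key
export AWS_SECRET_ACCESS_KEY=yourS3secretkey
ruby s3cmd.rb createbucket yourbucket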
Using the text editor of your choice, make a shell script that mounts your bucket, performs the rsync, then unmounts. It is not necessary to unmount your S3 directory after each rsync, but I prefer to be safe: one mistake, like an ‘rm’ on your root directory, could wipe all of the files on your machine and on your S3 mount. You should probably start with a test directory to be safe.
Make the file s3fs.sh
#!/bin/bash
/usr/bin/s3fs yourbucket -o accessKeyId=yourS3key -o secretAccessKey=yourS3secretkey /mnt/s3
/usr/bin/rsync -avz --delete /home/username/dir/you/want/to/backup /mnt/s3
/bin/umount /mnt/s3
Note the --delete option: it will delete any files from the S3 copy that have been removed on the source.
Change permissions to make executable
chmod 700 s3fs.sh
Before you run the entire script, you might want to run each line separately to make sure everything is working properly. The paths to rsync and umount might be different on your system (use ‘which rsync’ to check). Just for fun, I ran ‘df -h’, which showed I now have 256 terabytes available on the S3 mount!
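For extra safety, you could also have the script refuse to rsync when the mount did not actually succeed, so a failed mount never turns into an rsync against a plain, empty /mnt/s3 directory. This is a defensive variation of my own, not part of s3fs; it assumes the mountpoint utility, which ships with Ubuntu:
#!/bin/bash
# s3fs.sh with a guard: only rsync if /mnt/s3 is really a mounted filesystem
/usr/bin/s3fs yourbucket -o accessKeyId=yourS3key -o secretAccessKey=yourS3secretkey /mnt/s3
if mountpoint -q /mnt/s3; then
    /usr/bin/rsync -avz --delete /home/username/dir/you/want/to/backup /mnt/s3
    /bin/umount /mnt/s3
else
    echo "s3fs mount of /mnt/s3 failed, skipping rsync" >&2
    exit 1
fi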
Next, run the script and let it do its work. This could take a long time depending on how much data you are uploading initially. Your internet upload speed will be the bottleneck.
sudo ./s3fs.sh
That’s it! You are backing up to Amazon S3. You will probably want to automate this with cron once you are sure everything is running OK. For the simplicity of this tutorial, let’s assume you are setting up the cron job as root, so we don’t need to worry about permissions for mounting and unmounting the directory.
STEP 3: Automate it with cron
sudo su
crontab -e
0 0 * * * /path/to/s3fs.sh # this runs it every day at midnight
P.S. I use this in combination with hourly backups to a second local machine using git, so I have revision history. I only back up nightly to S3, without revision history, in case my house burns down, etc. If you would like to know how I set up my git backups locally, just leave a comment and I can write a follow-up post.
from http://blog.eberly.org/2008/10/27/how-i-automated-my-backups-to-amazon-s3-using-rsync/
----------------------------
A List of Amazon S3 Backup Tools
I've been collecting a list of Amazon S3 compatible backup tools to look at. Here's what I've discovered, followed by my requirements.
The List
I've evaluated exactly zero of these so far. That's next.
- s3sync.rb is written in Ruby as a sort of rsync clone to replace the Perl script s3sync, which is now abandonware. Given that I already use rsync for much of my backup system, this is highly appealing.
- Backup Manager appears to now have S3 support as of version 0.7.3. It's a command-line tool for Linux (and likely other Unix-like systems).
- s3DAV isn't exactly a backup tool. It provides a WebDAV front-end (or "virtual filesystem") to S3 storage, so you could use many other backup tools with S3. Recent versions of Windows and Mac OS have WebDAV support built in. Java is required for s3DAV.
- S3 Backup is an Open Source tool for backing up to S3. It's currently available only for Windows. Mac and Linux versions appear to be planned. The UI is built on wxWidgets.
- duplicity is a free Unix tool that uses S3 and the librsync library. It is written in Python but not considered suitable for backing up important data quite yet.
- S3 Solutions is a list of other S3 related tools on the Amazon Developer Connection.
- Brackup is a backup tool written by Brad Fitzpatrick (of LiveJournal, SixApart, memcached, perlbal, etc...). It's written in Perl, fairly new, and doesn't have a lot in the way of documentation yet.
- Jungle Disk provides clients for Mac, Windows, and Linux. It also offers a local WebDAV server.
- DragonDisk has Linux and Windows clients.
Are there other S3 tools that I'm missing?
Also, I've found that Amazon's S3 forum is quite helpful. The discussion there is generally of good quality and the software does the job nicely. Perhaps we should do something similar for YDN instead of using Yahoo! Groups?
My Requirements
Most of what I need to back up lives on Linux servers in a few colocation facilities around the country (Bowling Green, Ohio; San Jose, California; San Francisco, California). My laptop and desktop Windows boxes have USB backups and are already synced automatically to a Unix box on a regular basis using the excellent SyncBack SE, so I don't need to re-solve that problem.
I don't really need a fancy GUI. I'm really looking for a standalone tool that's designed to work with S3 and keep bandwidth usage to a minimum. Alternatively, something that works at a lower level (such as a filesystem driver) to provide a "virtual drive" type of interface might work as well.
from http://jeremy.zawodny.com/blog/archives/007641.html