How Storage Works on Amazon

by Rob Linton | July 2011 | Enterprise Articles

Amazon Web Services is an Infrastructure as a Service (IaaS) platform in the Cloud that businesses can take advantage of as their needs demand. The Amazon Cloud provides the enterprise with the flexibility to choose whichever solution is required to solve specific problems, ultimately reducing costs by paying only for what you use.

In this article by Rob Linton, author of Amazon Web Services: Migrate your .NET Enterprise Application to the Amazon Cloud: RAW, we will look at how Amazon manages storage: the differences between S3 and EBS storage, how to implement both using the AWS Console and the AWS command line, and how to create the storage locations that we will need for our sample application.

 


Creating an S3 bucket with logging

Logging provides detailed information on who accessed what data in your bucket and when. However, before you can turn on logging for a bucket, another bucket must already exist to hold the logging information, as this is where AWS stores it. To create a bucket with logging, click on the Create Bucket button in the Buckets sidebar:

[Screenshot: the Create Bucket dialog]

This time, however, click on the Set Up Logging button. You will be presented with a dialog that allows you to choose the location for the logging information, as well as the prefix for your logging data:

[Screenshot: the Set Up Logging dialog]

You will note that we have pointed the logging information back at the original bucket, migrate_to_aws_01.

Logging information will not appear immediately; a file will be created every few minutes, depending on activity. The following screenshot shows an example of the files that are created:

[Screenshot: log files created in the bucket]

Before jumping right into the command-line tools, it should be noted that the AWS Console includes a Java-based multi-file upload utility that allows a maximum size of 300 MB for each file.

Using the S3 command-line tools

Unfortunately, Amazon does not provide official command-line tools for S3 similar to the tools it provides for EC2. However, there is an excellent, simple, free utility available at http://s3.codeplex.com called S3.exe, which requires no installation and has no third-party dependencies.

To install the program, just download it from the website and copy it to your C:\AWS folder.

Setting up your credentials with S3.exe

Before we can run S3.exe, we first need to set up our credentials. To do that, you will need to get your S3 Access Key and your S3 Secret Access Key from the credentials page of your AWS account. Browse to https://aws-portal.amazon.com/gp/aws/developer/account/index.html?ie=UTF8&action=access-key and scroll down to the Access Credentials section:

[Screenshot: the Access Credentials section]

The Access Key is displayed in this screen; however, to get your Secret Access Key you will need to click on the Show link under the Secret Access Key heading.

Run the following command to set up S3.exe:

C:\AWS>s3 auth AKIAIIJXIP5XC6NW3KTQ 9UpktBlqDroY5C4Q7OnlF1pNXtK332TslYFsWy9R

[Screenshot: output of the s3 auth command]

To check that the tool has been installed correctly, run the s3 list command:

C:\AWS>s3 list

You should get the following result:

[Screenshot: output of the s3 list command]

Copying files to S3 using S3.exe

First, create a file called myfile.txt in the C:\AWS directory.

To copy this file to an S3 bucket that you own, use the following command:

C:\AWS>s3 put migrate_to_aws_02 myfile.txt

[Screenshot: output of the s3 put command]

This command copies the file to the migrate_to_aws_02 bucket with the default permissions of full control for the owner.

You will need to refresh the AWS Console to see the file listed.
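You can also verify the upload from the command line. A quick check, assuming the list command accepts a bucket name in the same way that the put and get commands shown in this article do (an assumption worth verifying against your copy of S3.exe):

C:\AWS>s3 list migrate_to_aws_02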


Uploading larger files to AWS can be problematic, as any network connectivity issues during the upload will terminate the upload. To upload larger files, use the following syntax:

C:\AWS>s3 put migrate_to_aws_02/mybigfile/ mybigfile.txt /big

[Screenshot: chunked upload output]

This breaks the upload into small chunks, which are recombined when you get the file back again.

If you run the same command again, you will note that no chunks are uploaded. This is because S3.exe does not upload a chunk again if the checksum matches.

Retrieving files from S3 using S3.exe

Retrieving files from S3 is the reverse of copying files up to S3.

To get a single file back use:

C:\AWS>s3 get migrate_to_aws_02/myfile.txt

To get our big file back again use:

C:\AWS>s3 get migrate_to_aws_02/mybigfile/mybigfile.txt /big

[Screenshot: chunked download output]

The S3.exe command automatically recombines our large file chunks back into a single file.

Importing and exporting large amounts of data in and out of S3

Because S3 lives in the cloud within Amazon's data centers, it may be costly and time consuming to transfer large amounts of data between Amazon's data center and your own. An example of a large file transfer may be a large database backup file that you wish to migrate from your own data center to AWS.

Luckily for us, Amazon provides the AWS Import/Export Service for the US Standard and EU (Ireland) regions. However, this service is not supported for the other two regions at this time.

The AWS Import service allows you to place your data on a portable hard drive and physically mail your hard disk to Amazon, so that your data can be uploaded or downloaded from within Amazon's data center.

Amazon provides the following recommendations for when to use this service:

  • If your connection is 1.55 Mbps and your data is 100 GB or more
  • If your connection is 10 Mbps and your data is 600 GB or more
  • If your connection is 44.736 Mbps and your data is 2 TB or more
  • If your connection is 100 Mbps and your data is 5 TB or more
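As a rough sanity check of these figures: a 10 Mbps link can sustain at most about 1.25 MB per second, so 600 GB (roughly 614,400 MB) works out to 614,400 / 1.25 ≈ 491,520 seconds, or around five and a half days of continuous transfer, before allowing for protocol overhead and retries. At these sizes, shipping a disk is usually both faster and cheaper.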

If you choose either the US West (California) or Asia Pacific (Singapore) regions, make sure that you do not need access to the AWS Import/Export service, as it is not available in these regions.

Setting up the Import/Export service

To begin using this service, you will once again need to sign up for it separately from your other services. Click on the Sign Up for AWS Import/Export button located on the product page at http://aws.amazon.com/importexport, confirm the pricing, and click on the Complete Sign Up button.

Once again, you will need to wait for the service to become active:

[Screenshot: the AWS Import/Export service sign-up confirmation]

Current costs are:

Cost Type          US East                      US West                      EU                           APAC
Device handling    $80.00                       $80.00                       $80.00                       $99.00
Data loading time  $2.49 per data loading hour  $2.49 per data loading hour  $2.49 per data loading hour  $2.99 per data loading hour

Using the Import/Export service

To use the Import/Export service, first make sure that your external disk device conforms to Amazon's specifications.

Confirming your device specifications

The details are specified at http://aws.amazon.com/importexport/#supported_devices, but essentially, as long as it is a standard external USB 2.0 hard drive or a rack-mountable device no larger than 8U that supports eSATA, you will have no problems.

Remember to supply a US power plug adapter if you are not located in the United States.

Downloading and installing the command-line service tool

Once you have confirmed that your device meets Amazon's specifications, download the command-line tools for the Import/Export service. At this time, it is not possible to use this service from the AWS Console. The tools are located at http://awsimportexport.s3.amazonaws.com/importexport-webservice-tool.zip.

Copy the .zip file to the C:\AWS directory and unzip it; the contents will most likely end up in the C:\AWS\importexport-webservice-tool directory.

Creating a job

  1. To create a job, change directory to the C:\AWS\importexport-webservice-tool directory, open Notepad, and paste the following text into a new file:

    manifestVersion: 2.0
    bucket: migrate_to_aws_01
    accessKeyId: AKIAIIJXIP5XC6NW3KTQ
    deviceId: 12345678
    eraseDevice: no
    returnAddress:
        name: Rob Linton
        street1: Level 1, Migrate St
        city: Amazon City
        stateOrProvince: Amazon
        postalCode: 1000
        phoneNumber: 12345678
        country: Amazonia
    customs:
        dataDescription: Test Data
        encryptedData: yes
        encryptionClassification: 5D992
        exportCertifierName: Rob Linton
        requiresExportLicense: no
        deviceValue: 250.00
        deviceCountryOfOrigin: China
        deviceType: externalStorageDevice

  2. Edit the text to reflect your own postal address, accessKeyId, and bucket name, and save the file as MyManifest.txt. For more information on the customs configuration items, refer to http://docs.amazonwebservices.com/AWSImportExport/latest/DG/index.html?ManifestFileRef_international.html.

    If you are located outside of the United States, a customs section in the manifest is a requirement.

  3. In the same folder, open the AWSCredentials.properties file in Notepad, and copy and paste in both your AWS Access Key ID and your AWS Secret Access Key. The file should look like this:

    # Fill in your AWS Access Key ID and Secret Access Key
    # http://aws.amazon.com/security-credentials
    accessKeyId:AKIAIIJXIP5XC6NW3KTQ
    secretKey:9UpktBlqDroY5C4Q7OnlF1pNXtK332TslYFsWy9R

  4. Now that you have created the required files, run the following command in the same directory:

    C:\AWS\importexport-webservice-tool>java -jar lib/AWSImportExportWebServiceTool-1.0.jar CreateJob Import MyManifest.txt .


Your job will be created along with a .SIGNATURE file in the same directory.

Copying the data to your disk device

Now you are ready to copy your data to your external disk device. However, before you start, it is mandatory to copy the .SIGNATURE file created in the previous step into the root directory of your disk device.

Sending your disk device

Once your data and the .SIGNATURE file have been copied to your disk device, print out the packing slip and fill out the details. The JOBID can be obtained from the output of your earlier create job request; in our example, the JOBID is XHNHC. The DEVICE IDENTIFIER is the device serial number that was entered into the manifest file; in our example, it was 12345678.

[Screenshot: the completed packing slip]

The packing slip must be enclosed in the package used to send your disk device.

 

Each package can have only one storage device and one packing slip; multiple storage devices must be sent separately.

Address the package with the address output in the create job request:

AWS Import/Export
JOBID XHNHC
2646 Rainier Ave South Suite 1060
Seattle, WA 98144

Please note that this address may change depending on which region you are sending your data to. The correct address will always be returned by the CreateJob command in the AWS Import/Export tool.

Managing your Import/Export jobs

Once your job has been submitted, the only way to get the current status of your job, or to modify it, is to run the AWS Import/Export command-line tool. Here is an example of how to list your jobs and how to cancel a job.

To get a list of your current jobs, you can run the following command:

C:\AWS\importexport-webservice-tool>java -jar lib/AWSImportExportWebServiceTool-1.0.jar ListJobs

To cancel a job, you can run the following command:

C:\AWS\importexport-webservice-tool>java -jar lib/AWSImportExportWebServiceTool-1.0.jar CancelJob XHNHC
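The tool also provides a GetStatus command for querying the detailed status of a single job. The invocation below assumes it takes the job ID, mirroring CancelJob above; verify against your version of the tool:

C:\AWS\importexport-webservice-tool>java -jar lib/AWSImportExportWebServiceTool-1.0.jar GetStatus XHNHC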



Accessing S3 using third-party tools

When S3 was first introduced, the only way to access it was via the S3 API.

Since then, however, a number of tools have been developed to make it easier to access objects stored in S3. One of the most recent is the AWS Console. Another tool is S3Fox (http://www.s3fox.net), a plugin for the Firefox browser. However, both of these tools are graphical front ends to S3, and are not integrated into Windows itself.

There are also commercial products that present S3 as a virtual filesystem under Windows, which are worth a look if you would like S3 to appear as an ordinary drive.

Getting started with EBS

Elastic Block Store (EBS) provides EC2 instances with access to persistent disks that can be formatted with a filesystem. EBS is fast (much faster than S3) and does not have the same limitations surrounding access and naming that S3 has.

Creating an EBS volume

  1. To create an EBS volume, click on the EC2 tab in the AWS console, click on the Volumes link in the Navigation section, and then click on the Create Volume button.
  2. The first thing you will notice in the Create Volume dialog is that you must select an Availability Zone. This is because EBS volumes are locked to a particular availability zone and cannot be accessed from an EC2 instance in any other zone.

    [Screenshot: the Create Volume dialog]

  3. Leave the Snapshot field blank for the moment and click on Create.
  4. The Volume will appear with a status of creating in the status window:

    [Screenshot: the volume listed with a status of creating]

  5. When it is available, its status will change to available. Before we go any further, make sure you have at least one EC2 instance running in the same availability zone so that we can attach the new volume to it.
  6. Right-click on the AWS Volume and select Attach Volume:

    [Screenshot: the Attach Volume menu item]

  7. Once you have selected Attach Volume, the following dialog will appear:

    [Screenshot: the Attach Volume dialog]

  8. You can see straightaway that the Instance field does not show the Tag Name, but instead only shows the Instance Id. So before you go ahead and attach this volume, make sure that you write down the instance ID of the instance that you would like to attach this volume to.
  9. The Device field is the Windows device name that the volume will be attached to. Note that the dialog states that the allowable devices are xvdf through xvdp; attaching any device outside this range will result in the volume not attaching successfully.

    The maximum number of disks that can be attached to a Windows EC2 instance is 11 (xvdf through xvdp).

  10. After you have selected Attach, it will take a moment for the disk to become active on the selected instance.

    It is important to note that a given EBS volume can be attached to only one EC2 instance at a time; however, a single EC2 instance can have multiple EBS volumes attached to it.

  11. The first time the disk is attached to an instance, you may find that the disk is Offline in disk manager; you will need to bring the disk online and initialize it prior to creating a partition and formatting it:

    [Screenshot: bringing the disk online in disk manager]

  12. When assigning a drive letter, it is best to be consistent; I like to assign the same drive letter as the device name, for example, F:\ for the xvdf device. Note that I have set the drive label to F-DRIVE in this case:

    [Screenshot: assigning the drive letter and label]

  13. To ensure that the same drive mapping occurs next time the disk is mounted, update the disk mapping in the EC2 Service Properties.

    [Screenshot: the EC2 Service Properties dialog]

  14. Click on the Mappings button and create entries for the disks that you will be attaching.
  15. This is an example of the drive mappings we will be using for our sample application:

    [Screenshot: the drive letter mappings for our sample application]

Note that the Volume Names have been set to match the names that were set in the disk manager.

By setting the Drive Letter Mappings in the EC2 Service Properties, we can ensure that volumes will always be mapped to the same drive letters.

Creating an EBS snapshot

Once we have attached an EBS volume to our running EC2 instance, the drive can be accessed just like any other hard drive or SAN volume. Now that the F drive has been attached in our example, we are going to edit some data and create a snapshot, from which a copy of the volume can be created and attached to another instance in a different availability zone.

  1. To create a snapshot, first create a text file, F:\myfile.txt, and edit it so that it contains the value Snapshot One:

    [Screenshot: F:\myfile.txt containing the text Snapshot One]

  2. We will use this file to demonstrate the snapshot process.
  3. Now right-click on the Volume in the AWS Console and select Create Snapshot from the pop-up menu as shown in the next screenshot.

    Windows volumes have a disk cache, which is kept in memory, so if a snapshot is taken while the Windows server is still running, the data on the disk may not be consistent. If possible, do not take a snapshot of an EBS volume on a Windows server unless the server is in a stopped state.

    [Screenshot: the Create Snapshot menu item]

  4. The dialog that pops up will require you to give the snapshot a name; in this case, we have named our snapshot Snapshot One:

    [Screenshot: naming the snapshot]

  5. The snapshot will then be queued. To see all of the current snapshots, click on the Snapshots link in the Navigator pane in the AWS Console:

    [Screenshot: the Snapshots list]

    Please note that AWS uses snapshots to keep track of your AMI bundles, so you will see more snapshots in this list than just the ones created by you.

  6. Now that we have a snapshot (Snapshot One), right-click on the snapshot and select Create Volume from Snapshot in the pop-up menu:

    [Screenshot: the Create Volume from Snapshot menu item]

  7. When creating the volume, you will have the option of selecting a different availability zone:

    [Screenshot: selecting an availability zone for the new volume]

  8. This time, select the us-east-1b availability zone and click on Create. The new volume will be created in the new availability zone and will be an exact copy of the original volume at the time the snapshot was initiated:

    [Screenshot: the new volume in the us-east-1b availability zone]

  9. When the volume has finished creating, attempt to attach it to the original instance that it was created from. You will note that this is not possible, as the new volume is now in a different availability zone from the original volume.

    To migrate volumes between availability zones, create a snapshot from the volume in the source availability zone, and re-create the volume from the snapshot in the new availability zone.

  10. Start an instance in the same availability zone as our new volume and attach the volume to it.

    The first time the volume is attached to an instance after being created from a snapshot, the volume will need to be set Online in disk manager.

An important note about EBS

EBS volumes do not lose their data when they are detached from an EC2 instance; the data persists on the EBS volume. It is only when the EBS volume is deleted that the data is destroyed. Make sure to create a snapshot of a volume before it is deleted; that way, if you accidentally delete the wrong volume, you can recover by recreating it from the snapshot.

Using the EBS command-line tools

EBS volumes can be managed with the EC2 command-line tools.

To create an EBS volume, run the following command:

C:\AWS>ec2-create-volume --size 20 --availability-zone us-east-1a

This creates a volume 20 GB in size in the us-east-1a availability zone and returns the volume ID.
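The output of the command will resemble the following (illustrative only; your volume ID and timestamp will differ):

VOLUME  vol-59da5c32  20  us-east-1a  creating  2011-07-20T10:00:00+0000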

To attach a volume to an instance, run the following command:

C:\AWS> ec2-attach-volume vol-59da5c32 -i i-b73186db -d xvdf

This attaches volume vol-59da5c32 to instance i-b73186db on device xvdf.

To detach the volume, run the following command:

C:\AWS>ec2-detach-volume vol-59da5c32

Finally, to delete the volume, run the following command:

C:\AWS>ec2-delete-volume vol-59da5c32
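The same toolset can also create snapshots, which is worth doing before any ec2-delete-volume. A minimal sketch (the description text is illustrative):

C:\AWS>ec2-create-snapshot vol-59da5c32 -d "Backup before deleting volume"
C:\AWS>ec2-describe-snapshots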


Setting up storage for our sample application

Now that we have covered S3 and EBS in detail, it's time to use these services to create the storage for our sample application. In our sample application, we have two web servers, two application servers, two database servers, and one domain controller.

Group                Volumes
Web Servers          Data - 10 GB
Application Servers  Data - 10 GB
Database Servers     Data - 200 GB, Log - 200 GB, Backup - 200 GB
Domain Controller    Data - 10 GB

For our web servers, we will be creating a small EBS volume of 10 GB in size for each web server. We will use this volume to store the actual website data, which will be linked to IIS via a virtual directory, as sketched below.
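The linking step could be done with the appcmd utility that ships with IIS 7. A sketch, in which the site name, virtual path, and physical path are illustrative assumptions for our sample application:

C:\AWS>%windir%\system32\inetsrv\appcmd add vdir /app.name:"Default Web Site/" /path:/data /physicalPath:F:\websitedata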


For our application servers, we will be creating a small EBS volume of 10 GB in size.


For our database servers, we will be creating three 200 GB volumes for each server: data, log, and backup.


For our domain controller server, we will be creating a small EBS volume of 10 GB in size.

Backup storage on S3

Along with the previous EBS storage, we will be creating a shared backup storage bucket on S3. The name for this storage location will be migrate_to_aws_backup. Our database backups will be copied to this location, as sketched below.
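A sketch of such a copy using the S3.exe utility from earlier in this article (the backup file name and key prefix are illustrative; the /big switch chunks the large file as described previously):

C:\AWS>s3 put migrate_to_aws_backup/backups/ mydb.bak /big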

Summary

In this article, we looked at AWS storage in detail. We learned about the differences between EBS and S3 and how to create and manage storage on both of these services. We also covered how to import large amounts of data into S3, and created the storage volumes that we will need for our sample enterprise application.



About the Author:


Rob Linton

Rob Linton is the CTO and co-founder of LogicalTech SysTalk, a successful integration company based in Melbourne, Australia. He has been a database professional for the past 15 years, and for the 5 years before that was a spatial information systems professional, making him a data specialist for over 20 years.

He is a certified Security Systems ISO 27001 auditor and more recently has been specializing in cloud data persistence and security. He is a certified DBA, is proficient in both Oracle and Microsoft SQL Server, and is a past Vice President of the Oracle User Group in Melbourne, Australia.

In his spare time, he enjoys coding in C++ on his MacBook Pro and chasing his kids away from things that break relatively easily.
