Category Archives: Cloud scripts

Setting up the Amazon EC2 Tools on Linux

To download the EC2 Tools you can use this page from the Amazon Developer Tools site, which links to the zip file in S3.

The Tools require Java, at least version 1.6, and can be uncompressed to any directory.

The tools read your credentials from the AWS_ACCESS_KEY and AWS_SECRET_KEY environment variables and pass them to the Amazon API with every request.

Your AWS_ACCESS_KEY can be seen in the Security Credentials section of your AWS account.

Edit your ~/.bashrc to add the exports:

export AWS_ACCESS_KEY=your-aws-access-key-id
export AWS_SECRET_KEY=your-aws-secret-key

Run:

source ~/.bashrc

Please note that the Tools commands also support passing the credentials at execution time with the params:

    -O, --aws-access-key KEY
    -W, --aws-secret-key KEY

Export the EC2_HOME variable, you can also add to ~/.bashrc:

export EC2_HOME=/home/cmips/ec2-tools

Also add $EC2_HOME/bin to the PATH and export it:

export PATH=$PATH:$EC2_HOME/bin

Export the JAVA_HOME dir:

export JAVA_HOME=/usr

That means your java binary is in /usr/bin.
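
Putting the exports together, the whole addition to ~/.bashrc can be sketched like this (the paths are the example values used above; adjust them to your system):

```shell
# Credentials the Tools read from the environment (placeholders, use your own)
export AWS_ACCESS_KEY=your-aws-access-key-id
export AWS_SECRET_KEY=your-aws-secret-key

# Where the Tools were uncompressed and where java lives (example paths)
export EC2_HOME=/home/cmips/ec2-tools
export JAVA_HOME=/usr
export PATH=$PATH:$EC2_HOME/bin

# Quick sanity check that everything resolved where the Tools expect it
echo "EC2 bin dir: $EC2_HOME/bin"
echo "java expected at: $JAVA_HOME/bin/java"
```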

Test the Tools with: ec2-describe-regions


Then use the parameter --url or -U to specify the URL of the region you want to query.

You can also set the EC2_URL environment variable if you prefer.

Eg:

    --url https://ec2.us-east-1.amazonaws.com

Alternatively you can use the parameter --region

Eg:

    --region us-east-1

--region overrides the value of --url
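
Since --region is essentially shorthand for the regional endpoint, the mapping between the two parameters can be sketched as (us-east-1 is just an example):

```shell
# Build the endpoint URL that --url would take, starting from a region name
region=us-east-1
url="https://ec2.${region}.amazonaws.com"
echo "$url"
```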

Other useful parameters are:

    --auth-dry-run or -D

To test the request instead of performing the action.

    --verbose or -v

So let’s test the whole thing:

ec2-create-keypair --region us-east-1 cmips-test-key


The first text, starting with ca:75:f9…, is called the fingerprint and is a checksum of the KEY.

From this PRIVATE KEY (or .pem file) you can generate your public key:

ssh-keygen -y
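
For example, generating a throwaway key locally and deriving its public half with -y (the file names here are hypothetical; with a real key you would point -f at the .pem file Amazon returned):

```shell
# Clean up any previous run, then create a local RSA key with no passphrase
rm -f /tmp/demo-key /tmp/demo-key.pub
ssh-keygen -q -t rsa -b 2048 -N "" -f /tmp/demo-key

# Derive the public key from the private one, as you would from the .pem
ssh-keygen -y -f /tmp/demo-key > /tmp/demo-key.pub
keytype=$(awk '{print $1}' /tmp/demo-key.pub)
echo "$keytype"
```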

You can list all your instances with:

ec2-describe-instances

Or the images available to you with:

ec2-describe-images

The images available to you can be public, your own images (private), or images that another AWS account has granted your account permission to launch (explicit).

You can describe AMIs (Amazon Machine Images), AKIs (Amazon Kernel Images) and ARIs (Amazon Ramdisk Images).

You can request all the images available to you (it will take several seconds as there are thousands):

ec2-describe-images --all


Or your own images:

ec2-describe-images --owner self

For owner you can specify any of these values:

amazon | aws-marketplace | self | AWS account ID | all

The second column is the AMI ID, which is unique per region.

Link to the Amazon’s documentation for ec2-describe-images.

You can also find the list of official AMIs for Ubuntu here:

http://cloud-images.ubuntu.com/locator/ec2/


You can launch one or more instances (create new) with:

ec2run

Pass it the AMI ID of the image (the base image for the virtual disk) to launch.

For example:

ec2-run-instances ami-4bb39522 -O AWS_ACCESS_KEY -W AWS_SECRET_KEY

You can also import instances and create your own images:

ec2-import-instance
ec2-upload-disk-image

For example:

ec2-import-instance ./LinuxSvr13-10-disk1.vmdk -f VMDK -t hi1.4xlarge -a x86_64 -b myawsbucket -o AKIAIOSFODNN7SAMPLE -w wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Requesting volume size: 25 GB
Disk image format: Stream-optimized VMDK
Converted volume size: 26843545600 bytes (25.00 GiB)
Requested EBS volume size: 26843545600 bytes (25.00 GiB)
TaskType        IMPORTINSTANCE  TaskId  import-i-fhbx6hua       ExpirationTime  2011-09-09T15:03:38+00:00       Status  active  StatusMessage   Pending InstanceID      i-6ced060c
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   5070303744      VolumeSize      25      AvailabilityZone        us-east-1c      ApproximateBytesConverted       0       Status  active StatusMessage    Pending
Creating new manifest at testImport/9cba4345-b73e-4469-8106-2756a9f5a077/Linux_2008_Server_13_10_EE_64.vmdkmanifest.xml
Uploading the manifest file
Uploading 5070303744 bytes across 484 parts
0% |--------------------------------------------------| 100%
   |==================================================|
Done

Here you can find complete documentation on how to export instances/disks from Citrix, Microsoft Hyper-V and VMware.

Why is it important to know the APIs of the Cloud providers?

In order to automate tasks and to measure the time the actions take to complete.
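
A minimal sketch of that kind of measurement, timing an arbitrary action with Unix timestamps (the sleep stands in for whatever API call you are measuring):

```shell
# Take a timestamp before and after the action and compute the difference
start=$(date +%s)
sleep 1               # stand-in for the action being measured
end=$(date +%s)
elapsed=$((end - start))
echo "elapsed: $elapsed seconds"
```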

Most Cloud providers have their own APIs, while the smaller ones don’t.

Apache Libcloud provides a unified API for many providers, even for some of the providers that don't offer a direct API.

You can see the complete list of providers, and supported functionalities in:

http://libcloud.apache.org/supported_providers.html

 

How CMIPS cloud-init tests are done

To test aspects like the time that a server takes to become available, two approaches can be used:

1) Manually (laborious): launch the instance creation order (from the web console or via an API call) and start a chronometer

Then keep polling the instance id until you get the public DNS name or IP, ping it to know when the interface is up, then keep trying to access it via ssh.

Stop the chronometer…

2) Go more pro and automate the test through the Cloud-Init procedure

That is, specifying your script, which will be executed when the instance starts.

This is done in Amazon through the User data.

There you can provide your scripts in plain text or base64, or add them as a file, and they are executed as root.
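
For the base64 variant, the script can be encoded from the command line before pasting it into the User data field (the file names are examples):

```shell
# Write a minimal user-data script and base64-encode it
printf '#!/bin/sh\necho cmips-test\n' > /tmp/user-data.sh
encoded=$(base64 /tmp/user-data.sh)
echo "$encoded"

# Decoding it back recovers the original script
echo "$encoded" | base64 -d
```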

In our case I created scripts to automate the tests and save time, while being more accurate.

Sample User data script for cmips tests:

 

#!/bin/sh
# cmips v.1.0.3 cloud init execution tests

# Define routes
file_name=cmips-speed-test.000

# Complete Path, on cloud-init through user data $HOME is empty, so data will be at /
# user data script is executed as root, so no problem of permissions
file_route=$HOME/$file_name

# Get the time when the server is up
date_server_up=`date +"%Y-%m-%d %k:%M:%S:%N"`
date_server_up_unix_time=`date +"%s"`

# In case invoked from command line, show some info
echo "Using logfile $file_route.log Server up: $date_server_up Unix Time: $date_server_up_unix_time"
echo "-----------------------------------------------------------------------------------" >> $file_route.log
echo "Server up: $date_server_up Unix Time: $date_server_up_unix_time" >> $file_route.log

# Add packages you want
# Use -y so apt-get does not prompt: cloud-init runs non-interactively
apt-get install -y htop >> $file_route.log
apt-get install -y git >> $file_route.log

# Here you can add packages like mysql, apache, php... and monitor the time
# You can also clone from github your source code to deploy your web

date_end_packages_install=`date +"%Y-%m-%d %k:%M:%S:%N"`
date_end_packages_install_unix_time=`date +"%s"`
echo "Packages finished installing at $date_end_packages_install Unix Time: $date_end_packages_install_unix_time" >> $file_route.log

# Do Connection Speed tests
# ...

# Do cmips tests
# ...

# Get start of time for disk speed calculations
date_start_dd_unix_time=`date +"%s"`
date_start_dd=`date +"%Y-%m-%d %k:%M:%S:%N"`

echo "Starting cmips dd tests at $date_start_dd Unix time: $date_start_dd_unix_time"
echo "Starting cmips dd tests at $date_start_dd Unix time: $date_start_dd_unix_time" >> $file_route.log

# dd reports its stats on stderr, so redirect both streams to the log
dd if=/dev/zero of=$file_route bs=4M count=64 >> $file_route.log 2>&1 ; sync

date_end_dd_unix_time=`date +"%s"`
date_end_dd=`date +"%Y-%m-%d %k:%M:%S"`
total_seconds=`expr $date_end_dd_unix_time - $date_start_dd_unix_time`

echo "Ending cmips dd tests at $date_end_dd Unix time: $date_end_dd_unix_time Total seconds dd with sync: $total_seconds"
echo "Ending cmips dd tests at $date_end_dd Unix time: $date_end_dd_unix_time Total seconds dd with sync: $total_seconds" >> $file_route.log

In /var/log you can find the cloud-init.log file and examine it in depth if you're curious.

I use dd to get data about disk performance. This is not so evident in the Cloud, as all the virtualization platforms cache the file I/O from the guest instances, so tests with small- and medium-sized files are not trustworthy, and certain aspects have to be taken into account:

  • Test with big files: 1 GB or bigger
  • Use a block size of at least 4 MB
  • Use sync, and calculate the real time it takes to flush (even if it is the host and not the guest that controls that, it brings more accurate results)
  • Do several tests, as results can vary
  • Use /dev/zero. To really prevent caching I would prefer /dev/urandom, but it really slows the tests and distorts the results
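
As a sanity check of the dd numbers, throughput can be derived from the byte count and the elapsed seconds; a sketch with example values (the 4-second figure is made up):

```shell
# 64 blocks of 4 MB, as in the dd command in the script above
bytes=$((64 * 4 * 1024 * 1024))
seconds=4                         # example elapsed time, including sync
mb_per_s=$((bytes / 1048576 / seconds))
echo "$mb_per_s MB/s"             # 64 MB/s with these example values
```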