
1. Resize a volume and attach it to an instance.
2. Create a Nexus image with a larger volume.

1. How to Resize an EC2 Server's Disk Volume

These are instructions on how to resize a volume and attach it to an instance/server.
The flow is: create a snapshot of the instance's volume, create a new, larger volume from that snapshot, attach the new volume, resize the file system from the Linux command line, then reboot and check that all is well.

These steps are derived from the steps on this page http://alestic.com/2010/02/ec2-resize-running-ebs-root
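
    As a condensed sketch of the whole flow (using the example ids from the steps below; substitute your own ids, zone, size, and device name):

    ec2-stop-instances i-c54152a6
    ec2-detach-volume vol-7b596916
    ec2-create-snapshot vol-7b596916        # note the snapshot id it returns
    ec2-create-volume --availability-zone us-east-1d --size 256 --snapshot snap-4b334431
    ec2-attach-volume --instance i-c54152a6 --device /dev/sda1 vol-697a4c05
    ec2-start-instances i-c54152a6
    # then ssh in and run: sudo resize2fs /dev/sda1 && df -h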

  1. Determine the new size of the volume, such as 100 GB or 256 GB
  2. Get the existing instance id, volume id and availability zone (for example, us-east-1d)

    a. Instance Id - i-c54152a6
    b. Volume Id - vol-7b596916
    c. Zone - us-east-1d
    d. server file system root device name, such as /dev/sda1 or /dev/xvde1 (log on to the server, run cd / and then df -h, note the device name, and log off)
    e. note: the volume tagId and snapshot tagId, as they will need to be applied to the newly created media via the ec2-create-tags command or via the EC2 Console
    f. note: the legacy volume and snapshot will need their tagId appended with .legacy, for instance ci.ws becomes ci.ws.legacy or ci.ws.oldsize.legacy
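
    A minimal sketch of looking these values up from the command line (using the example ids above; the ec2-describe commands assume the EC2 API tools are installed and configured):

    ec2-describe-instances i-c54152a6     # instance state, zone and attached volume id
    ec2-describe-volumes vol-7b596916     # volume size and attachment details
    df -h /                               # run on the server to note the root device name, then log off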
    
  3. Ensure you are running these commands from your local machine and have logged out of the instances being updated
  4. Stop the instance and detach the volume

    ec2-stop-instances i-c54152a6
    ec2-detach-volume vol-7b596916
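
    Before snapshotting, it may be worth confirming the instance shows as "stopped" and the volume as "available" (a quick check, using the example ids above):

    ec2-describe-instances i-c54152a6     # wait for state "stopped"
    ec2-describe-volumes vol-7b596916     # wait for status "available"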
    
  5. Create a new snapshot of the detached volume. Lessons learned: if the volume has not been snapshotted in the last week or two, plan on a 12-24 hour window for the snapshot to complete. It is best to take periodic snapshots and avoid this situation.

    ec2-create-snapshot vol-7b596916
    
  6. Get the new snapshot id - snap-4b334431

    ec2-describe-snapshots
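
    The new, larger volume should not be created until the snapshot reports 100% complete; one way to poll for that (assuming the standard watch utility is available):

    watch -n 60 ec2-describe-snapshots snap-4b334431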
    
  7. Create a new, larger volume from the snapshot; the --size parameter indicates the size in gigabytes for the new volume

    ec2-create-volume --availability-zone us-east-1d --size 256 --snapshot snap-4b334431     (zone and size are examples; use your own values)
    
  8. Get the new volume id - vol-697a4c05

    ec2-describe-volumes
    
  9. Attach the new volume to the instance

    ec2-attach-volume --instance i-c54152a6 --device /dev/sda1 vol-697a4c05
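
    Before tagging and restarting, it may help to confirm the attachment took effect (using the example ids above):

    ec2-describe-volumes vol-697a4c05     # the ATTACHMENT line should show i-c54152a6, /dev/sda1, attached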
    
  10. Tag the new media and apply legacy tags to the old media: duplicate the Name tag from the original volume, such as ci.ws or ci.foundation
  11. Tag the legacy volume

    ec2-delete-tags vol-02239a91 --tag "Name=ws.rice"                (example volume id and tag name; substitute your own)
    ec2-create-tags vol-02239a91 --tag "Name=ws.rice.156.legacy"
    
  12. Tag the new volume

    ec2-create-tags vol-697a4c05 --tag "Name=ws.rice"                (the new volume id from step 8)
    
  13. Tag the legacy snapshot

    ec2-delete-tags snap-048950e --tag "Name=ws.rice"
    ec2-create-tags snap-048950e --tag "Name=ws.rice.156.legacy"
    
  14. Tag the new snapshot

    ec2-create-tags snap-4b334431 --tag "Name=ws.rice"               (the new snapshot id from step 6)
    
  15. Restart the instance

    ec2-start-instances i-c54152a6
    
  16. ssh into the instance and resize the root file system

    sudo resize2fs /dev/sda1     (use the root device name noted in step 2.d)
    
  17. Show that the root file system is the new, larger size:

    df -h
    
  18. Restart applications on the server as needed, such as Tomcat
  19. Reset any DNS/addressing redirection information
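
    Stopping and starting an EBS-backed instance changes its public DNS name, so any records or redirects pointing at the old name need review. If the instance uses an Elastic IP, re-associating it might look like the following (hypothetical address):

    ec2-associate-address -i i-c54152a6 203.0.113.10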

    Impact Areas to be aware of

  20. The Workspace Server (ci.ws.server) has hard-coded variables that need review

    Update [https://svn.kuali.org/repos/foundation/trunk/kuali-ci/pom.xml]
    <image.ebs.volumeSize>256</image.ebs.volumeSize>
    <ws.server.volumeId>vol-f014608a</ws.server.volumeId>
    Follow the update procedure [http://ci.rice.kuali.org/job/update-ci-configuration/]
    

2. Create a New EC2 Nexus Image with a Resized Volume

  • capture the old-size volume with a snapshot
  • create the image and launch an instance from it
  • resize the disk on that instance
  • capture the newly sized volume with a snapshot
  • create an image from this snapshot
  • remove the pre-sized image and snapshot
  • remove the intermediate (post-resize) instance
  • add tags to the
    1. snapshot
    2. volume
    3. image
  1. Determine the new size of the volume

    500 GB
    
  2. Get the existing instance id, volume id and zone

    a. Instance Id - i-bdf85cc0
    b. Volume Id - vol-132fbf69
    c. Zone - us-east-1d
    d. Kernel ID - aki-94c527fd
    e. RAM Disk ID - ari-96c527ff
    f. Key Pair Name - kf-key
    g. server file system root device name: /dev/sda1 (log on to the server, run cd / and then df -h, note the device name, and log off)
    h. note: the volume tagId and snapshot tagId, as they will need to be applied to the newly created media via the ec2-create-tags command or via the EC2 Console
    i. note: the legacy volume and snapshot will need their tagId appended with .legacy, for instance ci.ws becomes ci.ws.legacy or ci.ws.oldsize.legacy
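
    A hedged way to look most of these up from the command line (using the example instance id above):

    ec2-describe-instances i-bdf85cc0     # INSTANCE line should show the zone, key pair, kernel (aki-) and ramdisk (ari-) ids; BLOCKDEVICE line shows the volume id
    df -h /                               # run on the server to note the root device name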
    
  3. Create a new snapshot from the Nexus Volume ID - Legacy-Snapshot-60

    ec2-create-snapshot vol-132fbf69 -d nexus-60-legacy
    
  4. Wait for the snapshot to reach 100% complete, and note the snapshot id

    legacy-snapshot-60 id: _____________
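
    One way to poll until the snapshot reports 100% (a sketch; substitute the real snapshot id):

    until ec2-describe-snapshots snap-________ | grep -q "100%"; do sleep 60; done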
    
  5. Create the Pre-Sized AMI using the new snapshot id

    ec2-register -n nexus-500.presized -d Nexus-500.presized  --kernel aki-94c527fd --architecture i386 --ramdisk ari-96c527ff --root-device-name /dev/sda1 -b /dev/sda1=snap-12345678:500:true
    (newer block device mapping format, which also includes the volume type: /dev/sda1=presize-snapshot-id:500:true:standard)
    
  6. Use the ami_id to start the instance

    ec2-run-instances ami-908016f9  -g sg-7228f91b -k kf-key -t m1.medium  --availability-zone us-east-1d --kernel aki-94c527fd --ramdisk ari-96c527ff
    (note the new instance's root volume id: vol-________)
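
    The new instance id, public DNS name, and root volume id can be read back once it is running (a sketch; substitute the real instance id):

    ec2-describe-instances i-________     # INSTANCE line shows the DNS name; BLOCKDEVICE line shows the new vol- id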
    
  7. The new instance will have a new volume, so let's add a tag to it

    ec2-create-tags vol-________ --tag "Name=nexus-500"     (the new instance's volume id)
    
  8. Note the following for use later

    new instance id: _______________
    dns name:___________________
    pre-sized snapshot id:________________
    volume id:__________________
    
  9. Note the instance's EC2 public DNS name, then ssh into the instance to resize the root file system

    ssh root@ec2-107-21-157-999.compute-1.amazonaws.com
    
  10. Resize the Partition

    First, we have to delete the current partition and create a bigger one (don't be afraid, no data will be lost). Note that the sizes shown in the transcript below are from a small sample disk; your output will reflect the actual volume size:
    
    # fdisk /dev/xvda
    Type m to get a list of all commands:
    
    Command (m for help): m
    Command action
       a   toggle a bootable flag
       b   edit bsd disklabel
       c   toggle the dos compatibility flag
       d   delete a partition
       l   list known partition types
       m   print this menu
       n   add a new partition
       o   create a new empty DOS partition table
       p   print the partition table
       q   quit without saving changes
       s   create a new empty Sun disklabel
       t   change a partition's system id
       u   change display/entry units
       v   verify the partition table
       w   write table to disk and exit
       x   extra functionality (experts only)
    Let's print the partition table and look for the ext3 partition we want to enlarge (it's easy here; there is just one partition):
    
    Command (m for help): p
    
    Disk /dev/xvda: 5218 MB, 5218304000 bytes
    255 heads, 63 sectors/track, 634 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
        Device Boot      Start         End      Blocks   Id  System
    /dev/xvda1   *           1         634     5092573+  83  Linux
    Now we delete the old partition (/dev/xvda1) and create a new one with the maximum available size:
    
    Command (m for help): d
    Selected partition 1
    
    Command (m for help): n
    Command action
       e   extended
       p   primary partition (1-4)
    p
    Partition number (1-4): 1
    First cylinder (1-634, default 1):
    Using default value 1
    Last cylinder or +size or +sizeM or +sizeK (1-634, default 634):
    Using default value 634
    Our original /dev/xvda1 had the bootable flag (see fdisk -l output), so we must add it to our new /dev/xvda1 again:
    
    Command (m for help): a
    Partition number (1-4): 1
    Now let’s write our new partition table and exit fdisk:
    
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    
    WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
    The kernel still uses the old table.
    The new table will be used at the next reboot.
    Syncing disks.
    After rebooting the system (!) we can online-resize the filesystem with a simple command:
    
  11. Reboot the instance

    reboot
    
  12. Log back into the instance (after verifying it came up OK via the EC2 console)

    ssh root@ec2-107-21-157-999.compute-1.amazonaws.com
    
  13. Display the current disk size as follows:

    cd /
    df -h
    
  14. Resize the file system on the server

    sudo resize2fs -f -p /dev/sda1     (use the root device name noted in step 2.g)
    Note: this may take some time depending on the size of the disk
    
  15. Show that the root file system is the new, larger size (500 GB):

    cd /
    df -h
    


    (Screen capture of the resize2fs output.)

  16. Exit from the server

    [root@ip-10-84-255-66 ~]#  exit
    
  17. Tag the legacy snapshot

    ec2-create-tags snap-_______ --tag "Name=nexus.60.legacy"
    
  18. Create a new snapshot of the resized 500 GB volume

    ec2-create-snapshot vol-________ -d "Nexus-500"     (the resized volume id)
    ec2-create-tags snap-________ --tag "Name=Nexus-500"
    ec2-create-tags snap-________ --tag "BackUp=nexus.bckup.0"
    
  19. Monitor the snapshot status until complete

    ec2-describe-snapshots snap-________
    
  20. Create an AMI from the snapshot of the resized volume

    ec2-register -n nexus-500 -d Nexus-500  --kernel aki-94c527fd --architecture i386 --ramdisk ari-96c527ff --root-device-name /dev/sda1 -b /dev/sda1=snap-_______:500:true
    
  21. Remove the pre-sized image

    ec2-deregister ami-________     (the pre-sized image registered in step 5)
    
  22. Remove the intermediate instance

    ec2-terminate-instances i-________     (the intermediate instance launched in step 6)
    
  23. Reset any DNS/addressing redirection information
    Impact Areas to be aware of

    tbd
    
  24. How a corrupted volume was fixed:
    primary superblock features different from backup

    Once the primary instance corrected itself, the repaired volume was snapshotted (the final snapshot seed), an AMI was created from that snapshot (the final image seed), and that AMI was launched as an instance.
    From this point on, hopefully all defects on the volume have been cleaned up.
    When the second instance launched, the error still appeared, but after about two hours the self-diagnosis
    (fsck.ext3 -a) would complete and the volume would be healthy, so this corrected volume was used for the final image seed.
    See the instance system log in the EC2 console for a final check.
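
    A hedged sketch of the related checks: the console/system log can also be pulled from the command line, and the automatic filesystem check the instance ran can be invoked by hand against the volume (the instance id and device name here are placeholders):

    ec2-get-console-output i-________     # watch the boot log for fsck/superblock messages
    fsck.ext3 -a /dev/xvda1               # the automatic repair the instance ran; run only against an unmounted volume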