These are instructions on how to resize a volume and attach it to an Instance/Server. The flow is to create a snapshot of the instance's volume, create a new, larger volume from that snapshot, attach the new volume, resize it via the Linux command line, reboot, and check to ensure all is well.
Determine the new size of the volume, such as 100 GB or 256 GB
Get the existing instance ID, volume ID, and zone (for example, us-east-1d)
a. Instance Id - i-c54152a6
b. Volume Id - vol-7b596916
c. Zone - us-east-1d
d. Server file system root device name, such as /dev/sda1 or /dev/xvde1 (log on to the server, run cd / and then df -h, note the device name, and log off.)
e. Note the volume tag and snapshot tag, as they will need to be applied to the newly created media via the ec2-create-tags command or the EC2 Console
f. Note that the legacy volume and snapshot will need .legacy appended to their tags, for instance ci.ws becomes ci.ws.legacy or ci.ws.oldsize.legacy
Ensure you are running from your local machine and have logged out of the instances being updated
Stop the instance, detach the volume and create a new snapshot
Create a new volume from the new snapshot. Lesson learned: if a snapshot of the volume has not been taken in the last week or two, plan on a 12-24 hour period to capture the volume via a snapshot. It is best to take periodic snapshots and not get into this pickle.
Get the new snapshot id - snap-4b334431
The --size parameter indicates the size in gigabytes for the new volume; the whole flow is sketched below.
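Run from the local machine, the steps above might look roughly like this (a sketch only, assuming the legacy EC2 API tools; the IDs are the examples from this step, the new volume ID is whatever ec2-create-volume returns, and ci.ws is the example tag name, so substitute your own values):

ec2-stop-instances i-c54152a6
ec2-detach-volume vol-7b596916
ec2-create-snapshot vol-7b596916 -d "pre-resize capture"              # returns the new snapshot id, e.g. snap-4b334431
ec2-create-volume --snapshot snap-4b334431 --size 100 -z us-east-1d   # returns the new, larger volume id
ec2-attach-volume vol-xxxxxxxx -i i-c54152a6 -d /dev/sda1             # substitute the volume id returned above
ec2-create-tags vol-xxxxxxxx --tag Name=ci.ws                         # tag the new volume
ec2-create-tags vol-7b596916 --tag Name=ci.ws.legacy                  # rename the legacy volume's tag
ec2-start-instances i-c54152a6

The new volume must be created in the same zone as the instance (us-east-1d here) or the attach will fail.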
2. Create a New EC2 Nexus Image with resized volume.
capture the old size volume with a snapshot
create the image, and launch the instance
resize the disk on that instance
capture the newly sized volume with a snapshot
create an Image from this snapshot
remove the pre-sized Image and snapshot
remove the post-sized Instance
add tags to the newly created media (image, snapshot, and volume)
Determine the new size of the volume
Get the existing instance ID, volume ID, and zone
a. Instance Id - i-bdf85cc0
b. Volume Id - vol-132fbf69
c. Zone - us-east-1d
d. Kernel ID - aki-94c527fd
e. RAM Disk ID - ari-96c527ff
f. Key Pair Name - kf-key
g. Server file system root device name: /dev/sda1 (log on to the server, run cd / and then df -h, note the device name, and log off.)
h. Note the volume tag and snapshot tag, as they will need to be applied to the newly created media via the ec2-create-tags command or the EC2 Console
i. Note that the legacy volume and snapshot will need .legacy appended to their tags, for instance ci.ws becomes ci.ws.legacy or ci.ws.oldsize.legacy
Create a new snapshot from the Nexus Volume ID - Legacy-Snapshot-60
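The snapshot command itself might look like this (a sketch only; the description string is illustrative):

ec2-create-snapshot vol-132fbf69 -d "Legacy-Snapshot-60"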
First, we have to delete our current partition and create a bigger one (don’t be afraid, no data will be lost):
# fdisk /dev/xvda
Type m to get a list of all commands:
Command (m for help): m
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
Let’s print out the partition table and look for the ext3 partition we want to enlarge (it’s easy here, there is just one partition):
Command (m for help): p
Disk /dev/xvda: 5218 MB, 5218304000 bytes
255 heads, 63 sectors/track, 634 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/xvda1 * 1 634 5092573+ 83 Linux
Now we delete the old partition (/dev/xvda1) and create a new one with the maximum available size:
Command (m for help): d
Selected partition 1
Command (m for help): n
p primary partition (1-4)
Partition number (1-4): 1
First cylinder (1-634, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-634, default 634):
Using default value 634
Our original /dev/xvda1 had the bootable flag (see fdisk -l output), so we must add it to our new /dev/xvda1 again:
Command (m for help): a
Partition number (1-4): 1
Now let’s write our new partition table and exit fdisk:
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
After rebooting the system (!) we can online-resize the filesystem with a simple command:
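The resize itself is a sketch like the following, assuming the single ext3 root partition /dev/xvda1 shown above; with no size argument, resize2fs grows the filesystem to fill its partition:

# resize2fs /dev/xvda1
# df -h

df -h should now report the root filesystem at the new volume size.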
Once the "primary" instance was corrected, the activity on the repaired volume was: snapshot it (the final snapshot seed), create an AMI from it (the final image seed), and launch that AMI as an instance.
From this point on, hopefully all defects on the volume have been cleaned up.
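Turned into commands, that final seed step might look roughly like this (a sketch only, assuming the legacy EC2 API tools; the snapshot ID, AMI ID, image name, architecture, and size below are placeholders or assumptions, while the kernel, RAM disk, root device, and key pair values are the ones noted in step 2):

ec2-create-snapshot vol-132fbf69 -d "final snapshot seed"
ec2-register -n "nexus-resized" -a x86_64 --kernel aki-94c527fd --ramdisk ari-96c527ff \
  --root-device-name /dev/sda1 -b /dev/sda1=snap-xxxxxxxx:100:true
ec2-run-instances ami-xxxxxxxx -k kf-key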
Once the second instance launched, I was still getting this error, but after about two hours the self-diagnosis (fsck.ext3 -a) would complete and the volume would be healthy. So I used this corrected volume for the final image seed.
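If you would rather not wait for the boot-time self-diagnosis, the same check can be run by hand with the volume attached to a helper instance and left unmounted (the device name /dev/xvdf1 here is hypothetical; substitute the device you attached it as):

# fsck.ext3 -a /dev/xvdf1

The -a option repairs what it safely can without prompting; expect it to take a while on a large volume.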
See the instance system log in the EC2 Console for a final check.
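The system log can also be pulled from the command line (a sketch, using the example instance ID from step 2):

ec2-get-console-output i-bdf85cc0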