How to reduce an AWS EC2 instance EBS volume — [root / non-root]

Ravinayag
5 min readJan 12, 2022

Amazon Web Services (AWS) makes it very easy to expand EBS volumes in a couple of clicks: right-click the volume, select Modify, and enter the new size. Done.

Shrinking Amazon EBS volumes, however, is a whole different game; there is no straightforward way to do it from the AWS console.

In this article, I describe how I shrank a volume and saved some money after mistakenly allocating an oversized volume while creating an EC2 instance, one that then sat mostly unused for months. [EBS storage costs approximately $0.10 per GB per month.]

I hope this article helps you shrink an EBS volume in under 20 minutes.

Steps To Shrink Amazon EBS Volumes

Note: the steps below are written for Debian/Ubuntu-based platforms.

Step 1: Take a snapshot of the current volume on the running instance as a safety precaution.

  1. Log in to your AWS account, then select and stop the EC2 instance whose volume you want to shrink.
  2. Go to the “Elastic Block Store” section in the left side panel, select Volumes, then select the volume attached to this EC2 instance and click Detach from the Actions menu.
  3. Select the detached volume again and create a snapshot from the Actions menu.
  4. The snapshot will take time to complete depending on the volume size. Meanwhile, create a new volume (sized to the target size you want to shrink to) that will replace the oversized volume.
    Tips:
    You can also take a fresh snapshot of the stopped instance, because it is the “last known good” state of the running instance's volume.
    For a root volume: create a new Amazon EC2 instance with the same operating system as your existing instance. The EC2 instance creation process will ask you to add storage; enter the volume size that you want to shrink to. When done, detach the volume from that instance and terminate the instance. Going this route saves you the stress of formatting the volume, creating partitions, marking the root flag, etc.
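The console steps above can also be done with the AWS CLI. A rough sketch, with placeholder instance/volume IDs and an example availability zone that you would replace with your own:

```shell
# Stop the instance whose volume will be shrunk (placeholder ID)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Detach the oversized volume and snapshot it as a safety net
aws ec2 detach-volume --volume-id vol-0aaaaaaaaaaaaaaaa
aws ec2 create-snapshot --volume-id vol-0aaaaaaaaaaaaaaaa \
    --description "pre-shrink backup"

# Create the new, smaller volume in the SAME availability zone
# as the original volume (30 GB here as an example target size)
aws ec2 create-volume --availability-zone us-east-1a \
    --size 30 --volume-type gp2
```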

Step 2:
Create a new Amazon EC2 instance with the same operating system as your existing instance. A micro instance will satisfy the requirements; we only need this instance temporarily to run a few commands.
Tips: Device names will vary based on the instance type you choose. If you are not comfortable with differing device names, choose the same instance type as your original instance. Ex: xvda/nvme;
t-series instances will name the device xvda, while m/c-series instances will name it nvme.

Ref: Device names on Linux instances — Amazon Elastic Compute Cloud

In my scenario, the disks showed up as nvme devices. The procedure is the same if your instance exposes another device type, such as xvda.

Step 3:
You have a new instance running; now attach the two volumes as described below. Attach the new volume at /dev/sdf and the oversized volume at /dev/sdg. In the OS they will appear as /dev/nvme1n1 and /dev/nvme2n1 respectively. Your root volume will always be /dev/nvme0n1 (or /dev/xvda).
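If you prefer the CLI for this step, the equivalent attach commands would look like the following (hypothetical volume and instance IDs, to be replaced with your own):

```shell
# Attach the new small volume and the oversized volume to the
# temporary instance at the device names used in this walkthrough
aws ec2 attach-volume --volume-id vol-0newsmallvolume \
    --instance-id i-0temporaryinstance --device /dev/sdf
aws ec2 attach-volume --volume-id vol-0oversizedvolume \
    --instance-id i-0temporaryinstance --device /dev/sdg
```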

Step 4: SSH to your new instance and run the command below to confirm the disk names discussed above.

$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 33.3M 1 loop /snap/amazon-ssm-agent/3552
loop1 7:1 0 25M 1 loop /snap/amazon-ssm-agent/4046
loop2 7:2 0 217.5M 1 loop /snap/code/84
loop3 7:3 0 217.5M 1 loop /snap/code/85
loop4 7:4 0 55.5M 1 loop /snap/core18/2246
loop5 7:5 0 99.4M 1 loop /snap/core/11993
loop6 7:6 0 99.5M 1 loop /snap/core/11798
loop7 7:7 0 61.9M 1 loop /snap/core20/1242
loop8 7:8 0 67.2M 1 loop /snap/lxd/21803
loop9 7:9 0 55.5M 1 loop /snap/core18/2253
loop10 7:10 0 67.2M 1 loop /snap/lxd/21835
loop11 7:11 0 61.9M 1 loop /snap/core20/1270
nvme1n1 259:0 0 30G 0 disk
└─nvme1n1p1 259:1 0 30G 0 part
nvme0n1 259:2 0 30G 0 disk
└─nvme0n1p1 259:3 0 30G 0 part /
nvme2n1 259:4 0 230G 0 disk
└─nvme2n1p1 259:5 0 230G 0 part

In my case, I followed the tips in steps #1 and #4 for the root device, to avoid creating and labeling partitions on the root disk.

Step 5: Let's create a partition on the new volume using fdisk

$ sudo fdisk /dev/nvme1n1 
$ sudo lsblk | grep nvme1
nvme1n1 259:0 0 30G 0 disk
└─nvme1n1p1 259:1 0 30G 0 part

NOTE: This step applies only if you created a non-root volume in steps #1 and #4 above. If you created a root volume as per the tips, you can skip this step entirely and move on.
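The fdisk session in Step 5 is interactive. As an illustration of the keystrokes involved (n for new partition, p for primary, 1 for the partition number, two defaults for the first/last sectors, w to write), here is the same sequence scripted against a scratch image file, which needs no root access. On the real volume you would type the same keys at the sudo fdisk /dev/nvme1n1 prompt:

```shell
# Create a scratch "disk" image to demonstrate the key sequence safely
rm -f demo.img
truncate -s 64M demo.img

# n = new partition, p = primary, 1 = partition number,
# two empty lines accept the default first/last sectors, w = write
printf 'n\np\n1\n\n\nw\n' | fdisk demo.img

# Verify that the partition now exists in the image's partition table
fdisk -l demo.img | grep demo.img1
```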

Step 6: Now let's run fsck on the oversized volume's filesystem.

$ sudo e2fsck -f /dev/nvme2n1p1

Here “1” is the partition number you wish to resize. I assume that you are not working with multiple partitions, so go ahead and execute the command as is.

Step 7: This is the important step. Ensure there were no errors above, then run the following command:

sudo resize2fs -M -p /dev/nvme2n1p1

The last line of the output tells you how many 4k blocks are on your filesystem. Now you have to calculate the number of 16MB blocks you need.
The formula is:
blockcount * 4 / (16 * 1024)
where blockcount is the number from the last line of the resize2fs output.
Round up to the next whole number. We will use it soon.

example : 
$ sudo resize2fs -M -p /dev/nvme2n1p1
resize2fs 1.45.5 (07-Jan-2020)
Resizing the filesystem on /dev/nvme2n1p1 to 6555627 (4k) blocks.
Begin pass 2 (max = 2048)
Relocating blocks XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 3 (max = 204)
Scanning inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/nvme2n1p1 is now 6555627 (4k) blocks long.

==>> 6555627*4/(16*1024) = 1600.494873, so i took it as 1601
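To sanity-check the arithmetic, the same rounding can be done directly in the shell with integer ceiling division:

```shell
# blockcount from the last line of the resize2fs output
blocks=6555627

# Ceiling division: (blocks * 4 KiB) / 16 MiB, rounded up
count=$(( (blocks * 4 + 16*1024 - 1) / (16*1024) ))
echo "$count"    # prints 1601
```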

Step 8: We will copy the files using the dd command.

$ sudo dd bs=16M if=/dev/nvme2n1p1 of=/dev/nvme1n1p1 count=1601 status=progress

Note: 1601 is the result from the previous step; it will vary depending on your disk.

It will take time to complete depending on how much data you have. This step performs the actual copying of data from the old, oversized volume to the new, smaller volume.
Note: ensure neither of these two disks has a mounted filesystem during the copy.
Tips: we could also use rsync for copying, but I find the dd command faster and simpler, with fewer steps.
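If you want to avoid the manual copy-paste between Steps 7 and 8, the block count can be parsed straight out of the resize2fs output. A sketch, assuming the device names used in this walkthrough:

```shell
# Shrink the filesystem and capture the final block count from the
# line "The filesystem on ... is now N (4k) blocks long."
blocks=$(sudo resize2fs -M -p /dev/nvme2n1p1 2>&1 \
         | awk '/blocks long/ {print $(NF-3)}')

# Round up to whole 16 MiB dd blocks, then copy
count=$(( (blocks * 4 + 16*1024 - 1) / (16*1024) ))
sudo dd bs=16M if=/dev/nvme2n1p1 of=/dev/nvme1n1p1 \
    count="$count" status=progress
```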

Step 9: Once the copy completes, run the commands below to check and verify the filesystem.

$ sudo resize2fs -p /dev/nvme1n1p1
$ sudo e2fsck -f /dev/nvme1n1p1
$ sudo mkdir -p /mnt/newvol
$ sudo mount /dev/nvme1n1p1 /mnt/newvol
$ sudo ls -l /mnt/newvol/boot/grub/grub.cfg
$ sudo ls -l /mnt/newvol/etc/default/grub.d/40-force-partuuid.cfg

Step 10: Finally, we need to modify the grub config files in two locations. Get the current PARTUUIDs from the disks:

$ sudo blkid
/dev/nvme0n1p1: LABEL="cloudimg-rootfs" UUID="436cf32d-5e3d-46ca-b557-f870c8a25794" TYPE="ext4" PARTUUID="24ca9e81-01"
/dev/nvme1n1p1: LABEL="cloudimg-rootfs" UUID="a386d281-b132-4d60-84b5-f7e94687t6b9" TYPE="ext4" PARTUUID="c69dea44-01"
/dev/nvme2n1p1: LABEL="cloudimg-rootfs" UUID="a386d281-b132-4d60-84b5-f7e94687t6b9" TYPE="ext4" PARTUUID="a2f52878-01"

Make a note of the PARTUUIDs of the nvme1n1p1 and nvme2n1p1 disks; we need to replace the old PARTUUID with the new one in the grub config files.

$ sudo sed -i -e 's/a2f52878-01/c69dea44-01/g' /mnt/newvol/boot/grub/grub.cfg
$ sudo sed -i -e 's/a2f52878-01/c69dea44-01/g' /mnt/newvol/etc/default/grub.d/40-force-partuuid.cfg
$ sudo umount /dev/nvme1n1p1
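Rather than hand-copying the PARTUUIDs, you can pull them with blkid and substitute them in one go. A sketch, assuming the device names from this walkthrough:

```shell
# PARTUUID of the old (oversized) and new (small) volumes
old=$(sudo blkid -s PARTUUID -o value /dev/nvme2n1p1)
new=$(sudo blkid -s PARTUUID -o value /dev/nvme1n1p1)

# Replace the old PARTUUID everywhere grub references it
sudo sed -i "s/$old/$new/g" \
    /mnt/newvol/boot/grub/grub.cfg \
    /mnt/newvol/etc/default/grub.d/40-force-partuuid.cfg
```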

Time to launch the instance with the new, shrunken volume

  1. Close your SSH session and stop the temporary instance you created.
  2. Detach both volumes from the temporary instance.
  3. Attach the new volume at /dev/sda1 to your original instance.
  4. Start the original instance and verify with the df command.
  5. Once everything looks good, you can delete the temporary instance, the oversized volume, and the snapshots.
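The CLI equivalent of the re-attach steps would look roughly like this (placeholder IDs; replace with your own):

```shell
# Move the shrunken volume back to the original instance as its root device
aws ec2 detach-volume --volume-id vol-0newsmallvolume
aws ec2 attach-volume --volume-id vol-0newsmallvolume \
    --instance-id i-0originalinstance --device /dev/sda1
aws ec2 start-instances --instance-ids i-0originalinstance
```

Once the instance boots, df -h / should report the new, smaller size.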

You now have the instance running with the new (lower) volume.

All the Best !!!
