What Is the Maximum Number of Drives for an EC2 Windows Instance? – EBS Volume Limit

Yesterday while I was doing some performance testing on Amazon EBS (Elastic Block Store) volumes attached to a Windows AMI (Amazon Machine Image) I ran into an unanticipated issue – the maximum number of drives associated with an EC2 Windows server was lower than I expected.  The maximum number of connected drives is 12 – and that includes both ephemeral drives and EBS volumes.  This is a little bit of a surprise, especially since Linux instances are supposed to handle 16.

NOTE: This article is about instance-store instances.  For information about drive limitations on EC2 Windows EBS-backed instances, see “Maximum EBS Volumes on EC2 Windows EBS-backed Instances.”

I haven’t run across this little tidbit anywhere, nor could I find it today when specifically searching for it, so I thought I’d post a few details about my findings.

First off, yesterday I spun up an Extra Large Instance (AKA m1.xlarge), created a dozen or so 5GB EBS volumes, and began attaching them to the instance.  Since I was creating several, I used the EC2 command line tool ec2-create-volume:

ec2-create-volume -s 5 -z us-east-1d

In the preceding command “-s 5” creates a 5GB volume and “-z us-east-1d” creates the volume in the specified Amazon Availability Zone, which, by the way, has to match that of the instance to which you will attach the volume.
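If you’re creating a batch of volumes like I was, a quick shell loop around the same command saves some typing.  This is just a rough sketch – the size and zone are simply the ones from the example above, and each run prints the new volume ID you’ll need later when attaching:

for i in 1 2 3 4 5; do
  ec2-create-volume -s 5 -z us-east-1d
done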

I attached some volumes using ElasticFox. . .

. . . then attached some with the EC2 command line tool ec2-attach-volume:

ec2-attach-volume vol-0d62c264 -i i-999919f2 -d xvdk
ec2-attach-volume vol-0362c26a -i i-999919f2 -d xvdl
ec2-attach-volume vol-0562c26c -i i-999919f2 -d xvdm

Doing this particular task isn’t for the faint of heart, as you have to specify the device name (-d xvdm, for example), which has to be unique for each volume attached to a server instance.  You may generally find it easier to use ElasticFox or the AWS Management Console.
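If you’re not sure which device names are already in use on an instance, ec2-describe-volumes lists the current attachments, including the device each volume is mapped to.  A rough sketch, filtered down to my example instance ID (use findstr instead of grep if you’re running the tools from a Windows command prompt):

ec2-describe-volumes | grep i-999919f2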

Let me take just a moment to point out that, depending on the instance type, you will already have two or more drives.  For example, the instance I used here, m1.xlarge, has a 10GB “C drive” and four 420GB ephemeral drives, D, E, F and G (by default) in Windows.  In Windows Disk Management these will be Disks 0-4.  As you add an EBS volume it will show up as Disk 5, the next as Disk 6, and so on.
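If you’d rather check from a command prompt than from the Disk Management GUI, diskpart inside the instance will list the disks by number.  Something like this – just a sketch, and the output obviously varies by instance type:

diskpart
DISKPART> list disk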

I actually attached five EBS volumes in one fell swoop from the command line, and much to my chagrin I immediately lost connectivity to my instance – I had an RDP session open at the time, which immediately quit responding.

Since I lost connectivity to the instance and couldn’t re-establish a Remote Desktop connection, I manually rebooted the instance with ElasticFox.  However, this didn’t work.  Initially I thought I had overlapped a device name, which the instance couldn’t handle, so I detached the five EBS volumes I had previously attached from the command line and rebooted the instance.  I was overjoyed when I was able to log in again.
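For what it’s worth, the detach-and-reboot recovery can also be done with the command line tools if ElasticFox isn’t handy.  These are the standard ec2-detach-volume and ec2-reboot-instances commands, shown here with the volume and instance IDs from my earlier examples; ec2-detach-volume also takes a --force switch if a volume refuses to let go (with the usual risk to any unflushed data on that volume):

ec2-detach-volume vol-0d62c264 -i i-999919f2
ec2-reboot-instances i-999919f2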

Next I set about attaching the volumes more carefully, which I did one at a time with ElasticFox.  Again, after attaching the five additional volumes my instance stopped responding.  At this point I wasn’t sure if I had reached the limit of attached volumes, if one or more volumes had some sort of problem, or if someone at Amazon was messing with me.  I had to find out, so I did some testing. . .

I started running a continuous ping (tcping, actually) to the instance (so I would know if/when it crapped out and when it was back online after rebooting) and set about testing connecting EBS volumes to the instance.  Sure enough, every time I connected too many EBS volumes the instance would hang.  I wanted to test this against instances with more (and fewer) ephemeral drives, so I also started up a Small Instance (AKA m1.small) and the mack-daddy of them all, a High-Memory Quadruple Extra Large Instance (AKA m2.4xlarge).  These two instance types come “out-of-the-box” with two and three drives, respectively.
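The monitoring itself was nothing fancy.  tcping probes a TCP port instead of using ICMP, so pointing it at the RDP port tells you whether the instance is actually reachable.  Something along these lines – the hostname here is made up, and with the tcping.exe build I’m familiar with the -t switch keeps it probing until you stop it, just like ping -t:

tcping -t ec2-75-101-0-1.compute-1.amazonaws.com 3389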

Don’t believe me on the m2.4xlarge instance?

So, with all three server types (m1.small, m1.xlarge and m2.4xlarge) running Windows, the magic number of (total) drives was 12 before I started having problems.  An interesting note is that you can actually add a 13th drive and everything appears to be fine.  It’s when you add the 14th drive that all hell breaks loose and you instantly lose access to the instance.  Once this happens you have to detach two volumes and then forcibly reboot the instance before it starts to respond.  It certainly is good that you can at least regain access.

Remember how I said everything appears to be fine after adding the 13th drive?  Well, appearances aren’t everything. . .  What I found was that although you can connect the 13th drive/volume and the instance seems fine, when you reboot it the instance doesn’t come back online.  I had to detach the 13th drive and then forcibly reboot the instance before I could connect.

Another interesting note is that the device names went up to xvdp (which is actually displayed as the highest device letter when attaching volumes in ElasticFox) then started back at xvdf.

Device range when attaching volumes in ElasticFox:

Attached EBS volumes:

The bottom line is that, through a little work yesterday and today, I was able to determine definitively that Windows instances (at least instance-store, or S3-backed, instances running Windows 2003 – I’m not sure about Windows 2008 on EBS-backed storage) cannot have more than 12 total drives attached.

