Wednesday, September 24, 2014

Storage Sizing: More Detailed

When it comes to measuring a storage system's overall performance, Input/Output Operations Per Second (IOPS) is still the most common metric in use. There are a number of factors that go into calculating the IOPS capability of an individual storage system.
In this blog, I provide introductory information that goes into calculations that will help you figure out what your system can do. Specifically, I explain how individual storage components affect overall IOPS capability. Here are three notes to keep in mind when reading the article:
·         Published IOPS calculations aren't the end-all be-all of storage characteristics. Vendors often measure IOPS under only the best conditions, so it's up to you to verify the information and make sure the solution meets the needs of your environment.
·         IOPS calculations vary wildly based on the kind of workload being handled. In general, there are three performance categories related to IOPS: random performance, sequential performance, and a combination of the two, which is measured when you assess random and sequential performance at the same time.
·         The information presented here is intended to be very general and focuses primarily on random workloads.
IOPS calculations
Every disk in your storage system has a maximum theoretical IOPS value that is based on a formula. Disk performance -- and IOPS -- is based on three key factors:
·         Rotational speed (aka spindle speed). Measured in revolutions per minute (RPM), most disks you'll consider for enterprise storage rotate at speeds of 7,200, 10,000 or 15,000 RPM, with the latter two being the most common. A higher rotational speed is associated with a higher performing disk. This value is not used directly in calculations, but it is highly important: the other two values depend heavily on the rotational speed, so I've included it for completeness.
·         Average latency. The time it takes for the sector of the disk being accessed to rotate into position under a read/write head.
·         Average seek time. The time (in ms) it takes for the hard drive's read/write head to position itself over the track being read or written. There are both read and write seek times; take the average of the two values.
To calculate the IOPS range, use this formula:
Average IOPS: Divide 1 by the sum of the average latency and the average seek time, both converted to seconds: 1 / (average latency in seconds + average seek time in seconds).
Sample drive:
·         Model: XXXXXXX  2.5" SATA hard drive
·         Rotational speed: 10,000 RPM
·         Average latency: 3 ms (0.003 seconds)
·         Average seek time: 4.2 (r)/4.7 (w) = 4.45 ms (0.00445 seconds)
·         Calculated IOPS for this disk: 1/(0.003 + 0.00445) = about 134 IOPS
So, this sample drive can support about 134 IOPS. Compare this to the chart below, and you'll see that this value falls within the observed real-world performance exhibited by 10K RPM drives.
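The per-disk math above is easy to sketch in a few lines of Python (the numbers are the sample drive's values from this post, not measurements of a real disk):

```python
def disk_iops(avg_latency_ms, avg_seek_ms):
    """Theoretical max IOPS for a single spinning disk."""
    # One I/O takes (latency + seek); convert ms to seconds and invert.
    return 1.0 / ((avg_latency_ms + avg_seek_ms) / 1000.0)

# Sample 10K RPM drive: 3 ms latency, (4.2 + 4.7) / 2 = 4.45 ms seek.
iops = disk_iops(3.0, (4.2 + 4.7) / 2)
print(round(iops))  # 134
```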
However, rather than working through a formula for your individual disks, there are a number of resources available that outline average observed IOPS values for a variety of different kinds of disks. For ease of calculation, use these values unless you think your own disks will vary greatly for some reason.
Below I list some of the values I've seen and used in my own environment for rough planning purposes. As you can see, the values for each kind of drive don't radically change from source to source.

Note: The drive interface doesn't enter into the equation at all. Sure, SAS disks generally outperform SATA disks, but only because SAS disks are typically built for enterprise applications, with higher reliability as proven through their mean time between failure (MTBF) values. If a vendor decided to release a 15K RPM SATA disk with low latency and seek time values, it would have a high IOPS value, too.
Multidisk arrays
Enterprises don't install a single disk at a time, so the above calculations are pretty meaningless unless they can be translated to multidisk sets. Fortunately, it's easy to translate raw IOPS values from single disk to multiple disk implementations; it's a simple multiplication operation. For example, if you have ten 15K RPM disks, each with 175 IOPS capability, your disk system has 1,750 IOPS worth of performance capacity. But this is only if you opted for a RAID-0 or just a bunch of disks (JBOD) implementation. In the real world, RAID 0 is rarely used because the loss of a single disk in the array would result in the loss of all data in the array.
Let's explore what happens when you start looking at other RAID levels.
The IOPS RAID penalty
Perhaps the most important IOPS calculation component to understand lies in the realm of the write penalty associated with a number of RAID configurations. With the exception of RAID 0, which is simply an array of disks strung together to create a larger storage pool, RAID configurations rely on the fact that write operations actually result in multiple writes to the array. This characteristic is why different RAID configurations are suitable for different tasks.
For example, each random write request under RAID 5 requires multiple disk operations, which has a significant impact on raw IOPS calculations. For general purposes, accept that RAID 5 writes require 4 IOPS per write operation. RAID 6's double-fault tolerance provides more protection but is even worse in this regard, resulting in an "IO penalty" of 6 operations; in other words, plan on 6 IOPS for each random write operation. For read operations under RAID 5 and RAID 6, an IOPS is an IOPS; there is no negative performance or IOPS impact. Also, be aware that RAID 1 imposes a write penalty of 2, since every write lands on both mirrors.
The chart below summarizes the read and write RAID penalties for the most common RAID levels.

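If the chart isn't handy, the same write penalties can be kept in a small lookup table. A sketch using commonly cited values (the RAID 10 entry is my own assumption based on its mirrored writes, since it isn't worked through above):

```python
# Write penalty per RAID level; reads carry no penalty.
RAID_WRITE_PENALTY = {
    "RAID 0": 1,   # striping only: one disk write per write
    "RAID 1": 2,   # every write lands on both mirrors
    "RAID 5": 4,   # read data, read parity, write data, write parity
    "RAID 6": 6,   # as RAID 5, plus the second parity's read and write
    "RAID 10": 2,  # mirrored stripes: two disk writes per write (assumed)
}
```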
Parity-based RAID systems also introduce additional processing overhead that results from the need to calculate parity information. The more parity protection you add to a system, the more processing overhead you incur. As you might expect, the overall imposed penalty is very dependent on the balance between read and write workloads.
A good starting point formula is below. This formula does not use the array IOPS value; it uses a workload IOPS value that you would derive on your own or by using some kind of calculation tool, such as the Exchange Server calculator.
(Total Workload IOPS * Percentage of workload that is read operations) + (Total Workload IOPS * Percentage of workload that is write operations * RAID IO Penalty)
As an example, let's assume the following:
·         Total IOPS need: 250 IOPS
·         Read workload: 50%
·         Write workload: 50%
·         RAID level: 6 (IO penalty of 6)
Result: You would need an array that could support 875 IOPS to support a 250 IOPS RAID 6-based workload that is 50% writes.
This could be an unpleasant surprise for some organizations, as it indicates that the number of disks might be more important than the size (i.e., you'd need twelve 7,200 RPM, seven 10K RPM, or five 15K RPM disks to support this IOPS need).
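The worked example above can be checked with a short script; the per-disk figure of 175 IOPS for 15K drives is the rough planning value used earlier in this post:

```python
import math

def required_array_iops(workload_iops, read_pct, raid_penalty):
    """Raw array IOPS needed for a workload behind a RAID write penalty."""
    reads = workload_iops * read_pct
    writes = workload_iops * (1.0 - read_pct) * raid_penalty
    return reads + writes

# RAID 6 example from this post: 250 workload IOPS, 50% reads, penalty of 6.
needed = required_array_iops(250, 0.50, 6)
print(needed)                   # 875.0
print(math.ceil(needed / 175))  # 5 (15K RPM disks at ~175 IOPS each)
```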
The transport choice
It's also important to understand what is not included in the raw numbers: the transport choice -- iSCSI or Fibre Channel. While the transport choice is an important consideration for many organizations, it doesn't directly impact the IOPS calculations. (None of the formulas consider the transport being used.)

The transport choice is an important one, but it's not the primary choice that many would make it out to be. For larger organizations that have significant transport needs (i.e., between the servers and the storage), Fibre Channel is a good choice, but this choice does not drive the IOPS wagon.

Tuesday, September 23, 2014

Storage Sizing: A start

I credit my knowledge of storage sizing to Brad Fair (solutions architect).
 There are really only two factors in disk performance:
  • How many read or write operations the disk can service in a second (measured in IOPS), and
  • How long it takes for an amount of data to get from the initiator to the target (measurement of bandwidth, typically Gb/s)
Notice that there's nothing about Fibre Channel vs. iSCSI vs. FCoE in there. People often make the mistake of generalizing performance, with the solid misconception that Fibre Channel is higher performing than iSCSI - it's simply not true. You might argue that the second factor in performance is where Fibre Channel wins, but thanks to iSCSI's redirect feature and MPIO capabilities, one is as fast as the other. Now that 10Gb Ethernet is prevalent in most iSCSI vendors' products, iSCSI has a leg up.


IOPS, or Input/Output Operations Per Second, is made up of two components on all disks but solid state: the spindle speed and the time it takes for the head to move over the target track - the seek time. You can see from that information that disks of the same spindle speed are going to be in the same performance category. Figure that for each single disk, you might see this kind of performance:
  • 7200 RPM Drives - 80 IOPS
  • 10,000 RPM Drives - 120-130 IOPS
  • 15,000 RPM Drives - 160-180 IOPS
If you have multiple disks, you can perform some nifty calculations to determine your IOPS rather easily. Accurate calculations are dependent on knowing what percentage of the time you read, and what percentage of the time you write.
Also, keep in mind that caching plays an important part in an IOPS situation - if there is frequently accessed data, or data that needs to be quickly written to disk in small bursts, caching is a life saver. In almost every situation, more cache means more performance.
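Using the per-drive rules of thumb above, the raw (RAID 0/JBOD) aggregate is just a multiply-and-sum. A sketch, using the midpoints of the quoted ranges:

```python
# Rule-of-thumb IOPS per disk, keyed by spindle speed (RPM).
# Values are midpoints of the ranges quoted above.
PER_DISK_IOPS = {7200: 80, 10000: 125, 15000: 170}

def raw_array_iops(disks):
    """Raw aggregate IOPS for a JBOD/RAID 0 set.
    `disks` maps spindle speed (RPM) -> number of disks."""
    return sum(PER_DISK_IOPS[rpm] * count for rpm, count in disks.items())

# Ten 15K disks plus four 10K disks:
print(raw_array_iops({15000: 10, 10000: 4}))  # 2200
```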


In every situation I have analyzed, the bandwidth requirements have been amazingly low - a small percentage of one Gb per second. Modern technologies (and even outdated ones) allow for much greater than that. In every situation I have analyzed after implementing a SAN solution, the real performance data backs up the initial analysis. 

Sizing Storage

This is why you're here anyway, isn't it? Looking for a storage solution for your existing environment? Looking for something for an environment you are planning? Follow these steps:
  1. Sum the storage requirements - if you have 100GB used on each of six servers, you need 600GB of usable space.
  2. Sum the spindle speeds - if you have six 10,000 RPM disks and two 15,000 RPM disks, you'd have a total of 90,000 RPM in the system. Because IOPS between spindle speeds are proportionate to the spindle speed itself, this gives us a good starting point.
  3. Look at your total RPM and divide it by each of your disk options. For instance, taking the 90,000 RPM number above, I know I would need 13 7200 RPM disks, 9 10,000 RPM disks, or 6 15,000 RPM disks to perform the same as the original system.
  4. IMPORTANT: Do a sanity check. If you have eight low traffic web servers each with two 15k disks, you do not necessarily need 16 15k disks... experience tells me you may even be able to get away with a very small number of SATA disks.
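Steps 2 and 3 above can be sketched as a small function (the disk counts for the 90,000 RPM example match the ones worked out in step 3):

```python
import math

def equivalent_disk_counts(disks, options=(7200, 10000, 15000)):
    """Sum the total RPM in a system, then estimate how many disks of each
    candidate spindle speed would perform roughly the same."""
    total_rpm = sum(rpm * count for rpm, count in disks.items())
    return {rpm: math.ceil(total_rpm / rpm) for rpm in options}

# Six 10,000 RPM disks + two 15,000 RPM disks = 90,000 total RPM.
print(equivalent_disk_counts({10000: 6, 15000: 2}))
# {7200: 13, 10000: 9, 15000: 6}
```

Remember the sanity check from step 4: a rule-of-thumb conversion like this can overshoot badly for low-traffic workloads.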

Tuesday, September 2, 2014

IBM and HP: Two different tales of Tech Giants

IBM and HP have been tech giants for nearly 75 years. Both companies have undergone transformation so many times that it's difficult to recognize their original forms.

Times have changed. So many different opportunities are arising daily, and new competitors are emerging just as fast.

Let's discuss how these two giants are doing:

IBM is a pioneer in more things than you can count. After nearly going bankrupt in the early 1990s, it transformed into a services-based company. That worked well for more than two decades, but now IBM is facing a huge challenge again. The rules of the game have changed, yet IBM is still resisting the new challenges.

With the rise of cloud computing, IBM is struggling in the new environment. Things are so bad that, per CNBC, it is a "cloud challenged" company. It's true. IBM still wants to do business the old way, through managed services, and managed services is dying. IBM's response to cloud is SoftLayer, a substandard half-cloud technology. IBM is losing money now; its profits are declining. It is investing in R&D across different technologies, but the world is yet to see any real investment in cloud. Bad move.

The other challenge for IBM is its massive workforce. IBM is now a fat elephant that finds it difficult to move. It needs to cut its labor force. Its policy of one manager per 25 people is the stupidest policy so far, and it has so many layers of management that only God can help. IBM needs a minimum 20% workforce reduction to become more agile.

HP's acquisition of EDS changed the company forever. It gave HP a huge lifeline, turning it from just a hardware company into a services company. Ex-CEO Mark Hurd nearly destroyed HP just to boost the share price; he cut all investment in R&D and strangled the company. After his exit, there was turmoil at HP. Meg Whitman came in and steadied the ship. HP was smart in investing in cloud computing technologies, mainly OpenStack; it invested heavily in OpenStack and created HP CSA for cloud. Its cloud software - CSA, Opsware, SiteScope, and others - can work with any vendor's gear. HP ended up with the best private and hybrid cloud technology in the industry.
HP made a huge investment in Big Data as well. It gives the impression that HP is ready to face the new competition and is ready for the next century.
