XIV: storage reinvented wizardry

It seems incredible to me that some of the most searched Google terms leading to my blog are related to “XIV”, and in particular to “XIV IOPS”.

In the recent past I wrote a couple of articles about XIV (here and here) pointing out that the XIV architecture sounds a little flaky to me. But, being a reseller of competing storage, I’m a little biased, so I suggest you compare what I write with other vendors’ claims and with more independent voices out there.

I’m also following the LinkedIn group “XIV Storage reinvented”, and I have never found in it any customers speaking seriously about DB workloads, or about random workload performance in any detail as a function of the data set size and the size of the XIV.

I would like to understand the XIV better, not just in terms of architecture or backend disks but in terms of real-world performance: the case histories IBM shows are a little silly, and I would like to know more about real user experiences dealing with huge DBs and/or complex virtual infrastructures.

The theory

In theory XIV has an Ethernet+SATA backend, two factors that certainly don’t help to achieve low latency or high IOPS numbers, but XIV has a big cache (up to 120GB of RAM in a full rack). The backend of a fully loaded XIV sports 180 SATA disks (each one contributing, more or less, 80 IOPS), so you might expect something like 14,400 IOPS of 100% random I/O for a typical DB workload. Obviously, these are just coarse figures derived from theoretical HDD performance, and really far away from the numbers touted (but never proven) by IBM. I’m sure the cache will help a lot, but it is well known that caches give their best help to sequential workloads, while 100% random ones largely negate their effectiveness.
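For what it’s worth, here is the arithmetic behind that figure as a small Python sketch. The per-disk number is the rough estimate used above for a 7,200 rpm SATA drive, not a vendor specification:

```python
# Back-of-envelope estimate of XIV backend random IOPS.
# Both figures below are assumptions from the text, not IBM specs.

DISKS = 180              # fully loaded XIV rack
IOPS_PER_SATA_DISK = 80  # rough estimate for a 7,200 rpm SATA drive

backend_iops = DISKS * IOPS_PER_SATA_DISK
print(f"Theoretical 100% random backend IOPS: {backend_iops:,}")  # 14,400
```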

The cache/space function

If you have a small, heavily accessed data set on your XIV it is very easy to get a huge benefit from the cache: e.g., with 1TB of hot data, a big (120GB) cache will help enormously! But if you have a big array (nearly 160TB of allocated space) with dispersed data and mixed workloads, the 120GB cache doesn’t have the same effect. In the second case you need raw IOPS pumping from the backend to sustain the traffic, and 80 IOPS per disk surely won’t be enough!
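To make that dependence explicit, here is a toy model of my own (an assumption, not measured XIV behavior) where cache hits cost nothing and every miss consumes one backend disk I/O; the sustainable front-end rate is then the backend IOPS divided by the miss ratio. A hot 1TB data set may reach very high hit ratios, while 160TB of dispersed, randomly accessed data sits near the 120GB-to-160TB ratio:

```python
BACKEND_IOPS = 14_400  # theoretical figure from the previous sketch

def front_end_iops(hit_ratio: float) -> float:
    """Sustainable front-end random IOPS if every cache miss costs one backend I/O."""
    miss_ratio = 1.0 - hit_ratio
    return BACKEND_IOPS / miss_ratio if miss_ratio > 0 else float("inf")

# 95% hits (small hot set), 50% hits, and a dispersed 160 TB array
# where the hit ratio approaches cache size / working set = 120/160,000.
for h in (0.95, 0.50, 120 / 160_000):
    print(f"hit ratio {h:7.3%} -> ~{front_end_iops(h):>9,.0f} front-end IOPS")
```

Under these (admittedly crude) assumptions, 95% hits would multiply the backend by 20x, while the dispersed case barely improves on the raw 14,400 IOPS.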

I surely have nothing more than circumstantial evidence to prove what I say, since I’m speaking purely about theory. Only IBM or its customers can help me better understand the wizardry of XIV.

Comments and measurements from real end users, with their configuration, typical workload and the size of the data stored, would be much appreciated. Thanks a lot in advance for your help in clearing up this (too) long-lasting argument.
