I struggle to find storage-related topics to write about. Virtualization and networking come much more easily because of my network administrator roots. However, most of the real innovation that directly impacts application service levels has been in storage, so I've spent the past year coming up to speed on storage technology. I was recently invited to HP Discover by HP Storage's social media team, which was the perfect opportunity to understand what's going on with 3Par and storage in general. This post is a highlight reel of what I learned.
No future in hybrid arrays

As part of the event, HP gave bloggers access to storage product managers and executives. Manish Goel, GM of HP Storage, shared his thoughts on the overall storage industry. One point I keyed in on was hybrid storage. Manish doesn't believe there's a long-term market for hybrid storage. Hybrid arrays combine flash and hard disk in a single array. One approach to managing the two types of storage is to provide storage tiers: read-heavy workloads that benefit from all-flash can be provisioned on flash, while write-heavy workloads can be provisioned on traditional HDDs. Intelligence can be added to automatically move workloads to the appropriate tier, along the lines of the sketch below.
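Here's a minimal sketch of that tiering logic, just to make the idea concrete. It's my own illustration, not HP's implementation; the threshold and workload names are hypothetical.

```python
# A minimal sketch of read/write-mix-based tiering. This is an
# illustration only, not HP's Adaptive Optimization; the 70% threshold
# and the workload names are hypothetical.

FLASH, HDD = "flash", "hdd"

def choose_tier(read_iops: float, write_iops: float) -> str:
    """Read-heavy workloads go to flash; write-heavy ones to HDD."""
    total = read_iops + write_iops
    if total == 0:
        return HDD  # idle data sits on the cheap tier
    return FLASH if read_iops / total >= 0.7 else HDD

def rebalance(stats: dict) -> dict:
    """Recompute placement from observed per-workload I/O stats."""
    return {name: choose_tier(r, w) for name, (r, w) in stats.items()}

print(rebalance({
    "reporting-db": (9000, 500),   # read-heavy  -> flash
    "log-ingest":   (200, 6000),   # write-heavy -> hdd
}))
```

A real array would also weigh latency targets, migration cost, and capacity headroom before moving anything, which is exactly the complexity Goel is talking about.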
Goel believes the complexity of building and managing these hybrid arrays is too great. In his view, the cost of flash is dropping faster than hybrid arrays can deliver their value. I didn't get the chance to ask whether flash will overtake traditional HDDs, but I infer that HP believes the all-flash array will become the standard versus traditional or hybrid arrays. Note that the complexity within a hybrid array doesn't disappear when you take a dual-array approach; it just moves to a higher level. That complexity issue highlights a gap in HP Storage's data mobility strategy.

HP Storage is about the hardware 
One of the frustrating aspects of HP is the lack of end-to-end solutions, a consequence of the company's size. HP has to decide which aspects of its solutions are fully integrated across platforms and groups. The HP Software group, for example, has standardized on a consistent user interface across its portfolio. It's such a high priority that HP has created a user interface open source project. But it's impossible to have that level of focus across every discipline. One such example is data mobility.

Controlling the movement of data across different tiers is a challenge. I wrote about data virtualization and the advantages of moving data across tiers. Add different providers to those tiers and you have an even greater challenge. With all of HP's storage and software IP, you'd think there would be a focus on data mobility somewhere within HP. But HP Storage is laser-focused on storage infrastructure, and mobility doesn't fall within that umbrella. As a customer, you have to have a separate conversation with HP Software or HP Technical Services. (Listen to my tech talk on using an IT vendor's technical services.)
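To show what I mean by data mobility beyond a single vendor's arrays, here's a rough sketch of the kind of provider-agnostic interface I have in mind. Everything in it is hypothetical; no vendor ships this API.

```python
# Hypothetical provider-agnostic mobility layer. The point is the
# abstraction, not the (naive) byte-copy implementation.
from abc import ABC, abstractmethod

class StorageProvider(ABC):
    """Anything that can hold a volume: a 3Par array, another
    vendor's array, or a cloud tier."""

    @abstractmethod
    def read_volume(self, volume_id: str) -> bytes: ...

    @abstractmethod
    def write_volume(self, volume_id: str, data: bytes) -> None: ...

def migrate(volume_id: str, src: StorageProvider, dst: StorageProvider) -> None:
    """Array-to-anything movement: copy the data, then cut the
    application over and reclaim the source (cutover omitted here)."""
    dst.write_volume(volume_id, src.read_volume(volume_id))
```

The hard parts that would make this a product rather than a toy (consistency during cutover, incremental sync, protocol differences) are exactly where the complexity lands, and that's the conversation HP Storage hands off to other groups.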

Consistent experience 

With the focus on infrastructure come some advantages. What should be a given was a nice surprise: HP's mid-tier storage solution, the 3Par 7000, has the same software and services as the top-tier array, the 3Par 20000. This isn't the case with industry leader EMC, whose VNX and VMAX are based on two completely different software stacks. The difference in software approach can create management challenges for environments that run both tiers of storage.

Conclusion 

While the event was mentally exhausting, I'm impressed with HP's storage approach. I'd love to see a focus on data mobility, but I now understand all of the excitement around the 3Par acquisition.


9 thoughts on "My musings on HP Storage"

  • June 4, 2015 at 5:19 pm

    Keith, in one respect Manish is right: there is no long-term future for hybrid arrays. However, they serve as a very good stopgap, being by definition cheaper than all-flash. In the short term, hybrid arrays have provided performance improvements at a competitive price point. As CPU and memory speeds continue to increase (faster than HDD performance), the benefit of hybrids will diminish. Then we will see more all-flash systems.

    The question is how quickly will that be realised? 6 months, 18 months, 5 years? I suspect hybrid arrays will be around for some time. Legacy HDD arrays will go first.

    However, here's a devil's advocate point for you. Does the hybrid definition mean flash+HDD or simply multiple storage components? Hybrid could mean NVDIMM & flash; that could be *faster* than an all-flash array, even with high capacity MLC or 3D-NAND.

    I'd say that HDD usage will diminish and move toward being archive. Primary storage will migrate more to flash and new technologies like NVDIMM. It's a complex discussion, not simply a case of hybrid or not hybrid.

    • June 5, 2015 at 10:49 am

      How long the hybrid NVM/disk array remains a viable solution depends very much on which customer segment you're looking at. (NVM is non-volatile memory: today that's flash, but it also includes future tech like PCM and STT-MRAM.) Every organization has applications and data with varying performance requirements. Since no one ever complains about their application running too fast, the only reason to buy a spinning disk is that it stores data more cheaply than NVM.

      If you're big, you can build or buy multiple storage systems and have the talent on staff to determine which apps go on the AFA and which go on spinning disks. That talent includes the DBA who can take the invoice and shipment detail that's more than 6-36 months old and migrate it to a separate tablespace stored on the spinning disks. If you end up storing 15 or 20% of data on the AFA that doesn't deserve it, you can afford the extra flash capacity. Overall you'll be more efficient, as an all-disk storage system (or all disk except for metadata on flash, so 2% flash) will cost less per GB than the disk end of a hybrid.

      If you're an SMB with 20TB of total data, a hybrid that can do the data placement will be one system to manage, won't require that DBA, and will be more cost effective overall. This will be true until and unless flash becomes only 20-50% more costly than high-capacity spinning disks, which in my opinion would require a failure of the new disk technologies like HAMR and bit-patterned media.
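      To put rough numbers on that break-even point, here's a quick back-of-the-envelope comparison. Every price below is a made-up placeholder, not market data.

      ```python
      # Back-of-the-envelope comparison of all-flash vs. hybrid for the
      # 20 TB SMB case above. All prices are hypothetical placeholders.

      TOTAL_TB = 20
      HOT_FRACTION = 0.2  # assume ~20% of the data is active enough to need flash

      def cost(flash_per_tb: float, hdd_per_tb: float) -> tuple:
          all_flash = TOTAL_TB * flash_per_tb
          hybrid = (TOTAL_TB * HOT_FRACTION * flash_per_tb
                    + TOTAL_TB * (1 - HOT_FRACTION) * hdd_per_tb)
          return all_flash, hybrid

      # If flash costs 10x what disk does, the hybrid wins easily:
      print(cost(flash_per_tb=500.0, hdd_per_tb=50.0))  # (10000.0, 2800.0)

      # At only ~1.3x the cost of disk, the gap (and the case for
      # managing two tiers) nearly disappears:
      print(cost(flash_per_tb=65.0, hdd_per_tb=50.0))   # (1300.0, 1060.0)
      ```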

      • June 5, 2015 at 11:09 am

        I think it's driven by the price of HDDs. Even for the large enterprise, there comes an inflection point where it just isn't cost effective to manage tiers between NVM and HDDs. However, the conversation on managing tiers doesn't go away. Array vendors will always make sure we have an expensive, faster/more resilient layer to do the DBA dance between.

  • June 4, 2015 at 6:06 pm

    Could you define data mobility?
    3PAR hybrid arrays support automated tiering between flash and HDD today using adaptive optimisation. If it's array-to-array workload mobility, then this is also supported via federation between 3PARs running Peer Motion.

    • June 4, 2015 at 6:11 pm

      My post on data virtualization describes what I was referring to, but it's array-to-anything, not just 3Par to 3Par.

      • June 4, 2015 at 6:53 pm

        Storage virtualization appliances have been around forever doing this, at least at block level. The downsides, however, tend to be cost, complexity, scale, and lowest-common-denominator features, as well as unintentional lock-in to the appliance provider. In the low-latency world of flash, such appliances just represent another speed bump between application and data.

        BTW, 3PAR can import one way from other arrays today without the need for external appliances, e.g. EMC CX/VNX/VMAX, HDS, EVA, etc., to enable migration/refresh using the same software. I appreciate it's not two-way, but this keeps the complexity down and means you can immediately take advantage of 3PAR features instead of being stuck with all the legacy baggage after the move.

        If you truly need protocol-agnostic, anything-to-anything movement, then it's a higher-level function that typically needs to sit with the OS/hypervisor or a specialist migration toolset.


  • June 12, 2015 at 3:44 am

    I'm going to play devil's advocate for a bit; I was looking at the slides of a talk today.

    And I'm wondering: let's say you pay a lot of money and buy some nice high-performance storage device. Isn't it kind of sad how most of us are crippling its performance by using VMs instead of getting bare-metal performance with containers?

    http://events.linuxfoundation.org/sites/events/files/slides/Standen_Linux_Clusters.pdf

    Just look at the read-latency graphs on page 32. Don't you just want to cry?

    • June 12, 2015 at 4:08 am

      Violin? Are they still in business? 🙂

      That being said, if companies aren't looking at mixing some containers into their infrastructure, they aren't paying attention.


