If you’ve never watched these YouTube videos from Cisco, you should check them out.  They are almost as good as my Tech Talks :).  At the end of most of them they decide which part of the technology discussed is the unicorn, meaning what is realistic today and what still needs to mature.  The topic of this particular post is the Software Defined Data Center (SDDC).  I think SDDC is somewhat of a unicorn in itself, which makes the kind of Cloud Management that brings value to organizations another unicorn.

To have SDDC you need mature Software Defined Networking (SDN), Software Defined Storage (SDS), and, even if it’s not a proper term, let’s keep it rolling, Software Defined Compute (SDC).  Of the three, SDC is obviously the most mature.  It doesn’t matter whether we are talking VMware, KVM, Xen, or Hyper-V; vendors seem to have SDC under control.  It’s the other two areas that still need work, both in standards and, in the case of SDN, in an actual operating model.

SDS already exists in various technical forms.  Vendors have virtualized or abstracted storage for a long time now.  You can present storage in an SDDC no matter what physical components make up the block-layer storage.  Using controller software you can present NFS, SMB, or LUNs to clients over those protocols.  The physical storage can be white-box servers with SATA drives or an EMC VMAX with a tray of Solid State Drives (SSDs).  The gray area appears when you need to peer into the physical layer, because each vendor goes about it in a different way.  There’s a great write-up on Duncan Epping’s Yellow-Bricks site that goes into detail on the state of SDS.
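To make that abstraction concrete, here is a minimal sketch of the idea, assuming a toy controller; the class names, pool names, and endpoint formats are all hypothetical, not any vendor’s actual API:

```python
# A toy illustration of the SDS idea: the client asks for a protocol
# endpoint; the controller decides which physical pool backs it.
# All names and endpoint formats here are made up for illustration.

class StoragePool:
    def __init__(self, name, media):
        self.name = name      # e.g. "whitebox-sata" or "vmax-ssd"
        self.media = media    # physical detail the client never sees

class StorageController:
    def __init__(self, pools):
        self.pools = pools

    def present(self, protocol, size_gb):
        """Return a protocol endpoint; the caller never picks the pool."""
        pool = self._pick_pool(size_gb)
        if protocol == "nfs":
            return "nfs://controller/exports/%s-%dg" % (pool.name, size_gb)
        if protocol == "smb":
            return "\\\\controller\\%s-%dg" % (pool.name, size_gb)
        if protocol == "iscsi":
            return "iqn.2013-05.controller:%s.%dg" % (pool.name, size_gb)
        raise ValueError("unsupported protocol: %s" % protocol)

    def _pick_pool(self, size_gb):
        # Placement policy would live here; the sketch just takes the first.
        return self.pools[0]

controller = StorageController([
    StoragePool("whitebox-sata", "sata"),
    StoragePool("vmax-ssd", "ssd"),
])
print(controller.present("nfs", 500))  # client code is identical either way
```

Swap the pool list for anything else and the client-facing call does not change; that is the whole promise of SDS in one toy example.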

Then there’s SDN.  I won’t repeat a lot of the challenges that face SDN, but if you want to read up on them, start here.

This brings us back to Cloud Management.  Ultimately you have to ask yourself what capabilities you would like in a Cloud Management solution.  Some of that depends on whether you are looking to extend your infrastructure or to develop cloud-aware applications; the difference can basically be broken down into the vCloud vs. OpenStack question.  But ultimately you want to deliver as much control as possible, via abstraction, over the SDDC components of your cloud without concern for the underlying hardware.  You want the end user of the Cloud Management solution to be able to select the storage, network, and compute attributes related to their cloud application.
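As a thought experiment, the request that end user hands to the platform might look something like this; the attribute names and the validation helper are hypothetical, just to show the shape of the idea:

```python
# A hypothetical, declarative request an end user might hand to a
# Cloud Management platform: attributes only, no hardware details.
app_request = {
    "name": "web-tier",
    "compute": {"vcpus": 4, "memory_gb": 16, "instances": 3},
    "network": {"zone": "dmz", "bandwidth_mbps": 1000},
    "storage": {"tier": "Fast", "size_gb": 500, "protocol": "iscsi"},
}

REQUIRED_SECTIONS = ("compute", "network", "storage")

def validate(request):
    """Reject requests that are missing one of the three SDDC legs."""
    missing = [s for s in REQUIRED_SECTIONS if s not in request]
    if missing:
        raise ValueError("missing sections: %s" % ", ".join(missing))
    # Note what is absent: nothing asks which hypervisor, switch
    # vendor, or storage array will actually serve the request.
    return request

validate(app_request)
```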

Most mature Cloud Management systems let you accomplish these goals by interfacing directly with the APIs of the individual vendors of these products.  Say you want to deliver a “Fast” pool of disk to your application.  Today the Cloud Management solution needs to communicate directly with the APIs of your storage vendor.  What if you want multiple storage vendors on the backend?  This is where SDS with standards will step in and help.  The current approach is for solutions like OpenStack to hardwire that capability into the Cloud Management platform.  When the unicorn of Cloud Management is reached, there will be no need to hardwire it.  In the ideal future your “Fast” storage can be backed by VMAX SSDs or Hitachi SSDs without major configuration changes to your Cloud Management platform; it would just be handled at the middleware layer that is SDS.
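Here is a rough sketch of what that middleware layer might look like, assuming a simple tier-to-driver registry; the driver classes and tier names are made up for illustration and are not part of any real SDS standard:

```python
# Hypothetical SDS middleware: the Cloud Management platform asks for
# a tier by name, and vendor drivers register themselves behind it.
class VMAXDriver:
    def provision(self, size_gb):
        return "vmax-ssd-volume-%dg" % size_gb      # stand-in for a vendor API call

class HitachiDriver:
    def provision(self, size_gb):
        return "hitachi-ssd-volume-%dg" % size_gb   # stand-in for a vendor API call

class SDSMiddleware:
    def __init__(self):
        self.tiers = {}   # tier name -> list of drivers able to serve it

    def register(self, tier, driver):
        self.tiers.setdefault(tier, []).append(driver)

    def provision(self, tier, size_gb):
        # Swapping VMAX for Hitachi is a registration change here, not a
        # reconfiguration of the Cloud Management platform above.
        drivers = self.tiers.get(tier)
        if not drivers:
            raise LookupError("no backend registered for tier %r" % tier)
        return drivers[0].provision(size_gb)

sds = SDSMiddleware()
sds.register("Fast", VMAXDriver())
sds.register("Fast", HitachiDriver())   # both vendors serve the same tier
print(sds.provision("Fast", 500))
```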

I actually started this post just to link the video.  But hey, I like to talk.


3 thoughts on “The Unicorn that is Cloud Management”

  • May 11, 2013 at 9:20 am

    I don’t think this unicorn is all that far off; my unicorn is a bit larger than theirs, though.

    I want to go from some software and declarative templates on a CD/USB stick, plus some racks of commodity servers with switches, to a fully auto-scaling, web-scale, self-managing, fully HA, infrastructure-aware, production-ready system with web administration and availability zones, in a couple of hours, based on just open and free software.

    OpenStack is working on it, have a look at the video: https://www.youtube.com/watch?v=O-SdNaFq2CQ (there are more on that channel related to this subject)

    Without relying on any outside projects, they have bare-metal provisioning, networking, auto-scaling, some storage solutions, and more. For storage, Ceph is probably an even better choice, and it could be easily deployed within the same declarative framework.

    And both projects are progressing fast.

    People working on OpenStack are busy adding encryption at several layers, plus key management. Ceph now supports encryption too, and it has incremental backup, which you could base a DR solution on.

    OpenStack even wants to do OpenStack on OpenStack. So you have one OpenStack handle the bare metal, then you roll out an OpenStack installation for the users/tenants on top of that like any other tenant/application. Why have OpenStack on OpenStack? So you can run test environments and upgrade tests on the same hardware as production, as different tenants next to each other, auto-scaling as needed.

    With something like OpenShift on top of it you can have auto-scaling PaaS as well.

    In my mind the unicorn might be to do what I mentioned above and then seamlessly burst into a hybrid cloud situation.

    OpenShift already supports running on OpenStack, Azure, AWS, and more, though, so maybe they already support hybrid cloud.

    So is it going to take them half a year or one year to complete the unicorn? I don’t know. But it won’t be 3 years.

    • May 11, 2013 at 10:18 am

      I’m an idiot; it’s the enterprise workloads that people want to run on the cloud that are the unicorn. Supporting all the legacy applications in a cloud environment is a lot of work to get right.

      Supporting them fully encrypted in a public cloud is the unicorn.


