If you have primarily worked in a Windows Server environment, you may be asking yourself: what's the big deal about containers? After all, we've seen this play out on the desktop side for years. Windows has had application virtualization in different forms from many vendors, the best known being Citrix XenApp. XenApp holds a performance advantage over full virtual desktops similar to the one containers hold over full Linux OS virtualization. Beyond performance, the Windows desktop ecosystem also has container-like solutions that address application portability and environment independence.

Performance 

My experience has been that XenApp is about 300% denser than full VDI. Years ago, I watched 45 or more users stream Lotus Notes 6 clients from a single XenApp host, while the same hardware could only support about 15 full desktop OSes (this was a while ago, but I believe the ratio holds on modern hardware). XenApp can even give each application instance its own IP address and a degree of dedicated memory space.

Application Portability

Both Microsoft and VMware offer solutions for pure application virtualization. ThinApp (VMware) and App-V (Microsoft) let customers package an application with all of its supporting files and registry entries so that it runs independently of the underlying Windows installation. If you were so inclined, either product would let you run every version of Office on the same OS. In ThinApp's case, you can package the application as a single .EXE that will run on a kiosk. This is an impressive set of capabilities that server applications largely don't need. I'll explain why shortly.

Why Windows containers are not taking off

Parallels, the sometimes-forgotten virtualization company, offers a server-specific application virtualization solution, so there is at least one well-known vendor with a mature product in this space. The simple answer is that the gap containers fill in Linux doesn't exist in the Windows application development ecosystem. The advantage of the Microsoft monopoly is that there's only one distro for each major OS release. There may be different SKUs such as Standard, Enterprise, and Datacenter, but these are basically licensing distinctions. If a developer writes an application for Windows Server 2008, it's reasonably assured that the application will run across cloud providers as well as the private data center.

This is not the case for Linux. When a developer writes an application on a VM provided by their private cloud, there is no guarantee that the same packages, or even the same distro, will be available in the target deployment environment. While installing applications on Linux has gotten easier over the years, the experience still doesn't compare to Windows. For all the grief we give (and receive) about Windows, its application installation experience is best in class.
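To make that concrete, here is a minimal sketch of the distro-specific branching a Linux install script often ends up carrying (the package names are real Debian and RHEL examples, but the script itself is illustrative):

    # Hypothetical install step that must branch per distro -- exactly the
    # environment coupling that a container image packages away.
    if command -v apt-get >/dev/null 2>&1; then
        sudo apt-get install -y libpq-dev        # Debian/Ubuntu package name
    elif command -v yum >/dev/null 2>&1; then
        sudo yum install -y postgresql-devel     # RHEL/CentOS name for the same library
    else
        echo "unsupported distro" >&2; exit 1
    fi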

What about the performance advantages? Keep in mind that we are dealing with enterprise applications. Looking at the Parallels product, the target market is the cloud provider. Containers are extremely effective when you are running many copies of the same workload on the same OS. When the applications are varied, containers become less appealing from an administrative and, I'd assume, a performance perspective.

It's highly unlikely that a Windows enterprise runs enough similar workloads to justify the overhead of container management software. And while the density offered by application virtualization is appealing, it isn't appealing enough to replace the simplicity of OS virtualization with VMware or Hyper-V. We can again look to the desktop as an indicator: even when faced with hard data, it's difficult to convince an enterprise to adopt a pure XenApp deployment over full VDI such as VMware Horizon or XenDesktop. My guess is that the simplicity of desktop virtualization wins; the effort needed to engineer and manage streamed or virtualized applications is too much for the average decision maker.

Will containers take off in Windows Server?

Containers make sense in Linux environments. The portability challenge is painful enough for enterprises to adopt containers for their Linux infrastructures. I'm skeptical that containers will take off in Windows Server: unless enterprises begin to build scale-out applications that can take advantage of them, I see little value.


8 thoughts on “Will Docker-like containers take off in Windows?”

  • July 7, 2014 at 6:07 pm

    You say Microsoft only has one distribution, but there are still many versions (think about things like updates).

    Docker is about packing up a server app, ‘microservice’, and everything it depends on and the environment.

    So in the case of Windows: if a Windows update is applied, it should not affect, and thus cannot break, the application.

    You can also run many different applications on the same machine, each depending on totally different versions of basically any part of the operating system. So you can run that old IIS6/Windows 2003/ASP website on the same box as an app built for an IIS8.5/Windows 2012R2 environment. Why?

    Because when you deploy with Docker, the bundle includes the things the app depends on, with exactly the same configuration the developer used. The Docker/microservices concept also means that if you need to talk to a database, the address of the server to connect to isn't baked into the container; it can be supplied when the container is started. So the container doesn't need to be changed; it is the same in all environments, even though on the dev laptop it might point at a locally running database, or maybe even just a mock-up service. In test it might be 10.0.1.32, but in production you run the same container connected to 10.3.45.34 (see the sketch at the end of this comment).

    At the same time, you are only running/loading the parts of the operating system/environment the app depends on, nothing more. That is the reason it is efficient.

    And what Docker got right is this: there is nothing for ops to manage, as there is in your XenApp example; it's the application developer who creates the environment his app will run in and delivers it packed as one bundle.

    How that environment was built is written down, so it can be recreated on a newer version of the operating system and tested.

    That is why people use it: it makes it really easy to deploy things at large scale, and it makes deployment reliable and repeatable.

    In the old IE6 days, I used to joke that there were probably thousands of versions of IE6. IE at that time got a lot of partial updates: only certain DLLs were updated, and updates were delivered through Windows Update, kind of like hotfixes. But if your computer was installed at a later date, you'd get bundles of updates. That means the order and timing of installation, and which bundled updates people got, made IE6 behave very unpredictably, even though every IE6 installation seemed the same.

    That is still true for Windows updates. Microsoft has gotten better at it, but the problems remain. They send out more bundles now, but those bundles update many, many different and unrelated parts of the operating system.

    In the case of IE6, this made it really difficult to deliver web-based software, because you couldn't test it reliably. Lots of ‘works on my machine’.

    PS Sorry for being a bit quiet on your blog. :-)

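    A minimal sketch of the workflow described above, assuming a hypothetical Python app and image name (myshop/web); the build recipe is written down, and only the injected configuration differs per environment:

        # Dockerfile -- the environment is "written down" and reproducible.
        # Pin the exact base image the developer tested against.
        FROM python:2.7
        # Ship the app and everything it depends on.
        COPY app/ /app
        WORKDIR /app
        RUN pip install -r requirements.txt
        # No database address is baked in; the image is identical everywhere.
        CMD ["python", "server.py"]

        # Same container in every environment; only the supplied config changes.
        docker build -t myshop/web:1.0 .
        docker run -e DB_HOST=10.0.1.32 myshop/web:1.0     # test
        docker run -e DB_HOST=10.3.45.34 myshop/web:1.0    # production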
  • July 7, 2014 at 6:12 pm

    And when I say scale: I do mean scale.

    Google used to use the term: the datacenter is the computer.

    When your applications are just microservices which can be deployed on any server that has capacity, you can finally use a single scheduler to manage all the servers as one large system. This is why people refer to it as an operating system for your datacenter (a rough sketch follows below).

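    A rough sketch of that scheduler idea, using Docker's later Swarm mode (which postdates this comment); the service name and image carry over from the hypothetical sketch above:

        # Pool the machines, then let the scheduler decide where replicas land.
        docker swarm init
        docker service create --name web --replicas 5 \
            -e DB_HOST=10.3.45.34 myshop/web:1.0
        # Scale out without caring which servers end up running the containers.
        docker service scale web=20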
    • July 7, 2014 at 6:18 pm

      Great thoughts, Lennie. One of my primary points is that this capability has been available in Windows for years in the form of ThinApp, Microsoft App-V, and Parallels. The issues of updates and versions haven't been disruptive enough to the development and distribution life cycle of Windows apps for application virtualization to be compelling.

      A post I'm thinking about is the relevance of Linux and Windows in general. Services and microservices are the future of development; application development will be based on these services rather than on operating systems. I think we'll see application development become independent of the OS, built on cloud-like services, before we see microservices adopted on Windows to solve the challenge of scale in .NET applications.

  • Pingback: The future of Containers | VirtualizedGeek

    • October 15, 2014 at 9:27 am

      Thanks Ken. Interesting news.


