Docker Enhances Container Portability, Plug-In Capability And Orchestration Services

On Monday, Docker announced a bevy of functionality that enhances the portability of distributed applications composed of multiple containers. Docker multi-host SDN renders containers portable across infrastructures and enables the components of a distributed application to communicate with one another across IP networks. The multi-host SDN functionality allows developers to define the networking parameters of a distributed application and subsequently transport the application to another hosting environment without fundamentally reworking the application itself. In addition to multi-host SDN, the Docker platform now features a plugin architecture that opens the door to deeper integrations with technology vendors. Docker’s new plugin functionality allows for integrations with networking technology from the likes of Cisco, Microsoft, Midokura, VMware and Weave, as well as storage technology from ClusterHQ.
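Multi-host networking was still an experimental capability at the time of the announcement, so the exact tooling was in flux. As a rough sketch of the workflow described above – define the network alongside the application, then attach containers to it – the snippet below uses the Docker SDK for Python, a client library that is an assumption here rather than part of the announcement; the network name, image and driver configuration are likewise illustrative.

```python
# A minimal sketch, assuming the Docker SDK for Python ("pip install docker")
# and a Docker engine already configured for multi-host networking (e.g. with
# a key-value store or swarm backing the overlay driver). Names are invented.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Define the application's network once. Because the definition travels with
# the application rather than with the host, the same network can be
# re-created on another infrastructure without changing the application.
app_net = client.networks.create("app-net", driver="overlay", attachable=True)

# Any container attached to the network can reach its peers by name over the
# overlay, even when the peers run on different hosts.
web = client.containers.run("nginx:latest", name="app-web",
                            network="app-net", detach=True)

web.reload()  # refresh container metadata from the daemon
print(web.name, list(web.attrs["NetworkSettings"]["Networks"]))
```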

Docker also revealed details of enhancements to its orchestration tools that enable a multi-container application to be “immediately networked across multiple hosts,” as noted in a press release. Docker’s expanded orchestration functionality also features an integration with Mesos as well as a partnership around the Amazon EC2 Container Service to optimize the scheduling of Docker-based applications on Amazon Web Services. Taken together, Docker’s multi-host SDN functionality, plugin architecture and orchestration capabilities further cement its emergence as the de facto infrastructure, in contrast to virtual machines, for the development of distributed applications. The multi-host SDN functionality, in particular, goes a long way toward rendering containers more portable across infrastructures while still allowing developers to tweak network topologies as needed to optimize a distributed application for the specifics of the host environment in question.

CoreOS Announces Integration With Google’s Kubernetes To Signal Emergence Of Container Standard, Independent Of Docker

CoreOS has announced that its rkt (pronounced “rocket”) container technology will be integrated with Google’s Kubernetes container management framework. The integration of CoreOS’s rkt technology with Kubernetes means that the Kubernetes framework need not leverage Docker containers, but can instead rely solely on CoreOS’s container technology. CoreOS’s rkt technology consists of container runtime software that implements appc, the App Container specification designed to provide a standard for containers based around requirements related to composability, security, image distribution and openness. CoreOS launched rkt on the premise that Docker had strayed from its original manifesto of developing “a simple component, a composable unit, that could be used in a variety of systems,” as noted by CoreOS CEO Alex Polvi in a December 2014 blog post:

Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform. It is not becoming the simple composable building block we had envisioned.

Here, Polvi notes how Docker has transitioned from an initiative focused on creating reusable components to a platform whose mission has deviated from its original manifesto. Today’s announcement of the integration of CoreOS’s rkt with Kubernetes represents a deepening of the relationship between CoreOS and Google that recently included a $12M funding round led by Google Ventures. While CoreOS previously supported Kubernetes, today’s announcement of rkt’s integration into Google’s container management framework is a clear sign that the battle for container supremacy is likely to begin in earnest, particularly given that CoreOS brands itself as enabling other technology companies to build Google-like infrastructures. With Google’s wind in its sails, and with executives from Google, Red Hat and Twitter having joined the App Container specification community management team, Docker now confronts a real challenger to its supremacy within the container space. Moreover, Google, VMware, Red Hat and Apcera have all pledged support for appc in ways that suggest an alternative standard defining “how applications can be packaged, distributed, and executed in a portable and self-contained way” may well be emerging.
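To give a concrete sense of what appc standardizes, the sketch below builds a rough, appc-style image manifest of the kind rkt consumes and serializes it to JSON. The application name, executable path and version numbers are invented for illustration, and the field names follow the early App Container image manifest layout rather than any particular released revision of the spec.

```python
# Illustrative only: an appc-style image manifest expressed as a Python dict
# and serialized to JSON. All values here are invented for the example.
import json

image_manifest = {
    "acKind": "ImageManifest",            # the kind of appc document
    "acVersion": "0.5.1",                 # spec version (illustrative)
    "name": "example.com/reduce-worker",  # image name in the appc namespace
    "labels": [
        {"name": "version", "value": "1.0.0"},
        {"name": "os", "value": "linux"},
        {"name": "arch", "value": "amd64"},
    ],
    "app": {
        "exec": ["/usr/bin/reduce-worker", "--quiet"],  # entry point
        "user": "0",
        "group": "0",
    },
}

print(json.dumps(image_manifest, indent=2))
```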

Guest Blog Post: Docker’s Jigsaw Puzzle Challenge by StackEngine CEO Bob Quillin

The following blog post is republished from the StackEngine blog by StackEngine CEO Bob Quillin, with permission. Cloud Computing Today had the pleasure of speaking with Bob Quillin about the contemporary interest in container technology and the concomitant pressures the industry faces with respect to container management in the context of the piece he authored below.

Container Days Austin, CoreOS, & Google

The unprecedented success of the first Container Days Austin (CDATX) and the latest series of product announcements from Google, CoreOS, and Rancher (plus soon to be others for sure) highlight two competing trends in the Container community today. Every POC (proof of concept), lab trial, and testbed evaluating Docker and containers has to fight through these issues:

  1. Mass Confusion: too many puzzle pieces

    For most organizations, especially enterprises, there are simply too many puzzle pieces to fit together. From where to start, to which app to convert, rewrite, or start anew, to which distro to run, which container platform to evaluate, or which service discovery to use – not to mention networking, storage, build process integration, deployment, scheduling, orchestration, and so on – the process is just too complex. The sessions at CDATX ran the gamut up and down the container stack, with attendees working diligently to put the puzzle together themselves.

  2. Intense Interest: strong belief that this puzzle is the one

    Containerization arrives at a time in the market when virtualization has laid the groundwork for the next infrastructure revolution. IT, DevOps, and development teams are ready to take the next step to achieve greater capex (capital expenditure) cost savings and opex (operational expense) efficiency through automation. Furthermore, the DevOps movement has now crossed over to mainstream IT, and containers and Docker are the killer app for DevOps. Interest is beyond belief, and the senior practitioners who dominated CDATX attendance were on a clear mission to understand and learn.

It’s not surprising that the Docker market has too many jigsaw puzzle pieces to put together. This has been a developer-driven, open-source-centric market built on the concept of pluggable components that developers compose together into a production application. That’s a hard problem to solve – so hard that many folks have just decided to jump to a PaaS-like approach. And if the market doesn’t start to pre-assemble the pieces for customers, we’ll likely see more and more of them move to a PaaS model that hides this underlying complexity from the end user. While there is nothing wrong with the PaaS model, the big opportunity for a new breed of solution combines the ease of developer deployment of a PaaS with the infrastructure control of an IaaS like Amazon AWS.

Puzzles or Portraits?

So how are we all responding to these challenges? There are several movements afoot that attack this problem:

  1. The Free Market Approach: create more puzzle pieces

    If you don’t like the Docker platform or available Linux distros, then build your own that’s smaller, skinnier, more secure, more enterprise-ready, more performant, more whatever it is you think the ecosystem is missing. While this approach may not help Docker customers to simplify the puzzle, it’s not realistic to think that Docker can or will solve all the problems out there on its own. We’ll likely see a variety of fixes, flavors, and options offered by emerging startups and established vendors alike.

  2. The Top-Down Approach: pre-assemble the puzzle with your own pieces

    While the whole puzzle can’t yet be assembled, vendors like CoreOS and Google (with their latest Tectonic initiative) and Docker itself see the need to begin to vertically integrate parts of the stack – to make it easier and more commercially viable. That is, if the ecosystem is becoming too fragmented (with too many puzzle pieces) then vendors may try to take the situation into their own hands and simplify. The question again here is, does the customer win?

  3. The Agile Market Approach: start with a picture – add pieces as you go

    StackEngine firmly believes that we need better pieces to the puzzle to make everything enterprise-grade and production-ready, from Docker to containers to the associated compute, networking, and storage optimizations. That push should never end. But enterprises and organizations committed to DevOps methodologies need to apply agile methods throughout. They require an end-to-end solution where they can pay less attention to assembling the puzzle pieces and more to cutting licensing costs and deploying apps more reliably, quickly and frequently, with the freedom to plug in new jigsaw puzzle pieces as they go and as they choose.

If you want to be part of the discussion on where and how this industry will move forward, keep an eye out for the next Container Days unconference, where you can come ready to learn more about this emerging puzzle and/or lead the conversation. Container Days Boston is coming up on June 5-6, and plans are in the works to head to Silicon Valley next – the organizers are just getting started and are looking for sponsors. More cities are in the works as well, so let us know if we can help get things off the ground! Finally, Container Days Austin 2016 is already in planning mode, so check out Docker Austin in the meantime to get plugged in.

Docker Raises $95M In Series D Funding

On Tuesday, Docker, Inc. announced the finalization of a whopping $95M in Series D funding. The round was led by Insight Venture Partners with additional participation from new investors Coatue, Goldman Sachs and Northern Trust and existing investors Benchmark, Greylock Partners, Sequoia Capital, Trinity Ventures and Jerry Yang’s AME Cloud Ventures. The funding will be used to strengthen strategic partnerships with companies such as Amazon Web Services, IBM and Microsoft, each of which supports Docker on its respective cloud platform and has contributed to Docker’s go-to-market strategy. In addition, the funding will be used to accelerate product development, particularly around the Docker management and application development lifecycle tools that promise to enhance the value of the Docker offering.

Solomon Hykes, founder and CTO of Docker, remarked on the significance of the funding raise as follows:

Our responsibility is to give people the tools they need to create applications that weren’t possible before. We will continue to honor that commitment to developers and enterprises. We think they are still looking for a platform that helps them build and ship applications in a truly standardized way, without lock-in or unwanted bundled features. That is what we set out to build, and we are not yet content with what we have achieved so far. We are getting a clear message from the market that they like what we are building, and we plan to keep building it. The financing enables us to deliver on that promise.

Although Docker has received clear market validation, Hykes notes that the company remains “not yet content” with what it has accomplished to date and hence hopes to use the extra funding to respond to customer demand for a “platform that helps them build and ship applications in a truly standardized way.” Because Docker can run on a multitude of infrastructure platforms, users can avoid vendor lock-in while enjoying the benefits of Docker’s portability and its ability to enhance operational agility by preserving the integrity of applications across development and production environments alike. Today’s Series D financing constitutes a dramatic affirmation of the validity of Docker’s business model and its potential for further growth: the investment gives Docker the freedom to cement partnerships with major players in the IaaS-cloud community while enhancing its product portfolio and its suite of tools for automating the management of clusters of Docker containers in distributed and non-distributed application environments alike. With an extra $95M in the bank, expect Docker to take ownership of the emerging cottage industry of vendors dedicated to Docker management tools and processes and to bring Docker to more and more production-grade enterprise environments in anticipation of an IPO. Today’s Series D raise brings the total capital raised by Docker to roughly $160M, building upon a $40M Series C raise in September.

Docker Acquires SocketPlane To Promote Networking API Standards For Its Container Technology

On Wednesday, Docker announced the acquisition of SocketPlane, a software-defined networking startup. From its inception in Q4 of 2014, SocketPlane sought to deliver networking purpose-built for the Docker platform for distributed applications and participated extensively in early initiatives focused on building Docker’s open API for networking. As part of Docker, the SocketPlane team will focus on facilitating the development of networking APIs with Docker’s partner ecosystem. The acquisition means that developers and Docker users can build applications knowing that they have a myriad of networking options for their Docker-based applications, all of which leverage one standard API. Madhu Venugopal, CEO of SocketPlane, remarked on his experience with SocketPlane as follows:

We started SocketPlane with a goal of creating the best networking solution in the Docker ecosystem. We’re now excited to be broadening that vision to support and empower the partner ecosystem to create the best solutions possible for users. Given the myriad of networking use cases enabled by Docker, we believe strongly that we will be fostering broad opportunities for partners to build differentiated capabilities based upon Docker’s open standards.

Here, Venugopal elaborates on how SocketPlane intends to “empower the partner ecosystem” to create the most effective networking solutions for the Docker platform. As a result, Docker stands to bless a variety of APIs that allow vendors to select the networking option that best integrates with their platform while still conforming to the API standardization promoted by the acquisition of SocketPlane. Meanwhile, Adam Johnson, General Manager of Midokura, noted that Wednesday’s “Docker acquisition of SocketPlane is significant as it validates the need for overlay networking options within the popular Docker open source ecosystem, and now provides Docker with the right expertise to expand its own networking options, which will be hugely beneficial to the industry.” Johnson remarks on how SocketPlane’s acquisition stands to significantly expand Docker’s range of networking options in ways that benefit not only Docker, but also the entire industry, insofar as anyone leveraging Docker containers for application development and portability will soon have access to an expanded range of networking options for Docker deployments. The acquisition illustrates the acuity of Docker’s vision with respect to potential partners, along with a perspicacious line of sight into the importance of building standardization into the proliferation of networking APIs for the Docker platform.

Docker Releases Suite Of Orchestration Tools And Expands Partner Ecosystem

On Thursday, Docker announced the public beta release of a suite of orchestration tools that enhance the ability of developers to manage the containers used to build distributed applications. Docker Machine, for example, provisions a host and installs the Docker engine on it, sparing developers the work of setting up that infrastructure themselves. Docker Machine can provision Docker hosts on Amazon Elastic Compute Cloud (Amazon EC2), Digital Ocean, Google Cloud Platform, IBM SoftLayer, Microsoft Azure, Microsoft Hyper-V, OpenStack, Rackspace Cloud, VirtualBox, VMware Fusion®, VMware vCloud® Air™ and VMware vSphere. These deep integrations with the most widely used IaaS platforms and technologies in the industry allow developers to deploy Docker containers using a single command that provisions the host infrastructure and installs the Docker engine. Meanwhile, Docker Swarm creates clusters of Docker engines and manages the relationships between containers as an application scales. In addition, Docker Swarm gives developers a unified interface for managing multiple Docker engines and handles the scheduling of application-related jobs and processes across multiple containers. Finally, Docker Compose enables developers to manage a multi-container application using a YAML file that defines and updates the relationships between the various containers that collectively constitute the application.
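Docker Compose itself is driven by a YAML file, but the same idea – declare the containers that make up an application and their relationships once, then create them together – can be sketched with the Docker SDK for Python. The service names, images and network below are assumptions for illustration, not the Compose file format or the Compose tool itself.

```python
# A minimal sketch, not Docker Compose itself: the services a Compose YAML
# file would declare are expressed as a dict, then created together with the
# Docker SDK for Python ("pip install docker"). Names and images are invented.
import docker

services = {
    "db":  {"image": "redis:latest"},
    "web": {"image": "nginx:latest", "ports": {"80/tcp": 8080}},
}

client = docker.from_env()

# A shared user-defined network lets the containers reach one another by
# container name, which is roughly how Compose wires an application together.
client.networks.create("demo-app", driver="bridge")

containers = {}
for name, spec in services.items():
    containers[name] = client.containers.run(
        spec["image"],
        name="demo-app_" + name,
        network="demo-app",
        ports=spec.get("ports"),
        detach=True,
    )

for name, container in containers.items():
    print(name, container.short_id, container.status)
```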

Docker’s release of its suite of orchestration tools comes on the heels of an announcement by Mirantis and Google that they will integrate Google’s Kubernetes container management framework with the OpenStack platform, thereby enhancing the ability of developers to move container-based applications from OpenStack-based private clouds to public clouds that support Kubernetes, such as the Google Cloud Platform. One advantage of Docker’s orchestration tools over other container management frameworks is that they deliver a unified, end-to-end experience for deploying and managing Docker containers. Docker Swarm, for example, integrates with the Amazon EC2 Container Service as well as the IBM Bluemix Container Service, Joyent Smart Data Center and Microsoft Azure, thereby enhancing the portability of applications and helping users avoid vendor lock-in. Moreover, Swarm works with third-party orchestration products in addition to the orchestration services specific to different cloud platforms. In all, the beta release of Docker’s orchestration tools, in conjunction with its expanded roster of partner integrations, suggests that Docker and the container management industry at large may well have cracked the problem of moving applications from one infrastructure to another and, by extension, solved part of the cloud computing industry’s vendor lock-in problem. Meanwhile, container usage stands to continue skyrocketing as more and more vendors contribute to ease of deployment, management and migration and collectively create a rich and vibrant ecosystem for container use and portability.