Container management vendor Rancher Labs today announced support for the orchestration of persistent storage services for Docker, surmounting a longstanding challenge for Docker applications that need to store persistent data. Rancher Labs now makes it possible for developers to orchestrate the deployment of storage services onto container host machines in conjunction with software-defined storage platforms such as Gluster, Ceph and Nexenta. The new functionality allows customers to launch applications that leverage these storage services in support of stateful application components, and the integration means customers can take advantage of advanced functionality from storage vendors such as backup, remote replication and snapshots. In addition, Rancher's support for orchestrating persistent storage services enables customers to deploy applications with storage services onto a multitude of environments and host machines, including public and private clouds as well as virtual machines and bare metal servers. By integrating persistent storage into Docker container management, Rancher Labs empowers customers to create and deploy container-based applications that depend on persistent storage, thereby facilitating the development of applications that need stateful databases to function. As such, today's announcement marks a notable advance in container management by bringing the orchestration of persistent storage to Docker containers and enhancing the ability of customers to deploy applications that require the storage of persistent data.
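As a rough sketch of the pattern described above (the service and driver names here are hypothetical, and the example assumes a Gluster-backed Docker volume plugin is installed on the hosts), a Compose-style file might declare a driver-backed volume for a stateful service:

```yaml
# Hypothetical Compose file: "glusterfs" stands in for whatever volume
# plugin backs the software-defined storage layer. Data written to the
# "pgdata" volume persists independently of the container's lifecycle.
version: "2"
services:
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: glusterfs
```

Because the volume is provisioned by the storage driver rather than tied to a single host's disk, the database container can be rescheduled onto another host and reattach to the same data.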
HashiCorp recently released Vault, an open source tool that secures and controls access to secrets within an organization. Examples of secrets include passwords, API keys and database credentials. Vault can encrypt secrets within a secret storage infrastructure, generate dynamic secrets on the fly with a designated lifetime, encrypt data and revoke secrets as necessary. Designed for distributed infrastructures, Vault specializes in managing the multitude of secrets needed to authenticate the components of microservices-based and distributed applications, such as containers. The platform's ability to lease secrets so that they expire after a designated period bolsters the security of applications and microservices components within distributed architectures that require the orchestration of multiple components. Vault currently manages secrets within the HashiCorp application lifecycle management platform by taking ownership of secrets required by applications such as Packer, Terraform, Consul and Atlas. Vault is used by Cisco within its open source microservices infrastructure community project and represents HashiCorp's sixth open source project. The product illustrates the new demands that distributed, microservices-based applications place on IT security, underscoring the importance of a secret management solution that recognizes how the heterogeneous components of contemporary infrastructures require correspondingly flexible secret management tools.
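The lease-and-expiry idea at the heart of Vault's dynamic secrets can be pictured with a toy sketch. This is illustrative Python only, not Vault's actual API; the class and method names are invented for the example:

```python
import time
import secrets


class ToyLeasedSecretStore:
    """Illustrative stand-in for a Vault-style store of leased secrets."""

    def __init__(self):
        self._store = {}  # name -> (value, expiry timestamp)

    def issue(self, name, ttl_seconds):
        # Generate a dynamic secret on the fly with a designated lifetime,
        # loosely mirroring Vault's dynamic secrets and leases.
        value = secrets.token_hex(16)
        self._store[name] = (value, time.time() + ttl_seconds)
        return value

    def read(self, name):
        # A secret is only readable while its lease is still live;
        # once the lease expires it is treated as revoked.
        value, expiry = self._store.get(name, (None, 0.0))
        if time.time() >= expiry:
            self._store.pop(name, None)
            return None
        return value

    def revoke(self, name):
        # Operators can also revoke a secret before its lease expires.
        self._store.pop(name, None)
```

A consumer would `issue` a credential with a short TTL, use it, and let it lapse; a compromised credential is therefore only useful for the remainder of its lease, which is the security property the announcement emphasizes.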
Microsoft recently announced the release of Microsoft Azure Service Fabric, a platform that supports the development of applications composed of independent microservices, a model exemplified by Docker containers. While Azure will support Docker in a subsequent release of Windows Server, its support for microservices by means of Microsoft Azure Service Fabric allows developers to enjoy the benefits of an architecture that uses Microsoft Azure's native microservices technology to create discrete application components that collectively enable enhanced scalability and the design of low latency, computationally intensive applications. Not surprisingly, Microsoft Azure Service Fabric features the ability to orchestrate and automate microservices in conjunction with application lifecycle management functionality for the distributed systems typically characteristic of microservices-based applications. The platform also supports Visual Studio tools such as designers, editors and debuggers that facilitate the development, deployment and ongoing management of applications across a variety of operating systems, environments and devices. In the same way that Amazon Web Services recently made available the same machine learning and data science platform used by its own data scientists in the form of Amazon Machine Learning, Microsoft Azure Service Fabric delivers the same core technology that Microsoft Azure has thus far used for Skype for Business, DocumentDB and Cortana.
The platform’s deep experience with enterprise-grade applications that serve millions of users and files means that it “intrinsically understands the available infrastructure resources and needs of applications, enabling automatically updating, self-healing behavior that is essential to delivering highly available and durable services at hyper-scale.” Because it understands the interplay between infrastructure, applications and distributed systems, Microsoft Azure Service Fabric delivers a microservices-based development platform designed to accommodate the needs of hyperscale applications. As such, even though Microsoft plans to support Docker containers on Azure in the future, Microsoft Azure Service Fabric constitutes a pre-packaged response and counterweight to the increasing traction of Docker by presenting Azure customers with a production-grade platform that supports stateless and stateful microservices while additionally featuring microservices orchestration and automation. In a nutshell, Microsoft Azure Service Fabric gives developers much of the functionality of Docker and an attendant Docker management platform, with the added distinction that the platform has been used for years in production-grade environments for household-name products such as Skype and Bing, while concurrently paving the way for streamlined usage of Docker containers on the Azure platform as well.
On Tuesday, Docker, Inc. announced the finalization of a whopping $95M in Series D funding. The round was led by Insight Venture Partners with additional participation from new investors Coatue, Goldman Sachs and Northern Trust, and existing investors Benchmark, Greylock Partners, Sequoia Capital, Trinity Ventures and Jerry Yang’s AME Cloud Ventures. The funding will be used to strengthen strategic partnerships with companies such as Amazon Web Services, IBM and Microsoft, all of whom have supported Docker on their respective cloud platforms in varying degrees and contributed to its go-to-market strategy. In addition, the funding will accelerate product development, particularly around Docker management and application development lifecycle tools that promise to enhance the value of the Docker offering.
Solomon Hykes, founder and CTO of Docker, remarked on the significance of the funding raise as follows:
Our responsibility is to give people the tools they need to create applications that weren’t possible before. We will continue to honor that commitment to developers and enterprises. We think they are still looking for a platform that helps them build and ship applications in a truly standardized way, without lock-in or unwanted bundled features. That is what we set out to build, and we are not yet content with what we have achieved so far. We are getting a clear message from the market that they like what we are building, and we plan to keep building it. The financing enables us to deliver on that promise.
Although Docker has received clear market validation, Hykes notes that the company remains “not yet content” with what it has accomplished to date and hence hopes to use the extra funding to respond to customer demand for a “platform that helps them build and ship applications in a truly standardized way.” Because Docker can run on a multitude of infrastructure platforms, users can avoid vendor lock-in while enjoying the benefits of Docker’s portability and its ability to enhance operational agility by preserving the integrity of applications in development and production environments alike. Today’s Series D financing constitutes a dramatic affirmation of the validity of Docker’s business model and its potential for further growth, giving Docker the freedom to cement partnerships with major players in the IaaS cloud community while enhancing its product portfolio and suite of tools for automating the management of clusters of Docker containers in distributed and non-distributed application environments alike. With an extra $95M in the bank, expect Docker to take ownership of the emerging cottage industry of vendors dedicated to Docker management tools and processes and to bring Docker to more and more production-grade enterprise environments in anticipation of an IPO. Today’s Series D raise brings the total capital raised by Docker to roughly $160M, building upon a $40M Series C raise in September.
In a blog post on April 8, Microsoft announced two new container-related technologies: Hyper-V containers and Nano Server. Hyper-V containers extend Microsoft’s plans to support Docker containers on multiple operating systems and hypervisors. In October, Microsoft and Docker announced plans to bring the Docker engine to the Windows Server platform, roughly four months after Microsoft announced support for Docker on Linux VMs within the Microsoft Azure platform in June. Hyper-V containers provide an enhanced degree of security by delivering a level of isolation characteristic of dedicated servers or VMs. This enhanced isolation ensures that code within a container remains separate from other containers and from the host operating system of the underlying infrastructure. Designed for the Hyper-V hypervisor, Hyper-V containers complement Windows Server Containers and can be deployed using the same development tools as those used for Windows Server Containers. Moreover, Hyper-V containers are interoperable with Windows Server Containers to the extent that applications designed for Windows Server Containers can be deployed as Hyper-V containers without additional configuration or tweaks.
Meanwhile, Nano Server is a minimal installation of Windows Server optimized for the cloud and for container technology, featuring a smaller server image and a reduced memory and networking footprint that make it an optimal foundation for containers. Taken together, Microsoft’s announcement of Hyper-V containers and Nano Server constitutes yet another illustration of Microsoft’s attempt to differentiate itself from other IaaS vendors by rolling out products that support and amplify the operational agility enabled by Docker technologies. Hyper-V containers, for example, promise to deliver enhanced levels of security for containers and give users a container option for the Hyper-V hypervisor in addition to Windows Server Containers and Linux containers. The bottom line here is that Microsoft is responding to increased customer demand for Docker container technology with an expanding and increasingly nuanced catalogue of options for deploying containers on different operating systems and virtualization platforms.
On Tuesday, Mirantis announced the integration of OpenStack with Kubernetes, the open source framework developed by Google to manage containers. The integration enhances the portability of applications between the private cloud infrastructures typical of OpenStack and public cloud environments such as the Google Cloud Platform and Microsoft Azure that support Kubernetes. Even though Docker containers are well known for enhancing the portability of applications across infrastructures, transporting applications and workloads from private clouds to public clouds remains challenging. The availability of Kubernetes within OpenStack-based private clouds, in addition to public cloud environments, now makes it easier to transport containerized applications from private to public clouds and thereby obtain a greater return on investment from hybrid cloud infrastructures.
Moreover, the integration between Kubernetes and OpenStack facilitates container management on the Mirantis OpenStack platform by automating and orchestrating the management of Docker containers within an OpenStack-based IaaS infrastructure. The integration depends on Murano, the OpenStack application catalog, which manages the infrastructure for Kubernetes clusters and deploys Docker applications to the Kubernetes cluster. As the application and the Kubernetes cluster scale, Murano manages the interplay between OpenStack compute, storage and networking resources and the application to ensure that the infrastructure needs of the application and its attendant Kubernetes cluster are met. Tuesday’s announcement underscores the burgeoning power of containers and container management frameworks such as Google’s Kubernetes, the significance of OpenStack within the private cloud space, and the increasingly urgent need for technologies that promote communication across cloud infrastructures toward realizing the full potential of hybrid cloud environments. The integration of Kubernetes and OpenStack’s Murano will be available for preview on the Mirantis OpenStack Express platform in April 2015.
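For readers unfamiliar with the Kubernetes objects such a pipeline would manage, a minimal manifest of the era might look like the following replication controller, which asks Kubernetes to keep three copies of a containerized web tier running (the names here are illustrative, not drawn from the Mirantis announcement):

```yaml
# Illustrative Kubernetes v1 ReplicationController. Kubernetes restarts
# or reschedules pods as needed to maintain three "app: web" replicas;
# Murano's role in the Mirantis integration is to provision and scale
# the OpenStack infrastructure underneath clusters running objects
# like this one.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - containerPort: 80
```

Because the same manifest can be applied to a Kubernetes cluster in a private OpenStack cloud or in a public cloud, the application definition itself becomes the portable unit in a hybrid deployment.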
Docker today announced details of Docker Hub Enterprise (DHE), a product that delivers workflow and Docker management capabilities behind enterprise firewalls. As such, Docker Hub Enterprise expands upon the capabilities of the existing Docker Hub platform by giving developers a way to share and collaborate on Docker applications behind their organization’s firewall. Docker users subsequently enjoy an enhanced degree of strategic control and security regarding the development and management of Docker applications. Docker CEO Ben Golub elaborated on the significance of Docker Hub Enterprise as follows:
Docker Hub Enterprise is Docker’s foundation for establishing relationships with our rapidly expanding enterprise customer base, who view the Docker open platform as the cornerstone of their distributed application strategy. These organizations want a behind-the-firewall solution that enables them to leverage both the broader ecosystem and the more dynamic development environment that Dockerization has enabled. Our vision for DHE is that it will evolve from the place to share and collaborate on distributed applications to a strategic control point for both developers and sysadmins to manage all aspects of the application development lifecycle – from build through production – on any infrastructure they choose.
Here, Golub elaborates on the way in which Docker Hub Enterprise promises to emerge as the “strategic control point” for managing application development that leverages Docker containers. Prior to the launch of Docker Hub Enterprise, developers and system administrators needed to amalgamate open source tools in order to share, distribute and collaborate on Docker applications behind an organization’s firewall. Now, DHE enables the creation of multi-container, distributed applications that accommodate the application lifecycle workflow requirements and protocols of Docker applications. Moreover, developers will continue to enjoy the ability to create distributed applications dispersed over multiple Docker containers by taking advantage of the functionality of Docker repositories and services hosted on Docker Hub.
Docker Hub Enterprise will be delivered through Docker Authorized Partners. At launch, Amazon Web Services, Microsoft and IBM will make DHE available to their customers. Microsoft will make DHE available through the Azure Marketplace, while Amazon Web Services will make DHE available through AWS Test Drives and AWS Quick Start Reference Deployments, which allow customers to explore software applications and architectures at no cost before incorporating them into production-grade deployments. Meanwhile, IBM will deliver DHE as both a cloud-based and an on-premises solution. The announcement of Docker Hub Enterprise comes in conjunction with news of the availability of a platform of orchestration services that facilitate the management of multi-container applications, in recognition of the evolving need to manage and orchestrate large numbers of containers across multiple host environments and infrastructures. The combination of Docker Hub Enterprise with Docker’s newly announced orchestration services underscores the paradigm shift in application development away from persistent applications on servers or VMs toward distributed applications composed of discrete components housed within interoperable containers. Docker’s orchestration platform is amongst the most comprehensive in the market today and responds to the cottage industry of products and services focused on container management. Docker Hub Enterprise will be available for early access in February 2015.