On Wednesday, Docker announced plans to spin off and open source containerd, the component of Docker Engine that manages containers on a host machine. Also known as Docker’s core container runtime, containerd provides the core primitives required to manage containers on Linux and Windows hosts, including container execution and supervision, image distribution, and the implementation of network interfaces and local storage. In production since the release of Docker 1.11 in April 2016, where it has powered millions of containers, containerd encapsulates foundational components of the Docker Engine that third parties can use as a common core container runtime within their own container-based products. IBM Vice President of Cloud Technology and Architecture Dr. Angel Diaz remarked on the importance of a common container runtime platform as follows:
As container adoption continues to grow, it’s important that, as an industry, we establish an openly governed container runtime to ensure consistent behavior across platforms. IBM and Docker have worked in partnership in the past to bring the single container runtime to an open community – we are expanding on this by establishing containerd as the open source and open governed project that builds on OCI outputs (specs and runtime) to manage multiple containers. Developers can utilize containers today on the IBM Bluemix Container Service, and we look forward to seeing container technology to continue to grow in functionality and long-term stability through this new initiative.
Here, Diaz comments on the value of an “openly governed container runtime” that brings a respected standard to container infrastructure across the industry. Open sourcing Docker’s core runtime component promises to advance standardization among container offerings from vendors such as Amazon Web Services, Google Cloud Platform and Microsoft, and thereby enhance the compatibility and stability of container technologies. containerd will follow the Open Container Initiative (OCI) standard and achieve compatibility with its specifications by the 1.0 release. Docker’s decision to spin out containerd and hand its stewardship to an independent foundation marks a significant step toward standardizing container technologies while still allowing vendors to add differentiated functionality as they see fit. The move also promises to strengthen Docker’s footprint in the container space by consolidating its position as the leader in container standards and infrastructure, even though containerd will be branded independently of Docker and receive contributions from other vendors. containerd will be compatible with all leading orchestration frameworks and is intended to serve as a “boring infrastructure” component for the container landscape. By improving standardization and compatibility across platforms and vendors, the spin-out stands to increase container adoption across the industry at large. Docker plans to donate containerd to an independent foundation by the end of Q1 2017.
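To make the “execution and supervision” role of a core container runtime concrete, the sketch below models a toy supervisor that launches a process and restarts it if it exits abnormally. This is a conceptual illustration only, not containerd’s actual API; in reality containerd drives an OCI runtime and adds namespaces, images and storage on top of this idea.

```python
import subprocess


class MiniSupervisor:
    """Toy model of a runtime's execute-and-supervise loop.

    A real runtime like containerd launches containers via an OCI
    runtime (e.g. runc); here a 'container' is just a subprocess."""

    def __init__(self):
        self.containers = {}  # name -> command line

    def run(self, name, command):
        """Launch a 'container' and remember how to relaunch it."""
        self.containers[name] = command
        return subprocess.Popen(command)

    def supervise(self, name, proc, restarts=1):
        """Restart the process up to `restarts` times while it exits
        non-zero; return how many restarts were attempted."""
        attempts = 0
        while proc.wait() != 0 and attempts < restarts:
            attempts += 1
            proc = subprocess.Popen(self.containers[name])
        return attempts


sup = MiniSupervisor()
p = sup.run("hello", ["python3", "-c", "print('hi')"])
print(sup.supervise("hello", p))  # a clean exit needs 0 restarts
```

The point of the sketch is the division of labor it implies: the runtime owns low-level lifecycle concerns (start, wait, restart), leaving image building and orchestration to the layers above it.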
Weaveworks recently announced the general availability of Weave Cloud, a SaaS platform that enables DevOps teams to connect and monitor containers and microservices-based applications. Using Weave Net, Weave Cloud connects containers for deployment across public cloud, private cloud, hybrid cloud and on-premise infrastructures. After connecting containers securely and overseeing their deployment, Weave Cloud monitors and manages groups of containers, giving DevOps staff granular analytics on the relationships between containers and metrics on their performance. Weave Cloud delivers an unprecedented degree of visibility into the topology of container relationships, as illustrated below:
As shown above, Weave Cloud allows customers to visualize the relationships between containers and use its data visualization capabilities to quickly identify containers of interest for performance analysis. For example, DevOps staff can gauge the effect of a specific container, or set of containers, on application performance by analyzing baseline metrics that Weave Cloud collects automatically or custom metrics defined by users. By making inter-container relationships easy to grasp through its graphical interface, Weave Cloud improves customers’ ability to troubleshoot and to manage the daily operations of their container deployments. Its automation of networking, deployment and day-to-day operations, combined with rich visualization and real-time insight into deployment health, makes Weave Cloud a powerful tool that accelerates application development on container-based infrastructure and strengthens the application lifecycle management capabilities of DevOps teams.
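The kind of analysis described above — tracing which containers can affect an application’s performance — boils down to a traversal of the container connection graph with metrics attached. The sketch below illustrates that idea; the container names, edges and latency figures are invented, and this is not Weave Cloud’s implementation.

```python
from collections import defaultdict

# Invented topology: containers as nodes, observed network
# connections as directed edges, plus a per-container latency metric.
edges = [("frontend", "api"), ("api", "db"), ("api", "cache")]
latency_ms = {"frontend": 12, "api": 48, "db": 230, "cache": 3}

graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)


def downstream(container):
    """All containers that `container`'s requests transitively depend on."""
    seen, stack = set(), [container]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen


# Rank the frontend's dependencies by latency to spot the likely bottleneck.
suspects = sorted(downstream("frontend"), key=latency_ms.get, reverse=True)
print(suspects[0])  # → db
```

A monitoring UI layers exactly this sort of query behind its visuals: select a container, highlight its downstream dependencies, and sort them by a chosen metric.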
ClusterHQ today announced the forthcoming availability of two products designed to simplify data management for containers: FlockerHub and Fli. FlockerHub enables teams to store data volumes for Docker containers independently of the containers themselves, decoupling the code that runs within containers from the data that feeds container-based applications. This decoupling gives customers greater operational agility in moving container-based workloads across environments such as public clouds, private clouds and on-premise infrastructures. Fli, meanwhile, lets users version control the data stored within their FlockerHub deployments and track changes to a data volume over time. For example, customers can use Fli to follow the evolution of a database that is used differently in production and development deployments. In addition to tracking incremental backups, Fli enables teams to track branches in the evolution of their data in ways that aid the troubleshooting of anomalous application behavior. More generally, Fli helps developers manage parallel development streams that commence from a common database foundation by giving teams greater insight into the history of the data used within container-based applications.
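The branching-database scenario above can be pictured as “git for data”: immutable snapshots of a volume plus named branches pointing at them. The sketch below models that idea only; it is not Fli’s actual interface, and all names are invented.

```python
import copy


class VolumeHistory:
    """Conceptual model of snapshot-and-branch versioning for a
    data volume, in the spirit of Fli's design."""

    def __init__(self, initial):
        self.snapshots = [copy.deepcopy(initial)]  # snapshot id = index
        self.branches = {"master": 0}              # branch -> snapshot id

    def commit(self, branch, data):
        """Record a new immutable snapshot and advance the branch."""
        self.snapshots.append(copy.deepcopy(data))
        self.branches[branch] = len(self.snapshots) - 1
        return self.branches[branch]

    def branch(self, new, source):
        """Fork a branch from another branch's current snapshot."""
        self.branches[new] = self.branches[source]

    def checkout(self, branch):
        """Materialize the data a branch currently points at."""
        return copy.deepcopy(self.snapshots[self.branches[branch]])


# A database volume forked, then evolved separately in prod and dev.
hist = VolumeHistory({"users": 100})
hist.branch("dev", "master")
hist.commit("master", {"users": 150})               # production growth
hist.commit("dev", {"users": 100, "flags": True})   # dev schema experiment
print(hist.checkout("master"), hist.checkout("dev"))
```

Because every snapshot is immutable, the full history stays queryable, which is what makes it possible to trace when a data change first appeared in a misbehaving deployment.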
FlockerHub, however, constitutes the more important of the two innovations because of its ability to simplify data management within container deployments. It fills a gaping hole in the container management space by giving developers the means to manage data volumes for container-based applications. Importantly, FlockerHub simplifies processes such as data backup and replication, whether for redundancy or for migrating container-based data from one environment to another. Users can manage the distribution of data to containers and track its subsequent utilization within an application. Furthermore, FlockerHub delivers role-based access control to ensure that only the right users have the privileges to distribute, and track the utilization of, data volumes. Given the heterogeneity of container management platforms on the market today, FlockerHub and Fli promise to occupy a place of critical importance within the container landscape by addressing head on the problem of data management for containers, a problem that until now has largely been thrust back onto customers themselves, with considerable resulting complexity. FlockerHub and Fli will be available as of November 8.
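Role-based access control of the kind mentioned above reduces, at its core, to a mapping from roles to permitted actions. The fragment below is a minimal sketch of that pattern; the roles, actions and function names are invented and do not reflect FlockerHub’s actual permission model.

```python
# Invented role -> permitted volume operations mapping.
PERMISSIONS = {
    "admin":     {"push", "pull", "delete"},
    "developer": {"push", "pull"},
    "auditor":   {"pull"},
}


def authorize(role, action):
    """Raise PermissionError unless `role` may perform `action`."""
    if action not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not {action!r} volumes")
    return True


authorize("developer", "push")   # permitted
# authorize("auditor", "push")   # would raise PermissionError
```

In a real system the role lookup would come from an identity provider and the check would sit in front of every volume API call, but the shape of the decision is the same.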
Here, Ben Golub, CEO of Docker, talks to John Furrier of SiliconANGLE and Brian Gracely, Lead Cloud Analyst at Wikibon, regarding the Docker ecosystem at DockerCon 16 in Seattle, WA from June 19 to June 21, 2016. Golub reflects on Docker’s efforts to democratize the use of containers as well as the partner space featuring professional services offerings from firms such as Booz Allen Hamilton, Deloitte and Accenture.
On Monday, Docker announced integrated orchestration capabilities in Docker Engine, streamlining access to container orchestration functionality and simplifying the ongoing operational management of containers. The integration means that Docker Swarm components such as the Swarm Mode Manager, Swarm Mode Worker and load balancing functionality are now available within Docker Engine itself. By bringing Docker Swarm’s orchestration functionality into Docker Engine, Docker enables customers to streamline and simplify the scaling of container-based infrastructures. The integrated orchestration functionality features service discovery, a strongly consistent data store, and mechanisms for application consistency, availability and resilience, delivering enhanced operational simplicity and performance along with a streamlined implementation of robust security. Orchestration is off by default in Docker Engine, allowing users to activate it as needed. Docker 1.12 is available for Mac and Windows via a public beta, and on Amazon Web Services and Microsoft Azure via a private beta. The integration of orchestration directly into Docker Engine represents a milestone in Docker’s evolution insofar as it underscores the maturity of containerization technology and a corresponding trajectory toward increased operational simplicity, performance and security.
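The heart of Swarm-mode orchestration is declarative reconciliation: the operator declares a desired service state and the orchestrator continually converges actual state toward it. The sketch below shows a single reconcile pass in miniature; it is a conceptual illustration, not Swarm’s scheduler, which additionally handles node placement, health checks and a raft-backed store.

```python
def reconcile(desired_replicas, running_tasks):
    """One pass of a declarative orchestrator: compare desired state
    to actual state and return the actions needed to converge them."""
    actions = []
    if len(running_tasks) < desired_replicas:
        # Scale up: start enough new tasks to reach the target.
        for i in range(desired_replicas - len(running_tasks)):
            actions.append(("start", f"task-{len(running_tasks) + i}"))
    elif len(running_tasks) > desired_replicas:
        # Scale down: stop the surplus tasks.
        for task in running_tasks[desired_replicas:]:
            actions.append(("stop", task))
    return actions


print(reconcile(3, ["task-0"]))            # scale up by two
print(reconcile(1, ["task-0", "task-1"]))  # scale down by one
print(reconcile(2, ["task-0", "task-1"]))  # already converged: no actions
```

The same loop also yields self-healing for free: if a task dies, the next pass sees fewer running tasks than desired and emits a start action.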
On Monday, JFrog announced details of JFrog Xray, a product that delivers deep transparency into artifacts stored within the JFrog Artifactory repository. JFrog Xray performs binary-level analysis of artifacts to detect security vulnerabilities. In addition, the product performs impact analysis that maps dependencies between container images and their constituent software applications and binary artifacts. JFrog Xray’s granular visibility into dependencies between the binary artifacts an organization uses means that Artifactory customers can swiftly understand the scope of a security vulnerability that originates in one artifact and has knock-on effects on others. As such, JFrog Xray tackles the “black box” problem posed by the contents of a container and their potential impact on an organization’s IT infrastructure. Customers can further leverage JFrog Xray’s dependency mapping to understand the performance and architectural effects of changes in one artifact on other components and applications.
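Impact analysis of the kind described above amounts to walking a dependency graph in reverse: given one vulnerable artifact, find everything that transitively contains it. The sketch below illustrates that computation; the artifact names and relationships are invented, and this is not Xray’s implementation.

```python
from collections import defaultdict

# Invented dependency data: each image/artifact -> what it contains.
depends_on = {
    "webapp-image": ["libssl-1.0", "app.jar"],
    "app.jar": ["logging-lib"],
    "batch-image": ["logging-lib"],
}

# Invert the graph so we can ask "what contains X?" instead.
reverse = defaultdict(set)
for parent, children in depends_on.items():
    for child in children:
        reverse[child].add(parent)


def impacted_by(artifact):
    """All artifacts and images transitively containing `artifact`."""
    hit, stack = set(), [artifact]
    while stack:
        for parent in reverse[stack.pop()]:
            if parent not in hit:
                hit.add(parent)
                stack.append(parent)
    return hit


# A vulnerability in logging-lib reaches both images, one of them
# indirectly via app.jar.
print(sorted(impacted_by("logging-lib")))
```

Keeping the reverse index precomputed is what makes "which deployments does this CVE touch?" answerable in one traversal rather than a repository-wide rescan.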
Shlomi Ben Haim, CEO of JFrog, commented on the innovation of JFrog Xray as follows:
JFrog Xray responds to a profound pain of our users and the entire software development community for an infinitely expandable way to know everything about every component they’ve ever used in a software project – from build to production to distribution. While container technology revolutionized the market and the way people distribute software packages, it is still a ‘black hole’ that always contains other packages and dependencies. The Ops world has a real need to have full visibility into these containers plus an automated way to point out changes that will impact their production environment. With JFrog Xray, you can not only scan your container images but also to track all dependencies in order to avoid vulnerabilities and optimise your CI/CD flow.
With these remarks, Shlomi Ben Haim highlights JFrog Xray’s ability to penetrate the “black hole” of containers and their contents. The graphic below illustrates the platform’s ability to map an impact path and enumerate affected artifacts via a custom notification generated by the “Performance Alerts” application:
JFrog Xray plays in the same space as Docker’s Security Scanning platform but claims competitive differentiation from Docker’s binary-level scanning technology through its more advanced ability to map dependencies between artifacts and deliver a comprehensive impact analysis. JFrog Xray will be generally available as of June 30, 2016.
This week, Docker announced the general availability of Docker Security Scanning, an opt-in service that enables Docker Cloud private repo customers to perform security assessments of the software inside their containers. Docker Security Scanning evaluates the security of Docker images upon their upload to Docker Cloud and thereafter monitors image security continuously against updates to the Common Vulnerabilities and Exposures (CVE) database. The scan delivers a Bill of Materials featuring a security profile of a Docker image’s constituent components, which empowers Independent Software Vendors (ISVs) to modify their content when a vulnerability is detected. In addition, the service sends automated notifications that enable IT teams to proactively manage the risks associated with security vulnerabilities. By performing binary-level scanning that assesses the security of every component of code housed within a container, Docker Security Scanning streamlines and simplifies the achievement of software security within a container-based environment for building, shipping and deploying code. Moreover, the platform allows users to remove compromised containers and thereby improve governance and control over container-based software development. With the GA release of Docker Security Scanning, Docker strengthens its position as the de facto infrastructure for building, shipping, deploying and managing updates to code. The service is available to all Docker Cloud private repo customers immediately and is expected to expand to all Docker Cloud customers by the end of Q3.
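Conceptually, producing a Bill of Materials is a matter of enumerating an image’s components and cross-referencing each against a vulnerability database. The sketch below illustrates that shape; the database entries are well-known real CVEs, but the `scan` function and its data layout are invented for illustration and are not Docker’s implementation.

```python
# Tiny stand-in for a CVE feed, keyed by (component, version).
# Both entries are real, widely publicized vulnerabilities.
CVE_DB = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],  # Heartbleed
    ("bash", "4.2"): ["CVE-2014-6271"],        # Shellshock
}


def scan(image_components):
    """Return a Bill of Materials: every component in the image,
    annotated with any known CVEs for its exact version."""
    return [
        {"component": name, "version": ver,
         "cves": CVE_DB.get((name, ver), [])}
        for name, ver in image_components
    ]


bom = scan([("openssl", "1.0.1f"), ("nginx", "1.9.0")])
vulnerable = [entry for entry in bom if entry["cves"]]
print(vulnerable)
```

The value of continuous monitoring follows directly from this model: when the CVE feed gains an entry, previously clean BoMs can be re-evaluated without re-uploading any images.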