Weaveworks Launches Weave Cloud Enterprise Edition Featuring Enhanced Container Management Functionality

On March 29, Weaveworks announced the launch of a Weave Cloud Enterprise Edition (EE) tier for its container and microservices management platform. Compatible with all major container platforms and orchestration frameworks, Weave Cloud EE simplifies, streamlines and accelerates the deployment and ongoing operational management of container-based applications. Developers can use the platform to manage application releases, visualize the relationships between containers, and monitor application performance at the level of individual containers or groups of containers. Weave Cloud EE also gives developers the capability to track application performance across a multitude of metrics and subsequently perform root cause analysis to understand the drivers of performance degradation or improvement. In addition, the platform's networking functionality empowers developers to create secure networked relationships between containers.

The newly launched tier features incident management functionality marked by the ability to obtain a granular understanding of the root causes responsible for an incident, the history of similar incidents, and dashboards that elaborate on the timing and impact of the incident. In addition, Weave Cloud EE boasts release automation functionality as well as the ability to roll releases back to earlier points in time. Furthermore, it features advanced analytics for troubleshooting Kubernetes, including resource-to-container mappings. Taken together with the delivery of its container management functionality via the cloud, Weave Cloud EE empowers developers to focus on monitoring and improving container-based applications without attending to the underlying infrastructure on which the containers are hosted.

The availability of incident management, release automation and Kubernetes troubleshooting functionality in this release bolsters Weave Cloud's positioning within the container management space by delivering enterprise-grade functionality that enables enterprises to track metadata associated with container-based applications and automate application releases. But the larger story here is that of an enterprise-grade container management platform that delivers on its promise of monitoring as well as ongoing operational management of container-based applications and infrastructures. Notable about Weaveworks is the sophistication of its advanced analytics for troubleshooting performance issues in container-based applications, in conjunction with its distinctive visualization and secure networking capabilities for container-based infrastructures. As such, the platform differentiates itself in the container management space as an end-to-end solution with strengths in monitoring and ongoing operational management.

Docker Spins Out Core Container Runtime As Separate Component To Enhance Standards And Compatibility For Containers

On Wednesday, Docker announced plans to spin off and open source containerd, a component of Docker Engine that delivers the capability to manage containers on a host machine. Also known as Docker’s core container runtime, containerd features all of the core primitives required to manage containers on Linux and Windows hosts. In addition, containerd features functionality for container execution and supervision, the distribution of images, and the implementation of network interfaces and local storage. Used in production by millions of Docker containers since the release of Docker 1.11 in April 2016, containerd encapsulates foundational components of the Docker Engine that third parties can adopt as a common core container runtime platform within their own container-based products and solutions. IBM Vice President of Cloud Technology and Architecture Dr. Angel Diaz remarked on the importance of a common container runtime platform within the container landscape as follows:

As container adoption continues to grow, it’s important that, as an industry, we establish an openly governed container runtime to ensure consistent behavior across platforms. IBM and Docker have worked in partnership in the past to bring the single container runtime to an open community – we are expanding on this by establishing containerd as the open source and open governed project that builds on OCI outputs (specs and runtime) to manage multiple containers. Developers can utilize containers today on the IBM Bluemix Container Service, and we look forward to seeing container technology to continue to grow in functionality and long-term stability through this new initiative.

Here, Diaz comments on the value of an “openly governed container runtime” that brings a respected standard to container infrastructure across the industry. The open sourcing of Docker’s core runtime component promises to contribute to the development of shared standards among container offerings from vendors such as Amazon Web Services, Google Cloud Platform and Microsoft, and subsequently to enhance the compatibility, stability and standardization of container technologies. The open sourced containerd technology will follow the OCI standard and achieve compatibility with its specifications by the time of the containerd 1.0 release. Docker’s decision to spin out containerd and hand over its stewardship to an independent foundation marks a monumental step toward standardizing container technologies while allowing vendors to add differentiated container functionality as they deem appropriate. In addition, the move promises to enhance Docker’s footprint within the container space by consolidating its positioning as the leader in container-based standards and infrastructure, even though containerd will be branded independently of Docker and receive contributions from other vendors. containerd will be compatible with all leading orchestration frameworks and intends to serve as a “boring infrastructure” component for the container landscape. Its spin-out as an independent open source project promises to enhance the significance of containers within the contemporary application development and lifecycle management space by improving container standardization and compatibility across platforms and vendors, thereby contributing to increased container adoption within the industry at large. Docker plans to donate containerd to an independent foundation by the end of Q1 2017.

Weaveworks Announces General Availability Of Weave Cloud, SaaS Platform For Managing Containers

Weaveworks recently announced the general availability of Weave Cloud, a SaaS platform that empowers DevOps teams to connect and monitor containers and microservices-based applications. Using Weave Net, Weave Cloud connects containers for deployment to a multitude of public cloud, private cloud, hybrid cloud and on-premise infrastructures. Subsequent to connecting containers together securely and overseeing their deployment, Weave Cloud monitors and manages assemblages of containers by giving DevOps teams granular analytics on the relationships between containers and metrics regarding their performance. Weave Cloud delivers an unprecedented degree of visibility into the topology of relationships between containers, as illustrated below:

[Image: Weave Cloud’s visualization of the topology of relationships between containers]

As shown above, Weave Cloud allows customers to visually explore the relationships between containers and leverage its data visualization capabilities to expeditiously identify containers of interest for performance analytics. For example, Weave Cloud allows DevOps teams to understand the effect of a specific container or set of containers on application performance by analyzing baseline metrics collected automatically by Weave Cloud or custom metrics defined by users. By streamlining the process whereby DevOps teams understand the relationships between containers via its graphical user interface, Weave Cloud enhances the ability of customers to troubleshoot and manage the daily operations of their container deployments. Weave Cloud’s ability to automate the networking, deployment and daily operations of container deployments renders it a powerful management tool, particularly given the richness of its visualization capabilities and the real-time insight it gives customers into the health of their deployments. The richness of Weave Cloud’s data visualization and analytic capabilities, in conjunction with its automation of container deployment, accelerates application development on container-based infrastructures and enhances the application lifecycle management capabilities of DevOps teams.
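Weave Cloud’s monitoring is based on Prometheus, so user-defined metrics of the kind described above are typically exposed by instrumenting application code with a Prometheus client library, whose exposition endpoint a Prometheus-compatible agent then scrapes. A minimal sketch using the Python prometheus_client package; the metric names and labels are illustrative, not part of any Weave Cloud API:

```python
# Sketch: exposing custom Prometheus-style metrics from an application so a
# Prometheus-based monitoring service can scrape them. Metric names and
# labels below are illustrative only.
from prometheus_client import Counter, Histogram, generate_latest, REGISTRY

# Counter for handled requests, labeled by outcome.
REQUESTS = Counter("app_requests_total", "Requests handled", ["status"])
# Histogram for request latency in seconds.
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

def handle_request(ok: bool, seconds: float) -> None:
    """Record one request's outcome and latency."""
    REQUESTS.labels(status="ok" if ok else "error").inc()
    LATENCY.observe(seconds)

if __name__ == "__main__":
    handle_request(True, 0.05)
    handle_request(False, 0.31)
    # Print the text exposition format a scraper would collect.
    print(generate_latest(REGISTRY).decode())
```

Once an HTTP endpoint serves this exposition format, a Prometheus-compatible collector can scrape it and forward the samples for dashboarding and alerting.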

ClusterHQ’s FlockerHub and Fli Deliver Data Management Tools For Containers

ClusterHQ today announced the forthcoming availability of two products designed to facilitate data management for containers, namely, FlockerHub and Fli. FlockerHub enables teams to store data volumes for Docker containers independently of the containers themselves, thereby decoupling the code that runs within containers from the data that feeds container-based applications. FlockerHub’s decoupling of data volumes from containers enhances customers’ operational agility in transferring container-based workloads across a multitude of environments such as public clouds, private clouds and on-premise infrastructures. Meanwhile, Fli allows users to version control the data stored within their FlockerHub deployments and subsequently track changes to a data volume. For example, customers can use Fli to understand the evolution of changes to a database that, over time, is used differently in production and development deployments. In addition to tracking incremental backups, Fli enables teams to track branches in the evolution of data in ways that enhance the troubleshooting of anomalous application behavior. More generally, Fli helps developers manage parallel development streams that commence with a common database foundation by giving teams greater insight into the history of data used within container-based applications.

FlockerHub, however, constitutes the most important innovation in today’s announcement because of its ability to simplify data management within container deployments. FlockerHub fills a gaping hole in the container management space by giving developers the capability to manage data volumes for container-based applications. Importantly, FlockerHub simplifies processes such as data backup and replication, whether for redundancy or for migrating container-based data from one environment to another. FlockerHub users can manage the distribution of data to containers and track its subsequent utilization within an application. Furthermore, FlockerHub delivers role-based access control to ensure that only the right users have the privileges to distribute and track the utilization of data volumes by containers. Given the heterogeneity of container management platforms on the market today, ClusterHQ’s FlockerHub and Fli promise to take up a place of critical importance within the container landscape by addressing head-on the problem of data management for containers, which until now has largely been left to customers themselves at the cost of considerable operational complexity. FlockerHub and Fli will be available as of November 8.

Interview With Ben Golub, CEO of Docker, At DockerCon 16 In Seattle, WA

Here, Ben Golub, CEO of Docker, talks to John Furrier of SiliconANGLE and Brian Gracely, Lead Cloud Analyst at Wikibon, about the Docker ecosystem at DockerCon 16 in Seattle, WA from June 19 to June 21, 2016. Golub reflects on Docker’s efforts to democratize the use of containers as well as the partner ecosystem featuring professional services offerings from firms such as Booz Allen Hamilton, Deloitte and Accenture.

Docker 1.12 Integrates Orchestration Capabilities Directly Into Docker Engine

On Monday, Docker announced integrated orchestration capabilities in Docker Engine, thereby streamlining access to container orchestration functionality and simplifying the ongoing operational management of containers. The integration means that Docker Swarm components such as the swarm mode manager, swarm mode worker and load balancing functionality are now available within Docker Engine. By bringing Docker Swarm’s orchestration functionality into Docker Engine, Docker empowers customers to streamline and simplify the process of scaling container-based infrastructures. Docker Engine’s integrated orchestration features service discovery and a strongly consistent data store, and provides for the consistency, availability and resilience of applications. The integration of orchestration capabilities into Docker Engine delivers enhanced operational simplicity and performance in addition to streamlined implementation of robust security. Orchestration functionality within Docker Engine is off by default, enabling Docker users to activate it as needed. Docker 1.12 is available for Mac and Windows workstations via a public beta, and on Amazon Web Services and Microsoft Azure by means of a private beta. The integration of orchestration directly into Docker Engine represents a milestone in Docker’s evolution insofar as it underscores the maturity of containerization technology and a corresponding trajectory toward increased operational simplicity, performance and security.
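Assuming a Docker 1.12 or later engine, the swarm mode workflow described above can be sketched in a few CLI commands; the service name and replica counts here are illustrative:

```shell
# Turn the current Docker Engine into a single-node swarm manager
# (swarm mode is off by default in Docker 1.12).
docker swarm init

# Additional hosts would join as workers using the token printed by
# `docker swarm init`, e.g.:
#   docker swarm join --token <worker-token> <manager-ip>:2377

# Create a replicated service: the built-in orchestrator schedules
# three nginx tasks across the swarm and load-balances between them.
docker service create --name web --replicas 3 -p 80:80 nginx

# Scale the service; the orchestrator reconciles the swarm to five replicas.
docker service scale web=5
```

Because the orchestrator continuously reconciles the declared replica count against running tasks, failed containers are rescheduled automatically, which is the operational-simplicity gain the announcement emphasizes.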