On March 29, Weaveworks announced the launch of the Weave Cloud Enterprise Edition (EE) tier for its container and microservices management platform. Compatible with all major container platforms and orchestration frameworks, Weave Cloud EE simplifies, streamlines and accelerates the deployment and ongoing operational management of container-based applications. Developers can use the platform to manage application releases, visualize the inter-relationships between containers, and monitor application performance at the level of individual containers or groups of containers. Weave Cloud EE also tracks a multitude of performance metrics and supports root cause analysis to identify the drivers of performance degradation or improvement, while its networking functionality enables the creation of secure networked relationships between containers.
The recently launched tier features incident management functionality marked by the ability to obtain a granular understanding of the root causes responsible for an incident, the history of similar incidents, and dashboards that elaborate on an incident's timing and impact. In addition, Weave Cloud EE boasts release automation functionality as well as the ability to roll back releases to earlier points in time, and it features advanced analytics for troubleshooting Kubernetes, including resource-to-container mappings. Taken together with the delivery of its container management functionality via the cloud, the Weave Cloud Enterprise Edition empowers developers to focus on monitoring and improving container-based applications without attending to the underlying infrastructure in which the containers are hosted.
The availability of incident management, release automation and Kubernetes troubleshooting functionality in this version of Weave Cloud EE bolsters its positioning within the container management space by delivering enterprise-grade functionality that enables enterprises to track metadata associated with container-based applications and automate application releases. But the larger story here is that of an enterprise-grade container management platform that delivers on its promise of monitoring as well as ongoing operational management of container-based applications and infrastructure. Notable about Weaveworks is the sophistication of its advanced analytics for troubleshooting performance issues in container-based applications, in conjunction with its unique visualization and secure networking capabilities for container-based infrastructures. As such, the platform differentiates itself in the container management space as an end-to-end solution with strengths in monitoring and ongoing operational management.
Weaveworks recently announced the general availability of Weave Cloud, a SaaS platform that empowers DevOps teams to connect and monitor containers and microservices-based applications. Using Weave Net, Weave Cloud connects containers for deployment to a multitude of public cloud, private cloud, hybrid cloud and on-premise infrastructures. After connecting containers together securely and overseeing their deployment, Weave Cloud monitors and manages assemblages of containers by giving DevOps teams granular analytics on the relationships between containers and metrics regarding their performance. Weave Cloud delivers an unprecedented degree of visibility into the topology of relationships between containers, as illustrated below:
As shown above, Weave Cloud allows customers to visually consume the inter-relationships between containers and leverage its data visualization capabilities to expeditiously identify containers of interest for performance analytics. For example, DevOps teams can understand the effect of a specific container or set of containers on application performance by analyzing baseline metrics collected automatically by Weave Cloud or custom metrics defined by users. By streamlining the process whereby teams understand the relationships between containers via its graphical user interface, Weave Cloud enhances customers' ability to troubleshoot and manage the daily operations of their container deployments. Its automation of networking, deployment and daily operations, combined with the richness of its visualization capabilities and real-time insight into deployment health, renders it a powerful management tool that accelerates application development on container-based infrastructures and strengthens the application lifecycle management capabilities of DevOps teams.
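Weave Cloud's monitoring is built around the Prometheus exposition format, so a custom metric is ultimately just a value a container serves over HTTP for scraping. The sketch below is a minimal, self-contained illustration of that format; the `Counter` class and the metric name are illustrative stand-ins, not part of Weave Cloud's API:

```python
# Minimal sketch of the Prometheus text exposition format that a container
# might serve (e.g. on a /metrics endpoint) for scraping.
# The Counter class and metric name are hypothetical, for illustration only.

class Counter:
    """Tiny stand-in for a Prometheus counter: a monotonically increasing value."""

    def __init__(self, name, help_text):
        self.name = name
        self.help_text = help_text
        self.value = 0.0

    def inc(self, amount=1.0):
        self.value += amount

    def expose(self):
        # Prometheus text format: a HELP line, a TYPE line, then the sample.
        return (
            f"# HELP {self.name} {self.help_text}\n"
            f"# TYPE {self.name} counter\n"
            f"{self.name} {self.value}\n"
        )

# A hypothetical application metric, incremented once per processed order.
orders = Counter("orders_processed_total", "Orders handled by this container")
for _ in range(3):
    orders.inc()

print(orders.expose(), end="")
```

A monitoring backend scraping this endpoint would then chart the counter per container, which is the raw material for the baseline and custom metrics described above.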
ClusterHQ today announced the forthcoming availability of two products designed to facilitate data management for containers: FlockerHub and Fli. FlockerHub enables teams to store data volumes for Docker containers independently of the containers themselves, thereby decoupling the code that runs within containers from the data that feeds container-based applications. This decoupling enhances customers' operational agility when transferring container-based workloads across environments such as public clouds, private clouds and on-premise infrastructures. Meanwhile, Fli allows users to version-control the data stored within their FlockerHub deployments and track changes to a data volume over time. For example, customers can use Fli to understand the evolution of changes to a database that is used differently in production and development deployments. In addition to tracking incremental backups, Fli enables teams to track branches in the evolution of data in ways that enhance the troubleshooting of anomalous application behavior. More generally, Fli helps developers manage parallel development streams that commence from a common database foundation by giving teams greater insight into the history of data used within container-based applications.
FlockerHub, however, constitutes the most important innovation in today's announcement because of its ability to simplify data management within container deployments. FlockerHub fills a gaping hole in the container management space by giving developers the capability to manage data volumes for container-based applications. Importantly, it simplifies processes such as data backup and replication, whether for redundancy or for migrating container-based data from one environment to another. FlockerHub users can manage the distribution of data to containers and track its subsequent utilization in an application, while role-based access control governance ensures that only the right users have the privileges to distribute and track data volumes. Given the heterogeneity of container management platforms on the market today, ClusterHQ's FlockerHub and Fli promise to assume a place of critical importance within the container landscape by tackling head-on the problem of data management for containers, which until now has largely been thrust back onto customers themselves at the cost of considerable complexity. FlockerHub and Fli will be available as of November 8.
On Monday, JFrog announced details of JFrog Xray, a product that delivers deep transparency into artifacts stored within the JFrog Artifactory repository. JFrog Xray performs binary-level analysis of artifacts to facilitate the detection of security vulnerabilities. In addition, the product performs impact analysis that elaborates the dependencies between container images and their constituent software applications and binary artifacts. JFrog Xray's granular visibility into dependencies between binary artifacts means that Artifactory customers can swiftly understand the scope of security vulnerabilities that originate in one artifact and have an ancillary effect on others. As such, JFrog Xray tackles the "black box" problem of a container's contents and their potential impact on an organization's IT infrastructure. Customers can further leverage JFrog Xray's dependency mapping to understand the performance and architectural effects of changes in one artifact on other components and applications.
Shlomi Ben Haim, CEO of JFrog, commented on the innovation of JFrog Xray as follows:
JFrog Xray responds to a profound pain of our users and the entire software development community for an infinitely expandable way to know everything about every component they’ve ever used in a software project – from build to production to distribution. While container technology revolutionized the market and the way people distribute software packages, it is still a ‘black hole’ that always contains other packages and dependencies. The Ops world has a real need to have full visibility into these containers plus an automated way to point out changes that will impact their production environment. With JFrog Xray, you can not only scan your container images but also to track all dependencies in order to avoid vulnerabilities and optimise your CI/CD flow.
With these remarks, Shlomi Ben Haim highlights the ability of JFrog Xray to penetrate the black hole of containers and their contents. The graphic below illustrates the platform's ability to map an impact path and enumerate affected artifacts via a custom notification generated by the "Performance Alerts" application:
JFrog Xray plays in the same space as Docker’s Security Scanning platform but claims competitive differentiation from Docker’s binary level scanning technology as a result of its advanced ability to map dependencies between artifacts and subsequently deliver a comprehensive impact analysis. JFrog Xray will be generally available as of June 30, 2016.
Rancher Labs recently announced $20M in Series B funding led by new investor GRC SinoGreen, with existing investors Mayfield and Nexus Venture Partners also participating. The announcement comes roughly six weeks after news of the general availability of Rancher 1.0, the open source container management platform that supports the Docker Swarm and Kubernetes container orchestration frameworks. The Series B funding will be used to expand sales and marketing operations and, as Rancher co-founder Shannon Williams noted in an interview with Cloud Computing Today, to iteratively accelerate product development in response to customer feedback. The raise underscores the meteoric adoption of Rancher's open source product and its unique positioning as a cloud-agnostic container management platform that works on any public cloud, private cloud or on-premise deployment. As one of the few container management platforms that let customers run more than one container orchestration framework for concurrent container deployments, Rancher is strongly positioned by the capital raise to support customer demand and consolidate its leadership in the container management space.
This week, Rancher Labs announced the general availability of the container management platform Rancher 1.0. Rancher 1.0 allows users to take advantage of the Docker Swarm and Kubernetes orchestration frameworks while delivering a unified management experience for production-grade container deployments. Organizations can use the open source Rancher platform to deploy and manage Docker Swarm and Kubernetes clusters of any size on any cloud, on-premise or hybrid cloud infrastructure. Meanwhile, the platform's management console delivers deep visibility into container deployments in ways that allow users to enhance their software development and delivery lifecycle. Rancher Labs CEO Sheng Liang commented on the success of the Rancher platform as follows:
Since announcing our beta product less than a year ago, Rancher Labs has experienced incredible demand, as well as received encouraging and helpful feedback and community support for this open platform which has enabled us to make meaningful enhancements to Rancher. Now, with well over a million downloads, Rancher has quickly become the platform of choice for teams serious about running containers in production.
Liang remarks on how the "incredible demand" for the platform has led to feedback and customer engagement that accelerated enhancement of the open source Rancher platform and positioned it as one of the industry's leading container management platforms. Spearheaded by a rich user interface, support for both Docker Swarm and Kubernetes, and a cloud-agnostic framework, Rancher gives users a centralized platform for enterprise-grade container deployment and management. In addition, the platform features a rich application catalogue that facilitates the creation of templates for applications that can subsequently be re-used, tweaked and customized. Rancher Labs supports a commercially licensed version of its container management platform that wraps professional services around the open source offering. Headquartered in Cupertino, CA, the company has raised $10M to date from Mayfield and Nexus Venture Partners.
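Catalogue entries in Rancher 1.x were packaged as Docker Compose templates (paired with a rancher-compose.yml carrying Rancher-specific settings), which is what makes them easy to re-use and tweak. A minimal, hypothetical template might look like the following; the service and image choices are illustrative, not taken from the announcement:

```yaml
# docker-compose.yml for a hypothetical Rancher catalogue entry.
# Service names and images are illustrative examples.
web:
  image: nginx:1.9
  ports:
    - "80:80"
  links:
    - db
db:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: example
```

A team deploying from the catalogue would start from a template like this and override images, ports or environment variables for their own stack.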
CoreOS has announced that its rkt (pronounced: rocket) container technology will be integrated with Google's Kubernetes container management framework. The integration means that Kubernetes need not rely on Docker containers, but can instead use CoreOS's Linux container technology. rkt consists of container runtime software that implements appc, the App Container specification designed to provide a standard for containers based on requirements related to composability, security, image distribution and openness. CoreOS launched rkt on the premise that Docker had strayed from its original manifesto of developing "a simple component, a composable unit, that could be used in a variety of systems," as noted by CoreOS CEO Alex Polvi in a December 2014 blog post:
Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform. It is not becoming the simple composable building block we had envisioned.
Here, Polvi notes how Docker has transitioned from an initiative focused on creating reusable components to a platform whose mission has deviated from its original manifesto. Today's announcement of the integration of CoreOS's rkt with Kubernetes represents a deepening of the relationship between CoreOS and Google, which recently included a $12M funding round led by Google Ventures. While CoreOS previously supported Kubernetes, its integration into Google's container management framework is a clear sign that the battle for container supremacy is likely to begin in earnest, particularly given that CoreOS brands itself as enabling other technology companies to build Google-like infrastructures. With Google's wind in its sails, and executives from Google, Red Hat and Twitter having joined the App Container specification community management team, Docker now confronts a real challenger to its supremacy within the container space. Moreover, Google, VMware, Red Hat and Apcera have all pledged support for appc in ways that suggest an alternative standard defining "how applications can be packaged, distributed, and executed in a portable and self-contained way" may well be emerging.
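For concreteness, the central artifact appc standardizes is a small JSON image manifest describing how a container image should be identified and executed. A minimal example might look like the following sketch; the image name, executable path and spec version are illustrative:

```json
{
  "acKind": "ImageManifest",
  "acVersion": "0.7.4",
  "name": "example.com/hello",
  "labels": [
    { "name": "os", "value": "linux" },
    { "name": "arch", "value": "amd64" }
  ],
  "app": {
    "exec": ["/usr/bin/hello"],
    "user": "0",
    "group": "0"
  }
}
```

Any runtime implementing the spec, rkt included, can fetch, verify and run an image described this way, which is precisely the portability appc's backers are advocating.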