On Monday, JFrog announced details of JFrog Xray, a product that delivers deep transparency into artifacts stored within the JFrog Artifactory repository. JFrog Xray performs binary-level analysis of artifacts to detect security vulnerabilities. In addition, the product performs impact analysis that maps the dependencies between container images and their constituent software applications and binary artifacts. Because JFrog Xray delivers granular visibility into the dependencies between the binary artifacts an organization uses, JFrog Artifactory customers can swiftly understand the scope of a security vulnerability that originates in one artifact and has an ancillary effect on others. As such, JFrog Xray tackles the “black box” problem posed by the contents of a container and their potential impact on an organization’s IT infrastructure. Customers can further leverage JFrog Xray’s dependency mapping to understand the performance and architectural effects of changes in one artifact on other components and applications.
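JFrog has not published the internals of Xray’s impact analysis, but the core idea of tracing a vulnerability through an artifact dependency graph can be sketched in a few lines of Python. The graph, artifact names and traversal below are purely illustrative assumptions, not JFrog’s implementation or API:

```python
from collections import deque

# Hypothetical reverse-dependency graph: each key maps to the
# artifacts that *use* it (e.g. a base image used by app images).
USED_BY = {
    "openssl-1.0.1": ["base-image:3.2"],
    "base-image:3.2": ["web-app:1.4", "worker:2.0"],
    "web-app:1.4": ["prod-stack:7"],
    "worker:2.0": ["prod-stack:7"],
    "prod-stack:7": [],
}

def impacted_artifacts(vulnerable, used_by):
    """Breadth-first walk of reverse dependencies: every artifact
    reachable from the vulnerable one is potentially affected."""
    seen, queue = set(), deque([vulnerable])
    while queue:
        artifact = queue.popleft()
        for dependent in used_by.get(artifact, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(seen)

# A flaw in one low-level library ripples out to every image built on it.
print(impacted_artifacts("openssl-1.0.1", USED_BY))
# → ['base-image:3.2', 'prod-stack:7', 'web-app:1.4', 'worker:2.0']
```

The transitive walk is what turns a single CVE match into the kind of organization-wide impact path the article describes.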
Shlomi Ben Haim, CEO of JFrog, commented on the innovation of JFrog Xray as follows:
JFrog Xray responds to a profound pain of our users and the entire software development community for an infinitely expandable way to know everything about every component they’ve ever used in a software project – from build to production to distribution. While container technology revolutionized the market and the way people distribute software packages, it is still a ‘black hole’ that always contains other packages and dependencies. The Ops world has a real need to have full visibility into these containers plus an automated way to point out changes that will impact their production environment. With JFrog Xray, you can not only scan your container images but also to track all dependencies in order to avoid vulnerabilities and optimise your CI/CD flow.
With these remarks, Shlomi Ben Haim highlights the ability of JFrog Xray to penetrate the “black hole” of containers and their contents. The graphic below illustrates the platform’s ability to map an impact path and enumerate affected artifacts via a custom notification generated by the “Performance Alerts” application:
JFrog Xray plays in the same space as Docker Security Scanning but claims competitive differentiation from Docker’s binary-level scanning technology through its ability to map dependencies between artifacts and deliver a comprehensive impact analysis on top of them. JFrog Xray will be generally available as of June 30, 2016.
This week, Docker announced the general availability of Docker Security Scanning, a service that enables Docker Cloud private repo customers to perform security assessments of software in containers as an opt-in service. Docker Security Scanning evaluates the security of Docker images after their upload to the Docker Cloud and thereupon performs continuous monitoring of image security in conjunction with updates to the Common Vulnerabilities and Exposures (CVE) database. The security scan delivers a Bill of Materials featuring a security profile of the constituent components of a Docker image, which empowers Independent Software Vendors (ISVs) to modify their content when a security vulnerability is detected. In addition, Docker’s security scanning service sends out automated notifications that enable IT teams to proactively manage risks associated with security vulnerabilities. By performing binary-level scanning that assesses the security of every component of code housed within a container, Docker Security Scanning streamlines and simplifies the achievement of software security within a container-based environment for building, shipping and deploying code. Moreover, the platform allows users to remove compromised containers and thereby improve governance and control over software development that leverages a container framework. With the release in GA of Docker Security Scanning, Docker strengthens its position as the de facto infrastructure for building, shipping, deploying and managing updates to code. The service is available to all Docker Cloud private repo customers immediately and is expected to expand to all Docker Cloud customers by the end of Q3.
The Azure Container Service from Microsoft is now generally available for deploying and scheduling cloud-based containers after emerging in preview in December 2015. Built on “100% open source software to maximize portability of workloads,” the Azure Container Service offers a choice of Mesosphere’s Data Center Operating System (DC/OS) or Docker Swarm for container orchestration. The Azure Container Service allows customers to deploy and manage cloud-based containers at scale in ways that are optimized for the Azure cloud but concomitantly enable portability across any infrastructure that supports Mesosphere or Docker Swarm. The platform refrains from prescribing a specific orchestration framework for a customer’s various use cases and workloads, and currently does not support Google’s Kubernetes container management infrastructure. Today’s news tightens the partnership between Microsoft and Mesosphere, the latter of which just announced its decision to open source its DC/OS software in collaboration with over 60 partners including Microsoft. Meanwhile, the Azure Container Service continues to testify to Microsoft’s transformation under Satya Nadella, as evinced by its embrace of open source technologies such as Docker and DC/OS and a broader vision of the place of open source technologies in contemporary cloud computing per its November 2015 selection of Red Hat as the preferred vendor for enterprise Linux on Azure.
Docker today announces details of the Docker Datacenter, an integrated platform for agile application development and management composed of the Docker Universal Control Plane, Docker Trusted Registry and support for the Docker Engine. Docker Datacenter empowers customers to deploy an on-premises containers-as-a-service (CaaS) solution that can build, deploy, scale, manage and update container-based applications. Docker Datacenter integrates the Docker Engine container runtime with the orchestration functionality of Docker Swarm. The platform also boasts a security layer and universal management functionality, as well as the Docker Trusted Registry and its concomitant ability to keep a universal record of containers and related artifacts. The integration of the Docker Engine, Docker Trusted Registry and Docker Universal Control Plane within the Docker Datacenter means that Docker users can leverage a battle-tested infrastructure designed especially to give developers an out-of-the-box, holistic framework for building, shipping and managing containers and their constituent applications, and subsequently depend less on custom, ad hoc integrations of Docker components. The Docker Datacenter delivers greater operational agility, portability and control over the management of the complete lifecycle of containers, from their definition and development to their deployment and usage in production environments.
Payroll and Human Resource Solutions giant ADP will be using the Docker Datacenter containers-as-a-service offering to spearhead its transition to microservices, while SA Home Loans plans to transition all enterprise-grade applications from a monolithic to a microservices-based architecture. That companies such as ADP and SA Home Loans, which depend on data security to run their core business, have chosen the Docker Datacenter as the framework to transform their application development practices testifies to the robustness of its security-related functionality. Meanwhile, the Docker Datacenter delivers enhanced operational agility and portability that allows developers to deploy applications in the environment of their choice, whether it be Amazon Web Services, an OpenStack-based IaaS cloud or an on-premises infrastructure. Expect the Docker Datacenter to catapult Docker’s positioning within the agile application development landscape by accelerating build and release lifecycles for container-based applications through the delivery of tighter, standardized integrations between the Docker Engine and its surrounding universe of components.
On February 4, Docker announced the release of version 1.10, marked by enhanced orchestration and application composition functionality, improved security and better networking capabilities. Docker Compose now enables developers to define an application within a single file that enumerates its requirements and the relationships between its constituent components. The enhancements to Docker Compose in Docker 1.10 simplify the management of distributed applications by empowering developers to define “application services, network topologies, volumes and their relationships” in one file. Moreover, this version of Docker gives developers the ability to define network attributes independent of a physical network and to subsequently integrate with Docker Networking. With respect to security, Docker 1.10 brings user namespaces to general availability, separating the privileges of individual containers from those of the daemon. As a result, a process running as root inside a container maps to an unprivileged user on the host. Moreover, customers can now restrict access to the host to a designated group of sysadmins instead of granting global sysadmin access. Additional security functionality in version 1.10 includes seccomp profiles, which deliver granular policy control over individual containers by restricting the system calls a container is permitted to make. Docker 1.10 also features content-addressable image IDs that provide reference IDs for tracking downloaded content, and authorization plugins that allow for the configuration of granular access to the Docker daemon. The combination of the general availability of user namespaces, seccomp profiles, content-addressable image IDs and authorization plugins means that security takes center stage in Docker 1.10, giving users a portfolio of tools for configuring granular access control and container-level privileges. Read more about the release here.
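The single-file application definition described above corresponds to version 2 of the Compose file format, introduced alongside Docker 1.10 in Compose 1.6, which adds top-level `services`, `networks` and `volumes` keys. The sketch below illustrates the shape of such a file; the service names, images and network are illustrative placeholders, not a prescribed configuration:

```yaml
version: '2'

services:
  web:
    image: example/web:1.0    # placeholder application image
    ports:
      - "5000:5000"
    depends_on:
      - redis
    networks:
      - frontend
  redis:
    image: redis:3
    volumes:
      - redis-data:/data      # named volume declared below
    networks:
      - frontend

networks:
  frontend:                   # defined independently of any physical network
    driver: bridge

volumes:
  redis-data:
    driver: local
```

Everything the two-service application needs — its services, their relationships, the network topology and persistent volumes — lives in this one file, which is the simplification the release highlights.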
Platform as a Service vendor dotCloud will be shutting down its operations on February 29 due to fiscal insolvency. dotCloud was purchased from Docker by cloudControl in August 2014 after Docker decided to focus exclusively on its containerization technology, to which it famously pivoted from a PaaS-based business model under the leadership and vision of Solomon Hykes. The demise of dotCloud marks a historic loss for the cloud computing community, given that dotCloud was the company whose team incubated Docker’s container technology. dotCloud’s fiscal insolvency illustrates the depth of competition within the Platform as a Service space, which increasingly pits standalone PaaS players such as Engine Yard and Apprenda against Cloud Foundry-based vendors and the PaaS platforms of behemoths such as Amazon and Microsoft Azure. dotCloud recommends that customers migrate their data to Heroku prior to February 29, 2016 to avoid service disruptions or data loss.