Linux distributor SUSE recently announced the finalization of an agreement to acquire OpenStack and Cloud Foundry assets from Hewlett Packard Enterprise (HPE). The agreement enables SUSE to incorporate OpenStack IaaS technology into the SUSE OpenStack Cloud and to take advantage of HPE’s Cloud Foundry PaaS to ramp up its entry into the PaaS space. The acquisition of OpenStack technology and talent positions SUSE to deliver a stronger OpenStack Cloud offering to its customers, while the acquisition of HPE’s Cloud Foundry assets signals SUSE’s recognition of the importance of PaaS platforms to emergent DevOps and cloud-native application development practices. The deal also marks the emergence of another critical player in the commercial OpenStack space, one whose deep experience commercializing open source software qua Linux bodes well for its ability to productize OpenStack for the enterprise. HPE has agreed to designate SUSE as its preferred partner for OpenStack, Cloud Foundry and Linux technologies, a move that bolsters the acquisition further, particularly given that HPE plans to OEM the SUSE OpenStack Cloud and Cloud Foundry PaaS within its Helion and Stackato solutions. Stay tuned for the emergence of SUSE’s Cloud Foundry PaaS, because part of the success of SUSE’s IaaS offering is likely to hinge on its ability to deliver turnkey PaaS offerings on its OpenStack-based IaaS platform.
Amazon today announced details of AWS Snowmobile, a 45-foot-long truck that accelerates the process of transferring on-premises data to the Amazon cloud for customers with petabytes or exabytes of data to migrate. Customers with massive volumes of data can connect AWS Snowmobile to their network as an NFS-mounted volume and use their existing applications to transfer data that is ultimately bound for Amazon S3 or Amazon Glacier. AWS Snowmobile requires 350 kW of power and features rugged physical protection as well as data protection functionality such as encryption and GPS tracking. AWS Professional Services helps customers install and set up AWS Snowmobile, allowing them to reap the benefits of a process-driven infrastructure for transferring massive amounts of data securely to the cloud. AWS Snowmobile takes the hassle out of transferring exabyte-scale datasets to the Amazon cloud and offers a solution to the problem of enterprise-level workload migration.
DigitalGlobe currently uses AWS Snowmobile to transfer 100 PB of high-resolution satellite imagery to Amazon Glacier with a celerity and efficiency that was heretofore unavailable, thereby allowing customers greater access to its data and facilitating the execution of distributed analytics. DigitalGlobe characterizes AWS Snowmobile as a “game changer,” but the larger question for AWS Snowmobile is whether customers will place their bets on such a brazenly un-Amazon-like solution given its lack of technological elegance and the sheer crudity of a truck showing up at a company’s doorstep to haul off petabytes of data in the era of digital transformation. The other obvious question is how many customers will jump at the opportunity to move petabytes of data to the Amazon cloud but, as the example of DigitalGlobe illustrates, the urgency of the business need to transfer data to the cloud may well override the sheer lack of elegance of the solution.
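A back-of-the-envelope calculation shows why shipping data by truck remains competitive with the network at this scale. The 10 Gbps link below is a figure chosen purely for illustration, not one quoted by AWS or DigitalGlobe:

```python
# Time to move DigitalGlobe's 100 PB over a dedicated 10 Gbps link,
# assuming full, uninterrupted line rate (an optimistic assumption).
data_bits = 100 * 10**15 * 8       # 100 PB (decimal) expressed in bits
link_bps = 10 * 10**9              # 10 Gbps
days = data_bits / link_bps / 86400
print(round(days))                 # roughly 926 days, i.e. over 2.5 years
```

Even before accounting for contention, protocol overhead and retries, the link would be saturated for more than two and a half years, which is the economic case for a 45-foot truck.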
AMD has announced that Google Compute Engine and the Google Cloud Machine Learning platform will use AMD’s Radeon GPU (graphics processing unit) technology to deliver accelerated performance for computationally intensive simulations in use cases such as “complex medical and financial simulations, seismic and subsurface exploration, machine learning, video rendering and transcoding, and scientific analysis.” The massively parallel calculations that AMD FirePro™ S9300 x2 Server GPUs can handle illustrate AMD’s progress with respect to GPU-based hardware. AMD’s deal with Google is particularly significant because, to date, most cloud vendors have used Nvidia GPUs for computationally intensive use cases such as deep learning. As of 2017, Google will use the K80 and P100 GPU chips based on Nvidia’s Tesla architecture as well as AMD’s FirePro S9300 x2, designed using AMD’s Polaris architecture.
Amazon Web Services, Microsoft Azure and the IBM Cloud, for example, use Nvidia GPUs. Google’s decision to use a combination of Nvidia and AMD GPUs in its data centers empowers Google Cloud to avoid vendor dependency and obtain greater negotiating power in the procurement process. AMD’s partnership with Google represents the second major cloud vendor that has opted to use its GPU technology, following Alibaba’s October 2016 decision to use AMD’s Radeon Pro chips in servers powering its cloud infrastructure. AMD’s deal with Google, however, represents a far more significant milestone in its bid to restore market share lost to Intel and Nvidia by carving out a niche within hyperscale cloud datacenters. AMD will hope to capitalize on its partnership with Google by expanding it to the likes of Azure and Amazon Web Services. Meanwhile, AMD plans to release GPUs based on its forthcoming Vega architecture, which improves upon the existing Polaris GPU architecture, alongside CPUs based on its recently announced Zen architecture.
GE Digital (GE) has acquired ServiceMax for $915M as part of its broader vision to build out its capabilities for delivering services for the industrial internet. ServiceMax focuses on digitizing the delivery of services to ensure the health of devices used in verticals that include life sciences and medical management, aerospace and defense, energy and utilities and building services. For example, ServiceMax optimizes the service lifecycle for the repair of devices such as elevators and oil rigs by giving field service engineers a platform for understanding, identifying and responding to issues related to devices. The ServiceMax platform manages workflows such as the discovery of installed devices, scheduling of device maintenance, documentation of issue resolution, the delivery of proactive alerts regarding device maintenance and the delivery of updates to customers. ServiceMax’s digitization of field services via a SaaS platform also handles inventory management, parts and returns management as well as contract and warranty management.
GE’s acquisition of ServiceMax positions it to continue leveraging and enhancing its Predix platform for aggregating and analyzing massive volumes of machine data from the internet of things. Built on Cloud Foundry, the Predix Platform as a Service provides a framework for connecting machines and then subsequently writing applications that deliver analytics and business intelligence about the performance of the ecosystem of connected devices. By acquiring ServiceMax, GE differentiates itself as the undisputed market leader in analytics and professional services for the internet of things and stands poised to drive a tighter integration between the Predix and ServiceMax platforms. The parallel value streams implicit in GE’s Predix cloud for facilitating app development for IoT and the digitization of field services specific to ServiceMax promise to synergistically inform each other and consolidate GE’s leadership in all things related to the industrial internet. ServiceMax was founded by Athani Krishna and Hari Subramanian. The company had raised a total of $204M prior to its acquisition.
Google has acquired Qwiklabs, the company behind an educational platform geared toward helping users understand cloud computing and how to write cloud-native applications. Launched in 2012, Qwiklabs has focused on helping users obtain training for Amazon Web Services, but Google plans to adapt it to facilitate the delivery of education related to the Google Cloud Platform and its associated G Suite of applications. Google’s acquisition of Qwiklabs underscores the heterogeneity of products and services surrounding the cloud computing revolution and illustrates the urgency of the tech industry’s need for quality training and educational products specific to cloud computing and big data. Expect educational platforms related to cloud computing to proliferate as cloud adoption continues to skyrocket and fuel a need for educational materials that can facilitate the training of end users on the rapidly evolving space of cloud technologies and platforms. Qwiklabs claims that over 500,000 users have received 5 million hours of training on its platform thus far. Terms of the acquisition were not disclosed.
Weaveworks recently announced the general availability of Weave Cloud, a SaaS platform that empowers DevOps teams to connect and monitor containers and microservices-based applications. Using Weave Net, Weave Cloud connects containers for deployment to a multitude of public cloud, private cloud, hybrid cloud and on-premises infrastructures. Subsequent to connecting containers together securely and overseeing their deployment, Weave Cloud monitors and manages assemblages of containers by giving DevOps teams granular analytics on relationships between containers and metrics regarding their performance. Weave Cloud delivers an unprecedented degree of visibility into the topology of relationships between containers as illustrated below:
As shown above, Weave Cloud allows customers to visually consume the inter-relationships between containers and leverage its data visualization capabilities to expeditiously identify containers of interest for analytics related to application performance. For example, Weave Cloud allows DevOps teams to understand the effect of a specific container or set of containers on application performance by analyzing baseline metrics automatically collected by Weave Cloud or custom metrics defined by users. By streamlining the process whereby DevOps teams understand the inter-relationships between containers via its graphical user interface, Weave Cloud enhances customers’ ability to troubleshoot as well as manage daily operations of their container deployments. The richness of Weave Cloud’s data visualization and analytic capabilities, in conjunction with its ability to automate container networking, deployment and daily operations, renders it a powerful tool that accelerates application development on container-based infrastructures and enhances the application lifecycle management capabilities of DevOps teams.
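The baseline comparison described above can be sketched in miniature. The function, sample values and threshold below are hypothetical illustrations of the general technique of flagging a container whose metric deviates from its baseline, not Weave Cloud’s actual API:

```python
from statistics import mean, stdev

def deviates_from_baseline(baseline, current, threshold=3.0):
    """Return True if `current` lies more than `threshold` standard
    deviations away from the baseline samples (a simple z-score check)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hypothetical per-container CPU utilization samples, in percent.
baseline_cpu = [12.0, 14.5, 13.2, 12.8, 13.9, 14.1]
print(deviates_from_baseline(baseline_cpu, 13.5))  # False: within baseline
print(deviates_from_baseline(baseline_cpu, 95.0))  # True: flag this container
```

A monitoring platform applies this kind of test continuously across every collected metric; the value of a product like Weave Cloud is in automating the collection, visualization and alerting around such comparisons rather than in the statistics themselves.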
On Tuesday, Qumulo announced the availability of its data-aware Qumulo Core scale-out storage solution on Hewlett Packard Enterprise (HPE) Apollo servers. Qumulo’s availability on HPE Apollo servers gives customers greater flexibility with respect to the deployment of Qumulo’s storage solutions by supplementing the existing Qumulo appliance-based method of installation. Customers can use Qumulo Core on HPE Apollo servers to obtain data-aware storage solutions for the storage and management of billions of files and objects spanning petabytes of data. Qumulo Core gives enterprises enhanced abilities to understand the files and objects stored within their infrastructure, thereby enabling advanced analytics related to the ingestion of streaming big data and the utilization of stored data by end users. In conjunction with news of the availability of Qumulo Core on HPE Apollo servers, Qumulo announced details of Qumulo Core 2.5, which features the capability to take snapshots of storage infrastructures to obtain incremental backups of stored data for business continuity and data resiliency purposes. In addition, Qumulo Core 2.5 delivers enhanced capabilities to visualize system throughput and understand the utilization of file storage. Furthermore, Qumulo Core 2.5 gives storage administrators the ability to drill down on areas of storage associated with performance degradation to facilitate more nuanced root cause analysis of issues that may involve infrastructure, applications or the nexus between the two.
Taken together, the announcement of Qumulo Core 2.5 in tandem with its availability on commodity hardware for the first time, in the form of HPE Apollo servers, underscores Qumulo’s differentiation within enterprise storage as a player capable of delivering keen data awareness as well as extreme scale-out capabilities that can support the needs of hybrid cloud infrastructures. Expect Qumulo to continue expanding partnerships with other commodity hardware vendors as it deepens its traction within the enterprise and consolidates its brand as a data-aware, scale-out storage solution that delivers a consistent storage infrastructure capable of accommodating the proliferating storage needs of the contemporary enterprise.
The graphic below illustrates some of the visualization capabilities of Qumulo Core 2.5: