VMware Launches vCloud Hybrid Service IaaS Platform By Leaning On Industry Familiarity With VMware Virtualization Tools

This week, VMware revealed details of its Infrastructure as a Service platform, vCloud Hybrid Service. Based on the premise that enterprise customers want a cloud offering that extends the technology already running in their datacenters, VMware announced a cloud solution built around the VMware virtualization technologies with which the enterprise is deeply familiar. VMware brands the offering as a hybrid cloud because it enables customers to move workloads back and forth between VMware’s public cloud platform and their own private data centers, allowing enterprises to leverage private and public cloud solutions in tandem as dictated by their business needs.

Key features of the VMware IaaS vCloud Hybrid Service include the following:

•IaaS platforms delivered through VMware service providers that provide vCloud Datacenter Services, enabling customers to provision virtual environments with ease. vCloud Datacenter Services feature SLAs guaranteeing uptime of 99.5%, role-based access control and the ability to configure stacks for compliance with SAS 70 Type II or ISO 27001 standards.
•A choice of dedicated or virtual private cloud solutions. A dedicated solution offers customers “physically isolated infrastructure” in contrast to the “logically isolated infrastructure” specific to a virtual private cloud solution.
•An IaaS infrastructure delivered by certified VMware service providers such as AT&T Inc., Bluelock, Colt, CSC, Dell Services, Optus, SingTel, Softbank and T-Systems.
•vCloud Connector 2.0, which enables customers to transfer workloads between private datacenters and VMware public clouds. Customers can move workloads between infrastructures using a single network configuration instead of reconfiguring network settings in the destination infrastructure. Additionally, customers can manage the transfer of data between different infrastructures with “One Catalog,” which synchronizes the list of available content across all relevant infrastructures and thereby spares customers from managing multiple content catalogs concurrently.
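As a back-of-the-envelope illustration of what the 99.5% uptime SLA in the feature list above amounts to, the sketch below converts an uptime percentage into a monthly downtime allowance. The 99.5% figure comes from the announcement; the 30-day month and the tighter comparison SLA are assumptions for illustration only.

```python
# Convert an uptime SLA percentage into the downtime it permits per period.
# 99.5% is the figure from the vCloud Datacenter Services SLA; the 30-day
# month is an assumed billing period.

def allowed_downtime_hours(uptime_pct: float, hours_in_period: float = 30 * 24) -> float:
    """Return the maximum downtime (in hours) an uptime SLA permits."""
    return (1 - uptime_pct / 100) * hours_in_period

print(round(allowed_downtime_hours(99.5), 2))   # -> 3.6 hours per 30-day month
print(round(allowed_downtime_hours(99.95), 2))  # a hypothetical tighter SLA -> 0.36
```

In other words, a 99.5% monthly SLA still tolerates roughly three and a half hours of downtime each month, which is worth weighing against an application’s availability requirements.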

Because VMware’s IaaS vCloud Hybrid Service is delivered through a cluster of service partners, the offering is fundamentally different from the IaaS product offerings of Amazon Web Services and Rackspace. VMware plans to make its vCloud Hybrid Service technology and IP available to all service partners, and promises to build one of the most extensive IaaS partnerships for public cloud computing available in the world today. The product effectively gives new meaning to the term cloud interoperability given that customers can transfer workloads not only between private enterprise datacenters and public clouds enabled by VMware’s service partners, but also between VMware’s own public cloud and partner datacenters. vCloud Hybrid Service will be available through an early access program in June, and VMware anticipates general availability in Q3 of this year.

EMC’s Pivotal One Attempts To Bring The IT Infrastructures Of Facebook, Google and Amazon Web Services To The Enterprise

This week, EMC and its subsidiary VMware revealed details of the vision behind Pivotal, its spin-off company financed in part by $105 million in capital from GE. In a webcast announcing the launch of Pivotal on Wednesday, Pivotal CEO Paul Maritz, formerly CEO of VMware from 2008 to 2012, remarked that Pivotal attempts to bring to enterprises the technology platforms that have allowed internet giants such as Facebook, Google and Amazon Web Services to efficiently operate IT infrastructures on a massive scale while concurrently demonstrating cost and performance efficiencies in application development and data analytics.

Referring specifically to Facebook, Google and Amazon Web Services, Maritz elaborated on the strengths of their IT infrastructure as follows:

If you look at the way they do IT, it is significantly different than the way enterprises do IT. Specifically, they are good at storing large amounts of data and drawing information from it in a cost-effective manner. They can develop applications very quickly. And they are good at automating routines. They used these three capabilities together to introduce new experiences and business processes that have yielded — depending on how you want to count it — a trillion dollars in market value.

According to Maritz, the internet giants are a cut above everyone else with respect to data storage, data analytics, application development and automation. Enterprises, in contrast, leverage comparatively archaic IT infrastructures marked by on-premises data centers, halting attempts to migrate to the cloud, meager data analytics capability and poor or non-existent IT automation and orchestration processes. As a result, the enterprise market represents an opportunity to deploy technology platforms that allow for efficient storage, data integration across disparate data sources and interactive applications that respond in real time to incoming data, as Maritz notes below:

It is clear that there is a widespread need emerging for new solutions that allow customers to drive new business value by cost-effectively reasoning over large datasets, ingesting information that is rapidly arriving from multiple sources, writing applications that allow real-time reactions, and doing all of this in a cloud-independent or portable manner. The need for these solutions can be found across a wide range of industries and it is our belief that these solutions will drive the need for new platforms. Pivotal aims to be a leading provider of such a platform. We are honored to work with GE, as they seek to drive new business value in the age of the Industrial Internet.

More specifically, Pivotal will provide a platform as a service infrastructure called Pivotal One that brings the capabilities currently enjoyed by the likes of Facebook and Google to enterprises in ways that allow them to continue their transition to cloud-based IT infrastructures while concurrently enjoying all of the benefits of advanced storage, analytics and agile application development. In other words, Pivotal One marks the confluence of Big Data, Cloud, Analytics and Application Development in a bold play to commoditize the IT capabilities held by a handful of internet giants and render them available to the enterprise through a PaaS platform.

Pivotal One’s key components include the following:

Pivotal Data Fabric
A platform for data storage and analytics based on Pivotal HD, an enterprise-grade distribution of Apache Hadoop, in combination with the HAWQ analytics engine.

Pivotal Cloud and Application Platform
An application development framework for Java for the enterprise based on Cloud Foundry and Spring.

Pivotal Expert Services
Professional services for agile application development and data analytics.

Open Source Support
Active support of open source projects including Spring, Cloud Foundry, RabbitMQ, Redis and OpenChorus.

Pivotal currently claims Groupon, EMI, and Salesforce.com among its customer base. The company already has 1,250 employees and, given GE’s financing and interests, is poised to take a leadership role in the Industrial Internet space whereby objects such as automobiles, washers, dryers and other appliances deliver real-time data to a circuit of analytic dashboards that iteratively provide feedback, automation and control. Pivotal One also represents a nascent trend within the Platform as a Service industry whereby PaaS is increasingly evolving into an “everything as a service” platform that sits atop various IaaS infrastructures. For example, CumuLogic recently announced news of a platform that allows customers to build Amazon Web Services-like infrastructures marked by suites of IaaS, Big Data, PaaS and application development infrastructures on top of private clouds behind their enterprise firewall. EMC’s Pivotal One is expected to be generally available by the end of 2013.

Joyent Fires Salvo At Amazon Web Services With Enhanced Joyent Cloud

On Thursday, Joyent announced the launch of a new public cloud computing platform that takes direct aim at Amazon Web Services in the increasingly competitive Infrastructure as a Service space. The newly improved Joyent Cloud offering from the San Francisco-based company incorporates the company’s August 15 reconfiguration of its SmartOS operating system, which allows users to deploy applications on Windows and Linux operating systems in addition to SmartOS. The new Joyent Cloud boasts four principal innovations to an upgraded infrastructure management system called SmartDataCenter:

Enhanced analytics
Customers will have increased visibility into the performance of their cloud infrastructure thanks to the deployment of DTrace, an open source dynamic tracing framework that Joyent had previously used exclusively for internal troubleshooting purposes.

Safe storage, Increased speed, Data security
Joyent claims its Windows, Linux and SmartOS machines have superior processing speeds, data security standards and secure storage. SmartOS machines deliver even more granular analytics than their Windows and Linux counterparts.

Lower computational costs
Because Joyent’s servers are reportedly up to 14 times faster than comparable Amazon EC2 machines, computing costs for customers are amongst the lowest in the industry.

More predictable pricing
Joyent has shifted to a pay-per-use pricing model starting at $0.085/hour, in contrast to its previous subscription model.
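To put the pay-per-use model in concrete terms, the sketch below computes monthly bills at the announced $0.085/hour rate. The rate comes from the article; the usage patterns are hypothetical, chosen only to show how costs scale with hours consumed.

```python
# Sketch of pay-per-use billing at Joyent's announced rate.
# HOURLY_RATE comes from the article; the usage hours below are hypothetical.

HOURLY_RATE = 0.085  # USD per compute hour

def pay_per_use_cost(hours: float, rate: float = HOURLY_RATE) -> float:
    """Return the total charge for the given number of metered hours."""
    return hours * rate

# A development server run 8 hours a day for a 30-day month:
part_time = pay_per_use_cost(8 * 30)    # 240 hours
# The same server running around the clock:
full_time = pay_per_use_cost(24 * 30)   # 720 hours

print(f"part-time: ${part_time:.2f}, full-time: ${full_time:.2f}")
# -> part-time: $20.40, full-time: $61.20
```

The appeal over a flat subscription is that intermittent workloads pay only for metered hours, while the per-hour rate still caps the bill for always-on workloads at a predictable monthly figure.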

The enhanced analytics give Joyent a competitive advantage over Amazon Web Services, which has often been characterized as a black box when it comes to providing customers with visibility into the performance of their deployments. “Customers are going to get, in their user interface, the ability to measure real-time latency from the infrastructure all the way up through the application stack,” said Steve Tuck, General Manager for Joyent Cloud. Amazon Web Services customers frequently use third-party applications such as RightScale in order to gain more insight into latency within a cloud stack.

Joyent customers can now deploy applications on Windows and Linux operating systems because of the company’s “porting” of the KVM hypervisor onto its SmartOS operating system. The integration of KVM into SmartOS allows for hardware virtualization in addition to operating system-level virtualization. Joyent founder and chief scientist Jason Hoffman hailed the new Joyent SmartOS as the “first hypervisor platform to emerge in five years” and the only cloud solution in the industry “that can manage both KVM hardware virtualization and operating system-level virtualization on a single OS.” After porting KVM onto SmartOS, Joyent open sourced its revamped SmartOS cloud operating system. The outspoken Hoffman claimed that “this combination of virtualization options, data consistency through ZFS and access to DTrace for rapid troubleshooting, is the most powerful and efficient collection of technologies in cloud application development. I invite developers who use VMware, Citrix, Red Hat or Microsoft hypervisor tools to try this open source package.”

Hardware virtualization refers to a scenario whereby a hypervisor enables one server to function as several servers operating independently of each other. For example, one server might be virtualized into three servers, each of which simultaneously runs a different operating system such as SmartOS, Windows or Linux. Operating system-level virtualization, on the other hand, refers to a case where the operating system itself is virtualized, so that discrete instances of the same operating system run independently of one another. Speaking of the impetus for the company’s embrace of hardware virtualization in its August 15 KVM integration, Joyent’s Steve Tuck noted that “there are a lot of developers out there that say, ‘I just want Linux or I just want Windows. I don’t want to worry about a couple of small differences, even if they are minor, before I code.’”

Joyent Cloud claims 13,000 customers including high-profile names such as LinkedIn, Kabam, StackMob, and Gilt Groupe. The company’s aggressive push of its Infrastructure as a Service offering comes in the wake of competition from increasing OpenStack deployments, the enterprise-oriented Direct Connect offering from Amazon Web Services, Dell’s announcement of a VMware-based cloud offering and HP’s forthcoming OpenStack-based cloud. Joyent is a member of the Open Virtualization Alliance, an association dedicated to the adoption of the KVM hypervisor as a robust alternative to proprietary cloud solutions.

Joyent and Qihoo 360 Technologies Poised to Gain Traction in China’s Cloud Computing Market

Recent announcements by Joyent and Qihoo 360 Technologies indicate that the use of cloud computing technology in China is poised to proliferate dramatically in 2011. On May 16, Joyent revealed details of an alliance with ClusterTech whereby ClusterTech would become the provider of public cloud services to companies in the gaming, media, mobile and social media space in China. Under this arrangement, ClusterTech will provision Joyent’s cloud computing SmartDataCenter 6 software to “service providers, data center operators and systems integrators” that will, in turn, provide Joyent’s cloud computing technology to media, gaming and mobile companies in China. In licensing its cloud computing software to a third-party distributor, Joyent leverages a business model that differs markedly from most of its U.S. competitors such as Amazon Web Services and Rackspace, which retain control over the deployment of their cloud computing operating systems. Joyent’s partnership with ClusterTech builds upon its previous entry into the Chinese cloud computing market in 2009 with a public cloud data center in the Qinhuangdao Economic and Technological Development Zone (QETDZ), Hebei Province, China.

Meanwhile, Qihoo 360 Technologies, developer of China’s most popular internet security software, recently announced plans to enter the cloud computing space by providing online data storage. Qihoo CEO Zhou Hongyi mentioned the possibility of acquiring relevant companies in order to expand into the cloud computing and data storage space. The company’s first-quarter revenue more than doubled to $22.9 million, compared to $9.7 million a year earlier, largely as a result of increased online advertising revenue. Qihoo went public in March through an IPO that valued the company at $202 million, with shares priced at $14.50. As of June 1, the stock is trading at $26.25 a share, up more than 81% from its IPO price.
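The growth figures quoted above can be sanity-checked with a few lines of arithmetic; the sketch below uses only the numbers reported in this article.

```python
# Verify the percentage-growth claims using the figures quoted in the article.

def pct_change(old: float, new: float) -> float:
    """Return the percentage change from old to new."""
    return (new - old) / old * 100

# Qihoo quarterly revenue: $9.7M -> $22.9M ("more than doubled")
print(round(pct_change(9.7, 22.9), 1))     # -> 136.1 (a 136% increase)

# Qihoo share price: $14.50 at IPO -> $26.25 on June 1 ("up more than 81%")
print(round(pct_change(14.50, 26.25), 1))  # -> 81.0
```

Both claims hold up: revenue grew roughly 136% year over year, and the stock sits about 81% above its IPO price.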

Google’s Blogger Tight-Lipped About Reasons For Outage As Service Is Restored

Google’s Blogger service experienced a major outage on Thursday, May 12 that continued until service was finally restored on Friday, May 13 at 10:30 AM PDT. Users were unable to log in to the dashboard that enables bloggers to publish and edit posts, edit widgets and alter the design templates for their blogs. The outage coincided with the impending launch of a major overhaul of Blogger’s user interface and functionality, but a Blogger tweet asserted that the outage was unrelated to the upcoming redesign. Most notable about the outage, however, was Google’s tight-lipped explanation of the technical reasons responsible for it, in contradistinction to Amazon Web Services’ (AWS) exhaustively thorough explanation of its own service outage in late April. Blogger’s Tech Lead/Manager Eddie Kessler explained the Blogger outage as follows:

Here’s what happened: during scheduled maintenance work Wednesday night, we experienced some data corruption that impacted Blogger’s behavior. Since then, bloggers and readers may have experienced a variety of anomalies including intermittent outages, disappearing posts, and arriving at unintended blogs or error pages. A small subset of Blogger users (we estimate 0.16%) may have encountered additional problems specific to their accounts. Yesterday we returned Blogger to a pre-maintenance state and placed the service in read-only mode while we worked on restoring all content: that’s why you haven’t been able to publish. We rolled back to a version of Blogger as of Wednesday May 11th, so your posts since then were temporarily removed. Those are the posts that we’re in the progress of restoring.

Routine maintenance caused “data corruption” that led to disappearing posts and the subsequent outage of the user management dashboard. But Kessler neither elaborates on the error that resulted from “scheduled maintenance” nor specifies the form of data corruption that caused such a wide variety of errors on Blogger pages. In contrast, AWS revealed that its outage was caused by misrouting network traffic from a high-bandwidth connection to a low-bandwidth connection on Elastic Block Store, the block storage service for Amazon EC2 instances. In its post-mortem explanation, AWS described the repercussions of the network misrouting on the architecture of EBS within the affected Region in excruciatingly impressive detail. Granted, Blogger is a free service used primarily for personal blogging, whereas AWS hosts customers with hundreds of millions of dollars in annual revenue. Nevertheless, Blogger users published half a billion posts in 2010, which were read by 400 million readers across the world. Users, readers and cloud computing savants alike would benefit from learning more about the technical issues responsible for outages such as this one, because vendor transparency will only increase public confidence in the cloud and help propel industry-wide innovation. Even if the explanation were not quite as thorough as that offered by Amazon Web Services, Google would do well to supplement its note about “data corruption” with something more substantial for Blogger users and the cloud computing community more generally.

Rackspace Targets Startups With its Rackspace Startup Program

Rackspace formally announced a program designed to target startup companies as customers for its cloud computing products and services on March 11. Titled the “Rackspace Startup Program,” the strategy makes Rackspace’s cloud computing offering available to startups that are part of incubator and accelerator programs such as 500 Startups, TechStars, Y Combinator and General Assembly. Drawing on the fact that Rackspace itself was once a startup, the program offers customized guidance about deploying applications within a cloud computing environment alongside its Rackspace and OpenStack cloud resources. The program offers yet another illustration of the divergences between Rackspace’s business model and that of Amazon Web Services. Whereas Amazon Web Services represents a pure product offering, Rackspace provides product-enhanced services that complement its cloud offering with consulting services such as those recently formalized by its Cloud Builders service line. Dubset is one example of a startup that uses Rackspace’s services to stream music and to track and perform analytics on what gets played. In a note on Rackspace’s blog, Dubset reports that its “costs are low and we never have to worry about our Cloud Servers.” Alongside the release of its Startup Program, Rackspace also announced the availability of version 2.0 of its cloud computing application, Rackspace Cloud 2.0, which can additionally be accessed via iPhone, iPad and iPod Touch. The free application enables cloud managers and development teams to access and manage their cloud computing environments while away from their desktop or laptop consoles.