EMC’s Pivotal One Attempts To Bring The IT Infrastructures Of Facebook, Google and Amazon Web Services To The Enterprise

This week, EMC and its subsidiary VMware revealed details of the vision behind Pivotal, their spin-off company financed in part by $105 million in capital from GE. In a webcast announcing the launch of Pivotal on Wednesday, Pivotal CEO Paul Maritz, who served as CEO of VMware from 2008 to 2012, remarked that Pivotal aims to bring to enterprises the technology platforms that have allowed internet giants such as Facebook, Google and Amazon Web Services to operate IT infrastructures efficiently at massive scale while demonstrating cost and performance efficiencies in application development and data analytics.

Referring specifically to Facebook, Google and Amazon Web Services, Maritz elaborated on the strengths of their IT infrastructure as follows:

If you look at the way they do IT, it is significantly different than the way enterprises do IT. Specifically, they are good at storing large amounts of data and drawing information from it in a cost-effective manner. They can develop applications very quickly. And they are good at automating routines. They used these three capabilities together to introduce new experiences and business processes that have yielded — depending on how you want to count it — a trillion dollars in market value.

According to Maritz, the internet giants are a cut above everyone else with respect to data storage, data analytics, application development and automation. Enterprises, in contrast, rely on comparatively archaic IT infrastructures marked by on-premises data centers and halting migrations to the cloud, in conjunction with meager data analytics capability and poor or non-existent IT automation and orchestration processes. As a result, the enterprise market represents an opportunity to deploy technology platforms that allow for efficient storage, data integration across disparate data sources and interactive applications that respond to incoming data in real time, as Maritz notes below:

It is clear that there is a widespread need emerging for new solutions that allow customers to drive new business value by cost-effectively reasoning over large datasets, ingesting information that is rapidly arriving from multiple sources, writing applications that allow real-time reactions, and doing all of this in a cloud-independent or portable manner. The need for these solutions can be found across a wide range of industries and it is our belief that these solutions will drive the need for new platforms. Pivotal aims to be a leading provider of such a platform. We are honored to work with GE, as they seek to drive new business value in the age of the Industrial Internet.

More specifically, Pivotal will provide a Platform as a Service (PaaS) infrastructure called Pivotal One that brings the capabilities currently enjoyed by the likes of Facebook and Google to enterprises in ways that allow them to continue their transition to cloud-based IT infrastructures while enjoying all of the benefits of advanced storage, analytics and agile application development. In other words, Pivotal One marks the confluence of Big Data, cloud, analytics and application development in a bold play to commoditize the IT capabilities held by a handful of internet giants and make them available to the enterprise through a PaaS platform.

Pivotal One’s key components include the following:

Pivotal Data Fabric
A platform for data storage and analytics based on Pivotal HD, an enterprise-grade distribution of Apache Hadoop that includes the HAWQ analytics engine.

Pivotal Cloud and Application Platform
An enterprise application development framework for Java based on Cloud Foundry and Spring.

Pivotal Expert Services
Professional services for agile application development and data analytics.

Open Source Support
Active support of open source projects including, but not limited to, Spring, Cloud Foundry, RabbitMQ, Redis and OpenChorus.

Pivotal currently claims Groupon, EMI and Salesforce.com among its customer base. The company already has 1,250 employees and, given GE’s financing and interests, is poised to take a leadership role in the industrial internet space, in which objects such as automobiles, washers, dryers and other appliances deliver real-time data to analytic dashboards that iteratively provide feedback, automation and control. Pivotal One also represents a nascent trend within the Platform as a Service industry whereby PaaS is increasingly evolving into an “everything as a service” platform that sits atop various IaaS infrastructures. For example, CumuLogic recently announced a platform that allows customers to build Amazon Web Services-like infrastructures marked by suites of IaaS, Big Data, PaaS and application development capabilities on top of private clouds behind their enterprise firewall. EMC’s Pivotal One is expected to be generally available by the end of 2013.


Joyent Fires Salvo At Amazon Web Services With Enhanced Joyent Cloud

On Thursday, Joyent announced the launch of a new public cloud computing platform that takes direct aim at Amazon Web Services in the increasingly competitive Infrastructure as a Service space. The newly improved Joyent Cloud offering from the San Francisco-based company incorporates the company’s August 15 reconfiguration of its SmartOS operating system, which allows users to deploy applications on Windows and Linux operating systems in addition to SmartOS. The new Joyent Cloud boasts four principal enhancements to an upgraded infrastructure management system called SmartDataCenter:

Enhanced analytics
Customers gain increased visibility into the performance of their cloud infrastructure thanks to the deployment of DTrace, an open source analytics tool that Joyent had previously used exclusively for internal troubleshooting.

Safe storage, increased speed and data security
Joyent claims its Windows, Linux and SmartOS machines offer superior processing speeds, data security standards and secure storage. SmartOS machines deliver even more granular analytics than their Windows and Linux counterparts.

Lower computational costs
Because Joyent’s servers are reportedly up to 14 times faster than comparable Amazon EC2 machines, computing costs for customers are among the lowest in the industry.

More predictable pricing
Joyent has shifted to a pay-per-use pricing model at a rate starting at $0.085 per hour, in contrast to its previous subscription model (a rough back-of-envelope cost illustration follows below).
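To put the pay-per-use model in rough perspective, the back-of-envelope sketch below (in Python) multiplies the advertised $0.085 entry rate by different monthly usage patterns. Only the hourly rate comes from Joyent’s announcement; the usage figures are illustrative assumptions, not Joyent’s published numbers.

# Back-of-envelope cost comparison for pay-per-use pricing.
# Only the $0.085/hour entry rate comes from Joyent's announcement;
# the usage patterns below are illustrative assumptions.
HOURLY_RATE = 0.085      # USD per hour, Joyent Cloud's advertised entry rate
HOURS_PER_MONTH = 730    # average hours in a month

def pay_per_use_cost(hours_used, hourly_rate=HOURLY_RATE):
    """Cost when billing only for the hours an instance actually runs."""
    return hours_used * hourly_rate

always_on = pay_per_use_cost(HOURS_PER_MONTH)   # instance running 24/7
part_time = pay_per_use_cost(160)               # dev box used ~8 hours on weekdays

print(f"Always-on instance: ${always_on:.2f}/month")   # ~$62.05
print(f"Part-time instance: ${part_time:.2f}/month")   # ~$13.60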

The enhanced analytics give Joyent a competitive advantage over Amazon Web Services, which has often been characterized as a black box when it comes to providing customers with visibility into the performance of their deployments. “Customers are going to get, in their user interface, the ability to measure real-time latency from the infrastructure all the way up through the application stack,” said Steve Tuck, General Manager for Joyent Cloud. Amazon Web Services customers frequently turn to third-party applications such as RightScale to gain more insight into latency within a cloud stack.

Joyent customers can now deploy applications on Windows and Linux operating systems because of the company’s “porting” of the KVM hypervisor onto its SmartOS operating system. The integration of KVM into SmartOS allows for hardware virtualization in addition to operating system-level virtualization. Joyent founder and chief scientist Jason Hoffman hailed the new Joyent SmartOS as the “first hypervisor platform to emerge in five years” and the only cloud solution in the industry “that can manage both KVM hardware virtualization and operating system-level virtualization on a single OS.” After porting KVM onto SmartOS, Joyent open sourced the revamped cloud operating system. The outspoken Hoffman claimed that “this combination of virtualization options, data consistency through ZFS and access to DTrace for rapid troubleshooting, is the most powerful and efficient collection of technologies in cloud application development. I invite developers who use VMware, Citrix, Red Hat or Microsoft hypervisor tools to try this open source package.”

Hardware virtualization refers to a scenario in which a hypervisor enables one server to function as several servers operating independently of each other. For example, one server might be virtualized into three servers, each running a different operating system such as SmartOS, Windows or Linux simultaneously. Operating system-level virtualization, on the other hand, refers to a case where the operating system itself is virtualized, so that discrete instances of the same operating system run independently. Speaking of the impetus behind the company’s embrace of hardware virtualization in its August 15 KVM integration, Joyent’s Steve Tuck noted that “there are a lot of developers out there that say, ‘I just want Linux or I just want Windows. I don’t want to worry about a couple of small differences, even if they are minor, before I code.’”

Joyent Cloud claims 13,000 customers, including high-profile names such as LinkedIn, Kabam, StackMob and Gilt Groupe. The company’s aggressive push of its Infrastructure as a Service offering comes in the wake of competition from increasing OpenStack deployments, the enterprise-oriented Direct Connect offering from Amazon Web Services, Dell’s announcement of a VMware-based cloud offering and HP’s forthcoming OpenStack-based cloud. Joyent is a member of the Open Virtualization Alliance, an association dedicated to the adoption of the KVM hypervisor as a robust alternative to proprietary virtualization solutions.

Joyent and Qihoo 360 Technologies Poised to Gain Traction in China’s Cloud Computing Market

Recent announcements by Joyent and Qihoo 360 Technologies indicate that the use of cloud computing technology in China is poised to proliferate dramatically in 2011. On May 16, Joyent revealed details of an alliance with ClusterTech under which ClusterTech becomes the provider of Joyent’s public cloud services to companies in the gaming, media, mobile and social media space in China. Under this arrangement, ClusterTech will provision Joyent’s SmartDataCenter 6 cloud computing software to “service providers, data center operators and systems integrators” that will, in turn, provide Joyent’s cloud computing technology to media, gaming and mobile companies in China. In licensing its cloud computing software to a third-party distributor, Joyent leverages a business model that differs markedly from most of its U.S. competitors, such as Amazon Web Services and Rackspace, which retain control over the deployment of their cloud computing operating systems. Joyent’s partnership with ClusterTech builds upon its previous entry into the Chinese cloud computing market in 2009 with a public cloud data center in the Qinhuangdao Economic and Technological Development Zone (QETDZ), Hebei Province, China.

Meanwhile, Qihoo 360 Technologies, developer of China’s most popular internet security software, recently announced plans to enter the cloud computing space by providing online data storage. Qihoo CEO Zhou Hongyi mentioned the possibility of acquiring relevant companies in order to expand into the cloud computing and data storage space. The company’s first quarter revenue more than doubled to $22.9 million from $9.7 million a year earlier, largely as a result of increased online advertising revenue. Qihoo went public in March through an IPO that valued the company at $202 million, with shares priced at $14.50. As of June 1, the stock is trading at $26.25 a share, up more than 81% from its IPO price.


Google’s Blogger Tight-Lipped About Reasons For Outage As Service Is Restored

Google’s Blogger service experienced a major outage on Thursday, May 12 that continued until service was finally restored on Friday, May 13 at 10:30 AM PDT. Users were unable to log in to the dashboard that enables bloggers to publish and edit posts, edit widgets and alter the design templates for their blogs. The outage coincided with the impending launch of a major overhaul to Blogger’s user interface and functionality, but a Blogger tweet asserted that the outage was independent of the upcoming redesign. Most notable about the outage, however, was Google’s tight-lipped explanation of the technical reasons responsible for it, in contradistinction to Amazon Web Services’ (AWS) exhaustively thorough explanation of its own service outage in late April. Blogger’s Tech Lead/Manager Eddie Kessler explained the Blogger outage as follows:

Here’s what happened: during scheduled maintenance work Wednesday night, we experienced some data corruption that impacted Blogger’s behavior. Since then, bloggers and readers may have experienced a variety of anomalies including intermittent outages, disappearing posts, and arriving at unintended blogs or error pages. A small subset of Blogger users (we estimate 0.16%) may have encountered additional problems specific to their accounts. Yesterday we returned Blogger to a pre-maintenance state and placed the service in read-only mode while we worked on restoring all content: that’s why you haven’t been able to publish. We rolled back to a version of Blogger as of Wednesday May 11th, so your posts since then were temporarily removed. Those are the posts that we’re in the progress of restoring.

Routine maintenance caused “data corruption” that led to disappearing posts and the subsequent outage of the user management dashboard. But Kessler refrains from elaborating on the error that resulted from the “scheduled maintenance,” nor does he specify the form of data corruption that caused such a wide variety of errors on Blogger pages. In contrast, AWS revealed that its outage was caused by the misrouting of network traffic from a high-bandwidth connection to a low-bandwidth connection within Elastic Block Store (EBS), the block storage service used by Amazon EC2 instances. In its post-mortem explanation, AWS described the repercussions of the network misrouting on the architecture of EBS within the affected Region in excruciatingly impressive detail. Granted, Blogger is a free service used primarily for personal blogging, whereas AWS hosts customers with hundreds of millions of dollars in annual revenue. Nevertheless, Blogger users published half a billion posts in 2010, which were read by 400 million readers across the world. Users, readers and cloud computing savants alike would benefit from learning more about the technical issues responsible for outages such as this one, because vendor transparency will only increase public confidence in the cloud and help propel industry-wide innovation. Even if the explanation were not quite as thorough as that offered by Amazon Web Services, Google would do well to supplement its note about “data corruption” with something more substantial for Blogger users and the cloud computing community more generally.


Rackspace Targets Startups With Its Rackspace Startup Program

On March 11, Rackspace formally announced a program designed to attract startup companies as customers for its cloud computing products and services. Titled the “Rackspace Startup Program,” the initiative makes Rackspace’s cloud computing offering available to startups that are part of incubator and accelerator programs such as 500 Startups, TechStars, Y Combinator and General Assembly. Drawing on the understanding that Rackspace itself was once a startup, the program offers customized guidance on deploying applications within a cloud computing environment alongside its Rackspace and OpenStack cloud resources. The program offers yet another illustration of the divergence between Rackspace’s business model and that of Amazon Web Services. Whereas Amazon Web Services represents a pure product offering, Rackspace provides product-enhanced services that complement its cloud offering with consulting services such as those recently formalized by its Cloud Builders service line. Dubset is one example of a startup that uses Rackspace’s services to stream music and to track and analyze what gets played. In a note on Rackspace’s blog, Dubset reports that its “costs are low and we never have to worry about our Cloud Servers.” Alongside the release of its Startup Program, Rackspace also announced the availability of version 2.0 of its cloud computing application, Rackspace Cloud 2.0, which can additionally be accessed from the iPhone, iPad and iPod Touch. The free application enables cloud managers and development teams to access and modify their cloud computing environments while away from their desktop or laptop consoles.


OpenStack Demo Environment to be Launched by Rackspace, Dell and Equinix

Rackspace, Dell and Equinix have decided to launch a demonstration environment of OpenStack, an open source Infrastructure as a Service cloud computing platform. The OpenStack demonstration environment is intended to entice customers to investigate OpenStack’s cloud computing facilities to the point where they subsequently purchase service offerings that manage the process of building and maintaining a customer’s application environment in the cloud. Rackspace, for example, has a service offering called Cloud Builders that facilitates the process of transitioning a customer’s internally hosted applications to a cloud computing environment. Cloud Builders helps customers design and launch applications either within a public cloud analogous to Amazon Web Services or within a private cloud behind the customer’s own firewall in their own data center.

The OpenStack demo environment will be available in three locations: the Rackspace data center in Chicago and Equinix data centers in Silicon Valley and Ashburn, VA. The platform will run on Dell’s Intel-based PowerEdge C server technology, Platform Equinix, a delivery platform for data centers across the U.S., OpenStack’s open source cloud computing code and Rackspace’s Cloud Builders services and support. OpenStack began in 2010 as a collaboration between NASA and Rackspace designed to deliver a scalable, open source cloud computing operating system. The project features OpenStack Compute and OpenStack Storage, which respectively provide services to provision virtual servers and an infrastructure for storing terabytes and petabytes of data. Today, over 50 companies, including Dell, AMD, Intel, Citrix and Cisco, have participated in the OpenStack project by providing technical expertise, mindshare, capital and real-world application deployments.
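To illustrate what provisioning a virtual server through OpenStack Compute looks like in practice, the sketch below uses the python-novaclient library. It is a minimal, hypothetical example: the credentials, image name and flavor name are placeholders, and the exact client constructor varies between OpenStack releases.

# Minimal sketch: booting a virtual server with OpenStack Compute (Nova)
# via python-novaclient. Credentials, image and flavor names are placeholders,
# and the constructor signature differs across OpenStack releases.
from novaclient import client

nova = client.Client("2",                 # Compute API version
                     "demo_user",         # placeholder username
                     "demo_password",     # placeholder password
                     "demo_project",      # placeholder project/tenant
                     "http://openstack.example.com:5000/v2.0")

image = nova.images.find(name="ubuntu-server")   # machine image to boot from
flavor = nova.flavors.find(name="m1.small")      # instance size

# Nova returns immediately; the instance is built asynchronously.
server = nova.servers.create(name="demo-instance", image=image, flavor=flavor)
print(server.id, server.status)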

The demo of OpenStack represents an important moment for Rackspace, one of the key leaders of the OpenStack initiative. With the release of Cloud Builders, Rackspace has elected to pursue a business model diametrically opposed to that of Amazon Web Services because it offers customers an array of services to complement its product offering. Amazon Web Services, in contrast, delivers a highly streamlined, flexible, inexpensive deployment environment and experience that explicitly eschews consultative sales and service offerings, with minor exceptions for its premium support customers. If successful, the demo of OpenStack should make Rackspace an even more attractive acquisition target amid a flurry of acquisition speculation following the recent purchases of Terremark by Verizon and NaviSite by Time Warner. Rackspace CEO Lanham Napier denies interest in acquisition conversations, favoring instead a continued policy of organic growth supplemented by its own acquisitions, such as that of cloud computing developer Anso Labs in February 2011. Rackspace also acquired Cloudkick, the cloud monitoring company, in December of last year.


Amazon Web Services: Elastic Beanstalk and CloudFormation Explained

Amazon Web Services has recently released Elastic Beanstalk and CloudFormation, two services that automate the process of provisioning hardware resources and deploying applications on AWS’s flexible, inexpensive development environment. Introduced on January 19, Elastic Beanstalk automates the process of deploying an application on Amazon’s virtual servers once it has been written. Currently in beta for Java applications only, Elastic Beanstalk manages the specifics of provisioning servers, load balancing and auto-scaling for unexpected spikes in traffic once an application is written. Elastic Beanstalk’s auto-scaling functionality scales horizontally by creating clones of the original server instance, rather than vertically provisioning a larger server with correspondingly larger memory. Developers retain the flexibility to override Elastic Beanstalk’s auto-scaling features, in which case the application conforms to the scaling parameters indicated by the user.
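For a rough sense of what deploying an application “once it has been written” looks like programmatically, the sketch below uses the boto3 AWS SDK for Python (rather than the Java tooling discussed above) to register an application bundle and launch an Elastic Beanstalk environment. The application name, S3 bucket, key and solution stack string are placeholders.

# Hedged sketch: deploying an already-built application bundle to Elastic
# Beanstalk with boto3. Application name, bucket, key and solution stack
# are placeholders.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Register the application and a version pointing at a bundle uploaded to S3.
eb.create_application(ApplicationName="demo-app")
eb.create_application_version(
    ApplicationName="demo-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "demo-bucket", "S3Key": "demo-app-v1.war"},
)

# Launch an environment; Elastic Beanstalk provisions the servers, load
# balancer and auto-scaling group behind this single call.
eb.create_environment(
    ApplicationName="demo-app",
    EnvironmentName="demo-env",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux running Tomcat 7",  # illustrative
)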

CloudFormation fulfills an analogous but more ambitious function of automating application deployment. Launched on February 25, CloudFormation uses templates to automate the creation of an integrated hardware infrastructure for an application containing multiple components. For example, CloudFormation takes the machine images, storage, security and messaging components of an application, understands their dependencies, and launches them in the right order using the template. In other words, instead of requiring a developer to write discrete scripts to launch each individual instance from its Amazon Machine Image (AMI), CloudFormation gathers together parameters specified by a developer and creates one template for the requisite “stack” of resources that collectively specifies elastic IP addresses, message queues, load balancing and auto-scaling. CloudFormation operates on JSON templates that describe an application’s configuration parameters.
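To make the template idea concrete, the sketch below defines a deliberately tiny JSON template in Python and submits it through boto3’s CloudFormation client. The stack name, AMI ID and instance type are placeholders; real templates typically declare many interdependent resources, and CloudFormation works out the launch order from their dependencies.

# Hedged sketch: creating a CloudFormation stack from a minimal JSON template
# with boto3. Stack name, AMI ID and instance type are placeholders.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",   # placeholder machine image
                "InstanceType": "t1.micro",
            },
        }
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))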

In his AWS blog post about CloudFormation, Jeff Barr uses the metaphor of cooking and baking to describe the service’s innovation and importance. While cooking allows for individual discretion and ad hoc changes to a recipe, baking requires precise combinations of ingredients so that cookies of the same taste and texture emerge from the oven time and time again. In the same vein, CloudFormation enables developers to become bakers by automating the creation of complex systems. Developers who wish to create the same environment a number of times no longer need to memorize and re-execute the same set of scripts over and over again; they can instead use CloudFormation to automate and scale their deployments. Amazon released CloudFormation with templates for a number of open source applications such as Drupal, WordPress, Gollum and Joomla.

Amazon’s Jeff Barr put it as follows:

First, AWS is programmable, so it should be possible to build even complex systems (sometimes called “stacks”) using repeatable processes. Second, the dynamic nature of AWS makes people want to create multiple precise copies of their operating environment. This could be to create extra stacks for development and testing, or to replicate them across multiple AWS Regions….Today, all of you cooks get to become bakers!

Together with Elastic Beanstalk, CloudFormation goes a long way toward streamlining the process of deploying applications on Amazon’s EC2 environment. Despite Amazon’s lack of managed services, the first quarter 2011 release of these two services should render AWS more attractive to small and enterprise customers alike.