Amazon Promises To Detail “Other” Category In Earnings Reports And Reveal AWS Earnings Later In 2015

Today, in its fourth quarter earnings call, Amazon announced that Amazon Web Services now claims more than one million active customers, with usage of the AWS platform growing roughly 90% year over year. Moreover, in 2014, AWS rolled out 515 feature and service releases, 80% more than the previous year. The most recent quarter also featured the release of AWS Lambda, a service that runs code in response to events from Amazon S3, Amazon Kinesis, and Amazon DynamoDB. During the same period, AWS also announced the Amazon EC2 Container Service and Amazon Aurora, a MySQL-compatible relational database engine for Amazon RDS that claims up to five times the performance of MySQL at one tenth the cost of commercial relational database products. Importantly, Amazon announced that it will finally reveal details of AWS earnings later in 2015. Those earnings are currently bucketed into the “Other” category in Amazon’s earnings reports, leaving the exact figure open to endless analyst speculation and inference. Revenue in Amazon’s “Other” category for the most recent quarter was $1.67B, although that figure includes advertising and credit card-related revenues.
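For readers unfamiliar with Lambda’s programming model, the sketch below illustrates the event-driven pattern the service implements: a function invoked with an event document describing, for example, an object newly created in S3. The sketch is written in Python for consistency with the other examples in this roundup (Lambda’s initial preview supported Node.js handlers), and the function and variable names are illustrative.

```python
# Minimal sketch of a Lambda-style event handler for S3 "ObjectCreated"
# notifications. Illustrative only; names are ours, not AWS's.

def handler(event, context):
    # An S3 notification event carries one or more records, each naming
    # the bucket and object key that triggered the invocation.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print("New object: s3://%s/%s" % (bucket, key))
```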

Amazon Web Services Reveals Low Cost T2 Instances Marked By Burstable CPU Capability

Amazon Web Services recently introduced T2 instances, a new EC2 instance offering for applications that do not require sustained high CPU performance. T2 instances are the least expensive instances available on the EC2 platform, with prices starting at $0.013/hour. T2 instances deliver a baseline level of CPU performance in conjunction with the capacity to “burst” above that baseline. The baseline level and the bursting ability are governed by “CPU Credits” that instances accrue while they are idle: the more a T2 instance idles, the greater its ability to burst above its baseline capacity. T2 instances are ideal for web servers, small databases and development environments that only sparingly use the full capacity of the CPU. Applications with high, consistent CPU needs, such as computationally intensive workloads or applications processing streaming data, will instead require fixed-performance EC2 instances rather than the burstable performance specific to the T2 family. T2 instances are available in micro, small and medium sizes with 1, 2 and 4 GB of memory, respectively. Overall, the T2 offering positions Amazon even more strongly with respect to its competitors in catering to organizations with low CPU needs in general that nevertheless “require the full CPU resources for short bursts,” according to Matt Garman, VP of Amazon EC2 at Amazon Web Services.
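The credit mechanics are easiest to grasp with a toy model. The sketch below assumes the launch-era figures for a t2.micro, roughly 6 credits earned per hour, a 10% baseline, and one credit buying one minute of full-core CPU; treat the constants as illustrative assumptions rather than AWS documentation.

```python
# Toy model of T2 CPU credits (assumed t2.micro figures: ~6 credits
# earned per hour, one credit = one minute of full-core CPU, 10% baseline).

CREDITS_PER_HOUR = 6.0
BASELINE_UTILIZATION = 0.10

# At the baseline, credits earned and spent balance exactly.
assert BASELINE_UTILIZATION * 60.0 == CREDITS_PER_HOUR

def net_credits(hours_idle, hours_bursting):
    """Credits banked while idle, minus credits spent bursting at 100% CPU."""
    earned = (hours_idle + hours_bursting) * CREDITS_PER_HOUR
    # Running flat-out costs 60 credits per hour on a single-vCPU instance.
    spent = hours_bursting * 60.0
    return earned - spent

# Eight idle hours bank 48 credits -- enough to run at 100% CPU for
# a little under an hour before falling back to the 10% baseline.
print(net_credits(hours_idle=8, hours_bursting=0))    # 48.0
print(net_credits(hours_idle=8, hours_bursting=0.5))  # 21.0
```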

10 Things You Should Know About Zynga and Its IPO

Zynga, the social gaming company founded by Mark Pincus in 2007, hopes to raise $1 billion in an IPO that follows on the heels of the LinkedIn and Groupon IPOs of the last few months. Zynga’s IPO is expected to offer 10 percent of its shares to the public at a valuation of $20 billion. Here are ten things you should know about Zynga and its July 1 S-1 filing.

1. Unlike Groupon, Zynga is profitable. The company reported $90.6 million in profit on revenues of $597.46 million in 2010. For the first quarter of 2011, it reported an $11.8 million profit on revenue of $235.42 million.

2. Zynga’s IPO features three categories of shares: Class A, B and C. Class A shares will be issued to public shareholders, while Class B and C shares belong to senior executives and investors. CEO Mark Pincus owns all of the Class C shares. Pincus made almost $110 million by selling a portion of his Class B shares back to Zynga last March.

3. Zynga’s investors include Kleiner Perkins Caufield & Byers, Union Square Ventures, DST Global, Institutional Venture Partners (IVP), Foundry Group, Avalon Ventures, Google, Reid Hoffman, Peter Thiel, Andreessen Horowitz, Tiger Global and Kevin Rose. Key investors own the following percentages of Class B shares: Kleiner Perkins Caufield & Byers owns 11%; IVP, Foundry Group and Avalon Ventures each own 6.1%; DST Global owns 5.8%; and Union Square Ventures holds 5.5%.

4. Zynga is the biggest developer of Facebook applications such as CityVille, FarmVille, Mafia Wars, Words with Friends and Zynga Poker. The company has 60 million daily active users on Facebook and more daily active users than the next 30 Facebook social game developers combined.

5. Zynga has the top two games in the word category for the Apple App Store for iPhone.

6. Zynga has 2,000 employees who serve 148 million unique monthly users in 166 countries. Players create 38,000 virtual entities per second and spend 2 billion minutes a day gaming.

7. “Substantially all” of Zynga’s revenue derives from the Facebook platform. Any decision by Facebook that adversely affects Zynga’s gaming operations would have significant repercussions for its revenue stream.

8. Zynga sees its market opportunity in the context of: a) the growth of social networking; b) the culture of the “App Economy,” whereby developers have access to social network platforms; and c) a “Free-to-Play” gaming culture that allows users to play games for free, thereby attracting a broader set of users and creating a richer ecosystem for social interaction within the gaming environment.

9. Zynga cites its cloud-based technology infrastructure as one of its core strengths. Zynga uses Amazon’s EC2 platform as a testing stage for its applications before migrating them to its own cloud-based infrastructure. That infrastructure includes provisioning tools that, according to the S-1 filing, “have enabled us to add up to 1,000 servers in a 24-hour period in response to game demand.”

10. Notable challenges Zynga foresees include its dependence on Facebook, the small percentage of players who account for most of its revenue, the challenge of developing quality games for mobile and other non-PC platforms, and the difficulty of recruiting and retaining world-class talent.

LibCloud and DeltaCloud Lead the Charge Toward Cloud Interoperability

Apache LibCloud’s May 19 graduation from the Apache Incubator signifies that the race toward cloud interoperability is firmly underway. LibCloud provides an open source Python library of back-end drivers that enables developers to connect to the APIs of more than 20 cloud computing platforms, including Amazon EC2, Eucalyptus, GoGrid, IBM Cloud, Linode, Terremark and vCloud. Developers can write code once and then redeploy their applications on other cloud environments, avoiding vendor lock-in and enabling redundant architectures for disaster recovery purposes. LibCloud was originally developed at CloudKick, which was subsequently acquired by Rackspace, and entered the Apache Incubator in November 2009. LibCloud’s graduation from the Apache Incubator as a Top Level Project means that the product will be managed by a Project Management Committee that assumes responsibility for its evolution and subsequent releases. LibCloud is currently available under version 2.0 of the Apache Software License.
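The write-once, redeploy-anywhere idea is easiest to see in code. The sketch below uses the present-day apache-libcloud package layout (module paths have shifted across releases) and placeholder credentials; only the provider constant and credentials are EC2-specific.

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Swapping Provider.EC2 for another supported provider (plus that
# provider's credentials) leaves the rest of the code unchanged.
Driver = get_driver(Provider.EC2)
conn = Driver("ACCESS_KEY_ID", "SECRET_KEY", region="us-east-1")  # placeholders

for node in conn.list_nodes():
    print(node.name, node.state, node.public_ips)
```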

The principal drawback of LibCloud is its exclusive use of Python as the programming language that connects drivers to vendor APIs. Red Hat’s DeltaCloud, in contrast, leverages a REST-based API that offers more flexibility than LibCloud’s Python library for the purpose of migrating software deployments from one cloud infrastructure to another. Like LibCloud, DeltaCloud is being groomed through the Apache Incubator but has a few more steps to travel before graduation and the achievement of top-level status. Nevertheless, open source options are clearly leading the charge toward cloud interoperability, although they all presently require withdrawing a cloud instance to a holding database and then redeploying it through the linking API. In other words, neither LibCloud nor DeltaCloud enables developers to connect Amazon EC2 to Rackspace without an intermediary database as a preliminary step.
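Because DeltaCloud exposes a REST API, any language with an HTTP client can drive it. A minimal sketch follows, assuming a local Deltacloud server started with something like `deltacloudd -i ec2` on its conventional port 3001; the credentials and the exact JSON shape shown are illustrative assumptions rather than guaranteed API details.

```python
import requests

BASE = "http://localhost:3001/api"
AUTH = ("ACCESS_KEY_ID", "SECRET_KEY")  # back-end provider credentials

# The same request works regardless of which back-end driver the
# Deltacloud server happens to be running.
resp = requests.get(BASE + "/instances", auth=AUTH,
                    headers={"Accept": "application/json"})
resp.raise_for_status()
for instance in resp.json().get("instances", []):
    print(instance.get("id"), instance.get("state"))
```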

Red Hat Enters IaaS and PaaS Space with CloudForms and OpenShift

At its May 2011 summit in Boston, Red Hat, the world’s leading provider of open source solutions, announced the launch of CloudForms and OpenShift, two products that represent the company’s boldest entrance into the cloud computing space so far. CloudForms is an IaaS offering that enables enterprises to create and manage private or hybrid cloud computing environments. CloudForms provides customers with Application Lifecycle Management (ALM) functionality that enables management of an application deployed over a constellation of physical, virtualized and cloud-based environments. Whereas VMware’s vCloud enables customers to manage virtual machines, Red Hat’s CloudForms delivers a more granular form of management functionality that allows users to manage the applications themselves. Moreover, CloudForms offers a resource management interface that confronts the industry problem known as virtual sprawl, in which IT administrators must manage ever-growing numbers of servers, hypervisors, virtual machines and clusters. Red Hat’s IaaS product also gives customers the ability to create integrated, hybrid cloud environments that leverage a combination of physical servers, virtual servers and public clouds such as Amazon EC2.

OpenShift is Red Hat’s PaaS product, which enables open source developers to build applications for the cloud within a specified range of development frameworks. OpenShift supports Java, Python, PHP and Ruby applications built on frameworks such as Spring, Seam, Weld, CDI, Rails, Rack, Symfony, Zend Framework, Twisted, Django and Java EE. In supporting Java, Python, PHP and Ruby, OpenShift offers a more flexible development environment than Amazon’s Elastic Beanstalk, Microsoft Azure or Google’s App Engine. For storage, OpenShift features SQL and NoSQL options in addition to a distributed file system. Red Hat claims OpenShift delivers greater portability than other PaaS products because customers will be able to migrate their deployments to another cloud computing vendor using the DeltaCloud interoperability API. The problem with this marketing claim is that DeltaCloud is by no means the most widely accepted cloud interoperability API in the industry. Red Hat submitted the DeltaCloud API to the Distributed Management Task Force (DMTF) in August 2010, but the Red Hat API faces stiff competition from open source versions of Amazon’s EC2 APIs as well as APIs from the OpenStack project.

In summary, Red Hat’s entrance into the IaaS and PaaS space promises to significantly change the cloud computing landscape. CloudForms signals genuine innovation in the IaaS space because of its Application Lifecycle Management capabilities and hybrid infrastructure flexibility. OpenShift, meanwhile, presents direct competition to Google App Engine, Microsoft Azure and Amazon’s Elastic Beanstalk because of the breadth of its deployment platform and its claims of increased portability. What makes OpenShift so intriguing is that it constitutes Red Hat’s most aggressive attempt so far to establish DeltaCloud as the standard API for the cloud computing industry.

Why Amazon’s Cloud Computing Outage Didn’t Violate Its SLA

Amazon’s cloud computing outage on April 21 and 22 can be interpreted in one of two ways: (1) the outage constitutes a reflection on Amazon’s EC2 platform and its processes for disaster recovery, or (2) the outage represents a commentary on the state of the cloud computing industry as a whole. The outage began on Thursday and involved problems specific to Amazon’s Northern Virginia data center. Companies affected by the outage include HootSuite, Foursquare, Reddit, Quora and other start-ups such as BigDoor, Mass Relevance and Spanning Cloud Apps. HootSuite, a dashboard that allows users to manage content on websites such as Facebook, LinkedIn, Twitter and WordPress, experienced a temporary crash on Thursday that affected a large number of sites. The social news website Reddit was unavailable until noon on Thursday, April 21. BigDoor, a 20-person start-up that provides online game and rewards applications, had restored most of its services by Friday evening even though its corporate website remained down. Netflix and Recovery.gov, meanwhile, escaped the Amazon outage either unscathed or with minimal interruption.

Amazon’s EC2 platform currently has five regions: US East (Northern Virginia), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), and Asia Pacific (Tokyo). Each region is composed of multiple “Availability Zones.” Customers who launch server instances in different Availability Zones can, according to Amazon Web Services’ website, “protect [their] applications from failure of a single location.” The outage underscores that EC2 customers can no longer depend on multiple Availability Zones within a single region as insurance against system downtime. Customers will need to ensure that their architectures provide for duplicate copies of server instances in multiple regions.
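A hedged sketch of what multi-region redundancy looks like in practice follows, using the boto2-era Python API; the AMI IDs are placeholders, and because AMIs are region-specific, each region needs its own image.

```python
import boto.ec2  # boto2-era API

# AMIs are region-specific, so each region requires its own image ID;
# the IDs below are placeholders.
DEPLOYMENTS = {
    "us-east-1": "ami-11111111",
    "us-west-1": "ami-22222222",
}

for region, ami in DEPLOYMENTS.items():
    conn = boto.ec2.connect_to_region(region)
    # A failure confined to one region leaves the other copy running.
    conn.run_instances(ami, instance_type="m1.small")
```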

Amazon’s SLA commits to 99.95% system uptime for customers who have deployments in more than one Availability Zone within a specific region. However, the SLA covers only the ability to connect to and provision instances. On Thursday and Friday, Amazon’s US East customers could still connect to and provision instances, but the outage adversely affected their deployments because of problems with Amazon’s Elastic Block Storage (EBS) and Relational Database Service (RDS) platforms. EBS provides block-level storage volumes that attach to EC2 instances, while RDS is Amazon’s managed relational database service for the EC2 platform. Because Amazon’s problems were confined to EBS and RDS in the US East region, the SLA was not violated for customers affected by the outage. The immediate consequence is that Amazon EC2 customers who want to approach 100% system uptime will need to deploy copies of the same server instance in multiple regions, on the assumption that the wildly unlikely scenario of multiple Amazon regions experiencing outages at the same time never transpires.
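To put those SLA percentages in perspective, a quick arithmetic sketch converts annual uptime commitments into allowed downtime per year.

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for sla in (0.9995, 0.995):
    allowed_downtime = HOURS_PER_YEAR * (1 - sla)
    print("%.2f%% uptime permits %.1f hours of downtime per year"
          % (sla * 100, allowed_downtime))
# 99.95% -> roughly 4.4 hours/year; 99.5% -> roughly 43.8 hours/year
```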

Anyone familiar with the cloud computing industry knows full well that Amazon, Rackspace, Microsoft and Google have all experienced glitches resulting in system downtime in the last three years. The multiple instances of system downtime across vendors point to the immaturity of the technological architecture and processes for delivering cloud computing services. Until the architecture and processes for cloud computing operational management improve, customers will need to weigh the costs of redundant data architectures that insure them against system downtime against the risk and costs of actual downtime.

For a non-technical summary of the technical issues specific to the outage, see Cloud Computing Today’s “Understanding Amazon Web Services’s 2011 Outage.”

Amazon Web Services: Elastic Beanstalk and CloudFormation Explained

Amazon Web Services recently released Elastic Beanstalk and CloudFormation, two applications that automate the process of provisioning hardware resources and deploying applications on AWS’s flexible, inexpensive development environment. Introduced on January 19, Elastic Beanstalk automates the deployment of an application on Amazon’s virtual servers once it has been written. Currently in beta for Java applications only, Elastic Beanstalk manages the specifics of provisioning servers, load balancing and auto-scaling in response to unexpected spikes in traffic. Elastic Beanstalk’s auto-scaling functionality scales horizontally by creating clones of the original server instance, rather than vertically by provisioning a larger server with correspondingly greater memory. Developers retain the flexibility to override Elastic Beanstalk’s auto-scaling defaults, in which case the application conforms to the scaling parameters the user specifies.

Like Elastic Beanstalk, CloudFormation automates application deployment, but it fulfills a more ambitious function. Launched on February 25, CloudFormation uses templates to automate the creation of an integrated hardware infrastructure for an application containing multiple components. For example, CloudFormation takes the machine images, storage, security and messaging components of an application, understands their dependencies, and launches them in the right order using the template. In other words, instead of requiring a developer to write discrete scripts for each individual Amazon Machine Image (AMI), CloudFormation gathers the parameters specified by a developer and creates a single template for the requisite “stack” of server instances that collectively specifies elastic IP addresses, message queues, load balancing and auto-scaling. CloudFormation operates through JSON templates that capture an application’s configuration parameters.
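A minimal sketch of what such a template looks like follows, built as a Python dictionary and submitted through the boto2-era CloudFormation API; the stack deliberately contains a single EC2 instance, and the AMI ID is a placeholder.

```python
import json
import boto.cloudformation  # boto2-era API

# A deliberately tiny template: one EC2 instance. Real stacks add load
# balancers, auto-scaling groups and queues, and CloudFormation resolves
# the dependencies among them before launching anything.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal one-instance stack (illustrative)",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-00000000",  # placeholder AMI ID
                "InstanceType": "m1.small"
            }
        }
    }
}

conn = boto.cloudformation.connect_to_region("us-east-1")
conn.create_stack("demo-stack", template_body=json.dumps(template))
```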

In his AWS blog post about CloudFormation, Jeff Barr uses the metaphor of cooking and baking to describe the application’s innovation and importance. While cooking allows for individual discretion and ad hoc changes to a recipe, baking requires precise combinations of ingredients so that cookies of the same taste and texture emerge from the oven time and time again. In the same vein, CloudFormation enables developers to become bakers by automating the creation of complex systems. Moreover, developers may wish to create the same development environment a number of times; instead of re-running the same set of scripts again and again, they can now use CloudFormation to automate and scale their development needs. Amazon released CloudFormation with templates for a number of open source applications such as Drupal, WordPress, Gollum and Joomla.

Amazon’s Jeff Barr put it as follows:

First, AWS is programmable, so it should be possible to build even complex systems (sometimes called “stacks”) using repeatable processes. Second, the dynamic nature of AWS makes people want to create multiple precise copies of their operating environment. This could be to create extra stacks for development and testing, or to replicate them across multiple AWS Regions….Today, all of you cooks get to become bakers!

Together with Elastic Beanstalk, CloudFormation goes a long way toward streamlining the process of deploying applications on Amazon’s EC2 environment. Despite Amazon’s lack of managed services, the first-quarter 2011 release of these two applications should render AWS more attractive to small and enterprise customers alike.