A day after Google announced price cuts and enhancements to its cloud computing products, Amazon Web Services responded with price cuts of its own at its AWS Summit in San Francisco. Price cuts are nothing new for Amazon Web Services, and the company was quick to note that the April 1, 2014 reductions represent the 42nd time the Seattle tech behemoth has slashed prices since its 2006 inception. Amazon EC2 prices were cut by 10-40% for Linux/Unix virtual machines and 7-35% for Windows-based machines. Similarly, AWS announced deep price cuts, on the order of 10-40%, for its reserved instances offering. Prices for Amazon S3 were reduced by 51% on average, with a hefty discount of 65% for the 0-1 TB range, while Amazon RDS prices fell by 28% on average.

AWS also announced the general availability of Amazon WorkSpaces, a fully managed desktop-as-a-service offering that allows customers to configure and deliver desktop environments for their employees from a centrally hosted location on the AWS cloud. Amazon WorkSpaces supports the synchronized, bundled delivery of designated software applications to end users on multiple devices.

In addition, AWS elaborated on new "peering connection" functionality between virtual private clouds (VPCs) in the same AWS Region, which supports use cases such as separate virtual private clouds for different business units within a large organization. As an example, VPC peering connections allow EC2 instances in a VPC for the Finance department to access data in a VPC dedicated to Operations, but not necessarily vice versa, depending on the business rules the customer establishes for "peering" or data sharing.
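The one-way access pattern described above depends on how each VPC's route tables (and security groups) are configured, since the peering connection itself is symmetric. As a deliberately simplified, illustrative sketch (toy names and CIDR blocks, not real AWS resources or APIs), the asymmetry can be modeled as a routing check:

```python
# Toy model of one-way VPC peering: the peering connection exists between
# two VPCs, but traffic flows only in directions for which the source
# VPC's route table contains a route to the peer's CIDR block.
# All names and CIDRs here are illustrative, not real AWS resources.

PEERING = {("finance-vpc", "ops-vpc")}  # the connection itself is symmetric

ROUTE_TABLES = {
    # Finance's route table points at Operations' CIDR via the peering link...
    "finance-vpc": {"10.1.0.0/16": "pcx-finance-ops"},
    # ...but Operations has no return route to Finance's CIDR.
    "ops-vpc": {},
}

CIDRS = {"finance-vpc": "10.0.0.0/16", "ops-vpc": "10.1.0.0/16"}

def can_reach(src: str, dst: str) -> bool:
    """Traffic flows only if a peering exists AND src routes to dst's CIDR."""
    peered = (src, dst) in PEERING or (dst, src) in PEERING
    routed = CIDRS[dst] in ROUTE_TABLES[src]
    return peered and routed

print(can_reach("finance-vpc", "ops-vpc"))  # True
print(can_reach("ops-vpc", "finance-vpc"))  # False
```

In a real deployment both directions typically need routes for request/response traffic to work; the per-direction business rules the article mentions are ultimately enforced through the combination of route tables and security groups.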
Finally, AWS took note of its recent achievement of Department of Defense (DoD) provisional authorization, which certifies it as compliant with DoD security protocols over and above those covered by the FedRAMP certification AWS has already earned. Overall, today's announcements from the AWS Summit failed to match the depth and variety of cloud-specific product enhancements revealed by Google, but they confirmed Amazon's enduring ability to cut prices and innovate, as well as its growing credibility amongst U.S. government customers.
AWS Cuts Prices For 42nd Time, Announces VPC Peering Capability And General Availability Of Amazon WorkSpaces
Customers using Amazon CloudFront can now benefit from Xplenty to parse and process their log files, all within the Xplenty design environment
Tel Aviv, Israel – March 4, 2014 – Xplenty, http://www.xplenty.com, provider of the innovative Hadoop-as-a-service platform, Amazon Web Services (AWS) Technology Partner in the AWS Partner Network, and seller on the AWS Marketplace, now offers its big data processing technology directly to customers in all AWS Regions. Xplenty is now available to customers from the AWS Regions in South America (São Paulo), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo). This adds to the existing Xplenty locations of U.S. East (N. Virginia), U.S. West (N. California and Oregon) and EU (Ireland).
Xplenty technology provides Hadoop processing on the cloud via a coding-free design environment, ensuring businesses can quickly and easily benefit from the opportunities offered by big data without having to invest in hardware, software or related personnel.
Meanwhile, users of the Amazon CloudFront content delivery network can now use Xplenty to analyze their log files. New predefined templates let users parse and process Amazon CloudFront logs easily. The processing engine transforms structured and semi-structured big data and easily scales to petabytes as data requirements grow, allowing companies to better understand their customers.
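CloudFront access logs follow a W3C-style format in which a "#Fields:" header names the tab-separated columns of each record. As a rough illustration of the kind of parsing a log-processing pipeline performs (the sample lines below are fabricated, and the field names are only a subset of CloudFront's), a minimal parser might look like:

```python
import io

# A minimal CloudFront-style access-log parser. The "#Fields:" header names
# the tab-separated columns; data lines follow. Sample data is fabricated.
SAMPLE_LOG = """\
#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs-uri-stem sc-status
2014-03-04\t12:00:01\tFRA2\t2390\t192.0.2.10\tGET\t/index.html\t200
2014-03-04\t12:00:02\tFRA2\t512\t192.0.2.11\tGET\t/logo.png\t404
"""

def parse_cloudfront_log(text: str):
    """Turn each data line into a dict keyed by the #Fields header names."""
    fields, records = [], []
    for line in io.StringIO(text):
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line[len("#Fields:"):].split()
        elif line and not line.startswith("#"):
            records.append(dict(zip(fields, line.split("\t"))))
    return records

records = parse_cloudfront_log(SAMPLE_LOG)
print(len(records))             # 2
print(records[1]["sc-status"])  # 404
```

A service like Xplenty wraps this kind of parsing in predefined templates so users never write the extraction code themselves.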
One company already using Xplenty to gain better insight into its customers is WalkMe. “We have customers from a wide range of industries and verticals – including banks, financial institutions, retail services, tourism, leading software vendors and more – all of which use WalkMe to simplify their customers’ online experience. By using Xplenty to break down our log files, we’re able to gain valuable insights into our customer needs and preferences,” says Nir Nahum, VP of R&D at WalkMe. “With the easy-to-use GUI, we just designate the file location for processing, and it automatically sets up the template and runs.”
Xplenty is available within the global AWS Marketplace to customers seeking to integrate a Hadoop-as-a-Service platform to solve their big data processing challenges.
“Big data is shaping the way companies of all sizes develop new products and identify new opportunities to increase their efficiency,” said Brian Matsubara, Head of Global Technology Alliances, Amazon Web Services. “By bringing their Big Data analysis tools to the AWS cloud, Xplenty is giving customers an innovative approach to solve their business challenges. Xplenty leverages the AWS global platform to provide scalable Big Data solutions to customers around the world.”
“As a cloud-based service provider, we offer organizations of any size the opportunity to learn more about their customers, further personalize their services, and increase their bottom lines, all by enabling their big data analyses,” says Yaniv Mor, co-founder and CEO of Xplenty. “Why shouldn’t everyone gain by using the data they are paying to store anyway?”
Xplenty was founded by data professionals for data professionals to deliver on the promise of big data. Xplenty’s true big data solution provides ROI almost immediately by uncovering valuable business insights, translating into higher revenues and increased competitiveness. Xplenty delivers a coding-free, cloud-based Hadoop-as-a-Service platform that transforms structured, unstructured, and semi-structured data into usable information in the AWS, Rackspace and SoftLayer environments. Our goal is to make Hadoop accessible and cost-effective for everybody. http://www.xplenty.com
Just days after Azure’s VP, Scott Guthrie, claimed that the Azure platform differentiated itself from Amazon Web Services by virtue of its “coverage” in China, Amazon Web Services revealed details of the forthcoming China region for the Amazon Web Services platform. Available in limited preview in early 2014, the China region will be realized through partnerships between AWS and Chinese IT providers such as ChinaNetCenter and SINNET, which will support delivery of the required infrastructure and bandwidth. Today, AWS China signed a Memorandum of Understanding with the municipal government of Beijing and the Government of Ningxia Hui Nationality Autonomous Region featuring a shared commitment to deliver “high-performing, reliable, and economical AWS cloud computing services” that use “facilities and resources” in Beijing and Western China. According to the Amazon Web Services press release, the Government of Ningxia Hui Nationality Autonomous Region will use the AWS China platform to host government-related applications.
The China region will be AWS’s tenth region worldwide and its fourth in Asia-Pacific. All told, the decision by Amazon Web Services to deploy an AWS Region in China represents an astute strategic move to gain early market traction in a geography where all major U.S. and European IaaS players, with the exception of Windows Azure and Joyent, have little or no market penetration, due largely to the morass of government relations specific to doing business in China. The AWS strategy of partnering with local Chinese vendors and governments enhances the credibility of its offering and is likely to convert a subset of the thousands of current Chinese customers that use AWS regions in the U.S., Europe and elsewhere into users of the AWS China region. Accurate estimates of the market for cloud computing in China are tough to come by, but the AWS China region clearly has the potential to contribute significantly to the platform’s quarterly revenue figures, assuming that its operations, in collaboration with local partners, run smoothly and without notable disruption.
Amazon Web Services (AWS) recently announced support for Impala, the open source technology developed by Cloudera for querying data in the Hadoop Distributed File System (HDFS) or HBase using SQL-like syntax, as elaborated below:
Impala raises the bar for query performance while retaining a familiar user experience. With Impala, you can query data, whether stored in HDFS or Apache HBase – including SELECT, JOIN, and aggregate functions – in real time. Furthermore, it uses the same metadata, SQL syntax (Hive SQL), ODBC driver and user interface (Hue Beeswax) as Apache Hive, providing a familiar and unified platform for batch-oriented or real-time queries. (For that reason, Hive users can utilize Impala with little setup overhead.)
Amazon Web Services introduced Impala as part of Amazon Elastic MapReduce. Users will need to run clusters on Hadoop 2.x in order to take advantage of the offering. Impala users can run queries on data sets in real time and enjoy low latencies enabled by the platform’s distributed query engine, which allows Impala to boast speed and performance advantages over Apache Hive. The availability of Impala on the Amazon Web Services platform comes just weeks after the release of Amazon Kinesis, the AWS platform for collecting and storing real-time big data streams, and further underscores the seriousness with which AWS plans to deploy products designed for the big data space.
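The Hive-SQL query shapes mentioned above (SELECT, JOIN and aggregate functions) are ordinary SQL. The following sketch uses Python's built-in sqlite3 purely as a stand-in engine to show the shape of such a query; Impala itself would run the equivalent statement against tables backed by HDFS or HBase, and the table names and data below are invented for illustration:

```python
import sqlite3

# Hypothetical tables standing in for data that would live in HDFS/HBase.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER, region TEXT);
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'EU'), (2, 'US');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 75.0), (12, 2, 40.0);
""")

# A SELECT with a JOIN and an aggregate: the same query shape Impala
# accepts in Hive SQL syntax.
rows = conn.execute("""
    SELECT c.region, SUM(o.amount) AS total
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    GROUP BY c.region
    ORDER BY c.region
""").fetchall()

print(rows)  # [('EU', 100.0), ('US', 40.0)]
```

Impala's appeal is that queries like this return interactively against Hadoop-scale data, rather than being compiled into batch MapReduce jobs as Hive historically did.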
Given The General Availability Of Google Compute Engine, Is Amazon Web Services Destined To Meet Its Match?
On Monday, Google announced the general availability of Google Compute Engine, the Infrastructure as a Service public cloud platform that Google first announced in June 2012. Unlike many of Google’s product offerings, which are not targeted toward enterprise customers, Google Compute Engine comes with 24/7 customer support and a 99.95% SLA. Moreover, the platform boasts encryption of data at rest in an effort to respond to customer concerns about data security, particularly given Google’s vaunted reputation for mining every piece of data touched by its hardware and range of software applications. Monday’s general availability release features a 10% price reduction on standard server instances and a 60% reduction in per-gigabyte storage pricing for the persistent disk service.
At the level of functionality, the GA release of Google Compute Engine claims the following three notable features:
Expanded Support for Operating Systems
Whereas Google Compute Engine supported the Debian and CentOS Linux distributions in preview mode, the GA version supports a range of Linux distributions including SELinux-enabled distributions, CoreOS, SUSE and Red Hat Enterprise Linux (limited preview). This release also features support for Docker containers, enabling users to spin up containers instead of virtual machines to accelerate automated testing, continuous integration and deployment.
Transparent, automated maintenance and live migration
Google Compute Engine is now the beneficiary of ongoing, transparent maintenance routines and processes in order to ensure the effective functioning of the GCE infrastructure. Transparent maintenance operates by working on “only a small piece of the infrastructure in a given zone” such that “Google Compute Engine automatically moves your instances elsewhere in the zone, out of the way of the maintenance work” with the help of live migration technology. Customer instances continue to operate as usual while maintenance is performed.
Three New 16 Core Instances
In order to serve the needs of customers that require greater computational power, Google Compute Engine now boasts three 16 core instances for the standard, high memory and high CPU instance types. Use cases for the computing power delivered by these instances include advanced simulations and NoSQL platforms that require high degrees of scalability and performance.
Gartner analyst Lydia Leong reflected on a comparison between GCE and Amazon Web Services in a blog post and concluded:
GCE still lags AWS tremendously in terms of breadth and depth of feature set, of course, but it also has aspects that are immediately more attractive for some workloads. However, it’s now at the point where it’s a viable alternative to AWS for organizations who are looking to do cloud-native applications, whether they’re start-ups or long-established companies. I think the GA of GCE is a demarcation of market eras — we’re now moving into a second phase of this market, and things only get more interesting from here onwards.
Leong sees the general availability of Google Compute Engine as inaugurating the “second phase” of the IaaS market, in which Google and AWS stand poised to out-innovate each other and thereby push each other to new technological heights. The challenge for Google, however, as Leong rightly suggests elsewhere in her blog post, is that it will need to earn the trust of enterprise customers. The industry will not expect Google to deliver the “fanatical support” that became the hallmark and differentiator of Rackspace, for example, but it will expect degrees of white-glove support and professional services that are not yet familiar parts of the Google apparatus.
Moreover, as part of the project of gaining the support of the enterprise, Google will need to deliver more explicit guarantees that data hosted within its IaaS platform is safe from the prying eyes of its repertoire of tools for analyzing structured and unstructured data stored in every conceivable format. Finally, Google will ultimately need an outward-facing CTO comparable to Amazon’s Werner Vogels who can evangelize the platform and sell customers on a roadmap that ultimately achieves feature parity with, if not superiority over, Amazon Web Services. Technology and innovation have never been Google’s problem. Capturing the confidence of the enterprise, however, has been a different story entirely, although as Leong notes, Monday’s announcement may signal a fork in the road for the IaaS space and the Mountain View-based search and technology behemoth. Current GCE customers include Snapchat, Evite and Wix.
Amazon Web Services Continues To Increase IaaS/PaaS Market Share According To Synergy Research Group
A recent article by the Synergy Research Group (Synergy) claims that Amazon Web Services continues to dominate the IaaS and PaaS space in terms of revenue. According to Synergy, Amazon Web Services increased its quarterly revenue by 55% to over $700M in Q3 of 2013, whereas the aggregate of revenue for Salesforce, IBM, Windows Azure and Google was less than $400M for the same time period. Worldwide, total IaaS and PaaS revenues exceeded $2.5 billion for the quarter, with IaaS accounting for 64% of cloud revenues, a surprisingly small proportion given the limited penetration of platform as a service within the enterprise. Synergy Research’s John Dinsdale remarked on the company’s findings as follows:
We’ve been analyzing the IaaS/PaaS markets for quite a few quarters now and creating these leadership metrics, and the relative positioning of the leaders really hasn’t changed much. While Amazon dwarfs all competition, the race is on to see if any of the big four followers can distance themselves from their peers. The good news for these companies and for the long tail of operators with relatively small cloud infrastructure service operations, is that IaaS/PaaS will be growing strongly long into the future, providing plenty of opportunity for robust revenue growth.
Here, Dinsdale remarks that the “race is on to see if” Salesforce, IBM, Microsoft and Google can decisively secure second place in the battle for IaaS/PaaS market share. Strikingly, Microsoft, Google and IBM have revenues that are very close to one another, even though one might reasonably expect Microsoft’s Azure platform to edge out its competition, given that Azure entered the market earlier than IBM’s cloud offerings and Google Compute Engine (GCE). That said, IBM’s sizeable IaaS revenue derives largely from its acquisition of SoftLayer, which itself had a rich and venerable history predating the IBM deal.
Synergy’s chart illustrating Q3 IaaS and PaaS revenues is given below:
Notable omissions from the findings include Rackspace, HP, Oracle, Pivotal One and Red Hat, the middle three of which (HP, Oracle and Pivotal One) are still relatively nascent, and hence justifiably excluded from the present calculation. As Dinsdale notes above, however, “the good news for these companies” and for the remainder of the space is that revenues are set to increase significantly in the near term. Going forward, one of the key questions for subsequent IaaS market share analyses will be whether OpenStack’s momentum and gradual maturation propel disproportionate growth amongst OpenStack-based cloud platforms from vendors such as HP, IBM, Oracle, Rackspace and Red Hat.
Amazon Web Services (AWS) today announced the release of Amazon Kinesis, a revolutionary service for storing real-time data feeds that allows developers to write applications that respond to streaming data. Kinesis allows developers to store data from hundreds of sources and subsequently write applications that respond to real-time feeds such as streaming news, financial data, social media activity, and log and sensor data. Kinesis integrates with real-time dashboards and business intelligence software, thereby enabling scripted alerts and decision-making protocols that respond to the trajectory of incoming real-time data. Terry Hanold, Vice President of New Business Initiatives, AWS, remarked on the innovation enabled by Amazon Kinesis as follows:
Database and MapReduce technologies are good at handling large volumes of data. But they are fundamentally batch-based, and struggle with enabling real-time decisions on a never-ending–and never fully complete–stream of data. Amazon Kinesis aims to fill this gap, removing many of the cost, effort and expertise barriers customers encounter with streaming data solutions, while providing the performance, durability and scale required for the largest, most advanced implementations.
Kinesis replicates data across Availability Zones within an AWS Region in order to ensure a high degree of availability. In addition, the product offers a managed service for dealing with incoming streams of real-time data that includes load balancing, failover, auto-scaling and orchestration. Moreover, customers can send incoming data to data stores such as Amazon S3, Amazon DynamoDB or Amazon Redshift either in its raw form, or filtered according to business rules in order to reduce the size of the data store. Overall, Kinesis represents a truly disruptive technology that promises to change the way applications respond to continuous, dynamic data feeds. Use cases for the product include applications that leverage meteorological data, military-related sensor-based data, data streams from the emerging internet of things such as automobiles and appliances, in addition to the typical use case of web-related data. Amazon Web Services continues to push the envelope with respect to technological innovation and proves, once again, that it is so much more than an infrastructure as a service vendor that rents commodity hardware for application development and storage. Google and Microsoft look archaic in comparison, and as such, Amazon Web Services continues to consolidate its position as the cloud-based technology platform of choice for application development and integration.
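Kinesis distributes incoming records across the shards of a stream by hashing each record's partition key: per the Kinesis documentation, the MD5 hash of the key is treated as a 128-bit integer, and each shard owns a contiguous range of that key space. A simplified model of the routing, using equal-sized ranges rather than real shard metadata, looks like this:

```python
import hashlib

NUM_SHARDS = 4
HASH_SPACE = 2 ** 128  # Kinesis hashes partition keys into a 128-bit key space

def shard_for(partition_key: str, num_shards: int = NUM_SHARDS) -> int:
    """Map a partition key to a shard: take the MD5 of the key as a
    128-bit integer and find the shard whose contiguous hash-key range
    contains it (equal-sized ranges in this simplified model)."""
    hash_key = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    return hash_key // (HASH_SPACE // num_shards)

# Records with the same partition key always land on the same shard,
# which preserves per-key ordering within a stream.
print(shard_for("sensor-42") == shard_for("sensor-42"))  # True
print(0 <= shard_for("sensor-7") < NUM_SHARDS)           # True
```

This deterministic routing is what lets a consumer application read each key's records in order while the stream as a whole scales horizontally by adding shards.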