Google

Google Compute Engine Slashes Prices By 10% For All Instances In All Regions

On Wednesday, October 1, Google slashed prices for its Google Compute Engine platform by 10% for all instances in all regions. The cut marks yet another step in the steady decline of prices in the IaaS space, as evinced by recent price reductions from Amazon Web Services, Microsoft Azure and Google itself. In a blog post announcing the change, Urs Hölzle, Senior Vice President of Technical Infrastructure at Google, noted that, under the status quo, “only 20% of time is spent how it should be — building new products or systems that will be platforms for growth,” and positioned falling IaaS prices as a way to return more of that time to application development.

Google’s price cuts render it increasingly competitive against the likes of Amazon Web Services, Microsoft Azure and the vibrant community of commercial OpenStack vendors. Hölzle went on to note how Snapchat, Workiva and sponsors of the 2014 World Cup each leverage the Google Compute Engine platform to simplify their infrastructure needs. Meanwhile, Google’s Sundar Pichai, SVP of Android, Chrome and Apps, reported at Atmosphere that Google Drive now claims 240 million active users, an increase of 50 million since June. The bottom line is that Google is amplifying its assault on enterprise cloud computing customers by cutting prices and rolling out educational campaigns on the benefits of its cloud platform. Google has the capital and cash position to cut prices further, so Amazon Web Services will need to pay close attention to ensure that Google does not catch it off guard with an aggressive price cut or promotion that wins over a slew of customers and puts a sizeable dent in the AWS share of the IaaS market.

Categories: Google

Google’s Mesa Data Warehouse Takes Real-Time Big Data Management To Another Level

Google recently announced development of Mesa, a data warehousing platform designed to collect and serve data for its internet advertising business. Mesa delivers a distributed data warehouse that can manage petabytes of data while delivering high availability, scalability and fault tolerance. Mesa is designed to update millions of rows per second and process billions of queries that retrieve trillions of rows per day in support of Google’s gargantuan data needs for its flagship advertising business. Google elaborated on its need for a new data warehousing platform by describing its evolving data management requirements as follows:

Google runs an extensive advertising platform across multiple channels that serves billions of advertisements (or ads) every day to users all over the globe. Detailed information associated with each served ad, such as the targeting criteria, number of impressions and clicks, etc. are recorded and processed in real time…Advertisers gain fine-grained insights into their advertising campaign performance by interacting with a sophisticated front-end service that issues online and on-demand queries to the underlying data store…The scale and business critical nature of this data result in unique technical and operational challenges for processing, storing and querying.

Google’s advertising platform depends upon real-time data that records updates about advertising impressions and clicks in the larger context of analytics about current and potential advertising campaigns. As such, the data model must accommodate atomic updates to advertising components that cascade throughout the entire data repository, guarantee consistency and correctness of data across datacenters and over time, support continuous updates, deliver low-latency query performance, scale to petabytes of data, and provide data transformation functionality that accommodates changes to data schemas. Mesa utilizes existing Google infrastructure as follows:

Mesa leverages common Google infrastructure and services, such as Colossus, BigTable and MapReduce. To achieve storage scalability and availability, data is horizontally partitioned and replicated. Updates may be applied at granularity of a single table or across many tables. To achieve consistent and repeatable updates, the underlying data is multi-versioned. To achieve update scalability, data updates are batched, assigned a new version number and periodically incorporated into Mesa. To achieve update consistency across multiple data centers, Mesa uses a distributed synchronization protocol based on Paxos.

While Mesa takes advantage of technologies from Colossus, BigTable, MapReduce and Paxos, it delivers a degree of “atomicity” and consistency lacking in those counterparts. In addition, Mesa features “a novel version management system that batches updates to achieve acceptable latencies and high throughput for updates.” All told, Mesa constitutes a disruptive innovation in the Big Data space, combining atomicity, consistency, high throughput, low latency and scalability on the order of trillions of rows in what amounts to a “petascale data warehouse.” While speculation proliferates about whether Google will offer Mesa through its Google Compute Engine platform or otherwise open-source it, the key point worth noting is that Mesa represents a qualitative shift in the ability of a Big Data platform to process petabytes of data in real-time flux. Whereas the cloud space is accustomed to seeing Amazon Web Services usher in one breathtaking innovation after another, Mesa underscores Google’s continuing leadership in the Big Data space. Expect to hear more details about Mesa at the Conference on Very Large Data Bases (VLDB) next month in Hangzhou, China.
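
To make the batched, multi-versioned update scheme described above concrete, the following is a minimal, single-process Python sketch. It is purely illustrative: the class and key names are hypothetical, and the real system shards this state across Colossus and BigTable and coordinates version commits across datacenters with Paxos.

```python
# Illustrative sketch of Mesa-style versioned update batching (hypothetical names).
class VersionedStore:
    def __init__(self):
        self.committed_version = 0   # highest version visible to queries
        self.batches = []            # list of (version, {key: delta}) update batches

    def apply_batch(self, updates):
        """Atomically apply a batch of row deltas under a new version number."""
        new_version = self.committed_version + 1
        self.batches.append((new_version, dict(updates)))
        # Only once the whole batch is recorded does the version become visible,
        # so queries never observe a partially applied batch.
        self.committed_version = new_version

    def query(self, key, as_of=None):
        """Aggregate all deltas for `key` up to a consistent version."""
        version = self.committed_version if as_of is None else as_of
        return sum(delta.get(key, 0) for v, delta in self.batches if v <= version)


store = VersionedStore()
store.apply_batch({"ad_42_clicks": 3, "ad_42_impressions": 120})
store.apply_batch({"ad_42_clicks": 1})
print(store.query("ad_42_clicks"))           # 4, read at the latest committed version
print(store.query("ad_42_clicks", as_of=1))  # 3, a repeatable read of an older version
```

Queries that pin a version number return repeatable results even while new batches arrive, which loosely mirrors how multi-versioning supports Mesa’s consistent, repeatable reads.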

Categories: Google, Big Data

Google Acquires Skybox Imaging For $500M For Technology Related to Mapping and Internet Connectivity

On Tuesday, Google agreed to pay $500M for Skybox Imaging, a tech startup that delivers high-resolution satellite imagery. The acquisition is intended to consolidate Google’s impressive positioning in the geospatial mapping space by keeping Google Maps “accurate with up-to-date imagery.” Currently, Google licenses data from more than 1,000 sources to keep its maps up to date, including satellite vendors Astrium and DigitalGlobe. The acquisition of Skybox promises to provide near real-time updates to Google Maps in addition to expanded, high-resolution coverage. Google noted that the acquisition could also be used to “improve internet access and disaster relief” by leveraging Skybox’s satellite technology to deliver internet connectivity to parts of the world where it is currently lacking.

Skybox commented on its synergies with Google in a blog post about the acquisition as follows:

Skybox and Google share more than just a zip code. We both believe in making information (especially accurate geospatial information) accessible and useful. And to do this, we’re both willing to tackle problems head on — whether it’s building cars that drive themselves or designing our own satellites from scratch.

Founded in 2009, Skybox innovated in the satellite space by building satellites from “off-the-shelf components” at a lower cost than its competitors. Mountain View-based Skybox has raised $91 million from venture capital firms such as Khosla Ventures and Bessemer Venture Partners and claims approximately 100 employees. In addition to acquiring its satellites, Google stands to gain Skybox’s data processing capabilities for mining and running analytics on massive amounts of satellite data; as reported in Forbes by Ellen Huet, Skybox mines over 1 TB of data daily. Skybox launched its first satellite, SkySat-1, in November 2013.

Categories: Google

Google Announces An Impressive Array Of Cloud Price Cuts And Enhancements

At Google Cloud Platform Live, Google announced a range of enhancements to its Infrastructure as a Service, Platform as a Service and Big Data analytics platforms, starting with price cuts across the platform ranging from 30% to 85%. Prices for Google Compute Engine, its Infrastructure as a Service offering, will be slashed by 32% for all “sizes, regions and classes.” Meanwhile, Google Cloud Storage and Google BigQuery saw price reductions of 68% and 85% respectively, and Google simplified the pricing of its Platform as a Service, Google App Engine, while reducing it by roughly 30%.

In addition to the price cuts, Google unveiled an analogue to Amazon Web Services Reserved Instances, which offer deep discounts on VM pricing in exchange for one- or three-year commitments. Branded “Sustained-Use Discounts,” Google’s program applies further price cuts, on top of the reductions already announced, for customers who use a VM for more than 25% of a given month. Customers who use a VM for an entire month can see additional discounts of up to 30%, resulting in total price cuts of over 50% relative to original prices once today’s other reductions are factored in.

Google is also launching BigQuery Streaming, an enhancement that enables the BigQuery platform to ingest 100,000 rows of data per second and render the data available for real-time analytics in ways comparable to products such as Amazon Kinesis and Treasure Data. Moreover, Google announced a Managed Virtual Machines service that allows users to configure a virtual machine to their own specifications and deploy it to the Google App Engine infrastructure, thereby giving developers more flexibility over the type of managed machine that can take advantage of App Engine’s auto-scaling and management functionality. For developers, Google announced Git integration featuring automated build and unit testing of committed changes as well as aggregated logs of test results. Finally, Google announced the general availability of Red Hat Enterprise Linux and SUSE Linux Enterprise Server for Compute Engine VMs, along with Windows Server 2008 R2 in limited preview.
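
To see how the sustained-use discount compounds with the 32% across-the-board cut, the short Python sketch below models a tiered scheme in which each successive quarter of the month is billed at a lower multiple of the base rate. The per-quartile multipliers are an assumption chosen to reproduce the “up to 30%” full-month figure quoted above, not an official rate card, so treat the output as illustrative arithmetic only.

```python
# Illustrative arithmetic for sustained-use discounts; the quartile multipliers
# are assumptions, not Google's published rate card.
BASE_CUT = 0.32                              # today's across-the-board price reduction
QUARTILE_MULTIPLIERS = [1.0, 0.8, 0.6, 0.4]  # assumed billing rate per quarter of the month

def effective_multiplier(usage_fraction):
    """Blended price multiplier for a VM used for `usage_fraction` of the month."""
    if usage_fraction <= 0:
        return 1.0
    billed, remaining = 0.0, usage_fraction
    for multiplier in QUARTILE_MULTIPLIERS:
        portion = min(remaining, 0.25)
        billed += portion * multiplier
        remaining -= portion
        if remaining <= 0:
            break
    return billed / usage_fraction           # average rate actually paid per used hour

full_month = effective_multiplier(1.0)        # 0.70, i.e. a 30% sustained-use discount
total_cut = 1 - (1 - BASE_CUT) * full_month   # ~0.52, i.e. just over 50% off old prices
print(f"sustained-use discount: {1 - full_month:.0%}, combined cut: {total_cut:.0%}")
```

Under these assumed tiers, a VM used for a full month lands at roughly 52% below its pre-announcement price, consistent with the “over 50%” figure above.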

All told, today’s price cuts and new functionality represent much more than a price war with Amazon Web Services. Just a day before the AWS Summit in San Francisco, Google confirmed the seriousness of its intent to increase traction for its developer-focused, cloud-based products. The breadth of today’s enhancements to Google Compute Engine, Google App Engine and BigQuery, together with the introduction of the Managed Virtual Machines service, indicates that Google is systematically preparing to serve the cloud computing needs of enterprise customers. Despite all the media hype over the last two years about companies gearing up to “take on Amazon,” no other cloud vendor has come close to the depth of IaaS features and functionality possessed by Amazon Web Services, with the exception of Google as it revealed itself today. All this means we now have a two-horse race in the Infrastructure as a Service space, at least until the commercial OpenStack community convincingly demonstrates the value of OpenStack-based cloud interoperability in conjunction with rich features and competitive pricing.

Categories: Google

Google’s Cloud Storage Price Cut Indicates Subtle Move To Revamp Its Market Perception

On Thursday, Google slashed prices for its Google Drive cloud storage service, lowering the price of 100GB of storage from $4.99/month to $1.99/month and 1TB from $49.99/month to $9.99/month, while 10TB now costs $99.99/month. In comparison, Dropbox charges $10/month for 100GB and Microsoft OneDrive charges $50/year, or roughly $4.17/month. Google’s decision to cut Google Drive prices is likely to prompt Dropbox and other competitors to respond with similar cuts in order to stay competitive. More importantly, however, the price cut sends a subtle signal that Google is gearing up to go after business customers, both for cloud storage and for its cloud offerings more generally via the Google Compute Engine platform. By increasing its market share in the cloud storage space, Google underscores the reliability and cost-effectiveness of its cloud-based storage offering and continues to demonstrate competency in verticals other than keyword search. Thursday’s aggressive price cut positions Google as a leader in the cloud storage space and stands to continue transforming Google’s market perception into that of a leader in cloud-based infrastructure more generally.
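
For a like-for-like view of the plans mentioned above, the small Python snippet below simply restates the quoted prices as a monthly cost per 100GB; no figures beyond those already cited in this post are assumed.

```python
# Normalize the storage prices quoted above to dollars per month per 100GB.
plans = {
    "Google Drive, 100GB at $1.99/month": (1.99, 100),
    "Google Drive, 1TB at $9.99/month": (9.99, 1000),
    "Dropbox, 100GB at $10/month": (10.00, 100),
    "Microsoft OneDrive, 100GB at $50/year": (50.00 / 12, 100),
}

for plan, (monthly_cost, gigabytes) in plans.items():
    per_100gb = monthly_cost / gigabytes * 100
    print(f"{plan}: ${per_100gb:.2f} per 100GB per month")
```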

Categories: Dropbox, Google

Introducing Google Compute Engine in General Availability Mode

Categories: Google

Given The General Availability Of Google Compute Engine, Is Amazon Web Services Destined To Meet Its Match?

On Monday, Google announced the general availability of Google Compute Engine, the Infrastructure as a Service public cloud platform that Google first announced in June 2012. Unlike many of Google’s product offerings, which are not targeted toward enterprise customers, Google Compute Engine comes with 24/7 customer support and a 99.95% SLA. Moreover, the platform boasts encryption of data at rest in an effort to address customer concerns about data security, particularly given Google’s vaunted reputation for mining every piece of data touched by its hardware and software applications. Monday’s general availability release features a 10% price reduction on standard instances and a 60% reduction in per-gigabyte pricing for its persistent disk storage service.

At the level of functionality, the GA release of Google Compute Engine claims the following three notable features:

Expanded Support for Operating Systems

Whereas Google Compute Engine supported the Linux distributions Debian and CentOS in preview mode, the GA version supports any out-of-the-box Linux distribution, including SELinux-enabled distributions and CoreOS, as well as SUSE and Red Hat Enterprise Linux in limited preview. This release also features support for Docker, enabling users to spin up containers instead of full virtual machines to accelerate automated testing, continuous integration and deployment.
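
As a rough illustration of the container-based testing workflow mentioned above, the sketch below uses the Docker SDK for Python to run a throwaway container for a single test command. The image and command are placeholders, and this shows generic Docker usage on any Docker-capable host rather than anything specific to Google Compute Engine.

```python
# Illustrative only: run a short-lived container for an automated test step.
# The image and command are placeholders; a real CI job would mount source code
# and run its actual test suite.
import docker

client = docker.from_env()            # connect to the local Docker daemon
output = client.containers.run(
    image="python:3.12-slim",         # placeholder base image
    command=["python", "-c", "print('tests would run here')"],
    remove=True,                      # discard the container once it exits
)
print(output.decode().strip())
```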

Transparent, Automated Maintenance and Live Migration

Google Compute Engine now benefits from ongoing, transparent maintenance routines designed to keep the GCE infrastructure functioning effectively. Transparent maintenance operates on “only a small piece of the infrastructure in a given zone,” such that “Google Compute Engine automatically moves your instances elsewhere in the zone, out of the way of the maintenance work” with the help of live migration technology. Customer instances continue to operate as usual while maintenance is performed.

Three New 16-Core Instances

To serve the needs of customers that require greater computational power, Google Compute Engine now offers 16-core instances in each of its standard, high-memory and high-CPU instance families. Use cases for the computing power delivered by these instances include advanced simulations and NoSQL platforms that require high degrees of scalability and performance.
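
As a hedged sketch of how one of these 16-core machine types might be requested, the snippet below uses the google-cloud-compute Python client, which postdates this announcement; the project, zone and image values are placeholders, and authentication is assumed to already be configured.

```python
# Illustrative sketch: create a VM on one of the 16-core machine types
# (n1-standard-16 here; n1-highmem-16 and n1-highcpu-16 are the other two).
# PROJECT_ID, ZONE and the boot image are placeholders.
from google.cloud import compute_v1

PROJECT_ID = "my-project"
ZONE = "us-central1-a"

instance = compute_v1.Instance()
instance.name = "simulation-node"
instance.machine_type = f"zones/{ZONE}/machineTypes/n1-standard-16"

boot_disk = compute_v1.AttachedDisk()
boot_disk.boot = True
boot_disk.auto_delete = True
boot_disk.initialize_params = compute_v1.AttachedDiskInitializeParams(
    source_image="projects/debian-cloud/global/images/family/debian-12",
    disk_size_gb=10,
)
instance.disks = [boot_disk]

nic = compute_v1.NetworkInterface()
nic.network = "global/networks/default"
instance.network_interfaces = [nic]

operation = compute_v1.InstancesClient().insert(
    project=PROJECT_ID, zone=ZONE, instance_resource=instance
)
operation.result()  # block until the instance has been created
```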

Gartner analyst Lydia Leong reflected on a comparison between GCE and Amazon Web Services in a blog post and concluded:

GCE still lags AWS tremendously in terms of breadth and depth of feature set, of course, but it also has aspects that are immediately more attractive for some workloads. However, it’s now at the point where it’s a viable alternative to AWS for organizations who are looking to do cloud-native applications, whether they’re start-ups or long-established companies. I think the GA of GCE is a demarcation of market eras — we’re now moving into a second phase of this market, and things only get more interesting from here onwards.

Leong sees the general availability of Google Compute Engine as marking a “second phase” of the IaaS market, in which Google and AWS stand poised to out-innovate each other and push one another to new technological heights. The challenge for Google, however, as Leong rightly suggests elsewhere in her blog post, is that it will need to earn the trust of enterprise customers. The industry will not expect Google to deliver the “fanatical support” that became the hallmark and differentiator of Rackspace, for example, but it will expect degrees of white-glove support and professional services that are not yet familiar parts of the Google apparatus.

Moreover, as part of winning over the enterprise, Google will need to deliver more explicit guarantees that data hosted within its IaaS platform is safe from the prying eyes of its repertoire of tools for analyzing structured and unstructured data in every conceivable format. Finally, Google will ultimately need an outward-facing CTO comparable to Amazon’s Werner Vogels who can evangelize the platform and sell customers on a roadmap that ultimately achieves feature parity with, if not superiority over, Amazon Web Services. Technology and innovation have never been Google’s problem. Capturing the confidence of the enterprise, however, has been a different story entirely, although as Leong notes, Monday’s announcement may signal a fork in the road for the IaaS space and the Mountain View-based search and technology behemoth. Current GCE customers include Snapchat, Evite and Wix.

Categories: Amazon Web Services, Google
