
Google Announces Preemptible Virtual Machines With Price Reductions Of 70%

On Monday, Google introduced preemptible virtual machines for Google Compute Engine. Preemptible VMs enjoy a 70% discount on standard pricing but may be shut down at any time and have a maximum runtime of 24 hours. The larger vision behind preemptible machines involves Google's ability to reclaim computing capacity depending on the intensity and duration of other workloads within its public cloud. By shutting down preemptible machines and recovering compute capacity, the Google Cloud Platform can maintain a high degree of performance without spinning up additional VMs, saving operational overhead and passing along some of the attendant cost savings to the customer. Given the unpredictability with which preemptible VMs may be shut down, they suit only select use cases, such as large-scale data processing, data analytics, visual effects and simulations that are not time sensitive with respect to their completion. Preemptible machines are ideal for applications architected to tolerate the periodic termination of a few VMs. Meanwhile, customers enjoy fixed pricing in addition to the 70% discount and may decide to allocate a designated percentage of their fleet to preemptible machines once they recognize that their computational processing is only minimally affected by periodic VM shutdowns. Google announced preemptible VMs in conjunction with price cuts of up to 30% for Google Compute Engine VMs. Together, the price cuts and the availability of preemptible machines indicate the intensity of competition in the IaaS space, where prices continue to plummet as major players such as Microsoft and Google intensify their assault on Amazon's leadership position.
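In the Compute Engine API, preemptibility is expressed as a scheduling option on the instance request. The sketch below shows, under stated assumptions, what such a request body looks like; the project, zone, machine type and image values are placeholders, not details from the announcement.

```python
# Sketch of a Compute Engine instance request body with the
# preemptible scheduling option set. All resource names below are
# illustrative placeholders.

def preemptible_instance_body(name, machine_type, source_image):
    """Build a GCE API request body for a preemptible VM."""
    return {
        "name": name,
        "machineType": machine_type,
        "scheduling": {
            # Marks the VM as preemptible: deeply discounted, but it
            # may be shut down at any time and runs at most 24 hours.
            "preemptible": True,
            # Preemptible VMs do not automatically restart.
            "automaticRestart": False,
        },
        "disks": [{
            "boot": True,
            "initializeParams": {"sourceImage": source_image},
        }],
    }

body = preemptible_instance_body(
    "batch-worker-1",
    "zones/us-central1-a/machineTypes/n1-standard-1",
    "projects/debian-cloud/global/images/family/debian-8",
)
```

An application suited to preemptible capacity would submit many such instances and simply re-queue work whenever one is reclaimed.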

Categories: Google, IaaS

Google Releases Google Cloud Bigtable, Managed NoSQL Database Platform For Big Data

On Wednesday, Google announced Google Cloud Bigtable, a NoSQL database that can be accessed via the Apache HBase API. Built on Bigtable, the database that powers Google applications such as Google Search and Gmail, Google Cloud Bigtable delivers a highly scalable service that specializes in ingesting and performing analytics on massive datasets. Google Cloud Bigtable delivers single-digit millisecond latency and twice the performance of other "unmanaged NoSQL alternatives." In a blog post, Google Product Manager Cory O'Connor revealed performance advantages of Google Cloud Bigtable over HBase and Cassandra with respect to both write throughput per dollar and read/write latency in milliseconds. As a fully managed service, Google Cloud Bigtable relieves customers of responsibility for its infrastructure; customers can instead focus on populating the database with data and refining the analytic insights needed to run business operations more effectively. The product integrates with Hadoop and consequently supports the ingestion of big data in a variety of formats.

Because the platform's underlying architecture has powered prominent Google applications for years, customers can reasonably expect Google Cloud Bigtable to deliver on its promises of low latency, high performance and scalability. The product targets organizations with massive data ingestion needs and embraces use cases related to the internet of things as well as verticals, such as financial services, that handle massive volumes of data daily. By releasing Google Cloud Bigtable, Google renders the same technology that underpins much of its commercial operations more broadly accessible and, in so doing, draws a parallel to Amazon's release of Amazon Machine Learning, a product similarly derived from the technology used to run Amazon's own internal business operations. Google's decision to democratize the core technology of Bigtable illustrates a broader trend in enterprise IT whereby technology behemoths such as Amazon, Google, Microsoft and Yahoo monetize curated versions of products they have used to run their own business operations for years, thereby making available to everyday enterprises battle-tested technology for data ingestion, analytics and visualization.
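Bigtable's wide-column data model, which underlies both Cloud Bigtable and HBase, can be sketched in plain Python: rows sorted by key, cells addressed by (column family, qualifier) and versioned by timestamp. This models the concepts only; it is not the Cloud Bigtable or HBase client API.

```python
# Illustrative model of a Bigtable-style sparse, sorted, versioned
# table. Row keys and values below are invented examples.

class SparseTable:
    def __init__(self):
        # row key -> {(family, qualifier): [(timestamp, value), ...]}
        self.rows = {}

    def put(self, row_key, family, qualifier, value, ts):
        """Write a versioned cell; versions are kept newest-first."""
        cells = self.rows.setdefault(row_key, {}).setdefault((family, qualifier), [])
        cells.append((ts, value))
        cells.sort(reverse=True)

    def get(self, row_key, family, qualifier):
        """Return the most recent value for the cell, or None."""
        cells = self.rows.get(row_key, {}).get((family, qualifier))
        return cells[0][1] if cells else None

    def scan(self, prefix):
        """Return row keys with a given prefix, in sorted order."""
        return sorted(k for k in self.rows if k.startswith(prefix))

table = SparseTable()
# Time-series keys like "device#id#date" keep related rows adjacent,
# a common pattern for the IoT-style ingestion described above.
table.put("device#42#20150506", "metrics", "temp", "21.5", ts=1)
table.put("device#42#20150506", "metrics", "temp", "22.0", ts=2)
```

Scanning by key prefix is what makes the model efficient for ingesting and analyzing large time-ordered datasets.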

Categories: Google

CoreOS Announces Integration With Google’s Kubernetes To Signal Emergence Of Container Standard, Independent Of Docker

CoreOS has announced that its rkt (pronounced "rocket") container technology will be integrated with Google's Kubernetes container management framework. The integration means that the Kubernetes framework need not leverage Docker containers but can instead rely solely on CoreOS container technology. rkt is a container runtime that implements appc, the App Container specification designed to provide a standard for containers based on requirements related to composability, security, image distribution and openness. CoreOS launched rkt on the premise that Docker had strayed from its original manifesto of developing "a simple component, a composable unit, that could be used in a variety of systems," as noted by CoreOS CEO Alex Polvi in a December 2014 blog post:

Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform. It is not becoming the simple composable building block we had envisioned.

Here, Polvi notes how Docker has transitioned from an initiative focused on creating reusable components to a platform whose mission has deviated from its original manifesto. Today's announcement of the integration of CoreOS's rkt with Kubernetes represents a deepening of the relationship between CoreOS and Google, which recently included a $12M funding round led by Google Ventures. While CoreOS previously supported Kubernetes, the integration of rkt into Google's container management framework is a clear sign that the battle for container supremacy is likely to begin in earnest, particularly given that CoreOS brands itself as enabling other technology companies to build Google-like infrastructures. With Google's wind in its sails, and executives from Google, Red Hat and Twitter having joined the App Container specification community management team, Docker now confronts a real challenger to its supremacy within the container space. Moreover, Google, VMware, Red Hat and Apcera have all pledged support for appc in ways that suggest an alternative standard defining "how applications can be packaged, distributed, and executed in a portable and self-contained way" may well be emerging.
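Concretely, appc standardizes a JSON image manifest that any conforming runtime can execute. The sketch below assembles a minimal manifest with the specification's core required fields; the version string, image name and command are placeholders for illustration, not values from the announcement.

```python
import json

# Minimal App Container (appc) image manifest, sketched from the
# specification's required fields. Name, version and exec values are
# illustrative placeholders.
manifest = {
    "acKind": "ImageManifest",   # identifies this JSON as an image manifest
    "acVersion": "0.5.1",        # placeholder spec version
    "name": "example.com/worker",
    "app": {
        # Command run when the container starts.
        "exec": ["/usr/bin/worker", "--listen", ":8080"],
        "user": "0",
        "group": "0",
    },
}

serialized = json.dumps(manifest, indent=2)
```

Because the manifest is a plain, openly specified document rather than a runtime-specific format, any implementation, rkt included, can package, distribute and execute the same image.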

Categories: Docker, Google

Avere Systems Partners With Google Cloud Platform To Deliver Hybrid Clouds For Compute And Storage Intensive Workloads

On Monday, Avere Systems announced a partnership with Google that empowers customers to transfer large data sets and workloads to the Google Cloud Platform. The collaboration means that companies can now transfer data from NAS storage systems to the Google Cloud Platform and enjoy the scalability and performance of the same infrastructure that powers Google Search, Gmail and Google Drive. Avere's FXT Edge Filer technology allows customers to run storage and compute workloads both on premises and in the cloud, creating a hybrid cloud infrastructure optimized for cloud-bursting scenarios and compute-intensive workloads. Avere's Physical FXT Edge Filers deliver NAS for on-premises, file-based applications, whereas its Virtual FXT Edge Filers provide a software solution that manages a high-performance storage infrastructure within a cloud-based platform. The combination of Physical and Virtual FXT Edge Filers allows customers to deploy solutions on premises and in the cloud while delivering high performance and low latency for big data applications. Because of its ability to support compute-intensive workloads and massive storage requirements, Avere Edge Filer technology has enjoyed notable success within the media and entertainment industry, as evinced by its use at the visual effects studio Framestore. This track record of supporting the massive computational and storage needs of digital media and entertainment use cases strongly positions Avere Systems to serve organizations that need to build a compute- and storage-intensive hybrid cloud infrastructure on the Google Cloud Platform.

Categories: Google, Miscellaneous

Google Cloud Storage Nearline Promises To Disrupt The Economics Of Cold Storage

On Wednesday, Google announced the beta release of Google Cloud Storage Nearline, a cloud-based storage product that transforms the economics of hot and cold storage. Whereas enterprises currently wrestle with the problem of managing frequently accessed data separately from "cold" data, Google Cloud Storage Nearline renders cold data accessible within roughly three seconds. This means organizations need not maintain separate infrastructures for cold and hot data but can instead leverage Google's high-performance, low-cost storage solution to make historical data available within a few seconds. As a result, enterprises can serve up historical emails, audit and compliance findings, log files and data specific to decommissioned products and services with a virtually negligible time lag compared to hot data. Google charges 1 cent per GB per month to store data within a framework that delivers enterprise-grade security and integration with Google Cloud Storage services, in addition to the ability to work with vendors such as Veritas/Symantec, NetApp, Iron Mountain and Geminare for services such as backup, encryption, deduplication, data ingestion from physical hard drives and disaster recovery as a service. In the context of the larger cloud storage landscape, Google Cloud Storage Nearline poses a direct threat to Amazon Glacier, a solution similarly priced at 1 cent per GB with a focus on cold data; unlike Nearline, however, Glacier requires several hours for data retrieval rather than seconds. Nearline also speaks to a peculiarity of the contemporary data landscape: whereas material objects such as newspapers and man-made products in general are routinely recycled or discarded, data has carved out a unique place for itself marked by freedom from outright destruction.
That immunity to being discarded is, of course, enabled by the ever-decreasing price of hardware, but Google's intervention to render historical data available within a few seconds stands to fundamentally disrupt and transform the economics of cloud storage.
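The pricing lends itself to simple back-of-envelope arithmetic at the announced 1 cent per GB per month; the archive sizes below are invented examples.

```python
# Back-of-envelope storage cost at the announced $0.01 per GB per
# month. Archive sizes are illustrative; the per-GB rate is the one
# stated in the announcement.

PRICE_PER_GB_MONTH = 0.01  # USD

def monthly_cost(terabytes):
    """Monthly storage cost in USD for a given number of terabytes
    (using 1 TB = 1000 GB for simplicity)."""
    return terabytes * 1000 * PRICE_PER_GB_MONTH

# Keeping 50 TB of historical logs and emails online:
cost_50tb = monthly_cost(50)   # 50,000 GB at $0.01/GB
cost_1tb = monthly_cost(1)     # 1,000 GB at $0.01/GB
```

At these rates a 50 TB archive costs on the order of a few hundred dollars per month, which is why retrieval latency, seconds versus hours, rather than storage price becomes the differentiator against Glacier.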

Categories: Amazon Web Services, Google

Kubernetes Integrates With OpenStack Through Collaboration Between Google And Mirantis

On Tuesday, Mirantis announced the integration of OpenStack with Kubernetes, the open source framework developed by Google to manage containers. The integration enhances the portability of applications between the private cloud infrastructures typical of OpenStack and public cloud environments, such as the Google Cloud Platform and Microsoft Azure, that support Kubernetes. Even though Docker containers are well known for enhancing the portability of applications across infrastructures, transporting applications and workloads from private clouds to public clouds remains challenging. The availability of Kubernetes within OpenStack private clouds in addition to public cloud environments now makes it easier to transport containerized applications from private to public clouds and thereby obtain a greater return on investment from hybrid cloud infrastructures.

Moreover, the integration facilitates container management on the Mirantis OpenStack platform by automating and orchestrating the management of Docker containers within an OpenStack-based IaaS infrastructure. The integration depends on Murano, the OpenStack application catalog, which manages the infrastructure for Kubernetes clusters and deploys Docker applications to the Kubernetes cluster. As the application and Kubernetes cluster scale, Murano manages the interplay between OpenStack compute, storage and networking resources and the application to ensure the infrastructure needs of the application and its Kubernetes cluster are met. Tuesday's announcement underscores the burgeoning power of containers and container management frameworks such as Google's Kubernetes, the significance of OpenStack within the private cloud space, and the increasingly urgent need for technologies that promote communication across cloud infrastructures in order to realize the true potential of hybrid cloud environments. The integration of Kubernetes and OpenStack's Murano will be available for preview on the Mirantis OpenStack Express platform in April 2015.
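The portability claim rests on Kubernetes accepting the same declarative specs regardless of the underlying cloud. The sketch below shows a minimal replication controller, the 2015-era Kubernetes primitive for keeping a fixed number of container replicas running; the apiVersion string and image name are placeholders for the then-current API, not details from the announcement.

```python
# Minimal sketch of a Kubernetes replication controller spec.
# The same document can be applied to a Kubernetes cluster whether it
# runs atop OpenStack, GCP or Azure. Version and image are placeholders.
controller = {
    "apiVersion": "v1beta3",  # placeholder for the 2015-era API version
    "kind": "ReplicationController",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # Kubernetes replaces replicas that disappear
        "selector": {"app": "web"},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    "image": "example/web:1.0",  # placeholder image
                }],
            },
        },
    },
}
```

In the Mirantis integration, Murano's role is to provision and scale the OpenStack resources underneath the cluster, while specs like this one describe the application running on top.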

Categories: Google, OpenStack

Google Releases Open Source Tool For Cloud Performance Benchmarks And Comparisons

On Wednesday, Google announced the availability of PerfKit Benchmarker, an open source application for benchmarking cloud performance across a variety of cloud infrastructures. PerfKit Benchmarker tackles the notorious difficulty of obtaining metrics about cloud platforms that enable an apples-to-apples comparison of cloud performance and operational efficacy. PerfKit reports on metrics such as "application throughput, latency, variance and overhead" in addition to data on the time required to provision resources. Available under the Apache License v2.0, PerfKit Benchmarker is complemented by PerfKit Explorer, a visualization platform featuring dashboards and other tools that facilitate rapid comprehension of trends and the business significance of the metrics PerfKit Benchmarker collects. In a blog post, Google pledged to keep PerfKit current with the evolution of cloud infrastructures as follows:

PerfKit is a living benchmark framework, designed to evolve as cloud technology changes, always measuring the latest workloads so you can make informed decisions about what’s best for your infrastructure needs. As new design patterns, tools, and providers emerge, we’ll adapt PerfKit to keep it current. It already includes several well-known benchmarks, and covers common cloud workloads that can be executed across multiple cloud providers.

PerfKit currently supports the Google Cloud Platform in addition to Amazon Web Services and Microsoft Azure, according to TechCrunch. All told, the release of PerfKit Benchmarker constitutes a seminal moment for the cloud computing industry given the dearth of data enabling cross-vendor comparisons, metrics compilation and benchmarking. Despite the availability of platforms such as CloudHarmony, New Relic and Splunk, few tools in the industry facilitate vendor comparisons by leveraging transparent methodologies and metrics-development practices. The key question regarding PerfKit will be the degree to which its measurement practices indirectly play to the strengths of the Google Cloud Platform (GCP), although presumably the Google Cloud Platform Performance team would know better than to create a benchmarking tool that merely casts a positive light on GCP. Moreover, PerfKit was developed in collaboration with the likes of CenturyLink, CloudHarmony, Intel, Microsoft, Rackspace and Red Hat, which in itself suggests the cloud computing space stands poised to leverage Google's record of innovation and quality in conjunction with "quarterly discussion on default benchmarks and settings proposed by the community" led by Stanford and MIT. Regardless, PerfKit represents an exciting moment for the technology landscape as cloud computing continues to lean toward interoperability, open standards, and APIs between proprietary platforms that facilitate workload sharing and an increasingly open ecosystem for application development and data sharing.
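An apples-to-apples comparison ultimately reduces to aggregating per-run samples of the same metric across providers. The sketch below illustrates that step; the record layout and numbers are invented for illustration and are not PerfKit's actual output schema.

```python
from statistics import mean

# Illustrative aggregation of benchmark samples of the kind a tool
# like PerfKit Benchmarker collects: a metric name, a value, a unit,
# and the provider it was measured on. All values are made up.
samples = [
    {"provider": "GCP",   "metric": "throughput", "value": 980.0,  "unit": "Mbps"},
    {"provider": "GCP",   "metric": "throughput", "value": 1020.0, "unit": "Mbps"},
    {"provider": "AWS",   "metric": "throughput", "value": 940.0,  "unit": "Mbps"},
    {"provider": "Azure", "metric": "throughput", "value": 910.0,  "unit": "Mbps"},
]

def mean_by_provider(samples, metric):
    """Average one metric per provider so results are directly comparable."""
    by_provider = {}
    for s in samples:
        if s["metric"] == metric:
            by_provider.setdefault(s["provider"], []).append(s["value"])
    return {p: mean(values) for p, values in by_provider.items()}

averages = mean_by_provider(samples, "throughput")
```

The value of a tool like PerfKit lies less in this arithmetic than in standardizing how the samples are produced, with the same workload, settings and provisioning steps on every cloud.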

Categories: Google
