Today, in its fourth-quarter earnings call, Amazon announced that Amazon Web Services now claims more than one million active customers and 90% year-over-year usage growth of the AWS platform. Moreover, in 2014, AWS rolled out 515 feature and service releases, 80% more than the previous year. The most recent quarter also featured the release of AWS Lambda, a service that runs code in response to events from Amazon S3, Amazon Kinesis, and Amazon DynamoDB. During the same period, AWS announced the Amazon EC2 Container Service and Amazon Aurora, a MySQL-compatible relational database for Amazon RDS that claims up to five times the performance of MySQL at one tenth the cost of commercial relational database products. Importantly, Amazon announced that it will finally reveal details of AWS earnings later in 2015. Those earnings are currently bucketed into the “Other” category in Amazon’s earnings reports, leaving the exact figure open to endless analyst speculation and inference. Revenue in Amazon’s “Other” category for the most recent quarter was $1.67B, although that figure includes advertising and credit-card related revenues.
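The event-driven model behind AWS Lambda can be illustrated with a minimal handler sketch. The event shape below follows the Amazon S3 notification format; the handler name, bucket and object key are hypothetical, and the processing step is a placeholder for whatever work the function would actually do.

```python
# Minimal sketch of an AWS Lambda handler triggered by an Amazon S3 event.
# The event structure mirrors the S3 notification format; bucket and key
# names in any real invocation would come from the triggering event.
def handler(event, context):
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # React to the new object here, e.g. generate a thumbnail or index it.
        results.append(f"processed s3://{bucket}/{key}")
    return results
```

The same handler signature works for Kinesis and DynamoDB triggers; only the structure of each `record` differs by event source.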
Datapipe today announced an expansion of its Managed Cloud for Amazon Web Services offering, marked by enhanced functionality for cloud security, risk management, the creation of hybrid cloud infrastructures and operational analytics. As an AWS Premier Consulting Partner, Datapipe delivers a fully managed solution for customers who would like to take advantage of the Amazon Web Services platform and its extraordinary range of features and ancillary product offerings. Specifically, Datapipe offers services that include round-the-clock monitoring and issue resolution, managed cloud provisioning, cloud scaling, database management, workload migration, SharePoint as a Service and orchestration. As a result of today’s announcement, Datapipe now offers enhanced security and risk management solutions including advanced identity management and authentication services, managed security and threat alerts, backup and recovery options that leverage hybrid cloud infrastructures such as Datapipe’s on-premises datacenters, and enterprise-level governance and security policies. In addition, Datapipe launched a “Managed hybrid cloud connect” solution that allows customers to create hybrid cloud infrastructures combining the Amazon Web Services cloud with Datapipe’s on-premises datacenters in Seattle, WA, Silicon Valley, Ashburn, VA, London and Singapore. Importantly, the “Managed hybrid cloud connect” solution leverages AWS Direct Connect, the dedicated connection to Amazon Web Services that bypasses the public internet. Finally, Datapipe revealed the availability of operational analytics about a customer’s AWS infrastructure that enable customers to track usage trends and operational KPIs in order to optimize the performance of their deployments.
Today’s announcement by Datapipe underscores the heterogeneity of strategic alliances in the IaaS space, whereby vendors such as Datapipe partner with a leading IaaS player to deliver a fully managed offering with an increasingly rich range of features, giving enterprise customers a turnkey solution for infrastructure monitoring, security, data resilience and analytics. The industry should expect more vendors to offer managed cloud solutions on the platforms of major IaaS players as the market for cloud services continues to skyrocket and the need for cloud-related managed services increases in tandem.
At its Build conference in San Francisco, Microsoft joined Google and Amazon Web Services in slashing IaaS and storage prices, announcing cuts of 27-35% on compute services and 44-65% on storage. Additionally, Microsoft revealed details of “Basic” VM instances that lack the load balancing and auto-scaling functionality of Standard instances. Price cuts were deepest for “Memory-Intensive” virtual machines, reaching 35% for select Linux machines and 27% for Windows-based machines. Microsoft also announced a new storage redundancy option branded Zone Redundant Storage (ZRS) that allows customers to store three copies of their data across “multiple facilities,” which may be located either within the same region or across different regions. Zone Redundant Storage gives customers an alternative to the currently available Geo Redundant Storage (GRS) option, which stores three copies of a customer’s data in each of two regions “hundreds of miles apart.” Zone Redundant Storage will be priced 37.5% lower than Geo Redundant Storage. Notable about Microsoft’s announcement of Azure price reductions was its concomitant emphasis on quality and innovation in the cloud computing space:
While price is important, and something that will continue to grab headlines, there are three key factors at play in cloud computing: innovation, price, and quality. Innovation and quality will prove far more important than commoditization of compute and storage. Vendors will ultimately extol their track records for building and running services far more than their prices and SLAs.
Microsoft will continue to focus on bringing our customers a world-class service with an unrivaled user experience. This means best-in-class value while still providing the most complete cloud experience on the market. It means massive investments in cutting-edge infrastructure and world-class R&D. It means continuing to grow our developer and partner ecosystems. Simply put, it means devoting the bulk of our efforts to delivering innovation and a quality experience for our customers, developers, and partners.
With cloud guru Satya Nadella now at the helm of Microsoft, the industry should expect Microsoft to make good on the promise made by Steve Martin, General Manager of Windows Azure, in his blog post regarding the devotion of “the bulk of our [Microsoft’s] efforts to delivering innovation and a quality experience for our customers.” All this suggests that what had previously been a two-horse race between Amazon Web Services and Google has, within a matter of days, morphed into a three-horse race that prominently features Microsoft and its renewed commitment to cloud and mobile technologies under Nadella, as evinced by Microsoft’s release of Office on the iPad. Without question, Microsoft’s experience serving enterprise customers far exceeds that of Google, but its ability to innovate in the cloud space with the frequency and depth of Amazon Web Services and Google remains to be seen.
At Google Cloud Platform Live, Google just announced a range of enhancements to its Infrastructure as a Service, Platform as a Service and Big Data analytics platforms. For starters, Google announced price cuts across its Cloud Platform ranging from 30-85%. Prices for Google Compute Engine, its Infrastructure as a Service offering, will be slashed by 32% across all “sizes, regions and classes.” Meanwhile, Google Cloud Storage and Google BigQuery saw price reductions of 68% and 85% respectively. Google also simplified the pricing of its Platform as a Service, Google App Engine, and reduced it by roughly 30%. In addition to price cuts, Google unveiled an analogue to Amazon Web Services’ Reserved Instances, which provide deep discounts on VM pricing in exchange for one- or three-year commitments. Branded “Sustained-Use Discounts,” the new pricing model offers cuts on top of the already announced reductions for customers who use a VM for more than 25% of a given month. Customers who use a VM for an entire month can see additional discounts of up to 30%, resulting in price cuts of over 50% compared to original prices once today’s other reductions are factored in. Google is also launching BigQuery Streaming, an enhancement that enables the BigQuery platform to consume 100,000 rows of data per second and render the data available for real-time analytics in ways comparable to products such as Amazon Kinesis and Treasure Data. Moreover, Google announced a Managed Virtual Machines service that allows users to configure a virtual machine to their own specifications and deploy it to the Google App Engine infrastructure, giving developers more flexibility regarding the type of managed machine that can take advantage of App Engine’s auto-scaling and management functionality. For developers, Google announced integration with Git featuring automated builds and unit testing of committed changes, as well as aggregated logs of test results.
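As a rough back-of-the-envelope check of the stacked discounts described above, assuming the 32% across-the-board cut and the maximum 30% sustained-use discount apply multiplicatively:

```python
# Sketch of the stacked-discount arithmetic: a 32% across-the-board price
# cut, followed by up to a 30% sustained-use discount for a VM running
# the entire month. The multiplicative stacking is an assumption here.
base_cut = 0.32
sustained_use = 0.30

effective_price = (1 - base_cut) * (1 - sustained_use)  # fraction of original price
total_discount = 1 - effective_price

print(f"{total_discount:.1%}")  # prints 52.4%, i.e. "over 50%" off the original price
```

This is consistent with Google’s claim of price cuts of over 50% for a VM used continuously for a full month.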
Finally, Google revealed the general availability of Red Hat Enterprise Linux and SUSE Linux Enterprise Server VMs, as well as a limited preview of Windows Server 2008 R2 VMs.
All told, today’s price cuts and new functionality represent much more than a price war with Amazon Web Services. Just a day before the AWS Summit in San Francisco, Google confirmed the seriousness of its intent to increase traction for its development-related cloud-based products. The variety of today’s enhancements to Google Compute Engine, Google App Engine and BigQuery, alongside the introduction of the Managed Virtual Machines service, indicates that Google is systematically preparing to serve the cloud computing needs of enterprise customers. Despite all the media hype over the last two years about companies gearing up to “take on Amazon,” no other cloud vendor has come close to the depth of IaaS features and functionality possessed by Amazon Web Services, with the exception of Google as it revealed itself today. All this means that we now have a two-horse race in the Infrastructure as a Service space until the commercial OpenStack community convincingly demonstrates the value of OpenStack-based cloud interoperability in conjunction with rich features and competitive pricing.
CipherCloud today announced the launch of data discovery capabilities by means of a solution deployed on Salesforce.com’s AppExchange. CipherCloud customers will be able to use the data discovery tool to run analytics on the applications and data infrastructures within its purview in order to respond proactively to abnormal usage patterns that suggest possible security risks. The tool allows customers to configure the parameters of rules and alerts to tailor the data discovery functionality to the usage patterns of their user base. CipherCloud’s data discovery solution extends the capabilities of its data privacy, security and encryption services for cloud applications by giving customers access to data visualizations of user and platform activity, as illustrated below:
The three data discovery charts identify top users by type of activity, anomalous activities and the most widely exported reports. Here, examples of anomalous activities include excessive downloads and after-hours usage, although the ability to customize what counts as anomalous makes the solution applicable to a wide variety of use cases for identifying suspicious activities and processes.
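To make the rule-configuration idea concrete, here is a hypothetical sketch of the kind of customizable anomaly rules described above; the event fields, thresholds and business-hours window are illustrative assumptions, not CipherCloud’s actual rule schema.

```python
from datetime import datetime

# Hypothetical anomaly rules over a stream of user-activity events.
# Field names, thresholds and the business-hours window are illustrative
# assumptions; a real deployment would make all of these configurable.
def flag_anomalies(events, max_downloads=100, business_hours=(8, 18)):
    alerts = []
    downloads_by_user = {}
    for e in events:
        user, action = e["user"], e["action"]
        hour = e["timestamp"].hour
        if action == "download":
            downloads_by_user[user] = downloads_by_user.get(user, 0) + 1
            if downloads_by_user[user] > max_downloads:
                alerts.append((user, "excessive downloads"))
        if not (business_hours[0] <= hour < business_hours[1]):
            alerts.append((user, "after-hours activity"))
    return alerts
```

Tightening `max_downloads` or narrowing `business_hours` is the sketch’s analogue of customizing what counts as anomalous for a given user base.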
CipherCloud’s launch of its data discovery solution on the AppExchange platform builds on a recent product enhancement that encrypts data prior to its transmission to Amazon Web Services, and specifically to AWS products such as Amazon S3, RDS and EBS. The data discovery solution adds one more weapon to CipherCloud’s roster of products for protecting the security of cloud-based data, and represents a useful complement to its AES 256-bit encryption services, which allow customers to retain control of their encryption keys rather than ceding them to the vendor of the cloud platform on which a solution is deployed. Expect CipherCloud to continue to expand its data loss prevention (DLP) and encryption services, particularly as the market for cloud security products explodes in the wake of increasing cloud adoption and elevated customer concerns over cloud security.
Cloud Computing Today recently had the privilege to interview Garantia Data CEO Ofer Bengal about the positioning of Redis within the larger landscape of NoSQL databases. Redis is an open source, in-memory key-value data store. In his responses to the questions below, Bengal remarks on the ability of Redis to “serve a very high volume of write and read requests…at sub millisecond latency,” its “single threaded event-driven architecture,” and protocols that, in combination with its other features, render it 5-10 times faster than other in-memory databases. Bengal also elaborates on the richness of data structures within the Redis platform that empower developers to write more elegant and streamlined code.
Garantia Data’s core offering consists of the Redis Cloud and Memcached Cloud on well-known cloud platforms such as Amazon Web Services and Windows Azure. Its Redis Cloud platform provides a fully managed service for Redis deployments that handles scalability and failover considerations. Garantia Data recently acquired MyRedis, a production-grade deployment of Redis that runs on Heroku and AppHarbor.
1. Cloud Computing Today: Why, in your view, will Redis become the preferred database technology platform?
Ofer Bengal: NoSQL databases like Redis are becoming increasingly popular. According to a 451 Research report, Redis adoption is projected to increase from 11.3 percent today to 15.9 percent in 2015. Redis in particular will become a preferred database technology because it is faster than any other database and it has rich data structures – which are very similar to those of today’s high level programming languages. Leading companies like Twitter and Pinterest use Redis, which shows it is highly useful for companies with rapidly growing datasets.
2. Cloud Computing Today: What differentiates the performance of Redis from other datastores?
Ofer Bengal: Redis is an in-memory database designed from the ground up to serve a very high volume of write and read requests (over 100K ops/sec on a typical cloud instance) at sub millisecond latency. This, in most cases, means two orders of magnitude faster than other disk-based databases. Versus other in-memory databases, Redis is based on a single threaded event-driven architecture which frees it from lock mechanisms. In addition, its protocol is simple and fast to process – making Redis 5x-10x faster than any other in-memory database available today.
3. Cloud Computing Today: What makes developing apps with Redis a much simpler task than with other database platforms?
Ofer Bengal: Redis has a rich set of data structures which are very similar to those of today’s high level programming languages. Users are also able to do more with Redis as an in-memory database because it is less complicated to manipulate than the same data structure on disk. This means developers do far less damage to the concepts of their programs when using Redis, resulting in faster development, improved code quality and more attractive code.
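Bengal’s point about data structures can be illustrated with a toy, in-process stand-in for Redis’s sorted-set commands (the real commands are ZADD and ZREVRANGE, typically issued through a client library such as redis-py): a leaderboard reduces to a couple of one-line calls instead of hand-rolled sorting and persistence logic.

```python
# Toy in-process stand-in for Redis's sorted-set commands, sketched to
# illustrate why server-side data structures simplify application code.
# Real Redis keeps this structure on the server and updates it atomically.
class SortedSet:
    def __init__(self):
        self._scores = {}

    def zadd(self, member, score):
        # ZADD: insert a member or update its score.
        self._scores[member] = score

    def zrevrange(self, start, stop):
        # ZREVRANGE: members ranked by score, highest first (stop inclusive).
        ranked = sorted(self._scores, key=self._scores.get, reverse=True)
        return ranked[start:stop + 1]

leaderboard = SortedSet()
leaderboard.zadd("alice", 120)
leaderboard.zadd("bob", 95)
leaderboard.zadd("carol", 150)
print(leaderboard.zrevrange(0, 1))  # top two players: ['carol', 'alice']
```

Because the ranking lives in the data structure itself, application code never reimplements sort-on-read or maintains a parallel index, which is the kind of streamlining Bengal describes.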
This week, VMware revealed details of its Infrastructure as a Service platform, vCloud Hybrid Service. Based on the premise that enterprise customers want a cloud offering that extends the technology already running in their datacenters, VMware announced a cloud solution built around the VMware virtualization technologies with which the enterprise is deeply familiar. VMware’s offering is branded a hybrid cloud because it enables customers to transport workloads back and forth between its public cloud platform and private customer datacenters, allowing enterprises to leverage private and public cloud solutions in tandem as dictated by their business needs.
Key features of the VMware IaaS vCloud Hybrid Service include the following:
•IaaS platforms delivered through VMware service providers that provide vCloud Datacenter Services, enabling customers to provision virtual environments with ease. vCloud Datacenter Services feature SLAs guaranteeing 99.5% uptime, role-based access control and the ability to configure stacks for compliance with SAS 70 Type II or ISO 27001 standards.
•A choice of dedicated or virtual private cloud solutions. A dedicated solution offers customers “physically isolated infrastructure” in contrast to the “logically isolated infrastructure” specific to a virtual private cloud solution.
•An IaaS infrastructure delivered by certified VMware service providers such as AT&T Inc., Bluelock, Colt, CSC, Dell Services, Optus, SingTel, Softbank and T-Systems.
•vCloud Connector 2.0, which enables customers to transfer workloads between private datacenters and VMware public clouds. Customers can transfer workloads between infrastructures using one network configuration instead of reconfiguring network settings in the destination infrastructure. Additionally, customers can manage the transfer of data between different infrastructures with “One Catalog,” which synchronizes the list of available content across all relevant infrastructures and thereby spares customers from managing multiple content catalogs concurrently.
Because VMware’s vCloud Hybrid Service is delivered through a cluster of service partners, the offering is fundamentally different from the IaaS products of Amazon Web Services and Rackspace. VMware plans to make its vCloud Hybrid Service technology and IP available to all service partners, and promises to build one of the most extensive IaaS partnerships for public cloud computing in the world today. The product effectively gives new meaning to the term cloud interoperability, given that customers can transfer workloads not only between private enterprise datacenters and public clouds enabled by VMware’s service partners, but also between VMware’s public cloud and partner datacenters. vCloud Hybrid Service will be available through an early access program in June, and VMware anticipates general availability in Q3 of this year.