On March 26, Amazon announced two new storage plans for Amazon Cloud Drive that expand customers' options for storing files in the cloud. Both plans offer unlimited storage and take direct aim at cloud storage competitors such as Box, Dropbox and Google Drive. The Unlimited Photos plan allows customers to store an unlimited number of photos for $11.99 per year, while the Unlimited Everything plan allows users to store an unlimited number of files, whether photos, videos or other files, for $59.99 per year. Both plans feature a free three-month trial and disrupt the cloud storage landscape by pricing photo storage at less than $1 per month and general file storage at less than $5 per month. Amazon's prices undercut Dropbox and Google Drive, both of which charge $10 per month for 1 TB of storage for a single user.
Meanwhile, Amazon recently broadened its deployment of Amazon Prime Now by making its one-hour delivery service available in 35 zip codes in the Dallas area. Amazon Prime Now provides one-hour delivery for $7.99, or free two-hour delivery, for subscribers to Amazon Prime, which costs $99 per year. With the addition of Dallas, Amazon Prime Now's footprint currently spans New York, Baltimore, Miami and Dallas. Taken together, Amazon's new cloud storage offerings and its expansion of Amazon Prime Now represent aggressive steps to redefine not only the cost but also the logistics of day-to-day operations such as file storage and e-commerce. The new offerings aim to lure cloud-averse customers into greater comfort with Amazon's platform as the company attempts to squeeze not only Dropbox and Google Drive but also the likes of eBay, Walmart.com and Target.com. By making its existing products and services more attractive, Amazon also stands to benefit from spillover revenue as customers browse and sign up for a broader range of its offerings. The real prize for Amazon, however, is reputational: expanded usage of its platform burnishes the Infrastructure as a Service layer that undergirds its dizzying range of products and services, and gives the company further license to expand and upgrade its infrastructure fleet in support of its business-facing, cloud-based products and services.
On Wednesday, Google announced the beta release of Google Cloud Storage Nearline, a cloud-based storage product that transforms the economics of hot and cold storage. Whereas enterprises currently wrestle with managing frequently accessed "hot" data separately from rarely accessed "cold" data, Google Cloud Storage Nearline renders cold data accessible within roughly three seconds. That retrieval speed means organizations need not maintain separate infrastructures for cold and hot data, but can instead leverage Google's high-performance, low-cost storage solution to make historical data available within a few seconds. As a result, enterprises can serve up historical emails, audit and compliance findings, log files and data from decommissioned products and services with a virtually negligible lag compared to hot data. Google's product charges 1 cent per GB within a framework that delivers enterprise-grade security and integration with Google Cloud Storage services, in addition to partnerships with vendors such as Veritas/Symantec, NetApp, Iron Mountain and Geminare for services such as backup, encryption, deduplication, data ingestion from physical hard drives and disaster recovery as a service. In the larger cloud storage landscape, Google Cloud Storage Nearline poses a direct threat to Amazon Glacier, a solution similarly priced at 1 cent per GB and likewise focused on cold data. Unlike Nearline's roughly three-second retrieval, however, Amazon Glacier requires several hours to retrieve data. Google Cloud Storage Nearline also speaks to a paradox of the contemporary data landscape: whereas material objects such as garbage, newspapers and man-made products in general confront technologies for recycling and transformation, data has carved out a unique place for itself, largely free from outright destruction.
The immunity of data to being discarded is, of course, enabled by the ever-decreasing price of hardware, but Google's move to make historical data available within a few seconds stands to fundamentally disrupt and transform the economics of cloud storage.
On December 23, Amazon announced an enhancement to its Zocalo collaboration platform that gives customers the ability to sync content within shared folders. Customers use the shared folder sync feature by setting up shared folders for designated projects and teams and then enabling synchronization of those folders for all requisite team members. Users can activate Zocalo's sync functionality during the setup and registration process by checking the "Enable Shared Folder Sync" box, or later via "Preferences." Amazon's latest enhancement to Zocalo illustrates its seriousness about the market for enterprise-grade collaboration, file sharing and cloud storage products. That said, Amazon has its work cut out for it if it intends to disrupt the collaboration and storage space, particularly given that competitors such as Box, Dropbox and Huddle deliver products with advanced security functionality in addition to the ability to configure corporate teams and third parties as collaborators. Nevertheless, Zocalo represents a notable feather in Amazon's cap as it continues to diversify its product suite while staying true to its larger vision of delivering fully managed cloud-based products and services.
On December 2, Amazon announced a change to its EC2 reserved instance pricing model. Reserved instances allow customers to reserve computing capacity for a one- or three-year term, in contrast to the on-demand, pay-as-you-go model of cloud computing, and reserving instances for a fixed duration entitles customers to deep discounts compared to hourly pricing. Previously, Amazon Web Services customers were required to pay up front, in full, for either one or three years of reserved instance usage. Last week, however, Amazon unveiled two additional pricing options for reserved instances: customers can now pay a partial amount of the reserved fee up front, or no upfront cost at all. Customers who pay nothing up front benefit from the cash flow advantages of spreading payments over the duration of the reserved instance term. However, paying fully up front for a three-year term gives customers roughly a 63% discount compared to on-demand pricing over the same period, whereas the no-upfront option delivers lesser savings, on the order of 30%.
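The discount math works by amortizing any upfront payment over the hours in the term. The sketch below illustrates the arithmetic only; the dollar figures are hypothetical stand-ins, not published AWS prices, chosen so the resulting discounts land near the 63% and 30% figures cited above.

```javascript
// Illustrative arithmetic only: the rates below are hypothetical, not
// published AWS prices. Shows how an upfront payment translates into an
// effective hourly rate and a discount versus on-demand pricing.
function effectiveHourlyRate(upfront, monthlyFee, termYears) {
  const hours = termYears * 365 * 24;
  return (upfront + monthlyFee * 12 * termYears) / hours;
}

function discountVsOnDemand(effectiveRate, onDemandRate) {
  return 1 - effectiveRate / onDemandRate;
}

// Hypothetical numbers for a three-year term:
const onDemand = 0.14;                               // $/hour, assumed
const allUpfront = effectiveHourlyRate(1360, 0, 3);  // pay everything now
const noUpfront = effectiveHourlyRate(0, 71.5, 3);   // pay monthly instead

console.log(discountVsOnDemand(allUpfront, onDemand)); // roughly 0.63
console.log(discountVsOnDemand(noUpfront, onDemand));  // roughly 0.30
```

The no-upfront customer pays more in total over the term, but preserves cash flow in the early months, which is precisely the trade-off the new payment options introduce.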
Amazon's revamp of its reserved instance pricing was announced days before a price reduction on outbound data transfer fees ranging from 6% to 43%, depending on the region and the volume of data transferred each month. Meanwhile, Amazon also recently announced an API operation, PutRecords, that streamlines the insertion of data into an Amazon Kinesis stream. PutRecords can transmit as much as 4.5 MB of data into a Kinesis stream per request, sending a maximum of 500 records, each as large as 50 KB, in a single HTTP call. The bottom line is that Amazon continues to innovate and remain competitive amidst growing competition from Google Cloud Platform and other IaaS vendors whose products, pricing models and partner ecosystems are maturing and posing more of a threat to Amazon's dominance of the IaaS space.
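In practice, a producer with more than 500 records, or more than 4.5 MB of pending data, must split its payload across multiple PutRecords calls. The following is a minimal client-side batching sketch based on the limits described above; the function and field names are illustrative, not part of the AWS SDK, and a real client would pass each batch to the SDK's putRecords call.

```javascript
// Sketch of client-side batching for Kinesis PutRecords, assuming the
// limits described in the text: at most 500 records and 4.5 MB per call.
// Names here are illustrative, not AWS SDK APIs.
const MAX_RECORDS_PER_CALL = 500;
const MAX_BYTES_PER_CALL = 4.5 * 1024 * 1024;

function batchRecords(records) {
  const batches = [];
  let current = [];
  let currentBytes = 0;
  for (const record of records) {
    const size =
      Buffer.byteLength(record.data) + Buffer.byteLength(record.partitionKey);
    // Close out the current batch when either limit would be exceeded.
    if (current.length === MAX_RECORDS_PER_CALL ||
        currentBytes + size > MAX_BYTES_PER_CALL) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(record);
    currentBytes += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}

// Usage: 1,200 small records split into 3 batches of at most 500 each.
const records = Array.from({ length: 1200 }, (_, i) => ({
  data: JSON.stringify({ event: i }),
  partitionKey: String(i % 8),
}));
console.log(batchRecords(records).length); // 3
```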
Amazon Web Services recently rolled out a service called AWS Lambda that promises to extend Amazon's reputation for disrupting contemporary cloud computing with yet another strikingly innovative product. AWS Lambda allows developers to dispense with persistent applications residing on virtual machines or servers. Instead, developers create libraries of code that respond to incoming data streams and perform event-driven computing by means of predefined Lambda functions. Lambda functions are code written in Node.js that executes in response to changes to Amazon S3, data feeds from Amazon Kinesis and updates to tables in Amazon DynamoDB. Developers grant Lambda functions permission to access specific AWS resources, enabling them to activate select AWS infrastructure components as necessary to perform their application logic. Part of the magic of Lambda functions is that the underlying infrastructure spins up as needed in response to incoming events and shuts down when idle, thereby conserving resources and minimizing costs.
Last Thursday, November 13, Amazon Web Services announced the availability of EC2 Container Service (ECS) to facilitate the management of Docker containers on the Amazon Web Services platform. The announcement represents another notable endorsement of Docker by a major cloud vendor and promises to further catapult Docker's container technology to the forefront of the cloud computing revolution. Docker, recall, is a platform that enables developers to build and ship distributed applications. Docker streamlines software development by ensuring that applications housed within Docker containers remain unchanged when transported from one environment to another, thereby reducing the probability that applications which run smoothly in test environments fail in production. Docker's container technology also introduces greater efficiencies in application development: well-defined declarations of application dependencies enable developers to more effectively diagnose bugs and performance issues as they arise.
ECS enables Amazon customers to create clusters featuring thousands of containers across multiple Availability Zones. Moreover, ECS empowers customers to start and terminate containers, and provides scheduling functionality that optimizes the collective performance of containers within a cluster. ECS also allows users to move containers between the AWS platform and on-premises infrastructures, while providing deep AWS integration with "Elastic IP addresses, resource tags, and Virtual Private Cloud (VPC)" that effectively transforms Docker containers into another layer of the AWS platform on par with EC2 and S3, according to a blog post by Amazon's Jeff Barr. With EC2 Container Service, Amazon joins Microsoft and Google in offering support for Docker deployment, management and orchestration: Google's Kubernetes project enables Docker container management on the Google Cloud Platform, while Microsoft recently announced support for Kubernetes on the Azure platform.
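Concretely, ECS describes the containers it should run via a task definition, a JSON document grouping one or more container specifications. The fragment below is an illustrative example of the general shape of such a definition; the family name, image and resource values are placeholders, not a definitive configuration.

```json
{
  "family": "web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }],
      "essential": true
    }
  ]
}
```

Registering a definition like this with ECS lets the scheduler place the corresponding containers on cluster instances, which is what makes cluster-wide start, stop and scheduling operations possible.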
The bottom line here is that Docker's ability to deploy applications within containers as opposed to virtual machines has captured the imagination of developers and enterprise customers to such a degree that the most significant IaaS players in the industry are announcing, variously, native or partner-based support for Docker technology. The key questions now are whether Docker usage proliferates to the point where it becomes the de facto standard for application deployment, and whether its technology can support the convergence of cloud computing and Big Data in the form of data-intensive applications designed to perform analytics on real-time, streaming data. Docker users will also be interested in container management frameworks that interoperate across clouds such as Google Cloud Platform and Amazon Web Services, in contrast to management frameworks designed for a single cloud infrastructure.
On October 23, Amazon Web Services announced the launch of its 11th region, AWS EU (Frankfurt). The AWS EU (Frankfurt) region is the second region in Europe and launches with two Availability Zones. Its availability helps German organizations comply with EU data protection requirements that impose constraints on the storage of data across national boundaries. JP Schmetz, Chief Scientist of Hubert Burda Media, remarked on the announcement as follows:
Now that AWS is available in Germany it gives our subsidiaries the option to move certain assets to the cloud. We have long had policies preventing data to be hosted outside of German soil and this new German region gives us the option to use AWS more meaningfully.
As Schmetz notes, German customers whose internal policies require intra-national hosting of data can now leverage the services of Amazon Web Services. In addition, German organizations that currently use AWS can more fully take advantage of the platform's offerings by expanding the scope of their usage to include production-grade workloads and sensitive data. Alongside AWS EU (Ireland), the new region is expected to reduce latency for European customers and provide additional options for architecting disaster recovery solutions, in addition to enabling select German customers to feel more comfortable about hosting data on AWS and to achieve compliance with their own internal policies on data hosting and data storage.