On December 23, Amazon announced an enhancement to its Zocalo collaboration platform that provides customers with the ability to sync content within shared folders. Customers use the shared folder sync feature by setting up shared folders for designated projects and teams and then enabling synchronization of those folders for the requisite team members. Users can activate Zocalo’s sync functionality during the setup and registration process by checking the “Enable Shared Folder Sync” box, or by doing the same under “Preferences.” Amazon’s latest enhancement of its Zocalo platform illustrates its seriousness in tackling the market for enterprise-grade collaboration, file sharing and cloud storage products. That said, Amazon has its work cut out for it if it really intends to disrupt the collaboration and storage space, particularly given that competitors such as Box, Dropbox and Huddle deliver products with advanced security functionality in addition to the ability to configure corporate teams and third parties as collaborative parties. Nevertheless, Zocalo represents a notable feather in Amazon’s cap as it continues to diversify its product suite while staying true to its larger vision of delivering fully managed cloud-based products and services.
Amazon Web Services
Amazon Restructures Reserved Instances Pricing, Reduces Fees On Outbound Data Transfer And Streamlines Data Transmission To Amazon Kinesis
On December 2, Amazon announced a change to its EC2 reserved instance pricing model. Reserved instances allow customers to reserve computing capacity for a fixed one- or three-year term, in contrast to the on-demand, pay-as-you-go model of cloud computing. Reserving instances for a fixed duration entitles customers to deep discounts in comparison to hourly pricing. Previously, Amazon Web Services customers were required to pay up front, in full, for either a one- or three-year reserved instance term. Last week, however, Amazon unveiled two additional pricing options for reserved instances, marked by the choice to pay either a partial amount of the reserved fee up front or no upfront costs at all. Customers who pay no upfront costs benefit from the cash flow advantages of spreading their payments over the duration of the reserved instance term. However, paying up front for a three-year reserved instance term gives customers a 63% discount in comparison to on-demand pricing over three years, whereas the no upfront option delivers lesser savings, on the order of 30%.
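The tradeoff between the payment options can be illustrated with some back-of-the-envelope arithmetic. The on-demand rate below is hypothetical; the discount percentages are the approximate figures cited above.

```javascript
// Hypothetical on-demand rate; the discounts are the approximate
// figures cited for a three-year reserved instance term.
var onDemandHourly = 0.10;                  // hypothetical $/hour
var hours = 3 * 365 * 24;                   // three-year term, ignoring leap days
var onDemandTotal = onDemandHourly * hours; // cost of running on demand for 3 years

// All upfront: ~63% discount, but the full amount is due immediately.
var allUpfrontTotal = onDemandTotal * (1 - 0.63);

// No upfront: ~30% discount, with payments spread over the term.
var noUpfrontTotal = onDemandTotal * (1 - 0.30);
var noUpfrontMonthly = noUpfrontTotal / 36;

console.log(onDemandTotal, allUpfrontTotal, noUpfrontMonthly);
```

The deeper discount rewards customers who can absorb the upfront cash outlay, while the no upfront option trades roughly half the savings for smoother cash flow.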
Amazon’s restructuring of its reserved instances pricing was announced days before a reduction in outbound data transfer fees ranging from 6% to 43%, depending on the region and the volume of data transferred each month. Meanwhile, Amazon also recently announced an API operation, PutRecords, that streamlines the insertion of data into an Amazon Kinesis stream. PutRecords can transmit as much as 4.5 MB of data into an Amazon Kinesis stream by sending a maximum of 500 records, each as large as 50 KB, in a single HTTP call. The bottom line is that Amazon continues to innovate and remain competitive amidst growing competition from Google Cloud Platform and other IaaS vendors whose products, pricing models and partner ecosystems are increasingly maturing and posing more of a competitive threat to Amazon’s market share dominance in the IaaS space.
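As a sketch of how a producer might stay within those limits, the helper below groups records into PutRecords-sized batches before they are sent. The function name and record shape are illustrative, not part of the AWS SDK, and the actual network call to Kinesis is omitted.

```javascript
// Illustrative batching for Kinesis PutRecords, using the limits
// described above: at most 500 records per call, 50 KB per record,
// 4.5 MB per call. Not an AWS SDK function.
var MAX_RECORDS = 500;
var MAX_RECORD_BYTES = 50 * 1024;
var MAX_BATCH_BYTES = 4.5 * 1024 * 1024;

function batchForPutRecords(records) {
  var batches = [];
  var current = [];
  var currentBytes = 0;

  records.forEach(function (rec) {
    var size = Buffer.byteLength(rec.Data);
    if (size > MAX_RECORD_BYTES) {
      throw new Error('record exceeds the 50 KB per-record limit');
    }
    // Start a new batch if adding this record would breach either limit.
    if (current.length === MAX_RECORDS || currentBytes + size > MAX_BATCH_BYTES) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(rec);
    currentBytes += size;
  });

  if (current.length > 0) batches.push(current);
  return batches; // each batch is suitable for one PutRecords call
}
```

Each resulting batch would then be passed, with a stream name, to a single PutRecords call.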
Amazon Web Services recently rolled out a service called AWS Lambda that promises to continue Amazon’s history of and reputation for disrupting contemporary cloud computing with yet another stunningly innovative product and service. AWS Lambda allows developers to dispense with the need to create persistent applications that reside on virtual machines or servers. Instead, developers create libraries of code that respond to incoming data streams and perform event-driven computing by leveraging predefined Lambda functions. Lambda functions are code, written in Node.js, that executes in response to changes to Amazon S3, data feeds from Amazon Kinesis and updates to tables in Amazon DynamoDB. Developers grant Lambda functions permission to access specific AWS resources, thereby enabling them to activate select AWS infrastructure components as necessary to perform their application logic. Part of the magic of Lambda functions is that they spin up infrastructure components as needed in response to incoming data feeds, and subsequently shut them down when they are not being used, thereby conserving resources and minimizing costs.
Last Thursday, November 13, Amazon Web Services announced the availability of EC2 Container Service (ECS) to facilitate the management of Docker containers on the Amazon Web Services platform. The announcement represents another notable endorsement of Docker technology by a major cloud vendor and promises to further catapult Docker’s container technology to the forefront of the cloud computing revolution. Docker, recall, is a platform that enables developers to create and transport distributed applications. Docker streamlines software development by ensuring that applications housed within Docker containers remain unchanged when transported from one environment to another, thereby reducing the probability that applications which run smoothly in test environments fail in production. Docker’s container technology also introduces greater efficiency in the creation of applications by means of well-defined parameters regarding application dependencies that enable developers to more effectively diagnose bugs and performance-related issues as they arise.
ECS enables Amazon customers to create clusters featuring thousands of containers across multiple Availability Zones. Moreover, ECS empowers customers to start and terminate containers, in addition to providing scheduling functionality that optimizes the collective performance of containers within a cluster. ECS also allows users to transport containers from the AWS platform to on-premises infrastructures and vice versa, while additionally providing deep AWS integration that allows customers to take advantage of AWS’s “Elastic IP addresses, resource tags, and Virtual Private Cloud (VPC)” and effectively transforms Docker containers into another layer of the AWS platform on par with EC2 and S3, according to a blog post by Amazon’s Jeff Barr. Amazon’s announcement of its EC2 Container Service for container management means that it joins Microsoft and Google in offering support for Docker deployment, management and orchestration. Google’s Kubernetes project enables Docker container management on the Google Cloud Platform, while Microsoft Azure recently announced support for Kubernetes on the Azure platform.
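In ECS’s vocabulary, containers are described by task definitions that the scheduler then places onto a cluster. A minimal, illustrative task definition might look like the following; the family name, image and resource values are hypothetical:

```json
{
  "family": "web-demo",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "cpu": 256,
      "memory": 128,
      "portMappings": [
        {"containerPort": 80, "hostPort": 80}
      ],
      "essential": true
    }
  ]
}
```

Registering a task definition like this and then running it against a cluster is what lets ECS treat containers as schedulable units rather than hand-managed processes.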
The bottom line here is that Docker’s ability to enable the deployment of applications within containers as opposed to virtual machines has captured the attention of developers and enterprise customers to such a degree that the most significant IaaS players in the industry are each announcing native or partner-based support for Docker technology. The key question now concerns the extent to which Docker usage proliferates to the point where it becomes the de facto standard for the deployment of applications, and whether its technology can support the convergence of cloud computing and Big Data in the form of data-intensive applications designed to perform analytics on real-time, streaming data. Docker users will also be interested in container management frameworks that interoperate across clouds such as Google Cloud Platform and Amazon Web Services, in contrast to management frameworks tied to a single cloud infrastructure.
On October 23, Amazon Web Services announced the launch of its 11th region in the form of the AWS EU (Frankfurt) region. The AWS EU (Frankfurt) region is the second region in Europe and will contain two Availability Zones upon launch. The availability of the AWS EU (Frankfurt) region helps German organizations comply with EU data protection requirements that impose constraints on the storage of data across national boundaries. JP Schmetz, Chief Scientist of Hubert Burda Media, remarked on the announcement as follows:
Now that AWS is available in Germany it gives our subsidiaries the option to move certain assets to the cloud. We have long had policies preventing data to be hosted outside of German soil and this new German region gives us the option to use AWS more meaningfully.
As Schmetz notes, German customers whose internal policies require intra-national hosting of data can now leverage the services of Amazon Web Services. In addition, German organizations that currently use AWS can take fuller advantage of the AWS platform’s offerings by expanding the scope of their usage to include production-grade workloads and sensitive data. The AWS EU (Frankfurt) region represents the second AWS region in Europe alongside the AWS EU (Ireland) region. AWS EU (Frankfurt) is expected to reduce latency for European customers and provide additional options for architecting disaster recovery solutions, in addition to helping select German customers feel more comfortable about hosting data on AWS and achieve compliance with their own internal policies with respect to data hosting and storage.
AT&T recently announced a collaboration with Amazon Web Services to integrate AWS into the AT&T NetBond Virtual Private Network (VPN) architecture. As a result of the integration, AWS customers will access AWS products and services via the AT&T NetBond infrastructure by means of a private network that bypasses the public internet. Because the NetBond infrastructure is accessed via a private connection, it delivers enhanced security, performance and reliability for Amazon Web Services customers who otherwise stand to endure the vagaries of public internet connections and their corresponding fluctuations in performance. The NetBond infrastructure additionally boasts network elasticity that adjusts network bandwidth in relation to the volume of network traffic, thereby enabling customers to save on network-related expenses. The collaboration between Amazon Web Services and AT&T with respect to NetBond illustrates an emerging trend in the IaaS space whereby infrastructures that connect public cloud platforms to a secure, private internet connection, such as the Equinix Cloud Exchange, proliferate as enterprises increasingly prioritize the security, performance and reliability of their cloud deployments. Existing AT&T NetBond customers include VMware, IBM, Equinix, HP and Box.
Amazon Web Services has initiated a massive reboot of EC2 instances in response to a security flaw that the company has yet to publicly disclose. AWS has notified customers to expect reboots of instances spanning multiple availability zones and regions. Many sources speculate that the reboot involves a security vulnerability in the open source Xen hypervisor, tracked as advisory XSA-108, the security patch for which is currently available only as pre-release, embargoed code. Depending on the time zone of the targeted instance, the reboot starts on September 25 or 26 and ends on September 30. Amazon has confirmed that details of a security flaw specific to the Xen hypervisor will be officially released on October 1, but that “following security best practices, the details of this update are embargoed until then.” According to a RightScale blog post, T1, T2, M2, R3, and HS1 instances will not be affected and less than 10% of all EC2 instances will be impacted by the reboot and concomitant security patch. Notably, customers who independently reboot their EC2 instances will not necessarily receive the requisite security patch on their host machine. The reboot represents one of Amazon’s largest in recent years and has the potential to affect application uptime, although the reboots will be staggered across availability zones to minimize service disruptions.