Merck Aims to Harness Capabilities of Amazon Alexa to Tackle Type 2 Diabetes with Alexa Diabetes Challenge

On April 10, pharmaceutical giant Merck announced the launch of the Alexa Diabetes Challenge, a competition aimed at encouraging the development of software applications that use Amazon’s Alexa technology to help patients recently diagnosed with Type 2 diabetes. The competition builds upon Merck’s exploration of Amazon Lex, the service that makes the machine learning technology underlying Amazon Alexa available to developers for building “conversational interfaces” into their applications, as a means of improving the management and amelioration of chronic diseases. Kimberly Park, Vice President, Customer Strategy & Innovation, Global Human Health, Merck, remarked on the significance of Merck’s use of Amazon Web Services to address chronic diseases as follows:

Merck has a deep heritage of tackling chronic diseases through our medicines, and we have been expanding into other ways to help, beyond the pill. We are excited to leverage the AWS Cloud to find innovative ways to leverage digital solutions, such as voice-activated technology, to help support better outcomes that could make a difference in the lives of those suffering from chronic conditions like diabetes.

Here, Park comments on Merck’s expansion into modalities of treatment that range “beyond the pill” in what amounts to a disruptive expansion of the company’s traditional business model. Powered by Luminary Labs and sponsored by Merck, the Alexa Diabetes Challenge will provide incentives for selected applicants to use Amazon Alexa as well as Amazon Web Services. In the first round of the competition, five entrants will each be awarded $25,000 in addition to $100,000 in AWS credits. The entrants will subsequently receive access to mentoring resources for their proposed solutions and have the opportunity to further develop and refine their apps before vying for the grand prize of $125,000. The Alexa Diabetes Challenge illustrates increased interest in exploring the intersection of machine learning, cloud computing and healthcare on the part of technology companies, healthcare organizations and pharmaceutical firms alike. Moreover, in the case of AWS, the collaboration with Merck underscores Amazon CEO Jeff Bezos’s interest in embracing the contemporary trend toward machine learning and artificial intelligence as elaborated in his recent letter to Amazon shareholders. Learn more about the Alexa Diabetes Challenge here.
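To make the “conversational interface” concept concrete, the sketch below shows how a diabetes-support application might call Amazon Lex from Python via the boto3 runtime client. The bot name, alias and utterance are hypothetical placeholders for illustration only, not artifacts of the challenge itself.

```python
import boto3

# Runtime client for Amazon Lex, the service behind Alexa-style
# conversational interfaces; the region is illustrative.
lex = boto3.client("lex-runtime", region_name="us-east-1")

# Hypothetical bot: a diabetes-support assistant that logs glucose
# readings. "GlucoseCoach" and "prod" are placeholder names.
response = lex.post_text(
    botName="GlucoseCoach",
    botAlias="prod",
    userId="patient-42",
    inputText="My fasting glucose this morning was 130",
)

# Lex returns the recognized intent, any extracted slot values, and a
# reply the application can speak or display back to the patient.
print(response["intentName"])  # e.g. "LogGlucoseReading"
print(response["slots"])       # e.g. {"reading": "130"}
print(response["message"])     # the bot's conversational response
```

Because Lex handles the speech recognition and natural language understanding, the application itself only needs to act on the structured intent and slots that come back.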

AWS S3 Outage Underscores The Need For Enhanced Risk and Control Frameworks For Cloud Services

The Amazon Web Services disruption that affected the Northern Virginia Region (US-EAST-1) on February 28 was caused by human error. At 9:37 AM PST, an AWS S3 team member who was debugging an issue related to the S3 billing system mistakenly removed a larger set of servers than intended, taking down the index and placement subsystems; the former was responsible for all of the metadata of S3 objects, whereas the placement subsystem managed the allocation of new storage. The inadvertent removal of these two subsystems necessitated a full restart of S3 that impaired S3’s ability to respond to requests. S3’s inability to respond to new requests subsequently affected related AWS services that depend on S3, such as Amazon EBS and AWS Lambda, as well as the launch of new Amazon EC2 instances. Moreover, the service disruption also prevented AWS from updating its AWS Service Health Dashboard from 9:37 AM PST to 11:37 AM PST. The full restart of the S3 subsystems took longer than expected, as noted in the following excerpt from the AWS post-mortem analysis of the service disruption:

S3 subsystems are designed to support the removal or failure of significant capacity with little or no customer impact. We build our systems with the assumption that things will occasionally fail, and we rely on the ability to remove and replace capacity as one of our core operational processes. While this is an operation that we have relied on to maintain our systems since the launch of S3, we have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years. S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected.

Both the index subsystem and the placement subsystem were restored by 1:54 PM PST. The recovery of dependent services took additional time, depending on the backlog each had accumulated during S3’s disruption and restoration. In response to the outage, AWS modified its capacity-removal tooling to remove capacity more slowly and to block removals that would take any subsystem below its minimum required capacity, and escalated the prioritization of re-architecting S3 into smaller “cells” that allow for accelerated recovery from a service disruption and faster restoration of routine operating capacity. The S3 outage affected customers such as Airbnb, New Relic, Slack, Docker, Expedia and Trello.
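The safeguard AWS describes, a floor below which capacity cannot be removed, is straightforward to express in code. The following is a hypothetical sketch of such a guard, not AWS’s actual tooling: it rejects any removal request that would leave a subsystem under its minimum required capacity and throttles how many servers can be removed per invocation.

```python
class CapacityRemovalError(Exception):
    """Raised when a removal request violates a safety constraint."""


def remove_capacity(subsystem, servers_to_remove, min_required, max_batch=2):
    """Remove servers from a subsystem's active fleet, with safeguards.

    Hypothetical guarded capacity-removal tool:
    - refuse removals that would drop the fleet below `min_required`
    - throttle removals to at most `max_batch` servers per invocation
    """
    if len(servers_to_remove) > max_batch:
        raise CapacityRemovalError(
            f"refusing to remove {len(servers_to_remove)} servers at once; "
            f"limit is {max_batch} per invocation"
        )
    remaining = len(subsystem["active_servers"]) - len(servers_to_remove)
    if remaining < min_required:
        raise CapacityRemovalError(
            f"removal would leave {remaining} servers, below the minimum "
            f"required capacity of {min_required}"
        )
    for server in servers_to_remove:
        subsystem["active_servers"].remove(server)


# A fat-fingered command targeting most of the fleet is now rejected
# instead of silently executing.
index_subsystem = {"active_servers": [f"ix-{n}" for n in range(10)]}
remove_capacity(index_subsystem, ["ix-0"], min_required=6)  # succeeds
# remove_capacity(index_subsystem, [f"ix-{n}" for n in range(1, 9)],
#                 min_required=6)  # raises CapacityRemovalError
```

A guard of this kind converts a destructive typo into a rejected command, which is precisely the class of control whose absence the outage exposed.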

The S3 outage underscores the lack of maturity of control frameworks for the operational processes specific to maintaining cloud services and platforms. That manual error could lead to a multi-hour disruption of Amazon S3, with downstream effects on other AWS services, represents a stunning indictment of AWS’s framework for mitigating risks to the availability and performance of its services and for monitoring the quality of its operational execution. The outage also sets the stage for competitors such as Microsoft Azure and Google Cloud Platform to capitalize on the negative publicity received by AWS by foregrounding the sophistication of their own risk and control frameworks for preventing, mitigating and minimizing service disruptions. More broadly, the February 28 outage underscores the need for risk and control IT frameworks that respond to the specificity of cloud platforms, in contradistinction to on-premise, enterprise IT. Furthermore, the outage strengthens the argument for a multi-cloud strategy whereby enterprises interested in ensuring business continuity use more than one public cloud vendor to mitigate the risks of a public cloud outage. Meanwhile, the continued pervasiveness of public cloud outages underscores the depth of the opportunity for implementing controls that mitigate risks to cloud services uptime and performance.
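As a minimal illustration of the multi-cloud argument, the sketch below writes the same object to both Amazon S3 and Google Cloud Storage, so that an outage at either provider leaves a readable copy at the other. The bucket names are hypothetical placeholders, and a production implementation would use asynchronous, queue-driven replication and reconciliation rather than this naive dual write.

```python
import boto3
from google.cloud import storage

# Hypothetical bucket names; both buckets are assumed to already exist.
S3_BUCKET = "example-continuity-primary"
GCS_BUCKET = "example-continuity-replica"

s3 = boto3.client("s3")
gcs = storage.Client()


def dual_write(key: str, data: bytes) -> None:
    """Write one object to two public clouds.

    A naive dual write for illustration: if either call fails, the
    caller must reconcile the two stores, which is why real deployments
    prefer asynchronous replication with retries.
    """
    s3.put_object(Bucket=S3_BUCKET, Key=key, Body=data)
    gcs.bucket(GCS_BUCKET).blob(key).upload_from_string(data)


dual_write("reports/2017-02-28.json", b'{"status": "replicated"}')
```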

AWS Snowmobile Transfers Exabytes Of Data To Amazon S3 or Amazon Glacier via Data Truck

Amazon today announced details of AWS Snowmobile, a 45-foot ruggedized shipping container, hauled by truck, that accelerates the process of transferring on-premise data to the Amazon cloud for customers with petabytes or exabytes of data to migrate. Customers with massive volumes of data can connect AWS Snowmobile to their network as an NFS-mounted volume and use their existing applications to transfer data that is ultimately bound for Amazon S3 or Amazon Glacier. AWS Snowmobile requires 350 kW of power and features rugged physical protection as well as data protection functionality such as encryption and GPS tracking. AWS Professional Services helps customers install and set up AWS Snowmobile, subsequently allowing them to reap the benefits of a process-driven infrastructure for transferring massive amounts of data securely to the cloud. AWS Snowmobile takes the hassle out of transferring exabyte-scale data to the Amazon cloud and offers a solution to the problem of enterprise-level workload migration.
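Because Snowmobile presents itself as an NFS-mounted volume, the transfer itself can use whatever tooling a customer already runs. The sketch below is a hypothetical illustration that mirrors a local data directory onto an assumed Snowmobile mount point with rsync; both paths are placeholders.

```python
import subprocess

# Hypothetical paths: a local archive and the NFS mount point at which
# the Snowmobile appliance is assumed to be attached to the network.
SOURCE_DIR = "/data/satellite-imagery/"
SNOWMOBILE_MOUNT = "/mnt/snowmobile/"

# Mirror the source tree onto the appliance. rsync resumes cleanly if
# interrupted, which matters for transfers measured in petabytes.
subprocess.run(
    ["rsync", "-a", "--partial", "--info=progress2",
     SOURCE_DIR, SNOWMOBILE_MOUNT],
    check=True,
)
```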

DigitalGlobe currently uses AWS Snowmobile to transfer 100 PB of high-resolution satellite imagery to Amazon Glacier with a celerity and efficiency that was heretofore unavailable, thereby allowing its customers greater access to its imagery archive and facilitating the execution of distributed analytics. DigitalGlobe characterizes AWS Snowmobile as a “game changer,” but the larger question is whether customers will place their bets on such a brazenly un-Amazon-like solution, given its lack of technological elegance and the sheer crudity of a truck showing up at a company’s doorstep to haul away petabytes of data in the era of digital transformation. The other obvious question is how many customers will jump at the opportunity to move petabytes of data to the Amazon cloud but, as the example of DigitalGlobe illustrates, the urgency of the business need to transfer data to the cloud may well override the lack of elegance of the solution.

AWS Announces EC2 Price Cuts Starting December 1

Amazon Web Services just announced a price cut on its C4, M4 and T2 instances that takes effect on December 1. The cuts amount to up to 5% for C4 instances and up to 10% for M4 and T2 instances in the US East (Northern Virginia) region, with significantly larger reductions of up to 25% in the Asia Pacific (Mumbai) and Asia Pacific (Singapore) regions. The price reduction, which represents the 53rd price cut by Amazon Web Services, applies to all AWS regions and varies by platform and region. The announcement comes in the wake of notable declines in tech stocks amid market uncertainty about President-elect Donald Trump’s positions on offshore tech-related manufacturing and H-1B visas, not to mention the question of Amazon CEO Jeff Bezos’s ownership of the Washington Post, whose coverage was less than flattering of Trump in the months leading up to the election. Importantly, however, Amazon is sufficiently bullish on its own position that it can afford to make price cuts during a time of significant uncertainty for the technology industry more generally and, in so doing, pave the way for yet another round of IaaS price cuts from competitors such as Microsoft Azure and the Google Cloud Platform.
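To see what such percentage cuts mean at the hourly-rate level, the sketch below computes post-cut prices from a baseline rate. The baseline figures are illustrative placeholders, not actual AWS list prices.

```python
# Illustrative baseline on-demand hourly rates (placeholders, not AWS
# list prices), paired with the announced maximum cut for that family
# and region: (USD/hour, maximum cut).
baseline_rates = {
    ("c4.large", "us-east-1"): (0.100, 0.05),
    ("m4.large", "us-east-1"): (0.108, 0.10),
    ("t2.medium", "ap-south-1"): (0.068, 0.25),
}

for (instance, region), (rate, cut) in baseline_rates.items():
    new_rate = rate * (1 - cut)
    print(f"{instance} in {region}: ${rate:.3f}/hr -> "
          f"${new_rate:.4f}/hr ({cut:.0%} cut)")
```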

Amazon’s Q3 Earnings Miss Sparks Conversation About Ability Of AWS To Fend Off Competition From Microsoft, Oracle and Google

Amazon reported earnings per share of 52 cents on Thursday, missing analysts’ target of 78 cents per share by a margin that, in combination with other data points from the earnings report, sent the stock down 5% in trading on Friday. For the quarter ending September 30, 2016, the company reported revenue of $32.71 billion, slightly exceeding Wall Street estimates of $32.69 billion. Meanwhile, Amazon’s fourth-quarter revenue guidance of $42 billion to $45.5 billion carried a midpoint below Wall Street’s expectation of $44.58 billion. To make matters even more worrisome for investors, Amazon projected operating income of between zero and $1.25 billion for the fourth quarter, whereas Wall Street had projected $1.62 billion. On a positive note, the company’s cloud services business unit, Amazon Web Services, posted revenue of $3.23 billion, an increase of 55% over the $2.08 billion from the third quarter of last year and ahead of Wall Street’s expectation of $3.17 billion. Amazon attributed its less than stellar earnings report to heavy investments in original video content for Amazon Prime as well as in fulfillment centers. Nevertheless, Amazon’s earnings per share miss and third quarter results more generally raised eyebrows in both the technology and investor communities after a year of impressive growth and a preserved lead in the cloud computing space despite intensified competition. Axcient CEO Justin Moore remarked on Amazon’s earnings miss as follows:

Despite the Q3 EPS miss, over the longer term, Amazon will continue to be a dominant force in both e-commerce and enterprise infrastructure – an incredible feat given that the customer sets are on opposite ends of the spectrum. Amazon has been very clear that it will continue to focus on growth and not profitability. Investors have signed up for this approach for years, so the blip in the stock will be tempered. Bezos has Amazon ‘primed’ for a dominant push to 2020 – and beyond. AWS and Prime continue to be Amazon’s primary growth and revenue drivers as the Seattle company broadens its lead in online commerce and cloud-computing services. The only real question for Amazon comes down to two factors: 1) its ability to appease investors’ appetite for ongoing record growth and 2) whether it can continue to maintain its lead over Microsoft, Google and Oracle, who are equally committed to winning the cloud and have the benefit of being second movers, which can be an advantage in these situations as infrastructure ages out and size and scale become inhibitors to innovation and performance. Expect to see all of them leverage M&A to acquire their way to technical leadership and hold an edge over the competition. That said, while there is plenty of startup talent to be bought at a premium, I don’t see Amazon losing this race anytime soon.

Here, Moore opines that Amazon’s ability to manage investor expectations and shake off the “second mover” advantage enjoyed by competitors such as Microsoft, Google and Oracle will determine whether it can continue the dominance in “e-commerce and enterprise infrastructure” that it has delivered to date. Moore also notes that second movers stand to benefit from their ability to outpace the “size and scale” of their competitors with enhanced agility and innovation. Herein lie the stakes of Bezos’s gamble on innovation and investment in Amazon’s infrastructure: if Amazon can indeed innovate at the rapidly expanding scale of its business and cloud operations with the agility of its competitors, re-investing the resources acquired through its meteoric growth to date, then it stands poised to radically reconfigure the technology landscape over the next ten years in ways analogous to the disruption that Amazon Web Services brought to the cloud computing landscape. But if the size and complexity of Amazon’s infrastructure militates against its ability to continue delivering innovation, the chances of competitors such as Oracle and Google catching up to it, at least on the cloud services front, increase dramatically. According to Amazon’s CFO, Brian Olsavsky, Amazon built 18 new fulfillment centers in the third quarter while investing heavily in video content to enable Amazon Prime’s video offerings to compete with Netflix. With respect to Amazon Web Services, one obvious question investors may have following last week’s earnings report concerns how Amazon intends to invest in AWS in response to Google’s rebranding of its cloud-based products and services, coupled with Google’s aggressive emphasis on professional services for the enterprise.
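For readers checking the figures above, the reported growth rate and the guidance shortfall both follow directly from the numbers in this report:

```python
# Figures as reported above (USD billions).
aws_q3_2015, aws_q3_2016 = 2.08, 3.23
guidance_low, guidance_high = 42.0, 45.5
street_q4_estimate = 44.58

growth = (aws_q3_2016 - aws_q3_2015) / aws_q3_2015
midpoint = (guidance_low + guidance_high) / 2

print(f"AWS YoY revenue growth: {growth:.1%}")         # ~55.3%
print(f"Q4 guidance midpoint: ${midpoint:.2f}B vs. "
      f"Street at ${street_q4_estimate:.2f}B")         # 43.75 < 44.58
```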

Pivotal Cloud Foundry Adds Google Cloud Platform To Its Roster Of Major IaaS Partners

Pivotal and the Google Cloud Platform have announced a collaboration whereby Pivotal Cloud Foundry, the platform as a service based on the open source Cloud Foundry project, is now generally available on the Google Cloud Platform. Pivotal’s collaboration with Google builds upon existing partnerships with Amazon Web Services and Microsoft Azure and gives Pivotal expanded access to developers building applications on cloud-based platforms. Key perks of using the Google Cloud Platform for Pivotal Cloud Foundry applications include access to Google Cloud’s load balancing technology as well as Google’s data and machine learning services such as Google BigQuery, Google Cloud SQL and the Google Cloud Natural Language API. The availability of Google’s data and machine learning services testifies to an impressive depth of integration between Pivotal Cloud Foundry and the Google Cloud Platform, one made possible by custom-built service brokers created by Google’s engineering team. The ability to create Pivotal Cloud Foundry-based apps on the Google Cloud Platform, with full access to Google’s enviable roster of data and machine learning products, gives developers a rich portfolio of battle-tested building blocks from which to build and iteratively enhance their applications. Stay tuned to the cadence of the integration between Pivotal Cloud Foundry and the Google Cloud Platform to see whether it renders Google Cloud Platform a more promising partner for Pivotal Cloud Foundry developers and customers than Amazon Web Services or Microsoft Azure. In the here and now, however, Pivotal, which is part of Dell Technologies, stands positioned to expand its reach among enterprise customers via a partnership differentiated by access to Google’s renowned Big Data and machine learning technologies.
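In Cloud Foundry, a service broker provisions a backing service and injects its credentials into a bound application’s VCAP_SERVICES environment variable. The sketch below shows how a Pivotal Cloud Foundry app might read broker-supplied credentials and query Google BigQuery; the service label and credential field names are assumptions for illustration, since they depend on how Google’s brokers expose each service in their catalog.

```python
import base64
import json
import os

from google.cloud import bigquery

# Cloud Foundry injects credentials for bound services into the
# VCAP_SERVICES environment variable as JSON.
vcap = json.loads(os.environ["VCAP_SERVICES"])

# "google-bigquery" and "PrivateKeyData" are assumed names for the
# broker's service label and credential field (here assumed to hold a
# base64-encoded service account key); actual names depend on the
# broker's catalog.
creds = vcap["google-bigquery"][0]["credentials"]
key_info = json.loads(base64.b64decode(creds["PrivateKeyData"]))

client = bigquery.Client.from_service_account_info(key_info)

# Run a trivial query against a BigQuery public dataset to verify the
# binding works end to end.
sql = ("SELECT word, word_count "
       "FROM `bigquery-public-data.samples.shakespeare` LIMIT 5")
for row in client.query(sql).result():
    print(row.word, row.word_count)
```

The appeal of the broker model is that the application never handles long-lived secrets directly in its source; binding and unbinding a service rotates the credentials through the platform.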