Airbnb Raises $1 Billion in Series F Funding and Vaults to Valuation of $31 Billion

Airbnb has finalized $1 billion in Series F funding, an increase from the $555M it announced as part of its September 2016 capital raise. According to a recent SEC filing, the round brings Airbnb’s valuation to roughly $31 billion, slightly above the $30 billion valuation reported in September 2016. Airbnb provides a web-based platform that allows individuals to book accommodations in private homes offered by other members instead of hotels. The company recently expanded its offerings to include “Trips,” which allow members to sign up for itineraries and experiences in the destination of their choice. Operating in 65,000 cities across 191 countries, Airbnb became profitable in the second quarter of 2016 and has no immediate plans to go public. The company’s decision to remain private, for now, illustrates the ready access to capital enjoyed by technology companies and the corresponding deferral of an IPO until the product matures or legal and regulatory issues related to the disclosure of business practices are resolved.


Pivotal and Datometry Collaborate to Enable Customers To Run Teradata Workloads on Pivotal Greenplum

Pivotal has partnered with Datometry to enable enterprises to run Teradata-based workloads on Pivotal Greenplum, Pivotal’s data warehouse platform. The collaboration empowers enterprises to move Teradata workloads to Greenplum without costly and complex database migrations or application re-architecting. By using Datometry’s Hyper-Q for Pivotal Data Suite, customers can run Teradata workloads natively on Pivotal Greenplum, thereby taking advantage of Greenplum’s open source, massively parallel processing architecture and its Pivotal Query Optimizer, which scales analytics over massive datasets while preserving throughput, latency and overall query performance. Available on AWS and Microsoft Azure in addition to on-premise deployments, Pivotal Greenplum gives customers enhanced flexibility and speed when moving Teradata workloads to the cloud. The collaboration radically accelerates the ability of customers to move Teradata workloads to the cloud and underscores the relevance of Pivotal Greenplum as a massively parallel processing data warehouse that supports loading speeds of “greater than 10 terabytes per hour, per rack” in addition to machine learning and SQL functionality. Expect the initiative to accelerate Teradata workload cloud migrations while foregrounding the importance of Pivotal Greenplum to big data warehousing and analytics, as well as the innovation of Datometry’s Hyper-Q data virtualization platform, which aims to free data from vendor lock-in by allowing applications written for one database to run on another.
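To make the migration target concrete: because Greenplum is built on PostgreSQL, applications can reach it with a standard PostgreSQL driver such as psycopg2. The sketch below assumes a hypothetical Greenplum cluster at greenplum.example.com with a sales table already loaded, and illustrates the kind of analytic query a migrated workload might run directly against Greenplum; it does not depict Datometry’s Hyper-Q translation layer itself.

```python
# A minimal sketch, assuming a Greenplum cluster reachable at the hypothetical
# host "greenplum.example.com" with a "sales" fact table already loaded.
# Greenplum speaks the PostgreSQL wire protocol, so the standard psycopg2
# driver can issue analytic SQL against it.
import psycopg2

conn = psycopg2.connect(
    host="greenplum.example.com",  # hypothetical host
    port=5432,
    dbname="analytics",            # hypothetical database
    user="gpadmin",
    password="secret",
)

with conn, conn.cursor() as cur:
    # An aggregate query of the kind a migrated Teradata workload might issue;
    # Greenplum's MPP architecture distributes the scan and aggregation
    # across its segment hosts.
    cur.execute(
        """
        SELECT region, SUM(revenue) AS total_revenue
        FROM sales
        WHERE sale_date >= %s
        GROUP BY region
        ORDER BY total_revenue DESC
        """,
        ("2017-01-01",),
    )
    for region, total_revenue in cur.fetchall():
        print(region, total_revenue)

conn.close()
```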

AWS S3 Outage Underscores The Need For Enhanced Risk and Control Frameworks For Cloud Services

The Amazon Web Services disruption that affected the Northern Virginia Region (US-EAST-1) on February 28 was caused by human error. At 9:37 AM PST, an AWS S3 team member who was debugging an issue related to the S3 billing system mistakenly removed a larger set of servers than intended, taking down the index subsystem, which manages the metadata for all S3 objects, and the placement subsystem, which manages the allocation of new storage. The inadvertent removal of capacity from these two subsystems required a full restart of both, during which S3 was unable to respond to requests. S3’s inability to respond to new requests subsequently affected related AWS services that depend on S3 such as EBS, AWS Lambda and the launch of new Amazon EC2 instances. Moreover, the disruption to S3 also prevented AWS from updating its AWS Service Health Dashboard from 9:37 AM PST to 11:37 AM PST. The full restart of the affected S3 subsystems took longer than expected, as noted in the following excerpt from the AWS post-mortem analysis of the service disruption:

S3 subsystems are designed to support the removal or failure of significant capacity with little or no customer impact. We build our systems with the assumption that things will occasionally fail, and we rely on the ability to remove and replace capacity as one of our core operational processes. While this is an operation that we have relied on to maintain our systems since the launch of S3, we have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years. S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected.

Both the index subsystem and the placement subsystem were restored by 1:54 PM PST. The recovery of dependent services took additional time depending on the backlog each accumulated during S3’s disruption and restoration. As a result of the outage, AWS has begun examining risks associated with operational tools and processes that remove capacity and has escalated the priority of re-architecting S3 into smaller “cells” that allow for faster recovery from a service disruption and restoration of routine operating capacity. The S3 outage affected customers such as Airbnb, New Relic, Slack, Docker, Expedia and Trello.

The S3 outage underscores the immaturity of control frameworks for operational processes specific to the maintenance and quality control of cloud services and platforms. That manual error could cause a multi-hour disruption to Amazon S3, with downstream effects on other AWS services, represents a stunning indictment of AWS’s risk management and control framework for safeguarding service availability and performance and for monitoring and responding to operational quality. The outage pointedly illustrates the immaturity of risk, control and automation frameworks at AWS and sets the stage for competitors such as Microsoft Azure and Google Cloud Platform to capitalize on the negative publicity by foregrounding the sophistication of their own frameworks for preventing, mitigating and minimizing service disruptions. Moreover, the February 28 outage underscores the need for IT risk and control frameworks that address the specific risks of cloud platforms, as distinct from those of on-premise, enterprise IT. Furthermore, the outage strengthens the argument for a multi-cloud strategy: enterprises interested in ensuring business continuity can use more than one public cloud vendor, or more than one region, to mitigate the risk of a single provider’s outage. Meanwhile, the continued recurrence of public cloud outages underscores the depth of the opportunity for implementing controls that mitigate risks to cloud services uptime and performance.
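To make the resilience argument concrete, the following is a minimal sketch of a failover read using boto3, assuming the object has already been replicated to a bucket in a second region (for example via cross-region replication); the bucket names and regions shown are hypothetical, and the same pattern could target an S3-compatible endpoint at a second provider.

```python
# A minimal sketch of a regional failover read, assuming the object has
# already been replicated to a bucket in a second region. Bucket names
# and regions here are hypothetical.
import boto3
from botocore.exceptions import BotoCoreError, ClientError

PRIMARY = {"region_name": "us-east-1", "bucket": "app-assets-primary"}
FALLBACK = {"region_name": "us-west-2", "bucket": "app-assets-replica"}


def fetch_object(key):
    """Try the primary region first; fall back to the replica on failure."""
    for target in (PRIMARY, FALLBACK):
        s3 = boto3.client("s3", region_name=target["region_name"])
        try:
            response = s3.get_object(Bucket=target["bucket"], Key=key)
            return response["Body"].read()
        except (BotoCoreError, ClientError):
            # Primary unavailable (as on February 28); try the next target.
            continue
    raise RuntimeError("object %r unavailable in all configured regions" % key)


if __name__ == "__main__":
    data = fetch_object("config/settings.json")
    print(len(data), "bytes retrieved")
```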

Storj Labs Finalizes $3M In Seed Funding For Its Open Source Distributed Cloud Storage Platform

Storj Labs has announced the finalization of $3M in seed funding for its open source distributed cloud storage platform, which delivers peer-to-peer storage using blockchain technology and cryptography. Google Ventures, Qualcomm Ventures, Techstars and angel investors from venture capital firms as well as from Cockroach Labs, Ionic Security and Pindrop Security are amongst the company’s early investors. The Storj Labs platform relies on “farmers” who rent out space on their own hard drives and storage infrastructure to other users. Storj Labs claims that the decentralization of its storage platform enables enhanced security and lower costs compared to storage solutions offered by vendors such as Amazon Web Services, Microsoft Azure and the Google Cloud Platform. Because only end users hold the encryption keys to their stored data, farmers cannot access the data stored on the infrastructure they provide to Storj Labs. Furthermore, the inherent decentralization of the platform means hackers and data thieves have no central servers to attack, compromise or destroy. Storj Labs currently boasts over 15,000 API users and more than 7,500 farmers. The company aims to disrupt cloud storage with improved performance, stronger security and lower costs by means of its decentralized, peer-to-peer, client-side encrypted storage solution.
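The security claim rests on client-side encryption: the key never leaves the client, so a farmer holding the ciphertext cannot read it. The sketch below is a conceptual illustration of that idea using the Python cryptography package’s Fernet primitive; it is not Storj’s actual protocol, which additionally shards and distributes the encrypted data across farmers.

```python
# A conceptual sketch of client-side encryption, assuming the "cryptography"
# package is installed. This is not Storj's actual protocol -- it simply
# illustrates why a farmer holding the ciphertext cannot read the data:
# only the client ever holds the key.
from cryptography.fernet import Fernet

# The key is generated and kept on the client; it is never sent to farmers.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"quarterly financials - confidential"
ciphertext = cipher.encrypt(plaintext)

# Only the ciphertext would be sharded and distributed to farmers' drives.
print("stored on farmers:", ciphertext[:32], b"...")

# The client, holding the key, can recover the original data.
assert cipher.decrypt(ciphertext) == plaintext
```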

Google Cloud Platform Is First Public Cloud To Make Available Intel’s Xeon Skylake Processor

As noted by Urs Hölzle, SVP of Google Cloud Infrastructure, in a recent blog post, Google Cloud Platform has adopted the Intel Xeon Processor Skylake, a next-generation microprocessor optimized for high performance compute workloads. Skylake’s Intel Advanced Vector Extensions render it exceptionally well suited for 3D modeling, compute-intensive data analytics, genomics, scientific modeling and engineering simulations. Moreover, Google Cloud Infrastructure has optimized Skylake for Google’s VMs to ensure Google Cloud Platform customers receive the maximum benefit from Skylake’s capabilities. The availability of the Intel Xeon Processor Skylake on Google Cloud Platform fulfills a promise made by Google and Intel in November 2016 to bring Intel’s most recent microprocessor to the Google Cloud Platform. Google Cloud Platform now claims the distinction of being the first public cloud to offer the Intel Xeon Processor Skylake, a hugely important win for Intel as it faces increased competition from AMD, particularly given the forthcoming launch of the AMD Ryzen microprocessor. Meanwhile, Google stands to benefit from Skylake’s ability to serve customers whose applications require more intensive computational power and modeling capabilities. Importantly, the availability of the Xeon Skylake processor on Google Cloud Platform reflects a broader partnership between Google and Intel aimed at accelerating enterprise cloud adoption.

Datadog Announces General Availability Of APM Solution That Enables Integrated Insights Into Application and Infrastructure Performance From The Datadog Platform

On February 15, Datadog announced the general availability of its Application Performance Monitoring (APM) solution. Datadog APM complements the company’s infrastructure monitoring capabilities, enabling it to deliver a holistic monitoring platform that absolves customers of the need to implement siloed application and infrastructure monitoring tools. Amit Agarwal, Chief Product Officer at Datadog, remarked on the significance of the company’s application monitoring capabilities as follows:

Based on customer demand, we are blurring the distinction between infrastructure monitoring and application performance monitoring by offering both within Datadog. We want to enable enterprises to benefit from APM deployed broadly across all of their hybrid or private cloud infrastructure that is running code. Traditionally, companies with scaling infrastructure are only deploying APM on a small percent of their applications or machines in order to cut down on costs.

Here, Agarwal elaborates on how Datadog’s APM capabilities encourage customers to apply application performance monitoring across a broader share of their application portfolio, in contrast to the traditional practice of instrumenting only “a small percent of their applications or machines” in order to curb costs. Moreover, Datadog’s APM offering facilitates application performance monitoring for applications deployed within both private and hybrid cloud environments. Conceived in response to customer requests, the Datadog APM solution gives customers not only the combination of application and infrastructure performance monitoring, but more importantly, insight into the intersection between infrastructure and applications and the ways they reciprocally influence one another.

Key features of the Datadog APM platform include the detection of anomalies via machine learning-based algorithms, flame graphs that identify the most frequently used code paths, customizable dashboards and the ability to track end user application requests across host machines and other related infrastructure and application components. Datadog APM empowers Datadog to disrupt the IT monitoring space by leveraging the company’s machine learning and artificial intelligence technologies to understand the reciprocity between applications and infrastructure: the effect of applications on infrastructure and, conversely, the impact of infrastructure on applications. Importantly, general availability positions the solution to go head to head with the likes of New Relic, Splunk and AppDynamics by taking advantage of Datadog’s holistic analytics and advanced data visualization capabilities.
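To illustrate how request tracking across services typically looks from the application side, the following is a minimal sketch of custom instrumentation with Datadog’s ddtrace Python library. It assumes a Datadog Agent is already running locally to receive traces, and the service, resource and tag names are hypothetical.

```python
# A minimal sketch of custom instrumentation with Datadog's "ddtrace" Python
# library, assuming a Datadog Agent is already running locally to receive
# traces. Service, resource, and tag names here are hypothetical.
import time

from ddtrace import tracer


def handle_checkout(order_id):
    # Wrap a logical unit of work in a span; Datadog APM stitches spans from
    # different hosts and services into a single distributed trace.
    with tracer.trace("checkout.process", service="web-store",
                      resource="POST /checkout") as span:
        span.set_tag("order.id", order_id)

        with tracer.trace("checkout.payment", service="web-store"):
            time.sleep(0.05)  # stand-in for a call to a payment backend

        with tracer.trace("checkout.inventory", service="web-store"):
            time.sleep(0.02)  # stand-in for an inventory update


if __name__ == "__main__":
    handle_checkout(order_id=12345)
```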

Expect Datadog to continue enhancing its APM solution and aggressively expanding market share in the application monitoring space now that its APM solution is generally available.