Avere Systems Raises $14M in Series E Funding for Storage Solutions for Hybrid Clouds

Avere Systems recently announced the finalization of $14M in Series E funding. The round features a new investor, Google Inc., alongside existing investors Menlo Ventures, Norwest Venture Partners, Lightspeed Venture Partners, Tenaya Capital and Western Digital Capital. Avere Systems specializes in storage products and solutions that optimize the performance of hybrid cloud storage infrastructures by empowering customers to access stored files, whether they reside in the cloud or on-premises, without sacrificing file availability or performance. Recognizing the need to manage, scale and optimize both on-premises and cloud-based storage, Avere Systems delivers a portfolio of products and services that ease enterprise adoption of cloud computing, with a specialization in the intersection of storage and hybrid cloud deployments. Based in Pittsburgh, PA, the company has raised a total of $97M to date.

The Series E raise is notable both for Google’s addition to Avere’s roster of investors and for the continued support of Western Digital Capital, the venture capital arm of Western Digital. Avere Systems’ ability to attract high-profile investors with deep stakes in enterprise storage, such as Google and Western Digital, speaks to its traction in the rapidly growing hybrid cloud space. That Avere Systems has carved out a niche in facilitating high-performance access to data for both computational and long-term storage purposes underscores the success of its strategy of serving organizations that leverage a mix of on-premises and cloud-based infrastructure. The funding will be used to support product development and innovation for Avere Systems’ customers, with a specific focus on expanding its portfolio of products for the hybrid cloud.

Jamf Announces Readiness To Deliver Zero-Day Support for Spring 2017 Apple OS Releases, Including Apple TV

Apple device management leader Jamf today announced support for the latest releases of Apple iOS, macOS and tvOS. Jamf’s support for the upcoming releases of iOS 10.3, macOS 10.12.4 and tvOS 10.2 continues its long-standing history of delivering zero-day support for new Apple operating system releases and upgrades. Jamf’s support for Apple TV empowers organizations such as hotels and schools with large Apple TV installations to customize settings and landing screens to ensure a consistent user experience. Moreover, Jamf gives customers the ability to integrate deployments of Apple devices such as iPhones, iPads and Apple TVs in ways that deliver enhanced control over synchronization across devices and a correspondingly seamless interconnection between them.

Jamf Pro, a solution for professional Apple administrators, extends Jamf’s version-management support to Apple device security settings and configuration management, as well as the ability to create Apple classrooms and enriched device management functionality for Apple TV. Today’s announcement consolidates Jamf’s leadership in the Apple device management space and particularly underscores the maturation of its platform, as exemplified by its enhanced capabilities for Apple TV management. As Apple TV adoption proliferates, expect Jamf to continue expanding its market share in Apple device management through its ability to cater not only to Apple TV but also to the interrelationship between all Apple devices, both at the level of version upgrades and in the customization of user experience, settings and configuration management. The graphic below encapsulates Jamf’s commitment to simplicity with respect to both device management and user experience:

[Jamf graphic: device management and user experience]

Pyze Expands Support for App Development Platforms for its Mobile App Business Intelligence Platform

Pyze, a leader in business intelligence for mobile apps, recently announced the addition of support for popular app development platforms including CMS platforms (WordPress, Squarespace, Wix), Xamarin, React Native, Apple tvOS and watchOS. Pyze’s expansion of developer platform support builds upon its existing support for platforms that include “iOS, iMessage Apps, Android, Web and SaaS (React.js, Angular.js, Vue.js), and Unity,” as noted in a press release. By facilitating the integration of its platform with a broader range of mobile apps, Pyze strengthens its ability to expand market share in the landscape of business intelligence for mobile app developers. Pyze aims to empower mobile app developers to increase customer engagement and revenue by means of segmentation analytics that deliver actionable insights regarding the behavior of millions of app users. Pyze’s Growth Intelligence platform was recently recognized by Tech Trailblazers as a winner in the Mobile Trailblazers category.

Google Announces Always Free Tier and $300 Credit for Google Cloud Platform to Lure New Customers

Last week, Google announced details of Always Free, a free tier of Google Cloud Platform services that allows users to become familiar with the Google Cloud Platform’s suite of offerings. The free tier gives users access to 15 Google Cloud Platform services, including Google Compute Engine, Google App Engine, Google Cloud Datastore, Google Cloud Functions, Google Stackdriver, Google BigQuery Public Datasets and Google Container Engine. In the case of Google Compute Engine, users have access to one f1-micro instance each month, with the additional constraint that no more than 8 virtual CPU cores may run concurrently, as noted below by the Google Cloud Platform:

You can have no more than 8 cores (or Virtual CPUs) running at the same time. For example, you can launch eight n1-standard-1 machines, or two n1-standard-4 machines, but you can’t launch a n1-standard-16 machine. For more information about the types of virtual machines available and the number of cores they use, see Machine type pricing.
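To make the arithmetic of this cap concrete, the short Python sketch below tallies the vCPUs of a requested set of instances against the 8-core limit. The vCPU counts follow Google’s standard machine-type naming (an n1-standard-N machine provides N vCPUs, consistent with the examples above); the helper functions themselves are hypothetical and not part of any Google SDK.

```python
# Illustrative check of the Always Free tier's 8-vCPU concurrency cap.
# The helpers are hypothetical; only the 8-vCPU limit and the
# n1-standard-N naming convention come from Google's documentation.

MAX_CONCURRENT_VCPUS = 8

def vcpus(machine_type: str) -> int:
    """Return the vCPU count encoded in an n1-standard-N machine type."""
    return int(machine_type.rsplit("-", 1)[-1])

def within_cap(machine_types: list[str]) -> bool:
    """True if the requested instances stay at or under 8 concurrent vCPUs."""
    return sum(vcpus(m) for m in machine_types) <= MAX_CONCURRENT_VCPUS

print(within_cap(["n1-standard-1"] * 8))  # True: eight 1-vCPU machines
print(within_cap(["n1-standard-4"] * 2))  # True: two 4-vCPU machines
print(within_cap(["n1-standard-16"]))     # False: a single 16-vCPU machine
```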

Availability of the free tier’s Compute Engine offering is limited to U.S. regions and includes 30 GB-months of HDD storage and 5 GB-months of snapshot storage. Alongside the Always Free tier, Google also detailed a $300 credit that customers can apply to the usage of Google Cloud Platform products to further augment their ability to experiment with the capabilities of GCP services. The $300 credit applies to all Google Cloud Platform products and is valid for 12 months.

Announced by Sam Ramji, VP of Product Development at Google Cloud, at the Google Cloud Next conference on Friday, March 10, the Always Free tier and the $300 credit represent an important sales and marketing initiative designed to lure new customers into trying the features and functionality of the Google Cloud Platform. As enterprises increasingly adopt a multi-cloud strategy, using multiple public clouds to minimize the threats posed by vendor lock-in and the effects of cloud outages, Google Cloud Platform’s Always Free tier promises to increase its market share in a field dominated by Amazon Web Services that also includes Microsoft Azure, Oracle and IBM. Meanwhile, Google’s ability to onboard new customers via its Always Free tier raises the obvious question of whether it can retain those customers by pairing the free tier with an aggressive sales and customer satisfaction operation capable of eliciting and responding to the needs of its growing customer base.

Airbnb Raises $1 Billion in Series F Funding and Vaults to Valuation of $31 Billion

Airbnb has finalized $1 billion in Series F funding, an increase over the $555M it announced as part of its September 2016 capital raise. According to a recent SEC filing, the Series F round brings Airbnb’s valuation to roughly $31 billion, slightly above the $30 billion valuation reported in September 2016. Airbnb provides a web-based platform that allows individuals to find accommodation in private homes offered by other members as an alternative to hotels. The company recently expanded its offerings to include “Trips,” which allow members to sign up for itineraries and experiences in the destination of their choice. Operating in 65,000 cities spanning 191 countries, Airbnb became profitable in the second quarter of 2016 and has no immediate plans to go public. The company’s decision to remain private, for now, is illustrative of the ready access to capital enjoyed by technology companies and the corresponding deferral of going public until the product matures or legal and regulatory issues related to the disclosure of business practices are resolved.

Pivotal and Datometry Collaborate to Enable Customers To Run Teradata Workloads on Pivotal Greenplum

Pivotal has partnered with Datometry to enable enterprises to run Teradata-based workloads on Pivotal Greenplum’s cloud-based data warehouse. The collaboration empowers enterprises to transfer Teradata workloads to Pivotal’s Greenplum platform without costly and complex database migrations or application re-architecting. By using Datometry’s Hyper-Q for Pivotal Data Suite, customers can migrate Teradata workloads to run natively on Pivotal Greenplum, thereby taking advantage of Pivotal Greenplum’s open-source, massively parallel processing (MPP) architecture and its Pivotal Query Optimizer, which specializes in scaling analytics on massive datasets while preserving throughput, latency and query performance more generally. Available on AWS and Microsoft Azure in addition to on-premises deployments, Pivotal Greenplum gives customers enhanced flexibility and speed in migrating Teradata workloads to the cloud. The collaboration between Pivotal and Datometry radically accelerates the ability of customers to move Teradata workloads to the cloud and correspondingly underscores the relevance of Pivotal Greenplum as an MPP data warehouse that supports loading speeds of “greater than 10 terabytes per hour, per rack” in addition to machine learning and SQL functionality. Expect the initiative led by Datometry and Pivotal to accelerate Teradata workload cloud migrations while concomitantly foregrounding the importance of Pivotal Greenplum to big data warehousing and analytics, as well as the innovation of Datometry’s Hyper-Q data virtualization platform, which aspires to free data from vendor lock-in by making applications translatable from one database to another.
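Because Greenplum is derived from PostgreSQL and speaks the PostgreSQL wire protocol, a standard PostgreSQL driver can connect to it. The minimal Python sketch below, in which the host, credentials and table are hypothetical placeholders, illustrates the MPP architecture referenced above: the DISTRIBUTED BY clause selects the key by which Greenplum hashes a table’s rows across its parallel segments.

```python
# Minimal sketch: creating an MPP-distributed table on Pivotal Greenplum
# over its PostgreSQL-compatible interface. Host, credentials and the
# table definition are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(
    host="greenplum.example.com",  # hypothetical coordinator host
    dbname="analytics",
    user="gpadmin",
    password="secret",
)

with conn, conn.cursor() as cur:
    # DISTRIBUTED BY picks the hash key that spreads rows across segments,
    # which is what lets Greenplum parallelize scans, joins and loads.
    cur.execute("""
        CREATE TABLE sales (
            sale_id  BIGINT,
            store_id INT,
            amount   NUMERIC(12, 2),
            sold_at  TIMESTAMP
        )
        DISTRIBUTED BY (sale_id);
    """)

conn.close()
```

As a rule of thumb, a high-cardinality distribution key such as sale_id keeps rows evenly spread across segments and avoids the skew that would bottleneck parallel query execution.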

AWS S3 Outage Underscores The Need For Enhanced Risk and Control Frameworks For Cloud Services

The Amazon Web Services disruption that affected the Northern Virginia Region (US-EAST-1) on February 28 was caused by human error. At 9:37 AM PST, an AWS S3 team member who was debugging an issue related to the S3 billing system mistakenly removed more server capacity than intended, taking down the index and placement subsystems: the former manages the metadata for all S3 objects in the region, while the placement subsystem manages the allocation of new storage. The inadvertent removal of these two subsystems necessitated a full restart of both, impairing S3’s ability to respond to requests. S3’s inability to respond to new requests subsequently affected related AWS services that depend on S3, such as EBS, AWS Lambda and the launch of new EC2 instances. Moreover, the service disruption also prevented AWS from updating its AWS Service Health Dashboard from 9:37 AM PST to 11:37 AM PST. The full restart of the S3 subsystems took longer than expected, as noted in the following excerpt from the AWS post-mortem analysis of the service disruption:

S3 subsystems are designed to support the removal or failure of significant capacity with little or no customer impact. We build our systems with the assumption that things will occasionally fail, and we rely on the ability to remove and replace capacity as one of our core operational processes. While this is an operation that we have relied on to maintain our systems since the launch of S3, we have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years. S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected.

Both the index subsystem and the placement subsystem were restored by 1:54 PM PST. The recovery of dependent services took additional time, depending on the backlog each had accumulated during the S3 disruption and restoration. As a result of the outage, AWS has begun examining the risks associated with operational tools and processes that remove capacity, and has escalated the priority of re-architecting S3 into smaller “cells” that allow for accelerated recovery from a service disruption and faster restoration of routine operating capacity. The S3 outage affected customers such as Airbnb, New Relic, Slack, Docker, Expedia and Trello.

The S3 outage underscores the immaturity of control frameworks for the operational processes specific to the maintenance and quality control of cloud services and platforms. That manual error could lead to a multi-hour service disruption to Amazon S3, with downstream effects on other AWS services, represents a stunning indictment of AWS’s risk management and control framework for mitigating threats to service availability and performance and for implementing effective controls to monitor and respond to the quality of operational performance. The outage pointedly illustrates the immaturity of risk, control and automation frameworks at AWS and sets the stage for competitors such as Microsoft Azure and Google Cloud Platform to capitalize on the negative publicity by foregrounding the sophistication of their own frameworks for preventing, mitigating and minimizing service disruptions. Moreover, the February 28 outage underscores the need for IT risk and control frameworks that address the specificity of cloud platforms in contradistinction to on-premises, enterprise IT. Furthermore, the outage strengthens the argument for a multi-cloud strategy for enterprises interested in ensuring business continuity by using more than one public cloud vendor to mitigate the risks associated with a public cloud outage. Meanwhile, the continued pervasiveness of public cloud outages underscores the depth of the opportunity for implementing controls that mitigate risks to cloud services uptime and performance.
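To make the business-continuity argument concrete, the hedged Python sketch below shows a cross-region read failover for S3. It assumes that objects in a primary bucket in US-EAST-1 are already copied to a replica bucket in a second region (for example, via S3 cross-region replication); the bucket names and regions are hypothetical placeholders.

```python
# Minimal sketch of a cross-region S3 read failover, assuming the primary
# bucket is replicated to a second region ahead of time. Bucket names and
# regions are hypothetical placeholders.
import boto3
from botocore.exceptions import BotoCoreError, ClientError

PRIMARY = {"region": "us-east-1", "bucket": "example-data-primary"}
REPLICA = {"region": "us-west-2", "bucket": "example-data-replica"}

def get_object_with_failover(key: str) -> bytes:
    """Read an object from the primary region, falling back to the replica."""
    for target in (PRIMARY, REPLICA):
        s3 = boto3.client("s3", region_name=target["region"])
        try:
            response = s3.get_object(Bucket=target["bucket"], Key=key)
            return response["Body"].read()
        except (BotoCoreError, ClientError):
            continue  # region unavailable; try the next one
    raise RuntimeError(f"object {key!r} unavailable in all configured regions")

# Usage: data = get_object_with_failover("reports/2017-02-28.csv")
```

A failover of this kind mitigates a regional S3 disruption for reads; writes require a more deliberate strategy, since replication is asynchronous and a replica region may lag the primary.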