On May 16, Crate.io announced the availability of CrateDB 2.0, an open source SQL database that specializes in IoT and machine data. CrateDB’s innovation consists in leveraging SQL to aggregate and perform real-time analytics on IoT and machine data, in place of the NoSQL databases commonly used in the industry for such use cases. CrateDB’s ability to ingest high-velocity streams of data and to query rapidly changing datasets, with impressive scalability and latency, allows developers to apply their familiarity with SQL to a solution designed for the unique needs of IoT and machine data applications.

CrateDB 2.0 features clustering upgrades that deliver improved query performance by means of faster aggregations and new index structures. In addition, the release contains a bevy of SQL enhancements that give developers a greater range of options regarding joins, sub-selects and the renaming and re-indexing of tables. The Enterprise Edition of CrateDB 2.0 adds performance monitoring, enhanced security and the ability for end users to create user-defined functions.

Together, CrateDB 2.0’s clustering upgrades, SQL enhancements and enterprise-grade security and performance monitoring mark a new milestone in the platform’s evolution, testifying to its readiness to embrace enterprise-grade workloads that include sensor data, GPS data and the industrial internet more generally. Following news of its general availability in December 2016, Crate.io’s release of open source and enterprise-grade versions of CrateDB underscores the early traction the platform has received, with over 1.3 million downloads and 50 customers using CrateDB in production.
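The SQL enhancements noted above center on richer query shapes such as sub-selects. As a rough sketch of what such a query looks like, the snippet below filters on a per-device aggregate computed in a sub-select; SQLite stands in for CrateDB here only because the query shape is standard SQL, and the table and column names are purely illustrative, not taken from CrateDB’s documentation.

```python
import sqlite3

# Illustrative sensor-readings table; SQLite stands in for any SQL engine,
# since the sub-select shape below is standard SQL of the kind CrateDB 2.0
# is described as supporting. All names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device_id TEXT, temperature REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("dev-1", 21.0), ("dev-1", 35.0), ("dev-2", 19.5), ("dev-2", 20.5)],
)

# Sub-select: aggregate per device first, then filter on the aggregate.
rows = conn.execute("""
    SELECT device_id, avg_temp
    FROM (SELECT device_id, AVG(temperature) AS avg_temp
          FROM readings
          GROUP BY device_id) AS per_device
    WHERE avg_temp > 25
""").fetchall()
print(rows)  # → [('dev-1', 28.0)]
```

The same shape generalizes to the join-heavy queries the release highlights: the inner query produces a derived table that outer clauses can join, filter or re-aggregate.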
With the IoT and machine data space gearing up for a rampant proliferation of devices and corresponding datasets in the coming years, expect Crate.io to continue building on its recent momentum, particularly as organizations look for scalable databases that let them leverage widely available SQL skillsets.
The graphic below illustrates the Enterprise Edition’s cluster-monitoring user interface, which gives users real-time visibility into cluster performance with respect to the ingestion and transformation of IoT and machine data:
Crate.io today announces the general availability of CrateDB, an open source SQL database platform that specializes in storing and analyzing machine data and related applications. CrateDB features a distributed SQL query engine that empowers users to run complex queries in real-time without the diminution of performance specific to “first generation SQL databases”, as noted in a press release. The platform also boasts columnar field caches and enhanced versatility with respect to SQL-based queries on machine data: for example, CrateDB can create outer joins, run queries on structured and unstructured data, perform time series analysis and leverage advanced database search functionality. In addition, CrateDB features extreme scalability marked by automated sharding and data redistribution that optimize performance and availability as the volume of data stored within the platform grows.

Importantly, CrateDB allows organizations to take advantage of SQL-oriented skills and tools to expedite its integration and adoption. As such, the platform represents a SQL-based alternative to NoSQL machine data solutions such as Splunk and Cassandra, one that empowers organizations to collect and analyze massive volumes of machine data in real-time while benefiting from the platform’s enhanced querying versatility and scalability. Available under an Apache 2.0 license, CrateDB marks the emergence of another key player in the machine data analytics space, one that promises to disrupt the landscape of machine data analytics platforms given the nexus of its advanced SQL-based querying functionality and extreme scalability. Organizations whose teams are versed primarily in SQL will lean toward CrateDB given the richness of its distributed SQL query engine and its ability to query data in real-time without resorting to an ancillary data warehouse appended to their machine data analytics infrastructure.
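Time series analysis of the sort described above typically reduces to bucketing timestamped rows into windows and aggregating each window. The sketch below shows that query shape against an illustrative metrics table; SQLite stands in for CrateDB since the roll-up itself is plain SQL, CrateDB-specific clauses for automated sharding (such as its `CLUSTERED INTO ... SHARDS` table option) are deliberately omitted, and every table and column name is hypothetical.

```python
import sqlite3

# Illustrative time-series roll-up. SQLite stands in for a distributed SQL
# engine like CrateDB to show only the query shape; all names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (ts TEXT, value REAL)")
conn.executemany(
    "INSERT INTO metrics VALUES (?, ?)",
    [
        ("2016-12-01 10:05:00", 1.0),
        ("2016-12-01 10:45:00", 3.0),
        ("2016-12-01 11:10:00", 5.0),
    ],
)

# Bucket readings into hourly windows and aggregate each window.
rows = conn.execute("""
    SELECT strftime('%Y-%m-%d %H:00', ts) AS hour_bucket,
           COUNT(*) AS n, AVG(value) AS avg_value
    FROM metrics
    GROUP BY hour_bucket
    ORDER BY hour_bucket
""").fetchall()
print(rows)  # → [('2016-12-01 10:00', 2, 2.0), ('2016-12-01 11:00', 1, 5.0)]
```

In a distributed engine, the same GROUP BY runs in parallel across shards, which is where the automated sharding and data redistribution described above come into play.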
Wire data analytics leader ExtraHop and machine data analytics vendor Sumo Logic recently announced a partnership whereby ExtraHop’s wire data will complement machine data aggregated by Sumo Logic’s cloud platform. The partnership brings together ExtraHop’s leadership in wire data analytics and Sumo Logic’s recognized machine data analytics platform to create a unified framework for event detection and management. As a result of the collaboration, ExtraHop’s Open Data Stream delivers real-time, streaming feeds of wire data to Sumo Logic’s platform for aggregating and analyzing machine data. Meanwhile, Sumo Logic customers enjoy access to a more comprehensive universe of data about an IT infrastructure and its constituent set of applications and networking topology. ExtraHop’s real-time wire data enhances Sumo Logic’s cloud-based machine data platform with L2-L7 wire data as illustrated below:
The ExtraHop dashboard depicted above illustrates the ability of the ExtraHop platform to analyze wire data containing insights about application performance, security and infrastructure availability. The Sumo Logic dashboard shows the integration of ExtraHop’s wire data into the Sumo Logic platform and its corresponding user interface. ExtraHop’s partnership with Sumo Logic delivers real-time wire data feeds that are ingested into Sumo Logic’s cloud platform for the purpose of delivering actionable business intelligence about the health of IT infrastructures, based on the aggregation of log and wire data. Taken together, the graphics illustrate how ExtraHop’s wire data enriches Sumo Logic’s aggregation of machine data by providing an additional dataset that Sumo Logic’s cloud platform can integrate into its massive, multi-tenant unstructured cloud database built on Amazon Web Services to deliver advanced analytics and data visualization for the detection of infrastructure- and application-related events.
Mark Musselman, Vice President, Strategic Alliances at Sumo Logic, remarked on the significance of the partnership between ExtraHop and Sumo Logic as follows:
Adding ExtraHop data as a new source into the Sumo Logic service for proactive analysis against other feeds enables IT teams to gain deeper performance, security and business insights from across IT infrastructure. Sumo Logic’s cloud-native architecture means the service serves as an aggregation point for diverse data sources. The result is an IT team that acts on timely information from within their infrastructure – even information they did not know to ask for. A critical piece to the puzzle lies in Sumo Logic’s Anomaly Detection, a proprietary capability that delivers insight from patterns in data and insights beyond what IT teams themselves know to query.
Here, Musselman comments on the way ExtraHop’s data facilitates “deeper performance, security and business insights” by serving as an additional data source that enables advanced analytics about enterprise IT architectures. The integrated data repository formed by the confluence of ExtraHop wire data and Sumo Logic log data leverages Sumo Logic’s proprietary advanced analytics and machine learning technology to deliver notifications about events of interest within the infrastructure, while iteratively refining those alerts based on the actions taken by their recipients. In all, the partnership between ExtraHop and Sumo Logic underscores the significance of wire data for machine data analytics and the internet of things while concurrently enriching the capabilities of Sumo Logic’s cloud-based log management and analytics platform. With ExtraHop’s real-time wire data now streaming into the Sumo Logic platform, the case for a Sumo Logic IPO grows stronger, while ExtraHop similarly benefits from demonstrating the value of its wire data aggregation and analytics technology.
Glassbeam Inc today announces $2M in additional funding by means of a capital raise led by the VKRM group. Glassbeam also revealed details of a strategic partnership with Tableau whereby Tableau can integrate with SQL-compliant extracts from Glassbeam to enable customers to more effectively visualize machine data and discern trends by way of Tableau’s powerful visualization capabilities. In an interview with Cloud Computing Today, Glassbeam CEO Puneet Pandit noted that the company plans to deepen its strategic partnership with Tableau by delivering offerings from Tableau that are specific to Glassbeam, pending the finalization of further negotiations. Moreover, Kumar Malavalli, a well known Silicon Valley technology entrepreneur, will take over as the company’s Chief Strategy Officer.
Glassbeam’s cloud-based platform is designed to ingest, process and analyze massive amounts of machine data. The platform’s deployment model involves the creation of a customized SPL file that, when deployed in the cloud, facilitates the integration of machine data into the Glassbeam platform. Glassbeam’s differentiation within the machine data analytics space involves its focus on the internet of things and the corresponding business need to rapidly transform massive amounts of complex data into actionable business intelligence. Today’s funding raise brings the total capital raised by the company to $8.1M. Glassbeam’s capital raise affirms its traction in the machine data space, as evinced by its recent signing of Dimension Data as a customer. Expect to hear more details from Glassbeam as it continues to sharpen its product differentiation for big data analytics related to the internet of things.
On Tuesday, machine data analytics vendor Splunk announced a 100% uptime SLA for Splunk Cloud, its cloud-based platform for operational intelligence. The 100% uptime guarantee represents the first SLA in the machine data analytics industry that guarantees uptime to a degree that effectively dispels objections about the reliability of cloud infrastructures. Not only does the 100% uptime SLA assuage customer concerns about reimbursement for downtime, but more importantly, it asserts Splunk’s confidence that Splunk Cloud is engineered to remain fully operational even if one or more of its constituent infrastructure components experiences a disruption. Splunk also announced price reductions of up to 33%, which derive from economies of scale and increased efficiencies, in addition to revealing more flexible service plans marked by scaling limits from 5 GB/day to 5 TB/day and 10-fold bursting capabilities designed to accommodate especially high spikes in customer workloads. Given that Splunk Cloud is hosted on AWS, its price reductions come as little surprise: AWS has cut prices over 40 times, including a significant price cut announced as recently as March. That said, Splunk’s 100% uptime guarantee represents an impressive differentiator in a space where vendors have largely shied away from guaranteeing 100% uptime, although one would need to delve deeper into Splunk’s remuneration policies to understand the real delta between 100% uptime and something fractionally close. Splunk’s expanded scaling options and security features for a virtual private cloud hosted on AWS, marked by no data commingling, in conjunction with slashed prices, continue to consolidate its reputation as the leader in the machine data analytics space.
Expect Splunk to expand its market traction on the back of its notable 100% uptime guarantee as the enterprise increasingly embraces the necessity of running analytics on machine data dispersed across a variety of infrastructures.
Log management and analytics vendor Logentries today announced an enhancement to its platform marked by the availability of a suite of collaboration features that improve the ability of teams to analyze and share insights regarding log data. Users of the Logentries platform can now annotate log data, share dashboards and send automated notifications to individuals and groups. The newly released collaboration functionality enhances the ability of the platform to serve the needs of DevOps teams that demand real-time agility with respect to log data analytics as well as the ability to communicate their observations regarding log data. The real-time collaboration functionality enabled by today’s release empowers DevOps professionals to more effectively identify root causes of issues such as system downtime, diminished application performance or networking-related bottlenecks, as illustrated by the screenshot below.
The graphic above illustrates the annotation capability specific to today’s release. The annotation on the 404 failure identifies an issue on a development server that may pertain to production servers as well. Logentries further advances the theme of accessibility and collaboration by enabling users to search log data using natural language and a click-through user interface that frees analysts from the need to write complex queries to understand the significance of log data. The platform also leverages a pre-processing engine that powers its analytics and data visualization capabilities in ways that deliver actionable business intelligence regarding real-time data. As told to Cloud Computing Today by Logentries CEO Andrew Burton, the Logentries platform can be used to understand data within on-premises, public cloud, private cloud and hybrid cloud environments. The platform differentiates itself from the likes of Splunk, Loggly and Sumo Logic by means of enhanced data visualization and collaboration functionality that renders it amenable to business stakeholders who have little or no experience with scripting languages. Logentries plays in the hot machine data analytics space with a platform whose rich analytics, collaboration and UI render it distinctive. Expect to hear more about the progress of Logentries as it builds on its 25,000-strong user base in the coming months.