Big Data

Arcadia Data Releases Business Intelligence Platform For Hadoop And Closes $11.5M In Series A Funding

Today, Arcadia Data revealed details of its business intelligence and data visualization platform for Big Data. Arcadia Data’s BI platform enables business stakeholders to create visualizations of Hadoop data through a rich user interface that allows users to drag and drop data fields. In addition, customers can select datasets for drill-downs to perform more advanced analyses such as root cause, correlation and trend analysis. The platform’s rich drag-and-drop functionality supports exploratory analysis of Hadoop-based data, as illustrated below:

The graphic above shows how customers can use the Arcadia Data platform to obtain different aggregations of cab ride fares and durations within various geographies in NYC. Importantly, the simplicity and speed of the platform mean that business stakeholders can comfortably obtain the analyses and data visualizations needed to represent their own data-driven insights. Because the platform also features data modeling functionality that enables users to massage and organize data before visualizing it, it lends itself to use by more savvy data users in addition to business users. Arcadia supports all major Hadoop distributions, including Cloudera, Hortonworks and MapR, and additionally enables users to glean insights from applications built on MySQL, Oracle and Teradata. In addition to the product announcement, Arcadia Data today announced the finalization of $11.5M in Series A funding from Mayfield, Blumberg Capital and Intel Capital. As revealed to Cloud Computing Today in a live product demonstration, the depth and sophistication of the Arcadia Data platform illustrate the changing face of business intelligence in the wake of the Big Data revolution, particularly as evinced by the ease with which business stakeholders can now make sense of Hadoop-based data using data visualization, transformation, drill-downs, trend analysis and analytics more broadly.
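
Tools in this category typically translate drag-and-drop gestures into SQL aggregations executed by a Hadoop SQL engine such as Impala or Hive. The sketch below is illustrative rather than Arcadia’s actual implementation: it shows the kind of query a fare-by-geography drill-down might generate, using the impyla client against a hypothetical cab_rides table (all table and column names are assumptions):

```python
# Illustrative only: the kind of SQL a drag-and-drop drill-down might
# generate against a Hadoop SQL engine. The table and column names
# (cab_rides, pickup_zone, fare, trip_minutes) are hypothetical.
from impala.dbapi import connect  # impyla package

conn = connect(host="impala-daemon.example.com", port=21050)
cur = conn.cursor()

# Aggregate fares and trip durations by pickup zone -- the query that a
# "drag fare onto a map of NYC zones" gesture could compile into.
cur.execute("""
    SELECT pickup_zone,
           COUNT(*)          AS rides,
           AVG(fare)         AS avg_fare,
           AVG(trip_minutes) AS avg_duration
    FROM cab_rides
    GROUP BY pickup_zone
    ORDER BY avg_fare DESC
""")

for zone, rides, avg_fare, avg_duration in cur.fetchall():
    print(zone, rides, round(avg_fare, 2), round(avg_duration, 1))
```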

Categories: Big Data, Hadoop, Miscellaneous, Venture Capital

Basho Data Platform Integrates Riak With Apache Spark, Redis And Apache Solr

Basho Technologies today announced the release of the Basho Data Platform, an integrated Big Data platform that enhances the ability of customers to build applications that leverage Basho’s Riak KV (formerly Riak) and Riak S2 (formerly Riak CS). By integrating Riak KV, Riak S2, Apache Spark, Redis and Apache Solr, the Basho Data Platform enhances the ability of customers to create high-performing applications that deliver real-time analytics. The platform’s integration with Redis allows users to leverage Redis caching to improve the read performance of applications. The platform also boasts an integration with Apache Solr that builds upon the existing ability of Riak to support searches powered by Apache Solr. Moreover, the Basho Data Platform supports the replication and synchronization of data across its different components in ways that ensure continued access to applications and relevant data. The graphic below illustrates the different components of the Basho Data Platform:

The Basho Data Platform responds to a need in the marketplace to complement high-performance NoSQL databases such as Riak with analytics and caching technologies such as Apache Spark and Redis, respectively. The platform’s cluster management and orchestration functionality absolves customers of the need to use Apache ZooKeeper for cluster synchronization and management. By automating provisioning and orchestration and delivering Redis-based caching functionality in conjunction with Apache Spark, the platform empowers customers to create high-performance applications capable of scaling to meet the operational needs of massive datasets. Today’s announcement marks the release of an integrated platform poised to significantly augment the ease with which customers can build Riak-based Big Data applications. Notably, the platform’s ability to orchestrate and automate the interplay between its different components means that developers can focus on taking advantage of Apache Spark and Redis alongside Riak KV and Riak S2 without becoming mired in the complexities of provisioning, cluster synchronization and cluster management. As such, the platform’s out-of-the-box integration of its constituent components represents a watershed moment in the evolution of Riak KV and Riak S2, and of the NoSQL space more generally.
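
The read-performance benefit of the Redis integration follows the familiar cache-aside pattern. The Basho Data Platform wires this up automatically; a hand-rolled sketch of the equivalent logic, using the riak and redis Python clients against hypothetical bucket and key names, might look like this:

```python
# Sketch of the cache-aside pattern the Redis integration automates.
# Bucket and key names are hypothetical; the Basho Data Platform
# performs this wiring itself rather than requiring application code.
import json

import redis
import riak

cache = redis.Redis(host="localhost", port=6379)
client = riak.RiakClient(pb_port=8087)
bucket = client.bucket("user_profiles")

def get_profile(user_id, ttl_seconds=300):
    # 1. Try the Redis cache first for a fast read path.
    cached = cache.get(user_id)
    if cached is not None:
        return json.loads(cached)
    # 2. On a miss, fall back to Riak KV, the system of record.
    obj = bucket.get(user_id)
    profile = obj.data
    # 3. Populate the cache so subsequent reads avoid Riak entirely.
    if profile is not None:
        cache.setex(user_id, ttl_seconds, json.dumps(profile))
    return profile
```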

Categories: Basho Technologies, Big Data, NoSQL

Trillium Software And Cloudera Collaborate To Deliver Data Quality Solutions For Hadoop

Cloudera and Trillium Software recently announced a collaboration whereby the Trillium Big Data solution is certified for Cloudera’s Hadoop distribution. As a result of the partnership, Cloudera customers can take advantage of Trillium’s data quality solutions to profile, cleanse, de-duplicate and enrich Hadoop-based data. Trillium responds to a problem in the Big Data industry wherein the customer focus on deploying and managing Hadoop-based data repositories eclipses concerns about data quality. In the case of Hadoop-based data, data quality solutions predictably face challenges associated with the sheer volume of data that requires cleansing or quality improvements. Trillium’s Big Data solution cleanses data natively within Hadoop because identifying records with quality issues and transporting them to another infrastructure for remediation is costly and complex. The collaboration between Trillium Software and Cloudera illustrates the continued relevance of data quality solutions for Hadoop despite the increased attention currently devoted to Big Data analytics and data visualization. As such, Trillium fills a critical niche in Big Data processing, and its alliance with Cloudera positions it strongly to consolidate its early traction among solutions dedicated to data quality for Big Data.
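
Trillium’s engine is proprietary, but the class of work it performs natively inside Hadoop (profiling, cleansing, de-duplication) can be sketched with PySpark. The record layout and normalization rules below are illustrative assumptions, not Trillium’s actual logic:

```python
# Illustrative PySpark sketch of in-Hadoop cleansing and de-duplication,
# the class of work a data quality tool performs natively on the cluster.
# The field names, path and rules are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()

customers = spark.read.parquet("hdfs:///data/customers")  # hypothetical path

cleansed = (
    customers
    # Cleanse: normalize case and strip stray whitespace.
    .withColumn("email", F.lower(F.trim(F.col("email"))))
    .withColumn("name", F.initcap(F.trim(F.col("name"))))
    # Profile: drop records failing a basic validity rule.
    .filter(F.col("email").rlike(r"^[^@]+@[^@]+\.[^@]+$"))
    # De-duplicate on the normalized natural key.
    .dropDuplicates(["email"])
)

# Write the cleansed data back to Hadoop: no transport to a separate
# remediation infrastructure is needed.
cleansed.write.mode("overwrite").parquet("hdfs:///data/customers_clean")
```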

Categories: Big Data, Cloudera, Hadoop

Microsoft Announces Azure Data Lake With Unlimited Storage For Enterprise-Wide Data

Microsoft recently announced the Azure Data Lake, a product that serves as a repository for “every type of data collected in a single place prior to any formal definition of requirements or schema.” As noted by Oliver Chiu in a blog post, data lakes allow organizations to store data of every type and size on the theory that they can subsequently use advanced analytics to determine which data sources should be transferred to a data warehouse for more rigorous data profiling, processing and analytics. The Azure Data Lake’s compatibility with HDFS means that products storing data in Azure HDInsight, as well as infrastructures that use distributions such as Cloudera, Hortonworks and MapR, can integrate with it, feeding the Azure Data Lake with streams of Hadoop data from internal and third-party data sources as necessary. Moreover, the Azure Data Lake supports massively parallel queries that allow for the execution of advanced analytics on datasets of the scale envisioned for the product, particularly given its ability to store unlimited data both in aggregate and within individual files. Built for the cloud, the Azure Data Lake gives enterprises a preliminary solution to the problem of architecting an enterprise data warehouse by providing a repository for all data that customers can subsequently use as a base platform from which to retrieve and curate data of interest.
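
The HDFS compatibility means existing Hadoop tooling can address the store through a WebHDFS-style REST surface. A minimal sketch of listing a directory over that interface with the requests library follows; the account name, path and OAuth token are placeholders, and the endpoint format is an assumption based on the WebHDFS convention:

```python
# Sketch of addressing an HDFS-compatible store through a WebHDFS-style
# REST surface. Account name, path and token below are placeholders.
import requests

ACCOUNT = "myaccount"            # hypothetical Data Lake account name
TOKEN = "<oauth-bearer-token>"   # obtained via Azure AD (not shown)

resp = requests.get(
    f"https://{ACCOUNT}.azuredatalakestore.net/webhdfs/v1/clickstream",
    params={"op": "LISTSTATUS"},  # standard WebHDFS list operation
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# WebHDFS returns FileStatuses JSON with the same shape as stock HDFS,
# which is what lets existing Hadoop distributions interoperate.
for entry in resp.json()["FileStatuses"]["FileStatus"]:
    print(entry["type"], entry["pathSuffix"], entry["length"])
```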

The Azure Data Lake illustrates how the economics of cloud storage redefine the challenges of building an enterprise data warehouse by shifting the focus of enterprise data management away from master data management and data cleansing toward advanced analytics that can query and aggregate data as needed, thereby absolving organizations of the need to create elaborate structures for storing data. In much the same way that Gmail dispenses with files and folders for email storage and depends on search to facilitate the retrieval of email-based data, data lakes take the burden of classifying and curating data away from customers, but correspondingly place a premium on an organization’s ability to query and aggregate data. As such, the commercial success of the Azure Data Lake hinges on its ability to simplify the process of running ad hoc and repeatable analytics on the data it stores by giving customers a rich visual interface and platform for constructing and refining analytic queries on Big Data.

Categories: Big Data, Hadoop, Microsoft Azure

DataTorrent Closes $15M In Series B Funding For Big Data Processing And Analytics Platform

DataTorrent today announced the finalization of $15M in Series B funding. The round was led by Singtel Innov8, with additional participation from GE Ventures and Series A investors August Capital, AME Cloud Ventures and Morado Venture Partners. DataTorrent’s platform provides an infrastructure for processing, storing and running analytics on streaming Big Data. The platform can ingest and analyze massive amounts of data by way of over 75 connectors and more than 400 Java operators that allow data scientists to perform advanced analytics on multiple datasets in parallel. DataTorrent differentiates itself architecturally by performing in-memory processing directly on Hadoop, without the overhead that results from scheduling batches of Hadoop data for processing. The platform boasts massive scalability at sub-second latency while retaining the ability to process batch and streaming datasets alike. Use cases for DataTorrent include Internet of Things analytics as well as web analytics that push the limits of the platform’s ability to scale and ingest massive amounts of data. Today’s capital raise brings DataTorrent’s total funding to $23.8M. Building on its recent distinction as a Gartner Cool Vendor, DataTorrent stands to consolidate its early traction in the heavily contested Big Data analytics space with today’s infusion of capital and the guidance of Innov8 Managing Director Jeff Karras, who joins DataTorrent’s board of directors as a result of the Series B round.
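
The architecture described above, connectors feeding a DAG of operators that aggregate tuples in memory and flush results at sub-second intervals, can be conveyed with a plain-Python sketch. This is a conceptual illustration of the operator-pipeline idea, not DataTorrent’s actual API, and all names in it are invented:

```python
# Conceptual sketch of a streaming operator pipeline: a connector emits
# tuples, an operator aggregates them in memory, and windows are flushed
# at sub-second intervals. Not DataTorrent's actual API.
import time
from collections import Counter

def sample_connector():
    """Stand-in for one of the platform's ingest connectors."""
    for i in range(1000):
        yield {"page": f"/product/{i % 5}", "ts": time.time()}

def count_by_page(stream, window_ms=500):
    """Operator: aggregate tuples in memory and emit each window."""
    window, deadline = Counter(), time.time() + window_ms / 1000
    for event in stream:
        window[event["page"]] += 1
        if time.time() >= deadline:
            yield dict(window)  # flush the completed window downstream
            window, deadline = Counter(), time.time() + window_ms / 1000
    if window:
        yield dict(window)      # flush whatever remains at end of stream

# Wire connector -> operator, the way a DAG of operators is composed.
for window_counts in count_by_page(sample_connector()):
    print(window_counts)
```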

Categories: Big Data, DataTorrent, Venture Capital

MapR And Cloudera Decline To Join Open Data Platform For Hadoop And Big Data

MapR has declined the invitation to participate in the Open Data Platform (ODP) after careful consideration, as noted in a recent blog post by John Schroeder, the company’s CEO and co-founder. Schroeder claims that the Open Data Platform is redundant with the governance provided by the Apache Software Foundation, that it purports to “solve” Hadoop-related problems that do not require solving and that it fails to accurately define the core of the Open Data Platform as it relates to Hadoop. With respect to software governance, Schroeder notes that the Apache Software Foundation has done well to steward the development of Apache Hadoop as elaborated below:

The Apache Software Foundation has done a wonderful job governing Hadoop, resulting in the Hadoop standard in which applications are interoperable among Hadoop distributions. Apache governance is based on a meritocracy that doesn’t require payment to participate or for voting rights. The Apache community is vibrant and has resulted in Hadoop becoming ubiquitous in the market in only a few short years.

Here, Schroeder credits the Apache Software Foundation with creating a Hadoop ecosystem in which Hadoop-based applications interoperate with one another and wherein the governance structure is based on a meritocracy that does not mandate monetary contributions in order to garner voting rights. In addition, the blog post observes that whereas the Open Data Platform defines the core of Apache Hadoop as MapReduce, YARN, Ambari and HDFS, other frameworks, such as Spark and Mesos, “are gaining market share” and stand to complicate ODP’s definition of the Hadoop core. Meanwhile, Cloudera’s Chief Strategy Officer Mike Olson explained why Cloudera also declined to join the Open Data Platform by noting that Hadoop “won because it’s open source” and that the partnership between Pivotal and Hortonworks was “antithetical to the open source model and the Apache way.” Given that 75% of Hadoop implementations use either MapR or Cloudera, ODP looks set to face some serious challenges despite support from IBM, Pivotal and Hortonworks, although the precise impact of the schism over the Open Data Platform on the Hadoop community remains to be seen.

Categories: Big Data, Cloudera, Hadoop, MapR

Metanautix Releases Personal Quest To Enhance Access To Its Platform For Integrated Analytics For SQL, NoSQL and Hadoop Datasets

On Tuesday, Metanautix released Metanautix Personal Quest, a product that enables individuals to leverage the power of the Metanautix platform to perform queries on data stored in Hadoop, NoSQL and relational database formats. Individual users can use Personal Quest to run integrated analytics on data stored in relational and non-relational formats and thereby obtain a unified view of data spread across an organization’s different applications and data repositories. Metanautix allows users to download Personal Quest to their machines and test the capabilities of the Metanautix data compute engine for an unlimited time period, subject to limits on data size and number of queries. Metanautix Quest’s distributed compute engine enables the joining of SQL and non-SQL data sources without complex ETL processes.

The video below shows how the integration of Metanautix Quest and Tableau enables customers to join Teradata SQL data with MongoDB NoSQL data to obtain a more granular understanding of sales by product by means of a few simple drag-and-drop operations. The clip illustrates how Metanautix Quest can execute a distributed join that combines store sales data stored in a Teradata database with product data stored in MongoDB to enable a comparative analysis of sales by month across product categories such as books, children, electronics and shoes. After a visual review of sales by product category in a Tableau workbook reveals that shoes had a significant impact on overall sales, users can perform another join to drill down on shoe sales by shoe type and learn that men’s shoes and athletic shoes were largely responsible for the spike in sales specific to the shoe category. The distributed join performed by Metanautix Quest on Teradata SQL data and MongoDB NoSQL data facilitates speedy analysis by means of a user interface that requires neither ETL nor the migration of data to a centralized staging repository. As such, Metanautix Quest radically simplifies data analysis and data visualization given the proliferation of different kinds of datasets in small, mid-size and enterprise-level organizations alike. By giving individual users time-unlimited access to Metanautix Personal Quest, Metanautix intends to underscore the power of its analytic engine for performing analysis on data stored in sources that include Hadoop, Teradata, MongoDB and other SQL and NoSQL data repositories.
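
Conceptually, the join described above combines a relational fact table with document-store dimension data. Metanautix executes this inside its distributed engine without moving the data; a client-side approximation using pandas, pymongo and Teradata’s Python driver (connection details, table and field names are all hypothetical) conveys the shape of the operation:

```python
# Client-side approximation of the SQL-to-NoSQL join described above.
# Metanautix performs the join in its distributed engine without ETL;
# this sketch only conveys the shape of the operation. Connection
# strings, table and field names are hypothetical.
import pandas as pd
import pymongo
import teradatasql  # Teradata's Python DB-API driver

# Fact table: store sales, from Teradata.
with teradatasql.connect(host="td.example.com", user="u", password="p") as con:
    sales = pd.read_sql(
        "SELECT product_id, sale_month, amount FROM store_sales", con
    )

# Dimension data: product catalog, from MongoDB.
mongo = pymongo.MongoClient("mongodb://mongo.example.com:27017")
products = pd.DataFrame(
    list(mongo["catalog"]["products"].find(
        {}, {"_id": 0, "product_id": 1, "category": 1}
    ))
)

# The join: sales by product category, by month.
joined = sales.merge(products, on="product_id")
print(joined.groupby(["category", "sale_month"])["amount"].sum())
```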

Categories: Big Data, Metanautix
