Hadoop

Pivotal Open Sources Its Big Data Suite And Announces Partnership With Hortonworks

Pivotal recently announced the open sourcing of key components of its Pivotal Big Data Suite. The components to be open sourced include the MPP Greenplum Database, Pivotal HAWQ and Pivotal GemFire, its NoSQL in-memory database. Pivotal’s decision to open source the core of its Big Data suite builds upon its success monetizing the Cloud Foundry platform and is intended to accelerate the development of analytics applications that leverage big data and real-time streaming data sets. The open sourcing of Greenplum, the SQL-on-Hadoop platform HAWQ and GemFire renders Pivotal’s principal analytics and database platforms more readily accessible to the developer community and encourages enterprises to experiment with Pivotal’s solutions. Sundeep Madra, VP of the Data Product Group at Pivotal, remarked on the decision as follows:

Pivotal Big Data Suite is a major milestone in the path to making big data truly accessible to the enterprise. By sharing Pivotal HD, HAWQ, Greenplum Database and GemFire capabilities with the open source community, we are contributing to the market as a whole the necessary components to build solutions that make up a next generation data infrastructure. Releasing these technologies as open source projects will only help accelerate adoption and innovation for our customers.

Pivotal’s announcement comes in tandem with a strategic alliance with Hortonworks aimed at combining the competencies of both companies to deliver best-in-class Hadoop capabilities for the enterprise. The partnership includes product roadmap alignment, integration and a unified vision for leveraging Apache Hadoop to derive actionable business intelligence at a scale rarely achieved within the contemporary enterprise. In conjunction with the Hortonworks collaboration, Pivotal revealed its participation in the Open Data Platform, an organization dedicated to promoting Big Data technologies centered around Apache Hadoop, whose Platinum members include GE, Hortonworks, IBM, Infosys, Pivotal and SAS. The Open Data Platform intends to ensure that components of the Hadoop ecosystem such as Apache Storm, Apache Spark and Hadoop-analytics applications integrate with and optimally support one another.

All told, Pivotal’s decision to open source its Big Data suite represents a major win for the Big Data analytics community at large, insofar as organizations now have access to some of the most sophisticated Hadoop-analytics tools in the industry at no charge. More striking, however, is the significance of Pivotal’s alignment with Hortonworks, which stands to tilt the balance of the struggle for Hadoop market share toward Hortonworks and away from competitors Cloudera and MapR, at least for the time being. Thus far, Cloudera has enjoyed notable traction in the financial services sector and within the enterprise more generally, but the enriched analytics available to the Hortonworks Data Platform by way of the Pivotal partnership promise to render Hortonworks a more attractive solution, particularly for analytics-intensive use cases. Regardless, Pivotal’s strategic evolution, as represented by its open source move, its collaboration with Hortonworks and its leadership position in the Open Data Platform, marks a significant moment in Big Data history: one of the industry’s most sophisticated big data analytics firms has united with Hortonworks, the company behind the first publicly traded Hadoop distribution. The obvious question now is how Cloudera and MapR will respond to the Open Data Platform, and the extent to which Pivotal’s partnership with Hadoop distributions remains exclusive to, or focused around, Hortonworks in the near future.

Categories: Big Data, Hadoop, Hortonworks, Pivotal

Cloudera And Cask Partner To Align Cask’s Application Development Platform With Cloudera’s Hadoop Product Portfolio

Cloudera and Cask recently announced a strategic collaboration marked by a commitment to integrate the product roadmaps of both companies into a unified vision based around the goal of empowering developers to more easily build and deploy applications on Hadoop. As part of the collaboration, Cloudera made an equity investment in Cask, the company formerly known as Continuuity. Cask’s flagship product is the Cask Data Application Platform (CDAP), an application platform that streamlines and simplifies Hadoop-based application development while delivering operational tools for integrating application components and performing runtime services. The integration of the open source CDAP with Cloudera’s open source Hadoop distribution represents a major win for Cask insofar as its technology stands to become tightly integrated with one of the most popular Hadoop distributions in the industry, and positions Cask as a potential acquisition target for Cloudera as its product matures. Cloudera, on the other hand, stands to gain from Cask’s progress in building a platform for Big Data application development that runs natively within a Hadoop infrastructure. By aligning its product roadmap with Cask’s, Cloudera adds yet another feather to its cap with respect to tools and platforms within its ecosystem that enhance and accelerate Hadoop adoption. Overall, the partnership strengthens Cloudera’s case for going public by illustrating the astuteness and breadth of its vision when it comes to strategic partners such as Cask, not to mention the business and technological benefits of the partnership itself. Expect Cloudera to continue aggressively building out its partner ecosystem as it moves toward an IPO that, as reported in VentureBeat, it may well already be preparing.

Categories: Big Data, Cloudera, Hadoop

MapR Announces Selection By MediaHub Australia For Digital Archiving And Analytics

MapR recently announced that MediaHub Australia has deployed MapR to support its digital archive, which serves more than 170 broadcasters in Australia. MediaHub delivers digital content for broadcasters throughout Australia in conjunction with its strategic partner Contexti. Broadcasters provide MediaHub with segments of programs, live feeds and a schedule that outlines when each program should be delivered to its audiences. In addition to scheduled broadcasts, MediaHub offers streaming and video-on-demand services for a variety of devices. MediaHub’s digital archive automates the delivery of playout services for broadcasters and thereby minimizes the need for manual intervention by archival specialists. MapR currently manages over 1 petabyte of content for the 170+ channels MediaHub serves, and the size of the digital archive is expected to grow dramatically within the next two years. MapR’s Hadoop-based storage platform also provides an infrastructure for analytics on content consumption that helps broadcasters make data-driven decisions about what content to air in the future and how to most effectively complement existing content. MediaHub’s deployment illustrates a prominent use case for MapR, namely the use of Hadoop for storing, delivering and running analytics on digital media. According to Simon Scott, Head of Technology at MediaHub, one of the key reasons MediaHub selected MapR as the big data platform for its digital archive was its ability to run on commodity hardware.

Categories: Big Data, Hadoop, MapR

Teradata Acquires Hadoop Data Archival Specialist RainStor

On December 17, Teradata announced that it had finalized the acquisition of RainStor, a big data archiving company that specializes in archival solutions for Hadoop. The acquisition gives Teradata ownership of RainStor’s technology for compressing and freezing Hadoop datastores for archival purposes. RainStor’s archival technology enables companies to compress and store Hadoop data as Hadoop-based datasets proliferate throughout the enterprise in conjunction with the larger transition to data-driven operational and strategic analytics. RainStor represents Teradata’s fourth major acquisition this year, following the purchases of Revelytix, Hadapt and Think Big Analytics. Terms of the acquisition were not disclosed, although most of RainStor’s employees will remain in their pre-acquisition locations in San Francisco and Gloucester. The acquisition strengthens Teradata’s Hadoop solutions by augmenting its ability to provide customers with enterprise-wide data archival capabilities.

Categories: Hadoop, Teradata

Hortonworks Files For IPO

As reported by Arik Hesseldahl in Recode, Apache Hadoop vendor Hortonworks has filed for an IPO, the first from a major Hadoop vendor. In fiscal year 2013, Hortonworks reported a loss of $36.6M on $11M in revenue. For the first nine months of 2014, Hortonworks increased its revenue to $33.3M but posted a loss of $86.7M. The decision to go public comes after two major capital raises in 2014: HP invested $50M in Hortonworks in July, following the $100M Hortonworks raised in March. Given the comparably large capital raises by Hortonworks competitors Cloudera and MapR, the Big Data landscape should expect IPOs from those vendors in the near future as well. More detailed analysis of the prospects of Hortonworks executing a successful IPO will emerge in coming weeks, in anticipation of a launch in either late 2014 or early 2015. Hortonworks was spun out of Yahoo, its principal investor, in 2011, and plans to raise up to $100M by means of its IPO.

Categories: Hadoop, Hortonworks

DataTorrent Enhances Platform For Real-Time Analytics On Streaming Big Data

DataTorrent recently announced the availability of DataTorrent Real-Time Streaming (RTS) 2.0, which builds on the June release of version 1.0 by providing enhanced capabilities for running real-time analytics on streaming Big Data sets. DataTorrent RTS 2.0 boasts the ability to ingest data from “any source, any scale and any location” by means of over 75 connectors that allow the platform to ingest varieties of structured and unstructured data. In addition, this release delivers over 450 Java operators that allow data scientists to perform queries and advanced analytics on Big Data sets, including predictive analytics, statistical analysis and pattern recognition. In a phone interview with John Fanelli, DataTorrent’s VP of Marketing, Cloud Computing Today learned that the company has begun work on a Private Beta of a product, codenamed Project DaVinci, that streamlines the design of applications via a visual interface allowing data scientists to graphically select data sources, analytic operators and their inter-relationships, as depicted below:

As the graphic illustrates, DataTorrent Project DaVinci (Private Beta) delivers a unique visual interface for the design of applications that leverage Hadoop-based datasets. Data scientists can take advantage of DataTorrent’s 450+ Java operators and the platform’s advanced analytics functionality to create and debug applications that utilize distributed datasets and streaming Big Data. Meanwhile, DataTorrent RTS 2.0 also boasts the ability to store massive amounts of data in an “HDFS based distributed hash table” that facilitates rapid lookups of data for analytic purposes. With version 2.0, DataTorrent continues to disrupt the real-time Big Data analytics space by delivering a platform capable of ingesting data at any scale and running real-time analytics, all within an attractive visual interface for creating Big Data analytics applications. DataTorrent competes in the hotly contested real-time Big Data analytics space alongside technologies such as Apache Spark, but delivers a range of functionality that extends beyond Spark Streaming, as illustrated by its application design, advanced analytics and flexible data ingestion capabilities.
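To make the operator model concrete: platforms like DataTorrent RTS express a streaming application as a directed graph of operators, each transforming tuples and emitting results downstream. The toy Python sketch below illustrates that DAG-of-operators idea only; the `Operator` class and its methods are invented for illustration and are not DataTorrent’s actual Java API.

```python
# Conceptual sketch of a streaming operator pipeline (source -> tokenize -> count).
# All names here are hypothetical illustrations, not a real streaming API.

class Operator:
    """A pipeline stage that applies a function to each incoming tuple
    and forwards the results to its downstream operators."""
    def __init__(self, fn):
        self.fn = fn
        self.downstream = []

    def connect(self, op):
        self.downstream.append(op)
        return op  # return the child so stages can be chained

    def emit(self, item):
        for result in self.fn(item):
            for op in self.downstream:
                op.emit(result)

counts = {}  # terminal state accumulated by the sink operator

def tokenize(line):
    yield from line.lower().split()

def count(word):
    counts[word] = counts.get(word, 0) + 1
    return ()  # sink: emits nothing downstream

source = Operator(lambda line: (line,))  # pass-through ingest stage
source.connect(Operator(tokenize)).connect(Operator(count))

for record in ["big data streams", "streams of big data"]:
    source.emit(record)

print(counts)  # {'big': 2, 'data': 2, 'streams': 2, 'of': 1}
```

A visual designer such as Project DaVinci essentially lets users draw this graph instead of coding it, wiring sources, operators and sinks together on a canvas.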

Categories: Big Data, DataTorrent, Hadoop

Informatica Big Data Edition Comes Pre-Installed On Cloudera QuickStart VM And Hortonworks Sandbox

Earlier this month, Informatica announced 60-day free trials of Informatica Big Data Edition for the Cloudera QuickStart VM and the Hortonworks Sandbox. The trial means that Informatica Big Data Edition comes pre-installed in the sandbox environments of two of the leading Hadoop distributions in the Big Data marketplace today. Developers using the Cloudera QuickStart VM and Hortonworks Sandbox now have streamlined access to Informatica’s renowned big data cleansing, data integration, master data management and data visualization tools. The code-free, graphical user interface-based Informatica Big Data Edition allows customers to create ETL and data integration workflows and to take advantage of hundreds of pre-installed parsers, transformations, connectors and data quality rules for Hadoop data processing and analytics. The Informatica Big Data platform specializes in Hadoop profiling, parsing, cleansing, loading, enrichment, transformation, integration, analysis and visualization, and reportedly improves developer productivity five-fold by means of its automation and a visual interface built on the Vibe virtual data machine.
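For readers unfamiliar with what such parse-cleanse-standardize workflows do under the hood, the short Python sketch below shows the kind of transformation and data quality rule a code-free tool lets users assemble graphically. The field names and rules are invented examples for illustration, not Informatica’s actual transformation language.

```python
# Illustrative ETL cleansing step: parse CSV, standardize fields, and
# apply a data quality rule. Field names and rules are hypothetical.
import csv
import io

raw = """name,email,amount
 Alice ,ALICE@EXAMPLE.COM, 120.50
Bob,not-an-email,7
"""

def clean_row(row):
    row["name"] = row["name"].strip().title()      # standardize capitalization
    row["email"] = row["email"].strip().lower()    # normalize email casing
    row["amount"] = float(row["amount"])           # cast to a numeric type
    # Data quality rule: reject records with a malformed email address.
    if "@" not in row["email"]:
        return None
    return row

rows = [clean_row(r) for r in csv.DictReader(io.StringIO(raw))]
good = [r for r in rows if r is not None]
print(good)  # [{'name': 'Alice', 'email': 'alice@example.com', 'amount': 120.5}]
```

A GUI-based platform generates and runs equivalent logic at Hadoop scale from drag-and-drop mappings, which is where the claimed productivity gains come from.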

Although Informatica Big Data Edition supports the MapR and Pivotal Hadoop distributions, the free 60-day trial is currently available only for Cloudera and Hortonworks. Informatica’s success in seeding its Big Data Edition with Cloudera and Hortonworks increases the likelihood that developers will explore, and subsequently adopt, the platform as a means of discovering and manipulating Big Data sets. As such, Informatica’s Big Data Edition competes with products like Trifacta that similarly facilitate the manipulation, cleansing and visualization of Big Data by means of a code-free user interface that increases analyst productivity and accelerates the derivation of actionable business intelligence. On one hand, the recent proliferation of Big Data products that allow users to explore Big Data without learning the intricacies of MapReduce democratizes access to Hadoop-based datasets. That said, it remains to be seen whether graphical user interface-driven Big Data discovery and manipulation platforms can enable the granular identification of data anomalies, exceptions and eccentricities that may otherwise be obscured by large-scale trend analysis.

Categories: Big Data, Hadoop, Informatica
