Redis Announces Redis Modules That Extend Redis Database Functionality To Additional Use Cases

At RedisConf 2016, the team behind Redis, the open source, in-memory data structure store, announced a new capability called Redis Modules that allows developers to extend Redis to cover an expanded set of use cases. Redis Modules enable developers to create new database functionality in Redis by means of a Modules API. The API renders Redis extensible while insulating modules from changes to the Redis core: a module written against the Modules API today will continue to load and function as the Redis core evolves, without needing to be rewritten in conjunction with core updates. As noted in a blog post by Salvatore Sanfilippo, the creator of Redis, the vision that gave birth to Redis Modules was marked by a desire for sustainable compatibility between modules and the Redis core:

What I wanted was an extreme level of API compatibility for the future, so that a module wrote today could work in 4 years from now with the same API, regardless of the changes to the Redis core. I also wanted binary compatibility so that the 4 years old module could even *load* in the new Redis versions and work as expected, without even the need to be recompiled.

Sanfilippo goes on to note that accomplishing this required a low-level API, in contrast to Lua’s high-level scripting capabilities:

What we wanted to accomplish was to allow Redis developers to create commands that were as capable as the Redis native commands, and also as fast as the native commands. This cannot be accomplished just with a high level API that calls Redis commands, it’s too slow and limited. There is no point in having a Redis modules system that can just do what Lua can already do. You need to be able to say, get me the value associated with this key, what type is it? Do this low level operation on the value. Given me a cursor into the sorted set at this position, go to the next element, and so forth. To create an API that works as an intermediate layer for such low level access is tricky, but definitely possible.

Here, Sanfilippo differentiates the Redis Modules API from Lua scripting for Redis by pointing to the need for a fast, low-level API with direct access to the Redis core. Examples of Redis modules that are preliminarily available include an image processing module and a full text search module. That said, the code for the Redis Modules API remains unstable and awaits incorporation into an official release of the open source Redis software platform. Nevertheless, Redis Labs, home of Redis and provider of a commercial, enterprise-grade Redis solution, recently announced the release of Modules Hub, a marketplace that makes battle-tested, production-grade Redis modules available to Redis users. Judging by comments on Sanfilippo’s blog post, Redis Modules have already sparked considerable enthusiasm among users as they experiment with the API and mull over its possibilities.
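
To make the contrast with Lua scripting concrete, here is a minimal sketch of a module written in C against the Modules API, modeled on the hello world style of the announced API; the module and command names are illustrative, and the API remains experimental until it ships in an official release:

    #include "redismodule.h"

    /* A trivial command, HELLO.ECHO, that replies with its first argument.
     * Module commands receive the same context and argument vector that
     * native commands do, which is what lets them run at native speed. */
    int HelloEcho_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc != 2) return RedisModule_WrongArity(ctx);
        RedisModule_ReplyWithString(ctx, argv[1]);
        return REDISMODULE_OK;
    }

    /* Entry point called by Redis when the module is loaded. */
    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "hello", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        if (RedisModule_CreateCommand(ctx, "hello.echo", HelloEcho_RedisCommand,
                                      "readonly", 0, 0, 0) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        return REDISMODULE_OK;
    }

Compiled into a shared object, the module is loaded via the MODULE LOAD command or the loadmodule directive in redis.conf, after which hello.echo is invoked like any native command.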

Learn more about Redis Modules via the project’s API reference manual.

Datagres Announces PerfAccel For Couchbase Server, A Storage Performance Management Solution Featuring Granular I/O Analytics

Datagres this week announced the general availability of PerfAccel for Couchbase Server, its performance management solution for Couchbase’s NoSQL database platform. The PerfAccel storage management platform now boasts enhanced performance, improved analytics, an upgraded user interface and support for a broader range of storage infrastructures. In the case of Couchbase, PerfAccel delivers deep visibility into real-time I/O operations, spanning storage performance and its intersection with both the application layer and the underlying hardware. Couchbase customers can use PerfAccel to obtain real-time analytics on every Couchbase I/O, diagnose application bottlenecks by identifying root causes at the storage level, and subsequently implement prescriptive solutions to remediate the problem at hand. Unlike traditional application performance management products, PerfAccel focuses on the storage layer and delivers analytics with a granularity that facilitates increased IOPS, reduced latency and faster applications. The solution also features intelligent auto-tiering of storage data, which optimizes application performance by reducing the time required for reads and writes between compute and storage depending on the frequency with which data is used. Overall, PerfAccel delivers performance improvements that result in cost savings of 50 to 80% by empowering customers to leverage data-driven analytics to build and manage big data applications marked by low latency and high throughput. The platform features a deep integration with Couchbase Server 4.0, marked by an exceptional level of granularity regarding Couchbase I/O operations, to optimize performance and reduce operational costs.
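
Datagres has not published PerfAccel’s tiering heuristics, but frequency-based auto-tiering in general can be sketched simply: count accesses per block of data over a sampling window and place the hottest blocks on the fast tier. A toy illustration in C (the counters and threshold are hypothetical, not PerfAccel’s):

    #include <stdio.h>

    #define NBLOCKS 8
    #define HOT_THRESHOLD 100  /* accesses per window; hypothetical value */

    /* Per-block access counters gathered over a sampling window. */
    static unsigned access_count[NBLOCKS] = {250, 3, 180, 0, 97, 500, 12, 101};

    int main(void) {
        /* Blocks accessed more often than the threshold are promoted to the
         * fast tier (e.g. SSD); cold blocks stay on slower backing storage. */
        for (int b = 0; b < NBLOCKS; b++) {
            const char *tier = access_count[b] >= HOT_THRESHOLD ? "fast" : "slow";
            printf("block %d: %u accesses -> %s tier\n", b, access_count[b], tier);
        }
        return 0;
    }

A real system would decay or reset the counters each window so that tier placement tracks how frequently data is currently used, which is the behavior the announcement describes.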

Q&A With DBS-H Regarding Its Continuous Big Data Integration Platform For SQL To NoSQL

Cloud Computing Today recently had the privilege of speaking with Amos Shaltiel, CEO and co-founder, and Michael Elkin, COO and co-founder, of DBS-H, an Israel-based company that specializes in continuous Big Data integration between relational and NoSQL data stores. Topics discussed included the core capabilities of its big data integration platform, typical customer use cases and the role of data enrichment.

Cloud Computing Today: What are the core capabilities of your continuous big data integration platform for integrating SQL data with NoSQL? Is the integration unidirectional or bidirectional? What NoSQL platforms do you support?

DBS-H: DBS-H develops innovative solutions for continuous data integration between SQL and NoSQL databases. We believe that companies are going to adopt a hybrid model in which relational databases such as Oracle, SQL Server, DB2 or MySQL continue to serve customers alongside new NoSQL engines. The success of Big Data adoption will ultimately rise and fall on how easily information can be accessed by key players in organizations.

The DBS-H solution relieves the data bottlenecks associated with integrating Big Data with existing SQL data sources, making sure that everyone has access to the data they are looking for, transparently and without the need to change existing systems.

Our vision is to make the data integration process simple, intuitive and fully transparent to the customer, without the need to hire highly skilled personnel for expensive maintenance of integration platforms.

Core capabilities of the DBS-H Big Data integration platform are:

1. Continuous data integration between SQL and NoSQL databases. Continuous integration represents a key factor in successful Big Data integration.
2. NoSQL data modeling and linkage to the existing relational model. We call it a “playground” where customers can:
a. Link a relational data model to a non-relational structure.
b. Create a new data design for the NoSQL database.
c. Explore “Auto Link,” where the engine automatically generates two options for the NoSQL data model based on the existing SQL ERD design.
3. Data enrichment – the capability to attach additional information to each block of data, significantly enriching that data on the target.

Currently, we focus on unidirectional integration, thereby avoiding some of the conflict resolution scenarios specific to bidirectional continuous data integration. The unidirectional path is from SQL to NoSQL; in the near future we will add the opposite direction, NoSQL to SQL. Today, we support Oracle and MongoDB, and we plan to add support for additional database engines such as SQL Server, DB2, MySQL, Couchbase and Cassandra, as well as full integration with Hadoop. We aspire to be the solution of choice when customers think about data integration across major industry data sources.
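
DBS-H has not published implementation details, but the core mechanics of continuous SQL-to-NoSQL integration can be pictured in miniature: each changed row captured from the relational source is mapped onto a document for the NoSQL target, continuously rather than in batch. A toy sketch in C (the table, field names and mapping are hypothetical illustrations, not DBS-H’s engine):

    #include <stdio.h>

    /* A changed row captured from a relational CUSTOMERS table. */
    typedef struct {
        int id;
        const char *name;
        const char *city;
    } CustomerRow;

    /* Map the row onto a JSON document for the NoSQL target, e.g. MongoDB. */
    static void row_to_document(const CustomerRow *r, char *buf, size_t len) {
        snprintf(buf, len,
                 "{\"_id\": %d, \"name\": \"%s\", \"address\": {\"city\": \"%s\"}}",
                 r->id, r->name, r->city);
    }

    int main(void) {
        CustomerRow changed = {42, "Acme Corp", "Tel Aviv"};
        char doc[256];
        row_to_document(&changed, doc, sizeof doc);
        printf("%s\n", doc);
        return 0;
    }

A production pipeline would capture changes continuously from the source database’s transaction log and upsert each resulting document into the target collection as it arrives.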

Cloud Computing Today: What are the most typical use cases for continuous data integration from SQL to NoSQL?

DBS-H: NoSQL engines offer high performance at relatively low cost, along with a flexible schema model.

Typical use cases of continuous data integration from SQL to NoSQL are driven principally by major NoSQL use cases, such as:

  1. Customer 360° view – creating and maintaining a unified view of a customer from multiple operational systems: the ability to provide a consistent customer experience regardless of the channel, capitalize on upsell or cross-sell opportunities and deliver better customer service. NoSQL engines provide the response times required in customer service, scalability and a flexible data model. The DBS-H solution is an enabler for the “Customer 360° view” business case by providing transparent and continuous integration from existing SQL-based data sources.
  2. User profile management – applications that manage user preferences, authentications and even financial transactions. NoSQL provides a high performance, flexible schema model for user preferences; financial transactions, however, will usually be managed by a SQL system. With DBS-H continuous data integration, financial transaction data is available transparently inside NoSQL engines.
  3. Catalog management – applications that manage catalogs of products, financial assets, employees or customer data. Modern catalogs often contain user generated data from social networks. NoSQL engines provide excellent flexible-schema capabilities that accommodate changes on the fly. Catalogs usually aggregate data from different organizational data sources such as online systems, CRM or ERP. The DBS-H solution enables transparent and continuous data integration from multiple existing SQL data sources into a new, centralized, NoSQL-based catalog system.

Cloud Computing Today: Do you perform any data enrichment of SQL data in the process of its integration with NoSQL? If so, what kind of data enrichment does your platform deliver? In the event that customers prefer to leave their data in its original state, without enrichment, can they opt out of the data enrichment process?

DBS-H: The DBS-H solution contains data enrichment capabilities that operate during the data integration process. The main idea of “data enrichment” in our case is to provide a simple way for the customer to add logical information that enriches the original data by:

  1. Adding data source identification information, such as where and when the data was generated and by whom. This can be used for auditing, for example.
  2. Classifying data based on its source. This information can be very useful when customers want to control data access based on different roles and groups inside the organization.
  3. Assessing data reliability as low, medium or high. This enrichment is useful for analytic platforms that can make different decisions based on the reliability level of the source.

Customers can create enrichment metrics that are added to every block of information that goes through the DBS-H integration pipeline. If no enrichment is required, the customer can opt out of the enrichment step.
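
To picture the enrichment step, each replicated block of data can be wrapped in a small envelope of metadata covering the three categories above before it is written to the target. A toy sketch in C (the field names are hypothetical, not DBS-H’s schema):

    #include <stdio.h>

    /* Enrichment metadata attached to each block of integrated data:
     * source identification, classification and an assessed reliability. */
    typedef struct {
        const char *source;         /* where the data was generated */
        const char *captured_at;    /* when it was captured */
        const char *classification; /* e.g. "finance", used for access control */
        const char *reliability;    /* "low", "medium" or "high" */
    } Enrichment;

    int main(void) {
        const char *payload = "{\"order_id\": 1001, \"total\": 99.5}";
        Enrichment e = {"oracle-prod-1", "2016-05-12T10:04:00Z", "finance", "high"};

        /* The enriched document carries the original payload plus metadata. */
        printf("{\"data\": %s, \"_meta\": {\"source\": \"%s\", "
               "\"captured_at\": \"%s\", \"class\": \"%s\", \"reliability\": \"%s\"}}\n",
               payload, e.source, e.captured_at, e.classification, e.reliability);
        return 0;
    }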

MongoDB Reveals Details Of Connector To SQL-Compliant Business Intelligence And Data Visualization Platforms

MongoDB today announced details of a technology that connects MongoDB to business intelligence and data visualization platforms such as Tableau, Business Objects, Cognos and Microsoft Excel. By rendering data stored in MongoDB compatible with SQL-compliant data analysis tools, the connector allows developers to leverage the rich querying ability of SQL to derive actionable business intelligence from MongoDB-based data. MongoDB customers can now use the connector to transform data from MongoDB’s nested JSON format into the tabular format required by SQL-compliant tools, whereas previously, organizations interested in obtaining business intelligence on MongoDB-based data typically resorted to third party analytics and visualization platforms such as Jaspersoft, Pentaho and Informatica. Because the connector provides a richer, deeper connection between data aggregated in MongoDB and platforms such as Tableau and Business Objects, customers no longer need to consider migrating MongoDB-based data into a relational database prior to performing advanced analytical queries.
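
The essence of the transformation, from nested JSON documents to the flat tables SQL tools expect, can be pictured with a toy example in C in which an order document with embedded line items becomes one parent row plus child rows keyed back to the parent (the schema is illustrative, not the connector’s actual mapping):

    #include <stdio.h>

    /* A nested, MongoDB-style document: an order with embedded line items. */
    typedef struct { const char *sku; int qty; } LineItem;
    typedef struct {
        int order_id;
        const char *customer;
        LineItem items[4];
        int n_items;
    } OrderDoc;

    int main(void) {
        OrderDoc doc = {1001, "acme", {{"A-1", 2}, {"B-7", 1}}, 2};

        /* Parent table: one row per document. */
        printf("orders(order_id, customer): %d, %s\n", doc.order_id, doc.customer);

        /* Child table: one row per embedded array element, keyed to the parent
         * so SQL tools can JOIN the two tables back together. */
        for (int i = 0; i < doc.n_items; i++)
            printf("order_items(order_id, sku, qty): %d, %s, %d\n",
                   doc.order_id, doc.items[i].sku, doc.items[i].qty);
        return 0;
    }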

At this year’s MongoDB World conference, Tableau and MongoDB leveraged data from the U.S. Federal Aviation Administration to illustrate the likelihood that conference attendees would return home on time. The release of the connector is symptomatic of a broader, industry-wide trend toward deeper integration between NoSQL and SQL as evinced, for example, by the recent integration between Couchbase and Metanautix. Given the contemporary interest in real-time analytics on streaming Big Data, the obvious question raised by the tightened integration between MongoDB and SQL-compliant platforms concerns the degree to which BI platforms such as Tableau will be able to perform real-time queries on streaming data aggregated in MongoDB. Meanwhile, the release of the MongoDB connector illustrates the enduring popularity of SQL as a framework for querying heterogeneous datasets as exemplified by the way in which the convergence of SQL and NoSQL stands to complement the robust ecosystem of SQL on Hadoop platforms such as Lingual, Apache Hive, Pivotal HAWQ and Cloudera Impala.

CenturyLink Acquires Database As A Service Vendor Orchestrate

Cloud infrastructure and hosted IT solution provider CenturyLink has announced the acquisition of Portland-based Database as a Service vendor Orchestrate. Orchestrate provides a database as a service platform specifically engineered for rapid application development, delivering multiple databases that allow developers to “store and query JSON data” in ways that empower them to integrate geospatial data, time-series data, graphs and search queries into applications without worrying about managing the operations of the databases themselves. Orchestrate’s ability to provide a portfolio of NoSQL databases means that CenturyLink customers stand to enjoy enhanced abilities to build high performance, agile, big data applications for use cases involving real-time streaming data, the internet of things and mobile applications. CenturyLink’s acquisition of Orchestrate complements its recent acquisition of predictive analytics vendor Cognilytics as well as the launch of Hyperscale, its high performance cloud instance offering designed for big data and computationally intensive workloads. The acquisition of Orchestrate illustrates the increasing confluence of cloud and Big Data product offerings as customers increasingly seek one platform to fulfill their infrastructure, application development, data storage and analytic needs.

BigPanda Emerges From Stealth To Manage Deluge Of IT Alerts And Notifications

BigPanda today launches from stealth to tackle the problem of managing the explosion of alerts and notifications that IT administrators receive daily from myriad applications and devices. The Mountain View-based startup consolidates alerts and notifications from disparate sources into a single data feed, parsing unstructured data into structured data to create an aggregated repository of alerts and notifications. BigPanda’s proprietary analytics then run against this integrated repository to enable the creation of topologies and relationships, time-based analytics and statistical analytics.

Examples of statistical analytics include probabilistic determinations that the concurrent appearance of notifications A, B and C is likely to lead to outcome X, as suggested by historical data about the conjunction of the notifications in question. The platform’s machine-learning technology incrementally refines its analytics in relation to incoming data and thereby iteratively delivers more nuanced analyses and visualizations of notification-related data. Overall, the platform enables customers to more effectively manage the tidal wave of notifications that bombard the inboxes of IT administrators by facilitating the derivation of actionable business intelligence from notifications aggregated across discrete systems and applications.
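
BigPanda has not disclosed its models, but the statistical idea can be sketched as a conditional probability estimated from historical data: of the time windows in which notifications A, B and C all appeared together, what fraction was followed by outcome X? A toy illustration in C (the historical records are invented for the example):

    #include <stdio.h>
    #include <stdbool.h>

    /* One historical window: which alerts fired, and whether outage X followed. */
    typedef struct { bool a, b, c, outage_x; } Window;

    int main(void) {
        Window history[] = {
            {true,  true,  true,  true },
            {true,  true,  true,  false},
            {true,  false, true,  false},
            {true,  true,  true,  true },
            {false, true,  true,  false},
            {true,  true,  true,  true },
        };
        int n = sizeof history / sizeof history[0];

        int abc = 0, abc_and_x = 0;
        for (int i = 0; i < n; i++) {
            if (history[i].a && history[i].b && history[i].c) {
                abc++;                       /* A, B and C fired together */
                if (history[i].outage_x) abc_and_x++;
            }
        }

        if (abc > 0)   /* estimate P(X | A and B and C) */
            printf("P(X | A,B,C) ~= %.2f from %d matching windows\n",
                   (double)abc_and_x / abc, abc);
        return 0;
    }

A machine-learning pipeline of the kind described would refine such estimates continuously as new windows of notification data arrive.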

As told to Cloud Computing Today by BigPanda CEO Assaf Resnick, the platform integrates with monitoring systems such as New Relic, Nagios and Splunk and additionally provides REST API functionality to connect to different applications, deployment infrastructures and ITSM tools. Moreover, BigPanda today announces the finalization of $7M in Series A funding in a round led by Mayfield with additional participation from Sequoia Capital. The $7M funding raise brings the total capital raised by BigPanda to $8.5M, following upon a $1.5M pre-Series A seed round of funding from Sequoia Capital. Deployed as a SaaS application that runs on AWS infrastructure while leveraging a MongoDB NoSQL datastore, BigPanda fills a critical niche in the IT management space by delivering one of the few applications aimed at consolidated notification management and analytics. As applications, infrastructure components and networking devices proliferate with dizzying complexity in the contemporary datacenter, platforms like BigPanda are likely to morph into necessary components of IT management as a means of taming the deluge of notifications produced by disparate systems. Meanwhile, BigPanda’s early positioning in the notification-management space renders it a thought leader as well as a technology standout.