On June 28 at MongoDB World, MongoDB announced details of MongoDB Atlas, a database-as-a-service platform for MongoDB. MongoDB Atlas makes it easier for MongoDB users to deploy and manage MongoDB across multiple cloud platforms. Whereas MongoDB users previously needed to manage discrete cloud-based MongoDB deployments to ensure scalability, high availability and security, they can now take advantage of MongoDB Atlas to automate cloud-related service operations across multiple cloud platforms. Dev Ittycheria, president and CEO of MongoDB, remarked on the significance of MongoDB Atlas as follows:
MongoDB Atlas takes everything we know about operating MongoDB and packages it into a convenient, secure, elastic, on-demand service. This new offering is yet another major milestone for the most feature rich and popular database for modern applications, and expands the options for how customers can consume the technology – in their own data centers, in the cloud, and now as a service.
Here, Ittycheria comments on the ability of MongoDB Atlas to turn MongoDB into a turnkey platform that developers can consume as an on-demand service marked by elastic scalability. MongoDB Atlas delivers elastic scalability to cloud-based MongoDB deployments alongside provisioning, upgrades, and backup and recovery services. Its elastic scalability is built on automatic sharding, which allows deployments to scale with no application downtime. The MongoDB Atlas screenshot below gives customers a snapshot of metrics related to MongoDB deployments within the AWS North Virginia region:
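For readers unfamiliar with how sharding is switched on, the sketch below shows the shape of the admin commands involved. The database name, collection name and shard key are hypothetical; the command documents mirror MongoDB's real "enableSharding" and "shardCollection" admin commands, which a driver such as pymongo would send via client.admin.command(...) against a live cluster.

```python
# Hedged sketch of the admin commands behind automatic sharding.
# "appdata" and "appdata.events" are hypothetical names.

enable_sharding = {"enableSharding": "appdata"}

# A hashed shard key distributes writes evenly across shards, which is
# what allows the cluster to scale out with no application downtime.
shard_collection = {
    "shardCollection": "appdata.events",
    "key": {"userId": "hashed"},
}

print(enable_sharding)
print(shard_collection)
```

In a service such as Atlas, these steps are automated behind the scenes rather than issued by hand.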
As the graphic above illustrates, customers can use MongoDB Atlas to understand and monitor pricing across a range of instance sizes. The larger vision of MongoDB Atlas, however, lies in its ability to deliver automation and oversight of MongoDB deployments across multiple cloud platforms, giving customers a centralized platform from which to manage all of their cloud-based MongoDB infrastructure. MongoDB Atlas is currently available on Amazon Web Services, although integrations with Microsoft Azure and the Google Cloud Platform are expected soon. The release of the platform marks a breakthrough moment not only for the deployment and ongoing management of MongoDB but also for data sovereignty and data governance, particularly in the context of multi-cloud, regionally dispersed hybrid cloud deployments. Expect MongoDB Atlas to facilitate increased adoption of MongoDB and thereby expand its market share within the space of NoSQL, document-oriented databases.
MongoDB today announced the release of MongoDB 3.2, a breakthrough release that features two new storage engines, enhanced management offerings and expanded tools for exploring and analyzing data. The release's enhanced management tools include improvements to Cloud Manager and Ops Manager that deliver a visual user interface for understanding query performance. Customers can use Cloud Manager to identify which queries take longer to execute for architectural or engineering reasons. Cloud Manager and Ops Manager both allow customers to add new indexes to improve query performance with a single click and without application downtime. In addition, the release introduces two new storage engines, an encrypted storage engine and an in-memory storage engine, the latter of which delivers the high throughput and low latency required by big data applications.
This release also includes a data exploration tool in the form of MongoDB Compass, an application that enables DBAs to use a graphical user interface to construct queries and dive deep into their databases to understand attributes such as data completeness and data quality. MongoDB Compass gives DBAs the ability to explore their databases so they can make smarter decisions about how those databases should be deployed and differentially managed by development teams, business analysts, data analysts and data scientists. In MongoDB 3.2, MongoDB Compass is accompanied by the MongoDB Connector for BI, a tool that transforms MongoDB data into a format compatible with SQL-compliant tools such as Tableau, Cognos, Business Objects and Excel, empowering customers to use popular BI tools to produce reports and analytics on their universe of data. Beyond the Connector for BI's ability to integrate MongoDB data with SQL-compliant tools, the release also features integrations with popular application performance monitoring tools such as New Relic and AppDynamics to help customers optimize the performance of their deployments.
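To make "data completeness" concrete, the following is a minimal sketch of the kind of field-level profiling a tool like Compass surfaces: for a sample of documents, what fraction contain each field? The sample documents are hypothetical, and this is an illustration of the concept, not Compass's implementation.

```python
# For a list of sample documents, compute the fraction of documents in
# which each field appears -- a simple measure of data completeness.

def field_completeness(docs):
    """Return a mapping of field name -> fraction of docs containing it."""
    fields = set()
    for doc in docs:
        fields.update(doc)
    return {f: sum(f in d for d in docs) / len(docs) for f in fields}

sample = [
    {"_id": 1, "name": "Ada", "email": "ada@example.com"},
    {"_id": 2, "name": "Grace"},                 # no email field
    {"_id": 3, "name": "Alan", "email": "alan@example.com"},
]

print(field_completeness(sample))  # "email" is present in 2 of 3 documents
```

Because MongoDB documents are schemaless, this kind of sampling is often the only way to learn what a collection actually contains before deciding how it should be managed.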
All told, MongoDB 3.2 represents another highly significant moment in the evolution of MongoDB, with a bevy of enhancements that make it easier for customers to understand and optimize the performance of their databases while seamlessly connecting them to other tools within the technology stack. As such, MongoDB 3.2 positions the platform more strongly than ever to manage production-grade workloads, integrate with BI and application performance monitoring tools across the industry, and use Cloud Manager and Ops Manager to improve query performance without disrupting the application. The graphic below illustrates MongoDB 3.2's ability to visualize query performance so that queries can be optimized either by way of enhanced indexes or other modifications to the database infrastructure.
MongoDB today announced details of a technology that connects MongoDB to business intelligence and data visualization platforms such as Tableau, Business Objects, Cognos and Microsoft Excel. By rendering data stored in MongoDB compatible with SQL-compliant data analysis tools, the connector allows developers to leverage the rich querying ability of SQL to derive actionable business intelligence from MongoDB-based data. MongoDB customers can now use the connector to transform data from MongoDB's nested JSON format into the tabular format required by SQL-compliant tools; previously, organizations interested in obtaining business intelligence on MongoDB-based data typically resorted to third-party analytics and visualization platforms such as Jaspersoft, Pentaho and Informatica. With a richer, deeper connection between data aggregated in MongoDB and platforms such as Tableau and Business Objects, customers no longer need to consider migrating MongoDB-based data into a relational database before performing advanced analytical queries.
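The core transformation involved can be sketched in a few lines: nested sub-documents are flattened into dotted column names so each document becomes a flat row that a SQL tool can treat as tabular data. This illustrates the idea only; it is not the connector's actual algorithm, and the sample document is hypothetical.

```python
# Flatten MongoDB-style nested JSON documents into flat, tabular rows.

def flatten(doc, prefix=""):
    """Flatten nested sub-documents into dotted column names."""
    row = {}
    for key, value in doc.items():
        column = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into the sub-document, extending the column prefix.
            row.update(flatten(value, prefix=column + "."))
        else:
            row[column] = value
    return row

doc = {"_id": 7, "name": "widget", "price": {"amount": 9.99, "currency": "USD"}}
print(flatten(doc))
# {'_id': 7, 'name': 'widget', 'price.amount': 9.99, 'price.currency': 'USD'}
```

Once documents are in this shape, each dotted key maps naturally onto a column in a SQL table, which is what lets tools like Tableau query the data directly.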
At this year’s MongoDB World conference, Tableau and MongoDB leveraged data from the U.S. Federal Aviation Administration to illustrate the likelihood that conference attendees would return home on time. The release of the connector is symptomatic of a broader, industry-wide trend toward deeper integration between NoSQL and SQL as evinced, for example, by the recent integration between Couchbase and Metanautix. Given the contemporary interest in real-time analytics on streaming Big Data, the obvious question raised by the tightened integration between MongoDB and SQL-compliant platforms concerns the degree to which BI platforms such as Tableau will be able to perform real-time queries on streaming data aggregated in MongoDB. Meanwhile, the release of the MongoDB connector illustrates the enduring popularity of SQL as a framework for querying heterogeneous datasets as exemplified by the way in which the convergence of SQL and NoSQL stands to complement the robust ecosystem of SQL on Hadoop platforms such as Lingual, Apache Hive, Pivotal HAWQ and Cloudera Impala.
On February 3, MongoDB announced the release of MongoDB 3.0, the most significant release of MongoDB in the company's history. The release features a fundamental re-architecting of the database marked by the addition of a pluggable storage engine API that allows for additional storage engines. WiredTiger, acquired by MongoDB last year, supplies one of the headline storage engines of this release, delivering write performance improvements of 7-10x and compression improvements of 60 to 80%. MongoDB 3.0 includes a storage engine designed for read-intensive applications, one for write-intensive applications and an in-memory storage engine. As such, the newly enhanced MongoDB platform allows the database to be optimized for different workloads and use cases while retaining a unified data model and operations interface.
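Because the engine is chosen at server startup, selecting one is a configuration matter rather than an application change. A minimal mongod.conf sketch selecting WiredTiger might look as follows; the data path and compressor choice are illustrative placeholders rather than recommendations.

```yaml
# Minimal mongod.conf sketch: selecting a storage engine under the
# pluggable storage engine API introduced in MongoDB 3.0.
# The dbPath below is a hypothetical placeholder.
storage:
  dbPath: /var/lib/mongodb
  engine: wiredTiger
  wiredTiger:
    collectionConfig:
      blockCompressor: snappy
```

This is what "a unified data model and operations interface" means in practice: applications issue the same queries regardless of which engine is configured underneath.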
Charity Majors, Production Engineering Manager at Parse (Facebook), remarked on the significance of the MongoDB 3.0 release as follows:
We at Parse and Facebook are incredibly excited for the 3.0 release of MongoDB. The storage API opens the door for MongoDB to leverage write-optimized storage engines, improved compression and memory usage, and other aspects of cutting edge modern database engines.
As Majors notes, the re-architecting of MongoDB expands the range of use cases that MongoDB can handle by making it more suitable for write-intensive applications. MongoDB 3.0 also boasts marked improvements in performance and scalability because of its redesigned storage architecture. The release additionally introduces Ops Manager, an application that enables customers to deploy, monitor and update MongoDB deployments. Ops Manager integrates with well-known tools such as AppDynamics, New Relic and Docker, and stands to reduce the operational overhead of MongoDB deployments by automating routine tasks into one-click, push-button functionality. Overall, MongoDB 3.0 represents a watershed moment in the development of MongoDB as evinced by its ability to embrace a variety of application workloads and use cases alongside a massively improved level of performance and scalability.
On October 14, MongoDB announced major enhancements to its cloud-based MongoDB Management Service (MMS) for managing MongoDB deployments. The most recent version of MMS introduces significant operational efficiencies that streamline and simplify the deployment and subsequent operational management of MongoDB. For example, MMS now enables users to provision MongoDB deployments with one click and configure the resulting infrastructure with minimal manual intervention and decision-making. Moreover, the recent enhancements make it possible to upgrade and downgrade deployments expeditiously and to seamlessly scale out deployments to accommodate customer growth. Notably, this release boasts a deeper integration with Amazon Web Services that gives customers greater control over MongoDB deployments on AWS, as illustrated by the screenshot below:
As told to Cloud Computing Today by Kelly Stirman, MongoDB’s Director of Products, MongoDB Management Service users can now deploy Amazon Web Services instances from within the MMS infrastructure itself by using the automation agent functionality depicted above. Previously, MMS customers needed to independently provision AWS instances from within the AWS platform, but they can now leverage the deep integration between MMS and AWS to enjoy greater operational efficiencies specific to deploying AWS infrastructures that contain MongoDB. That said, MMS remains infrastructure agnostic and can work with any public cloud, on-premise environment or hybrid cloud infrastructure, although in the case of non-AWS hosting environments, customers will need to independently configure and deploy the underlying infrastructure outside of MMS. The other notable feature of MMS is that it now operates on a freemium model that allows customers to take advantage of its functionality free of charge for up to 8 servers. The freemium model positions MongoDB to significantly expand the range of customers that opt to try out the functionality of MMS and continues to propel the company toward a lucrative IPO.
Cloudera and MongoDB recently announced a strategic partnership designed to allow customers to take advantage of Cloudera’s Hadoop distribution and MongoDB’s NoSQL platform. Details of the partnership remain scant, although we do know that both companies are working on enhancing the current version of the MongoDB Connector for Hadoop, which is certified to run on Cloudera Enterprise 5. The MongoDB Connector for Hadoop “is a plugin for Hadoop that provides the ability to use MongoDB as an input source and/or an output destination.” In other words, the MongoDB Connector for Hadoop enables Hadoop users to output data to MongoDB and, conversely, to consume MongoDB data within a Hadoop environment. Cloudera’s Chief Strategy Officer Mike Olson commented on the partnership by noting:
Volume, variety and velocity all strain traditional operational databases, calling for a fundamental reconsideration of how companies store and process data. A Hadoop-powered enterprise data hub is an alternative center for data storage and analytics, and together with MongoDB, we empower companies to keep all of their data in full fidelity and at minimal cost, in order to power the data needs of all connected applications and IT infrastructure.
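Concretely, the connector's role as an input source and output destination is expressed through Hadoop job configuration properties. The sketch below shows the shape of such a configuration; the host and database/collection namespaces are hypothetical, and the property names follow those commonly documented for the mongo-hadoop connector.

```xml
<!-- Sketch of a Hadoop job configuration using the MongoDB Connector
     for Hadoop as both input source and output destination. -->
<configuration>
  <property>
    <name>mongo.input.uri</name>
    <value>mongodb://db1.example.com:27017/analytics.events</value>
  </property>
  <property>
    <name>mongo.output.uri</name>
    <value>mongodb://db1.example.com:27017/analytics.results</value>
  </property>
</configuration>
```

A MapReduce job configured this way reads its input splits directly from one MongoDB collection and writes its results back to another, without an intermediate export step.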
One direction for the partnership consists of the delivery of a turnkey Big Data solution with the analytic capabilities to mine both structured and unstructured data. From a product development standpoint, the obvious question concerns how much both vendors will invest in querying, analytic and predictive modeling capabilities that span both Hadoop and NoSQL. That said, the Big Data and cloud landscape has witnessed a proliferation of partnerships that amalgamate heterogeneous technology components within a larger institutional framework but rarely result in genuine innovation and breakthrough technologies, as noted in IBM’s Acquisition of Cloudant and The Walmart Effect In Tech. All this is to say that while the Cloudera-MongoDB partnership holds tremendous, even disruptive promise for the Big Data industry, partnerships represent a prevalent fashion in contemporary tech based on the principles of collage and montage: they sometimes result in innovation and disruptive technology platforms, but all too often deliver combinations of elemental technologies that disappoint in proportion to the capital and human talent brought together by the collaboration in question. Cloudera’s Mike Olson will present further details regarding the partnership in his keynote address at MongoDB World in NYC on June 24.
Last week, IBM announced an agreement to acquire NoSQL database-as-a-service vendor Cloudant for an undisclosed sum. An active contributor to the Apache CouchDB project, Cloudant delivers a JSON document database-based platform that claims high availability, scalability and elasticity amongst its attributes. Cloudant customers can take advantage of its JSON-based database as a service to store and mine structured and unstructured data from a variety of sources. Because the JSON database format is so widely used by developers of mobile and web applications, IBM’s acquisition of Cloudant stands to strengthen its positioning with respect to the development of applications for mobile devices in conjunction with the build-out of its OpenStack-based cloud solution for the enterprise. The acquisition of Cloudant will be central to IBM’s MobileFirst solutions as well as its Worklight application for developing mobile applications. From an industry perspective, the acquisition represents a huge coup for the NoSQL space in general. CouchDB has historically not had the traction of MongoDB, Cassandra and Couchbase, so we should expect brand-name tech companies to make similar offers for the likes of MongoDB in the ensuing few months. Moreover, IBM’s acquisition of Cloudant testifies to the increasing emergence of cloud and big data behemoths with solutions both for hosting infrastructure and for databases that accommodate enterprise needs for scalability and the storage of unstructured data. Cloudant CEO Derek Schoettle summarized the significance of Cloudant’s contribution to IBM’s SoftLayer cloud platform as follows:
Cloudant’s decision to join IBM highlights that the next wave of enterprise technology innovation has moved beyond infrastructure and is now happening at the data layer. Our relationship with IBM and SoftLayer has evolved significantly in recent years, with more connected devices generating data at an unprecedented rate. Cloudant’s NoSQL expertise, combined with IBM’s enterprise reliability and resources, adds data layer services to the IBM portfolio that others can’t match.
Schoettle notes that IBM is extending its infrastructure innovations to the “data layer” and, as such, follows in the footsteps of Amazon Web Services and EMC/VMware spin-off Pivotal, which similarly deliver a combination of cloud and big data solutions in their platform and product offerings. The notable consequence of this convergence of cloud and big data product offerings is that only large enterprises with the requisite capital and resources can afford to cobble together combined cloud-big data product offerings. As a result, cloud startups and smaller data vendors will need to continue to compete by way of their agility, responsiveness, consultative support and superior technology. In effect, the IBM acquisition of Cloudant signals a Walmart effect in technology, of sorts, whereby large, well-capitalized vendors have the ability to create marts of diverse data and analytics products that threaten the viability of cloud, big data and analytics startups in the same way that massive retailers such as Walmart threaten the viability of independent stores or small chains. Oracle’s recent acquisition of BlueKai, a big data management platform geared toward marketing, constitutes another example of the way in which tech giants are continuing to integrate diverse data products into increasingly heterogeneous product portfolios. The question that remains unanswered, however, is whether these emerging Walmart-style technology marts are sufficiently easy to navigate that enterprises will opt to partner with one vendor for all of their technology needs, or whether they will feel more comfortable shopping from a diverse range of technology vendors in order to avoid vendor lock-in and find products that richly respond to the specificities of their industry vertical and customer needs.