Ben Golub, Docker CEO, On The Future Of Distributed Applications And Container Technologies

Roughly six weeks after Docker’s announcement of Docker Hub Enterprise (DHE), Cloud Computing Today spoke to Docker CEO Ben Golub about Docker’s progress in 2014 as well as the future of distributed applications and containers more generally. Golub cited the landmark release of Docker 1.0 in June, the first DockerCon in history and “exponential growth” among financial services firms such as ING and Goldman Sachs as key achievements in 2014. Speaking of the future of distributed applications, Golub noted the increased agility enabled by Docker containers, as exemplified by ING’s ability to implement hundreds of changes to an application per day in contrast to the previous state of affairs wherein code changes were released every several months. Golub also remarked on the way in which Docker has become the “de facto standard” for creating distributed applications, supported by an ecosystem of over 20,000 tools and an increasingly vibrant community of users, contributors and partners.

Cloud Computing Today: What were some landmarks and key achievements for Docker over the past year?

Ben Golub (CEO, Docker): 2014 was a tremendous year for the Docker project and the outstanding ecosystem of users, contributors and partners that comprises the community. The biggest landmark of the year was the release of Docker 1.0 in June at the first-ever DockerCon. That production-ready release signaled to developers everywhere that Docker could be depended upon for strategic development workflows. Docker Hub was also released at that time, which was a critical component in showcasing Docker as an open platform for distributed applications.

Since 1.0, there has been exponential growth in usage in the enterprise with large financial services companies like ING and Goldman Sachs publicly referencing their successes with Docker. That customer traction has led to an incredible burst of strategic partnership announcements during the back half of the year including VMware, Microsoft, Google, AWS and IBM to name a few.

Cloud Computing Today: How do you foresee the future of distributed applications? What is driving their proliferation in the industry?

Ben Golub (CEO, Docker): We expect to see a new generation of Docker-based distributed applications go mainstream in 2015, as enterprises are recognizing the many benefits associated with these agile applications. Distributed applications are 100% portable, are composed of discrete interoperable services, have a dynamic lifecycle and are backed by an incredible ecosystem of technology partners. These capabilities are very attractive to today’s businesses, which need to deliver differentiated offerings to maintain a competitive edge. For example, ING’s use of Docker enables the bank to make application innovations over 300 times a day, where in the past it was able to make a single change every nine months.

Cloud Computing Today: How do you envision the future of container technologies more generally in 2015?

Ben Golub (CEO, Docker): Docker container technology has become the de facto standard for creating a composable set of services for building distributed applications. Through support of open APIs, there is now a flourishing ecosystem of 20,000 tools to support Docker and over 70,000 Dockerized applications. This critical mass is helping to foster an evolution in how container technologies are being leveraged. That evolution, as we showcased at DockerCon EU in December, is toward multi-container, multi-host distributed applications.
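A Compose-style definition gives a feel for what such a multi-container application looks like in practice; the service names and images below are hypothetical examples for illustration, not drawn from the interview:

```yaml
# Illustrative multi-container application definition (Compose v1 style).
# Service names and images are hypothetical examples.
web:
  image: nginx:1.7
  ports:
    - "80:80"
  links:
    - api
api:
  image: example/api:1.0   # hypothetical application image
  links:
    - db
db:
  image: postgres:9.3
```

Each service runs in its own container, and tooling in the Docker ecosystem can start and link them together; scheduling those containers across multiple hosts is the multi-host half of the evolution Golub describes.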

Categories: Docker

Datapipe’s Acquisition Of GoGrid Underscores The Industry Trend Of The Intersection Of Cloud And Big Data

Managed hybrid cloud IT solution provider Datapipe recently announced the acquisition of GoGrid, a leader in facilitating the effective operationalization of Big Data for cloud deployments. While GoGrid boasts over a decade of experience in managed cloud and dedicated cloud hosting, the company recently added a slew of Big Data offerings to its product line including NoSQL database product offerings and a 1-button deployment process, in addition to a partnership with Cloudera to accelerate Hadoop deployments for the enterprise. Robb Allen, CEO of Datapipe, commented on the significance of Datapipe’s acquisition of GoGrid as follows:

GoGrid has made it easy for companies to stand up Big Data solutions quickly. Datapipe customers will achieve significant value from the speed at which we can now create new Big Data projects in the cloud. This acquisition advances Datapipe’s strategy to help our enterprise clients architect, deploy and manage multi-cloud hybrid IT solutions.

Here, Allen remarks on the way in which GoGrid’s success in streamlining the implementation of Big Data solutions enhances Datapipe’s ability to offer enterprise customers Big Data solutions in conjunction with managed cloud hosting. Given that cloud adoption has significantly outpaced Big Data adoption in the enterprise to date, Datapipe stands poised to consolidate its leadership among cloud vendors offering Big Data solutions to enterprise customers. By acquiring GoGrid, Datapipe positions itself to offer its customers the scalability of the cloud in addition to the infrastructure to store petabytes of data. The adoption of cloud-based Big Data solutions enables customers to run analytics in parallel on transactional and non-transactional datasets alike, deriving insights that draw upon the union of financial, operational, marketing, sales and third-party data. As a result, Datapipe’s acquisition of GoGrid cements its already strong market positioning in the nascent but rapidly expanding space marked by the intersection of cloud computing and Big Data.

Categories: Big Data, Datapipe, GoGrid

Conversation With John Fanelli, DataTorrent’s VP of Marketing, Regarding Analytics On Streaming Big Data

Cloud Computing Today recently spoke to John Fanelli, DataTorrent’s VP of Marketing, about Big Data, real-time analytics on Hadoop, DataTorrent RTS 2.0 and the challenges specific to performing analytics on streaming Big Data sets. Fanelli commented on the market reception of DataTorrent’s flagship product DataTorrent RTS 2.0 and the mainstream adoption of Big Data technologies.

1. Cloud Computing Today: Tell us about the market landscape for real-time analytics on streaming Big Data and describe DataTorrent’s positioning within that landscape. How do you see the market for real-time analytics evolving?

John Fanelli (DataTorrent): Data is being generated today in not only unprecedented volume and variety, but also velocity. Human-created data is being surpassed by automatically generated data (sensor data, mobile devices and transaction data, for example) at a very rapid pace. The term we use for this is fast big data. Fast big data can provide companies with valuable business insights, but only if they act on those insights immediately. If they don’t, the business value declines as the data ages.

As a result of this business opportunity, streaming analytics is rapidly becoming the norm as enterprises rush to deliver differentiated offerings to generate revenue or create automated operational efficiencies to save costs. But it’s not just fast big data alone; it’s big data in general. Organizations have plenty of big data already in their Enterprise Data Warehouse (EDW) that is used to enrich and provide greater context to fast big data. Some examples of data that drives business decisions include customer information, location and purchase history.
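The enrichment pattern Fanelli describes — joining a stream of fast-arriving events against reference data already held in a warehouse — can be sketched in a few lines of plain Python; the customer fields and names here are invented for illustration:

```python
# Enrich a stream of fast-arriving events with static reference data
# (a stand-in for an Enterprise Data Warehouse lookup). Illustrative only.

# Reference data: customer profiles that already live in the EDW.
customer_profiles = {
    "c1": {"name": "Alice", "segment": "premium"},
    "c2": {"name": "Bob", "segment": "standard"},
}

def enrich(event_stream, profiles):
    """Yield each event joined with its customer's profile."""
    for event in event_stream:
        profile = profiles.get(event["customer_id"], {})
        yield {**event, **profile}

# A toy slice of the fast data stream.
events = [
    {"customer_id": "c1", "amount": 42.0},
    {"customer_id": "c2", "amount": 7.5},
]

enriched = list(enrich(events, customer_profiles))
```

A streaming platform performs this same join continuously and fault-tolerantly at scale, rather than over an in-memory list.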

DataTorrent is leading the way in meeting customer requirements in this market by providing extremely scalable ingestion of data from many sources at different rates (“data in motion” and “data at rest”), combined with fault-tolerant, high-performance analytics and flexible Java-based action and alerting, all delivered in an easy-to-use, easy-to-operate product offering, DataTorrent RTS.

The market will continue to evolve toward making analytics easier to use across the enterprise (think non-IT users), cloud-based deployments and even pre-built blueprints for “enterprise configurable” applications.

2. Cloud Computing Today: How would you describe the reception of DataTorrent RTS 2.0? What do customers like most about the product?

John Fanelli (DataTorrent): Customer feedback on DataTorrent RTS 2.0 has been phenomenal. There are many aspects of the product that are getting rave reviews. I have to call out that developers have reacted very positively to the Hadoop Distributed Hash Table (HDHT) feature, as it provides them with a distributed, fault-tolerant “application scratchpad” that doesn’t require any external technology or databases. Of course, the marquee features that have the data scientist community abuzz are Project DaVinci (visual streaming application builder) and Project Michelangelo (visual data dashboard). Both enable quick experimentation over real-time data and will emerge from Private Beta over the coming months.

3. Cloud Computing Today: How would you describe the differentiation of DataTorrent RTS from Apache Spark and Apache Storm?

John Fanelli (DataTorrent): DataTorrent provides a complete enterprise-grade solution, not just an event-streaming platform. DataTorrent RTS includes an enterprise-grade platform, a broad set of pre-built operators and visual development and visualization tools. Enterprises are looking for what DataTorrent calls a SHARPS platform. SHARPS is an acronym for Scalability, High Availability, Performance and Security. In each of the SHARPS categories, DataTorrent RTS is superior.

4. Cloud Computing Today: What challenges do you foresee for Big Data achieving mainstream adoption in 2015?

John Fanelli (DataTorrent): Fast big data is gaining momentum! Every day I speak with customers and prospects about their fast big data, the use-case requirements and the projected business impact. The biggest challenge they share with me is that they are looking to move faster than they are able due to existing projects and the technical skills on their teams. DataTorrent RTS addresses those challenges with its ease of use and operator libraries, which support almost any input/output source/sink and provide pre-built analytics modules.

Categories: Big Data, DataTorrent

Google Cloud Monitoring Achieves Beta Status Eight Months After Google’s Stackdriver Acquisition

Last week, Google released the Beta version of the Google Cloud Monitoring platform. Derived from its May 2014 acquisition of Stackdriver, Google Cloud Monitoring enables users to obtain insight into the performance of Google App Engine, Google Compute Engine, Cloud Pub/Sub, and Cloud SQL. As noted in a blog post by Google’s Dan Belcher, Google Cloud Monitoring delivers integrated monitoring of infrastructure, systems, uptime, trend analysis and alerts by way of a SaaS application. In addition, Google Cloud Monitoring enables users to create aggregations of select resources for monitoring and leverage dashboards that track latency, capacity, uptime and other performance-related metrics. The platform also enables users to configure alerts triggered when designated metric thresholds are reached, as well as endpoint checks notifying users about the lack of availability of APIs, web servers and other “internet-facing resources.” The beta release of Google Cloud Monitoring comes after months of preparation that culminated in the ability of the Stackdriver-based cloud monitoring platform to support Amazon Web Services and Google Cloud Platform customers alike. The release also follows soon after Google’s announcement of details of Google Cloud Trace, a Beta platform that allows users to analyze remote procedure calls (RPCs) created by a Google App Engine-based application to understand latency distributions between different RPCs and “performance bottlenecks” more generally. The larger significance of the Beta release of Google Cloud Monitoring is that it delivers a monitoring tool that can monitor both Google Cloud Platform and Amazon Web Services infrastructures, whereas Amazon’s CloudWatch, for example, is dedicated solely to monitoring the AWS platform.
For now, though, the product underscores Google’s commitment to building its IaaS infrastructure as exemplified by two Beta releases within the space of the early weeks of 2015.
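The endpoint-check idea mentioned above is straightforward to sketch: probe a URL, record the latency, and flag the endpoint as down on error or slow response. The function below is a minimal, generic illustration in Python, not Google’s API; the thresholds are assumptions:

```python
# Conceptual sketch of an endpoint availability check: probe a URL once,
# record latency, and treat errors or slow responses as "down".
# Thresholds and behavior are illustrative, not Cloud Monitoring's API.
import time
import urllib.error
import urllib.request

def check_endpoint(url, timeout=5.0, max_latency=2.0):
    """Probe `url` once; return (is_up, latency_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            pass
        latency = time.monotonic() - start
        return latency <= max_latency, latency
    except urllib.error.HTTPError as err:
        # The server responded, but with an error status; treat 5xx as down.
        latency = time.monotonic() - start
        return err.code < 500 and latency <= max_latency, latency
    except (urllib.error.URLError, OSError):
        # DNS failure, refused connection or timeout: endpoint unreachable.
        return False, time.monotonic() - start
```

A monitoring service runs checks like this from multiple locations on a schedule and routes failures to the alerting policies the user has configured.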

Categories: Google, IaaS

Treasure Data Closes $15M In Series B Funding For Fully Managed, Cloud-Based Big Data Platform

This week, Treasure Data announced the finalization of $15M in Series B funding led by Scale Venture Partners. The funding will be used to accelerate the expansion of Treasure Data’s proprietary, cloud-based platform for acquiring, storing and analyzing massive amounts of data for use cases that span industries such as gaming, the internet of things and digital media. Treasure Data’s Big Data platform specializes in acquiring and processing streaming big data sets that are subsequently stored in its cloud-based infrastructure. Notable about the Treasure Data platform is that it offers customers a fully managed solution for storing streaming big data that can ingest billions of records per day, in a non-HDFS (Hadoop) format. Current customers include Equifax, Pebble, GREE, Wish.com and Pioneer, the last of which leverages the Treasure Data platform for automobile-related telematics use cases. In addition to Scale Venture Partners, all existing board members and their associated funds participated in the Series B capital raise, including Jerry Yang’s AME Venture Fund.

Categories: Big Data, Treasure Data

Neo Technology Raises $20M In Series C Funding For Its Neo4j Graph Database Technology

Neo Technology today announced the finalization of $20M in Series C funding. The round was led by Creandum with additional participation from Dawn Capital, and existing investors Fidelity Growth Partners Europe, Sunstone Capital and Conor Venture Partners all participated. The funding will be used to expand sales operations, enhance product development and build the open source community supporting the Neo4j platform and its attendant partner ecosystem. The funding comes hot on the heels of a year of explosive growth for Neo Technology and its vendor-led open source graph database, Neo4j. Neo Technology’s CEO and co-founder Emil Eifrem remarked on the company’s growth as follows:

There are two strong forces propelling our growth: one is the overall market’s increasing adoption of graph databases in the enterprise. The other is proven market validation of Neo4j to support mission-critical operational applications across a wide range of industries and functions.

Eifrem notes how Neo Technology’s growth has been fueled by increasing enterprise adoption of graph databases in conjunction with Neo4j’s consistent demonstration of its ability to support a variety of production-grade environments. In a phone interview with Cloud Computing Today, Eifrem further remarked that one of the challenges for Neo Technology consists of developing an incisive sales outreach strategy, given that almost every enterprise could benefit from the adoption of graph technologies. Eifrem elaborated that Neo Technology has chosen to prioritize its sales outreach efforts by focusing on use cases that include data-driven recommendations (in e-commerce and social networking, for example), master data management, identity and access management, graph-based search, network and IT operations, the internet of things and pricing, while nevertheless remaining open to other client requests and interests. Since the launch of Neo4j 2.0 last January, Neo4j has seen over 500,000 downloads and boasts thousands of enterprise-grade deployments at organizations such as Walmart, eBay, Earthlink, CenturyLink, Pitney Bowes and Cisco. Based on its impressive record in 2014 and the explosive proliferation of use cases for graph technology, 2015 could well represent an inflection point for Neo Technology as it uses its additional funding to gain more market traction while continuing to educate the industry on the value proposition of adopting Neo4j.
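To ground the recommendations use case Eifrem mentions, here is a toy sketch of the kind of relationship traversal a graph database such as Neo4j is built to optimize — “customers who bought X also bought Y.” The sketch is plain Python over an in-memory structure, and the data is invented for illustration; a graph database performs equivalent traversals declaratively over billions of relationships:

```python
# Toy "customers who bought X also bought Y" recommendation, the kind of
# relationship traversal a graph database like Neo4j is built to optimize.
# Data and names are invented for illustration.
from collections import Counter

# Edges: customer -> set of purchased products.
purchases = {
    "alice": {"laptop", "mouse"},
    "bob": {"laptop", "keyboard"},
    "carol": {"laptop", "keyboard", "monitor"},
}

def recommend(customer, purchases):
    """Rank products bought by customers who share a purchase with `customer`."""
    own = purchases[customer]
    scores = Counter()
    for other, items in purchases.items():
        if other != customer and own & items:  # shares at least one product
            scores.update(items - own)         # count products not yet owned
    return [product for product, _ in scores.most_common()]

recs = recommend("alice", purchases)
```

In a relational store this traversal requires repeated self-joins whose cost grows with the data; a native graph store follows the relationships directly, which is the performance argument behind the use cases listed above.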

Categories: Neo Technology, Venture Capital

MapR Announces Selection By MediaHub Australia For Digital Archiving And Analytics

MapR recently announced that MediaHub Australia has deployed MapR to support its digital archive that serves 170+ broadcasters in Australia. MediaHub delivers digital content for broadcasters throughout Australia in conjunction with its strategic partner Contexti. Broadcasters provide MediaHub with segments of programs, live feeds and a schedule that outlines when the program in question should be delivered to its audiences. In addition to scheduled broadcasts, MediaHub offers streaming and video-on-demand services for a variety of devices. MediaHub’s digital archive automates the delivery of playout services for broadcasters, thereby minimizing the need for manual intervention from archival specialists. MapR currently manages over 1 petabyte of content for the 170+ channels that it serves, although the size of its digital archive is expected to grow dramatically within the next two years. MapR’s Hadoop-based storage platform also provides an infrastructure that enables analytics on content consumption, helping broadcasters make data-driven decisions about what content to air in the future and how to most effectively complement existing content. MediaHub’s usage of MapR illustrates a prominent use case for MapR, namely, the use of Hadoop for storing, delivering and running analytics on digital media. According to Simon Scott, Head of Technology at MediaHub, one of the key reasons MediaHub selected MapR as the big data platform for its digital archive was its ability to support commodity hardware.

Categories: Big Data, Hadoop, MapR
