Alibaba Cloud has announced the release of PAI 2.0, an updated version of its platform for large-scale data mining and modeling, with a specific focus on artificial intelligence and machine learning. PAI 2.0 represents China’s first publicly available machine learning platform and encompasses use cases for Alibaba’s “ET Industrial Brain,” spanning manufacturing, optimized device and sensor configuration, energy utilization management and, more generally, analytics for the industrial internet of things. Separately, Alibaba has announced details of an “ET Medical Brain” that specializes in use cases such as drug discovery, patient management, hospital and clinical facility management, and the deployment of virtual medical assistants that help patients navigate clinical protocols and tests. PAI 2.0 features over 100 pre-configured machine learning algorithms that can be adapted to different use cases and scenarios. The announcement underscores the intersection between cloud platforms and machine learning technologies as cloud providers increasingly seek to differentiate their platforms with value-driven analytic, coding and data management capabilities. PAI 2.0 allows Alibaba Cloud to claim parity with the likes of AWS, Azure, Google Cloud and IBM SoftLayer with respect to advanced machine learning functionality, although the sophistication and ease of use of its algorithms and deep learning technologies remain to be proven by its customers.
Tencent Cloud, the rapidly growing Chinese public cloud and gaming platform, has announced that it will adopt NVIDIA Tesla GPU accelerators within its public cloud platform. The move expands the artificial intelligence and machine learning capabilities available to Tencent’s customer base and strengthens its ability to support the growing proliferation of artificial intelligence, machine learning and neural network-based applications. Specifically, Tencent Cloud will offer NVIDIA Tesla GPU accelerators, including the Tesla P100 and P40, which feature NVIDIA’s Pascal architecture and its NVLink technology for connecting several GPUs, alongside NVIDIA deep learning software. Support for the Tesla P100 and P40 means that customers now have access to some of NVIDIA’s most powerful hardware for artificial intelligence and deep learning, empowering data scientists to reduce the training time required by their algorithms and to support data-intensive workloads.
In December, Tencent Cloud launched cloud servers based on Tesla M40 GPUs. With this week’s announcement, customers can expect cloud servers that support Tesla P100, P40 and M40 GPU accelerators to serve the artificial intelligence needs of customers including cloud service providers, startups, research organizations and enterprises. Tencent’s expanded support for NVIDIA GPU accelerators means that NVIDIA GPUs are now used on a roster of cloud platforms that includes Google, Amazon, Microsoft and IBM SoftLayer. Importantly, Tencent’s support bolsters NVIDIA’s positioning in the microprocessor space in the face of increased competition from AMD and its recently released line of Ryzen chips, which have been instrumental to the meteoric rise of AMD’s share price. NVIDIA thus continues to strengthen its leadership in the landscape of hardware dedicated to artificial intelligence by demonstrating continued success in seeding cloud platforms with its artificial intelligence and deep learning technology. The collaboration with Tencent is particularly notable because it expands NVIDIA’s access to customers in China and East Asia more generally, giving it an invaluable position in the rapidly growing market for artificial intelligence computing in China and the Pacific Rim.
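To see why multi-GPU hardware with a fast interconnect shortens training time, consider the data-parallel pattern these accelerators are built for: each device computes gradients on its own shard of a batch, and an all-reduce then averages them across devices, which is the communication step NVLink accelerates. The toy sketch below simulates that pattern in plain Python for a one-parameter linear model; it is an illustration of the general technique, not NVIDIA's or Tencent's software.

```python
# Toy sketch of data-parallel training (not NVIDIA's API): each simulated
# "device" computes a gradient on its own shard of the batch, and an
# all-reduce-style average combines them before the weight update.

def gradient(w, shard):
    """Gradient of mean squared error for the 1-D linear model y = w * x."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """Average per-device gradients, as an NVLink-connected all-reduce would."""
    return sum(grads) / len(grads)

def train_step(w, batch, n_devices, lr=0.01):
    """One data-parallel step: shard the batch, compute per-device
    gradients, then apply the averaged gradient once."""
    shards = [batch[i::n_devices] for i in range(n_devices)]
    grads = [gradient(w, shard) for shard in shards]
    return w - lr * all_reduce_mean(grads)

# Fit y = 3x; the math is identical to single-device training, but on real
# hardware the per-shard gradients are computed concurrently.
data = [(x, 3.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(200):
    w = train_step(w, data, n_devices=4)
print(round(w, 2))  # converges toward 3.0
```

Because the averaged gradient equals the full-batch gradient, adding devices leaves the result unchanged while dividing the per-step compute among them, which is where the reduction in training time comes from.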
On October 21, Aviso announced the release of the Aviso Virtual Sales War Room, a platform that delivers granular sales analytics to help sales leaders achieve their goals. The Aviso Virtual Sales War Room leverages a combination of machine learning and advanced analytics to develop scenarios for executing sales campaigns that are updated in real time as sales prospects evolve. The platform allows sales professionals to update the status of deals as they progress, and offers scenario-modeling functionality to assess the impact of those updates, real-time notifications that let sales teams collaborate throughout the sales process, and data about deals under concurrent negotiation. Aviso’s technology brings the power of predictive analytics to forecasting which deals will close, when, and under what terms, and subsequently refines those predictions using machine learning technology that iteratively learns the rhythms of each sales team and its style of closing deals. The platform’s proprietary analytics, based on CRM data, emails and calendar appointments, market data and social media, deliver a degree of analytic sophistication that helps sales leaders reach their targets and make the best decisions for their portfolio of prospects. As such, the Aviso Virtual Sales War Room promises to disrupt the operational process of sales execution by harnessing advanced analytics and machine learning to improve sales performance.
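The core mechanic of this kind of forecasting, predicting close probability from deal signals and refining the model as outcomes arrive, can be sketched with a minimal logistic scorer. The example below is purely illustrative and not Aviso's actual model; the features (stage progress and days since last customer contact) and the training data are invented for the sketch.

```python
import math

# Illustrative sketch only, not Aviso's implementation: a logistic model
# that scores a deal's probability of closing from two hypothetical
# features, then refines its weights from observed outcomes, mirroring
# the iterative learning described above.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def close_probability(weights, deal):
    """Score a deal: more stage progress helps, staleness hurts."""
    bias, w_stage, w_stale = weights
    return sigmoid(bias + w_stage * deal["stage"] - w_stale * deal["days_stale"])

def update(weights, deal, closed, lr=0.1):
    """One gradient step of logistic regression on a single deal outcome."""
    err = close_probability(weights, deal) - (1.0 if closed else 0.0)
    bias, w_stage, w_stale = weights
    return (bias - lr * err,
            w_stage - lr * err * deal["stage"],
            w_stale + lr * err * deal["days_stale"])

# Refine the model on a few hypothetical historical outcomes.
history = [({"stage": 0.9, "days_stale": 1}, True),
           ({"stage": 0.2, "days_stale": 30}, False),
           ({"stage": 0.8, "days_stale": 3}, True),
           ({"stage": 0.3, "days_stale": 20}, False)]
weights = (0.0, 0.0, 0.0)
for _ in range(500):
    for deal, closed in history:
        weights = update(weights, deal, closed)

hot = close_probability(weights, {"stage": 0.85, "days_stale": 2})
cold = close_probability(weights, {"stage": 0.25, "days_stale": 25})
print(hot > cold)  # prints True: the active, late-stage deal scores higher
```

A production system would draw its features from CRM records, email and calendar activity as described above, but the forecast-then-refine loop follows the same shape.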
Artificial intelligence vendor Sentient Technologies recently announced the finalization of $103.5M in Series C funding in a round led by Tata Communications (Hong Kong), existing investor Horizons Ventures and a group of private investors. Sentient’s technology features an artificial intelligence and machine learning platform that operates on distributed datasets to develop actionable business intelligence from disparate, asynchronous data sources. The company’s patent-pending technology has thus far been used to develop analytic insights in the financial and healthcare industries. Sentient differentiates itself in the artificial intelligence space through its ability to scale artificial intelligence jobs across millions of nodes in parallel.
Vinod Kumar, MD and CEO of Tata Communications, the company that led the Series C round, remarked on the significance of Sentient Technologies as follows:
As an investor, we share a common vision on the transformative force that massively distributed computing and artificial intelligence can play in helping businesses get insights and solve their most complex big data problems. We see Sentient at the forefront of these technologies and bringing a disruptive approach to cloud based computing services. Furthermore, the scale of our leading global network infrastructure and data center footprint also complements Sentient’s growth plans and will enable its global deployment.
Here, Kumar positions Sentient Technologies as contributing to the “transformative force that massively distributed computing and artificial intelligence” currently play in revolutionizing the way businesses manage big data analytics. Sentient delivers a “disruptive” approach to cloud-based distributed artificial intelligence that benefits from its collaboration with Tata’s global data center and network infrastructure. As such, Sentient participates in a resurgence of artificial intelligence technologies, as evinced by IBM’s $100M venture fund for Watson supercomputing, Google’s acquisition of DeepMind Technologies for $500M, and early-stage artificial intelligence startups such as Wit.ai, Idibon, Expect Labs and Prediction IO. Given that Sentient’s Series C represents the largest venture funding round for an artificial intelligence startup to date, the industry should expect more details of its technology platform and product roadmap to emerge in the coming months. Sentient’s platform differentiates itself through its distributed artificial intelligence technology and its ability to scale massively, although details of its predictive analytics and data management technology have yet to emerge. For now, however, the bottom line is that AI is hot, both for investors and for prospective customers increasingly interested in integrating iterative machine learning technologies into business operations.
Today, StackStorm emerged from stealth mode and revealed details of a DevOps solution for IaaS cloud environments, with a specific focus on OpenStack at present. In much the same vein that Pivotal sought to bring the computing power, scalability and operational efficiencies of enterprises such as Facebook, Google, Yahoo and Twitter to mainstream enterprise IT, StackStorm proposes to bring automation technology analogous to that used by companies like Facebook to enterprises, SMBs and startups alike. StackStorm CEO Evan Powell elaborated on the company’s technology by noting that, “the world’s top cloud infrastructure operators are 10-100x more productive than the average operator thanks in part to homemade operations automation like Facebook’s FBAR. We built StackStorm to deliver exactly this kind of software and productivity boost to the broader market.” In a phone interview with Cloud Computing Today, Powell noted further that the platform specializes in simplifying the automation of workflows while leaving an audit trail of the automation it implements. Moreover, the StackStorm platform integrates machine learning in order to render its automation technology more intelligent and more intuitively responsive to the evolving needs of the infrastructure in question. Cofounded by Evan Powell and Dmitri Zimine, StackStorm’s mission is to deliver automation and artificial intelligence to the operation of datacenters and cloud-based infrastructures, with a particular emphasis on empowering companies that lack top-tier DevOps talent to automate their workflows with efficacy and transparency.
DataRPM today announced the finalization of $5.1M in Series A funding in a round led by InterWest Partners. DataRPM specializes in a next-generation business intelligence platform that leverages machine learning and artificial intelligence to deliver actionable business intelligence through a natural language-based search engine, allowing customers to dispense with complex, time-consuming data modeling and query production. DataRPM stores customer data within a “distributed computational search index” that enables its platform to apply its natural language query interface to heterogeneous data sources without modeling the data into intricate taxonomic relationships or master data management frameworks. Because the distributed computational search index empowers customers to run queries against different data sources without constructing data schemas that organize the constituent fields and their relationships, it promises to accelerate the speed with which customers can derive insights from their data. Not only does the platform deliver a natural language interface, but it also produces data visualizations of the results of its Google-like searches.
In an interview with Cloud Computing Today, DataRPM CEO Sundeep Sanghavi noted that its natural language search functionality is based on proprietary graph technology analogous to Apache Giraph and Neo4j. The platform operates on data in relational and non-relational formats, although it does not currently support unstructured data. Available as both a cloud-based and an on-premise deployment, DataRPM promises to disrupt Big Data analytics and contemporary business intelligence platforms by dispensing with the need for complex, time-consuming and expensive data modeling and by empowering business stakeholders with neither SQL nor scripting skills to analyze data. Today’s funding is intended to accelerate the company’s go-to-market strategy and to support product development in response to the platform’s reception by current and future customers.
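The appeal of a schema-free, search-index approach can be made concrete with a toy sketch: index the tokens of every field name and value across records of differing shapes, then rank records by how many words of a plain-English question they match. This is a hypothetical illustration of the general technique, not DataRPM's implementation, and the record fields below are invented for the example.

```python
# Hypothetical toy, not DataRPM's technology: keyword search over
# heterogeneous records with no predefined schema. Records from sources
# with entirely different fields are queried with one plain-English string.

def tokens(value):
    return str(value).lower().split()

def build_index(records):
    """Inverted index mapping each token to the records containing it."""
    index = {}
    for pos, record in enumerate(records):
        for key, value in record.items():
            for tok in tokens(key) + tokens(value):
                index.setdefault(tok, set()).add(pos)
    return index

def search(index, records, question):
    """Rank records by how many tokens of the question they match."""
    hits = {}
    for tok in tokens(question):
        for pos in index.get(tok, ()):
            hits[pos] = hits.get(pos, 0) + 1
    return [records[pos] for pos in sorted(hits, key=lambda p: -hits[p])]

# Two sources with different shapes, indexed together with no data modeling.
records = [
    {"customer": "Acme", "region": "west", "revenue": 120},
    {"vendor": "Globex", "territory": "east", "spend": 80},
    {"customer": "Initech", "region": "east", "revenue": 95},
]
index = build_index(records)
results = search(index, records, "revenue in the east region")
print(results[0]["customer"])  # prints Initech
```

Because field names are indexed as ordinary tokens, no taxonomy or master data management framework is needed to relate the sources; a production system would of course layer far more sophisticated language understanding and graph traversal on top of this basic idea.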
DataRPM belongs to the rapidly growing space of products that expedite Big Data analytics on Hadoop clusters, as exemplified by the constellation of SQL-like interfaces for querying Hadoop-based data. That said, its natural language query interface represents a genuine innovation in a space dominated by products that render Hadoop accessible to SQL developers and analysts, as opposed to data-savvy stakeholders with Google-like querying expertise. Moreover, DataRPM’s natural language search capabilities push the envelope of “next generation business intelligence” even further than contemporaries such as Jaspersoft, Talend and Pentaho, which have thus far focused largely on the enterprise transition from reporting to analytics and data discovery. Expect to hear more about DataRPM as the battle to streamline and simplify the derivation of actionable business intelligence from Big Data takes shape within a vendor landscape marked by the proliferation of analytic interfaces for petabyte-scale relational and non-relational databases.