Cogito Finalizes $5.5M In Series A Funding For Big Data Analytics Applied To Voice To Improve Phone-Based Customer Service Experience
Cogito Corporation recently announced the finalization of $5.5M in Series A funding to expand its sales operations and marketing teams. The Boston-based company specializes in behavioral analytics for customer engagement that help organizations improve customer service experiences while concurrently improving customer service employee retention and satisfaction. Cogito applies big data analytics to streaming voice data to produce visually based, guided recommendations that help customer service professionals improve the end-to-end phone experience with customers. Cogito’s analytics platform further boasts predictive analytics that anticipate what the customer and representative will do next, thereby adding an extra layer of insight into how to improve the overall customer service experience. The Series A funding round was led by Romulus Capital with additional participation from Salesforce Ventures.
On November 18, Vapor IO announced the general availability of the Open Data Center Runtime Environment (OpenDCRE) project, the open source API for the company’s hyper-collapsed infrastructure solution. OpenDCRE improves upon Intel’s seventeen-year-old Intelligent Platform Management Interface (IPMI) in ways that facilitate enhanced data center infrastructure automation. The OpenDCRE API can be used to monitor and proactively manage data center environments with respect to metrics such as power consumption and temperature, to render current data center interfaces accessible to data center operators, and to facilitate the automation of the physical infrastructure within a data center. By bringing increased automation to contemporary data center management, OpenDCRE empowers data center operators to more effectively manage the heterogeneity of software and hardware, as noted by Vapor CEO Cole Crawford below:
Data centers have proven to be nothing short of problematic. This is primarily due to poorly integrated systems, legacy software and hardware, and the disjointed approach the industry takes to building out data center environments. As IT prepares to support edge based computing, it is imperative we manage and orchestrate our infrastructure as a whole with no prejudice as to where or how many data centers are employed to support our workloads. This is a large feat and we’re thrilled to have support from partners like Future Facilities and Romonet as we move forward.
Here, Crawford remarks on fragmentation within the contemporary data center and the corresponding need to “manage and orchestrate our infrastructure as a whole.” OpenDCRE can be deployed in large-scale data center environments within the enterprise and amongst service providers, as well as in smaller deployments. Additionally, OpenDCRE works in cloud-based environments, including private and public clouds. The OpenDCRE API supports SSL, power control and analog sensors by means of a RESTful interface. Vapor IO’s announcement of the general availability of OpenDCRE comes in conjunction with news of support from the likes of Future Facilities and Romonet. Expect Vapor IO to consolidate its early traction as the movement toward enhanced data center automation and energy-efficient, environmentally friendly data center management continues to gain steam.
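To make the RESTful monitoring-and-automation idea concrete, the sketch below builds requests against an OpenDCRE-style API and applies a simple automation rule to a sensor reading. The route shapes, device identifiers and the 45 °C threshold are assumptions for illustration, not OpenDCRE’s documented interface:

```python
# Illustrative client for an OpenDCRE-style RESTful data center API.
# Endpoint paths, IDs and thresholds below are invented for illustration.

def scan_url(base, version="1.2"):
    """Build the URL that enumerates boards and devices (assumed route)."""
    return f"{base}/opendcre/{version}/scan"

def read_url(base, device_type, board_id, device_id, version="1.2"):
    """Build the URL that reads a single sensor (assumed route)."""
    return f"{base}/opendcre/{version}/read/{device_type}/{board_id}/{device_id}"

def needs_attention(reading_c, threshold_c=45.0):
    """Toy automation rule: flag a rack whose temperature sensor runs hot."""
    return reading_c > threshold_c

base = "http://vec.example.com:5000"  # hypothetical controller address
print(read_url(base, "temperature", "00000001", "0002"))
print(needs_attention(51.3))  # a reading above the 45 °C threshold
```

In a real deployment an operator would issue these GETs on a schedule and feed the readings into automation that throttles power or shifts workloads; the point here is only the shape of a REST-driven monitoring loop.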
The following infographic illustrates the heterogeneity of use cases for cloud computing in everyday life. Courtesy of ERS Ltd., the infographic highlights drivers behind cloud computing as well as its benefits.
Guest Blog Post: The Cloud Hangover and How To Avoid It By Keith Tilley, EVP of Sungard Availability Services
The following guest blog post was authored by Keith Tilley, Executive Vice President of Global Sales and Customer Services at Sungard Availability Services. The post was adapted for Cloud Computing Today from a Sungard Availability Services white paper titled The Cloud Hangover.
Cloud computing was the great hope for many businesses, giving them access to IT resources that they could never have afforded (or that perhaps did not even exist) within traditional IT infrastructures. Promising greater mobility, agility and collaboration at lower costs, cloud computing was marketed by some vendors as a quick fix for all your business needs. The reality, as many organisations are now discovering, is much more complex. For all its advantages, the adoption of this new technology has left many companies experiencing a “cloud hangover,” with integration issues and unexpected costs having a detrimental effect on their business.
From Hype to Hangover
Research carried out by Sungard Availability Services, and published in “The Cloud Hangover” whitepaper, surveyed 400 IT decision makers in the UK, Ireland, France and Sweden about the reasons for cloud adoption and its subsequent impacts. It found that although there was plenty of optimism surrounding cloud migration, it wasn’t always well-founded. Cloud expenditure has grown rapidly, rising from an average of £350,000 in 2010 to £1.18 million in 2014. A number of factors are driving this growth, with 52 per cent of businesses expecting cloud solutions to reduce IT costs and 40 per cent expecting reduced IT complexity. The Sungard AS research also reveals, however, that many businesses are not receiving the ROI that they expected.
In fact, the cloud hangover is so widespread that 53 per cent of organisations admitted to spending more on managing their cloud infrastructure than they originally planned and 37 per cent have not achieved the day-to-day savings that they expected. The reason for this unforeseen expenditure is that it is difficult to evaluate the long-term costs of cloud computing. Aside from the initial set-up and subscription costs, many organisations are not planning for future outgoings. Internal maintenance (40 per cent), systems integration (37 per cent) and people costs (33 per cent) were all cited as unplanned expenses related to cloud adoption.
Cloud computing is also being blamed for increasing the complexity of IT infrastructure, particularly when businesses are renting services from multiple vendors. Integrating these disparate systems, alongside legacy resources, is proving a time-consuming and financially draining task. 43 per cent of respondents to the Sungard AS survey revealed that the cloud had increased the complexity of their infrastructure, with 55 per cent now paying more to ensure the integration of their IT estates. The complexity and hidden costs of cloud computing threaten to undermine the many benefits that the technology can offer businesses, which is why it is critical that they understand the key steps to implementing effective cloud solutions.
Achieving Cloud Success
If businesses are to make a successful migration to the cloud, they must first invest time in planning and preparing for the transition. The first step is to identify the business drivers that make the cloud attractive to your organisation. IT benefits, such as automated software updates and increased flexibility, are obviously important, but wider business implications also need to be considered. Understanding what the cloud will deliver in terms of practical business benefits will ensure that the technology is only implemented where useful and helps organisations to select the right vendor for their needs.
When discussing their cloud options with third party suppliers, there are also a few key things for businesses to keep in mind. Evaluate the service level agreement (SLA) very carefully to understand what your supplier is guaranteeing in terms of availability, capacity, performance, upgrades and security. Consider the kind of insurances you need and ensure they are included in the very earliest stages of negotiation. Being able to mitigate and react to risk is a key aspect of cloud partnerships and will need effective dialogue between suppliers and their customers.
Automation is another important step to reducing your cloud expenditure. Which business processes can be streamlined and which need to remain manually controlled? Understanding this prior to cloud deployment is vital if businesses want to reduce the amount of resources devoted to cloud management. The migration process itself must also be carefully considered. Can your existing apps be transferred to the cloud as they are, or will they need to be reconfigured? Cloud vendors may be able to help with the migration process, but ask for examples of where they have transferred business processes in the past to avoid any potential headaches.
With 57 per cent of organisations viewing cost savings as a key driver for cloud migration, being able to keep cloud budgets in check is clearly vital. Before contracts are signed, businesses need to confirm any up-front capital expenses, operational expenditure and any potential increments as a result of “cloud bursting.” Being fully aware of your billing model is absolutely key if you wish to achieve cloud success without facing unexpectedly high costs.
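As a back-of-the-envelope illustration of why those hidden line items matter, the sketch below totals a multi-year cloud budget with and without the unplanned expense categories the survey flags (maintenance, integration, people). All figures are invented for illustration:

```python
# Hypothetical cloud budget including the "hidden" cost categories from the
# Sungard AS survey. Every figure below is invented for illustration.

def total_cost(subscription, maintenance, integration, people, years):
    """Sum recurring annual costs over the term; integration is one-off."""
    recurring = (subscription + maintenance + people) * years
    return recurring + integration

planned = total_cost(subscription=200_000, maintenance=0, integration=0,
                     people=0, years=3)          # the optimistic plan
actual = total_cost(subscription=200_000, maintenance=40_000,
                    integration=75_000, people=60_000, years=3)
print(planned, actual)  # 600000 975000: hidden costs add over 60 per cent
```

Running a model like this before contracts are signed is one simple way to surface the gap between the subscription price and the true cost of ownership.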
OpsDataStore Delivers Real-Time, Integrated Platform For IT Management That Combines Infrastructure And Application Performance Optimization
OpsDataStore recently announced the availability of OpsDataStore 1.0, a platform that improves the quality of online experiences by managing data from a heterogeneous assemblage of IT vendors. OpsDataStore enables the collection of infrastructure and application-related data to give customers a 360-degree view of factors affecting application performance. The platform’s Open Data Collection Architecture provides a repository for the collection of data from any hardware or software platform and includes connectors to data from ExtraHop, AppDynamics and the VMware vSphere data center virtualization platform. Data is stored within a dynamic object model that facilitates the construction of a topological visualization of relationships between the data responsible for the functioning of applications, as illustrated below:
After storing this heterogeneous assemblage of data, OpsDataStore performs advanced analytics to identify the root cause of application performance problems using the topological data map illustrated above. For example, the platform leverages correlation-based analytics to facilitate troubleshooting and performance optimization in collaboration with vendors that specialize in root cause analytics. Built using technologies such as Cassandra, Spark and Kafka, OpsDataStore can scale to accommodate the massive amounts of data ingested from the multiplicity of vendors for which it was designed. The platform supports a REST API and ODBC to integrate with data sources that can provide insight into infrastructure and application performance, and it integrates with Tableau and Qlik for advanced data visualization. Most importantly, however, OpsDataStore provides a big data backend repository for application and infrastructure performance optimization in conjunction with root cause analytics that enable customers to identify and remediate the causes of issues that are adversely affecting their deployments. The platform’s ability to integrate all management data into a unified, 360-degree view aimed at infrastructure and application performance optimization gives customers a uniquely powerful management tool for improving the quality of online service. Given the increasing heterogeneity of data stores and applications within today’s IT environments, the ability of OpsDataStore to store and perform advanced analytics on data from a vast array of sources marks a breakthrough in IT management, particularly because it integrates infrastructure and application performance optimization within the same framework.
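To make the idea of correlation-based troubleshooting concrete, the sketch below flags which infrastructure metric moves most closely with application response time. The metric names and time series are invented for illustration and do not reflect OpsDataStore’s actual analytics:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented time series: app response time alongside two infrastructure metrics.
response_ms = [120, 135, 180, 240, 310]
cpu_pct     = [40, 42, 41, 43, 42]   # flat: unlikely culprit
disk_io_ms  = [5, 8, 15, 29, 44]     # rises with latency: likely culprit

suspects = {"cpu_pct": cpu_pct, "disk_io_ms": disk_io_ms}
culprit = max(suspects, key=lambda k: abs(pearson(suspects[k], response_ms)))
print(culprit)  # disk_io_ms
```

A production platform correlates thousands of such series across a topology of dependencies, but the underlying signal, which metrics co-move with the symptom, is the same.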
Informatica Releases Big Data Management Suite Featuring Data Governance And Security Alongside Data Integration And Processing Functionality
This week, Informatica announced details of the Informatica Big Data Management suite, an integrated platform that delivers data integration, data quality, data governance and data security. Within the suite, over 200 Informatica data integration connectors enable the ingestion of massive volumes of Hadoop, NoSQL and MPP-based data. In addition, the platform supports high-throughput, low-latency data integration in conjunction with the ability to support processing, data mapping and iterative automation of data integration processes. Meanwhile, the platform’s data quality and governance functionality includes data profiling tools that can alert customers to incomplete data, data corruption and related data issues. Moreover, the Informatica Big Data Management suite supports effective data governance by allowing data stewards to take responsibility for data ownership and by fostering collaboration between IT and business stakeholders. The platform also delivers a bevy of risk analytics that allow data owners to quantify threats to sensitive data, understand security risks and take advantage of data masking and de-identification practices to ensure the protection of sensitive data. As such, the Informatica Big Data Management suite represents a notable intervention in the big data landscape because of its ability to deliver tools for ingesting and processing big data from a multitude of sources in conjunction with data quality, data governance and data security tools that facilitate a turnkey implementation of a big data framework. Whereas customers typically cobble together a mélange of products to support the management and ongoing administration of big data, the Informatica Big Data Management suite delivers an integrated platform differentiated in large measure by its data governance and data security functionality alongside its data integration, data processing and data quality capabilities.
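As a toy illustration of the masking and de-identification practices mentioned above (not Informatica’s actual implementation), the sketch below redacts sensitive fields before data leaves a governed zone; the field names and masking rules are invented:

```python
import hashlib

def mask_email(value):
    """Keep the domain for analytics; hide the mailbox name."""
    local, _, domain = value.partition("@")
    return f"{local[0]}***@{domain}"

def deidentify(record, sensitive=("ssn",)):
    """Replace sensitive fields with a stable one-way token (illustrative)."""
    out = dict(record)
    for field in sensitive:
        if field in out:
            out[field] = hashlib.sha256(out[field].encode()).hexdigest()[:12]
    return out

row = {"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}
masked = deidentify({**row, "email": mask_email(row["email"])})
print(masked["email"])  # a***@example.com
```

Because the token is a stable hash, masked records can still be joined and counted for analytics without exposing the underlying identifier.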
Private container vendor Rancher Labs today announced support for the orchestration of persistent storage services for Docker. The new functionality surmounts challenges related to the storage of persistent data by Docker applications. Rancher Labs now makes it possible for developers to orchestrate the deployment of storage services onto container host machines in conjunction with software-defined storage platforms such as Gluster, Ceph and Nexenta. The new functionality also allows customers to launch applications that leverage these storage services in support of stateful application services. Rancher’s integration with storage services such as Gluster, Ceph and Nexenta means that customers can take advantage of advanced functionality from storage vendors such as backup, remote replication and snapshots. In addition, Rancher’s support for the orchestration of persistent storage services enables customers to deploy applications with storage services onto a multitude of environments and host machines, including public and private clouds as well as virtual machines and bare metal servers. By integrating persistent storage into Docker container management, Rancher Labs empowers customers to create and deploy container-based applications backed by persistent storage, thereby facilitating the development of applications that need stateful databases to realize their functionality. As such, today’s announcement marks a notable breakthrough in container management by bringing the orchestration of persistent storage to Docker containers and enhancing the ability of customers to deploy applications that require the storage of persistent data.
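As a toy sketch of the orchestration idea (not Rancher’s actual scheduler), the code below places replicas of a storage service onto the container hosts with the most free disk. The host names, capacities and the placement rule are all invented for illustration:

```python
def place_storage_service(hosts, replicas):
    """Pick the hosts with the most free disk to run storage-service replicas.
    `hosts` maps host name -> free disk in GB. Invented placement rule; a real
    orchestrator weighs many more constraints (affinity, network, health)."""
    if replicas > len(hosts):
        raise ValueError("not enough hosts for the requested replica count")
    ranked = sorted(hosts, key=hosts.get, reverse=True)
    return ranked[:replicas]

fleet = {"host-a": 120, "host-b": 480, "host-c": 250, "host-d": 90}
print(place_storage_service(fleet, replicas=2))  # ['host-b', 'host-c']
```

Once the storage service is running on the chosen hosts, application containers can mount volumes backed by it, which is what makes stateful services such as databases practical inside containers.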