Cogito Finalizes $5.5M In Series A Funding For Big Data Analytics Applied To Voice To Improve Phone-Based Customer Service Experience

Cogito Corporation recently announced the finalization of $5.5M in Series A funding to expand its sales and marketing teams. The Boston-based company specializes in behavioral analytics for customer engagement that help organizations improve customer service experiences while concurrently improving the retention and satisfaction of customer service employees. Cogito applies big data analytics to streaming voice data to produce visually based, guided recommendations that help customer service professionals improve the end-to-end phone experience with customers. The platform also features predictive analytics that anticipate what the customer and representative will do next, adding an extra layer of insight into how to improve the overall customer service experience. The Series A round was led by Romulus Capital with additional participation from Salesforce Ventures.

Categories: Miscellaneous

Vapor IO Announces General Availability Of OpenDCRE API For Data Center Management And Monitoring

On November 18, Vapor IO announced the general availability of the Open Data Center Runtime Environment (OpenDCRE) project, the open source API for the company’s hyper-collapsed infrastructure solution. OpenDCRE improves upon Intel’s seventeen-year-old Intelligent Platform Management Interface (IPMI) in ways that facilitate enhanced data center infrastructure automation. The OpenDCRE API can be used to monitor and proactively manage data center environments with respect to metrics such as power consumption, temperature and energy usage. It renders existing data center interfaces accessible to data center operators and facilitates the automation of the physical infrastructure within a data center. By bringing increased automation to contemporary data center management, the OpenDCRE API empowers data center operators to more effectively manage the heterogeneity of software and hardware, as noted by Vapor IO CEO Cole Crawford below:

Data centers have proven to be nothing short of problematic. This is primarily due to poorly integrated systems, legacy software and hardware, and the disjointed approach the industry takes to building out data center environments. As IT prepares to support edge based computing, it is imperative we manage and orchestrate our infrastructure as a whole with no prejudice as to where or how many data centers are employed to support our workloads. This is a large feat and we’re thrilled to have support from partners like Future Facilities and Romonet as we move forward.

Here, Crawford remarks on fragmentation within the contemporary data center and the corresponding need to “manage and orchestrate our infrastructure as a whole.” OpenDCRE can be deployed in large-scale data center environments within the enterprise and amongst service providers, as well as in smaller deployments. Additionally, OpenDCRE works in cloud-based environments, including private and public clouds. The OpenDCRE API supports SSL, power control and analog sensors by means of a RESTful interface. Vapor IO’s announcement of the general availability of OpenDCRE comes in conjunction with news of support from the likes of Future Facilities and Romonet. Expect Vapor IO to consolidate its early traction as the movement toward enhanced data center automation and energy-efficient, environmentally friendly data center management continues to gain steam.
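To make the RESTful monitoring model concrete, the sketch below shows how a client might construct a sensor-read URL and apply a simple temperature threshold to a returned reading. This is a hedged illustration only: the endpoint path, version segment and response field names are assumptions for the sake of the example, not the documented OpenDCRE API.

```python
# Hypothetical sketch of consuming an OpenDCRE-style RESTful sensor endpoint.
# The URL layout and the "temperature_c" field are illustrative assumptions.
import json


def build_sensor_url(base, rack_id, board_id, device_id):
    """Construct a read URL for a temperature sensor (assumed path layout)."""
    return f"{base}/opendcre/1.2/read/temperature/{rack_id}/{board_id}/{device_id}"


def needs_throttling(reading_json, threshold_c=35.0):
    """Decide whether a temperature reading warrants fan or power action."""
    reading = json.loads(reading_json)
    return reading["temperature_c"] > threshold_c


url = build_sensor_url("http://vec.example.com:5000", "rack_1", "00000001", "0002")
sample = json.dumps({"temperature_c": 41.5})
print(url)
print(needs_throttling(sample))  # a 41.5 C reading exceeds the 35.0 C threshold
```

In a real deployment the reading would come from an HTTP GET against the OpenDCRE service rather than a hard-coded sample, and the throttling decision would feed back into a power-control endpoint.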

Categories: Miscellaneous

Infographic: Cloud Computing And Daily Life

As its name suggests, the following infographic illustrates the heterogeneity of use cases for cloud computing in everyday life. Courtesy of ERS Ltd., the infographic highlights drivers behind cloud computing as well as its benefits.

Categories: Miscellaneous

Guest Blog Post: The Cloud Hangover and How To Avoid It By Keith Tilley, EVP of Sungard Availability Services

The following guest blog post was authored by Keith Tilley, Executive Vice President of Global Sales and Customer Services at Sungard Availability Services. The post was adapted for Cloud Computing Today from a Sungard Availability Services white paper titled The Cloud Hangover.


Cloud computing was the great hope for many businesses, giving them access to IT resources that they could never have afforded (or perhaps did not even exist) within traditional IT infrastructures. Promising greater mobility, agility and collaboration at lower costs, cloud computing was marketed by some vendors as a quick fix to all your business needs. The reality, as many organisations are now discovering, is much more complex. For all its advantages, the adoption of this new technology has left many companies experiencing a “cloud hangover,” with integration issues and unexpected costs having a detrimental effect on their business.

From Hype to Hangover

Research carried out by Sungard Availability Services, and published in “The Cloud Hangover” whitepaper, examined the reasons for cloud adoption and its subsequent impact across 400 IT decision makers in the UK, Ireland, France and Sweden. It found that although there was plenty of optimism surrounding cloud migration, that optimism was not always well founded. Cloud expenditure has grown rapidly, rising from an average of £350,000 in 2010 to £1.18 million in 2014. A number of factors are driving this growth, with 52 per cent of businesses expecting cloud solutions to reduce IT costs and 40 per cent expecting reduced IT complexity. The Sungard AS research also reveals, however, that many businesses are not receiving the ROI that they expected.

In fact, the cloud hangover is so widespread that 53 per cent of organisations admitted to spending more on managing their cloud infrastructure than they originally planned and 37 per cent have not achieved the day-to-day savings that they expected. The reason for this unforeseen expenditure is that it is difficult to evaluate the long-term costs of cloud computing. Aside from the initial set-up and subscription costs, many organisations are not planning for future outgoings. Internal maintenance (40 per cent), systems integration (37 per cent) and people costs (33 per cent) were all cited as unplanned expenses related to cloud adoption.

Cloud computing is also being blamed for increasing the complexity of IT infrastructure, particularly when businesses rent services from multiple vendors. Integrating these disparate systems, alongside legacy resources, is proving a time-consuming and financially draining exercise. 43 per cent of respondents to the Sungard AS survey revealed that the cloud had increased the complexity of their infrastructure, with 55 per cent now paying more to ensure the integration of their IT estates. The complexity and hidden costs of cloud computing threaten to undermine the many benefits that the technology can offer businesses, which is why it is critical that they understand the key steps to implementing effective cloud solutions.

Achieving Cloud Success

If businesses are to make a successful migration to the cloud, they must first invest time in planning and preparing for the transition. The first step is to identify the business drivers that make the cloud attractive to your organisation. IT benefits, such as automated software updates and increased flexibility, are obviously important, but wider business implications also need to be considered. Understanding what the cloud will deliver in terms of practical business benefits will ensure that the technology is only implemented where useful and will help organisations select the right vendor for their needs.

When discussing their cloud options with third-party suppliers, there are also a few key things for businesses to keep in mind. Evaluate the service level agreement (SLA) very carefully to understand what your supplier is guaranteeing in terms of availability, capacity, performance, upgrades and security. Consider the kinds of insurance you need and ensure they are included from the very earliest stages of negotiation. Being able to mitigate and react to risk is a key aspect of cloud partnerships and requires effective dialogue between suppliers and their customers.

Automation is another important step toward reducing your cloud expenditure. Which business processes can be streamlined and which need to remain manually controlled? Understanding this prior to cloud deployment is vital if businesses want to reduce the amount of resources devoted to cloud management. The migration process itself must also be carefully considered. Can your existing apps be transferred to the cloud as they are, or will they need to be reconfigured? Cloud vendors may be able to help with the migration process, but ask for examples of business processes they have transferred in the past to avoid any potential headaches.

With 57 per cent of organisations viewing cost savings as a key driver for cloud migration, being able to keep cloud budgets in check is clearly vital. Before contracts are signed, businesses need to confirm any up-front capital expenses, operational expenditure and any potential increments as a result of “cloud bursting.” Being fully aware of your billing model is absolutely key if you wish to achieve cloud success without facing unexpectedly high costs.

Categories: Miscellaneous

OpsDataStore Delivers Real-Time, Integrated Platform For IT Management That Combines Infrastructure And Application Performance Optimization

OpsDataStore recently announced the availability of OpsDataStore 1.0, a platform that improves the quality of online experiences by managing data from a heterogeneous assemblage of IT vendors. OpsDataStore enables the collection of infrastructure- and application-related data to give customers a 360-degree view of the factors affecting application performance. The platform’s Open Data Collection Architecture provides a repository for the collection of data from any hardware or software platform and includes connectors for data from ExtraHop, AppDynamics and the VMware vSphere data center virtualization platform. Data is stored within a dynamic object model that facilitates the construction of a topological visualization of the relationships among the components responsible for the functioning of applications.

After storing this heterogeneous assemblage of data, OpsDataStore performs advanced analytics to identify the root cause of application performance problems using the topological data map. For example, the platform leverages correlation-based analytics to facilitate troubleshooting and performance optimization in collaboration with vendors that specialize in root cause analytics. Built on technologies such as Cassandra, Spark and Kafka, OpsDataStore can scale to accommodate the massive amounts of data ingested from the multiplicity of vendors for which it was designed. The platform supports a REST API and ODBC to integrate with data sources that provide insight into infrastructure and application performance, and it integrates with Tableau and Qlik for advanced data visualization. Most importantly, OpsDataStore provides a big data backend repository for application and infrastructure performance optimization, in conjunction with root cause analytics that enable customers to identify and remediate issues adversely affecting their deployments. The platform’s ability to integrate all management data into a unified, 360-degree view gives customers a uniquely powerful tool for improving the quality of online service. Given the increasing heterogeneity of data stores and applications within today’s IT environments, the ability of OpsDataStore to store and perform advanced analytics on data from a vast array of sources marks a breakthrough in IT management, particularly because it handles infrastructure and application performance optimization within the same framework.
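The topology-driven root cause analysis described above lends itself to a graph traversal: walk from an unhealthy application down through its dependencies, and flag unhealthy components whose own dependencies are all healthy. The minimal sketch below illustrates that idea; the class, method and node names are hypothetical and do not reflect the OpsDataStore API.

```python
# Minimal sketch of topology-based root-cause identification.
# All names here are illustrative assumptions, not the OpsDataStore API.
from collections import defaultdict


class Topology:
    def __init__(self):
        # component -> list of components it runs on / depends on
        self.depends_on = defaultdict(list)

    def add_edge(self, component, dependency):
        self.depends_on[component].append(dependency)

    def root_causes(self, component, is_unhealthy):
        """Walk dependencies; an unhealthy node with no unhealthy
        dependency is a candidate root cause."""
        causes, stack, seen = [], [component], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            deps = self.depends_on.get(node, [])
            unhealthy_deps = [d for d in deps if is_unhealthy(d)]
            if is_unhealthy(node) and not unhealthy_deps:
                causes.append(node)
            stack.extend(deps)
        return causes


topo = Topology()
topo.add_edge("checkout-app", "vm-7")  # app runs on a VM
topo.add_edge("vm-7", "host-3")        # VM runs on a physical host
unhealthy = {"checkout-app", "vm-7", "host-3"}
print(topo.root_causes("checkout-app", lambda n: n in unhealthy))  # → ['host-3']
```

The traversal blames the physical host rather than the application or VM, because it is the deepest unhealthy component in the dependency chain; a production system would combine this with the correlation-based analytics the article mentions.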

Categories: Miscellaneous

Ravello Systems and Nutanix Partner To Make First Hyperconverged Infrastructure Platform Available On Public Cloud

Ravello Systems today announced that its nested virtualization technology will render the Nutanix Community Edition available on Amazon Web Services and the Google Cloud. The partnership between Ravello Systems and Nutanix marks the first time that hyperconverged infrastructure technology such as the Nutanix Community Edition has been available on the public cloud, and it powerfully illustrates one of Ravello’s three use cases, namely facilitating the deployment of cloud environments by rendering an expanded roster of hypervisors compatible with public cloud infrastructure. By partnering with Ravello Systems, Nutanix customers can take advantage of the community edition of its hyperconverged infrastructure platform without the operational challenges of purchasing, deploying and managing hardware. Instead, customers can select the Nutanix Community Edition blueprint from within the Ravello Systems platform and launch Nutanix infrastructures on AWS or the Google Cloud within minutes. The streamlined operational agility with respect to both deployment and ongoing management means that Nutanix customers can focus on optimizing their deployments by attending to the analytics and KPIs that enable the optimization of infrastructure performance. More importantly, by making the Nutanix Community Edition available on AWS and Google Cloud via Ravello, Nutanix stands to benefit from increased exposure to a wider range of customers that may be in the market for a hyperconverged infrastructure solution for their data center.

As told to Cloud Computing Today in an interview with Shruti Bhat, Director of Marketing at Ravello Systems, and Nikita Maheswari of Nutanix’s product marketing team, the partnership rests on a product integration in which the Acropolis hypervisor used by Nutanix was reworked to run on Ravello’s nested hypervisor technology. As such, the announcement underscores the ability of Ravello Systems to engineer its platform to accommodate a multitude of hypervisor technologies toward the end of facilitating migrations from on-premise environments to the cloud, as well as other use cases such as dev and test or security testing.

Categories: Ravello Systems

Microsoft Selects Red Hat As Preferred Vendor For Enterprise Linux Workloads On Microsoft Azure

Microsoft and Red Hat have reached a monumental agreement that enables hybrid cloud users to more easily deploy Red Hat solutions on the Microsoft Azure cloud. As a result of the collaboration, Microsoft has designated Red Hat the preferred vendor for enterprise Linux workloads on Microsoft Azure. Enterprise customers who want to use Red Hat Enterprise Linux (RHEL) on Azure can now do so with Microsoft’s support for RHEL on the Azure platform. In other words, customers can now deploy RHEL on Azure in ways analogous to the deployment of RHEL on Amazon Web Services. To enable this partnership, Red Hat will designate Microsoft Azure one of Red Hat’s Certified Cloud and Service Providers in the coming weeks; meanwhile, Microsoft Azure customers can leverage Red Hat application platforms such as Red Hat JBoss Enterprise Application Platform, Gluster and Red Hat’s Platform as a Service, OpenShift.

The collaboration between Microsoft and Red Hat further includes enterprise-grade support for hybrid cloud environments, marked by the participation of support personnel from both vendors to ensure that customers obtain the support they need. The partnership also features unified workload management within hybrid cloud infrastructures, enabled by the integration of Red Hat CloudForms and Microsoft Azure and the ability of System Center Virtual Machine Manager to manage RHEL on Microsoft Azure. Moreover, the collaboration includes the ability to use .NET on Red Hat products and solutions in ways that expand the ability of developers to write .NET applications on Linux. Whereas previously developers often had to re-write .NET applications to use them on Linux, they can now use RHEL as their principal Linux development platform.

All told, the agreement between Microsoft and Red Hat continues to illustrate Microsoft CEO Satya Nadella’s commitment to partnering with vendors, in contrast to Microsoft’s historical reluctance to integrate with vendors and potential competitors. More importantly, the announcement illustrates Microsoft’s commitment to supporting hybrid cloud environments and its willingness to support RHEL on Azure, not only at a technological level but also at the level of integrated, enterprise-grade support. All this shows how Microsoft is putting its eggs in the cloud basket as it attempts to consolidate its relationships with enterprises and its reputation for delivering enterprise-grade products and services in anticipation of an intensifying battle for cloud market share with Amazon Web Services. Microsoft’s strategy of rendering hybrid cloud deployments on Azure more flexible goes right to the heart of CIO considerations regarding enterprise cloud adoption and stands to position Azure strongly, particularly given the prevalence of RHEL within the enterprise. Expect Microsoft to continue deepening its partnerships and cloud-related acquisitions as it puts the cloud first and stakes its claim as the world’s premier cloud provider for the enterprise.

Categories: Microsoft, Red Hat
