Docker 1.10 Foregrounds Container Security

On February 4, Docker announced the release of version 1.10, marked by enhanced orchestration and application composition functionality, improved security and better networking capabilities. Docker Compose now enables developers to define an application within a single file that enumerates its requirements and the relationships between its constituent components. The enhancements to Docker Compose in Docker 1.10 simplify the management of distributed applications by empowering developers to define “application services, network topologies, volumes and their relationships” in one file. Moreover, this version of Docker gives developers the ability to define network attributes independent of a physical network and to subsequently integrate with Docker Networking. With respect to security, Docker 1.10 marks the general availability of user namespaces, which separate the privileges of individual containers from those of the daemon. As a result, the root user inside a container no longer maps to root on the host. Moreover, customers can now restrict access to the host to a designated group of sysadmins rather than granting global sysadmin access. Additional security functionality in version 1.10 includes seccomp profiles, which deliver granular, per-container policy control by restricting the system calls a container is allowed to make. Docker 1.10 also features content-addressable image IDs that provide secure reference IDs for tracking downloaded content, as well as authorization controls that allow for the configuration of granular access to the Docker daemon. The combination of generally available user namespaces, seccomp profiles, content-addressable image IDs and authorization controls means that security takes center stage in Docker 1.10, giving users a portfolio of tools for configuring granular access control and container-specific privileges. Read more about the release here.
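
As a rough illustration of the seccomp functionality described above, the following Python sketch uses the Docker SDK for Python ("pip install docker") to launch a container under a custom seccomp profile that blocks a handful of sensitive system calls. The image, the blocked calls and the profile itself are illustrative assumptions rather than details from the Docker 1.10 announcement, and the profile format shown reflects current Docker Engine releases rather than the 1.10-era format.

```python
# Minimal sketch: run a container under a custom seccomp profile via the
# Docker SDK for Python. Assumes a local Docker Engine and the "alpine"
# image; both are illustrative choices.
import json

import docker

client = docker.from_env()

# Allow all system calls by default, but deny a few sensitive ones outright.
# Production profiles (such as Docker's default profile) typically invert
# this and whitelist only the calls a workload actually needs.
seccomp_profile = {
    "defaultAction": "SCMP_ACT_ALLOW",
    "syscalls": [
        {
            "names": ["mount", "umount2", "unshare", "keyctl"],
            "action": "SCMP_ACT_ERRNO",
        }
    ],
}

# Unlike the CLI, the SDK does not read the profile from disk, so the JSON
# document is passed inline in the security option.
output = client.containers.run(
    "alpine",
    ["sh", "-c", "echo hello from a seccomp-confined container"],
    security_opt=["seccomp=" + json.dumps(seccomp_profile)],
    remove=True,
)
print(output.decode().strip())
```

On a daemon started with user namespace remapping enabled (for example, dockerd --userns-remap=default), the same container would additionally run with its root user mapped to an unprivileged UID on the host.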

Categories: Docker, Miscellaneous

Trifacta Announces $35M In Funding For Its Data Wrangling Software

Trifacta today announced the finalization of $35M in funding from Accel Partners, Greylock Partners, Ignition Partners and new investor Cathay Innovation. The funding will be used to support the company’s explosive growth by scaling operations and accelerating product innovation. The raise comes on the heels of a remarkable year for Trifacta marked by a 700% increase in sales and the addition of users at 3,000 companies spanning 105 countries. 2015 represented a watershed year for Trifacta, whose data wrangling software enables users to prepare and explore large datasets for analytics and data visualization. In a data landscape featuring an abundance of applications focused on analytics and visualization, Trifacta’s data wrangling software differentiates itself by empowering business users to cleanse, normalize and standardize data for more advanced analytics. Moreover, Trifacta delivers data integration functionality that automates and streamlines the exploration and analysis of data from disparate sources featuring discrepant column headers and identifiers. Expect Trifacta to continue building on its brisk 2015 momentum as it innovates on its data wrangling, data exploration and data discovery platform while pushing into new markets and expanding its presence in existing ones. The addition of Cathay Innovation to the company’s roster of investors sets the stage for business execution and growth in France and China given that the fund is based in Paris and Shanghai. Today’s capital raise brings the total funding raised by Trifacta to $76M.
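
To make the notion of data wrangling concrete, the short pandas sketch below (purely illustrative, and in no way Trifacta’s product or API) shows the kind of cleanup described above: two sources that describe the same customers with discrepant column headers and identifier formats are standardized and then joined.

```python
# Illustrative data wrangling with pandas: normalize discrepant headers and
# identifier formats from two hypothetical sources, then integrate them.
import pandas as pd

crm = pd.DataFrame({
    "Cust ID": ["C-001", "C-002"],
    "Full Name": ["Ada Lovelace", "Alan Turing"],
    "Annual Spend ($)": ["12,500", "9,800"],
})
billing = pd.DataFrame({
    "customer_id": ["c001", "c002"],
    "open_invoices": [2, 0],
})

# Standardize column headers, identifier formats and numeric types.
crm.columns = ["customer_id", "full_name", "annual_spend_usd"]
crm["customer_id"] = crm["customer_id"].str.replace("-", "").str.lower()
crm["annual_spend_usd"] = crm["annual_spend_usd"].str.replace(",", "").astype(float)

# With both sources normalized, they can be joined for downstream analytics.
combined = crm.merge(billing, on="customer_id")
print(combined)
```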

Categories: Miscellaneous, Trifacta

Apple’s Data Center Build Signals Possible Expansion Of Its Cloud Computing Capabilities

Morgan Stanley analyst Brian Nowak opines that Apple may be preparing to decrease its dependence on Amazon Web Services by building new data centers that increase its data center capacity. Citing analyst Katy Huberty, Nowak claims that Apple plans to build approximately 2.5 million square feet of additional data center capacity, a whopping footprint given that Amazon Web Services’ data centers total roughly 6.7 million square feet. Meanwhile, Oppenheimer analyst Tim Horan argues that Apple is gearing up to launch its own Infrastructure as a Service (IaaS) offering. Horan notes that Apple has already built its own content delivery network to reduce its dependency on Akamai and that an IaaS offering could serve as a gateway for Apple to gain a foothold in the enterprise hardware space. Much of the speculation about Apple’s interest in expanding its cloud computing and IaaS capabilities stems from Apple’s most recent quarterly earnings report, which features a 30% year-over-year increase in capital expenditures for 2016, and from commentary by Apple CFO Luca Maestri about the importance of new data centers in this year’s operating plans. Regardless of whether Apple launches its own IaaS offering or instead expands its internal ability to run cloud computing for its own products and services, the company stands to gain by reducing its roughly $1 billion annual spend on Amazon Web Services. Apple has the cash and the talent to build out its cloud computing capabilities, but it remains to be seen what results from the company’s investment in data centers around the world.

Categories: Apple, Miscellaneous

Midokura’s MEM 5.0 Delivers Enhanced Analytics Into Software Network Virtualization Infrastructures For OpenStack

This week, Midokura announced details of MEM 5.0, the next generation of its Midokura Enterprise MidoNet platform. As a network virtualization solution for IaaS clouds, MEM 5.0 boasts enhanced operational management functionality such as advanced analytics into the history of network flows through physical and virtual host machines. MEM 5.0 also provides data on network utilization by tenant in the form of usage reports that illustrate which tenants have consumed the most network resources within a designated time period. In addition, the MEM Insights component of MEM 5.0 delivers traffic counters that empower cloud operators to proactively monitor bandwidth consumption, as well as port mirroring functionality that allows administrators to mirror devices such as ports, bridges and routers in order to identify anomalous behavior. Pino de Candia, CTO of Midokura, remarked on the innovation of this release as follows:

Operational tools are generally geared towards configurations, monitoring in OpenStack, but they offer no visibility into encapsulated traffic. From Midokura’s own experience as an operator, and by working with operators ourselves, we’ve seen firsthand the dire need for analytic and end-to-end operational tools for management of network infrastructure. Midokura Enterprise MidoNet 5.0 builds upon our popular technology to meet this need, making OpenStack far simpler to manage, operate and also troubleshoot.

As de Candia notes, MEM 5.0 delivers analytics into “encapsulated traffic” and simplifies the process of troubleshooting disruptive or anomalous network behavior within OpenStack deployments. Cloud operators who leverage Midokura to virtualize networks now have access to an enriched portfolio of operational reports and analytics that enables them to manage the performance of their networks more effectively. Given that MEM 5.0 now comes replete with an enhanced set of tools for analyzing and remediating issues within network traffic, the network virtualization space for OpenStack features increasing competition as Midokura vies with the likes of Akanda, Juniper Networks and PLUMgrid for market share amongst OpenStack cloud operators and IaaS cloud deployments more generally.
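
As a rough sketch of the per-tenant usage reporting described above (illustrative only, and not based on Midokura’s APIs or data formats), the following Python snippet totals the bytes each tenant moved within a designated time window and ranks tenants by consumption:

```python
# Illustrative per-tenant usage report over hypothetical flow records.
from collections import defaultdict
from datetime import datetime

# Hypothetical flow records: (tenant, timestamp, bytes transferred).
flows = [
    ("tenant-a", datetime(2016, 2, 1, 10, 5), 1_200_000),
    ("tenant-b", datetime(2016, 2, 1, 10, 7), 300_000),
    ("tenant-a", datetime(2016, 2, 1, 11, 2), 4_500_000),
    ("tenant-c", datetime(2016, 2, 2, 9, 30), 800_000),
]

def usage_by_tenant(flows, start, end):
    """Total bytes per tenant for flows observed in [start, end)."""
    totals = defaultdict(int)
    for tenant, timestamp, nbytes in flows:
        if start <= timestamp < end:
            totals[tenant] += nbytes
    # Rank tenants by consumption, heaviest consumers first.
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

window_start, window_end = datetime(2016, 2, 1), datetime(2016, 2, 2)
for tenant, total_bytes in usage_by_tenant(flows, window_start, window_end):
    print(f"{tenant}: {total_bytes / 1e6:.1f} MB")
```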

The screenshot below illustrates the visualization of network-related analytics available through MEM Insights in MEM 5.0.

Categories: Miscellaneous

MapR Awarded Patent For Converged Data Platform

MapR has been granted a patent from the USPTO for a converged data architecture that brings together “open source, enterprise storage, NoSQL, and event streams” with enterprise-grade security and disaster recovery functionality. MapR’s converged data architecture supports open APIs and standards such as POSIX, NFS, LDAP, ODBC, REST, and Kerberos while enabling real-time analytics on data in motion and data at rest. The platform delivers the power of Hadoop and Spark in conjunction with read-write and update functionality that can produce analytics for mission-critical applications and computationally intensive workloads at scale. The MapR Converged Data Platform empowers customers to avoid data silos by running analytics on multiple workloads housed within one cluster. Meanwhile, the platform’s enterprise-grade reliability allows customers to ingest, process and analyze big data from a multitude of sources while enjoying the benefits of production-grade data protection and disaster recovery. The innovation of the platform consists in its ability to support storage and analytics across a multitude of data formats and ingestion modes, from batch uploads to streaming data. Wednesday’s patent announcement affirms the innovation specific to the architecture of MapR’s converged big data infrastructure. Expect to hear more details about MapR’s Converged Data Platform as use cases proliferate and illustrate the platform’s ability to support big data analytics in mission-critical environments for data from relational databases, NoSQL, Hadoop and streaming sources alike.
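
As a small sketch of the converged, silo-free model described above, the following Python snippet writes a file through an ordinary POSIX/NFS path and then analyzes the same file in place with Spark. The mount point, file name and data are hypothetical, chosen only for illustration, and are not drawn from MapR’s announcement or documentation.

```python
# Illustrative "one cluster, many workloads" flow: a plain POSIX write
# followed by Spark analytics on the same file, with no copy into a
# separate analytics silo. The /mapr mount point below is hypothetical.
import csv

from pyspark.sql import SparkSession

CLUSTER_PATH = "/mapr/my.cluster.com/data/events.csv"  # assumed NFS mount

# 1. An ordinary file write, e.g. from an operational application.
with open(CLUSTER_PATH, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["user", "event", "amount"])
    writer.writerow(["alice", "purchase", 42.0])
    writer.writerow(["bob", "refund", -7.5])

# 2. The same file, read and aggregated by Spark on the same cluster.
spark = SparkSession.builder.appName("converged-demo").getOrCreate()
events = spark.read.csv(CLUSTER_PATH, header=True, inferSchema=True)
events.groupBy("event").sum("amount").show()
```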

Categories: MapR

Google CEO Sundar Pichai Asserts Google Cloud Platform Is Used By 4 Million Applications

Google CEO Sundar Pichai recently announced that Gmail has surpassed 1 billion monthly users and that Google Cloud Platform is used by more than 4 million applications. Pichai also asserted that Google Cloud Platform “is ready to be used at scale,” and that the company’s cloud infrastructure and applications have reached maturity at exactly the time when the broader, industry-wide “movement to cloud has reached a tipping point.” Pichai further noted that Catholic Health Initiatives, one of the nation’s largest non-profit health systems, announced its transition to Google Apps last quarter, yet another example of Google Cloud Platform’s readiness to embrace workloads from large organizations and enterprises. Unlike Microsoft and Amazon, Alphabet, Google’s parent company, did not break out revenue or run-rate details for its cloud business, but the company’s appointment of VMware co-founder Diane Greene to head Google’s cloud services division in November constitutes ample proof of its interest in building out its cloud business. The question now, however, is when and how Google plans to court the enterprise, a space traditionally dominated by Microsoft and IBM in software and infrastructure. Without more details of its strategy for gaining traction for cloud products and services in the enterprise, investors and analysts alike will be hard pressed to understand how Google plans to build cloud market share, particularly given continued impressive revenue growth for Amazon Web Services and Microsoft’s growing ascendancy in cloud products and services under CEO Satya Nadella.

Categories: Google, Google Cloud Platform, Miscellaneous

Spark-Redis Connector Increases Speed Of Spark On Redis By Over 100x Compared To HDFS

Redis Labs today announced the release of a Spark-Redis connector that accelerates the performance of Spark in comparison to other database and storage infrastructures. The Spark-Redis connector allows Redis users to leverage the power of Spark to perform real-time analytics on large, streaming datasets. The open source Spark-Redis connector can read from and write to Redis clusters while preserving Redis data structures. The integration of Redis and Spark results in a 135-fold acceleration of Spark when compared to HDFS and a 45-fold acceleration when compared to Spark using Tachyon. Yiftach Shoolman, co-founder and CTO of Redis Labs, remarked on the significance of the acceleration of Spark on Redis data stores as follows:

Big data is coming of age and customers are demanding that big data insights are extracted in real-time. This is where Redis Labs fills the gap by delivering both the right performance and optimized distributed memory infrastructure to accelerate Spark. Our goal is to make Redis the de-facto data store for any Spark deployment.

Here, Shoolman comments on how the integration between Redis and Spark enhances the derivation of analytic insights from big datasets. The improvement in Spark’s performance on Redis allows users to conduct analyses in real time while enjoying the performance of the “optimized distributed memory infrastructure” that Redis delivers. In addition to the gains in speed, one of the key advantages of using Spark with Redis is the latter’s ability to give Spark access to individual data elements in ways that avoid the operational overhead associated with transferring or running analytics on large batches of data. Today’s announcement features news of a Spark-Redis connector, support for Spark SQL and the capability to use Redis as a distributed memory database for Spark. The connector’s acceleration of Spark to blistering speeds promises to catapult the positioning of Redis within the NoSQL database landscape and the database infrastructure space more generally. By accelerating Spark to speeds more than 100 times faster than its performance on HDFS, Redis gives customers faster access to real-time data analytics in ways that can be crucial for use cases that demand split-second analytic transactions on massive datasets. Going forward, Redis Labs plans to extend the integration to support Spark’s machine learning and graph processing functionality as well. The graphic below illustrates the acceleration in speed enabled by the Spark-Redis connector as compared to other database infrastructures:
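
In addition to the benchmark graphic, the following PySpark sketch shows what exercising the connector might look like in practice. It is illustrative only: it assumes a later spark-redis release that exposes a DataFrame reader and writer, a Redis instance on localhost:6379, and the package coordinates shown in the configuration, none of which come from today’s announcement.

```python
# Illustrative PySpark session using an assumed spark-redis release.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spark-redis-demo")
    # Connector coordinates/version are assumptions; adjust to the release in use.
    .config("spark.jars.packages", "com.redislabs:spark-redis_2.12:3.1.0")
    .config("spark.redis.host", "localhost")
    .config("spark.redis.port", "6379")
    .getOrCreate()
)

people = spark.createDataFrame([("alice", 34), ("bob", 45)], ["name", "age"])

# Write the DataFrame into Redis. Rows are stored as native Redis hashes keyed
# by name, so they stay readable by ordinary Redis clients as well as by Spark.
(people.write
    .format("org.apache.spark.sql.redis")
    .option("table", "person")
    .option("key.column", "name")
    .mode("overwrite")
    .save())

# Read the data back and run a Spark SQL query against the Redis-resident table.
loaded = (spark.read
    .format("org.apache.spark.sql.redis")
    .option("table", "person")
    .option("key.column", "name")
    .option("infer.schema", True)
    .load())
loaded.createOrReplaceTempView("person")
spark.sql("SELECT name, age FROM person WHERE age > 40").show()
```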


Categories: Miscellaneous, Redis Labs
