Windows Azure Research Award From Microsoft Research Attempts To Draw Research Community To Azure

Microsoft Research is attempting to lure researchers and scientists into analyzing data on the Windows Azure cloud platform by granting winners of its Windows Azure Research Award “large allocations of Windows Azure storage and compute resources for a period of one year.” To qualify, applicants need to be affiliated with a university or non-profit research laboratory. Proposals from any academic discipline are welcome, although the proposal submission site specifies an interest in granting awards to projects that render data more widely available to a community or collaborative group. The first deadline for proposals is October 15, 2013. Microsoft expects to announce the first batch of winners by the beginning of November.

Jaspersoft BI Suite Available On Windows Azure

Today, Jaspersoft announced the availability of its business intelligence (BI) suite on the Windows Azure platform. As a result, Windows Azure developers can now seamlessly embed Jaspersoft’s analytics and reporting into their applications. Today’s announcement represents yet another chapter in Jaspersoft’s business development strategy of seeding well-known cloud-based platforms with its BI suite of applications. In February, for example, Jaspersoft made its platform available on Amazon Web Services by way of a BI server that integrates with EC2, RDS and Redshift. Jaspersoft similarly boasts partnerships with Red Hat’s OpenShift PaaS and Cloud Foundry that render its BI suite available on both platforms. The partnership marks a coup for both companies, and particularly for Azure as it ramps up its breadth of functionality in an attempt to keep pace with Amazon Web Services, as evinced by this week’s caching announcements and upgrades from both platforms. Jaspersoft represents the first open source BI application available on the Windows Azure platform.

Microsoft To Offer Open Source Java On Azure IaaS and PaaS Platforms

Microsoft announced plans to support an open-source version of Java on both its Windows Azure IaaS and PaaS platforms at last week’s O’Reilly Open Source Convention in Portland, Oregon. Microsoft will offer the Java Standard Edition (Java SE) and will work with Azul Systems to “build, certify and distribute a compliant OpenJDK-based distribution meeting the Java SE specification for use with Windows Server environments on Azure.” Azul will collaborate with Microsoft’s wholly-owned subsidiary Microsoft Open Technologies to develop the new OpenJDK distribution in an effort that will focus largely on compliance, standards and specifications, given Microsoft’s experience of being sued by Sun Microsystems for developing a non-compliant version of Java. Sunnyvale, CA-based Azul Systems is an experienced provider of Java runtimes to enterprises, specializing in optimizing enterprise usage of Java by improving performance, scalability, latency, response times and consistency. Azul will license the OpenJDK build on Azure under the GNU General Public License (GPL) version 2 and certify it for compliance with Java SE.
Microsoft’s support of Java on its Azure platform comes in the wake of a partnership announced in June whereby Oracle software such as Java will be certified and supported by Oracle to run on the Azure platform and Microsoft’s Hyper-V virtualization technology.

Windows Azure IaaS Takes Aim At Amazon Web Services Via Price, Functionality And Service

This was the week when Microsoft announced the general availability of Windows Azure Infrastructure as a Service. More than a simple declaration of production-grade availability, Microsoft’s announcement delivered the strongest elaboration to date of its intent to compete head to head with Amazon Web Services in the IaaS space. In a blog post, Microsoft’s Bill Hilf accurately assessed enterprise readiness with respect to cloud adoption by noting that customers are not interested in replacing traditional data centers with cloud-based environments. Customers typically want to supplement existing data infrastructures with IaaS and PaaS installations alongside private cloud environments and traditional data center ecosystems. In other words, hybridity is the name of the game in enterprise cloud adoption at present, and Hilf’s argument is that no one is better positioned to recognize and respond to that hybridity than Microsoft. In conjunction with the general availability of its Azure IaaS platform, Microsoft pledged a commitment to “match Amazon Web Services prices for commodity services such as compute, storage and bandwidth” alongside “monthly SLAs that are among the industry’s highest.”

Microsoft also announced new, larger Virtual Machine sizes of 28 GB/4 cores and 56 GB/8 cores, in addition to a new gallery of Virtual Machine image templates including Windows Server 2012, Windows Server 2008 R2, SQL Server, BizTalk Server and SharePoint Server, as well as VM templates for applications that run on Ubuntu, CentOS, and SUSE Linux distributions. Overall, the announcement represents an incisive and undisguised assault on the market dominance of Amazon Web Services within the IaaS space that is all the more threatening given Microsoft’s ability to match AWS in price, functionality and service. The key question now is the degree to which OpenStack and Google Compute Engine (GCE) will mount comparable challenges. OpenStack has already emerged as a major IaaS player, but it remains to be seen which distribution will take the cake at the enterprise level. Nevertheless, analysts should expect a tangible reconfiguration of IaaS market share by the end of 2013, with a more significant transformation taking shape roughly a year after the general-availability release of Google Compute Engine, which launched in beta in June 2012.

Can Google’s Compute Engine Dethrone Amazon Web Services?

In June 2012, Google introduced its IaaS offering, Google Compute Engine (GCE). GCE allows users to deploy Linux Virtual Machines on the same infrastructure that powers Google’s world-class data centers and IT infrastructure. GCE complements Google’s related cloud offerings such as Google App Engine, Google Cloud Storage, and Google BigQuery and represents a significant competitive play to grab market share from Amazon Web Services (AWS), the undisputed market leader in the IaaS space. GCE’s value proposition rests upon Google’s reputation for scalability, performance, ability to compete in price and the allure that Google’s global technical infrastructure may prove itself virtually immune to the service disruptions that have affected both Amazon Web Services and Microsoft Azure over the last year.

Upon its launch in June, GCE offered users instances in four sizes of 1, 2, 4, and 8 virtual cores, with 3.75 GB of memory per virtual core. Since then, GCE has added a constellation of additional instance options that include high-memory and high-CPU instances, in addition to a diskless option for users who do not need dedicated disk storage attached to their server. Pricing is competitive with AWS: the AWS medium Linux instance featuring 3.75 GB costs $0.130/hour in comparison to GCE’s $0.138/hour. Similarly, AWS’s extra large, 15 GB instance runs at $0.520/hour in comparison to GCE’s $0.552/hour.
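Put in monthly terms, the quoted hourly rates imply a GCE premium of roughly six percent in both size classes. A quick back-of-the-envelope sketch (the hourly prices are those quoted above; the 730-hour average month is an assumption, not a figure from either provider):

```python
# Back-of-the-envelope monthly cost comparison using the hourly
# prices quoted above. HOURS_PER_MONTH = 730 (average month) is
# an assumption, not a provider-published billing convention.
HOURS_PER_MONTH = 730

prices = {
    # instance class: (AWS $/hr, GCE $/hr), as quoted in the text
    "medium (3.75 GB)":    (0.130, 0.138),
    "extra large (15 GB)": (0.520, 0.552),
}

for name, (aws, gce) in prices.items():
    aws_month = aws * HOURS_PER_MONTH
    gce_month = gce * HOURS_PER_MONTH
    premium = (gce - aws) / aws * 100
    print(f"{name}: AWS ${aws_month:.2f}/mo, GCE ${gce_month:.2f}/mo "
          f"(GCE premium: {premium:.1f}%)")
```

At these rates the medium instance works out to roughly $94.90/month on AWS versus $100.74/month on GCE, a gap of about 6.2% in each class.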

Cloud Computing Today spoke with Floyd Strimling, Technical Evangelist and the Senior Director of Marketing & Community at Zenoss, about the positioning of Google and its Google Compute Engine IaaS platform in relation to AWS:

Cloud Computing Today:
Which vendor poses the biggest threat to the market share leadership held by Amazon Web Services?

Floyd Strimling:
The reality is Amazon’s biggest threat comes from themselves as they must keep innovating while improving performance, availability, and reliability. Surprisingly, Microsoft has both the technical implementation and pricing structure to threaten Amazon. However, they have a perception problem that could be the subject of another article. Finally, Google has all the pieces to compete but must answer questions about their privacy, customer support, and long term commitment to the enterprise.

One last thought, Rackspace is really the wild card as they have all their bases covered. The main threat for Rackspace is the maturity of OpenStack and the effort it will take to get their cloud offerings to match Amazon’s solutions. Given how fast Amazon is innovating, this is not an easy feat.

Cloud Computing Today:
What advantages does Google have over other vendors, or even over Amazon Web Services?

Floyd Strimling:
Google is simply an overwhelming powerhouse that has great admiration and respect within the industry. They own/lease their own fiber connections, are building out Google Fiber in Kansas City, have the dominant search engine and mobile platform, are threatening Microsoft/OpenOffice with Google Docs, and maintain everything from file sharing to email and everything in between. Yet their greatest strength may lie within their ability to monetize their services via advertising.

Cloud Computing Today:
What are the most significant challenges Google faces as it gears up to pose a competitive IaaS challenge to Amazon Web Services?

Floyd Strimling:
Privacy – Google must prove to the Enterprise that they will safeguard and not abuse the information they are collecting.
Customer Support – Google must understand that customer support is the key to the Enterprise market. Customer support is more than simply posting questions on a forum and waiting for answers. If I were Google, I’d take a trip out to San Antonio and learn from the best, Rackspace.
Long-Term Commitment – Google has a history of endless betas and, now, shutting down services. They must prove to the Enterprise that they are in this for the long haul and will work with their customers to refine any and all solutions.


Market Mobility Within The IaaS Space Remains Significant

Although Amazon Web Services has clearly differentiated itself from the pack of IaaS vendors by way of its pricing and breathtaking track record of innovation, its rate of innovation represents a double-edged sword insofar as the industry expects AWS to roll out feature after feature and relentlessly redefine the meaning of the much-maligned phrase “cloud computing”. That said, AWS’s record of innovation generates a converse pressure on potential rivals such as Rackspace and the commercial OpenStack community to innovate at a rate faster than currently permitted by OpenStack’s six-month release cycle. Nevertheless, Rackspace’s experience in the IaaS space and impeccable customer support pedigree render it a key player that could well leverage OpenStack’s interoperability to good effect.

Microsoft and Google both have the capital and wherewithal to compete with Amazon Web Services on price, but both struggle with “perception problems” of different flavors. The bottom line is that the IaaS race still remains wide open, particularly given the commitments made by tech giants and startups alike to platforms with similar functionality and visions. Strimling makes no mention of CloudStack here, but one can assume it constitutes a major player as well.

Google Will Need To Overcome Multiple Perception Problems To Compete With AWS

Even though Google has the technological infrastructure to pose a significant threat to AWS, it will need to shed its reputation for lack of dedication to enterprise customers. Admittedly, Google Docs has done some of the work of orienting Google toward the enterprise, but there is still much work to be done if Google wants to shed the perception of fickleness created by its history of rolling out products in beta that it subsequently retracts. Moreover, given Google’s virtually unparalleled capability for searching through machine data, customers are likely to be wary of placing sensitive information in an infrastructure that permits Google to indulge its penchant for data mining. Google will need to allay customer concerns about privacy and security with strong, unequivocal customer agreements and licensing terms that guarantee the safety of their data from prying eyes qua search algorithms. Finally, Google will need a thought leader in the form of an outward-facing CTO who can explain its technology and infrastructure to the enterprise in terms that CIOs, CTOs and the blogosphere understand and trust. Just as Werner Vogels became the face of Amazon Web Services, Google will need to brand another cloud visionary with the ability to build trust amongst enterprise customers, developers and the “cloud computing” community more generally.

Understanding the December 2012 Windows Azure Outage

From December 28 to December 30, Microsoft’s Windows Azure platform experienced an outage in its South Central US Region that arrived hard on the heels of the Amazon Web Services Christmas Eve outage that became famous for incapacitating Netflix. The outage was first reported by Microsoft at 3:16 PM UTC on December 28 with the news, on its Windows Azure Service Dashboard, that a networking issue was “partially affecting the availability of Storage service in the South Central US subregion.” Hours later, Microsoft noted that the outage was also affecting its ability to display the status of service for all other regions, even though service itself was unaffected outside the South Central US Region.

The first substantial elaboration on the cause of the outage came on December 28 at 9:16 PM UTC, six hours after the disclosure of the outage:

The repair steps are taking longer because it involves recovery of some faulty nodes on the impacted cluster. We expect this activity to take a few more hours. Further updates will be published after the recovery is complete. We apologize for any inconvenience this causes our customers. Note: The availability is unaffected for all other services and sub-regions. We are currently unable to display the status of the individual services and sub-regions due to the above mentioned issue.

Here, Microsoft specifies that the root cause of the problem consisted of “faulty nodes on the impacted cluster,” and that repair would be complete within a few hours. But nine hours after this update, within 15 hours of the initial announcement, the Azure team announced that the problems affecting the recovery of the faulty nodes were “likely to take a significant amount of time.” The impact on the creation of new VM jobs and Service Management operations had been addressed in the meantime, but the full and complete recovery of the cluster would take more time.

On December 30, 9:00 PM UTC, the Azure team reported:

The repair steps are still underway to restore full availability of Storage service in the South Central US sub-region. Windows Azure provides asynchronous geo replication of Blob & Table data between data centers, but does not currently support geo-replication for Queue data or failover per Storage account. If a failover were to occur, it would impact all accounts on the affected Storage cluster, resulting in loss of Queue data and some recent Blob & Table data. To prevent this risk to customer data and applications, we are focusing on bringing the affected stamp back to full recovery in a healthy state. We continue to work to resolve this issue at the earliest and our next update will be before 6PM PST on 12/30/2012. Please contact Windows Azure Technical support for assistance. We apologize for any inconvenience this causes our customers.

With this announcement, impacted customers finally learned the real root cause of the prolonged outage: at the time, the Azure platform did not support geo-replication for Queue data or failover on a per-Storage-account basis. A failover such as the one contemplated for the affected cluster would therefore have resulted in the loss of Queue data as well as some “recent Blob & Table data,” which is why Microsoft chose the slower path of recovering the faulty nodes on the affected cluster. Geo-replication, recall, refers to the practice of maintaining replicas of customer data in locations that are hundreds of miles apart in order to more effectively protect customers against data center outages. Azure Storage’s incomplete support for geo-replication thus led to the prolongation of the December 2012 outage.
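The trade-off Microsoft faced can be sketched conceptually: Blob and Table writes are copied asynchronously to a secondary region, Queue writes are not, so a failover discards all Queue data plus any writes not yet replicated. A minimal illustration (every class and method name here is a hypothetical invention for exposition, not the Azure Storage API):

```python
# Conceptual sketch of the geo-replication gap described above.
# All names are hypothetical illustrations, not the Azure Storage API.

class Region:
    """A data center region holding three kinds of storage."""
    def __init__(self, name):
        self.name = name
        self.blobs, self.tables, self.queues = {}, {}, {}

class StorageStamp:
    """Blob & Table writes replicate asynchronously to a secondary
    region; Queue writes stay in the primary only."""
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary
        self.pending = []  # writes not yet copied to the secondary

    def write_blob(self, key, value):
        self.primary.blobs[key] = value
        self.pending.append(("blobs", key, value))

    def write_queue(self, key, value):
        # Queue data is never geo-replicated.
        self.primary.queues[key] = value

    def replicate(self):
        # Asynchronous: runs some time after the writes above.
        for store, key, value in self.pending:
            getattr(self.secondary, store)[key] = value
        self.pending.clear()

    def failover(self):
        # Discards the primary: all Queue data and any not-yet-
        # replicated Blob/Table writes are lost.
        self.primary = self.secondary
```

In this sketch, failing over before `replicate()` has caught up loses recent Blob writes and all Queue data, which mirrors the status update's rationale for repairing the affected stamp in place rather than failing over.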

The problem was finally, fully resolved at 10:16 AM UTC, December 31, 2012:

Storage is fully functional in the South Central US sub-region. All customers should have access to their data. We apologize for any inconvenience this caused our customers.

Notable about the Microsoft Azure outage was its relative lack of media coverage in comparison to the Amazon Web Services outage, which lasted roughly 24 hours in comparison to 77 hours for the Azure outage. Granted, the Amazon Web Services outage affected Netflix, one of the IaaS industry’s most prominent customers alongside Zynga, but the contrast between the coverage accorded to each of these platforms illustrates the market dominance of Amazon Web Services, as measured by the degree to which its outages affect more customers and end-users than those of other IaaS platforms. Another factor accounting for the relative disparity in media coverage between the AWS and Azure outages is AWS’s trademark painstaking post-mortem analysis of outages, which Microsoft and all other vendors would do well to match in depth and specificity going forward.