Telecom giant CenturyLink has decided to acquire Savvis for $2.5 billion in a move that signals early consolidation within the cloud computing industry. CenturyLink announced a deal whereby Savvis stock would be acquired for $40/share, an 11% premium over the April 26, 2011 closing share price. The acquisition also involves the assumption of $700 million in debt, for a total deal valuation of $3.2 billion. Under the terms of the transaction, Savvis shareholders would receive $30/share in cash and $10/share in CenturyLink common stock.

The acquisition enables CenturyLink to expand its existing hosting and colocation capabilities by adding Savvis’s IaaS cloud computing platform alongside them. Together, CenturyLink and Savvis will operate 48 data centers worldwide: Savvis’s 32 plus CenturyLink’s 16. CenturyLink announced plans to integrate Savvis as a distinct business unit that retains its current leadership team, headed by Savvis chairman and CEO James Ousley. The acquisition comes on the heels of CenturyLink’s recent $10.6 billion purchase of Qwest in the increasingly consolidated telecommunications vertical.

Savvis is known for its large enterprise customer base and annual revenues close to $1 billion. In an interview with ZDNet’s Larry Dignan, Savvis president Bill Fathers noted that cloud-based revenues averaged $8–$10 million a quarter, with 350 of its 3,500 customers using its cloud platform, Symphony. The remainder of Savvis’s revenues is generated by colocation and managed services such as application hosting and network services; cloud revenues constitute a subset of the managed services revenue. Rumors of an impending acquisition of Savvis had swirled since the recent acquisitions of Terremark by Verizon and NaviSite by Time Warner Cable.
Leading industry analysts such as Gartner’s Lydia Leong contend that the acquisition could bode well for Savvis, provided the company is allowed to run semi-independently. Nevertheless, the larger question posed by this acquisition is whether acquired cloud vendors such as Terremark and Savvis can continue to match the pace of product innovation set by highly agile competitors such as Rackspace and Amazon.
Sony PlayStation’s cloud computing network experienced significant downtime starting on April 21. The outage affected Sony’s PlayStation Network and its Qriocity music service. Sony PlayStation’s cloud-based environment allows users to download and use online games, music, videos and movies. Patrick Seybold, Sony’s Senior Director of Corporate Communications and Social Media, announced that an “external intrusion” was responsible for the outage, generating suspicions that hackers were responsible for bringing down Sony’s cloud-based gaming and music platform. The hacker group Anonymous was the principal suspect after Sony initiated a lawsuit against George Hotz, a PlayStation user known by the username GeoHot, who jailbroke his PlayStation 3 and distributed jailbreaking tools that let other users download unauthorized applications. In early March, a Northern California court awarded Sony access to Hotz’s social media accounts, his PayPal account and the IP addresses of users who visited his website. The hacker collective Anonymous objected to Sony’s lawsuit against Hotz, noting, “You have abused the judicial system in an attempt to censor information on how your products work. You have victimized your own customers merely for possessing and sharing information, and continue to target every person who seeks this information. In doing so, you have violated the privacy of thousands.” After Anonymous issued threats to Sony about its handling of the Hotz lawsuit, Sony experienced downtime on its main website, Style.com and the U.S. PlayStation site on April 6, in attacks that have been widely attributed to Anonymous.
But Anonymous denied responsibility for the recent outage, claiming, “For Once, We Didn’t Do It,” and that “While it could be the case that other Anons have acted by themselves, AnonOps was not related to this incident and does not take responsibility for whatever has happened. A more likely explanation is that Sony is taking advantage of Anonymous’ previous ill-will towards the company to distract users from the fact that the outage is actually an internal problem with the company’s servers.” Sony’s technical troubles follow the recent high-profile releases of Mortal Kombat and Portal 2. Considered alongside Amazon’s recent EC2 outage, Sony’s downtime raises increased concerns about quality of service and reliability in the world of cloud computing. Downtime on Sony’s PlayStation Network began on April 21 and continues as of the evening of April 24, 2011.
Amazon’s cloud computing outage on April 21 and April 22 can be interpreted in one of two ways: (1) either the outage constitutes a reflection on Amazon’s EC2 platform and its processes for disaster recovery situations; or (2) the outage represents a commentary on the state of the cloud computing industry as a whole. The outage began on Thursday and involved problems specific to Amazon’s Northern Virginia data center. Companies affected by the outage include HootSuite, FourSquare, Reddit, Quora and other start-ups such as BigDoor, Mass Relevance and Spanning Cloud Apps. HootSuite—a dashboard that allows users to manage content on a number of websites such as Facebook, LinkedIn, Twitter and WordPress—experienced a temporary crash on Thursday that affected a large number of sites. The social news website Reddit was unavailable until noon on Thursday, April 21. BigDoor, a 20-person start-up that provides online game and rewards applications, had restored most of its services by Friday evening even though its corporate website remained down. Netflix and Recovery.gov, meanwhile, escaped the Amazon outage either unscathed or with minimal interruption.
Amazon’s EC2 platform currently has five regions: US East (Northern Virginia), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), and Asia Pacific (Tokyo). Each region is composed of multiple “Availability Zones”. Customers who launch server instances in different Availability Zones can, according to Amazon Web Services’s website, “protect [their] applications from failure of a single location.” The Amazon outage underscores how EC2 customers can no longer depend on multiple Availability Zones within a single region as insurance against system downtime. Customers will need to ensure that their architectures provide for duplicate copies of server instances in multiple regions.
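As a sketch of what zone- versus region-level redundancy means in practice, the hypothetical helper below spreads copies of a server instance round-robin across the Availability Zones of one or more regions. The region names mirror Amazon’s 2011 lineup, but the zone names and the function itself are illustrative assumptions, not part of any AWS SDK.

```python
from itertools import cycle

# Illustrative region -> Availability Zone map; zone names are
# hypothetical examples, not a live AWS inventory.
EC2_TOPOLOGY = {
    "us-east-1": ["us-east-1a", "us-east-1b", "us-east-1c"],
    "us-west-1": ["us-west-1a", "us-west-1b"],
    "eu-west-1": ["eu-west-1a", "eu-west-1b"],
}

def placement_plan(regions, copies):
    """Assign `copies` instance replicas round-robin across regions,
    and within each region across its Availability Zones, so that no
    single zone (or, with several regions, no single region) holds
    every copy of the deployment."""
    zone_cycles = {r: cycle(EC2_TOPOLOGY[r]) for r in regions}
    region_cycle = cycle(regions)
    return [next(zone_cycles[next(region_cycle)]) for _ in range(copies)]

# Multi-AZ, single region: survives a zone failure, but not a
# region-wide failure like the April 2011 US East outage.
print(placement_plan(["us-east-1"], 3))

# Multi-region: half the copies land outside the affected region.
print(placement_plan(["us-east-1", "us-west-1"], 4))
```

The design choice the April outage argues for is the second call: once EBS and RDS failed region-wide, only replicas placed in a different region would have stayed up.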
Amazon’s SLA commits to 99.95% annual uptime for customers who have deployments in more than one Availability Zone within a specific region. However, the SLA guarantees only the ability to connect to and provision instances. On Thursday and Friday, Amazon’s US East customers could still connect to and provision instances, but the outage adversely affected their deployments because of problems with Amazon’s Elastic Block Store (EBS) and Relational Database Service (RDS) platforms. EBS provides network-attached block storage volumes for EC2 instances, and RDS is a managed relational database service that runs on EC2 infrastructure. Because Amazon’s problems were confined to EBS and RDS in the US East region, the SLA was not violated for customers affected by the outage. The immediate consequence is that EC2 customers who want to approach 100% system uptime will need to deploy copies of the same server instance in multiple regions—on the assumption, of course, that simultaneous outages across multiple Amazon regions remain wildly unlikely.
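To put uptime percentages like these in perspective, an SLA figure translates directly into an annual downtime budget. The short calculation below is my own arithmetic, not AWS material; it shows how sharply the budget shrinks as the promised percentage rises.

```python
def annual_downtime_budget_hours(uptime_pct):
    """Hours of downtime per 365-day year permitted by a given uptime %."""
    return (100.0 - uptime_pct) / 100.0 * 365 * 24

for pct in (99.5, 99.95):
    hours = annual_downtime_budget_hours(pct)
    print(f"{pct}% uptime permits {hours:.1f} hours of downtime per year")
```

A 99.95% commitment allows roughly four and a half hours of downtime per year—less than the multi-day EBS/RDS disruption some US East customers experienced, which is why the narrow “connect and provision” wording of the SLA matters.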
Anyone familiar with the cloud computing industry knows full well that Amazon, Rackspace, Microsoft and Google have all experienced glitches resulting in system downtime in the last three years. The multiple instances of system downtime across vendors point to the immaturity of the technological architecture and processes for delivering cloud computing services. Until the architecture and processes for cloud computing operational management improve, customers will need to seriously weigh the costs of redundant data architectures that insure them against system downtime against the risk and costs of actual downtime.
For a non-technical summary of the technical issues specific to the outage, see Cloud Computing Today’s “Understanding Amazon Web Services’s 2011 Outage.”
Microsoft Corporation and Toyota Motor Corporation’s announcement that the Microsoft Azure cloud computing platform will host telematics applications for Toyota’s electric and plug-in hybrid vehicles marks an important step in the battle for enterprise market share amongst the top cloud computing vendors. Disclosed on April 6, the agreement signifies Microsoft’s increasing dominance in the automotive vertical as it expands its market base beyond Ford, Kia and Fiat. Microsoft and Toyota plan to invest $12 million (1 billion yen) in telematics services for the Toyota subsidiary, Toyota Media Service. Telematics marks the conjunction of information technology with telecommunications in a way that allows users to obtain increased control of energy management, multimedia and location-related services. Expected features of the Microsoft Azure-based platform include:
• The ability to determine when to most economically recharge an electric battery in relation to energy costs
• A mobile application that checks battery levels and calculates how far users can drive before recharging their battery
• The ability to manage home energy and air conditioning units from automobiles
• Increased customization over streaming audio and video content
Initial deployment of telematics applications is expected amongst Toyota’s electric and plug-in hybrid vehicles in 2012. Toyota plans to deploy a global, Azure-based platform to provide advanced telematics services to its customers by 2015. The initial roll-out of Toyota’s partnership with Microsoft will focus on energy management, but the platform will more broadly enhance information access and control for its users. Toyota also plans to make its telematics platform available to other car manufacturers, in a move that would increase standardization within the automotive vertical with respect to driver access to energy, entertainment and location information.
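The battery-range feature in the list above amounts to a simple calculation that a client of such a telematics platform could perform. The sketch below is a hypothetical illustration—the function and the numbers are mine, not Toyota’s or Microsoft’s.

```python
def remaining_range_km(battery_pct, full_range_km):
    """Estimate how far the driver can go before recharging, given the
    current battery level (%) and the vehicle's full-charge range (km)."""
    return battery_pct / 100.0 * full_range_km

# Hypothetical plug-in hybrid: 80 km of electric range at full charge,
# battery currently at 45%.
print(f"{remaining_range_km(45, 80):.0f} km of electric range remaining")
```

A production mobile application would pull the battery level from the cloud-hosted telematics service rather than take it as a parameter, but the underlying arithmetic is the same.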
Google is planning a major overhaul of YouTube that will enable it to stream full-length television and video content. The Mountain View search engine giant has reportedly allocated $100 million for the initiative to acquire content, finalize licensing agreements and overcome the requisite technical challenges. Google plans to create channels such as “Sports” and “Drama” within YouTube containing original, professionally produced content that is proprietary to Google. According to The Wall Street Journal, Google and YouTube executives have held meetings with Hollywood talent agencies such as Creative Artists Agency, William Morris Endeavor and International Creative Management to discuss the creation of original content. Google’s play to enter the video space is motivated by an effort to increase the amount of time spent by users on the site and thereby increase advertising revenue. Moreover, the company may decide to obtain additional revenue by offering select content to users on a paid or subscription basis. Google’s apparent decision to compete directly with Netflix signals intensified competition in the online streaming content space, spearheaded by cloud computing vendors such as Amazon and Google that have the IT infrastructure to handle the bandwidth demands of delivering significant volumes of content to users on a daily basis. Amazon Prime, for example, offers its members access to 5,000 streaming movies for an annual membership fee of $79.
Wedge Partners analyst Martin Pyykkonen notes that Google’s plans to revamp YouTube constitute a significant threat to Netflix because of the sheer omnipresence of YouTube across virtually all online platforms. That said, Netflix has thus far proven to be an unprecedented market leader in video content acquisition, as evinced by its recent finalization of a licensing agreement with Lions Gate Entertainment Corporation to stream seven seasons of the TV series “Mad Men.” So far, YouTube has been less than successful in acquiring licensing rights to longer video content. Nevertheless, the stock price of Netflix has dropped significantly over the last week: although Netflix’s shares rose today by 2.52% to close at $233.92, the share price remains below its April 6 close of $239.97.
Select Quotes from Steve Schuckenbrock, President of Dell Services (April 5, 2011):
“You know the demand for IT and the torque frankly in the system for CIOs, when you balance the huge demand for efficiency with really sort of unprecedented levels of efficiency being driven through cloud like execution, and whether that’s a public cloud, private cloud, whatever the case might be, the reality is, is there’s a significant amount of standardization that’s occurring in the world. And that standardization brings all sorts of value from an efficiency standpoint, and places real pressure on CIOs to make sure they embrace those opportunities as quickly as possible.
And at the same time, there’s increased demand based on sort of any information available anytime, anywhere to basically any device for flexibility and for speed, and the ability to respond to this enormous sort of expectation. You know I guess probably best summarized by us as consumers, and our need for instantaneous gratification of any information available anytime.
And it’s this torque between these two things that I think creates a tremendous opportunity and a bit of an inflection point. Dell, I believe, is positioned exceptionally well to respond on both of those two fronts. We have terrific leadership in the standardization of technology. We are in fact very focused on standing up highly virtualized, highly efficient, you might even call it optimized data center infrastructures for our customers, and we are doing the same with our own data centers from a services standpoint.
And that certainly gives the counter balance that says, from a flexibility standpoint, you get greater speed, when there’s standardization, you can respond faster, you can innovate faster. And you get a repeatable quality and cost proposition as a result.
Cloud services is certainly something that brings new levels of efficiency as well as flexibility. When you look from a cloud services standpoint, it’s the ability to frankly deliver an infrastructure all the way through a set of applications in a manner that takes advantage of all the efficiencies of the cloud, whether that be a private cloud or a public cloud, but at the same time, responding to this need for speed. The fact that people want just in time kind of capacity, they want the ability to provision services themselves, and to be able to turn them on and turn them off at their whim, as opposed to these sort of monolithic, contractual structures that have been a part of the services industry for so long.
And from a talent factory standpoint, there is a huge need for access to the right skill sets in the right place at the right time, and sometimes those skill sets are local and consultative in nature, and other times those skill sets might be leveraged in a cost optimized location someplace around the world. But these talent factories are vital in terms of being able to help customers move their applications to the standardized or optimized infrastructure footprint that I described above. And I think all three of these capabilities are absolutely crucial to embrace what’s happening in the services space today.”
Services Conference Call with Steve Schuckenbrock
Hosted by Sanford Bernstein
April 5, 2011
Dell announced it plans to spend $1 billion on cloud computing products and services over the next fiscal year in an attempt to gain market share in an environment currently dominated by Amazon, IBM, Microsoft, Google, Rackspace and HP. Over the next two years, the company plans to build 10 data centers devoted to the deployment of cloud computing technology in the U.S., Europe and Asia. Moreover, the company plans to open a total of 22 Global Solutions Centers where customers can obtain consultative services about the cloud computing strategy that constitutes the best fit for their organization. In support of its plans to invest in cloud computing infrastructure, Dell announced the availability of vStart, a product that integrates server, storage, networking and management capabilities to provide customers with out-of-the-box, racked and cabled virtualization hardware and software. Designed to instantly enable the virtualization of 100–200 machines in its initial configuration, vStart comes pre-loaded with VMware’s ESXi hypervisor virtualization technology, though Dell expects it to accommodate a broader range of virtualization technology as the product matures. vStart 100’s technical specifications include a PowerEdge R610 server for managing the VMware technology, 3 PowerEdge R710 servers, Dell EqualLogic™ PS6000XV iSCSI storage, Dell PowerConnect™ 6248 switches and Dell management tools.
Dell’s decision to invest heavily in cloud computing marks the most explicit recognition from the Texas-based IT corporation that the market for PCs and data center servers is insufficient to sustain its growth in an enterprise environment that increasingly seeks IT standardization and efficiency, and a consumer environment that demands access to information in real time, 24-7. Dell has yet to announce what cloud computing software will power its IaaS and PaaS offerings in the data centers it intends to build. One possibility is that the IaaS platform will feature OpenStack while the PaaS leverages Microsoft Azure. In an April 6 press conference in San Francisco, Steve Schuckenbrock, president of Dell Services, noted that Dell’s forthcoming cloud computing data centers will house “public and private cloud capabilities.”