Data Center Containers showing signs of growth


We first heard word of these data center containers in 2005, when it was said that a parking lot would be filled with containers resembling shipping crates, each housing a complete technology powerhouse.  One of the first cases we heard about was from Google, and they didn’t want to keep it a secret from us; they said it was too much fun to talk about.

Today Microsoft has picked Virginia for its next major data center, and what has shown up, to our surprise, is a Microsoft IT-PAC container.  Microsoft plans to invest up to $499 million in a rural community in the southern part of the state.  The goal of this facility is to become the East Coast hub for all Microsoft online services.

During the bidding for the location, Microsoft was also considering neighboring North Carolina for this huge project.  Virginia officials, who won the pitched battle, are welcoming Microsoft and the 50 new jobs the project will create.

Microsoft is adding capacity and mobility and advancing its technology in preparation for its battle with Google and the other leading players in cloud computing.

This next-generation design gives Microsoft the ability to scale beyond the bricks of a traditional data center facility in favor of container structures.  Leaving the facility open to the air was guided by a research project the company ran for eight months.

Microsoft and Google are not the only companies building container data centers.

Other Commercial Containers:

  • Rackable ICE Cube
  • HP POD
  • Verari Forest Container
  • IBM Portable Modular Data Center
  • Sun MD S20




Gaming in the Datacenter: Hardware, Software and Architecture

The Curse Network is a comprehensive and accessible resource that enhances its users’ gaming experiences.
Its goal is to provide gamers with an unmatched suite of tools designed to meet every need.  Curse is a centralized hub for everything MMO.
As an extension of the explosion in social media, online gaming has brought us a large audience from around the world.
The Curse network’s success may also be a direct result of the tremendous success of Activision Blizzard and its World of Warcraft online game.

The Curse network has seen tremendous growth in recent months.  In 2009 Activision Blizzard announced that World of Warcraft had reached 12 million subscribers.
That is more subscriptions than some countries have in population, and we saw that less than a year ago; that is impressive.
Today new subscribers are joining this community every day.  The framework, as I have mentioned, is social media, and these companies have identified that, linking popular games such as World of Warcraft and StarCraft II into Facebook and Twitter.

The Curse team, from what I have been able to identify, has partnered with Microsoft to deploy a robust, highly scalable, redundant and high-performing architecture.  Built on Windows Server 2008, the Curse network platform would make a great case study showing that IIS 7.0 and the .NET Framework are key to the success of this gaming community.

Let’s take a look at the hardware involved in powering this architecture.
I would be thrilled to have the opportunity to speak with the Curse network team and learn more about what other technologies are used.
Please contact me, thank you.


In May of 2010, Datacenter Knowledge brought us this news.
Blizzard and AT&T have extended their 10-year relationship in North America, with AT&T providing the network to deliver World of Warcraft and StarCraft II; this will most likely also include Diablo III.  AT&T supports Blizzard with its ‘Gaming Core Team’, a special unit formed in 2004 to meet the infrastructure needs of customers’ gaming operations.

Paul Sams, COO of Blizzard Entertainment, commented on Datacenter Knowledge: “Over the years, AT&T has demonstrated that it understands the needs of our business.”  The shared mission: kindness, understanding, and being not just another vendor but rather a partner…

In 2009, AT&T and Blizzard let people know a little more about the hardware that drives this alternate universe.
Here are the data points:


  • Blizzard’s online network services run in 10 data centers around the world, including facilities in Washington, California, Texas, Massachusetts, France, Germany, Sweden, South Korea, China, and Taiwan.

  • Blizzard uses 20,000 systems and 1.3 petabytes of storage to power its gaming operations.

  • WoW’s infrastructure includes 13,250 server blades, 75,000 CPU cores, and 112.5 terabytes of blade RAM.

  • The Blizzard network is managed by a staff of 68 people.

  • The company’s gaming infrastructure is monitored from a global network operations center (GNOC) which, like many NOCs, features televisions tuned to weather channels to track potential uptime threats across its data center footprint.



DRBD: For When it Absolutely, Positively, Has to be in Sync

LINBIT develops DRBD, which stands for Distributed Replicated Block Device; as the name implies, it is used for replicating a block device between two servers. DRBD was designed to be used in high-availability (HA) clusters, and is conceptually similar to a RAID 1, or mirroring, setup.

In August 2009, Logicworks announced an enterprise solution that lowers the cost and boosts the performance and availability of managed cloud services using a software-based solution built with LINBIT. The full press release can be read here.

Let’s say you have two servers and a MySQL database that you want to keep up, even if the server it is on crashes. Without some form of HA, if the server hosting MySQL goes away, so does the MySQL database. To provide HA, DRBD inserts itself into the IO stack and proxies all block-level actions, writing the data simultaneously to the local disk and to the disk on the second server. So, when the time comes to fail over to the standby server, a script moves the IP address over from the primary, mounts the DRBD filesystem, and starts MySQL. Since everything is replicated, no data loss occurs during the failover.
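The core idea can be sketched in a few lines of Python. This is a simplified model, not DRBD’s actual implementation: every block write is applied to both the local disk and the peer’s disk before it is acknowledged, so the standby always holds an identical copy.

```python
# Simplified model of DRBD-style synchronous block replication.
# Not the real DRBD code path: just the core idea that every write
# reaches both the local disk and the standby's disk.

class MirroredBlockDevice:
    def __init__(self):
        self.local_disk = {}   # block number -> data on the primary
        self.peer_disk = {}    # the standby node's copy

    def write(self, block, data):
        # DRBD proxies the write: local and remote are updated together.
        self.local_disk[block] = data
        self.peer_disk[block] = data

    def failover(self):
        # After the primary dies, the standby's copy takes over.
        # Nothing is lost, because every acknowledged write reached it.
        return self.peer_disk

dev = MirroredBlockDevice()
dev.write(0, b"mysql ibdata page")
dev.write(1, b"binlog entry")
surviving_copy = dev.failover()
print(surviving_copy == dev.local_disk)  # True: replicas are identical
```

This is what “conceptually similar to RAID 1” means in practice: the mirror just happens to live on another server, reached over the network.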

There are a few important things DRBD will not help with. If there is corruption in the filesystem, DRBD will happily replicate the corruption between nodes. This is because DRBD has no knowledge of what is happening farther up the IO stack, and therefore has no way of detecting such corruption. Further, DRBD cannot provide instant failover between nodes. Failover is fast, but it is not on the same level as MySQL Cluster. If you are using tools like Heartbeat and Mon, or Pacemaker, these systems must first detect that a failure has occurred, then add the IP address, take over the primary DRBD role, mount the filesystem, and finally start the service. These things take time; not a lot, but it might be noticeable, depending on the sensitivity of your environment.
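The sequence a cluster manager has to run through can be written out as ordered steps; each one adds latency, which is why failover is fast but never instantaneous. The step names below are illustrative, not Pacemaker’s or Heartbeat’s actual resource names.

```python
# Illustrative failover sequence for a DRBD-backed MySQL service.
# Each step is what a cluster manager (Heartbeat, Pacemaker, etc.)
# performs in order; the names are made up for this sketch.

def failover_sequence():
    steps = [
        "detect primary failure",       # a heartbeat timeout must expire first
        "take over service IP",         # move the floating IP to the standby
        "promote DRBD to primary",      # the standby becomes the writable node
        "mount replicated filesystem",  # the DRBD-backed volume
        "start MySQL",                  # service resumes on the standby
    ]
    for step in steps:
        yield step

for step in failover_sequence():
    print(step)
```

Failure detection always comes first, and it alone usually costs a few heartbeat intervals; the remaining steps are quick but strictly sequential.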

Kyle Khultman, a senior engineer at Logicworks, had this to say:

While many people may want to incorporate some of the features and protection of Heartbeat, I like to use either ucarp or keepalived. Both of these really just bring the VRRP protocol to Linux, and ucarp is a port of BSD’s CARP utility, for those familiar with that. I do disagree with you about failover time, though. We run clusters here that have sub-5-second failover times. This does take some extreme engineering precision to accomplish, but it is possible.


Kindness can be the hardest word of all

Yesterday, August 24th, 2010, the Wall Street Journal published an article about good business and kindness.  It was written by Tom Peters, author of The Little BIG Things.

Quoted from the article: “Good business is built on great people, decency, thoughtfulness and attentive listening.”

My mother sent me this article yesterday, and at first glance the pairing of the words kindness and business caught my interest.  In the past two years we have seen many changes in the economy: talk of unemployment, financial troubles with creditors and mortgages, and a much more conservative spending market.  We have all become aware of, or been impacted by, this.

With these new changes, the business decisions I make and the financial transactions I engage in are now decided not by the biggest name but by the kindness of the company I am exchanging with, and by whether I feel they understand my needs and are able to work with me.
My opinion is that the business-to-customer relation has changed and evolved into a shared mission; that is a phrase we have spoken about much in recent months at my firm, Logicworks.

The customer is no longer looking for just another vendor, and to be honest, they want fewer vendors.  They want to work with companies that have a shared mission, understand their business and operate as an extension of their team, helping both parties grow.  The use of social media to engage with clients and reach that personal connection has also helped pave this new transformation; I for one love to engage with vendors, colleagues and other associates on Twitter.

In the article Tom Peters writes that in 1903 King Edward VII journeyed to Paris, where crowds lining the streets on his arrival made clear that he was not welcome.  But the king charmed the French: everywhere he made gracious and tactful speeches about his friendship and admiration for the French, their glorious traditions, their beautiful city and his sincere pleasure in visiting Paris, all spoken in perfect French.  Less than a year later the Entente Cordiale was signed between France and Britain, and the history of the world was reshaped.

In the 19th century, Henry Clay said, “Courtesies of a small and trivial character are the ones which strike deepest in the grateful and appreciating heart.”  The runner-up comes from David Ogilvy, the father of modern advertising: “We do not take people to the elevator; we take them down the street.”

Tom Peters writes that if good business is built on great people and superb relationships, and it is, in 2010 as it was in 1910 and doubtless in 1710 and 1870, then it is built on a bedrock of decency and thoughtfulness.  Indeed, it is built on Clay’s courtesies and Ogilvy’s willingness to escort clients to their car at street level in Manhattan.  An old theme of Mr. Peters is the six-word phrase “Hard is soft. Soft is hard.”  Looking at the problems besetting US corporations circa 1980, he believed their advisers had things backwards: in the end, it was the supposedly hard numbers, as we have seen of late, and the plans that are so often flights of fantasy, that were soft.  The true “hard stuff” was what business schools and their ilk undervalued as soft: people issues, character and the quality of relationships inside and beyond the organization’s walls.  That thinking leads to the softest word of all, and the word with perhaps the most lasting impact on dealings: kindness.

The novelist Henry James said, “Three things in human life are important.  The first is to be kind, the second is to be kind and the third is to be kind.”  In the tradition of his “hard” engineering background, Mr. Peters put forth an equation labeled “all you need to know” on a PowerPoint slide; it read K = R = P (Kindness = Repeat business = Profit).  As to the “R” and “P”, the evidence is clear: profitability, whether at the corner shop or a global company, is directly related to repeat business.

As to the kindness connection, there is the evidence of King Edward’s magical 96 hours, Benjamin Franklin, and a host of others from George Washington to Nelson Mandela.  The power is the indubitable link between small courtesies and earth-shattering events.

Mr. Peters concludes: if people and relationships are the sine qua non of enterprise success, and I flatly assert that they are, then decency, thoughtfulness and the likes of attentive listening should know no peers in the management canon.  I will stake my professional reputation on it: “soft” is indeed “hard”, and Kindness = Repeat business = Profit.

Then. Now. Tomorrow.

VMware 4.1 Features and Benefits

VMware claims that memory use for VM instances is more efficient in 4.1 compared to 4.0. Memory can also be compressed so that VMs don’t go to disk for virtual memory as often, which improves access speed.

VMware’s vCenter also tracks more storage statistics.


VMware 4.1 contains a sorely needed feature: the ability to use vMotion to move more than one VM at a time from host to host. vSphere 4.1 allows several VMs to move concurrently, but with a small catch.

The catch is that the source and target machines still need to be similar to each other in terms of processor type.

With a 10 Gigabit Ethernet switch, enterprise customers can expect to be able to move eight machines at once across a VMware cluster.

These improvements address the issue of how to quickly get production virtual machines off a failing hardware platform. When hardware sends alarms that problems are occurring, maintaining production requires rapidly moving a dense set of operating system instances to another platform, and eight VMs at a time seems to be a good number.

vSphere 4.1’s core management application, vCenter, now runs only on 64-bit hosts. Upgrades on existing 32-bit platforms, like our Windows Server 2003 R2 host, aren’t allowed, so some administrators will have to move vCenter to a 64-bit host.


HP & Dell: Who Will Own 3PAR? A Bidding War

Amazon RDS: Human or Droid?

I am not sure how much support this actually entails; calling it a managed database service is a stretch, since, with the exception of backups, the most delicate DBA operations, which are primarily performance work and tuning, are not included.

Overall this is a very compelling product as an alternative to having an actual human being manage the database, where as a DBA your primary role is to keep it running and make sure it is backed up.

The only question is what happens if I like that human being, and whom can I call who really knows my database once it is up and running?  The personal white-glove service is always nice.

Going back and forth, I still see this as a next step in automated enterprise solutions for cloud services.


Amazon Relational Database Service (Amazon RDS) (beta)

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business.

Amazon RDS gives you access to the full capabilities of a familiar MySQL database. This means the code, applications, and tools you already use today with your existing MySQL databases work seamlessly with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period. You benefit from the flexibility of being able to scale the compute resources or storage capacity associated with your relational database instance via a single API call. In addition, Amazon RDS allows you to easily deploy your database instance across multiple Availability Zones to achieve enhanced availability and reliability for critical production deployments. As with all Amazon Web Services, there are no up-front investments required, and you pay only for the resources you use.

  • Automated Backups – Turned on by default, the automated backup feature of Amazon RDS enables point-in-time recovery for your DB Instance. Amazon RDS will back up your database and transaction logs and store both for a user-specified retention period. This allows you to restore your DB Instance to any second during your retention period, up to the last five minutes. Your automatic backup retention period can be configured for up to eight days.

  • DB Snapshots – DB Snapshots are user-initiated backups of your DB Instance. These full database backups will be stored by Amazon RDS until you explicitly delete them. You can create a new DB Instance from a DB Snapshot whenever you desire.

  • Multi-AZ Deployments – A deployment option for your DB Instance that provides enhanced availability and data durability by automatically replicating database updates between multiple Availability Zones. Availability Zones are physically separate locations with independent infrastructure engineered to be insulated from failure in other Availability Zones. When you create or modify your DB Instance to run as a Multi-AZ deployment, Amazon RDS will automatically provision and maintain a synchronous “standby” replica in a different Availability Zone. In the event of planned database maintenance or unplanned service disruption, Amazon RDS will automatically fail over to the up-to-date standby so that database operations can resume quickly without administrative intervention. The increased availability and fault tolerance offered by Multi-AZ deployments are well suited to critical production environments. DB Instances can be converted to and from Multi-AZ deployments with a single API call or a few clicks of the AWS Management Console. Click here for DB Instance pricing when running as a Multi-AZ deployment. To learn more about Multi-AZ deployments, visit our FAQs.
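The point-in-time restore window described under Automated Backups is easy to express as a bit of arithmetic: you can restore back as far as the retention period allows, and forward to roughly five minutes ago. A quick Python sketch (the `restore_window` helper is hypothetical, not part of the RDS API; the five-minute lag and retention bounds come from the feature description above):

```python
from datetime import datetime, timedelta

# Hypothetical helper, not an RDS API call: computes the point-in-time
# restore window implied by the Automated Backups description above.

def restore_window(now, retention_days):
    earliest = now - timedelta(days=retention_days)  # retention limit
    latest = now - timedelta(minutes=5)              # logs lag ~5 minutes
    return earliest, latest

now = datetime(2010, 8, 25, 12, 0, 0)
earliest, latest = restore_window(now, retention_days=8)
print(earliest)  # 2010-08-17 12:00:00
print(latest)    # 2010-08-25 11:55:00
```

With the maximum eight-day retention, any second in that roughly eight-day span is a valid restore target.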