Centralized vs Distributed Computing

One of the biggest pain points for companies managing multiple remote sites is deciding whether to centralize all computing resources or manage their infrastructure in a distributed fashion. There are arguments to be made for both models, as well as several real-world examples that can help you make a decision.

Centralized Computing

In a purely Centralized model, all computing resources reside at the primary Datacenter. This includes Domain Authentication Services, Email, Applications, and Shared Files. Remote Sites would access these resources using Thin Client devices (as opposed to PCs) and bandwidth-friendly enablers such as Citrix XenApp, Microsoft Terminal Services, or VMware Virtual Desktop technologies (there are pros and cons to each of these, but that is a topic for a different day).

The benefits of a Centralized model are lower capital and operational costs (minimal hardware at each site), stronger security (all data stored in a secured datacenter), less administrative overhead (fewer staff needed since all equipment is in one location), simpler backups, and greater control over potential risk areas such as Internet access.

The downside to a Centralized model is that each remote site’s WAN connection becomes a single point of failure. Whether it is a point-to-point, MPLS, or VPN connection, if that link goes down, the site has zero access to anything at the Datacenter. A backup WAN link and failover capability are a must if you choose a highly Centralized computing model.
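To make the failover requirement concrete, here is a minimal sketch of the decision logic behind WAN failover, with a failure threshold to avoid flapping. All names here are illustrative; in production this is normally handled on the routers themselves (e.g. VRRP/HSRP, dynamic routing, or SD-WAN appliances), not in a script.

```python
# Illustrative sketch only: fail over to a backup WAN link after several
# consecutive failed health checks on the primary, and fail back when
# the primary recovers.

class FailoverMonitor:
    def __init__(self, fail_threshold: int = 3):
        self.fail_threshold = fail_threshold  # consecutive failures before failing over
        self.failures = 0
        self.active = "primary"

    def record_check(self, primary_up: bool) -> str:
        """Feed in one health-check result (ping/TCP probe) for the primary link."""
        if primary_up:
            self.failures = 0
            self.active = "primary"          # fail back once the primary recovers
        else:
            self.failures += 1
            if self.failures >= self.fail_threshold:
                self.active = "backup"       # e.g. VPN over commodity broadband
        return self.active
```

With a threshold of three, a single dropped probe does not trigger failover; only a sustained outage moves traffic to the backup link.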

Distributed Computing

In a purely Distributed model, each site is self-sustained for the most part. While some connectivity to the primary datacenter is required, the remote site would host its own Email Server, manage its own backups, control its own Internet access, and host its own Shared Files. Application access may still rely on HQ, although many applications support this type of distributed model.

The benefit of a Distributed model is that each site can ‘survive’ on its own; there is no single point of failure in this regard. Also, assuming the hardware at each site is stored in a secure Server Room and not with the office supplies (a big assumption in some cases, I know), this model can also facilitate Business Continuity by designating Sites as contingency Sites for one another.

The downside to this approach, obviously, is cost. Not only does it require additional hardware and software, but you will almost certainly need at least a partial onsite presence at each location, regardless of how many remote management components are in place. Another consideration is the backup architecture: unless each site has a healthy amount of bandwidth, at least the initial backup would have to be processed locally before being shipped or replicated offsite.
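The "back up locally, then replicate offsite" pattern works because, after the initial full copy, only changed data needs to cross the WAN. The toy sketch below illustrates that idea with fixed-size block hashing; real tools (rsync-style delta transfer, SAN replication) use far more sophisticated chunking, and every name here is illustrative rather than any vendor's actual API.

```python
# Illustrative sketch: compare a local backup against the offsite copy
# block by block, and ship only the blocks that differ.

import hashlib

BLOCK = 4  # tiny block size so the example is easy to follow

def blocks(data: bytes):
    """Split data into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def changed_blocks(local: bytes, offsite: bytes):
    """Return (index, block) pairs in the local backup that differ offsite."""
    off = blocks(offsite)
    out = []
    for i, b in enumerate(blocks(local)):
        if i >= len(off) or hashlib.sha256(b).digest() != hashlib.sha256(off[i]).digest():
            out.append((i, b))
    return out

# Only the one changed block would cross the WAN:
delta = changed_blocks(b"ABCDEFGHIJKL", b"ABCDEFGHXXXX")
```

Here only the third block differs, so the replication job moves 4 bytes instead of 12; at real file-server scale, that difference is what makes offsite replication over modest WAN links feasible.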


Here are a few real-world examples, names removed to protect the innocent, of course:

• A prominent law firm with offices nationwide: Each office supports 50-200 users. The overriding concern here was to maximize uptime for the lawyers, so they went with a Distributed model. Every time a new site opens, it is shipped two Servers and a small SAN, which are configured with VMware software. These two servers and the SAN typically host seven or eight Windows operating systems providing all necessary Services and Applications. For Business Continuity, SAN data is replicated back to their primary Datacenter.

• A regional Dental Management company with small Dentist offices statewide: Each office supports 5-20 users. The overriding concern here was the lack of available bandwidth at many sites. Believe it or not, DSL via the phone or cable company was still a few years away in some areas! This company opted for a Centralized model. The majority of users connected back to HQ using Thin Clients on the front end with Citrix on the back end. Also, their primary Line-of-Business application was UNIX based with a DOS-based front end that required very little bandwidth.

• A manufacturing company with worldwide operations: Each office supports anywhere from 5-200 users. Bandwidth reliability varied wildly, with some sites relying on Satellite links for Internet access. They went with a more Hybrid approach: although Email and many IT Services were managed centrally, most of the larger sites had their own Domain Controllers, File Servers, and local Line-of-Business servers as well.

Here are some technologies that can facilitate either model:

• VMware – As noted in the example above, but also with their Virtual Desktop technology.

• Citrix XenApp (formerly Presentation Server) – A long-time leader in the Application Delivery market.

• Microsoft Terminal Services – Often referred to as ‘Citrix-light’, it has made great improvements in Windows Server 2008.

• Riverbed – The market leader in WAN optimization, with several models for small and midsize offices. For an added ‘cool’ factor, some of their devices come embedded with the free version of VMware, allowing you to deploy up to five Virtual Machines running inside the Riverbed appliance!

• Hosted Backups – There are many vendors in this space, and using them to back up data at your remote sites may eliminate this hurdle for Distributed computing.

• ExaGrid, Data Domain – These two compete in the data de-duplication market. De-duplication is a fancy way of saying they let you back up your data using a fraction of the original disk space. These devices also come with built-in replication that sends the ‘deduped’ backup to a device back at the primary datacenter.
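The core idea behind de-duplication can be shown in a few lines: identical blocks are stored once and referenced by their hash, so repetitive data costs almost nothing on disk. This is only a toy illustration of the concept; real appliances layer variable-size chunking, compression, and replication on top of it.

```python
# Illustrative sketch of block-level de-duplication: store each unique
# block once (keyed by its hash) plus a "recipe" of hashes that can
# rebuild the original data.

import hashlib

def dedupe(data: bytes, block_size: int = 8):
    store = {}   # hash -> block contents, stored only once
    recipe = []  # ordered list of hashes needed to rebuild the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)   # duplicate blocks add nothing here
        recipe.append(h)
    return store, recipe

data = b"AAAAAAAA" * 10              # ten identical 8-byte blocks
store, recipe = dedupe(data)
# One unique block stored for ten logical blocks: a 10:1 reduction.
rebuilt = b"".join(store[h] for h in recipe)
```

Backup data is full of repetition (the same OS files, nightly fulls of mostly unchanged data), which is why these devices routinely report large reduction ratios, and why replicating only the deduped blocks offsite is so bandwidth-friendly.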

In the end, there is no ‘catch-all’ answer. Business needs and cost should be analyzed to determine the right solution for your organization; the cost of downtime at a remote site, which is often overlooked, should be considered as well.

By: Jorge Azcuy
Director of Service Delivery

Posted on November 20th, 2009. Filed under Technical Education.