Please ensmarten me about LAN/WAN/Co-Lo
October 12, 2007 12:40 PM

3 LANs are getting hard to manage. Please ensmarten me about WANs and colocation.

Right now we've got 3 physical sites for the same firm/domain, each with its own MS 2003 server and dataset. This separated setup has worked well to keep the others up and running if one has downtime/problems.

But now we're needing more flexibility / integration across the sites. The VPN we've currently got has an upload bottleneck at about 500 kbps, which is too slow for file transfers, much less application services, between the sites.

So we've set up Terminal Servers at each site, so that if staff at Site A need to use applications or manipulate data that's on Site B's server, they launch Remote Desktop and work on it "at" Site B via RDP.

This is working for now, but it's getting hard to manage all those boxes, back them up, etc.

So I've been thinking about moving from 3 servers with three datasets, one for each location, to one big fat server which consolidates all the data. But I don't know precisely how to locate that server, then serve up that data with speed to the various sites.

If it was just a webserver, that would be one thing - but this is going to be an applications server. And I'm just a recently unfrozen caveman with no knowledge of your "high tech".

So I've got Questions:

1) Say I host the consolidated server at Site A - would Sites B and C basically have to "remote" in to get anything done? (Let's say there was a site-to-site T1 in place.)

2) If that's the case, am I better off just going for a colocator and having everyone working "remotely", with the server serving up desktops for everyone? Can that be done with just a T1 from each office to the colo? (40 users total).

3) If that's the case, then all the workstations basically become thin clients, right? So I can justify the colo expense by saying that we won't have to upgrade every workstation's hardware/software for Vista, right?

4) If all the processing, etc was handled by a fat server in a colo, could Users still have dual monitors? How would Printing work?

5) A different way altogether - keeping the 3 existing servers:
Let's say that the total dataset is 100 GB, and there's maybe 3 GB of changes made each day.

If there was a point-to-point-to-point T1 connection between the offices, would it be possible to Consolidate the data once, then Clone it onto the other two servers, then keep Replicating any changes?

Such that each site would keep working on its own server on its own LAN, but something in the background would keep each server a mirror of the others? (User adds a document to a folder hosted on Server A. Auto-magic happens and that new file is added to the servers at Sites B and C.) This would keep users from dealing with VPN/remote hassles, and also be good for continuity, as if one server goes down, the other two are identical copies.

This 5) scenario would be ideal, but I don't know if it only exists on the shelf between my personal jetpack and my Mr. Fusion.
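A quick back-of-the-envelope sketch of the T1 arithmetic behind questions 2 and 5 — assuming a T1 carries 1.544 Mbps and a 2003-era RDP session averages roughly 30 kbps per active user (neither figure is from the thread, so treat both as rough assumptions):

```python
# Rough T1 capacity check for the thin-client and replication scenarios.
# Assumed figures: T1 = 1.544 Mbps; ~30 kbps per average RDP session.

T1_KBPS = 1544          # assumed T1 line rate, in kilobits per second
RDP_KBPS_PER_USER = 30  # assumed average bandwidth per RDP session

# Question 2: can 40 thin-client users share a T1 to the colo?
users = 40
rdp_demand_kbps = users * RDP_KBPS_PER_USER
print(f"RDP demand: {rdp_demand_kbps} kbps vs {T1_KBPS} kbps per T1")
# 1200 kbps vs 1544 kbps: it fits, but leaves little headroom for
# printing, file copies, or bursty screen updates.

# Question 5: can a T1 replicate ~3 GB of daily changes overnight?
daily_change_gb = 3
kilobits = daily_change_gb * 8 * 1024 * 1024  # GB -> kilobits
seconds = kilobits / T1_KBPS
print(f"3 GB over a T1 takes about {seconds / 3600:.1f} hours")
# ~4.5 hours at full line rate, so an overnight replication window
# is plausible if the changes really are only ~3 GB/day.
```

The takeaway under these assumptions: one T1 per office is marginal for 40 simultaneous RDP users, but comfortably enough to ship a 3 GB daily delta overnight.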

I know this is complex for an AskMeFi, but I need to figure this out while also working overtime and going back to school to learn this kind of stuff. There's lots of knowledge in this hive's mind, so I'm trying to use some of it. If you're in the Bay Area and do this for a living, send me an email.
posted by bartleby to Computers & Internet (3 answers total)
Colocation would be ideal - you'll get burstable bandwidth. Why not 3 application servers with one data server, all at the same colo?
posted by four panels at 1:36 PM on October 12, 2007

What kind of availability are you looking for?

Do you have the three servers at different locations for redundancy, or is it due to pricing and load?

As you said, this is a pretty broad question. What is your user base? Is it all RDP users accessing the application?

Is the application such that it can be hardware load balanced across two decent-sized servers?

Is there an underlying database, or is it flat files? If there's a database, is it SQL Server?

What do you use to back this up currently?

Off the top of my head, unless you're shooting for >99.9% SLAs, there's very little reason not to have everything in a single site with network, power, hardware, and application redundancy.

Colocation, a couple of decent-sized servers, maybe a hardware load balancer fronting the servers, and shared storage coupled with Data Domain (nearline backup) or tape backup.

Really it's all about whether your application supports this.
posted by iamabot at 1:58 PM on October 12, 2007

Yeah, we need more details to really answer appropriately, but my first instinct is that a colocated/dedicated server is going to save you an enormous amount of trouble. Depending on your bandwidth/processing tradeoffs, you might be able to get away with a relatively stingy machine on a fat pipe, which would make the actual colocation costs pretty small (the primary contested resource at any given colo is cooling, and hence power use).

I priced out low-end colocation a while ago, and a weak-ish 1U on a 1 Mbps 99%ile pipe can be had for ~$100/month. Obviously that's the very low end of the market, but it should give you a price range. Even at several times that, it's likely to be cheaper than you schlepping all over the place when things break.

Being a unix/linux guy I haven't got a clue about the software side, but theoretically, if your data changes are atomic enough (i.e. Site A's changes can never collide with Site B's), you could have the three-way sync with something like Unison. There are definitely VPNs that will happily fill your pipe, but they'll empty your wallet simultaneously.
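For the curious, here's a minimal sketch of what a Unison profile for that kind of pairwise sync might look like. The hostnames and paths are hypothetical, you'd need one profile per site pair, and conflict handling deserves more thought than `prefer = newer` in production:

```
# ~/.unison/siteB.prf -- hypothetical profile mirroring Site A's share
# with Site B over SSH; run from Site A as `unison siteB`.
root = /srv/firmdata
root = ssh://fileserver-b.example.com//srv/firmdata
batch = true          # run non-interactively, no prompts
prefer = newer        # on conflict, keep the most recently changed copy
times = true          # preserve file modification times
```

Scheduled from cron at each site, something like this could approximate the "auto-magic" three-way mirror, as long as the same file is rarely edited at two sites in the same window.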
posted by Skorgu at 7:30 AM on October 13, 2007
