Scaling the small network
January 25, 2007 5:46 PM

How to scale a small network to support a growing workforce?

I have been working with a company that has 150 employees, 100 workstations, and 1 server (Windows 2003 + Exchange). I am slowly working on making the network more reliable with redundant/complementary servers, and was wondering if there are any guides on this?

Specifically I am having trouble with:
- How to effectively cluster Exchange (do I need to bite the bullet and go Enterprise?)
- How to effectively build a redundant file server
- DHCP clustering

Do I need to start looking at moving to Windows 2003 Enterprise?
posted by SirStan to Computers & Internet (9 answers total) 1 user marked this as a favorite
 
Have you considered free software? This sounds like something a couple of linux boxes could do.
posted by stereo at 5:58 PM on January 25, 2007


Response by poster: I don't have the budget to train 150 users on OpenOffice; there is no good replacement for Outlook. Our medical software requires Windows. Our BlackBerrys wouldn't work on Linux, etc., etc.

I run Linux on ~75% of my desktops (terminal clients) and for some core services such as monitoring and reporting.

Looking for Windows-based options.
posted by SirStan at 6:10 PM on January 25, 2007


Have you considered hosted Exchange? You strike me as being right at the size where a single Exchange server isn't enough, but the cost of procuring and managing two is prohibitive and both would be underutilized. Hosted Exchange would let you just not worry about maintaining mail at all. It'd probably be more cost-effective than going Enterprise, although I'm not positive about that.

As for a redundant file server: Buy one. You're small enough to not need high-availability: if there's a catastrophic failure all that matters is that you can get it up and running again relatively quickly, not instantly. Go to your hardware vendors of choice and see what they've got for small-office NAS for your user- and data-size. RAID will keep you going on individual disk failures, and regular tape backups with offsite storage will keep you going on larger failures. At that scale they're not quite appliances -- they're usually based on low-end servers that happen to take a lot of disks -- but you can treat them like appliances.

You really shouldn't need DHCP clustering for 100 clients, but it couldn't hurt to have a second Active Directory server to guard against catastrophic failure there, and that second AD server could handle DHCP along with the primary. Even then you don't need to cluster -- just split up the scope between the two. Clients will only take one address when they're offered two, and they'll load-balance themselves. If one fails, bring up its scope on the other manually.
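
Purely as an illustration (the addresses and the reserved range are made up), splitting one /24 pool between two servers works out like this:

    # A sketch of splitting one /24 DHCP pool between two servers so each
    # hands out a non-overlapping range. Addresses here are hypothetical.
    import ipaddress

    network = ipaddress.ip_network("192.168.0.0/24")
    pool = list(network.hosts())[20:]  # reserve .1-.20 for servers/printers/router

    midpoint = len(pool) // 2
    print("Server A hands out:", pool[0], "-", pool[midpoint - 1])
    print("Server B hands out:", pool[midpoint], "-", pool[-1])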
posted by mendel at 7:18 PM on January 25, 2007


(I should add that while I know my way around Active Directory and Windows NAS OK, I'm not an Exchange admin; I'm at a site where we're trialing Exchange in place of Notes, and we've gone hosted for that.)
posted by mendel at 7:19 PM on January 25, 2007


Wow, 150 clients is too many for Exchange? That's crazy. I didn't realize the limits were that low. FWIW, we've got over 400 people, and I think we only have one Exchange server, although it is Enterprise and Active Directory is hosted on another machine.
posted by SpecialK at 8:11 PM on January 25, 2007


My first suggestion is to contact a consultant in your area who has experience designing networks. It may save you some headaches in the end. Of course, if you know exactly what you need, this may not be necessary.

My first thought, based on your specs, would be to get another 2003 server to run Active Directory on. This way users will still have a way to log in to the network if the primary server goes down.

I would think that you could do without additional DHCP servers for such a small network, for the reasons mendel mentioned.

I don't have experience with clustering. My one bit of advice, and you may already have this covered, is to have a well-tested and planned backup and disaster recovery procedure. Offsite backups, either to tape or to an external hard drive, are highly recommended.
posted by nerosfiddle at 8:13 PM on January 25, 2007


Exchange 2007 makes it somewhat easier to use multiple Exchange servers in conjunction with each other, but clustering on 2003 is not too difficult, especially at a single site. It also really depends on how you're using your Exchange server. 150 users is not that many for one Exchange server, so unless you're looking at redundancy, you're probably fine. The main consideration is storage space, and there are many capacity planning docs for Exchange. Exchange Enterprise has no limit on the DB size, while I think Standard is limited to 16GB.
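
To put rough numbers on the Standard cap (assuming the 16GB figure; the overhead estimate is a guess):

    # Back-of-the-envelope quota math for 150 mailboxes under a 16GB store cap.
    # The 20% overhead figure is an assumption, not a measured number.
    db_cap_gb = 16
    mailboxes = 150
    overhead = 0.20  # deleted-item retention, whitespace, etc.

    per_mailbox_mb = db_cap_gb * (1 - overhead) * 1024 / mailboxes
    print("Average quota per mailbox: about %.0f MB" % per_mailbox_mb)
    # -> about 87 MB each, which is why the capacity planning docs matter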

You might check out the Load Simulator for Exchange 2003.

Do you have external MX servers, so that your Exchange server doesn't face the outside world, and that can handle the more intensive anti-virus/spam operations? We use Postfix/ClamAV/Dspam for this, and it reduces the load on Exchange by a huge margin.
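
A quick way to sanity-check that arrangement from outside the firewall (the hostnames are placeholders for yours):

    # Verify that the public MX answers and the Exchange box does not.
    # Hostnames below are placeholders.
    import smtplib
    import socket

    def smtp_greeting(host, port=25, timeout=10):
        try:
            server = smtplib.SMTP(host, port, timeout=timeout)
            code, banner = server.ehlo()
            server.quit()
            return "%s %s" % (code, banner.decode(errors="replace"))
        except (socket.error, smtplib.SMTPException) as exc:
            return "unreachable (%s)" % exc

    print("public MX:", smtp_greeting("mx.example.com"))
    print("exchange :", smtp_greeting("exchange.example.com"))  # should be unreachable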

For your network, make sure that the backplane on your switches has enough bandwidth to support your users. Higher-end Ciscos (4000-series and above), Juniper Networks gear, and many others will have the backplane to support multiple Gigabit switches, so you don't have bottlenecks that cause network degradation.

As for a redundant file server, it depends on what you're trying to do. Make sure you are using a fairly nice RAID controller with 15K SCSI drives. Choose either RAID 5 or RAID 10/01, depending on budget and read/write performance needs. Use of a SAN can also improve redundancy.
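
The capacity side of that choice, in rough numbers (disk count and size are hypothetical):

    # Usable capacity for the same six hypothetical 146GB disks.
    disks, disk_gb = 6, 146

    raid5_usable = (disks - 1) * disk_gb    # one disk's worth of parity
    raid10_usable = (disks // 2) * disk_gb  # everything mirrored

    print("RAID 5 :", raid5_usable, "GB usable; cheaper per GB, slower random writes")
    print("RAID 10:", raid10_usable, "GB usable; better random-write performance")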

If you want to go with cutting-edge stuff, get yourself 2 blade servers and a copy of VMware Infrastructure. You can run a virtual server on each blade server and swap between the two as necessary (yes, it duplicates the OS live at runtime). You can shut down one virtual server, and the other one picks up as if nothing ever happened.

DHCP clustering? What's the point? DHCP is a fairly simple service, easily run on one of your domain controllers. Most likely you have two domain controllers. If the DHCP server dies, just fail over to the BDC.
posted by stovenator at 8:27 PM on January 25, 2007


Redundant file servers are expensive, because you need to deal with shared storage. Redundant storage is much easier -- RAID-5 with a hot spare.

NOTE: RAID-X does not protect against users, only hardware. You need backup as well as RAID.

AD: You *need* two domain controllers, because everything goes to hell if you lose all of your directories.

DNS: Under Active Directory, DNS is *critical*, far more so than DHCP. You can work around a down DHCP server by punching in IP addresses, but trying to hand-create a HOSTS file for AD is a nightmare beyond belief. Have at least two (I run three on the inside -- and run BIND, because I don't trust MSDNS.) You can run DNS on the AD controllers, but you need more than one DNS server.
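
If you want to spot-check those records yourself, these are the SRV lookups AD clients live on (the domain name is a placeholder; this uses the third-party dnspython package):

    # Check the SRV records AD clients use to find domain controllers.
    import dns.resolver  # third-party: dnspython

    for name in ("_ldap._tcp.example.local", "_kerberos._tcp.example.local"):
        try:
            for rr in dns.resolver.resolve(name, "SRV"):
                print(name, "->", rr.target, "port", rr.port)
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            print(name, "-> MISSING (AD logins will break)")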

Exchange needs its own box, and needs to be built as if it were a database server. That is, redundant, fast disk, with logs and data on separate spindles.

I think blades are the least right answer here. Blades are for when you need processor density, not storage density, and the initial cost of blades is pretty staggering. Blades aren't really useful until you have external storage or you have to have very high CPU density (limited rack space). Blades are costly in price, energy, and heat.

If you had the money to play with blades, you'd be *far* better off playing with a SAN.

I would budget for three boxes. All spec 2xCPU (or 4, if dual cores are cheap), 2GB RAM, and SCSI or SAS for internal disk. Since you're just starting to scale, pick PCI-E. (If you had a large infrastructure, PCI-X compatibility would be ideal.)

One largish one with, ideally, six disks, set up as three mirrors (OS, logs, data). This becomes your Exchange box.

Box #2 is a simple box, with a mirror disk set. This becomes your AD server, and gets the Global Catalog.

Box #3 is a larger box, with 4-6 disks. This gets built into one RAID-5, with hot spare. This becomes your file server.
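
With hypothetical 146GB drives, those three layouts work out roughly to:

    # Rough usable capacity for the layouts above; 146GB drives are an assumption.
    disk_gb = 146

    exchange_box = 3 * disk_gb          # three mirrors: OS, logs, data volumes
    ad_box = 1 * disk_gb                # one mirrored pair
    file_box = (6 - 1 - 1) * disk_gb    # RAID-5: one parity disk, one hot spare idle

    print("Exchange box:", exchange_box, "GB across three volumes")
    print("AD box      :", ad_box, "GB")
    print("File server :", file_box, "GB in one RAID-5 volume")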

Migration.

1) Build out #2 as an AD controller. Migrate AD to it.

2) Build out #3 as the file server. Migrate files and users to it.

3) Build out #1 as the exchange server. Migrate the data to that.

4) Rebuild the original box as the secondary domain controller.

Budget? Off the cuff, I'm going to say $20K, but that depends on how much disk. To keep storage inside these boxes, you need larger frames (which costs more). Count on at least a 2U for the Exchange and file servers. The AD controller should be a 1U.

You should budget a couple of spare drives (cold spares.) Ideally, you buy the same drives everywhere, which means some mirrors will be stupid large. If you can't, buy two sizes, and stock a couple of spares. They replace a failed drive *and* allow you to add storage in a rush. Provided, of course, you have slots in the frame. This is a real issue in the world, but at your size and budget, there really isn't a better rational answer.

BTW, if the company is starting to grow, this is the time for a rack, and the space to put the rack.

Clustering: Failover clusters are nice, but you double the capital costs, plus you add the cost of external storage. I'm looking at this right now with a large box (8 CPUs, 32GB RAM, a smidge of disk, and two 4Gb HBAs to a SAN for real storage; I already have the SAN).

However, your server back end is too small to justify clusters -- you need those boxes working now, not waiting for failure, and to cluster Exchange or your file server, you'd need two boxes *plus* external storage. This will triple the cost.

Cheaper redundancy can be had by buying four identical boxes, one of them without drives. If a machine fails for any reason other than disk, you move the disks to that machine, power up, configure the array, and boot.

A spare frame isn't cheap, but it's cheaper than a built out frame and external disk.

Your next steps after this project is finished:

1) Front-end server for Exchange. This can be another Exchange server, Sendmail w/SpamAssassin and ClamAV, or I happen to like this little product called Xwall, which stands as the one Windows mail server product that I recommend without reservation. This sits in front of your mail server (ideally, in a DMZ network), takes inbound mail, filters it, and moves it to the Exchange server, and vice versa. That takes the spam and virus checking load off those servers. This box needs some CPU and a little disk.

Down the road, you'll have an Exchange Front End, for Active Sync/Webmail/etc., *and* a spam filter box. That's future planning.

2) Don't have a DMZ? Time for one. Can't do it with your current firewall? Time for a new one. Option #2 is to put facing servers in a colocation house, and use a VPN to bring data back and forth. Firewall can't do VPNs? Time for a new one.

3) Switches. I like Juniper and Cisco, but at the low to mid tier, they're too expensive. For low to midrange, think HP. The rule I use: if you think you need routing features in a switch, you need Cisco or Juniper. If you just need a switch, the HPs cost about half as much per port, and I've had zero failures with them in many years.

4) Cable plant. Is everything Cat5E or better? You can't run Gigabit Ethernet over anything that isn't. Know before you dump $3K+ on that gig switch.

5) IP space: 100 workstations + 1 server is 101 IP addresses. Add a few more for network, broadcast, and router ports. Are you sitting on a /24 (say, 192.168.0.0/24)? That's 254 usable addresses, 253 once the router takes one.

If that's the case, and you're growing, now is the time to start planning the Great IP Space Migration. Renumbering is easy with DHCP, but you'll want a plan for things that can't be DHCP, so that things go smoothly. You'll also need to find everything that has hardcoded addresses, fix them, and smack the developer or admin who didn't use names.
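
The renumbering math itself is quick (the prefixes here are just examples):

    # How much headroom each candidate prefix buys.
    import ipaddress

    for prefix in ("192.168.0.0/24", "192.168.0.0/23", "10.0.0.0/22"):
        net = ipaddress.ip_network(prefix)
        print(prefix, "->", net.num_addresses - 2, "usable addresses")
    # /24 -> 254, /23 -> 510, /22 -> 1022: pick one with room to grow into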

In the distant future, esp. if databases become important, I cannot stress enough the importance of real storage systems, namely SANs. Being able to do things like add disk to a volume on the fly, or clone an entire volume and mount it on a test server, or snap a volume, install a patch, find out that it's broken, and roll the volume back to where you started, is *way* too useful.

Alas, the initial cost to play is too high here, but when you get to the point where you can say "it'll cost us about $20K" and nobody even blinks is when you can write up the business case to get them to buy into a $150K SAN.
posted by eriko at 5:50 AM on January 26, 2007 [1 favorite]


To clarify, I wasn't actually advocating blade servers at this point; I was noting that on the cutting edge (expensive), there are other options.

Re: SANs. The entry point is lowered if you decide to go SATA instead of SCSI, but I haven't spoken to any companies who have actually gone that route. I'd really be interested in what the performance & MTBF data looks like, as you can get into a 1TB SATA SAN for around $20K.
posted by stovenator at 1:54 AM on January 27, 2007

