Desktop Cluster
December 12, 2006 8:32 PM   Subscribe

How do I make one big, fast computer out of three smaller ones?

I guess I would like to build a tiny cluster: two computers plus a laptop that can dock into it as a controller. If that is a technical improbability, then just clustering the three computers to act as a single high-powered workstation would be totally rad. Any ideas?
posted by N8k99 to Computers & Internet (10 answers total) 2 users marked this as a favorite
 
This might be a good place to start.
posted by doublesix at 8:37 PM on December 12, 2006 [1 favorite]


It entirely depends on what you want to do with the computers. Some tasks might scale up to a cluster, some might not. What do you have in mind?

In any case, it isn't going to be really magically easy to do. Reality is nothing like the movies.
posted by xil at 8:38 PM on December 12, 2006


it is understood reality != hollywood.

I just want to explore the possibility of using three computers in parallel to run applications for various artistic endeavors: video and sound editing, animation development with Blender. I am aware that I can control multiple computers through Synergy and thus start one process on one computer, then switch to another machine while retaining the same physical input devices. Most likely that will be the path I travel. However, as doublesix suggested with the Beowulf article, sharing a /home directory and some trusted shell scripts between the three computers would satisfy the basic classification as a cluster.
posted by N8k99 at 9:00 PM on December 12, 2006


ClusterKnoppix would make your life easier - it's kind of like clustering for dummies. That said, this sounds like a bad idea unless you plan on leaving the laptop in place or power-cycling the whole thing every time you want to use it. Additionally, I'm sure 18 people will have echoed this by the time I finish writing, but clusters are just large space heaters without software designed to multi-thread. Blender would probably benefit in render-farm type tasks, but day-to-day 3D modeling work is definitely NOT something that can be helped with clustering. Such work specifically requires low-latency resources on the local bus, not across the subnet.
posted by datacenter refugee at 9:14 PM on December 12, 2006 [1 favorite]


Unless you have some specific scientific algorithm and dataset in mind that you know can be parallelized easily, forget it. Otherwise, you might as well just install VNC on each machine and then sit at one and task switch between displays.
posted by Rhomboid at 9:46 PM on December 12, 2006 [1 favorite]


A 2-and-1/2-node "cluster" isn't worth setting up, given the incrementally low cost of better hardware. Clusters don't really do SMP or even NUMA; they're orders of magnitude slower than smaller multiprocessor designs of better local capability, and are only truly justified by problems which rapidly exceed practical SMP or NUMA designs. If you "need" mainframe+ computational capabilities, that's one thing. If you want to burn some midnight oil learning about clustering, that's another. Minimum advice: make sure your interconnect fabric is at least gigabit Ethernet, with jumbo frame support. Otherwise you're just fooling around for the hell of it.
posted by paulsc at 9:49 PM on December 12, 2006


I'm a graduate student working in high performance computing, and I'm trying to dream up some interesting artistic work you would be able to do better with three computers hooked together than with one, but I'm having a hard time seeing it :(

Just to echo some of the technologies mentioned above that I'm familiar with, you definitely could use an NFS-mounted home directory, though you will want to use the hard drive on the local machine that you are working on for scratch.
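To make the shared-home idea concrete, here is a minimal sketch of an NFS-mounted /home on Linux. The hostnames (server, node1, node2) and paths are illustrative assumptions, not anything from the thread:

```shell
# On the machine that holds /home (the NFS server), export it to the
# other nodes. Example line for /etc/exports on the server:
#   /home  node1(rw,sync) node2(rw,sync)
sudo exportfs -ra        # re-read /etc/exports and apply the exports

# On each client node, mount the shared home directory:
sudo mount -t nfs server:/home /home

# Keep scratch files on the local disk, as suggested above,
# so working data doesn't cross the network:
export TMPDIR=/var/tmp   # local path, not NFS-mounted
```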

It's not too hard to set up ssh to launch jobs remotely; I'm not familiar enough with MOSIX-style operating systems to make a proper recommendation on them.
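Launching a job on another node over ssh can be sketched like this; the hostname and username are made-up examples, and this assumes the machines can already reach each other on the network:

```shell
# Generate a key once and copy it to each node so launches are passwordless:
ssh-keygen -t rsa
ssh-copy-id user@node1

# Fire off a long-running Blender render on node1 and detach from it
# (-b = run without the GUI, -a = render the full animation):
ssh user@node1 'nohup blender -b scene.blend -a > render.log 2>&1 &'
```

These commands only make sense run against real remote machines, so treat them as a setup fragment rather than something to paste blindly.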

As mentioned earlier, most programs aren't designed to run on a Beowulf-style cluster, even if they can take advantage of multiple threads. If you're working with open-source code that supports the distributed memory model (say it uses the MPI interface), then you can more realistically expect a performance boost from multiple processors.

Cheers to you for even getting into this line of thought, there will be a lot of focus and money spent on this technology in the future.

Email's in the profile if you've got more questions.
posted by onalark at 9:54 PM on December 12, 2006


And contrary to what paulsc says :) there are plenty of processing tasks that scale well with low-communication architectures (so-called Embarrassingly Parallel workloads). This is the whole reason Beowulf computing got off the ground in the first place.
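The "embarrassingly parallel" idea can be sketched locally with nothing but shell background jobs: each chunk of work is independent, so the workers never talk to each other, and the results are simply combined at the end. On a real Beowulf-style setup, each chunk would be dispatched to a different node (e.g. via ssh) instead of a local subshell:

```shell
# Sum the numbers 1..300 by splitting the range into three independent
# chunks, running each chunk as a parallel worker, then combining the
# partial sums.
for i in 1 2 3; do
  ( seq $(( (i-1)*100 + 1 )) $(( i*100 )) \
      | awk '{s+=$1} END {print s}' > /tmp/part$i ) &
done
wait   # block until all three workers finish

# Reduce step: add up the three partial sums.
awk '{s+=$1} END {print s}' /tmp/part1 /tmp/part2 /tmp/part3   # prints 45150
```

Note the workload splits cleanly because no chunk depends on another; that independence is exactly what "embarrassingly parallel" means.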
posted by onalark at 10:01 PM on December 12, 2006


"....This is the whole reason Beowulf computing got off the ground in the first place."
posted by onalark at 1:01 AM EST on December 13

And exactly 0 of the embarrassingly parallel problems listed in your linked Wikipedia article are what the poster asked about, which was: "If that is a technical improbability then just clustering the three computers to act as a single high powered work station would be totally rad." or, on followup, "just want to explore the possibility of using three computers in parallel to run applications for various artistic endeavors, video and sound editing, animation developments with blender."
posted by paulsc at 10:23 PM on December 12, 2006


If you want to make your multiple machines look like a single one, you might want something along the lines of Single System Image. +1 on geek factor, -5,000 on practicality.
posted by Freaky at 12:09 AM on December 13, 2006

