

How do I set up a remote website development/maintenance workflow?
July 25, 2007 10:19 AM   Subscribe

I'm seeking advice on how to manage small website development and maintenance projects with a remote contractor.

For a long time, I've been an independent web developer. I do mainly small business 'brochure' sites. Most of the sites that I launch now are on WordPress (PHP/MySQL) for a simple CMS. I recently started working with a remote contractor to help with my workload. As a result, I'm trying to figure out the best way to manage the development process.

Right now I develop on my local machine and then upload the code and the DB to the 'live' server when the site launches. Once a site launches, I usually end up making updates directly on the server. I know this probably isn't the best practice, but it seems easier than dumping the DB back and forth. For simple, non-DB sites, I usually do edit locally and then just upload the modified files to the live server.

So now I'm starting to work with the contractor, and I have a feeling that my workflow could use some improvement. Right now, when the contractor is editing a site for me, I pass along the FTP details and the admin details to access the CMS, and usually the details to access phpMyAdmin. From there, the contractor logs in as needed and edits the site on the live server.

I have a feeling that I should be using some type of version control system (e.g., Subversion). I understand the basics, but I'm still not clear on how it works with DB-driven sites, where both the files and the DB are getting modified. And then I worry that this is going to be complicated and add time to the process of making relatively straightforward site edits. I'm really getting hung up on this DB thing. (As a side note, I think it would be rare for the remote contractor and me to be working on a project at the same time.)

So after all that, I guess I'm just looking for some suggestions on how to set up a development/maintenance environment for my scenario. Basically I want to protect the integrity of the sites I manage and limit the access information that must be manually shared in my current scenario.

Thanks!
posted by namith to Computers & Internet (5 answers total) 6 users marked this as a favorite
 
Basecamp (organizational) and ConceptShare (visual)
posted by johoney at 10:57 AM on July 25, 2007


Over the last couple years, I've had this subversion for web applications conversation with a couple of people, and I always got hung up on the DB question too: checking out the files to a local host is pretty useless if I don't have the db structure, not to mention the possibility of name resolution issues when working locally.

However, we finally just went ahead and said 'screw it' and implemented svn regardless. I still feel our setup is a little less than ideal (reverting -- which we haven't had to do yet -- is probably a pain); still, we get the benefits of a diff log and backups. Here's how we (haha, I mean my coworker, who had much more versioning experience than I) did it:

* installed svn on our dev machine; the repository base is the same as our web root.
* wrote a small script that dumps all database changes into a mysql backup file.

Here's how we work with it (in theory):
1. run 'svn update'.
2. make changes
3. run the db dump script
4. run 'svn commit'
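A minimal sketch of that cycle as a shell session; the db name, credentials, and paths here are made up for illustration, not from the poster's actual setup:

```shell
#!/bin/sh
# Sketch of the edit/dump/commit cycle described above.
# DB name, credentials, and paths are hypothetical.
set -e

svn update                                   # 1. pull the latest files and db dump

# 2. ... make changes to the site ...

# 3. dump the database into the working copy so it gets versioned too
mysqldump -u webuser -p'secret' site_db > db/site_db.sql

# 4. commit the file edits and the matching dump together
svn commit -m "Site edits plus matching db dump"
```

Committing the dump in the same revision as the file changes is what makes each revision a consistent snapshot of code plus schema.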

We *do* work directly on the server, mostly because otherwise we'd have to re-init our local db each time with the database files (could probably be easily done with a script, but downloading large dbs could get annoying) and the logistics involved with name resolution (there's probably a good way around this too, but working directly on the server is fine for us for now, because we work on different files most of the time).

For your setup, it might be better to always work on a dev server (which both of you would work on), and then roll out changes to the live client server. This would mean you aren't constantly installing svn on client sites (some of which you may not have root on). Sure, the roll-out process would be a pain for those 'tiny' changes, but that's part of the cost of source control. I think after you get to a solid version of the db structure, you wouldn't need to dump data back and forth from the live site unless for some reason you needed fresh test data (at which point you could use the same 'dump' script and manually upload it to your test server).
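One way that roll-out step could look, assuming `svn export` to get a clean copy without `.svn` metadata and then an upload to the client box (the repository URL, hostname, and paths are hypothetical):

```shell
#!/bin/sh
# Hypothetical roll-out from the dev repository to a live client server.
# Repo URL, host, and paths are assumptions for illustration.
set -e

# Export a clean working copy (no .svn directories) of the current trunk.
svn export --force file:///home/svn/site_repo/trunk /tmp/site_export

# Upload it to the client's server; --delete removes files dropped from the repo.
rsync -avz --delete /tmp/site_export/ user@client-site.example.com:/var/www/site/
```

This keeps Subversion entirely on your own dev machine; the client server only ever sees plain exported files.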

Anyways, not ideal, but how we're doing it (for now). I'll be very interested to see what other people are doing, because I'm pretty sure it can be done better.
posted by fishfucker at 11:01 AM on July 25, 2007


Do your changes require you to have a completely up-to-date MySQL database, or just a mostly up-to-date MySQL database? If the answer is mostly up-to-date, I usually do this: have an entire staging environment, similar to the one on the live server, on each of the machines that will be developing. Every now and then, dump the live database and update your local one.
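That periodic refresh can be a one-liner, piping a dump from the live server straight into the local staging database; the hostname, database names, and credentials below are made up for illustration:

```shell
#!/bin/sh
# Hypothetical refresh of a local staging db from the live server.
# Host, db names, and credentials are assumptions.
ssh user@live.example.com \
    "mysqldump -u webuser -p'secret' live_db" \
  | mysql -u root -p'localpass' staging_db
```

Piping over ssh avoids leaving a dump file lying around on the live box, though for large databases you may prefer to dump to a file and compress it first.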

This can be accomplished by installing PHP/Apache/MySQL on your local machine, or, if you feel the discrepancies between your Windows version and the live server's Linux version are too much, you could run a virtual machine with VMware Server (free). I realize this sounds pretty over the top, but then you could just give that virtual machine to your other developer and you're both working off the same staging server environment.

Then you host the SVN repository on the live server and make commits and updates to and from that. You can also use rsync to push your changes to the live server; it's really nice (and fast).
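A typical rsync push might look like this; the paths, hostname, and exclude patterns are assumptions for illustration (excluding `.svn` keeps Subversion metadata off the production box, and excluding the config file protects live credentials):

```shell
# Hypothetical rsync push from a local working copy to the live server.
# Paths, host, and excludes are assumptions.
rsync -avz \
    --exclude='.svn' \
    --exclude='wp-config.php' \
    ./site/ user@live.example.com:/var/www/site/
```

The `-n` (dry-run) flag is worth adding the first few times, so you can see exactly what would be transferred or overwritten before it happens.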
posted by bertrandom at 5:31 PM on July 25, 2007


Thanks, everyone. I had hoped to get a few more comments on this. So it goes.

@fishfucker: Thanks for describing your process and also for posting the script. I'm going to try to implement something like that since I can't really find an ideal solution. I was hoping for a magic button that would handle the whole process.

@bertrandom: Thanks as well. Most of the time I'm updating the database, but working with a somewhat up-to-date database is definitely something that I'll consider. Good idea.
posted by namith at 11:27 AM on August 1, 2007


We store our SQL scripts in our Subversion repository and have one person dedicated to updating the database to deploy the changes.

When it comes to databases, I wouldn't make them the "wild west" where everyone can update them directly; if the wrong data gets deleted, you won't be very happy. One person on each project is the go-to for our DB changes, and they are done on an agreed-to schedule or as needed.

If there is going to be a lot of data being populated into the database, we always implement a Qcodo interface to the database since it can get up and running quickly and generates all the SQL for you.

I wish there was a slick and quick way to do this, but if there is, it probably costs and definitely isn't open source!
posted by kathk at 3:54 PM on September 10, 2007


This thread is closed to new comments.