What best practices do the pros use to maintain frameworks of software
June 9, 2013 4:32 AM   Subscribe

My company has built a "framework" of software which is a complex combination of relationships between various technologies, code in various programming languages, and front- and back-end systems. I need to come up with a system for maintaining versions and subversions of different pieces, and I'm not sure if I have the right idea. More inside...

Basically, what we have is a method for content generation with an XML output plus a crude CMS on the back end. On the front end, we have a class of website which takes that content and renders it in many different ways with interactive elements, and reads and writes another class of information (statistical-type information) to a third system, which is a database. For simplicity, let's imagine these systems are A, B and C. Each depends on the others, and each is comprised of a bunch of files. For instance, imagine the front end, B, is a collection of stylesheets, custom JavaScript libraries, off-the-shelf JavaScript libraries, etc.

This whole framework is basically a 'class'. In other words, we want to be able to, say, implement an instance of the framework in country X, then be able to make changes to it, add capabilities, and launch an updated instance in country Y without worrying about backwards compatibility with the version in country X. Most likely this would mean that instances X and Y would share 90% of the code and files but be different otherwise.

So the challenge we have is how to implement and maintain a versioning system whose goals are:

1. Maintain all the code in one place
2. Be able to know where common code is and be able to update that code in all deployed instances
3. Have an organized way of maintaining many instances that may differ by only a few files

We are using a CDN as a central repository for all our deployments. The idea that I was going to propose is that we keep all the code in as few folders as possible, organized by the major parts of the framework. For instance (crude example): /CMS, /XML, /FRONTEND, /STATS.

If, for instance, deployment X uses interaction1.js and we make a non-backwards-compatible change for deployment Y and create an interaction1.2.js, both would still be kept in the same folder.

Then I imagined that each deployment would contain a build manifest which would specify a recipe of sorts of all the file URLs that it needs to function.
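To make the idea concrete, here is a minimal sketch of what such a build manifest and its resolution might look like. Everything here is an assumption for illustration: the JSON layout, the CDN base URL, and all of the file names are invented, not taken from our actual system.

```python
import json

# Hypothetical build manifest for one deployment ("instance X").
# The CDN base URL and every file name below are made up for illustration.
MANIFEST_X = json.loads("""
{
  "deployment": "country-X",
  "cdn_base": "https://cdn.example.com",
  "files": {
    "FRONTEND": ["interaction1.js", "styles.css"],
    "CMS":      ["editor-2.1.js"],
    "STATS":    ["stats-client-1.0.js"]
  }
}
""")

def resolve_urls(manifest):
    """Expand a manifest into the full list of file URLs the deployment needs."""
    base = manifest["cdn_base"]
    return [f"{base}/{folder}/{name}"
            for folder, names in manifest["files"].items()
            for name in names]

urls = resolve_urls(MANIFEST_X)
# e.g. the first entry is "https://cdn.example.com/FRONTEND/interaction1.js"
```

The point of the sketch is that each deployment's manifest is data, not code: deployment Y would carry an identical file that merely lists interaction1.2.js instead.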

I'm hoping that you seasoned pros out there can tell me if my suggestion is sound and also help point me towards some best practices in this respect.

Thanks
posted by postergeist to Computers & Internet (9 answers total) 5 users marked this as a favorite
 
I have had very good experiences managing version dependencies with Leiningen, a Clojure build / dependency-resolution tool that uses Maven but provides more functionality. The basic mechanisms that really help as a developer:
  • Installation is local, and resolution and usage are per-package (i.e., I can have foobar 1.6 in one project, and foobar 3.0 in another).
  • You don't ask for URLs, you ask for versions; the version-to-URL mapping is done by the tool.
  • All the downloading and recursive dependency resolution is done automatically by the tool, not as a manual step.
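The version-to-URL mapping idea above can be sketched in miniature. This is not how Leiningen or Maven actually work internally; the registry contents and URLs are invented purely to show the shape of the abstraction:

```python
# Toy registry mapping (package, version) -> artifact URL, so callers ask
# for a version and never hard-code URLs. All entries are invented.
REGISTRY = {
    ("foobar", "1.6"): "https://repo.example.com/foobar/1.6/foobar-1.6.jar",
    ("foobar", "3.0"): "https://repo.example.com/foobar/3.0/foobar-3.0.jar",
}

def resolve(package, version):
    """Return the artifact URL for a requested package version."""
    try:
        return REGISTRY[(package, version)]
    except KeyError:
        raise LookupError(f"no artifact for {package} {version}")

# Two projects can pin different versions of the same package:
url_a = resolve("foobar", "1.6")
url_b = resolve("foobar", "3.0")
```

Because deployments name versions rather than URLs, the storage layout (CDN folders, repository paths) can change without touching any deployment's manifest.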
The problems I have seen are less about the build tool and more about the structure of the project. We tried to organize the project as independent submodules that could potentially be used separately. This is a great idea for code maintenance (it enforces separation of concerns and modularity of the implementation), but it leads to all sorts of complexity in development as soon as cross-submodule behaviors change (which happens often in early development): you have to load each dependency manually, and your system will not work if you pick the wrong subset to load. Also, think about the complexity of versioning when a depends on b, which depends on c, and you come up with a new version of c (which means upgrading b and a for compatibility) while a teammate comes up with updates to a and b; the tangle of version numbers and dependencies can turn what should be a simple deployment of a few changes into an hour's work. Finally, in practice nobody even uses any of the parts in isolation at the moment, so the extra complexity in deployment buys very little used functionality.
posted by idiopath at 4:49 AM on June 9, 2013


I'd use a distributed version control system with good branching / merging support, like git. Then I'd spin off a branch for each deployment, and would be able to sensibly merge changes in from my main deployment.

It seems easier to manage this way than with discrete build scripts: with those, you get all sorts of old versions littering your system and no clear way to package up testable units.
posted by jenkinsEar at 5:42 AM on June 9, 2013


Use git or mercurial for one code base. Handle different deployments with some abstraction within the code that checks a local settings file and says "I'm in France, so add this content to the output of this process". Use something like Chef or Fabric as a deploy-and-setup tool to create that local file based on command-line options passed at deployment. Keep a central repository of code, and deploy code changes by checking code out (after a pile of automated unit tests have passed).
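A bare-bones sketch of the "check a local settings file" idea. The file name, keys, and the France example are invented; the only point is that behavior branches on configuration, never on forked code:

```python
import json
import os

# Hypothetical per-deployment settings file, written by the deploy tool
# (e.g. Chef or Fabric). Only this file differs between countries.
SETTINGS_FILE = "deploy_settings.json"

def load_settings(path=SETTINGS_FILE):
    """Read the local settings file; fall back to defaults if it is absent."""
    defaults = {"country": "XX", "extra_content": False}
    if os.path.exists(path):
        with open(path) as f:
            defaults.update(json.load(f))
    return defaults

def render_footer(settings):
    """One shared code path; deployment differences come from the settings."""
    footer = "Standard footer"
    if settings["country"] == "FR" and settings["extra_content"]:
        footer += " + France-specific notice"
    return footer
```

In use, the deploy tool would write `{"country": "FR", "extra_content": true}` on the French servers, and every server still runs the identical codebase.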

If you fragment your codebase, you'll never reintegrate it. If you have to handle a sort of localization, do so within the context of a single codebase.
posted by fatbird at 7:21 AM on June 9, 2013


i don't think there's any way to get around testing each change you want to merge with the main code on each country-specific child. you can't know if it will break backwards compatibility without knowing what the change is and how it's used in each country-specific version.

i think you need to try to have a core code that is general and flexible, and separate from each country-specific instance. then, for each instance you have instance-specific data that tells your core code what settings to use. for example, change "for x = 1 to 10" to "for x = StartVal to EndVal". maybe that's too basic, but I hope that helps.
posted by cupcake1337 at 9:04 AM on June 9, 2013


This is a hard question to answer without more specifics about your environment (what languages and frameworks your CMS depends on), how large your development team is, and how your project is structured. It also depends on how your framework is implemented. Do you plan on doing all implementations internally? Is this a client-based environment where you need to hand off your code?

As a consumer of these frameworks and someone who has to do implementations with them, here is what I would prefer (from a .NET perspective, but obviously you can switch out nuget for gem in Ruby, etc.):

1. The ability to run Install-Package postergeist-cms and have it pull in all the dependencies (hopefully not that many). Even though you have XML, front-end code, etc., this should all be pulled into my project via package management. postergeist-cms-front might be on 2.0, but I don't need to care about that; the package management should resolve it for me.

2. If you're doing implementations of this yourself and your application is proprietary, you can still set up a private package server and point to that. Give your customers access to it. I hate, hate that proprietary CMS systems still make me download a zip and set up my own feed for their software.

3. If you're entirely open source, or even internally among dependencies, I'd use git submodules. These will track against commits.

I agree with fatbird: if by different implementations for different countries you mean localization, you need to do it within your code base. This is really not that hard and mostly means creating a dictionary file. This is especially true for backend systems like a CMS.
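The "dictionary file" approach mentioned above, as a rough sketch. The locales, keys, and translations are invented examples, and a real system would load the tables from files rather than inline literals:

```python
# One string table per locale; keys and translations are invented examples.
STRINGS = {
    "en": {"greeting": "Welcome", "submit": "Submit"},
    "fr": {"greeting": "Bienvenue", "submit": "Envoyer"},
}

def t(locale, key):
    """Look up a UI string, falling back to English when a locale lacks it."""
    return STRINGS.get(locale, {}).get(key) or STRINGS["en"][key]

print(t("fr", "greeting"))  # Bienvenue
print(t("de", "submit"))    # unknown locale falls back to "Submit"
```

Adding country Z then means adding one table, not forking any rendering code.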

From your question it sounds like you're not separating your CMS from actual implementations, but I could be misreading this. Even if you're doing this all internally, I would still strongly encourage you to keep your framework distinct from your implementations. Force developers to pull in the CMS from some sort of package management feed; if they need to get hacky, let them do it in the implementation, and don't let them jack the CMS itself to get a certain feature done for a certain client ... if that makes sense. You want to keep your framework as clean and free of implementation concerns as possible.
posted by geoff. at 9:05 AM on June 9, 2013


Something I kind of took for granted in my answer above, but I guess should be more explicit: make the codebase a library that can be used without modification by each installation, with a straightforward way to implement changes that are specific to the installation (localizations, etc.).
posted by idiopath at 9:12 AM on June 9, 2013


Definitely follow the practice of semantic versioning for your major/minor/patch release numbers:
In systems with many dependencies, releasing new package versions can quickly become a nightmare. If the dependency specifications are too tight, you are in danger of version lock (the inability to upgrade a package without having to release new versions of every dependent package). If dependencies are specified too loosely, you will inevitably be bitten by version promiscuity (assuming compatibility with more future versions than is reasonable). Dependency hell is where you are when version lock and/or version promiscuity prevent you from easily and safely moving your project forward.

I call this system "Semantic Versioning." Under this scheme, version numbers and the way they change convey meaning about the underlying code and what has been modified from one version to the next.
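Under SemVer, compatibility can be decided mechanically from the numbers alone. A minimal sketch of that rule of thumb (this deliberately ignores pre-release tags and build metadata, which the full spec also covers):

```python
def parse(version):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints, e.g. '1.4.2' -> (1, 4, 2)."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def is_compatible(installed, required):
    """SemVer rule of thumb: same MAJOR version, and installed >= required.
    Minor/patch bumps must stay backwards compatible; major bumps may break."""
    i, r = parse(installed), parse(required)
    return i[0] == r[0] and i >= r

print(is_compatible("1.4.2", "1.2.0"))  # True: minor upgrade, same major
print(is_compatible("2.0.0", "1.2.0"))  # False: major bump may break callers
```

That mechanical check is exactly what lets a tool upgrade dependencies automatically without version lock or version promiscuity.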
posted by migurski at 9:23 AM on June 9, 2013


A significant chunk of what you are describing has, in the past, been known as configuration management, and these days has become part of what is being called devops. You might use those terms as you dig around for guidance.

I will say, though, that your description of how you'd like things to work sounds like it will quickly become a nightmare. What happens when you discover a major bug or security problem in your code? What happens if there is a security fix for some of the third-party code you've been relying on? Temporary divergence of your deployed code is fine, even desirable, as a way to test something before broadening the deployment, but you should strive to keep everything as consistent as possible. Automated testing will help make that possible, because it provides a way to quickly build confidence that changes and new versions aren't going to cause breakage.

If this all sounds like a lot of work, and too expensive: 1) It doesn't have to be, in part because 2) a lot of it you should be doing for other reasons anyway. 3) Pay now or pay later.
posted by Good Brain at 11:00 AM on June 9, 2013


You need one code base. Those few files that need to differ? Abstract until you can make the changes with configuration. Otherwise resign yourself to a nightmare. Either the changes are compatible or you have different software. Tests tests tests. The tests will compound in value as you go forward. You will sleep easily every night.
posted by maulik at 8:09 AM on June 10, 2013 [1 favorite]

