How is the software sausage made?
April 29, 2015 5:47 AM

If you work at a forward-thinking company that builds software, what do your process and tooling look like? I realize this is a big question, but I am very interested in how teams are working these days. Starting from an idea, how do you go about discovering, analyzing / estimating, planning / tracking, building, testing, deploying, and documenting your software?

I'm interested in the very specific stuff you do to build software. What does your testing framework look like? Do you use Selenium? If you have a backlog, how does it get groomed / prioritized? How are you managing defects? Are you Scrum? Kanban? Some other hybrid of agile? What type of management checkpoints do you have in place? Any status reporting? What types of documentation are created before you release to your customers? Every company does it differently, so the more specific the better. Tell me how the sausage is made!
posted by jasondigitized to Work & Money (2 answers total) 16 users marked this as a favorite
I am part of a very small team (more or less three of us) building bespoke tools, doing modelling and general problem solving, and handling data warehousing development and support.

We use Jira for issue tracking and also for a bit of project management; it is the central, go-to ticketing tool. That said, it's often not used as fully or as well as it could be. We have some issues from long ago which should be closed and aren't, etc.

We have full unit tests and regression tests for our bigger data warehouse stuff using FitNesse / dbFit and some other tools. We don't have much of a UI to maintain, but I am right now (it's on my other screen) picking up Selenium to try to build some better testing for it.
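For a flavour of what a data-level regression test looks like (a hedged sketch, not our actual FitNesse / dbFit setup; the table name and counts here are made up, and I'm using sqlite3 purely so it runs standalone):

```python
import sqlite3

def check_rowcounts(conn, expectations):
    """Compare actual table rowcounts against expected values.

    Returns a dict of table -> (expected, actual) for mismatches only,
    so an empty dict means the regression check passed.
    """
    mismatches = {}
    for table, expected in expectations.items():
        # Table names can't be bound as SQL parameters, so they are
        # interpolated here; in real code, validate them first.
        actual = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        if actual != expected:
            mismatches[table] = (expected, actual)
    return mismatches

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO dim_customer (name) VALUES (?)",
                     [("alice",), ("bob",), ("carol",)])
    print(check_rowcounts(conn, {"dim_customer": 3}))  # prints {} when all counts match
```

The real thing expresses this sort of check as FitNesse wiki tables against the actual warehouse, but the idea is the same: assert on the data, not just the code.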

Oh and Plastic SCM as source control. That's probably the thing that gets used most and best.

Smaller projects use Plastic as well, but tend not to be built in a test-driven way (although that is officially our process, it doesn't tend to happen on small, fast projects; possibly a problem).
Our process is loosely defined, in a way that we can get away with because the team is so small (and sometimes we don't get away with it, usually my fault). Because we work with so many disparate things, we don't have a strong, carved-in-stone way of dealing with anything except the data warehousing. The data warehouse stuff is always source-controlled, test-driven, etc. We have a team wiki which is mostly maintained, but not impeccably.

Documentation is the thing that always slips through the cracks. It ends up a last-minute dash, and we should do it better. There is usually this idea that we'll document as we go and cover the missing bits from Jira comments. As you might expect, we generally end up with a finished, tested piece of software and little or no documentation (at least not of the kind that we can give to clients). That's something that I definitely want to improve, if only for stress reasons.
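The "cover the missing bits from Jira comments" idea can at least be half-automated. A hedged sketch (the issue key, summary, and comments here are invented, and the rendering is just plain markdown; in practice you'd pull the comments from Jira's REST API, e.g. /rest/api/2/issue/{key}/comment):

```python
def comments_to_doc(issue_key, summary, comments):
    """Render a Jira issue's comments into a markdown documentation stub.

    `comments` is a list of (author, body) tuples, as you might extract
    from the JSON that Jira's comment endpoint returns.
    """
    lines = [f"# {issue_key}: {summary}", ""]
    for author, body in comments:
        lines.append(f"- **{author}**: {body}")
    return "\n".join(lines)
```

It only gets you a raw dump to edit, not client-ready docs, but a dump beats a blank page during the last-minute dash.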

We are by no means a good model for anything, but an excellent example of
a) how processes/tools/good practice can completely save you from going insane.
b) how processes/tools/good practice completely fall by the wayside when you've got hundreds of things to do.
posted by Just this guy, y'know at 6:57 AM on April 29, 2015

[On preview: wow, this is long, sorry!]

I'm going to tell you about the previous place I worked at, because at my current (contracting) job things are a bit more start-up-ish and free-flowing. Genuinely, Canonical — specifically teams that grew out of the Launchpad team as-was — has some of the best development methodology going, and I constantly refer back to it. The last project I worked on before I left was this one; luckily that team was entirely ex-Launchpadders, so we kept the same development flow.

So, the tools first:
  • Bug tracking / feature requests / code reviews / specifications ("blueprints"): Launchpad. Old and not as widely used as Github, but my god was the bug tracker a good one (disclosure: I wrote a lot of it, so I'm tooting my own horn here).
  • Work tracking: Kanban. Brilliant for coordinating and visualising work, if used correctly. The backlog would often fill up because we a) couldn't see it by default and b) had too much other work on to deal with it all, but mostly this was a great resource for us.
  • Source control: Bazaar. Another Canonical-built tool. I've a soft spot for it because I got so used to using it. Just as good as Git or Hg in my opinion, but it sadly lost the battle to be a front-line VCS.
  • Continuous integration: Jenkins (or, in days of yore, Buildbot) and Tarmac.
The workflow, which was as close to Lean / Kaizen as we could get, went something like this:
  1. Grand Vision is expounded by Self-Appointed Benevolent Dictator for Life (sabdfl).
  2. Team iterates over Grand Vision, working out the details, breaking it up into user stories, and the user stories into development / design tasks.
  3. Team agrees on targets for the feature. The key parts of this are the user stories and a "We'll know we're done when" for each of those and for the feature as a whole.
  4. [Optional] Estimations done for each task. This didn't always happen, but when it did we were much better at giving realistic expectations to the sabdfl. Since Ubuntu has a 6-month release cadence this was often about what parts of the feature we could do in those six months, not how long the feature would take us. Lean methodology came in here a lot of the time: focusing on an MVP for a feature and then building from there so that we didn't release half-broken things.
  5. Cards for each task go on the Kanban board. At one point we tracked everything with Launchpad and wrote a tool that synced Launchpad Bugs to the Kanban board, but that became a headache. Kanban was simpler for feature development tracking.
  6. Development: A developer takes or is assigned a task card — which should be Ready To Code and ideally about one day's worth of work for one dev — and works on it until done. Methodology (for ex-LP'ers) is strictly TDD. You won't be able to land code unless there are tests, as a rule.
  7. Code review (using Launchpad). One review is necessary per branch that's ready to land. The branch can't land until the reviewer approves it and the tests pass. For LP, we had a fairly detailed review checklist; for other projects the review checklist wasn't necessarily codified, but was similar.
  8. Landing (automated, using Tarmac). If the tests don't pass, the branch won't be merged into the main line of development, and the developer is responsible for fixing the tests.
  9. Continuous Integration: Jenkins runs a set of acceptance and integration tests to ensure everything works end-to-end on the updated main line branch. If there's a failure here, the developer is responsible for fixing it. Kaizen was part of our methodology here: big failures were red-button issues, meaning that the whole team dropped their current work to focus on fixing the failure.
That was pretty much our sausage-making process... Hope that answers your question...
posted by gmb at 8:00 AM on April 29, 2015 [5 favorites]
