Help me chart the battle against program complexity
August 16, 2005 11:19 AM

I'm doing some research for a paper about the ways in which computer science has been coping with program complexity throughout its short yet interesting history, and I'm worried I might have overlooked some fundamental developments.

We're not talking about run-time complexity; I'm only interested in paradigms, methods and techniques that aid humans in understanding and managing a given codebase.

Currently, I've identified the following techniques:
  • Structured programming: restricted list of language constructs including assignment, composition, iteration and selection
  • Modularization: subroutines, coroutines, namespaces, packages, components, libraries, frameworks
  • Object-oriented programming: classes and objects, encapsulation, polymorphism, inheritance (see the small sketch after this list)
  • Parametrization, templates, generic programming
  • Meta-techniques: design patterns, coding standards, conventions
  • Miscellanea: type checking, exception handling, garbage collection
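
To make a couple of these concrete, here's a tiny illustrative sketch in Python (my own made-up example, not from any source) touching both the structured-programming constructs and the object-oriented items:

    # Hypothetical example: a small Account hierarchy.
    class Account:
        def __init__(self, balance=0):
            self._balance = balance          # encapsulation: state behind methods

        def deposit(self, amount):
            self._balance += amount

        def monthly_fee(self):
            return 5

    class StudentAccount(Account):           # inheritance
        def monthly_fee(self):               # polymorphism: same call, new behavior
            return 0

    def charge_fees(accounts):               # works on any Account subtype
        for acct in accounts:                # iteration and selection: the
            fee = acct.monthly_fee()         # constructs from the first bullet
            if fee:
                acct.deposit(-fee)

    charge_fees([Account(100), StudentAccount(50)])
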
Undoubtedly I've forgotten some fundamental developments in the field. Would you care to enlighten me? Any recommendations on seminal articles about the subject are highly appreciated too. Thanks a lot!
posted by koenie to Science & Nature (10 answers total)
 
Hm. I wouldn't necessarily label these as developments in computer science. Innovations in software development, perhaps, but say "computer science" to a person with a background in computer science and they think computational complexity, natural language processing, discrete mathematics, graph theory...

It seems like what you're shooting for is a paper on comparative programming languages. It's a pretty standard CS class. If you can grab a syllabus, I'm sure there'll be plenty of reading.

Looking at your list subjectively, I might add:

aspect-oriented programming (rough sketch below)
evolution of source control management
integrated development environments
software testing
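
The aspect-oriented one is easier to show than tell. Here's a rough, hypothetical Python sketch (names are mine): logging is a cross-cutting concern, so instead of scattering log calls through every function, a decorator "weaves" the aspect in:

    import functools
    import logging

    logging.basicConfig(level=logging.INFO)

    def logged(func):                        # the "aspect"
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            logging.info("entering %s", func.__name__)
            result = func(*args, **kwargs)
            logging.info("leaving %s", func.__name__)
            return result
        return wrapper

    @logged                                  # applying the aspect at a join point
    def transfer(src, dst, amount):
        src["balance"] -= amount
        dst["balance"] += amount

    transfer({"balance": 100}, {"balance": 0}, 25)

Real AOP frameworks like AspectJ let you target join points by pattern instead of decorating each function by hand.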
posted by Loser at 11:44 AM on August 16, 2005


Functional programming has been an influential paradigm in academia, even if it hasn't caught on in the corporate world to the same degree object-oriented programming (for example) has. Wikipedia's article on the subject provides a good overview. See also the Wikipedia list of programming paradigms.
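
A small example of the style (my own illustration, not from the article): functional programs favor pure functions and composition over mutable state, so you describe what the result is rather than how to build it step by step:

    from functools import reduce

    prices = (19.99, 5.49, 3.00, 42.50)

    # No loops, no mutation: filter, then fold the remainder into a sum.
    total_over_5 = reduce(lambda acc, p: acc + p,
                          filter(lambda p: p > 5, prices),
                          0.0)
    print(total_over_5)  # about 67.98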
posted by Acetylene at 11:46 AM on August 16, 2005


There's aspect-oriented programming, which may or may not be anything.

On the management side you have things like The Mythical Man-Month, and to some extent XP.

What you have listed as "structured programming" doesn't seem like structured programming to me, but maybe I don't understand what you mean by your terms.
posted by fleacircus at 12:17 PM on August 16, 2005


I think you need to look at both process improvements and tool improvements.

On the process side, you need to look at the development of QA techniques (separating software testers from developers, test-driven development, bug tracking, etc.), software development methodologies (waterfall, RUP, CMM, extreme, agile, etc.), and even things like outsourcing.
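
To give one concrete process example: in test-driven development the failing test is written before the code it exercises. A minimal hypothetical sketch (the slugify function is mine) with Python's unittest:

    import unittest

    def slugify(title):
        # Written only after the test below had failed.
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

    if __name__ == "__main__":
        unittest.main()
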

On the tools side, you have a lot of the main language developments (be sure to include functional programming, as mentioned above), but also look at attempts at higher-level languages, IDEs, modeling programs, UML, automated testing, build environments, and, as you mentioned, frameworks and the huge improvement in libraries. (Maybe dynamic linking vs. static linking was a big thing?)

Also look at the different strategies in platforms: Unix's small tools you can chain together vs. Windows' monolithic system approach.
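
As a rough illustration of the Unix approach, each tool stays tiny and the complexity lives in the composition. This Python sketch wires up the classic grep | sort | uniq -c pipeline (app.log is a hypothetical log file):

    import subprocess

    grep = subprocess.Popen(["grep", "ERROR", "app.log"], stdout=subprocess.PIPE)
    sort = subprocess.Popen(["sort"], stdin=grep.stdout, stdout=subprocess.PIPE)
    uniq = subprocess.Popen(["uniq", "-c"], stdin=sort.stdout, stdout=subprocess.PIPE)
    grep.stdout.close()   # let grep receive SIGPIPE if sort exits early
    sort.stdout.close()
    print(uniq.communicate()[0].decode())
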

There are also some bizarre formal specification languages, but that may be more of an attempt at quality than dealing with complexity, however intertwined those aspects are.
posted by babar at 2:35 PM on August 16, 2005


gentle hit all of the ones I could think of except perhaps code transformation (CSS, XSLT, etc.).
posted by furtive at 3:32 PM on August 16, 2005


Oh, and I can't believe we forgot APIs.
posted by furtive at 3:35 PM on August 16, 2005


What you need on your list is some AlgoViz (Algorithm Visualization). AlgoViz / software visualization is a way to understand what pieces of a computer program do what, when, and wherefore. It's sometimes used for teaching purposes -- e.g., to teach the difference between a bubble sort and a quick sort and a merge sort. It's also used to manage complexity in larger computer applications.
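
A toy example of the idea (hypothetical, mine): print the list as ASCII bars after every pass of a bubble sort, so a student can literally watch the big values bubble to the right:

    def bubble_sort_visualized(data):
        data = list(data)
        for i in range(len(data) - 1):
            for j in range(len(data) - 1 - i):
                if data[j] > data[j + 1]:
                    data[j], data[j + 1] = data[j + 1], data[j]
            print("pass %d: %s" % (i + 1, "  ".join("#" * v for v in data)))
        return data

    bubble_sort_visualized([4, 1, 5, 2, 3])
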

John Stasko worked on a recent project, Tarantula, to find bugs in large(ish) software projects. Hope this helps.
posted by zpousman at 4:25 PM on August 16, 2005


How about software configuration management / change management tools, like ClearCase and CVS? These are very important. Maybe not exactly in the focus you mentioned, but I can see them being a key development in allowing large, complex projects to go forward. Some people's entire job is working with these tools.

There are also standards for documenting code.

I'd be interested in reading your paper...
posted by amtho at 4:21 AM on August 17, 2005


(note: I'm an old hand, not an academic, so this is the practical and historical perspective)

Frederick Brooks' essay "No Silver Bullet", which you can find in the 25th Anniversary edition of The Mythical Man-Month, is a must-read. Brooks argues that software has two kinds of complexity, "accidental complexity" and "essential complexity", and that software tools and techniques can help a bit with the former but do nothing for the latter. The paper was very influential and sparked a big debate when it was published in 1987.

At the time, the big fads that were going to give us all a great leap forward in software productivity were fourth-generation languages and rule-based expert systems (which were adopted as the foundation for Japan's failed Fifth Generation Computer project). Other grand, well-intentioned failures in this area include formal specification languages, Charles Simonyi's Intentional Programming, CASE tools, reusable component libraries, and code generation techniques.

Here are some opinions on the overarching trends of the last 25 years that put the techniques you want to discuss into context.

- The widening of the developer base. Simpler languages (Excel macros, HTML) enable people who don't normally think of themselves as programmers to create software for themselves.

- The progressive externalization of functionality from the program into standardized platforms. The tradeoff is a higher cost of integration against the cost savings of using the platform's functionality. The key word here is "standardized". Once a standard solution becomes widely adopted, the integration costs go down to an acceptable level; beautiful technical solutions that never attain that critical mass of integration around a de facto standard die out.

- A reduction in project size and duration from several years to several weeks between deliverables. I can't find it, but I remember reading in one of Ed Yourdon's books about a study that showed that project duration was by far the factor most correlated with project failure. The agile methods camp takes this as their fundamental motivation. For more on real-world factors that influence project failure, look up Capers Jones's work (summary: Brooks was right, it's not about the tools and techniques).

- The victory of loosely-coupled architectures. Back in the pre-Internet days, tight coupling between the language and the platform was seen as the way to increase integration and reduce complexity. Loose coupling and dynamically compiled languages were seen as unsuitable for very complex systems. That approach fails when confronted with networked systems, where you don't control all the pieces. Middleware, RPCs, and interface definition languages have all failed on the Internet, while the successful network architectures of today all respect Postel's Law ("be conservative in what you emit and liberal in what you accept", first stated in Jon Postel's early TCP specifications).
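
To make Postel's Law concrete, here's a small hypothetical sketch (mine) of a network-facing parser: liberal in the date formats it accepts from others, strict (ISO 8601) in what it emits:

    from datetime import datetime

    ACCEPTED_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y")

    def normalize_date(text):
        for fmt in ACCEPTED_FORMATS:          # liberal in what you accept
            try:
                return datetime.strptime(text.strip(), fmt).strftime("%Y-%m-%d")
            except ValueError:
                continue
        raise ValueError("unrecognized date: %r" % text)

    print(normalize_date(" 16/08/2005 "))     # -> 2005-08-16
    print(normalize_date("Aug 16, 2005"))     # -> 2005-08-16
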

- Regular architecture revolutions. Back in the 80s, it was separating the DBMS from the code. Then came the shift to PCs, GUIs, and client-server, then OO, and then the Internet. Each shift followed the same pattern: initially, the promised benefits failed to appear. After a few years, the industry developed the three things necessary to demonstrate success: a critical mass of skilled developers, a critical mass of integration solutions, and a few specific types of application successes that were practically impossible with the old technology. That sets the stage for widespread adoption. If you want to understand why some revolutions succeed and others fail, you need to read Thomas Kuhn (The Structure of Scientific Revolutions) and Geoffrey Moore (Crossing the Chasm and Inside the Tornado).
posted by fuzz at 5:40 AM on August 17, 2005 [1 favorite]


This thread is closed to new comments.