What are the most common programming tasks?
September 10, 2009 3:09 PM

I'd like a list of the most important, basic, essential programming algorithms.

By basic, I sort of mean useful in the workplace and real life. For example, sorting. But not reproducing the Fibonacci series.

This is a really great thread, but I'd like a list that's been narrowed down to a top-twenty or so. This question might provoke debate, which is fine. But I'm really just looking for a list of common programming tasks, plain and simple.

Reason? I'm just starting to learn programming and I'd like to know what topics to pay special attention to and be able to solve some common problems in different languages. Plus - curiosity.
posted by kitcat to Computers & Internet (36 answers total) 58 users marked this as a favorite
 
Response by poster: Ok, maybe 'useful' wasn't the best thing to request. I just mean, the basic ones that are most commonly employed - the ones that you'd be considered a loser if you were a so-called programmer/coder/developer and didn't know.
posted by kitcat at 3:29 PM on September 10, 2009


I'm just starting to learn programming and I'd like to know what topics to pay special attention to and be able to solve some common problems in different languages.

The problem with this strategy is that if it's really a common problem, it has been solved thousands of times already, and you should probably be using a library function to do it rather than re-writing the algorithm yourself.

For example, I can write half a dozen sorting algorithms from memory and tell you the pros and cons of each one, but I've never actually written a sorting algorithm in my current job. That's because the Array.Sort that is built into my programming language is going to beat pretty much anything I could come up with, and I don't want to waste time re-inventing the wheel and possibly introducing bugs into my code.

So although it's important that you learn classic algorithms to help get a feel for how programming works, don't worry about whether or not you'll actually use those specific algorithms. If you want to learn something that will actually help in a variety of situations, I suggest learning design patterns, which are higher level and can be applied in many situations. Also, although it's quite old now, browsing the Wiki Wiki is a great way to get exposed to various real world CS concepts and patterns. For example, here's the category of pages that show algorithms implemented in a variety of programming languages.
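
To make that concrete, here's a minimal sketch in Python (not my language's Array.Sort, but the idea is the same; the bubble sort is purely for illustration):

```python
# Minimal illustration: the built-in sort vs. a hand-rolled one.
data = [5, 3, 8, 1]

# Library call: well tested, tuned, O(n log n). Use this.
print(sorted(data))  # [1, 3, 5, 8]

# Hand-rolled bubble sort: worth writing once to learn,
# but slower and a fresh chance to introduce bugs.
def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

print(bubble_sort(data))  # [1, 3, 5, 8]
```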
posted by burnmp3s at 3:31 PM on September 10, 2009 [3 favorites]


Just get an Algorithms 101 textbook.

As burnmp3s says, you'll never actually write them from scratch, except for fun or in a postnuclear world where you're reinventing the PC, but you do need to know what they are, so that when your library function uses a bubble sort, you know wtf that is, and why it might be good or bad for the problem at hand.
posted by rokusan at 3:32 PM on September 10, 2009 [1 favorite]


Not exactly algorithms, but if you're working in an object-oriented language, you'll want to learn the design patterns.

Anything else, yeah, use a library.

I have a decent "Data Structures in Java" book that was a textbook for my 200-level programming course in undergrad. It covers implementations of a lot of common data structures and algorithms like quicksort, mergesort, etc. A book like that might be worth picking up for the reasons described above. Substitute your language of choice for Java, of course.
posted by Alterscape at 3:36 PM on September 10, 2009 [1 favorite]


Best answer: Everyone should know how to do cooperative multitasking. (Preemptive multitasking requires hardware support, but cooperative multitasking can be implemented at the user level without any special privilege.)

Everyone should understand how a hash table works.

Everyone should understand how to build a singly-linked list, and a double-linked list, and how to do insertions and deletions in such lists. Everyone should understand why linked lists are useful, and know when to use them instead of linear lists.
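
To make the list idea concrete, a minimal singly-linked list sketch in Python (illustrative only; the node and function names are my own):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(node, value):
    # Splice a new node in after `node` - O(1), nothing shifts.
    node.next = Node(value, node.next)

def delete_after(node):
    # Unlink the node following `node` - also O(1).
    if node.next is not None:
        node.next = node.next.next

head = Node(1)
insert_after(head, 3)
insert_after(head, 2)   # list is now 1 -> 2 -> 3
delete_after(head)      # list is now 1 -> 3
```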

Everyone should understand how to build an interpretive virtual machine. (I'm not kidding; they are astoundingly useful.)
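
As a sketch of how small such a machine can be, here's a toy stack-based interpreter in Python (the instruction set is made up for illustration):

```python
def run(program):
    # Fetch an instruction, dispatch on its opcode, repeat.
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]
        pc += 1
        if op == "PUSH":
            stack.append(program[pc]); pc += 1
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack

# (2 + 3) * 4
print(run(["PUSH", 2, "PUSH", 3, "ADD", "PUSH", 4, "MUL"]))  # [20]
```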
posted by Chocolate Pickle at 3:40 PM on September 10, 2009


This book is basically the algorithms bible. I would just get that and read it. It's rare that you ever use a straight-up stock algorithm in the course of solving a real-world programming problem. It's more important to have a solid grasp of algorithmic *approaches* to certain types of problems than to have specific implementations memorized.
posted by strangecargo at 3:43 PM on September 10, 2009


Response by poster: Dude (not directed to anyone in particular) - I appreciate the design patterns suggestion. But as far as suggesting using the language's library function...I said I want to know what the classics are. For learning and for fun.
posted by kitcat at 3:44 PM on September 10, 2009


To a first approximation, nobody writes the classic algorithms: they use a well-specified and debugged library implementation. Who really wants to write quicksort (again)?

What's important is *understanding* the algorithms: the problems they solve, their strengths and weaknesses etc etc.

(NB. The trouble with learning the more obscure algorithms is that you start looking for opportunities to use them. Ever since I learnt about Bloom Filters I've been resisting the temptation to insert them into my code just because they're cool.)

As Chocolate Pickle says, knowing the classic data structures is just as important as knowing the algorithms that operate on them.
posted by pharm at 3:45 PM on September 10, 2009


Most newbies think that programming is about writing code. Turns out that's just about the least important thing we do.

In a large program, the most important thing we do is to manage interfaces. The second most important thing we do is to manage data. Code is easy; anyone can write code. Getting interfaces and data structures right is much tougher, and that's where real programmers shine. (And where hacks fail.)

Don't sweat algorithms. Study up on data structures. I mentioned linked lists and hash tables. Also learn about stacks, queues, linear lists, and structs. (And note that these things can be mixed and matched; they're not mutually exclusive. For instance, an "open hash" is a hash table where each slot is the head of a linked list, which means that each slot can contain more than one thing. Sometimes a 16-box open hash is better than a 1024-box closed hash; learn to know when each is best.)
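
A bare-bones sketch of that open-hash idea in Python (Python lists stand in for the per-slot linked lists; the names are illustrative):

```python
class OpenHash:
    def __init__(self, nslots=16):
        self.slots = [[] for _ in range(nslots)]  # each slot is a chain

    def _chain(self, key):
        return self.slots[hash(key) % len(self.slots)]

    def put(self, key, value):
        chain = self._chain(key)
        for pair in chain:
            if pair[0] == key:       # key already present: overwrite
                pair[1] = value
                return
        chain.append([key, value])   # otherwise extend the chain

    def get(self, key):
        for k, v in self._chain(key):
            if k == key:
                return v
        raise KeyError(key)

h = OpenHash()
h.put("pi", 3.14159)
print(h.get("pi"))  # 3.14159
```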

One of the reasons that learning about classic algorithms is pretty much a waste of time is that they are either so specific as to not be portable, or so general as to be trivial. The former have to be substantially redesigned if you need them, and the latter are library calls.
posted by Chocolate Pickle at 3:58 PM on September 10, 2009 [2 favorites]


I've found Algorithms in a Nutshell to be a pretty nifty overview and reference. Unlike Cormen et al., mortals may actually read and understand the whole thing. Besides, if you really want the algorithms bible, you may as well go with Knuth.
posted by Cogito at 4:02 PM on September 10, 2009


Best answer: A different thing to study is classic failures:

memory leaks
off-by-one errors
table overflows
infinite loops
infinite recursion
windows of vulnerability
inter-thread coordination errors
race conditions

Understanding these things, and thus being sensitive to them as you code, will help you a lot more than study of algorithms.
posted by Chocolate Pickle at 4:03 PM on September 10, 2009


We learn by doing. I think it is a useful thing to actually implement the classic algorithms. Compare your implementation to the one provided by your libraries or your language. Then try to figure out why the differences exist -- if your version is much slower than the built-in one (or faster, or uses more memory, or...), why? (Of course it helps if you have the source to your library or language.)

So take everyone's suggestions and try to implement the classics on your own.
posted by phliar at 4:04 PM on September 10, 2009


Best answer: But as far as suggesting using the language's library function...I said I want to know what the classics are. For learning and for fun.

Yeah, that makes sense. I was mainly reacting to your statement that you didn't want to learn classic but useless algorithms like Fibonacci numbers. I admit that I didn't really answer your question, though, so here are a few algorithms or small programming tasks that are worth trying:

- Build and traverse a tree structure (see the sketch below).
- Implement a critical section using a mutex to get two threads to play together nicely.
- Write an algorithm of some kind that uses dynamic programming.
- Write a parser of some kind.
- Use reflection to do something neat.
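
For the first item, a minimal sketch in Python of building a binary search tree and traversing it in order (a simple recursive version; the names are just for illustration):

```python
class TreeNode:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def insert(root, value):
    # Standard BST insert; returns the (possibly new) root.
    if root is None:
        return TreeNode(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def in_order(root):
    # Left subtree, then node, then right subtree: sorted order.
    if root is not None:
        yield from in_order(root.left)
        yield root.value
        yield from in_order(root.right)

root = None
for v in [5, 2, 8, 1]:
    root = insert(root, v)
print(list(in_order(root)))  # [1, 2, 5, 8]
```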
posted by burnmp3s at 4:08 PM on September 10, 2009 [1 favorite]


Best answer: Chocolate Pickle certainly has a good list of things that it will benefit you to learn. But maybe you are looking for an answer to "what is a good baseline of knowledge for working coders?" If so, here are some of the more theoretical things I expect all of my coworkers will know without having to ask:

- Binary trees, linked lists, hashtables: what they are, how to perform basic operations.
- Big-O: what O(log n), etc. mean, and how to determine (but not prove) the big-O efficiency of an algorithm
- Sorting algorithms: quicksort and radix sort, at least
- Graph theory (what is a shortest path, minimum spanning tree, connected graph, DAG) and some basic graph algorithms (Dijkstra's and Kruskal's, mainly; see the sketch at the end of this answer)
- State machines, what they are and when to use them
- Basic computational complexity: what is P, NP-complete, NP-hard
- Recursion, and inductive proofs
- Basic algorithmic strategies: in particular, greedy, dynamic programming, and divide-and-conquer
- Other "stock" problems: knapsack, linear programming, traveling salesman, max-flow.
- Concurrency primitives: mutex, condition variable, semaphore, critical section

The idea is having a common vocabulary to describe problems and solutions. This way, if someone tells you, "oh, we can find a solution with O(n log n) lookups if we model the connections as a DAG and cache the shortest paths" or whatever, you get what they're saying.

As others have said, you will only have to implement a basic algorithm by yourself in extremely rare situations. But you can't put the building blocks together unless you know them pretty well at first.
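
To give one of those building blocks some flesh, here's a compact Dijkstra sketch in Python using a heap (illustrative; the graph representation and names are my own, and this is the O((V+E) log V) flavor):

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]}; returns {node: distance}.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```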
posted by goingonit at 4:12 PM on September 10, 2009 [3 favorites]


Best answer: Trees in general should be second nature. More advanced tree structures, like splay trees or red-black trees, are optional: you probably won't need to actually implement these yourself, but it's good to know how they work. Tries are also pretty handy.

Heaps come up a lot. There are also many, many different flavors of heaps, each with different and incomparable performance (i.e.: better at some operations, worse at others). It'll be worth it to familiarize yourself with what's out there. Since you'll probably be using an off-the-shelf library, it would be useful to be familiar with which particular flavor was implemented.

There are zillions of graph algorithms. Depending on your application area, max flow is either going to come up all the time or not at all. The A* search algorithm is more generic and occurs in many contexts. My personal recommendation is learning the Hopcroft-Karp matching algorithm: it is complicated but not too complicated, may actually prove useful, and is not often implemented in graph libraries (e.g.: it doesn't appear to be in Boost).

Speaking of matching, you should also be familiar with the Stable Marriage Problem and the Gale-Shapley algorithm. It's not hard at all and is potentially very useful. Also, if you're so inclined, you may want to look into the proof of correctness of the Gale-Shapley algorithm; it's very accessible and it gives you a sense of how you go about proving the correctness of an algorithm.
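
A compact Gale-Shapley sketch in Python, just to show how little code it takes (the names and toy preference lists are illustrative):

```python
# Gale-Shapley stable matching: suitors propose in preference
# order; reviewers tentatively accept and trade up.
def gale_shapley(suitor_prefs, reviewer_prefs):
    # rank[r][s] = how much reviewer r likes suitor s (lower is better)
    rank = {r: {s: i for i, s in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(suitor_prefs)          # suitors with no match yet
    next_choice = {s: 0 for s in suitor_prefs}
    engaged = {}                       # reviewer -> suitor
    while free:
        s = free.pop()
        r = suitor_prefs[s][next_choice[s]]
        next_choice[s] += 1
        if r not in engaged:
            engaged[r] = s
        elif rank[r][s] < rank[r][engaged[r]]:
            free.append(engaged[r])    # previous suitor is jilted
            engaged[r] = s
        else:
            free.append(s)             # rejected; try next preference
    return engaged

suitors = {"x": ["a", "b"], "y": ["a", "b"]}
reviewers = {"a": ["x", "y"], "b": ["x", "y"]}
print(gale_shapley(suitors, reviewers))  # {'a': 'x', 'b': 'y'}
```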
posted by mhum at 4:24 PM on September 10, 2009


Response by poster: Thanks everyone. I didn't pose the question very well. You've had to guess what I was looking for. And done a good job - especially in intuiting that I don't know what I'm talking about.

So...um...what's a data structure?
posted by kitcat at 4:26 PM on September 10, 2009


Data structures are a way of organizing data. They're intertwined with algorithms; algorithms usually presuppose some given data structure that you're operating on. People above have referred to several kinds of data structures: trees, hash tables, linked lists (there are lots of variants on all of these.)

You choose your data structures based on the nature of the data and of the operations you'll want to perform on it.
posted by Zed at 4:41 PM on September 10, 2009


So...um...what's a data structure?

If you don't know the answer to that, then that is where you begin your study. Data structures of various kinds are the most important thing there is in programming. (The data structures course I took as a sophomore in college was more useful to me professionally than everything else combined that I studied.)

You may have heard about "object oriented programming". What that means is that you create a critical data structure and then implement a body of functions for that structure that allow various kinds of manipulations.

Up until 20 years ago, the code was thought to be the core of the program. The reason for the object-oriented programming revolution was the dawning realization that data is what matters. Code serves the data, not the other way around.

One OO program I wrote one time had the simplest main line you can imagine. It created one instance of one data structure and invoked the initialization function associated with that data structure. Two lines of code. Everything else was encapsulated in support functions associated with that data structure and a bunch of other ones that it created and manipulated.
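
(Not that actual program, but a toy Python illustration of the shape: one data structure, the functions that serve it, and a tiny main line:)

```python
class ClickCounter:
    # The data (a count) and the functions that serve it, bundled.
    def __init__(self):
        self._clicks = 0

    def click(self):
        self._clicks += 1

    def total(self):
        return self._clicks

# The "main line": create the object and set it going.
counter = ClickCounter()
counter.click()
print(counter.total())  # 1
```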

Other things to keep in mind for study later: Monte Carlo Simulations, and Genetic Programming. Those are both really cool, but that's pretty advanced stuff. You certainly don't start with them.
posted by Chocolate Pickle at 5:12 PM on September 10, 2009 [2 favorites]


I've heard the simplex algorithm is the most used algorithm ever, in the sense that it has occupied the most CPU clock cycles. Certainly if you want to use computers to do efficient computations, it should be of interest to you.
posted by Commander Rachek at 5:15 PM on September 10, 2009


Best answer: Data structures are the in-memory (or on-disk) organization of data. Most of algorithm design is dependent on choosing a good data structure in the first place. Trying to separate the two is a problem.

The other problem you'll encounter is that the things that are "classic" are not as commonly known or employed as you'd expect. For example, quicksort is a classic that lots of people don't understand, or avoid. Most graph algorithms are awesome but unknown. People will build regular expression parsers on data that isn't regular.

Lots of people declare this theoretical stuff useless, but it's their own imagination that needs work. Software dependencies can be modeled as a graph, and an installation plan can be built with a topological sort.
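
For instance, here's a sketch of that install-plan idea in Python, using Kahn's topological sort (the package names and dependency map are made up):

```python
from collections import deque

def install_order(deps):
    # deps: {package: [prerequisite, ...]}; returns an install order.
    pending = {p: len(reqs) for p, reqs in deps.items()}
    needed_by = {}
    for p, reqs in deps.items():
        for r in reqs:
            needed_by.setdefault(r, []).append(p)
    ready = deque(p for p, n in pending.items() if n == 0)
    order = []
    while ready:
        p = ready.popleft()          # nothing left blocking p: install it
        order.append(p)
        for q in needed_by.get(p, []):
            pending[q] -= 1
            if pending[q] == 0:
                ready.append(q)
    if len(order) != len(deps):
        raise ValueError("dependency cycle")
    return order

print(install_order({"app": ["libc", "ssl"], "ssl": ["libc"], "libc": []}))
# ['libc', 'ssl', 'app']
```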

The reason there's such a massive disconnect between the classics and the real world is databases. Everyone's data is stored in an SQL database, and their programs are written for SQL databases. SQL is the "real world classic". While it's possible to create crazy data structures in SQL, it's not straightforward. So they leave it up to the SQL server to handle organizing the data, and reduce every problem to an SQL query.
posted by pwnguin at 5:22 PM on September 10, 2009 [2 favorites]


Project Euler is my go-to place when learning a new programming language. The puzzles aren't overly difficult in the beginning, but they almost always have multiple ways of being solved, and are thus great for sussing out how to implement things in your new language.
posted by spatula at 5:23 PM on September 10, 2009


The most important analysis tool you can learn to use is a data flow diagram.
posted by Chocolate Pickle at 5:40 PM on September 10, 2009 [1 favorite]


Commander Rachek: I've heard the simplex algorithm is the most used algorithm ever

While this may be true, I would probably recommend putting it on the back burner for now. It is definitely a landmark algorithm and the foundational algorithm in linear programming -- this is "programming" in a mathematical sense, not a computer-y sense -- but it's not really of generic applicability. Also, it's one of those things you'll never need to actually do yourself.
posted by mhum at 5:49 PM on September 10, 2009 [1 favorite]


Response by poster: So helpful. Thanks all. Thanks Chocolate Pickle.
posted by kitcat at 5:49 PM on September 10, 2009


I would probably recommend putting it on the back burner for now.

To be clear, I wasn't really suggesting kitcat try to implement it (at least not this weekend); but it's a cool algorithm, even awe-inspiring to my naive undergraduate mind. Sure, neither you, me, nor kitcat will likely directly implement it for anything in our day-to-day lives, but neither am I likely to ever read all the way through Finnegans Wake, and yet I still know what it is and something of its significance. I think part of getting educated in a subject is learning about its masterworks; I think it's safe to call simplex a masterwork of LP.
posted by Commander Rachek at 7:03 PM on September 10, 2009


Response by poster: Speaking of which, someone should start a Finnegans Wake project, much like this one for Infinite Jest! It won't be me, though.
posted by kitcat at 7:20 PM on September 10, 2009


Response by poster: This thread has totally blown my mind. You guys are the best :)
posted by kitcat at 7:55 PM on September 10, 2009


As far as simplex goes, I could have implemented it a dozen times without learning anything---until I took a linear algebra class, when I realized the math behind it was absolutely fascinating. Honestly, I think that's one where the math is much more interesting than the CS: implementing all those matrix manipulations isn't really that exciting.
posted by goingonit at 8:30 PM on September 10, 2009


Best answer: There are some mighty fine answers here (particularly I like goingonit's post and Pickle's list of common errors - all important). However, I'd take a step back and recommend some even more basic building blocks from which the mentioned algorithms are built.

Specifically, the absolute most fundamental:
- selection (if/else)
- iteration (for/while)
- functions and recursion
- big-O notation of complexity
- data structures in general, i.e. the underlying concepts instead of specific structures:
  - containment
  - abstraction
  - opacity / privacy
  - the typical use cases for each structure (are we always inserting? iterating through linearly? randomly seeking a specific item?), which leads back to big-O
  - indexing; how that relates to access patterns
- memory management and different kinds of variables:
  - automatic/stack variables ("the usual kind")
  - static variables
  - arrays
  - pointers (and where they're used even if they don't look like pointers, eg in Java)
  - dynamic allocation (malloc/free, new/delete, garbage collectors)
  - the difference between scope and lifetime of a variable
- programming paradigms, e.g. imperative, OO, logical, etc; just have an awareness of what's out there

Other useful things to think about are program-design issues, i.e. why things are done in a certain way. These tend towards software engineering (how do I design and construct this big piece of software in a manageable and reliable way so that I can be assured that it will work?) issues rather than programming (how do I wrangle this specific algorithm or data structure) issues, but in the real world they are of supreme importance:
- maintainability and comprehensibility of code
- ability to reason about the behaviour of (and therefore debug) some code
- cohesion and coupling between modules, use of interfaces, etc
- testability
- will my assignment work when I submit it / will my boss not fire me for producing crap?

I would assiduously avoid concurrent programming initially. Get to it eventually because it is important, but it's much much harder than most people give it credit for, even those who work with it. Correctness issues (eg races) with concurrent code are a form of non-determinism; non-determinism not only makes it very difficult to prove that something is right (so people don't bother), it also makes it very difficult to detect when it's wrong - it will only go wrong when someone else is using the code and not when you're looking for the bug.

If you can express a concurrent program without sharing ANY state between threads (e.g. communicate via messages only), you can avoid much of the pain but you still need to be able to analyse your system in terms of deadlocks and livelocks, pre-requisites for actions ("happens-before"), etc. Stuff like mutexes etc are just tools, they are not solutions. Learning about mutexes will not allow you to safely build a concurrent program any more than learning about a circular saw will allow you to build fine furniture.
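
A minimal Python sketch of that share-nothing style: two threads where all communication goes through a queue (illustrative; the sentinel convention is just one common choice):

```python
import threading, queue

q = queue.Queue()

def producer():
    for i in range(3):
        q.put(i)
    q.put(None)          # sentinel: "no more work"

def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        print("got", item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```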

Same goes for all of the other tools of programming (OO, fancy algorithms, shiny and hip languages/toolkits, etc) - they're just tools. You still need to know how to analyse a problem, split it up into manageable chunks and solve the chunks in such a way that your chunk-solutions will cooperate properly without falling in a screaming heap.
posted by polyglot at 8:44 PM on September 10, 2009 [8 favorites]


And the "learn by doing" point is critical. Theory is absolutely necessary, but equally so is practise. You need to go forth, make the mistakes and find the bugs.

Personally, I find writing computer games (or anything else non-trivial that you will enjoy creating, enjoy using, and be proud of once you're done) to be the best way to learn.
posted by polyglot at 8:46 PM on September 10, 2009


Best answer: I got to this thread so late there's not much for me to add, but I certainly support the suggestions made. I will just underline the things I have found to be most important in my professional career:

- interfaces (any decent sized project or company will have many applications which need to talk with each other by exchanging data)
- multi-tier architecture (client, application server, database server)
- three biggies mentioned by others: SQL, Object Oriented Programming and Design Patterns
- regression testing (testing everything, not just the parts you changed)
- audit and security (if your program loses money or material you won't be anybody's friend)
- internationalization (language, character set, date/time formats, currency formatting, etc.)
- macro generation (the generation of program source code from a higher level meta language, useful for large systems and/or maximizing the productivity of a skilled team of programmers by eliminating grunt coding, such as data validation or formatting)

I know, the above is boring and will require a lot of research on your part. But to give you a little entertainment, think of the following programming problem that is not as easy as it sounds:
- something happens at a given point in time, and that time is recorded in a database; then later something else happens and that time is also recorded. How much time elapsed between the two events? Hint: I didn't mention what part of the planet the events happened in, whether the timestamps were stored locally or in UTC, whether Daylight Saving Time was being observed during one or both events, and don't forget leap seconds!
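
(One defensible sketch of a solution, in Python with made-up timestamps: store both events in UTC and subtract - though even this glosses over leap seconds, which datetime ignores:)

```python
from datetime import datetime, timezone

# Hypothetical events, stored in UTC at write time.
event_a = datetime(2009, 9, 10, 22, 9, tzinfo=timezone.utc)
event_b = datetime(2009, 9, 11, 1, 30, tzinfo=timezone.utc)

print(event_b - event_a)  # 3:21:00, wherever on Earth the events happened
```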
posted by forthright at 8:53 PM on September 10, 2009


You can learn a lot about basic algorithms and design patterns by checking out TopCoder's educational content. Maybe even try your hand at a few challenges.
posted by Terheyden at 10:09 PM on September 10, 2009


You're asking what algorithms to learn so you can get better at programming, but if you want to be a kickass programmer, able to code rings around your peers and write the sort of code that inspires joy and poetry, the algorithms themselves are almost beside the point. Algorithms are like APIs: you'll need a broad understanding of what classes of algorithms you should be looking at, how they work, and what the gotchas are, but you can always look up the details.

Lots of good stuff in here, but so far polyglot comes closest to the advice I'd give (as a guy with 20+ years of software development, 4 of them commercial, a PhD in Computer Science, and a couple of years of teaching undergrad CS classes. OK, so not quite a PhD. Few months yet.)

Read polyglot's post again (and the ones referenced from that post) for details, but what you really need is the mindset. You have to be able to breathe recursion. You have to understand tree structures in the depths of your soul. You need to be one with pointers. Yes, pointers - reviled by all, but (I'm convinced) incredibly important in learning to think at the multiple levels of indirection you need to be a good programmer. Yes, even in Java.

So, learn pointers. Which means (to cut to the chase)... learn C. Maybe C++, later, but learn C now. Play with pointers! Learn how to manage memory yourself! Spend hours tracking down that last #@$@#ing segfault! C is hard to be good at, but that's the point. It's a power tool. Sure, you can go a long time without needing it (and you can get a degree in computer science without ever writing a line of C, which is a tragedy IMHO) but sooner or later, you're gonna need C. You need to speak it (and fluently) if you want to be good at this. So learn it. And don't skip the pointers.

Now for the other extreme: if you really really want to be awesome, get comfortable with Scheme, or some other variant of Lisp. No, it's not useful in the real world (well, almost never). But learning it will rewire your brain, and force you to think about problems in a new light that will inevitably make you better. I rarely do stuff in Scheme, and never ever have I been able to use it for a job or anything like that. But the months I spent learning it are among the most valuable time I've ever spent on CS-related things.

Again, it's all about learning the mindset. And the most valuable thing you can do to get it: stop listening to me, or anyone else, and go code. Read books if you have to. Spend time thinking about how to design things nicely if you want, but geez, pick a project already (something with trees and recursion) and work on it. Doesn't matter if somebody's already done it before, because you haven't. Learn the algorithms as you need them, but you won't get much out of knowing the solutions to things until you've encountered the problems. Someday you'll be having a conversation with some guy who can discuss different set cover approximations without consulting his notes, but you'll be the one who has implemented a compiler and an image editor and a virtual machine, from scratch. Guess which approach is more fun?
posted by captainawesome at 10:58 PM on September 10, 2009


hehe, captainawesome's career sounds a bit like me (except I got that "few months" over a couple years ago). As an aside, I got my polyglot nickname for knowing too many programming languages when I was an undergrad.

Yes, learn C. Maybe not at first, but learn it. It is hard, very hard. You will reach a point where you think you know it, only to discover a year later that you were wrong, and this is a difficult thing to admit. C++ is like that but more so - it's a more powerful weapon that allows you to be more expressive, but it will blow bigger chunks out of your extremities when you don't pay attention. But when you master it, it is something magical.

Couple more things to add that you should look into fairly early:
- the memory models used by computers, both at the hardware (von Neumann vs Harvard) and the software (instruction pointers, stacks, etc) levels
- basic approaches to algorithm design

And one great big one I should have added first: design your program before you write it, even if the design is completely in your head and just lays out some basics. Hacking out code without a real clear plan of where you're going is the classic hotshot-rookie mistake and will get you in a world of pain. As you get better, your design tools and methods will get more powerful (new kinds of diagrams, charts, architectures and what have you), but all that matters is that you have a plan and you know what all the steps in the plan are, even if one of those steps is "figure this new algorithm out".

In terms of the "approaches to algorithm design", I mean the thought processes that go into all those "classic" algorithms and data structures that people think you should know. I'm talking here about generalities like "divide and conquer", "don't re-compute unnecessarily" (e.g. dynamic programming), "use another level of meta if it helps" (e.g. a computer program that writes computer programs), etc.

At this point those terms probably don't mean a lot to you, but consider the case of finding a word in a dictionary - you use the divide and conquer method. You don't search linearly through the dictionary going "nope, not aardvark, not aesthetic, not animal" etc, you open it in about the right place, look at a word and then decide to move forward/backward the appropriate distance, repeating until you find the right one. The usual name for that is "binary search" and it is a "classic algorithm", used frequently in software and obviously daily life to find items in O(log n) time - which means if it takes you 1s to locate an item amongst 10, it might take you 2s if there are 100 and 3s if there are 1000. It's the sort of tool that makes the management of very large quantities of stuff manageable, and is why Google is possible.
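
In code, the dictionary trick looks like this (a minimal Python sketch; the word list is just for illustration):

```python
def binary_search(sorted_items, target):
    # Halve the search space on every comparison.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1              # not found

words = ["animal", "dictionary", "search", "zebra"]
print(binary_search(words, "search"))  # 2
```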

To take that a step further, assume you know that divide and conquer will allow you to binary search a dataset... but binary search only works if the set is sorted. That gives you a constraint: is your input data sorted? In what ways is it unordered? Is it in reverse order? Is it very nearly sorted, with only a couple of elements out of place? Could I arrange for it to be efficiently stored in order as I collect the data? This sort of problem-solving thought process is how you go about not only designing new algorithms, but also combining existing algorithms to get the solution you need.

As you get into partitioning your code into libraries/modules/objects/abstractions/whatever, you need to ask "where do I cut?". Sometimes it's obvious, sometimes design patterns are very useful. Sometimes there's a particular interface or abstraction you're trying to achieve... but sometimes it's a lot less clear. A useful concept here is the thought that you should separate mechanism from policy. For example, a sorting function is a mechanism - it rearranges things so that they are "in order". But what is the ordering? The ordering is a policy, and when evaluated, it says "X comes before Y". Therefore when you look at a sorting function (the mechanism) in a library, it will generally have some means of passing a policy (usually a function, but it could be a more complex interface) into the sort function.

Therefore once a sort function is written, it can work on integers, strings, apples, oranges, anything, as long as there is some policy that defines the ordering.
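
In Python that separation is literally an argument (a small illustration; the fruit list is made up):

```python
fruit = ["Orange", "apple", "Banana"]

# sorted() is the mechanism; the key function is the policy.
print(sorted(fruit))                 # default policy: 'Banana', 'Orange', 'apple'
print(sorted(fruit, key=str.lower))  # policy: case-insensitive order
print(sorted(fruit, key=len))        # policy: shortest word first
```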

Depressing fact: it takes 10,000 hours of honest hard work to master something that is worth mastering and there are no shortcuts. Being a programmer and/or computer scientist is no exception but it's totally worth it when you get there.
posted by polyglot at 12:54 AM on September 11, 2009


P.S. - how could I have forgotten "commenting" your code... not technically an "algorithm", but if you want to be looked at as a true professional, be able to work in a team, or have anyone else support your code when you're on vacation or at home asleep, your code needs to be commented.

In general your comments should be the "why" of what you're doing, not the "what". If you are adding 1 to a counter don't include a comment saying "add 1 to click counter", say "track number of times user clicked same screen region".
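
To illustrate with a toy Python fragment (the double-submission guard is hypothetical):

```python
clicks = 0

# Redundant - restates the "what":
clicks += 1  # add 1 to click counter

# Useful - records the "why":
clicks += 1  # user re-clicked the same region; this count feeds a
             # hypothetical double-submission guard elsewhere
```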

Also, write your comments first, then put code between them. This will help you to lay out your program before your first line of code, and will expose weaknesses in your thoughts and plans, and will guarantee that your finished program will be commented to some degree.
posted by forthright at 5:12 AM on September 11, 2009


As a counterpoint to the recommendations to learn C, may I suggest that you don't have to learn it. People used to make the exact same arguments about the necessity of learning assembly language, and now nearly no one programs in assembly; most C compilers compile your C code into much more efficient assembly than you could ever hope to write.

Similarly, newer programming languages are abstracting away memory management (the painful part of C and company). Although C still outperforms other languages, and may always do so, the difficulty of writing and maintaining programs in C substantially outweighs its performance gains in all but a handful of situations.

So while it's true that learning C will force you to understand memory management, I feel that it's on the cusp of becoming a dinosaur anyway, and is not absolutely necessary.

Most of all (and a lot of the advice pro-C included this as a caveat), as a beginner, I'm afraid that the frustration of banging your head against C will kill your motivation. Much better to become proficient while programming in a more forgiving language, then later become multilingual, I think.

For algorithms and data, writing the naive (easy, inefficient) code for the N-body problem (a small simulation of the interactions of bodies in spaaaace), and then doing it the efficient way was, for me, a great way to drive home the importance of choosing/inventing the right data structure. Difficulty: intermediate/advanced.
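
Here's roughly what the naive version looks like, as a Python sketch (2-D, unit masses, G = 1, all made up for illustration - the efficient version would swap in a smarter structure, such as a Barnes-Hut tree):

```python
def step(pos, vel, dt=0.01):
    # Naive O(n^2): every body feels every other body.
    n = len(pos)
    for i in range(n):
        ax = ay = 0.0
        for j in range(n):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + 1e-9   # softening avoids divide-by-zero
            inv_r3 = r2 ** -1.5
            ax += dx * inv_r3
            ay += dy * inv_r3
        vel[i][0] += ax * dt
        vel[i][1] += ay * dt
    for i in range(n):
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt

pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vel = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
step(pos, vel)
print(pos)
```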
posted by olaguera at 10:37 PM on September 12, 2009

