help me understand programming grammar
August 14, 2006 11:36 AM

How do I learn what goes on "under the hood" of programming languages?

I HATE HATE HATE doing something "just because that's the way you're supposed to do it." I am not comfortable until I understand WHY I have to do it that way. Also, I've found that every time I've gotten to understand the logic behind a language construct -- the logic of the language's designers -- I become a better programmer.

I'm talking about basic stuff here, like what exactly is an operator and how is it different from a function? Why, in many languages, do we say x = Math.cos(3), but not x = Math.add(3,4)? Why is it x = 3 + 4? I realize that this sort of thing varies from language to language, but what is the decision based on?

What does the "new" mean when you create an object instance via something like var cat = new Feline()? I totally get that you're running a constructor function inside the Feline class. But what IS "new"?

Since my major programming language is ActionScript, and since that language is about to radically change (ver. 2.0 --> ver. 3.0), I recently got puzzled by the fact that in the old version, ALL data types are capitalized: Boolean, String, Void... whereas in the new version, some are capitalized and some aren't: String, int, void, Boolean...

I spent about three hours searching the web for the WHY of this, and pretty much all I found were rules: remember that void now starts with a lower-case v. Fine. But WHY? What does it mean?

Feel free to answer these specific questions, but I come up with more like them every week. I'm more interested in a good source -- or sources -- to help me understand all these sorts of things (mostly in C-based languages). I'm a pretty smart guy, but I don't have a CS background. So I'm hoping for something that a sharp lay-person can get.
posted by grumblebee to Computers & Internet (34 answers total) 15 users marked this as a favorite
 
Best answer: Read The Design and Evolution of C++. Stroustrup engagingly and cogently explains his design decisions (including various compromises and trade-offs); the first third of it reads almost like a novel -- it's exciting, it has a plot and a protagonist.
posted by orthogonality at 11:39 AM on August 14, 2006


I came to all of these realizations by breaking things repeatedly over a long period of time and figuring out why they actually do work. But I agree with orthogonality: that's a great read...
posted by SpecialK at 11:43 AM on August 14, 2006


Learn C.
posted by Leon at 11:45 AM on August 14, 2006


A ton of this stuff is arbitrary and designed to make the language easier to learn or use. For instance, in many languages (e.g., C++) addition IS a function, and they let you use an operator for it so you can write code that looks readable: a + b + c + d vs add(add(add(a,b),c),d)
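Sketched in C (a toy example -- the add function here is made up just for illustration):

int add(int a, int b) { return a + b; }

int readable(int a, int b, int c, int d)
{
    return a + b + c + d;             /* operator syntax */
}

int awkward(int a, int b, int c, int d)
{
    return add(add(add(a, b), c), d); /* the exact same computation */
}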

Learning more languages is probably the best way to go, along with learning the genealogy of languages (ActionScript was certainly designed to be easy to pick up by people who knew C/C++).
posted by aubilenon at 11:46 AM on August 14, 2006


Also, read some of the histories on C on the web -- many answers are of the form, "language Y does this this way because it's based on language X, which did it the same way, and it was simplest to copy that implementation, or it was least surprising to programmers (who we wanted to adopt our new language) to continue the tradition." You'll see this especially on the discussion of why C arrays are what they are, and why they decay to pointers when passed; the answer is basically, "because that's how BCPL did it".
posted by orthogonality at 11:46 AM on August 14, 2006


Best answer: Programming languages are made up by people so most of the rules are arbitrary. You pretty much just need to accept that.

You should probably spend some time learning the basics of assembler/machine code and 8-bit microprocessors if you want to learn what computers actually do. It makes programming them a whole lot easier, even in very high-level languages.

Why, in many languages, do we say x = Math.cos(3), but not x = Math.add(3,4)? Why is it x = 3 + 4?

This is inherited from languages that compile to machine code. Most CPUs can natively add two numbers together so there's no question about how it's done - it's a single machine code instruction and is thus "native" to the language, with its own symbol.

Most CPUs cannot calculate cosines on their own and must run a function -- supplied by the language's math library -- that does it, which probably contains many tens of machine code instructions. Thus it is represented as a function call in the language.
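You can see the asymmetry in a C fragment like this (what the compiler actually emits varies by compiler and CPU, but the shape holds):

#include <math.h>

double f(double a, double b)
{
    double sum = a + b;  /* typically a single native ADD instruction */
    double c = cos(a);   /* a call into the math library: many instructions */
    return sum + c;
}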

But what IS "new"?

It's a keyword. It tells the interpreter to run the function that creates new objects - reserving and configuring memory, etc. There's nothing deeper to it.
posted by cillit bang at 11:48 AM on August 14, 2006


Not exactly what you are talking about, but I recently picked up 'Code Complete', which is a pretty great book about the 'why' of the construction of code. Really clear information on WHY programmers do things this way or that, the value of abstraction, etc.

I came to programming via ActionScripting with no CS background either, and I feel like Code Complete is sort of the perfect book to fill in a lot of the stuff I had just been 'doing' because it's just how I learned to 'do it' for so long.

Mentioning Code Complete also seems to bring up 'The Mythical Man-Month', which I know nothing about but sounds similar. Sorry, this might be a slight derail from what you want, but you might want to flip through 'Code Complete' if you ever run across it.
posted by darkpony at 11:48 AM on August 14, 2006


Best answer: At an even more basic (C) level, K&R describes some of their rationale.

I'll tackle your questions, though.

Operators are generally the same as their mathematical equivalents. Programming languages largely evolved out of mathematics (even more so now, but originally too). Cosine isn't an operator, it's a function. It would make no sense to have a cosine operator. The only unary operators tend to be logical ones, plus, in C/C++, the increment/decrement operators and some language features (new, delete, sizeof, address-of, and indirection are classified as unary operators).

new calls the constructor as a side-effect. The constructor itself isn't part of new. new allocates memory on the heap & returns a pointer, type-safely. It also supports exceptions, & a few other things dealing with objects. It also allows placement new: constructing an object into a specific (pre-allocated) portion of memory. The 'new' keyword came about with object-oriented programming. In C/C++ there is a very conscious distinction between local/stack allocation & dynamic/heap allocation. Local/stack allocation didn't change when classes were introduced, since they could be treated in the same way as structs, but malloc was a rather ugly way to allocate memory for classes, since there was usually a bit more work that had to be done (and more type-safety that had to be observed). Hence the new operator.
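A rough C-flavoured sketch of the two jobs new bundles together (Feline and Feline_construct are made up for illustration -- C has no constructors, and error checks are omitted):

#include <stdlib.h>

typedef struct { int lives; } Feline;

void Feline_construct(Feline *f) { f->lives = 9; } /* stand-in constructor */

Feline *make_cat(void)
{
    Feline *cat = malloc(sizeof *cat); /* 1. reserve memory on the heap */
    Feline_construct(cat);             /* 2. initialize it */
    return cat;
}

/* Placement new skips step 1: it runs the constructor on memory
   you've already set aside yourself. */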

The general rule with capitalization/non-capitalization that many newly designed languages observe (C# comes to mind instantly; I'm sure there are others) is that value types (those types that are primitive & will go onto the stack) are written in all lower-case, while classes/object types -- those with more members, which will go on the heap -- are mixed-case. (That's the difference in C#; there's no real distinction in C++ -- that was another design choice, btw: to treat UDTs as close as possible to built-in types.) The 'rationale' behind it was that C's datatypes were all lower-case and Java's datatypes were mixed-case. C's types were simple; Java's were classes.

I have no idea where you would go to read about stuff like that. These are things that you get an understanding of when you work with a lot of different languages over a long period of time. I'm not sure you could even really write something like this all down.

That said, feel free to email with other questions.
posted by devilsbrigade at 11:52 AM on August 14, 2006


I think you might find it beneficial to learn how a compiler and operating system work under the covers, since some of these inconsistencies you find are to make the program run more efficiently, some are to make it easier to compile, and some are to make it easier to program in.

The dragon book is the classic compilers reference (and apparently a new edition is supposed to be coming out this month). The Elements of Computing Systems is something I haven't read myself, but I have heard good things about it as a textbook that takes you from the most basic building blocks of computers (AND, OR, NOT logic gates) to a full programming language.
posted by inkyz at 11:52 AM on August 14, 2006


If you *really* wanna know how compilers treat languages, you want the Dragon Book. This is a graduate-course level text in compiler design, and it's likely overkill for what you need, but maybe not.
posted by baylink at 11:54 AM on August 14, 2006


Note that the dragon book is ancient, and has nothing about any of the changes in compilers over the last 20 years. If you want something that covers anything other than C languages, you'll probably want to look elsewhere.
posted by devilsbrigade at 12:03 PM on August 14, 2006


Programming languages are arbitrary. Designers of programming languages use "+" to mean "add these things together" because they hope it will be intuitive to people learning the language. You could write a programming language where "+" means "subtract these things" and "-" means "multiply these things by the square root of 19". Do you think that that language would catch on faster or slower than other languages?

If you'd like to check out some programming languages that are less, um, user-friendly, see Brainfuck or Malbolge. Be sure to check out the "Hello World" example programs for each.
posted by jellicle at 12:03 PM on August 14, 2006


So... I'm not sure if there's any single-source answer to your question. Maybe get yourself a CS background. Programming languages make about as much sense as human languages, in that many decisions are arbitrary and there are a lot of idiomatic constructs that don't have any single point of clear definition.

(I once had a discussion with someone whose native language isn't English about why English has articles. He didn't see the need for them. I told him not using them made him sound... well, dumb. I think there's a parallel somewhere, but not that you're dumb.)

Many programming languages do things a certain way either to copy another language or as a reaction against other languages. So maybe learn some more programming languages, which isn't a trivial task.

As to your specific questions:

Why Math.cos(x) but not Math.add(x,y)?

It's a good question, and my best answer is that it's what's always been done. But there are a lot of possible reasons.

One, most programming languages don't make assumptions about processor capabilities, and the default language keywords/operators represent the common subset of processor instructions. Cosine is typically not expressible as a simple assembly instruction, while addition is.

Two, the built-in operators represent the most common operations. Addition happens a lot more often than trigonometry, and if every possible operation were in the language, the language spec would be huge. The less frequent operations are pushed out of the language core into libraries.

Three, every operation has to have a keyword, and once your language has too many keywords, it gets really annoying, both from a compiler's point of view and a developer's, since you can't usually use keywords as variable names.

What does the "new" mean...?

When creating objects in languages that put objects on the heap (as opposed to the stack), there are two components to creating the object: allocation and initialization.

Allocation is when memory is reserved to hold the object's data. And that's what new does - it allocates memory.

Implicitly, it also initializes the object. Presumably, an uninitialized object isn't very useful. So it invokes the constructor, which sets the object's fields to their default values.

But that explanation assumes you understand how memory is laid out in most modern operating systems and the distinction between heap and stack memory. And while it's not so complex, it's too much for me to explain here. You probably need a basic CS textbook.

Maybe try the MIT Open CourseWare: 6.035 Computer Language Engineering or one of the other CS courses MIT has.
posted by GuyZero at 12:07 PM on August 14, 2006


Best answer: If you don't want the whole academic treatise, Jack Crenshaw's Let's Build a Compiler may be more up your alley. It was never actually finished, but it uses a bottom-up approach that I found easy to understand.
posted by kindall at 12:08 PM on August 14, 2006


Inger is another thing that looks interesting; its authors wrote a free book on writing the compiler.
posted by kindall at 12:14 PM on August 14, 2006


I agree with cillit bang about assembly language. Machine language, expressed in binary digits, is what goes on under the hood of all programming languages. One level up from machine language (bits) is assembler.
posted by davcoo at 12:16 PM on August 14, 2006


I would also second learning C. It's like learning Latin: you suddenly discover you half-understand a whole bunch of other languages. (Not that I know Latin.)
posted by GuyZero at 12:26 PM on August 14, 2006


Never mind C. Learn Lisp if you want a nice consistent language where everything makes sense.

C has all the same arbitrary rules as ActionScript and other languages — there are operators and there are functions and there are data, and each has a special representation heavily influenced by two competing desires: understandability and compilability.

Lisp admits that everything is data, or everything is functions, or everything is operators... etc. By internalizing this realization, Lisp is incredibly simple to define as a language. Lisp is just lists and lambdas. Lists are nouns, lambdas are verbs.

What's more, lambdas are explicitly represented with lists, so really it's all about lists. Even cooler, lists can themselves be represented with lambdas (Church encoding builds cons, car, and cdr out of nothing but functions). So pick your poison.

Lisp is the only language I know of besides assembly which exposes to the programmer the fact that data is functionality and functionality is data.

If you can learn Lisp, you will then understand that everything is just notation. The C guys made some functions special and gave them "operator" status. Others they left as plain old functions defined in the normal C way.
posted by clord at 1:00 PM on August 14, 2006


Response by poster: Fantastic answers (for some reason, the Mark as Best Answer link isn't working right -- or displaying right -- for me right now, so I'll go back and do it later).

I own "Code Complete" and started it once. I agree, it's great. Alas, I got busy on a project and had to put it down, but I'll definitely pick it up again.

I briefly studied C and Assembly in college. I would love to try Assembly again. Any suggestions for resources? (I mostly work on Wintel machines.)

RE: "programming languages are arbitrary." Yes, but that's not the whole story. That would be like telling someone who is trying to understand English grammar, "Don't bother. It's all random." The truth is more complicated.

Humans make programming languages. Humans CAN be arbitrary and often ARE arbitrary. Humans also can and do make mistakes. BUT... humans can also be logical and make things right. So surely any language is going to be a mixture of arbitrary decisions, mistakes, and soundly-reasoned choices. I want someone to guide me through the various features and tell me which ones were well-reasoned and which ones weren't. (I'm not asking someone here to do all that work. I'm looking for a book or web resource.)

And I'd LOVE someone to spell out the difference between a heap and a stack!
posted by grumblebee at 1:04 PM on August 14, 2006


Best answer: Programming From the Ground Up

Assembly language will teach you what goes on when a function gets called, how arguments are passed, how the call stack is maintained, etc. After you learn it well, you can really "see" in your mind's eye what your higher-level programs will eventually translate down to.

And I'd LOVE someone to spell out the difference between a heap and a stack!

A stack is a data structure that has, at minimum, these operations: push something onto the stack, pop something off the stack. No random access to elements buried deep in the stack. This is very helpful for keeping track of potentially recursive calls: every time you call a new function, put the return address onto the stack; when your function returns, pop off the return address and jump to it.
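A minimal sketch of those two operations in C (fixed capacity, no overflow checks):

typedef struct {
    int data[100];
    int top;             /* index of the next free slot; starts at 0 */
} Stack;

void push(Stack *s, int v) { s->data[s->top++] = v; }
int  pop(Stack *s)         { return s->data[--s->top]; }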

A heap is a totally different structure, of which there are very many variants. Wikipedia is actually a pretty good source for these kinds of general math/cs questions.
posted by sonofsamiam at 1:10 PM on August 14, 2006


Once you've learned C & you have the basic idea of how programming languages work, Inside the C++ Object Model by Lippman is a great book explaining how C++ works. It's not for beginners, but if you understand it you'll have answers to most of your questions.

Stuff like void vs Void is not that technical. It's more historical, conventional & social. It's similar to how notation for vectors versus scalars will change from time to time and from discipline to discipline.
posted by Wood at 1:19 PM on August 14, 2006


I think he means heap as a memory storage area, as opposed to the heap data structure. The two share a name, but a memory heap needn't be implemented as a heap data structure; you could implement one as a linked list of free blocks.

From a programming standpoint in a C-like language, the stack is where "automatic" variables are kept, as well as function arguments. The stack is managed automatically, but the amount of space each function's variables take up is fixed at compile time; i.e., you have to figure out before the program runs how much space you need, which is usually no problem.

The heap is where you can request variable-sized chunks of memory storage. For example, if you're writing a browser, where do you store the image data you download? Because heap storage is requested dynamically, it can only be referenced by a pointer (or a reference, depending on what terms you use). You need to specify how much space you want as well.

Many modern languages (like Java) make a lot of use of the heap, but hide it well. In a piece of code like this:
public void f()
{
    String s = new String("Hello");
}
s is an automatic local variable on the stack, which contains a reference to a block of memory on the heap, which is allocated by "new" when the program runs this function. So the distinction between the stack and the heap is blurry in some languages. In C it's a bit more obvious:
#include <stdlib.h>
#include <string.h>

void f()
{
    char *s;             /* automatic local on the stack */
    s = malloc(8);       /* reserve 8 bytes on the heap */
    strcpy(s, "Hello");  /* copy the string into that heap memory */
    free(s);             /* heap memory must be released by hand */
}
where s is an automatic local and the allocation of heap memory happens in the call to malloc and the initialization happens in the call to strcpy.

Of course, not knowing C or Java, this probably makes no sense to you, so sorry about that.

Learning C is good because it painfully forces you to do everything manually. There is next to nothing hidden "under the hood". It's only barely better than programming in assembly.
posted by GuyZero at 1:24 PM on August 14, 2006


Response by poster: You can use C analogies. I HAVE worked with C. It's just been about 15 years. But I understand stuff like pointers. And I know the idea of a stack as a data structure.
posted by grumblebee at 1:30 PM on August 14, 2006


I'll second clord's recommendation of Lisp, though I'd say you should look at C or Assembly as well. C and Assembly will teach you more about the internal details of how computers work, and Lisp will teach you more about the abstract nature of computation. Any language has to balance these aspects of programming (working efficiently with the machine, and being as comprehensible as possible to the human mind), so it's worth thinking about them both.

For C, people have already listed some good sources. For Lisp, there are a couple of books that could suit your purposes well. The Structure and Interpretation of Computer Programs is a classic. It's the introductory computer science textbook at MIT, and uses Scheme (a dialect of Lisp) to illustrate some of the core concepts of computer science. It's fairly math-heavy in places, but if you can follow it, it'll give you the background to really understand a lot of the things you're asking about.

The full text is available online, as are recordings of a couple of courses based on it: video of a set of lectures given by two of the authors in a course for Hewlett-Packard employees, and audio and video of a U.C. Berkeley computer science class. (I've been listening to the latter as a podcast, and have learned a lot. It also presents some concepts from the book in a way that might be easier for you if you're more a language person than a math person.)

The Little Schemer is less detailed in some ways -- it's more aimed at thinking about computation than at teaching the ins and outs of programming -- but might also teach you what you want to know. It's also playful and a lot of fun, and feels easier without being dumbed down. (I also haven't read as much of it as I have of SICP, so there's less I can usefully say about it.)

If you feel like diving in over your head, or if you just enjoy primary sources, you might also take a look at the original 1960 paper where the Lisp language was introduced, John McCarthy's "Recursive Functions of Symbolic Expressions and Their Computation by Machine" (part 1, but just ignore that--there was never a part 2). The language has changed a lot since then, but this is still the first presentation of a number of key programming concepts. You may or may not be able to follow it, but it's pretty short, and you could have a lot of fun trying if you enjoy that sort of thing. (And if you don't enjoy it, you'll know soon enough).
posted by moss at 1:49 PM on August 14, 2006


I disagree with some of the above because I think several of your questions reflect someone who learned programming in a very results-focused manner rather than a How To Program methodology. So saying "learn C" or "learn PDQRFD" isn't necessarily a help.

Case in point -- you ask about the 'new' operator and mention calling a constructor, which is more or less correct on its face. But the why really covers several topics. At its base you have the issue of what a variable is, and how the easy-to-remember word gets turned into a location in memory and the allocation of that memory -- something that is expanded on very effectively when you learn assembly language, assuming you're taught assembly with an eye towards revealing what's happening in the machine.

Depending on the use of the 'new' keyword you also get into the topic of data structures, a topic compsci students likely take a whole class on all by itself. In an object-oriented language you're then onto the topics of data hiding and strong typing.

Personally I disagree with the suggestion to learn C in particular because a lot of the things you [can] do in C that illustrate these points are somewhere between bad practices and discouraged or unnecessary in modern/typesafe languages.

The problem with learning assembly to learn these things is you need to find a book that's interested in telling you the answers to the questions you're asking and not how to best write a device driver or eke some additional speed out of something. It looks like Duntemann's book would be a good solution for that. The fact that it comes with a CD with everything you need to do all the examples is a big plus.

Another option I just came across is Kip Irvine's book, which I mention not because I know anything about it but because I used to work with the guy at M-DCC, and it seems that it's now being used at my alma mater. I am no doubt biased, but I felt that FIU found an excellent balance between theoretical and practical, so if they're using it I suspect it meets your requirements.

All that said, if you're really interested in this stuff I say why not pursue proper education in computer science? Being interested in how these things come together is the core of what makes someone a good CS practitioner and for this level of stuff nothing beats having a good instructor talk about it. I'm not big on nostalgia but those classes were some of the most fun I had in academia. holy crap I am a mega-nerd
posted by phearlez at 2:13 PM on August 14, 2006


Best answer: It doesn't have anything about the history of programming language conventions, but for understanding what goes on between the electricity and your program, I can't recommend "Write Great Code, Volume 1: Understanding the Machine" highly enough.

Ignore the vague idea that the aim of the book is to help you write more elegant, efficient code. It's more of a comprehensive and, as importantly, a surprisingly readable guide to the way the CPU and memory work, how numbers and characters are encoded and manipulated, essential I/O concepts, and more.

For instance, pages 320-321 will answer your stack/heap question completely and concisely in about four paragraphs. And if you want to read another six pages about how memory allocation actually works, well, you can!

I know this isn't exactly answering your original question, since it's not about programming languages per se, but I'm getting the feeling that you're interested in this stuff as well.
posted by chrismear at 2:20 PM on August 14, 2006


I googled chrismear's suggestion several times before discovering he didn't actually recommend "Great White Code".
posted by GuyZero at 2:24 PM on August 14, 2006


Response by poster: All that said, if you're really interested in this stuff I say why not pursue proper education in computer science?

I would LOVE to do that, but it's not practical for me. Too many responsibilities. I can't quit my life and go back to school. But I CAN study on my own time.

I think several of your questions reflect someone who learned programming in a very results-focused manner rather than a How To Program methodology.

Yup, and it's something I've been rectifying over the last few years. For someone like me, there's a ton of easy-access material about best practices -- which is what's really important in the long run (in terms of writing clear code, working on a team, etc.). So I'm good with design patterns, OOP code, etc. But great as these techniques are, they don't tell me much about the sort of questions I'm asking here. Or maybe they do, and I'm not good at getting the message.

I think there are many people in my boat. More than ever before. There was a time when most coders had some sort of formal education. Nowadays, many people learned coding via Flash experimentation -- or something like that. They learned how to get the job done in a clunky way, without any real understanding about what they were doing.

ActionScript -- Flash's language -- has actually evolved into a fairly clear, well-thought-out language (especially in version 3.0). So one can finally employ tried-and-true "best practices" in that language. And the education mill is waking up to that fact and starting to produce books and courses based around such practices.

Same with JavaScript and PHP. When I got into the webdev game, there were all these hack books about how to make rollover images -- and there were super-advanced books for CS types. Now, it's not rare to find books on Design Patterns or OOP written for beginners.

But as a non-CS person, I'm still in the dark much of the time about what's REALLY going on. I feel like I'm being taught the best way to drive a car, which is great, but it gets me no closer to knowing how the engine works than when I used to veer all over the road.

Most people don't really care how an engine works, so there's not an industry based around presenting this knowledge to non-specialists. But I'm hoping that there IS a source somewhere out there -- or a group of sources -- that THIS non-specialist can use.
posted by grumblebee at 2:35 PM on August 14, 2006


Response by poster: moss mentioned "The Little Schemer." I worked through this about 20 years ago, back when the book was called "The Little Lisper." It was one of the most fun intellectual romps I've ever had. The book, in its own quiet way, is a work of art. It doesn't really answer my questions here, but I highly recommend it to anyone who wants a fun, enlightening braintwister.
posted by grumblebee at 2:41 PM on August 14, 2006


Well I wouldn't have suggested quitting life and going back to school - I'd have suggested being a part-time student. But you know what your capabilities are, not us. I just think it's a shame for someone interested in the real meat not to pursue it. But after tracking down your writing blog it sounds like you've got a pretty full schedule.

If you want to duplicate the experience on your own, then looking back at my own school experience for guidance, I'd say....

Work with one of the assembly books before anything else. If you have a grounding in solving problems programmatically then you're poised to do well with it. I think this is an area where an instructor would make a world of difference, but simply having to go through the process of using keywords to allocate memory and doing pointer arithmetic will probably illuminate you on what happens "under the hood."

After that point there's books on algorithms and data structures - maybe someone here has a suggestion for a good book to use. My course used Weiss' book but he was a professor at FIU and I am always suspicious of textbook selections where a colleague is the author. I wasn't fond of the book either, though it was the first edition of the (apparently now abandoned - pity) Ada book. On the other hand, you can get it cheap.

After that point the logical step would be Operating Systems. None of the questions you've posited here seem to relate to them but if you get into issues of file structures you might find it worthwhile.

More likely you'd find worth in the kind of material that is covered in Theory of Algorithms. One of the greatest tragedies of the current computing age is how completely unable so many professionals are to tell a good way of doing things from a bad one. Yes, the STL and other things may provide you a linked list, and you may never need to write your own, but knowing what tools -exist- is part of selecting good ones.

While these may not make up a complete CS program and may not necessarily be the best for your career (DB normalization, anyone?) they're some of the ones that made the biggest impression on me.
posted by phearlez at 3:18 PM on August 14, 2006


Some languages do use Math.add or its equivalent for some types. In Java, for instance, you would do:

BigDecimal first = new BigDecimal(74);
BigDecimal second = new BigDecimal(26);
BigDecimal third = first.add(second);

...
posted by Arthur Dent at 3:55 PM on August 14, 2006



I coded in assembler first, which helped me understand what went on in C. I had only three registers available at the time, and if you jumped off to a "function" (a section of code that does something common), it would use the registers for its own purposes.

Then you learn to push your registers to the stack, and call the function, then pop the values back off again.

Which is what C does for you when you call a function: it pushes all the registers to the stack, and then the arguments. The function reads as many variables off the stack as it was defined to take (hopefully the same number as the compiler pushed).

Eventually, the function completes, and pops off the arguments, then the registers. The reason the caller pushes and pops is so that the stack doesn't get messed up if you end up passing the wrong number of arguments.
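In C terms, a sketch of that classic convention (real ABIs differ -- modern x86-64, for instance, passes most arguments in registers):

int add(int a, int b)  /* callee: reads a and b out of its stack frame */
{
    return a + b;      /* result handed back in a register */
}

int caller(void)
{
    /* roughly: push 3, push 2, call add, then pop both arguments back off;
       because the caller cleans up, a wrong argument count can't permanently
       corrupt its stack */
    return add(2, 3);
}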

So, by knowing assembler, I learnt how C really worked. C just does a lot of tedious things for you. If you set up a "struct" in assembler, you as a coder have only the pointer to the start, and have to reference, say, the third "int" by computing "ptr + 2 * sizeof(int)" yourself. C does that for you automatically when you say "struct.nameofint", etc.
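Sketched in C -- offsetof computes the field's byte offset, which is exactly the arithmetic the compiler hides behind member syntax (Cat and get_lives are made up for illustration):

#include <stddef.h>

struct Cat { int age; int weight; int lives; };

int get_lives(struct Cat *c)
{
    /* what "c->lives" does for you: base address plus the field's offset */
    char *base = (char *)c;
    return *(int *)(base + offsetof(struct Cat, lives));
}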

Then when you go to C++ you realise how icky it is, what "new" might end up doing when you create something :) (all the functions it calls for you, all the memory it allocates). But it hides even more tedious things.

If you really want to understand it, you'd have to do assembler. Alas, Intel assembler is about the worst out there, but probably the most useful to know these days.
posted by lundman at 7:06 PM on August 14, 2006


Best answer: grumblebee: I can't believe you tagged my two-word answer. Anyway, to expand...

I'd argue that there are four fundamental approaches to computer languages that you need to get your head round - procedural (C), object-oriented (C#), functional (Scheme) and relational (SQL).

I say learn C first because, once you've got pointers, malloc etc. under your belt, a lot of what those other languages are doing can be mapped to C-like concepts (e.g. "x = new Object" == "malloc enough space on the heap to contain an Object, call Object_constructor() with the address of the space, return the address of the space as a pointer").

If you want to deconstruct it as far as assembly that's fine, but C gets as close to the hardware as I ever want to go, personally.
posted by Leon at 7:32 AM on August 15, 2006


The school at which I studied programming twenty years ago gave you an "introduction to computers" class (which didn't even include "this is a mouse" back then) before throwing you, sink-or-swim, into IBM 360 assembler. It was cruel, but probably less cruel than teaching you BASIC or Pascal first and letting you think you understand computers, taking your money all the while.

I suspect that you can't really master programming until you know at least one assembly language. Ideally it would be one for which you have to write your own multiply and divide routines. I cut my teeth on the 6502.
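For flavour, here's the shift-and-add idea behind a hand-rolled multiply, written out in C (on a chip like the 6502 you'd do the same dance with shifts and the carry flag):

/* Multiply without a MUL instruction: add a shifted copy of a
   for each set bit of b. */
unsigned int mul(unsigned int a, unsigned int b)
{
    unsigned int result = 0;
    while (b != 0) {
        if (b & 1)       /* is the lowest bit of b set? */
            result += a; /* then this power of two contributes a copy of a */
        a <<= 1;         /* each next bit of b is worth twice as much */
        b >>= 1;         /* consume one bit of b */
    }
    return result;
}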
posted by kindall at 11:27 PM on August 17, 2006

