Reading for computer science fundamentals
April 26, 2015 11:59 AM

I work in the software industry and I studied CS in undergrad, but I feel I either didn't get the best education in CS fundamentals or I wasn't paying attention (or both). Most of my knowledge of CS is fairly practical. Now I want to spend a few hours a week re-learning CS core subject matter to deepen my understanding.

I am especially interested in how things like the lambda calculus were derived and *proved*. I don't really care about rehashing data structures; I want to understand Church and how and why scientists figured out that a Turing machine was functionally equivalent to the lambda calculus. I sort of get it in a high-level way, but it's all fairly spooky and mysterious to me. I'd also like to learn the details of what monads and functors are and how higher-level languages (e.g. Java) are actually implementing some version of the lambda calculus under the hood somewhere.

I'd also be interested in a refresher on set theory and category theory, as I understand there is a lot of overlap. I'm generally not afraid of math but don't necessarily want to go on a deep dive into these subjects if it doesn't pertain to CS.

Grad students, PhDs, et al., what books and/or papers would you recommend? Or is there a Coursera class better suited to my purposes? I'm open to suggestions.

Thanks!
posted by deathpanels to Science & Nature (8 answers total) 31 users marked this as a favorite
 
Many of the topics you mention (like Turing machines) fall under the "Theory of Computation" sub-branch of computer science. Sipser's textbook is one of the standards and is good if you can learn from textbooks, though it's also expensive due to the crazy economics of college textbooks.

The Feynman Lectures on Computation is slightly eclectic and may take a little more effort to work through, but it is also a good intro to some of these topics.

For learning about how programming languages are implemented, I can recommend Write Yourself a Scheme in 48 Hours, preferably combined with another Haskell intro like Real World Haskell. This is just a taste of the world of lambdas, monads, and compilers, but it'll leave you with some good mental tools to explore further.

For math: I haven't read it, but Lehman and Leighton's Mathematics for Computer Science seems to be well regarded and is free online. Concrete Mathematics is similar and is written by some of the giants of the field.

Reading the original papers is often good; you can find various lists of good papers to read in lots of blogs or forums, like this one.
posted by mbrubeck at 12:46 PM on April 26, 2015 [3 favorites]


Sipser is the standard -- it's what they used to teach me this stuff, anyway -- but I never found it that great. As much as it's somewhat hokey and breathless, you could do worse than Gödel, Escher, Bach for an intro. It will lead you to a proof sketch of Gödel's incompleteness theorem, which builds recursion out of first-order logic. By that point, you're nearly there for Church-Turing.

"I'd also like to learn the details of what monads and functors are and how higher-level languages (e.g. Java) are actually implementing some version of the lambda calculus under the hood somewhere."

They're not; that's kind of the point. If you want to learn about some of the crazier theoretical constructs that the functional programming community has built, though, the best way to do that is to learn Haskell. The Haskell School of Expression by Hudak is a good intro. Recognize, though, that this will not help you understand anything other than certain functional programming languages!
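
If you want the tiniest possible taste right now (a sketch of my own, not from Hudak's book): in Haskell, a Functor is anything you can map a function over, and a Monad lets you chain computations that carry some context, like possible failure.

    -- A minimal taste of Functor and Monad using Maybe (possible failure).
    -- halve is a computation that can fail; (>>=) chains such computations.
    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    main :: IO ()
    main = do
      print (fmap (+ 1) (Just 4))          -- Functor: Just 5
      print (Just 16 >>= halve >>= halve)  -- Monad: Just 4
      print (Just 6 >>= halve >>= halve)   -- Monad: Nothing, since 3 is odd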

Finally, I would be remiss if I didn't mention SICP. A professor (not at MIT) once told me that the footnotes of SICP are an education in and of themselves. Certainly it is the best principles-based introduction to computer science out there.

Enjoy!
posted by goingonit at 3:45 PM on April 26, 2015 [2 favorites]


Response by poster: My understanding was that a language is Turing complete iff it is functionally equivalent to lambda calculus. But maybe I'm way off the mark.
posted by deathpanels at 8:14 PM on April 26, 2015


The proof that lambda calculus and Turing machines are equivalent is not that exciting. The original sketch of it is in Turing's famous 1937 paper, in the appendix. Basically, you can make a Turing machine that takes a lambda calculus expression on its tape and crunches through the derivations until it reaches a result. And you can make a lambda expression that just keeps applying the transition function of the Turing machine to an input state (represented as a string of binary, i.e. a number) until it hits a halting state. I can't actually find a clearer explanation of this than Turing's sketchy construction.
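
If it helps make "crunches through the derivations" concrete, here's a toy normal-order reducer for the untyped lambda calculus (my own sketch, in Haskell; it uses naive substitution with no capture avoidance, so it's only safe for terms like the closed example at the end):

    -- Terms of the untyped lambda calculus.
    data Term = Var String | Lam String Term | App Term Term deriving Show

    -- Naive substitution of s for free occurrences of x (no capture avoidance).
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)   = if x == y then s else Var y
    subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- One step of leftmost-outermost (normal-order) beta reduction.
    step :: Term -> Maybe Term
    step (App (Lam x b) a) = Just (subst x a b)
    step (App f a)         = case step f of
                               Just f' -> Just (App f' a)
                               Nothing -> App f <$> step a
    step (Lam x b)         = Lam x <$> step b
    step _                 = Nothing

    -- Keep stepping until no redex remains (may loop forever, as it should).
    normalize :: Term -> Term
    normalize t = maybe t normalize (step t)

    -- K = \x.\y.x applied to two identity functions reduces to the first one:
    main :: IO ()
    main = print (normalize (App (App k i) i2))
      where
        k  = Lam "x" (Lam "y" (Var "x"))
        i  = Lam "z" (Var "z")
        i2 = Lam "w" (Var "w")

A Turing machine "running" the lambda calculus is essentially this loop, carried out on a tape.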

For reasons discussed here, Turing machines tend to be the usual model in theoretical computer science (along with Boolean circuits). In terms of computable functions they are basically equivalent to the lambda calculus, but it is much easier to define coherent notions of time and space usage with Turing machines. So courses and textbooks usually don't cover the lambda calculus, although I guarantee that if you get a basic understanding of both it and Turing machines, you can flesh out the proof yourself with some work.

Sipser's book is pretty good (imo, VERY good), and unless your interest is very specifically in the lambda calculus, it's worth reading. You will get a solid understanding of the necessary set theory, then regular languages (which are exactly the languages recognizable by classic regular expressions; Perl's extended regexes, with features like backreferences, can match more), context-free grammars, Turing machines, the Halting Problem, the actual meaning of P vs. NP, and then there's more esoteric stuff at the end about probabilistic Turing machines and interactive proof systems. Look for online courses that use it as a textbook.
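
To give a taste of the regular-languages chapters, here's the kind of machine the book starts with, written as code (my own illustration, not from Sipser): a deterministic finite automaton accepting binary strings with an even number of 1s.

    -- A DFA for a classic regular language: binary strings with an even
    -- number of 1s. The state tracks the parity of 1s seen so far.
    data State = Even1s | Odd1s deriving Eq

    step :: State -> Char -> State
    step s '1' = if s == Even1s then Odd1s else Even1s
    step s _   = s

    accepts :: String -> Bool
    accepts w = foldl step Even1s w == Even1s

    main :: IO ()
    main = print (map accepts ["", "11", "101", "1"])  -- [True,True,True,False]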

There's a whole separate world of programming language theory and design. They like the lambda calculus, but not for the reasons Church and other mathematicians care about it; they just think it's a more fruitful model to use when trying to design programming languages. These are the people who made Haskell, but their work is kind of orthogonal to what's normally called theoretical CS.

On preview:
Yes, but these equivalences are only really meaningful mathematically. You could write a lambda expression that would run Java programs (encoded as huge binary numbers), and you could (obviously) write a Java program to run lambda expressions. That's all. Languages like Haskell do implement the lambda calculus in some sense, but that's a design choice.
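
To show what "implement the lambda calculus in some sense" can look like, here are Church numerals, the lambda calculus's encoding of arithmetic as pure functions, written directly in Haskell (a standard encoding; the helper names are mine):

    -- A Church numeral n is the function that applies f to x, n times.
    type Church a = (a -> a) -> a -> a

    zero, one :: Church a
    zero _ x = x
    one  f x = f x

    -- Successor: apply f one more time.
    suc :: Church a -> Church a
    suc n f x = f (n f x)

    -- Addition: apply f, m times, after applying it n times.
    plus :: Church a -> Church a -> Church a
    plus m n f x = m f (n f x)

    -- Convert back to an ordinary Int to see the result.
    toInt :: Church Int -> Int
    toInt n = n (+ 1) 0

    main :: IO ()
    main = print (toInt (plus (suc one) one))  -- 3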
posted by vogon_poet at 8:17 PM on April 26, 2015 [3 favorites]


I feel I should add: the Curry-Howard correspondence is worth reading about, for the connection between typed lambda-calculi (and therefore typed programming languages) and mathematical proofs. But I know basically nothing about this; it just seems to align a lot with your interests.
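
The slogan, as far as I understand it, is "propositions as types, programs as proofs"; here's the barest sketch of the flavor (my own toy example, which barely scratches the surface):

    -- "If A implies B, and A holds, then B holds" (modus ponens):
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x

    -- "A and B" implies "B and A" (pairs model conjunction):
    andCommutes :: (a, b) -> (b, a)
    andCommutes (x, y) = (y, x)

    main :: IO ()
    main = do
      print (modusPonens not True)           -- False
      print (andCommutes (1 :: Int, "QED"))  -- ("QED",1)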
posted by vogon_poet at 8:47 PM on April 26, 2015 [1 favorite]


Some of Phil Wadler's more populist articles might be helpful:
Old ideas form the basis of advancements in functional programming

Church's Coincidences

Java doesn't implement some version of the lambda calculus; it implements some version of a Turing machine. Lisp and its derivatives (such as Scheme and Clojure), along with functional languages like Haskell, implement some version of the lambda calculus.

For a general overview of CS topics, A. K. Dewdney's The Turing Omnibus is good.

For monads, Wadler's '92 paper, 'The essence of functional programming', introduced them to the Haskell community (building on Moggi's work).
posted by at at 1:01 PM on April 27, 2015 [2 favorites]


deathpanels: you say, "My understanding was that a language is Turing complete iff it is functionally equivalent to lambda calculus". This is true, but it is a roundabout way of getting there. I was going to write a long post about this, but when at posted Wadler's papers, I realized that "Church's Coincidences" says exactly what I was about to say.

Long story short, the whole point of this stuff is that it doesn't matter which model of computation you start with: you'll end up with the same computational power. Since the computers we actually have are, more or less, von Neumann machines, compiler and interpreter writers tend to base their under-the-hood abstractions on that model. So Java doesn't use the lambda calculus to do what it does, but that doesn't matter, because it gets to the same place.
posted by goingonit at 1:08 PM on April 27, 2015 [1 favorite]


Just to add another thing: you might enjoy this still-unfinished tutorial on how to actually write a lambda-calculus-based strongly-typed functional programming language, from the ground up. It looks like it's fallen behind schedule, unfortunately, but the material there is pretty good.
posted by vogon_poet at 6:23 PM on April 27, 2015

