Self-revealing algorithms
September 15, 2005 8:17 AM   Subscribe

Is there a way to represent algorithms in a form that in turn requires minimal or no knowledge of other algorithms?

Is there a way to separate the representation of any algorithm from the algorithm? That is, can an algorithm be described in such a way that no or very minimal a priori knowledge about the representational language is required? Or to be described in such a way that an algorithm can "represent itself"? (I'm thinking pseudocode does not count, since knowledge of the English language and how to parse English grammar is highly specialized; but it is closer to what I am after.)
posted by Rothko to Science & Nature (33 answers total) 2 users marked this as a favorite
 
Flowcharts?
posted by loquacious at 8:23 AM on September 15, 2005


Flowcharts?

And you call yourself Loquacious?

He has a good point. UML is an attempt to provide a visual representation of a program's logical flow.

But as far as algorithms themselves, you need a notational language, because you have constructs (decisions, loops, operations) to represent.
posted by curtm at 8:38 AM on September 15, 2005


Nah. Flowcharts are just as bad as pseudocode. You have to know the language that the flowchart's written in. You also have to know what a box means, what an arrow means, and so on.
posted by nebulawindphone at 8:38 AM on September 15, 2005


Best answer: Other than building a functioning physical model of the algorithm, I don't see how that'd be possible. Any diagrammatic or descriptive representation is going to require some understanding of how the diagrams or descriptions work, isn't it?

Or are we talking about some SETI-like self-describing build-from-first-principles language?

UML is an attempt to provide a visual representation of a program's logical flow

And if you don't already speak UML, it's utterly incomprehensible.
posted by ook at 8:47 AM on September 15, 2005


Response by poster: Or are we talking about some SETI-like self-describing build-from-first-principles language?

What general principles can be selected independent of any "parsing" or other algorithmic processing that takes place?
posted by Rothko at 8:52 AM on September 15, 2005


Most cultures have a game that involves passing a ball to accomplish an objective. In a simplistic way the rules that govern the game are themselves algorithms, so a game could be designed from an algorithm, to physically demonstrate the algorithm. The only trick is applying the "one action at a time" part, because a computer can only hit one instruction at a time while several people can play a game at once. It could be something as simple as a single-file line of people passing a notebook from person to person, each changing what has been written in the notebook according to their "job" (their part in the algorithm). Some American Indian tribes also have a tradition of a talking stick (think conch shell), which could also be a handy metaphor.

So let's say the algorithm was to take a variable, split it in half, and place the 2nd half before the first half. I.e.

Input: 5678901234
Process: 56789 and 01234 (split the variable)
Process: 01234 and 56789 (make the exchange)
Process: 0123456789 (rejoin the halves)
Output: 0123456789

Accomplished in real life, this would be a single-file line of 5 people: Jake, Jack, Martha, Bob, and Mary.

Jake has a notebook and writes 5678901234 in it. He shows the observers and passes it to Jack.
Jack rips the paper in half, shows the observers, and passes the halves to Martha.
Martha places the 2nd half in front of the 1st one, shows the observers, and passes them, in order, to Bob.
Bob tapes the halves together in their new form, shows the observers, and passes the new form to Mary.
Mary shows the observers 0123456789

After a couple of runs with different number sequences the observers would get what the algorithm is, assuming they can see. The numbers aren't important though; it could just as easily be a sequence of pictures or objects, so the language element is eliminated and the observer just needs to be able to see and be curious enough to sit long enough and spot the pattern.
posted by jwells at 9:02 AM on September 15, 2005
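For comparison with the human pipeline above, jwells's swap-the-halves algorithm can be sketched in a few lines of Python (the function name and the extra demo string are my own, purely illustrative):

```python
def swap_halves(s):
    """Split a sequence in half and place the second half before the first."""
    mid = len(s) // 2
    first, second = s[:mid], s[mid:]   # Jack rips the paper in half
    return second + first              # Martha reorders, Bob tapes them together

print(swap_halves("5678901234"))   # the thread's example: prints 0123456789
```

Of course, this only pushes the problem back: reading the sketch requires knowing Python, which is exactly the kind of frame the question asks about eliminating.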


Um, machine code?

What exactly are you trying to do here? How can any complex concept be described without using smaller concepts?
posted by delmoi at 9:09 AM on September 15, 2005


Can you clarify your question? It sounds like you want to express an algorithm without using any kind of language. Is this correct?
posted by callmejay at 9:54 AM on September 15, 2005


No.

Algorithms are ideas. Implementations are just implementations. I can hand you source code in some language that implements an algorithm and you will have to work hard to understand the idea behind it.

If I describe how an algorithm works, then you can write an implementation in any given programming language with much less work than it takes to go the other way.

Pseudo-code is a way of expanding an explanation. UML is a graceless way of communicating the structure of a program.

I don't get the bit about dependence on other algorithms. Many algorithms have no sub-algorithms embedded in them.

See here.
posted by rdr at 10:02 AM on September 15, 2005


Response by poster: Can you clarify your question? It sounds like you want to express an algorithm without using any kind of language. Is this correct?

Or with a language that explains itself, yes, which is why pseudocode is closer to the kind of "self-evident" abstraction I'm thinking of, but not quite.
posted by Rothko at 10:22 AM on September 15, 2005


You can express an algorithm as a strip for a Turing machine. It wouldn't be very readable, but doesn't require any language.
posted by callmejay at 10:26 AM on September 15, 2005
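As a concrete illustration of callmejay's suggestion, a Turing machine is nothing but a rule table applied to a tape. A minimal Python sketch (the `(write, move, next_state)` encoding and the bit-flipping demo machine are my own, not anything from the thread):

```python
def run_turing(tape, rules, state="start", blank="_"):
    """Run a one-tape Turing machine. rules maps (state, symbol) to
    (write, move, next_state); the machine halts when no rule applies."""
    tape = list(tape)
    pos = 0
    while (state, tape[pos]) in rules:
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if pos == len(tape):          # grow the tape to the right as needed
            tape.append(blank)
    return "".join(tape).strip(blank)

# A two-rule machine that flips every bit, moving right until it hits a blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run_turing("10110", flip))  # prints 01001
```

Even here, the "strip" only means something once you agree on the rule-table encoding, which is Rothko's point about frames.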


Rothko, what do you mean by 'explains itself'? That doesn't make any sense. How can anything be expressed without using some language?

Computer chips (and turing machines, like callmejay mentioned) are hardwired to understand a particular language.

You could define a machine, like a turing or RAM machine (ram machine = normal computers) and the input to that machine, but that will only 'self-describe' in a mathematical theoretical sense... you still need to use some concrete language to describe the machine.
posted by delmoi at 10:40 AM on September 15, 2005


Best answer: He probably means that no other cognitive domains need to be employed. In terms of Hofstadter's analysis, he wants an algorithm where the inner message serves as the outer message and the frame as well. Any embodied explanation will recruit the metaphors of the medium. An animation will make use of common-sense notions of space, shape, motion, etc. So, the strict answer is No. You'll have to draw a line, beyond which you concede that "extra" tools are being used, and find something within.
posted by Gyan at 11:05 AM on September 15, 2005


Neal Stephenson's books feature many algorithms represented in the form of physical objects. In The Diamond Age, algorithms are represented as links on a chain, flood gates on canals, and so on. In Cryptonomicon, there is a very comprehensive description of an encryption algorithm being represented as a game of cards (indeed, that's how two characters in the book are able to communicate covertly), in addition to an interesting interlude about missing links on bicycle chains and the probability of the chain coming off the ring when the right constraints are met - an algorithm of sorts.
posted by blindcarboncopy at 11:30 AM on September 15, 2005


Response by poster: You can express an algorithm as a strip for a Turing machine. It wouldn't be very readable, but doesn't require any language.


You would need a culture that would understand and be able to implement state machines. That's why card games, programming languages etc. are out, because they all boil down more or less to the equivalent of running a strip through a Turing machine. On preview, Gyan is probably closest as of yet to what I am after. Thanks to everyone so far for the great answers!
posted by Rothko at 12:22 PM on September 15, 2005


Write it in brainfuck. With only 8 characters and no words, it's about as language-free as you're going to get. Even simpler than numbers.

The best algorithms I've read are the ones in Knuth's books. Each step is numbered, and the instructions are explicit, in English with mathematical notation. Seriously, would you rather have everyone not really understand it, or have a few people, who will probably be implementing it, thoroughly understand it?

You can please all the people some of the time, you can please some of the people all of the time, but you can't please all the people all the time... pick your battles.
posted by devilsbrigade at 12:23 PM on September 15, 2005
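For the curious, all eight brainfuck commands can be captured in a short interpreter. A hedged Python sketch (the function name `bf` and the demo program are mine; cells wrapping at 256 and EOF reading as 0 are common but not universal conventions):

```python
def bf(program, stdin=""):
    """Interpret brainfuck: 8 commands, one tape of byte cells, one pointer."""
    tape, ptr, out, pc, inp = [0] * 30000, 0, [], 0, iter(stdin)
    # Precompute matching-bracket positions for [ and ].
    stack, jump = [], {}
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = ord(next(inp, chr(0)))  # EOF reads as 0
        elif c == "[" and tape[ptr] == 0: pc = jump[pc]    # skip loop body
        elif c == "]" and tape[ptr] != 0: pc = jump[pc]    # repeat loop body
        pc += 1
    return "".join(out)

# 6 * 11 = 66 = ASCII 'B', computed by a loop:
print(bf("++++++[>+++++++++++<-]>."))  # prints B
```

Which rather proves devilsbrigade's second point: the notation is tiny, but the reader's burden is enormous.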


Gyan's "no" makes a lot of sense, but only in the abstract. In the real world, and given enough time and research (is research breaking your rules?!), you can often figure out what is supposed to happen. Some assumptions help, which is why ook talks about a prototype. If you know that the thing performs a certain definable task correctly (definable in your own independent language, but it is still a language), then fully understanding the algorithm becomes a practical if time-consuming task - like reverse engineering.
posted by Chuckles at 12:37 PM on September 15, 2005


Rothko, I still can't tell what you're looking for: All sign-systems (like human languages, computer languages, visual languages like those used in org charts) must be taught or at the very least "decoded." That's just how signs are. I won't get into the question of whether one can have the idea of an algorithm in their head without a language to express it (i.e., is there a language of thought?). But what I will say is that for an arbitrary algorithm, there's no way to represent it ("re-present" it, if you like word games) without language.

For some very simple algorithms, it's probably possible to find some leverage with pictures. In semiotics, these are called "indexical" or "iconic" signs rather than "symbolic" signs (languages are symbolic). There is a picture in Envisioning Information, by Edward Tufte (p. 85), that is a visual explanation / proof / algorithm for the Pythagorean theorem. Sad that Amazon won't let you look inside for the graphic. And sadder that, looking at it again, it's still got a few words in it, though those might be removed.

If you're trying to explain algorithms to aliens, you'll have to start with the Voyager "records".
posted by zpousman at 12:37 PM on September 15, 2005


Best answer: In practical terms, the question can be rephrased as this:

Given qualifications X, in what ways can I represent an algorithm such that no background or peripheral knowledge is required for comprehension?

Rothko has to decide what X is, e.g., an 18-year-old English-speaking Western male with a high-school education --OR-- a 6-year-old female of normal cognitive development, who can be expected to grasp most culture-independent concepts. Once Rothko supplies X, there could be practical answers.
posted by Gyan at 1:01 PM on September 15, 2005


Best answer: That is, can an algorithm be described in such a way that no or very minimal a priori knowledge about the representational language is required?

The world we live in is full of algorithms which didn't strictly require knowledge of the frame (except the a priori condition of being inhabitants of the same frame). Most of these are biological or physical or chemical. The Sun's CNO cycle, for example, is an algorithm for the generation of atomic nuclei - we didn't invent the CNO cycle but we "discovered" it. For its representation it uses the analog representation mechanisms available in the Universe:

Procedural Flow is represented as Time
Operators and Operands are represented as Forces and Objects

There's a bit of a Catch-22 here, since you could argue that we inherited the language of algorithms from the universe itself, and so it is false to say that that same Universe is just another representational language. But that's more of a philosophical question.

In practical terms, you can simplify the demonstration of the CNO cycle by drawing little arrows and chemical names. This is a higher-order representational language which gains economy by introducing a new frame which must first be understood before the message can be deciphered.

The choice, as Gyan points out, is how much "frame" you want to introduce in order to gain the economy of representation. It's a trade-off and you have to decide where that trade-off lies.
posted by vacapinta at 1:25 PM on September 15, 2005


Response by poster: I'm not sure I would know what X is, specifically.
posted by Rothko at 1:28 PM on September 15, 2005


Response by poster: What general principles can be selected independent of any "parsing" or other algorithmic processing that takes place?

Are axioms cultural?
posted by Rothko at 1:33 PM on September 15, 2005


Best answer: I would call vacapinta's example a process rather than an algorithm. An algorithm is a computational method. You can argue that the sun is an analog computer, and in that case the CNO cycle is an algorithm, but I think that's stretching it.

If I understand Gyan's response correctly, then what you're looking for is algorithms that can be completely expressed in an object language (in the logical sense, not the computer science sense) without a meta-language. I assume that what you're driving at when you ask whether axioms are cultural is that you want to know whether there's an underlying reality that mathematicians and computer scientists are discovering or whether algorithms and axioms are inventions arising from a specific cultural or linguistic context.

I have no idea of the answer but I can tell you that the question has a pretty long history.
posted by rdr at 2:13 PM on September 15, 2005


rdr, I think you are overemphasizing the distinction between discrete and continuous systems... Consider those elaborate Sesame Street marble-drop videos - it is easy enough to build a physical prototype of a discrete algorithm.

Falling back on reality as a shorthand for logical steps can bridge lots of language problems. vacapinta's point is great, but it is easy enough to extend it to some human-in-the-loop process. If an animal of a certain size sniffs the bait, a tree/spring is released and a noose catches the animal. That is a human-engineered algorithm that requires virtually no abstract language.
posted by Chuckles at 2:32 PM on September 15, 2005


Best answer: rdr : "you want to know whether there's an underlying reality that mathematicians and computer scientists are discovering or whether algorithms and axioms are inventions arising from a specific cultural or linguistic context."

The answer is it's both. No thing is extracted ex nihilo. Function is represented via Form. It's oxymoronic to perceive the "raw" function.
posted by Gyan at 3:27 PM on September 15, 2005


...brainfuck...

I wonder why this language never caught on in computer science curricula?
posted by ZenMasterThis at 3:43 PM on September 15, 2005


Best answer: Some wikipedia jumping points on what other people have thought about whether mathematics is discovered or invented:

Intuitionism

Platonic Idealism

My reason for making such a clear distinction between an algorithm and an instance of its implementation is utility. Without an understanding of the reasoning behind an algorithm, we can only observe an implementation's behavior over a limited set of inputs. Those observations may help us understand some portions of the algorithm, but without a mental model those observations are not very useful. We cannot know where an implementation's behavior is buggy, because if an algorithm is defined by its implementation then the implementation's output is by definition correct, whether or not the output is what we wanted.

That distinction between an implementation of an algorithm and the algorithm itself holds for translations of an implementation or physical models also.
posted by rdr at 4:38 PM on September 15, 2005


rdr : "Those observations may help us understand some portions of the algorithm but without a mental model those observations are not very useful."

A mental model is itself a manifestation of an implementation.
posted by Gyan at 4:59 PM on September 15, 2005


Best answer: From what I can make of this question there's an assumption that you want to represent these algorithms to a person. So the answer is no -- you can't represent anything to a person without something doing the representing.

You can make that thing as lightweight as possible, but you can't get away from the need for some medium. And that medium both helps you and constrains you. That's what pretty much all of semiology is about.

The good news is that you have some choice in the medium. The best way to represent an algorithm is with a notation specifically created to represent algorithms, but that only works if the person you're representing the ideas to speaks that language. If they don't, you'll have to use a different one. Natural language is easier in some ways, but it's terribly verbose. That's why it isn't used much in math books, or in Knuth, or in the other scholarly literature on algorithms. What makes the dense technical representation useful is that the mathematical representation of, say, Bayes' Theorem is both the theorem and the algorithm. If you want to get simpler you can decompose and decompose until you get to axioms. Then you have a collection of axioms that you can assemble and reference while building the overarching algorithm. The downside is that you get a bunch of overhead, which is a pain in the ass if you want to communicate quickly.

Are axioms cultural?

Sure, but that doesn't make them less valid or useful. I'd actually phrase this differently and say that axioms are consequences of epistemologies. You might have an epistemology that has things to say about knowledge and truth and in that epistemology you get axioms that underpin, say, a system of formal logic or mathematics.
posted by amery at 5:12 PM on September 15, 2005
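amery's observation that the notation for Bayes' Theorem is both the theorem and the algorithm can be shown literally: P(A|B) = P(B|A)P(A)/P(B) transcribes straight into code. A minimal Python sketch (the function name and the 1%-base-rate test numbers are mine, purely illustrative):

```python
def bayes(p_b_given_a, p_a, p_b):
    """P(A|B) = P(B|A) * P(A) / P(B): the formula reads as the computation."""
    return p_b_given_a * p_a / p_b

# Illustrative numbers: 1% base rate, 99% sensitivity, 2% false-positive rate.
p_positive = 0.99 * 0.01 + 0.02 * 0.99   # total probability of a positive test
posterior = bayes(0.99, 0.01, p_positive)
print(round(posterior, 3))  # chance of actually having the condition
```

The one-line function body is the theorem; running it is the algorithm. The frame you pay for is the notation of probability itself.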


Epiphany:

You're talking about an algorithm (let's call it A) that can be understood by 'X' without any 'explanation', i.e., without X understanding the language A is written in, or without X seeing A described in a symbolic system it already understands.

Well, what if (X = A)? I.e., can X understand itself? You might consider that type of thing to fulfill your criteria.

Would the human mind fulfill those criteria as well? Hmm...
posted by delmoi at 9:28 PM on September 15, 2005


What do you mean by an algorithm understanding itself?
posted by Gyan at 9:54 PM on September 15, 2005



What do you mean by an algorithm understanding itself?


Well, if the human mind is an algorithm, and it can understand another algorithm... uh, yeah.
posted by delmoi at 10:57 PM on September 16, 2005


First, the human mind is bootstrapped onto a substrate, i.e. matter, so it's not just an algorithm. Second, at the very least, it's contentious whether the human mind does understand itself.
posted by Gyan at 12:08 AM on September 17, 2005

