# What are 1s and 0s?
January 25, 2008 3:29 AM

What are the 1s and 0s of computers? I.e., in what sense is it true (and in what sense is it false) to say that there are 1s and 0s in computers?

Computer science is not fundamental physics, it's a macro science; otherwise 1s and 0s would (absurdly) be taken as fundamental physical entities (imagine that there are quarks, fermions, 1s, 0s, etc. at the base level). Rather, computer scientists talk about things like neurons, nodes, tape, each of which is composed of smaller parts, and, similarly, processes such as writing and erasing, each requiring many parts. But in the case of, say, a neuron, we can look at it in a microscope, dissect it, figure out what it's made out of. Whereas I'd like to know whether such is possible with regards to 1s and 0s; if not, why not?

I distinguish between numerals and numbers. In my terminology, numerals are physical but arbitrary, whereas numbers are non-physical but don't ultimately exist. Presumably, the sense in which there are 1s and 0s in computers is a sense in which there are numerals--though I suppose they don't really look like 2D "1"s or "0"s; I suspect their geometries are different. What are their geometries really like?
posted by Eiwalker to Computers & Internet (32 answers total) 8 users marked this as a favorite

The way I understand it (which could quite possibly be wrong, and hopefully someone will correct me if it is)... is that the binary 1s and 0s symbolically represent whether a transistor is "open" or "closed". So if you want to "see" what the 1s and 0s look like, what you would see is transistors opening and closing in rapid succession as calculations are performed.
posted by jmnugent at 3:36 AM on January 25, 2008

Digital Logic.
posted by arimathea at 3:42 AM on January 25, 2008

The "1"s and "0"s are voltages. A "1" is either a high or low voltage in some range, a "0" is the opposite. "1"s and "0"s are only used as terminology out of convenience (the binary number system is convenient--there's no reason these two values couldn't be "yes/no" or "minneapolis/saint paul", but it's easier to do "1"+"1" than it is to do "yes" + "yes")

The only requirement when talking about binary data is the ability to store and retrieve binary units consistently. If a programmer can do something like:

"PUT bit 1 in place 1545"

and later say:

"GET bit from place 1545"

and have that return "1" (because it was put there earlier), she has a valid programming system. This is abstraction at its most basic--the programmer doesn't care how the units are stored, only that "PUT" and "GET" operations are available. In a processor, these "1"s and "0"s are voltages; on a magnetic disk drive they're polarity; on a CD they're the pits in the disc itself. All of these devices implement the "PUT" and "GET" operations, allowing the programmer to ignore the underlying representation of the bits. In fact, most storage advancements (holographic drives, etc.) are attempts to make the underlying representation more efficient.
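As a toy illustration of that abstraction (hypothetical names, nothing like a real memory controller), the PUT/GET interface could be sketched in Python as:

```python
# A bit store that hides its underlying representation.
# Here the "medium" is just a dict, but it could equally be
# voltages, magnetic polarity, or pits on a CD -- the PUT/GET
# interface would look exactly the same.
class BitStore:
    def __init__(self):
        self._cells = {}

    def put(self, bit, place):
        if bit not in (0, 1):
            raise ValueError("only single bits can be stored")
        self._cells[place] = bit

    def get(self, place):
        return self._cells[place]

store = BitStore()
store.put(1, 1545)          # "PUT bit 1 in place 1545"
print(store.get(1545))      # "GET bit from place 1545" -> prints 1
```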
posted by null terminated at 3:48 AM on January 25, 2008 [4 favorites]

1 and 0 are just symbols used to indicate which of two possible states something is in, and there are many different things inside a computer that have two possible states: voltage on a wire, internal state of a flip-flop, magnetization of a particular spot on a magnetic disk, carved pits or lack thereof on a CD (well, kind of). Here's a good place to start.
posted by equalpants at 3:49 AM on January 25, 2008

Depending on where you are in the computer, a one or a zero might be represented by different physical phenomena.

Within a CPU, a 1 might be represented by a more positive voltage, and a 0 might be represented by a more negative voltage.

In DRAM, a 1 might be a high charge in a capacitor, and a 0 might be a low charge.

Once you start getting into input and output, what a CPU will abstractly interpret as a 1 or a 0 can be represented by more complicated physical phenomena, such as NRZI. On a CD-ROM, eight-to-fourteen modulation is used, which means that every 8 data bits are stored as 14 channel bits of pits and lands on the surface of the CD.
posted by grouse at 3:52 AM on January 25, 2008 [1 favorite]

1 and 0 are a convention to allow shorthand and binary mathematics. It's essentially an abstraction, and in the guts of computers, it's not always exact.

The simplest is in transfer of data, where as fast as it can, the wire sends fluctuating current in two modes: high (1) and low (0) in accordance with an internal clock that divides the seconds into tiny slices. Note that it's "low" and not "off", as "off" is a lack of transmission or power.

This low-or-high difference is enough to change how the current passes through the semiconductor, and this is how transistors work their magic. Hard drives store 1s and 0s on magnetic media, essentially leaving more "stuff" to read at the ones, and less to read at the zeros.

Dissecting a physical computer, you'll find silicon chips doped with various chemicals and metals to create the tiniest components you could imagine. They're essentially micro versions of the resistors, semiconductors, capacitors (there are large ones of these present), wires, etc. that you'd buy at Radio Shack. It's not nearly so impressive or exciting as "The Net" appears in movies or TV.

Dissecting computer science on a less literal scale, you'll find lots and lots of heavy math. Aside from the folks who create the chips and hardware (chemical engineers and computer engineers who are sort of specialized mech engineers), the software is an offshoot of mathematics. The goal is often to find the most efficient algorithm to shunt around the 1s and 0s in the manner in which we like. This is no small matter, as 2 college boys put multiple search engines out of business when they invented Google.

I hope this explains rather than confusing further, but essentially computers don't contain 1s and 0s, they contain representations of 1s and 0s. Sort of how your house doesn't contain offs and ons, but objects in your house may be turned off or on.
posted by explosion at 3:56 AM on January 25, 2008

I am not an electrical engineer, so someone else will no doubt chime in with more information. Also the following description is simplified somewhat. The 1s and 0s are represented differently depending on where the data is:
It is the CPU that deals with data; it is a physical piece of silicon with many wires leading in and out. The CPU has a clock, and on each "tick" a certain voltage on each wire represents a 1, while another voltage represents 0. It is possible to connect an oscilloscope to a wire and view the trace of the ones and zeros going into or out of the CPU as the voltage alternates over time.
The data in memory is stored in arrays of transistors. The exact arrangement is not really important and varies between different designs (jmnugent is basically right except there is more than one transistor per bit).
Hard disks store the bits as regions of magnetization on a metallic surface. With the right equipment you can map out the ones and zeros on the surface. In truth, it is a little more complicated than this, since the bits as seen by the computer do not map directly to the magnetic regions.
Basically, the ones and zeros all boil down to different voltages; this is because we know how to handle electricity very well and the components can be made very small and mass-produced. However, there is nothing special about electricity--the ones and zeros could be stored and processed by mechanical devices or bits of string and the principles would be the same.
posted by AndrewStephens at 4:06 AM on January 25, 2008

What they all said. The ones and zeros are an interpretation, not inherent in reality.

From a strict point of view, a computer is a theoretical entity which is, among other things, a discrete state machine. It moves from one state to another with nothing intermediate. These states are conveniently represented by sets of digits which are either one or zero, never anything between. But nothing in the macroscopic real world is actually like that. Real world computers are machines which have been carefully constructed so that for practical purposes they approximate to discrete state machines and various parts of them (usually the sorts of things equalpants mentions, but in principle anything that lends itself to that interpretation) approximate to sets of binary digits, so it's safe for us to interpret them as being those things, which turns out to be extremely useful.

Am I making sense...?
posted by Phanx at 4:18 AM on January 25, 2008 [1 favorite]

Well, it depends on what you mean by computation. The mathematical logician Dana Scott invented a very lovely system of computation using a single letter, G. Of course the fact that it was a G and not a Q or an Esperanto ĉapelita letter, say Ĥ, is irrelevant. The interesting thing is that G, along with a very complicated rule for reducing expressions produced by applying G to itself functionally, allows you to compute all the functions which are computable by the fastest Blue Gene computers at the nuke factories of the world's richest (and most deeply indebted) country. However, the G language is somewhat slower.

The point is, ones and zeros, or G, or S and K, are just symbols. The real fundamental objects in computation are the symbols along with the rules.
posted by vilcxjo_BLANKA at 4:43 AM on January 25, 2008

Others have already said it, but I'll reword it again just in case it didn't quite make sense: computers are made mostly of transistors. Transistors have two states: "on" and "off".

By combining them in patterns, you make machines that produce a predictable output given a particular input. The fundamental design blocks are small circuits called 'gates'. An OR gate samples two inputs, and raises its output if either one, or both, is high. An AND gate will do so only if both inputs are high. A NOT gate has one input, and outputs only when the input is turned off. An XOR gate is almost like an OR gate, but it turns off if BOTH inputs are raised.
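A quick way to see those gate definitions in action is to model them as plain boolean functions (a software sketch, of course, not transistor-level behavior):

```python
# Boolean models of the gates described above.
def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

def NOT(a):
    return not a

def XOR(a, b):
    return a != b

# OR raises its output if either input, or both, is high:
assert OR(True, False) is True
# XOR turns off if BOTH inputs are raised:
assert XOR(True, True) is False

# Gates compose into bigger circuits, e.g. a NAND:
def NAND(a, b):
    return NOT(AND(a, b))

assert NAND(True, True) is False
```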

Most computer circuits are made from these fundamental blocks. The earliest computer chips were designed by hand, every gate done by humans. Even these very early chips were massively complex by human standards, requiring tens of thousands of transistors to fully work. (it takes, as you can imagine, A LOT of these simple gates to make a machine that's able to run arbitrary programs.) Real people were doing the math to prove the design worked. It must have been incredibly difficult.

Once they had those early computers working, they started to write automation programs to abstract a lot of the work away, by automatically laying out standard pieces to accomplish a given task, and running all the internal circuitry to make it work. As the chips running them have gotten better, so have the programs; at this point, only the high-level design is done by humans, and computers automatically do all the layout work.

Computers are an agglomeration of very simple things. They've gotten complex because of the layer after layer of software and hardware design added to the foundations. Even early computer chips were faster than humans could really understand; the modern monstrosities that tick over, internally, three billion times a second have gotten so fast they're damn near magic if you look closely at them.

But, at their core, they're made of circuits that take an input voltage and emit an output voltage. Everything else comes from that fundamental starting point. Emit or don't emit; yin and yang; 0 or 1.
posted by Malor at 4:48 AM on January 25, 2008

As has already been noted, in your home computer the binary symbols correspond to voltages and the transistor states that these voltages enable.

But the binary symbols don't have to correspond to voltages and transistors, they can correspond to anything that can represent two states. This is why you can build a clockwork computer, or a marble-drop/marble-race computer (or in the case of Discworld fiction, a computer built out of tunnels and gates for an ant colony :-)

Anything that can operate as a logic gate (AND, OR, NOT, etc) can be the physical basis of a computer. That means a physical gate - the kind that swings open on hinges - can be a logic gate (and the engine of a computer) if it is reliably opened by, say, a marble running down a chute. (And as you already know, that basis does not even need to be binary, it just makes everything simpler to have only two states in your gate mechanics).
posted by -harlequin- at 5:33 AM on January 25, 2008 [3 favorites]

Computer science is not fundamental physics, it's a macro science;

No, Computer Science isn't a science at all. It is the bastard child of Mathematics and Philosophy.

Computer scientists talk about computational models. Many of these models are mathematically equivalent but appear to humans to have more expressive power in one domain or another.

The core model is the Universal Turing Machine. Many computational systems have been proven equivalent to the Turing machine, so if you can prove a new model is equivalent to the set of known Turing-machine models, you get equivalence with the entire set transitively.

One way to classify languages is the Chomsky Hierarchy. The most powerful languages in the hierarchy require a Turing machine. Simpler languages (like regular expressions) don't require machines as sophisticated as a Turing machine.
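To make the regular-expressions point concrete: a regular language like "any number of a's followed by any number of b's" needs nothing more than a regex, while the classic context-free language a^n b^n (equal counts of a's then b's) sits one rung up the hierarchy and provably cannot be matched by a true regular expression. A small Python sketch:

```python
import re

# a*b* is a regular language: a regex (finite automaton) suffices.
regular = re.compile(r'^a*b*$')
assert regular.match('aaabb')
assert not regular.match('aba')

# a^n b^n requires counting, which finite automata cannot do;
# here we simply check it directly with ordinary code.
def equal_counts(s):
    n = len(s) // 2
    return s == 'a' * n + 'b' * n

assert equal_counts('aaabbb')
assert not equal_counts('aaabb')
```

(Python's `re` actually supports more than strictly regular features, but the plain pattern above uses none of them.)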

A different way to think about computation is via complexity theory. Some problems execute quickly on Turing machines, others may take longer than the expected life of the universe to execute.

The key point here is that none of this analysis changes if your computer is made out of vacuum tubes, transistors, or tinker toys. We might be able to build machines more powerful than Turing machines with quantum components, but if we use those components to build a machine equivalent to a Turing machine, it will be just as powerful as (though possibly much faster than) our current computers.

But in the case of, say, a neuron, we can look at it in a microscope, dissect it, figure out what it's made out of. Whereas I'd like to know whether such is possible with regards to 1s and 0s; if not, why not?

If you are looking at a typical desktop or laptop computer, you can observe the physical behavior of your hardware with a logic analyzer. If your computer is made of tinker toys, your eyes will probably be sufficient.
posted by b1tr0t at 5:53 AM on January 25, 2008 [2 favorites]

I would have sent this by mefi mail, but you don't seem to have that enabled. (Mods, feel free to delete this, if you think it's too off-topic.)

Don't be so sure that numbers don't exist. Platonism about numbers is a very viable, realist theory about numbers. Check out some of the work by Michael Jubien (especially the first two chapters of this book). His work (in that book) is very accessible and easy to understand.
posted by oddman at 6:00 AM on January 25, 2008

No, Computer Science isn't a science at all. It is the bastard child of Mathematics and Philosophy.

...and Engineering.

---

You can "see" the ones with a {volt,gauss,micro,...}meter. That's all a one is, presence of electricity (or electrical pathway that /could/ conduct), or electrical field, or a pit on the surface of a CD, or pretty much anything that could be mechanically set and checked.... It's a count of things at a location that can hold at most one thing.

There's a binary state of ostriches in your bedroom closet. If you have an ostrich, think "one", and if you don't, think "zero".
posted by cmiller at 6:04 AM on January 25, 2008 [1 favorite]

As you've intuited, things turn out to be far more complicated than "1s and 0s".

Even integrated circuits of transistors are fundamentally analog in their interactions — there are really four 'binary' states:
• Off — an open circuit, if this was '0' you wouldn't be able to differentiate it from things being disconnected
• Low — since digital logic is usually done in the negative (saves power, takes advantage of the logical completeness of NAND/NOR) this is usually interpreted as '1'
• Grey — Even if the threshold between low and high is very small, there will still be room for weirdness in between
• High — usually '0'
I've done a lot of work with FPGAs, where you quite often have to explicitly account for these properties when designing processors. FPGA designs that were evolved using genetic algorithms often contain unexplainable analog circuits (where vital parts are often not even connected) that nevertheless work amazingly well.
posted by blasdelf at 6:34 AM on January 25, 2008

grouse has it right.

1's and 0's are entirely arbitrary mappings to physical phenomena. They can be represented by things like voltages, the states of a transistor, how a certain piece of metal is magnetized, the reflectivity of a spot on an optical disc, etc.

At the most basic level, any set-up (system, object, whatever) which can reliably represent two or more states (on/off, up/down, MissionAccomplished/SectarianViolenceIsIncreasing) can be used to represent 1's and 0's for computation.
posted by NucleophilicAttack at 6:40 AM on January 25, 2008

I think people are confusing a lot of different topics here.

At the simplest circuit level, in a normal computer, 1s and 0s are represented by two different voltage ranges, while voltages outside those ranges produce undefined behavior (like blowing up the chip).

Rather than thinking about transistors, think about relays. A relay works like a transistor, but it's a (much larger) mechanical device. When you apply a current to one of the pins, it physically closes a switch, letting electricity flow through two other pins.

Let's suppose you wanted to use relays to calculate the logical AND function. It would be pretty easy: you would connect one input line to the 'switch' line on the relay, and the other to one of the transfer pins. When the switch line was 'on' and the input was 'on', the output line would also be 'on'. But when either of the first two was off, the output would be off, and thus you would calculate the logical AND value of the two inputs.
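A toy simulation of that relay AND gate (booleans standing in for current on the pins; this models the description above, not any particular relay):

```python
# A relay: current on the coil closes the internal switch,
# letting whatever is on the transfer pin through to the output.
def relay(coil, transfer_pin):
    return transfer_pin if coil else False

# One relay makes an AND gate: input A drives the coil and
# input B feeds the transfer pin, so the output is on only
# when both inputs are on.
def relay_and(a, b):
    return relay(coil=a, transfer_pin=b)

assert relay_and(True, True) is True
assert relay_and(True, False) is False
assert relay_and(False, True) is False
```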

So in that example '0' would mean a physically neutral line, and '1' would mean any voltage that wasn't so high as to cause damage to the relay.

Transistors work similarly, but the AND gates used are usually a little more complicated, because making an AND gate with just one transistor is electrically inefficient.

Once you're outside of the computer, 1's and 0's can be stored in all kinds of ways, as long as they can be read back in and digitized by some process.
posted by delmoi at 6:44 AM on January 25, 2008

If you really want to see and hear 1s and 0s, there's nothing better than a relay computer. You can see the guy toggling in the 1s and 0s himself by flipping switches, and you can hear the relays in the computer clunk around as they switch between 1 and 0 as his program runs.
posted by zsazsa at 6:59 AM on January 25, 2008 [1 favorite]

Yes, one and zero are very real. In some cases you can even see them. If you put a CD under a microscope you can see the pits.

http://www.stereophile.com/features/827/

However, it's not really about the ones and zeros; it's about being digital instead of analog. Some flash cards store information as zeros, ones, and twos. Zero is no charge in the capacitor, one is a high charge, and two is a middle charge.

The reason most device builders stick with two states is that it is much easier to build a reliable device when you only have to distinguish between two states. But in the case of flash memory, the gains in memory density are worth the trouble of distinguishing low from medium.
posted by gmarceau at 7:03 AM on January 25, 2008

Computer processors have been designed to work in base 10 before--10 different levels of voltage signifying the relevant digit. However, blasdelf is right: it's hard to get precise levels, so as an engineering solution computers work in base 2, i.e. binary. Even a transistor is not completely on or off at a given time; you have leakage current across the gate, and there has to be a certain minimum control current before a sufficiently large current flows across. Therefore you tend to say everything below a certain voltage is a 1, and everything above a certain voltage is a 0, and ignore the bit in the middle.

There are no 1's and 0's in computers. There are physical and electromagnetic inputs that are combined together and then interpreted as strings of 1's and 0's if they're strong enough. We use binary numbers because it's convenient and relatively easy to represent, and it makes the maths easier when you're using powers of 2 for everything.
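The "ignore the bit in the middle" idea can be sketched with made-up threshold voltages (illustrative numbers only, and using the more common low=0/high=1 mapping; the inverted convention described above just swaps the labels):

```python
# Map an analog voltage to a logical bit, with a forbidden
# "grey" band in the middle. Thresholds are invented for the
# example, not taken from any real logic family.
LOW_MAX = 0.8    # at or below this: one logic level
HIGH_MIN = 2.0   # at or above this: the other

def to_bit(voltage):
    if voltage <= LOW_MAX:
        return 0
    if voltage >= HIGH_MIN:
        return 1
    return None  # in between: undefined, ignored by design

assert to_bit(0.2) == 0
assert to_bit(3.3) == 1
assert to_bit(1.4) is None
```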
posted by ArkhanJG at 7:21 AM on January 25, 2008

Computer science is not fundamental physics, it's a macro science; otherwise 1s and 0s would (absurdly) be taken as fundamental physical entities (imagine that there are quarks, fermions, 1s, 0s, etc. at the base level).

Well, "1" and "0" sort of are physical entities. Claude Shannon showed that each bit of information corresponds to a definite amount of entropy, and that information transmission is subject to degradation by noise, an observation with close ties to the Second Law of Thermodynamics.

Shannon's Information Theory was in part based on that observation.
posted by Steven C. Den Beste at 7:29 AM on January 25, 2008

I liked the way this book explains these concepts.
posted by davar at 7:30 AM on January 25, 2008

If you really want to hear ones and zeros, put a computer CD-ROM disc into an old CD player. The resulting static "shhh!" sound you hear is millions of changes of state, from 1 to 0 and back per second.
posted by Wild_Eep at 7:36 AM on January 25, 2008

These 1s and 0s are fundamental. They are called "bits". They are the smallest unit of information. If you will, they are quantum units of information. From this, if you have 2 bits, you now have 4 possible states of the bits: 00, 01, 10, and 11. This is the binary thing, base-2 counting. For a number of bits (n) there are 2^n possible values.

We string bits together in packs of 8, and call that a byte. Often, bytes are given corresponding values in letters of the alphabet, numerals, and 'special characters'. That makes up a lot of data. There are various conventions for assigning values to bytes; the one which seems to have held out is this "ASCII" you hear about.
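The counting is easy to verify (Python used purely for illustration):

```python
from itertools import product

# n bits give 2**n distinct patterns (not n**2):
for n in range(1, 9):
    patterns = [''.join(p) for p in product('01', repeat=n)]
    assert len(patterns) == 2 ** n

# Two bits: exactly the four states 00, 01, 10, 11.
print([''.join(p) for p in product('01', repeat=2)])
# -> ['00', '01', '10', '11']

# A byte is 8 bits, so 256 values -- room for ASCII's 128 codes.
assert 2 ** 8 == 256
assert ord('A') == 65 and format(ord('A'), '08b') == '01000001'
```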
posted by Goofyy at 7:56 AM on January 25, 2008

davar's recommendation is spot-on. C.O.D.E. is a fantastic lay introduction to binary circuits and computation.
posted by NucleophilicAttack at 8:44 AM on January 25, 2008

Since a lot of people have already gone through the low-level technical answers to this question, I'll try to explain it at a very high level.

Computers are digital, whereas the world is not. Take the time of day, for instance. You can measure time in hours, or seconds, or milliseconds, or microseconds, or nanoseconds, or picoseconds, or attoseconds, or some infinitely small interval of time. For a computer to store what time it is, the computer needs to make time digital. One way to do this would be to store the number of milliseconds that have passed since January 1st, 1970.

Computers have to "digitize" pretty much everything. The movement of your mouse is mapped onto a grid on your screen, so that your computer knows the mouse is at coordinates (492,875). A color on a web page might be #6495ED, which is a hexadecimal representation of 100 red, 149 green, 237 blue.
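That color example checks out; each pair of hex digits maps straight to one decimal channel value:

```python
# "#6495ED": each pair of hex digits is one color channel.
color = "6495ED"
r, g, b = (int(color[i:i + 2], 16) for i in range(0, 6, 2))
assert (r, g, b) == (100, 149, 237)
```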

Computer hardware is digitized too. Analog signals in the circuits of the CPU can be digitized into high/low (0/1). Magnetic surfaces inside your hard drive can be interpreted as having one kind of magnetic field or another, which can be digitized into 0/1. A CD has a series of pits, and when there is a change from pit to land it's interpreted as a 1, whereas no change is a 0.
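That change-means-1 reading of a CD track (related to the NRZI idea grouse mentioned) can be sketched like this, with pits and lands written as a string:

```python
# Decode a pit/land track: a transition (pit->land or
# land->pit) reads as 1; no change reads as 0.
def decode(track):
    return [1 if prev != cur else 0
            for prev, cur in zip(track, track[1:])]

# P = pit, L = land: the 1s land at the P/L boundaries.
assert decode("PPLLLP") == [0, 1, 0, 0, 1]
```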

Basically, none of the 1s and 0s that your computer cares about actually exist in the real world. Instead, things in the real world are "digitized", so that computers can store and manipulate them. The end result is an approximation of things in the real world.
posted by burnmp3s at 9:31 AM on January 25, 2008

There are a lot of answers that say that 1 and 0 are abstractions of the low level voltage/polarity/pit&land implementation, which is true, but they are not totally arbitrary.

You could indeed call those two states Vanilla and Chocolate instead of 1 and 0, but that is a much less useful model for thinking about things. Many of the operations a computer can perform make obvious sense on binary numbers.

ADD 1001, 11 -> 1100 makes mathematical sense, whereas OPERATION {Chocolate, Vanilla, Vanilla, Chocolate}, {Chocolate, Chocolate} -> {Chocolate, Chocolate, Vanilla, Vanilla} is the same relationship, but without mapping it to binary 1s and 0s it is difficult to make sense of.
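The ADD example above is ordinary arithmetic once you read the strings as binary numbers:

```python
# ADD 1001, 11 -> 1100 is just 9 + 3 = 12 in binary.
assert 0b1001 == 9 and 0b11 == 3 and 0b1100 == 12
result = 0b1001 + 0b11
print(f"{0b1001:b} + {0b11:b} = {result:b}")  # -> 1001 + 11 = 1100
```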

1s and 0s are a good model for the bits in a computer; the designers of a computer even had them in mind as a way to do normal mathematical operations. In that sense the voltages or magnetic fields become comparable to special configurations of ink that we ourselves interpret as numbers. Ink shaped like a 7 is not actually seven; it just represents seven. Frequently bits are used in non-numeric ways (to represent letters in ASCII, or opcodes, or left vs. right, or whatever). This is done with digits too (e.g., b1tr0t), but it is less common.
posted by aubilenon at 10:33 AM on January 25, 2008

I'll N'th grouse's comment. They are mappings of states that could be anything. For example, these Lego logic gates represent the same things we use in digital computation but use physical Lego positions as 1/0 states. Binary arithmetic is simple, simple enough for small machines to run. The representational flexibility of binary logic is what makes computers so universal.

... I think I just wanted to link to lego...
posted by chairface at 11:33 AM on January 25, 2008

I liked the way Feynman Lectures on Computation handled things. The last chapter (on building semiconductor chips) is a bit out of date, but on the whole it's a big win, namely a fantastic explainer (i.e., Feynman) giving the understanding of a genius (i.e., Feynman) in the terms most usable for a non-computer-scientist (i.e., Feynman).
posted by eritain at 1:27 PM on January 25, 2008

I am an electrical engineer, and there are a lot of bad comments in this thread, mostly coming from well-intentioned people that work in software. What a CPU (or any combinational or synchronous logic system) does, in part, is create an abstraction layer by manipulating electrons in both space AND time to create the illusion of discrete 1's and 0's. Transistors do not have "on / off" states that map to some concept of 1's and 0's as some people have been claiming - such comments demonstrate a lack of awareness of this abstraction layer that connects the symbolic world to the physical world.

Without this abstraction layer, compilers would be two to three orders of magnitude more complex than they are, and we wouldn't recognize them as having much in common with compilers as we know them. Actually, the concept of programming probably wouldn't exist.

If you want to study this, you should first study semiconductor physics. You don't need to dig that deep and a basic qualitative guide should be enough. If you took a year of college level physics and chemistry, and understand the concept of valence electrons and why atoms sometimes "want" to gain or lose a particular number of electrons, I think you'll have the baseline you need. At the top end of semiconductor physics, learn the different regions of operation of a MOSFET.

From there you'll have to study how information is stored and manipulated using electrons. For storage, study DRAM. You'll need to learn how capacitors work, as that is where the storage occurs. There are of course other storage methods as mentioned by others - CDs, magnetic tape, etc. Even with CDs, however, the storage itself is not digital - the pits on a CD are just pits until properly interpreted by a particular device. There are tolerances in both space and time that must be adhered to in order for symbolic manipulation to occur.

For data manipulation, my last sentence also applies. Study how logic gates are formed from transistors, and beyond the truth table (the mapping of inputs to outputs), learn how the discrete notion of 1's and 0's emerge from the logic gate's ability to represent two ranges of input voltages as two discrete output states.
posted by MillMan at 3:33 PM on January 25, 2008
