Analog Life, Digital Lies

The Paradox of Analysis

Light bulbs are, for the most part, digital. Life is not. In case you are not technically minded, let me explain what "digital" means. Computers run on a simple principle: yes/no, on/off, closed/open, one/none. All the communication in the machine is done with binary numbers — "zeroes and ones," more specifically. Keep in mind that the ten digits in a decimal system are not 1-10 but 0-9. The representation '10' actually means "1 in the tens place and zero in the ones place."

Any number can be represented as a binary value, although this system of notation is unwieldy and inefficient, since it works on powers of 2. For example, '7' is '111' (1 in the 1's place, 1 in the 2's place, and 1 in the 4's place, or 1+2+4=7), and '8' is '1000' (1 in the 8's place and zero in the 1, 2, and 4 places). For human convenience, we convert the computer's binary numbers to two-digit hexadecimal numbers, making them shorter and more readable. Thus '7' becomes '07', '8' is '08' and '9' '09', but at ten, instead of adding a "place" as we would for a decimal number, we use '0A' and so on until '15' is '0F' and '16', the number that moves us into the second digit, is '10' (1 in the 16's place and 0 in the 1's). This is even more efficient than the digital system; for example, it lets us represent 255 with just two digits (FF) instead of eight binary ones.
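If you like to watch the bookkeeping yourself, a few lines of Python (just one convenient way to poke at it) will print the same values in all three notations:

```python
# Show the decimal, binary, and hexadecimal forms of the numbers discussed above.
# :08b pads the binary form to eight digits; :02X pads the hex form to two.
for value in (7, 8, 9, 10, 15, 16, 255):
    print(f"decimal {value:3} = binary {value:08b} = hex {value:02X}")
```

The last line of output shows 255 spelled out as 11111111 in binary but collapsed to FF in hexadecimal.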

And having changed the representation of the binary values, we then use alphanumeric symbols to represent the numbers and words humans are used to, all linked to their "hexadecimal values." So we say 'I' to the computer, and it translates that into '01001001' (five lights off, three on) to understand what it means, then converts it to '49', the hexadecimal value of 'I' in its standard English character tables.
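The same kind of sketch shows the character juggling. Python's built-in ord() hands back the number a machine stores for a character, and from that number the binary and hexadecimal spellings follow:

```python
# 'I' is stored as the number 73: 01001001 in binary, 49 in hexadecimal.
letter = "I"
code = ord(letter)   # the numeric code behind the character
print(f"'{letter}' -> binary {code:08b} -> hex {code:02X}")
```

Run it and you get 'I' -> binary 01001001 -> hex 49, the same translation described above.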

This juggling is necessary because computers work like light bulbs. Millions of tiny light bulbs, each with a switch. Throw some current and it's on (1). Kill the current and it's off (0). Just like that. Snap snap. Of course, those of us who have watched the slow process of a neon or sodium bulb coming to the flare point know that some lights are not "snap snap" at all. In fact, none are, as far as I know. Incandescent bulbs "heat up" to produce light and "cool off" to go out, but they do it faster than most of us can see. So we pretend it's instantaneous. And the switches in the computer are so small, so fast, that they are virtually instantaneous.

But "virtually" is the point here. Even in computers, the concept of digital data, two-valued logic (yes or no, true or false), is nothing more than a way of representing things. Not "the way things are," but "how we see them." It's a distinction we have a terrible time keeping clear. Are apples "red" or do we see "red"? Apples are not "red" for the color blind, and they are more than "red" for a creature that sees a wider spectrum than we do. But we think of the "redness" as being in the apple somehow. Just as we think of the apple as having a shape, a boundary, which is arbitrarily defined as what we can perceive with the eye or hand. Rather than, say, the fuzzy atomic level where apple stuff and surrounding stuff is rather blurry.

And having accepted the notion that something as complex as computing can be reduced to digital logic, we now find ourselves bombarded by attempts to digitize life. Computers can make cleaner, purer music by digitizing it. Sharper, crisper pictures. Clockwork carrots. Life is not digital. The faithfully digital protest furiously that everything is either part of the carrot or not, that a thing can't be "black and not black at the same time." But at what point in the digestive process does the carrot cease to be a carrot? How do we get from black to not black; do we pass through a gate or a grey field? If it's a gate, are you in black or not black while you are in the gate? If a field, is the field black, or not black? They are unanswerable questions, because they depend upon assumptions, consensus, and representation, not the muddle of reality. Yet the opposite, reductive, binary notion is pervasive in our perceptions, our culture, and our language.

When is the brain dead? we ask in the courts, certain we can find the moment when brain function switches from on to off. And brain-death is self-death. Of course; we all agree on that. When is the fetus viable? How long is the coast of Florida? Are you happy or not? Is the latest McDonald's ad good, or not? Will it sell, or not? The Social Darwinists and Behaviorists insist that once they have enough "data," they will be able to predict the details of human behavior. The problem, they would have us believe, is one of scale. I don't think so, and I suspect none of us would think so, if we would take a moment to reflect. I think they have lost track of the paradox that analysis depends on. Analysis captures static snapshots of a living process. It culls and selects those snapshots to gather representative ones. It accounts for the unrepresentative by discarding it. It eventually can present us with a "description," which is a movie of the process, not the process itself. Even if the movie has more frames per second than we can perceive, it is still a fragmented representation of the process.

A case in point is photography. Photographic film and paper and chemistry are all "grainy." What that means is that once you magnify the image enough, it becomes individual colored grains. "Ah-ha!" the digists cry, "Digits! Either the grain is black or white." Well, no. If you magnify it more, the "grain" grows texture, just like a grain of sand. If you magnify it enough, it becomes atoms. And even if atoms have color as an agreed-upon property (they don't yet, as far as I know), they aren't "there" or "not there." That, folks, is an elementary principle of quantum physics. Atoms fizz and percolate. They move. The particles of an atom are indeed "there and not there."

And so it is with all life. We see analog life, we refine it to digital facts. But the digital facts are just a representation.

To see what I mean, imagine using analysis and language to teach someone to walk. First we have to figure out "how we walk." Take a step forward, thinking about it. Most Americans begin by stepping out on the right foot. We shift our weight to the ball of our left foot, then raise our weight with the muscles of the left foot and leg, raise the ball of the right foot, and swing the foot forward, allowing our weight to fall forward off the planted, supporting left foot and continuing to raise the toes of the right foot until they are about four degrees above the heel. The heel rises about an inch from the surface. The foot, relative to the plane of the hips, is angled outward, to the right, one degree. We advance the foot, and our center of gravity moves forward. When the foot has advanced so that one-third of our height separates our feet, the foot descends, heel first, and our weight shifts onto the right leg. Try it, and stop there.

That was easy. Now how are you going to get your left foot unplanted so it can advance? Your left foot is angled into the ground like a tent peg. If you just swing it forward, you will dig it into the dirt. Do you raise the foot slightly? If so, do you level it first, or leave the toe pointed down? Do you lift by pushing with the muscles of the foot or pulling with the muscles of the leg? If the latter, which muscles — the calf, front thigh, rear thigh? Do you rock your weight farther forward, till the left foot is raised slightly and released? Do you raise the toes? Bend the knee? Swing the foot outward a bit? If so, from the heel or toe? Work it out; try doing it wrong.

How do babies learn to walk? They watch enviously. They imitate, experimenting. If you try to help, supporting them by taking their hands, you will see a few simple inefficiencies that they eventually solve. They lean forward too far, naturally enough, since their natural position is crawling and since they want to go forward. Most babies walk pigeon-toed, probably because the big toe is heavier than the outside edge of the front of the foot. (I would guess our prejudice against "pigeon-toed" comes from correcting this in babies.) And babies lift their feet with the toes dangling, from the heel, like puppets. Their toes scuff on the floor as they advance their feet. The principle of getting the toes out of the way is not clear to them.

But they watch, they adjust, they learn, and eventually, we all "walk." That's what we call it, "walking," as if it were all the same. But no two "walks" are the same, even for the same walker. During my first winter in North Dakota, I learned to walk flat-footed, planting my foot a bit sooner and all at once rather than heel first, to cope with the ice. I walk "tired" sometimes, and "happy" others. I walk "warily" in the dark and "quietly" when I see something interesting on the trail. Walking on the beach, I walk differently than I do on tile, concrete, or loamy dirt. Meaningless discriminations? I don't think so. Noise on the digital grid. Yup.

Analysis is digitization. We search for "meaningful generalizations." How would we analyze "walking"? The Muybridge approach: timed-interval photographs. Five snapshots of a woman at the moment when she shifts her weight to the advanced right foot, five more taken a quarter-second later, five more. The camera will tell us how to walk, if we choose the intervals and moments well. But no, the camera tells us how a woman walks. How that woman walks. Or rather, how she walks on that surface, in that mood. So we add five more subjects. Now we have ninety photographs. Much better. Now we know how people walk barefoot on a hardwood floor in front of a camera. One of them walks a little funny, messes up the data, so we'll eliminate him: a non-standard walker.

And so it goes. We digitize reality and when it fails to resolve into generalization, we adjust reality to fit. In fact, walking is culturally determined. A characteristic Asian female walk does not lift the feet but sweeps them forward in small, dancing arcs, the "mincing" caricatured in Western stereotypes. Moccasined American Indian men walked slightly pigeon-toed. Ballet dancers walk duck-toed. Heavy men and pregnant women walk differently than thin children, much more focused on their weight shifts. Nothing complicated about all this, as long as we keep it simple as a song: "Put one foot in front of the other." And nothing especially pernicious about focusing on the fine details, either. Folklore has it that Steve Prefontaine, the great distance runner, found an inefficiency in his stride by watching slow-motion footage, corrected it, and improved his times.

What is pernicious is losing sight of the fact that digital data is merely a representation. Steve Prefontaine didn't learn to run by breaking down the digital data. Nor did that data teach him to run long and fast. It provided him with details that let him polish his technique, which he learned just as we learn to walk, by trial, error, observation and imitation. The notion that digital representations are more efficient than the analog soup of actual running is the latest flavor of Platonic forms. And it was debunked millennia ago by an old Greek who never saw a camera. Digitization is, after all, the trick in Zeno's paradox, where Achilles tries hopelessly to catch the turtle with a hundred-yard head start by running half the distance, half the rest, half the rest... and no matter how close he gets, there is always some (the other half) left. Distances do not come in units to be neatly cut into halves. If you had to mark the "halfway" point on the coast between Los Angeles and San Francisco, how would you find it? Digitization represents reality in snapshots. Life is in the spaces between.

