apexart
Coding the Body
organized by Leah Buechley

Opening reception:
Wednesday, March 19: 6-8 pm


On view:
March 20 - May 10, 2014



Featuring work by:
Francis Bitonti
Karen Bradley
Kelly Dobson
Sarah Fdili Alaoui
Ben Fry
Yves Gellie
Eunsuk Hur
Nervous System
Thecla Schiphorst
Cait & Casey Reas
Diane Willow
Amit Zoran


Dr. Sarah Fdili Alaoui, Dr. Thecla Schiphorst,
Karen Bradley, and Nathan Evans, Decoding, 2014 (video still)


Press:
The New York Times mention
Fathom blog post
RELATED EVENTS:

March 22 - May 10
Weekly Saturday Performance Series, exploring the ambiguous and qualitative constraints on our bodies.

Tuesday, March 25
Touchy Feely Tech, a presentation of tech prototypes geared toward social and emotional interactions.

Saturday, March 29
Algorithmic Tattoo Workshop, teaching you how to turn algorithms into tattoo designs.

Thursday, April 10
From Somewear to Everywear: The Future of Wearable Computing and Augmented Reality


If you visited Cambridge, Massachusetts, in the 1990s, you might have encountered a group of intense young people who looked as if they had stepped out of a science-fiction novel. Sporting bulky headgear and fanny packs overflowing with cables and electronic equipment, these MIT students self-identified as cyborgs. A few members of the group kept their systems on at all times—in classes, meetings, and parties; out on dates and trips to the beach. (Thad Starner, who is now one of the technical leads on the Google Glass project, has worn his “wearable computer” continuously since 1993.)

Their primary aim was, in a phrase coined by the science-fiction author Vernor Vinge, “intelligence amplification.”(1) They envisioned a benign human-machine synergy that would make people smarter, faster, and more efficient. The physical systems they wore consisted of a head-mounted display—a small transparent computer screen through which you could see both the real world and computer-generated graphics—connected to a computer and a one-handed keyboard.(2) At the heart of these systems, though, was code—the software that enabled wearers to interact with their devices and to store, organize, and retrieve data.

With this outfit they were able to take notes, look up information, send messages, and snap photos instantaneously, wherever they were. In the 1990s, before the widespread use of smartphones, such powers were unheard of. These prescient researchers foresaw and embraced an extreme version of the always-on, totally connected life.

The cyborg is a popular imagining of the relationship between code and the body, of how people can and should relate to computers. Alluring and unsettling, this vision promises that we can be better than human: smarter, stronger, faster; but it leaves us suspicious that we will lose ourselves in the process—as parts of our bodies are gradually “augmented” or simply replaced by machines and software. I grew up on a farm, the child of back-to-the-earth hippies, steeped in a culture with a deeply rooted distrust of technology and the very notion of human progress. My relationship with technology continues to be an ambivalent one—equal parts enchantment, skepticism, and trepidation.

I was drawn to computers not for their abilities to augment or replace human intelligence, but for their expressive potential. A decade after the MIT cyborgs first emerged, when I began to research wearable computing, my interest stemmed from a fascination with fashion. I joined a small group of designers and engineers who were investigating how computers might expand the palette we use to adorn and identify ourselves.

Nervous System’s designs look like they’ve been harvested from an ocean or meadow, not made with machines. A lacy filigreed earring looks like a veiny leaf skeleton; a twisting spiral necklace, like a piece of coral. Jessica Rosenkrantz and Jesse Louis-Rosenberg, the founders of the company, develop software that mimics biological processes. Their programs generate designs for objects that are, in turn, fabricated by 3D printers, laser cutters, or other automated tools.

The algorithms Jesse and Jessica write define not so much individual objects as entire families of forms. A program produces a basic structure. Each time it is run with new input parameters, it generates a new variation on the structure. Variations can also be created in a collaborative process between the software and the person who will ultimately wear the design. Code determines the framework for a piece, but a prospective wearer can fine-tune and play with contours until she finds a version she fancies.
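The parametric idea behind such designs can be sketched in a few lines of Python. This is a hypothetical illustration, not Nervous System's actual software: one function defines a whole family of spiral forms, and each set of input parameters produces a distinct variation on the same underlying structure.

```python
import math

def spiral_points(turns, growth, points_per_turn=40):
    """Generate (x, y) points along a logarithmic spiral.

    The function encodes a family of forms, not a single object:
    every (turns, growth) pair yields a different variation
    on the same structural idea.
    """
    pts = []
    n = int(turns * points_per_turn)
    for i in range(n):
        theta = 2 * math.pi * i / points_per_turn  # angle along the spiral
        r = math.exp(growth * theta)               # radius grows exponentially
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

# Two runs of the same program with different parameters
# produce two distinct members of the spiral family.
tight = spiral_points(turns=3, growth=0.10)
loose = spiral_points(turns=3, growth=0.25)
```

In a real design pipeline, point lists like these would be turned into meshes or cutting paths for a 3D printer or laser cutter; here the point is only that the code defines the rules of a form, and the parameters choose which member of the family gets made.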

A piece might begin with a 3D scan of her body. This scan provides a precise canvas on which to work—a perfect virtual representation of her figure. The software then digitally “grows” a pattern around this shape. Once the basic pattern is set, she can lower a hemline, loosen a sleeve, or create a more finely detailed decoration on a bodice. Despite these adjustments, the piece remains true to the aesthetic framework laid out by the program. Once the design is just the way the wearer wants it, it’s 3D printed. The product of this digital collaboration is utterly unique; no one else will ever have one like it.

Software gives us new tools—interactivity, dynamics, and seemingly limitless complexity—that we can use to make beautiful and entirely original things. Code can help us communicate and explore our identities in compelling and delightful new ways.

In a famous 1950 paper, Alan Turing, the father of computing, posed the question, “Can machines think?”(3) Turing, like many scientists and mathematicians, assumed that there was no fundamental difference between the capabilities of a computer and those of a human brain. He supposed that it was only a matter of time before a robot was developed that would be indistinguishable from a human.

Turing’s paper, “Computing Machinery and Intelligence,” described a simple test—the Turing Test—for deciding whether or not a machine could think. The idea was straightforward: if a machine was good enough to fool a person into believing it was human, we could fairly assume it was indeed thinking—carrying out the same kind of process that occurs in our brains when we think.

Computer scientists were by no means the first to dream of building human replicas. There is a long history of this kind of play and exploration. To take just one instance, the great automaton builder Jacques de Vaucanson constructed a collection of dazzling, lifelike figures in the 18th century. One of them, the Flute Player, was a full-sized android that could play twelve different songs on a flute by blowing air out of its mouth. Nine bellows powered the machine, which included a mechanism to mimic each facial muscle used in human playing.(4) As astonishing as it was, the Flute Player was limited; it couldn’t walk, talk, or smile.

Though robotics has progressed since then, it’s not clear that anyone is much closer to making a synthetic human being. However, contemporary researchers have discovered something useful: people are actually quick to ascribe personhood to machines. A robot need not be a perfect or even a close copy to pass Turing’s test. Humans, it turns out, are hardwired to look for life. We see faces in clouds, empathize with stick-figure cartoons, and relate to robots like they’re people even if they’re poor imitations.

Robots are servants. They do what we want them to do, what they’re programmed to do, often distasteful jobs we’d rather not do ourselves. They sweep our floors and fight our wars. Increasingly, they’re taking care of us. The Japanese government recently allocated approximately $24 million to develop robots for elder care, hoping that machines will be able to nurse the country’s aging population.(5) In the USA and Asia, “nanny robots” are being proposed as a solution to the lack of affordable childcare.(6) Roxxxy, the “sex robot girlfriend,” is a full-service partner.(7)

If the researchers have it right, we’re likely to fall for, even to love, these synthetic companions. Will they love us back? Is a relationship with a machine that is programmed to please, to fulfill our every wish, a desirable substitute for the complex, rewarding struggle that is a relationship with a person? Are we headed for a future in which we’re increasingly living “alone together,” in Professor Sherry Turkle’s phrase—next to one another, yet each of us isolated in our own personal, machine-mediated bubble?(8)

Codes and machines are claiming more and more of our time, our attention, and our physical selves. With each year, we spend more time interacting with computers and less time interacting with people and the natural environment. And yet, software is revealing vast new spaces of knowledge, expression, and experience—introducing us to entirely new ways of thinking about and interacting with the world.

Coding the Body interrogates the relationships between humans and code. It turns to cyborgs, robots, fashion designers, geneticists, artists, and others to explore how code is being used to understand, control, decorate, and replicate us. The exhibition celebrates the beauty of code and its manifestations while casting a wary eye on its ever-expanding power.

Leah Buechley © 2014

1. Vinge, V. (1993), “The Coming Technological Singularity: How to Survive in the Post-Human Era.” VISION-21 Symposium, NASA technical reports, NASA CP-10129.
2. Starner, T., Mann, S., Rhodes, B., et al. “Augmented Reality Through Wearable Computing,” Presence, Special Issue on Augmented Reality, vol 6(4) 1997.
3. Turing, A.M. (1950). “Computing machinery and intelligence.” Mind, 59, 433-460.
4. Wood, G. Living dolls: a magical history of the quest for mechanical life. Faber and Faber, London, 2002.
5. Iida, M. “Robot niche expands in senior care.” The Japan Times Online, 2013. http://www.japantimes.co.jp/news/2013/06/19/national/robot-niche-expands-in-senior-care/.
6. “Robot nannies threat to child care.” Telegraph.co.uk. http://www.telegraph.co.uk/science/sciencenews/3343667/Robot-nannies-threat-to-child-care.html.
7. “Roxxxy Sex Robot: World’s First ‘Robot Girlfriend’ Can Do More Than Chat.” Huffington Post, 2010. http://www.huffingtonpost.com/2010/01/10/roxxxy-sex-robot-photo-wo_n_417976.html.
8. Turkle, S. Alone Together: Why We Expect More From Technology and Less From Each Other. Basic Books, New York, 2011.




Leah Buechley is a designer, engineer, artist, and educator whose work explores intersections and juxtapositions—of “high” and “low” technologies, new and ancient materials, and masculine and feminine making traditions. She also develops tools that help people build their own technologies, among them the LilyPad Arduino kit. She recently left her position as an Associate Professor at the MIT Media Lab to start a design firm. While at MIT she founded and directed the High-Low Tech research group. Her work has been exhibited internationally in venues including the Victoria and Albert Museum, the Ars Electronica Festival, and the Exploratorium, and has been featured in publications including The New York Times, Boston Globe, Popular Science, and Wired. Leah received a PhD in computer science from the University of Colorado at Boulder and a BA in physics from Skidmore College. At both institutions she also studied dance, theater, fine art, and design.