Sunday, July 16, 2017

The Restless Clock: A History of the Centuries-Long Argument over What Makes Living Things Tick
By Jessica Riskin
University of Chicago Press, published 03.10.2016, 544 pages



In late March, I met Actroid-F in the University of Southern California’s (USC) Doheny Memorial Library. Part of an interactive showcase on “thinking and feeling in the age of synthetic humans,” she is a remarkably persuasive android. Sitting at her station in a beige cardigan with her hair in a ponytail, she leans forward with an expression of empathetic concern and asks the crowd how we are doing. I walk up for a closer look and make eye contact, surprised by the blink that greets me. Her makers, like humanoid researchers in general, seek to simulate aspects of the human body, above all movement, which is especially difficult in the face, described by Ludwig Wittgenstein as the soul of the body for its ability to emote. A “telerobot,” Actroid-F can be wielded by a remote operator through a motion-capture system, imitating gestures and enunciating words, in effect transmuting a human presence via pneumatic actuators. It is easy to focus on her uncanny female likeness, and so on the ghost in this particular machine, but that would risk missing the bigger picture: the importance of ephemeral qualities like “emotion” in this new generation of robots.

On the evening I met Actroid-F, Jonathan Gratch, from USC’s Institute for Creative Technologies (ICT), sat with her creator, Yoshio Matsumoto, both, or should I say all three of them, part of a panel to discuss the uncanny valley: the revulsion we tend to feel when something not fully human comes too close to human likeness. Gratch, whose background is in the nascent field of affective computing, focused his remarks on the artificial intelligence they are building to bring Actroid-F to “life.” He mused on “if and why and how a machine could ‘have’ an emotion and what good that could be.” Motioning up to the screen, he then introduced “Ellie,” a human-like software agent and the prototype for Actroid-F’s AI. Both are endowed with upright posture and calm poise; their similarities are easy to see.

Built to overcome the difficulties inherent in eliciting honest and accurate disclosure, Ellie has already succeeded in at least one trial with a National Guard unit returning from Afghanistan. Gratch demonstrated her therapeutic potential with the following exchange. Asked on a questionnaire whether she had “bad dreams,” a returning veteran who’d been in a convoy that hit an IED, causing her vehicle to flip and catch fire, answered that no, she did not. By contrast, the virtual human Ellie elicited from this vet a far more candid response:

Sure, Ellie […] do I wake up thinking I’m dying? Yes. Thinking I’m on fire? Sure. Thinking I didn’t get out of my vehicle. Yup. […] But they’re just dreams so … I mean I’m thankful for that cause it didn’t happen, right? […] Dreams are just dreams.

After a pause, Ellie followed up with “How much of a problem has this been for you?” to which the soldier quickly answered, “It’s not a problem […] It’s just life.”

This was apparently deemed a successful exchange: the robot had elicited “disclosure.” Affective computing pioneer Rosalind Picard pointed out in 1997 that people with too little emotion are akin to computers in how they “malfunction”: “If we want computers to be genuinely intelligent, to adapt to us, and to interact naturally with us,” she wrote, “then they will need the ability to recognize and express emotions, to have emotions, and to have what has come to be called ‘emotional intelligence.’”

Ellie represents a step in this direction, but her performance complicates the goal. Of course, many social and cultural factors come into play in valuing disclosure, and this sort of disclosure in particular. Research has found that we are more likely to disclose information about ourselves on web forms than to other people, unless, that is, rapport with our fleshly interlocutors is first established. With Ellie, according to Gratch, we can get the best of both worlds: rapport and the benefits of anonymity (she is not a person, after all). By studying the effects of embodiment, Gratch and Matsumoto may yet build the first android therapist in Actroid-F.

Androids aren’t new, of course; they have a long and storied history, and notions of how to understand and interact with them have fluctuated over time in accordance with cultural and social norms. The term “android” is derived from Greek roots meaning “manlike” and was coined in the 17th century by French King Louis XIII’s personal physician, Gabriel Naudé. A time of great mechanical and spiritual upheaval, the 17th century birthed mechanism, science’s clockwork model of nature.

Stanford science historian Jessica Riskin sat with Matsumoto and Gratch on the panel to discuss this history, covering key themes from her book The Restless Clock: A History of the Centuries-Long Argument over What Makes Living Things Tick. Focusing on the artisans and engineers who created the ancestors of both Actroid-F and the robots on factory floors, her book illustrates not only how these machines inspired modern theories of life, but also how the language we use to describe them has important consequences. Early premonitions about artificial intelligence — and, more recently, films like Ex Machina and TV shows like Westworld — rely on mechanism. As does Elon Musk’s ambition, with his start-up Neuralink, of enhancing brain power to stave off the potentially dangerous rise of AI. More practically, law professor Ryan Calo has argued for a new federal agency to address the novel experiences and harms that robotics enables. Understanding the backstory of robots might save us, as Musk puts it, from “summoning the demon.”

¤

“Summoning” suggests a kind of mystical power, but that power, since at least the 17th century, has resided in science and technology. Riskin, however, brings us even further back, to late medieval and early modern Europe, exploring the setting in which “a philosopher’s view of the physical world [became] precisely an artisan’s view of a machine.” Rather surprisingly, the main patron of automata (before “robot,” the word “automaton” was used to refer to artificial humans and animals) was the Catholic Church. Riskin amusingly describes automaton Christs blinking and scowling on the Cross and mechanical devils howling and sticking out their tongues:

Rolling their eyes, moving their lips, gesturing and grimacing, these automata dramatized the intimate, corporeal relation between representation and divinity, icon and saint […] The icons were mechanical but neither passive nor rote. On the contrary, they were representations in motion, inspirited statues: they were mechanical and divine.

Even outside the Church, automatons were familiar features of daily life, providing entertainment in palaces and gardens.

Descartes’s accounts of anatomical systems, Riskin reminds us, were thus anything but revolutionary. His body-machine analogies resembled those of the ancient and Scholastic writers from Aristotle to Galen to Thomas Aquinas. What was radical, provoking consternation from his Catholic critics, was his disembodiment of the human soul. By putting into words a modern sense of subjective selfhood, Descartes created Cartesian dualism. Riskin clarifies that his argument that animals had no soul only meant that they had no reason, and did “not entail refusing life to them.” Rather than reducing life to mechanism, she argues, Descartes “meant to elevate mechanism to explain life, never to explain it away.”

The question of whether a purely material entity — a machine — can be alive animates her historical tour. The French engineer Jacques de Vaucanson — celebrated by Voltaire as Prometheus’s rival — created an android in the early 18th century that played the flute: “[B]lowing air from its lungs (three sets of bellows), and exercising flexible lips, a supple tongue, and soft, padded fingers with a skin of leather,” the android executed 12 tunes in succession.

The androids of the Swiss clock-making family Jaquet-Droz, from the mid-18th century, included a toddler-sized writing “boy,” the first known programmable machine. Able to write any message of up to 40 characters — not to mention dip his pen in a bottle of ink and write in elegant, 18th-century script — his hand penned “Je pense donc je suis” (“I think, therefore I am”). A harpsichord-playing automaton, nicknamed the Musicienne, defied many critics who claimed that “to express the passions of the soul, one must have a soul.” During her performances, the Musicienne would bend “her head with grace, following her fingers with her eyes as she played, her breast heaving in time to the music with apparent emotion.”

The Hungarian engineer Wolfgang von Kempelen’s android “The Turk,” also from the mid-18th century, played an expert game of chess, reportedly beating Frederick the Great, Benjamin Franklin, Napoleon, and Charles Babbage. Despite being a fraud — a human chess player was concealed in the Turk’s pedestal — the Turk helped incite a discussion over whether a machine could think, and whether a thinking being might be a machine. “By the mid-nineteenth century,” writes Riskin, “the consensus was that if a machine exhibited responsive agency of the sort needed to play chess, there must be a Worousky [an individual skilled at chess and of diminished stature] in the works.”

Our experience today thus reflects a reversal in assumptions, albeit a very recent one. In Deep Thinking, former world chess champion Garry Kasparov points out that in the mid-20th century “it was not yet a foregone conclusion that computers would come to dominate humans at chess, or anything else beyond rote automation and calculation.” For Descartes in the 17th century, however, machines did not just carry out rote calculation or predetermined sequences of movements.

Industrial automata explain part of the transformation after Descartes, according to Riskin. The same artisans and engineers building the philosophical automata mentioned above were also building new automatic looms and mechanical calculators, restructuring divisions of labor.

The equivalence of machines to lowly people of various sorts — slaves, conscripts, workers — was an attractive supposition and a theme that would recur throughout the development of automatic machinery, industrial as well as experimental. In such conversations, machines unsurprisingly had a decidedly Cartesian meaning: they signified the lack of a rational soul, of a capacity for reason and intellect.

Vaucanson boasted, for instance, that with his automatic loom a “horse, an ox, an ass makes fabrics much more beautiful and much more perfect than the most clever workers of silk.” Mechanical calculators similarly demoted calculation from a paradigm of intelligence to its antithesis. Riskin points to the “figurative” meaning of “machine” between 1765 and 1832 as particularly revealing. “Machine” begins neutrally enough, defined as “that which serves to augment or direct moving forces” or an “Engine, instrument suited to move, pull, lift, drag or throw something,” with figurative uses such as “man is an admirable machine”; by 1832, it includes: “Prov. and fig., It is nothing but a machine, it is a pure machine, a walking machine, a person without spirit, without energy.”

She also illustrates the influence of an earlier cultural transformation, the Protestant Reformation. Accompanied by what the German sociologist Max Weber called the “disenchantment of the world,” it induced a “sea change” in the way Christian Europe understood the relations between matter and spirit, nature and divinity. Setting their clockmaker God apart from his clockwork Creation meant that automatons began to appear “brute and stupid,” even deceptive, writes Riskin, and “[o]ver the humming, groaning, chirping, whistling, chattering ecclesiastical machinery, the Reformation cast a partial hush.”

Descartes represents a fork in history. On the one hand, the idea of inert material parts in motion evacuated agency, perception, consciousness, and will from nature. A new science-theology was then created, devoted to demonstrating God’s existence by detailing the “mechanical perfection of his artifact, the world-machine.” “Intelligent Design” represents what Riskin calls the brute-mechanist tradition.

On the other hand, Gottfried Wilhelm Leibniz, the first in a group of detractors Riskin mentions, sought to naturalize agency and sentience, rekindling them in matter. He represents the other road, the active-mechanist tradition. Not only did he object to Descartes’s claim that animals lack souls, but he claimed that nothing really lacks a soul — or as Riskin puts it, “machines were more than machines.” In La Monadologie (1714), for instance, Leibniz proposed a thought experiment: he instructed his reader to imagine a big machine, “the size of a mill, that could think, feel, and perceive.” If you walked inside this great factory of thought, Leibniz argued, you would find only “pieces that push each other and never anything to explain a perception,” and so you would be led to understand that perception and thought “resided not in the operation of the mechanism but in its very substance.” Rather than being a “designer,” Leibniz proposed that God worked through “organization,” with natural machinery unfolding over time.

The prehistory of Darwinian evolution can then, as Riskin illustrates, be understood as an interplay between these two sides, with “organization” becoming a popular alternative to “design” in the mid-18th century through individuals like Julien Offray de La Mettrie, Immanuel Kant, and Jean-Baptiste Lamarck. She thus provocatively traces the emergence of a new secular notion of “history” itself as a self-organizing living machine — to which Charles Babbage’s mathematical engines of the 1820s and ’30s added currency.

Babbage disagreed with this 19th-century consensus on responsive machinery. He argued not only that “every game of skill is susceptible of being played by an automaton,” but also that “a purely material, mechanical entity might overcome its physical finitude by operating over indefinitely long periods of time.” By stringing together punch cards and then feeding the result of the operation back into the machine, a process Babbage described as “the Engine eating its tail,” he demonstrated that though no mechanism could transcend its own spatial limits, “perhaps, particularly if it transmitted its mechanical action from one generation to the next, it might achieve a form of transcendence through the dimension of time.”

In ridding his theory of evolution both of internal (striving) and external (divine) agency, Charles Darwin ultimately adopted modes of explanation informed by both strands:

[the] historical mode that went with the notion of an internal striving agency according to which living beings actively transformed themselves over time; and the fitness mode that went with the assumption of divine agency, according to which living beings were static, passive, designed devices.

Intelligent Design was in fact already intellectually suspect by the time Darwin proposed his theory of evolution by natural selection, but the German physiologist and physicist Hermann von Helmholtz dealt a particularly decisive blow. His monumental study of the physiology of vision (1867) demonstrated that the eye was anything but a superior creation — “[it] has every possible defect that can be found in an optical instrument, and even some which are peculiar to itself.” Helmholtz concluded that “if an optician wanted to sell me an instrument which had all these defects, I should think myself quite justified in blaming his carelessness in the strongest terms, and giving him back his instrument.”

Rather than attaching an omnipotent God to a limited contrivance, such as to the faulty eye, Helmholtz “did away with the perfect eye, replacing it with the active process of perception.” As Riskin puts it, we see by thinking.

¤

Darwin’s theory was purged of active mechanism around the turn of the 20th century, according to Riskin. “The ban on agency,” she writes, “seems as close to the heart of what science is as any scientific rule or principle.” Her book illustrates what it means to create a scientific model of a living being. Just as importantly, she shows how inextricable ideas are from their context — in other words, from the world in which they arise and from the language in which they are couched.

Yet, this said, agency still crops up in scientists’ figures of speech. Perhaps most famous is the “gene’s eye view” popularized by evolutionary biologist Richard Dawkins: it removes any remnant of agency from living things, which become “vehicles” for will-less and purpose-less genes:

If we allow ourselves the license of talking about genes as if they had conscious aims, always reassuring ourselves that we could translate our sloppy language back into respectable terms if we wanted to, we can ask the question, what is a single selfish gene trying to do? It is trying to get more numerous in the gene pool.

The philosopher Colin McGinn recently pressed this point in a letter to The New York Review of Books, asking: “If these recent ways of talking in biology, psychology, and neuroscience are really just loose but useful metaphors, now deeply ingrained in these sciences, what is the literally true way of speaking for which they substitute?”

Perhaps an unanswerable question — and Riskin undoubtedly would quibble with the word “true” — but The Restless Clock offers a different angle: a nuanced appreciation of our figures of speech and what they reveal. She quotes a biologist who explains that scientists speak and write as if natural entities expressed all sorts of purposes and intentions, but only as placeholders “for things we don’t know yet […] to make it easier to talk about.” Genes, for instance, “dictate” the production of enzymes; proteins “regulate” cell divisions; muscle cells “harvest energy” — all shorthand for complex processes. Though these all imply purposeful action, the biologist explains that the least permissible of all are words like “want” that anthropomorphize a “mechanical” process — and, as the historian Adelheid Voskuhl points out, interfere heavily with the very methodologies that bestow credibility onto scientific knowledge. The dangers of this type of language are clearly becoming more pressing in the current AI context, as robots play ever larger roles in our lives.

¤

In 2015, robotics company Boston Dynamics came under fire over videos in which employees kicked a dog-like robot named Spot to demonstrate its stability. Even PETA weighed in. At USC on the evening discussed earlier — where Riskin, Gratch, and Matsumoto spoke — these videos were shown to the audience and elicited similar reactions of discomfort.

Renowned sociologist Sherry Turkle and many others have voiced pointed concerns about the implications of anthropomorphizing technology — ranging from deteriorating human relationships to threats of emotional manipulation. The outpouring of grief over the death of Steve Jobs in 2011, Turkle notes, has everything to do with the relationship he nurtured between consumers and Apple products: “Jobs was making the computer an extension of yourself. It wasn’t just for you, it was you.” In “How Should the Law Think about Robots” (2013), cyberlaw scholar Neil M. Richards and roboticist William D. Smart opine on the promises and perils of the personal robotics revolution and issue a powerful critique of “seductive but dangerous metaphors.” For example, in the context of cyberlaw, they highlight the issue of wiretapping and the Fourth Amendment:

In the 1928 case of Olmstead v. United States, the Supreme Court was called upon to determine whether the police’s warrantless wiretapping of a phone line leading into the home of notorious bootlegger Roy Olmstead constituted a “search” that would have required a warrant. Chief Justice Taft’s opinion for the Court used a physical conception of a search rather than a broader understanding of the emerging capabilities of electronic technologies. The Court accordingly held that because wiretaps required neither physical trespass into the home nor the seizure of “tangible material effects,” the Fourth Amendment was inapplicable to wiretapping.

In his dissent, however, Justice Brandeis warned that avenues “may some day be developed by which the government, without removing papers from secret drawers, can reproduce them in court, and by which it will be enabled to expose to a jury the most intimate occurrences of the home.” In essence, he foresaw that a metaphor focused on physical trespass, rather than a broader one rooted in conceptions of privacy, would fail to recognize the threat new technologies presented. Richards and Smart see the “Android Fallacy” of projecting human attributes onto robots as particularly treacherous. “In research labs,” they write, “cameras are described as ‘eyes’, robots are ‘scared’ of obstacles, and they need to ‘think’ about what to do next.” We might consider this language harmless, but, citing a recent study, they report that when an android acted to deprive a human subject of a $20 reward, 65 percent of test subjects ascribed moral accountability to it. “This does not happen with vending machines that fail,” Richards and Smart deadpan. They conclude that we must remember the dangers of designing legislation based on the form of a robot rather than its function: “robots are, and for many years will remain, tools.”

In a forthcoming volume entitled Robot Ethics 2.0, robot ethicist Kate Darling argues that it’s not so simple. With robots creeping into more aspects of our lives — from war zones to bedrooms — she asks:

As we increasingly create spaces where robotic technology is purposed to interact with people, should we encourage this inclination [anthropomorphizing] or discourage it? And how do we change human perception of robotic objects when even the simplest of robots engenders anthropomorphism?

Paying attention to how we “frame” robots, such as whether we give them names or serial numbers, is important, according to Darling. Anthropomorphic framing can indeed be desirable when it enhances the function of the technology, such as with social robots in health and education. Paro, a robot baby seal used for animal therapy in nursing homes, and MIT’s DragonBot and Tega, which “engage children in learning, often with better results than books or computers,” serve as cases in point, much as the therapist Actroid-F might in the near future.

This discussion highlights our growing realization that no technology is neutral, apolitical, or amoral, much as no metaphor is. Just as the therapist Actroid-F raises ethical questions about designing and using technology that manipulates us into divulging more information than we otherwise would, so does describing crime as either a “beast” or a “virus,” as psychologists Paul Thibodeau and Lera Boroditsky have demonstrated (the former incites calls for stronger law enforcement, the latter for rehabilitation).

When asked “What do you think about machines that think?”, neuroscientist Robert Sapolsky recently responded, “Well, of course it depends on who that person is.” Machines no longer bear the stigma of being dull, rote automatons; instead, ephemeral qualities like fairness and guilt have become “hackable.” In The Economist’s affiliate 1843, Simon Parkin recently profiled budding institutions dedicated to understanding the ethical dimension of robots, including GoodAI, The Future of Life Institute, The Responsible Robotics Group, and The Global Initiative on Ethical Autonomous Systems. Marek Rosa of GoodAI, for instance, seeks to instill “good values” in robots through digital mentors: “This does not mean pre-programming AI to follow a prescribed set of rules,” he says, “[r]ather, the idea is to train them to apply their knowledge to situations they’ve never previously encountered.” Echoing Alan Turing’s observation that, when building a thinking machine, it makes more sense to simulate a child’s mind than to build an adult intelligence, Rosa compares AI to children: instill basic values in them, and they will later apply those values intuitively in unforeseen scenarios. Parkin also introduces roboticist Ronald Arkin who, rather than emulate human behavior, seeks to simulate human emotions, like guilt, to help robots learn from their mistakes.

Equipping robots to make moral decisions as they become more autonomous may well help us avoid “summoning the demon.” Just as importantly, robots could, argues Arkin, even become ethically superior to humans: “After all, they cannot rape, pillage or burn down a village in anger” — unless we program them to. Westworld co-creator Jonathan Nolan observed that the Singularity will most likely not happen all at once: our technology will be “smart enough to manipulate us, but not smart enough to be responsible.” Actroid-F and Ellie, designed for the therapeutic aim of eliciting “disclosure,” help us understand the stakes and dangers. As for Riskin’s The Restless Clock, it helps us understand the backstory, and the world-altering consequences of language; it joins an important literature in calling for far greater vigilance in the metaphors we use to understand not only ourselves, but also our creations.
