Even though we have eyes to see others, we need a mirror to see ourselves.
– Sakya Pandita, via Thupten Jinpa
I teach yoga and I think a lot about embodiment and abstraction. Taking a broad view, I see yoga as a system built on the sacred geometry created (or discovered?) by the first yogis. Asana practice is one way to confront something that is deeply human – a desire to overwrite the complexity of being alive with a streamlined, idealized order. The gap between the ultimate triangle pose and the form a human body can make after office work on a Thursday is the space of learning. If I’m a good teacher, I don’t make a forceful adjustment to try to turn a student’s body into a picture in Light on Yoga. The systems that we create have a tremendous capacity to serve us, but when we lunge thoughtlessly toward an ideal, pain and trouble often arise. It seems to me that negotiating the distance between the real and the ideal requires compassion.
The word compassion gets tossed around a lot, often in the context of the care of the body. I’ve been wondering what we mean by it, and what role it might play in this current moment of ours, which is so full of contradictions. On the one hand, there seems to be a lot of interest in yoga and other physical practices; on the other, the body is treated as a kind of poor relation to the mind. When I say the mind, I find myself thinking of the rapidly evolving artificial intelligence systems that are increasingly commonplace in our lives. If we are truly on the cusp of a “post-human” future, I wonder what that means for those of us who wish to cultivate compassion.
Recently I had the pleasure of attending “AI and Karma,” a talk by Nikki Mirghafori and Steve Omohundro at the California Institute of Integral Studies. Mirghafori and Omohundro bring an abiding interest in the ethical implications of artificial intelligence to their work. Mirghafori, who teaches Buddhist contemplation and meditation, is an expert in speech recognition technology for computer systems. Omohundro, a physicist by training, is working on (among other things) creating and deploying cryptocurrencies (such as Bitcoin) within an ethical framework. At CIIS, they raised thorny questions that could seed hundreds of sci-fi storylines: Should a robot be imprisoned for illegal acts? Who is responsible for a crash of driverless cars? Would a robot kill another being in order to defend itself? If all manual labor is done by robots, can everyone enjoy leisure time?
They emphasized something that is important to remember: that we have choices about the systems we create. They noted that in Buddhist understanding, intentionality plays a critical role in karmic results. If we want to develop technology to serve the good of all, we, as a society, must pay attention to how we participate in these increasingly powerful systems. Artificial intelligence has grown by leaps and bounds in part because we choose to participate in social media and to interface with one another through machines. I often wonder what we expect to receive in return for turning our desires into data. What do we lose when we turn our experience of the present into a commodity to be traded by moneyed interests? Is being marketed to the same thing as being known? Doesn’t knowing ourselves have something to do with learning to live in our bodies healthfully and peacefully?
At the end of their talk, I asked Mirghafori and Omohundro to weigh in on the relationship of compassion to physicality. Taking a dystopic tack, I confessed that I fear a world ruled by robots, because I understand compassion to be rooted in the experience of physical suffering. If we cede our decision-making to disembodied systems, how can we expect mercy? Their answers led to more questions, and later I contacted them in order to continue the conversation.
Talking with Mirghafori helped me to refine my terms as we explored the difference between empathy and compassion. As we talked, she emphasized that she was not offering a definitive Buddhist perspective, but sharing her ideas with humility and respect for the teachings.
“Empathy is one component of compassion. It arises naturally, hypothesized to be a result of mirror neuron activity in the brain,” she said, noting that a state of compassion has additional components. When we see another being in pain, our mirror neurons fire up and we step, to some degree, into their shoes. As Mirghafori put it, when we feel another’s pain, we experience empathy. Compassion, in addition to this feeling, includes the desire to see the relief of that pain, and potentially, the willingness to help bring relief. Compassion may arise naturally, or be cultivated through an intentional practice.
“Would artificial beings be capable of compassion? Maybe they could be programmed,” said Mirghafori. “After all, we are programmed.”
I asked Mirghafori what such a program might look like. She made a distinction between the “rule-based” computer systems of the past and the “deep learning” models in play today. Rule-based systems use cumbersome “if X then Y” programming, which requires every possibility to be explicitly identified, and therefore cannot respond to unforeseen problems. Deep learning systems engage the gestalt, involving the system as a whole in transforming input into output.
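To make that contrast concrete, here is a toy sketch of my own, nothing Mirghafori showed: a rule-based function that spells out every case of a simple logical task by hand, beside a single artificial neuron that arrives at the same behavior by learning from examples. The task, the learning rate, and the training loop are all illustrative assumptions.

```python
# Rule-based: "if X then Y" -- every case must be spelled out in advance,
# so anything unforeseen simply falls through.
def rule_based_or(a, b):
    if a == 0 and b == 0:
        return 0
    if a == 0 and b == 1:
        return 1
    if a == 1 and b == 0:
        return 1
    if a == 1 and b == 1:
        return 1

# Learned: a single artificial neuron nudges its weights whenever its guess
# disagrees with an example, until its behavior matches the training data.
def train_neuron(examples, epochs=20, lr=0.5):
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (a, b), target in examples:
            guess = 1 if w1 * a + w2 * b + bias > 0 else 0
            error = target - guess   # zero when the guess is already right
            w1 += lr * error * a     # adjust only on mistakes
            w2 += lr * error * b
            bias += lr * error
    return w1, w2, bias

# Teach the same behavior from examples instead of writing the rules by hand.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, bias = train_neuron(examples)
for (a, b), _ in examples:
    print((a, b), "->", 1 if w1 * a + w2 * b + bias > 0 else 0)
```

The neuron’s final weights are not rules anyone wrote; they emerge from its encounters with the data, which is the heart of the shift Mirghafori describes.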
“The grandmother cell theory didn’t work,” said Mirghafori, referring to an idea that specific neurons in our brains are dedicated exclusively to certain tasks, such as recognizing a familiar person’s face. The biologically inspired “neural networks” being created today have a great deal more adaptability and plasticity. Rather than firing along narrow if-then pathways, the new systems come closer to the way our neurons process stimuli, from multiple directions at once.
In my understanding, cultivating compassion has to do with witnessing another’s experience without judgment. Feeling with takes us out of the realm of if-then, an essentially reactive mode. Perhaps we could create artificially intelligent systems that make choices based on the wide, dispassionate view that contemplatives seek. Yet here I’m reminded of something else Mirghafori said, which is that computers are stupid. They are literal entities, assigned to solve specific tasks. What would be the purpose of a system that could model contemplative wisdom? Would we consult it as an oracle that would arrive via algorithm at the most compassionate solution to complex human problems? Can we find math precise enough to map “the most compassionate solution”?
Though our technologies are increasingly refined, we have yet to unravel the mystery of learning. There is, as Mirghafori put it, a “black box” between the input and the output. What is happening when we make the leap from unknowing to knowing, or from a concept to an action? As we practice yoga, or play an instrument, or learn a second language, we sometimes receive an unexpected gift. A difficult pose is suddenly possible, a challenging combination of notes comes out right, the words we didn’t know we’d learned become available. We’ve been practicing, of course, but it’s hard to say why something works one time and not another.
Artificial intelligence is a reflection of human intelligence, in particular of collective human intelligence. The data we produce by living our lives online has made possible the growth of systems that manage (and manipulate) that data. On Steve Omohundro’s website, under Possibility Research, I found the following quote: “The essence of intelligent systems is making good decisions in uncertain environments.”
“A thermostat has a certain kind of intelligence,” said Omohundro. Though a thermostat isn’t aware of itself, it can perform a specialized task. “Researchers once thought that in order to create a system to translate from one language to another, the system would need to know the meaning of the words. But it turns out that Google Translate is pretty good. Today’s neural networks don’t really know what they are doing. Their improvement does not require awareness.”
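To picture what that unaware, specialized intelligence amounts to, here is a minimal sketch of my own; Omohundro offered only the word “thermostat,” and the target temperature and tolerance below are arbitrary values chosen for illustration.

```python
# A thermostat's kind of "intelligence": a feedback loop that pursues a
# goal (a set temperature) with no awareness of itself at all.
def thermostat_step(current_temp, target=20.0, tolerance=0.5):
    if current_temp < target - tolerance:
        return "heat on"
    if current_temp > target + tolerance:
        return "heat off"
    return "hold"

for temp in [17.0, 19.8, 22.3]:
    print(temp, "->", thermostat_step(temp))
```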
Current systems define parameters for sets of neural nets, and can then expand those parameters, creating more complex layers and deepening their capacity to solve problems. Still, the systems and the researchers who create them don’t necessarily know what they’re creating. Omohundro reminded me that most researchers receive funding to address specific tasks rather than for exploratory efforts fueled by scientific curiosity.
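For a rough sense of what expanding the parameters means, here is a back-of-the-envelope sketch of my own; the layer sizes are illustrative, loosely patterned on a common image-recognition setup, not anything Omohundro cited.

```python
# Every layer in a fully connected network contributes a weight per
# input-output pair plus a bias per output neuron, so each added layer
# multiplies what the system can tune.
def parameter_count(layer_sizes):
    total = 0
    for inputs, outputs in zip(layer_sizes, layer_sizes[1:]):
        total += inputs * outputs + outputs
    return total

# The same 784 inputs and 10 outputs, with hidden layers added one at a time.
print(parameter_count([784, 10]))            # 7,850 parameters
print(parameter_count([784, 128, 10]))       # 101,770 parameters
print(parameter_count([784, 256, 128, 10]))  # 235,146 parameters
```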
Yet researchers do use technology to seek insight into what makes us different from other beings. Omohundro pointed out that studies have found agape (here meaning the highest form of love and charity) in dogs. At the same time, there is ample evidence that the ability to solve problems does not make us kinder. If disregard for the well-being of others is the problem we want to solve, there has yet to be an artificial intelligence system dedicated to that task.
Omohundro and Mirghafori both mentioned the distinction between psychopaths and sociopaths. While psychopaths are described as having no capacity for empathy, sociopaths have the capacity for empathy but learn to ignore it. Brain scans reveal that psychopaths’ brains are wired differently; that is not necessarily the case with sociopaths.
“Our society promotes sociopaths by idealizing competition in our culture,” said Omohundro, adding that he doesn’t think that the urge to crush a competitor is ingrained human nature.
Omohundro reiterated a point that he and Mirghafori made during the talk at CIIS: our human capacity for suffering does not necessarily lead us to make choices based on empathy or compassion. Look at the headlines, or at our history as a species, and you will find ample examples. So how can we ensure that technological advances serve our collective well-being and honor the needs of all human beings? Neither he nor Mirghafori claims to know, but they return to the idea that technologists have to keep an eye on their intentions and on the outcomes of their work. There are as yet no AI systems that experience pain, and even if there were, feeling pain would not ensure that a robot would work to help others heal.
I can say one thing with assurance: robots do not (as yet) engage in yoga or meditation. Both of these practices lay the groundwork for feeling with, for expressing compassion toward ourselves and those around us. We are programmed by our experiences and by our actions, and we can take a systematic approach to building our capacity to accept one another as we are and to find ways to care for one another.
As I write this, vast sums of money are being exchanged to support the creation of automated systems. In the rush to create such things, we can easily forget that we already have the tools to create healthier human bodies, and that in that effort, agape can be served. I sympathize with the scientific urge – the desire to learn what makes things work, and to make things work to serve our desires. But we have yet to create any system that serves everyone equally, that feels with everyone. Those who are enchanted with technology are deeply engaged in the illusion that promoting these systems will turn our difficult, messy world into a place that matches their ideal – whether or not that ideal honors another system we have created: human rights.
My vision of a utopia doesn’t necessarily involve driverless cars and robots that plant and pick only the best berries. Some theorists suggest that automated systems could help to provide universal basic income, and, well, that would be a start. But to practice compassion, we could begin by putting the preservation of our soft animal bodies first. A civilization built around that effort would be the realization of our highest intelligence.