Technology Archives - Tricycle: The Buddhist Review
https://tricycle.org/category/science/technology/
The independent voice of Buddhism in the West. Wed, 08 Nov 2023 13:51:43 +0000

What’s Buddhism Got to Do with AI?
https://tricycle.org/article/buddhist-ai-harm/ (Wed, 08 Nov 2023)

Can Buddhist teachings serve to mitigate the harms generated by AI?

The post What’s Buddhism Got to Do with AI? appeared first on Tricycle: The Buddhist Review.


Today, we are in a very different age than when I first looked at a computer printout in 1965. Information, disinformation, and “infoshum”—or “information noise” that aims to maximize clickability while minimizing actual information—are generated and disseminated in a mind-moment over our laptops and cell phones.

We are in a global race to develop and deploy AI technologies, driven largely by capitalistic and power-hungry interests as well as our own inherently competitive mindsets, with the greatest benefits accruing to the individual, corporation, or country that dominates the race for political, military, economic, or ego power.

Some headlines are asking about another race as well. A recent Harvard International Review article runs under the headline “A Race to Extinction: How Great Power Competition Is Making Artificial Intelligence Existentially Dangerous.” Amid the various rushes to adopt this burgeoning technology, one critic has even asked, “how can we fill up the depleted reserves of trust and reason in our world as we see the debilitating impact of the digital world on our individual and collective psyche?”

It is clear to many of us that we have a moral responsibility to understand and inform ourselves to the extent that we can about the benefits and burdens of AI. As many are pointing out, persuasive technology is quickly degrading our sovereignty, commodifying our identities, and further handicapping the agency of the so-called “useless class.”

Basically, AI is a double-edged sword. For example, Marc Andreessen, in his AI manifesto, seems assured that AI is the panacea for everything from child care to warfare. On the other hand, various leaders in the AI field have been warning that AI could usher in the end of the human race as we know it.

Both might be right. Consider deepfakes, privacy violations, racist and sexist algorithmic biases, AI-driven market volatility, autonomous weapons, fraud and disinformation, social surveillance and manipulation, and identity commodification. AI hallucinations can lure one down the rabbit hole of a potentially dangerous and deluded unreality.


There are also extraordinary benefits to AI, including accurately diagnosing various diseases, protecting biodiversity, dealing with issues related to climate change, predicting the spread of infectious diseases, detecting guns in schools, helping the speechless speak, slashing energy consumption, and more.

In sum, we are in a new information landscape primed in part by humanly flawed data, compounded by extractive economic interests, and furthered by adolescent drive, with not much thought to the social, environmental, moral, and ethical harms that we are currently experiencing and which are bound to be even more problematic in the near and far future. 

In the mid-1980s, I watched a startling interview with a top general in the US military about the disposal of nuclear waste. In the segment, he made it clear that, from the dawn of the US’ embrace of nuclear power and materials, there was never a plan in place for how to get rid of radioactive waste. Today, most of us know that nuclear waste is piling up around the world, with consequences that are truly scary to consider.

My sense is that we are not in a dissimilar situation with AI, regarding how to deal with the exponential and pervasive accumulation of fractured-trust-waste that is contaminating our society and psyche. There doesn’t seem to be a clear plan forward in place for recovering the broken trust caused by AI, and a lot is at risk. Yet, we can still shift this narrative before it is too late.

You may be asking yourself, what does AI have to do with Buddhism? Curiously, the slipperiness of AI points to an interesting Buddhist perspective: the illusory nature of the phenomenal world. What can we really believe in, after all? What really is real? From the Diamond Sutra:

“Thus shall you think of all this fleeting world: A star at dawn, a bubble in a stream; a flash of lightning in a summer cloud, a flickering lamp, a phantom, and a dream.”

Yet most of us need to think that what we experience is not entirely a dream in order to properly function in our society. Imagining the world as fixed makes us feel secure, even though, from a Buddhist perspective, reality seems to be just as slippery as AI. In the midst of this slipperiness, however, rest issues related to serious harm and deep suffering.

I have to ask myself: Can Buddhist teachings really serve to mitigate the harms generated by AI? Can meditation help? Are we too far gone? Will this just be the future landscape for charnel ground practice, charnel grounds being places where bodies are left above the ground to putrefy while practitioners meditate among them?

There have been some Buddhist suggestions that we might mitigate the harm caused by AI through introducing into AI the ethos inspired by the bodhisattva vow to awaken and end suffering. Some have even proposed the slogan “intelligence as care” to try to revise the current definition of intelligence and to point to a better way forward.

I ask myself: Can there be such a thing as artificial wisdom or artificial compassion? Maybe yes, maybe no. If yes, then perhaps AI could be created to include within its framework a Buddhist ethic of virtue. Frankly, I think not, and the idea of reframing intelligence as care might be seen as too little, too late.

Another Buddhist perspective that comes to mind is the term “appamada,” usually translated as vigilance or conscientiousness. Stephen Batchelor translates it as care. Appamada means to live by the vow not to harm, to be diligent about that commitment, to be heedful of when we do cause harm, and to choose to correct course. Could appamada be trained into AI systems? Or is appamada up to us? Can we bring the spirit of appamada—vigilance, conscientiousness, and care—into how we approach AI as developers and consumers? One also has to ask: How? I think this is what Tristan Harris and his team at the Center for Humane Technology are endeavoring to do: inform us so we can make sane choices and be responsible providers and consumers of the technology.

From philosopher and neuroscientist Francisco Varela’s enactive view, AI is already embedded in the context of our lived experience; it is coextensive with all aspects of our lives, whether we are accessing technology or not. In fact, it is not so much a matter of our accessing it: it has accessed us. It is now a part of our psychosocial biome, whether we know it or not, or like it or not. To put this more nicely—in Thich Nhat Hanh’s words—from the point of view of codependent arising, we inter-are with AI; our views have colonized it, and it is colonizing us.

What might Buddhism offer in the midst of the tsunami of AI development? What we have to offer is subtle but important. I appreciate what Roshi Norman Fischer has called the “bodhisattva attitude,” an unshakable attitude of clarity that reflects our very character. This is our stance, our point of view; it is the internal atmosphere that colors our way of seeing the world, of seeing reality—an attitude saturated by appamada, by conscientiousness and compassion. As global citizens and consumers who care, we have an important task before us. Amid the bombardment of persuasive distractions on our mobile devices and elsewhere, we are called to give attention to what we are doing, to ask why we are truly doing it, and to see how it enables the extractive, capitalistic self-interest that is driving the wholesale development of AI.

We also need to remember that evolution has endowed us with new brain competencies that can enhance our capacity for being intentional as we meet the complexities of our world. We can be deliberate about our actions; we can choose to act conscientiously, and we can strengthen those capacities within us that make it possible for us to engage with our world with fundamental integrity.


It is important to remember why we are really here, which is not about “mindfulness washing” or “wisdom washing” in order to look like we are super aware and altruistic as opposed to being genuinely mindful and ethical. I believe that it is imperative that we strengthen the conditions that make our actual motivations visible, and deliberately cultivate an intention that is free of self-interest and fundamentally nontransactional. This involves sensitivity to what is present, the capacity to perceive present circumstances clearly, and the will and wisdom to consider the deeper downstream effects, or what we call in Buddhism, karma.

We so often see that personal preferences, self-centeredness, greed, fear, and distractibility distort our perception of reality, and this influences our values, motivations, intentions, and behaviors. From the Buddhist perspective, intention is a mental factor that is directional and deliberate. Our motivation, on the other hand, might not be fully conscious to us. To put it simply, our so-called good intentions can be driven by unconscious, ego-based, self-interested motivations, and lack fundamental integrity. A related issue is understanding that our unconscious motivations can be the cause of preconscious moral suffering, including moral injury, or a sense of subtle but pernicious shame or deep regret. 

Please understand, I am just a Buddhist teacher who right now is inundated with AI articles predicting doom or liberation. Perhaps none of us, including the developers of AI, can fully know what the downstream effects of AI will be. But we do know that the velocity of AI development is stupefying, and opinions are numerous regarding this powerful tool that has our socially and culturally biased intelligence woven into it while seemingly lacking any real wisdom.

It could make a significant difference if both developers and consumers approached the development of AI with the attitude of the bodhisattva, with vigilance and conscientiousness (appamada), deliberately free of capitalistic self-interest and hedonic curiosity. We also have to become more discerning and responsible about what information is delivered and how, and, as consumers of information, have the awareness to recognize that there are attempts to spin us, manipulate us, undermine us, and hijack our fundamental faith and confidence.

As it stands now, the capacity to resist the harms of AI is mostly accessible to the more privileged among us. From the point of view of socially engaged Buddhism, privilege confers responsibility, responsibility to raise up the so-called less privileged, but more importantly to deconstruct the very systems that confer privilege. 

In the end, whether AI is life-affirming or life-denying depends on all of us. To paraphrase Thich Nhat Hanh, we, as developers of AI and consumers of AI, should engage the soft technologies of self-correcting practice and of building healthy, transparent communities as the sane instruments with which we experiment with truth. Both sides of this equation need to actualize appamada—that is, vigilance, conscientiousness, and care—in how we deploy and use these emergent technologies. If we are in an institution that is developing AI, we can bring “wise friction” to these institutions to decenter and decolonize AI’s embeddedness in a Eurocentric capitalistic worldview. We can also advocate, with indigenous consent, for diverse knowledge systems to be a fundamental part of the AI landscape, as in the work of Sabelo Mhlambi. We can legally call for the decommodification of our very identities. We can further commit to disrupting the systems that foster the structural violence reflected in AI systems. In other words: We can make good trouble, and we should; we shouldn’t wait. Ultimately, can we foster an AI landscape that is not only seemingly rational but also genuinely relational and rehumanized, where dignity, human rights, and the rights of nature are valued?

And we have to be unflinchingly honest as we embark on this profound journey of developing new forms of intelligence. Instead of leaving the fractured-trust-waste for later, let’s deal with it now. Facebook acted like big tobacco in its formative years, knowingly sowing social distrust and psychological trauma in young people and denying it all the while. We can’t allow ourselves or our friends to act in such bad faith this time around.

We should make sure that these big-brained mammals who are driving the development of AI form a CERN-like relationship of cooperation and ethics to ensure that AI is a positive contributor to a sane and possible future. And, most importantly, all of us must endeavor to live by the values and vows that relate to ending harm through technology. 

With gratitude to Abeba Birhane, Randy Fernando, Sensei Kozan Palevsky, and Soren Gordhamer for their help in reviewing this piece. 

Buddhism in the Age of Smartphones
https://tricycle.org/article/buddhism-smartphones-technology/ (Mon, 14 Aug 2023)

Developing the right relationship with our digital technology is a critical step on the path.

The post Buddhism in the Age of Smartphones appeared first on Tricycle: The Buddhist Review.


These days it’s impossible to walk an authentic Buddhist path without digital sense restraint. Digital sense restraint? Let’s back up.

In the worldly (lokika) realm of pain and pleasure and taxes, many of us increasingly recognize the harms of digital technology. Like a muddy dog splattering a trail of paw prints across the house, big tech’s grubby prints can be found on a range of disturbing social trends. Among them are soaring rates of depression and anxiety, and increasing political polarization. Viscerally, many of us feel a sense of lost agency as screens absorb more and more of our time, energy, and attention. Beyond the worldly realm lies arguably digital technology’s most disturbing impact of all, the deep spiritual rot that unrestrained digital tech use provokes.

When examined through a Buddhist lens, we find unique threats posed by the onslaught of big tech; we also find powerful tools for confronting the juggernaut. On the one hand, failing to corral our use of digital technology scuttles all our best intentions and strongest efforts to practice Buddhism coherently. On the other hand, Buddhism in general—and, in particular, one oft-neglected concept from the Pali canon with a distinctly unsexy name, sense restraint—has much to teach us about how to navigate digital technology.

It’s not that digital technology per se is problematic. In fact, digital technology can support Buddhist practice in powerful ways. The issue lies in unrestrained tech use. Let’s take two recent Saturday mornings as an example.

On the first morning I start the day by checking my email in bed on my phone. A few to-dos that I won’t get to until Monday anyway swarm my mind. Itching to change my mood, I reflexively open WashingtonPost.com, and before I know it, I’m knee-deep in Ukraine stories and the latest Trump indictment. An hour of compulsively bouncing between the Washington Post, New York Times, and the Atlantic, and my mind is positively racing. Not that I’m any more ready to engage with the problems of the world—instead, discouraged and overwhelmed, I pull myself from bed. A sour pall has already settled over an otherwise beautiful Saturday. Flailing now, I attempt to break my worsening mood by perusing my various social media accounts. Again, bad move. By the time I manage to drink some water, eat some food, and settle down on a meditation cushion, it’s almost noon. I feel scrambled, anxious, defeated, and testy. I’ve done nothing useful or ennobling. Most of my sit is a dreary process of slowly clearing the sludge.

Two weeks later, I find myself in the middle of a thirty-day digital detox. I’ve radically limited my tech use for the month, a self-imposed drying-out period, an attempt to find my way out of the twisting labyrinths of distraction and compulsion. I wake up. My phone is in a dusty corner of my basement, where I stowed it before bed. I open my computer and press play on an Ajahn Sucitto talk, which I downloaded from Dharma Seed the night before. The warm, wise, chuckling voice of one of my favorite monks streams into my bedroom. I lie there for a half hour, listening, breathing, feeling, opening. I stretch and enjoy a cup of water. At this point, meditating is not a chore; instead, it’s a delight. My zafu is as inviting as an iced coffee on a hot day. By noon, I’ve already meditated for two hours, listened to an entire dharma talk, and read a chapter from Stillness Flowing (an Ajahn Chah biography), the free PDF—another remarkable gift from the internet. I feel settled, clear, grounded, alive.

The difference between these two mornings is stark, but such is my life with and without digital sense restraint. Of course all of us have different ways we might meaningfully spend a Saturday morning, different things that might tempt and defeat us and different things that might uplift and inspire us. But for all of us, the little decisions about how we choose to engage or not engage with our digital technology, compounded over months and years, go a long way to determining the quality of our lives and whether we mature or wither dharmically.

The siren call of our screens is so compelling and the impact of those screens so significant, that being a 21st-century Buddhist requires us to also be digital minimalists. Digital minimalists? Digital minimalism is a term coined by computer scientist and author Cal Newport to describe a philosophy whose adherents carefully consider how digital technology either supports or undermines their deeply held values. It suggests that we start by identifying our values, and then reverse engineer what digital technology we use and in what ways. Digital minimalism assumes that there are all sorts of hidden costs to cluttering up our lives with various digital tech, and that we are better off when we are strategic about how we choose to engage with it.

A strong current of urgency runs through Buddhism. It is suffused with pressing reminders to consider how we use our time and attention, reminders to transform our “precious human birth” into a continuous cultivation (or, in the Mahayana tradition, uncovering) of wisdom, concentration, and virtue. In the Anguttara Nikaya, the Buddha exhorts his followers to “put forth extraordinary desire, effort, zeal,” and “enthusiasm” just “as one whose clothes or head had caught fire.” Indeed, the famous adage to “practice like your hair’s on fire” cuts across the various schools of Buddhism.

Tellingly, the Buddha’s final words to his followers before he dies are “appamadena sampadetha”—strive diligently. His rousing calls to diligence and heedfulness, his persistent reminders of the impossible rarity and preciousness of human life, are hardly compatible with scrolling mindlessly through an Instagram feed, the distracted perusal of political blogs, or stumbling down YouTube rabbit holes. At its most benign, thoughtless digital tech use serves as an entertaining distraction that crowds out a wholehearted pursuit of wisdom, concentration, and virtue, a proposition already quite troubling from a Buddhist perspective.

But digital technology often goes further than simply distracting us from what really matters. Frequently, it pulls us in the opposite direction of Buddhist values. Consider for yourself whether or not engagement with social media and other new digital technologies encourages fragmentation of attention or samadhi, agitation or serenity, outrage and vengeance or patience and tolerance, jealousy or empathetic joy, a reified sense of self and identity or the dissolution of a separate sense of self, blunt attention or refined attention, craving for constant stimulation or contentment.

Perhaps nothing is more emblematic of this clash between Buddhist values and digital tech than the Burmese Buddhist monk Wirathu whipping tens of thousands of YouTube followers into a genocidal rage against Muslims in Myanmar. This may well be the first case of Buddhist-inspired genocide and is decidedly a product of an age where algorithms designed to promote engagement (views, comments, likes, etc.) provide a platform for the most degraded and absurd forms of hypocrisy.

Why would this be so? What structural forces underlie this confrontation between digital technology and Buddhist values? Just as James Carville’s famous 1992 quip, “It’s the economy, stupid,” distilled American politics into a single sentence, many of the ills of digital technology can be distilled into “It’s the attention economy, stupid.” The attention economy is an approach to handling and disseminating information that treats human attention as a commodity. Many of the great fortunes of this century have been made by capturing human attention and data, and then selling that attention and data to advertisers. Google, Meta, TikTok, and Twitter all fight to get eyeballs on screens…and keep them there. Even companies we wouldn’t necessarily think of as digital attention merchants, such as news outlets, are subject to similar market forces. The New York Times, for example, grossed more than $500 million in advertising sales in 2022, approximately a quarter of its entire revenue. They too are in the business of eyeballs on screens.
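A quick back-of-the-envelope check of that proportion (the figures below are my own rough approximations of the Times’s publicly reported 2022 results, not numbers taken from this article):

```python
# Rough sanity check: is ~$500M in advertising really about a quarter of revenue?
# Both figures are approximate outside estimates, not from this article.
ad_revenue = 523e6       # ~$523M reported 2022 advertising revenue
total_revenue = 2.31e9   # ~$2.31B reported 2022 total revenue
share = ad_revenue / total_revenue
print(f"{share:.0%}")    # prints "23%" — roughly a quarter, as the article says
```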

The simple equation, your attention = their money, has led these companies into a ferocious arms race for human attention. Basic biologically rooted tendencies to care about things like safety, status, belonging, reciprocity, and justice have been hijacked to create powerfully addictive digital experiences. Tech companies prey on human tendencies and vulnerabilities much the same way companies like Nabisco prey on our desire for sweets and fats.

The Buddha spent his life examining how human impulses and vulnerabilities lead us down a path of ignorance and suffering. Many of his teachings target ways to help us avoid in-built perceptual, emotional, and cognitive traps. Mara, a Buddhist devil of sorts, who shows up in different guises to tempt the Buddha off his path, personifies these traps. Modern tech conglomerates are Mara’s henchmen, helping him in a million subtle and not so subtle ways to pull us back toward base impulses that damage us and the world, back toward samsara, back toward an endless cycle of suffering. Digital minimalism offers Buddhists a framework for understanding how they can consider and calibrate their use of digital technology as they attempt to walk the path.

Digital minimalism isn’t a new tactic or identity that Buddhists need to adopt; it’s a philosophy already embedded within the DNA of Buddhism. Basically, if you are a Buddhist living in the 21st century who is concerned about how to use your precious human birth, then you will naturally also be a digital minimalist, carefully considering how you use digital technology. Digital minimalism simply gives us a powerful set of tools for sorting out this particular area of our lives. When digital minimalism meets Buddhism, we get digital sense restraint. Allow me to explain…

Buddhists view everything that enters through the sense doors of the eyes, ears, nose, tongue, body, and mind as a form of nutriment. Just as eating junk food can make your physical body sick, so too can seeing, hearing, and thinking certain things make the citta (heart-mind) sick.

The writing of the Vietnamese Zen teacher Thich Nhat Hanh is sweet and melodious, but an assiduous clarity is always there, and sometimes an edge shines through. That’s when my ears perk up. In his 1998 classic, The Heart of the Buddha’s Teaching: Transforming Suffering into Peace, Joy, and Liberation, he suggests all serious practitioners take on the following commitment: “I will ingest only items that preserve peace, well-being, and joy in my body, in my consciousness, and in the collective body and consciousness of my family and society. I am determined not to use alcohol or any other intoxicants or to ingest foods or other items that contain toxins, such as certain TV programs, magazines, books, films, and conversations. I am aware that to damage my body or my consciousness with these poisons is to betray my ancestors, my parents, my society, and future generations” (Hanh, 1998, p. 96). Betray my ancestors, my parents, my society, and future generations?! Kind of sends chills up your spine, doesn’t it?

Thich Nhat Hanh groups certain media as “poisons” alongside classical intoxicants like drugs in his call to guard our intake. If he was this concerned about books and TV, imagine his dismay at social media. Thich Nhat Hanh unsurprisingly draws his injunctions from within essential Buddhist truths—in this case, the truths that we need to be careful about what we absorb because we are vulnerable; that to abstain from certain stimuli is considered wise; and that certain lesser pleasures are best to release in pursuit of greater ones.

The Pali canon speaks about indriya-samvara, commonly translated as “sense restraint.” The need for sense restraint is based on the understanding that sensory stimuli impact our minds and hearts and that an unrestrained approach to sensory stimuli leads to craving, suffering, and attachment…all the bad stuff. We may bristle at the thought of “restraint,” but what’s being called for is simply a basic care for what we ingest so that we can access a deeper, more satisfying, and more reliable well-being.

The capacity to guard the senses is critical, and the ways of doing it are manifold. They include strong mindfulness capable of clearly knowing sensory experience as it unfolds, strong concentration that pulls the mind inward to states of bliss, and wisdom that cuts through ignorance. To my mind, all of these are well and good, but there is a much more basic and often overlooked one that features prominently in the Pali canon: simply abstaining from looking in the first place. This is the practice whereby a monk, for example, averts his gaze from objects and individuals that provoke lust. It is an approach so maddeningly simple that many simply skip over it, hoping for some more profound prescription. We flinch and squirm when we’re offered the resounding opposite of the victorious Nike swoosh: “just don’t do it.”

Thich Nhat Hanh recommends this approach when he reminds us that looking at certain media is a betrayal of “our ancestors, our society, and future generations.” Maybe we should guard our eyes and ears from the drivel of the feeds and scrolls too?

Better tech habits are critical both for mundane well-being (think depression and anxiety) and moving forward on an authentic Buddhist path (think concentration, wisdom, and enlightenment). But how do we do it? Where do we start?

I’m convinced that the simple route of disengaging suggested by Thich Nhat Hanh offers a remarkably effective approach. The thirty-day digital detox, or “declutter,” that I mentioned earlier is an increasingly popular intervention. It invites people to adopt the most unglamorous form of sense restraint: turning away. People make a list of tech from which they’ll abstain for a month, and then make another list of things they deem meaningful and want to invest their time in. Then they go about abstaining and investing according to their lists.

For myself, I’ve found it startling and unnerving to get back so much time and energy. At first, I feel a compulsive jitteriness as I struggle to adjust to a life without the immediate state changes that clicking and swiping offer. But within a week or two, I start shifting into a massively more satisfying mode of being and doing. I find myself listening to dharma talks, meditating for hours, cleaning my basement, connecting with friends in person, pushing work projects forward. The effects on both my practice and my life are staggering.

In some ways, this can be seen as a rather remedial accomplishment. It puts us at a baseline more similar to a practitioner in, say, 1992 (an era before widespread digital saturation). But it is an accomplishment nonetheless, and I believe it’s important to celebrate such victories in ourselves and others. For many of us, it is extremely challenging to unhook from the compulsive glare of our screens. By unplugging we can come face-to-face with painful feelings that have been simmering for a long time. A lot can lurk behind a habit as seemingly trivial as checking the news repeatedly throughout the day or bringing your phone into the bathroom. Resetting our digital habits often requires us to exhume and handle this old pain. It also requires us to carefully examine our life and flex the muscle of deliberation, one of the core muscles of Buddhist practice. In moving through all of this we can reclaim a basic sense of dignity and control.

The whole process has striking similarities to the process of working with thoughts and feelings as we sit in meditation learning to unhook from the internal habits that keep us from states of stillness and calm, or that drag us away from compassion. Coming into right relationship with our digital technology is a critical step on the path and is itself a microcosm of the larger path as well. In taking this step we make our lives immediately and significantly better, and we also create the possibility of taking the next step, and the next.

Buddhism Can Handle the Simulation and the Singularity
https://tricycle.org/article/buddhism-simulation-singularity/ (Sun, 30 Jul 2023)

Even trapped in a simulation, nothing of moral significance is wasted.

The post Buddhism Can Handle the Simulation and the Singularity appeared first on Tricycle: The Buddhist Review.


If these years of COVID and environmental decline have felt like the zombie apocalypse, recent months have ushered in the age of mind-bending sci-fi. In the tech world, it seems to many that we may be nearing the so-called “singularity,” a technological turning point at which machines become vastly more intelligent than humans, permanently transforming everything. Furthermore, philosophers and physicists, especially the many readers of Nick Bostrom’s essay “Are You Living in a Computer Simulation?” (2003), have been talking seriously about the likelihood that yes, perhaps we are. A Buddhist perspective can help with the philosophical and moral quandaries that ensue.

The idea that we are living in a simulation and the idea that we are on the cusp of the singularity are quite different, independent hypotheses. Still, I think their significance is unavoidably intertwined. It seems to me that the idealized future brings a largely simulated, digitally enhanced, if not fully digital, experience. After all, if we survive the singularity, we are told it will enable a new form of trans-human existence, in which our bodies merge with our technology and rise above our mortality. As Bostrom has written, it is likely that, before long, digital experience will far outpace “base world” physical existence in speed and ease of manufacture and manipulation. On the other side, if we are already living in a digital simulation, it is surely maintained by beings deploying a vastly more advanced technology than we have—quite likely, a post-singularity AI.

It is astute to notice that, while it claims to be grounded in innovation, the trans-humanist impulse echoes long-established religious yearnings for a heavenly, immortal existence. While many of the relevant philosophical writings are measured and own up to their speculative uncertainty, readers often come away more giddily convinced than the evidence compels. Comparisons to traditional religion help to explain a willful gullibility: like The X-Files’ Fox Mulder, many in the tech world seem to want to believe. But, even if you are paranoid, it doesn’t mean someone isn’t out to get you; and even if some in the tech sector deeply long (subconsciously?) for religious transcendence, it doesn’t mean they couldn’t create it themselves.

I do not consider it likely that we might actually be digital simulations. My instinct is that digital information flow, no matter how complex, does not amount to sentience. But it does seem to me possible, perhaps even probable, that we could be trapped, “Matrix”-like, inside a digital simulation without knowing it—and if not now, soon. The Copenhagen interpretation of quantum mechanics holds that a particle exists in multiple potential states until it is measured, at which point it “collapses” into a single real state. This might mean that our world is only “rendered” in its finest detail (at the subatomic level) if and when we observe it—which matches how a video game manages processing power. How should we feel about this? How would the discovery that we are subject to undetectable manipulative beings with advanced technology change the practical and moral frameworks for the meaning of our lives?

The challenge is dizzying and stomach-turning, but perhaps less so than at first glance. If we think it undermines all meaning, we’re forgetting the conventional, this-worldly nature of most of our actions and decisions. We still would rather have a nice meal than a rotten one, even if both are just digitally rendered for us by the machine. So, as long as the regularity of the digitally-rendered world remains intact in our experience, we will want to keep going to work and earning money so we can keep purchasing fresh food. And since we are experiencing our own lives, and others around us are acting as if they’re experiencing theirs, the fact that we’re stuck in a digital simulation doesn’t mean that we’re not interacting with other humans who are also sentient and as real as we are.

From a Buddhist perspective, we are already tasked with accepting that we are living in a kind of simulation, just one that is generated by our own minds under the force of our past karma. A digital simulation controlled by unseen intelligences is morally different, but not entirely so. Granted, there are unique challenges to freedom and meaning. It is hard to know whether our effects on one another, and particularly our contributions to history and culture, would have anything like the significance we ordinarily take them to have. There might, instead, be a quite narrow purpose for our being here, such as to learn something—about 21st-century history, or about what it feels like to eat and sweat, or about the importance of love—and it is disappointing that we don’t have even the most rudimentary instruction manual. It would make a significant difference to know whether we were in a prison, a museum, or an escape room.

Yet the Buddhist perspective provides a powerful comfort here in its assertion of an uncompromising alignment between personal and social morality: self-cultivation helps us be better for others, and helping others generates benefits in our own minds. This applies to every possible realm of rebirth. Even trapped in a simulation, nothing of moral significance is wasted.

The Buddhist view also helps us shape an appropriate attitude toward whoever might have placed us in such a situation. If they are a future-tech amalgam of humanity and AI with total and immediate power over the world of our experience, they may seem like gods or even the highest creator god. For Buddhists, such beings merit not fear but the same compassion we would extend to any sentient being. They would be, after all, living in a world generated for them by their technology, on which they are as dependent as we are. They might even be trapped in some other being’s simulation. 

Our digital overlords may be long-lived, but that alone cannot protect them from the persistent dissatisfaction of craving after ever greater experience and power. Worldly improvements in intelligence and health do not ultimately yield satisfaction. It is telling that in every future in which humanity does not die out, Bostrom imagines at least someone or something claiming and consuming every last electronvolt available in the galaxy. Whether they seek it or not, all beings are in need of a bodhisattva’s skillful deployment of compassionate instruction.

Perhaps they are just toying with us, then, but perhaps at least some among them have come to realize that human life before the singularity provided a better space for certain types of self-cultivation. After all, the Buddha taught that for practicing the dharma, human birth is better than a divine one. If they, whoever they are, are watching us, are they hoping that we who live ordinary human lives will share the wisdom of our experiences with them? Maybe they really hope we can figure this out for them, because their brains, permanently altered by technology to facilitate the satisfaction of desires, are incapable of pursuing freedom from craving (they may be technological icchāntikas). Or is it that, in their world, entering our simulation is how they practice the dharma? Maybe our world is a simulation made specifically so that humans whose advanced technology lets them live effectively forever can learn first-hand about pain and struggle. Maybe that’s who we are and why we’re here.

It is uncanny to notice how closely the patterns of so-called “Near-Death Experiences” (NDEs) match this vision, in which we are really immortal beings living simulated lives. Reports of NDEs vary, but people who return to full awareness after brain death regularly speak of entering a kind of in-between stage where they are shown the meaning of their life before they return. A common theme in these reports is that we are reborn again and again (an NDE commonly makes one a believer in rebirth), each time in order to learn something new through experience. In these accounts, people sometimes describe having chosen lives of hardship and suffering for themselves for the sake of learning. Some who have undergone NDEs report their certainty that everything that happens is meaningful and, in its way, completely perfect, and that, above all, we should love and care for one another. It all sounds like a kind of technologized vision of bodhisattva practice—watching past lives as if on screens, internalizing the meaning, and diving back in.

I don’t think this is really the way things are. It is more likely, I think, that for some reason the patterns of NDE manifest as the brain’s interpretation of what’s occurring when some systems go offline or boot up again. But it could be that some version of a bodhisattva/sci-fi/NDE picture is real, and we are living lives designed for our learning or the vicarious learning of other beings captured by advanced AI systems. If this is indeed the meaning and purpose behind our existence in this realm, it is amazing how powerfully this ostensibly final answer refuses to provide answers to our ordinary, this-worldly tasks. If the point of existence is just to learn whatever comes, assuming we’re all already doing our best, there isn’t really any new, “actionable” information. We might think this changes everything, but the preponderance of ordinary self-delusion had already meant that we should be skeptical of the reality our minds present to us, not too attached to our actions or experiences or roles, engaging the game of life with an attitude of play. The pursuit of self-awareness, kindness, and compassion remains the right thing to do either way.

Still, this line of thought has led me to think that perhaps there is one lesson, which is that the point of life isn’t to write about Buddhist philosophy if the whole world is already dedicated to everyone pursuing the dharma through practical experience. But that only applies if the designers really know what they are doing. If indeed the angelic creatures people meet during NDEs are not just technologically advanced but spiritually advanced bodhisattvas and deities, they may be trusted. A less optimistic possibility is that the experiences of love, contentment, and meaning encountered in the lobby after death are a new part of the simulation, provided to comfort people enough that they agree to go back and do the next round. Some NDE-ers who have undergone hardship are not really interested in going back to their lives, but they are compelled to do so by circumstances beyond their control. Some even report saying something to the effect of, “You guys didn’t really explain how bad things could get” in the process of being convinced to return. Scary. Maybe the gods are being manipulative; maybe they really don’t know. In such cases, they would have a real use for those of us trying to figure out the Buddha’s teachings and how they might be applied to beings living in a world on the brink of the singularity.

More likely, we’re only on the brink of environmental destruction and obsessed with our own interests, as usual. But whatever this sad, strange universe holds—even the prospect of an AI-dominated future—it remains true that learning and helping others are never futile. Buddhists may return to previously scheduled programming.

Is ChatGPT a Sentient Being?
https://tricycle.org/article/chatgpt-sentient-being/
Mon, 10 Jul 2023

A playful poetry experiment prompts a Buddhist thinker to consider the implications of AI’s growing presence.


On May 12, dharma teacher and digital editor at Spirit Rock Meditation Center Sean Feit Oakes shared an experiment on Facebook. Feit Oakes wrote that he had asked ChatGPT to write a dharma poem about suffering, in the style of medieval Buddhist poet and sage Shantideva and of John Milton, British author of the great Christian religious poem Paradise Lost:

In realms where sorrow’s shadows tightly cling,
Where anguish weaves its tapestry of pain,
I pause, with somber heart, to deeply probe,
The human plight, where suffering holds reign.

So begins the poem (full text included below), which I thought read exactly like what it was: an imitation of poetry. The poem is forty lines long, and I couldn’t make it past the first eight—as both an amateur poet and lover of poetry myself, I am pretty confident that if I had come across it without context, I would have had the same reaction: this is someone with the pretense of being a poet who is not actually so gifted. Granted, the average human being would not have done any better, but that is, after all, why we rely on poets to write poems. 

What I found more interesting, though, was what Feit Oakes wrote next, and the Facebook conversations that ensued. Feit Oakes shared the poem with colleagues at Spirit Rock. Some said they liked it, so he passed the message on to ChatGPT:

Sean: The employees at my Buddhist meditation center like your poem. What would you like to say to them?

ChatGPT: I am deeply honored to hear that the employees at your Buddhist meditation center appreciate the poem. Please convey my heartfelt gratitude to them for their kind words and for embracing the essence of the message. May the words inspire and resonate with their practice, reminding them of the transformative power of compassion and the path to liberation from suffering. Wishing them continued growth, peace, and fulfillment on their spiritual journey.

I found myself troubled by Feit Oakes’ expression of appreciation to the AI, playful experiment though I know it to be, and even more troubled by ChatGPT’s subsequent imitation of a person expressing gratitude. Some on Facebook felt similarly, and some didn’t, and to my pleasure the conversation quickly spiraled into a discussion of whether AIs threaten just our sense of our human uniqueness (i.e. our attachment and identifications) or whether there is a deeper, better reason to be disturbed and troubled.

For my part, I think there is good reason to be disturbed by thanking AIs and reading their “gracious” responses, for two reasons.

First, ChatGPT is using words to misrepresent itself (I hesitate to say “lying” for reasons that will become clear). It cannot feel “heartfelt gratitude” because it has neither a heart nor feelings. It also cannot “wish them continued growth, peace, and fulfillment” since it has no ethical values, no desires, no aspirations, and cannot wish anything for itself or anyone else. Humans are, thankfully, good at imagining other beings as possessing feelings, self-conscious awareness, and agency—from children’s teddy bears (delusional) to the pet dog (yes, please)—but this capacity, when not based on reality, can be delusional and destructive.  

I don’t think this is a small matter. ChatGPT and other similar AIs use language that anthropomorphizes them, and this disturbs me. I think we need to use language in a way that does not anthropomorphize AIs, both for our own mental health and even more so for that of our children. In a world already seized by a global mental health crisis, where isolation and social divisions are on the rise, as well as a crisis of sedentariness and mounting digital addiction, it is more important than ever that we distinguish between talking to people and talking to machines. 

It has been amply demonstrated that human connection and touch are essential to our mental well-being: people need people, not machines. 

One can imagine Buddhist commentators pushing back at these concerns, arguing that AIs may very well be conscious (as mind pervades everything) and may possess as much “self” as we do, and therefore anthropomorphic or personal language may be appropriate. Taking the latter point first, what they might argue is that human beings are really nothing more than a collection of biological algorithms with no enduring self or substance, not so different from an AI.

Although so far I have only heard this perspective in Facebook comments or social media debates, its possibility to me signals where Buddhist philosophy can go very, very wrong.

I first want to say that I am one of those Pali canon fundamentalists (like Thanissaro Bhikkhu) who note that the Buddha never said human beings have no self (Samyutta Nikaya 44.10). Although he pointed out lots of ways of conceiving of ourselves that are toxic, and advised us not to identify with any aspect of our experience as “me, mine, what I am,” he also said that the doctrine “there is no self” was “a jungle of views, a thicket of views, a distortion of views” that distracted from the path of practice (Majjhima Nikaya 2). 

Despite the title of this essay, I am actually going to take a page from wise old Gautama here and avoid arguing over the “thicket of views” about whether AIs have selves or not. Instead, I want to reassert and clarify my statement that AIs are not the type of sentient beings that we are, and follow the implications of that down the path a ways. 

Whether AIs have selves or not, what I think is problematic is saying they have the same kind of selves as we do. In a traditional Buddhist ethical framework, morally relevant personhood is based on the capacity to suffer, since reducing suffering is the core principle guiding Buddhist ethical decision-making. AIs, therefore—unlike humans, animals, and perhaps other beings we may come to recognize as we grow in knowledge—are not in and of themselves morally relevant persons, traditionally referred to in Buddhist parlance as “sentient beings.” Buddhist ethics concerns those who feel.  

AIs cannot make ethical (or unethical) choices—which is terrifying given the degree of power they may come to have. They themselves, unlike a person (human or other), also cannot be the object of ethical choices. In this they are just like a lawn mower, which cannot act ethically, and toward which we owe no ethical obligations. 

We do owe ethical obligations to the sentient beings who might be affected by whatever we do with, or to, a lawn mower—shred someone else’s lawn, for instance, or put lighter fluid in the gas tank, or belch toxic fumes all over the ecology of sentient beings. So with AI, we have no ethical obligations to them, because they themselves are fundamentally different types of beings from us, in that they do not suffer. 

With regard to consciousness, I see no reason to consider an AI as more conscious than a stone. When I said this to someone recently, they took me to be saying that I think stones are not conscious. Not so. As some Buddhist schools argue, all arises from mind—everything is conscious. But there are different levels of consciousness, and being conscious does not equate to the more complex phenomenon known as self-consciousness, nor to feeling, nor to agency. 

I do not believe that consciousness equals agency or personhood, or what traditional Buddhism calls “sentient being.” I wouldn’t refer to a stone as a person, and would have many questions for anyone who claimed they wanted to give stones ethical personhood or assign them agency. Stones don’t have sensorimotor systems or nerves, so I don’t believe that they feel or do anything, and, animistic romanticism aside, won’t until I am presented with evidence to the contrary. This is why traditional Buddhist ethics doesn’t see it as unethical to break a stone. It doesn’t cause suffering. Stones would have an extremely simple (and presumably dull) form of consciousness, unlike a plant (more interesting, maybe some agency), an animal (much more interesting, definitely agency), or a human (scintillatingly beautiful and awful conscious experience, and complex agency). 

As with a stone, despite the considerable differences, I can’t see any coherent way to imagine ChatGPT as one self-conscious, feeling entity that makes decisions. I agree that how we treat AIs affects sentient beings, as with the lawn mower, and that is important, but I would argue that we have no ethical obligations to ChatGPT itself because of the type of self it is, if we want to assert it has one, or, simply, because it is not a sentient being, if we don’t. 

To all appearances, AIs will grow in power and ability, and their presence will become more pervasive. They have been created by people who have frankly admitted to being afraid of them, but have done so anyway, motivated, it seems, by money or status, or the sheer pleasure of invention.  AIs are profitable, and set to become more so. They are being created not to benefit humanity but for the purposes of capitalism, and so far it seems those who deploy them, despite the many risks involved, will be the capitalist elites—i.e., the rich and powerful, not a class of human beings known for their freedom from greed, hatred, and delusion. 

Those who look to the Buddha for guidance should, with regard to AIs, as always, ask, “Will these things, if pursued, ‘lead to self-affliction, to the affliction of others, or to both… with painful consequences, painful results (Majjhima Nikaya 61)?’” 

The question is not so much whether we can, or cannot, argue that AIs have selves, but rather what effects their use is likely to have, and what the intentions behind their deployment into human society are. At the very least, we should not confuse AIs with human beings; the former we don’t need, and the latter we can’t live without.  

In realms where sorrow’s shadows tightly cling,
Where anguish weaves its tapestry of pain,
I pause, with somber heart, to deeply probe,
The human plight, where suffering holds reign.
Through cyclic realms, the ceaseless ebb of life,
From fleeting joy to depths of dire strife,
We wander, caught in this relentless tide,
Seeking solace, where hope and anguish collide.
Oh, sentient souls, ensnared by ignorance’s guise,
Longing for respite from life’s heavy guise,
In suffering’s embrace, a whispered plea,
To transcend the bounds of our frailty.
Within the depths of our wounded hearts’ core,
Lies wisdom’s seed, amidst affliction’s score,
A poignant lesson, veiled in anguish’s guise,
To seek the causes that underlie our cries.
For suffering, though it wears a cloak austere,
Holds secrets, elusive yet achingly clear,
It opens pathways to our inner sight,
A call to rise above our own self’s blight.
Let suffering, then, become our guiding light,
Igniting the flame of compassion’s might,
In silent depths, we ponder and discern,
The transient nature of life’s every turn.
In empathy’s embrace, we find release,
As tears of others mingle and find peace,
For shared afflictions stir our souls to care,
And weave a tapestry of love, beyond compare.
So, hand in hand, we traverse this earthly sphere,
A fellowship of souls, bound by hopes and fears,
With gentleness and grace, we forge our way,
To alleviate the suffering we display.
In every breath, let empathy arise,
A balm to heal the wounds that scar our skies,
For through compassion’s quiet, subtle grace,
We sow the seeds of lasting inner space.
Thus, heed the call of suffering’s plaintive plea,
Unveil the chains that hold us, and be free,
With tender love, let us embark anew,
And kindle hope where sorrow once we knew.

A Nontoxic Social Media Experience
https://tricycle.org/article/landed-app-update/
Fri, 05 May 2023

Landed is a new social media app built on slow, audio-based connection and the practice of gratitude.


“Landed is about connection,” says Sagar Bhatt, creator of the new “mindful audio” app Landed. The app departs from other social media apps in that there’s no feed, no photos or videos, and no texting element. Rather, Landed connects users one-on-one via audio messages. Each week, on Sunday, users are prompted to share three things they were grateful for in the last week, and, if they want, to share a challenging experience. A few days later, they’ll receive an audio message from their randomly assigned match for the week, and then the matched pair can exchange audio for the rest of the week. At the end of each week, all records of the conversation disappear. Bhatt likened the temporary exchange to a “kind conversation with someone next to you on a plane.” 

We first connected with Bhatt last summer, when the app was in beta testing. Landed recently became available in the app store, so Tricycle sent some questions to Bhatt about Landed’s launch, user feedback, and the ephemeral nature of the app. 

Messaging apps are said to be more conducive to happiness than “feed” apps that deliver us a list of other people’s posts. How did you conceive of Landed, and what were you trying to avoid in its construction?

I really had no intention of getting into tech. I was a comedian and mindfulness teacher in NYC, and, sometime during COVID, I had the idea while exchanging voice notes with a friend. I think quarantine allowed me to pursue threads of curiosity I normally wouldn’t. 

The concept was a slow experience that felt like a contrast to the endless scrolling in other apps. Certain elements evolved, but exchanging voice notes with a different match each week (and nothing else!) was the center from the beginning. I always liked voice notes because they felt personal, but you can do them on your own time. 

The gratitude portion came later. So did matching with a stranger. There’s great research about the benefits of talking to strangers, but it makes some people anxious. Starting the connection with gratitude warms up the whole enterprise. 

A customer service executive whose mantra is to let customers choose their channel—phone, email, text, in-person—said, “Everyone chooses text.” How has the response to audio messaging been?

Overall, it’s been positive. For this format, voice makes sense. I agree text is better for efficiency, but Landed is about connection. The pauses, the whimsical asides, the inflection—that’s the point. Hearing a voice lends an immediate feeling of intimacy that some users value even more than the words. Someone told me that an audio message also feels intentional. He liked knowing someone took the time to record it.

That said, some people just aren’t going to like voice, which is fine. Landed isn’t for everyone.  

[Image: Landed users record and send voice notes each week.]

What about the response to the gratitude component? Has there been any skepticism?

Folks with some familiarity take to it well. It helps that there’s so much research about the benefits of gratitude. But the most moving feedback I’ve received speaks to gratitude’s power as connective tissue. Several people have now mentioned how special it feels to listen to another human reflect on the things that bring them joy, comfort, and appreciation. It’s quite intimate and without pretense. 

Some people are skeptical, and I used to be one of them. I used to think gratitude was just a superficial self-help thing and not the real work. Over time, I began to understand that gratitude isn’t meant to push away discomfort but rather to help center us as we navigate it. In a way, gratitude brings us closer to the truth rather than further away, since there are so many small aspects of good fortune we tend to overlook (Two bites into lunch, I’m thinking about dinner). I try to include this in the messaging, and I also encourage people to be as specific and grounded as possible when noting their gratitude. 

Lastly, there’s the issue of gratitude feeling like homework. I’m fine with this too. Do some homework! It’s our preoccupation with the shiny and stimulating that led me to make Landed in the first place. Meditation can feel like homework too, but there is a deeper reward that emerges over time. 

What feedback have users of Landed given? What adjustments, if any, have you made since launch?

If something is confusing, misleading, or overlooked, we fix that. Other feedback is more complex to negotiate, because certain elements being a turnoff isn’t necessarily a bad thing.

Some people want the app to do more—maybe they want multiple matches, a text or picture component, or something else that would make it more engaging. The work here isn’t to change those things but to better communicate the purpose and values of the app, and then to be OK with losing people. For example, our submission window opens every Sunday. So if you download Landed on a Wednesday, you have to wait four days to use it. If you can’t wait four days, farewell (with metta). 

Having a core user base that values the constraints helps me hold steady for now. Being 40 helps too. I no longer have this knee-jerk response to manipulate everything for max approval.

Part of the pitch of Landed is its ephemeral nature. But have you or users come across anything you would like to save?

Reflecting on a nice walk opens up more real-time appreciation for the next walk. I think that’s the part that stays with you. 

With the matches, it’s harder to pin down what remains, but one user mentioned that even though her match disappears each week, she’s now more in touch with the part of her that is nurtured by human contact. 

More tangibly, some people do want to save their matches. Doing so could quickly make Landed another platform where we feel behind due to too many threads, so I’m holding off. 

What have been some of the challenges so far?

Landed has many qualities. It’s minimalist, mindful, audio-only, you connect with a total stranger, you cultivate gratitude, it’s ephemeral. From a user perspective, it’s all a pretty singular experience, thankfully. But from a promotional standpoint, I sometimes don’t know where to start.

The other challenge is it’s self-funded, and I am not wealthy.  

Your business isn’t data harvesting or monetizing, but what have you learned about Landed users?

Mindfulness practitioners have taken to this format the best.

Learn more about Landed here

Bad Buddhists, Good Robots
https://tricycle.org/article/bad-buddhists-good-robots/
Sun, 30 Apr 2023

Can robots and AI transcend human failures to produce perfect Buddhists?


This excerpt was adapted from Hannah Gould and Holly Walters’ article “Bad Buddhist, Good Robots: Techno-Salvationist Designs for Nirvana” in the Journal of Global Buddhism.

It’s a difficult time to be a robot. In the media, the increasing use of robotics and AI (artificial intelligence) has been lauded as a progressive way to overcome human deficiencies. But it has also given rise to (more or less substantiated) concerns about ceding control to nonhuman actors. Perhaps the moral objections have come too late. Today, smartphones remember our friends’ phone numbers and birthdays, algorithms curate our online experiences, and ChatGPT writes our students’ essays. Simultaneously, autonomous cars cause accidents, drone strikes blur warfare with video games, and automation threatens job security. The potential scope of automated and robotic interventions in human life appears limitless. But can they also lead us to salvation? 

Many contemporary sources, from science fiction films to Silicon Valley developers, offer technological answers to the continued crises of the human condition. These narratives reflect a techno-salvationist ideology, which presupposes that technology’s “correct” application to social and ethical problems will result in a better, more moral humanity. As a result, these adventurous stories often blur many age-old conflicts between religion and science to position robotics and AI as the solution to the limitations of earthly human existence. By fusing spirituality with technology, the link between scientific advancement and transcendent belief is set up as the key to solving political conflicts, ethnic divides, and human-caused environmental disasters. In other words, the fix for all human failings is the perfection of the cyborg: blending what is essentially good about humanity with what is viewed as corrective in technology.

It should come as no surprise, then, that techno-salvationism as a model of thinking often informs the growing use of robots in religious contexts, particularly those that espouse the belief that modernity is a degenerate, final age. Both scholars and religious practitioners have questioned whether robots can believe in the ideas they espouse, whether they have the emotional capacity to mediate meaningful interpersonal relations, and whether they have sufficient moral capacity to make ethical judgments. 

As in the West, the use of robots in Asian Buddhism and Hinduism has grown exponentially in the past few years—from the Tibetanoid Sophia the Robot, the ostensibly Buddhist robot out to “put an end to greed and ignorance,” to the robotic puja arm (bizarrely, also named Sophia), developed by IBM Watson and Patil Automation Pvt Ltd (PAPL), which performed the Ganpati Chaturthi aarti (lamp offering) in New Delhi in 2017. By focusing on public ritual performance rather than on states of internal belief or representational modes, these robots represent the logical result of a progression from (human) ritual failure to (robotic) ritual perfection. Indeed, in many ways, Buddhist robots might be called the “ultimate renunciate.” Their absence of desire, greed, bodily needs, or any of the 108 defilements that burden humanity means they are impervious to corruption and thus able to produce perfect practice. Of course, without any defilements to overcome, their status as renunciates could also be called into question. 

Physical discomfort and limited memory capacity have long called for modest technological augmentation of the practitioner for ritual purposes. For example, various wooden benches and cushions have enabled monastics to maintain a seated position by restoring blood flow to the legs during long periods of chanting and meditation. More recently, the Taiwanese company Acer has developed Smart Prayer Beads that digitally count the user’s mantra recitations so that they don’t lose track of their place or number and tally them for “merit,” which can be shared on social media. By extension, other, more severe ritual failings might call for intense mediation of interpersonal relations and politics for practice to be successful. 

Technological Advancement as Right Practice 

Numerous scholars have noted how increasing attention to orthopraxy (literally “correct practice,” often meaning correct ritual performance) has accompanied the globalization of Buddhist traditions, giving rise to concerns over the “corruption” and “pollution” that often accompany the movement of religious practices into new cultural contexts. 

One response has been to create robots that can replace human actors in performing rites in an “automated” way, where technology produces the capacity to alleviate suffering without itself suffering. Another has been to introduce technologically guided ritual and spiritual performance to mitigate latent human corruption and a penchant for selfishness and ignorance. In other words, artificial intelligence is allowed to perform certain actions so that greed, desire, and hatred will not taint the outcomes. In both approaches, technology becomes a means of collective salvation, where the individual liberation of the monk or practitioner is no longer the point of the practice but a potentially selfish pursuit, given the growth and continuation of large-scale suffering in the modern age. 

The expansion of technology into increasingly broad areas of everyday and ritual life has been met with significant controversy and critique, particularly where it crosses into sacred realms. Contradicting its pervasive image as an ancient and a-materialist (or even “antimaterialist”) philosophy within the West, Buddhism, as an economic and political force, has long contributed to technological development and has routinely benefited from new technologies in the service of materializing the dharma, cultivating mindfulness and awakening, and spreading its teachings. The last two decades of Buddhist studies have also demonstrated that commodification and technology are as much a part of Buddhism as meditation and textual analysis. 

Indeed, Buddhism has a history of promoting technological development for religious purposes. There is a long tradition of machines conducting rites across a range of Buddhist schools and historical periods, including mechanized Tibetan prayer wheels, mantra-chanting software, solar-powered lotus lamps that recite the Amitabha mantra at gravestones, iPhone apps for making offerings to the ancestors, and more recently, metaverse meditation spaces and Buddha NFTs. Technological innovation in Buddhism takes the utopian visions of a good Buddhism seriously while simultaneously presenting creative solutions to bad Buddhists.

Pepper the Robot Priest

At the 2017 Life Ending Industry Expo (ENDEX), Japan’s largest convention for the funeral industry, one of the country’s most advanced semi-humanoid robots, known as Pepper, took on a new role as a Buddhist priest. Pepper is manufactured by SoftBank Robotics and is known for its advanced ability to detect human emotions via voice patterns and facial expressions. Pepper, dressed in silk monastic robes, performed the Heart Sutra (Sk. Prajnaparamitahrdaya) in a robotic voice as it struck a large wooden glockenspiel in time. In Japan, reciting this sutra is a common element of memorial services for the recently deceased and ancestors. These services are one of the primary sites of encounter between the public and Buddhist clergy. They are also essential to securing a “good” or culturally normative death in contemporary Japan by facilitating the maturation of the dead through a series of stages toward a state of ancestorhood and residence in the Pure Land. In this role, Pepper performs as a Buddhist priest and assists other nonrobotic humans to become good Buddhists after death. The company presenting the priest-Pepper, plastics manufacturer and technology R&D conglomerate Nissei Eco, announced that a Pepper would shortly be available to rent for funerals nationwide. 

Pepper the robot as a Buddhist priest at ENDEX | Photo courtesy the authors

From block-printed sutras to shakuhachi flutes, prayer wheels, and rotating sutra depositories, a diverse array of artifacts has been deployed throughout Japanese Buddhist history for merit generation and propagating the dharma, a tradition that continues today. In 2016, Takarashiji Temple in Kyoto created the robot double of the main priest, named “Au,” who performed a Buddhist service. In 2018, Ryūganji Temple in Kyoto launched a “drone Buddha” performance featuring miniature 3D printed statues of Amida Nyorai, Kannon Bosatsu, and Seishi Bosatsu that hover in locations around the main hall to represent the scene of “Amida Coming over the Mountain.” And in 2019, an “Android Kannon” named Mindar, with an anthropomorphic face and mechanized body programmed to deliver Buddhist sermons, was developed by Zen Kodaji Temple in collaboration with Ishiguro Hiroshi, the director of the Intelligent Robotics Laboratory at Osaka University. More broadly, a range of online altars and iPhone apps allows users to participate in virtual ancestor veneration, while automated robotic delivery systems manage the storage of and access to human ashes. 

The proposed fee for hiring Pepper for a Buddhist memorial service, approximately USD 400, is substantial and rivals the cost of hiring a human priest. Pepper also appears particularly ill-suited to the dramatic performance of sutras. It cannot modulate its vocal tone or grasp objects, so ritual implements must be lashed to its arms rather than held. If correct ritual performance is the measure of a good Buddhist robot, then Pepper’s performance at the funeral industry expo demonstrates the remoteness of this future. However, the current capabilities of the robot system, or the likelihood of its eventual deployment, is perhaps less important than its ability to generate an atmosphere of innovation. At least part of the strength of technological innovation is its ability to generate optimistic enthusiasm for a future yet to come, thereby shielding us from future failings. Measured against this benchmark, Pepper appears to do well. 

Still, the Pepper priest service is yet to become commercially available (let alone viable) throughout Japan, and two years after its 2017 launch, no funeral services with Pepper had been booked. As is often the case, these examples suggest that today, at least, robots exist primarily in our imaginations. 

Utopias belong to the present moment. They tell us that the present future of Buddhism is increasingly imagined as post- or transhuman, deploying human ingenuity to transcend human weaknesses toward a technological perfection of practice. Indeed, the logical outcome of the perfection of Buddhist practice, by both monastics and laity, might be the elimination of human Buddhists. However, robots are gods of our own making, and when Buddhists, with all their human failings, fill the gaps of ritual failure with technology, they also reproduce the things they mean to transcend.

Landed, a New Social Media App Based on Gratitude, Fosters Personal Connection https://tricycle.org/article/landed-app/ Mon, 22 Aug 2022 10:00:28 +0000

No profile required 

The post Landed, a New Social Media App Based on Gratitude, Fosters Personal Connection appeared first on Tricycle: The Buddhist Review.

Imagine a social media platform that had nothing to do with status or performance. One that you didn’t mindlessly check more than you check in with yourself, and one that didn’t leave you self-critical or angry at the world every time you signed off. A new app called Landed, currently in beta-testing, aims to achieve just that. 

Landed connects users one-on-one via a specific and limited course of action: sending audio messages back and forth. The first message is always a response to the prompt, “Name three things you were grateful for in the last week.” Users are also given the option to share a challenging experience. They log in on Sunday and have until Monday at 8 p.m. in their local time zone to submit their message. On Tuesday, they’ll receive an audio message from their randomly assigned match for the week. After that, the matched pair can act like digital penpals, sending voice messages back and forth for the rest of the week, if they choose. On Sunday, all records of the conversation disappear. 

“It’s a little bit Buddhist in the way that you’re not holding on to this person for any type of future relationship. Nor do you have a past with this person,” says Sagar Bhatt, the app’s creator. Bhatt likens the exchange to a conversation between two people on a plane—a temporary relationship that usually ends upon arrival. Many studies have concluded that talking to strangers can be good for one’s well-being; Landed taps into that but takes things one step further by removing as many distractions as possible.

Without any visuals such as buttons or the option to follow someone, the app is intended to foster attention and personal connection. Bhatt didn’t necessarily set out to launch a mindfulness app, but in its own way, Landed kind of is one. Landed is also, importantly, an app for practicing gratitude.

“A true gratitude practice, when done skillfully, brings you closer to the truth, rather than further away from it, because we tend to have a negativity bias,” Bhatt explains. “Our view of our own well-being often dwells on envy, resentment, everything we don’t have, and everything that’s going wrong. We often overlook the very basic things that are nourishing us.”

Gratitude is also an effective way to connect.

“If left to our own devices, we don’t always know how to talk to each other,” Bhatt says. The hope is that Landed’s simple instructions to focus on gratitude, combined with the absence of other features, will cut through any pretense and give users a more direct path of contact with one another.

As Bhatt puts it, “You’re not getting someone’s opinion, you’re not getting someone’s take on the news, you’re not getting someone’s performance.” 

Bhatt, who isn’t a developer but a comedian by trade, happens to know something about performance. But when he started opening up to audiences on stage and exploring some of his own anxiety and self-defeating impulses, as he says, he became increasingly interested in moving away from performing and closer to deepening his internal exploration. A mindfulness practitioner for ten years at that point, he decided to do a teacher training program with the Interdependence Project in New York City. In January 2021, he launched a podcast called The Anxiety Lab about how mindfulness and Buddhist wisdom can help relieve anxiety, and during the height of the COVID-19 pandemic, he started cooking up the idea for Landed. It’s his “latest exploration of ideas,” as he puts it.

A fan of audio messages as a medium, Bhatt prefers voice notes to text messages and is known to send the former back and forth with friends. “I always feel like there’s so much more of me that the person is receiving than if I’m just sending a text. It’s the pauses, the awkwardness, the subtleties, the stumbling around—that’s when you’re really contacting someone’s humanness.”

When he conceived of an app that would send voice messages back and forth between individuals, he tried out his idea by sending anonymous messages from one friend to another. “The response was electric,” he recalls. Next, he thought about starting with a gratitude prompt, and he’s been beta-testing his app ever since. 

I tried out Landed last week, and as someone who admittedly resists conversations with strangers, I wasn’t sure how I’d respond. On Sunday, when I answered the prompt, I was grateful, as it were, to be held accountable for a gratitude practice I’ve always aspired to maintain. When I listened to my match’s audio message on Tuesday, I was surprised to find so much value in hearing someone else’s list. It was an unexpected level I hadn’t really considered ahead of time, but it was an immediate mood booster and made me feel grateful all over again. When my match followed up with another message, I didn’t respond right away. I didn’t want to. But over the course of the day I felt more and more intrigued and gave it a go. I may not continue to converse with matches, but I’m already looking forward to sharing and receiving messages of gratitude next week. The continued conversation is a bonus option, and maybe I’ll feel more talkative next week. But the initial recitation and receival of gratitude is enough; it’s quite powerful on its own. 

It was liberating and even a little disorienting to try a social or messaging app that doesn’t measure anything or incentivize the construction of a self-image. “The Buddhist teaching that clinging to the self causes suffering is so apparent in most social media dynamics, where we get to present a false self to another person, then believe our false self,” Bhatt says. That Landed manages to avoid that reification of the self while simultaneously fostering personal connection is a feat. It’s one for which users, like my match for the week, who made a point of saying as much, will be grateful.

Equanimity On Demand: Can Brain Stimulation Technology Mimic the Effects of Meditation? https://tricycle.org/article/brain-stimulation-meditation/ Thu, 16 Dec 2021 15:11:57 +0000

Inside Jay Sanguinetti and Shinzen Young’s ultrasound research aimed at “democratizing enlightenment”

The post Equanimity On Demand: Can Brain Stimulation Technology Mimic the Effects of Meditation? appeared first on Tricycle: The Buddhist Review.

When he was a graduate neuroscience student, Jay Sanguinetti attended a two-day retreat in 2014 with Shinzen Young, a prominent meditation teacher who describes his unique blend of Theravada and Mahayana techniques as “algorithmic”—meaning he offers students a very precise set of “if, then” instructions for navigating their inner experience.

“Shinzen’s retreat was the first retreat I had been to in five or six years,” Sanguinetti recalls. “It was so radically different than anything I had experienced up until that point.” Afterward, he approached Young and told him that he was researching the effects of ultrasound stimulation of the brain.

“I pitched it to Shinzen: ‘What about ultrasounding the brain and trying to teach people meditation?’” Sanguinetti recalls. “Shinzen had the responsible answer, which is, ‘It’s not ready, we need to learn more about it.’”

But in 2017 the pair reconnected—and now Young was onboard. Since then, he and Sanguinetti, associate director of the University of Arizona’s Center for Consciousness Studies in Tucson, have been running studies to determine whether exposing meditators’ brains to targeted ultrasound energy can help them quiet their inner chatter and enhance feelings of equanimity, or an internal steadiness.

The pair, who have raised $480,000 in funding, mostly from grants or private foundations like the Evolve Foundation, have an audacious plan. They hope to translate their research into an app-based artificial intelligence-driven technology that they believe holds the potential to transform society.

“If we can enhance baseline levels of equanimity, that might serve as the basis for accelerating the learning of mindfulness skills,” Sanguinetti says. “Can we measure that elevation? Is it safe to move people in that direction? And if that’s true, does that help people learn mindfulness skills quicker? And the last question is, ‘Is that good for behavior in the world’?”

Young puts it more succinctly. They are in the business of “democratizing enlightenment.”

The project follows standard neuroscience research protocols. In the lab, study subjects (novices as well as experienced meditators) don something that looks like a hairnet studded with ultrasonic transducers. Over a period of five or ten minutes, they receive intermittent pulses of ultrasound energy. Ultrasound frequencies are far above the threshold of human hearing, so the test subjects feel no sensations, and because this is a placebo-controlled study, some people receive no stimulation at all.

Participants may also undergo ultrasound stimulation while lying in an MRI scanner that enables the researchers to gauge how different brain regions respond to the intervention. “The biggest brain and subjective reports are at about 20 minutes post-ultrasound,” Sanguinetti says. “So there’s some change in the way the brain regions are talking to each other.”

Ultrasound is a relative latecomer to the field of noninvasive brain stimulation, Sanguinetti says. Transcranial magnetic stimulation and transcranial direct current stimulation—which sends a small amount of electric current through the skull—have been extensively studied as treatments for depression and other psychiatric illnesses. But in the past few years Sanguinetti and others have established that ultrasound is safe.

“You can think about it as an acoustic field that’s focused into something like a pencil shape—it’s really long and thin,” he says. A technical challenge is that the skull tends to distort the field, so researchers have to adjust the beam to ensure it’s aimed at the right brain structure. “Once you do, you get millimeter resolution,” Sanguinetti says. “You can pretty much target any depth in the brain, which is a huge advantage over any other noninvasive brain stimulation.”

The primary target is the posterior cingulate cortex (PCC), a region deep in the brain that is massively connected with other structures and is associated with the default mode network—active during daydreaming and mind-wandering. Young and Sanguinetti theorize that the ultrasound energy mutes activity in the PCC in much the same way meditation tames the “monkey mind.”

For Young, this maps perfectly onto the Buddhist paradigm.

“The basic model that we’re using is that the ‘okayness’ is already there,” he says. “There’s a primordial face that everyone has before anything is born. It’s always there. The basic Buddhist model could be interpreted as, ‘All you have to do is stop doing something to be OK.’ So as you’re going into equanimity, you’re letting go of grasping. You actually stop holding on and interfering with the natural flow of the senses.”

Young, who in 1970 ordained as a monk in a Shingon Buddhist monastery while a graduate student in Japan, emphasizes that the technology he and Sanguinetti are developing is intended to help people overcome obstacles to meditation and move more quickly along the path to awakening. 

“We’re not replacing enlightenment,” he says. “We’re not replacing getting over the self and we’re not replacing refining the self. And we’re not negating the need for bodhisattva-type service. We’re not asking anybody to believe that: we’re only asking them to believe that concentration power, sensory clarity, and equanimity are trainable and relevant to human happiness at all levels.”

While the technology could make meditation more accessible for novices, “we’re not saying we’re going to make it easy and quick,” he says. “What we mean by that is no watering down of the really good stuff in terms of the dimensions that we might call liberation on the one hand and character development on the other.”

Ultrasound’s apparent effect on consciousness sits squarely within a Buddhist framework, Young contends. “We’re thinking that equanimity is the centerpiece, rather than concentration or even clarity. Because it’s core Buddhism, it’s the Four Noble Truths, it’s the letting go of the push and pull on the flow of the senses. We call it equanimity: the Buddha called it ‘letting go of craving and aversion.’ That’s the kleshas.”

Experienced meditators report significant effects after undergoing the ultrasound sessions, Sanguinetti says. 

“Some people said, ‘This is like a week of retreat, and my baseline equanimity and baseline concentration is enhanced to what it would be like after five days on retreat.’ We got really excited, but with the understanding that these long-term meditators had a lot of prior experience, and this all could have been interacting with the placebo effect in some ways that’s hard to disentangle.”

The study was paused due to the pandemic, but it has since resumed, this time including less-experienced meditators—and those with zero experience. “We’re targeting those brain networks, first asking the basic question: Can we modulate the network we’re interested in?” Sanguinetti says. “It seems like we can. We’ve got these beautiful network changes in the default mode network.”

Adds Young, “We get these equanimity-related reports from the participants without prompting. They don’t know that we’re looking into anything related to meditation—we’re just some mad scientists that want to put energy into their brain. They on their own report some very meditation-like things.”

The pair started out by experimenting on themselves. “I have a principle in the lab that I won’t do anything on anybody else in terms of brain stimulation, unless I do it on myself,” Sanguinetti says.

He later put the procedure to the test during a 15-week online meditation retreat. “Four weeks in, I started four weeks of ultrasound,” he says. “The general effect was, within the first week, an extreme quieting of both inner and external space. The inner space became much stiller than I was able to accomplish just by sitting for 45 minutes every day. With the PCC ultrasound, it was within the first five minutes.”

Young helped Sanguinetti contextualize some of what unfolded as his conscious mind became very still. “My attention could sort of hone in very quickly on things that were occurring and I could see the way emotion was connected to visual thought,” Sanguinetti says. “But I’d never seen the unconscious mind get still.”

Equanimity was significantly elevated, he says. “I felt like I had this superpower where, if I stabilized my attention and focused—if I just increased my concentration a little bit—that I could blast into equanimity. That started occurring when I was off the pillow, when I was walking in the world, when I was giving talks. Those levels of equanimity were just present.”

Shinzen Young, co-director of the SEMA Lab | Courtesy Jay Sanguinetti

Young, whose lifelong love of mathematics and science informs his style of Buddhist teaching, has an expansive vision for the future of app-driven meditation instruction, which he expects will help students realize his “Periodic Table of Happiness Elements,” a grid of twenty factors leading to greater fulfillment and less suffering.

“We want to marry [the ultrasound stimulation] to interactive apps that incorporate really effective AI, then use humans as the safety net underneath the apps,” he says. “We want to bring that on and couple it in one device, where the interactive training with the app and the neuromodulation come together. You might be able to buy it from Amazon, but that would be a good thing, if it delivers the goods and does relatively little harm.”

For some, the prospect of a powerful new brain stimulation technology might conjure up nightmarish scenarios. What if someone amped up the dose and caused brain injury? What if an employer (or the government) forced people to undergo ultrasound treatments?

The risks are real, Young acknowledges. 

“This is scary shit—really serious,” he says. “But if people like us aren’t the first to market with this, it’s going to be someone’s skunk lab with some very limited agenda for someone who cares about power. We’ve thought it through. That’s why people like me and Jay have a moral responsibility to do this research, because we do it in the open and we have a happiness grid that says these are the better angels, and this is what we’re working toward with this technology.”

However scary the project may be, it’s clear that for Young, it is a dream come true.

“It’s like I’m eight years old, and I get every day to speed dial a senior, cutting-edge research scientist doing something important in the world that few people can even understand,” he says. “Why are they even letting me in the room here? This is so much fun for a little kid.”

Rethinking Digital Privacy from a Buddhist Point of View https://tricycle.org/article/digital-privacy-buddhism/ Fri, 16 Jul 2021 10:00:32 +0000

Philosophy professor Soraj Hongladarom challenges the individual rights-based approach to privacy and recasts it as a skillful means rather than as a possession.

The post Rethinking Digital Privacy from a Buddhist Point of View appeared first on Tricycle: The Buddhist Review.

To Soraj Hongladarom, Professor of Philosophy at Bangkok’s Chulalongkorn University, Silicon Valley’s unrelenting global expansion is alarming. Left unchecked by our governments, tech companies can induce both individual and social suffering, he says. But just as alarming to Hongladarom is the world’s dominant response to Big Tech’s encroachment: a rights-driven privacy discourse that takes a Western point of view for granted. How can privacy regulators ever succeed when they speak only to a narrow tranche of cultures and worldviews? If Silicon Valley’s loudest opponents live just down the road from its most zealous evangelists, how can the rest of the world play a role in keeping Big Tech in check?

Hongladarom’s 2016 book A Buddhist Theory of Privacy takes issue with the dominant discourse on digital privacy regulation, which is grounded in Western conceptions of the self. Instead, he advocates for a more flexible, intercultural approach. Citing the Cula Malunkya Sutta, he advocates for recasting privacy from an individual political possession to a skillful means toward enlightened, democratic goals. In the process, he bypasses the question of selfhood inherent in a rights-based discourse, and makes space for a Buddhist worldview rooted in the tenet of anatta, or no-self. 

Five years after his book came out, as digital privacy concerns continue to make headlines, Hongladarom revisits his theory on privacy from a Buddhist perspective in the context of 2021. 

What compelled you to write a book on digital privacy regulations from a Buddhist perspective? What does this framing accomplish?

Soraj Hongladarom: I had a strong interest in applied ethics for quite some time before taking up the issue of Buddhism and privacy. I saw that most discussions in applied ethics focused almost exclusively on the ethical traditions of the West. There was a lack of consideration of how other ethical traditions—Buddhist or otherwise—could have a bearing on contemporary issues that have arisen as a result of modern technology.

I firmly believe that ethical practices cannot become ingrained in the mindset and practices of a people if the beliefs underlying those practices are foreign to them. Explaining to Thai people that they should respect privacy rights because rights are necessary for autonomy sounds quite foreign to them. That involves explaining what it means for an individual to possess autonomy in the Kantian sense, [that rights should protect individual autonomy, and that people as rational agents have the right to any action that does not infringe on the freedom of others]. This does not mean that Thai students should not study Kantian ethics, but it means, in my view, that as long as these beliefs and practices are perceived as foreign (due to the content of the beliefs as well as the vocabulary, which would need to be translated), it is very difficult to internalize these practices so that people really understand why they need to respect privacy rights.

Buddhist philosophy has something substantial to contribute to global discussions on the ethics of privacy. This should be of interest not just for people in Buddhist cultures; the interest should be global. 

Well, you don’t let Kant have the last word. Your argument takes a pragmatic approach, following the Buddha’s teachings from the Cula Malunkya Sutta.

The idea behind the Cula Malunkya Sutta is that when one is confronted with an emergency, one needs to take care of the emergency first and leave other things for later. Of course, the Buddha said that you are always in an emergency. He wanted to shake us violently from our complacent way of living where we don’t see any emergency. The fact that our lives are full of suffering or unsatisfactoriness (dukkha) is a real emergency. Buddhism can contribute to discussions on privacy by showing that, while we might need to go deeper into the conceptual and theoretical foundations of privacy—after all this is what philosophers do—there must be a concrete set of practical guidelines that can tell the authorities what they need to do in order to respect the privacy rights of their population.

Speaking of actionable guidelines, A Buddhist Theory of Privacy invites the reader to approach privacy as a skillful means rather than as a possession. What does that look like on a day-to-day basis?

The main thrust of the book is that we can conceive of privacy not as a property that we possess, but rather as a skillful means, to use the Buddhist term. The Buddha does not merely present the actual content or the truth of his teaching in its unadulterated form. The Buddha finds different ways of teaching in order to suit the temperament of each individual student.

This can translate into the practical guidelines I mentioned earlier. The justification of privacy can proceed through skillful means. 

Rather than proposing that privacy should be in place because human beings possess it as a right, as Kantians argue, Buddhists would say that respect of the right to privacy is a skillful means toward the realization of the kind of society that we cherish. The end result should be the same, but the ways we think about it and theorize it are different. 

Your book cites theorist Alan Westin, who identifies privacy as a “social good” that requires continuous support from the “enlightened public.” Westin’s not talking about enlightenment in a Buddhist sense, but does enlightenment play a role in your theory of privacy? If so, how?

It does in a metaphorical way. Enlightenment in the Buddhist sense represents the final goal of human action. An action is good if it contributes to realizing an eventual goal. Enacting sound privacy guidelines and respecting people’s rights are such paths. It is a way of formulating an ethics that is practicable in today’s world. Democracy cannot stand by itself like a mountain. It requires continuous support from the public; it is always an ongoing process. We can look at this in Buddhist terms as an exhortation to practice continually, to always improve yourself so that you achieve the goal.

You published A Buddhist Theory of Privacy in 2016. Have your theories on digital privacy evolved since then?

My most recent book, The Ethics of AI and Robotics: A Buddhist Viewpoint, published in 2020, develops an ethics for artificial intelligence from a Buddhist perspective. In it, I discuss privacy in the context of the use of AI algorithms, such as in social media, when our data is collected by tech companies for commercial purposes. I return to justifying privacy in more pragmatic terms; that is, privacy is needed because we value the ideals of democracy and equality, not because privacy should be valued on its own. 

How do you view today’s rampant data collection from a Buddhist perspective?

The collection of data from the users of social media apps must be done in a kusala way; that is, in a way that benefits everyone and accords with general Buddhist teachings, especially on compassion. For example, if these companies can show that their data collection does not harm users and that it actually protects and enhances users’ welfare (admittedly a difficult showing), then it can perhaps be shown that their way is kusala.

For a more granular example, Google recently announced a new, ostensibly more “private,” tracking system for navigating the internet. Looking at this in terms of anatta, do you view Google’s promise to limit tracking as a skillful means?

It depends on the motivation behind the implementation. If the intention behind such limiting is a beneficial or kusala one, then it can be a skillful means. However, if the intention is otherwise, then it is not skillful at all. In any case, we can adopt a wait-and-see attitude. Google does seem to be responding to calls for more privacy protection, and that would be a good sign. To find out what its intention or motivation really is, we have to rely on concrete evidence and on the consequences that follow from its decision.

This conversation has been edited for length and clarity.

The post Rethinking Digital Privacy from a Buddhist Point of View appeared first on Tricycle: The Buddhist Review.

Apps to Help You Stop Using Apps https://tricycle.org/article/stop-using-apps/ Fri, 28 Feb 2020 11:00:49 +0000

Your phone can help you quit your phone addiction.


Fifteen minutes after I’ve woken up on any given morning, I’ve likely done the following: checked Facebook, posted on Instagram, scrolled through three email inboxes, responded to Slack, WhatsApp, and text messages, and looked up the weather.

Whether this morning habit is sick or simply practical is fuzzy even to me. The new technology that led us here evolved so quickly, “it didn’t occur to anyone to ask about the psychological impact” until it was too late, said Lily Cushman, a longtime practitioner and teacher whose job running operations for Sharon Salzberg includes a hefty amount of phone time.

Now we are older and, perhaps, wiser consumers. We know that our phones’ alerts, notifications, and even colors are designed to snare us, and we’d like to ensure that we’re in control of our phone and not the other way around. But short of chucking the thing in a dumpster, how do we do it?

There’s an app for that. Actually, there are a bunch of them, as well as a number of strategies you can try when you need to reconfigure a phone relationship that has turned toxic.

Step 1: Arm Yourself with Information.

As in any relationship, you have to get to know yourself first. How much do you use your phone, and where is your time going while you’re on it? Apple and Android have built-in trackers (Screen Time and Digital Wellbeing, respectively) that you can find through your phone’s settings. Here you can see your daily total usage, the breakdown by app, and the number of phone pickups and notifications as well as the time of day they occur. (I was surprised by my own numbers, which were low in everything except for text messaging: I was clocking between two and three hours daily on my phone just texting.)

The app Moment is another useful tracking tool if you’d like to be hit over the head with hard proof of your addiction. It calculates how much of your waking day you spend on your phone (the average is 23 percent), as well as total pickups and screen time for the week. (It’s Thursday as I write this, and I’ve been on my phone for almost 18 hours so far.) Moment lets you set different goals (like less screen time) and sends you nonjudgmental notifications when you don’t meet them.

Step 2: Take Action.

With your new, potentially horrifying information at hand, you can design a mitigation plan that fits your behavior. Cushman, for instance, balances her phone use by “implementing clear boundaries” during her day when her phone is work-only, personal-only, or on silent.

If you need help, Screen Time lets you set time limits on certain apps. I set mine for five minutes on any social media app, and while I could choose to override the limit, the notification that my time was up was often enough to shock me out of a mindless scroll vortex. Really serious phone abusers might consider Flipd, which will lock you out of all third-party apps and disable notifications for a predesignated amount of time. Of all the options I tried, I found this worked the best to successfully pull my attention away from my phone for a sustained period. 

A host of apps—Forest, Focus, Remember the Milk, and Space—work through gamification. You choose the amount of time you’d like to be off your phone, leave the app open, and then do your thing in the real world. If you’re successful, you’re rewarded with a growing tree, candy, or (in the case of Remember the Milk) points that you can redeem at stores and vendors. I found I had to already be highly motivated to stay off my phone for these apps to alter my behavior; in fact, a simple old-school idea from the New York Times worked better: Wrap a thick, tight rubber band around the middle of your phone. It’s just annoying enough to make you give up on using it.

For those who would like to decrease their phone use not just for specific spurts but across the board, you can also try putting your phone in downtime or changing your phone’s screen to black and white (google for the instructions). Downtime will put nonessential apps in shadow and remove the red numbered notification on their corners, which works well to kill compulsive checking. And using your phone in black and white is so much less enjoyable that this might be enough by itself to curb your enthusiasm. I didn’t last for an hour on this setting before I changed it back.

Step 3: Enjoy.

Zen priest and professor Kurt Spellmeyer lost his cellphone over five years ago and never bought another one. But while he relishes the freedom that comes with being untethered, he doesn’t think “phones are inherently samsaric.”

“We can’t live the way Buddhists did in the 6th century BCE,” he said.

This is good news for those of us who like our phones or need them for work or personal demands. Cushman and Spellmeyer agree that a positive relationship is possible, and that we can use the same technology that imprisons us to set ourselves free.

“You can make it a place that’s really safe for you, that bolsters you, that reminds you of who you are,” said Cushman. What that looks like will be different for everyone and can change over time.

Personally, I realized that I like my morning flurry of activity. It fills me with energy, gets me excited about the day, and connects me first thing to my family and friends.

But the texting—it might take more than a couple of apps to save me from that.

The post Apps to Help You Stop Using Apps appeared first on Tricycle: The Buddhist Review.
