The Tokyo Forum 2023, co-hosted by the University of Tokyo and the Chey Institute for Advanced Studies, brought together academics, business leaders, and members of the public to discuss the major global issues facing us today and to explore solutions. While political scientists constituted the majority of the participants, the panelists included university presidents, chairs and CEOs of private corporations, financial analysts, sociologists, psychologists, AI experts, and even a philosopher and a performance artist. There was a special session dedicated to student panelists, and students were also included in public discussions on shaping our future.

The topics ranged from climate change, AI, wars, and diversity to gender equality, and it was intellectually stimulating to hear perspectives from top experts in so many different fields. One major disagreement arose between AI enthusiasts and AI skeptics. Hiroshi Ishiguro was probably the greatest proponent of AI. He maintains a twin android that grows old with him (see a report by DW, a German broadcaster, for more), and he argued for a life with enriched relationships (both with robots and with other humans) through greater participation of AI in society. Alison Gopnik was more skeptical. AI does not have parents or children, she argued, and it does not have the embodied understanding of what it means to take care of other people. She calls this a “relationship of care,” and it is what distinguishes humans from AI.
I agree with her that we should preserve and cultivate this relationship of care, but I am not sure a relationship with AI is an all-or-nothing proposition. It is far too easy to recall or imagine dehumanizing interactions with AI (why are customer service AIs so bad?), but AI already willingly performs many services as if it cared for us. Does it not count for something that its acts are sometimes indistinguishable from those of caring humans, even though they are not grounded in the same human emotions, a difference that may come down to electrical versus biochemical mechanisms? We should also not forget that not all human interactions are caring or benign. For the time being, I would say the worst offense by an AI is less hurtful than the worst offense by a human. So I agree with Ishiguro that it is OK to prefer the company of androids sometimes and the company of humans at other times. But I also agree with Gopnik that we should focus on proliferating the relationship of care as we develop AI. I think this is possible. AI can help us when a situation impairs our ability to maintain a relationship of care with other humans, such as when people we care about make us angry or hurt us. Instead of letting our emotional reactions destroy the relationship, we can let AI, unaffected by emotion, craft a tactful response that prioritizes what is really important.

I am not sure whether I agree or disagree with Ishiguro’s final claim: that the difference between humans and AI will become meaningless in the near future. I agree with one of his premises, that AI is becoming increasingly humanlike, and I partly agree with the other, that humans are becoming more AI-like. But I am not sure these trends will continue, converge, and hold. Humans and AI will have different futures, not necessarily in opposition to each other, but with different abilities and different roles to play in this universe.

Speaking of blurring the boundary between humans and AI, I must make special mention of Stelarc, an artist who has been experimenting with merging his body with technology. In his experiment in distributed perception, his eyes received visual input from somebody in Antwerp and his ears received audio input from somebody in Basel, while somebody else controlled his arm. See his portfolio for other mind-boggling artistic experiments.

The AI theme also came up in the university presidents’ session. It is not just us language/writing teachers who are concerned about the impact of AI on education. Tshilidzi Marwala, the rector of the United Nations University, emphasized the importance of interdisciplinary collaboration in finding an appropriate approach to working with AI. Although he didn’t name names, that means we need insights from technologists, policy analysts, sociologists, psychologists, philosophers, and us, the language teachers. Teruo Fujii, the president of the University of Tokyo, singled out critical thinking as the most important skill in the age of AI: in the era of fake news and deepfakes, the ability to discern truth from fiction becomes ever more important. Many of us teach critical thinking, so what we do remains very relevant as the current AI revolution continues. Yuko Takahashi, the president of Tsuda University, did single us out: in the age of AI, the teaching of basic academic reading and writing becomes even more important, not less.

The forum dealt with many other issues, such as how to address transnational problems while we are still constrained by nation-centric systems and institutions, an issue that encompasses climate change, international wars, food security, and more. Some panelists expressed frustration with the barriers they encountered in their work and felt powerless against the authorities that stood in their way. Yet these people are quite powerful themselves. They may be stymied by heads of state (one of them specifically mentioned a plan of his that had been frustrated by a US President), but they are far from helpless. Despite all the challenges, a lot of powerful people are working very hard to solve the world’s biggest problems. We may disagree on approaches at times, but I felt energized knowing that we all share the common value of wanting to make the world a better place.
