Faith leaders from across the Presbyterian Church (U.S.A.) and the ELCA recently shared concerns and opportunities related to the use of artificial intelligence. The dialogue was held at the annual meeting of the Ecumenical Roundtable on Faith, Science and Technology at the ELCA offices in Chicago in April.

Over the course of the meeting, the discussions quickly turned into a series of futuristic and perhaps even existential questions. More pressingly, many wondered whether the church had anything of relevance to say about artificial intelligence, or whether the world would even listen.

Participants also needed the insights of an expert, if not a practitioner. Anne Foerst, who explores religious and ethical questions about artificial intelligence and robotics in St. Bonaventure University’s computer science department, was the featured speaker, educating the group on what large language models (LLMs) are and how they are essential to the creation and use of today’s artificial intelligence.

Her work as an ethicist includes participating in AI research at the Massachusetts Institute of Technology (MIT) robotics lab, where she was a theological advisor for Kismet, a robot with a face, ears and a mouth that allowed it to interact with the world around it. She was also affiliated with the Center for the Study of Values in Public Life at Harvard Divinity School. Her best-known book mixing theology and artificial intelligence, God in the Machine: What Robots Teach Us about Humanity and God, was published in 2005.

ChatGPT — is it good or evil?

She opened her talk by illustrating how frustrated computer science students at St. Bonaventure become when they try to make ChatGPT ‘say’ something racist. It happens, but perhaps not as quickly as they initially expect. The example was one of several showing how humans anthropomorphize AI when they see it ‘speak’ or reflect on something we deem important.

She also outlined what LLMs do. At their core, they predict which words are most likely to come next, which is how they can help you complete a sentence. In the case of ChatGPT, it remembers everything it is ever told. And the truth of the matter, according to Foerst, is that we have no idea how it forms its own knowledge or ideas, or why it reacts to a prompt the way it does.

“LLM tuning is more of an art than a science,” she added.
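
To make the sentence-completion idea concrete, here is a minimal sketch of next-word prediction in Python. It assumes the open-source GPT-2 model and the Hugging Face transformers library, a small, publicly available stand-in for the far larger models behind ChatGPT; Foerst did not demonstrate this code.

    # A minimal sketch of sentence completion with an open LLM.
    # Assumes: pip install transformers torch
    from transformers import pipeline

    # GPT-2 is a small public model; ChatGPT's models are far larger
    # but rest on the same next-word-prediction principle.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "The role of the church in an age of artificial intelligence is"
    result = generator(prompt, max_new_tokens=20)

    # The model simply extends the prompt with the words it judges most likely.
    print(result[0]["generated_text"])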

LLMs are an ‘it’ not a ‘who’

While there are parameters in place governing what information is fed into an LLM and what it will say, those parameters can be circumvented. In her view, we also need to remember that if an LLM thinks at all, it thinks in numbers, not in language. The systems have become so opaque that we now need LLMs to understand LLMs, she said, as an example of the conundrum.
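
Her point that an LLM thinks in numbers, not language, can be seen in how these models tokenize text. In this sketch (again assuming the Hugging Face transformers library and the public GPT-2 tokenizer), every prompt is converted into a list of integers before the model ever processes it:

    # Sketch: what an LLM actually "sees" is numbers, not words.
    # Assumes: pip install transformers
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    ids = tokenizer.encode("What robots teach us about humanity")
    print(ids)                    # a list of integer token IDs, not words
    print(tokenizer.decode(ids))  # maps the numbers back to the text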

Looking at LLM use from a theological perspective, Foerst reminded the group that LLMs are neither embodied nor self-reflective. An LLM is not solipsistic and will not be able to achieve human consciousness, she added.

From an ethics standpoint, the discussion is one of bias in the data the LLMs are trained on, according to Foerst: the training data mirrors all of humanity’s racist and sexist language. Some information platforms, such as The New York Times, are sounding alarm bells over the free use of copyrighted content, she added. There is also an illusion of privacy with these models, in that all interactions with ChatGPT, for example, are stored to improve the AI, something Foerst said ChatGPT executives would deny if asked.

Beyond the prospect of AI-generated campaign videos and other problems likely to arise during the U.S. presidential election season, Foerst raised other pressing long-term issues. The applications of AI are seemingly endless, and its proliferation is still in its early stages. Government concern, and even the White House’s 2023 executive order on AI safety and security, is not likely to slow the technology’s growing use globally.

Ethical concerns

If anyone needs to respond to AI, it needs to be the church, Foerst said. Mainstream churches need to respond, but they are currently not being heard.

“The churches need to come out because they have the moral authority to speak out and to speak of the dangers,” she told attendees. Part of the issue is the unsupervised learning aspect of LLMs, but another ethical question is whether we can become too dependent on AI.

She added, “The problem comes in if I think I need ChatGPT to the exclusion of human interaction.”

As digital workers enter the workforce, the potential for job losses will hit even white-collar workers.

“Our hope is in being the best humans we can be,” she said.

Following her lecture to both an in-person and a Zoom audience, much of the discussion centered on the sci-fi-oriented visions of technology and AI that have proliferated in recent years. But just what it will take for these ethical considerations to be heard and accounted for remained very much up in the air in the discussions among attendees that continued the next day.

Susan Barreto

Susan is an author with a long-time interest in religion and science. She currently edits Covalence, the Lutheran Alliance for Faith, Science and Technology’s online magazine. She has written articles for The Lutheran and the Zygon Center for Religion and Science newsletter. Susan is a board member of the Center for Advanced Study of Religion and Science, the supporting organization for the Zygon Center and the journal Zygon. She also co-wrote Our Bodies Are Selves with Dr. Philip Hefner and Dr. Ann Pederson.
