Global Enlightenment Community member Tom Kehler discusses the third generation of AI: Concerns and Possible Solutions for Large Language Models

May 21, 2023 | News

Tom Kehler is an AI pioneer in Silicon Valley and the chief scientist of CrowdSmart, a start-up that uses AI to assist decision-making and recommendations. A member of the Global Enlightenment Community, he spoke with AI and Faith about the third generation of AI. The Global Enlightenment Mountain, a new model of Silicon Valley connecting the US, Japan, India, and Europe, supports and coordinates this concept.

The full interview can be read here:

Schwarting: The rapid innovation in the space of large language models seems to correlate with troubling cutbacks in AI ethics teams at large companies like Twitter and Microsoft. What are your thoughts on our ethical understanding of ChatGPT and its ramifications?

Kehler: I discussed this a bit in my interview with Christianity Today. I believe that, for the first time in history, we have taken misinformation generated by second-generation AI and have put it on steroids. It is now possible with ChatGPT to create massive amounts of misinformation that sounds extremely intelligent, which is dangerous. An important facet of the issue is dealing with data provenance, and academic publishing is a great analogy. Google’s idea for PageRank stemmed from the mathematics of citation research: an eigenvalue problem drove the rankings. At CrowdSmart, we use a similar approach with the ideas of a group, with each idea retaining its provenance. I believe that retaining data provenance is critical to the future of AI. OpenAI relies on a whack-a-mole strategy, where any mistakes must be corrected over and over. Furthermore, if an LLM gave a wrong answer, who ought to be held accountable? If you query ChatGPT on its trustworthiness, it will say it is not trustworthy. From a legal perspective, if we are going to regulate one thing, let us require that the model can provide data provenance mapping to an accountable source.
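The eigenvalue idea Kehler mentions can be made concrete: PageRank treats a page's importance as the stationary distribution of a random surfer over the link graph, typically computed by power iteration. The sketch below is illustrative only; the four-page link graph and the `pagerank` helper are hypothetical, not drawn from Google's or CrowdSmart's actual systems.

```python
# Minimal power-iteration sketch of PageRank, illustrating how an
# eigenvalue problem drives the rankings. Graph and function are
# hypothetical examples, not a production implementation.
def pagerank(links, damping=0.85, iters=100):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iters):
        # teleportation term: (1 - d) / n for every page
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # each page shares its rank equally among its out-links
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # dangling page: spread its rank evenly over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical citation-style graph: C is cited by A, B, and D.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(graph)
```

The fixed point of this iteration is the principal eigenvector of the damped link matrix, which is the sense in which "an eigenvalue problem drove the rankings"; pages cited by many others, like `C` here, accumulate higher scores than uncited ones like `D`.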

Schwarting: There are many technical hurdles to providing accurate and reliable data provenance in generative models. Do you think that reliable data provenance has a hope of being taken on, both legally and technically?

Kehler: I think it has a very simple fix; namely, building a better training set. We have been lax about constructing a training set, and instead just scrape everything. Some have proposed watermarks to trace sources, but I believe more innovations will occur in this space. Historically, we have a process in human society where we speak to logic and evidence. Furthermore, there is nothing in faith that does not point to evidence for the reason for that belief. Faith has a somewhat Bayesian quality. For example, we have hope because of the resurrection of Jesus Christ, and if that is not true then we do not have hope. Hope is predicated on a verifiable piece of evidence: the large set of people who witnessed the resurrection. It is fundamental to society that we have a sense of the truth value of the information that we are building from.

Tom Kehler speaks at the BGF High-level Conference, April 26, 2023