October 29, 2023
Dear friends,
The alarm over the “existential threat” of AI is resounding throughout social, popular, and trade media. Calls for the pause, regulation, and ethical oversight of AI are ubiquitous. Tech giants like OpenAI, Meta, Microsoft, Google, and a slew of well-funded startups are decrying the perils of Artificial General Intelligence (AGI) while developing and promoting their own AI products. Yet there is little discussion of the impending science of what intelligence might be and its origin in the fundamental physics of life, and hence, the natural role of “intelligence” in all aspects of life, human and non-human. The current framing of threats and opportunities is entirely in the context of machine-based AI, where the formative issues are engineering and commercial viability, to be mitigated by historically failed modes of regulation and oversight. The business models and regulatory models remain the same: the a priori extraction of maximum value by private interests with the a posteriori mitigation of negative externalities by public institutions. It is an open question whether such default modes of oversight will suffice for AI.
There are those of us, however, who believe we are on the brink of an exciting, foundational, cross-disciplinary scientific “breakthrough” in our understanding of what constitutes “living things” and “extended cognition” at all scales and domains. This breakthrough is every bit as consequential as genomics and synthetic biology, and therefore it emphatically should be treated with the same, if not even greater, scientific caution, rigor, and oversight. If at all possible, “AI” should not become captive to unbridled commercial and political interests, as has been the record for so many technological innovations. That may prove to be a naive hope, as machine learning algorithms designed to enhance “engagement” (e.g., addiction) already continue to spread cognitive opiates of fear, grievance, and conspiracy. Such elementary AI systems have fundamentally contorted public perceptions of vaccinations, elections, the economy, history, and even science itself. Should this trend continue, and indeed be amplified through “AI”, the need, indeed the demand, for trusted, evidence-based, tested, and calibrated content and expertise will become paramount. If, as many analysts predict, “Artificial Intelligence” will be ubiquitous and underpin virtually every application and organizational function imaginable, then intelligence itself must be understood through the most rigorous of scientific protocols and adhere to principles of data provenance. To inoculate ourselves against future abuses of Artificial Intelligence, much less Artificial General Intelligence (AGI), we will need the thorough and independent vetting and protocols of science. We will need policy and technology to be thoroughly grounded and vetted by the very best scientific practices. Such an approach might lead to new ways of overseeing and directing AI that are not a posteriori correctives, but a priori principles intrinsic to the forms of intelligence being synthesized.
Through the combined efforts of the Active Inference Institute, whose founding principles are grounded in science, the computational physics and biology of living intelligences, and open technologies, and the Boston Global Forum, whose mandate is the formation of global policies and the AI World Society model for the inclusive and beneficial application of AI, there can be real opportunities not only to influence the public narrative around AI, but also to undertake collaborations that further the scientific understanding of living intelligences and provide scientifically informed policies and guidance. The signatories embrace these views and those expressed in the attached letter.