

Should AI Development Take a Break?

by Elliot Benjamin, Ph.D., Ph.D., April 2023

Integral World has been buzzing lately with essays about artificial intelligence (AI) [1]. These essays, along with a number of comments on them, describe a range of views, accentuating both the positive and negative aspects of AI [1]. But for me, the bottom line is that in spite of AI’s tremendous potential for the development of knowledge in virtually all spheres of human endeavor, there are grave dangers that accompany its development. Furthermore, the completely unchecked development of AI machines far more powerful than GPT-4, namely what Lane and Diem have referred to as artificial general intelligence (AGI) and super general intelligence (SGI), runs the risk of far too many people believing that these machines are “self-aware,” i.e., that they are essentially “human” [2]. Lane and Diem described the overwhelming problems of students cheating through the use of GPT-4 in their classrooms, and the ways they are trying to combat this [2]. However, the ramifications and multitude of present dangers from the current use of AI, and the horrifying rampant dangers of its development through public misperceptions of these machines, are extremely alarming to me.

Future of Life Institute Letter Calling for Taking a Break in AI Development

My alarm is shared by a number of computer scientists and tech administrators, who have signed an open letter formulated by the Future of Life Institute, requesting that all AI labs immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 [3]. Some of the passages in this letter particularly stood out for me [3]:

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control and of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed once we are confident that their effects will be positive and their risks will be manageable.”

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society [4]. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”

Institute of Electrical and Electronics Engineers (IEEE) members have expressed a diversity of opinions in regard to this letter [5]. Some of these opinions I found to be particularly relevant [5]:

“AI can be manipulated by a programmer to achieve objectives contrary to moral, ethical, and political standards of a healthy society. . . . I would like to see an unbiased group without personal or commercial agendas to create a set of standards that has to be followed by all users and providers of AI.”

“My biggest concern is that the letter will be perceived as calling for more than it is. . . . I decided to sign it and hope for an opportunity to explain a more nuanced view than is expressed in the letter. . . . But on balance I think it would be a net positive to let the dust settle a bit on the current LLM versions before developing their successors.”

“These models are being unleashed into the wild by corporations who offer no transparency as to their corpus, their architecture, their guardrails, or the policies for handling data from users. My experience and my professional ethics tell me I must take a stand, and signing the letter is one of those stands.”


I very much agree with the impetus of the Future of Life Institute letter, and I have signed it as well. Yes, I think we should take a break from AI development and try to gain some sanity, along with an ability to understand and handle the alarming dangers we are unleashing, before we resume developing AI into far more powerful systems than we presently have at our disposal. There are enough problems with the current state of AI development, as described in the recent Integral World AI essays and comments [1], that I think it is most definitely warranted for government agencies to become involved in working to prevent a catastrophe driven by the financial motivations of large tech corporations. One writer, Umair Haque, has provocatively conveyed the dangers of AI development for social interaction and authentic learning as follows [6]:

“AI learning often involves an individual working alone with a bot. The bot does the research to, as one AI tool says, ‘get you instant answers.’ It can crowdsource information to help students find facts about their environment, solve a problem and come up with a creative way forward. But AI doesn’t compel students to think through or retain anything. And simply being fed facts and information is not the same as ‘learning.’ Ultimately, if you want students to learn, they need to shore up their neural networks and use their neuroplasticity to develop their own intelligence. This is where AI falls short. There is nothing better than collaboration in real life — connected, reciprocal learning between a student and their peers or teachers — to spark the brain’s natural drive to develop and grow. When my kids engage with AI, the interaction inevitably fizzles out. Eventually, they need to move their bodies, look one another in the eyes and communicate as they tackle a new skill.”

And Haque has described his interesting perspective on “cheating” [7]:

“What does ‘cheating’ really mean? Cheating doesn’t just mean: you got a good grade and you didn’t earn it. Cheating means, kid, you cheated yourself. You didn’t learn from that great book, essay, event, and so on. You didn’t even try to engage with the challenge of learning from it, which is part of the lesson too, because growing is sometimes hard. And you cheated everyone else, too, not of ‘grades,’ but of the way in which we really learn, which is collectively, which is why school, from Aristotle’s time to now, has always been centered around classes.”

And in regard to the concerns expressed in the Integral World essays and comments about people thinking that AI machines are “human” [1], Haque has quite the poetic take on this [7]:

“It’s the opposite of Prometheus. . . . It’s not the light of fire. It’s a thief which steals the fire inside us. And puts it in a bottle, right there, in the place a heart should be, but never can. And those among us who are deluded, greedy, cruel, violent, and vain, point at this heart-breaking, wretched thing, a machine trying to show, desperately, that it has a soul, the very one it’s stolen from us, beating in its chest — they tell us that it’s really true. A tin man who stole our soul now has one of his own. It’s hard for me to think of a more Aeschylean tragedy than that.”

I think back to the years that I taught my Numberama program to children, working to impart to them the utter joys of recreational number theory [8]. I can remember the sheer joy and satisfaction they experienced when discovering the third perfect number [8]. But as I started to work with older children, I had to caution them not to immediately use the internet to obtain the third perfect number, conveying to them that this would defeat the whole purpose of their discovery and adventure. Now multiply this scenario many times beyond human comprehension with AI. Yes, I think we need to take a break in AI development and let our humanness try to catch up with our technology.

Notes and References

1) See Frank Visser (2023), ChatGPT Writes a Poem on Ken Wilber’s Integral Model. www.integralworld.net/visser228.html; Frank Visser (2023), ChatGPT Comments on Ken Wilber’s Understanding of Evolutionary Theory. www.integralworld.net/visser229.html; David Lane (2023), The Cyborg Has Entered the Classroom: A.I. and the Future of Education. www.integralworld.net/lane272.html; David Lane and Andrea Diem (2023), “Please Don’t Turn Me Off!” Alan Turing, Animism, Intentional Stances, and Other Minds. www.integralworld.net/lane273.html

2) See David Lane and Andrea Diem (2023), “Please Don’t Turn Me Off!” Alan Turing, Animism, Intentional Stances, and Other Minds. www.integralworld.net/lane273.html

3) See the letter entitled Pause Giant AI Experiments: An Open Letter at https://futureoflife.org/open-letter/pause-giant-ai-experiments/. There are presently over 30,000 signatures.

4) The letter references these “other technologies with potentially catastrophic effects on society” as follows: “Examples include human cloning, human germline modification, gain-of-function research, and eugenics.” (cf. [3])

5) See Margo Anderson (2023), “AI Pause” Open Letter Stokes Fear and Controversy: IEEE Signatories Say They Worry About Ultrasmart, Amoral Systems Without Guidance. https://spectrum.ieee.org/ai-pause-letter-stokes-fear

6) See Umair Haque (2023), The Three Things AI is Going to Take Away From Us (And Why They Matter Most). https://eand.co/the-three-things-ai-is-going-to-take-away-from-us-and-why-they-matter-most-69956984b63f#:~:text=But%20it%20is%20that%20AI,and%20in%20the%20end%2C%20democracy . The first quote by Haque is one that he attributed to education professor Rina Bliss in her article entitled Opinion: AI Can’t Teach Children to Learn. What’s Missing? https://www.washingtonpost.com/opinions/2023/04/11/ai-teaching-children/

7) See [6] and note that in the second Haque quote, italics are used precisely as the author used them.

8) See Elliot Benjamin (2006), Integral Mathematics: A Four Quadrants Approach. www.integralworld.net/benjamin2.html; And Elliot Benjamin (2017), Numberama: Recreational Number Theory in the School System. Bentham Science Publishers.



Elliot Benjamin

Elliot Benjamin is a philosopher, psychologist, mathematician, musician, and writer, with Ph.D.s in mathematics and psychology. He has published four books and over 200 articles.