Playing With Fire – ChatGPT

The world is very different now. For man holds in his mortal hands the power to abolish all forms of human poverty and all forms of human life.

John F. Kennedy

Humans have mastered lots of things that have transformed our lives, created our civilizations, and might ultimately kill us all. This year we’ve invented one more.


Artificial Intelligence has been the technology right around the corner for at least 50 years. Last year a set of specific AI apps caught everyone’s attention as AI finally crossed from the era of niche applications to the delivery of transformative and useful tools – DALL-E for creating images from text prompts, GitHub Copilot as a pair-programming assistant, AlphaFold for predicting the shape of proteins, and ChatGPT 3.5 as an intelligent chatbot. These applications were seen as the beginning of what most assumed would be domain-specific tools. Most people (including me) believed that the next versions of these and other AI applications and tools would be incremental improvements.

We were very, very wrong.

This year with the introduction of ChatGPT-4 we may have seen the invention of something with the equivalent impact on society of explosives, mass communication, computers, recombinant DNA/CRISPR and nuclear weapons – all rolled into one application. If you haven’t played with ChatGPT-4, stop and spend a few minutes to do so here. Seriously.

At first blush ChatGPT is an extremely smart conversationalist (and homework writer and test taker). However, this is the first time ever that a software program has become human-competitive at multiple general tasks. (Look at the links and realize there’s no going back.) This level of performance was completely unexpected – even by its creators.

In addition to its outstanding performance on what it was designed to do, what has surprised researchers about ChatGPT is its emergent behaviors. That’s a fancy term that means “we didn’t build it to do that and have no idea how it knows how to do that.” These are behaviors that weren’t present in the small AI models that came before but are now appearing in large models like GPT-4. (Researchers believe this tipping point is the result of the complex interactions between the neural network architecture and the massive amounts of training data it has been exposed to – essentially everything that was on the Internet as of September 2021.)

(Another troubling potential of ChatGPT is its ability to manipulate people into beliefs that aren’t true. While ChatGPT “sounds really smart,” at times it simply makes things up, and it can convince you of something even when the facts aren’t correct. We’ve seen this effect in social media when it was people who were manipulating beliefs. We can’t predict where an AI with emergent behaviors may decide to take these conversations.)

But that’s not all.

Opening Pandora’s Box
Until now ChatGPT was confined to a chat box that a user interacted with. But OpenAI (the company that developed ChatGPT) is letting ChatGPT reach out and interact with other applications through an API (an Application Programming Interface). On the business side that turns the product from an incredibly powerful application into an even more powerful platform that other software developers can plug into and build upon.

By exposing ChatGPT to a wider range of input and feedback through an API, developers and users are almost guaranteed to uncover new capabilities or applications for the model that were not initially anticipated. (The notion of an app being able to request more data and write code itself to do that is a bit sobering. This will almost certainly lead to even more new and unexpected emergent behaviors.) Some of these applications will create new industries and new jobs. Some will make existing industries and jobs obsolete. And much like the invention of fire, explosives, mass communication, computing, recombinant DNA/CRISPR and nuclear weapons, the actual consequences are unknown.
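To make the platform point concrete, here is a minimal sketch of what “plugging into” ChatGPT through the API looks like for a developer. It uses OpenAI’s openai Python library as it existed at the time of writing; the API key placeholder and the prompt are illustrative assumptions, not anything from OpenAI’s announcement.

    # Minimal sketch: calling ChatGPT from another application via OpenAI's API.
    # Assumes the openai Python library (0.x era); key and prompt are placeholders.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder - supply your own key

    # Any program can now send the model a conversation and get text back,
    # then feed that text into whatever it does next.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize this support ticket in one sentence."},
        ],
    )

    print(response.choices[0].message.content)

A handful of lines like these are what turn a chat box into a platform: any application can call the model, chain its output into other tools, and build entire products on top of it.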

Should you care? Should you worry?
First, you should definitely care.

Over the last 50 years I’ve been lucky enough to have been present at the creation of the first microprocessors, the first personal computers, and the first enterprise web applications. I’ve lived through the revolutions in telecom, life sciences, social media, etc., and watched as new industries, markets and customers were created literally overnight. With ChatGPT I might be seeing one more.

One of the problems with disruptive technology is that disruption doesn’t come with a memo. History is replete with journalists writing about it and not recognizing it (e.g. the NY Times putting the invention of the transistor on page 46) or others not understanding what they were seeing (e.g. Xerox executives ignoring the invention of the modern personal computer with a graphical user interface and networking in their own Palo Alto Research Center). Most people have stared into the face of massive disruption and failed to recognize it because to them, it looked like a toy.

Others look at the same technology and recognize at that instant the world will no longer be the same (e.g. Steve Jobs at Xerox). It might be a toy today, but they grasp what inevitably will happen when that technology scales, gets further refined and has tens of thousands of creative people building applications on top of it – they realize right then that the world has changed.

It’s likely we are seeing this here. Some will get ChatGPT’s importance instantly. Others will not.

Perhaps We Should Take A Deep Breath And Think About This?
A few people are concerned about the consequences of ChatGPT and other AGI-like applications and believe we are about to cross the Rubicon – a point of no return. They’ve suggested a 6-month moratorium on training AI systems more powerful than ChatGPT-4. Others find that idea laughable.

There is a long history of scientists concerned about what they’ve unleashed. In the U.S., scientists who worked on the development of the atomic bomb proposed civilian control of nuclear weapons. Post-WWII, in 1946, the U.S. government seriously considered international control over the development of nuclear weapons. And until recently most nations agreed to a treaty on the nonproliferation of nuclear weapons.

In 1974, molecular biologists were alarmed when they realized that newly discovered genetic editing tools (recombinant DNA technology) could put tumor-causing genes inside of E. coli bacteria. There was concern that without any recognition of biohazards and without agreed-upon best practices for biosafety, there was a real danger of accidentally creating and unleashing something with dire consequences. They asked for a voluntary moratorium on recombinant DNA experiments until they could agree on best practices in labs. In 1975, the U.S. National Academy of Sciences sponsored what is known as the Asilomar Conference. Here biologists came up with guidelines for lab safety containment levels depending on the type of experiment, as well as a list of prohibited experiments (cloning things that could be harmful to humans, plants and animals).

Until recently these rules have kept most biological lab accidents under control.

Nuclear weapons and genetic engineering both had advocates for unlimited experimentation, unfettered by controls – “Let the science go where it will.” Yet even these minimal controls have kept the world safe from potential catastrophes for 75 years.

Goldman Sachs economists predict that 300 million jobs could be affected by the latest wave of AI. Other economists are just realizing the ripple effect that this technology will have. Simultaneously, new startups are forming, and venture capital is already pouring money into the field at an astonishing rate that will only accelerate the impact of this generation of AI. Intellectual property lawyers are already arguing over who owns the data these AI models are built on. Governments and military organizations are coming to grips with the impact that this technology will have across the Diplomatic, Information, Military and Economic spheres.

Now that the genie is out of the bottle, it’s not unreasonable to ask that AI researchers take 6 months and follow the model that other thoughtful and concerned scientists set in the past. (Stanford took down its version of ChatGPT over safety concerns.) Guidelines for use of this tech should be drawn up, perhaps paralleling the ones for genetic editing experiments – with Risk Assessments for the types of experiments and Biosafety Containment Levels that match the risk.

Unlike the moratoriums on atomic weapons and genetic engineering, which were driven by the concerns of research scientists without a profit motive, the continued expansion and funding of generative AI is driven by for-profit companies and venture capital.

Welcome to our brave new world.

Lessons Learned

  • Pay attention and hang on
  • We’re in for a bumpy ride
  • We need an Asilomar Conference for AI
  • For-profit companies and VCs are interested in accelerating the pace

10 Responses

  1. Rules, guidelines, controls, regulations; the difference with AI is in efficient execution: Smart Contracts.

    Speaking of Smart Contracts, the opportunity is a decentralized and representative approach for discerning them. Ultimately, Smart Contracts can ensure that politicians are included in the 300m job losses.

    Where are the Founding Fathers 2.0 when you need them most?

  2. We need a lot more thought and dialogue about this, and less algorithm-driven hype-seeking. Let’s get the needed social dialogue and exchange happening.

    https://www.linkedin.com/feed/update/urn:li:share:7049017043320262656/

  3. Steve, the internet came without rules and see how it has empowered economies, minds, technologies and what have you. Humans are born limitless; imagination has no boundaries…at worst the world gets wiped out…what will we miss? Dead people are resting in peace…only the living struggle!

  4. When the Internet started to become available to colleges and universities in the US, some people thought that it could be a platform to spread free thought without government intervention. It was like an anarchist’s dream.

    Today we know that the internet is being used (among many other things) to spread fake news, to facilitate kidnappings and ransom payments, and to exchange child porn, and many other types of crime are probably spreading thanks to the net.

    Why are you so concerned about AI and not about the harmful uses of the internet? It is obvious that if AI platforms today are fed with information from the net, they cannot be “better” (whatever that means) than the data they have collected.

    Instead of taking care of what is really hurting us today, we look somewhere into the future, and panic. Why?

    • The problem is not with the technology but with accountability and social enforcement of rule breaking. The origins of Internet technology did not envisage a lawless information world in which those who profit from the technology were not held accountable for its inappropriate use. The ISPs, search engine providers, and social media platforms have argued that as technology providers they are not accountable for how the technology is used. That is the same argument that gun manufacturers make about the use of guns. But societies have not accepted this argument at face value. Societies focus on the ‘consequences’ of the use of technology. We don’t allow everyone to fly planes, or to drive on roads at any speed they want. We hold people accountable, and impose boundaries on the manufacturers/providers of those technologies.

      The Internet is just another technology. We need to talk about accountability and acceptable use, not use the technology itself as an excuse for failing in this ‘social dialogue’. It’s time those who govern us stepped up to the mark.

  5. I did check out ChatGPT-4. Results: nearly all the claims are hype, nonsense, a fad. As for ChatGPT-4 itself, it is nothing like intelligence. It passes out a lot of nonsense and text hash, and is dangerous if given any credibility.

    To check out ChatGPT-4, I gave it some questions from high school and early college math. For high school, I gave it a plane geometry construction exercise. For college, I gave it a first calculus exercise. ChatGPT-4 got a grade of flat F on both. For the geometry exercise, it gave a lot on triangles that was irrelevant. For the calculus exercise, it gave a lot on calculus that was irrelevant.

    Where ChatGPT-4 might have some utility: Google has a lot of utility. At Google you can type in some keywords, and Google can return several, to dozens, to thousands of results. Studying these results, you may find what you are looking for.

    Similarly, ChatGPT-4 can return a lot of results, and if you study the results you may find something useful.

    But for both Google and ChatGPT-4, there are no guarantees that any of the results really answer your query and, instead, you have to study the many results.

    No revolution. No danger. No big step forward. Nearly all the big claims: just nonsense and hype.

    • I’m puzzled by your response, Sigmund. It appears to contradict much of the work I’ve seen with GPT-4. How you define intelligence matters, so waving a hand with “nothing like intelligence” doesn’t really shed light on where you find GPT-4 coming up short.

      If you haven’t seen it, a very thorough exploration of GPT-4 was undertaken by Microsoft Research. The paper makes clear that your last sentence is unwarranted:

      https://arxiv.org/pdf/2303.12712.pdf

  6. The technology is the final dream of man. If it lives up to all that it promises, we could finally be freed from the drudgery of work, and therefore I think it should be accelerated. The correct metaphor is not a nuclear weapon, a niche thing accessible only to nation-states, but the microprocessor or the internet, an unstoppably commercial product that changes how we interact with everything. Imagine if governments had come together to shut down the internet or suppress the microprocessor – how much of value would have been lost?

  7. You can’t legislate morality. Mind is unique to each of us, and how we use it is left to us to decide, individually and collectively. However, the framers of the internet expected truth to triumph over deceit and the evil side of the coin. Rivers and lakes decide their courses on their own. The internet, in my view, will take its own course. Only God knows when each of us will die, and we die through the agency of God. If technology and the internet were the agents to wipe out the human race, so be it. R.I.P.

  8. Great blog and article! Here’s a related site to throw in the mix for this AI topic: Expontum (https://www.expontum.com/) – An open access resource that helps students, teachers, and researchers quickly find research knowledge gaps and identify what research projects have been completed before.
