Should we pause AI development?


Posted by George Kao, Authentic Business Coach on Friday, March 31, 2023


The Open Letter to “Pause Giant AI Experiments” is making the rounds. 

It has many good points.  But the called-for solution is deeply flawed and cynical.

  1. The letter targets the two leading AI companies: OpenAI and Midjourney. It looks like a smart strategic move by competitors (in the US and abroad) to slow down their development so everyone else can catch up. Notably missing from the signatories are the CEOs of OpenAI and Midjourney... yet all of their major competitors signed it 🧐

  2. Even if the agreement goes through, it would be very difficult to enforce – competitors would likely continue developing their AIs in secret.

  3. China, Russia, and other less democratic states are working nonstop to catch up, and it's unlikely they'll adhere to such an agreement. I'm far more afraid of them reaching AGI (artificial general intelligence) than of the US getting there first. It's truly a national security risk, similar to the development of the nuclear bomb. Who do we want to arrive there first?

  4. Here are 4 industries that are immediately threatened by AI:

    1. The financial sector – analysts and bankers, whose jobs rely on numerical and statistical skills that AI excels at: forecasting, modeling, and advising on financial decisions.

    2. The legal industry – reviewing contracts, conducting research, writing legal briefs, and even negotiating can all be done more efficiently with AI.

    3. The media – producing text and video content that gets attention. AI can analyze social sentiment and capture far more attention than human-made media can. Yes, of course there's a danger there, but that's a separate topic (Alignment – see below).

    4. Tech workers – many are threatened by AI replacing the coding skills they spent years (and degrees) developing.

 I'm not surprised that they're all joining forces to promote this movement.

Basically, I see this as a PR move.  

And a hedge: if something goes wrong later, they can say, "I told you so. I was one of the signers."


Conflict of Interest from the “Future of Life” Institute

Nobody from OpenAI or Midjourney – the two leading AI companies – signed the letter.

Who is behind the letter? The “Future of Life” Institute.  Who is behind the institute? Here are some of the leaders:

Elon Musk, a co-founder and donor of the institute, has a history of disagreement and competition with OpenAI, which he also co-founded but left in 2018. He reportedly wanted to take over OpenAI and accelerate its development, but was rejected by the board. He also withdrew his donation to OpenAI after his departure.

Jaan Tallinn, co-founder and board member, is also an investor in several AI companies, such as DeepMind, Vicarious, and AI Foundation… competitors to OpenAI and Midjourney.

Viktoriya Krakovna, co-founder and board member, is also a senior research scientist at DeepMind, which is a rival of OpenAI and Midjourney in developing powerful AI systems.

Do they have a vested interest in seeing OpenAI and Midjourney stop their development for a while? I’ll let you decide.


What then shall we (as a society) do?

At this point, it’s obvious that AI development isn't going to slow down. (Which is why the Open Letter has lots of appeal, even if it’s a deeply flawed project.) 

What's wiser, in my opinion, is to work hard on the Alignment problem (aligning AI with human values) and to promote that movement, rather than a blanket moratorium.

Some specifics about this follow. When I use the word “we,” I primarily mean AI company leaders, governmental regulators, and AI nonprofits:

  • Instead of pausing AI development, we should encourage more transparency and accountability among AI labs and researchers – by sharing data, code, and methods, publishing peer-reviewed papers, and engaging with external reviewers and auditors.

  • We should work on establishing clear and enforceable standards and regulations for AI systems, such as by defining ethical principles, setting safety and quality criteria, and creating legal frameworks and institutions for oversight and governance.

  • Instead of focusing only on the risks and dangers of AI development, we should highlight the opportunities and benefits of AI systems – by showcasing positive use cases, supporting social good initiatives, and fostering public awareness and education.

Thanks to Bing AI for those bullet points above ;-) 


What shall you and I do?

What about “regular” people like us, who aren’t leaders of AI companies or government regulators? 

Let’s shore up our own sustainability before getting lost in the sea of news about AI and what “should” be done publicly. (To discuss what should be done publicly, see this other post “I” made…)

Ultimately, we each need to take individual responsibility for our own careers and families. AI is truly going to replace much of our work and our entertainment. 

Much of our previous grunt work will soon be automated, and we can then use the resulting spaciousness of time, energy, mind, and heart to create far more beautiful things, and more helpful and empathic services, to help other humans grow and enjoy life.

AI isn’t going away. It’s only growing faster and becoming more deeply embedded in tech and society. It’s urgent for each of us to learn how to use AI to secure our own future, to use it responsibly, and to help our families and friends do the same.