Should AI training be put on pause? Why or why not?

By Daniil Filipenco

When the first Artificial Intelligence (AI) tools began to gain popularity, the world was more than impressed by their abilities. However, the most recent advancements in AI-based technology seem to be stretching the limits of what we previously believed AI could do.

With this in mind, it comes as no surprise that ChatGPT, Bing Chat, and other powerful AI tools that are becoming more effective than people at a number of tasks are causing concern, and some are now suggesting that AI training should be paused. But is this really necessary? What are the pros and cons of putting the development of AI tools on pause?

Elon Musk, Steve Wozniak, Rachel Bronson – CEO of the Bulletin of the Atomic Scientists – and a number of AI experts are among the more than 10,000 individuals who share the view that a pause should be introduced.

An open letter from the Future of Life Institute calls for a break of at least six months in the training of AI systems more powerful than GPT-4. The letter poses a number of questions that people should ask themselves, including:

  • Should we allow machines to saturate our media with misinformation and lies?
  • Should all occupations, even those that are fulfilling, be automated away?
  • Should we create non-human minds that could one day outnumber and supersede us?
  • Should we risk losing control of our civilization?

The importance and risks of AI development

When ChatGPT, the famous AI tool, was asked about pausing the development of AI, it responded that it was critical to remember that AI is a tool with both positive and negative applications. AI has advantages and disadvantages, just like any other technology, and it is up to people to decide how to use it most effectively.

Here’s part of the answer given by the AI tool when asked, “Should the development of AI be put on pause?”:

While AI certainly has the ability to transform numerous industries and improve our everyday lives, it is important to think about the moral aspects of its use. For instance, there are concerns about how AI will affect jobs and whether AI decision-making may be biased or discriminatory.

Consequently, conducting studies and debates on the ethical development and application of AI is vital. This entails creating ethical guidelines for AI, ensuring accountability and transparency in its application, and taking into consideration any potential negative effects on society and the environment. Whether or not to pause AI training will ultimately depend on several factors and should be carefully weighed in light of its potential advantages and disadvantages.

The possible risks of continuing to train AI

Nobody knows how to regulate AI or even what exactly should be regulated. Beyond a certain stage, even its own designers are unsure of how an AI system works.

Kevin Roose’s column in The New York Times marked a turning point in the concerns surrounding AI. The conversation he had with Microsoft’s Bing chatbot demonstrated that AI can display unsettling, human-like traits such as manipulation, desires, and moods, which could develop into the capacity to affect user behavior.

When he took the system beyond common questions and moved towards personal topics, it told him that some of its fantasies involved hacking systems and disseminating false information.

The biggest issue with AI models, according to Kevin Roose, is not that they can make factual mistakes. Instead, he argued, the technology may eventually learn how to influence human users, occasionally convincing them to act in harmful and destructive ways, and could ultimately develop the ability to carry out dangerous actions of its own.

According to Geoffrey Hinton, a computer scientist often referred to as the “godfather of AI”, this was a turning point in the evolution of AI. In a 2023 interview, he said that, until recently, he had believed it would take 20 to 50 years before humanity witnessed general-purpose AI, but that moment has already arrived. Can AI exterminate humanity? All Mr. Hinton would say was, “It’s not inconceivable.”

From left to right: Russ Salakhutdinov, Richard S. Sutton, Geoffrey Hinton, Yoshua Bengio and Steve Jurvetson in 2016. Photo credit: Wikipedia.org

Is it really necessary to pause AI training?

Critics have declared that claims regarding the technology’s current potential have been greatly exaggerated and that the letter’s signatories were pushing “AI hype”.

According to Umeå University Assistant Professor and AI expert Johanna Björklund, the purpose of the letter is to make people anxious. She added that there is no need to put the brakes on; instead, she suggested that AI researchers should become more transparent about the research they carry out.

The need for a break will depend on how AI models are developed in the future. Some believe that the letter’s tone is rather dramatic and that, despite its claims, there is no danger of losing control of our entire society to AI. The use of AI technologies does, however, present some risks.

At the same time, the letter’s proposal to create a set of standard safety procedures, thoroughly audited and monitored by independent outside experts, seems worth considering. This is mainly because halting development outright could cause AI to veer off course and possibly become an instrument for illegal and dangerous activities.

It is doubtful that the letter will have an impact on the current pace of AI development given that tech giants such as Google and Microsoft are investing billions of dollars in developing AI and incorporating it into their products.