The Battle Between AI Profits and Ethics

By F2 Team
January 21, 2024
Perspectives
5 min read

When generative AI entered the mainstream in late 2022 and ChatGPT became the fastest-growing consumer application in history, no one could have predicted that less than a year later OpenAI's board would attempt to remove its CEO, Sam Altman.

The move, which shocked the tech world, reflected the widening tension between AI safety and the technology's seemingly limitless commercial potential. OpenAI had aimed to combine the advantages of for-profit and nonprofit models in order to ensure safe AI. Instead, the company found itself grappling with the conflicting interests of AI ethics and profits on the world stage.

Under pressure from Microsoft, the board eventually reinstated Altman as CEO. However, several questions remain: What dangers led OpenAI's board members to such a critical juncture? Is the fear of AI significant enough to warrant slowing the pace of technological innovation?

To better understand rising ethical concerns about the future of AI, we looked at the timeline of these sentiments.

AI Hallucinations and Misinformation

"Hallucinate," which was chosen as Dictionary.com's word of the year in 2023, is a verb used in the context of AI that means "to produce false information contrary to the intent of the user and present it as if true and factual." One CNN reporter simply defined the verb as when "chatbots and other AI tools confidently make stuff up.”

The word was chosen after digital media publications used “hallucinate” 85 percent more frequently in 2023 compared to 2022. Dictionary.com recorded a 46 percent uptick in lookups for the word in 2023.


The fear of AI hallucinations and misinformation has become so pervasive that the World Economic Forum's 2024 Global Risks Report, released ahead of its annual meeting in Davos, named AI-powered misinformation as the world's biggest short-term threat. The report noted that false and misleading information supercharged by AI could erode democracy and polarize society.

Back in June 2023, OpenAI addressed concerns about the spread of misinformation caused by AI hallucinations, and its potential negative impact, in a research report. OpenAI stated that it may have discovered a method to prevent models from generating hallucinations in the future. According to OpenAI, the solution lies in a training method called “process supervision,” which provides feedback on each individual step of a model's reasoning, as opposed to “outcome supervision,” which provides feedback only on the end result.
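
To make the distinction concrete, here is a minimal, illustrative sketch in Python. The toy solution steps, step labels, and scoring functions below are invented for this example and are not OpenAI's actual training code; they simply show how outcome supervision scores only the final answer, while process supervision rewards each intermediate step.

```python
# Illustrative sketch only: a toy contrast between "outcome supervision" and
# "process supervision" as described above. The example problem, step labels,
# and scoring functions are invented for illustration; this is not OpenAI's code.

def outcome_supervision_score(final_answer, correct_answer):
    """Feedback depends only on whether the end result is correct."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_supervision_score(step_labels):
    """Feedback is given for each individual reasoning step.

    step_labels[i] is 1 if step i was judged correct, else 0
    (in practice such judgments come from human or model feedback).
    """
    return sum(step_labels) / len(step_labels)

# Toy example: a three-step solution where only the last step goes wrong.
steps = ["17 + 25 = 42", "42 * 2 = 84", "84 - 10 = 73"]  # last step should be 74

print(outcome_supervision_score(final_answer=73, correct_answer=74))  # 0.0
print(process_supervision_score(step_labels=[1, 1, 0]))               # ~0.67, and we know which step failed
```

In principle, this step-by-step credit assignment makes it easier to catch the exact point where a chain of reasoning goes wrong, rather than simply penalizing a wrong final answer.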

"Mitigating hallucinations is a critical step towards building aligned AGI," OpenAI said in the report.

AGI, short for Artificial General Intelligence, is defined by OpenAI as highly autonomous systems that outperform humans at most economically valuable work.

Understanding AGI and its Dangers

Shortly after the OpenAI board's attempt to fire Altman, reports began to emerge that a significant AGI development, part of an internal OpenAI project called Q* (pronounced Q-Star), was the key driver behind the board's dramatic decision.

This development led the board to believe that OpenAI was dangerously commercializing technological advances that could have far-reaching consequences for humanity.

According to reports from an anonymous source at OpenAI, a new model belonging to Project Q* had successfully solved certain mathematical problems. While the model was limited to solving math problems at a grade-school level, the researchers were optimistic about the future success of Q*, the source revealed.

Researchers consider mathematics a frontier of generative AI development. Currently, generative AI excels at tasks such as writing and language translation by statistically predicting the next word, which means answers to the same question can vary significantly. Mastering mathematical reasoning, where there is only one correct answer, would imply that AI possesses stronger reasoning capabilities resembling human intelligence, and AI researchers believe that capability could be applied to novel scientific research. Unlike a calculator, which can solve only a limited set of operations, AGI would be able to generalize, learn, and comprehend.
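
A minimal sketch of why this matters, using an invented three-word vocabulary and made-up probabilities: sampling the next word from a predicted probability distribution is inherently stochastic, whereas an arithmetic claim can be checked against a single correct answer.

```python
# Illustrative sketch only: the vocabulary and probabilities below are made up.
import random

# A language model assigns probabilities to candidate next words...
next_word_probs = {"Paris": 0.6, "London": 0.25, "Rome": 0.15}

def sample_next_word(probs):
    """Pick the next word in proportion to its predicted probability."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

# ...so two runs on the same prompt can disagree.
print(sample_next_word(next_word_probs))
print(sample_next_word(next_word_probs))

# A math problem, by contrast, has exactly one right answer to check against.
print(17 * 24 == 408)  # True, every time
```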

In their letter to the board, researchers flagged AI's prowess and potential danger, sources said, without specifying the exact safety concerns. However, computer scientists have long discussed the danger posed by highly intelligent machines, for instance, whether such machines might decide that destroying humanity was in their interest.

While this may sound like the theme of a science fiction novel, these fears lie at the very center of the debate between AI ethics and profits.

Leading Thoughts on the Danger of AI to Humanity

Following the Altman debacle, prominent figures in AI began weighing in on the potential dangers of the emerging technology.

Elon Musk said he wanted to know why the OpenAI Co-Founder and Chief Scientist Ilya Sutskever "felt so strongly as to fight Sam," adding: "That sounds like a serious thing. I don't think it was trivial. And I'm quite concerned that there's some dangerous element of AI that they've discovered."

In March 2023, after OpenAI faced criticism for releasing GPT-3 and GPT-4 as closed models, Sutskever defended the decision:

“These models are very potent and they’re becoming more and more potent. At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models. And as the capabilities get higher it makes sense that you don’t want to disclose them.”

When asked why OpenAI changed its approach to sharing its research, Sutskever was quoted as saying, “We were wrong. Flat out, we were wrong. If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea... I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”

Other leading names in the field of AI, however, say the fears surrounding AGI are being blown out of proportion.

“AI is the new paradigm to build all technology,” said Clem Delangue, the founder and CEO of the open-source AI startup Hugging Face. “It’s not more; it’s not less. It’s not a new human form. It’s not Skynet or a super sentient being. But it is something massive. It’s bigger than the internet, and it’s bigger than traditional software. It’s going to create new capabilities for technology.”

In December 2023, after the OpenAI drama, Sequoia published a piece by AI researcher Dan Roberts exploring what physics has to say about AI risk. In it, Roberts concluded that “there is no such thing as unbounded computation.”

“I think we should focus on the costs and benefits of AI as we would other revolutionary technologies, rather than treating it as something mythical,” Roberts said. “Demoting AI from god to tech, from religion to science, still leaves us the ability to accelerate both technology and science—right up to the limits imposed by the laws of physics,” he concluded.

Curbing the Fear of AI

The fear of AGI is not unfounded, but it may have been exaggerated. A closer examination of the events leading up to Altman's sudden firing reveals that there was another factor at play.

In October 2023, Helen Toner, then an OpenAI board member and the Director of Strategy and Foundational Research Grants at Georgetown University's Center for Security and Emerging Technology, co-authored a paper on AI safety. In the paper, she criticized OpenAI for creating a “sense of urgency inside major tech companies” with the launch of ChatGPT, while praising Anthropic, an OpenAI competitor, for avoiding excessive AI hype.

Toner's paper sparked a conflict between her and Altman that dragged in other board members and ultimately led to Altman's very public dismissal.

Although OpenAI appears to have resolved the issue, concerns about AI have remained prominent in public consciousness.

In July 2023, OpenAI announced it would allocate 20% of its computing resources to a team researching AI control. In October 2023, President Biden issued an Executive Order on safe and secure AI, requiring developers to share safety test results with the US government. And in December 2023, EU lawmakers reached a provisional agreement on the AI Act, which aims to regulate the development and use of AI in Europe.

Historically, every new technology that matters, from electric lighting to automobiles to radio to the Internet, has sparked public fears. These initial concerns often diminish as the benefits of the technology become more apparent.

We saw this pattern play out with the widespread adoption of the Internet. In 1998, Nobel Prize-winning economist Paul Krugman famously predicted that “it will become clear that the Internet's impact on the economy has been no greater than the fax machine's.” Lo and behold, the Internet completely transformed the way we live and work. It revolutionized countless industries and gave rise to entirely new sectors, such as cybersecurity, that emerged to address the risks of storing valuable information online.

Software ate the world, and now AI is eating software. This is inevitable. Rather than fear it, let's spend our energy building a new set of guardrails to responsibly harness its endless potential.
