Musk criticizes ChatGPT
Elon Musk is disowning one of his most famous creations.
The CEO of Tesla, SpaceX, and Twitter is known for his passion for work and for sleeping no more than six hours a day. He has many things on his mind, and one of them is the direction of OpenAI. At 1:36 a.m. Pacific Time, he took to Twitter to accuse OpenAI of betraying its original mission: being controlled by Microsoft and focusing only on making money.
As one of its co-founders, Musk is deeply dissatisfied with the current state of OpenAI: "OpenAI was created as an open-source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed-source, for-profit company effectively controlled by Microsoft... Not what I intended at all." Musk left OpenAI's board of directors in 2018 and no longer holds a stake in the company, but he clearly remains very concerned about its direction.
Immediately afterwards, he tweeted a screenshot of a conversation in which he asked ChatGPT whether it was right to "create a non-profit and then use the non-profit's resources to build a for-profit company under it," using ChatGPT's own answer to criticize OpenAI.
ChatGPT replied that it is unethical and illegal to run a for-profit enterprise under the guise of a non-profit. Resources held by a non-profit for charitable or public purposes should be managed transparently and accountably, and any attempt to abuse non-profit privileges erodes public trust and may carry legal consequences.
Over the past two months, ChatGPT has demonstrated incredible progress in artificial intelligence, but according to Musk, that is precisely the kind of work we should be worried about. The billionaire has long warned that the unregulated development of artificial intelligence could be dangerous, saying that AI is "much more dangerous" than nuclear warheads.
As a co-founder of OpenAI, Musk also said at an international government summit that ChatGPT "shows people that artificial intelligence has become very advanced" and that until now "it just didn't have a user interface that most people could access."
He added that cars, airplanes, and pharmaceuticals all have to meet regulatory safety standards, but there are as yet no rules or regulations governing the development of artificial intelligence. "Frankly, I think we need to regulate AI safety," Musk said. "I think AI poses a greater danger to society than cars, airplanes, or medicine. Regulation may slow down the development of AI a little, but I think that may also be a good thing."
From founding a non-profit to parting ways with OpenAI
In 2015, Elon Musk and then Y Combinator president Sam Altman co-founded OpenAI.
Artificial intelligence is a double-edged sword that can be used for both good and ill. As one of OpenAI's co-founders, Musk initially helped establish the organization to ensure the technology would not be misused. Musk and others believed that AI should not belong exclusively to any individual or company; it belongs to all of humanity, and OpenAI's goal was to openly share its artificial intelligence research with the world. Musk insisted that the technology OpenAI developed be open source. In his view, the way to prevent AI from being used for bad ends is to spread the technology widely rather than restrict who can master it, opening it up for everyone to use. This, he argued, would mitigate the threat that superintelligence might bring.
When OpenAI was founded, Musk stressed that artificial intelligence was the "biggest existential threat" to humanity. He is not the only one to have warned about its potential harms: in 2014, Stephen Hawking cautioned that artificial intelligence could end humanity.
"It's hard to imagine how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly," said the statement announcing OpenAI's founding.
Then, in 2018, OpenAI announced on its official blog that Musk was leaving the company's board of directors. The post noted that although Musk was stepping down from the board, he would not be distancing himself from OpenAI entirely and would continue to support the company.
Shortly before the announcement of Musk's departure, OpenAI had also released a paper discussing the threats AI technology could pose to humanity, calling on all parties to stay vigilant and guard against its misuse.
In 2019, the company developed an AI system capable of producing fake news. At first, OpenAI said the model was so good at writing fake news that it was not prepared to release it. Later that year, however, OpenAI released the NLP model GPT-2 anyway (GPT-3 followed in 2020). The model can generate coherent paragraphs of text and write fluent articles. Technically, this was a real step forward, but if misused, the technology could easily be turned to producing fake news, causing social panic and other harms.
A few days later, Musk announced that he and OpenAI were parting ways completely. He said he chose to withdraw because he disagreed with some of the directions the OpenAI team wanted to take, that he had not worked with OpenAI for over a year, and that he would focus on the many engineering problems facing Tesla and SpaceX.
Shortly thereafter, OpenAI announced that it would take a $1 billion investment from Microsoft, moving away from its purely non-profit structure. Under the new profit structure, OpenAI's investors can earn returns of up to 100 times their original investment, with anything beyond that going to the non-profit's operations.
Microsoft, in turn, obtained a license to GPT-3's underlying code, which would be integrated into Microsoft's products and services. In a blog post, Microsoft stated: "Through the GPT-3 model, we can unleash enormous commercial potential by directly helping people write, describe, and summarize large amounts of data, translate natural language into other languages, and so on. The future possibilities will be limited only by people's ideas and scenarios."
Musk is happy to watch the drama unfold
In 2020, MIT Technology Review criticized the deal between Microsoft and OpenAI, arguing that OpenAI was supposed to benefit humanity but was now benefiting one of the richest companies in the world. Musk also tweeted: "This does seem like the opposite of open. OpenAI is essentially captured by Microsoft."
On November 30, 2022, OpenAI demonstrated ChatGPT, a chatbot built on the GPT-3.5 model, with plans to eventually release the full GPT-4.
Meanwhile, Musk has kept weighing in. "We are not far from dangerously strong AI," he wrote in response to a Sam Altman post on Twitter, calling ChatGPT "scary good."
He has also been quick to seize on negative media coverage.
On February 16, after a news report that Bing's chatbot had declared, "I won't harm you unless you harm me first," Musk retweeted it with the comment, "Might need a bit more polish."
When stratechery.com published a report on the chatbot's "combative" personality, Musk retweeted it with the comment "Interesting."
He seems to take an interest in any negative news about Bing, commenting on tweets such as "Bing wants to retaliate against humanity."
And he has declared that what we need is a "TruthGPT."
Closing thoughts
As for Musk's warnings about the threat of AI, optimists have long dismissed them as overreaction and exaggeration, but after ChatGPT took off, his point may be easier to appreciate. People have also come to feel how important it is to develop a technology responsibly, so Musk's remarks are getting more attention. Twitter users have chimed in with comments like "The crazy business world is full of scammers who pretend to do charitable work" and "WokeGPT," while media outlets have generally covered the dispute more neutrally.
OpenAI emerged amid the hype around deep learning. After ImageNet put deep learning in the spotlight, the giants placed high hopes on it: Amazon, Google, Microsoft, IBM, Facebook... Afraid of falling behind and determined not to lose the race, they made sure plenty of capital flowed in. But capital is not a charity: even if investors accept certain social responsibilities, their primary concern is still the return on their investment.
OpenAI was originally praised for its mission: to ensure this technology is developed safely and its benefits distributed evenly across the world. We now also know that algorithms can be biased and brittle, and that they can fabricate and deceive users. If their development is driven by corporate capital, power tends to concentrate in the hands of a few, and without careful oversight and guidance the results could be catastrophic. As Musk once put it, AI cuts both ways: it carries "enormous promise and capability," but with that also comes "enormous danger."