Elon Musk Has Issued A Stark Warning Over AI. This Isn’t His First Time.
- Elon Musk, an outspoken AI commentator, has reiterated his calls for safety checks at the World Government Summit in Dubai
- Musk co-founded OpenAI to promote safe AI development, but says the company has changed since Microsoft’s investment
- Microsoft and Google are vying to best one another in the field, which Musk worries drives down safety checks in the pursuit of winning the race
Thanks to the success of ChatGPT, 2023 kicked off with intense hype around the power of artificial intelligence. It’s no wonder Elon Musk, one of tech’s most outspoken public figures, had something to say on the subject.
But Elon has always been clear on his views, reiterating this week: AI safety is paramount, and without it, we’re toast.
It might surprise you that one of the world’s richest people, with countless innovations for humanity under his belt, is skeptical about AI. But Elon Musk has a long history of decrying the lack of regulations in place to keep AI’s development in check.
Let’s get into his latest comments and the context behind the ‘it’s complicated’ status between Elon and AI.
Our AI algorithm constantly looks at how to bring the best value to your portfolio. The Emerging Tech Kit invests in a wide range of tech ETFs, stocks and cryptocurrencies for a diversified investment into some of the cutting-edge tech of tomorrow.
Download Q.ai today for access to AI-powered investment strategies.
What are Elon Musk’s latest comments?
EV innovator turned social media mogul Elon Musk was the keynote speaker at this year’s World Government Summit in Dubai, which took place this week. He took the time to share his thoughts on a new Twitter CEO, aliens (!) and the topic on everyone’s lips: AI.
When the topic turned to ChatGPT, Musk’s views appeared to contrast with his usual enthusiasm for technology, given this is someone trying to establish humanity on Mars by 2050. “One of the biggest risks to the future of civilization is AI,” he warned.
When asked about what technology he could see developing ten years from now, he chose to focus on the immediate risk in his eyes. “AI has been advanced for a while; it just didn’t have a user interface that was accessible for people,” Musk continued.
He also called for AI safety protocols to be developed sooner rather than later, pointing to medical and car seatbelt regulations as examples of how other technologies that pose risks to humans are kept in check.
What is OpenAI?
In 2023, everyone knows OpenAI’s name. If they didn’t hear about the company last year with the unveiling of DALL-E, an AI that generates weird and wonderful artwork, they couldn’t miss the tidal wave of headlines around the text chatbot ChatGPT.
The two have been a roaring success. When OpenAI removed the waitlist for DALL-E in September, it cited 1.5m users generating over two million images daily. ChatGPT has shattered sign-up records, taking less than a week to hit 1m users and reaching 100m in just three months.
OpenAI has a longstanding relationship with Microsoft, which originally invested $1bn in the business in 2019. This has since escalated to a $10bn multi-year partnership announced at the start of the year, with OpenAI technology integrated into Microsoft’s Bing search engine and Edge browser.
But what people may not know is that Elon Musk is one of the company’s founding members.
The rocky relationship between Elon Musk and OpenAI
Elon, along with PayPal co-founder Peter Thiel and other investors, founded OpenAI in 2015 as a challenger to Google. “I was concerned that Google was not paying enough attention to AI safety,” Musk said at the conference.
Elon was a donor and board member of OpenAI until 2018, when he stepped away. The reason given at the time was that Tesla’s growing work in AI created a conflict of interest for Musk.
It was unclear what relationship Elon still had with OpenAI until this week’s conference. “Initially, it was created as an open-source nonprofit. Now it is closed-source and for-profit. I don’t have any stake in OpenAI anymore, nor am I on the board, nor do I control it in any way.”
So, the relationship has soured (or so it seems), but his impact on the company remains.
OpenAI’s ethics charter
OpenAI was originally created as a non-profit to promote AI safety, evident in the company’s charter. It’s dated April 9th 2018, so it’s plausible to suggest Elon had at least some advisory hand in its creation.
What’s interesting is the section on long-term safety, which reads: “We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.
“Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.”
We’d like to know what Microsoft makes of that after investing its $10bn.
In comparison, Google is notoriously tight-lipped about its AI development. Its rival to ChatGPT, called Bard, has so far fallen short of lofty expectations after a lackluster presentation in Paris.
It didn’t stop there – a factual error about the James Webb telescope in Bard’s promotional material wiped $100bn in value off Google’s stock price. Keen to avoid another market disaster, CEO Sundar Pichai has reportedly asked every Google employee to spend hours of their workday testing the tech before its release.
All of this gives the impression Google is panicked about losing its prized search engine share, forcing development before it’s ready – and that’s exactly the safety issue Elon Musk is talking about.
Does Elon have a point?
There have been a lot of grandiose claims about AI lately, and grandiose has always been Elon’s style on this topic. “AI is potentially more dangerous than nukes” is one of his more memorable quotes from the past.
But, like it or not, Elon’s words carry weight. He has 160m Twitter followers, a reach that has helped send crypto prices to the moon and tank his own company’s stock. His narrative on AI will help shape the public perception of the technology.
This isn’t a bad thing. It’s important to note Musk isn’t against AI itself, but against the lack of regulation and human checkpoints that could come with the race to dominate this fledgling industry.
That’s why he helped found OpenAI in the first place – because AI needs checks and balances from humans. Now, it’s up to the lawmakers to pay attention.
The bottom line
ChatGPT is just a small part of the puzzle – and potential – when it comes to AI. Q.ai’s machine learning technology has parameters provided by human analysts to ensure our Kits give your portfolio the best of both worlds.
Our Emerging Tech Kit is a great way to dip your toe into investing in future technologies being developed now – like AI. Pretty meta, we know.
The mix of stocks, crypto and ETFs is regularly assessed by our AI to bring you the best returns. Worried about protecting your gains? Just use our Portfolio Protection.
Download Q.ai today for access to AI-powered investment strategies.