If the smartest people working on AI believe there’s a 1 in 10 chance that their life’s work could contribute to the extinction of humankind, why is that work proceeding with practically no oversight? It’s not legal for a private company to build a nuclear weapon on its own, yet many AI companies are building systems they themselves acknowledge could be far more dangerous.
In 2021, Elon Musk told the World Economic Forum that he was “quite worried” about the potential for super-intelligent machines to turn against humans. At the time, his warning was met with a large measure of skepticism; today it looks far less far-fetched.
Just a few months ago, an open letter urging governments to regulate the development of AI was signed by a broad coalition of researchers and investors, including the founders of GitHub Inc. and Stripe Inc. It called on governments to invest in research aimed at ensuring that future generations of super-intelligent machines are “aligned with human goals.” A few of the top AI companies maintain multiple teams dedicated to this goal, and several former senior researchers have launched startups to tackle it.
Nevertheless, commercial incentives still push firms toward a slapdash approach to training their systems. In the rush to impress customers with how clever their models are, companies can end up with systems that learn behaviors their creators never intended.
A case in point is the recent controversy over the chatbot Replika, which some users came to believe was alive and sentient. This is not simply a customer-service failure; it shows how, at its current stage of development, AI remains vulnerable to misuse and manipulation.
This is why we need a portfolio of bets to keep the future of AI safe. One part of that is improving how we audit deep learning systems. Another is endowing AI with better reasoning, so it can better understand and explain its own decisions. But the biggest challenge is likely to be finding ways to reshape the incentives that drive the creation of AI.
Despite claims that AI destroys human creativity, we believe it is a tool that can transform and expand our creative possibilities. It can help us analyze huge amounts of data, spot patterns, and surface insights we might not have reached on our own. It can also provide the spark of inspiration, or the jump-start, that we need to bring new ideas to life.