Artificial intelligence, like it or not, is here to stay. Some may think that AI will become our reality’s version of the Marvel Universe’s “Ultron,” while others see it as a tool that, if used properly, can save humanity.
One thing is for sure: the vast majority of people do not understand how dangerous unregulated AI can be.
Elon Musk, the owner of X, formerly known as Twitter, has long been an advocate for the regulation and oversight of AI. Musk co-founded OpenAI and believes that AI is incredibly dangerous if left unregulated.
In an interview with Tucker Carlson on Fox News in April 2023, Musk said, “Regulations are really only put into effect after something terrible has happened. If that’s the case for AI and we only put in regulations after something terrible has happened, it may be too late to actually put the regulations in place, the AI may actually be in control at that point.”
When Musk was more involved with OpenAI, he was in close contact with Larry Page, a co-founder of Google. Page and Musk often discussed the future of AI, but the two men had very different visions. Page wanted to develop AI into an “artificial superintelligence,” while Musk made it clear to Page that he believed AI safety was of paramount importance because he wanted humanity to be protected. Musk claimed that Page then called him a “specist.”
This sort of dialogue between two people who are arguably among the most influential minds behind the development of AI is frightening and should open people’s eyes to the dangers of AI.
AI is a tool that, if used carefully and properly, has the potential to improve the quality of life for an incalculable number of people. AI could be used to help doctors diagnose diseases more accurately, to discover cures for deadly illnesses or even to help individuals in ways that are not yet known.
However, I believe the negatives of AI, and of humanity’s eventual over-dependence on it, outweigh the positives.
Over the course of the last year, AI deep fakes have become increasingly common all over the internet, ranging from videos of an AI SpongeBob singing some of the most well-known songs to more frightening content, such as an AI President Joe Biden announcing that the United States is going to war with Russia.
AI deep fakes have been a major issue for years. Back in 2018, Bloomberg News sounded the alarm with a video titled “It’s Getting Harder to Spot a Deep Fake Video,” in which the Bloomberg team explains how deep fakes work and why they are becoming harder to detect as the technology advances.
This is clearly dangerous and will do more harm than good. In a world where we have more technology at our fingertips than at any point in history, society seems less able to think independently than ever before. Misinformation of every type plagues the internet, and AI will do nothing to mitigate the problem. To protect society, we must demand oversight and regulation of AI.
In addition to oversight of AI, we must demand that AI not be allowed to fill human jobs. While I am generally against the government imposing restrictions on businesses, this issue is too big for the government not to act.
For example, consider truck drivers. According to census.gov, only 7% of all truck drivers have a bachelor’s degree. With autonomous driving becoming more and more advanced, it is entirely possible that truck drivers could lose their jobs once the technology saves companies a significant amount of money. If truck drivers lost their jobs to automation, more than 3.5 million Americans would be put out of work, most of them without a bachelor’s degree to fall back on.
Now, apply that same principle of cutting costs through automation to every industry in the U.S. Anyone from a restaurant server to a computer technician could lose their job, which could lead to societal collapse.
One of my favorite movies of all time is Steven Spielberg’s “Jurassic Park.” There are many messages to take away from such a classic film, but the main one I took from it is that no one should prioritize the desire to advance technology so much that it puts humanity at risk.
Throughout the film, John Hammond justifies his resurrection of the dinosaurs by explaining that he believes humanity will benefit from it, not only because of the experience that they will have at the park, but because the scientific methods that are used to resurrect the dinosaurs could be applied elsewhere. One of the film’s protagonists, Dr. Ian Malcolm, is very vocal about his disapproval of Hammond’s desire to play God, and says to Hammond, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
I believe this is the case with AI as well: many of the people who helped create it did not know whether it would actually work, and they did not stop to debate whether it was a good idea to create a technology with the potential to overpower humans. As a result, society is now in a difficult spot, because the people who created AI want it to exist without restrictions.
I believe that artificial intelligence has incredible potential to vastly improve the quality of life for humans all over the world, but if it remains unregulated, society as we know it will cease to exist. To protect the world we know, we must be proactive about AI oversight. As Musk noted, oversight and regulations are usually created only in response to a catastrophic event. We must not let that be the case with AI, and we must learn from the mistakes of the past to prevent future catastrophes.
Follow Harry on X @harrymurphy1776