As ChatGPT rolled out, many questions arose, primarily from those who viewed it as a threat.
Firstly, artists of all disciplines feared that the new technology had the potential to devalue their creations and replace creativity. They were right to be worried, as AI quickly demonstrated its ability to create, blurring the line between AI-generated and human art.
Then came the educators. They worried about AI's ability to write in seconds, potentially increasing academic dishonesty. Students could now generate essays, solve lengthy problems, and even mimic their professors' writing styles with a few clicks.
Still, even with all these anxieties, AI panic has gradually faded as the technology became normalized in our daily lives. So why aren't we more worried about its capabilities now?
AI has been part of your, yes, your life for much longer than you are likely aware. From spell checkers and the advent of Deep Blue to the video generators of today, AI has been continuously advancing in the background. And progress in AI follows a simple recipe: more data and more power. But what does that even mean?
Let’s break this down.
When I say “more data”, I'm talking about your data specifically. Those likes, shares, and reposts on your feed, your purchases, and yes, even your banking information all feed the gargantuan machine of information driving AI development. Data brokers are ruthless in collecting as much of this information as possible to sell, often without your notice, and that has serious consequences for your personal privacy and security.
AI computation is energy-intensive, demanding vast amounts of electricity. The more it grows, the harder companies search for new ways to power it. Now they've turned to nuclear power for a solution: some companies are, even now, investing in and restarting nuclear plants.
Nuclear energy has its advantages: it is a low-carbon alternative to fossil fuels, and it tends to create high-salary jobs. But it also comes with serious risks we should consider. Even before we get to the issue of nuclear waste, there are risks of meltdowns, security threats, and critical long-term harm to the environment. AI's appetite continues to grow exponentially, and in their hunt to fuel it, companies will keep supporting this dangerous path. We should take a closer look and weigh these risks carefully.
Back to your data. People know that their data is being collected, but why is AI-driven collection so bothersome? Well, look at the privacy policy page of our favorite AI, ChatGPT:
“We collect personal data relating to you (“Personal Data”) as follows:
Account information; … we will collect information associated with your account, including your name, contact information, account credentials, date of birth, payment information, and transaction history…
Device Information: … We collect information about the device you use to access the Services, such as the name of the device, operating system, device identifiers, and browser you are using…
Location Information: … We may determine the general area from which your device accesses our Services based on information like its IP address for security reasons and to make your product experience better…”
Personally, I'm not a fan, although this isn't anything new. We have long accepted that companies in the technology sector collect, track, and utilize our data. What's different with AI is that the models don't simply collect and store the data; they quite literally learn from it, adapting to your patterns and making uncomfortably accurate predictions about your behavior. This calls our individuality, security, and basic freedoms into question. If you care about any of these rights, AI development should be front and center on your radar.
Then there are the concerns generative AI brings. We've seen what it can do. Several times last year, deepfake technology was used to create realistic but false videos of well-known figures: celebrities were placed in odd and compromising situations, politicians were made to say things they never said, and even ordinary people's likenesses were used without consent. Trust increasingly becomes the issue as these tools grow more advanced and accessible. Misinformation spreads further and further, blurring the line between reality and fiction. This future is now. We're in an era where seeing is not believing, and so trust in the media, in government, and even in your personal relationships erodes.
When I was growing up, like many others, technology shaped the way I saw the world. AI is doing the same today, but on an even larger, more profound scale. The question is: are you paying enough attention?