Photo by Amanda Dalbjörn on Unsplash
Just within the past month or so, I’ve noticed a tremendous surge in articles about AI, GPT-3.5, AskAI, and the like.
Not just an increase, but also a cry for help from creators, whether writers or artists, sculptors or painters. Unfounded fear, or a strange phenomenon that will eventually fade away?
One thing is true: it’s an impossible-to-define conundrum we have never faced before.
Writers, to name one circle that treads these waters incessantly, are among the most inquisitive and preoccupied.
The wave, although somewhat anticipated, came fast and furious. If you trace its sudden evolution, it almost seems to have happened overnight.
From Silicon Valley’s cafeterias to the pubs of Key West, that is the conversation to have: the advent of AI and its consequences, or its benefits.
Nobody agrees on the matter. Is that reasonable? After all, this is the first time in the history of humanity that such an incredible development, ChatGPT above all, has been accessible to everyone.
Alyssa Stringer from TechCrunch writes:
“Hackers are using ChatGPT lures to spread malware on Facebook.”
Meta said in a report on May 3 that malware posing as ChatGPT was on the rise across its platforms. The company said that since March 2023, its security teams have uncovered 10 malware families using ChatGPT (and similar themes) to deliver malicious software to users’ devices.
“In one case, we’ve seen threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-based tools,” said Meta security engineers Duc H. Nguyen and Ryan Victory in a blog post. “They would then promote these malicious extensions on social media and through sponsored search results to trick people into downloading malware.”
Some people are concerned that AI will take over their lives and that there’ll be nothing anybody can do about it. Worse, they feel impotent to stop it. And don’t let them read any article about the lack of safety precautions, or about accidental mishaps where the bots did the opposite of what they were supposed to do and people got hurt in the process. They’ll run around their neighborhoods with loudspeakers, spreading the news for as long as they can. Or until their wives call them in for supper.
Are they way off in their fatalistic beliefs and behavior?
Could we call those tragic events where human beings have gotten hurt — sometimes by their own AI vehicles — rare or sporadic incidents? Is that acceptable, and if so, where does it stop?
Is it a legitimate reason for concern?
On the other hand, we can see the benefits for ourselves. They’re innumerable. In the medical industry alone, the advantages of scientific advances combined with the speed with which AI delivers them are nothing short of extraordinary.
The Economist writes:
“It can take a little imagination to see how some innovations might change an economy. Not so with the latest ai tools. It is easy — from a writer’s perspective, uncomfortably so — to think of contexts in which something like Chatgpt, a clever chatbot that has taken the web by storm since its release in November, could either dramatically boost a human worker’s productivity or replace them outright. The GPT in its name stands for “generative pre-trained transformer”, which is a particular kind of language model. It might well stand for general-purpose technology: an earth-shaking sort of innovation that stands to boost productivity across a wide range of industries and occupations, in the manner of steam engines, electricity, and computing. The economic revolutions powered by those earlier GPTs can give us some idea how powerful ai might transform economies in the years ahead.”
The main question is, irrefutably, the key one: can they be trusted?
How much license should be given to them to make prudent decisions?
Could they be programmed to sensibly shut down at the first sign of trouble?
Curtailing their learn-as-they-go prowess would also blunt their aptitude, which would obviously defeat the purpose of their existence in the first place.
Doomed if we do, doomed if we don’t?
Maybe not. The CEO of OpenAI is telling legislators that the program “should” be regulated. Can we sleep better now?
This is the beginning of the beginning of a new era. The future of the human race is at stake. Let’s just hope the ones in charge are aware of that responsibility.