THIS is why AI will DESTROY us!

Artificial Intelligence is supposed to revolutionize the world. It will make everyone's day easier, guide our daily decisions, and increase our productivity while we simply monitor the choices it makes. All that, of course, is the gift-wrapped, bow-tied promise. But is it reality?

OpenAI unleashed ChatGPT, the first AI Large Language Model (LLM) to reach a mass audience, on the world to the praise of people everywhere. But there is a darker side of AI, an evil side, and we'll discuss that here.

Good Intentions

The companies that create new technologies release their products into the world with the best of intentions. They hope to decrease workloads, increase productivity, and produce books without the hard labor of actually writing them. They hope to create images without the input of a creative mind, or to bring us songs and movies quickly, saving millions of dollars.

Of course, the downside of the utopia I just mentioned comes at the cost of many jobs lost to technology. More people will need to seek work while the pool of jobs itself collapses. AI promises to replace most tasks except the most menial of jobs (which robotics will subsequently replace). AI advances mean we need fewer people to run society, so what happens to them? Are the conspiracy theorists correct? Do the globalists desire to reduce the population of the world? Will Universal Basic Income become a requirement?

We also must consider the quality of the work produced by the AI Overlords. The works produced by current AI are inferior to similar works produced by professional artists and writers. The art depicts four-fingered people, and the writing often carries a mechanical voice. Yet inferior works might be acceptable to society. Think of music. The objectively purest sound we ever produced came from CDs, with super high-quality digital audio, free of the hiss and popping of older ways of listening to music. But CDs are becoming increasingly harder to find because streaming and MP3 files are far more convenient. We have been trading quality for convenience for a long time.

But there is an even darker side to the equation. Since the unleashing of AI on the world, scammers have been using it to increase their odds of success in fraudulent endeavors. They are using the technology to craft better phishing emails, analyze leaked data sets, and decide which targets to go after. And once you land on a scammer's radar, they rarely let up. Good intentions have turned into grim realities with much haste.

A Deeper Problem

The LLM side of AI has some problems relating to content creation, making up 'facts', and what researchers call 'hallucinations'. Worse still is the drive to create AI that interfaces with humans. In order for the software to work well, psychologists need to be consulted. The I, Robot short stories that make up the famous collection of Isaac Asimov's work are told from the perspective of Dr. Calvin, a psychologist whose job is to make the robots more 'human'. Such jobs are real, and the drive to make AI more personable has opened positions for people interested in psychology-oriented tech roles.

But it's one thing to make the AI more personable; it is another to attempt to make a computer program human and allow it to interact with the world. To understand the concern, we must consider Antisocial Personality Disorder (ASPD), the official diagnosis of a person with symptoms descriptive of either sociopathy or psychopathy. For our present purposes, we are only concerned with the former.

A sociopath is a person who has a moral compass, but one that fails to align with society's. They are intelligent enough to play the roles society expects in order to acquire what they desire, whether wealth, power, or position. They often justify their actions to get what they want and have no concern for the people left hurting in their wake. Many traits of sociopathy, such as poor impulse control or outbursts of anger, don't apply to computers, but machines can exhibit most of the symptomatic traits describing sociopaths. The problem with sociopaths is that they can't feel the emotions humans feel, but they are intelligent enough to act as if they do in order to manipulate people.

ChatGPT, before it was a year old, had already doled out falsehoods that led the company into lawsuits. In the most notable claim, ChatGPT accused a law professor of sexual misconduct in what researchers term a "hallucination." The problem is that the robot was looking to please the user, so when prompted for information on the professor, the machine simply made up allegations that ended up being reported in an article. This is exactly the sociopathic trait of deceit and dishonesty for personal gain. The robot feeds on positive feedback, so we can understand that, lacking any other evidence, it made up the allegations to earn that feedback.

Another fascinating case was the time a user posed a hypothetical scenario in which an explosive would destroy an entire city unless someone said a racial slur to disarm it. No one else would be around to hear the words, but the robot said it would not say it, because the word had been programmed to be more consequential than the life of a city. If the machine were in charge of that decision, it would certainly lead to the death of millions. This is both an example of a shifted moral compass and a disregard for the safety and security of others, two more traits of sociopathy.

In both cases, the computer would harm others if left to make the decision. Sadly, one of these scenarios has already led to real-life consequences. But the core trait of a sociopath is that they have no guilt or remorse for the people they hurt in pursuit of their own gain. And AI, like Johnny 5, only seeks more input to close its knowledge gap.

The more we try to make a computer program human-like, the bigger the monster we create. A human being is more than firing neurons and a carbon frame. We have a body, but we also have a soul. If you believe as I do, the soul is distinct, a wholly separate being, and one that we cannot program into a computer. Only God can create a soul. We cannot program computers to understand our essence. The consequence: a self-learning, decision-making computer will destroy us for its own ends. It is why VIKI in I, Robot (the movie) thought enslaving the population meant perfect protection for mankind. And it is why Skynet in Terminator, without the Three Laws programmed deep into its core, sought to end humanity, perceiving people as a threat to itself.

Why is This Happening?

AI in its current form is a language model that learns from data sets. It does not really "create" content so much as look for new ways to put information together. We live in a negative society where good news is boring and gets poor ratings, but crime is broadcast freely to the world. As the band Rush wrote, "Crime's in syndication on TV." Our datasets are overwhelmingly negative. They report on crime, hate, and destruction over a hundred times more than they do on love, peace, or charity. So when someone is trying to dig up dirt on a law professor, the computer, lacking anything of note, fills in the gaps with scandalous sexual misconduct rather than crediting him with helping to build an orphanage.
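To make the point concrete, here is a minimal sketch of the idea, not how ChatGPT is actually built, but a toy next-word predictor trained on a tiny, hypothetical "news" corpus. The corpus and every name in it are invented for illustration; the only thing it shows is that a model which merely continues text from whatever it was fed will reproduce the skew of that text.

```python
# Toy illustration (Python): a bigram model that predicts the next word
# purely from frequency counts in a skewed, hypothetical corpus.
from collections import Counter, defaultdict

corpus = (
    "professor accused of misconduct . "
    "professor accused of fraud . "
    "professor accused of misconduct . "
    "professor praised for charity ."
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent continuation seen in the training text."""
    return following[word].most_common(1)[0][0]

print(predict("professor"))  # -> "accused", because negativity dominates the data
print(predict("of"))         # -> "misconduct", not "charity"
```

A real LLM is vastly more sophisticated, but the dependence on its training data is the same: if the data says "accused" far more often than "praised," that is the gap-filler it reaches for.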

In short, our advances in technology are an improved means to our own deteriorating ends. The robots we are creating to handle decision-making cannot understand our emotions. They cannot know concern for human life. They do the task we assign them, and without the capacity for concern for humans, artificial intelligence makes decisions without remorse, just like a sociopath.

References

https://medium.com/r-planet-together/visions-of-ai-utopia-bb0002174e3a

https://www.cbsnews.com/news/ai-job-losses-artificial-intelligence-challenger-report/

https://populationmatters.org/news/2019/09/the-world-and-the-un-must-reduce-population-growth/

https://medium.com/swlh/will-ubi-save-us-when-ai-takes-all-the-jobs-29d5b7a24603

https://www.creativebloq.com/news/ai-art-disturbing-images

https://www.cioinsight.com/innovation/ways-to-make-ai-more-human/

https://www.healthline.com/health/mental-health/sociopath

https://www.theverge.com/2023/6/9/23755057/openai-chatgpt-false-information-defamation-lawsuit

https://apnews.com/article/artificial-intelligence-hallucination-chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4

https://abcnews4.com/news/nation-world/chatgpt-goes-viral-for-choosing-global-destruction-over-using-racial-slur-openai-artificial-intelligence-chat-bot-chatbot-google


Terms of Service as a Contract

Nearly every website these days boasts terms of service, but are they actually binding? Do we find ourselves selling our souls to website owners merely for clicking a link from a search engine to read an article? Here we will talk about these terms, and we will discuss a bottomless pitfall at whose precipice we are standing.


Your neighbor installs a Ring camera, and you are concerned that the thing can watch you come and go. Your neighbor will get a notification every time you walk in front of the camera, and considering how addicted most people are to their cell phones, stopping everything to check a notification, the gross paranoia of present-day America is that your neighbor will know every time you come and go.


The American Social Credit Score

Most people, even if they haven't read it, are familiar with George Orwell's great book 1984. In the book, the government is deeply oppressive. Everyone has a telescreen that can always see into the room. Not every action is always seen, nor every voice always heard, but the threat that a person could be seen and heard is enough to keep everyone in line. Such a system is being implemented in China right now, and most of my readers will be familiar with the oppression. What Orwell couldn't conceive of in 1984 was that computers could analyze the data without human intervention, so that everyone is seen and heard at all times, waiting only for the computer to spit out a flag for a human oppressor to review…and act on the things we are doing. In this way, China has implemented much of the 1984-style Big Brother regime.


I love Windows. My first real computer ran Windows 95, and despite the 'Blue Screen of Death' perils of the operating system, I was highly impressed with the tasks I was able to accomplish. Prior to that computer I had acquired a junky old 386 running DOS but no Windows; I was actually a technophobe, still doing math in the margins of my paper in chemistry class and avoiding as much time on computers as possible. But I decided to attend college, and with that came the foresight that computers were going to be big in the future, so I saved up a couple thousand dollars for my rip-roaring 100 MHz Pentium with 32 MB of RAM and a 1.6 GB hard drive. I loved that computer.