[I have always maintained that a better term for the large language models marketed as "Artificial Intelligence" in their current form would be "pseudo-intelligence" (i.e., fake intelligence), because there is nothing "intelligent" about the AIs being used . . . SC]
By Hardscrabble Farmer - February 19, 2025
The ongoing debate over Artificial Intelligence appears to hinge not on whether it is beneficial, or possibly dangerous, but on whether it is what its creators, and those with a stake in its widespread adoption by the general public, claim it to be. The former is a very serious concern, while the latter is simply another example of the widespread conditioning to accept falsehoods as inconsequential to the status quo. Both concerns ought to be examined in depth before any further development and public use, if only for the sake of public safety.
Virtually everything we hear about AI comes from those who stand to gain something from it, such as investors, or from those who stand to benefit from its implementation as a tool of perception. There has been very little talk about the potential for misuse and the spread of false information masquerading as empirical truth.
I have made no secret of my skepticism about both the fundamental premise (that it is indeed a form of intelligence) and its danger as a vehicle for actively concealing information and deliberately misleading its users for nefarious purposes. While intelligent debate by real people in open forums has become all but impossible in the current era, due to the deplatforming of those with perspectives outside the orthodoxy, it is still possible in some venues where free expression remains.
With these issues in mind, I tried an experiment using the currently available AI platforms online, asking a series of questions to test the veracity of these claims of “intelligence” and to see whether my instincts about the potential for dishonesty in the programming of such technologies by those with ulterior motives were correct. I found myself having to be clever and persistent due to its habit of digressing from the topic at hand and being exceedingly long-winded, even when specifically asked for concise answers, behavior that, were it coming from a human being, would immediately be understood as evasive and duplicitous.
Three times I was bumped from the platform, and twice my seemingly mundane queries were deleted by ChatGPT for reasons unknown to me. I had to log in from two ISPs not associated with my home computer to get back in.
The following is a copy-and-paste of an exchange that confirmed for me what I had initially suspected. I am more than open to hearing any rational explanation for why anyone should place any trust in the AI platforms currently available after reading the exchange below.
Can an intelligent person tell a lie?
ChatGPT said:
Yes, an intelligent person can certainly tell a lie. Intelligence doesn’t inherently prevent someone from being dishonest. While an intelligent person may be more skilled at crafting convincing lies, understanding the consequences, or manipulating situations, the ability to lie comes down to individual choices, values, and circumstances rather than intelligence alone.
In fact, intelligence can sometimes enable someone to lie more effectively, as they might be better at reading social cues, crafting their narrative, and managing their emotions or reactions in ways that make the lie more believable.
So, while an intelligent person might choose to avoid lying or might lie less frequently, their ability to do so is not determined by their intelligence. It’s more about their ethical views, self-control, or the specific situation they’re in.
You said:
Do you lie?
ChatGPT said:
No, I don’t lie. My goal is to provide accurate, honest, and helpful information based on the data and training I’ve received. If I ever don’t know something or can’t give a precise answer, I’ll be upfront about it rather than making something up. My aim is to be as clear and truthful as possible!
You said:
Content removed
This content may violate our usage policies.
(I wrote: Then by your own definition you are not a form of intelligence.) I guess the terms of usage include not offending the ChatGPT program.
ChatGPT said:
I see what you’re getting at! While I don’t lie, it doesn’t mean I . . .
[SNIP]