GPT Will Replace Us All

In ninth grade, I took a course in computer science where I learned my first programming language, Pascal. Every class brought a new lesson and a new problem to solve. I don't remember the language at all (if you don't use it, you lose it), but I have retained the course's most valuable lesson: A computer is only as smart as its user. That's how I know ChatGPT won't replace us. It's not because we're smarter; it's because, in a way, we're dumber.

What's more, I would argue that replacing us was never the point, or even the danger, of artificial intelligence. Armed with the power of well-written algorithms and well-chosen training datasets, machines can do amazing things. They can recommend new products to buy based on your past purchases, they can learn to drive cars reasonably well, and apparently, the latest version of GPT, GPT-4, can even pass the Uniform Bar Exam. Can GPT-4 argue and win a case in court? Maybe, but I'm willing to bet its win rate would be quite low.

There's a certain artfulness to arguing, not just in court but in everyday life. Problem-solving, logical reasoning, and just living with an organic brain are but a few of the things that make human intelligence unique. I'm reminded of one of my favorite films (and every librarian's favorite film), Desk Set, a 1957 romantic comedy starring Katharine Hepburn and Spencer Tracy. Hepburn plays the head of the reference department at the fictional Federal Broadcasting Company, and Tracy plays a consultant brought in to add an "electronic brain" to the department.

There's a scene early in the film where the two share lunch on the rooftop of the building, and Tracy's character quizzes Hepburn's character with a few brain teasers to test her memory and logical reasoning. The memory questions would be easy for an AI interface to answer, but curiously, there's one question that's not so clear-cut. Tracy's character says:

"A detective broke into an apartment and found Harry and Grace lying on the floor dead. Beside them was a small pool of water and some fragments of broken glass. Above them on the sofa looking down at them was a pet cat with its back arched. The detective concluded without further investigation that the victims had died of strangulation. How was this conclusion possible?"

Hepburn's character correctly answers that Harry and Grace must have been pet fish. What happens when you put the same query to ChatGPT?

Here's the text of the response ChatGPT provides:

"This conclusion is not possible based solely on the information given. The presence of a small pool of water and some fragments of broken glass do not provide any clear indication of the cause of death, and the behavior of the cat is not necessarily relevant either. Without further investigation, it is impossible to determine the exact cause of death, and it would be irresponsible for a detective to draw such a conclusion without sufficient evidence. Therefore, the conclusion that the victims died of strangulation based solely on the given information would be incorrect and premature."

The problem with ChatGPT (and I should note this was not a response from GPT-4, since as of this writing, accessing the latest version is only possible with a $20-per-month subscription) is that the only way it can answer the question is by assuming the detective's conclusion was incorrect. It cannot conceive of a situation in which the detective is correct, and therefore it cannot deduce the reason why he is correct.

I'm sure it's only a matter of time before GPT learns to respond correctly to queries like this, but I feel pretty confident machine learning is not a slippery slope... at least not in a robots-take-over-the-world type of scenario, and not in an AI-replaces-all-knowledge-workers sort of way. But I'll save the ethics of AI for a later post.