7 Issues with ChatGPT

ChatGPT is a language-processing AI model trained by OpenAI that generates human-like text. Since its launch in November 2022, it has taken the internet by storm.

The model was trained on a large dataset of online text and is built on the GPT (Generative Pre-trained Transformer) architecture. It can perform a variety of NLP tasks, including text summarization, question answering, and language translation.
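For readers who want to try one of these tasks programmatically rather than through the chat interface, here is a minimal sketch using OpenAI's Python package. The model name and the v0.x-era ChatCompletion call are assumptions on our part; check the current API documentation before running it:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

# Ask the model to perform one of the NLP tasks mentioned above: summarization.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[
        {"role": "user", "content": "Summarize in one sentence: <your text here>"},
    ],
)

print(response["choices"][0]["message"]["content"])
```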

We think ChatGPT is a game changer. That said, like all technology (language models included), it has its limitations and issues. With the huge number of people testing it over the past few months, many situations where ChatGPT fails have been identified.

It’s important to remember that ChatGPT was introduced to the public by OpenAI with the express disclaimer that the tool was still under development. OpenAI hoped to leverage real users to identify areas where the technology still needed work.

The public did not disappoint, and many of these ChatGPT issues were identified by normal users like you and me!

Math problems

ChatGPT, like all language models, is primarily trained on text data and not mathematical equations.

While it can parse and respond to mathematical questions, it does not always produce correct answers. This is due to the model’s token-based architecture, which relies on finding patterns and connections between the words (and number fragments) in the problem rather than actually “understanding” the mathematics.
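You can see this token-based view of numbers directly with OpenAI's open-source tiktoken tokenizer. A short sketch (the encoding name below is the one used by recent OpenAI chat models):

```python
import tiktoken

# Load the tokenizer used by recent OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

problem = "What is 123456 * 789?"
token_ids = enc.encode(problem)

# Show how the text is split: digits are chunked into arbitrary multi-digit
# tokens, not individual digits, which makes digit-level arithmetic awkward.
print([enc.decode([t]) for t in token_ids])
```

The model never sees “123456” as six digits; it sees a couple of opaque tokens, which is part of why arithmetic goes wrong.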

ChatGPT may have trouble solving more difficult mathematical problems that demand a thorough grasp of mathematical principles. It may also misread a user’s intent, which can likewise result in an incorrect answer.

Take a look at the question below. While ChatGPT’s reasoning is somewhat sound, it does not account for the two months (March and June) that have five Fridays, so its answer is approximately correct but not exactly correct: there are actually 26 Fridays between January 1st and June 30th, 2023.

[Image: ChatGPT miscounting the Fridays between January 1st and June 30th, 2023]
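If you want to verify the count yourself, a few lines of standard-library Python will do it (a quick sketch, nothing ChatGPT-specific):

```python
from datetime import date, timedelta

# Count the Fridays between January 1 and June 30, 2023 (inclusive).
start, end = date(2023, 1, 1), date(2023, 6, 30)

fridays = 0
day = start
while day <= end:
    if day.weekday() == 4:  # Monday is 0, so Friday is 4
        fridays += 1
    day += timedelta(days=1)

print(fridays)  # 26 -- March and June each contain five Fridays
```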

Basic facts 

ChatGPT was trained on a sizable amount of data, which exposed it to a wide range of information, including some that might be inaccurate or out of date.

Additionally, because the model cannot independently verify the veracity of the information it provides, it can unintentionally repeat mistakes that appear in its training data.

Finally, ChatGPT was trained only on data available up to 2021, which means that facts that were correct at the time may no longer be accurate.

An example of this type of issue:

[Image: ChatGPT confidently getting a basic fact wrong]

Reasoning 

ChatGPT uses machine learning algorithms to generate text based on the input it receives. Despite having been trained on a sizable amount of material, it cannot comprehend the context or meaning of the text it produces: it recognizes statistical relationships between words, not what individual words and sentences actually mean.

This can lead the software to produce inaccurate or irrelevant answers to certain queries, particularly those that call for deductive reasoning or the comprehension of intricate concepts.

Take a look at the exchange below, where ChatGPT eventually concludes that an abacus is faster at computing than DNA computing (and with a lot of confidence too!).

[Image: ChatGPT confidently concluding that an abacus is faster at computing than DNA computing]

Source: https://twitter.com/AndrewYNg/status/1600284752258686976

Gender and racial biases 

The training data that ChatGPT and other language models leverage is sourced from the internet, which may contain biases and stereotypes.

For example, certain gender or racial stereotypes may be represented in the training data, leading the model to make assumptions or generate text that reflects those biases.

Additionally, models like ChatGPT are not inherently aware of the social and cultural context in which language is used, and therefore they can perpetuate these biases.

Here is an unfortunate example of this issue, where ChatGPT assumes that a good scientist must be male and white.

[Image: ChatGPT assuming that a good scientist must be male and white]

Source: https://twitter.com/spiantado/status/1599462375887114240

Common sense 

Although ChatGPT has absorbed an enormous amount of knowledge and can generate human-like responses to a wide variety of inputs, it does not have true “common sense” or “street smarts”. This can make its answers incorrect or unexpected for questions that require a more human understanding of the situation.

Additionally, ChatGPT may not understand the question’s context or the user’s intent.

Here are a couple of examples that might give you a quick LOL (and may make you less worried that AI will eliminate all our jobs in the next couple of years).

[Images: two examples of ChatGPT failing common-sense questions]

Correlation vs. causation 

ChatGPT can understand and generate text based on the relationships between the words in a query, but it does not truly understand the underlying concepts. Correlation and causation are subtle concepts, and even humans often have difficulty distinguishing between them.

ChatGPT may therefore generate responses that incorrectly conflate correlation and causation, because it cannot fully grasp the nuances of these concepts or the context in which they are used. As mentioned earlier, it may also fail to understand the user’s intent, which can add to the confusion.

Take a look at this example, where ChatGPT confuses correlation and causation.

[Image: ChatGPT confusing correlation with causation]
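To make the distinction concrete, here is a toy simulation (all numbers invented for illustration) in which ice cream sales and sunburn cases are strongly correlated because both are driven by temperature, yet neither causes the other:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hidden common cause: daily temperature over one summer.
temperature = rng.uniform(15, 35, size=200)

# Both variables depend on temperature plus independent noise;
# neither has any causal effect on the other.
ice_cream_sales = 10 * temperature + rng.normal(0, 20, size=200)
sunburn_cases = 2 * temperature + rng.normal(0, 5, size=200)

# The correlation is high even though the causal link is zero.
r = np.corrcoef(ice_cream_sales, sunburn_cases)[0, 1]
print(f"correlation: {r:.2f}")  # roughly 0.87 with these settings
```

A model that only sees the two series moving together has no way to know about the hidden temperature variable, and neither does a language model that has only seen the sentence patterns describing them.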

Physical and spatial reasoning 

ChatGPT, being a piece of code, cannot perceive or interact with the physical world; therefore, it cannot understand physical or spatial concepts like objects, their properties, or their interactions.

It does not have the senses of sight or touch, which are fundamental to understanding and reasoning about the physical world. It cannot make inferences or predictions about how objects will behave based on their properties and the laws of physics.

Therefore, it may not be able to answer questions that involve physical or spatial reasoning, as it cannot understand or perceive the physical world in the same way that humans do (yet).

An example of this type of issue:

[Image: ChatGPT failing a question that requires physical and spatial reasoning]

Source: https://github.com/giuven95/chatgpt-failures/blob/main/images/physicalReasoning1.png

Conclusion

ChatGPT is a powerful language-processing AI model that can generate human-like text and perform various NLP tasks. However, like all language models, it has certain limitations and scenarios where it may not perform well.

These include math problems and questions requiring common sense, as the model is token-based and doesn’t understand the query itself, only the relationships between its words. The model also struggles with reasoning, as it doesn’t comprehend the context or meaning of the text it produces.

Additionally, ChatGPT may not generate accurate responses because it was trained on a dataset of online text only up to 2021, which may contain inaccuracies or out-of-date information.

Overall, while ChatGPT is a powerful tool, it’s important to be aware of its limitations and the situations where it will not perform well. We recommend always double-checking ChatGPT’s responses to ensure they are accurate, unbiased, and actually answer the question you asked.

 
