Most of the posts you see here complaining that GPT doesn't work or produces unexpected results come down to two things. First, people are using GPT-3.5 and expecting it to be some kind of god-like AI. It's not. It's an LLM that simply generates text one word at a time and serves the generated text to you. Second, if you want it to act the way you expect, you have to tell it how to act.
GPT-3.5 (free) is not GPT-4 (premium/paid). Complaining that GPT sucks overall when you've only used GPT-3.5 is like complaining that spreadsheets suck when you've only used Lotus 1-2-3. GPT-4 is much better at producing reliable responses, as you'd expect, but it can still get confused, hallucinate, and be wrong.
The trick is to understand how the technology works and adjust your own prompts to overcome its shortcomings. Tell it what you want to see. Give it an outline of how to respond. Turn the temperature setting down so that it isn't so "creative," and ask it to iterate over its own responses to find any inaccuracies and correct them if it can. You can even tell it not to hallucinate by prompting it not to fabricate any information it doesn't have direct knowledge of from its training data (up to its knowledge cut-off date), and, if it lacks that knowledge, to simply inform you that it cannot complete the request. This will not keep it from producing inaccurate responses (it will confidently give you wrong data that it was trained on), but it will stop it from producing manufactured data. If you're concerned about accuracy, feed it back its own responses and ask it to correct any mistakes.
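If you're using the API rather than the web UI, the temperature knob and the "review your own answer" loop can be set up directly. This is a minimal sketch: it only builds the request payloads (the shape used by OpenAI's chat API), and the model name, system-prompt wording, and function names are my own assumptions, not anything official.

```python
# Sketch: a low-temperature request plus a self-review follow-up.
# Pass these dicts as keyword arguments to your chat API call of choice.

def build_request(prompt: str, temperature: float = 0.2) -> dict:
    """Low temperature makes the output less 'creative' and more stable."""
    return {
        "model": "gpt-4",  # assumed model name
        "temperature": temperature,
        "messages": [
            {"role": "system",
             "content": ("If you do not have direct knowledge of something "
                         "from your training data, say you cannot answer "
                         "instead of fabricating information.")},
            {"role": "user", "content": prompt},
        ],
    }

def build_review_request(original: dict, answer: str) -> dict:
    """Feed the model its own answer back and ask it to correct mistakes."""
    messages = original["messages"] + [
        {"role": "assistant", "content": answer},
        {"role": "user",
         "content": ("Review your previous answer for inaccuracies and "
                     "correct any that you find.")},
    ]
    return {**original, "messages": messages}
```

With the current OpenAI Python SDK this would be roughly `client.chat.completions.create(**build_request("..."))`, then a second call with `build_review_request` once you have the first answer.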
Of course it's going to get things wrong; it literally just generates words. It doesn't "know" anything or reason. Use it like a child that happens to be very knowledgeable. Give it roles. (e.g. "I am aware that you are not a therapist and that it is best for me to seek help from a licensed professional regarding any issues I may have related to mental health. However, for the purposes of this conversation I would like you to assume the role of a fictional therapist who is educated in methods and concepts related to CBT, DBT, and ACT. To get started you will need to obtain information from me, so I would like to begin by having you ask me questions .....")
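In the API, that kind of role-setting usually goes in the system message rather than the user message, so it frames the whole conversation. A minimal sketch; the function name and exact wording are invented for illustration:

```python
def role_messages(user_message: str) -> list:
    """Put the role up front in a system message so the model stays in
    character for the whole conversation (wording mirrors the example above)."""
    system = (
        "Assume the role of a fictional therapist who is educated in methods "
        "and concepts related to CBT, DBT, and ACT. Begin by asking the user "
        "questions to gather background before offering any suggestions."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]
```

The same pattern works for any role: replace the system text and the model will answer from that persona instead of its generic voice.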
This forum has become kind of a shit show for pointing out flaws in a new technology when it would probably do better to show how the technology is useful.
Edit: Also, when you see posts from third-party tools (gpttrolley.com, et al.), remember that the results you get from those tools are not direct interactions between you and GPT. I could create a site that takes anything you prompt, adds additional prompting to it, feeds that to GPT, and serves you the biased response. If you're not using GPT directly, anything you type can be altered to make GPT respond in any fashion the site owner wants. If I want unethical results served to you, I can prompt it to act unethically. Don't believe everything you see on here, and keep in mind that many of these responses were manufactured by the OP's own prompting; they just cropped the dialogue to show you the funny/shocking part.