I used ChatGPT (https://chatgpt.com/) to create a short pseudo blog post for me using a prompt that I gave it. See the results here: https://docs.google.com/document/d/11-FqZlAuZuewZI4Tqh9kOvM_NaLGZDzZLi9CQdmYmjU/edit?usp=sharing
I told ChatGPT to write me a 275-word description of the positives and negatives of video games. And without a doubt, I can say that it fully accomplished that task.
Overall, I learned that it's not really inaccurate per se, but it is missing depth. It gave me four positives and four negatives, each distinct from the others.
But it also just gave me eight very popular talking points and opinions that have been thrown around since long before anyone reading this was born. That isn't to say it's wrong; but some of it may have since been proven otherwise, and some of it lacks context.
So while it is lacking in certain areas like context, depth, and additional research, it's the best AI that's publicly available right now. I'll definitely say it's impressive, but it certainly leaves a lot to be desired.
Maybe the reason it lacks depth is that I limited it to 275 words; but there's also no linking to sources. Would that change if I raised the limit to, say, 500 words? (Spoiler: it did not.)
In all, it was a fun exercise that showed me how AI is really taking over, and that in the future we'll be seeing it in a whole new light; being able to answer a somewhat complex prompt like that in a matter of seconds is outstanding. But it also showed me how flawed the technology is at the same time.
One thing I noticed is that, a lot of the time, the platform will simply give you research that confirms your point. Which means…
We basically have to take it at its word. Which raises the question: do I trust it? Yes and no.
Sure, it can be helpful. For example, in a different exercise I asked ChatGPT to tell me why a certain baseball stat (batting average) is the best stat to use to judge a player. It gave me positives and negatives of using that statistic, all of them popular talking points, and concluded by saying that batting average has been judged by some as the best stat to use.
I then asked it to tell me why it isn’t the best stat. And it told me why it wasn’t.
Obviously that isn't a *serious* example with massive real-world implications. But it's still a fun exercise that pointed me to where the trust issue comes in: if it has the potential to confirm whatever point you're feeding it, then trust goes a little bit by the wayside. And when it tells you exactly what you're looking for, that's when misinformation can spread.
In addition, trust is already iffy because, a lot of the time, it simply gives you wrong information: either outdated or flat-out incorrect.
I had a professor at my old college who had us use ChatGPT to answer a quiz question she gave the class. The platform gave us all the same answer; except that answer was wrong and not at all what the book told us.
So, in all, trust is very iffy. There are flaws and strengths to ChatGPT and AI, and it was an overall interesting experiment. Still, the technology itself is outstanding, and in ten years I'm beyond curious to see what the landscape of ChatGPT and AI looks like.