I am increasingly worried about the growing trend of using AI to get grammatical explanations. The problem is that ChatGPT is not a tool from sci-fi movies - it does not do proper research, it does not consult grammar books and dictionaries, etc. It only produces something that looks very convincing on the surface, but very often has nothing to do with the facts. What I am seeing (not only on this forum) is more and more cases of spreading misinformation that ChatGPT produced, with people trusting that they are getting reliable information. Sometimes it's in good faith, when people honestly ask whether what they got from the AI is correct, but there are also cases of people who just mindlessly copy and paste this misinformation without any disclaimer or warning, as if they were natives offering their expertise.

This is very dangerous, and I think it should be seriously addressed before the "Sentence discussions" forum becomes a place where no one can tell anymore whether what they just read came from someone who knows what they are talking about, or from an algorithm that made everything up. It is especially serious when it comes to less popular languages. As an experiment, I asked ChatGPT about a few sentences in Polish (my mother tongue) that I had made grammatically incorrect on purpose. Not only did it fail to correct the actual mistakes, it even added new ones, and it still made everything look very convincing to someone who is not advanced enough to spot them.

I really think it's better to act now, before it's too late. In my opinion it's totally fine to ask on the forum whether what ChatGPT generated is correct, but just posting it as additional commentary on the sentences should be strongly discouraged, if not banned.
Just as an aside, and equally worrying: another excellent site allows students to correct one another to earn points. The number of times one sees "Perfect!" in answer to a clearly incorrect English exercise, just to gain another point, is disheartening. Adding proper corrections is tiring; I've more or less given up, as points seem to be more rewarding than proper learning. It's a relief to come back to ClozeM, where the Italian course is mostly spot-on, and if it isn't, we ask on the forum.
I enjoyed reading your post. Happy New Year!
This is a very valid concern and I completely agree.
Today I once again had to correct some AI misinformation that had been mindlessly shared without any comment. The problem is that writing a post with corrections takes time, while just copying and pasting from ChatGPT takes a few seconds. If this becomes the norm, I wouldn't count on people being willing to offer so much of their free time to correct this misinformation. I have already noticed a worrying thing: when I do my daily Clozemaster routine and see that a sentence has some comments, I of course check them, but the first thing I look at is who posted them. By now I know that comments from certain users are worthless, because they are just copied and pasted from ChatGPT. @mike, is this really what we want the "Sentence discussions" to turn into?
Thanks for the post! I appreciate the concern, and we need to give it more thought. I also want to reference "Please consider removing the Explain feature entirely", which voices related concerns.
I do want to note that the comments you've mentioned me in do explicitly state that what they've posted is from ChatGPT, for example Είσαι σε όλα τέλειος. ("You are perfect at everything."), which seems rather reasonable.
As for whether we should have someone moderate the forum to censor/remove ChatGPT content, we'll have to give that more thought. Perhaps we can try to discourage it with some sort of messaging as a first step, and/or encourage people to put that sort of content in their sentence notes, which are not shared / publicly visible.
That said, and to your point, ChatGPT content does seem to be ubiquitous, and at least the comments I've seen so far explicitly say they're from ChatGPT, so you know to double-check it (or ignore it, in your case). Without that mention - if someone just copy/pastes the output, for example - I'm not sure how we'd moderate it at all. Try integrating something like GPT Zero? So my initial thought is that at least stating that a particular comment is from ChatGPT is something we should also encourage.
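Just to sketch what such an integration might look like - and this is purely illustrative, since the endpoint, field names, and threshold below are placeholders I made up, not GPT Zero's actual API, which we'd have to verify first:

```python
import requests  # any HTTP client would do

# Hypothetical detector endpoint and response shape - placeholders only,
# not GPT Zero's real API.
DETECTOR_URL = "https://detector.example.com/v1/classify"

def looks_ai_generated(comment_text: str, threshold: float = 0.9) -> bool:
    """Ask a hypothetical AI-text detector how likely a comment is to be
    machine-generated; report True if the score exceeds the threshold."""
    response = requests.post(DETECTOR_URL, json={"text": comment_text})
    response.raise_for_status()
    score = response.json()["ai_probability"]  # placeholder field name
    return score >= threshold

def on_comment_submitted(comment_text: str) -> str:
    """Possible forum hook: hold suspected AI output for human review
    instead of publishing it immediately."""
    if looks_ai_generated(comment_text):
        return "queued_for_moderation"
    return "published"
```

Even then, detectors of this kind are known to produce false positives, so at best this could serve as a first-pass filter that queues comments for human review, never an automatic removal.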
Do you have any particular solutions in mind? How would you expect a ban to be enforced?
I fear that won't help. People who mindlessly believe and spread garbage information are ... well ... mindless. Some message they can ignore won't discourage mindless people.
I think that (at least a portion of) these offenders post ChatGPT's output precisely because their text will be publicly visible. Maybe they crave the sense of validation they hope they'll get from doing this. I don't know how their brains work. Again, I'm talking only about a portion of the offenders. For some the messaging might help, but I imagine that others get a feeling of superiority from the knowledge that they can use this new technology (which isn't hard to use, and which everybody else is also able to use, but that part they shut out), and they know they won't get that feeling if others don't see proof of them using ChatGPT - in other words, if their comments are not publicly visible. Ergo, I fear that approach won't help all that much (at least not as much as desired or necessary).
I understand I'm being negative. I only point out why potential solutions likely won't work; I don't propose any myself. Unfortunately, I don't have a solution. It's a difficult problem.
@mike thanks a lot for being so open-minded about this! I totally understand that it's not easy to come up with workable solutions to this problem, since it's an entirely new kind of challenge that we are (and will be) dealing with. I am also far from calling for a ban on AI, because we all know that's unrealistic. Nevertheless, I strongly believe this has to be discussed before the problem gets out of control and the reputation of Clozemaster suffers.
One solution I would propose is to have two sub-sections in every main "Sentence discussions" section for each language: one for AI explanations and one for the actual discussions. Please note that the user I was referring to, and in whose posts I tagged you, is obviously not interested in any kind of discussion - they have never reacted in any way to me pointing out that what they posted is incorrect, and they have never corrected any of their posts in this regard. So their misinformation will stay there forever. Yes, they do mention that it's from ChatGPT, but in my opinion this is not enough. I'd like to see a clear statement, just like the one from the very AI they use, something like: "These are AI-generated explanations. Because they are not based on any critical research, they might be incorrect or misleading, so you're strongly encouraged to consult other sources."

I imagine some people might find this a bit exaggerated, but in my opinion we have some obligations here: if this were a forum for fan fiction or cooking recipes, that would be totally fine. But I believe the environment we strive to create here is of an educational nature, so any hint of misinformation should be pointed out and dealt with seriously.
The advantages of having separate sub-sections for the AI explanations and for discussions between actual humans:
- people who care more about feeling validated than about actually learning the language (see @davidculley's points) would still have their safe space here; hopefully they would stick to it, and those of us who are not interested in their AI-generated content would have an easy way of ignoring it
- no one will feel like they are being censored or targeted for "just trying to help"; I do understand that many people do it in good faith, without fully understanding the tools they are using
- the AI sub-forum could have the automatic default disclaimer I mentioned above, always appearing above these explanations. To be honest, I think something like this should also appear in the AI explanations offered by Clozemaster itself, because I have seen a lot of incorrect information there as well
- if someone posts their AI-generated content in the wrong section of the forum, admins can easily and quickly move it to the correct one
I'm very curious to hear your thoughts!
In the meantime I was informed that my post in another topic had been hidden, because "the community feels it is not a good fit for the topic". In my post I asked if the newly introduced feature (discussed in this very topic) could also be implemented in the app, instead of only in the web version. While the reasons why such a question is "not a good fit for the topic" are beyond my understanding, the actual reason I am mentioning it is that I would like to ask for clarification: how does flagging a post work? If I flag a post that I have already demonstrated to be factually wrong, incorrect, and misleading, will it also be hidden?
That ban seems completely unfair and unwarranted.
Here are a few screenshots to show how it's done. In the first picture, you click on the "..." icon to expand it and reveal the flag, which brings up a number of options.
Click on the flag to get this dialog:
My guess is that someone chose the "It's Inappropriate" option.
I have never used any of the flagging options, but I have assumed that any proposed ban would not be applied immediately, only after staff intervention. I fail to see how any staff member would agree that your post was inappropriate, though.
This post was flagged by the community and is temporarily hidden.
AI misinformation is now a worldwide social problem and will become an increasingly pernicious epidemic in every sphere of life, everywhere on earth. It already corrodes our most important institutions, with no tenable solution in sight. If the Gutenberg press is any guide, this will result in decades, if not centuries, of social fracture and conflict before people learn to manage it. This is a good opportunity to practice your equanimity, so you'll be prepared when it painfully and senselessly disrupts your employment, politics, and personal relationships, and creates a dead internet where trust is impossible. Cheers! ;-p
This post was flagged by the community and is temporarily hidden.
A little update: over the past few days, several of my posts have been hidden, then made visible again, then hidden again. I also got a message telling me exactly which sentences I should remove in order to have my post made visible again - all because I asked additional questions that were related to, but not 100% about, the topic being discussed. That's totally fine, no hard feelings; I understand that rules are rules.

But at the same time, as an experiment, a few days ago I flagged a post containing a serious case of misinformation, which I had clearly demonstrated. Absolutely nothing happened to it: it's still visible to every learner who has the misfortune of stumbling upon it. This tells me clearly that fighting misinformation is very low on the priority list, while hiding and unhiding posts for not sticking to the topic is a top priority - so much time and effort has been spent on the latter, while nothing has been done about the flagged post.

In this case I will follow @seveer's wise advice, stop being bothered by it, and stop wasting my time on posting corrections. It is a pity, though. I really hope one day this issue will receive more attention. I wish all of you happy learning, free of AI mistakes!
That shouldn't/can't be the case, since it's an automatic process. There's no human involved.
Do you have a link to that post?
It's an automatic process that treats every case equally. There is no human involved. So it's inaccurate to speak of high and low priority and unequal treatment.
It's an automated process. There is no human involved.
Again, it's an automatic process that gives every flagging the same attention. Also, it has nothing to do with Clozemaster. It's a feature of Discourse, the forum software that Clozemaster uses.
That's the definition of off-topic.
You were informed which parts of your posts were off-topic, and you were informed that if you removed the off-topic parts (and optionally copied and pasted them into a new, separate thread), your posts would become visible again. Which they did, by your own admission. So isn't all well again? And isn't the forum in a better state now, since a few posts that were previously off-topic are now on-topic?
You made some comments that had nothing to do with the topic, edited them to make them on-topic, and your comments were restored. I don't see a problem here. It's all an automated process. There's no human involved who devalues or deprioritizes your contributions.
Reading over this thread, since it was said that AI-generated posts are marked as such, maybe the best approach would be to "collapse" such posts by default, with the option to "expand" them and read what the robots spat out after chewing up the internet, as sketched below. (The format should probably differ from the "ignored comment" formatting.) At the very least, it would save people like me some scrolling time when we want to see others' thoughts on a matter. I'd rather not wade through what will, in all likelihood, prove to be verbose silliness from a machine that doesn't really understand what words mean or how they work.
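For what it's worth, Discourse (the forum software mentioned above) already supports collapsible sections via its built-in [details] markup, so a manual version of this idea is possible today if posters wrapped their pasted output like this (the summary wording is just a suggestion):

```
[details="ChatGPT output - unverified, may contain errors"]
(pasted AI-generated explanation goes here)
[/details]
```

Readers would then see only the one-line summary until they chose to expand it.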
Warning, this is highly off-topic: I tried to message @Hyacinthe, encouraging them not to give up posting corrections and making this a better place. Unfortunately, it seems that the user isn't accepting any messages (I assume because of the incidents mentioned here), so I'm posting here instead, hoping the message comes through.
Back to the topic: here's an interesting comment from a professor of mathematics:
The era of ChatGPT is kind of horrifying for me as an instructor of mathematics... Not because I am worried students will use it to cheat (I don't care! All the worse for them!), but rather because many students may try to use it to learn.
For example, imagine that I give a proof in lecture and it is just a bit too breezy for a student (or, similarly, they find such a proof in a textbook). They don't understand it, so they ask ChatGPT to reproduce it for them, and they ask follow-up questions to the LLM as they go.
I experimented with this today, on a basic result in elementary number theory, and the results were disastrous... ChatGPT sent me on five different wild goose chases with subtle and plausible-sounding intermediate claims that were just false. Every time I responded with "Hmm, but I don't think it is true that [XXX]", the LLM responded with something like "You are right to point out this error, thank you. It is indeed not true that [XXX], but nonetheless the overall proof strategy remains valid, because we can [...further gish-gallop containing subtle and plausible-sounding claims that happen to be false]."
I know enough to be able to pinpoint these false claims relatively quickly, but my students will probably not. They'll instead see them as valid steps that they can perform in their own proofs.
I see so many adults and professionals talking about how they are using LLMs to deepen their understanding of things, but I think this ultimately dives headlong into the "Gell-Mann amnesia" effect - these people think they are learning, but it only feels that way because [they] are ignorant enough about the topic they're interested in to not detect that they are being fed utter [B.S.].
[...]
Exactly this unwelcome effect also happens when learning a language. Here's an example where ChatGPT confidently sent me down the wrong path when asked about Spanish grammar, with regard to a sentence that contained a grammatical error:
Because I'm not very good at Spanish (yet), I couldn't detect for myself that I was being fed B.S., unlike the professor in the comment. It was only thanks to @morbrorper that I realized that the answer given by ChatGPT (and the Spanish sentence in question) was wrong.