Just to clarify and see whether I understand you correctly: Are you denying (or not believing) the fact that Earth’s climate is changing due to greenhouse gas emissions from burning fossil fuels?
I think getting into sensitive political issues will eventually spoil this thread, which we have all enjoyed so far. I'm already taking your advice and getting out now.
Here’s an open question for the people who like to use the Explain feature: Have you tried using other resources – especially Wiktionary – to obtain similar information? If so, how did it compare? Are you aware of the feature that lets you click on a word and look it up either on one of the sites that Clozemaster gives you by default (Wiktionary being one) or a site that you configure yourself?
For every language I’ve studied except Hebrew, I’ve been able to find anything I wanted to know about a word’s grammatical characteristics (number, case, etc.) and etymology at Wiktionary. For Hebrew, where Wiktionary coverage is not very good, I’ve been able to find the information at Morfix and/or Pealim. It might take two or three clicks rather than one, but my experience shows that I can trust the information without worrying about it having been hallucinated.
I personally stay away from generative AI for a number of reasons. I’ve often seen it make mistakes in subject matter I know, so I don’t trust it with material I don’t know very well. I know that it uses huge amounts of energy (even if there might be ways to offset some of it by caching previously obtained results). It’s not very good at giving credit to its sources (though the AI I automatically see when doing Google searches has gotten somewhat better in that regard). And it forces me to relinquish some control over which results I investigate further, control that an ordinary search leaves with me.
In this discussion, I see some people saying they like using the Explain feature. What I don’t see them doing is comparing it to the alternatives.
Thank you for writing this, @alanf_us.
Exactly. This is called the Gell-Mann amnesia effect: when you know a topic well, it’s relatively easy to spot when somebody says something wrong about that topic. This applies not just to “AI” but also to journalism.
But when you’re a beginner or unfamiliar with a topic, you tend to take whatever you’re told by a perceived authority (like a newspaper or an “AI”) at face value and simply believe it, even when it’s completely made up or not based on facts. “It was in the news, so surely it’s correct! They wouldn’t print wrong information, would they? As a newspaper, they certainly have fact-checkers to get their stories right, no?” Except that humans don’t even (consciously) have that thought; they simply believe without questioning. Even when they have just read an article, on a topic they know very well, that the newspaper got completely wrong.
I’m not saying that you’re stupid if this effect applies to you, and I’m not saying that you fell for some trickery by being too naive. (I say this before I’m called “condescending” again by a climate-change denier who thinks that all the facts I’ve posted so far are “nonsense”.) The Gell-Mann amnesia effect is just how the human mind seems to work.
This is exactly what happens when you use “AI”. Often, you don’t even notice when it’s wrong. And even when people do notice that a ChatGPT explanation got something wrong, some never seem to learn the lesson and stop trusting the next one. It’s as though each new sentence were a fresh start where ChatGPT gets yet another chance. For some people, ChatGPT can be completely wrong as often as it likes, and they will still put their full trust in it on the following sentence. This is the Gell-Mann amnesia effect. They brush each mistake aside or immediately forget that ChatGPT was just completely wrong on the sentence they saw a moment ago. That is the ‘amnesia’ part. And I believe that many Clozemaster users are unaware that they’re under this effect.
@alanf_us might be onto something: Are all defenders of the Explain feature even aware of the existing features? After all, they’re all hidden in various sidebars, you have to know how to find and use them, et cetera. In the mobile app, it’s even harder to click or hover with your finger over something than it is on your desktop computer with a mouse. Whereas the ChatGPT explanation is directly under their nose, easy to find, and requires just one intuitive click on a simple big button in an obvious position in the UI.
I wonder what would happen if, hypothetically, a click on that “Explain” button opened Wiktionary instead of ChatGPT. Make Wiktionary as “under your nose” as the GenAI explanation currently is. The Clozemaster users obviously like the UI: one button that gives the ultimate summary, instead of having to look up, say, 5 words 5 individual times. Hypothetical question: what if we replaced the backend behind that button (ChatGPT) with something else? Maybe the Wiktionary links etc. just need a better UI, so that Clozemaster users no longer feel “punished” by the hypothetical loss of that obvious, simple “Explain” button (and I specifically mean the UI element, the button itself, not the ChatGPT explanation behind it). I don’t mean to remove that button from the UI; that can stay. I’m just arguing for a more sustainable backend.
Wiktionary is designed to search a single word at a time. I don’t find this an inconvenience because I almost always only have a question about a single word (a consequence of the fact that I only use Clozemaster for languages where my level is intermediate). On the few occasions I have questions about multiple words within a sentence, I don’t mind looking them up individually.
However, if people use Clozemaster for languages where they frequently have questions about multiple words in a sentence, that might explain their fondness for the “Explain” button. I can’t think of a way around that other than automatically preparing Wiktionary-type links for every word in the sentence at once, which would seem to be a usability nightmare.
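For what it’s worth, the mechanical part of that idea seems simple; the usability question is the hard part. Here is a rough, purely hypothetical sketch (in Python) of generating one Wiktionary link per word in a sentence. Clozemaster’s actual implementation is unknown to me, and the per-language section anchors (e.g. `#French`) are an assumption based on how Wiktionary pages are organized:

```python
from urllib.parse import quote

def wiktionary_links(sentence, lang_code="fr"):
    """Build one Wiktionary lookup URL per word in a sentence.

    Hypothetical sketch of the 'link every word at once' idea; NOT
    Clozemaster's actual code. The section anchors below are assumed
    from Wiktionary's per-language headings.
    """
    sections = {"fr": "French", "de": "German", "he": "Hebrew"}
    section = sections.get(lang_code, "")
    links = {}
    for word in sentence.split():
        # Strip surrounding punctuation; Wiktionary headwords are
        # usually lowercase except for proper nouns.
        headword = word.strip(".,;:!?»«\"'()").lower()
        if not headword:
            continue
        url = f"https://en.wiktionary.org/wiki/{quote(headword)}"
        if section:
            url += f"#{section}"
        links[headword] = url
    return links

print(wiktionary_links("Je ne sais pas.", "fr"))
```

Whether surfacing five such links at once is a usability nightmare, as you say, is exactly the open question; the link generation itself is trivial.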
I have found very few of these resources to be 100 percent accurate for French. I have seen French translations in Wiktionary that I have been unable to verify anywhere else. (That is troubling.) I have ended up with a hybrid model. I sometimes use ChatGPT, but take it with a grain of salt. I have seen ChatGPT flip-flop on difficult issues when I challenge its inconsistencies. I think ChatGPT is about 80 percent helpful; sometimes I just get clues.
Websites I trust, such as Lawless French, sometimes generalize the rules and leave out the exceptions and intricacies of French grammar. I bought Le Bon Usage (an older edition, so cheaper!). Even it waffles on the grammar rules at times. Reddit is a good resource for discussions of French grammar and usage. The point is that no one website, resource, or book will be a great teacher by itself. When I delve into something that doesn’t make sense, I use them all. Then, if I can’t figure it out, I post a question in the forum here or on Reddit. At that point, it often turns out that some of the grammar rules are in the process of changing, and you could say it a couple of different ways and sound OK.
When I was new on Clozemaster, I used the Explain button a lot; now, not so much. There were a few times when I was unable to find an explanation / translation anywhere else but under the Explain button. (For those tricky ones, I leave a note in the public comments about what the idiomatic expression was, rather than only in my own notes.)
I’m certainly OK with the Explain button being taken out. If that is done, there could be some explanation in the beginning instructions about where and how to find translations. That would be a good place to discuss the pros and cons of the different resources. Alternatively, the Explain button could be renamed and linked to a page on Clozemaster that talks about how to translate and what resources might be used. We might get a few more questions in the forums, but that is OK too.
Isn’t it optional? As are other translators.