I only skimmed this thread before, but I've come back to it now to give my full support to David's original point.
Having gone through 3,500 Ger–Eng sentences since January, I've noticed a pattern: almost every time a sentence has 4+ comments, it's someone asking a question and then someone else posting an "answer" that goes "ChatGPT said: blabla" with zero criticism or context. Sometimes there is no answer from an actual native (or advanced) speaker at all; sometimes there is a human answer, but someone has posted an AI gibberish response anyway, sometimes several.
But more importantly, and the reason this thread popped into my mind again: I learned this sentence today and was flabbergasted by the comments. If it were up to me I would have banned Peter on the spot, but that's quite a harsh measure to take when the platform itself uses the same technology to provide further "context" for the sentences. All I can say is that I am not in the least surprised that this thread was started by the very same David, nor that Peter has chimed in with further valuable remarks.
I get that the explanations can be a powerful asset when learning languages. I checked one before writing this, for context, and at least that one looked fine and comprehensive at a glance (not that I'd know whether it was correct). That is why I would like to propose a compromise that lets you both keep the feature and improve on it:
First, when a new explanation is generated (which, I noticed, you thankfully only do once per sentence), hide which language model was used and instead display, big and bold: "This explanation was generated by an LLM and must be taken with a huge grain of salt, as it might be completely wrong." This should make people less prone to deifying the tech and to the magical thinking around AI. We don't need to know which model was used, but we do need to know to be cautious of it.
Second, make the explanations available somewhere so that natives can check any new ones and either mark them as fine or edit them (or discard them) if needed. For explanations approved by a human, you could instead display "Explanation approved by [user]" at the bottom, and even show the Explain button in a different colour to indicate that the sentence has a properly vetted explanation.
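To make the proposal concrete, here is a rough sketch of how the generate-once-then-review flow could work. Everything in it (the `ExplanationStore` class, the status names, the method signatures) is my own invention for illustration, not anything the platform actually uses:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, Optional

class Status(Enum):
    PENDING = "pending"      # freshly generated, not yet reviewed
    APPROVED = "approved"    # checked (and possibly edited) by a native speaker
    DISCARDED = "discarded"  # reviewer rejected it

@dataclass
class Explanation:
    text: str
    status: Status = Status.PENDING
    approved_by: Optional[str] = None

class ExplanationStore:
    """Hypothetical store implementing the two proposals above."""

    def __init__(self) -> None:
        self._by_sentence: Dict[int, Explanation] = {}

    def generate(self, sentence_id: int, llm) -> Explanation:
        # Generate at most once per sentence, as the site already seems to do.
        if sentence_id not in self._by_sentence:
            self._by_sentence[sentence_id] = Explanation(text=llm(sentence_id))
        return self._by_sentence[sentence_id]

    def approve(self, sentence_id: int, reviewer: str,
                edited_text: Optional[str] = None) -> None:
        # A native speaker signs off, optionally fixing the text first.
        exp = self._by_sentence[sentence_id]
        if edited_text is not None:
            exp.text = edited_text
        exp.status = Status.APPROVED
        exp.approved_by = reviewer

    def banner(self, sentence_id: int) -> str:
        # What the reader sees next to the explanation.
        exp = self._by_sentence[sentence_id]
        if exp.status is Status.APPROVED:
            return f"Explanation approved by {exp.approved_by}"
        return ("This explanation was generated by an LLM and must be taken "
                "with a huge grain of salt, as it might be completely wrong.")
```

The point of the sketch is just that unreviewed explanations always carry the warning banner, and the banner only changes once a human has taken responsibility for the text.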