Re-run “Explain” with new LLM models

I’ve been a regular user of the “Explain” feature. It’s been greatly beneficial to my learning: even though I know it can be inaccurate, it’s been a huge value add for me. However, it seems the responses were generated some time ago with legacy models. Today’s models from Anthropic / OpenAI / Google are far better at these tasks, and re-running the explanations would greatly improve their quality.

Additionally, or as an alternative, I’d suggest adding an option to customize the prompt (so many responses start with “Sure!” and are padded with fluff) and letting users hook up their own API key for a model of their choice. That way, you can offload the cost and provide the necessary customization.

Thanks in advance!


The statement made here

“As far as environmental concerns, our usage of ChatGPT is quite minimal, and we will continue to minimize our use of it”

would no longer be true if either of your requests were implemented:

  • re-generating explanations for tens of thousands of sentences every time the generating models come to be considered legacy
  • instead of the 1 generated explanation per sentence that all users share, enabling $n+1$ generated explanations per sentence if $n$ different users each use a prompt of their own (and presumably each user would also first try out $m$ different prompts on a sentence before settling on one they like)
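To make the scale difference concrete, here is a minimal back-of-the-envelope sketch. The function name and all the numbers are hypothetical, chosen only to illustrate the counting argument above (1 shared explanation per sentence versus per-user prompts with trial runs):

```python
def total_generations(sentences: int, users: int, trial_prompts: int) -> tuple[int, int]:
    """Compare generation counts for the two schemes (illustrative only).

    shared: one cached explanation per sentence, reused by everyone.
    custom: the shared one, plus each user generating trial_prompts
            explanations per sentence while tuning their own prompt.
    """
    shared = sentences
    custom = sentences * (1 + users * trial_prompts)
    return shared, custom

# Hypothetical numbers: 50,000 sentences, 100 users, 3 trial prompts each.
shared, custom = total_generations(50_000, 100, 3)
# shared = 50,000 generations; custom = 15,050,000 generations
```

Even with modest assumed numbers, per-user prompting multiplies the generation count by a factor of $1 + n \cdot m$, which is the crux of the environmental objection.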