Some open-source, locally run LLMs can emulate key features of the OpenAI API so that they "appear" to an application as GPT. Could you add support for a custom API endpoint so we can experiment with this and see whether a model like WizardLM or Falcon is up to the task of generating commands?
I envision an input field in the Settings window where the default OpenAI API endpoint URL can be replaced with your own.
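For reference, a minimal sketch of what this could look like internally: since OpenAI-compatible servers (llama.cpp's server, text-generation-webui, vLLM, etc.) expose the same `/chat/completions` route, only the base URL needs to be configurable. The names below (`DEFAULT_BASE_URL`, `build_chat_request`) are hypothetical, not from the app's actual code.

```python
import json
import urllib.request

# Hypothetical setting backing the proposed Settings field;
# defaults to OpenAI's endpoint, replaceable by the user.
DEFAULT_BASE_URL = "https://api.openai.com/v1"

def build_chat_request(prompt, base_url=DEFAULT_BASE_URL,
                       model="gpt-3.5-turbo", api_key=""):
    """Build an OpenAI-compatible chat completion request.

    Swapping base_url for e.g. http://localhost:8000/v1 points the
    same request at a locally running, API-compatible model server.
    """
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    headers = {"Content-Type": "application/json"}
    if api_key:  # local servers often accept requests without a key
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(url, data=body, headers=headers)

# Pointing at a local server instead of OpenAI:
req = build_chat_request("list files", base_url="http://localhost:8000/v1")
```

Everything else in the request stays identical, which is why a single URL field in Settings should be enough.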