The original plan was to implement llmlite as a proxy for different models, but that role is largely filled by today's inference engines. Meanwhile, there are plenty of model arenas out there, such as lmarena.ai, yet what people really want is to use the best model for each use case. The goal, then, is to make llmlite a smart router that picks the right model for each scenario. In addition, agents like AlphaEvolve want to balance exploration and exploitation across different models, which is another good use case for llmlite.
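The exploration/exploitation idea could be sketched with a simple epsilon-greedy policy over candidate models. Everything below is an illustrative assumption, not llmlite's actual API: the class name, model names, and reward signal are all hypothetical.

```python
import random


class EpsilonGreedyRouter:
    """Toy explore/exploit router: usually picks the model with the best
    running-average reward, occasionally explores a random one.
    (Hypothetical sketch, not llmlite's real interface.)"""

    def __init__(self, models, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {m: {"score": 0.0, "n": 0} for m in models}

    def pick(self):
        # Explore with probability epsilon, otherwise exploit the best.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=lambda m: self.stats[m]["score"])

    def update(self, model, reward):
        # Incremental running average of the observed reward.
        s = self.stats[model]
        s["n"] += 1
        s["score"] += (reward - s["score"]) / s["n"]


# Usage: route a request, then feed back a quality score (e.g. from evals).
router = EpsilonGreedyRouter(["model-a", "model-b"], epsilon=0.2)
choice = router.pick()
router.update(choice, reward=1.0)
```

A real router would likely condition on the scenario (coding, summarization, etc.) by keeping separate statistics per task category, but the core explore/exploit loop stays the same.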