A/B Testing UX Enhancements #3039
AhmedHawater2003
started this conversation in
Ideas
Replies: 1 comment
Thank you for the detailed and thoughtful feature requests, @AhmedHawater2003 — we really appreciate the time you took to share this feedback. At the moment, we’ve temporarily removed the A/B testing feature so we can redesign and improve it. The points you raised align well with areas we’re exploring, and we’ll take these suggestions into consideration as we plan the initial version of the improved feature. Thanks again for your valuable input, and we’ll share updates once a new implementation is available.
My team and I find the A/B testing evaluation feature extremely valuable. However, we believe a few enhancements could significantly improve its usability. It would be great if the following could be considered for future updates:
- Allow different users to evaluate the same test independently. Currently, once one user answers a test, all other users see those answers and cannot start from scratch. This forces us to create duplicate copies of the same test for each user, increasing token usage and clutter.
- Support specifying model versions for A/B variants. It would be helpful to select specific model versions when choosing the A and B variants during test setup.
- Add an option to name tests. Naming tests would make organizing and managing multiple evaluations much easier.
- Provide a markdown preview for outputs. If a model response contains markdown, the current plain-text view makes evaluation harder. A rendered markdown preview would improve clarity.
Thank you so much for your great work and for considering these improvements!
Regards,