Projects (from [SyncLinear.com](https://synclinear.com)):

- Better filtering and references
- Repository of everything related to the product roadmap for Q3. Will be used to gather feedback, determine scope, and create the new projects
- All evaluation-related issues
- Ongoing project about bugs
Our users would like to enrich the test set from the playground. Often they create an LLM application, try it out in the playground, and then want to use the outputs they produced there to enrich the test set. For instance, say you are building an application that creates CVs from job descriptions. Currently, to build a test set you have to take each job description, write a CV by hand, and add it to the test set. A better workflow is to create an LLM app that generates CVs from job descriptions, play around with the parameters, and whenever you like an output, click to add it to the test set, perhaps editing the LLM's answer a little first. The same goes for an entity-extraction workflow: rather than producing the ground truth manually, use the LLM to do the extraction, check the results, and add them to the test set when you are happy. In short, get the LLM to help you build the test set.
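The workflow above can be sketched as follows. `generate_cv` and `add_to_test_set` are hypothetical stand-ins for the playground's LLM call and the proposed "add to test set" action; neither is an existing SDK function.

```python
# Hypothetical sketch of the playground-to-test-set flow described above.

def generate_cv(job_description: str) -> str:
    """Stand-in for the LLM app call made from the playground."""
    return f"CV drafted for: {job_description}"

test_set: list[dict] = []

def add_to_test_set(inputs: dict, output: str) -> None:
    """Save a reviewed (and possibly hand-edited) output as a ground-truth row."""
    test_set.append({"inputs": inputs, "expected_output": output})

draft = generate_cv("Senior data engineer, fintech")
edited = draft + " (edited by the user)"  # tweak the LLM's answer a little
add_to_test_set({"job_description": "Senior data engineer, fintech"}, edited)
```

The key point is that the user only reviews and lightly edits outputs instead of authoring ground truth from scratch.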
We would like to introduce more visibility into the runs happening in the playground: how much each run costs, which prompts are used, and what the intermediate outputs of each LLM call are. In the long run, we would like to track all of this data whenever the code is run.
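A minimal sketch of the per-run data this could capture; the record and field names here are assumptions for illustration, not a final schema.

```python
# Hypothetical per-run trace record for playground observability.
from dataclasses import dataclass, field

@dataclass
class RunTrace:
    run_id: str
    model: str
    prompt: str
    cost_usd: float = 0.0
    sub_outputs: list[str] = field(default_factory=list)  # outputs of intermediate LLM calls

traces: list[RunTrace] = []

def record_run(run_id: str, model: str, prompt: str,
               cost_usd: float, sub_outputs: list[str]) -> None:
    """Store one playground run so its cost and prompts can be inspected later."""
    traces.append(RunTrace(run_id, model, prompt, cost_usd, list(sub_outputs)))

record_run("run-1", "gpt-3.5-turbo", "Summarize the transcript",
           0.0021, ["chunk summary", "final summary"])
total_cost = sum(t.cost_usd for t in traces)
```

Aggregations like `total_cost` are what would let the UI show per-run and cumulative spend.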
Currently, our SDK for the LLM app only supports parameters of type TextParam (text) and FloatParam (float). We propose to expand this functionality by introducing a new parameter type, MultipleChoiceParam, which allows users to specify a discrete set of choices. For example, users could choose between different versions of GPT models. With MultipleChoiceParam, the user would use the SDK as follows:

```python
@post
def query(
    transcript: str,
    model: MultipleChoiceParam = MultipleChoiceParam(["gpt-3.5-turbo", "gpt-4"]),
    prompt: TextParam = DEFAULT_PROMPT,
) -> str:
```

In this context, the first value in the MultipleChoiceParam list serves as the default selection. Including a MultipleChoiceParam will cause the openapi.json file to contain an x-parameter entry holding the list of potential choices. It's important to note that the frontend interprets the parameters of the LLM app via these x-parameters. The relevant code that parses them is:

```typescript
export const parseOpenApiSchema = (schema: any): Parameter[] => {
    const parameters: Parameter[] = []
    // check if requestBody exists
    const requestBody = schema?.paths?.["/generate"]?.post?.requestBody
    if (requestBody) {
        const bodySchemaName = requestBody.content["application/json"].schema["$ref"]
            .split("/")
            .pop()
        // get the actual schema for the body parameters
        const bodySchema = schema.components.schemas[bodySchemaName].properties
        Object.entries(bodySchema).forEach(([name, param]: [string, any]) => {
            parameters.push({
                name: name,
                input: param["x-parameter"] ? false : true,
                type: param["x-parameter"] ? determineType(param["x-parameter"]) : "string",
                required: schema.components.schemas[bodySchemaName].required.includes(name),
                default: param.default,
            })
        })
    }
    return parameters
}
```

Introducing this parameter involves a number of tasks:

- Adding the parameter to the SDK
- Adding the parameter to the Frontend (playground)
- Adding the parameter to the Frontend (test set)
- Writing examples (and updating existing examples) with this parameter
- Updating the documentation to cover this parameter
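On the SDK side, one minimal sketch of MultipleChoiceParam, assuming it mirrors the existing string-based parameter types: a `str` subclass whose value defaults to the first choice and which keeps the full list around so the schema generator can emit it as an x-parameter. The schema-generation hook itself is omitted here, and the real SDK integration may differ.

```python
# Hypothetical sketch only: the actual SDK wiring (schema hooks, openapi.json
# emission) is not shown and may differ from this.

class MultipleChoiceParam(str):
    def __new__(cls, choices: list[str]):
        # The first value in the list serves as the default selection.
        obj = super().__new__(cls, choices[0])
        obj.choices = list(choices)  # kept for the x-parameter choice list
        return obj

model = MultipleChoiceParam(["gpt-3.5-turbo", "gpt-4"])
# `model` behaves as the default string while retaining the choice list
```

Because the object is still a `str`, existing code that treats parameters as plain strings keeps working unchanged.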