
feat(indexers): Added alternate scoring metrics #112

Open

rileyok-ons wants to merge 13 commits into main from 111-different-scoring-functions

Conversation

@rileyok-ons (Collaborator)

✨ Summary

Users may have use cases where they want to use different search metrics, e.g. dot product or cosine (normalised dot product). To support this, we have added a scoring method attribute to the vector store, which is accessed in search to calculate the desired metric.

The scoring logic has been abstracted out into a score method that returns scored output for a given query, using the vector store's chosen metric.
We have created a type alias of a Literal listing all metrics, for type hints and checking.
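As a rough illustration of the Literal alias and score method described above (the names `ScoringMetric` and `score` are guesses for illustration, not the PR's actual identifiers):

```python
import numpy as np
from typing import Literal

# Hypothetical alias: the PR's actual metric names may differ.
ScoringMetric = Literal["dotprod", "cosine"]

def score(query_vec: np.ndarray, stored: np.ndarray, metric: ScoringMetric) -> np.ndarray:
    """Score one query vector against a matrix of stored row vectors."""
    scores = stored @ query_vec  # raw dot product, shape (n_vectors,)
    if metric == "cosine":
        # Normalise both sides so the dot product becomes cosine similarity.
        norms = np.linalg.norm(stored, axis=1) * np.linalg.norm(query_vec)
        scores = scores / norms
    return scores
```

Dispatching on a Literal keeps the set of valid metrics checkable by the type checker without a separate enum.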

📜 Changes Introduced

  • Feature implementation (feat:)
  • Updates to tests and/or documentation
  • Terraform changes (if applicable)

✅ Checklist

Please confirm you've completed these checks before requesting a review.

  • Code passes linting with Ruff
  • Security checks pass using Bandit
  • API and Unit tests are written and pass using pytest
  • Terraform files (if applicable) follow best practices and have been validated (terraform fmt & terraform validate)
  • DocStrings follow Google-style and are added as per Pylint recommendations
  • Documentation has been updated if needed

🔍 How to Test

Run a search with each of the different metrics and compare the resulting scores.

@rileyok-ons rileyok-ons linked an issue Jan 23, 2026 that may be closed by this pull request
@rileyok-ons rileyok-ons marked this pull request as ready for review January 26, 2026 12:51
return result_df

def search(self, query: VectorStoreSearchInput, n_results=10, batch_size=8) -> VectorStoreSearchOutput:
def _check_norm_vdb(self):
Collaborator

I like this functionality a lot, but I think it should be the vectoriser's job to output embeddings in the desired form, not the vector store changing them after the fact.
My preference would be to update the Vectorisers' .transform() methods to take an optional (default False) normalise argument, which applies this normalisation if set to True.
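A sketch of what that optional argument might look like, assuming transform() returns a 2-D NumPy array of row embeddings (the class and method bodies here are illustrative, not the package's actual API):

```python
import numpy as np

class Vectoriser:
    """Illustrative base class; the real Vectoriser API may differ."""

    def _embed(self, texts: list[str]) -> np.ndarray:
        raise NotImplementedError

    def transform(self, texts: list[str], normalise: bool = False) -> np.ndarray:
        embeddings = self._embed(texts)
        if normalise:
            # Scale each row to unit length so dot product == cosine similarity.
            norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
            embeddings = embeddings / norms
        return embeddings
```

With the default False, existing callers see no behaviour change.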

Contributor

I disagree slightly here.

An informed user, who knows their embedding model already outputs normalised embeddings, should then be able to just use the dot product metric, which would give them the effect of cosine similarity without the extra norm checks and steps they would need if they selected the cosine metric.

Also, I think it's a good idea to keep the vectorisers pure and not overcomplicate their argument logic, whereas the vector store, being responsible for housing, reloading, and metric calculations of the vectors, probably should keep a note of whether the vectors are normalised or not.

Collaborator

An informed user, who knows their embedding model already outputs normalized embeddings, should then be able to just use the dotproduct metric, which would give them the effects of cosine similarity without having to do the extra norm checks and steps they would need if they set to a cosine metric.

I'm not sure I follow what you mean; if a user knows their embedding model already outputs normalised embeddings, they could just not set the normalise flag when creating the Vectoriser.

also i think its a good idea to keep the vectorisers pure and not overcomplicate the logic argument logic

This is an operation that happens directly on the vectors, a step before any use in a vector store or scoring. I think it fits in well with the task of the Vectoriser, and avoids the other issues you discussed - such as any need to duplicate vectors in the vector store and set/read metadata flags about whether the vector store is normalised.

Let's talk about it in our call later 👍

Contributor

@frayle-ons frayle-ons left a comment

Looks really good so far! Very elegant logic that could be extended to many metrics or eventually replaced with something like FAISS.

Just one thing about how we record and reload the vector store, and how we track whether it's already normalised or not.

vector_store.vectoriser_class = metadata["vectoriser_class"]
vector_store.hooks = {}

vector_store._check_norm_vdb()
Contributor

I'd favour a 'normalise once' approach:

  1. When the VDB is being constructed by _create_vector_store_index(), it checks if the user specified a metric that requires normalised vectors, normalises the created collection, and then saves it to the polars df/parquet file.
  2. Then we'd record the 'metric' used in the metadata file.
  3. When the parquet is loaded back in with from_filespace(), we already know to use the appropriate metric, as it's stored in the metadata file, and there's no need to redo the normalisation.

So I'd also take the 'metric_setting' parameter out of the class method from_filespace() and rely just on the metadata file.

This would mean fewer operations every time we load the vector store in after initial creation, potentially at the cost of losing the magnitude information and not being able to get it back without running the build step again with a different metric.
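The 'normalise once' flow above could record the metric alongside the parquet roughly like this (a minimal sketch; save_metadata, load_metric, and the JSON layout are illustrative, not the package's actual from_filespace() behaviour):

```python
import json
from pathlib import Path

def save_metadata(path: Path, metric: str, normalised: bool) -> None:
    """Record the chosen metric so reloads need no re-normalisation."""
    path.write_text(json.dumps({"metric": metric, "normalised": normalised}))

def load_metric(path: Path) -> tuple[str, bool]:
    """Read back the metric and normalisation flag at load time."""
    meta = json.loads(path.read_text())
    return meta["metric"], meta["normalised"]
```

At from_filespace() time the loader would call load_metric() instead of accepting a metric parameter.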

Collaborator Author

@rileyok-ons rileyok-ons Jan 28, 2026

Resolved by adding a normalise metadata field; if cosine is chosen with un-normalised vectors, they will be normalised, but the user will be warned.

@rileyok-ons rileyok-ons changed the title 111 different scoring functions feat(indexers): Added alternate scoring metrics Jan 28, 2026
@github-actions github-actions bot added the enhancement New feature or request label Jan 28, 2026
Contributor

@frayle-ons frayle-ons left a comment

Propose we rework to have only 2 metrics - Inner product and L2 distance.

Inner product is the more general class of operation - it would imply cosine similarity is happening when IP is performed on normalised vectors, and that dotproduct is happening when the vectors aren't normalised.

Similar case for L2.

So we would have just two metrics for now: ['IP', 'L2']. And then we should just not touch the vectorisers module at all for now, and let users make their own custom vectoriser if they really want to normalise, rather than just using the base GCP or Huggingface models etc.

This would be more in line with Faiss semantic search process of being agnostic about Vectoriser behaviour (it only assumes you're correctly using the same vectoriser), and may help us integrate that library down the road:

https://www.pinecone.io/learn/series/faiss/composite-indexes/
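For reference, the two proposed metrics reduce to very small NumPy operations (a sketch; the function names are illustrative):

```python
import numpy as np

def ip_scores(query: np.ndarray, stored: np.ndarray) -> np.ndarray:
    # Inner product; equals cosine similarity when both sides are unit-norm.
    return stored @ query

def l2_scores(query: np.ndarray, stored: np.ndarray) -> np.ndarray:
    # Euclidean distance to each stored row (lower is more similar).
    return np.linalg.norm(stored - query, axis=1)
```

Whether IP behaves as dot product or cosine is then entirely determined by the vectors the user's vectoriser produced, keeping the store agnostic.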

Contributor

What if we scrapped all 6 of these and just had ['IP', 'L2'].

Contributor

I think L2 squared and IP squared should be a downstream postprocessing hook, as it's just a common scoring operation.

Collaborator Author

I've seen this suggested previously; if we want this, I can sort it.

Collaborator

I'm happy with that plan 👍

I'd like if we added an example to one of the notebooks showing a way of wrapping one of the Vectorisers to add normalisation though, to tide users over until we properly offer normalisation as an option.

I can add that to this PR tomorrow, if nobody objects.

Collaborator

What if we scrapped all 6 of these and just had ['IP', 'L2'].

Would you be okay with renaming 'IP'->'dot' for this? I think 'dot' would be more easily understood by users via docstrings without needing to explore documentation etc. to find out / confirm IP = Inner Product

Contributor

I would suggest we just leave it: if users really want it, they can make their own custom vectoriser that wraps the Hugging Face vectoriser. But if you wanted to, you could update the custom_vectoriser demo notebook to have a section on this, showing how to do it with the Hugging Face class?
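A notebook section along those lines might sketch the wrapper like this (NormalisedVectoriser is a hypothetical name; it only assumes the wrapped object exposes a .transform() returning row vectors):

```python
import numpy as np

class NormalisedVectoriser:
    """Hypothetical wrapper: delegates to any vectoriser exposing .transform()
    and unit-normalises each row of its output."""

    def __init__(self, base):
        self.base = base

    def transform(self, texts):
        embeddings = np.asarray(self.base.transform(texts), dtype=float)
        return embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
```

Because it delegates rather than subclasses, the same wrapper works for any vectoriser implementation without the package having to ship a normalised variant of each class.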

Collaborator

We do already have one user group requesting this functionality (and currently using a custom wrapped HF Vectoriser to achieve it), so I think it is worth adding to the docs.

Contributor

So... they did use a custom vectoriser? 😀

Contributor

Not sure what the correct answer is. We definitely don't want to be adding a variant of every Vectoriser called VectoriserX_normalised, or a wrapper for each class. Maybe one utility wrapper that wraps round all our Vectoriser class implementations... but what are the benefits/trade-offs of that new class, which we'd have to document and keep compatible forever, versus guiding users in how to do it with our existing custom vectoriser / base class architecture?

Collaborator

Yes, one that I made for them as a one-off solution, since the package doesn't yet offer it. I'm saying it would be useful to have that knowledge made accessible in the documentation for other users.

Contributor

@frayle-ons frayle-ons left a comment

This is looking really good! Would just request we remove any logic or attributes for normalisation for now completely.

Leave it completely agnostic, so that whatever vectors the user's vectoriser produces determine whether cosine or dot product is performed.

In all cases use L2^2 = ||A||^2 + ||B||^2 - 2(a.b)

instead of L2^2 = 1 + 1 - 2(a.b)
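The general identity quoted above can be sanity-checked numerically on arbitrary (non-normalised) vectors; this is a standalone sketch, not code from the PR:

```python
import numpy as np

def l2_squared(a: np.ndarray, b: np.ndarray) -> float:
    # General expansion: works whether or not the vectors are normalised.
    return float(a @ a + b @ b - 2 * (a @ b))

a = np.array([1.0, 2.0])
b = np.array([3.0, 1.0])
# Agrees with the direct squared Euclidean distance.
assert np.isclose(l2_squared(a, b), np.sum((a - b) ** 2))
```

The 1 + 1 - 2(a·b) shortcut is only the special case where both norms are known to be 1.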


Labels

enhancement New feature or request


Development

Successfully merging this pull request may close these issues.

Different Scoring Functions

4 participants