Conversation

@shrutipatel31 (Contributor)
Summary: This diff adds a new property to the `Experiment` class called `is_preference_learning_problem`. The property returns `True` when the experiment is a preference learning (BOPE) experiment, i.e. when its optimization config is a `PreferenceOptimizationConfig` or a `PE_EXPERIMENT` (preference exploration) auxiliary experiment is attached. This makes it easy to identify preference learning experiments in Ax.

Differential Revision: D87347126

meta-cla bot added the CLA Signed label Nov 18, 2025
meta-codesync bot commented Nov 18, 2025

@shrutipatel31 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D87347126.


codecov-commenter commented Nov 18, 2025

Codecov Report

❌ Patch coverage is 95.16129% with 6 lines in your changes missing coverage. Please review.
✅ Project coverage is 96.52%. Comparing base (e26c9d7) to head (d10b696).

Files with missing lines        | Patch % | Lines
ax/service/utils/best_point.py  | 85.36%  | 6 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4553      +/-   ##
==========================================
+ Coverage   96.50%   96.52%   +0.01%     
==========================================
  Files         557      557              
  Lines       57359    57479     +120     
==========================================
+ Hits        55356    55480     +124     
+ Misses       2003     1999       -4     


shrutipatel31 added a commit to shrutipatel31/Ax that referenced this pull request Nov 18, 2025
…k#4553)

Summary:

This diff adds a new property to the `Experiment` class called `is_preference_learning_problem`. This property checks if the experiment is a preference learning (BOPE) experiment by checking if the optimization config is a `PreferenceOptimizationConfig` or if there is a `PE_EXPERIMENT` (preference exploration) auxiliary experiment attached. This property is useful for identifying preference learning experiments in Ax.

Differential Revision: D87347126
@shrutipatel31 shrutipatel31 force-pushed the export-D87347126 branch 2 times, most recently from a4d06e6 to 32a0cae Compare November 20, 2025 04:48
shrutipatel31 added a commit to shrutipatel31/Ax that referenced this pull request Nov 20, 2025
Summary:

This diff updates the `get_trace` function in `ax/service/utils/best_point.py` to support preference learning (BOPE) experiments with `PreferenceOptimizationConfig`.

When a BOPE experiment has an associated PE_EXPERIMENT auxiliary experiment with preference data, `get_trace` now:
1. Fits a PairwiseGP preference model to the PE_EXPERIMENT data
2. Uses the learned preference model to predict utility values for each arm's metric values
3. Returns a trace based on predicted utilities

If the PE_EXPERIMENT is missing or has no data, the function gracefully falls back to standard hypervolume computation for multi-objective optimization.

Differential Revision: D87347126
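Once the fitted preference model has produced a utility value per arm (steps 1–2 above), step 3 reduces to a running-best trace over those utilities. A minimal sketch of that last step, with the fallback described above; the helper names here are illustrative, and the actual diff fits a BoTorch `PairwiseGP` to the PE_EXPERIMENT comparisons to obtain the utilities:

```python
from math import inf


def utility_trace(predicted_utilities: list[float]) -> list[float]:
    """Running best of the model-predicted utilities, one entry per arm.

    `predicted_utilities` stands in for step 2 above: the output of a
    fitted preference model (PairwiseGP in the actual diff) evaluated at
    each arm's metric values. Step 3 is the cumulative max below.
    """
    trace, best = [], -inf
    for u in predicted_utilities:
        best = max(best, u)
        trace.append(best)
    return trace


def get_trace_with_fallback(predicted_utilities, hypervolume_trace):
    # If the PE_EXPERIMENT is missing or has no data, no utilities can
    # be predicted; fall back to the standard hypervolume trace.
    if predicted_utilities:
        return utility_trace(predicted_utilities)
    return hypervolume_trace
```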
@shrutipatel31 shrutipatel31 force-pushed the export-D87347126 branch 2 times, most recently from ad9f25a to 90c23c2 Compare November 20, 2025 15:43
@shrutipatel31 shrutipatel31 force-pushed the export-D87347126 branch 2 times, most recently from d0bf970 to f67ae09 Compare November 23, 2025 19:22
…book#4566)

Summary:

`get_trace` will be used to compute the trace for the new UtilityProgression analysis (part of the AxSweep plots migration to Ax Analyses). The following changes enable trace computation for the new analysis:
1. Add a flag to the `get_trace` helper: `status_quo` must not be filtered from the trace, to match what AxSweep currently shows.
2. Optimize `get_trace` to skip `derelativize_opt_config` when there are no relative constraints. This avoids unnecessary data pivoting that fails with duplicate (trial_index, arm_name) entries, which are common in multi-objective experiments where each arm has multiple metrics. Without this fix, the pandas pivot operation raises `ValueError: Index contains duplicate entries, cannot reshape`, causing `get_trace` to fail. Since derelativization is only needed for constraints that are relative to a status quo baseline, it is safe to skip when all constraints are absolute.

Differential Revision: D87516104
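The guard in change 2 above can be sketched like this. The stub dataclasses and the `maybe_derelativize` wrapper are hypothetical; only the `derelativize_opt_config` name and the "skip when all constraints are absolute" logic come from the summary:

```python
from dataclasses import dataclass, field


@dataclass
class OutcomeConstraint:
    metric_name: str
    relative: bool = False  # relative-to-status-quo vs. absolute bound


@dataclass
class OptimizationConfig:
    outcome_constraints: list = field(default_factory=list)


def derelativize_opt_config(opt_config, experiment=None):
    # Placeholder for the expensive path described above: it pivots the
    # experiment data to convert relative constraints into absolute ones,
    # and is where the duplicate-index ValueError can arise.
    raise NotImplementedError


def maybe_derelativize(opt_config, experiment=None):
    # Derelativization is only needed when at least one constraint is
    # relative to the status quo; otherwise skip the data pivot that can
    # fail on duplicate (trial_index, arm_name) entries.
    if any(c.relative for c in opt_config.outcome_constraints):
        return derelativize_opt_config(opt_config, experiment)
    return opt_config
```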
Labels: CLA Signed, fb-exported, meta-exported
