Allow for different metrics in NPV calculation #1027
Conversation
Codecov Report
❌ Patch coverage is […]
Additional details and impacted files
@@ Coverage Diff @@
##             main    #1027      +/-   ##
==========================================
- Coverage   81.78%   81.45%   -0.34%
==========================================
  Files          53       53
  Lines        7254     7322      +68
  Branches     7254     7322      +68
==========================================
+ Hits         5933     5964      +31
- Misses       1026     1063      +37
  Partials      295      295
let (metric_precedence, metric) = match annual_fixed_cost.value() {
    // If AFC is zero, use total surplus as the metric (strictly better than nonzero AFC)
    0.0 => (0, -profitability_index.total_annualised_surplus.value()),
    // If AFC is non-zero, use profitability index as the metric
    _ => (1, -profitability_index.value().value()),
I'm not fully convinced by this. The profitability index is dimensionless, but the annualised surplus is Money. Even though it doesn't matter from the typing perspective, since you are getting the underlying value (a float) in both cases, I wonder whether this choice makes sense from a logic perspective.
Not that I have a better suggestion.
// calculate metric and precedence depending on asset parameters
// note that metric will be minimised so if larger is better, we negate the value
let (metric_precedence, metric) = match annual_fixed_cost.value() {
Thinking again on this, I think this logic (whatever it becomes, see my other comment) should live within ProfitabilityIndex.value, along with a ProfitabilityIndex.precedence method that returns 0 or 1 depending on the value of AFC.
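For illustration, a minimal sketch of what that suggestion could look like. The method names, plain f64 fields and the "negate so it can be minimised" convention are assumptions for the sketch, not code from the PR:

// Hypothetical sketch only: assumes ProfitabilityIndex stores its annualised
// components as plain f64 values for illustration.
pub struct ProfitabilityIndex {
    pub total_annualised_surplus: f64,
    pub annualised_fixed_cost: f64,
}

impl ProfitabilityIndex {
    /// Precedence of the metric: 0 when AFC is zero (surplus is used), 1 otherwise.
    pub fn precedence(&self) -> u8 {
        if self.annualised_fixed_cost == 0.0 { 0 } else { 1 }
    }

    /// Value to be minimised: negated surplus when AFC is zero, negated index otherwise.
    pub fn value(&self) -> f64 {
        if self.annualised_fixed_cost == 0.0 {
            -self.total_annualised_surplus
        } else {
            -(self.total_annualised_surplus / self.annualised_fixed_cost)
        }
    }
}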
tsmbland left a comment
I think this is ok, I have an alternative suggestion though.
The idea would be to introduce a new trait for appraisal metrics:
pub trait MetricTrait {
    fn value(&self) -> f64;
    fn compare(&self, other: &Self) -> Ordering;
}

pub struct AppraisalOutput {
    pub metric: Box<dyn MetricTrait>,
    // ...
}
You could add this trait to your ProfitabilityIndex struct, and add a custom compare method here. You'd also have to make an equivalent struct for LCOX - it would probably be very simple, although there may be some edge cases we haven't thought of yet. I think this would help to contain the comparison logic and make the code cleaner. We'd also no longer have to make the profitability index negative as the custom compare method could be written to look for the maximum - I always found this a bit hacky and it makes the output files confusing
Or this: […]
I think there are various pros and cons of each option which I don't fully understand. I think possibly the latter is better if you don't need to store […]
alexdewar left a comment
I've taken the liberty of jumping in for a review here @Aurashk 🙃. Hope that's ok!
I agree with @tsmbland's suggestion that it would be better to use traits for this instead though -- I just think it will make it a bit clearer and more maintainable.
I'm wondering if it might be best to define a supertrait instead (that's just an alias for a combination of traits). In our case, we just need things which can be compared (Ord) and written to file (Serialize). We did talk about having an Ord implementation for unit types (#717) and I think I've actually done that somewhere, but didn't open a PR as we didn't need it; I can open one if that would be useful! Unit types would then automatically satisfy the supertrait.
I think the problem with having a value() method returning f64, as @tsmbland suggested, is that it wouldn't be obvious which value was being returned for the NPV case.
E.g.:
trait ComparisonMetric: Ord + Serialize {}

pub struct AppraisalOutput {
    pub metric: Box<dyn ComparisonMetric>,
    // ...
}

What do people think?
/// Where there is more than one possible metric for comparing appraisals, this integer
/// indicates the precedence of the metric (lower values have higher precedence).
/// Only metrics with the same precedence should be compared.
pub metric_precedence: u8,
I'd probably make this a u32 instead. I know we won't ever need more than 256 different values here (if we did, that would be horrible!), but I think it's best to use 32-bit integers as a default, unless there's a good reason not to.
// Calculate profitability index for the hypothetical investment
let annual_fixed_cost = annual_fixed_cost(asset);
if annual_fixed_cost.value() < 0.0 {
    bail!("The current NPV calculation does not support negative annual fixed costs");
Can this actually happen? I'm struggling to think... @tsmbland?
If it's more of a sanity check instead (still worth doing!) then I'd change this to an assert! instead.
If it is then something has gone badly wrong! Agree, better to change to assert!
From my understanding of this comment from Adam, it may be possible in the future. And in that case a negative AFC isn't necessarily wrong, just the resulting profitability index will be wrong.
Ah ok. I think I'd still make it a panic! for now though, because if it happens it's a coding bug, not a user-facing error.
PS -- this isn't super important, but for the NPV metric, I'd include the numerator and denominator in the output CSV file (e.g. "1.0/2.0" or something) rather than just saving one of the two values. We want users to be able to see why the algo has made whatever decisions it has.
Actually, on reflection, my suggestion won't work after all 😞. I still think we should use traits, but it's not as simple as adding a single supertrait. The problem is that I think what you want is something like this:

pub trait AppraisalMetric {
    fn compare(&self, other: &dyn Any) -> Ordering;
    // ...
}

impl AppraisalMetric for LCOXMetric {
    fn compare(&self, other: &dyn Any) -> Ordering {
        let other = other.downcast_ref::<Self>().expect("Cannot compare metrics of different types");
        // ...
    }
}

(This assumes you have an […].) I still think it might make sense to have a supertrait which is […]. It's a little convoluted but that's the downside of using a statically typed language 😛
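For anyone unfamiliar with the pattern being described, here is a minimal, self-contained sketch of comparing trait objects by downcasting through Any. The type and method names are placeholders, not the PR's actual API:

use std::any::Any;
use std::cmp::Ordering;

// Placeholder metric type for illustration only.
struct LcoxMetric {
    cost: f64,
}

trait AppraisalMetric: Any {
    fn compare(&self, other: &dyn Any) -> Ordering;
    // Needed so callers can obtain a `&dyn Any` from a trait object for downcasting.
    fn as_any(&self) -> &dyn Any;
}

impl AppraisalMetric for LcoxMetric {
    fn compare(&self, other: &dyn Any) -> Ordering {
        let other = other
            .downcast_ref::<Self>()
            .expect("Cannot compare metrics of different types");
        // Lower cost is better, so a smaller cost compares as Less.
        self.cost.partial_cmp(&other.cost).unwrap()
    }

    fn as_any(&self) -> &dyn Any {
        self
    }
}

fn main() {
    let a = LcoxMetric { cost: 1.0 };
    let b = LcoxMetric { cost: 2.0 };
    assert_eq!(a.compare(b.as_any()), Ordering::Less);
}

The as_any method is the usual workaround for getting a &dyn Any out of a trait object so that downcast_ref can be used inside compare.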
Btw, I know I'm a bit late to the party on this one, but I'm not sure about the […]. I'm not sure why we need to ensure consistent ordering for assets with identical appraisal outputs. If they're identical -- or approximately identical -- then, by definition, we should just return Ordering::Equal. I'm only raising it because I think the added complexity will make @Aurashk's life harder here and I can't really see what benefit it brings. Is there something I'm missing?
It's not actually that unlikely:
I agree with you that it feels a little hacky for identical metrics not to return Ordering::Equal.
Ok, good point.
Can I just check what the motivation for this is? On reflection, I'm guessing that the idea was that it would make it easier to figure out why a particular asset was chosen over another. Is that right? If so, that seems reasonable. Initially I was thinking that it was to make choosing between two assets with identical metrics less arbitrary, which I'm less convinced about. A lot of things in MUSE2 are arbitrary, e.g. how HiGHS distributes activity across time slices, and the results fluctuate as we change the code anyway, so it seemed overkill to try to make guarantees to users about this when we can't do the same for so much of the rest of the model.
Anyway, in terms of the code, I think the problem is that it's not a good separation of concerns. It would be better if the […]
That's part of it at least. E.g. before working on this I hadn't considered that multiple existing assets from different commission years might have the same metric. If it's going to have to pick one over the other, I'd at least like some consistency so that the decision is explainable.
I don't think we can/should try to guarantee to users that the model is completely unarbitrary, but we can still do our best, and if there's an easy way to make certain behaviours just a little bit more predictable/explainable then I don't think there's an excuse not to.
I think that's a good idea, do you want to give it a try?
I think your other point is that the warning we're currently raising, if it ultimately does have to make an arbitrary decision, isn't really something users should have to be concerned about. I think that's fair, so happy if you'd rather change that to a debug message.
@tsmbland Ok cool. Seems like we're on the same page now. I'll have a go at the refactoring.
Sorry, just catching up with the discussion. I could do with a bit more convincing on the traits approach to help me understand the benefits. I do see the elegance of it (particularly over the existing approach), but in the end we just have three possible comparable metrics - the LCOX one and the two NPV ones (profitability index and total annualised surplus) - and adding others would be a rare occasion. It seems like a metric for our purposes is always going to be a number and one of three (maybe more in the future) labels, we only need to compare like-for-like labels on the same logical path, and we are always comparing an f64 underneath.
Based on the requirements above, it feels overly general to make the metric an arbitrary data type that has the property of being comparable. If the implementation was really simple I might feel differently, but it feels like you're having to navigate through too much abstraction for a relatively simple bit of logic. I realise I'm outvoted on this and happy to go with the majority, just want to understand the reasoning better.
You're right that we won't be adding new metrics all that often. There is an open issue to add support for "equivalent annual cost" (#524), but I guess the output of that will be similar to the other two. I think the main rationale for using traits was to have clearer code with better separation of concerns, so you'd have a dedicated function to handle the logic for comparing NPV outputs (for example). Another upside is that we could represent these metrics in more intuitive ways in the output file (currently we output negative NPV, which is potentially pretty confusing), but I don't think this is super important. I hear what you're saying about it being clunky and overcomplex though. Another way to do this would be to keep your current approach, but have a generic […]
I'd probably still favour the traits approach, for the reasons that @alexdewar mentioned, but if it's too much work then don't worry about it (I didn't think it would be too difficult when I suggested it here, but maybe I'm missing something). We're also going to have to revisit this when we come to allow multiple objectives, so no point over-engineering things right now.
I don't think it's actually too difficult -- there's just slightly more boilerplate. Maybe try it and see how it looks @Aurashk? If it's a pain, let me know and I can try to help or we can go with the other approach.
Good point!
I've attempted the traits approach and the implementation isn't too tricky. For me I still think it's nicer to just pick the right number (metric) to compare rather than implement compare logic for each case, but I think this approach is also fine and not too hard to follow. Also wondering if I can simplify the NPV compare method further. Only thing I did want to flag is that I commented out this line
because it panics on the two_regions regression test. Are we happy to let the LCOX metric be NaN in certain cases or is it something we need to address?
This is definitely something we need to address. That said, there shouldn't be any changes to the LCOX calculation here, and we were flagging NaNs before (https://github.com/EnergySystemsModellingLab/MUSE2/pull/1027/changes#diff-54801bfe535cb1590279090e9416267757db256e2864cfc0bae0a9efa764a066L51), so not sure why this would break all of a sudden - do you understand what's going on?
Ah that's strange, I'll have a closer look at this now
We weren't observing the NaNs before for that model because it's caused by a 0.0/0.0 in the lcox() method, and zero-capacity assets get filtered out (MUSE2/src/simulation/investment.rs, line 731 in 5f5b59b) before compare_metric is called. Since they are ignored they shouldn't impact the results.
I'm wondering if with this new abstraction we should add a […]
I wouldn't add another method for this, because that would kind of imply that whether or not two outputs are comparable depends on the metric in use. Really, it's just zero-capacity assets that shouldn't be compared. Panicking if a function is called with inputs that don't make sense is ok. If we did want to do this, I think it would make more sense just to return an […]
Ok I've adopted the assertion in compare_metrics so we get the old behaviour |
tsmbland left a comment
Not a complete review yet as I'm still getting my head around this.
One comment I have is that you're wasting the units system a bit by overuse of .value() on unit types. For example here a better approach would be annual_fixed_cost >= MoneyPerCapacity(0.0), so we're checking and keeping track of the units. Also for cost in LCOXMetric, better to store this in its proper units (i.e. MoneyPerActivity) rather than converting to f64 when constructing the object, and adjust the code around this as necessary. Also here, no need to call .value() as we can do the comparisons on the unit types.
There's quite a few of these so worth having a proper look through.
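To illustrate the point, a small self-contained sketch using a stand-in newtype rather than MUSE2's actual unit types:

use std::cmp::Ordering;

// Stand-in for a units newtype like MoneyPerCapacity; illustration only.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct MoneyPerCapacity(f64);

fn main() {
    let annual_fixed_cost = MoneyPerCapacity(1.5);

    // Compare on the unit type directly rather than calling .value(),
    // so the units stay tracked by the type system.
    assert!(annual_fixed_cost >= MoneyPerCapacity(0.0));

    // Ordering also works without unwrapping the inner f64.
    assert_eq!(
        annual_fixed_cost.partial_cmp(&MoneyPerCapacity(2.0)),
        Some(Ordering::Less)
    );
}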
@Aurashk Are you ready for another review from me? Just tag me when you're ready
tsmbland left a comment
A couple more comments in addition to my earlier point about unit types.
I'll leave @alexdewar to comment on the more hardcore rusty stuff. It's a bit more convoluted than I had hoped, but I think that's mostly just a feature of the language rather than anything else
}

impl MetricTrait for NPVMetric {
    fn value(&self) -> f64 {
Bear in mind that this is the value that gets presented in the output file, and it's not ideal that this has a different meaning depending on the value of the denominator. I think we do need something like what Alex suggested here #1027 (comment)
If we're doing this then we probably don't even need a .value() implementation for MetricTrait
Agree it would be better to make it clear what the metric actually is, although I'm not totally clear on how you'd avoid having value altogether, since it could either be an LCOX metric or an NPV one?
We might just want a metric value() and a metric label() in the output file, with label implemented in the trait structs to return 'lcox', 'npv annualised surplus' and 'npv profitability index' accordingly. Unless it's useful seeing both values where the profitability index denominator is nonzero.
I think we should implement the Serialize trait for NPVMetric and LCOXMetric rather than having a method for getting a scalar like this. Then we can write the outputs in a custom way, without having to restrict ourselves to a single number, e.g. for NPV, we could write both the numerator and the denominator.
I think the easiest way to implement this is with a supertrait (without extra methods): https://doc.rust-lang.org/rust-by-example/trait/supertraits.html
So we'd have:
- A trait for comparing two metrics, like the current MetricTrait (maybe rename to ComparableMetric or something)
- A supertrait which is defined as: pub trait MetricTrait: ComparableMetric + Serialize {}
Does that make sense?
That said, adding custom Serialize implementations is something we could do later, so if you'd rather open an issue for it instead, feel free.
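As a rough illustration of the supertrait plus custom Serialize idea, here is a sketch. The struct, its field names and the blanket impl are assumptions for the sketch, not the PR's code; it assumes the serde crate:

use serde::ser::{Serialize, SerializeStruct, Serializer};

// Placeholder struct for illustration; the real NPVMetric wraps ProfitabilityIndex.
struct NpvMetric {
    total_annualised_surplus: f64,
    annualised_fixed_cost: f64,
}

impl Serialize for NpvMetric {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // Write both the numerator and denominator so users can see how the
        // comparison value was derived, rather than a single opaque number.
        let mut s = serializer.serialize_struct("NpvMetric", 2)?;
        s.serialize_field("total_annualised_surplus", &self.total_annualised_surplus)?;
        s.serialize_field("annualised_fixed_cost", &self.annualised_fixed_cost)?;
        s.end()
    }
}

// The supertrait pattern from the comment above: anything that is both
// comparable and serialisable automatically satisfies MetricTrait.
trait ComparableMetric {
    fn compare(&self, other: &Self) -> std::cmp::Ordering;
}

trait MetricTrait: ComparableMetric + Serialize {}
impl<T: ComparableMetric + Serialize> MetricTrait for T {}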
/// Calculate LCOX for a hypothetical investment in the given asset.
///
/// This is more commonly referred to as Levelised Cost of *Electricity*, but as the model can
/// include other flows, we use the term LCOX.
Why remove this comment?
Totally unintentional! Thanks for spotting that.
/// Represents the profitability index of an investment
/// in terms of it's annualised components.
#[derive(Debug, Clone, Copy)]
pub struct ProfitabilityIndex {
I'm wondering if we really need this now that we have NPVMetric. Could just have profitability_index return a tuple and store each value in NPVMetric. Not sure if that's better?
I generally prefer structs over tuples but don't feel strongly either way about this
Pull request overview
This PR addresses the issue where NPV calculations produce NaN when annual fixed costs are zero by implementing a trait-based metric system that allows different comparison strategies. When fixed costs are zero, the profitability index becomes infinite, so the implementation now compares investments based on total annualized surplus instead.
Key changes:
- Introduces a MetricTrait with implementations for LCOXMetric and NPVMetric that encapsulate comparison logic
- Modifies NPVMetric::compare() to handle zero and non-zero fixed cost cases differently, preferring zero fixed cost investments
- Refactors ProfitabilityIndex to store component values (total_annualised_surplus and annualised_fixed_cost) separately to enable the new comparison logic
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/simulation/investment/appraisal.rs | Introduces MetricTrait and concrete implementations (LCOXMetric, NPVMetric) with specialized comparison logic for zero fixed cost scenarios |
| src/finance.rs | Refactors profitability_index to return a ProfitabilityIndex struct containing component values instead of just the computed ratio |
| src/output.rs | Updates metric access to use the new .value() method on the MetricTrait |
| src/fixture.rs | Updates test fixture to use the new LCOXMetric type |
/// Trait for appraisal metrics that can be compared.
///
/// Implementers define how their values should be compared to determine
/// which investment option is preferable through the `compare` method.
pub trait MetricTrait: Any + Send + Sync {
    /// Returns the numeric value of this metric.
    fn value(&self) -> f64;

    /// Compares this metric with another of the same type.
    ///
    /// Returns `Ordering::Less` if `self` is better than `other`,
    /// `Ordering::Greater` if `other` is better, or `Ordering::Equal`
    /// if they are approximately equal.
    ///
    /// # Panics
    ///
    /// Panics if `other` is not the same concrete type as `self`.
    fn compare(&self, other: &dyn MetricTrait) -> Ordering;

    /// Helper for downcasting to enable type-safe comparison.
    fn as_any(&self) -> &dyn Any;
}
Copilot AI (Jan 8, 2026)
The MetricTrait design relies on runtime type checking via downcasting, which can panic if metrics of different types are compared. While this is documented in the trait, it would be safer to use an enum-based approach (e.g., enum Metric { LCOX(LCOXMetric), NPV(NPVMetric) }) which would provide compile-time type safety and eliminate the possibility of panics from type mismatches. The current design assumes all appraisals being compared use the same objective type, but this invariant is not enforced by the type system.
In case people are curious about this, I think it means an implementation without traits that looks like this:
// REMOVED: MetricTrait and its implementations for LCOXMetric and NPVMetric

// CHANGED: AppraisalOutput.metric field type
pub struct AppraisalOutput {
    // ... other fields unchanged ...
    /// The comparison metric to compare investment decisions
    pub metric: Metric, // Changed from: Box<dyn MetricTrait>
    // ... other fields unchanged ...
}

// CHANGED: compare_metric implementation
impl AppraisalOutput {
    pub fn compare_metric(&self, other: &Self) -> Ordering {
        assert!(
            !self.metric.value().is_nan() && !other.metric.value().is_nan(), // Changed: combined assertion
            "Appraisal metric cannot be NaN"
        );
        self.metric.compare(&other.metric) // Changed: direct method call instead of as_ref()
    }
}

// NEW: Metric enum replaces the trait-based approach
/// Enum representing different types of investment appraisal metrics.
///
/// This enum provides compile-time type safety when comparing metrics,
/// ensuring that only compatible metric types can be compared without
/// risk of runtime panics from type mismatches.
#[derive(Debug, Clone)]
pub enum Metric {
    /// Levelised Cost of X metric variant
    LCOX(LCOXMetric),
    /// Net Present Value metric variant
    NPV(NPVMetric),
}

impl Metric {
    /// Returns the numeric value of this metric for display or logging purposes.
    pub fn value(&self) -> f64 {
        match self {
            Metric::LCOX(lcox) => lcox.cost.value(),
            Metric::NPV(npv) => npv.value(),
        }
    }

    /// Compares this metric with another of potentially different type.
    ///
    /// Returns `Ordering::Less` if `self` represents a better investment than `other`,
    /// `Ordering::Greater` if `other` is better, or `Ordering::Equal` if they are
    /// approximately equal.
    ///
    /// # Panics
    ///
    /// Panics if attempting to compare metrics of different types (e.g., LCOX vs NPV),
    /// as this represents a logical error in the calling code where incompatible
    /// investment appraisals are being compared.
    pub fn compare(&self, other: &Self) -> Ordering {
        match (self, other) {
            (Metric::LCOX(self_lcox), Metric::LCOX(other_lcox)) => {
                self_lcox.compare(other_lcox)
            }
            (Metric::NPV(self_npv), Metric::NPV(other_npv)) => self_npv.compare(other_npv),
            _ => panic!(
                "Cannot compare metrics of different types: attempting to compare {:?} with {:?}",
                std::mem::discriminant(self),
                std::mem::discriminant(other)
            ),
        }
    }
}

// CHANGED: LCOXMetric - removed MetricTrait implementation, added private compare method
impl LCOXMetric {
    // ... new() unchanged ...

    /// Compares this LCOX metric with another LCOX metric.
    ///
    /// Returns `Ordering::Less` if `self` has lower cost (better),
    /// `Ordering::Greater` if `other` has lower cost (better),
    /// or `Ordering::Equal` if costs are approximately equal.
    fn compare(&self, other: &Self) -> Ordering {
        if approx_eq!(MoneyPerActivity, self.cost, other.cost) {
            Ordering::Equal
        } else {
            self.cost.partial_cmp(&other.cost).unwrap()
        }
    }
}

// CHANGED: NPVMetric - removed MetricTrait implementation, made value() and compare() private
impl NPVMetric {
    // ... new() unchanged ...

    /// Returns the numeric value used for comparison.
    ///
    /// When annual fixed cost is zero, the profitability index becomes infinite,
    /// so we return the total annualised surplus instead for meaningful comparison.
    fn value(&self) -> f64 { // Changed from pub to private
        // ... implementation unchanged ...
    }

    // ... is_zero_fixed_cost() unchanged ...

    /// Compares this NPV metric with another NPV metric.
    /// (Documentation unchanged)
    fn compare(&self, other: &Self) -> Ordering { // Changed from pub to private, signature changed
        // ... implementation unchanged ...
    }
}

// CHANGED: Return type wrapped in Metric enum
fn calculate_lcox(/* params unchanged */) -> Result<AppraisalOutput> {
    // ... unchanged until ...
    Ok(AppraisalOutput {
        // ... other fields unchanged ...
        metric: Metric::LCOX(LCOXMetric::new(cost_index)), // Changed: wrapped in enum
        // ... other fields unchanged ...
    })
}

// CHANGED: Return type wrapped in Metric enum
fn calculate_npv(/* params unchanged */) -> Result<AppraisalOutput> {
    // ... unchanged until ...
    Ok(AppraisalOutput {
        // ... other fields unchanged ...
        metric: Metric::NPV(NPVMetric::new(profitability_index)), // Changed: wrapped in enum
        // ... other fields unchanged ...
    })
}
So the Metric enum is like a union that can be either an NPVMetric or an LCOXMetric. I do think it makes a good point here about preferring compile-time type checking over run-time type checking.
It is an interesting point, but I think the traits approach will be a bit cleaner when we get to the custom Serialize implementations. Sure, you can still do this with an enum, but I think it'll probably look cleaner to separate these things out for the different metric types.
/// Returns true if this metric represents a zero fixed cost case.
fn is_zero_fixed_cost(&self) -> bool {
    self.profitability_index.annualised_fixed_cost == Money(0.0)
Copilot AI (Jan 8, 2026)
The zero fixed cost check uses direct equality comparison with Money(0.0), which may not be robust for floating-point values. Consider using an approximate equality check or a small epsilon threshold to handle potential floating-point precision issues, especially since other parts of the codebase use approx_eq! for such comparisons.
Suggested change:
-    self.profitability_index.annualised_fixed_cost == Money(0.0)
+    approx_eq!(Money, self.profitability_index.annualised_fixed_cost, Money(0.0))
@tsmbland @alexdewar I was wondering about this the other day too. Even though very small positive denominators don't break the profitability index calculation mathematically, is it practically more stable to still use surplus instead if the denominator is very small?
That's a good point. It probably makes sense to use approx_eq! instead, so that we don't get subtly different behaviours depending on small rounding errors (see also #893).
I don't think we need this as a separate helper method though, as it's only used in one place. See my comment below about refactoring.
pub fn value(&self) -> Dimensionless {
    self.total_annualised_surplus / self.annualised_fixed_cost
Copilot AI (Jan 8, 2026)
The value() method will produce infinity when annualised_fixed_cost is zero (division by zero). While the NPVMetric handles this by checking for zero fixed cost before calling value(), this creates a fragile API where callers must remember to check before calling. Consider adding documentation to this method warning about the division by zero case, or having the method return an Option or Result to make the potential for infinity explicit.
Suggested change:
-    pub fn value(&self) -> Dimensionless {
-        self.total_annualised_surplus / self.annualised_fixed_cost
+    ///
+    /// If the annualised fixed cost is zero, this will return positive infinity
+    /// (`Dimensionless(f64::INFINITY)`) to reflect an unbounded profitability index.
+    /// Callers that wish to treat a zero fixed cost as an error should check for
+    /// that condition before calling this method.
+    pub fn value(&self) -> Dimensionless {
+        if self.annualised_fixed_cost == Money(0.0) {
+            Dimensionless(f64::INFINITY)
+        } else {
+            self.total_annualised_surplus / self.annualised_fixed_cost
+        }
I addressed this by adding an assert instead; value() on the profitability index should never be called if the annualised fixed cost is zero.
Another way to do this would be to have value() return None in the case that AFC==0. That might compose nicely with the refactoring I suggested above. Up to you though -- panicking is also fine.
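A minimal sketch of that Option-returning alternative, with f64 stand-ins for the unit types (names and fields are illustrative, not the PR's code):

// Stand-in for the Dimensionless unit type; illustration only.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Dimensionless(f64);

struct ProfitabilityIndex {
    total_annualised_surplus: f64,
    annualised_fixed_cost: f64,
}

impl ProfitabilityIndex {
    /// Returns None when the annualised fixed cost is zero, so callers are
    /// forced to handle the "index undefined" case explicitly.
    fn value(&self) -> Option<Dimensionless> {
        if self.annualised_fixed_cost == 0.0 {
            None
        } else {
            Some(Dimensionless(self.total_annualised_surplus / self.annualised_fixed_cost))
        }
    }
}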
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
fn compare(&self, other: &dyn MetricTrait) -> Ordering {
    let other = other
        .as_any()
        .downcast_ref::<Self>()
        .expect("Cannot compare metrics of different types");

    // Handle comparison based on fixed cost status
    match (self.is_zero_fixed_cost(), other.is_zero_fixed_cost()) {
        // Both have zero fixed cost: compare total surplus (higher is better)
        (true, true) => {
            let self_surplus = self.profitability_index.total_annualised_surplus;
            let other_surplus = other.profitability_index.total_annualised_surplus;

            if approx_eq!(Money, self_surplus, other_surplus) {
                Ordering::Equal
            } else {
                other_surplus.partial_cmp(&self_surplus).unwrap()
            }
        }
        // Both have non-zero fixed cost: compare profitability index (higher is better)
        (false, false) => {
            let self_pi = self.profitability_index.value();
            let other_pi = other.profitability_index.value();

            if approx_eq!(Dimensionless, self_pi, other_pi) {
                Ordering::Equal
            } else {
                other_pi.partial_cmp(&self_pi).unwrap()
            }
        }
        // Zero fixed cost is always better than non-zero fixed cost
        (true, false) => Ordering::Less,
        (false, true) => Ordering::Greater,
    }
}
Copilot AI (Jan 8, 2026)
The critical new behavior of NPVMetric comparison with zero fixed costs lacks test coverage. The new logic handles three scenarios: both with zero fixed cost, both with non-zero fixed cost, and mixed cases. Only the panic behavior on calling .value() with zero cost is tested. Consider adding unit tests for NPVMetric.compare() that verify: 1) Comparison of two metrics with zero fixed cost correctly uses total surplus, 2) Comparison of two metrics with non-zero fixed cost correctly uses profitability index, and 3) Zero fixed cost metrics are correctly preferred over non-zero fixed cost metrics.
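Something along these lines would cover the three scenarios, sketched here against a simplified stand-in for NPVMetric with plain f64 fields (the real tests would go through the PR's actual types and approx_eq!):

use std::cmp::Ordering;

// Simplified stand-in for NPVMetric, for illustrating the test cases only.
struct NpvMetric {
    total_annualised_surplus: f64,
    annualised_fixed_cost: f64,
}

impl NpvMetric {
    fn compare(&self, other: &Self) -> Ordering {
        match (self.annualised_fixed_cost == 0.0, other.annualised_fixed_cost == 0.0) {
            // Both zero AFC: higher surplus is better (Less means "better" here).
            (true, true) => other
                .total_annualised_surplus
                .partial_cmp(&self.total_annualised_surplus)
                .unwrap(),
            // Both non-zero AFC: higher profitability index is better.
            (false, false) => {
                let self_pi = self.total_annualised_surplus / self.annualised_fixed_cost;
                let other_pi = other.total_annualised_surplus / other.annualised_fixed_cost;
                other_pi.partial_cmp(&self_pi).unwrap()
            }
            // Zero AFC always beats non-zero AFC.
            (true, false) => Ordering::Less,
            (false, true) => Ordering::Greater,
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn zero_fixed_cost_uses_surplus() {
        let a = NpvMetric { total_annualised_surplus: 2.0, annualised_fixed_cost: 0.0 };
        let b = NpvMetric { total_annualised_surplus: 1.0, annualised_fixed_cost: 0.0 };
        assert_eq!(a.compare(&b), Ordering::Less);
    }

    #[test]
    fn nonzero_fixed_cost_uses_profitability_index() {
        let a = NpvMetric { total_annualised_surplus: 4.0, annualised_fixed_cost: 2.0 };
        let b = NpvMetric { total_annualised_surplus: 3.0, annualised_fixed_cost: 1.0 };
        // b has the higher index (3.0 vs 2.0), so it should compare as better.
        assert_eq!(b.compare(&a), Ordering::Less);
    }

    #[test]
    fn zero_fixed_cost_preferred_over_nonzero() {
        let a = NpvMetric { total_annualised_surplus: 1.0, annualised_fixed_cost: 0.0 };
        let b = NpvMetric { total_annualised_surplus: 10.0, annualised_fixed_cost: 1.0 };
        assert_eq!(a.compare(&b), Ordering::Less);
    }
}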
I'll wait for your review @alexdewar but I do think some tests would be nice
Agreed.
I think the logic for this method is a bit convoluted as it is. There will be different ways you could go about it, but here's how I'd split it up:
1. Make a helper function for doing approx comparisons (as you're doing this in a bunch of places), returning Ordering::Equal if two values are approx equal. (You might need to make this generic, in which case I think T will have trait bounds PartialEq + ApproxEq<Margin = F64>.)
2. Write a method which tries to compare the metrics based on profitability index, returning None if either has an AFC that's approx zero.
3. Then this method can just call that method, falling back on comparing AFC (a fuller sketch follows below), e.g.:

// Using `compare_approx` as I mentioned in 1 and newtypes
self.try_compare_pi(other).unwrap_or_else(|| compare_approx(self.0.annualised_fixed_cost, other.0.annualised_fixed_cost))

Does this make sense?
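A sketch of roughly how those pieces could fit together, using plain f64 stand-ins and a hard-coded epsilon instead of MUSE2's unit types and float_cmp (all names here are illustrative):

use std::cmp::Ordering;

// Stand-in approximate comparison helper; the real version would be generic
// over the unit types and use float_cmp rather than a hard-coded epsilon.
fn compare_approx(a: f64, b: f64) -> Ordering {
    if (a - b).abs() < 1e-9 {
        Ordering::Equal
    } else {
        a.partial_cmp(&b).unwrap()
    }
}

// Illustrative newtype wrapper, as in the earlier suggestion.
struct ProfitabilityIndex {
    total_annualised_surplus: f64,
    annualised_fixed_cost: f64,
}
struct NpvMetric(ProfitabilityIndex);

impl NpvMetric {
    /// Compare on profitability index, or None if either AFC is approx zero.
    fn try_compare_pi(&self, other: &Self) -> Option<Ordering> {
        if compare_approx(self.0.annualised_fixed_cost, 0.0) == Ordering::Equal
            || compare_approx(other.0.annualised_fixed_cost, 0.0) == Ordering::Equal
        {
            return None;
        }
        let self_pi = self.0.total_annualised_surplus / self.0.annualised_fixed_cost;
        let other_pi = other.0.total_annualised_surplus / other.0.annualised_fixed_cost;
        // Higher index is better, so order the arguments so that Less means better.
        Some(compare_approx(other_pi, self_pi))
    }

    fn compare(&self, other: &Self) -> Ordering {
        // Fallback compares AFC, so an approx-zero AFC beats a non-zero one.
        // (A tie-break on surplus when both AFCs are zero could be layered on top.)
        self.try_compare_pi(other).unwrap_or_else(|| {
            compare_approx(self.0.annualised_fixed_cost, other.0.annualised_fixed_cost)
        })
    }
}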
Thanks @alexdewar, I think it's ready for another review, but you might also want to have a look at how I addressed Copilot's and Tom's comments.
alexdewar left a comment
I think this is mostly there now, but I've made a few suggestions. In particular, I think NPVMetric::compare() could be made much simpler; it's hard to read as it is.
We could also define Serialize for the metric structs so we can have custom formatting in the CSV file, but you can open an issue for that and do it later if you prefer.
#[derive(Debug, Clone)]
pub struct NPVMetric {
    /// Profitability index data for this NPV metric
    pub profitability_index: ProfitabilityIndex,
You could also make this a newtype, i.e. not give the field a name, which would make things less verbose:
pub struct NPVMetric(ProfitabilityIndex);
You can then get at the field with my_metric.0.
Description
If annual fixed costs (AFC) are zero, this currently makes the metric in appraisal comparisons NaN. This PR modifies the behaviour so that we use the total annualised surplus to appraise assets in this case instead. Since there are two different possible metrics in this case, with one strictly better (AFC == 0 is always better than AFC > 0), I've added a metric_precedence to AppraisalOutput which ranks the metrics by the order in which they can be used. Then, select_best_assets will disregard all appraisals which have a precedence higher than the minimum. Another way of doing this may be to make the metric itself a struct and implement more sophisticated comparison logic, but since LCOX uses the same struct it might end up getting too involved.
Fixes #1012
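A rough sketch of the precedence filtering described above, with a simplified stand-in for AppraisalOutput (the field and function names are illustrative, not the PR's actual code):

// Simplified stand-in for AppraisalOutput, for illustration only.
struct AppraisalOutput {
    metric_precedence: u32,
    metric: f64,
}

/// Keep only appraisals at the best (lowest) precedence, then pick the minimum metric.
fn select_best_appraisal(appraisals: &[AppraisalOutput]) -> Option<&AppraisalOutput> {
    let min_precedence = appraisals.iter().map(|a| a.metric_precedence).min()?;
    appraisals
        .iter()
        .filter(|a| a.metric_precedence == min_precedence)
        .min_by(|a, b| a.metric.partial_cmp(&b.metric).unwrap())
}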
Type of change
Key checklist
$ cargo test
$ cargo doc
Further checks