FXC-4927 enable source differentiation #3197
base: develop
Conversation
tests/test_components/autograd/numerical/test_autograd_source_numerical.py
2 files reviewed, 2 comments
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Diff Coverage — diff: origin/develop...HEAD, staged and unstaged changes

Summary

Snippets below show the changed regions; lines marked `# !` are not covered by the diff tests.

**tidy3d/components/autograd/derivative_utils.py**

Lines 1122–1130:

```python
    """

    def _cell_size_weights(coord: np.ndarray) -> np.ndarray:
        if coord.size <= 1:
            return np.array([1.0], dtype=float)  # !
        deltas = np.diff(coord)
        diff_left = np.pad(deltas, (1, 0), mode="edge")
        diff_right = np.pad(deltas, (0, 1), mode="edge")
        return 0.5 * (diff_left + diff_right)
```

Lines 1132–1148:

```python
    weight_dims = []
    weight_arrays = []
    for dim in dims:
        if dim not in arr.coords:
            continue  # !
        coord = np.asarray(arr.coords[dim].data)
        if coord.size <= 1:
            continue
        weight_dims.append(dim)
        weight_arrays.append(_cell_size_weights(coord))

    if not weight_dims:
        return SpatialDataArray(1.0)  # !

    weights = np.ix_(*weight_arrays)
    weights_data = weights[0]
    for weight_array in weights[1:]:
```

Lines 1164–1184:

```python
    weights = compute_spatial_weights(field_data, dims=dims_to_integrate)
    scale = 1.0
    for axis, dim in enumerate("xyz"):
        if dim not in field_data.coords:
            continue  # !
        if dim in dims_to_integrate and field_data.sizes.get(dim, 0) == 1:
            axis_size = float(source_size[axis])
            if axis_size > 0.0:
                scale = scale * axis_size  # !
            elif axis_size == 0.0 and dim in adjoint_field.coords:
                coord_vals = np.asarray(adjoint_field.coords[dim].data)
                if coord_vals.size > 1:
                    step = np.min(np.abs(np.diff(coord_vals)))
                    if np.isfinite(step) and step > 0.0:
                        scale = scale * step
        if dim not in dims_to_integrate and field_data.sizes.get(dim, 0) > 1:
            scale = scale / field_data.sizes[dim]  # !
    return weights * scale


def transpose_interp_field_to_dataset(
```

Lines 1195–1204:

```python
    if target_freqs.size == source_freqs.size and np.allclose(
        target_freqs, source_freqs, rtol=1e-12, atol=0.0
    ):
        return field
    method = "nearest" if target_freqs.size <= 1 or source_freqs.size <= 1 else "linear"  # !
    return field.interp(  # !
        {"f": target_freqs},
        method=method,
        kwargs={"bounds_error": False, "fill_value": 0.0},
    ).fillna(0.0)
```

Lines 1208–1216:

```python
) -> np.ndarray:
    if param_coords_1d.size == 1:
        return field_values.sum(axis=0, keepdims=True)
    if np.any(param_coords_1d[1:] < param_coords_1d[:-1]):
        raise ValueError("Spatial coordinates must be sorted before computing derivatives.")  # !

    n_param = param_coords_1d.size
    n_field = field_values.shape[0]
    field_values_2d = field_values.reshape(n_field, -1)
```

Lines 1258–1273:

```python
    values = np.asarray(weighted.data)
    dims = list(weighted.dims)
    for dim in "xyz":
        if dim not in field_coords or dim not in param_coords:
            continue  # !
        axis_index = dims.index(dim)
        values = _interp_axis(values, axis_index, field_coords[dim], param_coords[dim])

    out_coords = {dim: np.asarray(dataset_field.coords[dim].data) for dim in dataset_field.dims}
    result = SpatialDataArray(values, coords=out_coords, dims=tuple(dims))
    if tuple(dims) != tuple(dataset_field.dims):
        result = result.transpose(*dataset_field.dims)  # !
    return result


def get_frequency_omega(
```

Lines 1276–1284:

```python
    """Return angular frequency aligned with field_data frequencies."""
    if "f" in field_data.dims:
        omega = 2 * np.pi * np.asarray(field_data.coords["f"].data)
        return FreqDataArray(omega, coords={"f": np.asarray(field_data.coords["f"].data)})
    return 2 * np.pi * float(np.asarray(frequencies).squeeze())  # !


__all__ = [
    "DerivativeInfo",
```

**tidy3d/components/base.py**

Lines 1560–1568:

```python
        # Handle multiple starting paths
        if paths:
            # If paths is a single tuple, convert to tuple of tuples
            if isinstance(paths[0], str):
                paths = (paths,)  # !

            # Process each starting path
            for starting_path in paths:
                # Navigate to the starting path in the dictionary
```

**tidy3d/components/simulation.py**

Lines 5027–5035:

```python
                structure_index_to_keys[index].append(fields)
            elif component_type == "sources":
                source_index_to_keys[index].append(fields)
            else:
                raise ValueError(  # !
                    f"Unknown component type '{component_type}' encountered while "
                    "constructing adjoint monitors. "
                    "Expected one of: 'structures', 'sources'."
                )
```

**tidy3d/components/source/base.py**

Lines 69–77:

```python
    _warn_traced_size = _warn_unsupported_traced_argument("size")

    def _compute_derivatives(self, derivative_info: DerivativeInfo) -> AutogradFieldMap:
        """Compute adjoint derivatives for source parameters."""
        raise NotImplementedError(f"Can't compute derivative for 'Source': '{type(self)}'.")  # !

    @field_validator("source_time")
    @classmethod
    def _freqs_lower_bound(cls, val: SourceTimeType) -> SourceTimeType:
```

**tidy3d/components/source/current.py**

Lines 230–238:

```python
            transpose_interp_field_to_dataset,
        )

        if self.current_dataset is None:
            return {tuple(path): 0.0 for path in derivative_info.paths}  # !

        derivative_map = {}
        center = tuple(self.center)
        h_adj = derivative_info.H_adj or {}
```

Lines 240–270:

```python
        for field_path in derivative_info.paths:
            field_path = tuple(field_path)
            if len(field_path) < 2 or field_path[0] != "current_dataset":
                log.warning(  # !
                    f"Unsupported traced source path '{field_path}' for CustomCurrentSource."
                )
                derivative_map[field_path] = 0.0  # !
                continue  # !

            field_name = field_path[1]
            if (
                len(field_name) != 2
                or field_name[0] not in ("E", "H")
                or field_name[1] not in ("x", "y", "z")
            ):
                log.warning(f"Unsupported field component '{field_name}' in CustomCurrentSource.")  # !
                derivative_map[field_path] = 0.0  # !
                continue  # !

            field_data = getattr(self.current_dataset, field_name, None)
            if field_data is None:
                raise ValueError(f"Cannot find field '{field_name}' in current dataset.")  # !

            if field_name.startswith("H"):
                adjoint_field = h_adj.get(field_name)  # !
                component_sign = -1.0  # !
            else:  # "E" case
                adjoint_field = e_adj.get(field_name)
                component_sign = 1.0
```

**tidy3d/components/source/field.py**

Lines 259–297:

```python
            transpose_interp_field_to_dataset,
        )

        if self.field_dataset is None:
            return {tuple(path): 0.0 for path in derivative_info.paths}  # !

        derivative_map = {}
        center = tuple(self.center)
        e_adj = derivative_info.E_adj or {}
        h_adj = derivative_info.H_adj or {}
        if self.injection_axis is None:
            return {tuple(path): 0.0 for path in derivative_info.paths}  # !

        for field_path in derivative_info.paths:
            field_path = tuple(field_path)
            if len(field_path) < 2 or field_path[0] != "field_dataset":
                log.warning(f"Unsupported traced source path '{field_path}' for CustomFieldSource.")  # !
                derivative_map[field_path] = 0.0  # !
                continue  # !

            field_name = field_path[1]
            field_data = getattr(self.field_dataset, field_name, None)
            if field_data is None:
                derivative_map[field_path] = 0.0  # !
                continue  # !

            if (
                len(field_name) != 2
                or field_name[0] not in ("E", "H")
                or field_name[1] not in ("x", "y", "z")
            ):
                log.warning(f"Unsupported field component '{field_name}' in CustomFieldSource.")  # !
                derivative_map[field_path] = 0.0  # !
                continue  # !

            component_axis = "xyz".index(field_name[1])
            if component_axis == self.injection_axis:
                derivative_map[field_path] = np.zeros_like(field_data.data)  # !
                continue  # !

            def _get_adjoint_and_sign(
                *,
                field_name: str,
```

Lines 309–326:

```python
                e_vec = np.eye(3)[component_axis]
                cross = np.cross(n_vec, e_vec)

                if not np.any(cross):
                    return None, 0.0  # indicates "no gradient"  # !

                target_axis = int(np.flatnonzero(cross)[0])
                component_sign = float(cross[target_axis])

                if field_name.startswith("E"):
                    target_component = f"H{'xyz'[target_axis]}"
                    adjoint_field = h_adj.get(target_component)
                else:
                    target_component = f"E{'xyz'[target_axis]}"  # !
                    adjoint_field = e_adj.get(target_component)  # !

                return adjoint_field, component_sign

            adjoint_field, component_sign = _get_adjoint_and_sign(
```

Lines 333–342:

```python
            )

            if component_sign == 0.0:
                # no gradient for injection_axis == component_axis
                derivative_map[field_path] = np.zeros_like(field_data.data)  # !
                continue  # !

            adjoint_on_dataset = transpose_interp_field_to_dataset(
                adjoint_field, field_data, center=center
            )
```

**tidy3d/web/api/autograd/backward.py**

Lines 155–163:

```python
                sim_data_adj, sim_data_orig, sim_data_fwd, component_index, component_paths
            )
        )
    else:
        raise ValueError(  # !
            f"Unexpected component_type='{component_type}' for component_index={component_index}. "
            "Expected 'structures' or 'sources'."
        )
```

Lines 193–201:

```python
    monitor_freqs = np.array(fld_adj.monitor.freqs)
    if len(adjoint_frequencies) != len(monitor_freqs) or not np.allclose(
        np.sort(adjoint_frequencies), np.sort(monitor_freqs), rtol=1e-10, atol=0
    ):
        raise ValueError(  # !
            f"Frequency mismatch in adjoint postprocessing for source {source_index}. "
            f"Expected frequencies from monitor: {monitor_freqs}, "
            f"but derivative map has: {adjoint_frequencies}. "
        )
```

Lines 318–326:

```python
    monitor_freqs = np.array(fld_adj.monitor.freqs)
    if len(adjoint_frequencies) != len(monitor_freqs) or not np.allclose(
        np.sort(adjoint_frequencies), np.sort(monitor_freqs), rtol=1e-10, atol=0
    ):
        raise ValueError(  # !
            f"Frequency mismatch in adjoint postprocessing for structure {structure_index}. "
            f"Expected frequencies from monitor: {monitor_freqs}, "
            f"but derivative map has: {adjoint_frequencies}. "
        )
```

Lines 408–416:

```python
    n_freqs = len(adjoint_frequencies)
    if not freq_chunk_size or freq_chunk_size <= 0:
        freq_chunk_size = n_freqs
    else:
        freq_chunk_size = min(freq_chunk_size, n_freqs)  # !

    # process in chunks
    vjp_value_map = {}
```

Lines 483–495:

```python
    # accumulate results
    for path, value in vjp_chunk.items():
        if path in vjp_value_map:
            val = vjp_value_map[path]  # !
            if isinstance(val, (list, tuple)) and isinstance(value, (list, tuple)):  # !
                vjp_value_map[path] = type(val)(x + y for x, y in zip(val, value))  # !
            else:
                vjp_value_map[path] += value  # !
        else:
            vjp_value_map[path] = value
    sim_fields_vjp = {}
    # store vjps in output map
```
> def test_source_field_adjoint_monitors():
Is this test a duplicate of the one above (test_source_adjoint_monitors)?
difference is CustomFieldSource vs CustomCurrentSource, but this should definitely be better parametrized here
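A minimal sketch of the suggested parametrization (the source kinds and the dataset-attribute mapping are stand-ins for the suite's real fixtures, which would build actual `td` sources and check their adjoint monitors):

```python
import pytest

# hypothetical mapping from source kind to the dataset attribute it traces
SOURCE_KINDS = {
    "CustomFieldSource": "field_dataset",
    "CustomCurrentSource": "current_dataset",
}

@pytest.mark.parametrize("source_kind", sorted(SOURCE_KINDS))
def test_source_adjoint_monitors(source_kind):
    # in the real test this would build the source and inspect its adjoint
    # FieldMonitor; here we only assert on the stand-in mapping
    dataset_attr = SOURCE_KINDS[source_kind]
    assert dataset_attr.endswith("_dataset")
```

This collapses the two near-duplicate tests into one body, and any future source type with dataset gradients only needs a new entry in the parameter list.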
> def _cell_size_weights(coord: np.ndarray) -> np.ndarray:
cell_sizes in components/grid/grid.py does something similar to this and could be good to re-use if possible!
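For reference, the weighting in question as a standalone snippet (same logic as `_cell_size_weights` in the diff): each point gets half the sum of its neighboring grid steps, i.e. a trapezoidal cell size.

```python
import numpy as np

def cell_size_weights(coord: np.ndarray) -> np.ndarray:
    """Half-sum of the two neighboring grid steps: a trapezoidal cell size per point."""
    if coord.size <= 1:
        return np.array([1.0], dtype=float)
    deltas = np.diff(coord)
    diff_left = np.pad(deltas, (1, 0), mode="edge")   # repeat first step at the left edge
    diff_right = np.pad(deltas, (0, 1), mode="edge")  # repeat last step at the right edge
    return 0.5 * (diff_left + diff_right)

# nonuniform grid with steps 1 and 2
w = cell_size_weights(np.array([0.0, 1.0, 3.0]))
# w == [1.0, 1.5, 2.0]: the interior point gets the average of its two steps
```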
> param_index_upper = np.searchsorted(param_coords_1d, field_coords_1d, side="right")
not sure if it is the same case here, but in the CustomMedium, we ended up needing a small numerical buffer tolerance on the bounds. just wanted to flag in case this is a similar situation
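To illustrate the edge case: a field coordinate that should coincide with a parameter coordinate can carry floating-point noise and land in the wrong `searchsorted` bin. A hedged sketch of the buffered variant (the tolerance value and the subtraction approach are assumptions modeled on the CustomMedium fix, not code from this PR):

```python
import numpy as np

param_coords = np.array([0.0, 1.0, 2.0])
# a field coordinate that should coincide with param_coords[1] but carries
# floating-point noise just above it
field_coords = np.array([1.0 + 1e-12])

naive = np.searchsorted(param_coords, field_coords, side="right")
# naive == [2]: the noisy point already falls into the next interval

tol = 1e-9  # hypothetical buffer tolerance
buffered = np.searchsorted(param_coords, field_coords - tol, side="right")
# buffered == [1]: the point is treated as lying at param_coords[1]
```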
> def _transpose_interp_axis(
this has some similarities to the _transpose_interp_axis used in CustomMedium. do you think there's a good way to re-use some of that code/logic? Or is it different enough that it's more of a pain to try and extract something common?
you may have it in here somewhere already, but it might be worth checking things numerically when there is a simulation background medium and/or if the source is embedded in a structure with a certain refractive index.
> vjp_field = 0.5 * np.real(
>     derivative_info.source_time_scaling * adjoint_on_dataset * component_sign
curious what the source_time_scaling is for. is this for when source derivatives are made with respect to multiple different frequencies and we run one adjoint source? if so, does this case not get covered by the regular scaling methods used in the adjoint pipeline?
> def _compute_derivatives(self, derivative_info: DerivativeInfo) -> AutogradFieldMap:
this might have been accounted for before the _compute_derivatives call, but for this and the current source case, do we need some guards against other parameters showing up as traced like center, size or parts of the source_time to say those derivatives are not supported?
got some guards here:
https://github.com/flexcompute/tidy3d/pull/3197/changes#diff-d68c619258b41f7b295910a2419e26b3530ebf3fcbe229bb9ff50c2148cfc7e5R245
and here:
https://github.com/flexcompute/tidy3d/pull/3197/changes#diff-f91486d15f44ae0474e62d7cdf42db6f3eb191cc6fef07fdc482a401e81d33b0L66
still makes sense to have an explicit warning on source_time
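A minimal sketch of what such an explicit guard could look like (all names are hypothetical, not the PR's actual helpers): unsupported traced paths, including anything under `source_time`, get a zero derivative and a warning rather than silently dropping out.

```python
import warnings

def filter_supported_paths(paths):
    """Hypothetical guard: warn on and zero out traced paths a source can't differentiate."""
    supported_roots = {"current_dataset", "field_dataset"}
    derivative_map = {}
    kept = []
    for path in map(tuple, paths):
        if path and path[0] == "source_time":
            warnings.warn(
                f"Gradients w.r.t. 'source_time' are not supported; got traced path {path}."
            )
            derivative_map[path] = 0.0
        elif path and path[0] in supported_roots:
            kept.append(path)
        else:
            warnings.warn(f"Unsupported traced source path {path}.")
            derivative_map[path] = 0.0
    return kept, derivative_map

kept, zeros = filter_supported_paths([("source_time", "freq0"), ("field_dataset", "Ex")])
# kept contains only the dataset path; the source_time path is zeroed with a warning
```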
> omega_da = get_frequency_omega(field_data, derivative_info.frequencies)
> current_scale = omega_da * EPSILON_0 / size_element
wondering if you need the relative permittivity here as well in case there is a simulation background medium or the source is inside a structure?
there is a potentially pesky case where the source sits on a non-uniform permittivity, which might require using the eps_data from the simulation to create the scaling at each point.
> # For sources, we only need field monitors (no permittivity monitors)
see comment above about cases where the source overlaps a non-uniform geometry, which might require a permittivity monitor or would need to use the simulation permittivity in derivative_info
groberts-flex left a comment:
thanks @marcorudolphflex, great work!! this is really cool and will be a super useful feature!
I left a few comments/questions on there but overall looking really good
momchil-flex left a comment:
Thanks. Just one comment from me too, apart from what @groberts-flex already identified.
Generally, there could be other small details in the backend which are however not really needed as getting ~1% accuracy in the gradient is already good enough. However, the 0-size dimension handling could introduce a significant normalization factor.
> ) -> td.CustomCurrentSource:
>     coords = _make_coords(SOURCE_SIZE, DATASET_SPACING, FREQ0)
Custom current sources have some special handling in the backend for dimensions where the source size is 0, in order to make them inject field amplitudes that are approximately independent of the exact grid resolution. Another way to think about it physically is that for 3D sizes, the units of the amplitudes in the dataset are e.g. A/um^2 (for electric currents); for 2D sizes, the amplitudes are A/um, for 1D it's A, and for 0D (equivalent to a point dipole source) it's A * um.
Probably worth trying separately if a 2D current source also works or if some different normalization is needed. The most common usage for current sources is < 3D.
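The unit bookkeeping above condenses into one rule: starting from A/um^2 in 3D, each zero-size dimension raises the power of um in the amplitude unit by one. A tiny sketch (hypothetical helper, not PR code):

```python
def current_amplitude_um_exponent(source_size) -> int:
    """Exponent of um in the electric-current amplitude unit A * um**exponent.

    3D (no zero-size dims) -> -2 (A/um^2), 2D -> -1 (A/um),
    1D -> 0 (A), 0D point dipole -> +1 (A*um).
    """
    n_zero = sum(1 for s in source_size if s == 0.0)
    return n_zero - 2

# examples for each dimensionality
exp_3d = current_amplitude_um_exponent((1.0, 1.0, 1.0))  # -2
exp_2d = current_amplitude_um_exponent((1.0, 1.0, 0.0))  # -1
exp_0d = current_amplitude_um_exponent((0.0, 0.0, 0.0))  # +1
```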
tylerflex left a comment:
Looks great. I had similar comments to greg actually, so once those are addressed I think this should be good to go. Thanks @marcorudolphflex
yaugenst-flex left a comment:
This is so cool! Overall everything looks great and everything has basically been covered already except for one bug in the scaling for the custom current sources.
> size_element = compute_spatial_weights(field_data, dims=tuple("xyz"))
> if size_element.size > 1:
>     size_element = size_element.transpose(*size_element.dims)
> omega_da = get_frequency_omega(field_data, derivative_info.frequencies)
> current_scale = omega_da * EPSILON_0 / size_element
I ran a resolution-invariance probe and found that CustomFieldSource VJP does scale with dataset sampling density, so for the same physical source profile and adjoint field and only changing the dataset resolution from 10x10 to 20x20 changes the summed VJP by a factor of ~4.46x (which is (19/9)^2 = 361/81 = 4.45679). So very likely that the path is currently over-weighting by grid density.
This only seems to be a problem for CustomFieldSource, CustomCurrentSource is fine.
Should also add a regression test for this then.
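A regression test along these lines could integrate the same constant profile with cell-size weights at two dataset resolutions and require the results to agree; a density-scaling bug like the one described above would show up as a ratio near (19/9)^2 instead of ~1. A minimal self-contained sketch (plain numpy, mirroring the `_cell_size_weights` trapezoidal weighting from the diff):

```python
import numpy as np

def cell_size_weights(coord):
    """Trapezoidal cell size per point, matching the diff's _cell_size_weights."""
    if coord.size <= 1:
        return np.array([1.0])
    d = np.diff(coord)
    return 0.5 * (np.pad(d, (1, 0), mode="edge") + np.pad(d, (0, 1), mode="edge"))

def weighted_sum_2d(n):
    """Integrate f(x, y) = 1 over the unit square sampled on an n x n grid."""
    x = np.linspace(0.0, 1.0, n)
    w = cell_size_weights(x)
    field = np.ones((n, n))
    return float(np.sum(field * np.outer(w, w)))

coarse = weighted_sum_2d(10)
fine = weighted_sum_2d(20)
# with proper cell-size weighting both approximate the area 1.0 (up to edge
# effects); a density-dependent bug would make fine/coarse ~ (19/9)**2 instead
ratio = fine / coarse
```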
Implemented adjoint gradients for `CustomCurrentSource.current_dataset` and `CustomFieldSource.field_dataset`. Here are some raw results from the numerical tests.
Note
Medium Risk
Touches the autograd forward/backward plumbing and adjoint monitor generation, so regressions could affect gradient correctness/performance for optimization runs. Changes are well-scoped and backed by new analytical and numerical tests, but they exercise core adjoint infrastructure.
Overview
Enables source differentiation in autograd runs. The adjoint pipeline now treats traced `sources` similarly to `structures`, creating per-source adjoint `FieldMonitor`s and computing VJPs for `CustomCurrentSource.current_dataset` and `CustomFieldSource.field_dataset`.

Core plumbing updates: `_strip_traced_fields` now accepts multiple `starting_paths`, autograd setup discovers tracers in both `structures` and `sources`, and the backward pass is refactored to process structure vs. source gradients separately (including `source_time` frequency scaling). Adds new derivative utilities for spatial weighting/frequency alignment and extensive new tests (analytical + finite-difference) to validate the new gradients; updates docs/changelog accordingly.

Written by Cursor Bugbot for commit 3c33571. This will update automatically on new commits.
Greptile Overview
Greptile Summary
This PR implements adjoint gradient computation for `CustomCurrentSource.current_dataset` and `CustomFieldSource.field_dataset`, enabling automatic differentiation with respect to source field data. The implementation extends the existing autograd infrastructure to support sources in addition to structures.

Key Changes:

- `_compute_derivatives()` methods on `CustomCurrentSource` and `CustomFieldSource` that compute vector-Jacobian products (VJPs) by interpolating adjoint fields onto source datasets
- `_make_adjoint_monitors()` in `Simulation` extended to create field monitors for sources alongside existing structure monitors
- `postprocess_adj()` in backward.py refactored to handle both structures and sources through separate processing functions
- `transpose_interp_field_to_dataset()`, `compute_source_weights()`, and `get_frequency_omega()` for source gradient computations
- `_strip_traced_fields()` in base.py modified to support multiple starting paths instead of a single path

Implementation Details:
For `CustomCurrentSource`, the gradient is computed as `0.5 * Re(source_time_scaling * adjoint_field * sign)`, where the sign depends on whether the component is E (+1) or H (-1).

For `CustomFieldSource`, the implementation uses the equivalence principle with cross products to determine the relationship between field components and injected currents, scaled by `omega * epsilon_0 / cell_size`.

The numerical test results in the PR description show angle differences between adjoint and finite-difference gradients ranging from 0.02° to 3.0°, indicating good agreement.
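As a sanity check on the summarized formula, here is the `CustomCurrentSource` VJP evaluated on toy scalar values (all numbers are illustrative stand-ins, not values from the PR):

```python
import numpy as np

# toy stand-ins for the quantities named in the summary
source_time_scaling = 2.0j   # hypothetical complex spectrum normalization
adjoint_field = 1.0 + 1.0j   # adjoint field value interpolated onto the dataset
sign_e, sign_h = 1.0, -1.0   # +1 for E components, -1 for H components

# gradient = 0.5 * Re(source_time_scaling * adjoint_field * sign)
vjp_e = 0.5 * np.real(source_time_scaling * adjoint_field * sign_e)
vjp_h = 0.5 * np.real(source_time_scaling * adjoint_field * sign_h)
# vjp_e == -1.0 and vjp_h == 1.0: the H-component gradient is the E result negated
```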
Confidence Score: 4/5
`tidy3d/components/source/current.py` and `tidy3d/web/api/autograd/backward.py` to align with coding standards

Important Files Changed
- `_compute_derivatives` method on `CustomCurrentSource` for adjoint gradient computation with proper field interpolation and scaling
- `_compute_derivatives` method on `CustomFieldSource` for adjoint gradient computation with cross-product based current scaling
- `_process_source_gradients` function with source time scaling
- `_make_adjoint_monitors` extended to create field monitors for sources in addition to structures
- `_strip_traced_fields` modified to support multiple starting paths instead of a single path
- `compute_source_weights`, `transpose_interp_field_to_dataset`, and `get_frequency_omega` for source gradient computation

Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant AutogradAPI as Autograd API
    participant Simulation
    participant Source as CustomSource
    participant BackwardPass as Backward Pass
    participant DerivativeInfo
    User->>AutogradAPI: run with traced source parameters
    AutogradAPI->>Simulation: execute forward simulation
    Simulation->>Simulation: _make_adjoint_monitors()
    Simulation->>Simulation: create source field monitors
    Note over Simulation: Forward simulation runs
    User->>AutogradAPI: compute gradients (backward pass)
    AutogradAPI->>BackwardPass: setup_adj(data_fields_vjp)
    BackwardPass->>BackwardPass: filter traced fields
    BackwardPass->>Simulation: _make_adjoint_sims()
    Note over Simulation: Adjoint simulation runs
    BackwardPass->>BackwardPass: postprocess_adj()
    BackwardPass->>BackwardPass: _process_source_gradients()
    BackwardPass->>DerivativeInfo: create DerivativeInfo with E_adj, H_adj
    BackwardPass->>Source: _compute_derivatives(derivative_info)
    alt CustomCurrentSource
        Source->>Source: transpose_interp_field_to_dataset()
        Source->>Source: compute VJP with source_time_scaling
        Source-->>BackwardPass: derivative_map
    else CustomFieldSource
        Source->>Source: compute cross products (n x E, n x H)
        Source->>Source: transpose_interp_field_to_dataset()
        Source->>Source: apply current_scale (omega * epsilon_0)
        Source-->>BackwardPass: derivative_map
    end
    BackwardPass-->>AutogradAPI: sim_fields_vjp
    AutogradAPI-->>User: gradients w.r.t. source parameters
```