Conversation
Pull request overview
This pull request improves the depth-aware PSF rendering pipeline by introducing more accurate depth layer sampling, consistent negative depth conventions, optimized PSF flipping, and elimination of patch boundary artifacts through improved padding strategies.
Changes:
- Introduced explicit focal plane sampling in `_sample_depth_layers` to ensure the focal depth is always included as a sample point
- Enforced negative depth conventions throughout PSF calculation functions with new assertions
- Optimized PSF convolution by pre-flipping entire PSF maps once instead of per-patch
- Rewrote `conv_psf_map_depth_interp` to pad the full image once, eliminating patch boundary artifacts
- Fixed `calc_foclen` to return the effective focal length value and improved its calculation method
- Removed debugging breakpoint from `calc_numerical_aperture`
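The pre-flipping optimization in the list above can be checked in isolation. Here is a minimal sketch (shapes made up, not the repository's actual tensors) showing that one `torch.flip` over the kernel dimensions is elementwise identical to flipping every per-patch kernel inside a loop, since `torch.flip` acts independently per kernel:

```python
import torch

# Hedged sketch with made-up shapes: flipping the whole PSF map once over its
# last two (kernel) dimensions equals flipping each per-patch kernel separately.
grid, ks = 3, 11
psf_map = torch.rand(grid, grid, 3, ks, ks)        # [grid_h, grid_w, C, ks, ks]
flipped_once = torch.flip(psf_map, dims=(-2, -1))  # one call, outside any loop

per_patch = torch.stack([
    torch.stack([torch.flip(psf_map[i, j], dims=(-2, -1)) for j in range(grid)])
    for i in range(grid)
])
assert torch.equal(flipped_once, per_patch)
```

This is why the flip can safely be hoisted out of the patch loop: the result is bit-identical, only computed once.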
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| deeplens/optics/psf.py | Added depth assertions, optimized PSF flipping, rewrote depth interpolation to eliminate boundary artifacts, improved clamping logic |
| deeplens/lens.py | Added _sample_depth_layers method for focal-plane-centered depth sampling, updated render_rgbd to use new sampling and convert depth maps to negative values |
| deeplens/geolens.py | Improved calc_foclen to accurately trace paraxial rays and return effective focal length, removed stray breakpoint |
```python
assert depth.min() < 0 and depth.max() < 0, f"depth must be negative, got {depth.min()} and {depth.max()}"
assert psf_depths.min() < 0 and psf_depths.max() < 0, f"psf_depths must be negative, got {psf_depths.min()} and {psf_depths.max()}"
```
The new assertions requiring negative depth values will break existing tests. The tests in test/test_psf.py (lines 128-181) use positive depth values (e.g., torch.linspace(0, 1, 5) at line 138, torch.linspace(1.0, 2.0, 5) at line 166). These tests will fail with the new assertions. Consider either updating the tests to use negative depth values or relaxing the assertions to allow both positive and negative depths with appropriate internal conversion.
```python
assert depth.min() < 0 and depth.max() < 0, f"depth must be negative, got {depth.min()} and {depth.max()}"
assert psf_depths.min() < 0 and psf_depths.max() < 0, f"psf_depths must be negative, got {psf_depths.min()} and {psf_depths.max()}"
```
The new assertions requiring negative depth values will break existing tests. The tests in test/test_psf.py (lines 187-213) use positive depth values (e.g., torch.linspace(0, 1, 5) at line 195, torch.linspace(1.0, 2.0, 5) at line 208). These tests will fail with the new assertions. Consider either updating the tests to use negative depth values or relaxing the assertions to allow both positive and negative depths with appropriate internal conversion.
Suggested change:

```diff
- assert depth.min() < 0 and depth.max() < 0, f"depth must be negative, got {depth.min()} and {depth.max()}"
- assert psf_depths.min() < 0 and psf_depths.max() < 0, f"psf_depths must be negative, got {psf_depths.min()} and {psf_depths.max()}"
+ depth_min = depth.min().item()
+ depth_max = depth.max().item()
+ psf_depths_min = psf_depths.min().item()
+ psf_depths_max = psf_depths.max().item()
+ assert (depth_min >= 0 and depth_max >= 0) or (depth_min <= 0 and depth_max <= 0), (
+     f"depth values must be consistently signed (all <= 0 or all >= 0), got min={depth_min}, max={depth_max}"
+ )
+ assert (psf_depths_min >= 0 and psf_depths_max >= 0) or (psf_depths_min <= 0 and psf_depths_max <= 0), (
+     f"psf_depths values must be consistently signed (all <= 0 or all >= 0), "
+     f"got min={psf_depths_min}, max={psf_depths_max}"
+ )
```
```python
psf_map_flipped = torch.flip(psf_map, dims=(-2, -1))

# Pre-compute depth interpolation weights (shared across all patches)
depth_flat = depth.flatten(1)  # [B, H*W]
```
The clamping logic assumes psf_depths is sorted in ascending order (most negative to least negative). However, the function doesn't validate this assumption. If psf_depths is passed in descending order or unsorted, the clamping and searchsorted operations will produce incorrect results. Consider adding an assertion to verify that psf_depths is sorted, or use min() and max() instead of indexing to be order-agnostic.
Suggested change:

```diff
  depth_flat = depth.flatten(1)  # [B, H*W]
+ # Validate psf_depths ordering: required for clamp and searchsorted to behave correctly.
+ if psf_depths.ndim != 1:
+     raise ValueError("psf_depths must be a 1D tensor of sorted depths.")
+ if not torch.all(psf_depths[1:] >= psf_depths[:-1]):
+     raise ValueError(
+         "psf_depths must be sorted in ascending order (most negative to least negative)."
+     )
```
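To make the ordering requirement concrete, here is a minimal, self-contained sketch (all values hypothetical) of the bracketing pattern this code relies on; `torch.searchsorted` only returns valid bracketing indices when its first argument is ascending:

```python
import torch

# Minimal sketch with made-up values: torch.searchsorted assumes its first
# argument is sorted ascending; on an unsorted tensor the bin indices are wrong.
psf_depths = torch.tensor([-10.0, -5.0, -2.0, -1.0])  # ascending: most to least negative
depth_flat = torch.tensor([-6.0, -1.5])

idx = torch.searchsorted(psf_depths, depth_flat)  # index of the right-hand bracketing depth
lo, hi = psf_depths[idx - 1], psf_depths[idx]
w = (depth_flat - lo) / (hi - lo)                 # linear weight toward `hi`
print(idx.tolist())                               # → [1, 3]
```

With a descending `psf_depths`, the same call would return indices that do not bracket each depth, which is exactly the failure mode the suggested validation guards against.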
```diff
@@ -207,7 +288,7 @@ def conv_psf_depth_interp(img, depth, psf_kernels, psf_depths, interp_mode="dept
  # =================================
  B, _, H, W = depth.shape
  depth_flat = depth.flatten(1)  # shape [B, H*W]
```
The clamping logic assumes psf_depths is sorted in ascending order (most negative to least negative). However, the function doesn't validate this assumption. If psf_depths is passed in descending order or unsorted, the clamping and searchsorted operations will produce incorrect results. Consider adding an assertion to verify that psf_depths is sorted, or use min() and max() instead of indexing to be order-agnostic.
Suggested change:

```diff
  depth_flat = depth.flatten(1)  # shape [B, H*W]
+ # Ensure psf_depths is a 1D tensor sorted in ascending order, as required by
+ # the clamping logic and torch.searchsorted below.
+ assert psf_depths.dim() == 1, f"psf_depths must be 1D, got shape {tuple(psf_depths.shape)}"
+ assert torch.all(psf_depths[1:] >= psf_depths[:-1]), "psf_depths must be sorted in ascending order"
```
```python
far_disps = torch.linspace(disp_far, focal_disp, n_far + 1)  # includes focal
near_disps = torch.linspace(focal_disp, disp_near, n_near + 1)[1:]  # exclude duplicate focal
```
The tensors created on lines 549-550 (far_disps and near_disps) are not created on the correct device; only the concatenated result on line 551 is moved via .to(self.device). It would be more efficient and clearer to pass device=self.device to torch.linspace on lines 549-550 so the tensors are allocated on the target device from the start, avoiding the extra host-to-device copy.
Suggested change:

```diff
- far_disps = torch.linspace(disp_far, focal_disp, n_far + 1)  # includes focal
- near_disps = torch.linspace(focal_disp, disp_near, n_near + 1)[1:]  # exclude duplicate focal
+ far_disps = torch.linspace(disp_far, focal_disp, n_far + 1, device=self.device)  # includes focal
+ near_disps = torch.linspace(focal_disp, disp_near, n_near + 1, device=self.device)[1:]  # exclude duplicate focal
```
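For reference, `torch.linspace` accepts the standard factory `device` keyword, so the tensor can be allocated on the target device directly. A tiny sketch with placeholder endpoints:

```python
import torch

# torch.linspace accepts a device keyword, so disparity tensors can be
# allocated on the target device up front instead of moved with .to() later.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
far_disps = torch.linspace(0.0, 1.0, 5, device=device)  # endpoints are placeholders
assert far_disps.device.type == device.type
assert far_disps.numel() == 5
```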
```diff
  B, _, H, W = depth.shape
  depth_flat = depth.flatten(1)  # shape [B, H*W]
- depth_flat = depth_flat.clamp(min(psf_depths) + DELTA, max(psf_depths) - DELTA)
+ depth_flat = depth_flat.clamp(psf_depths[0] + DELTA, psf_depths[-1] - DELTA)
```
The depth clamping at line 291 uses DELTA (1e-6) offset from the psf_depths boundaries. However, if psf_depths values are very close together (e.g., closer than 2*DELTA), this clamping could potentially clamp depth values outside the actual range of psf_depths. While unlikely in practice, consider validating that psf_depths has adequate spacing or using a relative epsilon instead of a fixed DELTA.
Suggested change:

```diff
- depth_flat = depth_flat.clamp(psf_depths[0] + DELTA, psf_depths[-1] - DELTA)
+ # Use an epsilon that respects the actual spacing of psf_depths.
+ # This ensures we don't clamp outside the effective range when depths are very close.
+ eps = DELTA
+ if psf_depths.numel() > 1:
+     diffs = psf_depths[1:] - psf_depths[:-1]
+     # Use at most half the minimum positive spacing, capped by DELTA.
+     min_diff = torch.min(torch.abs(diffs))
+     max_eps = (min_diff * 0.5).item()
+     if max_eps > 0:
+         eps = min(DELTA, max_eps)
+ depth_flat = depth_flat.clamp(psf_depths[0] + eps, psf_depths[-1] - eps)
```
```python
    disp_ref = torch.full((num_depth,), focal_disp).to(self.device)
else:
    n_far = max(1, round((num_depth - 1) * far_range / total_range))
    n_near = num_depth - 1 - n_far

    far_disps = torch.linspace(disp_far, focal_disp, n_far + 1)  # includes focal
    near_disps = torch.linspace(focal_disp, disp_near, n_near + 1)[1:]  # exclude duplicate focal
    disp_ref = torch.cat([far_disps, near_disps]).to(self.device)
```
When num_depth is 1, the allocation logic at lines 546-547 yields n_far = 1 and n_near = -1. Line 549 then creates a two-element tensor [disp_far, focal_disp], while line 550 creates an empty tensor (linspace with 0 steps, then sliced), so disp_ref ends up with 2 elements instead of the requested 1. Consider handling the num_depth == 1 case explicitly.
Suggested change:

```diff
-     disp_ref = torch.full((num_depth,), focal_disp).to(self.device)
- else:
-     n_far = max(1, round((num_depth - 1) * far_range / total_range))
-     n_near = num_depth - 1 - n_far
-     far_disps = torch.linspace(disp_far, focal_disp, n_far + 1)  # includes focal
-     near_disps = torch.linspace(focal_disp, disp_near, n_near + 1)[1:]  # exclude duplicate focal
-     disp_ref = torch.cat([far_disps, near_disps]).to(self.device)
+     disp_ref = torch.full((num_depth,), focal_disp, dtype=self.dtype, device=self.device)
+ else:
+     # Special case: with only one depth sample, place it at the focal plane.
+     if num_depth == 1:
+         disp_ref = torch.full((1,), focal_disp, dtype=self.dtype, device=self.device)
+     else:
+         n_far = max(1, round((num_depth - 1) * far_range / total_range))
+         n_near = num_depth - 1 - n_far
+         far_disps = torch.linspace(disp_far, focal_disp, n_far + 1)  # includes focal
+         near_disps = torch.linspace(focal_disp, disp_near, n_near + 1)[1:]  # exclude duplicate focal
+         disp_ref = torch.cat([far_disps, near_disps]).to(self.device)
```
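The far/near allocation under discussion can be exercised in isolation. This standalone sketch (function name and endpoint values are hypothetical) mirrors the suggested logic, including the explicit num_depth == 1 special case:

```python
import torch

# Standalone sketch of the disparity sampling split (names and values hypothetical).
def sample_disparities(disp_far, disp_near, focal_disp, num_depth):
    if num_depth == 1:
        # Single sample: place it at the focal plane.
        return torch.full((1,), focal_disp)
    far_range = abs(focal_disp - disp_far)
    total_range = abs(disp_near - disp_far)
    n_far = max(1, round((num_depth - 1) * far_range / total_range))
    n_near = num_depth - 1 - n_far
    far = torch.linspace(disp_far, focal_disp, n_far + 1)         # includes focal
    near = torch.linspace(focal_disp, disp_near, n_near + 1)[1:]  # drop duplicate focal
    return torch.cat([far, near])

d = sample_disparities(0.0, 1.0, 0.25, 5)  # → tensor([0.00, 0.25, 0.50, 0.75, 1.00])
```

Note that the focal disparity (0.25 here) appears exactly once in the output, and the single-sample case returns it directly rather than falling through to the split arithmetic.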
```python
assert grid_h == grid_w, f"PSF map grid must be square, got {grid_h}x{grid_w}"
assert ks_h == ks_w, f"PSF kernel must be square, got {ks_h}x{ks_w}"
```
The new assertions at lines 517-518 enforce that PSF map grids and kernels must be square. While this is reasonable for most cases, the error messages could be more helpful by suggesting what the user should do. Consider adding guidance like 'PSF map grid must be square. If you have a non-square grid, please crop or pad it to square dimensions before calling this function.'
Suggested change:

```diff
- assert grid_h == grid_w, f"PSF map grid must be square, got {grid_h}x{grid_w}"
- assert ks_h == ks_w, f"PSF kernel must be square, got {ks_h}x{ks_w}"
+ assert grid_h == grid_w, (
+     f"PSF map grid must be square, got {grid_h}x{grid_w}. "
+     "If you have a non-square grid, please crop or pad it to square "
+     "dimensions before calling interp_psf_map()."
+ )
+ assert ks_h == ks_w, (
+     f"PSF kernel must be square, got {ks_h}x{ks_w}. "
+     "If you have a non-square kernel, please crop or pad it to square "
+     "dimensions before calling interp_psf_map()."
+ )
```
```python
# Pre-compute depth interpolation weights (shared across all patches)
depth_flat = depth.flatten(1)  # [B, H*W]
depth_flat = depth_flat.clamp(psf_depths[0] + DELTA, psf_depths[-1] - DELTA)
```
The depth clamping at line 157 uses DELTA (1e-6) offset from the psf_depths boundaries. However, if psf_depths values are very close together (e.g., closer than 2*DELTA), this clamping could potentially clamp depth values outside the actual range of psf_depths. While unlikely in practice, consider validating that psf_depths has adequate spacing or using a relative epsilon instead of a fixed DELTA.
Suggested change:

```diff
- depth_flat = depth_flat.clamp(psf_depths[0] + DELTA, psf_depths[-1] - DELTA)
+ # Use a spacing-aware epsilon for clamping instead of a fixed DELTA.
+ # This avoids pushing values outside the actual sampled psf_depths range
+ # when depths are very closely spaced.
+ if num_depths > 1:
+     spacings = psf_depths[1:] - psf_depths[:-1]
+     min_spacing = torch.min(spacings)
+     eps = 0.5 * torch.clamp(min_spacing, min=0.0)
+ else:
+     eps = torch.zeros((), dtype=psf_depths.dtype, device=psf_depths.device)
+ depth_flat = depth_flat.clamp(psf_depths[0] + eps, psf_depths[-1] - eps)
```
Co-authored-by: singer-yang <25293821+singer-yang@users.noreply.github.com>
[Doc] Update project documentation to match updated functions
This pull request introduces several improvements and optimizations to the depth-aware PSF rendering pipeline, focusing on more accurate and efficient handling of depth layers, padding, and interpolation in both the lens simulation and PSF convolution code. The changes ensure that the focal plane is explicitly sampled, negative depth conventions are consistently used, and patch boundary artifacts are minimized. Additionally, performance and correctness are improved in the PSF map convolution and interpolation routines.
Key changes include:
Depth Layer Sampling and Focal Plane Handling

- Added `_sample_depth_layers` in `lens.py` to sample depth layers centered on the focal plane in disparity space, ensuring the focal depth is always included as a sample. This improves the accuracy of depth-dependent PSF rendering.
- Updated the `render_rgbd` and `render_psf_map` methods in `lens.py` to use `_sample_depth_layers`, replacing previous uniform sampling and ensuring negative depth conventions are used throughout. [1] [2] [3] [4]

PSF Map Convolution and Interpolation

- Optimized `conv_psf_map` in `psf.py` by pre-flipping the entire PSF map once, rather than flipping each PSF kernel inside the loop, improving performance. [1] [2]
- Rewrote `conv_psf_map_depth_interp` to precompute depth interpolation weights and pad the full image once, eliminating patch boundary artifacts and improving both efficiency and correctness.
- Updated `conv_psf_depth_interp` to pad the image before expansion, reducing redundant padding work, and to use consistent negative depth conventions and clamping. [1] [2] [3]

Focal Length and Numerical Aperture Calculation

- Improved `calc_foclen` in `geolens.py` by accurately tracing both on-axis and off-axis rays to determine the paraxial focal plane, and returning the effective focal length. [1] [2]
- Removed a stray `breakpoint()` and ensured correct calculation in `calc_numerical_aperture`.

Miscellaneous

- Added assertions that PSF map grids and kernels are square in `interp_psf_map`.
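As a rough illustration of the depth-interpolated convolution path described above (all tensor names and shapes here are assumptions, not the repository's API), each output pixel linearly mixes the two per-depth blurred layers that bracket its depth:

```python
import torch

# Hedged sketch (names and shapes assumed): blend per-depth blurred layers.
D, C, H, W = 4, 3, 8, 8
blurred = torch.rand(D, C, H, W)          # image convolved with each depth layer's PSF
idx_lo = torch.randint(0, D - 1, (H, W))  # lower bracketing layer per pixel
w = torch.rand(H, W)                      # interpolation weight toward the upper layer

# Gather the two bracketing layers per pixel, then blend them linearly.
lo = blurred.gather(0, idx_lo[None, None].expand(1, C, H, W))[0]
hi = blurred.gather(0, (idx_lo + 1)[None, None].expand(1, C, H, W))[0]
out = (1 - w) * lo + w * hi               # [C, H, W]
assert out.shape == (C, H, W)
```

Because the bracketing indices and weights depend only on the depth map, they can be computed once and shared across all patches, which is what the precomputation change in `conv_psf_map_depth_interp` exploits.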