Enhance the capability of Bridgescaler for supporting tensors #23
kevinyang-cky wants to merge 34 commits into NCAR:main
Conversation
@kevinyang-cky Your changes all look good code-wise. For the documentation, can you add a separate file on the tensor scalers to the directory https://github.com/NCAR/bridgescaler/tree/main/doc/source in an rst file? See the other doc files for examples.
…sor and errors in DStandScalerTensor
This is coming together nicely! However, I do not believe the ability to transform data with a different channel order is working correctly. Please see the following tests that fail on my end (this is code added directly to the end of your test example above).
Thanks @charlie-becker for providing testing feedback. I think the issue is that x4_tensor's variable_names are reordered, but the data is still in the same order as x3_tensor's. Try changing this line of code. I will continue working on incorporating more unit tests into the current test script today.
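The bug described above can be reproduced in miniature: when building a tensor with a different channel order for testing, the data must be permuted along with the name list. The sketch below uses hypothetical shapes and variable names (`t2m`, `u10`, `v10` are my illustration, not from the PR), reusing the `x3_tensor`/`x4_tensor` names from this thread.

```python
import torch

# Hypothetical (batch, channel, feature) tensor whose channels are
# labeled by a list of variable names (names assumed for illustration).
x3_tensor = torch.randn(8, 3, 16)
x3_names = ["t2m", "u10", "v10"]

# To test a different channel order, permute the *data* together with the
# names -- reordering only the name list reproduces the testing bug.
x4_names = ["v10", "t2m", "u10"]
perm = [x3_names.index(name) for name in x4_names]
x4_tensor = x3_tensor[:, perm, :]

# Now channel i of x4_tensor really holds the variable x4_names[i].
assert torch.equal(x4_tensor[:, 0], x3_tensor[:, 2])
```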
Yup, you're exactly right! Passes no problem. Thank you for catching my testing bug!
@kevinyang-cky I think the code looks good, but I will wait to approve and merge until you have added your remaining tests. I'm going to fix some of the docs and address some other library cleanup issues in my PR.
@djgagne sounds good to me! I will tag you again when I have all the tests done and the example added to the docs. Have a great long weekend!
This PR addresses the following things (probably best not to dive into each commit, as I reverted a couple of things during development; see the latest version of the files):

- print_scaler_tensor() and read_scaler_tensor() in backend_tensor.py do that.
- distributed_tensor.py: ensure input tensors and the subsequent fitting or transforming calculations stay on the same device.
- distributed_tensor.py: remove the for-loop over channels and use vectorization instead.

Unit tests passed, and I suggest CREDIT use this version of Bridgescaler going forward.
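The loop-to-vectorization change can be illustrated generically. The sketch below is not bridgescaler's code; it shows the pattern of replacing a per-channel Python loop with broadcasting over the channel dimension, which is both faster and device-agnostic as long as the statistics live on the input's device.

```python
import torch

def standardize_loop(x, means, stds):
    # Loop pattern: iterate over channels of a (batch, channel, ...) tensor.
    out = torch.empty_like(x)
    for c in range(x.shape[1]):
        out[:, c] = (x[:, c] - means[c]) / stds[c]
    return out

def standardize_vectorized(x, means, stds):
    # Broadcasting replaces the loop: reshape the per-channel stats to
    # (1, C, 1, ...) so they align with dim 1 of x. Calling .to(x.device)
    # keeps everything on one device, matching the PR's device-safety goal.
    shape = (1, x.shape[1]) + (1,) * (x.dim() - 2)
    means = means.to(x.device).view(shape)
    stds = stds.to(x.device).view(shape)
    return (x - means) / stds

x = torch.randn(4, 3, 8, 8)
means = x.mean(dim=(0, 2, 3))
stds = x.std(dim=(0, 2, 3))
assert torch.allclose(standardize_loop(x, means, stds),
                      standardize_vectorized(x, means, stds))
```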
Here is an example of using distributed scalers for tensors; happy to put it into the docs if @djgagne can point to a place to include it.
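The original example is not reproduced in this excerpt. As a stand-in, here is a minimal sketch of the *idea* behind a distributed standard scaler for tensors: fit per-channel statistics on separate data shards, then merge the fitted scalers. The class name and every method below are my own illustration, not bridgescaler's API.

```python
import torch

class TinyDistStandardScaler:
    """Illustrative only (not bridgescaler's API): per-channel standard
    scaler whose statistics fitted on separate shards can be merged."""

    def __init__(self):
        self.n = 0
        self.mean = None
        self.sq_mean = None  # E[x^2] merges more simply than variance

    def fit(self, x):
        # x: (batch, channel, ...); reduce over every dim except channel
        dims = (0,) + tuple(range(2, x.dim()))
        self.n = x.numel() // x.shape[1]
        self.mean = x.mean(dim=dims)
        self.sq_mean = (x ** 2).mean(dim=dims)
        return self

    def __add__(self, other):
        # Merge two fitted scalers by count-weighted averaging of moments.
        merged = TinyDistStandardScaler()
        merged.n = self.n + other.n
        w1, w2 = self.n / merged.n, other.n / merged.n
        merged.mean = w1 * self.mean + w2 * other.mean
        merged.sq_mean = w1 * self.sq_mean + w2 * other.sq_mean
        return merged

    def transform(self, x):
        shape = (1, x.shape[1]) + (1,) * (x.dim() - 2)
        std = (self.sq_mean - self.mean ** 2).clamp_min(1e-12).sqrt()
        # Keep stats on the input's device (fit on CPU, transform anywhere).
        return ((x - self.mean.view(shape).to(x.device))
                / std.view(shape).to(x.device))

shard_a, shard_b = torch.randn(16, 3, 8), torch.randn(24, 3, 8)
scaler = (TinyDistStandardScaler().fit(shard_a)
          + TinyDistStandardScaler().fit(shard_b))
z = scaler.transform(torch.cat([shard_a, shard_b]))
```

The merged scaler produces the same result as fitting once on the concatenated data, which is the property that makes this style of scaler usable across data partitions.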