Description
Currently, when the robot receives enemy tracking data from the CV-side Jetson, it processes the data into a usable format through the enemy_data_conversion code. This processing is a frame transformation: it changes the origin of the coordinate system from the center of the camera to the center of the turret, which simplifies the ballistics math.
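As a minimal sketch of what the re-origining step amounts to (assuming, for illustration, a pure translation between the camera and turret origins; the real code also handles rotation, and these names are illustrative rather than the actual enemy_data_conversion types):

```cpp
#include <cmath>

// Illustrative 3-D vector type, not the repo's actual struct.
struct Vec3 { float x, y, z; };

// Convert a target position measured in the camera frame into the gimbal
// (turret) frame. For a camera rigidly mounted near the turret, this is a
// fixed offset: the camera origin's position expressed in the gimbal frame.
Vec3 cameraToGimbal(const Vec3 &targetInCamera, const Vec3 &cameraOriginInGimbal) {
    return { targetInCamera.x + cameraOriginInGimbal.x,
             targetInCamera.y + cameraOriginInGimbal.y,
             targetInCamera.z + cameraOriginInGimbal.z };
}
```

The key property of this direct form is that the offset is a mechanical constant, so nothing in it depends on the robot's motion.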
In practice, the current version over-complicates this: it first transforms the origin down to a point defined as the chassis's center, then re-transforms it back up to the gimbal's origin. As a result, when the chassis is spinning (during defensive maneuvers), small math errors caused by processing latency accumulate drastically and the coordinate drifts far off course. A stop-gap currently exists that mostly compensates for the lag by grabbing an older sample of the chassis's position.
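A back-of-envelope sketch of why spinning makes this blow up (all numbers here are assumptions for illustration, not measurements from the robot):

```cpp
#include <cmath>

// If the chassis yaw used in the camera -> chassis -> gimbal chain is stale
// by the processing latency, the coordinate gets mis-rotated by that stale
// angle, which shows up as a lateral offset proportional to target range.
float staleYawLateralError(float spinRateRadPerSec, float latencySec, float rangeMeters) {
    float yawError = spinRateRadPerSec * latencySec;   // stale-yaw angle
    return rangeMeters * std::sin(yawError);           // lateral offset of target
}

// With assumed numbers -- a 6 rad/s defensive spin, 10 ms of latency, and a
// 3 m target range -- the offset is already on the order of 0.18 m per
// sample, before any accumulation.
```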
To fix this, we want to simplify the system and perform the transformation directly from the camera frame to the gimbal frame. gimbal_chase_command will be helpful in debugging the issue; follow the functional pipeline up from it to see where the coordinates are fed to the robot in the first place.
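One way to see why the direct transform is more robust is a 2-D, yaw-only sketch, under the assumption that the camera is rigid with respect to the gimbal (function names here are illustrative, not from the repo):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

Vec2 rotate(const Vec2 &v, float yaw) {
    return { std::cos(yaw) * v.x - std::sin(yaw) * v.y,
             std::sin(yaw) * v.x + std::cos(yaw) * v.y };
}

// Chained path: camera -> chassis -> gimbal. The chassis yaw enters twice
// with opposite signs. If the two yaw samples are taken at slightly
// different times while the chassis spins, they no longer cancel and the
// result is mis-rotated by the difference.
Vec2 chained(const Vec2 &targetInCamera, float chassisYawAtT1, float chassisYawAtT2) {
    Vec2 inChassis = rotate(targetInCamera, chassisYawAtT1); // camera -> chassis
    return rotate(inChassis, -chassisYawAtT2);               // chassis -> gimbal
}

// Direct path: camera -> gimbal. The chassis yaw never appears, so spinning
// cannot inject error (identity here because the frames are assumed rigid).
Vec2 direct(const Vec2 &targetInCamera) { return targetInCamera; }
```

With identical yaw samples the chained path matches the direct one exactly; with samples that differ by even 0.1 rad, the chained result is visibly rotated off target, which is the drift described above.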
For real-time debugging and testing, work with the CV Team (lead: Albert Ma) to get a Jetson set up to send CV tracking data to the embedded side.
Use branch feature/cv-transformation-fix-2024
To dos:
- Understand the current system and pinpoint the math behind it.
- Remove the needless camera -> chassis -> gimbal transformation and transform directly from the camera frame to the gimbal frame.
- Remove the chassis-spin compensation fix, which should no longer be needed.
- Test that tracking is operational on Standard