
BeaverPhotonVision


Beaverlib includes a class that can be used to quickly set up a vision subsystem.


Using the AprilTag Pipeline

  1. To do this, first follow PhotonVision's docs and make sure you are able to detect 3D AprilTags.

  2. Then, make a new file in your subsystems folder (or wherever you keep your subsystems) named Vision.

  3. Set val Vision = BeaverPhotonVision()

  • This will create a new subsystem named Vision. Every periodic, it will check each of the cameras for new results and will call listener.update for each new result.

  4. Create a new BeaverVisionCamera() for each of your connected PhotonVision cameras.

  5. Set the name of each BeaverVisionCamera() to its name on the PhotonVision Dashboard.

  6. Set robotToCamera to the offset of the camera from the center of the robot, measured in real life (or in CAD if you are confident).

  7. Set layout to AprilTagFieldLayout.loadField(AprilTagFields.[insert this year's field]).

  8. Set strategy to the AprilTag pose estimation strategy you would like to use (PhotonPoseEstimator.PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR with fallbackStrategy = PhotonPoseEstimator.PoseStrategy.CLOSEST_TO_REFERENCE_POSE is recommended).

// Vision subsystem
val Vision =
    BeaverPhotonVision(
        BeaverVisionCamera(
            // Camera name as it appears on the PhotonVision Dashboard
            "HD_USB_Camera",
            // robotToCamera: offset from robot center to camera, in meters
            Transform3d(0.18, -0.33, 0.2, Rotation3d()),
            layout = AprilTagFieldLayout.loadField(AprilTagFields.k2025ReefscapeWelded),
            strategy = PhotonPoseEstimator.PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR,
            fallbackStrategy = PhotonPoseEstimator.PoseStrategy.CLOSEST_TO_REFERENCE_POSE,
        )
    )

Now you can use this data with Vision.listeners.add("InsertListenerName", { result, camera -> insert your code here }). The code in the lambda will run every frame a camera sees an AprilTag. You can stop the function from running again with Vision.listeners.remove("InsertListenerName").
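For example, here is a minimal sketch of a listener that just logs which tag IDs are currently visible; the key "LogTags" is an arbitrary example name, not something defined by the library.

// Log the fiducial IDs of any visible tags ("LogTags" is an arbitrary key)
Vision.listeners.add(
    "LogTags",
    { result, camera ->
        println("Visible tags: " + result.targets.map { it.fiducialId })
    },
)

// Later, once the log is no longer needed:
Vision.listeners.remove("LogTags")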

If you want to estimate the robot's position from the AprilTags, you can copy the following code into your drivetrain's init function:

// Updates odometry whenever vision sees an AprilTag
Vision.listeners.add(
    "UpdateOdometry",
    { result, camera ->
        if (result.targets.isEmpty()) return@add
        // Skip ambiguous single-tag results
        if (
            !result.multitagResult.isPresent && (result.targets.first().poseAmbiguity > 0.3)
        ) return@add
        val newPose = camera.getEstimatedRobotPose(result) ?: return@add
        addVisionMeasurement(newPose.toPose2d(), result.timestampSeconds)
    },
)
setVisionMeasurementStdDevs(3.0, 4.0, 5.0)

Here, addVisionMeasurement adds a vision measurement to the SwervePoseEstimator (swerveDrive.addVisionMeasurement(robotPose, timestamp) for YAGSL), and setVisionMeasurementStdDevs sets the vision standard deviations for the SwervePoseEstimator (swerveDrive.swerveDrivePoseEstimator.setVisionMeasurementStdDevs(VecBuilder.fill(stdDevX, stdDevY, stdDevTheta)) for YAGSL).
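If your drivetrain does not already expose these helpers, a minimal sketch for YAGSL could look like the following. This assumes your drivetrain subsystem holds a YAGSL SwerveDrive in a field named swerveDrive and that the imports go at the top of that file; adapt the names to your own code.

import edu.wpi.first.math.VecBuilder
import edu.wpi.first.math.geometry.Pose2d

// Forwards a vision pose estimate to the YAGSL pose estimator
fun addVisionMeasurement(robotPose: Pose2d, timestampSeconds: Double) {
    swerveDrive.addVisionMeasurement(robotPose, timestampSeconds)
}

// Sets how much the pose estimator trusts vision (x and y in meters, theta in radians)
fun setVisionMeasurementStdDevs(stdDevX: Double, stdDevY: Double, stdDevTheta: Double) {
    swerveDrive.swerveDrivePoseEstimator.setVisionMeasurementStdDevs(
        VecBuilder.fill(stdDevX, stdDevY, stdDevTheta)
    )
}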
