Vision
Vision is a key component of FTC challenges, often reserved for the most advanced operations a team can perform. BunyipsLib provides an easy framework and integrated processors to create and manage vision systems, which also integrate smoothly into the core paradigms.
Vision is a BunyipsSubsystem that manages the propagation of vision Processor instances. It is responsible for starting up, shutting down, and otherwise managing an internal VisionPortal that can be configured and used as a general BunyipsLib subsystem. Vision also exposes a utility that allows the SwitchableVisionSender to be activated, allowing any vision processor to be streamed to FtcDashboard with a single line of code.
Review the API docs for Vision (linked above) to see all the available methods for managing processor instances.
The Processor is the key part of the Vision system, and represents a VisionProcessor from the SDK. It exposes OpenCV methods for you to use, as well as providing utilities for camera dimensions, a bitmap for FtcDashboard, and a system for data throughput using a generic type. A VisionData derivative is returned by all processors via the getData method, which returns a list of data classes that is periodically updated as the processor instance runs. A flowchart is provided to show this connection.

The Processor class is abstract and is extended to create general-purpose processors. BunyipsLib has a few integrated processors that provide all the vision processing you would ever need (see the sections below) as simple components you instantiate.
Tip
Every Processor has a getData() method. Use it to extract meaningful data from your processors to use them in tasks and other applications!
WhitePixel whitePixel = new WhitePixel(); // Your vision processor
Vision vision = new Vision(robot.camera);
vision.init(whitePixel).start(whitePixel); // Initialise and start the processor
// BunyipsLib includes several integrated vision tasks
AlignToContourTask task = new AlignToContourTask(robot.drive, whitePixel::getData); // Use in tasks
The SwitchableVisionSender is used internally by the Vision instance through the preview methods to stream a camera feed to the dashboard. It can be used manually; however, it is designed to run on another thread, so due care is advised if you choose to do so. The Threads utility is used internally by Vision to provide the streaming service.
Multiple instances of Vision are supported by the SwitchableVisionSender, and all registered processors and vision instances can be dynamically updated by index via an FtcDashboard configuration field. When a SwitchableVisionSender is running, it will be streamed to FtcDashboard under the Camera view. You may need to rearrange your layout to see it.
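A minimal sketch of starting a dashboard preview is shown below. The startPreview() and setPreview(...) method names here are assumptions based on the preview methods mentioned above; consult the Vision API docs for the exact names.
Vision vision = new Vision(robot.camera);
WhitePixel whitePixel = new WhitePixel();
vision.init(whitePixel).start(whitePixel);
// Begin streaming to FtcDashboard on a separate thread
// Note: startPreview()/setPreview(...) names are assumptions; see the API docs
vision.startPreview();
vision.setPreview(whitePixel); // select which processor's feed is streamed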
There are various integrated OpenCV processors to assist in the creation of OpModes. These include all the vision pipelines integrated into the SDK, such as AprilTag, as well as basic colour processing. The ColourThreshold is the BunyipsLib upgraded version of the basic colour locator.
AprilTag
Scans for AprilTags and provides their data in the form of an AprilTagData instance. The AprilTagData schema can be found here. This instance can also be used for relocalisation via an AprilTagRelocalizingAccumulator; however, the robot-camera pose needs to be set. An example below demonstrates this process.
AprilTag defaultAprilTag = new AprilTag(); // no parameters, using all defaults
AprilTag aprilTag = new AprilTag(builder -> {
    // extra builder parameters can optionally go in this lambda, including configuring the camera location for relocalisation
    builder.setSuppressCalibrationWarnings(true); // other builder config of the AprilTagProcessor can be done too
    // utility builder for robot camera pose
    builder = AprilTag.setCameraPose(builder)
            .forward(Centimeters.of(20)) // define where the camera is
            .left(Centimeters.of(20))
            .up(Centimeters.of(7))
            .yaw(Degrees.of(90))
            .apply();
    return builder;
});
This processor can then be attached to a Vision instance to receive updates, and passed into the AprilTagRelocalizingAccumulator to relocalise.
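A rough sketch of this wiring is shown below; the withAccumulator(...) attachment point on the drive is an assumption and may differ depending on your drive class, as may the accumulator being constructed directly from the AprilTag processor instance.
Vision vision = new Vision(robot.camera);
vision.init(aprilTag).start(aprilTag); // aprilTag from the example above, with a camera pose set
// Forward AprilTag detections into the pose estimate for relocalisation
// Assumption: the accumulator attaches via withAccumulator(...); check your drive/localizer API
robot.drive.withAccumulator(new AprilTagRelocalizingAccumulator(aprilTag));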

ColourThreshold
A modern processor for colour-based object detection, including erosion, dilation, blur, thresholding, and Perspective-N-Point processing based on set thresholds. The ColourThreshold is abstract: to construct one, extend it and call the various set methods in the constructor to configure your processor. Public static fields should be used to allow for FtcDashboard tuning with @Config. An example is provided below.
Tip
Integrated vision processors pre-calibrated for the INTO THE DEEP and CENTERSTAGE seasons are available in the vision.processors package. Below is an excerpt from one of the integrated vision processors.
@Config
public class YellowSample extends ColourThreshold {
    public static Scalar LOWER_YCRCB = new Scalar(0, 150, 0);
    public static Scalar UPPER_YCRCB = new Scalar(255, 255, 90);
    public static double MIN_AREA = 10; // min threshold percent
    public static double MAX_AREA = 20; // max threshold percent
    public static boolean SHOW_MASKED_INPUT = false;

    public YellowSample() {
        setColourSpace(ColourSpace.YCrCb); // using YCrCb colour space
        setContourAreaMinPercent(() -> MIN_AREA);
        setContourAreaMaxPercent(() -> MAX_AREA);
        setLowerThreshold(() -> LOWER_YCRCB);
        setUpperThreshold(() -> UPPER_YCRCB);
        setBoxColour(Color.YELLOW); // outline
        setShowMaskedInput(() -> SHOW_MASKED_INPUT); // camera view filter mask
        setErodeSize(3.5); // px
        setDilateSize(3.5); // px
        setBlurSize(3); // px
        setExternalContoursOnly(true); // no nested contours
        setPnP(Centimeters.of(9), Centimeters.of(3.5)); // real world obj dimensions
    }

    @Override
    @NonNull
    public String getId() {
        return "yellowsample"; // used for processor selection on FtcDashboard and to tie data to an ID
    }
}

MultiColourThreshold
Composite ColourThreshold that combines the results of multiple thresholds and provides them in one singular instance.
MultiColourThreshold mct = new MultiColourThreshold(new BlueSample(), new NeutralSample(), new RedSample());
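Once attached to a Vision instance, the combined results are read through the single composite. A brief sketch, assuming the composite reports the same ContourData type as a single ColourThreshold:
vision.init(mct).start(mct);
// One list carries contours from BlueSample, NeutralSample and RedSample combined
// Assumption: the composite data type is ContourData, as with a single ColourThreshold
List<ContourData> contours = mct.getData();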

ColourTunerOpMode
Utility OpMode for ColourThreshold processors to tune the thresholds live with the help of a gamepad. Does not require FtcDashboard, as it can run purely with the Camera Stream! To make one of these tuning OpModes, simply supply your camera instance and the desired ColourThresholds to tune:
@TeleOp(name = "Tune Colours", group = "Utility")
public class TuneColours extends ColourTunerOpMode {
    @NonNull
    @Override
    protected CameraName setCamera() {
        return hardwareMap.get(WebcamName.class, "webcam");
    }

    @NonNull
    @Override
    protected ColourThreshold[] setThresholdsToTune() {
        return new ColourThreshold[]{
                new BlueSample(),
                new NeutralSample(),
                new RedSample()
        };
    }
}
When this OpMode is run, instructions for adjusting the thresholds with the gamepad will show on telemetry.
ColourLocator
BunyipsLib port of the SDK-integrated ColorBlobLocatorProcessor. The ColourThreshold is the modern version of this processor, with more features and the ability to be tuned dynamically. Since this processor is instance-based, its ID is denoted by colourlocatorX, where X starts at 0 and increments once per construction per OpMode. The ColourBlob is used as the data class for this processor.
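Construction is expected to mirror the AprilTag builder-lambda pattern. In this sketch, passing the SDK ColorBlobLocatorProcessor.Builder through a lambda is an assumption, while setTargetColorRange and setRoi are real SDK builder methods.
// Assumption: ColourLocator exposes the SDK ColorBlobLocatorProcessor.Builder via a lambda, like AprilTag does
ColourLocator locator = new ColourLocator(builder -> builder
        .setTargetColorRange(ColorRange.BLUE) // SDK preset colour range
        .setRoi(ImageRegion.entireFrame()));
List<ColourBlob> blobs = locator.getData(); // the first instance will use the ID "colourlocator0"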
ColourSensor
BunyipsLib port of the SDK-integrated PredominantColorProcessor. Since this processor is instance-based, its ID is denoted by coloursensorX, where X starts at 0 and increments once per construction per OpMode. The ColourSample is used as the data class for this processor.
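As with ColourLocator, a builder-lambda construction is assumed here; setRoi and setSwatches are real methods on the SDK's PredominantColorProcessor.Builder.
// Assumption: ColourSensor exposes the SDK PredominantColorProcessor.Builder via a lambda
ColourSensor sensor = new ColourSensor(builder -> builder
        .setRoi(ImageRegion.asUnityCenterCoordinates(-0.1, 0.1, 0.1, -0.1)) // small central region
        .setSwatches(
                PredominantColorProcessor.Swatch.RED,
                PredominantColorProcessor.Swatch.BLUE,
                PredominantColorProcessor.Swatch.YELLOW));
List<ColourSample> samples = sensor.getData(); // the first instance will use the ID "coloursensor0"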