-
What kind of processing is expected to be required for either approach? Also, is there anything one would or wouldn't be able to do that the other can? I think @shreshtaparthaje mentioned object detection in the context of noticing a robot in the way during auto, IIRC. I'm not sure how you'd handle that with color detection, but even standard object detection would be tough, since a robot isn't one specific object.
-
One other note I've gleaned from working on basic game piece detection code: one of the known benefits of object detection is that it could let a single camera detect and differentiate multiple types of targets. However, in the code it is surprisingly hard to actually get the type of a target, so it is hard to know what the robot is even looking at in the first place. In theory this should be available through NetworkTables, though I don't know how easy it would be to match that data to the correct vision targets. So we shouldn't assume that a single camera can handle every kind of object detection on its own. As a side note, color detection would mean each camera only detects one type of game piece, no exceptions, if we want it to be stable.
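For reference, here's a minimal sketch of what pulling the target type out through photonlib might look like, assuming a newer PhotonVision release that exposes a class ID on each tracked target. The accessor name `getDetectedObjectClassID()` and the camera name `"gamepiece_cam"` are assumptions on my part, not confirmed API:

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class GamePieceTypeReader {
    // Camera name is hypothetical; it must match the name set in the PhotonVision UI.
    private final PhotonCamera camera = new PhotonCamera("gamepiece_cam");

    /** Returns the class ID of the best detected object, or -1 if nothing is seen. */
    public int getBestTargetClassId() {
        PhotonPipelineResult result = camera.getLatestResult();
        if (!result.hasTargets()) {
            return -1;
        }
        PhotonTrackedTarget best = result.getBestTarget();
        // Assumption: recent PhotonVision object-detection pipelines publish a class ID
        // per target; the exact accessor name may differ between versions.
        return best.getDetectedObjectClassID();
    }
}
```

Even with something like this, we'd still have to map the numeric class ID back to a game piece type ourselves, which is part of why I'm skeptical one camera covers everything cleanly.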
-
We will need to decide what to focus on for next year with regard to vision: object detection or color detection. My vote goes to color detection unless something changes: in theory it should be able to do anything object detection can, but with less effort on our part. Plus, since game piece detection is a first for our team, we have a lot of other logic to implement alongside the detection itself to actually make it useful. In particular, we have to figure out how to convert the measurements vision returns for a game piece into an actual Pose2d, which is difficult on its own.
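To make the Pose2d point concrete, here's a rough sketch of one way that conversion could work for a fixed-mount camera, using photonlib's PhotonUtils helpers plus the drivetrain's current pose. All of the mounting numbers and the assumption that the game piece sits on the carpet (target height ≈ 0) are placeholders, not measured values, and a real version would also account for the camera's offset on the robot:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.util.Units;
import org.photonvision.PhotonUtils;

public class GamePiecePoseEstimator {
    // Placeholder mounting constants; these have to be measured on the real robot.
    private static final double CAMERA_HEIGHT_METERS = Units.inchesToMeters(20.0);
    private static final double CAMERA_PITCH_RADIANS = Units.degreesToRadians(-15.0);
    // Game piece is assumed to be lying on the floor.
    private static final double TARGET_HEIGHT_METERS = 0.0;

    /**
     * Estimates a field-relative Pose2d for a game piece from the camera's
     * reported yaw/pitch (degrees) and the robot's current field pose.
     */
    public static Pose2d estimateGamePiecePose(
            double targetYawDegrees, double targetPitchDegrees, Pose2d robotPose) {
        // Flat-ground trigonometry: distance from known camera/target heights and pitch.
        double distanceMeters = PhotonUtils.calculateDistanceToTargetMeters(
                CAMERA_HEIGHT_METERS,
                TARGET_HEIGHT_METERS,
                CAMERA_PITCH_RADIANS,
                Units.degreesToRadians(targetPitchDegrees));

        // Camera-relative translation to the target (x forward, y left); yaw is
        // negated to go from the camera's CV convention to WPILib coordinates.
        Translation2d cameraToTarget = PhotonUtils.estimateCameraToTargetTranslation(
                distanceMeters, Rotation2d.fromDegrees(-targetYawDegrees));

        // Camera-to-robot offset is ignored here for brevity.
        return robotPose.transformBy(new Transform2d(cameraToTarget, new Rotation2d()));
    }
}
```

The key point is that none of this depends on whether the target came from color detection or object detection; it's extra work we have to do either way.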
Doing this AS WELL AS setting up object detection, which won't even be possible next year until we either train our own model or PhotonVision releases one (which it still hasn't done for coral, showing that the training process can be difficult), seems like a very tall order.
While object detection is something we should be working towards, we should first figure out color detection on its own, which would make object detection that much easier later. We could also have part of the vision group working on configuring object detection while the rest test with color detection.