This is a Flutter application that demonstrates real-time object detection and tracking using a custom TensorFlow Lite model. The application utilizes the device's camera to identify an object, display its label, and track its center with a red dot on the screen.
- Real-time Object Detection: Detects objects from the live camera feed.
- Custom TFLite Model: Uses a custom TensorFlow Lite model for object detection.
- Object Tracking: Tracks the detected object and visualizes its center with a moving dot.
- Label Display: Shows the label of the detected object.
- State Management with BLoC: Manages the application state efficiently using the BLoC pattern.
- Isolate for Performance: Performs heavy object detection tasks in a separate isolate to prevent UI jank.
- Framework: Flutter
- Language: Dart
- State Management: flutter_bloc
- Camera: camera
- ML/Vision: google_mlkit_object_detection
- Isolates: integral_isolates
The project follows a clean architecture, separating concerns into different layers:
```
lib/
├── constants/
│   └── constants.dart       # Application-wide constants
├── device/
│   └── mlkit_object_detection_camera_repository.dart   # Handles camera and ML Kit logic
├── logic/
│   └── cubit/               # BLoC state management
│       ├── object_tracker_cubit.dart
│       └── object_tracker_state.dart
├── presentation/
│   └── screens/
│       └── object_tracker_screen.dart   # Main UI screen
├── widgets/
│   └── dot_painter_widget.dart          # Custom painter for the tracking dot and label
└── main.dart                # App entry point
```
Prerequisites:

- Flutter SDK installed.
- An IDE like VS Code or Android Studio.
- A physical device or emulator for testing.
Installation:

- Clone the repository:

  ```bash
  git clone https://github.com/IoT-gamer/flutter_object_detection_tracking_demo.git
  cd flutter_object_detection_tracking_demo
  ```
- Get Flutter packages:

  ```bash
  flutter pub get
  ```
- Place your TFLite model:
  - This project expects a TensorFlow Lite model in `assets/ml/`.
  - If you have your own model, create an `assets/ml` directory and place your `.tflite` file there. Update the `modelPath` in `lib/constants/constants.dart` if you use a different name (a sketch of that file follows below).
  - For compatible pre-trained models, and the requirements such models must meet, see ML Kit's custom models documentation.
  - Ensure your `pubspec.yaml` file has the assets directory declared:

    ```yaml
    flutter:
      uses-material-design: true
      assets:
        - assets/ml/
    ```
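  - For reference, a minimal sketch of what `lib/constants/constants.dart` might contain (only the `modelPath` name and the `2.tflite` file name appear elsewhere in this README; the exact declaration is an assumption):

    ```dart
    // lib/constants/constants.dart (illustrative sketch, not the actual file)
    // Path to the custom TFLite model bundled as a Flutter asset.
    // Keep this in sync with the `assets:` section of pubspec.yaml.
    const String modelPath = 'assets/ml/2.tflite';
    ```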
- Android Additional Setup:
  - Ensure Gradle doesn't compress the TFLite model file. Add the following inside the `android` block of your `android/app/build.gradle.kts` file:

    ```kotlin
    aaptOptions {
        noCompress("tflite")
    }
    ```
- Run the app:

  ```bash
  flutter run
  ```
How it works:

- Camera Initialization: The `ObjectTrackerCubit` initializes the front-facing camera.
- Image Stream: The `MLKITObjectDetectionCameraRepository` starts an image stream from the camera controller.
- Isolate Processing: For each frame in the stream, the image data is sent to a separate isolate using the `integral_isolates` package. This prevents the heavy detection work from blocking the main UI thread (see the detection sketch after this list).
- Object Detection: Inside the isolate, `google_mlkit_object_detection` processes the image using the provided custom TFLite model (`2.tflite`).
- State Update: The result, containing the center coordinates and the label of the detected object, is sent back to the main thread. The `ObjectTrackerCubit` receives this data and emits a new `ObjectTrackerSuccess` state (see the cubit sketch below).
- UI Rendering: The `ObjectTrackerScreen` listens to state changes. When it receives a new success state, it rebuilds the UI, passing the object's center coordinates and label to the `DotPainterWidget`.
- Visualization: The `CustomPaint` widget uses `DotPainterWidget` to draw a red dot and the object's label at the calculated position on top of the `CameraPreview` (see the painter sketch below).
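To make the isolate-processing and detection steps concrete, here is a minimal sketch. It uses `integral_isolates`' `StatefulIsolate.compute` API and the `ObjectDetector` from `google_mlkit_object_detection`, but the `DetectionResult` type, the `_detect`/`detectInIsolate` helpers, and the per-frame detector creation are illustrative simplifications, not the repository's actual code (any extra setup a plugin needs in a background isolate is also omitted):

```dart
import 'package:google_mlkit_object_detection/google_mlkit_object_detection.dart';
import 'package:integral_isolates/integral_isolates.dart';

/// Hypothetical result type: the detected object's center and label.
class DetectionResult {
  const DetectionResult(this.centerX, this.centerY, this.label);
  final double centerX;
  final double centerY;
  final String label;
}

/// Top-level function so the isolate can invoke it. A real implementation
/// would reuse one ObjectDetector across frames instead of creating and
/// closing one per call.
Future<DetectionResult?> _detect((String, InputImage) args) async {
  final (modelPath, image) = args;
  final detector = ObjectDetector(
    options: LocalObjectDetectorOptions(
      mode: DetectionMode.single,
      modelPath: modelPath, // file path to the .tflite model on the device
      classifyObjects: true,
      multipleObjects: false,
    ),
  );
  final objects = await detector.processImage(image);
  await detector.close();
  if (objects.isEmpty) return null;
  final obj = objects.first;
  final center = obj.boundingBox.center;
  return DetectionResult(
    center.dx,
    center.dy,
    obj.labels.isNotEmpty ? obj.labels.first.text : '',
  );
}

/// One long-lived isolate reused for every frame, keeping the UI thread free.
final StatefulIsolate _isolate = StatefulIsolate();

Future<DetectionResult?> detectInIsolate(String modelPath, InputImage image) =>
    _isolate.compute(_detect, (modelPath, image));
```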
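The state-update step could then look roughly like this, assuming `ObjectTrackerSuccess` simply carries the center coordinates and label (the exact state fields and the `ObjectTrackerInitial`/`onDetection` names are guesses, not the real API):

```dart
import 'package:flutter_bloc/flutter_bloc.dart';

/// Simplified guess at the state hierarchy in object_tracker_state.dart.
sealed class ObjectTrackerState {
  const ObjectTrackerState();
}

class ObjectTrackerInitial extends ObjectTrackerState {
  const ObjectTrackerInitial();
}

class ObjectTrackerSuccess extends ObjectTrackerState {
  const ObjectTrackerSuccess(this.centerX, this.centerY, this.label);
  final double centerX;
  final double centerY;
  final String label;
}

class ObjectTrackerCubit extends Cubit<ObjectTrackerState> {
  ObjectTrackerCubit() : super(const ObjectTrackerInitial());

  /// Called with each result coming back from the detection isolate.
  void onDetection(double centerX, double centerY, String label) {
    emit(ObjectTrackerSuccess(centerX, centerY, label));
  }
}
```

The screen would then watch the cubit with a `BlocBuilder<ObjectTrackerCubit, ObjectTrackerState>` and, on `ObjectTrackerSuccess`, pass the coordinates and label down to the painter.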
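Finally, the visualization step reduces to a `CustomPainter` that draws the dot and label over the camera preview. A minimal sketch (the class name `DotPainter` and its fields are placeholders for whatever `dot_painter_widget.dart` actually defines):

```dart
import 'package:flutter/material.dart';

/// Draws a red dot at the tracked object's center, with its label above it.
class DotPainter extends CustomPainter {
  DotPainter({required this.center, required this.label});

  final Offset center;
  final String label;

  @override
  void paint(Canvas canvas, Size size) {
    // Red dot at the detected object's center.
    canvas.drawCircle(center, 8, Paint()..color = Colors.red);

    // Label text just above the dot.
    final textPainter = TextPainter(
      text: TextSpan(
        text: label,
        style: const TextStyle(color: Colors.red, fontSize: 16),
      ),
      textDirection: TextDirection.ltr,
    )..layout();
    textPainter.paint(canvas, center.translate(-textPainter.width / 2, -32));
  }

  @override
  bool shouldRepaint(DotPainter oldDelegate) =>
      oldDelegate.center != center || oldDelegate.label != label;
}
```

Stacked above the `CameraPreview`, e.g. `CustomPaint(painter: DotPainter(center: center, label: label))`, this produces the moving dot described above.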
This project is licensed under the MIT License. See the LICENSE file for details.