A real-time virtual background Android application that uses TensorFlow Lite, OpenCV, and AGSL (Android Graphics Shading Language) for GPU-accelerated person segmentation and background replacement. Built with Camera2 API for professional-grade camera integration.
- 🎥 Real-time Processing: Live background replacement using Camera2 API
- 🧠 On-device AI: TensorFlow Lite models for fast person segmentation
- ⚡ GPU Acceleration: AGSL shaders for hardware-accelerated blending (Android 13+)
- 📱 Modern Architecture: Fragment-based design with proper lifecycle management
- 🔧 OpenCV Integration: Advanced image processing capabilities
- 🎯 High Performance: Optimized for smooth 30fps processing
- 📷 Professional Camera: Camera2 API with autofocus and front/rear camera support
- Languages: Kotlin + Java
- ML Framework: TensorFlow Lite 2.5.0
- Computer Vision: OpenCV 4.x
- Graphics: AGSL (Android Graphics Shading Language)
- Camera: Camera2 API
- Architecture: Fragment-based with proper lifecycle management
- ImageProcessorAGSL: GPU-accelerated image blending using AGSL shaders
- CameraActivity: Main activity with fragment management
- Camera2BasicFragment: Camera handling and real-time processing
- TensorFlow Lite: Person segmentation model
- OpenCV: Image preprocessing and post-processing
- Android Version: API 26+ (Android 8.0)
- For AGSL Features: API 33+ (Android 13) - Tiramisu
- RAM: Minimum 4GB (6GB+ recommended)
- Storage: ~150MB for app + models
- Camera: Front/rear camera with autofocus support
- GPU: Adreno/Mali/PowerVR with OpenGL ES 3.0+
- Android Studio: Flamingo or later
- NDK: For OpenCV native libraries
- CMake: For building native modules
- Gradle: 8.0+
- JDK: 11+
- Download APK from Releases
- Enable "Install from Unknown Sources" in device settings
- Install the APK
- Grant camera permissions when prompted
- Start using real-time virtual backgrounds!
```bash
git clone https://github.com/sudhakar-r08/VirtualBackground.git
cd VirtualBackground
```
- Import project in Android Studio
- Sync Gradle files
- Ensure NDK and CMake are installed
The project uses these key dependencies:
```groovy
// TensorFlow Lite
implementation 'org.tensorflow:tensorflow-lite:2.5.0'
implementation 'org.tensorflow:tensorflow-lite-gpu:2.3.0'

// OpenCV (local module)
implementation project(':opencv')

// Image Processing
implementation 'com.github.bumptech.glide:glide:4.12.0'
implementation 'jp.co.cyberagent.android:gpuimage:2.1.0'

// Coroutines
implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-android:1.7.3'
```
```bash
./gradlew assembleDebug
```
```
VirtualBackground/
├── app/
│   ├── src/main/
│   │   ├── java/com/sudhakar/backgroundchangerapp/
│   │   │   ├── CameraActivity.java          # Main activity
│   │   │   ├── Camera2BasicFragment.java    # Camera fragment
│   │   │   ├── ImageProcessorAGSL.kt        # AGSL GPU processing
│   │   │   └── VBApp.java                   # Application class
│   │   ├── assets/
│   │   │   └── models/                      # TensorFlow Lite models
│   │   ├── res/
│   │   │   ├── layout/
│   │   │   │   └── activity_camera.xml
│   │   │   └── values/
│   │   └── AndroidManifest.xml
│   ├── jni/                                 # Native libraries
│   └── build.gradle
├── opencv/                                  # OpenCV module
└── README.md
```
The app uses Android Graphics Shading Language (AGSL) for GPU-accelerated background blending:
```kotlin
const val AGSL_SHADER_CODE = """
    uniform shader inputShader;  // Background image
    uniform shader fgdShader;    // Foreground (person)
    uniform shader maskShader;   // Segmentation mask

    half4 main(float2 coords) {
        half4 bgd = inputShader.eval(coords);
        half4 fgd = fgdShader.eval(coords);
        half4 msk = maskShader.eval(coords);
        half4 outColor = mix(bgd, fgd, msk);
        return outColor;
    }
"""
```
- Hardware Acceleration: Runs directly on GPU
- Real-time Blending: Smooth 30fps processing
- Memory Efficient: Minimal CPU-GPU data transfer
- Modern Graphics: Leverages Android 13+ graphics pipeline
- Launch App: Opens directly to camera view
- Grant Permissions: Camera access required
- Real-time Processing: Automatic background replacement
- Background Selection: Choose from predefined backgrounds
- Capture/Record: Save photos or videos with virtual backgrounds
```java
// Main activity setup: loads OpenCV, then attaches the camera fragment
public class CameraActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_camera);

        // Load OpenCV native library
        System.loadLibrary("opencv_java4");
        if (!OpenCVLoader.initDebug()) {
            Log.e("OpenCV", "Unable to load OpenCV");
        }

        // Set up the camera fragment
        Fragment fragment = Camera2BasicFragment.newInstance();
        switchFragment(fragment, false);
    }
}
```
```groovy
android {
    compileSdk 36

    defaultConfig {
        minSdk 26
        targetSdk 36
    }

    buildFeatures {
        shaders = true  // Enable shader compilation
    }

    compileOptions {
        sourceCompatibility JavaVersion.VERSION_11
        targetCompatibility JavaVersion.VERSION_11
    }

    packagingOptions {
        pickFirst 'lib/armeabi-v7a/libRSSupport.so'
    }
}
```
```xml
<uses-permission android:name="android.permission.CAMERA" />

<uses-feature android:name="android.hardware.camera" android:required="false" />
<uses-feature android:name="android.hardware.camera.autofocus" android:required="false" />
<uses-feature android:name="android.hardware.camera.front" android:required="false" />
```
GPU-accelerated image processing using AGSL:
- Hardware Rendering: Uses `HardwareRenderer` for off-screen processing (see the sketch after this list)
- Shader Blending: Custom AGSL shader for real-time compositing
- Memory Management: Efficient bitmap handling with proper cleanup
- Android 13+ Only: Requires API level 33 (Tiramisu) or higher
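ImageProcessorAGSL wires the shader above into the hardware pipeline. The project's own implementation is not reproduced here; the following is a minimal Kotlin sketch of the general `RuntimeShader` + `HardwareRenderer` pattern on Android 13+. The function name `blendWithAgsl` and the one-shot setup are illustrative only; a real implementation would reuse the renderer and `ImageReader` across frames.

```kotlin
import android.graphics.*
import android.hardware.HardwareBuffer
import android.media.ImageReader
import android.os.Build
import androidx.annotation.RequiresApi

// Sketch only: blends one frame with the AGSL shader on the GPU and reads the result back.
@RequiresApi(Build.VERSION_CODES.TIRAMISU)
fun blendWithAgsl(background: Bitmap, foreground: Bitmap, mask: Bitmap): Bitmap {
    val runtimeShader = RuntimeShader(AGSL_SHADER_CODE).apply {
        setInputShader("inputShader", BitmapShader(background, Shader.TileMode.CLAMP, Shader.TileMode.CLAMP))
        setInputShader("fgdShader", BitmapShader(foreground, Shader.TileMode.CLAMP, Shader.TileMode.CLAMP))
        setInputShader("maskShader", BitmapShader(mask, Shader.TileMode.CLAMP, Shader.TileMode.CLAMP))
    }

    // Off-screen render target backed by a HardwareBuffer
    val reader = ImageReader.newInstance(
        background.width, background.height, PixelFormat.RGBA_8888, 1,
        HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE or HardwareBuffer.USAGE_GPU_COLOR_OUTPUT
    )

    // Record a single full-frame draw that evaluates the shader per pixel
    val node = RenderNode("agslBlend").apply { setPosition(0, 0, background.width, background.height) }
    node.beginRecording().drawPaint(Paint().also { it.shader = runtimeShader })
    node.endRecording()

    val renderer = HardwareRenderer().apply {
        setSurface(reader.surface)
        setContentRoot(node)
    }
    renderer.createRenderRequest().setWaitForPresent(true).syncAndDraw()

    // Wrap the rendered buffer and copy it into a software bitmap
    val image = reader.acquireNextImage()
    val buffer = image.hardwareBuffer!!
    val result = Bitmap.wrapHardwareBuffer(buffer, null)!!.copy(Bitmap.Config.ARGB_8888, false)

    buffer.close(); image.close(); renderer.destroy(); reader.close()
    return result
}
```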
- Camera2 API: Professional camera control
- Fragment Architecture: Proper lifecycle management
- Real-time Preview: Live camera feed with processing overlay
- Multi-camera Support: Front and rear camera switching
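For camera switching, Camera2 exposes each camera's lens facing through `CameraCharacteristics`. The small helper below is a hedged sketch (not taken from Camera2BasicFragment; the name `findCameraId` is illustrative) of how a front or rear camera ID can be selected:

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Returns the ID of the first camera facing the requested direction, or null if none exists.
fun findCameraId(context: Context, front: Boolean): String? {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val wanted = if (front) CameraCharacteristics.LENS_FACING_FRONT
                 else CameraCharacteristics.LENS_FACING_BACK
    return manager.cameraIdList.firstOrNull { id ->
        manager.getCameraCharacteristics(id).get(CameraCharacteristics.LENS_FACING) == wanted
    }
}
```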
- Person Segmentation: Real-time human detection
- GPU Acceleration: TensorFlow Lite GPU delegate
- Optimized Models: Compressed models for mobile deployment
- Efficient Inference: <50ms processing time on modern devices
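As a rough illustration of this pipeline (not the app's actual model wrapper), the sketch below runs a TensorFlow Lite segmentation model through the GPU delegate. The model file name, the 257x257 input size, and the single-channel output shape are assumptions that depend on the model actually shipped in `assets/models/`:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate
import java.io.FileInputStream
import java.nio.ByteBuffer
import java.nio.ByteOrder
import java.nio.channels.FileChannel

// Hypothetical wrapper: model path and tensor shapes depend on the bundled model.
class PersonSegmenter(context: Context, modelPath: String = "models/segmentation.tflite") {

    private val gpuDelegate = GpuDelegate()
    private val interpreter = Interpreter(
        loadModel(context, modelPath),
        Interpreter.Options().addDelegate(gpuDelegate)
    )

    /** Returns a [size x size] person-probability mask for the given frame. */
    fun segment(frame: Bitmap, size: Int = 257): Array<FloatArray> {
        val scaled = Bitmap.createScaledBitmap(frame, size, size, true)
        val input = ByteBuffer.allocateDirect(4 * size * size * 3).order(ByteOrder.nativeOrder())
        val pixels = IntArray(size * size)
        scaled.getPixels(pixels, 0, size, 0, 0, size, size)
        for (p in pixels) {                        // normalize RGB to [0, 1]
            input.putFloat(((p shr 16) and 0xFF) / 255f)
            input.putFloat(((p shr 8) and 0xFF) / 255f)
            input.putFloat((p and 0xFF) / 255f)
        }
        input.rewind()
        val mask = Array(size) { FloatArray(size) }    // assumes output shape [1, size, size]
        interpreter.run(input, arrayOf(mask))
        return mask
    }

    fun close() {
        interpreter.close()
        gpuDelegate.close()
    }

    private fun loadModel(context: Context, path: String): ByteBuffer =
        context.assets.openFd(path).use { fd ->
            FileInputStream(fd.fileDescriptor).channel.use { channel ->
                channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
            }
        }
}
```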
- GPU Processing: Offloads blending to graphics hardware
- Parallel Processing: Concurrent pixel operations
- Memory Efficiency: Direct GPU memory access
- Reduced Latency: Minimal CPU-GPU data transfer
| Device | Processing Time | FPS | Memory Usage |
|---|---|---|---|
| Pixel 6 Pro | ~25ms | 30+ | ~180MB |
| Galaxy S22 | ~20ms | 30+ | ~160MB |
| OnePlus 9 | ~30ms | 25+ | ~200MB |
1. AGSL Not Working
- Ensure device runs Android 13+ (API 33)
- Check GPU compatibility
- Fallback to CPU processing if needed
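A hedged sketch of such a fallback, reusing the illustrative `blendWithAgsl` helper from the ImageProcessorAGSL section above and assuming all three bitmaps share the same dimensions; the naive CPU path is only a last resort, not a 30fps solution:

```kotlin
import android.graphics.Bitmap
import android.graphics.Color
import android.os.Build

// Use the AGSL path only on Android 13+, otherwise blend on the CPU.
fun blend(background: Bitmap, foreground: Bitmap, mask: Bitmap): Bitmap =
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
        blendWithAgsl(background, foreground, mask)
    } else {
        blendOnCpu(background, foreground, mask)
    }

// Naive per-pixel blend: out = background * (1 - mask) + foreground * mask
fun blendOnCpu(background: Bitmap, foreground: Bitmap, mask: Bitmap): Bitmap {
    val out = background.copy(Bitmap.Config.ARGB_8888, true)
    for (y in 0 until out.height) {
        for (x in 0 until out.width) {
            val m = Color.red(mask.getPixel(x, y)) / 255f
            val bg = background.getPixel(x, y)
            val fg = foreground.getPixel(x, y)
            out.setPixel(x, y, Color.rgb(
                (Color.red(bg) * (1 - m) + Color.red(fg) * m).toInt(),
                (Color.green(bg) * (1 - m) + Color.green(fg) * m).toInt(),
                (Color.blue(bg) * (1 - m) + Color.blue(fg) * m).toInt()
            ))
        }
    }
    return out
}
```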
2. OpenCV Loading Failed
```java
// Check OpenCV initialization
if (!OpenCVLoader.initDebug()) {
    Log.e("OpenCV", "Unable to load OpenCV");
    // Implement fallback or show error
}
```
3. Camera Permissions
- Grant camera permission in device settings
- Check manifest permissions are correctly declared
- Handle runtime permission requests
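A standard runtime-permission check, shown as a hedged sketch (the helper name and request code are placeholders, not from the project):

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

const val REQUEST_CAMERA = 1  // arbitrary request code

// Requests the CAMERA permission if it has not already been granted.
fun AppCompatActivity.ensureCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED
    ) {
        ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.CAMERA), REQUEST_CAMERA)
    }
}
```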
4. Performance Issues
- Reduce camera resolution
- Disable GPU acceleration if causing crashes
- Optimize TensorFlow Lite model size
- Use Android Studio profiler for performance analysis
- Test on multiple device architectures (ARM64, ARM32)
- Implement proper error handling for GPU operations
- Add fallback processing for older devices
- Real-time Effects: Add filters and color adjustments
- Video Backgrounds: Animated background support
- Edge Refinement: Improved segmentation boundaries
- Background Library: Downloadable background packs
- Social Sharing: Direct integration with social platforms
- Custom Models: Support for custom TensorFlow Lite models
- Vulkan API: Next-gen graphics API support
- MediaPipe: Alternative to TensorFlow Lite
- CameraX: Migration from Camera2 API
- Jetpack Compose: Modern UI framework
```
CameraActivity (Main)
        ↓
Camera2BasicFragment (Camera Logic)
        ↓
ImageProcessorAGSL (GPU Processing)
        ↓
TensorFlow Lite (ML Inference)
        ↓
OpenCV (Image Processing)
```
- CameraActivity: Fragment management and OpenCV initialization
- Camera2BasicFragment: Camera2 API integration and real-time processing
- ImageProcessorAGSL: AGSL shader-based GPU acceleration
- VBApp: Application class for global initialization
- Fork the repository
- Create a feature branch: `git checkout -b feature-agsl-improvements`
- Implement changes with proper testing
- Test on multiple devices and Android versions
- Submit pull request with detailed description
- Follow Android architecture best practices
- Test AGSL features on Android 13+ devices
- Maintain backward compatibility for older Android versions
- Document GPU-specific implementations
- Include performance benchmarks for new features
This project is licensed under the MIT License:
MIT License
Copyright (c) 2025 Sudhakar Raju
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
- TensorFlow Team: For TensorFlow Lite mobile AI framework
- OpenCV Foundation: For computer vision libraries
- Android Team: For AGSL and Camera2 API
- Google AI: For person segmentation research and models
- 🐛 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
- 📧 Email: sudhakar.r08@gmail.com
⭐ If this project helped you, please give it a star! ⭐
Made with ❤️ by Sudhakar Raju