Shazam-like Audio Identification Tool
AUDIORA is an advanced music recognition system designed to identify songs from short audio clips. Inspired by platforms like Shazam, it leverages cutting-edge audio fingerprinting, real-time processing, and a full-stack web interface to bridge the gap between user experience and embedded audio recognition technology.
Built with a modern React frontend, a powerful Go-based backend, and a Python audio engine, AUDIORA provides a scalable and responsive solution for music lovers and developers alike.
AUDIORA converts recorded audio into unique fingerprints, matches them against a fingerprint database, and returns the matching song metadata to the user, all through a user-friendly web interface. It supports live recordings, precise fingerprinting, and fast identification optimized for cloud-scale deployment.
- Real-time audio capture from the browser
- Audio fingerprinting using FFT-based frequency analysis
- Accurate song matching via Locality-Sensitive Hashing (LSH)
- Modular backend in Go + Python
- Modern, responsive frontend using React and Tailwind CSS
- Basic testing using Postman, MongoDB Compass, and logs
- Scalable and containerized using Docker
| Layer | Tools & Languages |
|---|---|
| Frontend | React.js, Tailwind CSS, Web Audio API |
| Backend (API) | Golang (Go), Flask (Python bridge) |
| Audio Engine | Python, Librosa, NumPy, SciPy, FFmpeg |
| Database | MongoDB |
| Containerization | Docker |
| Testing Tools | Postman, MongoDB Compass |
```
AUDIORA/
├── Backend/
│   ├── audio_engine/      # Python audio fingerprinting logic
│   ├── app.go             # Go server API
│   └── flask_bridge.py    # Bridge to Python engine
├── Frontend/
│   ├── src/               # React components
│   └── index.html
├── Dockerfile
├── database/              # MongoDB fingerprint collections
├── tests/                 # Postman/API tests
└── README.md
```
- Record: User records audio via browser.
- Preprocess: Audio cleaned, converted to mono WAV using FFmpeg.
- Fingerprinting: Spectral peaks extracted using FFT; hashes generated.
- Match: Fingerprints matched against MongoDB using LSH.
- Result: Matching song displayed with title, artist, and album.
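The fingerprinting and matching steps above can be sketched roughly as follows. This is a simplified illustration, not AUDIORA's actual engine: the window size, fan-out, peak selection, and hash format are all assumptions.

```python
# Minimal sketch of FFT-based fingerprinting: slice audio into windows,
# take an FFT per window, keep the strongest frequency bin as a "peak",
# then hash pairs of nearby peaks into landmark hashes.
import hashlib
import numpy as np

def fingerprint(samples: np.ndarray, window: int = 2048,
                fan_out: int = 3) -> list[str]:
    """Return landmark hashes for a mono float signal (names illustrative)."""
    peaks = []  # (window_index, dominant_frequency_bin)
    for i in range(0, len(samples) - window, window):
        spectrum = np.abs(np.fft.rfft(samples[i:i + window]))
        peaks.append((i // window, int(np.argmax(spectrum))))

    hashes = []
    for j, (t1, f1) in enumerate(peaks):
        # Pair each peak with a few that follow it; the time delta makes
        # the hash robust to where in the song the clip starts.
        for t2, f2 in peaks[j + 1:j + 1 + fan_out]:
            token = f"{f1}|{f2}|{t2 - t1}".encode()
            hashes.append(hashlib.sha1(token).hexdigest()[:16])
    return hashes

# Example: a 1-second 440 Hz tone yields deterministic landmark hashes
t = np.linspace(0, 1.0, 22050, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
print(len(fingerprint(tone)))  # → 24
```

In a real engine the hashes would then be looked up in the MongoDB fingerprint collection, with the best-scoring song returned as the match.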
- Clone the repository:

```bash
git clone https://github.com/rakshitnarang018/Audiora01
cd Audiora01
```
- Set up Python environment:
```bash
python -m venv .venv
source .venv/bin/activate    # On Linux/macOS
.venv\Scripts\activate       # On Windows
```
- Install Python dependencies:
```bash
pip install -r requirements.txt
```
- Run the Flask backend:
```bash
cd Backend
python app.py
```

The Flask server will run at http://localhost:5000.
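As a rough illustration, the Flask bridge between the Go API and the Python engine could take the following shape. The `/identify` route, the `audio` field name, and the `match_fingerprints` helper are assumptions for the sketch, not AUDIORA's actual API.

```python
# Hypothetical sketch of a Flask bridge exposing the Python audio engine.
from flask import Flask, jsonify, request

app = Flask(__name__)

def match_fingerprints(audio_bytes: bytes) -> dict:
    # Placeholder for the real engine: fingerprint the clip, look the
    # hashes up in MongoDB, and return the best-scoring song's metadata.
    return {"title": "Unknown", "artist": "Unknown", "album": "Unknown"}

@app.route("/identify", methods=["POST"])
def identify():
    clip = request.files.get("audio")
    if clip is None:
        return jsonify({"error": "no audio file uploaded"}), 400
    return jsonify(match_fingerprints(clip.read()))

if __name__ == "__main__":
    app.run(port=5000)
```

The Go server would then forward recorded clips to this endpoint as multipart form uploads.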
You can either run the frontend with npm or use Docker Compose:
```bash
cd Frontend
npm install
npm run dev
```
Visit http://localhost:3000 in your browser to use the app.
AUDIORA uses Docker Compose to containerize and serve the React frontend, while the backend is run manually on your local machine.
- Docker & Docker Compose installed (see Docker's "Get Docker" guide)
From the root of your project (where docker-compose.yml is present, or create one for the frontend service), run:
```bash
docker-compose up --build
```
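If the repo doesn't ship a docker-compose.yml, a minimal one for the frontend might look like this. The service name, build context, and port mapping are assumptions; adjust them to match your setup.

```yaml
services:
  frontend:
    build: ./Frontend        # assumes a Dockerfile in Frontend/
    ports:
      - "3000:3000"          # matches the dev URL used above
```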
Basic testing was performed using:
- Postman: API request testing
- MongoDB Compass: inspecting stored fingerprints
- Temporary audio logs: audio integrity validation
- Manual logs: verifying backend/frontend flow
- Landing Page: "Welcome to AUDIORA"
- Recording Animation: "Listening for your tune..."
- Analyzing Page: "Matching your music..."
- Result Page: displays identified song details
- Live-stream music detection
- Multilingual/global music database
- Edge computing for ultra-low latency
- Audio encryption and user profile sync
- AI-based music genre or mood tagging
- Librosa, SciPy, FFmpeg, Flask, MongoDB
- Shazam, for inspiring the concept
- React + Tailwind community
- Python and Go open-source ecosystems