This is a machine learning model designed to recognize Kenyan Sign Language (KSL). The project aims to bridge the communication gap by providing an efficient, accessible tool for understanding and interpreting KSL.
```
signsense/
├── Dataset/
│   ├── Dataset/
│   ├── Dataset.md
│   └── Dataset_Visual.png
├── Logs/
├── database/
│   ├── DATABASE.md
│   ├── config.py
│   ├── create_db.py
│   └── models.py
├── model/
│   ├── Model.h5
│   ├── Model.md
│   ├── comparative_analysis_output.png
│   ├── confusionmatrix_output.png
│   ├── metrics.png
│   └── signsense_mediapipe_lstm.ipynb
├── routes/
│   └── routes.py
├── static/
│   ├── css/
│   ├── images/
│   ├── js/
│   └── logo.png
├── templates/
│   ├── layouts/
│   └── pages/
├── utilities/
│   ├── extensions.py
│   └── utils.py
├── .gitignore
├── LICENSE.txt
├── app.py
└── requirements.txt
```
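The trained network ships as `model/Model.h5`, built in `signsense_mediapipe_lstm.ipynb`. As a rough illustration, it could be loaded for inference with Keras as sketched below; the sequence length and per-frame keypoint vector size are assumptions for illustration, not values confirmed by the project:

```python
# Minimal inference sketch. The sequence length (30 frames) and the
# per-frame keypoint vector size (1662 values) are ASSUMPTIONS for
# illustration; check the training notebook for the real input shape.
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("model/Model.h5")
model.summary()  # prints the model's actual expected input shape

# Dummy input: one sequence of 30 frames, 1662 keypoints per frame.
sequence = np.zeros((1, 30, 1662), dtype=np.float32)

probs = model.predict(sequence)                # shape: (1, num_classes)
predicted = int(np.argmax(probs, axis=-1)[0])  # most likely sign index
print("predicted class index:", predicted)
```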
1. Clone the repository:

   ```sh
   git clone https://github.com/mikemwai/signsense.git
   ```

2. Navigate to the project directory:

   ```sh
   cd signsense
   ```

3. Create a virtual environment and activate it:

   ```sh
   python -m venv venv
   ```

   On Windows:

   ```sh
   venv\Scripts\activate
   ```

   On Unix/Linux/Mac:

   ```sh
   source venv/bin/activate
   ```

4. Install the required packages:

   ```sh
   pip install -r requirements.txt
   ```

   If you add new packages, regenerate the `requirements.txt` file from the packages installed in your environment with:

   ```sh
   pip freeze > requirements.txt
   ```
5. Run the following commands to start the application on your local machine.

   On Windows:

   ```sh
   set FLASK_APP=app.py
   flask run --host=0.0.0.0
   ```

   On Unix/Linux/Mac:

   ```sh
   export FLASK_APP=app.py
   flask run
   ```

   This will start a development server on http://127.0.0.1:5000/ where you can access the application.
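For orientation, here is a minimal sketch of what an `app.py` entry point wired to this layout might look like. The blueprint name `routes_bp` is a hypothetical placeholder, since the actual contents of `routes/routes.py` are not shown here:

```python
# Hypothetical sketch of app.py; the real entry point may differ.
from flask import Flask

from routes.routes import routes_bp  # assumed blueprint name (placeholder)

app = Flask(__name__)  # templates/ and static/ are served by default
app.register_blueprint(routes_bp)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
```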
If you'd like to contribute to this project:
- Please fork the repository.
- Create a new branch for your changes.
- Submit a pull request.
Additionally, feel free to send an email; you will receive feedback within 24 hours.
Contributions, bug reports, and feature requests are welcome!
If you run into any problems with the project, feel free to open an issue.
This project is licensed under the MIT License - see the LICENSE.txt file for details.
- Facial mesh: Incorporate MediaPipe's face mesh to capture the user's emotions when determining the demonstrated sign. The model will need to be retrained after the face mesh is incorporated; see the sketch after this list.
- Avatar: Develop an avatar that teaches learners how to produce the different sign language notations.
- Dataset: Upgrade the KSL dataset by incorporating more classes to capture more words.
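As a starting point for the face-mesh idea above, here is a minimal sketch of extracting face landmarks with MediaPipe's face mesh from a webcam feed. The capture loop and the flattening scheme for combining landmarks with the existing hand/pose keypoints are illustrative assumptions, not the project's implementation:

```python
# Illustrative sketch: extract face-mesh landmarks with MediaPipe.
# How the landmarks are flattened and concatenated with the existing
# keypoints is an ASSUMPTION; the project may do this differently.
import cv2
import mediapipe as mp
import numpy as np

mp_face_mesh = mp.solutions.face_mesh

cap = cv2.VideoCapture(0)  # default webcam
with mp_face_mesh.FaceMesh(max_num_faces=1,
                           min_detection_confidence=0.5,
                           min_tracking_confidence=0.5) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            face = results.multi_face_landmarks[0]
            # Flatten 468 (x, y, z) landmarks into a 1404-value vector
            # that could be appended to the per-frame keypoint vector.
            keypoints = np.array([[p.x, p.y, p.z] for p in face.landmark]).flatten()
            print(keypoints.shape)  # (1404,)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
```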