# Gesture Recognition Using Sign Language

This project uses a convolutional neural network (CNN) to recognize static hand gestures representing letters of the American Sign Language alphabet (A-Y, excluding J).

## Dataset
- sign_mnist_train.csv: Training images (28x28 grayscale).
- sign_mnist_test.csv: Testing images.
- Format: CSV files; each row contains a class label followed by 784 pixel values.
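A minimal loading sketch, assuming the standard Sign MNIST column layout (a `label` column followed by 784 pixel columns):

```python
import numpy as np
import pandas as pd

# Load the training split; the test split follows the same layout.
train_df = pd.read_csv("sign_mnist_train.csv")

# Separate labels from pixel values and reshape each row into a 28x28 image.
y_train = train_df["label"].values
X_train = train_df.drop(columns=["label"]).values.reshape(-1, 28, 28, 1)

# Scale pixel intensities from [0, 255] to [0, 1].
X_train = X_train.astype("float32") / 255.0
```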
## Requirements

- Python, TensorFlow/Keras
- Pandas, NumPy
- Matplotlib, Seaborn
- Scikit-learn
## Project Structure

```text
├── Gesture_Recognition_Using_Sign_Language.ipynb
├── sign_mnist_train.csv
├── sign_mnist_test.csv
├── images/
└── README.md
```
## How to Run

- Open the Jupyter notebook.
- Run all cells to preprocess the data, train the model, and evaluate it (a minimal sketch of these steps is shown after this list).
- View the training/validation curves, confusion matrix, and predictions.
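The full pipeline lives in the notebook; the sketch below only illustrates the core idea. The layer sizes, number of epochs, and other hyperparameters are assumptions, not the notebook's exact configuration, and `X_train` / `y_train` are assumed to be prepared as in the loading sketch above.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Labels range over 0-24, with 9 (J) unused in the dataset.
num_classes = 25

# Illustrative CNN: two convolution/pooling stages followed by dense layers.
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train with a held-out validation split for the accuracy/loss curves.
history = model.fit(X_train, y_train,
                    validation_split=0.1,
                    epochs=10, batch_size=128)
```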
## Features

- CNN-based image classification
- Accuracy and loss visualization
- Confusion matrix and class-wise metrics (see the evaluation sketch below)
- Sample misclassification analysis
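An illustrative evaluation snippet using scikit-learn, assuming `X_test` and `y_test` were prepared the same way as the training data and `model` is the trained network from above:

```python
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import classification_report, confusion_matrix

# Predicted class index for each test image.
y_pred = model.predict(X_test).argmax(axis=1)

# Per-class precision, recall, and F1 score.
print(classification_report(y_test, y_pred))

# Confusion matrix heatmap to spot commonly confused letters.
sns.heatmap(confusion_matrix(y_test, y_pred), cmap="Blues")
plt.xlabel("Predicted label")
plt.ylabel("True label")
plt.show()
```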
## Results

| Metric              | Value |
|---------------------|-------|
| Training Accuracy   | 98.5% |
| Validation Accuracy | 97.2% |
| Test Accuracy       | 96.7% |
## Notes

- The letters 'J' and 'Z' are excluded because they involve motion.
- All input images are 28x28 grayscale and are normalized before training.