This repository contains my learning materials and hands-on exercises from the QMUL Machine Learning (QMML) Society. Topics covered, lecture by lecture:
**Lecture 1: Introduction to Machine Learning**
- What is Machine Learning?
- Real-life applications of ML
- Introduction to Python
- Basic mathematical concepts and diagrams
**Lecture 2: Simple Linear Regression**
- Review of introductory concepts
- Supervised learning and regression problems
- Simple linear regression explained
- Derivatives and partial derivatives in ML
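To make lecture 2's core loop concrete, here is a minimal sketch of simple linear regression fitted by gradient descent, with the partial derivatives of the MSE loss written out by hand. The toy data and hyperparameters are illustrative, not taken from the lecture notebooks:

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise (illustrative only)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.5, 50)

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    y_hat = w * x + b
    # Partial derivatives of MSE = mean((y_hat - y)^2) w.r.t. w and b
    dw = 2 * np.mean((y_hat - y) * x)
    db = 2 * np.mean(y_hat - y)
    w -= lr * dw
    b -= lr * db

print(f"w = {w:.2f}, b = {b:.2f}")  # should land near 2 and 1
```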
**Lecture 3: Multiple Linear Regression and Model Evaluation**
- Feature scaling and normalisation
- Model performance: MSE and R²
- Overfitting and underfitting fundamentals
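A short sketch of the evaluation ideas above, assuming plain NumPy: z-score feature scaling plus MSE and R² computed straight from their definitions (the example arrays are invented):

```python
import numpy as np

def standardise(X):
    """Z-score feature scaling: zero mean, unit variance per column."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.1, 7.3, 8.9])
print(mse(y_true, y_pred), r_squared(y_true, y_pred))
```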
**Lecture 4: Neural Networks**
- Introduction to neural networks
- Network architecture and activation functions
- Forward propagation walkthrough
- Hands-on: Building a simple neural network using NumPy
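The hands-on session builds a network in NumPy; here is a minimal sketch of what a one-hidden-layer forward pass looks like (the weight shapes and initialisation are illustrative, not the lecture's actual architecture):

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def forward(x, params):
    """One hidden layer: x -> ReLU(W1 x + b1) -> W2 h + b2."""
    h = relu(params["W1"] @ x + params["b1"])
    return params["W2"] @ h + params["b2"]

rng = np.random.default_rng(42)
params = {
    "W1": rng.normal(0, 0.1, (4, 3)), "b1": np.zeros(4),
    "W2": rng.normal(0, 0.1, (2, 4)), "b2": np.zeros(2),
}
print(forward(rng.normal(size=3), params))  # two output scores
```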
**Lecture 5: Activation Functions, Loss, and Back-propagation**
- Advanced forward pass algorithms
- Sigmoid, Softmax, and categorical cross-entropy loss
- Back-propagation theory and practice
- Hands-on workshop
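A hedged sketch of the lecture 5 ingredients: a numerically stable softmax, categorical cross-entropy for a one-hot label, and the gradient shortcut that back-propagation through this pair relies on. The logits and label are made up for illustration:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p, y):
    """Categorical cross-entropy for a one-hot label vector y."""
    return -np.sum(y * np.log(p + 1e-12))

logits = np.array([2.0, 1.0, 0.1])
y = np.array([1.0, 0.0, 0.0])        # true class is index 0
p = softmax(logits)
print(p, cross_entropy(p, y))
# Handy backprop fact: d(loss)/d(logits) = p - y for softmax + CE
print(p - y)
```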
**Lecture 6: Classification**
- Classification problem types and real-world use cases
- Common classification algorithms
- Evaluation metrics and loss functions
- Hands-on implementation
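For the evaluation-metrics topic, a small sketch computing accuracy, precision, recall, and F1 from raw predictions with NumPy; the label arrays are invented for illustration:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

accuracy  = np.mean(y_pred == y_true)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)
```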
**Lecture 7: Convolutional Neural Networks**
- Core CNN components: convolution, pooling, activation
- CNN architectures: LeNet, AlexNet, VGG, ResNet
- Applications in image processing and beyond
- Transfer learning and data augmentation
- Hands-on workshop
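Since the lecture 7 notebooks work with MNIST (see the data folders in the tree below), here is a hedged sketch of a tiny PyTorch CNN with the convolution/ReLU/pooling structure described above. The layer sizes are illustrative, not the lecture's actual architecture:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Conv -> ReLU -> Pool twice, then a linear classifier (MNIST-shaped input)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        x = self.features(x)            # (N, 32, 7, 7) for a 28x28 input
        return self.classifier(x.flatten(1))

print(TinyCNN()(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 10])
```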
**Lecture 8: Large Language Models**
- LLM lifecycle: pre-training, fine-tuning, prompting
- Use cases: chatbots, summarisation, code generation
- API integration with OpenAI, Hugging Face, etc.
- Hands-on workshop with mentoring
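As a hedged illustration of the API-integration topic, the sketch below uses Hugging Face's `transformers` summarisation pipeline. The model name and input text are assumptions chosen for the example (running it downloads the model weights):

```python
# Illustrative only: any summarisation checkpoint from the Hub would do.
from transformers import pipeline

summariser = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
text = (
    "Machine learning societies run weekly workshops covering regression, "
    "neural networks, CNNs, and reinforcement learning, with hands-on coding "
    "sessions that let members apply each concept immediately."
)
print(summariser(text, max_length=30, min_length=10)[0]["summary_text"])
```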
**Lecture 9: Reinforcement Learning and the K-arm Bandit**
- Fundamentals of reinforcement learning
- The K-arm bandit problem
- Designing a trading strategy using RL
- Hands-on workshop with mentoring
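The lecture 9 notes cover the ε-greedy algorithm for the K-arm bandit; here is a minimal self-contained sketch (the arm count, ε, and reward distributions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
k, epsilon, steps = 5, 0.1, 10_000
true_means = rng.normal(0, 1, k)     # hidden mean reward of each arm
Q = np.zeros(k)                      # estimated value per arm
N = np.zeros(k)                      # pull counts

for _ in range(steps):
    # Explore with probability epsilon, otherwise exploit the best estimate
    a = rng.integers(k) if rng.random() < epsilon else int(np.argmax(Q))
    r = rng.normal(true_means[a], 1)
    N[a] += 1
    Q[a] += (r - Q[a]) / N[a]        # incremental sample-average update

print("best arm:", int(np.argmax(true_means)), "| chosen:", int(np.argmax(Q)))
```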
**Lecture 10: Markov Decision Processes**
- MDP components: states, actions, rewards, transitions
- Bellman equations and optimality
- Policy/value iteration and evaluation
- Exploration strategies revisited
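To tie the Bellman optimality equation to code, here is a value-iteration sketch on a tiny made-up MDP; the transition and reward tables are invented purely for illustration:

```python
import numpy as np

# Tiny illustrative MDP: 3 states, 2 actions.
# P[s, a, s'] = transition probability, R[s, a] = expected reward.
P = np.array([
    [[0.9, 0.1, 0.0], [0.1, 0.9, 0.0]],
    [[0.0, 0.8, 0.2], [0.0, 0.2, 0.8]],
    [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],
])
R = np.array([[0.0, 0.5], [0.0, 1.0], [0.0, 0.0]])
gamma = 0.9

V = np.zeros(3)
for _ in range(500):
    # Bellman backup: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s,a,s') V(s') ]
    Q = R + gamma * P @ V
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("V* =", np.round(V, 3), "| greedy policy:", Q.argmax(axis=1))
```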
**Lecture 11: Recurrent Neural Networks and Sequential Data**
- Understanding Recurrent Neural Networks
- LSTM architecture and capabilities
- Sequential data applications: time-series, NLP
- Hands-on PyTorch tutorial
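A minimal sketch of the PyTorch building block behind the hands-on tutorial: an `nn.LSTM` reading a batch of sequences, with the final hidden state feeding a linear head. All dimensions here are illustrative:

```python
import torch
import torch.nn as nn

# Sequence-to-one setup: read a sequence, predict from the last hidden state.
lstm = nn.LSTM(input_size=8, hidden_size=32, num_layers=1, batch_first=True)
head = nn.Linear(32, 1)

x = torch.randn(4, 20, 8)            # (batch, seq_len, features)
out, (h_n, c_n) = lstm(x)            # out: (4, 20, 32); h_n: (1, 4, 32)
pred = head(h_n[-1])                 # use the final hidden state per sequence
print(pred.shape)                    # torch.Size([4, 1])
```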
**Lecture 12: LSTMs for Time-Series Forecasting**
- Addressing exploding/vanishing gradients in RNNs
- Why LSTMs excel in time-series prediction
- Live demo: LSTMs applied to stock market forecasting
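As a hedged sketch of the forecasting setup, the snippet below turns a synthetic price series into sliding-window training pairs for an LSTM, and notes the standard gradient-clipping remedy for exploding gradients; no real market data is involved:

```python
import torch

def make_windows(series, window):
    """Turn a 1-D series into (input window, next value) training pairs."""
    xs = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    ys = series[window:]
    return xs.unsqueeze(-1), ys.unsqueeze(-1)   # add a feature dimension

prices = torch.cumsum(torch.randn(200), dim=0)  # synthetic random walk
X, y = make_windows(prices, window=30)
print(X.shape, y.shape)  # torch.Size([170, 30, 1]) torch.Size([170, 1])

# A common fix for exploding gradients during training:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```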
Repository structure:

```text
.
├── Lectures
│   ├── data
│   │   └── timemachine.txt
│   ├── lecture_01
│   │   ├── AI.png
│   │   ├── Cancer_Detection.png
│   │   ├── Lec1_Introduction_to_ML.md
│   │   ├── Price_Sqft.png
│   │   ├── lecture1.ipynb
│   │   ├── linear_regression.ipynb
│   │   ├── ml_problem.ipynb
│   │   └── python_complementary.ipynb
│   ├── lecture_02
│   │   ├── Salary_dataset.csv
│   │   ├── gradient_descent_visualisation.png
│   │   ├── kaggle
│   │   │   ├── data_description.txt
│   │   │   ├── sample_submission.csv
│   │   │   ├── test.csv
│   │   │   └── train.csv
│   │   ├── kaggle_simple_regression.ipynb
│   │   ├── lecture2.ipynb
│   │   └── linear_regression_complementary.ipynb
│   ├── lecture_03
│   │   ├── kaggle
│   │   │   ├── data_description.txt
│   │   │   ├── sample_submission.csv
│   │   │   ├── test.csv
│   │   │   └── train.csv
│   │   ├── kaggle_submission.csv
│   │   ├── lecture3.ipynb
│   │   ├── multiple_linear_regression_complementary.ipynb
│   │   └── playground-series-s3e16
│   │       ├── sample_submission.csv
│   │       ├── test.csv
│   │       └── train.csv
│   ├── lecture_04
│   │   ├── inputs_layer.png
│   │   ├── lecture4.ipynb
│   │   ├── nn.png
│   │   ├── outputs_layer.png
│   │   ├── perceptron-in-machine-learning2.png
│   │   └── perceptron.png
│   ├── lecture_05
│   │   ├── activation_functions_and_loss.ipynb
│   │   ├── back_prop_notes.pdf
│   │   ├── kaggle
│   │   │   ├── data_description.txt
│   │   │   ├── sample_submission.csv
│   │   │   ├── test.csv
│   │   │   └── train.csv
│   │   ├── lecture5.ipynb
│   │   ├── nn.png
│   │   └── relud.png
│   ├── lecture_06
│   │   ├── lecture6.ipynb
│   │   └── lecture6_optimised.ipynb
│   ├── lecture_07
│   │   ├── Convolutional_Neural_Networks_(CNNs).md
│   │   ├── DATA_MNIST
│   │   │   └── MNIST
│   │   │       └── raw
│   │   │           ├── t10k-images-idx3-ubyte
│   │   │           ├── t10k-images-idx3-ubyte.gz
│   │   │           ├── t10k-labels-idx1-ubyte
│   │   │           ├── t10k-labels-idx1-ubyte.gz
│   │   │           ├── train-images-idx3-ubyte
│   │   │           ├── train-images-idx3-ubyte.gz
│   │   │           ├── train-labels-idx1-ubyte
│   │   │           └── train-labels-idx1-ubyte.gz
│   │   ├── Images
│   │   │   ├── CNN_Structure.png
│   │   │   ├── DA1.png
│   │   │   ├── DA2.png
│   │   │   ├── DA3.png
│   │   │   ├── DA4.png
│   │   │   ├── Data_Feeding.png
│   │   │   ├── Kernel_1.png
│   │   │   ├── Kernel_2.png
│   │   │   ├── Num_example.png
│   │   │   ├── PL.png
│   │   │   ├── Padding.png
│   │   │   ├── RGB_input.png
│   │   │   ├── R_Matrix.png
│   │   │   ├── ReLU.png
│   │   │   ├── S_and_P.png
│   │   │   ├── Stride.png
│   │   │   └── Training_CNNs.png
│   │   ├── data
│   │   │   └── MNIST
│   │   │       └── raw
│   │   │           ├── t10k-images-idx3-ubyte
│   │   │           ├── t10k-images-idx3-ubyte.gz
│   │   │           ├── t10k-labels-idx1-ubyte
│   │   │           ├── t10k-labels-idx1-ubyte.gz
│   │   │           ├── train-images-idx3-ubyte
│   │   │           ├── train-images-idx3-ubyte.gz
│   │   │           ├── train-labels-idx1-ubyte
│   │   │           └── train-labels-idx1-ubyte.gz
│   │   ├── lecture7.ipynb
│   │   └── lecture7_original.ipynb
│   ├── lecture_08
│   │   └── lectue8.ipynb
│   ├── lecture_09
│   │   ├── Core_challenge.png
│   │   ├── Slot_machine.png
│   │   ├── epsilon_greedy_k_arm_bandit_problem.md
│   │   ├── lecture9.ipynb
│   │   └── ε-Greedy_Algorithm.png
│   ├── lecture_10
│   │   ├── lecture10.ipynb
│   │   └── lecture10.md
│   ├── lecture_11
│   │   ├── Practical_Example.ipynb
│   │   ├── RNNs.ipynb
│   │   ├── Sequential_Data.ipynb
│   │   └── frankenstein.txt
│   └── lecture_12
│       └── Lecture12.ipynb
└── README.md
```
Joining the QMUL Machine Learning Society marked the beginning of a significant personal and academic shift for me. Coming from a non-STEM background, I initially found many of the concepts—especially the mathematical foundations and programming-heavy workshops—intimidating. Concepts like gradient descent, back-propagation, and reinforcement learning were all new territory, and it was easy to feel behind peers with prior technical experience.
However, this challenge became a driving force. The society’s beginner-friendly yet rigorous progression—from foundational topics like simple and multiple linear regression to more advanced subjects such as neural networks, CNNs, RNNs, and LSTMs—helped me gradually build confidence. Each hands-on session pushed me to not just understand the what but also the how behind machine learning algorithms.
I also took the opportunity to step outside my comfort zone by learning new tools such as PyTorch, TensorFlow, and Keras—frameworks that once felt out of reach but are now part of my regular learning process. The introduction to Large Language Models (LLMs) and reinforcement learning further broadened my perspective on the diverse directions this field can offer.
This experience has not only helped me bridge the gap between my previous education and my current studies in computing, but it’s also sparked genuine curiosity. I’m still exploring what direction I’ll take—whether in research, engineering, or something entirely different—but I now feel equipped with a foundational skill set and a community that makes continued learning both accessible and rewarding.