This project demonstrates how to use a Decision Tree Regressor to predict continuous numerical values from input features. Decision tree regression is a simple, interpretable technique that splits the data into decision-based regions.
Decision Tree Regression is a supervised learning algorithm used for predicting continuous (numerical) outputs.
It works by recursively splitting the dataset into smaller and smaller regions based on the input features, and then predicting the average target value of the training samples in each region (leaf).
Each decision in the tree is made by choosing a feature and a threshold that best reduces error.
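For example, the minimal sketch below (using a small, invented toy dataset) shows how a single split produces two leaves, and how each prediction is simply the mean target value of the leaf a sample falls into:

```python
# Minimal sketch: one split on one feature; predictions are leaf averages.
# The toy data here is invented purely for illustration.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])  # single input feature
y = np.array([1.1, 0.9, 1.0, 5.2, 4.8, 5.0])                 # continuous target

tree = DecisionTreeRegressor(max_depth=1, random_state=0)    # one split -> two leaves
tree.fit(X, y)

# Each prediction is the mean of the training targets in the matching leaf:
# roughly 1.0 for small x, roughly 5.0 for large x.
print(tree.predict([[2.5], [11.5]]))
```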
- Load and explore the dataset
- Preprocess the data (if needed)
- Split into training and testing sets
- Train the model using `DecisionTreeRegressor`
- Predict target values on test data
- Measure performance using metrics such as MAE, MSE, RMSE, and R²
- Visualize the prediction results (see the end-to-end sketch below)
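The following sketch walks through these steps end to end. It uses scikit-learn's built-in California housing dataset as a stand-in for the project's actual data, and the model settings (such as `max_depth=5`) are illustrative assumptions rather than the project's chosen configuration:

```python
# End-to-end sketch of the workflow above, using California housing data
# as a stand-in for the project's actual dataset.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Load the data and split it into training and testing sets
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train the model (max_depth is an assumed, illustrative setting)
model = DecisionTreeRegressor(max_depth=5, random_state=42)
model.fit(X_train, y_train)

# Predict target values on the test data
y_pred = model.predict(X_test)

# Evaluate with MAE, MSE, RMSE, and R²
mae = mean_absolute_error(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
r2 = r2_score(y_test, y_pred)
print(f"MAE: {mae:.3f}  MSE: {mse:.3f}  RMSE: {rmse:.3f}  R²: {r2:.3f}")

# Visualize predicted vs. actual values
plt.scatter(y_test, y_pred, s=5, alpha=0.5)
plt.xlabel("Actual value")
plt.ylabel("Predicted value")
plt.title("Decision Tree Regressor: predicted vs. actual")
plt.show()
```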