
Hashirkoukab/Python-Projects


🧠 Hiring Buddy - 📄 Resume(CV) Selector App

Created a resume selector app using generative AI and Python.

App features:
- Upload multiple CVs (PDF/DOCX)
- Enter a job description
- Use a free model to match CVs against the JD
- Score and rank the CVs
- Display results in an interactive table

Tools & libraries (all free):
- Streamlit (UI)

- LangChain + a Hugging Face open-source LLM (e.g., mistralai/Mistral-7B-Instruct)
- PyMuPDF or docx2txt (to extract CV text)
- Sentence Transformers (semantic similarity)
- Pandas (data handling)

Step 1: Used the sentence-transformers/all-MiniLM-L6-v2 model.

Step 2: Installed the Python packages:
- streamlit

- PyMuPDF (for reading PDFs)
- sentence-transformers (for AI matching)
- python-docx (for reading Word files)

Step 3: Built the core matching logic. I added code to:
- Upload multiple CV files
- Extract text from PDF/DOCX
- Use the AI model (all-MiniLM-L6-v2) to compare each CV with the job description
- Create a results table

Step 4: Displayed the ranked CVs, sorted by similarity score, in a DataFrame with a download option.
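The scoring and ranking step above can be sketched as follows. This is a minimal, runnable illustration: a toy bag-of-words vector stands in for the real `SentenceTransformer('all-MiniLM-L6-v2').encode()` call so the example needs no model download, but the cosine-similarity ranking logic is the same either way. The file names and texts are made up.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words 'embedding' -- a stand-in for the real
    SentenceTransformer('all-MiniLM-L6-v2').encode(text) call."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_cvs(cv_texts, job_description):
    """Score each CV against the JD and return (name, score) pairs, best first."""
    jd_vec = embed(job_description)
    scores = {name: cosine(embed(text), jd_vec) for name, text in cv_texts.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical extracted CV texts (in the real app these come from PyMuPDF/docx).
cvs = {
    "alice.pdf": "python machine learning pandas streamlit experience",
    "bob.docx": "marketing sales communication customer relations",
}
ranking = rank_cvs(cvs, "looking for a python developer with machine learning experience")
print(ranking)  # alice.pdf ranks first: more overlap with the JD
```

In the real app, the ranked pairs are loaded into a Pandas DataFrame and shown in the Streamlit table.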

Chatbot

Created a Chatbot using Botpress

Step 1: Signed up for the Botpress tool

Step 2: Created a blank chatbot from scratch

Step 3: Started with a manual welcome node

Step 4: Created multiple standard nodes for incorporating the data as manual answers

Step 5: Created knowledge base for the chatbot

Step 6: Trained the chatbot on provided data in the knowledge base

Step 7: Created another knowledge base and connected it to web search for live data scraping

Step 8: Enabled the AI Expression

Step 9: Enabled all knowledge agents

Step 10: Trained the chatbot

Step 11: Created a CSS stylesheet to style the chatbot and improve its UI/UX

Step 12: Published the chatbot so users can access it via the provided link:

Python Geometrical Shapes - Module 1 Tech Harbour

Created this Python project in the first module of the Tech Harbour course.

Step 1: Used the Turtle library, Tkinter, and Python.

Step 2: Wrote the Python code for drawing lines.

Step 3: Wrote the Python code for setting background colors.

Step 4: Wrote code for creating multiple shapes in square form.

Step 5: Created loops to run the same code block 2000 times in order to create spiral shapes.

Step 6: Added more colors using RGB codes and made it visually appealing.
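The spiral logic from Steps 4-6 can be sketched like this. The step lengths, turn angle, and color cycle below are illustrative assumptions, not the exact values from my script, and the drawing part sits behind a `DRAW` flag because Turtle needs a display window:

```python
import colorsys

def spiral_steps(n=2000):
    """Generate (length, angle, rgb) tuples for a colorful square-ish spiral.
    The forward distance grows each iteration and the hue cycles through
    the color wheel, which is what produces the spiral effect."""
    steps = []
    for i in range(n):
        hue = (i % 360) / 360.0
        rgb = colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # RGB triple in [0, 1]
        steps.append((i * 0.5, 90.1, rgb))        # growing length, near-right turn
    return steps

DRAW = False  # set to True to open a Turtle window and draw the spiral
if DRAW:
    import turtle
    turtle.colormode(1.0)  # accept RGB triples in [0, 1]
    turtle.speed(0)
    for length, angle, rgb in spiral_steps(400):
        turtle.pencolor(rgb)
        turtle.forward(length)
        turtle.right(angle)
    turtle.done()
```

Turning by slightly more than 90 degrees (here 90.1) is what makes the nested squares rotate into a spiral.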

Here is the final look of my drawing with Python:

Python Programming Quiz

Step 1: Used basic Python to create a quiz about Marvel characters

Step 2: The purpose of the quiz is to ask a few questions, assess the user's personality, and match the user's characteristics to a Marvel character based on the collected answers.

Step 3: Created input prompts to capture the user's answers and save them in the variables Q1 and Q22.

Step 4: Used if statements and functions to check the input answers against certain conditions in order to match the user's characteristics with Marvel characters.

Step 5: The final result shows the Marvel character that matches your personality:
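The matching logic from Steps 3-4 can be sketched as below. The answer options and character rules are hypothetical examples, since the README does not list the real questions; the actual quiz reads answers with `input()`, while here they are passed as arguments so the logic is easy to test:

```python
def match_character(q1, q2):
    """Map two personality answers to a Marvel character.
    The rules below are illustrative, not the quiz's real data."""
    if q1 == "brains" and q2 == "tech":
        return "Iron Man"
    if q1 == "brains" and q2 == "magic":
        return "Doctor Strange"
    if q1 == "strength" and q2 == "tech":
        return "Captain America"
    return "Hulk"  # default when no other rule matches

# In the real quiz these values come from input() prompts.
print(match_character("brains", "tech"))  # Iron Man
```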

Power BI Data Visualisation

I created a visualization report for a cookie company covering three years of sales, profit, risks, monthly and yearly revenue, and an overall business overview, based on the data found in their sheets.

Step 1: Gathered data from different sources, i.e., a database, a website, and regional inventory files.

Step 2: Cleaned the data using Power Query and Excel.

Step 3: Loaded all datasets into the Power Query Editor for initial cleaning, where I removed unnecessary columns, handled missing data, and standardized formats (e.g., dates and currencies).

Step 4: Applied transformations such as filtering rows, splitting columns, and creating calculated columns for essential metrics (like "Total Revenue" and "Sales Growth").

Step 5: Used the "Merge" and "Append" functions to combine tables, ensuring a seamless link between data sources and customer insights.

Step 6: Performed data modeling and established relationships between tables to enable accurate, integrated analysis across data sources (e.g., linked "Customer ID" across the sales and feedback tables).

Step 7: Created visualizations, presented sales data in donut chart, showing sum of units sold by country.

Step 8: Presented Sum of Revenue by Date and Location in a clustered stacked bar chart.

Step 9: Created a stacked bar chart to visualize sum of profit by location.

Step 10: Added a table at the top of the dashboard to present an overview of all the data and visualizations we created, including country name, date, day, time, sum of profits, units sold, product name, sum of revenue.

Step 11: Added a slicer in the top right corner to filter each visual by location.

Step 12: Then I created a live, dynamic map visual.

Step 13: Added a slicer on page 1 to select country-wise data with a single click.

Step 14: I created a colorful pie chart to show our monthly and yearly revenue by date.

Step 15: To visualize yearly profit by date, I created a line chart that makes profit easy to track over time.

Step 16: Created mobile-app and web-app versions of the dashboard to present the data to the client on both platforms.

Cleaning and Analyzing Student Scores with NumPy

Introduction

This project demonstrates how to clean a dataset with missing values and calculate basic statistics (mean, maximum, and minimum) using Python and the NumPy library. Here's a detailed breakdown of each step, from setting up the data to the final analysis.

Step 1: Setting Up the Project

Import libraries: First, I imported the necessary library, NumPy, which provides powerful tools for numerical data manipulation and analysis.

Step 2: Initializing the Data

Creating the dataset: I created a dataset with sample scores and introduced a few missing values. Missing values were represented as None to simulate incomplete data.

Step 3: Filtering Out Missing Values

Handling missing data: To calculate an average without including missing values, I filtered out the None values from the dataset. This allowed me to compute the mean of the available scores accurately.

Step 4: Calculating the Average Score

Finding the mean of the filtered data: With the filtered dataset, I calculated the average (mean) score, which would later be used to fill in the missing values.

Step 5: Replacing Missing Values

Filling missing values with the mean: I replaced all None values in the original dataset with the calculated average. This resulted in a "cleaned" dataset with no missing values.

Step 6: Calculating Basic Statistics

Finding the mean, maximum, and minimum of the cleaned data: I used NumPy functions to calculate key statistics on the cleaned dataset. These included:
- Mean: the overall average score
- Maximum: the highest score
- Minimum: the lowest score

Step 7: Displaying the Results

Printing the results: Finally, I printed the calculated mean, maximum, and minimum scores to display a summary of the cleaned data.
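The whole pipeline from Steps 2-7 fits in a few lines. The sample scores below are made up for illustration; any list with None gaps works the same way:

```python
import numpy as np

# Step 2: raw scores with missing values represented as None.
scores = [85, 92, None, 78, None, 95, 88]

# Step 3: filter out the missing values.
valid = [s for s in scores if s is not None]

# Step 4: mean of the available scores only.
avg = sum(valid) / len(valid)

# Step 5: replace every None with that mean to get a cleaned dataset.
cleaned = np.array([avg if s is None else s for s in scores], dtype=float)

# Steps 6-7: basic statistics on the cleaned data.
print("Mean:", cleaned.mean())  # 87.6 -- unchanged, since gaps were filled with the mean
print("Max:", cleaned.max())    # 95.0
print("Min:", cleaned.min())    # 78.0
```

Filling gaps with the mean keeps the overall average unchanged, which is why the cleaned mean equals the filtered mean.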

Conclusion

In this project, I successfully:
- Created a dataset with missing values.
- Replaced missing values with the mean to complete the dataset.
- Used NumPy to calculate key statistical metrics.

This project highlights my ability to handle missing data, clean datasets, and extract useful insights using basic statistical functions in Python.

Future Improvements

To expand on this project, I could:
- Add visualizations of the cleaned data.
- Experiment with different methods to handle missing data.

Seaborn Library

This project uses Matplotlib and Seaborn to analyze and visualize the dataset. Matplotlib allows custom plotting, while Seaborn simplifies statistical visualizations and uncovers insights.

Step 1: Begin by importing the necessary libraries. You will need Matplotlib for creating various plots and Seaborn for making statistical plots.

Step 2: Load your dataset into a Pandas DataFrame. For example, we can use a sample dataset like seaborn’s built-in tips dataset or a custom CSV file.

Step 3: It’s important to understand the structure of the dataset to know which columns to visualize.

Step 4: Perform any necessary data cleaning steps, such as handling missing values, converting columns, or dropping unnecessary columns.

Step 5: Create a simple plot to get started with Matplotlib. You can plot basic charts like line plots, bar charts, histograms, etc.

Step 6: Seaborn is great for statistical plots. You can create visualizations like scatter plots, box plots, violin plots, and more, with less code.

Step 7: Both Matplotlib and Seaborn allow for extensive customization of plots, including adjusting labels, titles, and colors.

Step 8: If you want to compare multiple plots in one figure, you can use subplots in Matplotlib.

Step 9: Once your plot looks the way you want, you can save it as an image or other formats.

Step 10: For interactive plots, you can use Plotly, since Matplotlib and Seaborn don't provide built-in interactivity. Alternatively, use Seaborn's FacetGrid for multi-panel views or Matplotlib's interactive mode.

Here are some Examples:

Figure 1:

Figure 2:

Figure 3:

In conclusion, Matplotlib and Seaborn were essential tools in analyzing and visualizing the dataset. By combining Matplotlib's flexibility and Seaborn's ease of use for statistical plots, I was able to uncover valuable insights and present the data in a clear, impactful way. These visualizations provide a solid foundation for decision-making and further analysis.

Animal Breed Prediction

During the Machine Learning & AI Course, I created the Animal Breed Prediction Application. The functionalities of the Application are as below:

Step 1: Designed the application to predict animal breeds based on a few questions from the user and the data integrated into the model.

Step 2: The Application is built using Python, JavaScript & Coding Blocks from code.org.

Step 3: When the user inputs the information required by the application, they are automatically redirected to the prediction page, where they can find the breed of the animal and some other information about it.
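The question-driven lookup the app performs can be sketched as below. The questions, answer options, and breeds are hypothetical, since the README does not show the real data or model:

```python
def predict_breed(size, coat, ears):
    """Map the user's answers to a breed via a lookup table.
    The rules below are illustrative placeholders for the app's real data."""
    rules = {
        ("small", "curly", "floppy"): "Poodle",
        ("large", "short", "pointy"): "Doberman",
        ("medium", "long", "floppy"): "Golden Retriever",
    }
    return rules.get((size, coat, ears), "Unknown breed")

# In the app, these answers come from the question pages before the
# user is redirected to the prediction page.
print(predict_breed("small", "curly", "floppy"))  # Poodle
```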

About

These are brief details of my current projects.
