An AI-powered application that generates personalized conversation starters and insights by analyzing LinkedIn profiles. Simply input a person's name, and the system will find their LinkedIn profile, extract key information, and create engaging ice breakers using local LLM models.
- Automated LinkedIn Profile Discovery: Uses Tavily search API to find LinkedIn profiles from names
- AI-Powered Profile Analysis: Leverages Ollama (Llama 3/3.1) for intelligent content processing
- Smart Content Generation: Creates personalized summaries, interesting facts, topics of interest, and ice breakers
- Modern Web Interface: Clean, responsive Flask-based web application
- Privacy-Focused: Uses local LLM models (Ollama) instead of cloud-based APIs
- Professional Data Extraction: Integrates with Proxycurl API for comprehensive LinkedIn data
- Backend: Python, Flask
- AI/ML: LangChain, Ollama (Llama 3/3.1), Pydantic
- APIs: Proxycurl (LinkedIn data), Tavily (web search)
- Frontend: HTML, CSS, JavaScript
- Environment: Python-dotenv for configuration
Before running this application, ensure you have:
- Python 3.8+ installed
- Ollama installed and running locally
- API Keys:
  - Proxycurl API key (for LinkedIn data extraction)
  - Tavily API key (for web search)
1. Clone the repository:

   ```bash
   git clone https://github.com/DahalRojan/ice-breaker-llm.git
   cd ice-breaker-llm
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up Ollama (a quick connectivity check is sketched after these steps):

   ```bash
   # Install Ollama (visit https://ollama.ai for installation instructions)
   # Pull the required models
   ollama pull llama3
   ollama pull llama3.1
   ```

4. Configure environment variables: create a `.env` file in the project root:

   ```
   PROXYCURL_API_KEY=your_proxycurl_api_key_here
   TAVILY_API_KEY=your_tavily_api_key_here
   ```

5. Start the application:

   ```bash
   python app.py
   ```

6. Access the web interface: open your browser and navigate to `http://localhost:5000`.

7. Generate ice breakers:
   - Enter a person's full name in the search form
   - Click "Do Your Magic"
   - Wait for the AI to process the information
   - View the generated summary, facts, interests, and ice breakers
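If the app fails at startup, the most common cause is that Ollama is not reachable. Here is a minimal sanity check against Ollama's local REST API, referenced from step 3 above (the default port 11434 and the `llama3` model name are assumptions based on the setup instructions):

```python
import requests

# Ask the local Ollama server for a one-off, non-streaming completion.
# The default Ollama port (11434) and the model pulled in step 3 are assumed.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello in one word.", "stream": False},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])  # A short greeting confirms the model is loaded
```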
- Input Processing: User enters a person's name through the web interface
- Profile Discovery: Tavily search API finds the corresponding LinkedIn profile URL
- Data Extraction: Proxycurl API retrieves comprehensive LinkedIn profile data
- AI Analysis: Three specialized LangChain pipelines process the data (a minimal chain sketch follows this list):
  - Summary Chain: Creates a professional summary and interesting facts
  - Interests Chain: Identifies topics that might interest the person
  - Ice Breaker Chain: Generates personalized conversation starters
- Output Generation: Results are formatted and displayed with profile picture
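The chains in `chains/` follow the standard LangChain prompt-to-model-to-parser pattern. A minimal sketch of what the summary chain might look like; the prompt wording and the `langchain_ollama` import are illustrative assumptions, not the repository's exact code:

```python
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

# Prompt template: the scraped LinkedIn data is injected as one variable.
summary_prompt = PromptTemplate.from_template(
    "Given the LinkedIn information {information}, write a short professional "
    "summary and two interesting facts about the person."
)

llm = ChatOllama(model="llama3", temperature=0)

# Compose prompt -> model -> parser with the LCEL pipe operator.
summary_chain = summary_prompt | llm | StrOutputParser()

result = summary_chain.invoke({"information": "<scraped profile data>"})
print(result)
```

The interests and ice breaker chains would differ only in their prompt and output parser.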
- `app.py`: Flask web server and API endpoints
- `ice_breaker.py`: Core orchestration logic
- `agents/`: LinkedIn profile lookup agents using LangChain
- `chains/`: LLM processing chains for different output types
- `third_parties/`: External API integrations (LinkedIn, Twitter)
- `output_parsers.py`: Pydantic models for structured data output (sketched below)
- `tools/`: Utility functions for web search and data processing
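`output_parsers.py` uses Pydantic models so LLM output can be validated and serialized as structured JSON. A minimal sketch of the kind of model involved; the field names mirror the API response shown below, but the exact classes are assumptions:

```python
from typing import List
from pydantic import BaseModel, Field

class Summary(BaseModel):
    """Structured output for the summary chain."""

    summary: str = Field(description="A short professional summary")
    facts: List[str] = Field(description="Interesting facts about the person")

    def to_dict(self) -> dict:
        # Convenience serializer for the Flask JSON response.
        return {"summary": self.summary, "facts": self.facts}
```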
Create a `.env` file with the following variables:
```
# Required API keys
PROXYCURL_API_KEY=your_proxycurl_api_key_here
TAVILY_API_KEY=your_tavily_api_key_here

# Optional: OpenAI API (if you want to use GPT instead of Ollama)
# OPENAI_API_KEY=your_openai_api_key_here
```
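These variables are read at startup via python-dotenv. A minimal sketch of how the keys are typically loaded (the early exit on missing keys is illustrative, not necessarily the repository's behavior):

```python
import os
from dotenv import load_dotenv

# Load key/value pairs from .env into the process environment.
load_dotenv()

PROXYCURL_API_KEY = os.environ.get("PROXYCURL_API_KEY")
TAVILY_API_KEY = os.environ.get("TAVILY_API_KEY")

if not PROXYCURL_API_KEY or not TAVILY_API_KEY:
    raise RuntimeError("Missing API keys: check your .env file")
```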
For testing purposes, you can use mock LinkedIn data (see `third_parties/linkedin.py`) by changing the call in `ice_breaker.py`:

```python
# In ice_breaker.py, change:
linkedin_data = scrape_linkedin_profile(
    linkedin_profile_url=linkedin_username, mock=True
)
```

The main API endpoint processes a person's name and returns ice breaker information.
Request Body:

```json
{
  "name": "John Doe"
}
```

Response:
```json
{
  "summary_and_facts": {
    "summary": "Professional summary...",
    "facts": ["Fact 1", "Fact 2"]
  },
  "interests": {
    "topics_of_interest": ["Topic 1", "Topic 2", "Topic 3"]
  },
  "ice_breakers": {
    "ice_breakers": ["Ice breaker 1", "Ice breaker 2"]
  },
  "picture_url": "https://profile-pic-url.com"
}
```
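You can exercise the endpoint directly from Python. A minimal client sketch; the `/process` path is an assumption since the README does not state the route, so confirm it in `app.py` before using:

```python
import requests

# Hypothetical endpoint path: confirm the actual route in app.py.
resp = requests.post(
    "http://localhost:5000/process",
    json={"name": "John Doe"},
    timeout=120,  # LLM processing can take a while
)
resp.raise_for_status()
data = resp.json()
print(data["summary_and_facts"]["summary"])
print(data["ice_breakers"]["ice_breakers"])
```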
To test the application with mock data:

- Set `mock=True` in the `scrape_linkedin_profile` function call
- The system will use predefined mock data instead of making API calls
- This is useful for development and testing without consuming API credits (a sketch of the mock switch follows this list)
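A typical shape for such a mock switch inside `third_parties/linkedin.py`; the canned data and the exact Proxycurl call are illustrative assumptions, not the repository's code:

```python
import os
import requests

def scrape_linkedin_profile(linkedin_profile_url: str, mock: bool = False) -> dict:
    """Return LinkedIn profile data, either canned or fetched from Proxycurl."""
    if mock:
        # Predefined data keeps development from consuming API credits.
        return {"full_name": "John Doe", "occupation": "Software Engineer"}

    # Live call to Proxycurl (endpoint and params are illustrative assumptions).
    response = requests.get(
        "https://nubela.co/proxycurl/api/v2/linkedin",
        params={"url": linkedin_profile_url},
        headers={"Authorization": f"Bearer {os.environ['PROXYCURL_API_KEY']}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```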
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- Ollama not running: Ensure Ollama is installed and running (`ollama serve`)
- Missing API keys: Check your `.env` file for correct API key configuration
- Model not found: Pull the required models (`ollama pull llama3` and `ollama pull llama3.1`)
- Port already in use: Change the port in `app.py` if 5000 is occupied (see the sketch below)
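Changing the port is a one-line edit at the bottom of `app.py`; the exact `app.run` arguments in the repository may differ:

```python
# At the bottom of app.py: pick any free port instead of the default 5000.
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001, debug=True)
```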
For issues and questions, please open an issue on the GitHub repository.