LangChainRunnables is a Python-based project showcasing five distinct LangChain workflows—Branch, Lambda, Parallel, Passthrough, and Sequence—using OpenRouter’s free API. It demonstrates AI-driven text processing tasks like generating facts, summarizing reports, creating notes and quizzes, responding to sentiment feedback, and crafting jokes.
A Runnable is a core concept in LangChain used to define composable, reusable units of logic (chains, tools, models, etc.) that can be executed (or "run"). It abstracts components like LLMs, prompts, and retrievers into a unified interface.
These are components designed to perform individual NLP tasks. Examples:
- ChatOpenAI: Handles conversational LLM output.
- PromptTemplate: Templates prompts for consistent input formatting.
- Retrievers: Fetch relevant documents for context.
These are combinators that build complex workflows from simpler tasks:
- RunnableSequence: Runs components in order, piping outputs to inputs.
- RunnableParallel: Executes components in parallel and merges results.
- RunnableBranch: Dynamically routes input based on conditions.
- RunnablePassthrough: Simply passes the input through to the next step.
- RunnableLambda: Allows custom Python logic inline.
Each script in this repository showcases a different workflow pattern:
Workflow | Script | Description |
---|---|---|
Branch | runnable_branch.py | Classifies sentiment and tailors a response (e.g., “This is a beautiful phone”) |
Lambda | runnable_lambda.py | Generates 5 interesting facts on a given topic (e.g., N8N) |
Parallel | runnable_parallel.py | Creates notes and quizzes in parallel and merges the result |
Passthrough | runnable_passthrough.py | Generates a joke based on input topic (e.g., programming) |
Sequence | runnable_sequence.py | Produces a detailed report and a 5-point summary (e.g., AI ML jobs) |
Note: the Passthrough workflow not only generates a joke for the given topic (e.g., programming) but also counts the words in the generated joke.
- Python 3.11
- LangChain – Framework for building NLP workflows
- langchain-core – Core components for LangChain runnables
- openai – For interacting with OpenRouter APIs
- python-dotenv – Environment variable management
- pydantic – For structured output (used in branch sentiment classification)
- grandalf – For ASCII workflow visualization
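For the structured-output piece, runnable_branch.py would pair the model with a Pydantic schema along these lines (a sketch; the exact class and field names are assumptions, not the repository's actual code):

```python
from typing import Literal
from pydantic import BaseModel, Field

# Hypothetical schema for sentiment classification; the real one may differ.
class Sentiment(BaseModel):
    sentiment: Literal["positive", "negative"] = Field(
        description="Polarity of the customer feedback"
    )

print(Sentiment(sentiment="positive").sentiment)  # positive
```

Constraining the field to a `Literal` means any LLM response outside the two allowed labels fails validation instead of silently flowing into the branch logic.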
- ✅ Fact generation, summarization, quiz creation, sentiment analysis, and joke generation
- 🌐 Free OpenRouter LLM integration
- 🧩 ASCII graph visualization of chains
- 🧾 Structured output parsing via Pydantic
- 🔐 Secure configuration with a .env file for API keys
- 📝 Expected output and the workflow visualization are included as comments after the code in each script for quick reference
Clone the repository:
git clone https://github.com/vishal815/vishal815-LangChains-runnable-components-Conveys-intelligent-automation.git
cd vishal815-LangChains-runnable-components-Conveys-intelligent-automation
Create a virtual environment:
python -m venv venv
venv\Scripts\activate   # Windows (on macOS/Linux: source venv/bin/activate)
Install dependencies:
pip install -r requirements.txt
Create a .env file in the root folder and add your API key:
OPENROUTER_API_KEY=your-api-key-here
Make sure your .env file is correctly set up.
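The scripts rely on python-dotenv's `load_dotenv()` to pull the key into the environment. Conceptually, that call does something like the following (a simplified stdlib sketch for illustration, not the library's actual implementation):

```python
import os

def load_env_file(path=".env"):
    """Simplified stand-in for python-dotenv's load_dotenv()."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blank lines and comments; keep KEY=VALUE pairs.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # Variables already set in the environment are not overwritten.
                os.environ.setdefault(key.strip(), value.strip())
```

After loading, the key is available via `os.getenv("OPENROUTER_API_KEY")`, so it never has to be hard-coded in the scripts.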
Run any of the following Python scripts to explore the workflows:
python runnable_branch.py # Sentiment-based feedback response
python runnable_lambda.py # Topic-based fact generation
python runnable_parallel.py # Note + quiz creation from text
python runnable_passthrough.py # Joke generator
python runnable_sequence.py # Report + summary creation
Each script will also render an ASCII graph to visualize how the workflow is structured.
- Name: Vishal Lazrus
- GitHub: vishal815
Contributions are welcome!