ChainFactory: Structured LLM Inference with Easy Parallelism & Tool Calling (chainfactory-py 0.0.18b)
ChainFactory is a declarative system for creating complex, multi-step LLM workflows using a simple YAML-like syntax. It allows you to connect multiple prompts in a chain, with outputs from one step feeding into inputs of subsequent steps. Its most important features are reduced reliance on the exact wording of prompts and easy parallel execution wherever iterables are involved.
- Sequential and parallel chain execution.
- Ideal for Split-Map-Reduce workflows.
- Automatic prompt generation from concise purpose statements.
- Type-safe outputs with Pydantic models.
- Chain inheritance and reusability.
- Smart input/output mapping between chain links.
- Hash-based caching for intermediate prompts and masks.
- Tool calling for fetching data and performing actions.
A chain-link is a single unit in your workflow / chain, defined using the `@chainlink` directive:
```yaml
@chainlink my-first-chain
prompt: Write a story about {topic}
out:
  story: str
```
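Assuming the chain above is saved as, say, `examples/story.fctr` (a hypothetical path), it can be loaded and run from Python. A minimal sketch; accessing the output as an attribute mirrors the `out` section, though the exact result shape may vary:

```python
from chainfactory import Engine

engine = Engine.from_file("examples/story.fctr")  # hypothetical path
result = engine(topic="a lighthouse keeper's last shift")
print(result.story)  # `story` comes from the `out` section above
```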
You can specify how chainlinks execute:
- Sequential (`--` or `sequential`): links run one after another, serially.
- Parallel (`||` or `parallel`): links run simultaneously for multiple inputs (requires an iterable output from the previous link).

Example: a 3-step chain.
```yaml
@chainlink generator --   # runs once
@chainlink reviewer ||    # runs in parallel; the number of runs is determined by the previous link's output, which must be or contain an iterable
@chainlink summarizer --  # runs once to summarize the output of the previous parallel link
```
Instead of writing prompts manually, let ChainFactory generate them:
```yaml
@chainlink
purpose: generate creative haiku topics
in:
  num: int
out:
  topics: list[str]
```
The system will automatically create an optimal prompt based on the purpose and the input variables before executing the chain.
If a chainlink's output is an iterable, or has an iterable attribute, the next chainlink can operate on each element concurrently, drastically improving performance. To achieve this, just change the link type to `||` or `parallel`. Split-Map-Reduce workflows benefit the most from ChainFactory's seamless parallelism.
```yaml
@chainlink topic_generator --
purpose: generate creative haiku topics
in:
  num: int
out:
  topics: list[str]

@chainlink haiku_generator ||   # parallel: generates multiple haikus
purpose: generate a haiku
in:
  topics.element: str
out:
  topic: str
  haiku: str
```
You can build on top of existing chains using the `@extends` directive. If a flow is common to multiple chains, it can be defined once and reused via `@extends`.
```yaml
@extends examples/base_chain.fctr

@chainlink additional_step
```
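For illustration, here is a hypothetical base chain and an extension of it; the file names, fields, and the `{outline}` hand-off are assumptions for the sketch, not library-verified behavior:

```yaml
# examples/base_chain.fctr (hypothetical)
@chainlink outliner --
purpose: create a brief outline for an article on the given subject
in:
  subject: str
out:
  outline: str
```

```yaml
# article.fctr (hypothetical): reuses the base flow, then adds a step
@extends examples/base_chain.fctr

@chainlink writer --
prompt: Expand the following outline into a short article. Outline: {outline}
out:
  article: str
```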
The system automatically maps outputs to inputs between chain links using dot notation:
```yaml
in:
  previous_chain.element.field: str
```
The YAML-defined output structures are converted to proper Python classes (Pydantic models) at runtime. These classes are then used to validate and type-check the output of each chain-link.
```yaml
def:
  Haiku:
    text: str = default value % description
    explanation: str?  # optional field

out:
  haikus: list[Haiku]
```
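Conceptually, the `Haiku` definition above corresponds to a Pydantic model along these lines (a sketch of the idea, not the library's actual generated code):

```python
from typing import Optional
from pydantic import BaseModel, Field

class Haiku(BaseModel):
    # `= default value % description` supplies a default and a field description
    text: str = Field(default="default value", description="description")
    # the trailing `?` marks the field as optional
    explanation: Optional[str] = None
```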
1. Use Purpose Statements: when possible, let the system generate prompts from clear one-liner purpose statements. Prefer this for tasks that do not require domain knowledge.
2. Type Everything: define input/output types for better reliability:
```yaml
def:
  MyType:
    field1: str
    field2: int?
```
Side Note: in a `.fctr` file, any type defined above the current chain-link is available as a global type.

3. Chain Structure: for general workflows,
- Start with sequential chains for initial processing.
- Use parallel chains whenever the order of execution is unimportant and iterables are involved.
- End with sequential chains or a final tool call for summarization and getting a final text / object output.
The above is the Split-Map-Reduce pattern, for which ChainFactory is a very good fit.
4. Descriptions: add field descriptions using `%`. This is not only for readability; it also helps the LLM understand the context of the field when producing a structured output. Essentially, field descriptions are mini-prompts themselves.
```yaml
out:
  review: str % A comprehensive analysis of the text
```
For parallel-to-sequential transitions, use masks to format data:
```yaml
mask:
  type: auto
  variables:
    - result.field1
    - result.field2
```
A template is automatically generated based on the variables supplied to the mask. This template is used to format the data before passing it to the final chainlink.
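As a concrete sketch, a reduce step for the earlier haiku workflow might look like this (the link name and variable paths are illustrative, following the `result.field` pattern above):

```yaml
@chainlink best_picker --   # sequential reduce step after a parallel link
mask:
  type: auto
  variables:
    - result.topic
    - result.haiku
purpose: pick the single best haiku from the given list and explain why
out:
  best_haiku: str
  reason: str
```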
The system automatically caches generated prompts and masks by hashing them and storing them under `.chainfactory/cache`. So even though the prompt template is generated dynamically in purpose-based prompting, it only needs to be generated once, and regenerated only when the purpose or the inputs change.
Side Note: it is recommended to commit the `.chainfactory/cache` folder to your codebase's version control system.
Tools are callables that behave in a similar fashion to chain-links. They are used very similarly, but with a few key differences:
- They are not defined in a `.fctr` file.
- Tools do not have prompts, purposes, or `out` sections.
- They can be used to fetch data that subsequent chain-links need, or to perform an action that uses data from the previous chain-links.
```yaml
@chainlink generator
purpose: generate haiku
in:
  topic: str
out:
  haiku: str % the complete haiku text. required.

@tool websearch
in:
  topic: str  # becomes a kwarg to the registered tool

# the most minimal tool definition is a single line;
# the tool gets whatever the above chainlink returns as input
@tool another_tool
```
Note: If the tools are not registered with the engine config, initialization of the engine will fail.
If defined in a parallel chainlink, the tool will run once for each instance of the chainlink. Make sure your tool is stateless and free of side effects that could cause issues under repeated concurrent execution. If the code does have side effects and repeating the action is not safe, use the `@tool` directive only in a sequential context. Most places where tool use makes sense involve fetching data from an API or a database.
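For example, a read-only fetch like this hypothetical `websearch` (the endpoint and body are illustrative) is safe to repeat concurrently, whereas anything that writes or sends should stay sequential:

```python
import urllib.parse
import urllib.request

def websearch(topic: str) -> str:
    # Read-only HTTP GET: stateless and side-effect free, so repeated
    # concurrent executions in a parallel chainlink are harmless.
    url = "https://example.com/search?q=" + urllib.parse.quote(topic)  # hypothetical endpoint
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")
```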
Tools are registered using the `register_tools` method of the `ChainFactoryEngineConfig` class. The singular version, `register_tool`, can also be used as a decorator. Warning: loading a file that uses tools fails if the config does not already have a tool registered under the same name.
```yaml
@chainlink haiku-generator
prompt: Write {num} haiku(s) about {topic}  # for small inputs, prompt and purpose do not differ much
out:
  haikus: list[Haiku]

@chainlink reviewer ||   # review each haiku in parallel
purpose: critically analyze each haiku
```
```yaml
@chainlink cancellation_request
purpose: check if the given email explicitly contains an event cancellation request
in:
  email_body: str
out:
  is_cancellation_request: bool
```
```yaml
@chainlink classify
purpose: classify the input text into one of the provided labels
in:
  text: str
  labels: list[str]
def:
  Classification:
    label: str % most relevant label for input text
    text: str % entirety of the input text, verbatim
    snippet: str % snippet from input text that justifies the label
    confidence: Literal["extreme", "high", "medium", "low", "none"] % confidence level in the label's accuracy
out:
  classification: Classification

@tool display_classification
in:
  - classification.label               # becomes kwarg called `label`
  - classification.snippet             # becomes kwarg called `snippet`
  - classification.confidence as conf  # becomes kwarg called `conf`

@tool confidence_filter --

@tool take_action --
in:
  classification: Classification

@chainlink validate --
prompt: Is (Action: {action}) in response to (Text: {text}) valid and reasonable?
out:
  is_valid: bool
  reason: str
```
And here's the Python counterpart for the above `.fctr` file:
```python
from chainfactory import Engine, EngineConfig
from some.module import TEXT, HANDLERS

engine_config = EngineConfig()


@engine_config.register_tool
def display_classification(label: str, snippet: str, conf: str):
    """
    Display the classification result.
    """
    raise NotImplementedError


@engine_config.register_tool
def confidence_filter(**kwargs):  # gets called first
    """
    Raise an exception if the confidence level is low.
    """
    raise NotImplementedError


@engine_config.register_tool  # brand new decorator
def take_action(**kwargs):
    """
    Call a handler based on the classification by ChainFactory.
    """
    raise NotImplementedError


if __name__ == "__main__":
    classification_engine = Engine.from_file("examples/classify.fctr", config=engine_config)
    classification_engine(text=TEXT, labels=list(HANDLERS.keys()))
```
The `ChainFactoryEngine`, or simply `Engine`, can be configured using the `ChainFactoryEngineConfig` (`EngineConfig`) class. You can control aspects such as the language model used, caching behavior, concurrency, and execution traces using the config class. Below are the configuration options available:

- `model`: the model to use (default is `"gpt-4o"`).
- `temperature`: sets the temperature for the model, which controls the randomness of the outputs (default is `0`).
- `cache`: enables caching of prompts and results (default is `False`).
- `provider`: defines the provider for the language model, with supported options including `"openai"`, `"anthropic"`, and `"ollama"`.
- `max_tokens`: the maximum tokens allowed per response (default is `1024`).
- `model_kwargs`: a dictionary of additional keyword arguments to pass to the model.
- `max_parallel_chains`: the maximum number of chains that can execute in parallel (default is `10`).
- `print_trace`: if `True`, enables printing of execution traces (default is `False`).
- `print_trace_for_single_chain`: similar to `print_trace`, but for single-chain execution (default is `False`).
- `pause_between_executions`: if `True`, prompts for confirmation before executing the next chain (default is `True`).
- `tools`: a dictionary of tools that can be used from `.fctr` files; it is updated using the class's `register_tools` method.
```python
from chainfactory import Engine, EngineConfig
from some_module import websearch

config = EngineConfig(
    model="gpt-4o",
    temperature=0.7,
    cache=True,
    provider="openai",
    max_tokens=2048,
    max_parallel_chains=5,
    print_trace=False,
    print_trace_for_single_chain=False,
    pause_between_executions=True,
)
config.register_tools([websearch])  # register one or more tools using the register_tools method


@config.register_tool  # works as a decorator
def another_tool(**kwargs):
    return kwargs


engine = Engine.from_file("examples/haiku.fctr", config)  # the engine can be used as a callable
```
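Invoking the engine then kicks off the whole chain. A sketch, assuming the haiku chain takes `num` and `topic` inputs like the earlier examples:

```python
result = engine(num=3, topic="autumn rain")
```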
ChainFactory makes it easy to create complex LLM workflows without writing code. Its simple syntax, automatic prompt generation, and smart features let you focus on what matters - designing great AI workflows.
Remember that this is just an overview - experiment with the examples to discover more possibilities!
- Check the examples folder for common patterns.
- Ping me directly via email: garkotipankaj [at] gmail.com