Releases: Synaptic724/ThreadFactory
🚀 ThreadFactory 1.2.4 — Smarter Collections, Stronger Control
High-performance thread-safe collections, lifecycle-aware Work objects, a full benchmarking suite, and the foundation of modular concurrency for Python 3.13+ (Free Threading).
While active development on ThreadFactory is paused temporarily, the team is now building Melder, a powerful DI and composition framework designed to work seamlessly with these primitives.
🆕 [1.2.4] - 2025-05-02
🚀 Classes Added
ConcurrentSet
- A fully thread-safe `set` implementation designed for high read and concurrent write workloads.
- Supports full set algebra: `union`, `intersection`, `difference`, `symmetric_difference`, with both standard and in-place forms.
- Includes:
  - `freeze()` mode to make the structure lock-free for reads.
  - `batch_update()` for atomic composite operations.
  - Functional access (`map`, `filter`, `reduce`).
- Integrates cleanly with Python idioms: `with ConcurrentSet(...) as s:` enables atomic scoped access.
- Implements `IDispose`, supporting proper lifecycle cleanup.
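A minimal usage sketch of the scoped access and disposal behaviour described above. The top-level import path and the built-in-style `add`/`discard` methods are assumptions, so check the package docs for the exact API:

```python
from threadfactory import ConcurrentSet  # import path assumed

shared = ConcurrentSet([1, 2, 3])

# Context-manager form: operations inside the block happen under one atomic scope.
with shared as s:
    s.add(4)        # method names assumed to mirror the built-in set API
    s.discard(1)

print(4 in shared)  # True

shared.dispose()    # IDispose-style cleanup once the set is no longer needed
```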
➕ New Features
🔄 Set Algebra + Operators
- Standard operations: `|`, `&`, `-`, `^`
- In-place versions: `|=`, `&=`, `-=`, `^=`
- Method equivalents: `union()`, `intersection()`, etc.
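A short sketch of the operator forms, assuming they mirror built-in `set` semantics (the import path is also an assumption):

```python
from threadfactory import ConcurrentSet  # import path assumed

a = ConcurrentSet({1, 2, 3})
b = ConcurrentSet({3, 4, 5})

print(a | b)   # union: contents {1, 2, 3, 4, 5}
print(a & b)   # intersection: contents {3}
print(a - b)   # difference: contents {1, 2}
print(a ^ b)   # symmetric difference: contents {1, 2, 4, 5}

a |= b                 # in-place union; a now holds {1, 2, 3, 4, 5}
print(a.union(b))      # method form, equivalent to the | operator
```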
🧠 Functional Utilities
- Built-in `map(func)`, `filter(func)`, and `reduce(func)` on the set — thread-safe and iterable-aware.
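A sketch of the functional helpers, assuming `map`/`filter` return a new collection and `reduce` folds the elements with a binary function (exact return types are not documented in these notes):

```python
from threadfactory import ConcurrentSet  # import path assumed

numbers = ConcurrentSet({1, 2, 3, 4, 5})

doubled = numbers.map(lambda x: x * 2)            # contents: 2, 4, 6, 8, 10
evens   = numbers.filter(lambda x: x % 2 == 0)    # contents: 2, 4
total   = numbers.reduce(lambda acc, x: acc + x)  # 15 (order-independent for addition)
print(doubled, evens, total)
```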
🧊 Freeze Mode
- New `freeze()` method added to: `ConcurrentSet`, `ConcurrentList`, `ConcurrentDict`
- When frozen:
  - All mutations are blocked.
  - Reads (iteration, contains, length) become lock-free, improving performance.
  - Unfreeze manually or via internal mechanisms.
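A sketch of the freeze workflow using `ConcurrentDict`. The `unfreeze()` name and dict-style item access are assumptions made for illustration:

```python
from threadfactory import ConcurrentDict  # import path assumed

cache = ConcurrentDict({"a": 1, "b": 2})

cache.freeze()              # mutations are now blocked
print(len(cache))           # reads such as len(), `in`, and iteration are lock-free
print("a" in cache)

# cache["c"] = 3            # would be rejected while frozen

cache.unfreeze()            # method name assumed; see "Unfreeze manually" above
cache["c"] = 3              # mutations allowed again
```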
🧵 Lock-Aware Reads
- Smart internal handling avoids unnecessary locking: `__iter__`, `__contains__`, `__len__`, and copies are optimized based on freeze state.
🔒 Atomic Batch Operations
- `batch_update(func)` ensures thread-safe bulk changes under a single lock pass.
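A sketch of an atomic composite change via `batch_update()`, assuming the callable receives the collection to mutate while its lock is held:

```python
from threadfactory import ConcurrentSet  # import path assumed

tags = ConcurrentSet({"alpha", "beta"})

def swap_labels(s):
    # Both steps run under a single lock pass, so no other thread can
    # observe the intermediate state.
    s.discard("beta")
    s.add("gamma")

tags.batch_update(swap_labels)
```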
🛠 Fixes
- `ConcurrentQueue` and `ConcurrentStack`: `peek()` now uses `try/finally` to guarantee lock release even when exceptions are raised (see the sketch after this list).
- Clarified locking strategy and lifecycle handling across `concurrent_core` classes via enhanced docstrings.
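The `peek()` fix follows the standard try/finally pattern for lock release. A generic illustration of that pattern, not the library's actual internals:

```python
import threading

class PeekableQueue:
    """Illustrative only: shows the try/finally locking pattern the fix adopts."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def peek(self):
        self._lock.acquire()
        try:
            return self._items[0]   # may raise IndexError when empty
        finally:
            self._lock.release()    # released even if the read raises
```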
🔄 Changes
- License upgraded from MIT to Apache 2.0
  - Enforces better attribution.
  - Improves corporate compliance and future integration safety.
- NOTICE File
  - Now includes formal third-party acknowledgments.
- Disposal Standardization
  - All disposable types implement a common `IDispose` interface with internal `.disposed` flags and `dispose()` methods.
🧭 Project Status
Development on ThreadFactory is not abandoned — it’s currently paused as the team shifts focus to a companion framework:
Melder — A dependency injection and object composition system that integrates deeply with `ThreadFactory`.
Melder will allow you to:
- Compose concurrent flows using declarative bindings.
- Control lifecycles with Work + Scope-aware systems.
- Scale from functions to full orchestrated pipelines.
ThreadFactory will resume updates once Melder stabilizes, with both designed to work together in the long term.
📦 Install or Upgrade
pip install --upgrade threadfactory
ThreadFactory 1.2.0 — Smarter Work, Scalable Threads
🚀 ThreadFactory 1.2.0 — Smarter Work, Scalable Threads
High-performance thread-safe collections, lifecycle-aware `Work` objects, a full benchmarking suite, and the foundation of modular concurrency for Python 3.13+ Free Threading.
✨ What’s New in 1.2.0
🧠 Work Object
- Introduced the `Work` class — a `Future` subclass with:
  - Native `await` support via `__await__`
  - Automatic disposal after `.result()` or `.exception()`
  - Pre/post execution hooks (`add_hook(...)`)
  - Metadata tracking: timestamps, retry count, IDs
  - Graceful cancellation with `CancelledError`
  - Fully thread-safe with condition handling
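A rough consumption sketch based on the features listed above. These notes do not show how a `Work` is constructed or scheduled, so the constructor call and hook signature below are assumptions, not the confirmed API:

```python
from threadfactory import Work  # import path assumed

# Assumption: a Work wraps a callable passed to its constructor.
work = Work(lambda: 2 + 2)

work.add_hook(lambda w: print("hook: work completed"))  # pre/post hook; signature assumed

# Once something has executed the work, results are read like a Future:
result = work.result()   # blocks until complete; the Work auto-disposes afterwards

# In async code the same object can be awaited directly thanks to __await__:
#     result = await work
```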
🧵 Worker (Prototype)
- Initial implementation of `Worker` — runs `Work` objects in background threads.
- Early step toward a full execution system.
🏗️ ThreadFactory Execution System (Scaffolded)
- Modular thread orchestration engine:
  - Producer / Worker role assignment
  - Queue-to-worker routing
  - Flexible scaling support
🎫 QueueAllocator
- New class for managing ticket-based ID pools using `ConcurrentQueue`.
- Supports context managers and resource recycling.
- Fully disposable and range-validated.
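A hypothetical sketch of the ticket-pool idea. The constructor arguments and the `allocate()`/`release()` method names are illustrative only and are not confirmed by these notes:

```python
from threadfactory import QueueAllocator  # import path assumed

# Hypothetical: a range-validated pool of IDs 1..100.
with QueueAllocator(1, 100) as allocator:   # context-manager form; disposed on exit
    ticket = allocator.allocate()           # method name assumed
    try:
        print(f"working with ticket {ticket}")
    finally:
        allocator.release(ticket)           # recycle the ID back into the pool
```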
📈 Performance Structures
🔷 ConcurrentCollection
- Unordered, thread-safe collection optimized for high concurrency.
- Uses fair, circular scanning and timestamp-based slot targeting.
- Benchmark comparison (2M ops, 10 producers, 20 consumers):
  - `ConcurrentCollection`: 108,235 ops/sec
  - `ConcurrentBuffer`: 102,494 ops/sec
📊 Benchmarking Suite
- Modular, extensible benchmarking system now included.
- Supports:
  - Throughput, latency, and load tests
  - Custom strategy ratios
  - CSV, JSON, YAML export
  - Visualization-ready outputs
⚡ ConcurrentBuffer Upgrade
- Now uses a windowed enqueue strategy with even-shard grouping.
- Improves performance under heavy load.
- Requires even number of shards (≥2) unless in single-shard mode.
🛠 Fixes & Optimizations
- Removed locks from `peek()` for `ConcurrentQueue` and `ConcurrentStack`
- Full .NET-style `Disposable` integration across all major types
- Absolute imports applied across source tree
- Improved metadata exposure and test diagnostics
📦 Install or Upgrade
pip install --upgrade threadfactory
Python 3.13+ required
Free-threading mode (no-GIL builds) highly recommended for full performance.
🗺️ Roadmap
✅ v1.2 (Now)
- ✅ `Work`, `Worker`, `QueueAllocator`
- ✅ `ConcurrentCollection` + optimized `ConcurrentBuffer`
- ✅ Benchmark suite and performance validation
- ✅ Core thread-safe structures refactored with auto-disposal
🔨 In Progress: v1.2.x → v2.0
🚀 ThreadFactory System
- `ThreadFactory.submit(fn)` returns `Work`
- Dynamic worker scaling
- Retry, timeout, and cancellation support
- Plug-and-play task queues (FIFO, priority, circular)
⚙️ Synchronization Primitives
- Reader/Writer locks (with upgrade/downgrade)
- Async-compatible lock structures
- Spinlocks and hybrid condition strategies
📦 Data Structures
- `ConcurrentSet`
- Min/Max priority queues
- Lock-free ring buffers
- Shared memory containers (planned)
🧪 Experimental / Exploratory
- `AsyncThreadFactory` (`await factory.submit(...)`)
- Real-time diagnostics and queue tracking
- Orchestrator for DAG-style workflows and dynamic thread tuning
🌐 Long-Term Vision
- Distributed execution across machines
- Actor-style task systems and cooperative workloads
- Optional native backends (C/C++) for zero-copy execution
- Become Python’s go-to system for Free Threading concurrency
MIT License © 2025 Mark Geleta (Synaptic724)
ThreadFactory 1.1.0
🚀 ThreadFactory 1.1.0 — Smarter Threads, Stronger Structures
High-performance thread-safe collections and concurrency primitives for Python 3.13+ Free Threading (No-GIL).
This release introduces a powerful new data structure — `ConcurrentBuffer` — plus dynamic semaphores, bulk update enhancements, and core refinements for cleaner operation under pressure.
✨ What’s New in 1.1.0
🧩 New Concurrent Structures
🔷 ConcurrentBuffer
- A high-throughput, sharded buffer optimized for low-to-moderate contention workloads.
- Balances concurrency using internal timestamp-tagged shards.
- Outperforms `ConcurrentQueue` by up to 60% in 4–20 thread scenarios.
- Ideal for producer/consumer pipelines and approximate-FIFO coordination.
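A small producer/consumer sketch. The constructor arguments and the `enqueue`/`dequeue` method names are assumed to mirror `ConcurrentQueue`; check the package docs for the real signatures:

```python
import threading
from threadfactory import ConcurrentBuffer  # import path assumed

buf = ConcurrentBuffer()     # constructor arguments (e.g. shard count) assumed defaulted
results = []

def producer():
    for i in range(100):
        buf.enqueue(i)       # method name assumed

def consumer():
    while len(results) < 100:
        try:
            results.append(buf.dequeue())   # method name assumed
        except Exception:
            pass             # buffer momentarily empty; retry

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start()
c.start()
p.join()
c.join()
print(f"consumed {len(results)} items")
```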
🔶 Dynaphore
- A dynamic semaphore abstraction with runtime tunable limits.
- Allows increasing or decreasing concurrency capacity on-the-fly.
- Perfect for adaptive workloads or dynamic backpressure strategies.
🛠️ Feature Enhancements
- ✅ Bulk `update(...)` support for `ConcurrentBag` and `ConcurrentList`
- ✅ Added `.remove(item)` to `ConcurrentQueue` and `ConcurrentStack` for more flexible usage
- ✅ `ConcurrentStack` and `ConcurrentQueue` now include micro-sleeps (`time.sleep(0.001)`) to reduce tight-loop contention and provide natural backpressure under load
- ✅ Unit tests now include optional performance benchmarks — clone and run to compare against Python’s built-ins
🧹 Fixes & Refactors
- 🛠 Switched from relative to absolute imports for better modular packaging
- 🔁 Minor consistency improvements across `peek`, `clear`, and `__repr__` methods
- 🧪 Refined `Empty` exception handling for consistency across all structures
📦 Install / Upgrade
pip install --upgrade threadfactory
Supports Python 3.13+ — Free Threading (No-GIL) builds recommended for maximum concurrency.
🔮 Roadmap Highlights
🔜 Next Up — v1.2
- ✅ **Work-stealing thread pool executor**
  Dynamic task distribution using a local + global queue model with stealing for idle threads.
- ✅ **Dynamic task orchestration**
  Introducing `Work`, `Future`, and `CancellationToken` objects for structured concurrency and result handling.
- ✅ **Pluggable queue strategies**
  Support for custom task queues including priority queues and circular buffers.
- ✅ **Graceful shutdown + restart**
  Controlled thread pool lifecycle with support for cancelable tasks, soft exits, and full reset.
🔁 Compare Performance Yourself
Clone and run:
python -m unittest discover -v tests/
Benchmarks included for:
- `ConcurrentBuffer` vs `ConcurrentQueue` vs `multiprocessing.Queue`
- Real-world 8M+ ops producer/consumer tests
- GIL-enabled vs No-GIL execution modes
🧠 Built for Engineers Who Think in Threads
ThreadFactory continues its mission to bring .NET-style concurrency to Python — but without GIL constraints.
Fast, flexible, and optimized for Free Threading in Python 3.13+.
Build high-performance systems. Think in parallel. Own the thread.
MIT License © 2025 Mark Geleta (Synaptic724)
ThreadFactory 1.0.1
🚀 ThreadFactory 1.0.1
High-performance thread-safe data structures and parallel utilities for Python 3.13+ with No-GIL optimizations.
ThreadFactory provides fast and scalable concurrent collections and parallel execution patterns.
Designed for Python 3.13+ Free Threading (No-GIL) builds. It works in standard Python 3.x too, but you’ll get maximum concurrency in No-GIL mode.
📦 What's Included
👜 ConcurrentBag
A multiset collection supporting duplicates.
🗄️ ConcurrentDict
A thread-safe dictionary with atomic operations.
📃 ConcurrentList
A list optimized for concurrent reads/writes.
📬 ConcurrentQueue
A FIFO queue with lock-based synchronization.
📚 ConcurrentStack
A LIFO stack built for thread-safe operations.
⚙️ Parallel Utilities
- `parallel_for`, `parallel_foreach`, `parallel_map`, `parallel_invoke`
- Inspired by .NET’s Task Parallel Library (TPL), with chunking, early exit, and thread-local storage.
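A short sketch of the parallel helpers. The argument orders below follow the .NET TPL style and are assumptions; consult the package docs for the exact signatures and options (chunking, early exit, thread-local storage):

```python
from threadfactory import parallel_for, parallel_map, parallel_invoke  # import path assumed

# Run a loop body for each index in [0, 10) across worker threads.
parallel_for(0, 10, lambda i: print(f"index {i}"))

# Parallel equivalent of map() over an iterable.
squares = parallel_map(lambda x: x * x, range(10))
print(squares)

# Run several independent callables concurrently.
parallel_invoke(
    lambda: print("task A"),
    lambda: print("task B"),
)
```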
✨ Highlights
✅ Designed for Python 3.13+, optimized for No-GIL
✅ Supports Free Threading, unlocking true parallelism
✅ Fine-grained batch operations, `map`/`filter`/`reduce`, and parallel execution patterns
✅ Minimal dependencies — no `asyncio`, no `multiprocessing` — pure threads!
✅ Fully unit-tested and benchmarked
⚙️ Installation
Option 1: Clone and Install Locally (Recommended for Development)
# Clone the repository
git clone https://github.com/Synaptic724/ThreadFactory.git
cd ThreadFactory
# Create a Python 3.13+ virtual environment (No-GIL/Free Threading recommended)
python -m venv .venv
source .venv/bin/activate # or .venv\Scripts\activate on Windows
# Install from PyPI:
pip install threadfactory
## 🛣️ Roadmap
### ✅ v1.0.0 — Core Release
- **Concurrent Collections**
- `ConcurrentList`
- `ConcurrentDict`
- `ConcurrentBag`
- `ConcurrentQueue`
- `ConcurrentStack`
- **Parallel Utilities**
- `parallel_for`
- `parallel_map`
- `parallel_invoke`
- `parallel_foreach`
- Optimized for Python 3.13+ Free Threading / No-GIL
- Full unit test coverage
- Benchmark suite comparing performance against standard lib concurrency
---
### 🔜 v1.1 — ThreadPool & WorkStealing Executors
- Dynamic thread pool with work-stealing
- `Work` object encapsulating tasks, cancellation tokens, and results
- Future-style result retrieval
- Graceful shutdown/restart for pools
- Pluggable task queues (deque, priority)
---
### 🔮 Long-Term Vision (v2.x+)
- Distributed task execution (multi-node)
- Actor-based concurrency
- C-extension acceleration (optional) for ultra-low latency
- Unified concurrency framework for Free Threading and GPU compute
- Task cancellation + chaining + dependency graphs
- `ThreadFactory` as a concurrency backbone for high-performance Python
---
## 📄 License
MIT License © 2024 Mark Geleta (Synaptic724)