みらい 未来 (mirai, Japanese for “future”)
Minimalist Async Evaluation Framework for R
→ Run R code in parallel without blocking your session
→ Distribute workloads across local or remote machines
→ Execute tasks on different compute resources as required
→ Perform actions reactively as soon as tasks complete
install.packages("mirai")
mirai() evaluates an R expression asynchronously in a parallel process.
daemons() sets up persistent background processes for parallel computations.
library(mirai)
daemons(5)  # launch 5 persistent background processes
m <- mirai({
Sys.sleep(1)
100 + 42
})
mp <- mirai_map(1:9, \(x) {
Sys.sleep(1)
x^2
})
m
#> < mirai [] >
m[]  # collect the result, waiting if it is not yet available
#> [1] 142
mp
#> < mirai map [4/9] >
mp[.flat]  # collect all map results, flattened into a vector
#> [1]  1  4  9 16 25 36 49 64 81
daemons(0)  # reset: shut down all daemons
⚙️ Modern Foundation
- Built on modern communication technologies (IPC, TCP, secure TLS)
- Robust queueing and scheduling built on nanonext and NNG
- Engineered for custom serialization of cross-language data formats (e.g. torch, Arrow), as sketched below
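To illustrate the custom serialization point above, here is a minimal sketch registering torch’s own serialization functions with mirai, so that tensors (which wrap external pointers) survive the round trip to a daemon. It assumes a recent mirai where a serial_config() is passed to daemons() via its serial argument; the exact interface may differ by version.

library(mirai)
library(torch)

# register how to turn a 'torch_tensor' into a raw vector and back
cfg <- serial_config(
  "torch_tensor",          # class to intercept
  torch::torch_serialize,  # serialize to a raw vector
  torch::torch_load        # restore from a raw vector
)

daemons(1, serial = cfg)

# the tensor is transparently reconstructed inside the daemon
m <- mirai(x$mul(2), x = torch_tensor(c(1, 2, 3)))
m[]

daemons(0)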
⚡️ Extreme Performance
- Scales to millions of tasks across thousands of connections
- Delivers 1,000x greater efficiency and responsiveness than alternatives
- Zero-latency, event-driven promises optimized for real-time applications (see the sketch below)
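As a sketch of the event-driven promises mentioned above: a mirai has an as.promise() method, so it can be piped with the promises package, and the callback runs as soon as the daemon delivers the result rather than on a polling schedule. Only standard mirai and promises functions are used here.

library(mirai)
library(promises)

daemons(2)

# the callback fires when the result arrives; the later event loop
# dispatches it while the main session remains free
mirai({
  Sys.sleep(1)
  sum(rnorm(1e6))
}) %...>%
  print()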
🚀 Production First
- Clear evaluation model with clean environment separation and explicit object passing (sketched below)
- Transparent and robust operation from minimal complexity and no hidden state
- Enhanced observability through OpenTelemetry integration
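A short sketch of the evaluation model described above: daemons evaluate expressions in clean environments, so nothing from the calling session is captured implicitly, and any required objects are passed explicitly as named arguments to mirai(); everywhere() sets up shared state on all daemons.

library(mirai)
daemons(2)

x <- 10
# `x` in the global environment is not visible to the daemon;
# pass it (and `y`) explicitly instead
m <- mirai(x + y, x = x, y = 32)
m[]
#> [1] 42

# evaluate setup code (packages, options, data) on every daemon
everywhere(library(stats))

daemons(0)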
🌐 Deploy Everywhere
- Deploy across local, remote (SSH), and HPC environments (Slurm, SGE, PBS, LSF)
- Compute profiles manage independent daemon pools for different resource types
- Route each task to the most appropriate resource by targeting a named profile (see the sketch below)
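As a sketch of the deployment options above: local and remote daemon pools can coexist under separate compute profiles, and each task is routed by naming a profile. The SSH hostnames and run_gpu_model() below are placeholders, not real endpoints.

library(mirai)

# local pool under the default profile
daemons(4)

# remote daemons launched over SSH, dialling back to this host,
# kept in their own compute profile
daemons(
  url = host_url(),
  remote = ssh_config(remotes = c("ssh://node1", "ssh://node2")),
  .compute = "remote"
)

m_local  <- mirai(sum(rnorm(1e6)))                       # default profile
m_remote <- mirai(run_gpu_model(), .compute = "remote")  # placeholder function

daemons(0)                       # reset the default profile
daemons(0, .compute = "remote")  # reset the 'remote' profile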
mirai serves as a foundation for asynchronous and parallel computing in the R ecosystem:
- The first official alternative communications backend for R: the ‘MIRAI’ parallel cluster, a feature request by R Core.
- Powers parallel map for purrr, a core tidyverse package.
- Primary async backend for Shiny, with full ExtendedTask support (see the sketch after this list).
- Built-in async evaluator enabling the @async tag in plumber2.
- Core parallel processing infrastructure provider for tidymodels.
- Seamless use of torch tensors, models and optimizers across parallel processes.
- Query databases over ADBC connections natively in the Arrow data format.
- R Polars leverages mirai’s serialization registration mechanism for transparent use of Polars objects.
- targets uses crew as its default high-performance computing backend.
- crew is a distributed worker launcher extending mirai to different computing platforms.
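To make the Shiny integration above concrete, a minimal ExtendedTask sketch: the task function returns a mirai, the button stays disabled while the task runs, and the session remains responsive throughout. This uses standard shiny and bslib functions and is illustrative rather than prescriptive.

library(shiny)
library(bslib)
library(mirai)

daemons(2)

ui <- page_fluid(
  input_task_button("go", "Run"),
  textOutput("result")
)

server <- function(input, output, session) {
  # the task function returns a mirai, which ExtendedTask treats as a promise
  task <- ExtendedTask$new(function(n) mirai({
    Sys.sleep(2)      # stand-in for a slow computation
    mean(rnorm(n))
  }, n = n)) |> bind_task_button("go")

  observeEvent(input$go, task$invoke(1e6))

  output$result <- renderText(task$result())
}

shinyApp(ui, server)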
Will Landau for being instrumental in shaping development of the package, from initiating the original request for persistent daemons, through to orchestrating robustness testing for the high performance computing requirements of crew and targets.
Joe Cheng for integrating the ‘promises’ method to work seamlessly within Shiny, and prototyping event-driven promises.
Luke Tierney of R Core, for discussion on L’Ecuyer-CMRG streams to ensure statistical independence in parallel processing, and reviewing mirai’s implementation as the first ‘alternative communications backend for R’.
Travers Ching for a novel idea in extending the original custom serialization support in the package.
Hadley Wickham, Henrik Bengtsson, Daniel Falbel, and Kirill Müller for many deep insights and discussions.
mirai • nanonext • CRAN HPC Task View
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.