# AltCore: Core functionalities for creating and validating alternative results of experimental studies
| Metric | Count |
|---|---|
| Total papers analyzed | 1 |
| Total alternatives generated | 1 |
| Total evaluations done | 1 |
This project builds on previously published results. Two key premises for us are:
- LLMs can extract key patterns from noisy scientific literature,
- their predictions are calibrated, which opens the door to assigning probabilities to possible outcomes of studies.
Our big-picture objective is to develop automated systems that predict and rank plausible experimental outcomes before scientists run resource-intensive experiments. Such tools would generate a comprehensive space of potential results from preliminary hypotheses and methods, accelerating scientific discovery.
We aim to explore and validate the idea that scientifically viable alternative results can be generated from published research, and to automate both generation and validation, incorporating ML-based evaluations and expert feedback to assess feasibility and accuracy.
To achieve this, our current idea is to use large language models (LLMs) to extract knowledge graphs (KGs) from neuroscience articles. A KG represents key entities (e.g., brain regions) as nodes and relationships among them as edges. Once we have built a KG, we can alter it in several ways to spell out a range of alternative results for a study. Ken has made one quick attempt at this (repo). It is by no means perfect, but hopefully you will find inspiration in it and perhaps reuse pieces of his code in your solutions.
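To make this concrete, below is a minimal sketch of the KG-alteration idea in Python. It is not taken from Ken's repo: the `Edge` class, the `COUNTERPARTS` map, and the flip-one-relation strategy are illustrative assumptions, not the project's actual design.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Edge:
    """A directed relationship between two entities (e.g., brain regions)."""
    source: str
    relation: str
    target: str

# Toy KG for a fictional finding: hippocampal activity increases recall.
kg = [
    Edge("hippocampus", "increases", "memory recall"),
    Edge("prefrontal cortex", "modulates", "hippocampus"),
]

# One simple alteration: swap a relation for a plausible counterpart,
# here an effect in the opposite direction.
COUNTERPARTS = {"increases": "decreases", "decreases": "increases"}

def alternatives(graph: list[Edge]) -> list[list[Edge]]:
    """Return one alternative KG per edge whose relation can be flipped."""
    alts = []
    for i, edge in enumerate(graph):
        if edge.relation in COUNTERPARTS:
            flipped = replace(edge, relation=COUNTERPARTS[edge.relation])
            alts.append(graph[:i] + [flipped] + graph[i + 1:])
    return alts

for alt in alternatives(kg):
    print(alt)  # one alternative KG, with "increases" flipped to "decreases"
```

Real alterations might be richer (node substitutions, edge deletions, LLM-proposed relations), with each altered KG rendered back into a textual alternative result, but the KG-as-data-structure view is the common core.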
- Clone the repository:
```bash
git clone git@github.com:don-tpanic/alt-core-playground.git
cd alt-core-playground
```
- Install the package with development dependencies:
```bash
pip install -e ".[dev]"
```
This project uses several development tools:
- Black for code formatting
- Ruff for linting
- MyPy for type checking
To run linting checks manually:
```bash
make lint
```
This project is inherently exploratory—roles and priorities may shift as we learn! Whether you’re drawn to algorithm design, neuroscience validation, or tool-building, there’s room to collaborate across teams. Early stages will emphasize research, but we’ll gradually transition polished components into engineering streams. We appreciate all contributions.
To learn how to contribute, please refer to the contribution page here.
Below is an overview of the tasks we're working on and seeking contributors for; the contribution page has detailed instructions for each.
| Team | Stream | Feature | Tasks |
|---|---|---|---|
| generator | research | Develop new algorithms to create alternative results (#2) | #3, #4 |
| evaluator | research | Validate generated results against ground truth; develop ground truth (#5) | #11 |
| evaluator | research | Develop automated evaluation pipelines | #5 |
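As a rough illustration of the evaluator stream's "validate generated results against ground truth" task, one simple automated check is to score a generated KG against a ground-truth KG by triple overlap. The F1-style metric and the toy triples below are assumptions for illustration, not the project's actual pipeline.

```python
Triple = tuple[str, str, str]  # (source, relation, target)

def edge_f1(generated: set[Triple], ground_truth: set[Triple]) -> float:
    """F1 over KG triples: one simple way to compare against ground truth."""
    if not generated or not ground_truth:
        return 0.0
    tp = len(generated & ground_truth)  # triples present in both KGs
    precision = tp / len(generated)
    recall = tp / len(ground_truth)
    return 2 * precision * recall / (precision + recall) if tp else 0.0

ground_truth = {("hippocampus", "increases", "memory recall")}
generated = {
    ("hippocampus", "increases", "memory recall"),
    ("amygdala", "modulates", "memory recall"),  # spurious extra edge
}
print(f"{edge_f1(generated, ground_truth):.2f}")  # prints 0.67
```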
- GitHub Issues: see the contribution page.
- Discord for daily communication and discussions: TODO
- Website for new releases: TODO