We recommend that your contribution comply with the following rules before you submit a pull request:
- [ ] Give your pull request a helpful title that summarises what your contribution does. In some cases `Fix <ISSUETITLE>` is enough. `Fix #<ISSUENUMBER>` is not enough.
- [ ] All public methods should have informative docstrings with sample usage presented as doctests when appropriate.
- [ ] At least one paragraph of narrative documentation with links to references in the literature (with PDF links when possible) and the example.
- [ ] All functions and classes must have unit tests. These should include, at the very least, type checking and ensuring correct computation/outputs.
- [ ] Ensure all tests are passing locally using pytest. Install the necessary packages by:
  ```pip install pytest pytest-cov```
  then run
  ```pytest```
  or you can run pytest on a single test file by
  ```pytest path/to/test.py```
- [ ] Run an autoformatter. We use `black` and would like all files formatted with it. You can run the following lines to format your files.
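  For example (assuming `black`'s standard command line; the file path is a placeholder):
  ```pip install black```
  ```black path/to/changed_file.py```
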
This repository is a container of methods that Neurodata uses to expose their open-source code while it is in the process of being merged with larger scientific libraries such as scipy, scikit-image, or scikit-learn. Additionally, methods for computational neuroscience on brains too specific for a general scientific library can be found here, such as image registration software tuned specifically for large brain volumes.

- [(optional, any python >= 3.7 environment will suffice)](#optional-any-python--38-environment-will-suffice)
- [Install from pypi](#install-from-pypi)
- [Install from source](#install-from-source)
- [How to use Brainlit](#how-to-use-brainlit)
- [Data setup](#data-setup)
- [Create a session](#create-a-session)
- [Features](#features)
- [Registration](#registration)
- [Core](#core)
- [(Push and Pull Data)](#push-and-pull-data)
- [Visualize](#visualize)
- [Manually Segment](#manually-segment)
- [Automatically and Semi-automatically Segment](#automatically-and-semi-automatically-segment)
- [API Reference](#api-reference)
- [Tests](#tests)
- [Common errors and troubleshooting](#common-errors-and-troubleshooting)
- [Contributing](#contributing)
- [Credits](#credits)
## Motivation
The repository originated as the project of a team in Joshua Vogelstein's class **Neurodata** at Johns Hopkins University. The project focused on data science applied to the [mouselight data](https://www.hhmi.org/news/mouselight-project-maps-1000-neurons-and-counting-in-the-mouse-brain). It became apparent that the tools developed for the class would be useful for other groups doing data science on large data volumes.
The repository can now be considered a "holding bay" for code developed by Neurodata for collaborators and researchers to use.
## Installation
### Operating Systems
Brainlit is compatible with Mac, Windows, and Unix systems.
#### Windows Subsystem for Linux 2
For Windows 10 users who prefer Linux functionality without the speed sacrifice of a virtual machine, Brainlit can be installed and run on WSL2. See the installation walkthrough [here](docs/WSL2-install-instructions.md).
- create a virtual environment: `conda create --name brainlit python=3.8`
- activate the environment: `conda activate brainlit`
### Install from pypi
- install brainlit: `pip install brainlit`
### Install from source
- clone the repo: `git clone https://github.com/neurodata/brainlit.git`
- cd into the repo: `cd brainlit`
- install brainlit: `pip install -e .`
### For Windows users setting up a Conda environment
Users may run into an issue installing dependencies on Python 3.8. There are a couple of workarounds currently available:
#### Use Python 3.7 - RECOMMENDED
- Create a new environment using Python 3.7 instead: `conda create --name brainlit3.7 python=3.7`
- Run `pip install -e .`; this should successfully install the brainlit module for Conda on Windows.
#### Other potential fixes
`gcc` may be missing; it is necessary for wheel installation from Python 3.6 onwards.
- Install [gcc for Windows](https://www.guru99.com/c-gcc-install.html) and run `pip install brainlit -e . --no-cache-dir`.
After Python 3.6, Windows handles wheels through the Microsoft Manifest Tool, which might be missing.
- Add the [Microsoft Manifest Tool](https://docs.microsoft.com/en-us/windows/win32/sbscs/mt-exe) to the `PATH` variable.
## How to use Brainlit
### Data setup
The `source` data directory should have an octree data structure:
```
data/
├── default.0.tif
├── transform.txt
├── 1/
│   ├── 1/, ..., 8/
│   └── default.0.tif
├── 2/ ... 8/
└── consensus-swcs (optional)
    ├── G-001.swc
    ├── G-002.swc
    └── default.0.tif
```
If your team wants to interact with cloud data, each member will need account credentials specified in `~/.cloudvolume/secrets/x-secret.json`, where `x` is one of `[aws, gc, azure]`; this file contains your ID and secret key for your cloud platform.
We provide a template for `aws` in the repo for convenience.
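
For reference, a minimal sketch of `aws-secret.json`, using the key names documented by CloudVolume (the values are placeholders for your own credentials):

```
{
  "AWS_ACCESS_KEY_ID": "<your-access-key-id>",
  "AWS_SECRET_ACCESS_KEY": "<your-secret-access-key>"
}
```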
### Create a session
Each user will start their scripts with approximately the same lines:
```
from brainlit.utils.ngl import NeuroglancerSession
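
# Create a session that points at your data location.
# NOTE: the exact constructor arguments are an assumption for illustration;
# the URL below is a placeholder. See the interactive demo linked below for the full call.
ngl_sess = NeuroglancerSession(url="s3://<your-bucket>/<your-dataset>")
```
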
From here, any number of tools can be run, such as the visualization or annotation tools. [Interactive demo](https://github.com/neurodata/brainlit/blob/master/docs/notebooks/visualization/visualization.ipynb).
## Features
### Registration
The registration subpackage is a facsimile of ARDENT, a pip-installable (`pip install ardent`) package for nonlinear image registration, wrapped in an object-oriented framework for ease of use. It is an implementation of the LDDMM algorithm with modifications, written by Devin Crowley and based on "Diffeomorphic registration with intensity transformation and missing data: Application to 3D digital pathology of Alzheimer's disease." That paper extends an older LDDMM paper, "Computing large deformation metric mappings via geodesic flows of diffeomorphisms."

A tutorial is available in `docs/notebooks/registration_demo.ipynb`.
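
For orientation, classical LDDMM (the older paper above) estimates a time-varying velocity field $v_t$ whose flow $\varphi_t$ deforms one image onto the other by minimizing, roughly, the following energy; the modified algorithm of the Alzheimer's-disease paper adds intensity-transformation and missing-data terms not shown here:

$$
E(v) = \int_0^1 \lVert v_t \rVert_V^2 \, dt + \frac{1}{\sigma^2} \left\lVert I_0 \circ \varphi_1^{-1} - I_1 \right\rVert_{L^2}^2, \qquad \dot{\varphi}_t = v_t(\varphi_t), \quad \varphi_0 = \mathrm{id}.
$$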
## Core
The core brainlit package can be described by the diagram at the top of the readme:
### (Push and Pull Data)
Brainlit uses the Seung Lab's [CloudVolume](https://github.com/seung-lab/cloud-volume) package to push and pull data to and from the cloud or a local machine in an efficient and parallelized fashion. [Interactive demo](https://github.com/neurodata/brainlit/blob/master/docs/notebooks/utils/uploading_brains.ipynb).
The only requirement is an account on a cloud service such as S3, Azure, or Google Cloud.
Loading data via a local filepath of an octree structure is also supported. [Interactive demo](https://github.com/neurodata/brainlit/blob/master/docs/notebooks/utils/upload_brains.ipynb).
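
For a rough sense of the underlying mechanism, pulling and pushing a chunk with CloudVolume directly looks something like the sketch below (this is plain CloudVolume, not Brainlit's wrappers; the cloud path and coordinates are placeholders):

```
from cloudvolume import CloudVolume

# open a precomputed volume on the cloud (the path is a placeholder)
vol = CloudVolume("precomputed://s3://<your-bucket>/<your-dataset>", mip=0, parallel=True)

# pull a small chunk into memory as a numpy-like array
img = vol[1000:1512, 1000:1512, 500:564]

# push (possibly edited) data back to the same region
vol[1000:1512, 1000:1512, 500:564] = img
```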
### Visualize
Brainlit supports many methods to visualize large data. Visualizing an entire dataset can be done via Google's [Neuroglancer](https://github.com/google/neuroglancer), which provides a web link as shown below.
*(screenshot)*
Brainlit also has tools to visualize chunks of data as 2d slices or as a 3d model.

*(screenshot)*
### Manually Segment
Brainlit includes a lightweight manual segmentation pipeline. This allows collaborators of a project to pull data from the cloud, create annotations, and push their annotations back up as a separate channel. [Interactive demo](https://github.com/neurodata/brainlit/blob/master/docs/notebooks/pipelines/manual_segementation.ipynb).
### Automatically and Semi-automatically Segment
Similar to the above pipeline, segmentations can be automatically or semi-automatically generated and pushed to a separate channel for viewing. [Interactive demo](https://github.com/neurodata/brainlit/blob/master/docs/notebooks/pipelines/seg_pipeline_demo.ipynb).