
Commit 757f22a

Merge pull request #159 from neurodata/develop

Version 0.2.0

2 parents: daf8a9f + a117ac5

File tree: 2,643 files changed (43,737 additions, 38,020 deletions)


.aws.sh

Lines changed: 0 additions & 11 deletions
This file was deleted.

.gitignore

Lines changed: 2 additions & 9 deletions
````diff
@@ -1,13 +1,6 @@
 **/.DS_Store
-data/test_*
-data/data_octree/1/
-data/data_octree/2/
-data/data_octree/3/
-data/data_octree/4/
-data/data_octree/5/
-data/data_octree/6/
-data/data_octree/7/
-data/data_octree/8/
+docs/notebooks/pipelines/demo*
+**-checkpoint.ipynb
 
 # Byte-compiled / optimized / DLL files
 __pycache__/
````

.travis.yml

Lines changed: 18 additions & 28 deletions
````diff
@@ -2,39 +2,29 @@ dist: xenial
 sudo: true
 language: python
 python:
-  - '3.6'
+  - "3.8"
 matrix:
   include:
-    - python: 3.7
+    - python: 3.8
 cache: pip
-before_install:
-  - "./.aws.sh"
 install:
-  - pip install -r requirements.txt
-  - pip install -U pytest pytest-cov pytest-doctestplus codecov
-  - pip install black
+  - pip install -r requirements.txt
+  - pip install -U pytest pytest-cov pytest-doctestplus codecov six
+  - pip install black
 script:
-  - pytest --cov=brainlit tests/
-  - black --check --diff ./brainlit ./tests
+  - pytest --cov=brainlit tests/
+  - black --check --diff ./brainlit ./tests
 after_success:
-  - codecov
-env:
-  global:
-    secure: XXGsCs9mY4iCJG8KHF7OqCfh5tmZbPd6HGm3XIHYHSKWfEozG7ac+LPFfUgHOXy4SJXsEBZEs/0LAENfJS5BL9ToFxsH08QOoHt5cXx/vt/v8Al4uFJEhNvn3FSRfabYljdjDNAzcIqTpj7h0PIxLpKzQOXsaYPtKuvYlp3pwg4kdX2mw0gqcz8sF0QmeyTLsYnXb2cQ4FVUrl7y9sLLUKfsR+6kMnoQNWA9Blnm3aIKp20RPcrjlD+XLsfjA85kQEAWsryMefjPo+K4DHfHV/sTu8157Yz8/OJ3yYofR/bfhSAmHHw4LOM92UYggMSRVM0txiL1kLjxP2kBY63aNtwKrC1q7WJ2WtFTLXNUo1kMUm8QbcSOhmzErBEADKnFviYQogs9UrsWfgeW3hBTIzYwUmatXXy33drQywbHzp0PDrweaoYI9Zr606oPvpzJ8G5WZwDWbf2G785L5J21UzxSKkPMTO6RZ2Oni0cxdiT9DgPlao4Y+P7f756L4dy5YE29/mbExAfbKt8FTqnEl5JDwJBaCPJe/1uaSUy5rFF2noAHbylKrrpE3kgt+9o83gG69wQj6RFx3Z889FjvQYiMKS0Dn/TEXsaWMbX7HlEtXSq4U9r5N59cF4c/m6+/YlY8Hwz8XxeKIf/r31anI93sapuYwgb0V0jBMBGGkAk=
+  - codecov
 
 deploy:
-  - provider: pypi
-    user: "__token__"
-    password:
-      secure: $PYPI
-    skip_existing: true
-    on:
-      branch: master
-      tags: true
-      repo: neurodata/brainlit
-    python: '3.6'
-
-
-
-
-
+  - provider: pypi
+    user: "__token__"
+    password:
+      secure: $PYPI
+    skip_existing: true
+    on:
+      branch: master
+      tags: true
+      repo: neurodata/brainlit
+    python: "3.8"
````

CONTRIBUTING.md

Lines changed: 10 additions & 14 deletions
````diff
@@ -13,27 +13,23 @@ to record your changes in Git, push the changes to your branch with:
 ```git pull origin master```
 ```git push origin my-feature```
 
+The repository is structured according to
+![this model](https://nvie.com/img/git-model@2x.png)
+([source](https://nvie.com/posts/a-successful-git-branching-model/))
+
 ## Pull Request Checklist
 We recommended that your contribution complies with the following rules before you submit a pull request:
 
-Give your pull request a helpful title that summarises what your contribution does. In some cases Fix <ISSUE TITLE> is enough. Fix #<ISSUE NUMBER> is not enough.
-
-All public methods should have informative docstrings with sample usage presented as doctests when appropriate.
-
-At least one paragraph of narrative documentation with links to references in the literature (with PDF links when possible) and the example.
-
-All functions and classes must have unit tests. These should include, at the very least, type checking and ensuring correct computation/outputs.
-
-Ensure all tests are passing locally using pytest. Install the necessary packages by:
-
+- [ ] Give your pull request a helpful title that summarises what your contribution does. In some cases Fix <ISSUE TITLE> is enough. Fix #<ISSUE NUMBER> is not enough.
+- [ ] All public methods should have informative docstrings with sample usage presented as doctests when appropriate.
+- [ ] At least one paragraph of narrative documentation with links to references in the literature (with PDF links when possible) and the example.
+- [ ] All functions and classes must have unit tests. These should include, at the very least, type checking and ensuring correct computation/outputs.
+- [ ] Ensure all tests are passing locally using pytest. Install the necessary packages by:
 ```pip install pytest pytest-cov```
 then run
-
 ```pytest```
 or you can run pytest on a single test file by
-
 ```pytest path/to/test.py```
-Run an autoformatter. We use black and would like for you to format all files using black. You can run the following lines to format your files.
-
+- [ ] Run an autoformatter. We use black and would like for you to format all files using black. You can run the following lines to format your files.
 ```pip install black```
 ```black path/to/module.py```
````
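The checklist above asks for docstrings with doctest-style sample usage and for tests to pass under pytest (the updated .travis.yml installs pytest-doctestplus for this). As a minimal sketch of that docstring style, using a hypothetical helper that is not part of the brainlit codebase:

```python
# Hypothetical example (not from brainlit) of the docstring-with-doctest
# style the checklist asks for: type checking plus a runnable Examples block.
def scale_intensities(values, factor=2.0):
    """Scale voxel intensities by a constant factor.

    Parameters
    ----------
    values : list of float
        Raw intensity values.
    factor : float
        Multiplicative scale.

    Examples
    --------
    >>> scale_intensities([1.0, 2.0, 3.0], factor=2.0)
    [2.0, 4.0, 6.0]
    """
    if not isinstance(factor, (int, float)):
        raise TypeError("factor must be numeric")
    return [float(v) * factor for v in values]
```

A doctest runner, for example `pytest --doctest-modules path/to/module.py`, then executes the `Examples` block as a test alongside the unit tests.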

README.md

Lines changed: 115 additions & 52 deletions
````diff
@@ -1,4 +1,5 @@
 # Brainlit
+
 [![Python](https://img.shields.io/badge/python-3.7-blue.svg)]()
 [![Build Status](https://travis-ci.com/neurodata/brainlit.svg?branch=master)](https://travis-ci.com/neurodata/brainlit)
 [![PyPI version](https://badge.fury.io/py/brainlit.svg)](https://badge.fury.io/py/brainlit)
````
````diff
@@ -7,77 +8,124 @@
 ![Docker Cloud Build Status](https://img.shields.io/docker/cloud/build/bvarjavand/brainlit)
 ![Docker Image Size (latest by date)](https://img.shields.io/docker/image-size/bvarjavand/brainlit)
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
-This repository is a container of methods that Neurodata usees to expose their open-source code while it is in the process of being merged with larger scientific libraries such as scipy, scikit-image, or scikit-learn. Additioanlly, methods for computational neuroscience on brains too specific for a general scientific library can be found here, such as image registration software tuned specifically for large brain volumes.
-
-![Brainlight Features](https://raw.githubusercontent.com/neurodata/brainlight/diagram/Brainlight.png)
-
-- [Motivation](#motivation)
-- [Installation](#installation)
-  * [Environment](#environment)
-  * [Install from pypi](#install-from-pypi)
-  * [Install from source](#install-from-source)
-- [How to Use Brainlit](#how-to-use-brainlit)
-  * [Data Setup](#data-setup)
-  * [Create a Session](#create-a-session)
-- [Features](#features)
-  * [Registration](#registration)
-- [Core](#core)
-  * [Push/Pull Data](#push-and-pull-data)
-  * [Visualize](#visualize)
-  * [Manually Segment](#manually-segment)
-  * [Automatically Segment](#automatically-and-semi-automatically-segment)
-- [API reference](#api-reference)
-- [Tests](#tests)
-- [Contributing](#contributing)
-- [Credits](#credits)
-
+This repository is a container of methods that Neurodata uses to expose their open-source code while it is in the process of being merged with larger scientific libraries such as scipy, scikit-image, or scikit-learn. Additionally, methods for computational neuroscience on brains too specific for a general scientific library can be found here, such as image registration software tuned specifically for large brain volumes.
+
+![Brainlight Features](https://github.com/neurodata/brainlit/blob/develop/docs/images/figure.png)
+
+- [Brainlit](#brainlit)
+  - [Motivation](#motivation)
+  - [Installation](#installation)
+    - [Environment](#environment)
+      - [(optional, any python >= 3.7 environment will suffice)](#optional-any-python--38-environment-will-suffice)
+    - [Install from pypi](#install-from-pypi)
+    - [Install from source](#install-from-source)
+  - [How to use Brainlit](#how-to-use-brainlit)
+    - [Data setup](#data-setup)
+    - [Create a session](#create-a-session)
+  - [Features](#features)
+    - [Registration](#registration)
+  - [Core](#core)
+    - [(Push and Pull Data)](#push-and-pull-data)
+    - [Visualize](#visualize)
+    - [Manually Segment](#manually-segment)
+    - [Automatically and Semi-automatically Segment](#automatically-and-semi-automatically-segment)
+  - [API Reference](#api-reference)
+  - [Tests](#tests)
+  - [Common errors and troubleshooting](#common-errors-and-troubleshooting)
+  - [Contributing](#contributing)
+  - [Credits](#credits)
 
 ## Motivation
-The repository originated as the project of a team in Joshua Vogelstein's class **Neurodata** at Johns Hopkins University. This project was focused on data science towards the [mouselight data](https://www.hhmi.org/news/mouselight-project-maps-1000-neurons-and-counting-in-the-mouse-brain). It becme apparent that the tools developed for the class would be useful for other groups doing data science on large data volumes.
+
+The repository originated as the project of a team in Joshua Vogelstein's class **Neurodata** at Johns Hopkins University. This project was focused on data science towards the [mouselight data](https://www.hhmi.org/news/mouselight-project-maps-1000-neurons-and-counting-in-the-mouse-brain). It became apparent that the tools developed for the class would be useful for other groups doing data science on large data volumes.
 The repository can now be considered a "holding bay" for code developed by Neurodata for collaborators and researchers to use.
 
 ## Installation
+
+### Operating Systems
+Brainlit is compatible with Mac, Windows, and Unix systems.
+
+#### Windows Linux Subsystem 2
+For Windows 10 users that prefer Linux functionality without the speed sacrifice of a Virtual Machine, Brainlit can be installed and run on WSL2. See installation walkthrough [here.](docs/WSL2-install-instructions.md)
+
+
 ### Environment
-- [get conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html)
-- create a virtual environment with `python>=3.7` via `conda create --name brainlit python=3.7`
-- activate the environment via `conda activate brainlit`
-
+
+#### (optional, any python >= 3.8 environment will suffice)
+
+- [get conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html)
+- create a virtual environment: `conda create --name brainlit python=3.8`
+- activate the environment: `conda activate brainlit`
+
 ### Install from pypi
-- install brainlit via `pip install brainlit`
-
+
+- install brainlit: `pip install brainlit`
+
 ### Install from source
-- clone the repo via `git clone https://github.com/neurodata/brainlit.git`
-- cd into the repo via `cd brainlit`
-- install brainlit via `pip install -e .`
+
+- clone the repo: `git clone https://github.com/neurodata/brainlit.git`
+- cd into the repo: `cd brainlit`
+- install brainlit: `pip install -e .`
+
+### For Windows Users setting up a Conda environment:
+
+Users currently may run into an issue with installing dependencies on Python 3.8. There are a couple workarounds currently available:
+
+#### Use Python 3.7 - RECOMMENDED
+
+- Create a new environment using Python 3.7 instead: `conda create --name brainlit3.7 python=3.7`
+
+- Run `pip install -e .` This should successfully install the brainlit module for Conda on Windows.
+
+#### Other potential fixes
+
+Potentially, `gcc` is missing, which is necessary for wheel installation from Python 3.6 onwards.
+
+- Install [gcc for Windows](https://www.guru99.com/c-gcc-install.html) and run `pip install brainlit -e . --no-cache-dir`.
+
+Post-Python 3.6, windows handles wheels through the Microsoft Manifest Tool, it might be missing.
+
+- Add the [Microsoft Manifest Tool](https://docs.microsoft.com/en-us/windows/win32/sbscs/mt-exe) to the `PATH` variable.
 
 ## How to use Brainlit
+
 ### Data setup
-The `source` data directory should look something like an octree data structure with optional swc folder
-
-data/
- - default.0.tif
- - 1/
-   * default.0.tif
-   * 1/ ... 8/
- - 2/ ... 8/
- - transform.txt
- - consensus-swcs (optional, for .swc files)
-
-First, decide for your team where you'd like to store the data - whether it will be on a local machine or on the cloud. If on the cloud,
-each collaborator will need to create a file at `~/.cloudvolume/secrets/x-secret.json`, where `x` is one of `[aws, gc, azure]` which contains your id and secret key for your cloud platform.
+
+The `source` data directory should have an octree data structure
+
+```
+data/
+├── default.0.tif
+├── transform.txt
+├── 1/
+│   ├── 1/, ..., 8/
+│   └── default.0.tif
+├── 2/ ... 8/
+└── consensus-swcs (optional)
+    ├── G-001.swc
+    ├── G-002.swc
+    └── default.0.tif
+```
+
+If your team wants to interact with cloud data, each member will need account credentials specified in `~/.cloudvolume/secrets/x-secret.json`, where `x` is one of `[aws, gc, azure]` which contains your id and secret key for your cloud platform.
+We provide a template for `aws` in the repo for convenience.
 
 ### Create a session
+
 Each user will start their scripts with approximately the same lines:
+
 ```
 from brainlit.utils.ngl import NeuroglancerSession
 
 session = NeuroglancerSession(url='file:///abc123xyz')
 ```
+
 From here, any number of tools can be run such as the visualization or annotation tools. [Interactive demo](https://github.com/neurodata/brainlit/blob/master/docs/notebooks/visualization/visualization.ipynb).
 
 ## Features
 
 ### Registration
+
 The registration subpackage is a facsimile of ARDENT, a pip-installable (pip install ardent) package for nonlinear image registration wrapped in an object-oriented framework for ease of use. This is an implementation of the LDDMM algorithm with modifications, written by Devin Crowley and based on "Diffeomorphic registration with intensity transformation and missing data: Application to 3D digital pathology of Alzheimer's disease." This paper extends on an older LDDMM paper, "Computing large deformation metric mappings via geodesic flows of diffeomorphisms."
 
 This is the more recent paper:
````
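The data-setup portion of the hunk above points each user to a credentials file under `~/.cloudvolume/secrets/`. A minimal sketch of creating the `aws` variant follows; the JSON key names are an assumption based on CloudVolume's documented `aws-secret.json` convention, and the values are placeholders rather than real credentials.

```python
# Sketch: write the per-user CloudVolume secrets file mentioned in the README.
# Key names assume CloudVolume's aws-secret.json convention; adjust for gc/azure.
import json
from pathlib import Path

secrets_dir = Path.home() / ".cloudvolume" / "secrets"
secrets_dir.mkdir(parents=True, exist_ok=True)  # create the folder if it does not exist

aws_secret = {
    "AWS_ACCESS_KEY_ID": "<your-access-key-id>",          # placeholder
    "AWS_SECRET_ACCESS_KEY": "<your-secret-access-key>",  # placeholder
}
(secrets_dir / "aws-secret.json").write_text(json.dumps(aws_secret, indent=2))
```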
````diff
@@ -95,15 +143,18 @@ https://doi.org/10.1023/B:VISI.0000043755.93987.aa
 A tutorial is available in docs/notebooks/registration_demo.ipynb.
 
 ## Core
-The core brain-lit package can be described by the diagram at the top of the readme:
+
+The core brainlit package can be described by the diagram at the top of the readme:
 
 ### (Push and Pull Data)
+
 Brainlit uses the Seung Lab's [Cloudvolume](https://github.com/seung-lab/cloud-volume) package to push and pull data through the cloud or a local machine in an efficient and parallelized fashion. [Interactive demo](https://github.com/neurodata/brainlit/blob/master/docs/notebooks/utils/uploading_brains.ipynb).
-The only requirement is to have an account on a cloud service on s3, azure, or google cloud.
+The only requirement is to have an account on a cloud service on s3, Azure, or Google Cloud.
 
 Loading data via local filepath of an octree structure is also supported. [Interactive demo](https://github.com/neurodata/brainlit/blob/master/docs/notebooks/utils/upload_brains.ipynb).
 
 ### Visualize
+
 Brainlit supports many methods to visualize large data. Visualizing the entire data can be done via Google's [Neuroglancer](https://github.com/google/neuroglancer), which provides a web link as shown below.
 
 screenshot
````
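For orientation, the push/pull section in the hunk above builds on the Seung Lab's CloudVolume package. A minimal pull might look like the sketch below; the bucket path, mip level, and slice bounds are illustrative assumptions, not anything from the brainlit codebase.

```python
# Sketch: pull a small image chunk with CloudVolume (path and bounds are made up).
from cloudvolume import CloudVolume

vol = CloudVolume("s3://example-bucket/brain1/image", mip=0, progress=True)
chunk = vol[1000:1064, 1000:1064, 500:532]  # 4-D array indexed as (x, y, z, channel)
print(chunk.shape)
```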
````diff
@@ -113,22 +164,34 @@ Brainlit also has tools to visualize chunks of data as 2d slices or as a 3d mode
 screenshot
 
 ### Manually Segment
+
 Brainlit includes a lightweight manual segmentation pipeline. This allows collaborators of a projec to pull data from the cloud, create annotations, and push their annotations back up as a separate channel. [Interactive demo](https://github.com/neurodata/brainlit/blob/master/docs/notebooks/pipelines/manual_segementation.ipynb).
 
 ### Automatically and Semi-automatically Segment
-Similar to the above pipeline, segmentations can be automatically or semi-automatically generated and pushed to a separate channel for viewing. [Interactive demo](https://github.com/neurodata/brainlit/blob/master/docs/notebooks/pipelines/seg_pipeline_demo.ipynb)
+
+Similar to the above pipeline, segmentations can be automatically or semi-automatically generated and pushed to a separate channel for viewing. [Interactive demo](https://github.com/neurodata/brainlit/blob/master/docs/notebooks/pipelines/seg_pipeline_demo.ipynb).
 
 ## API Reference
+
 [![Documentation Status](https://readthedocs.org/projects/brainlight/badge/?version=latest)](https://brainlight.readthedocs.io/en/latest/?badge=latest)
 The documentation can be found at [https://brainlight.readthedocs.io/en/latest/](https://brainlight.readthedocs.io/en/latest/).
 
 ## Tests
-Running tests can easily be done by moving to the root directory of the brainlit package ant typing `pytest tests` or `python -m pytest tests`.
+
+Running tests can easily be done by moving to the root directory of the brainlit package and typing `pytest tests` or `python -m pytest tests`.
 Running a specific test, such as `test_upload.py` can be done simply by `ptest tests/test_upload.py`.
 
+## Common errors and troubleshooting
+
+- [macOS Install/Run Issues](https://github.com/NeuroDataDesign/brainlit/blob/develop/docs/macOS_Install_%26_Run_Issues.md)
+
+- [AWS Credentials Issues](https://github.com/NeuroDataDesign/brainlit/blob/develop/docs/AWS_Credentials_Issues.md)
+
 ## Contributing
+
 Contribution guidelines can be found via [CONTRIBUTING.md](https://github.com/neurodata/brainlit/blob/master/CONTRIBUTING.md)
 
 ## Credits
-Thanks to the neurodata team and the group in the neurodata class which started the project.
+
+Thanks to the Neurodata team and the group in the Neurodata class which started the project.
 This project is currently managed by Tommy Athey and Bijan Varjavand.
````

brainlit/__init__.py

Lines changed: 3 additions & 1 deletion
````diff
@@ -1,6 +1,8 @@
 import warnings
 
-# import brainlit.algorithms
+import brainlit.algorithms
+import brainlit.cloudreg
+import brainlit.feature_extraction
 import brainlit.preprocessing
 import brainlit.registration
 import brainlit.utils
````

brainlit/algorithms/__init__.py

Lines changed: 0 additions & 2 deletions
````diff
@@ -1,5 +1,3 @@
-import brainlit.algorithms.connect_fragments
 import brainlit.algorithms.generate_fragments
 
-from brainlit.algorithms.connect_fragments import *
 from brainlit.algorithms.generate_fragments import *
````

brainlit/algorithms/connect_fragments/__init__.py

Lines changed: 0 additions & 3 deletions
This file was deleted.
