Commit 4994531

Merge pull request #168 from neurodata/develop
Merge v0.2.1
2 parents 757f22a + 77d6e78 commit 4994531

File tree

15 files changed

+274
-109
lines changed


.gitattributes

Lines changed: 1 addition & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1 @@
1+
*.ipynb linguist-vendored=true

brainlit/__init__.py

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -13,4 +13,4 @@
1313
warnings.simplefilter("always", category=UserWarning)
1414

1515

16-
__version__ = "0.2.0"
16+
__version__ = "0.2.1"

brainlit/archive/upload_skeleton.py

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -3,7 +3,7 @@
33
import argparse
44
import numpy as np
55
from cloudvolume import CloudVolume, Skeleton, storage
6-
from .swc import swc2skeleton
6+
from brainlit.utils.swc import swc2skeleton
77
import pandas as pd
88
from pathlib import Path
99
import tifffile as tf

docs/README.md

Lines changed: 0 additions & 1 deletion
This file was deleted.

docs/README.rst

Lines changed: 209 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,209 @@
1+
.. role:: raw-html-m2r(raw)
2+
:format: html
3+
4+
5+
Brainlit
6+
========
7+
8+
9+
.. image:: https://travis-ci.com/neurodata/brainlit.svg?branch=master
10+
:target: https://travis-ci.com/neurodata/brainlit
11+
:alt: Build Status
12+
13+
14+
.. image:: https://badge.fury.io/py/brainlit.svg
15+
:target: https://badge.fury.io/py/brainlit
16+
:alt: PyPI version
17+
18+
19+
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
20+
:target: https://github.com/psf/black
21+
:alt: Code style: black
22+
23+
24+
.. image:: https://codecov.io/gh/neurodata/brainlit/branch/master/graph/badge.svg
25+
:target: https://codecov.io/gh/neurodata/brainlit
26+
:alt: codecov
27+
28+
29+
.. image:: https://img.shields.io/docker/cloud/build/bvarjavand/brainlit
30+
:target: https://img.shields.io/docker/cloud/build/bvarjavand/brainlit
31+
:alt: Docker Cloud Build Status
32+
33+
34+
.. image:: https://img.shields.io/docker/image-size/bvarjavand/brainlit
35+
:target: https://img.shields.io/docker/image-size/bvarjavand/brainlit
36+
:alt: Docker Image Size (latest by date)
37+
38+
39+
.. image:: https://img.shields.io/badge/License-Apache%202.0-blue.svg
40+
:target: https://opensource.org/licenses/Apache-2.0
41+
:alt: License
42+
43+
44+
This repository is a container of methods that Neurodata uses to expose their open-source code while it is in the process of being merged with larger scientific libraries such as scipy, scikit-image, or scikit-learn. Additionally, methods for computational neuroscience that are too brain-specific for a general scientific library can be found here, such as image registration software tuned specifically for large brain volumes.
45+
46+
47+
.. image:: https://i.postimg.cc/QtG9Xs68/Brainlit.png
48+
:target: https://i.postimg.cc/QtG9Xs68/Brainlit.png
49+
:alt: Brainlight Features
50+
51+
52+
.. toctree::
53+
:numbered:
54+
55+
56+
Motivation
57+
----------
58+
59+
The repository originated as the project of a team in Joshua Vogelstein's class **Neurodata** at Johns Hopkins University. This project focused on data science applied to the `mouselight data <https://www.hhmi.org/news/mouselight-project-maps-1000-neurons-and-counting-in-the-mouse-brain>`_. It became apparent that the tools developed for the class would be useful for other groups doing data science on large data volumes.
60+
The repository can now be considered a "holding bay" for code developed by Neurodata for collaborators and researchers to use.
61+
62+
Installation
63+
------------
64+
65+
Environment
66+
^^^^^^^^^^^
67+
68+
(optional, any python >= 3.8 environment will suffice)
69+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
70+
71+
72+
* `get conda <https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html>`_
73+
* create a virtual environment: ``conda create --name brainlit python=3.8``
74+
* activate the environment: ``conda activate brainlit``
75+
76+
Install from pypi
77+
^^^^^^^^^^^^^^^^^
78+
79+
80+
* install brainlit: ``pip install brainlit``
81+
82+
Install from source
83+
^^^^^^^^^^^^^^^^^^^
84+
85+
86+
* clone the repo: ``git clone https://github.com/neurodata/brainlit.git``
87+
* cd into the repo: ``cd brainlit``
88+
* install brainlit: ``pip install -e .``
89+
90+
How to use Brainlit
91+
-------------------
92+
93+
Data setup
94+
^^^^^^^^^^
95+
96+
The ``source`` data directory should have an octree data structure
97+
98+
.. code-block::
99+
100+
data/
101+
├── default.0.tif
102+
├── transform.txt
103+
├── 1/
104+
│ ├── 1/, ..., 8/
105+
│ └── default.0.tif
106+
├── 2/ ... 8/
107+
└── consensus-swcs (optional)
108+
├── G-001.swc
109+
├── G-002.swc
110+
└── default.0.tif
111+
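The subdirectories ``1/`` through ``8/`` in the tree above each hold one spatial octant of their parent volume, recursively. As a rough illustration of that layout (the child-numbering convention used here, ``1 + dx + 2*dy + 4*dz`` per level, is an assumption for the sketch and is not taken from the brainlit source), a small helper can map a voxel coordinate to the subdirectory that contains it:

```python
from pathlib import PurePosixPath

def octree_path(x, y, z, depth, size=(8, 8, 8)):
    """Return the octree subdirectory (e.g. '8/8') holding voxel (x, y, z).

    Hypothetical helper: assumes children are numbered
    1 + dx + 2*dy + 4*dz at each subdivision level, one common
    octree convention -- verify against your own data layout.
    """
    sx, sy, sz = size
    parts = []
    for _ in range(depth):
        # Halve the volume along each axis and pick the octant.
        sx, sy, sz = sx // 2, sy // 2, sz // 2
        dx, dy, dz = int(x >= sx), int(y >= sy), int(z >= sz)
        parts.append(str(1 + dx + 2 * dy + 4 * dz))
        # Re-express the coordinate relative to the chosen octant.
        x, y, z = x - dx * sx, y - dy * sy, z - dz * sz
    return str(PurePosixPath(*parts))
```

Here ``depth`` corresponds to how many resolution levels below the top-level ``default.0.tif`` you descend.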
112+
If your team wants to interact with cloud data, each member will need account credentials specified in ``~/.cloudvolume/secrets/x-secret.json``\ , where ``x`` is one of ``[aws, gc, azure]`` which contains your id and secret key for your cloud platform.
113+
We provide a template for ``aws`` in the repo for convenience.
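For ``aws``, the secret file generally looks like the following sketch with placeholder values (field names follow CloudVolume's usual conventions; check them against the template provided in the repo):

```json
{
  "AWS_ACCESS_KEY_ID": "YOUR_ACCESS_KEY_ID",
  "AWS_SECRET_ACCESS_KEY": "YOUR_SECRET_ACCESS_KEY"
}
```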
114+
115+
Create a session
116+
^^^^^^^^^^^^^^^^
117+
118+
Each user will start their scripts with approximately the same lines:
119+
120+
.. code-block::
121+
122+
from brainlit.utils.ngl import NeuroglancerSession
123+
124+
session = NeuroglancerSession(url='file:///abc123xyz')
125+
126+
From here, any number of tools can be run such as the visualization or annotation tools. `Viz demo <https://github.com/neurodata/brainlit/blob/master/docs/notebooks/visualization/visualization.ipynb>`_.
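The ``url`` argument follows the CloudVolume-style convention in which a protocol prefix selects the storage backend: ``file://`` for the local filesystem, ``gs://`` for Google Storage, ``s3://`` for AWS. A minimal sketch of that dispatch, for illustration only (the helper name is hypothetical and not part of brainlit):

```python
def parse_layer_url(url):
    """Split a CloudVolume-style layer URL into (protocol, path).

    Illustrative helper (not part of brainlit): the prefix selects
    the storage backend, e.g. 'file://' -> local filesystem.
    """
    for proto in ("file", "gs", "s3"):
        prefix = proto + "://"
        if url.startswith(prefix):
            return proto, url[len(prefix):]
    raise ValueError("unsupported layer URL: " + url)
```

So the session URL in the example above, ``file:///abc123xyz``, resolves to the local path ``/abc123xyz``.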
127+
128+
Features
129+
--------
130+
131+
Registration
132+
^^^^^^^^^^^^
133+
134+
The registration subpackage is a facsimile of ARDENT, a pip-installable (``pip install ardent``) package for nonlinear image registration, wrapped in an object-oriented framework for ease of use. It is an implementation of the LDDMM algorithm with modifications, written by Devin Crowley and based on "Diffeomorphic registration with intensity transformation and missing data: Application to 3D digital pathology of Alzheimer's disease." That paper extends an older LDDMM paper, "Computing large deformation metric mappings via geodesic flows of diffeomorphisms."
135+
136+
This is the more recent paper:
137+
138+
Tward, Daniel, et al. "Diffeomorphic registration with intensity transformation and missing data: Application to 3D digital pathology of Alzheimer's disease." Frontiers in neuroscience 14 (2020).
139+
140+
https://doi.org/10.3389/fnins.2020.00052
141+
142+
This is the original LDDMM paper:
143+
144+
Beg, M. Faisal, et al. "Computing large deformation metric mappings via geodesic flows of diffeomorphisms." International journal of computer vision 61.2 (2005): 139-157.
145+
146+
https://doi.org/10.1023/B:VISI.0000043755.93987.aa
147+
148+
A tutorial is available in ``docs/notebooks/registration_demo.ipynb``.
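For orientation, the LDDMM matching problem in the Beg et al. formulation minimizes an energy of roughly this form (notation as in the original paper; this is a sketch, not the exact objective used by ARDENT, which adds intensity-transformation and missing-data terms):

```latex
E(v) = \int_0^1 \lVert v_t \rVert_V^2 \, dt
     + \frac{1}{\sigma^2} \lVert I_0 \circ \varphi_1^{-1} - I_1 \rVert_{L^2}^2,
\qquad \dot{\varphi}_t = v_t(\varphi_t), \quad \varphi_0 = \mathrm{id}
```

The first term regularizes the time-dependent velocity field :math:`v_t`; the second penalizes mismatch between the deformed source image :math:`I_0 \circ \varphi_1^{-1}` and the target :math:`I_1`.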
149+
150+
Core
151+
----
152+
153+
The core brainlit package can be described by the diagram at the top of the README:
154+
155+
(Push and Pull Data)
156+
^^^^^^^^^^^^^^^^^^^^
157+
158+
Brainlit uses the Seung Lab's `Cloudvolume <https://github.com/seung-lab/cloud-volume>`_ package to push and pull data to and from the cloud or a local machine in an efficient and parallelized fashion. `Uploading demo <https://github.com/neurodata/brainlit/blob/master/docs/notebooks/utils/uploading_brains.ipynb>`_.\ :raw-html-m2r:`<br>`
159+
The only requirement is an account with a supported cloud service: AWS S3, Azure, or Google Cloud.
160+
161+
Loading data via a local filepath to an octree structure is also supported. `Octree demo <https://github.com/neurodata/brainlit/blob/master/docs/notebooks/utils/upload_brains.ipynb>`_.
162+
163+
Visualize
164+
^^^^^^^^^
165+
166+
Brainlit supports many methods to visualize large data. An entire dataset can be visualized via Google's `Neuroglancer <https://github.com/google/neuroglancer>`_\ , which provides a web link as shown below.
167+
168+
screenshot
169+
170+
Brainlit also has tools to visualize chunks of data as 2d slices or as a 3d model. `Visualization demo <https://github.com/neurodata/brainlit/blob/master/docs/notebooks/visualization/visualization.ipynb>`_.
171+
172+
screenshot
173+
174+
Manually Segment
175+
^^^^^^^^^^^^^^^^
176+
177+
Brainlit includes a lightweight manual segmentation pipeline. This allows collaborators of a project to pull data from the cloud, create annotations, and push their annotations back up as a separate channel. `Manual segmentation demo <https://github.com/neurodata/brainlit/blob/master/docs/notebooks/pipelines/manual_segementation.ipynb>`_.
178+
179+
Automatically and Semi-automatically Segment
180+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
181+
182+
Similar to the above pipeline, segmentations can be automatically or semi-automatically generated and pushed to a separate channel for viewing. `Semi-auto demo <https://github.com/neurodata/brainlit/blob/master/docs/notebooks/pipelines/seg_pipeline_demo.ipynb>`_.
183+
184+
API Reference
185+
-------------
186+
187+
188+
.. image:: https://readthedocs.org/projects/brainlight/badge/?version=latest
189+
:target: https://brainlight.readthedocs.io/en/latest/?badge=latest
190+
:alt: Documentation Status
191+
192+
The documentation can be found at `https://brainlight.readthedocs.io/en/latest/ <https://brainlight.readthedocs.io/en/latest/>`_.
193+
194+
Tests
195+
-----
196+
197+
Running tests can easily be done by moving to the root directory of the brainlit package and typing ``pytest tests`` or ``python -m pytest tests``.\ :raw-html-m2r:`<br>`
198+
Running a specific test, such as ``test_upload.py``, can be done simply with ``pytest tests/test_upload.py``.
199+
200+
Contributing
201+
------------
202+
203+
Contribution guidelines can be found in `CONTRIBUTING.md <https://github.com/neurodata/brainlit/blob/master/CONTRIBUTING.md>`_.
204+
205+
Credits
206+
-------
207+
208+
Thanks to the Neurodata team and the group in the Neurodata class that started the project.
209+
This project is currently managed by Tommy Athey and Bijan Varjavand.

docs/conf.py

Lines changed: 3 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -43,6 +43,9 @@
4343
"sphinx.ext.intersphinx",
4444
]
4545

46+
nbsphinx_kernel_name = "docs"
47+
nbsphinx_allow_errors = True
48+
4649
autoapi_dirs = ["../brainlit"]
4750
autoapi_add_toctree_entry = False
4851
autoapi_generate_api_docs = False

docs/index.rst

Lines changed: 3 additions & 8 deletions
Original file line numberDiff line numberDiff line change
@@ -5,15 +5,10 @@
55
Overview of Brainlit
66
====================
77

8-
Brainlit is a Python package for reading and analyzing brain data.
9-
The brain data is assumed to consist of image files in an octree structure to handle mutliple resolutions.
10-
Optionally, the package is able to handle skeletonized axon data stored in a `.swc` file format.
11-
12-
Brainlit is able to handle this data, visualizing and running analysis with morphological and statistical methods.
13-
A diagram demonstrating the capabilities of the package is shown.
8+
.. toctree::
9+
:maxdepth: 1
1410

15-
.. image:: images/figure.png
16-
:width: 600
11+
README
1712

1813
Documentation
1914
=============

docs/notebooks/utils/downloading_brains.ipynb

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -132,4 +132,4 @@
132132
"source": []
133133
}
134134
]
135-
}
135+
}

docs/notebooks/utils/utils.ipynb

Lines changed: 13 additions & 7 deletions
Original file line numberDiff line numberDiff line change
@@ -33,12 +33,18 @@
3333
"## Converting image data to neuroglancer precomputed format\n",
3434
"\n",
3535
"Image data will be assumed to be stored locally in octree format and at multiple resolutions, such as\n",
36-
"```default.0.tif 0/default.0.tif 1/default.0.tif 2/default.0.tif 3/default.0.tif 4/default.0.tif 5/default.0.tif 6/default.0.tif 7/default.0.tif 8/default.0.tif```.\n",
36+
"```default.0.tif 0/default.0.tif 1/default.0.tif ... 8/default.0.tif```.\n",
3737
"\n",
38-
"A user only needs to specity a path to the octree top level, and specify the number of resolutions to use.\n",
39-
"### The octree path can be modified to generated files from different data"
38+
"A user only needs to specify a path to the octree top level, and specify the number of resolutions to use."
4039
]
4140
},
41+
{
42+
"source": [
43+
"### The octree path can be modified to generate files from different data"
44+
],
45+
"cell_type": "markdown",
46+
"metadata": {}
47+
},
4248
{
4349
"cell_type": "code",
4450
"execution_count": 2,
@@ -82,7 +88,6 @@
8288
"cell_type": "markdown",
8389
"metadata": {},
8490
"source": [
85-
"The image layer is then defined\n",
8691
"### The URL-formated layer location parameter can be modified to send generated files to\n",
8792
" - any location on a local machine by using the \"file://\" prefix and then filepath\n",
8893
" - a google storage account by using the \"gs://\" prefix and then url\n",
@@ -132,7 +137,7 @@
132137
"SWC data will be assumed to be stored locally in `.swc` format, such as\n",
133138
"```default.0.swc```.\n",
134139
"\n",
135-
"As before, this tutorial simply shows how to generate the requisite files, and the user can provide any destination for said files. The dafault is a local folder, but this can be modified to any url."
140+
"As before, this tutorial simply shows how to generate the requisite files, and the user can provide any destination for said files. The default is a local folder, but this can be modified to any URL.\n"
136141
]
137142
},
138143
{
@@ -148,8 +153,9 @@
148153
"cell_type": "markdown",
149154
"metadata": {},
150155
"source": [
151-
"We use `get_volume_info` from the `upload_skeleton` module instead, defining a volume in the same way.\n",
152-
"### The octree path can be modified to generated files from different data"
156+
"### The octree path can be modified to generate files from different data\n",
157+
"\n",
158+
"We use `get_volume_info` from the `upload_skeleton` module instead, defining a volume in the same way."
153159
]
154160
},
155161
{
