# Case Study: DeepLabCut 3D Pose Estimation

![micehandanim](/images/content_images/cs/mice-hand.gif)
**Analyzing mice hand movement using DeepLabCut** ([Source: www.deeplabcut.org](http://www.mousemotorlab.org/deeplabcut))

> Open Source Software is accelerating Biomedicine. DeepLabCut enables automated video analysis of animal behavior using Deep Learning.
>
> —Alexander Mathis, *Assistant Professor, École polytechnique fédérale de Lausanne* ([EPFL](https://www.epfl.ch/en/))

## About DeepLabCut

[DeepLabCut](https://github.com/DeepLabCut/DeepLabCut) is an open source toolbox that empowers researchers at hundreds of institutions worldwide to track the behavior of laboratory animals at human-level accuracy, with very little training data. With DeepLabCut, scientists can delve deeper into the scientific understanding of motor control and behavior across animal species and timescales.

Several areas of research, including neuroscience, medicine, and biomechanics, use data from tracking animal movement. DeepLabCut helps in understanding what humans and other animals are doing by parsing actions that have been recorded on film. By automating the laborious tasks of tagging and monitoring, and combining them with deep-neural-network-based data analysis, DeepLabCut makes scientific studies that involve observing animals, such as primates, mice, fish, and flies, much faster and more accurate.

![horserideranim](/images/content_images/cs/race-horse.gif)
**Colored dots track the positions of a racehorse's body parts** (Source: Mackenzie Mathis)

DeepLabCut's non-invasive behavioral tracking of animals through pose extraction is crucial for scientific pursuits in domains such as biomechanics, genetics, ethology, and neuroscience. Measuring animal poses non-invasively from video, without markers, against dynamically changing backgrounds is computationally challenging, both technically and in terms of the resources and training data required.

DeepLabCut allows researchers to estimate the pose of the subject, efficiently enabling them to quantify behavior through a Python-based software toolkit. With DeepLabCut, researchers can identify distinct frames from videos, digitally label specific body parts in a few dozen frames with a tailored GUI, and then let the deep-learning-based pose estimation architectures in DeepLabCut learn to pick out those same features in the rest of the video and in other, similar videos of animals. It works across species, from common laboratory animals such as flies and mice to more unusual ones like [cheetahs](https://www.technologynetworks.com/neuroscience/articles/interview-a-deeper-cut-into-behavior-with-mackenzie-mathis-327618).
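This label-then-learn loop maps onto a handful of high-level calls in DeepLabCut's documented Python API. The sketch below is a minimal outline of that workflow; the project name, experimenter, and video path are hypothetical placeholders, and exact arguments vary between DeepLabCut versions.

```python
# Minimal sketch of the DeepLabCut workflow described above.
# Project name, experimenter, and video paths are made-up placeholders;
# see the DeepLabCut docs for version-specific arguments.
import deeplabcut

videos = ["/data/videos/mouse-reach-01.mp4"]  # hypothetical path

# 1. Create a project; returns the path to its config.yaml.
config = deeplabcut.create_new_project(
    "mouse-reaching", "experimenter", videos, copy_videos=True
)

# 2. Pick distinct frames (e.g., by clustering frame appearance) ...
deeplabcut.extract_frames(config)

# 3. ... and label body parts in those frames with the GUI.
deeplabcut.label_frames(config)

# 4. Train and evaluate a pose-estimation network on the labeled frames.
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.evaluate_network(config)

# 5. Apply the trained network to whole videos and visualize the result.
deeplabcut.analyze_videos(config, videos)
deeplabcut.create_labeled_video(config, videos)
```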
DeepLabCut uses a principle called [transfer learning](https://arxiv.org/pdf/1909.11229), which greatly reduces the amount of training data required and speeds up convergence during training. Depending on their needs, users can pick network architectures that provide faster inference (e.g., MobileNetV2), which can also be combined with real-time experimental feedback. DeepLabCut originally used the feature detectors from a top-performing human pose estimation architecture called [DeeperCut](https://arxiv.org/abs/1605.03170), which inspired the name. The package has since changed significantly, adding architectures, augmentation methods, and a full front-end user experience. Furthermore, to support large-scale biological experiments, DeepLabCut provides active-learning capabilities, so that users can grow the training set over time to cover edge cases and make their pose estimation robust within their specific context.

Recently, the [DeepLabCut model zoo](https://deeplabcut.github.io/DeepLabCut/docs/ModelZoo.html) was introduced, which provides models pre-trained for various species and experimental conditions, from facial analysis in primates to dog posture. These can be run, for instance, in the cloud, with no labeling of new data, no neural network training, and no programming experience necessary.
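To illustrate the transfer-learning idea (this is not DeepLabCut's own code), the sketch below reuses an ImageNet-pretrained backbone and trains only a small new head that emits one scoremap per body part. The backbone choice, head design, and number of body parts are assumptions for the example; the weights enum follows the torchvision ≥ 0.13 API.

```python
# Illustrative transfer-learning sketch: reuse pretrained features,
# learn only a small task-specific head (training loop omitted).
import torch
import torch.nn as nn
from torchvision import models

NUM_BODYPARTS = 4  # assumed number of tracked body parts

# ImageNet-pretrained backbone; drop the avgpool and fc layers
# so it outputs spatial feature maps instead of class logits.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone = nn.Sequential(*list(backbone.children())[:-2])
for p in backbone.parameters():
    p.requires_grad = False  # keep pretrained features frozen

# New head: predict one scoremap (heatmap) per body part.
head = nn.Sequential(
    nn.Conv2d(2048, 256, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.ConvTranspose2d(256, NUM_BODYPARTS, kernel_size=4, stride=2, padding=1),
)

model = nn.Sequential(backbone, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)

x = torch.randn(1, 3, 256, 256)  # a dummy video frame
scoremaps = model(x)             # shape: (1, NUM_BODYPARTS, 16, 16)
```

Because only the small head is optimized, a few dozen labeled frames can suffice, which is the practical payoff of transfer learning described above.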
### Key Goals and Results

- **Automation of animal pose analysis for scientific studies:**

  The primary objective of DeepLabCut technology is to measure and track the posture of animals in diverse settings. This data can be used, for example, in neuroscience studies to understand how the brain controls movement, or to elucidate how animals socially interact. Researchers have observed a […]
[…]

…the most likely predictions from target scoremaps need to be extracted, and one needs to efficiently "link predictions to assemble individual animals."
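To make the scoremap step concrete, here is a small NumPy sketch (an illustration, not DeepLabCut's internals) that extracts the most likely location, and its confidence, for each body part from a stack of scoremaps:

```python
# Illustrative only: pull the most likely (row, col) location and its
# confidence for each body part out of a stack of scoremaps.
import numpy as np

def scoremap_peaks(scoremaps: np.ndarray):
    """scoremaps: array of shape (num_bodyparts, H, W), values in [0, 1]."""
    n, h, w = scoremaps.shape
    flat = scoremaps.reshape(n, -1)
    idx = flat.argmax(axis=1)                   # best pixel per body part
    rows, cols = np.unravel_index(idx, (h, w))  # back to 2-D coordinates
    confidences = flat[np.arange(n), idx]
    return np.stack([rows, cols], axis=1), confidences

# Example: 3 body parts on a 64x64 grid of random "scores".
rng = np.random.default_rng(0)
maps = rng.random((3, 64, 64))
peaks, conf = scoremap_peaks(maps)
print(peaks.shape, conf.shape)  # (3, 2) (3,)
```

Assembling individual animals from these per-part peaks is then a matching problem across body parts, which is the "linking" step the text refers to.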
![workflow](/images/content_images/cs/deeplabcut-workflow.png)
**DeepLabCut Workflow** ([Source: Mackenzie Mathis](https://www.researchgate.net/figure/DeepLabCut-work-flow-The-diagram-delineates-the-work-flow-as-well-as-the-directory-and_fig1_329185962))

## Summary

Observing and efficiently describing behavior is a core tenet of modern
ethology, neuroscience, medicine, and technology.
[DeepLabCut](https://static1.squarespace.com/static/57f6d51c9f74566f55ecf271/t/5eab5ff7999bf94756b27481/1588289532243/NathMathis2019.pdf)
allows researchers to estimate the pose of the subject, efficiently enabling
them to quantify the behavior. With only a small set of training images,
the DeepLabCut Python toolbox allows training a neural network to within human-level accuracy. […]
