Read before creating a new issue
- Users who want to use SpikingJelly should first be familiar with PyTorch.
- If you are not yet familiar with PyTorch, we recommend working through the basic PyTorch tutorials first.
- Do not ask for help with basic PyTorch or machine learning concepts that are unrelated to SpikingJelly. For such questions, please refer to Google or the PyTorch Forums.
For faster response
You can @ the corresponding developers for your issue. Here is the division:
| Features | Developers |
| --- | --- |
| Neurons and Surrogate Functions | fangwei123456, Yanqi-Chen |
| CUDA Acceleration | fangwei123456, Yanqi-Chen |
| Reinforcement Learning | lucifer2859 |
| ANN to SNN Conversion | DingJianhao, Lyu6PosHao |
| Biological Learning (e.g., STDP) | AllenYolk |
| Others | Grasshlw, lucifer2859, AllenYolk, Lyu6PosHao, DingJianhao, Yanqi-Chen, fangwei123456 |
We are glad to add new developers who volunteer to help solve issues to the table above.
Issue type
- [ ] Bug Report
- [ ] Feature Request
- [x] Help wanted
- [ ] Other
SpikingJelly version
0.0.0.0.2
@Yanqi-Chen @fangwei123456 I plan to use 3 GPUs, and DataParallel is much easier to set up than DDP. Could you please show an example of how to do this?
I tried, but the loss computation fails because DataParallel seems to concatenate the outputs along the time dimension rather than the batch dimension.
Description
...
Minimal code to reproduce the error/bug
```python
import spikingjelly
# ...
```
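A minimal sketch of one possible workaround, assuming the multi-step input layout `[T, N, ...]` and the clock-driven single-step API (`neuron.LIFNode`, `functional.reset_net`), which may differ in other SpikingJelly versions: keep the time loop inside a wrapper module that accepts batch-first input `[N, T, ...]`, so that `nn.DataParallel`'s default scatter/gather along dim 0 splits the batch rather than the time steps. The `BatchFirstSNN` wrapper, the layer sizes, and the reset placement below are illustrative assumptions, not part of the SpikingJelly API.

```python
import torch
import torch.nn as nn
from spikingjelly.clock_driven import neuron, functional


class BatchFirstSNN(nn.Module):
    """Hypothetical wrapper: runs the time loop internally so that
    DataParallel only ever sees batch-first tensors.

    Input:  x with shape [N, T, C]  (dim 0 is the batch)
    Output: firing rate with shape [N, num_classes]
    """
    def __init__(self, in_features=784, num_classes=10):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)
        self.lif = neuron.LIFNode()

    def forward(self, x):
        # Move time to the front only inside the module: [N, T, C] -> [T, N, C]
        x = x.transpose(0, 1)
        T = x.shape[0]
        out_spikes = 0.
        for t in range(T):
            out_spikes = out_spikes + self.lif(self.fc(x[t]))
        # Clear membrane potentials before the next mini-batch (assumption:
        # resetting inside forward is acceptable for this training setup)
        functional.reset_net(self)
        return out_spikes / T  # [N, num_classes], gathered along dim 0


if __name__ == '__main__':
    device = 'cuda'
    net = nn.DataParallel(BatchFirstSNN().to(device), device_ids=[0, 1, 2])
    x = torch.rand(12, 8, 784, device=device)   # [N=12, T=8, C=784]
    y = net(x)                                  # [12, 10], batch split across 3 GPUs
    target = torch.randint(0, 10, (12,), device=device)
    loss = nn.functional.cross_entropy(y, target)
    loss.backward()
```

An alternative would be `nn.DataParallel(model, dim=1)` to scatter a `[T, N, ...]` input along its batch dimension, but the outputs are then also gathered along dim 1, so this only works if the model's output keeps the batch at dim 1 as well.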