@@ -86,12 +86,13 @@ paddle.distributed.fleet is the unified entry API for distributed training, used to configure distributed
 ":ref:`all_gather <cn_api_distributed_all_gather>`", "All-gather: gathers the tensors from every process in the group and broadcasts the result to each process"
 ":ref:`all_gather_object <cn_api_distributed_all_gather_object>`", "All-gather: gathers the objects from every process in the group and broadcasts the result to each process"
 ":ref:`alltoall <cn_api_distributed_alltoall>`", "Scatters a list of tensors to every process and gathers the received tensors"
+ ":ref:`alltoall_single <cn_api_distributed_alltoall_single>`", "Scatters a single tensor to every process and gathers the results into the output tensor"
 ":ref:`broadcast <cn_api_distributed_broadcast>`", "Broadcasts a tensor to every process"
 ":ref:`scatter <cn_api_distributed_scatter>`", "Scatters tensors to every process"
 ":ref:`split <cn_api_distributed_split>`", "Splits parameters across multiple devices"
 ":ref:`barrier <cn_api_distributed_barrier>`", "Synchronization barrier: blocks until every process in the group has reached it"
- ":ref:`send <cn_api_distributed_send>`", "Sends a tensor to the specified receiver"
- ":ref:`recv <cn_api_distributed_recv>`", "Receives a tensor from the specified sender"
- ":ref:`isend <cn_api_distributed_isend>`", "Asynchronously sends a tensor to the specified receiver"
- ":ref:`irecv <cn_api_distributed_irecv>`", "Asynchronously receives a tensor from the specified sender"
+ ":ref:`send <cn_api_distributed_send>`", "Sends a tensor to the specified process"
+ ":ref:`recv <cn_api_distributed_recv>`", "Receives a tensor from the specified process"
+ ":ref:`isend <cn_api_paddle_distributed_isend>`", "Asynchronously sends a tensor to the specified process"
+ ":ref:`irecv <cn_api_paddle_distributed_irecv>`", "Asynchronously receives a tensor from the specified process"
 ":ref:`reduce_scatter <cn_api_paddle_distributed_reduce_scatter>`", "Reduces, then scatters the list of tensors to all processes in the group"
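The `alltoall_single` entry added in this hunk can be illustrated with a plain-Python sketch of the collective's semantics. This is illustration only, not Paddle's implementation, and it models the process group as an ordinary list (`inputs[r]` standing in for rank r's local tensor): every rank splits its input into `world_size` equal chunks, and rank r's output is chunk r gathered from every rank, in rank order.

```python
# Plain-Python sketch of alltoall_single semantics (illustration only,
# not the Paddle implementation). inputs[r] stands in for rank r's
# local flat tensor; the return value holds every rank's output.

def alltoall_single_sim(inputs):
    """Each rank splits its input into world_size chunks; rank r's
    output concatenates chunk r taken from every rank in rank order."""
    world_size = len(inputs)
    outputs = []
    for r in range(world_size):
        out = []
        for src in range(world_size):
            chunk_len = len(inputs[src]) // world_size
            out.extend(inputs[src][r * chunk_len:(r + 1) * chunk_len])
        outputs.append(out)
    return outputs

# Two ranks, each holding four elements:
ins = [[0, 1, 2, 3], [4, 5, 6, 7]]
print(alltoall_single_sim(ins))  # [[0, 1, 4, 5], [2, 3, 6, 7]]
```

Note how this differs from the `alltoall` row above: `alltoall` exchanges a list of per-destination tensors, while `alltoall_single` performs the same exchange on chunks of one contiguous tensor.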