OFI MTL:
--------
The OFI MTL supports Libfabric (a.k.a. Open Fabrics Interfaces OFI,
https://ofiwg.github.io/libfabric/) tagged APIs (fi_tagged(3)). At
initialization time, the MTL queries libfabric for providers supporting tag matching
[...]
The user may modify the OFI provider selection with mca parameters
mtl_ofi_provider_include or mtl_ofi_provider_exclude.
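
For example, selection could be restricted to a single provider (psm2 here,
purely as an illustration; the application name is a placeholder) with:

    mpirun --mca pml cm --mca mtl ofi --mca mtl_ofi_provider_include psm2 ./app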

PROGRESS:
---------
The MTL registers a progress function with opal_progress. There is currently
no support for asynchronous progress. The progress function reads multiple events
from the OFI provider Completion Queue (CQ) per iteration (100 by default; this can
be modified with the MCA parameter mtl_ofi_progress_event_cnt) and iterates until
the completion queue is drained.
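
The rough shape of such a CQ-draining loop is sketched below. This is an
illustration only, not the MTL's actual code; the function name and the
hard-coded batch size of 100 (mirroring the mtl_ofi_progress_event_cnt default)
are assumptions.

    #include <sys/types.h>
    #include <rdma/fabric.h>
    #include <rdma/fi_eq.h>
    #include <rdma/fi_errno.h>

    /* Illustrative sketch: drain the CQ in batches of up to 100 completions,
     * stopping once fi_cq_read() reports that the queue is empty. */
    static int example_progress(struct fid_cq *cq)
    {
        struct fi_cq_tagged_entry events[100];
        ssize_t ret;
        int completed = 0;

        do {
            ret = fi_cq_read(cq, events, 100);
            if (ret > 0) {
                for (ssize_t i = 0; i < ret; i++) {
                    /* events[i].op_context maps the completion back to the
                     * posted request; its callback would be invoked here. */
                }
                completed += (int) ret;
            } else if (ret < 0 && ret != -FI_EAGAIN) {
                /* real code would inspect the failure via fi_cq_readerr() */
                break;
            }
        } while (ret > 0);

        return completed;
    }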

COMPLETIONS:
------------
Each operation uses a request type, ompi_mtl_ofi_request_t, which includes a reference
to an operation-specific completion callback, an MPI request, and a context. The
context (fi_context) is used to map completion events to MPI requests when reading
the CQ.
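
A rough illustration of that mapping is below; the field names are assumptions
and do not reproduce the real ompi_mtl_ofi_request_t layout. The fi_context
handed to libfabric when an operation is posted comes back as op_context in the
completion entry, from which the enclosing request is recovered.

    #include <stddef.h>
    #include <rdma/fabric.h>
    #include <rdma/fi_eq.h>

    /* Hypothetical request layout; only the idea matches the real struct. */
    struct example_request {
        struct fi_context ctx;      /* passed as the context argument to
                                       fi_tsend()/fi_trecv()              */
        int (*completion_cb)(struct example_request *req);  /* per-operation callback */
        void *mpi_request;          /* the MPI request to complete */
    };

    /* Recover the request from a CQ entry read by the progress loop. */
    static struct example_request *request_from_event(struct fi_cq_tagged_entry *ev)
    {
        return (struct example_request *)
            ((char *) ev->op_context - offsetof(struct example_request, ctx));
    }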

OFI TAG:
--------
MPI needs to send 96 bits of information per message (32 bits of communicator id,
32 bits of source rank, 32 bits of MPI tag) but OFI only offers 64-bit tags. In
addition, the OFI MTL uses 2 bits of the OFI tag for the synchronous send protocol.
[...]
This is signaled in mem_tag_format (see fi_endpoint(3)) by setting higher order bits
to zero. In such cases, the OFI MTL will reduce the number of communicator ids supported
by reducing the bits available for the communicator ID field in the OFI tag.
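
To make the trade-off concrete, the sketch below packs the three fields into a
64-bit tag while leaving the top 2 bits for the protocol. The bit widths are
purely illustrative assumptions; they are not the layout actually used by the
OFI MTL, which may shrink the communicator-id field further for providers that
do not expose all 64 tag bits, as described above.

    #include <stdint.h>

    /* Illustrative widths only: 2 + 10 + 20 + 32 = 64 bits. */
    #define EX_PROTO_BITS   2   /* synchronous-send protocol bits (top of the tag) */
    #define EX_CID_BITS    10   /* communicator id, reduced from 32 bits */
    #define EX_SRC_BITS    20   /* source rank, reduced from 32 bits */
    #define EX_TAG_BITS    32   /* MPI tag, kept at full width */

    static uint64_t example_pack_tag(uint32_t cid, uint32_t src, uint32_t tag)
    {
        return ((uint64_t) (cid & ((1u << EX_CID_BITS) - 1)) << (EX_SRC_BITS + EX_TAG_BITS))
             | ((uint64_t) (src & ((1u << EX_SRC_BITS) - 1)) << EX_TAG_BITS)
             |  (uint64_t) tag;
    }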

SCALABLE ENDPOINTS:
-------------------
The OFI MTL supports the OFI Scalable Endpoints feature as a means to improve
multi-threaded application throughput and message rate. Currently the feature
is designed to utilize multiple TX/RX contexts exposed by the OFI provider in
conjunction with a multi-communicator MPI application model. New OFI contexts
are therefore created lazily, as communicators are duplicated, rather than all
at once at init time; this also ensures that no more contexts are created than
are actually needed.

1. Multi-communicator model:
   With this approach, the application first duplicates the communicators it
   wants to use for MPI operations (ideally creating as many communicators as
   the number of threads it will use to call into MPI). The duplicated
   communicators are then used by the corresponding threads to perform MPI
   operations. A possible usage scenario is an MPI + OpenMP application such
   as the following (example limited to 2 ranks):

       MPI_Comm dup_comm[n];
       MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
       /* one duplicated communicator per thread */
       for (i = 0; i < n; i++) {
           MPI_Comm_dup(MPI_COMM_WORLD, &dup_comm[i]);
       }
       if (rank == 0) {
   #pragma omp parallel for private(status, host_sbuf, host_rbuf) num_threads(n)
           for (i = 0; i < n; i++) {
               MPI_Send(host_sbuf, MYBUFSIZE, MPI_CHAR,
                        1, MSG_TAG, dup_comm[i]);
               MPI_Recv(host_rbuf, MYBUFSIZE, MPI_CHAR,
                        1, MSG_TAG, dup_comm[i], &status);
           }
       } else if (rank == 1) {
   #pragma omp parallel for private(status, host_sbuf, host_rbuf) num_threads(n)
           for (i = 0; i < n; i++) {
               MPI_Recv(host_rbuf, MYBUFSIZE, MPI_CHAR,
                        0, MSG_TAG, dup_comm[i], &status);
               MPI_Send(host_sbuf, MYBUFSIZE, MPI_CHAR,
                        0, MSG_TAG, dup_comm[i]);
           }
       }

2. MCA variable:
   To utilize the feature, the following MCA variable needs to be set:

   mtl_ofi_thread_grouping:
       This MCA variable is at the OFI MTL level and needs to be set to switch
       the feature on.

       Default: 0

   It is not recommended to set this MCA variable for:
   - Multi-threaded MPI applications that do not follow the multi-communicator
     approach.
   - Applications that have multiple threads using a single communicator, as
     this may degrade performance.

   Command-line syntax to set the MCA variable:
       "-mca mtl_ofi_thread_grouping 1"
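
   For example, a full launch line for the 2-rank MPI + OpenMP sketch above,
   explicitly selecting the OFI MTL (application name is a placeholder), could
   be:

       mpirun -np 2 --mca pml cm --mca mtl ofi --mca mtl_ofi_thread_grouping 1 ./mpi_omp_app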

3. Notes on performance:
   - The OFI MTL will create as many TX/RX contexts as allowed by the underlying
     provider (each provider may have a different threshold). Once the threshold
     is exceeded, contexts are used in a round-robin fashion, which leads to
     resource sharing among threads; locks are therefore required to guard
     against race conditions. For performance, it is recommended to have

         Number of communicators = Number of contexts

     For example, when using the PSM2 provider, the number of contexts is
     dictated by the Intel Omni-Path HFI1 driver module. A sketch of how a
     provider's context limits can be queried is shown after these notes.

   - For applications using a single thread with multiple communicators and the
     MCA variable "mtl_ofi_thread_grouping" set to 1, the MTL will use multiple
     contexts, but the benefits may be negligible as only one thread is driving
     progress.
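
As an illustration only (not part of the MTL), the context limits advertised
by a provider can be inspected directly through libfabric, which can help in
choosing how many communicators to duplicate:

    #include <stdio.h>
    #include <rdma/fabric.h>

    /* Print each tagged-capable provider's TX/RX context limits. */
    int main(void)
    {
        struct fi_info *hints = fi_allocinfo(), *info, *cur;

        if (hints == NULL) {
            return 1;
        }
        hints->caps = FI_TAGGED;
        if (fi_getinfo(FI_VERSION(1, 5), NULL, NULL, 0, hints, &info) == 0) {
            for (cur = info; cur != NULL; cur = cur->next) {
                printf("%s: max_ep_tx_ctx=%zu max_ep_rx_ctx=%zu\n",
                       cur->fabric_attr->prov_name,
                       cur->domain_attr->max_ep_tx_ctx,
                       cur->domain_attr->max_ep_rx_ctx);
            }
            fi_freeinfo(info);
        }
        fi_freeinfo(hints);
        return 0;
    }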