This repository was archived by the owner on Apr 28, 2023. It is now read-only.

Commit 9b950da
Author: Sven Verdoolaege
memory_promotion_heuristic.cc: fix typos in comments
1 parent e34fce7 commit 9b950da

File tree: 1 file changed, +3 −3 lines changed


tc/core/polyhedral/cuda/memory_promotion_heuristic.cc

Lines changed: 3 additions & 3 deletions
@@ -318,7 +318,7 @@ bool isCoalesced(
  * are mapped to threads (the innermost of them being mapped to thread x) and
  * the depth of this mapping can be obtained from threadIdxXScheduleDepthState.
  *
- * In parciular, the group's footprint must contain only one element and the
+ * In particular, the group's footprint must contain only one element and the
  * same tensor element should never be accessed by two different threads.
  */
 bool isPromotableToRegisterBelowThreads(
@@ -350,15 +350,15 @@ bool isPromotableToRegisterBelowThreads(
   auto scheduledAccesses = originalAccesses.apply_domain(schedule);

   // Scheduled accesses contain maps from schedule dimensions to tensor
-  // subscripts. Compute the relation that between the schedule dimensions
+  // subscripts. Compute the relation between the schedule dimensions
   // mapped to threads and tensor subscripts by first removing dimensions
   // following the one mapped to thread x (last one assuming inverse mapping
   // order), then by equating all dimensions not mapped to threads to
   // parameters. Promotion to registers is only allowed if the resulting
   // relation is injective, i.e. the same tensor element is never accessed by
   // more than one thread. Note that our current check is overly conservative
   // because different values of schedule dimension may get mapped to the same
-  // thread, in which case the could access the same tensor element.
+  // thread, in which case they could access the same tensor element.
   for (auto sa : isl::UnionAsVector<isl::union_map>(scheduledAccesses)) {
     sa = sa.project_out(
         isl::dim_type::in, depth, sa.dim(isl::dim_type::in) - depth);
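The comments being fixed describe an injectivity condition: promotion to registers is allowed only if the relation from threads to tensor elements is injective, i.e. no element is touched by two different threads. A minimal Python sketch of that idea (a hypothetical helper for illustration only; the real check operates on isl relations, not explicit access lists):

```python
# Illustrative sketch, NOT the actual isl-based implementation: model
# accesses as explicit (thread, tensor element) pairs and check that
# no element is reached from two different threads.

def is_promotable(accesses):
    """accesses: iterable of (thread_id, tensor_element) pairs.

    Returns True iff every tensor element is accessed by at most one
    thread, i.e. the thread -> element relation is injective.
    """
    owner = {}  # tensor element -> the single thread allowed to touch it
    for thread, element in accesses:
        # setdefault records the first owner; a mismatch later means
        # the same element is reached from two different threads.
        if owner.setdefault(element, thread) != thread:
            return False
    return True

# Each thread reads its own element: promotable.
print(is_promotable([(0, "A[0]"), (1, "A[1]")]))  # True
# Threads 0 and 1 both touch A[0]: not promotable.
print(is_promotable([(0, "A[0]"), (1, "A[0]")]))  # False
```

As the fixed comment notes, the real check is more conservative still: different schedule dimension values may map to the same thread, so the isl-level test can reject cases this naive per-thread view would accept.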
