Commit 810ef41

partition: Adjust group scoring to normalize shared counts
Instead of applying a group size bias, which is somewhat arbitrary as it conflates two dimensions (cluster counts and vertex counts), we now normalize the shared count by the inverse square root of the group sizes. Since we ideally expect the boundary of a cluster group to have ~sqrt(V) vertices, this yields a normalized measure of the shared vertex count; while it is a little redundant to apply rsqrt to the group we're merging, doing so keeps the scoring mostly symmetric.

We could also use the product of the rsqrt's, which matches the metric used in KaPPa ["Engineering a Scalable High Quality Graph Partitioner" (2010)]. It doesn't seem to matter strongly one way or the other, and the weights are a little easier to debug and understand with +, so we'll use that for now.
1 parent 574b211 commit 810ef41

File tree

1 file changed: +11 −9 lines changed

src/partition.cpp

Lines changed: 11 additions & 9 deletions
@@ -2,13 +2,12 @@
 #include "meshoptimizer.h"
 
 #include <assert.h>
+#include <math.h>
 #include <string.h>
 
 namespace meshopt
 {
 
-static const unsigned int kGroupSizeBias = 3;
-
 struct ClusterAdjacency
 {
 	unsigned int* offsets;
@@ -264,12 +263,14 @@ static unsigned int countShared(const ClusterGroup* groups, int group1, int grou
 	return total;
 }
 
-static int pickGroupToMerge(const ClusterGroup* groups, int id, const ClusterAdjacency& adjacency, size_t max_group_size)
+static int pickGroupToMerge(const ClusterGroup* groups, int id, const ClusterAdjacency& adjacency, size_t max_partition_size)
 {
 	assert(groups[id].size > 0);
 
+	float group_rsqrt = 1.f / sqrtf(float(int(groups[id].vertices)));
+
 	int best_group = -1;
-	unsigned int best_score = 0;
+	float best_score = 0;
 
 	for (int ci = id; ci >= 0; ci = groups[ci].next)
 	{
@@ -280,13 +281,14 @@ static int pickGroupToMerge(const ClusterGroup* groups, int id, const ClusterAdj
 			continue;
 
 		assert(groups[other].size > 0);
-		if (groups[id].size + groups[other].size > max_group_size)
+		if (groups[id].size + groups[other].size > max_partition_size)
 			continue;
 
-		unsigned int score = countShared(groups, id, other, adjacency);
+		unsigned int shared = countShared(groups, id, other, adjacency);
+		float other_rsqrt = 1.f / sqrtf(float(int(groups[other].vertices)));
 
-		// favor smaller target groups
-		score += (unsigned(max_group_size) - groups[other].size) * kGroupSizeBias;
+		// normalize shared count by the expected boundary of each group (+ keeps scoring symmetric)
+		float score = float(int(shared)) * (group_rsqrt + other_rsqrt);
 
 		if (score > best_score)
 		{
@@ -395,7 +397,7 @@ size_t meshopt_partitionClusters(unsigned int* destination, const unsigned int*
 		// update group sizes; note, the vertex update is an approximation which avoids recomputing the true size via countTotal
 		groups[top.id].size += groups[best_group].size;
 		groups[top.id].vertices += groups[best_group].vertices;
-		groups[top.id].vertices -= shared;
+		groups[top.id].vertices = (groups[top.id].vertices > shared) ? groups[top.id].vertices - shared : 1;
 
 		groups[best_group].size = 0;
 		groups[best_group].vertices = 0;
