Tracking Issue: Benchmark JSON Input #1912

@MarcelKoch

Description

This issue tracks the PRs that change how the input for the benchmarks is handled.
The goal is to remove many CLI flags and replace them with the existing file config in Ginkgo. Consider, for example, #1845, which adds multigrid as a preconditioner option; however, only very few CLI options are available to configure the multigrid. The proposed alternative is to allow users to define the preconditioner/solvers using the same syntax as the file config.

This will also allow currently unavailable features, such as arbitrary nesting of solvers/preconditioners.

One main point is to define each benchmark case as a single item in the JSON input. Currently, the cartesian product of each JSON input item and some CLI flags is taken to create all benchmarks. This cartesian-product behavior is replicated in the JSON format.

Example JSON configurations of a single benchmark item:
SpMV

{
  "operator": {
    "stencil": {
      "name": "5pt",
      "size": 100
    }
  },
  "format": "coo",
  "reorder": "natural"
}

Solver

{
  "operator": {
    "stencil": {
      "size": 100,
      "name": "5pt"
    }
  },
  "solver": {
    "type": "solver::Cg",
    "preconditioner": {
      "type": "preconditioner::Jacobi",
      "max_block_size": 4
    }
  },
  "optimal": {
    "spmv": {
      "format": "csr",
      "reorder": "natural"
    }
  }
}

The list of benchmark items can also be defined via a cartesian product. This JSON input

{
  "operator": {
    "stencil": {
      "name": "5pt",
      "size": 100
    }
  },
  "format": ["coo", "csr"],
  "reorder": "natural"
}

is equivalent to

[
  {
    "operator": {
      "stencil": {
        "name": "5pt",
        "size": 100
      }
    },
    "format": "coo",
    "reorder": "natural"
  },
  {
    "operator": {
      "stencil": {
        "name": "5pt",
        "size": 100
      }
    },
    "format": "csr",
    "reorder": "natural"
  }
]
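The expansion above can be sketched in a few lines of Python. This is a hypothetical illustration of the proposed semantics, not the actual Ginkgo benchmark implementation: every top-level field whose value is a list contributes one axis to the cartesian product, and scalar fields are carried through unchanged.

```python
from itertools import product

def expand(item):
    """Expand list-valued top-level fields of a benchmark item into the
    cartesian product of single-valued items (illustrative sketch only)."""
    keys = list(item.keys())
    # Wrap scalar values (including nested objects) into one-element lists,
    # so every field contributes exactly one axis to the product.
    choices = [v if isinstance(v, list) else [v] for v in (item[k] for k in keys)]
    return [dict(zip(keys, combo)) for combo in product(*choices)]

cfg = {
    "operator": {"stencil": {"name": "5pt", "size": 100}},
    "format": ["coo", "csr"],
    "reorder": "natural",
}
cases = expand(cfg)
# Yields two benchmark items: one with "format": "coo", one with "format": "csr",
# both sharing the same "operator" and "reorder" fields.
```

Note that this sketch only expands top-level fields; whether nested fields (e.g. inside "solver") should also participate in the product is a design decision left open here.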

Question: Should the benchmarks be more unified? The spmv/preconditioner/solver benchmarks do essentially the same thing, in the sense that an operator is defined and then applied to vectors. Maybe we could get away with combining these benchmarks. This would remove some specificity, so I'm not sure how worthwhile it is.

PRs:
