Merge develop #63

Open · wants to merge 45 commits into `main`
Commits (45)
98982fd
add external API for low rank generator + change API to hmatrix build…
PierreMarchand20 Oct 15, 2024
aad8d55
change LowRankMatrix API and add recompression
PierreMarchand20 Oct 28, 2024
d60a219
fix resize in matrix.hpp and remove is_htool_owning_data from lrmat g…
PierreMarchand20 Nov 9, 2024
8ffbc9a
add safe gard for LU facto
PierreMarchand20 Dec 19, 2024
e64a3e1
add recompressed compressor
PierreMarchand20 Dec 19, 2024
8eeb521
unsigned int replaced by std::size_t
vdubos Jul 26, 2024
ef58be5
size_t(dimension) for matrix
vdubos Jul 25, 2024
a1e2388
improve DDMSolverBuilder API
PierreMarchand20 Feb 12, 2025
1f4e75a
clean cmake
PierreMarchand20 Feb 21, 2025
f3c4823
add hmatrix builder
PierreMarchand20 Feb 21, 2025
4f2ddc0
add hmatrix product public interface
PierreMarchand20 Feb 24, 2025
36c66dc
update readme
PierreMarchand20 Feb 24, 2025
78122f9
fix old style cast
PierreMarchand20 Feb 24, 2025
1d0b7d7
fix wrong default template parameter
PierreMarchand20 Feb 25, 2025
91eb12a
add VirtualGeneoCoarseSpaceBuilder
PierreMarchand20 Mar 17, 2025
d09a4a8
update clustering
PierreMarchand20 Mar 25, 2025
77935d8
fix compression example and add recompression
PierreMarchand20 Mar 25, 2025
9b7deef
cleaning
PierreMarchand20 Mar 25, 2025
3e311d7
remove valgrind false positive
PierreMarchand20 May 14, 2025
7d7c547
fix bug with solver using local hmat adding overlap
PierreMarchand20 May 20, 2025
7e7e7b4
add non uniforme coarse space test in Test_solver
PierreMarchand20 May 21, 2025
dc5e07d
fix format
PierreMarchand20 Jun 22, 2025
bd385b6
add check for geneo coarse space builder
PierreMarchand20 Jun 22, 2025
5f9d5ab
update blas/lapack wrappers
PierreMarchand20 Jul 4, 2025
3d4666b
use symmetric coarse space solver
PierreMarchand20 Jul 4, 2025
9391485
updadte changelog
PierreMarchand20 Jul 4, 2025
f4ba380
update hpddm version in CI
PierreMarchand20 Jul 4, 2025
ef6dfb3
fix solver tests with symmetric coarse space solve
PierreMarchand20 Jul 9, 2025
485f665
fix examples
PierreMarchand20 Jul 9, 2025
8a25adf
fix missing include
PierreMarchand20 Jul 10, 2025
a6227ac
improve code coverage setup + fix solver tests
PierreMarchand20 Jul 15, 2025
1c6e3e5
trying to fix a (false positive?) leak
PierreMarchand20 Jul 15, 2025
549fe13
better testing for triangular solve thx to @vdubos
PierreMarchand20 Jul 16, 2025
17938b6
improve coverage ?
PierreMarchand20 Jul 17, 2025
b6d96f3
improve testing and coverage
PierreMarchand20 Jul 18, 2025
1606881
update codecov
PierreMarchand20 Jul 18, 2025
2ddacc1
improve coverage
PierreMarchand20 Jul 18, 2025
e3ed985
Fixup
PierreMarchand20 Jul 18, 2025
038152c
fix lrmat compression with virtualgenerator
PierreMarchand20 Jul 21, 2025
3a99253
remove unused interface
PierreMarchand20 Jul 22, 2025
43b7d10
add task based algebra and tests
vdubos Feb 19, 2025
63ac72a
fix warning from gcc when using omp iterator
PierreMarchand20 Jul 25, 2025
c4c3cbc
remove unused variables
PierreMarchand20 Jul 25, 2025
7e2ed7e
try to use own clang-format instead of action
PierreMarchand20 Jul 25, 2025
be4c2e0
update clang format version in CI
PierreMarchand20 Jul 25, 2025
10 changes: 5 additions & 5 deletions .github/workflows/CI.yml
@@ -134,7 +134,7 @@ jobs:
- name: Checkout hpddm
run: |
git clone https://github.com/hpddm/hpddm.git hpddm
cd hpddm && git checkout 5890d5addf3962d539dc25c441ec3ff4af93b3ab
cd hpddm && git checkout 24aed69dbde7ef1526ae87ccf4f39ceb840bccea
# uses: actions/checkout@v3
# with:
# path: "hpddm"
@@ -169,12 +169,12 @@ jobs:
make doc

- name: Check c++ format
uses: DoozyX/clang-format-lint-action@v0.16.2
uses: DoozyX/clang-format-lint-action@v0.20
with:
source: 'htool/include htool/tests'
# exclude: './third_party ./external'
extensions: 'hpp,cpp'
clangFormatVersion: 16
clangFormatVersion: 20
style: file

- name: Check cmake format
@@ -216,9 +216,9 @@ jobs:
fetch-depth: 0
- uses: actions/download-artifact@v4
- name: Upload coverage report
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@v5
with:
fail_ci_if_error: true
file: ./coverage.info
files: ./coverage.info
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
34 changes: 33 additions & 1 deletion CHANGELOG.md
@@ -26,9 +26,41 @@ All notable changes to this project will be documented in this file.

## Unreleased

### Added

- `HMatrix` recompression with SVD.
- Generic recompressed low-rank compression with `RecompressedLowRankGenerator`.
- Checks about `UPLO` for hmatrix factorization.
- `HMatrixBuilder` for easier `HMatrix` creation (especially when using only the hmatrix part of Htool-DDM).
- `add_hmatrix_vector_product` and `add_hmatrix_matrix_product` for working in user numbering. From C++17 onward, these functions have preliminary support for execution policies, defaulting to sequential execution; to some extent, this follows the `<linalg>` API.
- `task_based_tree_builder.hpp` for miscellaneous functions used in the task-based approach.
- `hmatrix_output_dot.hpp` for L0 and block tree visualization.
- `task_based_compute_blocks`, a task-based alternative to `compute_blocks`.
- `task_based_internal_add_hmatrix_vector_product`, a task-based alternative to `internal_add_hmatrix_vector_product`.
- `task_based_internal_add_hmatrix_hmatrix_product`, a task-based alternative to `internal_add_hmatrix_hmatrix_product`.
- `task_based_internal_triangular_hmatrix_hmatrix_solve`, a task-based alternative to `internal_triangular_hmatrix_hmatrix_solve`.
- `task_based_lu_factorization` and `task_based_cholesky_factorization`, task-based alternatives to `lu_factorization` and `cholesky_factorization`.
- `test_task_based_hmatrix_***.hpp` for testing various task-based features.
- `internal_add_lrmat_hmatrix` is now overloaded to handle the case where the HMatrix is larger than the LowRankMatrix.
- `get_leaves_from` is overloaded to return non-const arguments.
- `get_false_positive` and `get_L0` in a tree builder.
- `left_hmatrix_ancestor_of_right_hmatrix` and `left_hmatrix_descendant_of_right_hmatrix` for finding ancestors and descendants of an hmatrix.
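The user-numbering product functions listed above can be illustrated with a dense stand-in: a sketch of the `y = beta*y + alpha*A*x` semantics where inputs and outputs live in user numbering while the operator works in a permuted internal (cluster) numbering. The struct, the permutation layout, and the signature here are illustrative assumptions, not Htool's actual API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in: the operator works on permuted (cluster-ordered)
// vectors, while users pass vectors in their own numbering. A dense matrix
// replaces the HMatrix here.
struct DensePermutedOperator {
    std::size_t n;
    std::vector<double> data;      // n x n, row-major, internal numbering
    std::vector<std::size_t> perm; // user index -> internal index
};

// Sketch of the semantics of add_hmatrix_vector_product: y = beta*y + alpha*A*x,
// with x and y in user numbering (signature is an assumption).
void add_vector_product(double alpha, const DensePermutedOperator &A,
                        const double *x, double beta, double *y) {
    std::vector<double> x_perm(A.n), y_perm(A.n, 0.);
    for (std::size_t i = 0; i < A.n; ++i) x_perm[A.perm[i]] = x[i]; // to internal
    for (std::size_t i = 0; i < A.n; ++i)
        for (std::size_t j = 0; j < A.n; ++j)
            y_perm[i] += A.data[i * A.n + j] * x_perm[j];
    for (std::size_t i = 0; i < A.n; ++i)                           // back to user
        y[i] = beta * y[i] + alpha * y_perm[A.perm[i]];
}
```

The two permutation loops are the whole point of the "user numbering" entry: callers never see the internal cluster ordering.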

### Changed

- `VirtualInternalLowRankGenerator` and `VirtualLowRankGenerator`'s `copy_low_rank_approximation` function now takes a `LowRankMatrix` to populate as input and returns a boolean: true if the compression succeeded, false otherwise.
- `LowRankMatrix` constructors changed: they now only take sizes and an epsilon or a required rank; a `VirtualInternalLowRankGenerator` is then expected to populate the matrix.
- `ClusterTreeBuilder` now takes a single strategy, a `VirtualPartitioning`. The usual implementations are still available, for example `Partitioning<double,ComputeLargestExtent,RegularSplitting>`.
- When `ClusterTreeBuilder` is used with `number_of_children=2^spatial_dimension`, it builds a binary/quad/octree instead of cutting `number_of_children` times along the main direction.
- The `ClusterTreeBuilder` parameter `minclustersize` was removed, and a parameter `maximal_leaf_size` has been added.
- `build` can now use `task_based_compute_blocks`.
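The new `LowRankMatrix` workflow described above — construct an empty matrix from sizes and a tolerance, then have a generator populate it and report success — can be sketched with stand-in types. Names, members, and the signature are assumptions for illustration, not Htool's actual classes.

```cpp
#include <cassert>
#include <vector>

// Stand-in for LowRankMatrix: constructed from sizes and a tolerance only,
// with factors filled in later by a generator.
struct LowRankMatrixSketch {
    int nr, nc, rank = -1;
    double epsilon;
    std::vector<double> U, V; // nr x rank and rank x nc factors
    LowRankMatrixSketch(int nr_, int nc_, double eps) : nr(nr_), nc(nc_), epsilon(eps) {}
};

// Stand-in generator mirroring the changelog contract: populate the output
// matrix and return true on success, false if compression failed.
struct RankOneGenerator {
    bool copy_low_rank_approximation(int M, int N, int /*row_offset*/, int /*col_offset*/,
                                     LowRankMatrixSketch &out) const {
        if (M <= 0 || N <= 0) return false; // compression failed
        out.rank = 1;
        out.U.assign(M, 1.0);
        out.V.assign(N, 2.0);
        return true;
    }
};
```

The boolean return is what the "Changed" entry adds: callers can now detect a failed compression instead of receiving a silently invalid low-rank block.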

### Fixed

- fix inline definition of `logging_level_to_string`
- Fix inline definition of `logging_level_to_string`.
- Fix error when resizing `Matrix`.
- Fix error due to using `int` instead of `size_t`, thanks to @vdubos.
- Fix warnings with `-Wold-style-cast`.

## [0.9.0] - 2024-09-19

45 changes: 21 additions & 24 deletions CMakeLists.txt
@@ -16,16 +16,6 @@ else()
LANGUAGES CXX)
endif()

# To force c++14
if(${CMAKE_VERSION} VERSION_LESS 3.1)
add_compile_options(-std=c++14)
elseif(${CMAKE_VERSION} VERSION_LESS 3.6.3 AND ${CMAKE_CXX_COMPILER_ID} STREQUAL "Intel")
add_compile_options(-std=c++14)
else()
set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
endif()

if(CMAKE_PROJECT_NAME STREQUAL PROJECT_NAME)
# To set default CMAKE_BUILD_TYPE
set(default_build_type "Release")
@@ -94,23 +84,21 @@ option(HTOOL_WITH_STRICT_TESTS "Add -Werror to the tests" OFF)
#=============================================================================#
# MPI
find_package(MPI REQUIRED)
message(STATUS "MPI libraries:" "${MPI_LIBRARIES}")
message(STATUS "MPI include files:" "${MPI_INCLUDE_PATH}")
message(STATUS "Tests run: ${MPIEXEC} ${MPIEXEC_NUMPROC_FLAG} ${MPIEXEC_MAX_NUMPROCS} ${MPIEXEC_PREFLAGS} EXECUTABLE ${MPIEXEC_POSTFLAGS} ARGS")
separate_arguments(MPIEXEC_PREFLAGS) # to support multi flags
message(STATUS "Run: ${MPIEXEC} ${MPIEXEC_NUMPROC_FLAG} ${MPIEXEC_MAX_NUMPROCS} ${MPIEXEC_PREFLAGS} EXECUTABLE ${MPIEXEC_POSTFLAGS} ARGS")

# OPENMP
find_package(OpenMP)

# BLAS
find_package(BLAS REQUIRED)
message("-- Found Blas implementation:" "${BLAS_LIBRARIES}")
message(STATUS "Blas implementation:" "${BLAS_LIBRARIES}")

# LAPACK
find_package(LAPACK)
message("-- Found Lapack:" "${LAPACK_LIBRARIES}")

# # ARPACK
# find_package(ARPACK)
# message("-- Found Arpack:" "${ARPACK_LIBRARIES}")
message(STATUS "Lapack implementation:" "${LAPACK_LIBRARIES}")

# HPDDM
find_package(HPDDM)
@@ -120,12 +108,12 @@ find_package(HPDDM)
#=============================================================================#
#=== HTOOL as header only library
add_library(htool INTERFACE)
target_include_directories(htool INTERFACE $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include> $<INSTALL_INTERFACE:include> ${MPI_INCLUDE_PATH} ${HPDDM_INCLUDE_DIRS} ${MKL_INC_DIR})
target_link_libraries(htool INTERFACE MPI::MPI_CXX ${BLAS_LIBRARIES} ${LAPACK_LIBRARIES} ${ARPACK_LIBRARIES})
target_include_directories(htool INTERFACE $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include> $<INSTALL_INTERFACE:include> ${HPDDM_INCLUDE_DIRS} ${MKL_INC_DIR})
target_link_libraries(htool INTERFACE MPI::MPI_CXX BLAS::BLAS LAPACK::LAPACK)
if(OpenMP_CXX_FOUND)
target_link_libraries(htool INTERFACE OpenMP::OpenMP_CXX)
endif()
target_compile_features(htool INTERFACE cxx_std_11)
target_compile_features(htool INTERFACE cxx_std_17)

if("${BLA_VENDOR}" STREQUAL "Intel10_32"
OR "${BLA_VENDOR}" STREQUAL "Intel10_64lp"
@@ -184,9 +172,18 @@ endif()
# Add tests

if((CMAKE_PROJECT_NAME STREQUAL PROJECT_NAME) AND BUILD_TESTING)
if(CODE_COVERAGE AND (CMAKE_C_COMPILER_ID MATCHES "GNU" OR CMAKE_CXX_COMPILER_ID MATCHES "GNU"))
target_compile_options(htool INTERFACE -fprofile-arcs -ftest-coverage)
target_link_libraries(htool INTERFACE gcov)
if(CODE_COVERAGE)
if(CMAKE_C_COMPILER_ID MATCHES "GNU" OR CMAKE_CXX_COMPILER_ID MATCHES "GNU")
message(STATUS "Code coverage enabled with GNU.")
target_compile_options(htool INTERFACE -fprofile-arcs -ftest-coverage -fno-elide-constructors -fno-default-inline)
target_link_libraries(htool INTERFACE gcov)
elseif(CMAKE_C_COMPILER_ID MATCHES "Clang" OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
message(STATUS "Code coverage enabled with LLVM.")
target_compile_options(htool INTERFACE -fprofile-instr-generate -fcoverage-mapping)
target_link_options(htool INTERFACE -fprofile-instr-generate -fcoverage-mapping)
else()
message(STATUS "Code coverage not available.")
endif()
endif()
if(HTOOL_WITH_STRICT_TESTS)
target_compile_options(htool INTERFACE -Werror)
@@ -198,7 +195,7 @@ if((CMAKE_PROJECT_NAME STREQUAL PROJECT_NAME) AND BUILD_TESTING)
-Wshadow
-Wnon-virtual-dtor
-pedantic
# -Wold-style-cast
-Wold-style-cast
-Wcast-align
-Wunused
-Woverloaded-virtual
2 changes: 1 addition & 1 deletion cmake/version.cmake
@@ -36,7 +36,7 @@ function(check_version_number CODE_VERSION_FILE CODE_VARIABLE_MAJOR_VERSION CODE
set(code_subminor_version_number ${CMAKE_MATCH_1})

set(code_version_number "${code_major_version_number}.${code_minor_version_number}.${code_subminor_version_number}")
message("Htool version: " "${code_version_number}")
message(STATUS "Htool version: " "${code_version_number}")
# Check version number: error if code inconsistent
if(NOT "${code_version_number}" STREQUAL "${CMAKE_PROJECT_VERSION}")
message(FATAL_ERROR "Inconsistent version number:\n* Source code version number: ${code_version_number}\n* CMake version number: ${CMAKE_PROJECT_VERSION}\n")
42 changes: 28 additions & 14 deletions examples/compression_comparison.cpp
@@ -2,6 +2,7 @@
#include <iostream>
#include <vector>

#include <htool/hmatrix/lrmat/recompressed_low_rank_generator.hpp>
#include <htool/htool.hpp>
#include <htool/testing/geometry.hpp>
using namespace std;
@@ -96,43 +97,56 @@ int main(int argc, char *argv[]) {
MyMatrix A(spatial_dimension, target_cluster.get_permutation(), source_cluster.get_permutation(), target_coordinates, source_coordinates);
double norm_A = A.normFrob();

// SVD with fixed rank
SVD<double> compressor_SVD;
LowRankMatrix<double> A_SVD(A, compressor_SVD, target_cluster, source_cluster, reqrank_max, epsilon);
// SVD
SVD<double> compressor_SVD(A);
LowRankMatrix<double> A_SVD(target_cluster.get_size(), source_cluster.get_size(), epsilon);
compressor_SVD.copy_low_rank_approximation(target_cluster.get_size(), source_cluster.get_size(), target_cluster.get_offset(), source_cluster.get_offset(), A_SVD);
std::vector<double> SVD_fixed_errors;
for (int k = 0; k < A_SVD.rank_of() + 1; k++) {
SVD_fixed_errors.push_back(Frobenius_absolute_error(target_cluster, source_cluster, A_SVD, A, k) / norm_A);
}

// fullACA with fixed rank
fullACA<double> compressor_fullACA;
LowRankMatrix<double> A_fullACA_fixed(A, compressor_fullACA, target_cluster, source_cluster, reqrank_max, epsilon);
// fullACA
fullACA<double> compressor_fullACA(A);
LowRankMatrix<double> A_fullACA_fixed(target_cluster.get_size(), source_cluster.get_size(), epsilon);
compressor_fullACA.copy_low_rank_approximation(target_cluster.get_size(), source_cluster.get_size(), target_cluster.get_offset(), source_cluster.get_offset(), A_fullACA_fixed);
std::vector<double> fullACA_fixed_errors;
for (int k = 0; k < A_fullACA_fixed.rank_of() + 1; k++) {
fullACA_fixed_errors.push_back(Frobenius_absolute_error(target_cluster, source_cluster, A_fullACA_fixed, A, k) / norm_A);
}

// partialACA with fixed rank
partialACA<double> compressor_partialACA;
LowRankMatrix<double> A_partialACA_fixed(A, compressor_partialACA, target_cluster, source_cluster, reqrank_max, epsilon);
// partialACA
partialACA<double> compressor_partialACA(A);
LowRankMatrix<double> A_partialACA_fixed(target_cluster.get_size(), source_cluster.get_size(), epsilon);
compressor_partialACA.copy_low_rank_approximation(target_cluster.get_size(), source_cluster.get_size(), target_cluster.get_offset(), source_cluster.get_offset(), A_partialACA_fixed);
std::vector<double> partialACA_fixed_errors;
for (int k = 0; k < A_partialACA_fixed.rank_of() + 1; k++) {
partialACA_fixed_errors.push_back(Frobenius_absolute_error(target_cluster, source_cluster, A_partialACA_fixed, A, k) / norm_A);
}

// sympartialACA with fixed rank
sympartialACA<double> compressor_sympartialACA;
LowRankMatrix<double> A_sympartialACA_fixed(A, compressor_sympartialACA, target_cluster, source_cluster, reqrank_max, epsilon);
// sympartialACA
sympartialACA<double> compressor_sympartialACA(A);
LowRankMatrix<double> A_sympartialACA_fixed(target_cluster.get_size(), source_cluster.get_size(), epsilon);
compressor_sympartialACA.copy_low_rank_approximation(target_cluster.get_size(), source_cluster.get_size(), target_cluster.get_offset(), source_cluster.get_offset(), A_sympartialACA_fixed);
std::vector<double> sympartialACA_fixed_errors;
for (int k = 0; k < A_sympartialACA_fixed.rank_of() + 1; k++) {
sympartialACA_fixed_errors.push_back(Frobenius_absolute_error(target_cluster, source_cluster, A_sympartialACA_fixed, A, k) / norm_A);
}

// sympartialACA with recompression
RecompressedLowRankGenerator<double> compressor_recompressed_sympartialACA(compressor_sympartialACA, std::function<void(LowRankMatrix<double> &)>(SVD_recompression<double>));
LowRankMatrix<double> A_recompressed_sympartialACA_fixed(target_cluster.get_size(), source_cluster.get_size(), epsilon);
compressor_recompressed_sympartialACA.copy_low_rank_approximation(target_cluster.get_size(), source_cluster.get_size(), target_cluster.get_offset(), source_cluster.get_offset(), A_recompressed_sympartialACA_fixed);
std::vector<double> recompressed_sympartialACA_fixed_errors;
for (int k = 0; k < A_recompressed_sympartialACA_fixed.rank_of() + 1; k++) {
recompressed_sympartialACA_fixed_errors.push_back(Frobenius_absolute_error(target_cluster, source_cluster, A_recompressed_sympartialACA_fixed, A, k) / norm_A);
}

// Output
ofstream file_fixed((outputpath + "/" + outputfile).c_str());
file_fixed << "Rank,SVD,Full ACA,partial ACA,sym partial ACA" << endl;
file_fixed << "Rank,SVD,Full ACA,partial ACA,sym partial ACA,recompressed sym partial ACA" << endl;
for (int i = 0; i < reqrank_max; i++) {
file_fixed << i << "," << SVD_fixed_errors[i] << "," << fullACA_fixed_errors[i] << "," << partialACA_fixed_errors[i] << "," << sympartialACA_fixed_errors[i] << endl;
file_fixed << i << "," << SVD_fixed_errors[i] << "," << fullACA_fixed_errors[i] << "," << partialACA_fixed_errors[i] << "," << sympartialACA_fixed_errors[i] << "," << recompressed_sympartialACA_fixed_errors[i] << endl;
}

// Finalize the MPI environment.
2 changes: 1 addition & 1 deletion examples/smallest_example.cpp
@@ -91,7 +91,7 @@ int main(int argc, char *argv[]) {
// Distributed operator
char symmetry = 'S';
char UPLO = 'U';
DefaultApproximationBuilder<double, double> default_approximation_builder(A, cluster, cluster, epsilon, eta, symmetry, UPLO, MPI_COMM_WORLD);
DefaultApproximationBuilder<double, double> default_approximation_builder(A, cluster, cluster, HMatrixTreeBuilder<double, double>(epsilon, eta, symmetry, UPLO), MPI_COMM_WORLD);
DistributedOperator<double> &distributed_operator = default_approximation_builder.distributed_operator;

// Matrix vector product
2 changes: 1 addition & 1 deletion examples/smallest_example.sh
@@ -18,4 +18,4 @@ mpirun -np 2 ./examples/smallest_example ${outputpath}



# python3 ../tools/plot_hmatrix.py --inputfile ../output/examples/smallest_example/smallest_example_plot --sizeWorld 2
# python3 ../tools/plot_hmatrix.py --inputfile ../output/examples/smallest_example/local_hmatrix_0.csv
3 changes: 2 additions & 1 deletion examples/visucluster.cpp
@@ -1,3 +1,4 @@
#include <htool/clustering/implementations/partitioning.hpp>
#include <htool/htool.hpp>
#include <htool/testing/geometry.hpp>

@@ -28,7 +29,7 @@ int main(int argc, char *argv[]) {
Cluster<double> cluster = recursive_build_strategy.create_cluster_tree(size, spatial_dimension, p.data(), 2, 2);

// Output
save_clustered_geometry(cluster, 3, p.data(), outputname + "/clustering_output", {1, 2, 3});
save_clustered_geometry(cluster, spatial_dimension, p.data(), outputname + "/clustering_output", {1, 2, 3});

return 0;
}
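The `create_cluster_tree` call in the example above relies on a partitioning strategy such as `Partitioning<double,ComputeLargestExtent,RegularSplitting>` from the changelog. Under assumptions about what those names mean (they are not reproduced from Htool's source), one recursion step of such a clustering can be sketched standalone:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstddef>
#include <numeric>
#include <vector>

// One splitting step of a cluster tree: pick the coordinate axis with the
// largest extent (a simplified stand-in for ComputeLargestExtent) and split
// the point indices into equal-size children along it (RegularSplitting).
// An illustration of the idea, not Htool's actual implementation.
std::array<std::vector<std::size_t>, 2>
split_largest_extent(const std::vector<double> &coords, std::size_t dim) {
    std::size_t n = coords.size() / dim;
    // Find the axis with the largest extent.
    std::size_t best_axis = 0;
    double best_extent = -1.;
    for (std::size_t d = 0; d < dim; ++d) {
        double lo = coords[d], hi = coords[d];
        for (std::size_t i = 1; i < n; ++i) {
            lo = std::min(lo, coords[i * dim + d]);
            hi = std::max(hi, coords[i * dim + d]);
        }
        if (hi - lo > best_extent) { best_extent = hi - lo; best_axis = d; }
    }
    // Sort indices along that axis and cut in the middle.
    std::vector<std::size_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(), [&](std::size_t a, std::size_t b) {
        return coords[a * dim + best_axis] < coords[b * dim + best_axis];
    });
    return {std::vector<std::size_t>(idx.begin(), idx.begin() + n / 2),
            std::vector<std::size_t>(idx.begin() + n / 2, idx.end())};
}
```

Applying this step recursively, and stopping once a child holds at most `maximal_leaf_size` points, yields a cluster tree of the kind the example visualizes.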
8 changes: 4 additions & 4 deletions include/htool/basic_types/vector.hpp
@@ -133,8 +133,8 @@ int vector_to_bytes(const std::vector<T> vect, const std::string &file) {
return 1; // LCOV_EXCL_LINE
}
int size = vect.size();
out.write((char *)(&size), sizeof(int));
out.write((char *)&(vect[0]), size * sizeof(T));
out.write(reinterpret_cast<char *>(&size), sizeof(int));
out.write(reinterpret_cast<const char *>(vect.data()), size * sizeof(T));

out.close();
return 0;
@@ -151,9 +151,9 @@ int bytes_to_vector(std::vector<T> &vect, const std::string &file) {
}

int size = 0;
in.read((char *)(&size), sizeof(int));
in.read(reinterpret_cast<char *>(&size), sizeof(int));
vect.resize(size);
in.read((char *)&(vect[0]), size * sizeof(T));
in.read(reinterpret_cast<char *>(vect.data()), size * sizeof(T));

in.close();
return 0;
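The hunk above replaces C-style casts with `reinterpret_cast` in the binary (de)serialization helpers. The round trip they implement — a length header followed by the raw element bytes — can be sketched standalone with the same casting pattern (simplified error handling, hypothetical function names):

```cpp
#include <fstream>
#include <string>
#include <vector>

// Standalone sketch of the vector_to_bytes / bytes_to_vector round trip:
// an int length header followed by the raw element bytes.
template <typename T>
void write_vector(const std::vector<T> &vect, const std::string &file) {
    std::ofstream out(file, std::ios::binary);
    int size = static_cast<int>(vect.size());
    out.write(reinterpret_cast<const char *>(&size), sizeof(int));
    out.write(reinterpret_cast<const char *>(vect.data()),
              static_cast<std::streamsize>(size * sizeof(T)));
}

template <typename T>
void read_vector(std::vector<T> &vect, const std::string &file) {
    std::ifstream in(file, std::ios::binary);
    int size = 0;
    in.read(reinterpret_cast<char *>(&size), sizeof(int));
    vect.resize(size);
    in.read(reinterpret_cast<char *>(vect.data()),
            static_cast<std::streamsize>(size * sizeof(T)));
}
```

Using `vect.data()` instead of `&vect[0]`, as the hunk does, also avoids undefined behavior on an empty vector.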
4 changes: 2 additions & 2 deletions include/htool/clustering/cluster_node.hpp
@@ -62,7 +62,7 @@ class Cluster : public TreeNode<Cluster<CoordinatesPrecision>, ClusterTreeData<C
// Cluster tree getters
unsigned int get_maximal_depth() const { return this->m_tree_data->m_max_depth; }
unsigned int get_minimal_depth() const { return this->m_tree_data->m_min_depth; }
unsigned int get_minclustersize() const { return this->m_tree_data->m_minclustersize; }
unsigned int get_maximal_leaf_size() const { return this->m_tree_data->m_maximal_leaf_size; }
const std::vector<const Cluster<CoordinatesPrecision> *> &get_clusters_on_partition() const { return this->m_tree_data->m_clusters_on_partition; }
const Cluster<CoordinatesPrecision> &get_cluster_on_partition(size_t index) const {
return *this->m_tree_data->m_clusters_on_partition[index];
@@ -75,7 +75,7 @@ class Cluster : public TreeNode<Cluster<CoordinatesPrecision>, ClusterTreeData<C
void set_is_permutation_local(bool is_permutation_local) { this->m_tree_data->m_is_permutation_local = is_permutation_local; }
void set_minimal_depth(unsigned int minimal_depth) { this->m_tree_data->m_min_depth = minimal_depth; }
void set_maximal_depth(unsigned int maximal_depth) { this->m_tree_data->m_max_depth = maximal_depth; }
void set_minclustersize(unsigned int minclustersize) { this->m_tree_data->m_minclustersize = minclustersize; }
void set_maximal_leaf_size(unsigned int maximal_leaf_size) { this->m_tree_data->m_maximal_leaf_size = maximal_leaf_size; }

// Operator overloading
bool operator==(const Cluster<CoordinatesPrecision> &rhs) const { return this->get_offset() == rhs.get_offset() && this->get_size() == rhs.get_size() && this->m_tree_data == rhs.m_tree_data && this->get_depth() == rhs.get_depth() && this->get_counter() == rhs.get_counter(); }
4 changes: 2 additions & 2 deletions include/htool/clustering/cluster_output.hpp
@@ -34,7 +34,7 @@ void save_cluster_tree(const Cluster<CoordinatesPrecision> &cluster, std::string
// Cluster tree properties
std::ofstream output_permutation(filename + "_cluster_tree_properties.csv");

output_permutation << "minclustersize: " << cluster.get_minclustersize() << "\n";
output_permutation << "maximal leaf size: " << cluster.get_maximal_leaf_size() << "\n";
output_permutation << "maximal depth: " << cluster.get_maximal_depth() << "\n";
output_permutation << "minimal depth: " << cluster.get_minimal_depth() << "\n";
output_permutation << "permutation: ";
@@ -128,7 +128,7 @@ Cluster<CoordinatesPrecision> read_cluster_tree(std::string file_cluster_tree_pr

std::getline(input_permutation, line);
splitted_string = split(line, " ");
root_cluster.set_minclustersize(std::stoul(splitted_string.back()));
root_cluster.set_maximal_leaf_size(std::stoul(splitted_string.back()));
std::getline(input_permutation, line);
splitted_string = split(line, " ");
root_cluster.set_maximal_depth(std::stoi(splitted_string.back()));
2 changes: 1 addition & 1 deletion include/htool/clustering/cluster_tree_data.hpp
@@ -13,7 +13,7 @@ class Cluster;
template <typename CoordinatePrecision>
struct ClusterTreeData {
// Parameters
unsigned int m_minclustersize{10}; // minimal number of geometric point in a cluster
unsigned int m_maximal_leaf_size{10}; // maximal number of geometric points in a leaf cluster

// Information
unsigned int m_max_depth{std::numeric_limits<unsigned int>::min()}; // maximum depth of the tree