101 changes: 101 additions & 0 deletions .codeboarding/Evaluation_Orchestration.md
@@ -0,0 +1,101 @@
```mermaid

graph LR

Evaluation_Orchestration_ScoreCard_["Evaluation Orchestration (ScoreCard)"]

Metric_Set_Definitions["Metric Set Definitions"]

Metric_Implementations_Performance_Coverage_["Metric Implementations (Performance & Coverage)"]

Evaluation_Orchestration_ScoreCard_ -- "uses" --> Metric_Set_Definitions

Evaluation_Orchestration_ScoreCard_ -- "invokes" --> Metric_Implementations_Performance_Coverage_

```



[![CodeBoarding](https://img.shields.io/badge/Generated%20by-CodeBoarding-9cf?style=flat-square)](https://github.com/CodeBoarding/GeneratedOnBoardings)[![Demo](https://img.shields.io/badge/Try%20our-Demo-blue?style=flat-square)](https://www.codeboarding.org/demo)[![Contact](https://img.shields.io/badge/Contact%20us%20-%20contact@codeboarding.org-lightgrey?style=flat-square)](mailto:contact@codeboarding.org)



## Details



These components are fundamental because they represent a clear separation of concerns in the evaluation toolkit: what to evaluate (Metric Set Definitions), how to evaluate (Metric Implementations), and how to orchestrate the evaluation (Evaluation Orchestration). This modular design aligns with the "Machine Learning/Data Science Utility Library" pattern, emphasizing reusability, extensibility, and a clear, intuitive API.



### Evaluation Orchestration (ScoreCard)

This is the core component responsible for driving the evaluation process. It takes input data and a defined set of metrics, computes the scores, and aggregates them into a structured scorecard. It acts as a facade, simplifying the interaction with various metric implementations and metric sets. `CoverageScoreCard` extends `ScoreCard` to specifically handle coverage-related evaluations.





**Related Classes/Methods**:



- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/scorecard.py#L13-L93" target="_blank" rel="noopener noreferrer">`rexmex.scorecard.ScoreCard` (13:93)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/scorecard.py#L96-L158" target="_blank" rel="noopener noreferrer">`rexmex.scorecard.CoverageScoreCard` (96:158)</a>





### Metric Set Definitions

This component defines collections of metrics that can be applied together for a specific evaluation context (e.g., classification, ranking, coverage, rating). Classes like `rexmex.metricset.ClassificationMetricSet`, `CoverageMetricSet`, `RankingMetricSet`, and `RatingMetricSet` (all inheriting from `rexmex.metricset.MetricSet`) encapsulate these predefined sets.





**Related Classes/Methods**:



- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metricset.py#L77-L108" target="_blank" rel="noopener noreferrer">`rexmex.metricset.ClassificationMetricSet` (77:108)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metricset.py#L151-L166" target="_blank" rel="noopener noreferrer">`rexmex.metricset.CoverageMetricSet` (151:166)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metricset.py#L169-L179" target="_blank" rel="noopener noreferrer">`rexmex.metricset.RankingMetricSet` (169:179)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metricset.py#L111-L148" target="_blank" rel="noopener noreferrer">`rexmex.metricset.RatingMetricSet` (111:148)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metricset.py#L16-L74" target="_blank" rel="noopener noreferrer">`rexmex.metricset.MetricSet` (16:74)</a>





### Metric Implementations (Performance & Coverage)

These components provide the concrete algorithms and functions for calculating individual performance and coverage metrics. Examples include metrics found within `rexmex.metrics.performance` (e.g., precision, recall) and `rexmex.metrics.coverage` (e.g., catalog coverage, diversity).





**Related Classes/Methods**:



- `rexmex.metrics.performance`

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metrics/coverage.py" target="_blank" rel="noopener noreferrer">`rexmex.metrics.coverage`</a>









### [FAQ](https://github.com/CodeBoarding/GeneratedOnBoardings/tree/main?tab=readme-ov-file#faq)
101 changes: 101 additions & 0 deletions .codeboarding/Metric_Definitions.md
@@ -0,0 +1,101 @@
```mermaid

graph LR

Metric_Definitions["Metric Definitions"]

Metric_Set_Definitions["Metric Set Definitions"]

Evaluation_Orchestration_ScoreCard_["Evaluation Orchestration (ScoreCard)"]

Metric_Set_Definitions -- "uses/depends on" --> Metric_Definitions

Evaluation_Orchestration_ScoreCard_ -- "invokes" --> Metric_Definitions

Evaluation_Orchestration_ScoreCard_ -- "uses/depends on" --> Metric_Set_Definitions

click Metric_Definitions href "https://github.com/AstraZeneca/rexmex/blob/main/.codeboarding/Metric_Definitions.md" "Details"

```



[![CodeBoarding](https://img.shields.io/badge/Generated%20by-CodeBoarding-9cf?style=flat-square)](https://github.com/CodeBoarding/GeneratedOnBoardings)[![Demo](https://img.shields.io/badge/Try%20our-Demo-blue?style=flat-square)](https://www.codeboarding.org/demo)[![Contact](https://img.shields.io/badge/Contact%20us%20-%20contact@codeboarding.org-lightgrey?style=flat-square)](mailto:contact@codeboarding.org)



## Details



An overview of the abstract components of the rexmex project.



### Metric Definitions [[Expand]](./Metric_Definitions.md)

This component provides the concrete implementations of individual evaluation metrics. It's the foundational layer for all quantitative assessments within rexmex, housing the algorithms for various metric types.





**Related Classes/Methods**:



- `rexmex.metrics` (1:1)

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metrics/classification.py#L1-L1" target="_blank" rel="noopener noreferrer">`rexmex.metrics.classification` (1:1)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metrics/ranking.py#L1-L1" target="_blank" rel="noopener noreferrer">`rexmex.metrics.ranking` (1:1)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metrics/rating.py#L1-L1" target="_blank" rel="noopener noreferrer">`rexmex.metrics.rating` (1:1)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metrics/coverage.py#L1-L1" target="_blank" rel="noopener noreferrer">`rexmex.metrics.coverage` (1:1)</a>





### Metric Set Definitions

This component defines collections of related metrics, allowing for the evaluation of specific aspects of recommender systems (e.g., a set of classification metrics). It leverages the individual metric implementations from Metric Definitions.





**Related Classes/Methods**:



- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metricset.py#L1-L1" target="_blank" rel="noopener noreferrer">`rexmex.metricset` (1:1)</a>





### Evaluation Orchestration (ScoreCard)

This component is responsible for orchestrating the evaluation process. It takes metric sets and data, computes the metrics, and generates comprehensive reports or scorecards.





**Related Classes/Methods**:



- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/scorecard.py#L1-L1" target="_blank" rel="noopener noreferrer">`rexmex.scorecard` (1:1)</a>









### [FAQ](https://github.com/CodeBoarding/GeneratedOnBoardings/tree/main?tab=readme-ov-file#faq)
127 changes: 127 additions & 0 deletions .codeboarding/Metric_Set_Management.md
@@ -0,0 +1,127 @@
```mermaid

graph LR

Metric_Set_Management["Metric Set Management"]

Metric_Implementations_Performance_["Metric Implementations (Performance)"]

Metric_Implementations_Coverage_["Metric Implementations (Coverage)"]

Evaluation_Orchestrator_ScoreCard_["Evaluation Orchestrator (ScoreCard)"]

Metric_Set_Management -- "depends on" --> Metric_Implementations_Performance_

Metric_Set_Management -- "depends on" --> Metric_Implementations_Coverage_

Evaluation_Orchestrator_ScoreCard_ -- "uses" --> Metric_Set_Management

click Metric_Set_Management href "https://github.com/AstraZeneca/rexmex/blob/main/.codeboarding/Metric_Set_Management.md" "Details"

```



[![CodeBoarding](https://img.shields.io/badge/Generated%20by-CodeBoarding-9cf?style=flat-square)](https://github.com/CodeBoarding/GeneratedOnBoardings)[![Demo](https://img.shields.io/badge/Try%20our-Demo-blue?style=flat-square)](https://www.codeboarding.org/demo)[![Contact](https://img.shields.io/badge/Contact%20us%20-%20contact@codeboarding.org-lightgrey?style=flat-square)](mailto:contact@codeboarding.org)



## Details



The `Metric Set Management` component is fundamental because it provides a structured and extensible way to group related metrics into reusable `MetricSet` objects. This aligns with the "Modular Design" and "Component-Based Architecture" patterns, promoting reusability and a clear separation of concerns. The class hierarchy confirms that `ClassificationMetricSet`, `CoverageMetricSet`, `RankingMetricSet`, and `RatingMetricSet` all inherit from `MetricSet`, demonstrating a well-defined hierarchy and adherence to the Strategy Pattern, where each subclass represents a specific evaluation strategy.



### Metric Set Management [[Expand]](./Metric_Set_Management.md)

This component, primarily located in `rexmex.metricset`, is responsible for defining and managing collections of evaluation metrics. It provides a structured and extensible way to group related metrics into reusable `MetricSet` objects (e.g., `ClassificationMetricSet`, `RankingMetricSet`, `RatingMetricSet`, `CoverageMetricSet`). The base class `rexmex.metricset.MetricSet` establishes a common interface, allowing for the aggregation of individual `Metric Definitions` (from `rexmex.metrics.*`) into coherent evaluation scenarios. This component embodies the Strategy Pattern, where different `MetricSet` implementations represent distinct evaluation strategies.





**Related Classes/Methods**:



- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metricset.py#L16-L74" target="_blank" rel="noopener noreferrer">`rexmex.metricset.MetricSet` (16:74)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metricset.py#L77-L108" target="_blank" rel="noopener noreferrer">`rexmex.metricset.ClassificationMetricSet` (77:108)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metricset.py#L169-L179" target="_blank" rel="noopener noreferrer">`rexmex.metricset.RankingMetricSet` (169:179)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metricset.py#L111-L148" target="_blank" rel="noopener noreferrer">`rexmex.metricset.RatingMetricSet` (111:148)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metricset.py#L151-L166" target="_blank" rel="noopener noreferrer">`rexmex.metricset.CoverageMetricSet` (151:166)</a>





### Metric Implementations (Performance)

This component includes implementations of performance-related evaluation metrics, such as classification, ranking, and rating metrics. These are the concrete metric calculations that are aggregated by the `Metric Set Management` component.





**Related Classes/Methods**:



- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metrics/classification.py#L1-L1" target="_blank" rel="noopener noreferrer">`rexmex.metrics.classification` (1:1)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metrics/ranking.py#L1-L1" target="_blank" rel="noopener noreferrer">`rexmex.metrics.ranking` (1:1)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metrics/rating.py#L1-L1" target="_blank" rel="noopener noreferrer">`rexmex.metrics.rating` (1:1)</a>





### Metric Implementations (Coverage)

This component includes implementations of coverage-related evaluation metrics. Similar to performance metrics, these are the specific calculations used within `MetricSet` objects.





**Related Classes/Methods**:



- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/metrics/coverage.py#L1-L1" target="_blank" rel="noopener noreferrer">`rexmex.metrics.coverage` (1:1)</a>





### Evaluation Orchestrator (ScoreCard)

This component is responsible for orchestrating the evaluation process, typically using `Metric Sets` to generate scorecards. It acts as a Facade, simplifying the evaluation workflow for users by leveraging the defined metric sets.





**Related Classes/Methods**:



- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/scorecard.py#L13-L93" target="_blank" rel="noopener noreferrer">`rexmex.scorecard.ScoreCard` (13:93)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/scorecard.py#L96-L158" target="_blank" rel="noopener noreferrer">`rexmex.scorecard.CoverageScoreCard` (96:158)</a>









### [FAQ](https://github.com/CodeBoarding/GeneratedOnBoardings/tree/main?tab=readme-ov-file#faq)
59 changes: 59 additions & 0 deletions .codeboarding/Utility_Functions.md
@@ -0,0 +1,59 @@
```mermaid

graph LR

Utility_Functions["Utility Functions"]

Evaluation_Orchestrator_ScoreCard_ -- "utilizes" --> Utility_Functions

Metric_Set_Definitions -- "utilizes" --> Utility_Functions

Metric_Implementations_Performance_ -- "utilizes" --> Utility_Functions

Metric_Implementations_Coverage_ -- "utilizes" --> Utility_Functions

click Utility_Functions href "https://github.com/AstraZeneca/rexmex/blob/main/.codeboarding/Utility_Functions.md" "Details"

```



[![CodeBoarding](https://img.shields.io/badge/Generated%20by-CodeBoarding-9cf?style=flat-square)](https://github.com/CodeBoarding/GeneratedOnBoardings)[![Demo](https://img.shields.io/badge/Try%20our-Demo-blue?style=flat-square)](https://www.codeboarding.org/demo)[![Contact](https://img.shields.io/badge/Contact%20us%20-%20contact@codeboarding.org-lightgrey?style=flat-square)](mailto:contact@codeboarding.org)



## Details



The `Utility Functions` component is fundamental to the `rexmex` library, serving as a data preprocessing backbone with functions like `binarize` and `normalize`. It enhances discoverability and extensibility through the `Annotator` class for metadata attachment. By centralizing common helper functions, it promotes modularity and reusability, preventing code duplication. Its pervasive usage across components like `Evaluation Orchestrator`, `Metric Set Definitions`, and `Metric Implementations` underscores its foundational role as a shared toolkit.



### Utility Functions [[Expand]](./Utility_Functions.md)

This component offers a collection of versatile helper functions and classes that support various operations across the `rexmex` library. It includes crucial data transformation utilities like `binarize` and `normalize`, which are vital for preprocessing prediction and ground-truth vectors into appropriate formats for metric computation. Additionally, the `Annotator` class within this component facilitates the annotation and registration of functions (likely metrics) with valuable metadata such as value ranges, descriptions, and links. This enhances the discoverability, usability, and extensibility of various components within the library.





**Related Classes/Methods**:



- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/utils.py#L16-L35" target="_blank" rel="noopener noreferrer">`rexmex.utils.binarize` (16:35)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/utils.py#L38-L59" target="_blank" rel="noopener noreferrer">`rexmex.utils.normalize` (38:59)</a>

- <a href="https://github.com/AstraZeneca/rexmex/blob/main/rexmex/utils.py#L62-L117" target="_blank" rel="noopener noreferrer">`rexmex.utils.Annotator` (62:117)</a>









### [FAQ](https://github.com/CodeBoarding/GeneratedOnBoardings/tree/main?tab=readme-ov-file#faq)