Introduction
AI adoption is increasing across industries, including telecommunications, to leverage the capabilities of AI. Despite its great potential, AI presents unique characteristics and challenges in terms of security, privacy, safety, and ethics, collectively referred to as trustworthy AI aspects.
To ensure that AI components operate securely and responsibly, it is essential to implement robust risk assessment methodologies and governance practices centered on the pillars of trustworthy AI. This work emphasizes the trustworthiness-by-design principle, highlighting that controls missing during the design phase of AI systems increase the probability of exposures.
Inspired by the OWASP risk rating methodology, we propose risk rating metrics and an evaluation tool, AI-RRT.
Risk assessment is a critical part of AI governance, helping stakeholders understand the importance and potential impact of threats targeting AI systems. This project focuses on the risk rating step within the broader AI risk assessment process, offering a methodology, metrics, and practical guidance for evaluating impact and likelihood to determine overall risk levels.
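To make the risk rating step concrete, the sketch below follows the general pattern of the OWASP risk rating methodology that inspired this work: each likelihood and impact factor is scored on a 0-9 scale, the factor groups are averaged, the averages are mapped to LOW/MEDIUM/HIGH levels, and the two levels are combined into an overall severity via a lookup matrix. The specific factor values and the threat example are hypothetical placeholders, not the actual AI-RRT metrics.

```python
# Illustrative OWASP-style risk rating calculation.
# The AI-RRT metrics and example scores below are hypothetical.

def level(score: float) -> str:
    """Map a 0-9 factor-group average to an OWASP-style level."""
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

# Overall severity matrix (impact level, likelihood level) -> severity,
# following the OWASP risk rating methodology.
SEVERITY = {
    ("LOW", "LOW"): "NOTE",       ("LOW", "MEDIUM"): "LOW",       ("LOW", "HIGH"): "MEDIUM",
    ("MEDIUM", "LOW"): "LOW",     ("MEDIUM", "MEDIUM"): "MEDIUM", ("MEDIUM", "HIGH"): "HIGH",
    ("HIGH", "LOW"): "MEDIUM",    ("HIGH", "MEDIUM"): "HIGH",     ("HIGH", "HIGH"): "CRITICAL",
}

def overall_risk(likelihood_factors, impact_factors):
    """Average each factor group (0-9 scale) and combine via the matrix."""
    lik = sum(likelihood_factors) / len(likelihood_factors)
    imp = sum(impact_factors) / len(impact_factors)
    return SEVERITY[(level(imp), level(lik))]

# Hypothetical scores for an AI-specific threat (e.g., model evasion):
# likelihood average 6.5 -> HIGH, impact average 4.5 -> MEDIUM.
print(overall_risk([7, 6, 8, 5], [4, 5, 6, 3]))  # prints "HIGH"
```

The matrix-based combination keeps the final rating interpretable: stakeholders can trace an overall HIGH rating back to the individual likelihood and impact factors that produced it.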
Note that this study is under review; the tool will be uploaded depending on the review progress.