This project evaluates the risk of an AI system using principles from the AIRO Ontology. It supports command-line and web-based (Sinatra) interfaces and outputs results in both Markdown and RDF/Turtle formats.
- Collects AI risk inputs via a structured questionnaire.
- Applies heuristic risk scoring logic.
- Outputs:
  - a human-readable Markdown report
  - RDF/Turtle data for SPARQL/linked-data applications
- Available as a CLI tool or web app (Sinatra).
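The heuristic scoring step can be pictured as a weighted sum over questionnaire answers. The answer keys and weights below are purely illustrative, not the project's actual values:

```ruby
# Hypothetical sketch of heuristic risk scoring: each questionnaire
# answer carries a weight, and the total risk score is their sum.
# Keys and weights are made up for illustration.
ANSWER_WEIGHTS = {
  "domain:law_enforcement" => 4,
  "autonomy:full"          => 3,
  "data:biometric"         => 3,
  "oversight:none"         => 3
}.freeze

def total_risk_score(answers)
  # Unrecognized answers contribute 0 rather than raising.
  answers.sum { |a| ANSWER_WEIGHTS.fetch(a, 0) }
end

score = total_risk_score(
  ["domain:law_enforcement", "autonomy:full", "data:biometric", "oversight:none"]
)
puts score  # => 13
```

A flat lookup table keeps the heuristic transparent and easy to audit, which matters for a risk-assessment tool.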
```bash
git clone https://github.com/inloopstudio/AI-Risk-Ontology-AIRO.git
cd AI-Risk-Ontology-AIRO
bundle install
```
```bash
bundle exec ruby ai_risk_assessment.rb generate_questionnaire > questionnaire.txt
bundle exec ruby ai_risk_assessment.rb process_responses questionnaire.txt
```

Outputs:

- `assessment_report.md`
- `assessment_report.ttl`
Start the Sinatra server:

```bash
bundle exec ruby app.rb
```

Visit http://localhost:4567, fill out the form, and get your AI Risk Report in Markdown.
The app is available to remix at https://replit.com/@inloop/AI-Risk-Ontology-AIRO
```markdown
# AI Risk Assessment Report
...

**Total Risk Score:** 13
**Overall Risk Level:** HIGH
```
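The overall risk level is derived from the total score. The thresholds below are illustrative (chosen so that a score of 13, as in the example, maps to HIGH); the tool's actual cut-offs may differ:

```ruby
# Illustrative mapping from total risk score to overall risk level.
# The threshold values are assumptions, not the project's real ones.
def risk_level(score)
  case score
  when 0..4  then "LOW"
  when 5..9  then "MEDIUM"
  else            "HIGH"
  end
end

puts risk_level(13)  # => "HIGH"
```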
```turtle
@prefix airo: <http://example.org/airo#> .

<http://example.org/ai_system/1>
    a airo:AISystem ;
    airo:hasDomain "Law Enforcement" ;
    ...
    airo:hasRiskLevel "HIGH" .
```
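A minimal sketch of how such Turtle output could be emitted with plain string interpolation; the property names follow the example above, while the method name and keyword arguments are hypothetical, and real input would need proper Turtle string escaping:

```ruby
# Hypothetical helper that serializes one assessed AI system to
# RDF/Turtle. Property names mirror the example; everything else
# (method name, arguments) is illustrative.
def to_turtle(id:, domain:, risk_level:)
  <<~TTL
    @prefix airo: <http://example.org/airo#> .

    <http://example.org/ai_system/#{id}>
        a airo:AISystem ;
        airo:hasDomain "#{domain}" ;
        airo:hasRiskLevel "#{risk_level}" .
  TTL
end

puts to_turtle(id: 1, domain: "Law Enforcement", risk_level: "HIGH")
```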
This project is sponsored by inloop.studio.
Licensed under the MIT License.