
SUSTAIN is an environmentally friendly, token-optimized AI wrapper designed to reduce compute costs and increase productivity. By filtering irrelevant words and phrases out of prompts, SUSTAIN minimizes the number of tokens sent to and received from the AI, saving energy and boosting performance.
Our mission is to deliver a sustainable, high-efficiency alternative to major large language models (LLMs) while maintaining powerful AI-driven results.🔋
- Traditional AI systems expend significant energy processing large amounts of token data, much of which is redundant or irrelevant (e.g., greetings, fillers, politeness).
- SUSTAIN significantly reduces token usage, minimizing the carbon footprint of AI queries.
- By promoting shorter, optimized inputs and outputs, SUSTAIN contributes to a greener AI ecosystem.
- Get results faster with condensed, actionable responses.
- Eliminate unnecessarily verbose outputs by default, with the option to expand details when needed.
- Powerful, straight-to-the-point professional AI assistant.
- Preprocesses user prompts to filter out unnecessary words (e.g., "Hello," "Thank you," etc.) and retain only the core intent.
- Example Conversion:
- Input: "Could you kindly explain machine learning? Thank you!"
- Refined input: "explain machine learning"
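The filtering step above can be sketched as a simple pattern-stripping pass. This is a minimal illustration, not SUSTAIN's actual filter; the phrase list and regex approach are assumptions for demonstration.

```python
import re

# Hypothetical filler-phrase list; the real SUSTAIN filter may differ.
FILLERS = [
    r"\bcould you kindly\b", r"\bcould you\b", r"\bplease\b",
    r"\bhello\b", r"\bthank you\b", r"\bthanks\b",
]

def refine_prompt(prompt: str) -> str:
    """Strip greetings and politeness, keeping only the core intent."""
    text = prompt.lower()
    for pattern in FILLERS:
        text = re.sub(pattern, "", text)
    # Collapse whitespace and drop stray punctuation at the edges.
    return re.sub(r"\s+", " ", text).strip(" ,.!?")

print(refine_prompt("Could you kindly explain machine learning? Thank you!"))
# -> "explain machine learning"
```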
- Uses Python's word2number module to detect arithmetic in a prompt and evaluate it locally instead of sending it to the AI, yielding 100% token savings on all math queries.
- Example:
- Input: "What's four times three"
- Refined input: 4*3
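The local-math shortcut can be sketched as follows. To keep the example self-contained it uses a small lookup table instead of the word2number package, and the token parsing is a deliberately simplified assumption:

```python
import operator

# Minimal word-to-number table; SUSTAIN uses the word2number package instead.
WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}
OPS = {"plus": operator.add, "minus": operator.sub,
       "times": operator.mul, "divided": operator.truediv}
SYMBOLS = {operator.add: "+", operator.sub: "-",
           operator.mul: "*", operator.truediv: "/"}

def solve_locally(prompt: str):
    """Return (refined expression, result) for simple math prompts, else None."""
    tokens = prompt.lower().strip("?!. ").replace("what's", "").split()
    nums, op = [], None
    for tok in tokens:
        if tok in WORDS:
            nums.append(WORDS[tok])
        elif tok in OPS:
            op = OPS[tok]
    if op and len(nums) == 2:
        return f"{nums[0]}{SYMBOLS[op]}{nums[1]}", op(*nums)
    return None  # not a math query; fall through to the AI

print(solve_locally("What's four times three"))  # -> ('4*3', 12)
```

Because the answer is computed locally, no tokens are sent to the model at all for queries this path catches.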
- Limits responses to concise, actionable outputs using optimized `max_tokens` settings.
- Example Output:
- Refined input: "explain machine learning"
- Output: "Machine learning is a field of AI that trains computers to learn patterns from data."
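A tight `max_tokens` budget can be applied when building the request. The model name, token limits, and payload shape below are placeholder assumptions, not SUSTAIN's actual configuration:

```python
CONCISE_MAX_TOKENS = 60  # assumed hard cap that keeps answers short and cheap

def build_request(refined_prompt: str, expand: bool = False) -> dict:
    """Build a chat-completion payload with a tight max_tokens budget."""
    return {
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [{"role": "user", "content": refined_prompt}],
        # Allow a larger budget only when the user asks to expand details.
        "max_tokens": 300 if expand else CONCISE_MAX_TOKENS,
    }

print(build_request("explain machine learning")["max_tokens"])  # -> 60
```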
- Track token savings and display eco-friendly metrics to users.
- Example:
- Token savings: 50%
- You have saved: 0.0023 kWh of power and 0.0009 metric tons of CO₂ emissions
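The eco-metrics above can be estimated from the token counts. The per-token energy and carbon-intensity constants below are illustrative placeholders; real factors vary by model and data center:

```python
KWH_PER_TOKEN = 1e-4     # placeholder: assumed energy cost per token
TONS_CO2_PER_KWH = 4e-4  # placeholder: assumed grid carbon intensity

def eco_metrics(original_tokens: int, refined_tokens: int) -> dict:
    """Report token savings and the estimated energy/CO2 avoided."""
    saved = original_tokens - refined_tokens
    kwh = saved * KWH_PER_TOKEN
    return {
        "savings_pct": round(100 * saved / original_tokens),
        "kwh_saved": round(kwh, 6),
        "co2_tons_saved": round(kwh * TONS_CO2_PER_KWH, 10),
    }

print(eco_metrics(12, 6))  # halving the prompt -> 50% savings
```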
- Provide additional eco-feedback on overall token savings
- Convert to web app and deploy on Azure
- Implement math optimization pipeline
- Implement caching for frequently requested queries to reduce API calls
- Convert to Android, iOS apps
- Implement dynamic summarization based on context length
See our Contributing page.
For questions or suggestions, feel free to reach out to us:
- Project Team:
- Klein Cafa kleinlester.cafa@ontariotechu.net
- Tomasz Puzio tomasz.puzio@ontariotechu.net
- DJ Leamen dj.leamen@ontariotechu.net
- Juliana Losada Prieto juliana.losadaprieto@ontariotechu.net
Let’s build a more sustainable AI future together!