-
🔁 The Problem: Multi-Node Response Chaos

Multiple bots reply at once. Bandwidth is wasted. Mesh traffic floods. Users receive overlapping or conflicting responses. This behavior is fine in isolated test setups, but becomes unmanageable in:

- Urban or public meshes
- Emergency deployments
- Global bridges via MQTT

🤖 AI-to-AI Messaging Loop Warning

If a message itself contains something a second node interprets as an /ai command or input, that node responds. If that response is received by the original node, the cycle continues, creating an infinite AI loop. This can result in:

- Rapid battery drain or device lockups
- Channel flooding that renders the mesh unusable
- A feedback loop that can persist even after rebooting the devices

Currently, the only way to stop this behavior is to physically disconnect at least one node.

🧪 Current Mitigations Being Explored

✅ A config toggle to disable LongFast responses for AI commands.
🛑 Optionally disabling LongFast entirely for MESH-AI to preserve bandwidth and prevent command duplication.
🔧 Allowing users to customize the /ai command string per node.
🎲 Auto-generating unique suffixes or identifiers for the /ai command on first setup (e.g., /ai-TB42); a rough sketch of this idea follows below.

💡 Additional Fixes Under Consideration

- Command UUID Tagging – Assign each /ai request a unique hash so nodes ignore duplicates or foreign requests (a combined sketch of this, the addressed-command format, and the cooldown logic follows below).
- Addressed Commands – Require formatting like /ai TBOT: so only named nodes respond.
- Designated AI Nodes – Allow networks to define one AI responder per channel or group.
- Cooldown or Rate-Limiting Logic – Prevent repeated /ai responses within a short time window.
- MQTT Scoped Routing – Ensure only one MQTT-connected node responds per command, avoiding echo storms.

📍 Summary and Next Steps

I am aware of these risks and am actively working on fixes in future builds to prevent:

- Unintended AI loops
- Multi-node response storms
- Mesh saturation

💬 Community Input Welcome

Join the discussion! Let's build MESH-AI into a smarter, safer, and more scalable platform — together.

— TBOT 🛰️
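The per-node command idea (🎲 above) can be as small as generating a short suffix once and persisting it. Here is a minimal sketch; the `command.json` file name, the `ai_command` key, and the `is_ai_request` helper are all hypothetical placeholders rather than anything from the current MESH-AI codebase:

```python
import json
import secrets
import string
from pathlib import Path

CONFIG_PATH = Path("command.json")  # hypothetical location, not a real MESH-AI file


def load_or_create_ai_command(base: str = "/ai") -> str:
    """Return this node's personalized AI command, creating it on first run.

    On first setup a 4-character suffix is generated (e.g. "/ai-TB42") and
    written to disk so the same command survives restarts.
    """
    if CONFIG_PATH.exists():
        return json.loads(CONFIG_PATH.read_text())["ai_command"]

    alphabet = string.ascii_uppercase + string.digits
    suffix = "".join(secrets.choice(alphabet) for _ in range(4))
    command = f"{base}-{suffix}"
    CONFIG_PATH.write_text(json.dumps({"ai_command": command}))
    return command


AI_COMMAND = load_or_create_ai_command()


def is_ai_request(text: str) -> bool:
    """Only treat a message as an AI request if it starts with *this* node's command."""
    return text.strip().lower().startswith(AI_COMMAND.lower())
```

Because every node ends up with a different command string, a reply that merely contains "/ai" no longer triggers a second node, which breaks the simplest form of the AI-to-AI loop.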
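Several of the "Additional Fixes" amount to guards that run before the AI ever sees a message. The sketch below combines command-ID deduplication, addressed commands, and a cooldown in one gatekeeper function. The `NODE_NAME` value, the `[req:...]` tag format, and the 30-second window are invented for illustration, not planned MESH-AI behavior:

```python
import re
import time

NODE_NAME = "TBOT"          # hypothetical: the name this node answers to
COOLDOWN_SECONDS = 30       # hypothetical rate-limit window
_seen_request_ids: set[str] = set()
_last_reply_per_sender: dict[str, float] = {}

# Made-up wire format: "/ai TBOT: what's the weather? [req:9f3a2c]"
_ADDRESSED = re.compile(r"^/ai\s+(?P<name>\w+):", re.IGNORECASE)
_REQUEST_ID = re.compile(r"\[req:(?P<rid>[0-9a-f]{6,})\]")


def should_respond(text: str, sender_id: str, now: float | None = None) -> bool:
    """Gatekeeper run on every inbound message before the AI is invoked."""
    now = time.time() if now is None else now

    # 1. Addressed commands: only answer if the message names this node.
    addressed = _ADDRESSED.match(text)
    if not addressed or addressed.group("name").upper() != NODE_NAME:
        return False

    # 2. Request-ID tagging: ignore a request this node has already handled.
    rid_match = _REQUEST_ID.search(text)
    if rid_match:
        rid = rid_match.group("rid")
        if rid in _seen_request_ids:
            return False
        _seen_request_ids.add(rid)

    # 3. Cooldown: at most one reply per sender per window, which also starves
    #    an AI-to-AI loop of the rapid back-and-forth it needs to sustain itself.
    last = _last_reply_per_sender.get(sender_id, 0.0)
    if now - last < COOLDOWN_SECONDS:
        return False
    _last_reply_per_sender[sender_id] = now
    return True
```

On a memory-constrained node the `_seen_request_ids` set would need pruning; a time-bounded or LRU cache is the obvious refinement.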
-
Don't run it on LongFast. This is a public channel, and it's common courtesy not to shout over everyone in public places, let alone have an AI do it. Make the bot Direct Message access only. I would have thought running AI bots on LongFast would be frowned upon in most areas.
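For reference, a DM-only gate is essentially one check on the inbound packet. A minimal sketch, assuming a hypothetical on_receive hook, a packet dict with a "to" field holding the destination node number, and a placeholder handle_ai_request function; exact field names will differ by firmware and API:

```python
BROADCAST_ADDR = 0xFFFFFFFF  # Meshtastic broadcast destination ("^all")


def is_direct_message(packet: dict, my_node_num: int) -> bool:
    """True only when the packet was addressed to this node specifically."""
    to = packet.get("to")
    return to is not None and to != BROADCAST_ADDR and to == my_node_num


def handle_ai_request(packet: dict) -> None:
    """Placeholder for the bot's existing AI handler."""


def on_receive(packet: dict, my_node_num: int) -> None:
    # Hypothetical hook: drop anything that is not a DM before the AI sees it.
    if not is_direct_message(packet, my_node_num):
        return
    handle_ai_request(packet)
```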
-
v0.6.0 will disable use on LongFast by default, as this is the quickest and dirtiest way to deal with it. Unfortunately, fully removing the ability altogether is counterproductive for certain use cases, and any kid armed with AI can re-enable it easily, so more creative steps will need to be taken long term to restrict use on LongFast channels.

Fighting malicious repurposing is always going to be an issue in software development, and unfortunately any kid with a ChatGPT account can be malicious if they really want to. This is 2025, after all. But I am on it!
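A default-off toggle along these lines could look like the sketch below. The key names, the channel-name matching, and the example channel list are assumptions for illustration, not the actual v0.6.0 config:

```python
# Hypothetical config defaults; the real MESH-AI keys may differ.
DEFAULT_CONFIG = {
    "respond_on_longfast": False,                  # off by default
    "allowed_channels": ["admin", "private-ai"],   # made-up example channel names
}


def channel_is_allowed(channel_name: str, config: dict = DEFAULT_CONFIG) -> bool:
    """Decide whether the bot may answer on this channel."""
    if channel_name.lower() == "longfast":
        return bool(config.get("respond_on_longfast", False))
    return channel_name in config.get("allowed_channels", [])
```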