🏆 Winner - 1st Place, Sui AI Typhoon Hackathon (Sui Track)
https://x.com/SuiNetwork/status/1890575219934523803

Join us and stay tuned for our future updates.

Our Twitter: https://x.com/OpenGraph_Labs
Our Contact: contacts@opengraphlabs.xyz
Tensorflowsui aims to bridge Web2-based DL/ML models to a fully decentralized on-chain environment, enhancing the transparency, reproducibility, auditability, operational efficiency, and connectivity of AI models. By providing trust and predictability for otherwise untestable "black box" AI systems, Tensorflowsui delivers effective risk management solutions for AI models.
Our fully on-chain inference approach:
- Ensures objective reliability of the model's predictions
- Defines algorithmic ownership on the blockchain
- Encourages industrial-scale mass adoption of on-chain agents
Tensorflowsui's ultimate goal is to democratize AI by making deep learning (DL) and machine learning (ML) models verifiable, trustworthy, and easily integrable into decentralized applications (dApps) and beyond.
- **Fully On-Chain Execution**: Run DL/ML models directly on-chain for maximum transparency and security.
- **High Trust & Predictability**: Eliminate the risks of "black box" AI through verifiable inference processes that can be audited and reproduced.
- **Ownership & Accountability**: Clearly define algorithmic ownership, enabling fair usage rights and licensing on the blockchain.
- **Mass Adoption**: Pave the way for on-chain AI agents to be deployed in real-world industrial scenarios, thanks to transparent and provable computations.
The `@tensorflowSui_lib` provides core functionality for interpreting and executing Web2 AI models (trained with frameworks like TensorFlow and PyTorch) in a fully on-chain environment.
**`tensor.move`**
- Implements tensor operations with fixed-point arithmetic
- Handles signed number computations
- Supports multi-dimensional tensor operations
- Provides optimized matrix multiplication
- Includes batch processing capabilities
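To make the fixed-point arithmetic concrete, here is a minimal off-chain sketch in TypeScript of a scaled-integer matrix multiply. This is not the `tensor.move` API; the power-of-ten scale and the rescaling step are assumptions based on the feature list above.

```typescript
// Fixed-point matmul sketch: values are stored as scaled integers
// (value ≈ magnitude / 10^SCALE), so the math stays deterministic.
// The representation is assumed for illustration, not tensor.move's layout.
const SCALE = 2;                      // decimal digits of precision
const FACTOR = 10n ** BigInt(SCALE);

function toFixed(x: number): bigint {
  return BigInt(Math.round(x * Number(FACTOR))); // bigint keeps negatives too
}

// C = A (m×k) · B (k×n), all entries fixed-point bigints.
function matmulFixed(a: bigint[][], b: bigint[][]): bigint[][] {
  const m = a.length, k = b.length, n = b[0].length;
  const c: bigint[][] = Array.from({ length: m }, () => Array(n).fill(0n));
  for (let i = 0; i < m; i++) {
    for (let j = 0; j < n; j++) {
      let acc = 0n;
      for (let t = 0; t < k; t++) acc += a[i][t] * b[t][j];
      c[i][j] = acc / FACTOR; // each product carries SCALE twice; rescale once
    }
  }
  return c;
}

// Example: [[1.5, 2.0]] · [[0.5], [1.0]] = [[2.75]]
const A = [[toFixed(1.5), toFixed(2.0)]];
const B = [[toFixed(0.5)], [toFixed(1.0)]];
console.log(matmulFixed(A, B)); // [[275n]] → 2.75 at SCALE=2
```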
**`graph.move`**

Handles AI model structure interpretation, including:
- Dense (fully connected) layers
- ReLU and Softmax activations
- Batch normalization
- Model weight management
- Partial computation support for gas optimization
**Fixed-point Precision**
- Configurable scale parameter for numerical accuracy
- Prevents floating-point inconsistencies on-chain
- Maintains computational stability
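A small sketch of how the configurable scale trades precision for integer range; the sign-magnitude layout is an assumption (on-chain unsigned integers cannot hold negatives directly), not the library's actual representation.

```typescript
// Sketch: configurable-scale fixed-point encoding with an explicit sign
// flag. The (sign, magnitude) layout is assumed for illustration.
interface Fixed { sign: 0 | 1; mag: bigint } // value = (-1)^sign · mag / 10^scale

function encode(x: number, scale: number): Fixed {
  const mag = BigInt(Math.round(Math.abs(x) * 10 ** scale));
  return { sign: x < 0 ? 1 : 0, mag };
}

function decode(f: Fixed, scale: number): number {
  return (f.sign ? -1 : 1) * Number(f.mag) / 10 ** scale;
}

// A larger scale keeps more digits but consumes more integer range:
console.log(encode(-0.125, 2)); // { sign: 1, mag: 13n }  → -0.13 (rounded)
console.log(encode(-0.125, 3)); // { sign: 1, mag: 125n } → -0.125 (exact)
```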
**Gas Optimization**
- Chunked computation support
- Batch processing capabilities
- Memory-efficient tensor operations
- Partial state management
**Safety Measures**
- Input validation and dimension checking
- Overflow prevention
- Scale consistency verification
- Memory safety through Move's ownership system
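For illustration, a TypeScript sketch of the kinds of checks listed above; the specific checks and the u64 bound are assumptions about what the library validates, not its actual code.

```typescript
// Illustrative pre-flight checks before a fixed-point matmul.
// U64_MAX mirrors Move's u64 bound; the check set is assumed.
const U64_MAX = 2n ** 64n - 1n;

function checkMatmul(
  aShape: [number, number], bShape: [number, number],
  aScale: number, bScale: number,
): void {
  // Dimension check: inner dimensions must agree.
  if (aShape[1] !== bShape[0])
    throw new Error(`shape mismatch: ${aShape[1]} vs ${bShape[0]}`);
  // Scale consistency: mixing scales would silently corrupt results.
  if (aScale !== bScale)
    throw new Error(`scale mismatch: ${aScale} vs ${bScale}`);
}

// Overflow prevention: reject sums that would exceed the u64 range.
function checkedAdd(x: bigint, y: bigint): bigint {
  const s = x + y;
  if (s > U64_MAX) throw new Error("overflow: sum exceeds u64");
  return s;
}
```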
The library offers three flexible inference approaches that trade off atomicity, cost, and efficiency:

- **One Transaction Inference**
  - All computations performed in a single transaction
  - Fully atomic execution
  - Higher gas costs due to accumulated computations
  - Best for simple models or when atomicity is critical
- **Programmable Transaction Block (PTB) Inference**
  - Leverages Sui's PTB functionality to split computations
  - Maintains atomic execution while reducing costs
  - Computations can be split by layers
  - Optimal balance between atomicity and efficiency
- **Split Transaction Inference**
  - Divides computation across multiple transactions
  - Uses partial state boxes for intermediate results
  - Lowest gas costs
  - Non-atomic execution
  - Recommended for use with Walrus for transaction-trajectory and output verification
These options can be mixed and matched within the same model to optimize for specific requirements. For example, you could use PTB inference for critical computations while using split transactions for less sensitive operations, allowing for a customized balance between security and efficiency.
Choose the inference option or combination that best matches your requirements for atomicity, cost efficiency, and execution speed.
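As a concrete illustration of PTB inference, here is a minimal sketch using the `@mysten/sui.js` TypeScript SDK. The package ID, module, and function names are placeholders; only the pattern of chaining per-layer Move calls inside one atomic block reflects the description above.

```typescript
import { SuiClient, getFullnodeUrl } from "@mysten/sui.js/client";
import { TransactionBlock } from "@mysten/sui.js/transactions";
import type { Ed25519Keypair } from "@mysten/sui.js/keypairs/ed25519";

// PTB inference sketch: one Move call per layer, all inside a single
// atomic transaction block. PACKAGE_ID and the `model::*` targets are
// hypothetical, not the published package's real API.
const PACKAGE_ID = "0x...";          // from packageId.txt after publishing

export async function ptbInference(
  modelId: string,
  input: number[],                   // fixed-point encoded input
  signer: Ed25519Keypair,
) {
  const client = new SuiClient({ url: getFullnodeUrl("testnet") });
  const tx = new TransactionBlock();

  // The first layer consumes the raw input...
  let acts = tx.moveCall({
    target: `${PACKAGE_ID}::model::compute_layer`,      // hypothetical
    arguments: [tx.object(modelId), tx.pure(input, "vector<u64>")],
  });
  // ...and each later layer consumes the previous call's result,
  // so the whole pipeline stays within one atomic PTB.
  for (const layer of [2, 3]) {
    acts = tx.moveCall({
      target: `${PACKAGE_ID}::model::apply_layer`,      // hypothetical
      arguments: [tx.object(modelId), tx.pure(layer, "u64"), acts],
    });
  }

  return client.signAndExecuteTransactionBlock({
    signer,
    transactionBlock: tx,
    options: { showEffects: true },
  });
}
```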
A decentralized system for ensuring data provenance and model integrity in AI/ML workflows using blockchain technology.
This project aims to establish a robust framework for maintaining digital provenance of both data assets and AI models in a decentralized environment. By leveraging blockchain technology, specifically the Sui blockchain, we ensure:
- Transparent tracking of data lineage
- Verifiable model execution
- Immutable record of training/test datasets
- Certified model inference results
The system consists of several key components working together.

**Walrus Server**
- Handles blockchain interactions and smart contract execution
- Manages digital signatures and provenance certification
- Provides verifiable execution receipts

Start the Go server that handles Walrus interactions:

```bash
cd walrus
go run .
```
**Model Publisher**
- Deploys AI models to the blockchain
- Manages model versioning and updates
- Handles model weight distribution
The Model Publisher simplifies the process of deploying Web2 AI models to the Sui blockchain:
- **Prepare Your Model**
  - Convert your `.h5` model to TensorFlow.js format (conversion guide: https://www.tensorflow.org/js/guide/conversion?hl=en):

    ```bash
    tensorflowjs_converter --input_format=keras /path/to/model.h5 /path/to/tfjs_model
    ```

  - Place the converted model in the `web2_models` directory
- **Configure Publishing**
  - Update `config.txt` with your settings:

    ```
    PRIVATE_KEY=your_private_key
    NETWORK=devnet   # or testnet/mainnet
    SCALE=2          # decimal precision for fixed-point conversion
    MODEL_PATH=./web2_models
    ```
- **Run the Publisher**

  ```bash
  cd modelPublisher
  node model_publish.js
  ```
The publisher will:
- Load and process your TensorFlow.js model
- Convert weights to fixed-point representation
- Auto-generate Move smart contracts
- Deploy to the specified Sui network
- Save the package ID for future reference
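Conceptually, the weight-conversion step looks like the following sketch using `@tensorflow/tfjs-node`; the output layout and helper names are assumptions for illustration, not the publisher's actual code.

```typescript
import * as tf from "@tensorflow/tfjs-node";

// Sketch of the publisher's conversion step: load a converted
// TensorFlow.js model and turn each float weight into a scaled integer
// plus a sign flag, ready to embed in generated Move code. The output
// shape here is assumed, not the real intermediate format.
const SCALE = 2; // must match SCALE in config.txt

async function extractFixedWeights(modelDir: string) {
  const model = await tf.loadLayersModel(`file://${modelDir}/model.json`);
  return model.getWeights().map((w) => {
    const values = Array.from(w.dataSync());
    return {
      shape: w.shape,
      signs: values.map((v) => (v < 0 ? 1 : 0)),
      magnitudes: values.map((v) => Math.round(Math.abs(v) * 10 ** SCALE)),
    };
  });
}

extractFixedWeights("./web2_models/tfjs_model").then((layers) =>
  console.log(`converted ${layers.length} weight tensors`),
);
```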
**Model User**
- Performs model inference
- Verifies execution results
- Interacts with the blockchain for provenance tracking
The Model User component provides a CLI for interacting with deployed models and performing inference. When using a model from a Sui `packageId`, it:
- Downloads input data from Walrus with digital provenance
- Performs fully on-chain inference
- Provides transaction verification
- **Setup**

  ```bash
  cd modelUser
  npm install
  ```
- **Configure**
  - The package ID from the published model will be automatically loaded from `packageId.txt`
  - Update `config.txt` with your settings:

    ```
    PRIVATE_KEY=your_private_key
    NETWORK=devnet   # or testnet/mainnet
    ```
- **Run Inference**

  ```bash
  node inference.js
  ```
The CLI supports three commands:
- `init`: Initialize the model state
- `load input`: Load input data from the Walrus server
- `run`: Execute the inference process using mixed inference options (split transaction and PTB)
**Hybrid Inference Process**

The implementation uses a hybrid approach combining two inference methods (see the sketch after this list):

- **Split Transaction for the input layer → layer 1**
  - Divides computation into 16 partitions for efficiency
  - Provides progress visualization
  - Optimizes gas costs for heavy computations
- **PTB (Programmable Transaction Block) for layers 2 → 3 → output**
  - Maintains atomic execution
  - Processes remaining layers in a single transaction
  - Outputs final classification result
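A minimal sketch of this orchestration with the `@mysten/sui.js` SDK, assuming hypothetical Move entry points; only the sixteen-partition split followed by a single atomic PTB mirrors the process described above.

```typescript
import { SuiClient, getFullnodeUrl } from "@mysten/sui.js/client";
import { TransactionBlock } from "@mysten/sui.js/transactions";
import type { Ed25519Keypair } from "@mysten/sui.js/keypairs/ed25519";

// Hybrid orchestration sketch. The Move targets are hypothetical; the
// shape is the point: 16 independent partition transactions for
// input → layer 1, then one atomic PTB for layers 2 → 3 → output.
const PACKAGE_ID = "0x...";
const client = new SuiClient({ url: getFullnodeUrl("testnet") });

export async function hybridInference(stateId: string, signer: Ed25519Keypair) {
  // Phase 1: split transactions, one per partition (non-atomic, cheap).
  for (let p = 0; p < 16; p++) {
    const tx = new TransactionBlock();
    tx.moveCall({
      target: `${PACKAGE_ID}::model::compute_partition`, // hypothetical
      arguments: [tx.object(stateId), tx.pure(p, "u64")],
    });
    await client.signAndExecuteTransactionBlock({ signer, transactionBlock: tx });
    console.log(`partition ${p + 1}/16 done`);
  }

  // Phase 2: one PTB covering the remaining layers (atomic).
  const ptb = new TransactionBlock();
  for (const layer of [2, 3]) {
    ptb.moveCall({
      target: `${PACKAGE_ID}::model::compute_layer`,     // hypothetical
      arguments: [ptb.object(stateId), ptb.pure(layer, "u64")],
    });
  }
  return client.signAndExecuteTransactionBlock({ signer, transactionBlock: ptb });
}
```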
**Walrus Integration**
- All transaction trajectories are automatically uploaded to Walrus
- Provides verifiable proof of computation
- Access results via Walrus Explorer: `https://walruscan.com/testnet/account/{blobId}`
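For orientation, a sketch of storing and retrieving a transaction trajectory over Walrus's HTTP API; the publisher/aggregator hosts and `/v1/blobs` paths are assumptions, so consult the Walrus documentation for the current endpoints.

```typescript
// Sketch: upload a transaction trajectory to Walrus via a public
// publisher, then read it back through an aggregator. Hosts and paths
// below are assumed; check the Walrus docs before use.
const PUBLISHER = "https://publisher.walrus-testnet.walrus.space";
const AGGREGATOR = "https://aggregator.walrus-testnet.walrus.space";

async function storeTrajectory(trajectory: object): Promise<string> {
  const res = await fetch(`${PUBLISHER}/v1/blobs`, {
    method: "PUT",
    body: JSON.stringify(trajectory),
  });
  const info = await res.json();
  // Assumed response shape: a new blob or an already-certified one.
  return info.newlyCreated?.blobObject?.blobId ?? info.alreadyCertified?.blobId;
}

async function readTrajectory(blobId: string): Promise<object> {
  const res = await fetch(`${AGGREGATOR}/v1/blobs/${blobId}`);
  return res.json();
}
```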
**Input Data Generator**
- Handles input data generation and preprocessing
- Converts traditional ML inputs to blockchain-compatible format
- Manages data transformation pipelines
**Hybrid Inference Architecture**
Tensorflowsui supports both on-chain and off-chain inference through a hybrid architecture:
- **On-Chain Inference (Small Models)**
  - Fully decentralized execution for lightweight models
  - Complete transparency and auditability
  - Suitable for:
    - Classification tasks
    - Simple neural networks
    - Time-critical applications
- **Atoma Network Integration (Large Models)**
  - Leverages Atoma Network for large model inference
  - Maintains connection to Sui blockchain for verification
  - Supports models like:
    - LLMs (e.g., Llama-3.3-70B)
    - Complex deep learning architectures
    - Resource-intensive tasks
The `atoma` directory provides tools for generating model inputs using Atoma Network:

```bash
cd atoma
npm install
node input_data_generation_with_atoma.js
```
Key components:
- `input_data_generation_with_atoma.js`: Generates inputs using Atoma's LLM capabilities
- `convert_data_for_web3_input.js`: Converts generated data for on-chain use
- `atoma_converted.json`: Stores processed input data
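For orientation, a sketch of generating an input through Atoma's OpenAI-compatible chat-completions API; the endpoint, model identifier, and response shape are assumptions, so verify them against Atoma's documentation.

```typescript
// Sketch: ask an Atoma-hosted LLM to produce a model input, keeping the
// text for downstream conversion. Endpoint and model id are assumed.
const ATOMA_URL = "https://api.atoma.network/v1/chat/completions";

async function generateInput(prompt: string): Promise<string> {
  const res = await fetch(ATOMA_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.ATOMA_API_KEY}`,
    },
    body: JSON.stringify({
      model: "meta-llama/Llama-3.3-70B-Instruct", // assumed model id
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```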
This hybrid approach enables:
- Scalable inference for both small and large models
- Decentralized verification of model outputs
- Flexible deployment options based on model size and requirements
- **Data Provenance**
  - Track data lineage from source to inference
  - Verify data integrity at each step
  - Maintain immutable audit trails
- **Model Integrity**
  - On-chain model execution
  - Verifiable computation results
  - Transparent model updates
- **Decentralized Storage**
  - Distributed data storage
  - Content-addressed blob storage
  - Efficient data retrieval
- **Smart Contract Integration**
  - Move language implementation
  - Fixed-point arithmetic for deterministic computation
  - Gas-optimized operations

- **For Data Scientists**
  - Verifiable training datasets
  - Transparent model development
  - Reproducible results
- **For Model Users**
  - Trusted inference results
  - Verified model lineage
  - Transparent execution
- **For Stakeholders**
  - Auditable AI systems
  - Compliance tracking
  - Risk management
Our development roadmap outlines planned expansions across three key areas:
- TypeScript support (Available)
- Python integration
- Auto-publish capabilities
- Adjustable calling mechanisms
- Floating & negative weights support
- Dense (Linear) Layer implementation
- ReLU Activation
- Future additions:
  - More activation functions
  - Normalization layers
  - CNN support
- Ownership management
- Monetization features
- Smart contract connectivity
- Platform management tools
- Planned integrations:
  - Transformer architectures
  - Matrix multiplication optimizations
  - Advanced mapping tools
  - Mobile pre-trained models
  - Gradient/backprop support
  - Transfer learning capabilities
We want to co-develop with those who share our vision.
Feel free to reach out at any time via the link or email below:
https://www.opengraphlabs.xyz/
**Junhwan Kwon, Ph.D. (OpenGraph)**
- LinkedIn: Jun-hwan Kwon
- Email: gr0442@gmail.com
- Telegram: @BradleyKwon

**Yoongdoo Noh (OpenGraph)**
- LinkedIn: yoong-doo Noh
- Email: yoongdoo0819@gmail.com

**Julia Kim (OpenGraph)**
- LinkedIn: Julia Kim
- Email: jooho0129@gmail.com

**Jarry Han (Sui Ambassador, Sui Korea)**
- Email: styu12@naver.com

And the Sui Korea community, led by Harrison Kim.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
**Transaction (modelPublisher/publish_model.js)**

**Initialization (modelUser/inference.js)**

**Created Objects**

**Inference Execution**

- Input Layer → Layer 1 (Split Transactions)

View 16 Split Transactions:
| Partition | Transaction Link |
|---|---|
| 1 | View on SuiScan |
| 2 | View on SuiScan |
| 3 | View on SuiScan |
| 4 | View on SuiScan |
| 5 | View on SuiScan |
| 6 | View on SuiScan |
| 7 | View on SuiScan |
| 8 | View on SuiScan |
| 9 | View on SuiScan |
| 10 | View on SuiScan |
| 11 | View on SuiScan |
| 12 | View on SuiScan |
| 13 | View on SuiScan |
| 14 | View on SuiScan |
| 15 | View on SuiScan |
| 16 | View on SuiScan |
- Layer 1 → Layer 2 → Output (PTB Transaction)