A lightweight Chrome extension that tracks token usage in AI chat platforms like ChatGPT, providing real-time context window monitoring.
- Real-time Token Tracking: Monitor token usage as you chat
- Multiple Platform Support: Currently supports ChatGPT (more platforms coming soon)
- Smart Model Detection: Automatically detects the current AI model
- Visual Progress Indicator: Color-coded progress bar showing context usage
- Lightweight & Fast: Optimized bundle with dynamic loading
- Privacy-First: All processing happens locally in your browser
TokenFlow automatically adapts to your platform's theme (light/dark mode) for optimal visibility and user experience.
The extension uses intelligent theme detection with multiple fallback layers:
- Platform Storage: Reads theme preferences from platform-specific `localStorage` keys
- DOM Attributes: Checks `data-theme` attributes on the document
- CSS Classes: Looks for theme-related CSS classes (`.light`, `.dark`)
- System Preference: Falls back to your browser/OS dark mode setting (a sketch of this chain follows the list)
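The fallback chain can be pictured with a short sketch. This is illustrative only: `resolveTheme` is a hypothetical helper, not the extension's actual API, and real configurations use per-platform keys and match rules.

```typescript
type Theme = 'light' | 'dark';

function resolveTheme(storageKey: string): Theme {
  // 1. Platform storage: platform-specific localStorage key
  const stored = localStorage.getItem(storageKey);
  if (stored === 'dark' || stored === 'light') return stored;

  // 2. DOM attribute: data-theme on the document element
  const attr = document.documentElement.getAttribute('data-theme');
  if (attr === 'dark' || attr === 'light') return attr;

  // 3. CSS classes: theme-related classes such as .dark / .light
  const cls = document.documentElement.classList;
  if (cls.contains('dark')) return 'dark';
  if (cls.contains('light')) return 'light';

  // 4. System preference: browser/OS dark mode setting
  return window.matchMedia('(prefers-color-scheme: dark)').matches
    ? 'dark'
    : 'light';
}
```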
Different platforms store theme preferences differently:
- ChatGPT: Uses `localStorage['theme']` with the values `'dark'` or `'light'`
- Gemini: Uses `localStorage['Bard-Color-Theme']` with substring matching
- Other platforms: Configurable exact or substring matching (see the matching sketch below)
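Exact versus substring matching might look like the following sketch. The field names mirror the `themeConfig` shape shown in the platform configuration example later in this README, but the `matchesDark` helper and the sample stored values are hypothetical.

```typescript
interface ThemeMatchConfig {
  storageKey: string;
  darkValues?: string[];   // exact matches for dark theme
  darkContains?: string[]; // substring matches for dark theme
}

function matchesDark(value: string, config: ThemeMatchConfig): boolean {
  if (config.darkValues?.includes(value)) return true;
  return config.darkContains?.some((s) => value.includes(s)) ?? false;
}

// ChatGPT-style exact match:
matchesDark('dark', { storageKey: 'theme', darkValues: ['dark'] }); // true
// Gemini-style substring match (the stored value here is made up):
matchesDark('bard-dark-theme', {
  storageKey: 'Bard-Color-Theme',
  darkContains: ['dark'],
}); // true
```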
The extension automatically detects theme changes in real-time, including:
- Manual theme switching within the platform
- System dark/light mode changes
- Theme synchronization across browser tabs (see the listener sketch below)
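Here is a sketch of how those three kinds of changes could be observed, building on the hypothetical `resolveTheme` helper above; the extension's actual wiring may differ.

```typescript
function watchTheme(storageKey: string, onChange: (theme: Theme) => void): void {
  // Manual theme switching: watch attribute/class changes on <html>
  new MutationObserver(() => onChange(resolveTheme(storageKey))).observe(
    document.documentElement,
    { attributes: true, attributeFilter: ['data-theme', 'class'] }
  );

  // System dark/light mode changes
  window
    .matchMedia('(prefers-color-scheme: dark)')
    .addEventListener('change', () => onChange(resolveTheme(storageKey)));

  // Cross-tab synchronization: localStorage writes in other tabs
  // fire 'storage' events in this one
  window.addEventListener('storage', (e) => {
    if (e.key === storageKey) onChange(resolveTheme(storageKey));
  });
}
```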
- Clone the repository:

  ```bash
  git clone https://github.com/ivelin-web/tokenflow.git
  cd tokenflow
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Build the extension:

  ```bash
  npm run build
  ```

- Load in Chrome:
  - Open Chrome and go to `chrome://extensions/`
  - Enable "Developer mode"
  - Click "Load unpacked" and select the `dist` folder
- ✅ ChatGPT (chat.openai.com) - Full support
- 🚧 Claude (claude.ai) - Coming soon
- 🚧 Gemini (gemini.google.com) - Coming soon
- 🚧 Grok (x.ai) - Coming soon
- GPT-4o (128k tokens)
- GPT-4.1 (128k tokens)
- GPT-4.1-mini (128k tokens)
- o3 (128k tokens)
- o3-pro (128k tokens)
- o4-mini (128k tokens)
- o4-mini-high (128k tokens)
- GPT-4.5 (128k tokens)
```
src/
├── content.ts                 # Main content script
├── options.ts                 # Extension options page
├── types.ts                   # TypeScript type definitions
├── tenants/                   # Platform-specific configurations
│   ├── index.ts               # Platform detection and management
│   ├── chatgpt.ts             # ChatGPT configuration
│   ├── claude.ts              # Claude configuration (stub)
│   ├── gemini.ts              # Gemini configuration (stub)
│   └── grok.ts                # Grok configuration (stub)
├── tokenizers/                # Token counting implementations
│   ├── index.ts               # Tokenizer selection
│   ├── gpt.ts                 # GPT tokenizer (dynamic import)
│   └── ...                    # Other tokenizers
└── utils/                     # Utility classes
    ├── contextCalculator.ts   # Token calculation logic
    ├── platformManager.ts     # Platform management
    ├── uiComponent.ts         # UI rendering
    └── constants.ts           # Configuration constants
```
- Create a new configuration file in `src/tenants/`:
```typescript
// src/tenants/newplatform.ts
import { Platform, PlatformConfig, TokenizerType } from '../types';

export const newPlatformConfig: PlatformConfig = {
  platform: Platform.NewPlatform,
  models: {
    'model-name': {
      name: 'Model Display Name',
      maxTokens: 100000,
      tokenizerType: TokenizerType.NewPlatform,
    },
  },
  conversationSelector: '[data-conversation]', // CSS selector for messages
  modelSelector: '[data-model-selector]',      // CSS selector for model picker
  defaultModel: 'model-name',
  themeConfig: {
    storageKey: 'theme',    // localStorage key for theme
    darkValues: ['dark'],   // Exact matches for dark theme
    lightValues: ['light'], // Exact matches for light theme
    // Optional: substring matching
    // darkContains: ['dark'],   // Substring matches for dark theme
    // lightContains: ['light'], // Substring matches for light theme
  },
  modelPatterns: [
    { text: 'model-name', model: 'model-name' },
  ],
};
```
- Add the platform to `src/types.ts`:
```typescript
export enum Platform {
  // ... existing platforms
  NewPlatform = 'newplatform',
}
```
- Update `src/tenants/index.ts` to include the new platform (a rough sketch follows).
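As a rough guide, the registration might look like this. The shape of `src/tenants/index.ts` shown here is hypothetical, as are the import names and hostnames; adapt it to the file's existing structure.

```typescript
// Hypothetical shape of src/tenants/index.ts; adapt to the real file.
import { PlatformConfig } from '../types';
import { chatGptConfig } from './chatgpt';         // existing import (name assumed)
import { newPlatformConfig } from './newplatform'; // the new config

// Map hostnames to configurations; hostnames here are illustrative
const configsByHost: Record<string, PlatformConfig> = {
  'chat.openai.com': chatGptConfig,
  'newplatform.example.com': newPlatformConfig,
};

export function detectPlatform(hostname: string): PlatformConfig | undefined {
  return configsByHost[hostname];
}
```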
- `npm run build` - Production build
- `npm run dev` - Development build with watch mode
- Use TypeScript for all new code
- Follow the existing naming conventions
- Keep functions small and focused
- Add JSDoc comments for public APIs
- Use async/await for asynchronous operations (see the example below)
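A short example that follows these guidelines at once: TypeScript, a small focused function, a JSDoc comment on the public API, and async/await. The `loadTokenizer` helper is made up purely for illustration.

```typescript
// Hypothetical tokenizer loader used only for this example
declare function loadTokenizer(): Promise<{ encode(text: string): number[] }>;

/**
 * Counts tokens for the given text using the active platform's tokenizer.
 * @param text - Raw conversation text to measure
 * @returns The total token count
 */
export async function countConversationTokens(text: string): Promise<number> {
  const tokenizer = await loadTokenizer();
  return tokenizer.encode(text).length;
}
```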
The extension uses a modular architecture:
- Platform Manager: Detects the current platform and loads appropriate configuration
- Context Calculator: Handles token counting using platform-specific tokenizers
- UI Component: Renders the token meter with real-time updates
- Dynamic Loading: Tokenizers are loaded on-demand to reduce initial bundle size (sketched below)
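The dynamic-loading idea, as a minimal sketch; the `countTokens` export and module path are assumptions rather than the extension's exact interface.

```typescript
// On-demand loading: the heavy tokenizer module is imported only when
// a count is first requested.
let countGpt: ((text: string) => number) | null = null;

async function countGptTokens(text: string): Promise<number> {
  if (!countGpt) {
    // Fetches the large tokenizer chunk on first use, keeping the
    // initial content script small
    const mod = await import('./tokenizers/gpt');
    countGpt = mod.countTokens;
  }
  return countGpt(text);
}
```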
- Initial bundle size: ~18.4 kB (main content script)
- Tokenizer loaded dynamically: ~1.7 MB (only when needed)
- Memory usage: < 10 MB
- CPU impact: Minimal (debounced updates; sketched below)
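The debounced-update pattern looks roughly like this sketch: DOM mutations are coalesced so token counting runs at most once per quiet period. The 300 ms delay and the observer target are illustrative, not the extension's actual values.

```typescript
function debounce<T extends (...args: unknown[]) => void>(fn: T, ms: number): T {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return ((...args: unknown[]) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  }) as T;
}

const recount = debounce(() => {
  // re-run token counting for the visible conversation
}, 300);

// Rapid bursts of conversation mutations trigger a single recount
new MutationObserver(recount).observe(document.body, {
  childList: true,
  subtree: true,
});
```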
- No data is sent to external servers
- All token counting happens locally
- No tracking or analytics
- Open source and auditable
- Fork the repository
- Create a feature branch: `git checkout -b feature/new-feature`
- Make your changes following the code style guidelines
- Test thoroughly on the target platform
- Submit a pull request
MIT License - see LICENSE file for details
For issues and feature requests, please use the GitHub issue tracker.