A lightweight and intuitive Node.js client for the Paxsenix AI API.
Easily integrate AI-powered chat completions, streaming responses, model listing, and more, right into your app.
Free to use with a rate limit of 3 requests per minute.
Need more? API keys with higher limits are on the way! :)
## Table of Contents

- [Features](#features)
- [Installation](#installation)
- [Usage](#usage)
- [Error Handling](#error-handling)
- [Rate Limits](#rate-limits)
- [Upcoming Features](#upcoming-features)
- [License](#license)
- [Feedback & Contributions](#feedback--contributions)
## Features

- **Chat Completions**: Generate AI-powered responses with ease
- **Streaming Responses**: Get output in real time as the AI types
- **Model Listing**: Retrieve available model options
- **Planned**: Image generation, embeddings, and more (coming soon)
## Installation

```bash
npm install @paxsenix/ai
```
## Usage

### Initialization

```js
import PaxSenixAI from '@paxsenix/ai';

// Without an API key (free access)
const paxsenix = new PaxSenixAI();

// With an API key
const paxsenix = new PaxSenixAI('YOUR_API_KEY');

// Advanced usage
const paxsenix = new PaxSenixAI('YOUR_API_KEY', {
  timeout: 30000,   // request timeout in ms
  retries: 3,       // number of retry attempts
  retryDelay: 1000  // delay between retries in ms
});
```
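If you'd rather keep the key out of your source, you can read it from an environment variable. A minimal sketch; the `PAXSENIX_AI_KEY` variable name is just a convention chosen for this example, not something the library requires:

```js
// Hypothetical env var name; any name works since you pass the key yourself.
const paxsenix = process.env.PAXSENIX_AI_KEY
  ? new PaxSenixAI(process.env.PAXSENIX_AI_KEY)
  : new PaxSenixAI(); // falls back to free-tier access
```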
### Chat Completions

```js
const response = await paxsenix.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a sarcastic assistant.' },
    { role: 'user', content: 'Wassup beach' }
  ],
  temperature: 0.7,
  max_tokens: 100
});

console.log(response.choices[0].message.content);
console.log('Tokens used:', response.usage.total_tokens);
```
Or use the resource-specific API:
```js
const chatResponse = await paxsenix.Chat.createCompletion({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a sarcastic assistant.' },
    { role: 'user', content: 'Who tf r u?' }
  ]
});

console.log(chatResponse.choices[0].message.content);
```
### Streaming Responses

```js
// Simple callback approach
await paxsenix.Chat.streamCompletion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }]
}, (chunk) => console.log(chunk.choices[0]?.delta?.content || ''));

// With error and completion handlers
await paxsenix.Chat.streamCompletion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }]
},
  (chunk) => console.log(chunk.choices[0]?.delta?.content || ''),
  (error) => console.error('Error:', error),
  () => console.log('Done!')
);

// Using an async generator (recommended)
for await (const chunk of paxsenix.Chat.streamCompletionAsync({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }]
})) {
  const content = chunk.choices?.[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
```
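The async generator also makes it easy to buffer the whole reply before using it. A minimal sketch that reuses the chunk shape shown above:

```js
// Accumulate streamed tokens into a single string.
let fullText = '';
for await (const chunk of paxsenix.Chat.streamCompletionAsync({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }]
})) {
  fullText += chunk.choices?.[0]?.delta?.content ?? '';
}
console.log(fullText);
```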
### Listing Models

```js
const models = await paxsenix.listModels();
console.log(models.data);
```
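If the response follows the OpenAI-style shape (an assumption; only `models.data` is shown above), each entry carries an `id` you can list directly:

```js
// Assumes each entry has an `id` field, OpenAI-style (not confirmed above).
const modelIds = models.data.map((model) => model.id);
console.log(modelIds);
```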
## Error Handling

```js
try {
  const response = await paxsenix.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }]
  });
} catch (error) {
  console.error('Status:', error.status);
  console.error('Message:', error.message);
  console.error('Data:', error.data);
}
```
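A common pattern is to branch on the status code, for example to back off when you hit the rate limit. A sketch, assuming the API signals rate limiting with HTTP 429 (the usual convention, not confirmed above):

```js
try {
  const response = await paxsenix.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }]
  });
  console.log(response.choices[0].message.content);
} catch (error) {
  if (error.status === 429) {
    // Rate limited: wait and retry, or surface a friendly message.
    console.warn('Rate limited, try again in a minute.');
  } else {
    throw error; // anything else is unexpected; rethrow
  }
}
```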
## Rate Limits

- Free access allows up to 3 requests per minute (see the throttle sketch below).
- Higher rate limits and API key support are planned.
- API keys will offer better stability and priority access.
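To stay under the free-tier limit, you can space requests out client-side. A minimal sketch (the 21-second delay is just 60 s / 3 requests plus a small buffer):

```js
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

for (const prompt of ['First question', 'Second question', 'Third question']) {
  const response = await paxsenix.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: prompt }]
  });
  console.log(response.choices[0].message.content);
  await sleep(21000); // ~3 requests per minute
}
```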
## Upcoming Features

- Image Generation
- Embeddings Support
## License

MIT License. See [LICENSE](LICENSE) for full details. :)
## Feedback & Contributions

Pull requests and issues are welcome.
Feel free to fork, submit PRs, or just star the repo if it's helpful :P