
@paxsenix/ai

A lightweight and intuitive Node.js client for the Paxsenix AI API.
Easily integrate AI-powered chat completions, streaming responses, model listing, and more into your app.

Free to use, with a rate limit of 3 requests per minute.
Need more? Use an API key for higher limits! :)


📋 Table of Contents

  • 🚀 Features
  • 📦 Installation
  • 📖 Usage
  • 🛠️ Error Handling
  • ⏱️ Rate Limits
  • 🚧 Upcoming Features
  • 📜 License
  • 💬 Feedback & Contributions

🚀 Features

  • Chat Completions – Generate AI-powered responses with ease
  • Streaming Responses – Get output in real time as the AI types
  • Model Listing – Retrieve available model options
  • Planned – Image generation, embeddings, and more

📦 Installation

npm install @paxsenix/ai

📖 Usage

Initialize the Client

import PaxSenixAI from '@paxsenix/ai';

// Without API key (free access)
const paxsenix = new PaxSenixAI();

// With API key
const paxsenix = new PaxSenixAI('YOUR_API_KEY');

// Advanced usage
const paxsenix = new PaxSenixAI('YOUR_API_KEY', {
  timeout: 30000, // Request timeout in ms
  retries: 3, // Number of retry attempts
  retryDelay: 1000 // Delay between retries in ms
});
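
Hardcoding keys risks leaking them into version control. A minimal sketch that reads the key from an environment variable instead (PAXSENIX_API_KEY is a hypothetical name of your choosing, not something the library requires):

import PaxSenixAI from '@paxsenix/ai';

// PAXSENIX_API_KEY is a hypothetical variable name; export it in your shell or deployment.
const paxsenix = new PaxSenixAI(process.env.PAXSENIX_API_KEY);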

Chat Completions (Non-Streaming)

const response = await paxsenix.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a sarcastic assistant.' },
    { role: 'user', content: 'Wassup beach' }
  ],
  temperature: 0.7,
  max_tokens: 100
});

console.log(response.choices[0].message.content);
console.log('Tokens used:', response.usage.total_tokens);

Or, using the resource-specific API:

const chatResponse = await paxsenix.Chat.createCompletion({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a sarcastic assistant.' },
    { role: 'user', content: 'Who tf r u?' }
  ]
});

console.log(chatResponse.choices[0].message.content);
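
The API itself is stateless, so multi-turn conversations simply resend the accumulated message history on each request. A minimal sketch, using only the createChatCompletion call shown above:

const history = [
  { role: 'system', content: 'You are a sarcastic assistant.' }
];

async function ask(question) {
  history.push({ role: 'user', content: question });
  const response = await paxsenix.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: history
  });
  const reply = response.choices[0].message.content;
  // Keep the assistant's reply so the next turn has the full context
  history.push({ role: 'assistant', content: reply });
  return reply;
}

console.log(await ask('Who are you?'));
console.log(await ask('What did I just ask you?'));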

Chat Completions (Streaming)

// Simple callback approach
await paxsenix.Chat.streamCompletion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }] 
}, (chunk) => console.log(chunk.choices[0]?.delta?.content || '')
);

// With error handling
await paxsenix.Chat.streamCompletion({ 
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'user', content: 'Hello!' }
  ] 
}, (chunk) => console.log(chunk.choices[0]?.delta?.content || ''),
  (error) => console.error('Error:', error),
  () => console.log('Done!')
);

// Using async generator (recommended)
for await (const chunk of paxsenix.Chat.streamCompletionAsync({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'user', content: 'Hello!' }
  ]
})) {
  const content = chunk.choices?.[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
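
If you also need the complete reply once streaming finishes (for logging or conversation history), accumulate the deltas as they arrive. A small sketch building on the async generator above:

let full = '';
for await (const chunk of paxsenix.Chat.streamCompletionAsync({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }]
})) {
  const content = chunk.choices?.[0]?.delta?.content;
  if (content) {
    process.stdout.write(content); // print each token as it arrives
    full += content;               // and collect the whole message
  }
}
console.log('\nFull reply:', full);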

List Available Models

const models = await paxsenix.listModels();
console.log(models.data);
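
Assuming the listing follows the OpenAI-style shape suggested by models.data above, with each entry carrying an id field (an assumption, not documented here), extracting the model names looks like this:

const models = await paxsenix.listModels();
// `m.id` assumes OpenAI-style model objects; adjust if the API returns a different shape.
const ids = models.data.map((m) => m.id);
console.log(ids);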

🛠️ Error Handling

try {
  const response = await paxsenix.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }]
  });
} catch (error) {
  console.error('Status:', error.status);
  console.error('Message:', error.message);
  console.error('Data:', error.data);
}

⏱️ Rate Limits

  • Free access allows up to 3 requests per minute; if you hit the limit, back off and retry (see the sketch below).
  • Higher rate limits and API key support are planned.
  • API keys will offer better stability and priority access.
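
A minimal retry-with-backoff sketch for rate-limited requests. It assumes such requests fail with error.status === 429, the conventional "Too Many Requests" code; the exact status the API returns is an assumption:

async function withBackoff(fn, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      // 429 is assumed here; adjust if the API signals rate limits differently.
      if (error.status !== 429 || i === attempts - 1) throw error;
      await new Promise((resolve) => setTimeout(resolve, 2 ** i * 1000)); // 1s, 2s, 4s...
    }
  }
}

const response = await withBackoff(() => paxsenix.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }]
}));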

🚧 Upcoming Features

  • Image Generation
  • Embeddings Support

📜 License

MIT License. See LICENSE for full details. :)


💬 Feedback & Contributions

Pull requests and issues are welcome.
Feel free to fork, submit PRs, or just star the repo if it's helpful :P