Initial commit: Rondevu signaling server

Open signaling and tracking server for peer discovery in distributed P2P applications.

Features:
- REST API for WebRTC peer discovery and signaling
- Origin-based session isolation
- Multiple storage backends (SQLite, in-memory, Cloudflare KV)
- Docker and Cloudflare Workers deployment support
- Automatic session cleanup and expiration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-02 14:32:25 +01:00
commit 82c0e8b065
18 changed files with 3346 additions and 0 deletions

12
.dockerignore Normal file

@@ -0,0 +1,12 @@
node_modules
dist
*.log
.git
.gitignore
.env
README.md
API.md
.DS_Store
*.db
*.db-journal
data/

14
.gitignore vendored Normal file

@@ -0,0 +1,14 @@
node_modules/
dist/
*.log
.DS_Store
.env
*.db
*.db-journal
data/
# Wrangler / Cloudflare Workers
.wrangler/
.dev.vars
wrangler.toml.backup
wrangler.toml

428
API.md Normal file

@@ -0,0 +1,428 @@
# HTTP API
This API provides peer signaling and tracking endpoints for distributed peer-to-peer applications. It uses JSON request/response bodies with Origin-based session isolation.
All endpoints require an `Origin` header and accept `application/json` content type.
---
## Overview
Sessions are organized by:
- **Origin**: The HTTP Origin header (e.g., `https://example.com`) - isolates sessions by application
- **Topic**: A string identifier for grouping related peers (max 256 chars)
- **Info**: User-provided metadata (max 1024 chars) to uniquely identify each peer
This allows multiple peers from the same application (origin) to discover each other through topics while preventing duplicate connections by comparing the info field.
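In client code, a session as returned by the endpoints below can be modeled roughly like this (a TypeScript sketch; the field names come from the example responses in this document, while the type and helper names are our own):

```typescript
// Rough shape of a session as exposed by the API (inferred from the
// example responses in this document; not the server's internal type).
interface PeerSession {
  code: string;              // UUID session code
  info: string;              // peer-provided identifier (max 1024 chars)
  offer: string;             // opaque signaling data
  offerCandidates: string[]; // additional signaling data from the offerer
  createdAt: number;         // Unix timestamp, milliseconds
  expiresAt: number;         // createdAt + session timeout
}

// A session is only answerable while it has not expired.
function isLive(s: PeerSession, now: number = Date.now()): boolean {
  return s.expiresAt > now;
}
```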
---
## GET `/`
Lists all topics with the count of available peers for each (paginated). Returns only topics that have unanswered sessions.
### Request
**Headers:**
- `Origin: https://example.com` (required)
**Query Parameters:**
| Parameter | Type | Required | Default | Description |
|-----------|--------|----------|---------|---------------------------------|
| `page` | number | No | `1` | Page number (starting from 1) |
| `limit` | number | No | `100` | Results per page (max 1000) |
### Response
**Content-Type:** `application/json`
**Success (200 OK):**
```json
{
  "topics": [
    {
      "topic": "my-room",
      "count": 3
    },
    {
      "topic": "another-room",
      "count": 1
    }
  ],
  "pagination": {
    "page": 1,
    "limit": 100,
    "total": 2,
    "hasMore": false
  }
}
```
**Notes:**
- Only returns topics from the same origin as the request
- Only includes topics with at least one unanswered session
- Topics are sorted alphabetically
- Counts only include unexpired sessions
- Maximum 1000 results per page
### Examples
**Default pagination (page 1, limit 100):**
```bash
curl -X GET http://localhost:3000/ \
  -H "Origin: https://example.com"
```
**Custom pagination:**
```bash
curl -X GET "http://localhost:3000/?page=2&limit=50" \
  -H "Origin: https://example.com"
```
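The `hasMore` flag in the response is derivable from the other pagination fields; sketched below (the helper name is ours, not part of the API):

```typescript
// hasMore is true when pages beyond the current one still contain topics.
function hasMore(page: number, limit: number, total: number): boolean {
  return page * limit < total;
}
```

For the example response above, `hasMore(1, 100, 2)` is `false`: the first page already covers both topics.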
---
## GET `/:topic/sessions`
Discovers available peers for a given topic. Returns all unanswered sessions from the requesting origin.
### Request
**Headers:**
- `Origin: https://example.com` (required)
**Path Parameters:**
| Parameter | Type | Required | Description |
|-----------|--------|----------|-------------------------------|
| `topic` | string | Yes | Topic identifier to query |
### Response
**Content-Type:** `application/json`
**Success (200 OK):**
```json
{
  "sessions": [
    {
      "code": "550e8400-e29b-41d4-a716-446655440000",
      "info": "peer-123",
      "offer": "<SIGNALING_DATA>",
      "offerCandidates": ["<SIGNALING_DATA>"],
      "createdAt": 1699564800000,
      "expiresAt": 1699565100000
    },
    {
      "code": "660e8400-e29b-41d4-a716-446655440001",
      "info": "peer-456",
      "offer": "<SIGNALING_DATA>",
      "offerCandidates": [],
      "createdAt": 1699564850000,
      "expiresAt": 1699565150000
    }
  ]
}
```
**Notes:**
- Only returns sessions from the same origin as the request
- Only returns sessions that haven't been answered yet
- Sessions are ordered by creation time (newest first)
- Use the `info` field to avoid answering your own offers
### Example
```bash
curl -X GET http://localhost:3000/my-room/sessions \
  -H "Origin: https://example.com"
```
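As the notes above suggest, a client should skip sessions whose `info` matches its own so it never answers its own offer. A minimal sketch (the helper name is ours; the fields mirror the response above):

```typescript
// Minimal slice of the session shape needed for filtering.
interface DiscoveredSession {
  code: string;
  info: string;
}

// Keep only sessions created by other peers.
function otherPeers<T extends DiscoveredSession>(sessions: T[], myInfo: string): T[] {
  return sessions.filter((s) => s.info !== myInfo);
}
```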
---
## POST `/:topic/offer`
Announces peer availability and creates a new session for the specified topic. Returns a unique session code (UUID) for other peers to connect to.
### Request
**Headers:**
- `Content-Type: application/json`
- `Origin: https://example.com` (required)
**Path Parameters:**
| Parameter | Type | Required | Description |
|-----------|--------|----------|----------------------------------------------|
| `topic` | string | Yes | Topic identifier for grouping peers (max 256 characters) |
**Body Parameters:**
| Parameter | Type | Required | Description |
|-----------|--------|----------|----------------------------------------------|
| `info` | string | Yes | Peer identifier/metadata (max 1024 characters) |
| `offer` | string | Yes | Signaling data for peer connection |
### Response
**Content-Type:** `application/json`
**Success (200 OK):**
```json
{
  "code": "550e8400-e29b-41d4-a716-446655440000"
}
```
Returns a unique UUID session code.
### Example
```bash
curl -X POST http://localhost:3000/my-room/offer \
  -H "Content-Type: application/json" \
  -H "Origin: https://example.com" \
  -d '{
    "info": "peer-123",
    "offer": "<SIGNALING_DATA>"
  }'
# Response:
# {"code":"550e8400-e29b-41d4-a716-446655440000"}
```
---
## POST `/answer`
Connects to an existing peer session by sending connection data or exchanging signaling information.
### Request
**Headers:**
- `Content-Type: application/json`
- `Origin: https://example.com` (required)
**Body Parameters:**
| Parameter | Type | Required | Description |
|-------------|--------|----------|----------------------------------------------------------|
| `code` | string | Yes | The session UUID from the offer |
| `answer` | string | No* | Response signaling data for connection establishment |
| `candidate` | string | No* | Additional signaling data for connection negotiation |
| `side` | string | Yes | Which peer is sending: `offerer` or `answerer` |
*Either `answer` or `candidate` must be provided, but not both.
### Response
**Content-Type:** `application/json`
**Success (200 OK):**
```json
{
  "success": true
}
```
**Notes:**
- Origin header must match the session's origin
- Sessions are isolated by origin to group topics by domain
### Examples
**Sending connection response:**
```bash
curl -X POST http://localhost:3000/answer \
  -H "Content-Type: application/json" \
  -H "Origin: https://example.com" \
  -d '{
    "code": "550e8400-e29b-41d4-a716-446655440000",
    "answer": "<SIGNALING_DATA>",
    "side": "answerer"
  }'
# Response:
# {"success":true}
```
**Sending additional signaling data:**
```bash
curl -X POST http://localhost:3000/answer \
  -H "Content-Type: application/json" \
  -H "Origin: https://example.com" \
  -d '{
    "code": "550e8400-e29b-41d4-a716-446655440000",
    "candidate": "<SIGNALING_DATA>",
    "side": "offerer"
  }'
# Response:
# {"success":true}
```
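Because exactly one of `answer` or `candidate` must be present, a client can validate the body before sending. A sketch mirroring that rule (the function name and `AnswerBody` type are ours):

```typescript
interface AnswerBody {
  code: string;
  side: 'offerer' | 'answerer';
  answer?: string;
  candidate?: string;
}

// Valid when exactly one of answer/candidate is present (logical XOR).
function isValidAnswerBody(body: AnswerBody): boolean {
  return Boolean(body.answer) !== Boolean(body.candidate);
}
```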
---
## POST `/poll`
Retrieves session data including offers, responses, and signaling information from the other peer.
### Request
**Headers:**
- `Content-Type: application/json`
- `Origin: https://example.com` (required)
**Body Parameters:**
| Parameter | Type | Required | Description |
|-----------|--------|----------|-------------------------------------------------|
| `code` | string | Yes | The session UUID |
| `side` | string | Yes | Which side is polling: `offerer` or `answerer` |
### Response
**Content-Type:** `application/json`
**Success (200 OK):**
Response varies by side:
**For `side=offerer` (the offerer polls for response from answerer):**
```json
{
  "answer": "<SIGNALING_DATA>",
  "answerCandidates": [
    "<SIGNALING_DATA_1>",
    "<SIGNALING_DATA_2>"
  ]
}
```
**For `side=answerer` (the answerer polls for offer from offerer):**
```json
{
  "offer": "<SIGNALING_DATA>",
  "offerCandidates": [
    "<SIGNALING_DATA_1>",
    "<SIGNALING_DATA_2>"
  ]
}
```
**Notes:**
- `answer` will be `null` if the answerer hasn't responded yet
- Candidate arrays will be empty `[]` if no additional signaling data has been sent
- Use this endpoint for polling to check for new signaling data
- Origin header must match the session's origin
### Examples
**Answerer polling for signaling data:**
```bash
curl -X POST http://localhost:3000/poll \
  -H "Content-Type: application/json" \
  -H "Origin: https://example.com" \
  -d '{
    "code": "550e8400-e29b-41d4-a716-446655440000",
    "side": "answerer"
  }'
# Response:
# {
#   "offer": "<SIGNALING_DATA>",
#   "offerCandidates": ["<SIGNALING_DATA>"]
# }
```
**Offerer polling for response:**
```bash
curl -X POST http://localhost:3000/poll \
  -H "Content-Type: application/json" \
  -H "Origin: https://example.com" \
  -d '{
    "code": "550e8400-e29b-41d4-a716-446655440000",
    "side": "offerer"
  }'
# Response:
# {
#   "answer": "<SIGNALING_DATA>",
#   "answerCandidates": ["<SIGNALING_DATA>"]
# }
```
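In client code, the offerer's polling described above typically runs in a loop until `answer` becomes non-null. A transport-agnostic sketch (the function and option names are ours; the `poll` callback would wrap a `fetch` to `POST /poll`):

```typescript
interface OffererPollResult {
  answer: string | null;
  answerCandidates: string[];
}

// Repeatedly invoke the injected poll function until the answerer has
// responded or the attempt budget runs out. Returns null on timeout.
async function waitForAnswer(
  poll: () => Promise<OffererPollResult>,
  { intervalMs = 1000, maxAttempts = 30 } = {},
): Promise<OffererPollResult | null> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await poll();
    if (result.answer !== null) return result;
    if (intervalMs > 0) await new Promise((r) => setTimeout(r, intervalMs));
  }
  return null; // gave up; the session may have expired server-side
}
```

Keep `intervalMs` well below the configured `SESSION_TIMEOUT` so the offerer notices the answer before the session expires.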
---
## GET `/health`
Health check endpoint.
### Response
**Content-Type:** `application/json`
**Success (200 OK):**
```json
{
  "status": "ok",
  "timestamp": 1699564800000
}
```
---
## Error Responses
All endpoints may return the following error responses:
**400 Bad Request:**
```json
{
  "error": "Missing or invalid required parameter: topic"
}
```
**404 Not Found:**
```json
{
  "error": "Session not found, expired, or origin mismatch"
}
```
**500 Internal Server Error:**
```json
{
  "error": "Internal server error"
}
```
---
## Usage Flow
### Peer Discovery and Connection
1. **Discover active topics:**
- GET `/` to see all topics and peer counts
- Optional: paginate through results with `?page=2&limit=100`
2. **Peer A announces availability:**
- POST `/:topic/offer` with peer identifier and signaling data
- Receives a unique session code
3. **Peer B discovers peers:**
- GET `/:topic/sessions` to list available sessions in a topic
- Filters out sessions with their own info to avoid self-connection
- Selects a peer to connect to
4. **Peer B initiates connection:**
- POST `/answer` with the session code and their signaling data
5. **Both peers exchange signaling information:**
- POST `/answer` with additional signaling data as needed
- POST `/poll` to retrieve signaling data from the other peer
6. **Peer connection established**
- Peers use exchanged signaling data to establish direct connection
- Session automatically expires after configured timeout
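For example, step 2's announcement can be assembled as a plain `fetch` call. A hedged sketch (the helper and `baseUrl` parameter are ours; the endpoint, headers, and body fields come from this document):

```typescript
// Build the arguments for fetch(url, init) targeting POST /:topic/offer.
// In a browser, the Origin header is supplied automatically.
function offerRequest(baseUrl: string, topic: string, info: string, offer: string) {
  return {
    url: `${baseUrl}/${encodeURIComponent(topic)}/offer`,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ info, offer }),
    },
  };
}
```

Usage: `const { url, init } = offerRequest('http://localhost:3000', 'my-room', 'peer-123', sdp);` followed by `const { code } = await (await fetch(url, init)).json();`.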

346
DEPLOYMENT.md Normal file

@@ -0,0 +1,346 @@
# Deployment Guide
This guide covers deploying Rondevu to various platforms.
## Table of Contents
- [Cloudflare Workers](#cloudflare-workers)
- [Docker](#docker)
- [Node.js](#nodejs)
---
## Cloudflare Workers
Deploy to Cloudflare's edge network using Cloudflare Workers and KV storage.
### Prerequisites
```bash
npm install -g wrangler
```
### Setup
1. **Login to Cloudflare**
```bash
wrangler login
```
2. **Create KV Namespace**
```bash
# For production
wrangler kv:namespace create SESSIONS
# This will output something like:
# { binding = "SESSIONS", id = "abc123..." }
```
3. **Update wrangler.toml**
Edit `wrangler.toml` and replace `YOUR_KV_NAMESPACE_ID` with the ID from step 2:
```toml
[[kv_namespaces]]
binding = "SESSIONS"
id = "abc123..." # Your actual KV namespace ID
```
4. **Configure Environment Variables** (Optional)
Update `wrangler.toml` to customize settings:
```toml
[vars]
SESSION_TIMEOUT = "300000" # Session timeout in milliseconds
CORS_ORIGINS = "https://example.com,https://app.example.com"
```
### Local Development
```bash
# Run locally with Wrangler
npx wrangler dev
# The local development server will:
# - Start on http://localhost:8787
# - Use a local KV namespace automatically
# - Hot-reload on file changes
```
### Production Deployment
```bash
# Deploy to Cloudflare Workers
npx wrangler deploy
# This will output your worker URL:
# https://rondevu.YOUR_SUBDOMAIN.workers.dev
```
### Custom Domain (Optional)
1. Go to your Cloudflare Workers dashboard
2. Select your worker
3. Click "Triggers" → "Add Custom Domain"
4. Enter your domain (e.g., `api.example.com`)
### Monitoring
View logs and analytics:
```bash
# Stream real-time logs
npx wrangler tail
# View in dashboard
# Visit: https://dash.cloudflare.com → Workers & Pages
```
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `SESSION_TIMEOUT` | `300000` | Session timeout in milliseconds |
| `CORS_ORIGINS` | `*` | Comma-separated allowed origins |
### Pricing
Cloudflare Workers Free Tier includes:
- 100,000 requests/day
- 10ms CPU time per request
- KV: 100,000 reads/day, 1,000 writes/day
For higher usage, see [Cloudflare Workers pricing](https://workers.cloudflare.com/#plans).
### Advantages
- **Global Edge Network**: Deploy to 300+ locations worldwide
- **Instant Scaling**: Handles traffic spikes automatically
- **Low Latency**: Runs close to your users
- **No Server Management**: Fully serverless
- **Free Tier**: Generous limits for small projects
---
## Docker
### Quick Start
```bash
# Build
docker build -t rondevu .
# Run with in-memory SQLite
docker run -p 3000:3000 -e STORAGE_PATH=:memory: rondevu
# Run with persistent SQLite
docker run -p 3000:3000 \
  -v $(pwd)/data:/app/data \
  -e STORAGE_PATH=/app/data/sessions.db \
  rondevu
```
### Docker Compose
Create a `docker-compose.yml`:
```yaml
version: '3.8'

services:
  rondevu:
    build: .
    ports:
      - "3000:3000"
    environment:
      - PORT=3000
      - STORAGE_TYPE=sqlite
      - STORAGE_PATH=/app/data/sessions.db
      - SESSION_TIMEOUT=300000
      - CORS_ORIGINS=*
    volumes:
      - ./data:/app/data
    restart: unless-stopped
```
Run with:
```bash
docker-compose up -d
```
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3000` | Server port |
| `STORAGE_TYPE` | `sqlite` | Storage backend |
| `STORAGE_PATH` | `/app/data/sessions.db` | SQLite database path |
| `SESSION_TIMEOUT` | `300000` | Session timeout in ms |
| `CORS_ORIGINS` | `*` | Allowed CORS origins |
---
## Node.js
### Production Deployment
1. **Install Dependencies**
```bash
npm ci --omit=dev
```
2. **Build TypeScript**
```bash
npm run build
```
3. **Set Environment Variables**
```bash
export PORT=3000
export STORAGE_TYPE=sqlite
export STORAGE_PATH=./data/sessions.db
export SESSION_TIMEOUT=300000
export CORS_ORIGINS=*
```
4. **Run**
```bash
npm start
```
### Process Manager (PM2)
For production, use a process manager like PM2:
1. **Install PM2**
```bash
npm install -g pm2
```
2. **Create ecosystem.config.js**
```javascript
module.exports = {
  apps: [{
    name: 'rondevu',
    script: './dist/index.js',
    instances: 'max',
    exec_mode: 'cluster',
    env: {
      NODE_ENV: 'production',
      PORT: 3000,
      STORAGE_TYPE: 'sqlite',
      STORAGE_PATH: './data/sessions.db',
      SESSION_TIMEOUT: 300000,
      CORS_ORIGINS: '*'
    }
  }]
};
```
3. **Start with PM2**
```bash
pm2 start ecosystem.config.js
pm2 save
pm2 startup
```
### Systemd Service
Create `/etc/systemd/system/rondevu.service`:
```ini
[Unit]
Description=Rondevu Peer Discovery and Signaling Server
After=network.target
[Service]
Type=simple
User=www-data
WorkingDirectory=/opt/rondevu
ExecStart=/usr/bin/node dist/index.js
Restart=on-failure
Environment=PORT=3000
Environment=STORAGE_TYPE=sqlite
Environment=STORAGE_PATH=/opt/rondevu/data/sessions.db
Environment=SESSION_TIMEOUT=300000
Environment=CORS_ORIGINS=*
[Install]
WantedBy=multi-user.target
```
Enable and start:
```bash
sudo systemctl enable rondevu
sudo systemctl start rondevu
sudo systemctl status rondevu
```
---
## Troubleshooting
### Docker
**Issue: Permission denied on /app/data**
- Ensure volume permissions are correct
- The container runs as user `node` (UID 1000)
**Issue: Database locked**
- Don't share the same SQLite database file across multiple containers
- Use one instance or implement a different storage backend
### Node.js
**Issue: EADDRINUSE**
- Port is already in use, change `PORT` environment variable
**Issue: Database is locked**
- Another process is using the database
- Ensure only one instance is running with the same database file
---
## Performance Tuning
### Node.js/Docker
- Set `SESSION_TIMEOUT` appropriately to balance resource usage
- For high traffic, use `STORAGE_PATH=:memory:` with session replication
- Consider horizontal scaling with a shared database backend
---
## Security Considerations
1. **HTTPS**: Always use HTTPS in production
- Use a reverse proxy (nginx, Caddy) for Node.js deployments
- Docker deployments should be behind a reverse proxy
2. **Rate Limiting**: Implement rate limiting at the proxy level
3. **CORS**: Configure CORS origins appropriately
- Don't use `*` in production
- Set specific allowed origins: `https://example.com,https://app.example.com`
4. **Input Validation**: SDP offers/answers are stored as-is; validate on client side
5. **Session Codes**: UUID v4 codes provide strong entropy (2^122 combinations)
6. **Origin Isolation**: Sessions are isolated by Origin header to organize topics by domain
---
## Scaling
### Horizontal Scaling
- **Docker/Node.js**: Use a shared database (not SQLite) for multiple instances
- Implement a Redis or PostgreSQL storage adapter
### Vertical Scaling
- Increase `SESSION_TIMEOUT` or cleanup frequency as needed
- Monitor database size and connection pool
- For Node.js, monitor memory usage and increase if needed

57
Dockerfile Normal file

@@ -0,0 +1,57 @@
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci
# Copy source files
COPY tsconfig.json ./
COPY build.js ./
COPY src ./src
# Build TypeScript
RUN npm run build
# Production stage
FROM node:20-alpine
WORKDIR /app
# Install production dependencies only
COPY package*.json ./
RUN npm ci --omit=dev && \
    npm cache clean --force
# Copy built files from builder
COPY --from=builder /app/dist ./dist
# Create data directory for SQLite
RUN mkdir -p /app/data && \
    chown -R node:node /app
# Switch to non-root user
USER node
# Environment variables with defaults
ENV PORT=3000
ENV STORAGE_TYPE=sqlite
ENV STORAGE_PATH=/app/data/sessions.db
ENV SESSION_TIMEOUT=300000
ENV CODE_CHARS=0123456789
ENV CODE_LENGTH=9
ENV CORS_ORIGINS=*
# Expose port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node -e "require('http').get('http://localhost:${PORT}/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"
# Start server
CMD ["node", "dist/index.js"]

242
README.md Normal file

@@ -0,0 +1,242 @@
# Rondevu
An open signaling and tracking server for peer discovery. Enables peers to find each other through a topic-based HTTP API with Origin isolation for organizing peer-to-peer applications.
## Features
- 🚀 **Fast & Lightweight** - Built with [Hono](https://hono.dev/) framework
- 📂 **Topic-Based Organization** - Group sessions by topic for easy peer discovery
- 🔒 **Origin Isolation** - Sessions are isolated by HTTP Origin header to group topics by domain
- 🏷️ **Peer Identification** - Info field prevents duplicate connections to same peer
- 🔌 **Pluggable Storage** - Storage interface supports SQLite and in-memory adapters
- 🐳 **Docker Ready** - Minimal Alpine-based Docker image
- ⏱️ **Session Timeout** - Configurable session expiration from initiation time
- 🔐 **Type Safe** - Written in TypeScript with full type definitions
## Quick Start
### Using Node.js
```bash
# Install dependencies
npm install
# Run in development mode
npm run dev
# Build and run in production
npm run build
npm start
```
### Using Docker
```bash
# Build the image
docker build -t rondevu .
# Run with default settings (SQLite database)
docker run -p 3000:3000 rondevu
# Run with in-memory storage
docker run -p 3000:3000 -e STORAGE_TYPE=memory rondevu
# Run with custom timeout (10 minutes)
docker run -p 3000:3000 -e SESSION_TIMEOUT=600000 rondevu
```
### Using Cloudflare Workers
```bash
# Install Wrangler CLI
npm install -g wrangler
# Login to Cloudflare
wrangler login
# Create KV namespace
wrangler kv:namespace create SESSIONS
# Update wrangler.toml with the KV namespace ID
# Deploy to Cloudflare's edge network
npx wrangler deploy
```
See [DEPLOYMENT.md](./DEPLOYMENT.md#cloudflare-workers) for detailed instructions.
## Configuration
Configuration is done through environment variables:
| Variable | Description | Default |
|--------------------|--------------------------------------------------|-------------|
| `PORT` | Server port | `3000` |
| `STORAGE_TYPE` | Storage backend: `sqlite` or `memory` | `sqlite` |
| `STORAGE_PATH` | Path to SQLite database file (`:memory:` for in-memory) | `:memory:` |
| `SESSION_TIMEOUT` | Session timeout in milliseconds | `300000` |
| `CORS_ORIGINS` | Comma-separated list of allowed origins | `*` |
### Example .env file
```env
PORT=3000
STORAGE_TYPE=sqlite
STORAGE_PATH=./sessions.db
SESSION_TIMEOUT=300000
CORS_ORIGINS=https://example.com,https://app.example.com
```
## API Documentation
See [API.md](./API.md) for complete API documentation.
### Quick Overview
**List all active topics (with pagination):**
```bash
curl -X GET http://localhost:3000/ \
  -H "Origin: https://example.com"
# Returns: {"topics":[{"topic":"my-room","count":3}],"pagination":{...}}
```
**Create an offer (announce yourself as available):**
```bash
curl -X POST http://localhost:3000/my-room/offer \
  -H "Content-Type: application/json" \
  -H "Origin: https://example.com" \
  -d '{"info":"peer-123","offer":"<SIGNALING_DATA>"}'
# Returns: {"code":"550e8400-e29b-41d4-a716-446655440000"}
```
**List available peers in a topic:**
```bash
curl -X GET http://localhost:3000/my-room/sessions \
  -H "Origin: https://example.com"
# Returns: {"sessions":[...]}
```
**Connect to a peer:**
```bash
curl -X POST http://localhost:3000/answer \
  -H "Content-Type: application/json" \
  -H "Origin: https://example.com" \
  -d '{"code":"550e8400-...","answer":"<SIGNALING_DATA>","side":"answerer"}'
# Returns: {"success":true}
```
## Architecture
### Storage Interface
The storage layer is abstracted through a simple interface, making it easy to implement custom storage backends:
```typescript
interface Storage {
  createSession(origin: string, topic: string, info: string, offer: string, expiresAt: number): Promise<string>;
  // Used by GET / for topic listing (see src/app.ts); shape matches the API response
  listTopics(origin: string, page: number, limit: number): Promise<{
    topics: { topic: string; count: number }[];
    pagination: { page: number; limit: number; total: number; hasMore: boolean };
  }>;
  listSessionsByTopic(origin: string, topic: string): Promise<Session[]>;
  getSession(code: string, origin: string): Promise<Session | null>;
  updateSession(code: string, origin: string, update: Partial<Session>): Promise<void>;
  deleteSession(code: string): Promise<void>;
  cleanup(): Promise<void>;
  close(): Promise<void>;
}
```
### Built-in Storage Adapters
**SQLite Storage** (`sqlite.ts`)
- For Node.js/Docker deployments
- Persistent file-based or in-memory
- Automatic session cleanup
- Simple and reliable
**Cloudflare KV Storage** (`kv.ts`)
- For Cloudflare Workers deployments
- Global edge storage
- Automatic TTL-based expiration
- Distributed and highly available
### Custom Storage Adapters
You can implement your own storage adapter by implementing the `Storage` interface:
```typescript
import { Storage, Session } from './storage/types';

export class CustomStorage implements Storage {
  async createSession(origin: string, topic: string, info: string, offer: string, expiresAt: number): Promise<string> {
    // Your implementation
  }

  // ... implement the remaining Storage methods
}
```
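As a fuller illustration, a minimal in-memory adapter might look like the sketch below. It is a test double, not a production backend: the `Session` shape is inferred from the API responses in this repository, and origin isolation and expiry are enforced the way the HTTP API describes them.

```typescript
import { randomUUID } from 'node:crypto';

// Session shape inferred from the API responses (not the project's types.ts).
interface Session {
  code: string;
  origin: string;
  topic: string;
  info: string;
  offer: string;
  offerCandidates: string[];
  answer: string | null;
  answerCandidates: string[];
  createdAt: number;
  expiresAt: number;
}

// In-memory adapter: a Map keyed by session code. Not persistent and not
// shared across processes.
class MemoryStorage {
  private sessions = new Map<string, Session>();

  async createSession(origin: string, topic: string, info: string, offer: string, expiresAt: number): Promise<string> {
    const code = randomUUID();
    this.sessions.set(code, {
      code, origin, topic, info, offer,
      offerCandidates: [], answer: null, answerCandidates: [],
      createdAt: Date.now(), expiresAt,
    });
    return code;
  }

  async getSession(code: string, origin: string): Promise<Session | null> {
    const s = this.sessions.get(code);
    // Enforce origin isolation and expiry, as the HTTP API does.
    if (!s || s.origin !== origin || s.expiresAt <= Date.now()) return null;
    return s;
  }

  async listSessionsByTopic(origin: string, topic: string): Promise<Session[]> {
    return [...this.sessions.values()].filter(
      (s) => s.origin === origin && s.topic === topic && s.answer === null && s.expiresAt > Date.now(),
    );
  }

  async updateSession(code: string, origin: string, update: Partial<Session>): Promise<void> {
    const s = await this.getSession(code, origin);
    if (s) Object.assign(s, update);
  }

  async deleteSession(code: string): Promise<void> {
    this.sessions.delete(code);
  }

  async cleanup(): Promise<void> {
    const now = Date.now();
    for (const [code, s] of this.sessions) {
      if (s.expiresAt <= now) this.sessions.delete(code);
    }
  }

  async close(): Promise<void> {
    this.sessions.clear();
  }
}
```

Extending it with the remaining interface methods (such as topic listing) follows the same pattern.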
## Development
### Project Structure
```
rondevu/
├── src/
│ ├── index.ts # Node.js server entry point
│ ├── app.ts # Hono application
│ ├── config.ts # Configuration
│ └── storage/
│ ├── types.ts # Storage interface
│ ├── sqlite.ts # SQLite adapter
│ └── codeGenerator.ts # Code generation utility
├── Dockerfile # Docker build configuration
├── build.js # Build script
├── API.md # API documentation
└── README.md # This file
```
### Building
```bash
# Build TypeScript
npm run build
# Run built version
npm start
```
### Docker Build
```bash
# Build the image
docker build -t rondevu .
# Run with volume for persistent storage
docker run -p 3000:3000 -v $(pwd)/data:/app/data rondevu
```
## How It Works
1. **Discover topics** (optional): Call `GET /` to see all active topics and peer counts
2. **Peer A** announces availability by posting to `/:topic/offer` with peer identifier and signaling data
3. Server generates a unique UUID code and stores the session (bucketed by Origin and topic)
4. **Peer B** discovers available peers using `GET /:topic/sessions`
5. **Peer B** filters out their own session using the info field to avoid self-connection
6. **Peer B** selects a peer and posts their connection data to `POST /answer` with the session code
7. Both peers exchange signaling data through `POST /answer` endpoint
8. Both peers poll for updates using `POST /poll` to retrieve connection information
9. Sessions automatically expire after the configured timeout
This allows peers in distributed systems to discover each other without requiring a centralized registry, while maintaining isolation between different applications through Origin headers.
### Origin Isolation
Sessions are isolated by the HTTP `Origin` header, ensuring that:
- Peers can only see sessions from their own origin
- Session codes cannot be accessed cross-origin
- Topics are organized by application domain
## License
MIT
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.

17
build.js Normal file

@@ -0,0 +1,17 @@
// Build script using esbuild
const esbuild = require('esbuild');
esbuild.build({
  entryPoints: ['src/index.ts'],
  bundle: true,
  platform: 'node',
  target: 'node20',
  outfile: 'dist/index.js',
  format: 'cjs',
  external: [
    'better-sqlite3',
    '@hono/node-server',
    'hono'
  ],
  sourcemap: true,
}).catch(() => process.exit(1));

1216
package-lock.json generated Normal file

File diff suppressed because it is too large

27
package.json Normal file

@@ -0,0 +1,27 @@
{
  "name": "rondevu",
  "version": "1.0.0",
  "description": "Open signaling and tracking server for peer discovery in distributed P2P applications",
  "main": "dist/index.js",
  "scripts": {
    "build": "node build.js",
    "typecheck": "tsc",
    "dev": "ts-node src/index.ts",
    "start": "node dist/index.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "private": true,
  "devDependencies": {
    "@cloudflare/workers-types": "^4.20251014.0",
    "@types/better-sqlite3": "^7.6.13",
    "@types/node": "^24.9.2",
    "esbuild": "^0.25.11",
    "ts-node": "^10.9.2",
    "typescript": "^5.9.3"
  },
  "dependencies": {
    "@hono/node-server": "^1.19.6",
    "better-sqlite3": "^12.4.1",
    "hono": "^4.10.4"
  }
}

228
src/app.ts Normal file

@@ -0,0 +1,228 @@
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { Storage } from './storage/types.ts';
export interface AppConfig {
sessionTimeout: number;
corsOrigins: string[];
}
/**
* Creates the Hono application with WebRTC signaling endpoints
*/
export function createApp(storage: Storage, config: AppConfig) {
const app = new Hono();
// Enable CORS
app.use('/*', cors({
origin: config.corsOrigins,
allowMethods: ['GET', 'POST', 'OPTIONS'],
allowHeaders: ['Content-Type'],
exposeHeaders: ['Content-Type'],
maxAge: 600,
credentials: true,
}));
/**
* GET /
* Lists all topics with their unanswered session counts (paginated)
* Query params: page (default: 1), limit (default: 100, max: 1000)
*/
app.get('/', async (c) => {
try {
const origin = c.req.header('Origin') || c.req.header('origin') || 'unknown';
const page = parseInt(c.req.query('page') || '1', 10);
const limit = parseInt(c.req.query('limit') || '100', 10);
const result = await storage.listTopics(origin, page, limit);
return c.json(result);
} catch (err) {
console.error('Error listing topics:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* GET /:topic/sessions
* Lists all unanswered sessions for a topic
*/
app.get('/:topic/sessions', async (c) => {
try {
const origin = c.req.header('Origin') || c.req.header('origin') || 'unknown';
const topic = c.req.param('topic');
if (!topic) {
return c.json({ error: 'Missing required parameter: topic' }, 400);
}
if (topic.length > 256) {
return c.json({ error: 'Topic string must be 256 characters or less' }, 400);
}
const sessions = await storage.listSessionsByTopic(origin, topic);
return c.json({
sessions: sessions.map(s => ({
code: s.code,
info: s.info,
offer: s.offer,
offerCandidates: s.offerCandidates,
createdAt: s.createdAt,
expiresAt: s.expiresAt,
})),
});
} catch (err) {
console.error('Error listing sessions:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* POST /:topic/offer
* Creates a new offer and returns a unique session code
* Body: { info: string, offer: string }
*/
app.post('/:topic/offer', async (c) => {
try {
const origin = c.req.header('Origin') || c.req.header('origin') || 'unknown';
const topic = c.req.param('topic');
const body = await c.req.json();
const { info, offer } = body;
if (!topic || typeof topic !== 'string') {
return c.json({ error: 'Missing or invalid required parameter: topic' }, 400);
}
if (topic.length > 256) {
return c.json({ error: 'Topic string must be 256 characters or less' }, 400);
}
if (!info || typeof info !== 'string') {
return c.json({ error: 'Missing or invalid required parameter: info' }, 400);
}
if (info.length > 1024) {
return c.json({ error: 'Info string must be 1024 characters or less' }, 400);
}
if (!offer || typeof offer !== 'string') {
return c.json({ error: 'Missing or invalid required parameter: offer' }, 400);
}
const expiresAt = Date.now() + config.sessionTimeout;
const code = await storage.createSession(origin, topic, info, offer, expiresAt);
return c.json({ code }, 200);
} catch (err) {
console.error('Error creating offer:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* POST /answer
* Responds to an existing offer or sends ICE candidates
* Body: { code: string, answer?: string, candidate?: string, side: 'offerer' | 'answerer' }
*/
app.post('/answer', async (c) => {
try {
const origin = c.req.header('Origin') || c.req.header('origin') || 'unknown';
const body = await c.req.json();
const { code, answer, candidate, side } = body;
if (!code || typeof code !== 'string') {
return c.json({ error: 'Missing or invalid required parameter: code' }, 400);
}
if (!side || (side !== 'offerer' && side !== 'answerer')) {
return c.json({ error: 'Invalid or missing parameter: side (must be "offerer" or "answerer")' }, 400);
}
if (!answer && !candidate) {
return c.json({ error: 'Missing required parameter: answer or candidate' }, 400);
}
if (answer && candidate) {
return c.json({ error: 'Cannot provide both answer and candidate' }, 400);
}
const session = await storage.getSession(code, origin);
if (!session) {
return c.json({ error: 'Session not found, expired, or origin mismatch' }, 404);
}
if (answer) {
await storage.updateSession(code, origin, { answer });
}
if (candidate) {
if (side === 'offerer') {
const updatedCandidates = [...session.offerCandidates, candidate];
await storage.updateSession(code, origin, { offerCandidates: updatedCandidates });
} else {
const updatedCandidates = [...session.answerCandidates, candidate];
await storage.updateSession(code, origin, { answerCandidates: updatedCandidates });
}
}
return c.json({ success: true }, 200);
} catch (err) {
console.error('Error handling answer:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* POST /poll
* Polls for session data (offer, answer, ICE candidates)
* Body: { code: string, side: 'offerer' | 'answerer' }
*/
app.post('/poll', async (c) => {
try {
const origin = c.req.header('Origin') || c.req.header('origin') || 'unknown';
const body = await c.req.json();
const { code, side } = body;
if (!code || typeof code !== 'string') {
return c.json({ error: 'Missing or invalid required parameter: code' }, 400);
}
if (!side || (side !== 'offerer' && side !== 'answerer')) {
return c.json({ error: 'Invalid or missing parameter: side (must be "offerer" or "answerer")' }, 400);
}
const session = await storage.getSession(code, origin);
if (!session) {
return c.json({ error: 'Session not found, expired, or origin mismatch' }, 404);
}
if (side === 'offerer') {
return c.json({
answer: session.answer || null,
answerCandidates: session.answerCandidates,
});
} else {
return c.json({
offer: session.offer,
offerCandidates: session.offerCandidates,
});
}
} catch (err) {
console.error('Error polling session:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* GET /health
* Health check endpoint
*/
app.get('/health', (c) => {
return c.json({ status: 'ok', timestamp: Date.now() });
});
return app;
}
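For client authors, the offer/answer flow served by `/poll` can be exercised with plain `fetch`. The sketch below is illustrative and not part of this commit: the `pollUntilAnswered` helper and its parameters are our own, and the fetch implementation is injectable so the loop can be tested without a live server.

```typescript
// Hypothetical client-side helper for POST /poll: polls as the offerer
// until the session carries an answer, then returns that answer SDP.
type FetchLike = (url: string, init?: any) => Promise<{ json(): Promise<any> }>;

async function pollUntilAnswered(
  baseUrl: string,
  code: string,
  fetchImpl: FetchLike,
  intervalMs = 1000,
  maxAttempts = 30,
): Promise<string> {
  for (let i = 0; i < maxAttempts; i++) {
    const res = await fetchImpl(`${baseUrl}/poll`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ code, side: 'offerer' }),
    });
    const data = await res.json();
    if (data.answer) return data.answer; // the answerer has responded
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error('Timed out waiting for an answer');
}
```

In a browser the global `fetch` would be passed directly; the interval and attempt count are guesses and should be tuned against the configured session timeout.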

src/config.ts (new file, 26 lines)

/**
* Application configuration
* Reads from environment variables with sensible defaults
*/
export interface Config {
port: number;
storageType: 'sqlite' | 'memory';
storagePath: string;
sessionTimeout: number;
corsOrigins: string[];
}
/**
* Loads configuration from environment variables
*/
export function loadConfig(): Config {
return {
port: parseInt(process.env.PORT || '3000', 10),
storageType: (process.env.STORAGE_TYPE || 'sqlite') as 'sqlite' | 'memory',
storagePath: process.env.STORAGE_PATH || ':memory:',
sessionTimeout: parseInt(process.env.SESSION_TIMEOUT || '300000', 10),
corsOrigins: process.env.CORS_ORIGINS
? process.env.CORS_ORIGINS.split(',').map(o => o.trim())
: ['*'],
};
}
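Two parsing details in `loadConfig` are worth isolating: `CORS_ORIGINS` is a comma-separated list defaulting to `*`, and the numeric values go through `parseInt`, which passes `NaN` through if the variable is malformed. The helpers below are a sketch (the function names are ours, not part of the file above); the second one adds the fallback that `loadConfig` as written lacks.

```typescript
// Mirrors the CORS_ORIGINS parsing in loadConfig: comma-separated value
// becomes a trimmed array; an absent variable means allow all origins.
function parseCorsOrigins(value: string | undefined): string[] {
  return value ? value.split(',').map((o) => o.trim()) : ['*'];
}

// Defensive variant of the numeric parsing: falls back to the default when
// the environment value is missing or not a number, instead of yielding NaN.
function intFromEnv(value: string | undefined, fallback: number): number {
  const n = parseInt(value ?? '', 10);
  return Number.isNaN(n) ? fallback : n;
}
```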

src/index.ts (new file, 59 lines)

import { serve } from '@hono/node-server';
import { createApp } from './app.ts';
import { loadConfig } from './config.ts';
import { SQLiteStorage } from './storage/sqlite.ts';
import { Storage } from './storage/types.ts';
/**
* Main entry point for the standalone Node.js server
*/
async function main() {
const config = loadConfig();
console.log('Starting Rondevu server...');
console.log('Configuration:', {
port: config.port,
storageType: config.storageType,
storagePath: config.storagePath,
sessionTimeout: `${config.sessionTimeout}ms`,
corsOrigins: config.corsOrigins,
});
let storage: Storage;
  if (config.storageType === 'sqlite') {
    storage = new SQLiteStorage(config.storagePath);
    console.log('Using SQLite storage');
  } else {
    // The standalone server currently wires up SQLite only; 'memory' is
    // accepted by the Config type but has no adapter in this entry point.
    throw new Error(`Unsupported storage type for standalone server: ${config.storageType}`);
  }
const app = createApp(storage, {
sessionTimeout: config.sessionTimeout,
corsOrigins: config.corsOrigins,
});
const server = serve({
fetch: app.fetch,
port: config.port,
});
console.log(`Server running on http://localhost:${config.port}`);
process.on('SIGINT', async () => {
console.log('\nShutting down gracefully...');
await storage.close();
process.exit(0);
});
process.on('SIGTERM', async () => {
console.log('\nShutting down gracefully...');
await storage.close();
process.exit(0);
});
}
main().catch((err) => {
console.error('Fatal error:', err);
process.exit(1);
});

src/storage/kv.ts (new file, 241 lines)

import { Storage, Session } from './types.ts';
/**
* Cloudflare KV storage adapter for session management
*/
export class KVStorage implements Storage {
private kv: KVNamespace;
/**
* Creates a new KV storage instance
* @param kv Cloudflare KV namespace binding
*/
constructor(kv: KVNamespace) {
this.kv = kv;
}
/**
* Generates a unique code using Web Crypto API
*/
private generateCode(): string {
return crypto.randomUUID();
}
/**
* Gets the key for storing a session
*/
private sessionKey(code: string): string {
return `session:${code}`;
}
/**
* Gets the key for the topic index
*/
private topicIndexKey(origin: string, topic: string): string {
return `index:${origin}:${topic}`;
}
async createSession(origin: string, topic: string, info: string, offer: string, expiresAt: number): Promise<string> {
// Validate info length
if (info.length > 1024) {
throw new Error('Info string must be 1024 characters or less');
}
const code = this.generateCode();
const createdAt = Date.now();
const session: Session = {
code,
origin,
topic,
info,
offer,
answer: undefined,
offerCandidates: [],
answerCandidates: [],
createdAt,
expiresAt,
};
// Calculate TTL in seconds for KV
const ttl = Math.max(60, Math.floor((expiresAt - createdAt) / 1000));
// Store the session
await this.kv.put(
this.sessionKey(code),
JSON.stringify(session),
{ expirationTtl: ttl }
);
    // Update the topic index. KV has no atomic read-modify-write, so two
    // concurrent creates on the same topic can race (last writer wins);
    // acceptable for best-effort peer discovery.
    const indexKey = this.topicIndexKey(origin, topic);
const existingIndex = await this.kv.get(indexKey, 'json') as string[] | null;
const updatedIndex = existingIndex ? [...existingIndex, code] : [code];
// Set index TTL to slightly longer than session TTL to avoid race conditions
await this.kv.put(
indexKey,
JSON.stringify(updatedIndex),
{ expirationTtl: ttl + 300 }
);
return code;
}
async listSessionsByTopic(origin: string, topic: string): Promise<Session[]> {
const indexKey = this.topicIndexKey(origin, topic);
const codes = await this.kv.get(indexKey, 'json') as string[] | null;
if (!codes || codes.length === 0) {
return [];
}
// Fetch all sessions in parallel
const sessionPromises = codes.map(async (code) => {
const sessionData = await this.kv.get(this.sessionKey(code), 'json') as Session | null;
return sessionData;
});
const sessions = await Promise.all(sessionPromises);
// Filter out expired or answered sessions, and null values
const now = Date.now();
const validSessions = sessions.filter(
(session): session is Session =>
session !== null &&
session.expiresAt > now &&
session.answer === undefined
);
// Sort by creation time (newest first)
return validSessions.sort((a, b) => b.createdAt - a.createdAt);
}
async listTopics(origin: string, page: number, limit: number): Promise<{
topics: Array<{ topic: string; count: number }>;
pagination: {
page: number;
limit: number;
total: number;
hasMore: boolean;
};
}> {
// Ensure limit doesn't exceed 1000
const safeLimit = Math.min(Math.max(1, limit), 1000);
const safePage = Math.max(1, page);
const prefix = `index:${origin}:`;
const topicCounts = new Map<string, number>();
// List all index keys for this origin
const list = await this.kv.list({ prefix });
// Process each topic index
for (const key of list.keys) {
// Extract topic from key: "index:{origin}:{topic}"
const topic = key.name.substring(prefix.length);
// Get the session codes for this topic
const codes = await this.kv.get(key.name, 'json') as string[] | null;
if (!codes || codes.length === 0) {
continue;
}
// Fetch sessions to count only valid ones (unexpired and unanswered)
const sessionPromises = codes.map(async (code) => {
const sessionData = await this.kv.get(this.sessionKey(code), 'json') as Session | null;
return sessionData;
});
const sessions = await Promise.all(sessionPromises);
// Count valid sessions
const now = Date.now();
const validCount = sessions.filter(
(session) =>
session !== null &&
session.expiresAt > now &&
session.answer === undefined
).length;
if (validCount > 0) {
topicCounts.set(topic, validCount);
}
}
// Convert to array and sort by topic name
const allTopics = Array.from(topicCounts.entries())
.map(([topic, count]) => ({ topic, count }))
.sort((a, b) => a.topic.localeCompare(b.topic));
// Apply pagination
const total = allTopics.length;
const offset = (safePage - 1) * safeLimit;
const topics = allTopics.slice(offset, offset + safeLimit);
return {
topics,
pagination: {
page: safePage,
limit: safeLimit,
total,
hasMore: offset + topics.length < total,
},
};
}
async getSession(code: string, origin: string): Promise<Session | null> {
const sessionData = await this.kv.get(this.sessionKey(code), 'json') as Session | null;
if (!sessionData) {
return null;
}
// Validate origin and expiration
if (sessionData.origin !== origin || sessionData.expiresAt <= Date.now()) {
return null;
}
return sessionData;
}
async updateSession(code: string, origin: string, update: Partial<Session>): Promise<void> {
const current = await this.getSession(code, origin);
if (!current) {
throw new Error('Session not found or origin mismatch');
}
// Merge updates
const updated: Session = {
...current,
...(update.answer !== undefined && { answer: update.answer }),
...(update.offerCandidates !== undefined && { offerCandidates: update.offerCandidates }),
...(update.answerCandidates !== undefined && { answerCandidates: update.answerCandidates }),
};
// Calculate remaining TTL
const ttl = Math.max(60, Math.floor((updated.expiresAt - Date.now()) / 1000));
// Update the session
await this.kv.put(
this.sessionKey(code),
JSON.stringify(updated),
{ expirationTtl: ttl }
);
}
async deleteSession(code: string): Promise<void> {
await this.kv.delete(this.sessionKey(code));
}
async cleanup(): Promise<void> {
// KV automatically expires keys based on TTL
// No manual cleanup needed
}
async close(): Promise<void> {
// No connection to close for KV
}
}
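Cloudflare KV rejects `expirationTtl` values below 60 seconds, which is why both `createSession` and `updateSession` above clamp the computed TTL. The helper below restates that calculation so the edge cases can be checked in isolation (the function name is ours, not part of the adapter):

```typescript
// Mirrors the Math.max(60, ...) expressions in the KV adapter: KV requires
// expirationTtl >= 60 seconds, so already-expired sessions still get the
// minimum TTL rather than a rejected put().
function kvTtlSeconds(expiresAt: number, now: number): number {
  return Math.max(60, Math.floor((expiresAt - now) / 1000));
}
```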

src/storage/sqlite.ts (new file, 258 lines)

import Database from 'better-sqlite3';
import { randomUUID } from 'crypto';
import { Storage, Session } from './types.ts';
/**
* SQLite storage adapter for session management
* Supports both file-based and in-memory databases
*/
export class SQLiteStorage implements Storage {
private db: Database.Database;
/**
* Creates a new SQLite storage instance
* @param path Path to SQLite database file, or ':memory:' for in-memory database
*/
constructor(path: string = ':memory:') {
this.db = new Database(path);
this.initializeDatabase();
this.startCleanupInterval();
}
/**
* Initializes database schema
*/
private initializeDatabase(): void {
this.db.exec(`
CREATE TABLE IF NOT EXISTS sessions (
code TEXT PRIMARY KEY,
origin TEXT NOT NULL,
topic TEXT NOT NULL,
info TEXT NOT NULL CHECK(length(info) <= 1024),
offer TEXT NOT NULL,
answer TEXT,
offer_candidates TEXT NOT NULL DEFAULT '[]',
answer_candidates TEXT NOT NULL DEFAULT '[]',
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_expires_at ON sessions(expires_at);
CREATE INDEX IF NOT EXISTS idx_origin_topic ON sessions(origin, topic);
CREATE INDEX IF NOT EXISTS idx_origin_topic_expires ON sessions(origin, topic, expires_at);
`);
}
/**
* Starts periodic cleanup of expired sessions
*/
  private startCleanupInterval(): void {
    // Run cleanup every minute; unref() the timer so it does not keep the
    // Node.js process alive after close() is called.
    setInterval(() => {
      this.cleanup().catch(err => {
        console.error('Cleanup error:', err);
      });
    }, 60000).unref();
  }
/**
* Generates a unique code using UUID
*/
private generateCode(): string {
return randomUUID();
}
async createSession(origin: string, topic: string, info: string, offer: string, expiresAt: number): Promise<string> {
// Validate info length
if (info.length > 1024) {
throw new Error('Info string must be 1024 characters or less');
}
let code: string;
let attempts = 0;
const maxAttempts = 10;
// Try to generate a unique code
do {
code = this.generateCode();
attempts++;
if (attempts > maxAttempts) {
throw new Error('Failed to generate unique session code');
}
try {
const stmt = this.db.prepare(`
INSERT INTO sessions (code, origin, topic, info, offer, created_at, expires_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
`);
stmt.run(code, origin, topic, info, offer, Date.now(), expiresAt);
break;
} catch (err: any) {
// If unique constraint failed, try again
if (err.code === 'SQLITE_CONSTRAINT_PRIMARYKEY') {
continue;
}
throw err;
}
} while (true);
return code;
}
async listSessionsByTopic(origin: string, topic: string): Promise<Session[]> {
const stmt = this.db.prepare(`
SELECT * FROM sessions
WHERE origin = ? AND topic = ? AND expires_at > ? AND answer IS NULL
ORDER BY created_at DESC
`);
const rows = stmt.all(origin, topic, Date.now()) as any[];
return rows.map(row => ({
code: row.code,
origin: row.origin,
topic: row.topic,
info: row.info,
offer: row.offer,
      answer: row.answer ?? undefined, // only NULL maps to undefined; empty strings are preserved
offerCandidates: JSON.parse(row.offer_candidates),
answerCandidates: JSON.parse(row.answer_candidates),
createdAt: row.created_at,
expiresAt: row.expires_at,
}));
}
async listTopics(origin: string, page: number, limit: number): Promise<{
topics: Array<{ topic: string; count: number }>;
pagination: {
page: number;
limit: number;
total: number;
hasMore: boolean;
};
}> {
// Ensure limit doesn't exceed 1000
const safeLimit = Math.min(Math.max(1, limit), 1000);
const safePage = Math.max(1, page);
const offset = (safePage - 1) * safeLimit;
// Get total count of topics
const countStmt = this.db.prepare(`
SELECT COUNT(DISTINCT topic) as total
FROM sessions
WHERE origin = ? AND expires_at > ? AND answer IS NULL
`);
const { total } = countStmt.get(origin, Date.now()) as any;
// Get paginated topics
const stmt = this.db.prepare(`
SELECT topic, COUNT(*) as count
FROM sessions
WHERE origin = ? AND expires_at > ? AND answer IS NULL
GROUP BY topic
ORDER BY topic ASC
LIMIT ? OFFSET ?
`);
const rows = stmt.all(origin, Date.now(), safeLimit, offset) as any[];
const topics = rows.map(row => ({
topic: row.topic,
count: row.count,
}));
return {
topics,
pagination: {
page: safePage,
limit: safeLimit,
total,
hasMore: offset + topics.length < total,
},
};
}
async getSession(code: string, origin: string): Promise<Session | null> {
const stmt = this.db.prepare(`
SELECT * FROM sessions WHERE code = ? AND origin = ? AND expires_at > ?
`);
const row = stmt.get(code, origin, Date.now()) as any;
if (!row) {
return null;
}
return {
code: row.code,
origin: row.origin,
topic: row.topic,
info: row.info,
offer: row.offer,
      answer: row.answer ?? undefined, // only NULL maps to undefined; empty strings are preserved
offerCandidates: JSON.parse(row.offer_candidates),
answerCandidates: JSON.parse(row.answer_candidates),
createdAt: row.created_at,
expiresAt: row.expires_at,
};
}
async updateSession(code: string, origin: string, update: Partial<Session>): Promise<void> {
const current = await this.getSession(code, origin);
if (!current) {
throw new Error('Session not found or origin mismatch');
}
const updates: string[] = [];
const values: any[] = [];
if (update.answer !== undefined) {
updates.push('answer = ?');
values.push(update.answer);
}
if (update.offerCandidates !== undefined) {
updates.push('offer_candidates = ?');
values.push(JSON.stringify(update.offerCandidates));
}
if (update.answerCandidates !== undefined) {
updates.push('answer_candidates = ?');
values.push(JSON.stringify(update.answerCandidates));
}
if (updates.length === 0) {
return;
}
values.push(code);
values.push(origin);
const stmt = this.db.prepare(`
UPDATE sessions SET ${updates.join(', ')} WHERE code = ? AND origin = ?
`);
stmt.run(...values);
}
async deleteSession(code: string): Promise<void> {
const stmt = this.db.prepare('DELETE FROM sessions WHERE code = ?');
stmt.run(code);
}
async cleanup(): Promise<void> {
const stmt = this.db.prepare('DELETE FROM sessions WHERE expires_at <= ?');
const result = stmt.run(Date.now());
if (result.changes > 0) {
console.log(`Cleaned up ${result.changes} expired session(s)`);
}
}
async close(): Promise<void> {
this.db.close();
}
}
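Both `listTopics` implementations clamp their pagination inputs the same way: `limit` to 1..1000, `page` to at least 1, and the offset derived from the two. A sketch of that shared logic (the helper name is an assumption, not an export of either adapter):

```typescript
// Mirrors the pagination clamping used by listTopics in both the SQLite
// and KV adapters: limit capped at 1000, page floored at 1.
function clampPagination(page: number, limit: number): { page: number; limit: number; offset: number } {
  const safeLimit = Math.min(Math.max(1, limit), 1000);
  const safePage = Math.max(1, page);
  return { page: safePage, limit: safeLimit, offset: (safePage - 1) * safeLimit };
}
```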

src/storage/types.ts (new file, 90 lines)

/**
* Represents a WebRTC signaling session
*/
export interface Session {
code: string;
origin: string;
topic: string;
info: string;
offer: string;
answer?: string;
offerCandidates: string[];
answerCandidates: string[];
createdAt: number;
expiresAt: number;
}
/**
* Storage interface for session management
* Implementations can use different backends (SQLite, Redis, Memory, etc.)
*/
export interface Storage {
/**
* Creates a new session with the given offer
* @param origin The Origin header from the request
* @param topic The topic to post the offer to
* @param info User info string (max 1024 chars)
* @param offer The WebRTC SDP offer message
* @param expiresAt Unix timestamp when the session should expire
* @returns The unique session code
*/
createSession(origin: string, topic: string, info: string, offer: string, expiresAt: number): Promise<string>;
/**
* Lists all unanswered sessions for a given origin and topic
* @param origin The Origin header from the request
* @param topic The topic to list offers for
* @returns Array of sessions that haven't been answered yet
*/
listSessionsByTopic(origin: string, topic: string): Promise<Session[]>;
/**
* Lists all topics for a given origin with their session counts
* @param origin The Origin header from the request
* @param page Page number (starting from 1)
* @param limit Number of results per page (max 1000)
* @returns Object with topics array and pagination metadata
*/
listTopics(origin: string, page: number, limit: number): Promise<{
topics: Array<{ topic: string; count: number }>;
pagination: {
page: number;
limit: number;
total: number;
hasMore: boolean;
};
}>;
/**
* Retrieves a session by its code
* @param code The session code
* @param origin The Origin header from the request (for validation)
* @returns The session if found, null otherwise
*/
getSession(code: string, origin: string): Promise<Session | null>;
/**
* Updates an existing session with new data
* @param code The session code
* @param origin The Origin header from the request (for validation)
* @param update Partial session data to update
*/
updateSession(code: string, origin: string, update: Partial<Session>): Promise<void>;
/**
* Deletes a session
* @param code The session code
*/
deleteSession(code: string): Promise<void>;
/**
* Removes expired sessions
* Should be called periodically to clean up old data
*/
cleanup(): Promise<void>;
/**
* Closes the storage connection and releases resources
*/
close(): Promise<void>;
}
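The commit message advertises an in-memory backend, but none ships in this diff. A minimal in-memory `Storage` sketch against the interface above might look like the following. This is our illustration, not the project's implementation; the `Session` shape is restated inline so the snippet stands alone, and `listTopics` is omitted for brevity.

```typescript
import { randomUUID } from 'node:crypto';

interface Session {
  code: string; origin: string; topic: string; info: string;
  offer: string; answer?: string;
  offerCandidates: string[]; answerCandidates: string[];
  createdAt: number; expiresAt: number;
}

// Hypothetical in-memory adapter mirroring the SQLite and KV semantics:
// origin isolation, expiry checks, and answered sessions hidden from listings.
class MemoryStorage {
  private sessions = new Map<string, Session>();

  async createSession(origin: string, topic: string, info: string, offer: string, expiresAt: number): Promise<string> {
    if (info.length > 1024) throw new Error('Info string must be 1024 characters or less');
    const code = randomUUID();
    this.sessions.set(code, {
      code, origin, topic, info, offer,
      offerCandidates: [], answerCandidates: [],
      createdAt: Date.now(), expiresAt,
    });
    return code;
  }

  async getSession(code: string, origin: string): Promise<Session | null> {
    const s = this.sessions.get(code);
    // Enforce origin isolation and expiry, like the other adapters.
    if (!s || s.origin !== origin || s.expiresAt <= Date.now()) return null;
    return s;
  }

  async listSessionsByTopic(origin: string, topic: string): Promise<Session[]> {
    const now = Date.now();
    return [...this.sessions.values()]
      .filter((s) => s.origin === origin && s.topic === topic && s.expiresAt > now && s.answer === undefined)
      .sort((a, b) => b.createdAt - a.createdAt);
  }

  async updateSession(code: string, origin: string, update: Partial<Session>): Promise<void> {
    const s = await this.getSession(code, origin);
    if (!s) throw new Error('Session not found or origin mismatch');
    Object.assign(s, update);
  }

  async deleteSession(code: string): Promise<void> { this.sessions.delete(code); }

  async cleanup(): Promise<void> {
    const now = Date.now();
    for (const [code, s] of this.sessions) if (s.expiresAt <= now) this.sessions.delete(code);
  }

  async close(): Promise<void> { this.sessions.clear(); }
}
```

Unlike KV, nothing expires keys automatically here, so `cleanup()` does real work and would need the same periodic scheduling the SQLite adapter uses.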

src/worker.ts (new file, 39 lines)

import { createApp } from './app.ts';
import { KVStorage } from './storage/kv.ts';
/**
* Cloudflare Workers environment bindings
*/
export interface Env {
SESSIONS: KVNamespace;
SESSION_TIMEOUT?: string;
CORS_ORIGINS?: string;
}
/**
* Cloudflare Workers fetch handler
*/
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
// Initialize KV storage
const storage = new KVStorage(env.SESSIONS);
// Parse configuration
const sessionTimeout = env.SESSION_TIMEOUT
? parseInt(env.SESSION_TIMEOUT, 10)
: 300000; // 5 minutes default
const corsOrigins = env.CORS_ORIGINS
? env.CORS_ORIGINS.split(',').map(o => o.trim())
: ['*'];
// Create Hono app
const app = createApp(storage, {
sessionTimeout,
corsOrigins,
});
// Handle request
return app.fetch(request, env, ctx);
},
};

tsconfig.json (new file, 20 lines)

{
"compilerOptions": {
"target": "ES2020",
"module": "ESNext",
"lib": ["ES2020"],
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true,
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"noEmit": true,
"types": ["@types/node", "@cloudflare/workers-types"]
},
"include": ["src/**/*.ts"],
"exclude": ["node_modules", "dist"]
}

wrangler.toml.example (new file, 26 lines)

name = "rondevu"
main = "src/worker.ts"
compatibility_date = "2024-01-01"
# KV Namespace binding
[[kv_namespaces]]
binding = "SESSIONS"
id = "" # Replace with your KV namespace ID
# Environment variables
[vars]
SESSION_TIMEOUT = "300000" # 5 minutes in milliseconds
CORS_ORIGINS = "*" # Comma-separated list of allowed origins
# Build configuration
[build]
command = ""
# For local development
# Run: npx wrangler dev
# The local KV will be created automatically
# For production deployment:
# 1. Create KV namespace: npx wrangler kv:namespace create SESSIONS
# 2. Update the 'id' field above with the returned namespace ID
# 3. Deploy: npx wrangler deploy