46 Commits

Author SHA1 Message Date
9e93ed506e Release v0.5.0: Phase 1 & 2 improvements, documentation restructuring 2025-12-13 13:19:14 +01:00
9002fe3f6d Make README more concise with ADVANCED.md
Restructure documentation for better discoverability:

Changes:
- README.md: 624 → 259 lines (58% reduction)
- ADVANCED.md: New comprehensive guide (502 lines)

README.md now contains:
- Features and architecture overview
- Quick start commands
- RPC interface basics
- Core method examples
- Configuration quick reference
- Links to advanced docs

ADVANCED.md contains:
📚 Complete RPC method reference (8 methods)
📚 Full configuration table
📚 Database schema documentation
📚 Security implementation details
📚 Migration guides

Benefits:
- Faster onboarding for API consumers
- Essential examples in README
- Detailed reference still accessible
- Consistent documentation structure

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-12 23:17:06 +01:00
88a038a12a Refactor getService() method for better maintainability
Break down the 138-line method into three helper functions:

1. filterCompatibleServices(): Eliminates duplicate filtering logic
   - Used in both paginated and random discovery modes
   - Centralizes version compatibility checking

2. findAvailableOffer(): Encapsulates offer lookup logic
   - Used across all three modes
   - Ensures consistent offer selection

3. buildServiceResponse(): Standardizes response formatting
   - Single source of truth for response structure
   - Used in all return paths

Benefits:
- Eliminates 30+ lines of duplicate code
- Three modes now clearly separated and documented
- Easier to maintain and test each mode independently
- Consistent response formatting across all modes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-12 23:09:24 +01:00
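
As a rough illustration of the structure described in the commit above: the helper names come from the commit message, but the types and signatures below are assumptions, not the actual implementation.

```typescript
// Illustrative skeleton only — types and signatures are assumed, not taken from the codebase.
interface Service { id: string; username: string; serviceFqn: string; version: string; createdAt: number; expiresAt: number }
interface Offer { id: string; sdp: string; answererUsername?: string; expiresAt: number }

// Shared by paginated and random discovery modes.
function filterCompatibleServices(
  services: Service[],
  requestedVersion: string,
  isCompatible: (requested: string, available: string) => boolean,
): Service[] {
  return services.filter(s => isCompatible(requestedVersion, s.version));
}

// Used by all three modes: pick the first unanswered, unexpired offer.
function findAvailableOffer(offers: Offer[]): Offer | undefined {
  return offers.find(o => !o.answererUsername && o.expiresAt > Date.now());
}

// Single source of truth for the response shape on every return path.
function buildServiceResponse(service: Service, offer: Offer) {
  return {
    serviceId: service.id,
    username: service.username,
    serviceFqn: service.serviceFqn,
    offerId: offer.id,
    sdp: offer.sdp,
    createdAt: service.createdAt,
    expiresAt: service.expiresAt,
  };
}
```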
3c0d1c8411 Replace magic numbers with named constants in server
Refactoring: Extract magic numbers to named constants
- MAX_BATCH_SIZE = 100 (batch request limit)
- MAX_PAGE_SIZE = 100 (pagination limit)

Replaced in:
- app.ts: Batch size validation (line 68-69)
- rpc.ts: Page size limit (line 184)

Impact: Improves code clarity and makes limits configurable
2025-12-12 22:56:55 +01:00
53a576670e Add candidate validation to addIceCandidates()
Validation: Add basic candidate validation
- Validate each candidate is an object
- Don't enforce specific structure (per CLAUDE.md guidelines)
- Provides clear error messages with index

Impact: Prevents runtime errors from null/primitive values
Note: Intentionally keeps candidate structure flexible per design
2025-12-12 22:53:29 +01:00
7e2e8c703e Add SDP validation to publishService()
Validation: Add comprehensive offer validation
- Validate each offer is an object
- Validate each offer has sdp property
- Validate sdp is a string
- Validate sdp is not empty/whitespace

Impact: Prevents runtime errors from malformed offers
Improves error messages with specific index information
2025-12-12 22:52:46 +01:00
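
A minimal sketch of the validation described in the commit above; the function name, error wording, and return convention are illustrative assumptions, not the server's actual code.

```typescript
// Sketch only: returns an error string for the first invalid offer, or null if all are valid.
function validateOffers(offers: unknown): string | null {
  if (!Array.isArray(offers) || offers.length === 0) {
    return 'offers must be a non-empty array';
  }
  for (let i = 0; i < offers.length; i++) {
    const offer = offers[i];
    if (typeof offer !== 'object' || offer === null) {
      return `Offer at index ${i} must be an object`;
    }
    const sdp = (offer as { sdp?: unknown }).sdp;
    if (typeof sdp !== 'string' || sdp.trim().length === 0) {
      return `Offer at index ${i} must have a non-empty sdp string`;
    }
  }
  return null;
}
```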
05fe34be01 Remove explicit claimUsername RPC handler - claiming now fully implicit
Username claiming is now handled automatically in verifyAuth() when a username
doesn't exist. The separate claimUsername RPC method is no longer needed.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-12 21:56:56 +01:00
68bb28bbc2 Fix: Replace all remaining getOffersByService with getOffersForService 2025-12-12 21:13:02 +01:00
677bbbb37e Fix: Correct method name from getOffersByService to getOffersForService
The storage interface defines getOffersForService(), but the RPC
handler was calling getOffersByService(), causing a runtime error.
2025-12-12 21:11:56 +01:00
caae10bcac Fix: Pass offers to createService method
The createService storage method expects offers in the request,
but publishService wasn't passing them. This caused an undefined
error when d1.ts tried to call request.offers.map().

Now correctly passes offers to createService which handles
creating both the service and all offers atomically.
2025-12-12 21:09:15 +01:00
34babd036e Fix: Auto-claim should not validate claim message format
Auto-claim was incorrectly using validateUsernameClaim(), which
expects the 'claim:{username}:{timestamp}' message format. This failed
when users tried to auto-claim via publishService or getService.

Now auto-claim only:
- Validates username format
- Verifies signature against the actual message
- Claims the username

This allows implicit username claiming on first authenticated request.
2025-12-12 21:03:44 +01:00
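
The auto-claim flow described in the commit above can be sketched roughly as below; apart from the documented username pattern and signature verification, the storage and crypto interfaces shown here are placeholders, not the real ones.

```typescript
// Conceptual sketch of the implicit-claim path; helper shapes are assumed.
async function verifyAuthSketch(
  username: string,
  message: string,
  signature: string,
  publicKey: string | undefined,
  storage: {
    getUsername(u: string): Promise<{ publicKey: string } | null>;
    claimUsername(u: string, pk: string): Promise<void>;
  },
  verify: (message: string, signature: string, publicKey: string) => Promise<boolean>,
): Promise<void> {
  const existing = await storage.getUsername(username);
  if (existing) {
    // Already claimed: verify the signature against the stored public key.
    if (!(await verify(message, signature, existing.publicKey))) throw new Error('Invalid signature');
    return;
  }
  // Auto-claim: no claim-specific message format is required here.
  if (!publicKey) throw new Error('Username not claimed and no publicKey provided');
  if (!/^[a-z0-9][a-z0-9-]*[a-z0-9]$/.test(username)) throw new Error('Invalid username format');
  if (!(await verify(message, signature, publicKey))) throw new Error('Invalid signature');
  await storage.claimUsername(username, publicKey);
}
```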
876ac2602c Fix: Correct validateUsernameClaim function calls
The function expects 4 separate parameters, not an object.
This was causing 'Username must be a string' errors because
the entire object was being passed as the username parameter.
2025-12-12 21:00:11 +01:00
df9f3311e9 Fix: Add missing continue statement in message validation
The message validation was missing a continue statement, causing
the handler to continue executing even after pushing an error response.
This led to undefined errors when trying to map over undefined values.
2025-12-12 20:52:24 +01:00
9f30f8b46d Implement implicit username claiming in RPC handler
Modified verifyAuth() to automatically claim usernames on first use.
When a username is not claimed and a publicKey is provided in the
RPC request, the server will validate and auto-claim it.

Changes:
- Added publicKey parameter to verifyAuth() function
- Added publicKey field to RpcRequest interface
- Updated RpcHandler type to include publicKey parameter
- Modified all method handlers to pass publicKey to verifyAuth()
- Updated handleRpc() to extract publicKey from requests

🤖 Generated with Claude Code
https://claude.com/claude-code

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-12 20:22:23 +01:00
17765a9f4f refactor: Convert to RPC interface with single /rpc endpoint
BREAKING CHANGES:
- Replaced REST API with RPC interface
- Single POST /rpc endpoint for all operations
- Removed auth middleware (per-method auth instead)
- Support for batch operations
- Message format changed for all methods

Changes:
- Created src/rpc.ts with all method handlers
- Simplified src/app.ts to only handle /rpc endpoint
- Removed src/middleware/auth.ts
- Updated README.md with complete RPC documentation
2025-12-12 19:51:58 +01:00
4e73157a16 migration: Convert peer_id to username in offers and ice_candidates
This migration aligns the D1 database schema with the unified Ed25519
authentication system that replaced the dual peerId/secret system.

Changes:
- Renames peer_id to username in offers table
- Renames answerer_peer_id to answerer_username in offers table
- Renames peer_id to username in ice_candidates table
- Adds service_fqn column to offers table
- Updates all indexes and foreign keys
2025-12-12 19:20:54 +01:00
8d47424a82 fix: Remove authSecret reference from worker config
The authSecret variable was removed but still referenced in the config
object, causing the worker to crash on all requests.
2025-12-12 19:18:24 +01:00
1612bd78b7 refactor: Unify polling endpoint and remove AUTH_SECRET
BREAKING CHANGES:
- Renamed /offers/poll to /poll (generic polling endpoint)
- Removed /offers/answered endpoint (use /poll instead)
- Removed AUTH_SECRET environment variable (Ed25519 auth only)
- Updated auth message format from 'pollOffers' to 'poll'
2025-12-12 19:13:11 +01:00
01b751afc3 docs: Fix DELETE endpoint auth and anonymous users description
- DELETE /services/:fqn uses request body for auth, not query parameters
- Updated anonymous users description to reflect server capabilities
  (not client auto-claiming behavior which was removed)
2025-12-12 17:44:35 +01:00
0a98ace6f7 docs: Update README for unified Ed25519 authentication
- Remove POST /register endpoint documentation
- Update all endpoints to show signature-based auth (username, signature, message)
- Remove Authorization header examples (replaced with body/query params)
- Add anonymous username documentation (anon-{timestamp}-{random})
- Update database schema to show username-based tables
- Remove AUTH_SECRET from configuration
- Update security section with Ed25519 authentication details
2025-12-10 22:19:06 +01:00
51fe405440 Unified Ed25519 authentication - remove peer_id/credentials system
BREAKING CHANGE: Remove dual authentication system

- Remove POST /register endpoint - no longer needed
- Remove peer_id/secret credential-based auth
- All authentication now uses username + Ed25519 signatures
- Anonymous users can generate random usernames (anon-{timestamp}-{hex})

Database schema:
- Rename peer_id → username in offers table
- Rename answerer_peer_id → answerer_username in offers table
- Rename peer_id → username in ice_candidates table
- Remove secret column from offers table
- Add FK constraints for username columns

Storage layer:
- Update D1 and SQLite implementations
- All methods use username instead of peerId
- Remove secret-related code

Auth middleware:
- Replace validateCredentials() with Ed25519 signature verification
- Extract auth from request body (POST) or query params (GET)
- Verify signature against username's public key
- Validate message format and timestamp

Crypto utilities:
- Remove generatePeerId(), encryptPeerId(), decryptPeerId(), validateCredentials()
- Add generateAnonymousUsername() - creates anon-{timestamp}-{random}
- Add validateAuthMessage() - validates auth message format

Config:
- Remove authSecret from Config interface (no longer needed)

All server endpoints updated to use getAuthenticatedUsername()
2025-12-10 22:06:45 +01:00
95596dd462 Update README to document current v0.4 API
- Remove outdated UUID-based endpoint documentation
- Document actual service:version@username FQN format
- Add /offers/poll combined polling endpoint
- Update all endpoint paths to match actual implementation
- Document ICE candidate role filtering
- Add migration notes from v0.3.x
2025-12-10 21:03:51 +01:00
1bf21d7df8 Include both offerer and answerer ICE candidates in polling endpoint
- Add role and peerId to ICE candidate responses for matching
- Offerers can now see their own candidates (for debugging/sync)
- Answerers can poll same endpoint to get offerer candidates
- Each candidate tagged with role ('offerer' or 'answerer') and peerId
- Enables proper bidirectional ICE candidate exchange
2025-12-10 19:51:31 +01:00
e3ede0033e Fix UNIQUE constraint: Use (service_name, version, username) instead of service_fqn
- Change UNIQUE constraint to composite key on separate columns
- Move upsert logic into D1Storage.createService() for atomic operation
- Delete existing service and its offers before inserting new one
- Remove redundant delete logic from app.ts endpoint
- Fixes 'UNIQUE constraint failed: services.service_fqn' error when republishing
2025-12-10 19:42:03 +01:00
cfa58f1dfa Add combined polling endpoint for answers and ICE candidates
- Add GET /offers/poll endpoint for efficient batch polling
- Returns both answered offers and ICE candidates in single request
- Supports timestamp-based filtering with 'since' parameter
- Reduces HTTP overhead from 2N requests to 1 request
- Filters ICE candidates by role (answerer candidates for offerer)
2025-12-10 19:32:52 +01:00
c14a8c24fc Add efficient batch polling endpoint for answered offers
Added GET /offers/answered endpoint that returns all answered offers
for the authenticated peer with optional 'since' timestamp filtering.

This allows offerers to efficiently poll for all incoming connections
in a single request instead of polling each offer individually.
2025-12-10 19:17:19 +01:00
b282bf6470 Fix D1 storage: Insert service_id when creating offers
The createOffers function was not inserting the service_id column even
though it was passed in the CreateOfferRequest. This caused all offers
to have NULL service_id, making getOffersForService return empty results.

Fixed:
- Added service_id to INSERT statement in createOffers
- Added serviceId to created offer objects
- Added serviceId to rowToOffer mapping

This resolves the 'No available offers' error when trying to connect
to a published service.
2025-12-10 18:52:11 +01:00
9088abe305 Fix fresh schema to match D1 storage expectations
Changed offers table to use service_id (nullable) instead of service_fqn.
This matches the actual D1 storage implementation in d1.ts which expects:
- service_id TEXT (optional link to service)
- NOT service_fqn (that's only in the services table)

Resolves 'NOT NULL constraint failed: offers.service_fqn' error.
2025-12-10 18:32:43 +01:00
00c5bbc501 Update database configuration and add fresh schema
- Update wrangler.toml with new D1 database ID
- Add fresh_schema.sql for clean database initialization
- Applied schema to fresh D1 database
- Server redeployed with correct database binding

This resolves the 'table services has no column named service_name' error
by ensuring the database has the correct v0.4.1+ schema.
2025-12-10 18:17:53 +01:00
85a3de65e2 Fix signature validation bug for serviceFqn with colons
The validateServicePublish function was incorrectly parsing the signature
message when serviceFqn contained colons (e.g., 'chat:2.0.0@user').

Old logic: Split by ':' and expected exactly 4 parts
Problem: serviceFqn 'chat:2.0.0@user' contains a colon, so we get 5 parts

Fixed:
- Allow parts.length >= 4
- Extract timestamp from the last part
- Reconstruct serviceFqn from all middle parts (parts[2] to parts[length-2])

This fixes the '403 Invalid signature for username' error that was
preventing service publication.
2025-12-09 22:59:02 +01:00
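
A sketch of the corrected parsing for signed messages whose serviceFqn itself contains a colon (e.g. `publishService:alice:chat:1.0.0@alice:1733404800000`); this illustrates the approach described in the fix above, not the actual validateServicePublish code.

```typescript
// Split on ':' but rejoin the middle parts, since the FQN may contain colons.
function parseSignedServiceMessage(message: string) {
  const parts = message.split(':');
  if (parts.length < 4) throw new Error('Malformed message');
  const method = parts[0];
  const username = parts[1];
  const timestamp = Number(parts[parts.length - 1]);             // last part
  const serviceFqn = parts.slice(2, parts.length - 1).join(':'); // middle parts rejoined
  return { method, username, serviceFqn, timestamp };
}

// parseSignedServiceMessage('publishService:alice:chat:1.0.0@alice:1733404800000')
// → { method: 'publishService', username: 'alice', serviceFqn: 'chat:1.0.0@alice', timestamp: 1733404800000 }
```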
8111cb9cec v0.5.0: Service discovery and FQN format refactoring
- Changed service FQN format to service:version@username (colon now separates name and version instead of @)
- Added service discovery: direct lookup, random selection, paginated queries
- Updated parseServiceFqn to handle optional username for discovery
- Removed UUID privacy layer (service_index table)
- Updated storage interface with discovery methods (discoverServices, getRandomService, getServiceByFqn)
- Removed deprecated methods (getServiceByUuid, queryService, listServicesForUsername, findServicesByName, touchUsername, batchCreateServices)
- Updated API routes: /services/:fqn with three modes (direct, random, paginated)
- Changed offer/answer/ICE routes to offer-specific: /services/:fqn/offers/:offerId/*
- Added extracted fields to services table (service_name, version, username) for efficient discovery
- Created migration 0007 to update schema and migrate existing data
- Added discovery indexes for performance

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-09 22:22:37 +01:00
b446adaee4 fix: better error handling for public key constraint
- Add try/catch in claimUsername to handle UNIQUE constraint
- Return meaningful error: 'This public key has already claimed a different username'
- Enable observability logs for better debugging
2025-12-08 21:31:36 +01:00
163e1f73d4 fix: update D1 schema to match v0.4.0 service-to-offers relationship
- Add service_id column to offers table
- Remove offer_id column from services table
- Add index for service_id in offers
2025-12-07 22:31:34 +01:00
1d47d47ef7 feat: add database migration for service-to-offers refactor
- Add service_id column to offers table
- Remove offer_id column from services table
- Update VERSION to 0.4.0 in wrangler.toml
2025-12-07 22:28:14 +01:00
1d70cd79e8 feat: refactor to service-based WebRTC signaling endpoints
BREAKING CHANGE: Replace offer-based endpoints with service-based signaling

- Add POST /services/:uuid/answer
- Add GET /services/:uuid/answer
- Add POST /services/:uuid/ice-candidates
- Add GET /services/:uuid/ice-candidates
- Remove all /offers/* endpoints (POST /offers, GET /offers/mine, etc.)
- Server auto-detects peer's offer when offerId is omitted
- Update README with new service-based API documentation
- Bump version to 0.4.0

This change simplifies the API by focusing on services rather than individual offers.
WebRTC signaling (answer/ICE) now operates at the service level, with automatic
offer detection when needed.
2025-12-07 22:17:24 +01:00
2aa1fee4d6 docs: update server README to remove outdated sections
- Remove obsolete POST /index/:username/query endpoint
- Remove non-existent PUT /offers/:offerId/heartbeat endpoint
- Update architecture diagram to reflect semver discovery
- Update database schema to show service-to-offers relationship
2025-12-07 22:07:16 +01:00
d564e2250f docs: Update README with semver matching and offers array 2025-12-07 22:00:40 +01:00
06ec5020f7 0.3.0 2025-12-07 21:59:15 +01:00
5c71f66a26 feat: Add semver-compatible service discovery with privacy
## Breaking Changes

### Removed Endpoints
- Removed GET /users/:username/services (service listing)
- Services are now completely hidden - cannot be enumerated

### Updated Endpoints
- GET /users/:username/services/:fqn now supports semver matching
- Requesting chat@1.0.0 will match chat@1.2.3, chat@1.5.0, etc.
- Will NOT match chat@2.0.0 (different major version)

## New Features

### Semantic Versioning Support
- Compatible version matching following semver rules (^1.0.0)
- Major version must match exactly
- For major version 0, minor must also match (0.x.y is unstable)
- Available version must be >= requested version
- Prerelease versions require exact match

### Privacy Improvements
- All services are now hidden by default
- No way to enumerate or list services for a username
- Must know exact service name to discover

## Implementation

### Server (src/)
- crypto.ts: Added parseVersion(), isVersionCompatible(), parseServiceFqn()
- storage/types.ts: Added findServicesByName() interface method
- storage/sqlite.ts: Implemented findServicesByName() with LIKE query
- storage/d1.ts: Implemented findServicesByName() with LIKE query
- app.ts: Updated GET /:username/services/:fqn with semver matching

### Semver Matching Logic
- Parse requested version: chat@1.0.0 → {name: "chat", version: "1.0.0"}
- Find all services with matching name: chat@*
- Filter to compatible versions using semver rules
- Return first match (most recently created)

## Examples

Request: chat@1.0.0
Matches: chat@1.0.0, chat@1.2.3, chat@1.9.5
Does NOT match: chat@0.9.0, chat@2.0.0, chat@1.0.0-beta

🤖 Generated with Claude Code
2025-12-07 21:56:19 +01:00
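
The matching rules listed in the commit above can be expressed roughly as follows; this is a hedged sketch with deliberately simplified version parsing, not the server's isVersionCompatible() implementation.

```typescript
// Sketch of the semver-compatibility rules described above.
function isCompatible(requested: string, available: string): boolean {
  const parse = (v: string) => {
    const [core, prerelease] = v.split('-');
    const [major, minor, patch] = core.split('.').map(Number);
    return { major, minor, patch, prerelease };
  };
  const req = parse(requested);
  const avail = parse(available);

  // Prerelease versions require an exact match.
  if (req.prerelease || avail.prerelease) return requested === available;
  // Major version must match exactly; for major 0, minor must also match.
  if (req.major !== avail.major) return false;
  if (req.major === 0 && req.minor !== avail.minor) return false;
  // Available version must be >= requested version.
  if (avail.minor !== req.minor) return avail.minor > req.minor;
  return avail.patch >= req.patch;
}

// isCompatible('1.0.0', '1.2.3') → true
// isCompatible('1.0.0', '2.0.0') → false
// isCompatible('1.0.0', '1.0.0-beta') → false
```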
ca3db47009 Refactor: Consolidate service/offer architecture
## Breaking Changes

### Server
- Services can now have multiple offers instead of single offer
- POST /users/:username/services accepts `offers` array instead of `sdp`
- GET /users/:username/services/:fqn returns `offers` array in response
- GET /services/:uuid returns `offers` array in response
- Database schema: removed `offer_id` from services table, added `service_id` to offers table
- Added `batchCreateServices()` and `getOffersForService()` methods

### Client
- `PublishServiceOptions` interface: `offers` array instead of `sdp` string
- `Service` interface: `offers` array instead of `offerId` and `sdp`
- `ServiceRequest` interface: `offers` array instead of `sdp`
- RondevuSignaler.setOffer() sends offers array to server
- Updated to extract offerId from first offer in service response

## New Features
- Support for multiple simultaneous offers per service (connection pooling)
- Batch service creation endpoint for reduced server load
- Proper one-to-many relationship between services and offers

## Implementation Details

### Server Changes (src/storage/)
- sqlite.ts: Added service_id column to offers, removed offer_id from services
- d1.ts: Updated to match new interface
- types.ts: Updated interfaces for Service, Offer, CreateServiceRequest
- app.ts: Updated all service endpoints to handle offers array

### Client Changes (src/)
- api.ts: Added OfferRequest and ServiceOffer interfaces
- rondevu-service.ts: Updated PublishServiceOptions to use offers array
- rondevu-signaler.ts: Updated to send/receive offers array

## Migration Notes
- No backwards compatibility - this is a breaking change
- Services published with old API will not work with new server
- Clients must update to new API to work with updated server

🤖 Generated with Claude Code
2025-12-07 21:49:23 +01:00
3efed6e9d2 Fix service reconnection: return available offer from pool
Modified /services/:uuid endpoint to return an available (unanswered)
offer from the service's offer pool instead of always returning the
initial offer. This fixes reconnection failures where clients would
try to answer already-consumed offers.

Changes:
- Query all offers from the service's peer ID
- Return first unanswered offer
- Return 503 if no offers available

Fixes: "Offer already answered" errors on reconnection attempts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-06 13:47:00 +01:00
1257867dff fix: implement upsert behavior for service creation
When a service is republished (e.g., for TTL refresh), the old service
is now deleted before creating a new one, preventing UNIQUE constraint
errors on (username, service_fqn).

Changes:
- Query for existing service before creation
- Delete existing service if found
- Create new service with same username/serviceFqn

This enables the client's TTL auto-refresh feature to work correctly.
2025-12-06 13:04:45 +01:00
52cf734858 Remove legacy V1 code and clean up unused remnants
- Delete unused bloom.ts module (leftover from topic-based discovery)
- Remove maxTopicsPerOffer configuration (no longer used)
- Remove unused info field from Offer types
- Simplify generateOfferHash() to only hash SDP (remove topics param)
- Update outdated comments referencing deprecated features
- Remove backward compatibility topics field from answer responses

This completes the migration to V2 service-based architecture by
removing all remnants of the V1 topic-based system.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-06 12:06:02 +01:00
5622867411 Add upsert behavior to service creation
- Delete existing service before creating new one
- Prevents UNIQUE constraint error on (username, service_fqn)
- Enables seamless service republishing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-06 11:46:21 +01:00
ac0e064e34 Fix answer response field names for V2 API compatibility
- Change 'answererPeerId' to 'answererId'
- Change 'answerSdp' to 'sdp'
- Add 'topics' field (empty array) for client compatibility

This ensures the server response matches the expected format
in the client's AnsweredOffer interface.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-06 11:37:31 +01:00
e7cd90b905 Fix error handling scope issue in service creation
The error handler was referencing variables (username, serviceFqn, offers)
that were declared inside the try block. If an error occurred before these
were defined, the error handler itself would fail, resulting in non-JSON
responses that caused "JSON.parse: unexpected character" errors on the client.

Fixed by:
- Declaring variables at function scope
- Initializing offers as empty array
- Using destructuring assignment for username/serviceFqn

This ensures the error handler can always access these variables safely,
even if an early error occurs, and will always return proper JSON responses.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-05 19:56:06 +01:00
21 changed files with 2284 additions and 1507 deletions

ADVANCED.md (new file, 502 lines)

@@ -0,0 +1,502 @@
# Rondevu Server - Advanced Usage
Comprehensive API reference, configuration guide, database schema, and security details.
## Table of Contents
- [RPC Methods](#rpc-methods)
- [Configuration](#configuration)
- [Database Schema](#database-schema)
- [Security](#security)
- [Migration Guide](#migration-guide)
---
## RPC Methods
### `getUser`
Check username availability
**Parameters:**
- `username` - Username to check
**Message format:** `getUser:{username}:{timestamp}` (no authentication required)
**Example:**
```json
{
"method": "getUser",
"message": "getUser:alice:1733404800000",
"signature": "base64-signature",
"params": { "username": "alice" }
}
```
**Response:**
```json
{
"success": true,
"result": {
"username": "alice",
"available": false,
"claimedAt": 1733404800000,
"expiresAt": 1765027200000,
"publicKey": "base64-encoded-public-key"
}
}
```
### `claimUsername`
Claim a username with cryptographic proof
**Parameters:**
- `username` - Username to claim
- `publicKey` - Base64-encoded Ed25519 public key
**Message format:** `claim:{username}:{timestamp}`
**Example:**
```json
{
"method": "claimUsername",
"message": "claim:alice:1733404800000",
"signature": "base64-signature",
"params": {
"username": "alice",
"publicKey": "base64-encoded-public-key"
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"success": true,
"username": "alice"
}
}
```
### `getService`
Get service by FQN (direct lookup, random discovery, or paginated)
**Parameters:**
- `serviceFqn` - Service FQN (e.g., `chat:1.0.0` or `chat:1.0.0@alice`)
- `limit` - (optional) Number of results for paginated mode
- `offset` - (optional) Offset for paginated mode
**Message format:** `getService:{username}:{serviceFqn}:{timestamp}`
**Modes:**
1. **Direct lookup** (with @username): Returns specific user's service
2. **Random** (without @username, no limit): Returns random service
3. **Paginated** (without @username, with limit): Returns multiple services
**Example:**
```json
{
"method": "getService",
"message": "getService:bob:chat:1.0.0:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice"
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"serviceId": "uuid",
"username": "alice",
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-hash",
"sdp": "v=0...",
"createdAt": 1733404800000,
"expiresAt": 1733405100000
}
}
```
### `publishService`
Publish a service with offers
**Parameters:**
- `serviceFqn` - Service FQN with username (e.g., `chat:1.0.0@alice`)
- `offers` - Array of offers, each with `sdp` field
- `ttl` - (optional) Time to live in milliseconds
**Message format:** `publishService:{username}:{serviceFqn}:{timestamp}`
**Example:**
```json
{
"method": "publishService",
"message": "publishService:alice:chat:1.0.0@alice:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offers": [
{ "sdp": "v=0..." },
{ "sdp": "v=0..." }
],
"ttl": 300000
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"serviceId": "uuid",
"username": "alice",
"serviceFqn": "chat:1.0.0@alice",
"offers": [
{
"offerId": "offer-hash-1",
"sdp": "v=0...",
"createdAt": 1733404800000,
"expiresAt": 1733405100000
}
],
"createdAt": 1733404800000,
"expiresAt": 1733405100000
}
}
```
### `deleteService`
Delete a service
**Parameters:**
- `serviceFqn` - Service FQN with username
**Message format:** `deleteService:{username}:{serviceFqn}:{timestamp}`
**Example:**
```json
{
"method": "deleteService",
"message": "deleteService:alice:chat:1.0.0@alice:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice"
}
}
```
**Response:**
```json
{
"success": true,
"result": { "success": true }
}
```
### `answerOffer`
Answer a specific offer
**Parameters:**
- `serviceFqn` - Service FQN
- `offerId` - Offer ID
- `sdp` - Answer SDP
**Message format:** `answerOffer:{username}:{offerId}:{timestamp}`
**Example:**
```json
{
"method": "answerOffer",
"message": "answerOffer:bob:offer-hash:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-hash",
"sdp": "v=0..."
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"success": true,
"offerId": "offer-hash"
}
}
```
### `getOfferAnswer`
Get answer for an offer (offerer polls this)
**Parameters:**
- `serviceFqn` - Service FQN
- `offerId` - Offer ID
**Message format:** `getOfferAnswer:{username}:{offerId}:{timestamp}`
**Example:**
```json
{
"method": "getOfferAnswer",
"message": "getOfferAnswer:alice:offer-hash:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-hash"
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"sdp": "v=0...",
"offerId": "offer-hash",
"answererId": "bob",
"answeredAt": 1733404800000
}
}
```
### `poll`
Combined polling for answers and ICE candidates
**Parameters:**
- `since` - (optional) Timestamp to get only new data
**Message format:** `poll:{username}:{timestamp}`
**Example:**
```json
{
"method": "poll",
"message": "poll:alice:1733404800000",
"signature": "base64-signature",
"params": {
"since": 1733404800000
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"answers": [
{
"offerId": "offer-hash",
"serviceId": "service-uuid",
"answererId": "bob",
"sdp": "v=0...",
"answeredAt": 1733404800000
}
],
"iceCandidates": {
"offer-hash": [
{
"candidate": { "candidate": "...", "sdpMid": "0", "sdpMLineIndex": 0 },
"role": "answerer",
"username": "bob",
"createdAt": 1733404800000
}
]
}
}
}
```
### `addIceCandidates`
Add ICE candidates to an offer
**Parameters:**
- `serviceFqn` - Service FQN
- `offerId` - Offer ID
- `candidates` - Array of ICE candidates
**Message format:** `addIceCandidates:{username}:{offerId}:{timestamp}`
**Example:**
```json
{
"method": "addIceCandidates",
"message": "addIceCandidates:alice:offer-hash:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-hash",
"candidates": [
{
"candidate": "candidate:...",
"sdpMid": "0",
"sdpMLineIndex": 0
}
]
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"count": 1,
"offerId": "offer-hash"
}
}
```
### `getIceCandidates`
Get ICE candidates for an offer
**Parameters:**
- `serviceFqn` - Service FQN
- `offerId` - Offer ID
- `since` - (optional) Timestamp to get only new candidates
**Message format:** `getIceCandidates:{username}:{offerId}:{timestamp}`
**Example:**
```json
{
"method": "getIceCandidates",
"message": "getIceCandidates:alice:offer-hash:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-hash",
"since": 1733404800000
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"candidates": [
{
"candidate": {
"candidate": "candidate:...",
"sdpMid": "0",
"sdpMLineIndex": 0
},
"createdAt": 1733404800000
}
],
"offerId": "offer-hash"
}
}
```
## Configuration
Environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3000` | Server port (Node.js/Docker) |
| `CORS_ORIGINS` | `*` | Comma-separated allowed origins |
| `STORAGE_PATH` | `./rondevu.db` | SQLite database path (use `:memory:` for in-memory) |
| `VERSION` | `0.5.0` | Server version (semver) |
| `OFFER_DEFAULT_TTL` | `60000` | Default offer TTL in ms (1 minute) |
| `OFFER_MIN_TTL` | `60000` | Minimum offer TTL in ms (1 minute) |
| `OFFER_MAX_TTL` | `86400000` | Maximum offer TTL in ms (24 hours) |
| `CLEANUP_INTERVAL` | `60000` | Cleanup interval in ms (1 minute) |
| `MAX_OFFERS_PER_REQUEST` | `100` | Maximum offers per create request |
## Database Schema
### usernames
- `username` (PK): Claimed username
- `public_key`: Ed25519 public key (base64)
- `claimed_at`: Claim timestamp
- `expires_at`: Expiry timestamp (365 days)
- `last_used`: Last activity timestamp
- `metadata`: Optional JSON metadata
### services
- `id` (PK): Service ID (UUID)
- `username` (FK): Owner username
- `service_fqn`: Fully qualified name (chat:1.0.0@alice)
- `service_name`: Service name component (chat)
- `version`: Version component (1.0.0)
- `created_at`, `expires_at`: Timestamps
- UNIQUE constraint on (service_name, version, username)
### offers
- `id` (PK): Offer ID (hash of SDP)
- `username` (FK): Owner username
- `service_id` (FK): Link to service
- `service_fqn`: Denormalized service FQN
- `sdp`: WebRTC offer SDP
- `answerer_username`: Username of answerer (null until answered)
- `answer_sdp`: WebRTC answer SDP (null until answered)
- `answered_at`: Timestamp when answered
- `created_at`, `expires_at`, `last_seen`: Timestamps
### ice_candidates
- `id` (PK): Auto-increment ID
- `offer_id` (FK): Link to offer
- `username`: Username who sent the candidate
- `role`: 'offerer' or 'answerer'
- `candidate`: JSON-encoded candidate
- `created_at`: Timestamp
## Security
### Ed25519 Signature Authentication
All authenticated requests require:
- **message**: Signed message whose format is specific to each method
- **signature**: Base64-encoded Ed25519 signature of the message
- Username is extracted from the message
### Username Claiming
- **Algorithm**: Ed25519 signatures
- **Message Format**: `claim:{username}:{timestamp}`
- **Replay Protection**: Timestamp must be within 5 minutes
- **Key Management**: Private keys never leave the client
- **Validity**: 365 days, auto-renewed on use
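For reference, a client could produce the claim message and signature roughly as below using @noble/ed25519 (already a server dependency). The exact async function names (`signAsync`, `getPublicKeyAsync`) follow the library's recent API and may differ between major versions, so treat this as a sketch rather than a specification.

```typescript
import * as ed from '@noble/ed25519';

// Sketch: build and sign a claim message on the client; the server never sees the private key.
async function signClaim(username: string, privateKey: Uint8Array) {
  const message = `claim:${username}:${Date.now()}`; // format documented above
  const bytes = new TextEncoder().encode(message);
  const signature = await ed.signAsync(bytes, privateKey);
  const publicKey = await ed.getPublicKeyAsync(privateKey);
  const toBase64 = (b: Uint8Array) => btoa(String.fromCharCode(...Array.from(b)));
  return { message, signature: toBase64(signature), publicKey: toBase64(publicKey) };
}
```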
### Anonymous Users
- **Format**: `anon-{timestamp}-{random}` (e.g., `anon-lx2w34-a3f501`)
- **Generation**: Generated client-side (useful for testing)
- **Behavior**: Same as regular usernames and must be explicitly claimed like any other username
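A possible generator matching the example format above (base36 timestamp plus hex random); the exact encoding used by the server is an assumption.

```typescript
// Sketch: produce a username like "anon-lx2w34-a3f501".
function generateAnonymousUsername(): string {
  const timestamp = Date.now().toString(36);
  const random = Array.from(crypto.getRandomValues(new Uint8Array(3)),
    b => b.toString(16).padStart(2, '0')).join('');
  return `anon-${timestamp}-${random}`;
}
```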
### Service Publishing
- **Ownership Verification**: Every publish requires username signature
- **Message Format**: `publishService:{username}:{serviceFqn}:{timestamp}`
- **Auto-Renewal**: Publishing a service extends username expiry
### ICE Candidate Filtering
- Server filters candidates by role to prevent peers from receiving their own candidates
- Offerers receive only answerer candidates
- Answerers receive only offerer candidates
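The filter amounts to selecting candidates whose role is the opposite of the requester's, as in this sketch (field names follow the ice_candidates schema, but the TypeScript shape is assumed):

```typescript
interface StoredCandidate { role: 'offerer' | 'answerer'; username: string; candidate: unknown; createdAt: number }

// Each side only receives the other side's candidates.
function candidatesForPeer(all: StoredCandidate[], requesterRole: 'offerer' | 'answerer'): StoredCandidate[] {
  const wanted = requesterRole === 'offerer' ? 'answerer' : 'offerer';
  return all.filter(c => c.role === wanted);
}
```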
## Migration from v0.4.x
See [MIGRATION.md](../MIGRATION.md) for detailed migration guide.
**Key Changes:**
- Moved from REST API to RPC interface with single `/rpc` endpoint
- All methods now use POST with JSON body
- Batch operations supported
- Authentication is per-method instead of per-endpoint middleware
## License
MIT

README.md

@@ -2,9 +2,9 @@
[![npm version](https://img.shields.io/npm/v/@xtr-dev/rondevu-server)](https://www.npmjs.com/package/@xtr-dev/rondevu-server)
🌐 **DNS-like WebRTC signaling with username claiming and service discovery**
🌐 **Simple WebRTC signaling with RPC interface**
Scalable WebRTC signaling server with cryptographic username claiming, service publishing, and privacy-preserving discovery.
Scalable WebRTC signaling server with cryptographic username claiming, service publishing with semantic versioning, and efficient offer/answer exchange via JSON-RPC interface.
**Related repositories:**
- [@xtr-dev/rondevu-client](https://github.com/xtr-dev/rondevu-client) - TypeScript client library ([npm](https://www.npmjs.com/package/@xtr-dev/rondevu-client))
@@ -15,12 +15,14 @@ Scalable WebRTC signaling server with cryptographic username claiming, service p
## Features
- **RPC Interface**: Single endpoint for all operations with batching support
- **Username Claiming**: Cryptographic username ownership with Ed25519 signatures (365-day validity, auto-renewed on use)
- **Service Publishing**: Package-style naming with semantic versioning (com.example.chat@1.0.0)
- **Privacy-Preserving Discovery**: UUID-based service index prevents enumeration
- **Public/Private Services**: Control service visibility
- **Stateless Authentication**: AES-256-GCM encrypted credentials, no server-side sessions
- **Service Publishing**: Service:version@username naming (e.g., `chat:1.0.0@alice`)
- **Service Discovery**: Random and paginated discovery for finding services without knowing usernames
- **Semantic Versioning**: Compatible version matching (chat:1.0.0 matches any 1.x.x)
- **Signature-Based Authentication**: All authenticated requests use Ed25519 signatures
- **Complete WebRTC Signaling**: Offer/answer exchange and ICE candidate relay
- **Batch Operations**: Execute multiple operations in a single HTTP request
- **Dual Storage**: SQLite (Node.js/Docker) and Cloudflare D1 (Workers) backends
## Architecture
@@ -30,11 +32,13 @@ Username Claiming → Service Publishing → Service Discovery → WebRTC Connec
alice claims "alice" with Ed25519 signature
alice publishes com.example.chat@1.0.0 → receives UUID abc123
alice publishes chat:1.0.0@alice with offers
bob queries alice's services → gets UUID abc123
bob queries chat:1.0.0@alice (direct) or chat:1.0.0 (discovery) → gets offer SDP
bob connects to UUID abc123 → WebRTC connection established
bob posts answer SDP → WebRTC connection established
ICE candidates exchanged via server relay
```
## Quick Start
@@ -46,7 +50,7 @@ npm install && npm start
**Docker:**
```bash
docker build -t rondevu . && docker run -p 3000:3000 -e STORAGE_PATH=:memory: -e AUTH_SECRET=$(openssl rand -hex 32) rondevu
docker build -t rondevu . && docker run -p 3000:3000 -e STORAGE_PATH=:memory: rondevu
```
**Cloudflare Workers:**
@@ -54,308 +58,201 @@ docker build -t rondevu . && docker run -p 3000:3000 -e STORAGE_PATH=:memory: -e
npx wrangler deploy
```
## API Endpoints
## RPC Interface
### Public Endpoints
All API calls are made to `POST /rpc` with JSON-RPC format.
#### `GET /`
Returns server version and info
### Request Format
#### `GET /health`
Health check endpoint with version
#### `POST /register`
Register a new peer and receive credentials (peerId + secret)
Generates a cryptographically random 128-bit peer ID.
**Response:**
**Single method call:**
```json
{
"peerId": "f17c195f067255e357232e34cf0735d9",
"secret": "DdorTR8QgSn9yngn+4qqR8cs1aMijvX..."
"method": "getUser",
"message": "getUser:alice:1733404800000",
"signature": "base64-encoded-signature",
"params": {
"username": "alice"
}
}
```
**Batch calls:**
```json
[
{
"method": "getUser",
"message": "getUser:alice:1733404800000",
"signature": "base64-encoded-signature",
"params": { "username": "alice" }
},
{
"method": "claimUsername",
"message": "claim:bob:1733404800000",
"signature": "base64-encoded-signature",
"params": {
"username": "bob",
"publicKey": "base64-encoded-public-key"
}
}
]
```
### Response Format
**Single response:**
```json
{
"success": true,
"result": { /* method-specific data */ }
}
```
**Error response:**
```json
{
"success": false,
"error": "Error message"
}
```
**Batch responses:** Array of responses matching request array order.
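As a minimal illustration, a client can call the endpoint with a plain `fetch`; the base URL below assumes a local server on the default port and is not part of the API.

```typescript
// Sketch: POST a single request or a batch array to /rpc and parse the JSON response.
async function rpc<T>(body: unknown): Promise<T> {
  const res = await fetch('http://localhost:3000/rpc', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  return res.json() as Promise<T>;
}

// Usage:
//   const user = await rpc({ method: 'getUser', params: { username: 'alice' } });
//   For a batch, pass an array; the response is an array in the same order.
```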
## Core Methods
### Username Management
#### `POST /usernames/claim`
Claim a username with cryptographic proof
**Request:**
```json
```typescript
// Check username availability
POST /rpc
{
"method": "getUser",
"params": { "username": "alice" }
}
// Claim username (requires signature)
POST /rpc
{
"method": "claimUsername",
"message": "claim:alice:1733404800000",
"signature": "base64-signature",
"params": {
"username": "alice",
"publicKey": "base64-encoded-ed25519-public-key",
"signature": "base64-encoded-signature",
"message": "claim:alice:1733404800000"
"publicKey": "base64-public-key"
}
}
```
**Response:**
```json
### Service Publishing
```typescript
// Publish service (requires signature)
POST /rpc
{
"username": "alice",
"claimedAt": 1733404800000,
"expiresAt": 1765027200000
"method": "publishService",
"message": "publishService:alice:chat:1.0.0@alice:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offers": [{ "sdp": "webrtc-offer-sdp" }],
"ttl": 300000
}
```
**Validation:**
- Username format: `^[a-z0-9][a-z0-9-]*[a-z0-9]$` (3-32 characters)
- Signature must be valid Ed25519 signature
- Timestamp must be within 5 minutes (replay protection)
- Expires after 365 days, auto-renewed on use
#### `GET /usernames/:username`
Check username availability and claim status
**Response:**
```json
{
"username": "alice",
"available": false,
"claimedAt": 1733404800000,
"expiresAt": 1765027200000,
"publicKey": "..."
}
```
#### `GET /usernames/:username/services`
List all services for a username (privacy-preserving)
**Response:**
```json
{
"username": "alice",
"services": [
{
"uuid": "abc123",
"isPublic": false
},
{
"uuid": "def456",
"isPublic": true,
"serviceFqn": "com.example.public@1.0.0",
"metadata": { "description": "Public service" }
}
]
}
```
### Service Management
#### `POST /services`
Publish a service (requires authentication and username signature)
**Headers:**
- `Authorization: Bearer {peerId}:{secret}`
**Request:**
```json
{
"username": "alice",
"serviceFqn": "com.example.chat@1.0.0",
"sdp": "v=0...",
"ttl": 300000,
"isPublic": false,
"metadata": { "description": "Chat service" },
"signature": "base64-encoded-signature",
"message": "publish:alice:com.example.chat@1.0.0:1733404800000"
}
```
**Response:**
```json
{
"serviceId": "uuid-v4",
"uuid": "uuid-v4-for-index",
"offerId": "offer-hash-id",
"expiresAt": 1733405100000
}
```
**Service FQN Format:**
- Service name: Reverse domain notation (e.g., `com.example.chat`)
- Version: Semantic versioning (e.g., `1.0.0`, `2.1.3-beta`)
- Complete FQN: `service-name@version` (e.g., `com.example.chat@1.0.0`)
**Validation:**
- Service name pattern: `^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+$`
- Length: 3-128 characters
- Version pattern: `^[0-9]+\.[0-9]+\.[0-9]+(-[a-z0-9.-]+)?$`
#### `GET /services/:uuid`
Get service details by UUID
**Response:**
```json
{
"serviceId": "...",
"username": "alice",
"serviceFqn": "com.example.chat@1.0.0",
"offerId": "...",
"sdp": "v=0...",
"isPublic": false,
"metadata": { ... },
"createdAt": 1733404800000,
"expiresAt": 1733405100000
}
```
#### `DELETE /services/:serviceId`
Unpublish a service (requires authentication and ownership)
**Headers:**
- `Authorization: Bearer {peerId}:{secret}`
**Request:**
```json
{
"username": "alice"
}
```
### Service Discovery
#### `POST /index/:username/query`
Query a service by FQN
**Request:**
```json
```typescript
// Get specific service
POST /rpc
{
"serviceFqn": "com.example.chat@1.0.0"
"method": "getService",
"params": { "serviceFqn": "chat:1.0.0@alice" }
}
// Random discovery
POST /rpc
{
"method": "getService",
"params": { "serviceFqn": "chat:1.0.0" }
}
// Paginated discovery
POST /rpc
{
"method": "getService",
"params": {
"serviceFqn": "chat:1.0.0",
"limit": 10,
"offset": 0
}
}
```
**Response:**
```json
### WebRTC Signaling
```typescript
// Answer offer (requires signature)
POST /rpc
{
"uuid": "abc123",
"allowed": true
"method": "answerOffer",
"message": "answer:bob:offer-id:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-id",
"sdp": "webrtc-answer-sdp"
}
}
// Add ICE candidates (requires signature)
POST /rpc
{
"method": "addIceCandidates",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-id",
"candidates": [{ /* RTCIceCandidateInit */ }]
}
}
// Poll for answers and ICE candidates (requires signature)
POST /rpc
{
"method": "poll",
"params": { "since": 1733404800000 }
}
```
### Offer Management (Low-level)
#### `POST /offers`
Create one or more offers (requires authentication)
**Headers:**
- `Authorization: Bearer {peerId}:{secret}`
**Request:**
```json
{
"offers": [
{
"sdp": "v=0...",
"ttl": 300000
}
]
}
```
#### `GET /offers/mine`
List all offers owned by authenticated peer
#### `PUT /offers/:offerId/heartbeat`
Update last_seen timestamp for an offer
#### `DELETE /offers/:offerId`
Delete a specific offer
#### `POST /offers/:offerId/answer`
Answer an offer (locks it to answerer)
**Request:**
```json
{
"sdp": "v=0..."
}
```
#### `GET /offers/answers`
Poll for answers to your offers
#### `POST /offers/:offerId/ice-candidates`
Post ICE candidates for an offer
**Request:**
```json
{
"candidates": ["candidate:1 1 UDP..."]
}
```
#### `GET /offers/:offerId/ice-candidates?since=1234567890`
Get ICE candidates from the other peer
## Configuration
Environment variables:
Quick reference for common environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3000` | Server port (Node.js/Docker) |
| `CORS_ORIGINS` | `*` | Comma-separated allowed origins |
| `STORAGE_PATH` | `./rondevu.db` | SQLite database path (use `:memory:` for in-memory) |
| `VERSION` | `2.0.0` | Server version (semver) |
| `AUTH_SECRET` | Random 32-byte hex | Secret key for credential encryption (required for production) |
| `OFFER_DEFAULT_TTL` | `300000` | Default offer TTL in ms (5 minutes) |
| `OFFER_MIN_TTL` | `60000` | Minimum offer TTL in ms (1 minute) |
| `OFFER_MAX_TTL` | `3600000` | Maximum offer TTL in ms (1 hour) |
| `MAX_OFFERS_PER_REQUEST` | `10` | Maximum offers per create request |
## Database Schema
📚 See [ADVANCED.md](./ADVANCED.md#configuration) for complete configuration reference.
### usernames
- `username` (PK): Claimed username
- `public_key`: Ed25519 public key (base64)
- `claimed_at`: Claim timestamp
- `expires_at`: Expiry timestamp (365 days)
- `last_used`: Last activity timestamp
- `metadata`: Optional JSON metadata
## Documentation
### services
- `id` (PK): Service ID (UUID)
- `username` (FK): Owner username
- `service_fqn`: Fully qualified name (com.example.chat@1.0.0)
- `offer_id` (FK): WebRTC offer ID
- `is_public`: Public/private flag
- `metadata`: JSON metadata
- `created_at`, `expires_at`: Timestamps
### service_index (privacy layer)
- `uuid` (PK): Random UUID for discovery
- `service_id` (FK): Links to service
- `username`, `service_fqn`: Denormalized for performance
📚 **[ADVANCED.md](./ADVANCED.md)** - Comprehensive guide including:
- Complete RPC method reference with examples
- Full configuration options
- Database schema documentation
- Security implementation details
- Migration guides
## Security
### Username Claiming
- **Algorithm**: Ed25519 signatures
- **Message Format**: `claim:{username}:{timestamp}`
- **Replay Protection**: Timestamp must be within 5 minutes
- **Key Management**: Private keys never leave the client
All authenticated operations require Ed25519 signatures:
- **Message Format**: `{method}:{username}:{context}:{timestamp}`
- **Signature**: Base64-encoded Ed25519 signature of the message
- **Replay Protection**: Timestamps must be within 5 minutes
- **Username Ownership**: Verified via public key signature
### Service Publishing
- **Ownership Verification**: Every publish requires username signature
- **Message Format**: `publish:{username}:{serviceFqn}:{timestamp}`
- **Auto-Renewal**: Publishing a service extends username expiry
### Privacy
- **Private Services**: Only UUID exposed, FQN hidden
- **Public Services**: FQN and metadata visible
- **No Enumeration**: Cannot list all services without knowing FQN
## Migration from V1
V2 is a **breaking change** that removes topic-based discovery. See [MIGRATION.md](../MIGRATION.md) for detailed migration guide.
**Key Changes:**
- ❌ Removed: Topic-based discovery, bloom filters, public peer listings
- ✅ Added: Username claiming, service publishing, UUID-based privacy
See [ADVANCED.md](./ADVANCED.md#security) for detailed security documentation.
## License


@@ -0,0 +1,40 @@
-- V0.4.0 Migration: Refactor service-to-offer relationship
-- Change from one-to-one (service has offer_id) to one-to-many (offer has service_id)
-- Step 1: Add service_id column to offers table
ALTER TABLE offers ADD COLUMN service_id TEXT;
-- Step 2: Create new services table without offer_id
CREATE TABLE services_new (
id TEXT PRIMARY KEY,
username TEXT NOT NULL,
service_fqn TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
is_public INTEGER NOT NULL DEFAULT 0,
metadata TEXT,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
UNIQUE(username, service_fqn)
);
-- Step 3: Copy data from old services table (if any exists)
INSERT INTO services_new (id, username, service_fqn, created_at, expires_at, is_public, metadata)
SELECT id, username, service_fqn, created_at, expires_at, is_public, metadata
FROM services;
-- Step 4: Drop old services table
DROP TABLE services;
-- Step 5: Rename new table to services
ALTER TABLE services_new RENAME TO services;
-- Step 6: Recreate indexes
CREATE INDEX IF NOT EXISTS idx_services_username ON services(username);
CREATE INDEX IF NOT EXISTS idx_services_fqn ON services(service_fqn);
CREATE INDEX IF NOT EXISTS idx_services_expires ON services(expires_at);
-- Step 7: Add index for service_id in offers
CREATE INDEX IF NOT EXISTS idx_offers_service ON offers(service_id);
-- Step 8: Add foreign key constraint (D1 doesn't enforce FK in ALTER, but good for documentation)
-- FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE


@@ -0,0 +1,54 @@
-- V0.4.1 Migration: Simplify schema and add service discovery
-- Remove privacy layer (service_index) and add extracted fields for discovery
-- Step 1: Drop service_index table (privacy layer removal)
DROP TABLE IF EXISTS service_index;
-- Step 2: Create new services table with extracted fields for discovery
CREATE TABLE services_new (
id TEXT PRIMARY KEY,
service_fqn TEXT NOT NULL,
service_name TEXT NOT NULL,
version TEXT NOT NULL,
username TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
UNIQUE(service_fqn)
);
-- Step 3: Migrate existing data (if any) - parse FQN to extract components
-- Note: This migration assumes FQN format is already "service:version@username"
-- If there's old data with different format, manual intervention may be needed
INSERT INTO services_new (id, service_fqn, service_name, version, username, created_at, expires_at)
SELECT
id,
service_fqn,
-- Extract service_name: everything before first ':'
substr(service_fqn, 1, instr(service_fqn, ':') - 1) as service_name,
-- Extract version: between ':' and '@'
substr(
service_fqn,
instr(service_fqn, ':') + 1,
instr(service_fqn, '@') - instr(service_fqn, ':') - 1
) as version,
username,
created_at,
expires_at
FROM services
WHERE service_fqn LIKE '%:%@%'; -- Only migrate properly formatted FQNs
-- Step 4: Drop old services table
DROP TABLE services;
-- Step 5: Rename new table to services
ALTER TABLE services_new RENAME TO services;
-- Step 6: Create indexes for efficient querying
CREATE INDEX idx_services_fqn ON services(service_fqn);
CREATE INDEX idx_services_discovery ON services(service_name, version);
CREATE INDEX idx_services_username ON services(username);
CREATE INDEX idx_services_expires ON services(expires_at);
-- Step 7: Create index on offers for available offer filtering
CREATE INDEX IF NOT EXISTS idx_offers_available ON offers(answerer_peer_id) WHERE answerer_peer_id IS NULL;


@@ -0,0 +1,67 @@
-- Migration: Convert peer_id to username in offers and ice_candidates tables
-- This migration aligns the database with the unified Ed25519 authentication system
-- Step 1: Recreate offers table with username instead of peer_id
CREATE TABLE offers_new (
id TEXT PRIMARY KEY,
username TEXT NOT NULL,
service_id TEXT,
service_fqn TEXT,
sdp TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
answerer_username TEXT,
answer_sdp TEXT,
answered_at INTEGER,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
FOREIGN KEY (answerer_username) REFERENCES usernames(username) ON DELETE SET NULL
);
-- Step 2: Migrate data (if any) - peer_id becomes username
-- Note: This assumes peer_id values were already usernames in practice
INSERT INTO offers_new (id, username, service_id, service_fqn, sdp, created_at, expires_at, last_seen, answerer_username, answer_sdp, answered_at)
SELECT id, peer_id as username, service_id, NULL as service_fqn, sdp, created_at, expires_at, last_seen, answerer_peer_id as answerer_username, answer_sdp, answered_at
FROM offers;
-- Step 3: Drop old offers table
DROP TABLE offers;
-- Step 4: Rename new table
ALTER TABLE offers_new RENAME TO offers;
-- Step 5: Recreate indexes
CREATE INDEX idx_offers_username ON offers(username);
CREATE INDEX idx_offers_service ON offers(service_id);
CREATE INDEX idx_offers_expires ON offers(expires_at);
CREATE INDEX idx_offers_last_seen ON offers(last_seen);
CREATE INDEX idx_offers_answerer ON offers(answerer_username);
-- Step 6: Recreate ice_candidates table with username instead of peer_id
CREATE TABLE ice_candidates_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
offer_id TEXT NOT NULL,
username TEXT NOT NULL,
role TEXT NOT NULL CHECK(role IN ('offerer', 'answerer')),
candidate TEXT NOT NULL,
created_at INTEGER NOT NULL,
FOREIGN KEY (offer_id) REFERENCES offers(id) ON DELETE CASCADE,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE
);
-- Step 7: Migrate ICE candidates data
INSERT INTO ice_candidates_new (offer_id, username, role, candidate, created_at)
SELECT offer_id, peer_id as username, role, candidate, created_at
FROM ice_candidates;
-- Step 8: Drop old ice_candidates table
DROP TABLE ice_candidates;
-- Step 9: Rename new table
ALTER TABLE ice_candidates_new RENAME TO ice_candidates;
-- Step 10: Recreate indexes
CREATE INDEX idx_ice_offer ON ice_candidates(offer_id);
CREATE INDEX idx_ice_username ON ice_candidates(username);
CREATE INDEX idx_ice_role ON ice_candidates(role);
CREATE INDEX idx_ice_created ON ice_candidates(created_at);


@@ -0,0 +1,81 @@
-- Fresh schema for Rondevu v0.5.0+
-- Unified Ed25519 authentication - username/keypair only
-- This is the complete schema without migration steps
-- Drop existing tables if they exist
DROP TABLE IF EXISTS ice_candidates;
DROP TABLE IF EXISTS services;
DROP TABLE IF EXISTS offers;
DROP TABLE IF EXISTS usernames;
-- Usernames table (now required for all users, even anonymous)
CREATE TABLE usernames (
username TEXT PRIMARY KEY,
public_key TEXT NOT NULL UNIQUE,
claimed_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
last_used INTEGER NOT NULL,
metadata TEXT,
CHECK(length(username) >= 3 AND length(username) <= 32)
);
CREATE INDEX idx_usernames_expires ON usernames(expires_at);
CREATE INDEX idx_usernames_public_key ON usernames(public_key);
-- Services table with discovery fields
CREATE TABLE services (
id TEXT PRIMARY KEY,
service_fqn TEXT NOT NULL,
service_name TEXT NOT NULL,
version TEXT NOT NULL,
username TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
UNIQUE(service_name, version, username)
);
CREATE INDEX idx_services_fqn ON services(service_fqn);
CREATE INDEX idx_services_discovery ON services(service_name, version);
CREATE INDEX idx_services_username ON services(username);
CREATE INDEX idx_services_expires ON services(expires_at);
-- Offers table (now uses username instead of peer_id)
CREATE TABLE offers (
id TEXT PRIMARY KEY,
username TEXT NOT NULL,
service_id TEXT,
service_fqn TEXT,
sdp TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
answerer_username TEXT,
answer_sdp TEXT,
answered_at INTEGER,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
FOREIGN KEY (answerer_username) REFERENCES usernames(username) ON DELETE SET NULL
);
CREATE INDEX idx_offers_username ON offers(username);
CREATE INDEX idx_offers_service ON offers(service_id);
CREATE INDEX idx_offers_expires ON offers(expires_at);
CREATE INDEX idx_offers_last_seen ON offers(last_seen);
CREATE INDEX idx_offers_answerer ON offers(answerer_username);
-- ICE candidates table (now uses username instead of peer_id)
CREATE TABLE ice_candidates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
offer_id TEXT NOT NULL,
username TEXT NOT NULL,
role TEXT NOT NULL CHECK(role IN ('offerer', 'answerer')),
candidate TEXT NOT NULL,
created_at INTEGER NOT NULL,
FOREIGN KEY (offer_id) REFERENCES offers(id) ON DELETE CASCADE,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE
);
CREATE INDEX idx_ice_offer ON ice_candidates(offer_id);
CREATE INDEX idx_ice_username ON ice_candidates(username);
CREATE INDEX idx_ice_role ON ice_candidates(role);
CREATE INDEX idx_ice_created ON ice_candidates(created_at);

package-lock.json (generated)

@@ -1,15 +1,16 @@
{
"name": "@xtr-dev/rondevu-server",
"version": "0.1.5",
"version": "0.5.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@xtr-dev/rondevu-server",
"version": "0.1.5",
"version": "0.5.0",
"dependencies": {
"@hono/node-server": "^1.19.6",
"@noble/ed25519": "^3.0.0",
"@xtr-dev/rondevu-client": "^0.13.0",
"better-sqlite3": "^12.4.1",
"hono": "^4.10.4"
},
@@ -23,9 +24,9 @@
}
},
"node_modules/@cloudflare/workers-types": {
"version": "4.20251115.0",
"resolved": "https://registry.npmjs.org/@cloudflare/workers-types/-/workers-types-4.20251115.0.tgz",
"integrity": "sha512-aM7jp7IfKhqKvfSaK1IhVTbSzxB6KQ4gX8e/W29tOuZk+YHlYXuRd/bMm4hWkfd7B1HWNWdsx1GTaEUoZIuVsw==",
"version": "4.20251209.0",
"resolved": "https://registry.npmjs.org/@cloudflare/workers-types/-/workers-types-4.20251209.0.tgz",
"integrity": "sha512-O+cbUVwgb4NgUB39R1cITbRshlAAPy1UQV0l8xEy2xcZ3wTh3fMl9f5oBwLsVmE9JRhIZx6llCLOBVf53eI5xA==",
"dev": true,
"license": "MIT OR Apache-2.0"
},
@@ -485,9 +486,9 @@
}
},
"node_modules/@hono/node-server": {
"version": "1.19.6",
"resolved": "https://registry.npmjs.org/@hono/node-server/-/node-server-1.19.6.tgz",
"integrity": "sha512-Shz/KjlIeAhfiuE93NDKVdZ7HdBVLQAfdbaXEaoAVO3ic9ibRSLGIQGkcBbFyuLr+7/1D5ZCINM8B+6IvXeMtw==",
"version": "1.19.7",
"resolved": "https://registry.npmjs.org/@hono/node-server/-/node-server-1.19.7.tgz",
"integrity": "sha512-vUcD0uauS7EU2caukW8z5lJKtoGMokxNbJtBiwHgpqxEXokaHCBkQUmCHhjFB1VUTWdqj25QoMkMKzgjq+uhrw==",
"license": "MIT",
"engines": {
"node": ">=18.14.1"
@@ -572,15 +573,24 @@
}
},
"node_modules/@types/node": {
"version": "24.10.1",
"resolved": "https://registry.npmjs.org/@types/node/-/node-24.10.1.tgz",
"integrity": "sha512-GNWcUTRBgIRJD5zj+Tq0fKOJ5XZajIiBroOF0yvj2bSU1WvNdYS/dn9UxwsujGW4JX06dnHyjV2y9rRaybH0iQ==",
"version": "24.10.2",
"resolved": "https://registry.npmjs.org/@types/node/-/node-24.10.2.tgz",
"integrity": "sha512-WOhQTZ4G8xZ1tjJTvKOpyEVSGgOTvJAfDK3FNFgELyaTpzhdgHVHeqW8V+UJvzF5BT+/B54T/1S2K6gd9c7bbA==",
"dev": true,
"license": "MIT",
"dependencies": {
"undici-types": "~7.16.0"
}
},
"node_modules/@xtr-dev/rondevu-client": {
"version": "0.13.0",
"resolved": "https://registry.npmjs.org/@xtr-dev/rondevu-client/-/rondevu-client-0.13.0.tgz",
"integrity": "sha512-oauCveLga4lploxpoW8U0Fd9Fyz+SAsNQzIDvAIG1fkAnAJu9eajmLsZ5JfzzDi7h2Ew1ClZ7MOrmlRfG4vaBg==",
"license": "MIT",
"dependencies": {
"@noble/ed25519": "^3.0.0"
}
},
"node_modules/acorn": {
"version": "8.15.0",
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.15.0.tgz",
@@ -635,9 +645,9 @@
"license": "MIT"
},
"node_modules/better-sqlite3": {
"version": "12.4.1",
"resolved": "https://registry.npmjs.org/better-sqlite3/-/better-sqlite3-12.4.1.tgz",
"integrity": "sha512-3yVdyZhklTiNrtg+4WqHpJpFDd+WHTg2oM7UcR80GqL05AOV0xEJzc6qNvFYoEtE+hRp1n9MpN6/+4yhlGkDXQ==",
"version": "12.5.0",
"resolved": "https://registry.npmjs.org/better-sqlite3/-/better-sqlite3-12.5.0.tgz",
"integrity": "sha512-WwCZ/5Diz7rsF29o27o0Gcc1Du+l7Zsv7SYtVPG0X3G/uUI1LqdxrQI7c9Hs2FWpqXXERjW9hp6g3/tH7DlVKg==",
"hasInstallScript": true,
"license": "MIT",
"dependencies": {
@@ -645,7 +655,7 @@
"prebuild-install": "^7.1.1"
},
"engines": {
"node": "20.x || 22.x || 23.x || 24.x"
"node": "20.x || 22.x || 23.x || 24.x || 25.x"
}
},
"node_modules/bindings": {
@@ -827,9 +837,9 @@
"license": "MIT"
},
"node_modules/hono": {
"version": "4.10.6",
"resolved": "https://registry.npmjs.org/hono/-/hono-4.10.6.tgz",
"integrity": "sha512-BIdolzGpDO9MQ4nu3AUuDwHZZ+KViNm+EZ75Ae55eMXMqLVhDFqEMXxtUe9Qh8hjL+pIna/frs2j6Y2yD5Ua/g==",
"version": "4.10.8",
"resolved": "https://registry.npmjs.org/hono/-/hono-4.10.8.tgz",
"integrity": "sha512-DDT0A0r6wzhe8zCGoYOmMeuGu3dyTAE40HHjwUsWFTEy5WxK1x2WDSsBPlEXgPbRIFY6miDualuUDbasPogIww==",
"license": "MIT",
"engines": {
"node": ">=16.9.0"

View File

@@ -1,6 +1,6 @@
{
"name": "@xtr-dev/rondevu-server",
"version": "0.2.4",
"version": "0.5.0",
"description": "DNS-like WebRTC signaling server with username claiming and service discovery",
"main": "dist/index.js",
"scripts": {
@@ -22,6 +22,7 @@
"dependencies": {
"@hono/node-server": "^1.19.6",
"@noble/ed25519": "^3.0.0",
"@xtr-dev/rondevu-client": "^0.13.0",
"better-sqlite3": "^12.4.1",
"hono": "^4.10.4"
}

View File

@@ -2,19 +2,17 @@ import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { Storage } from './storage/types.ts';
import { Config } from './config.ts';
import { createAuthMiddleware, getAuthenticatedPeerId } from './middleware/auth.ts';
import { generatePeerId, encryptPeerId, validateUsernameClaim, validateServicePublish, validateServiceFqn } from './crypto.ts';
import type { Context } from 'hono';
import { handleRpc, RpcRequest } from './rpc.ts';
// Constants
const MAX_BATCH_SIZE = 100;
/**
* Creates the Hono application with username and service-based WebRTC signaling
* Creates the Hono application with RPC interface
*/
export function createApp(storage: Storage, config: Config) {
const app = new Hono();
// Create auth middleware
const authMiddleware = createAuthMiddleware(config.authSecret);
// Enable CORS
app.use('/*', cors({
origin: (origin) => {
@@ -26,589 +24,70 @@ export function createApp(storage: Storage, config: Config) {
}
return config.corsOrigins[0];
},
allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS'],
allowHeaders: ['Content-Type', 'Origin', 'Authorization'],
allowMethods: ['GET', 'POST', 'OPTIONS'],
allowHeaders: ['Content-Type', 'Origin'],
exposeHeaders: ['Content-Type'],
maxAge: 600,
credentials: true,
credentials: false,
maxAge: 86400,
}));
// ===== General Endpoints =====
/**
* GET /
* Returns server information
*/
// Root endpoint - server info
app.get('/', (c) => {
return c.json({
version: config.version,
name: 'Rondevu',
description: 'DNS-like WebRTC signaling with username claiming and service discovery'
});
description: 'WebRTC signaling with RPC interface and Ed25519 authentication',
}, 200);
});
/**
* GET /health
* Health check endpoint
*/
// Health check
app.get('/health', (c) => {
return c.json({
status: 'ok',
timestamp: Date.now(),
version: config.version
});
});
/**
* POST /register
* Register a new peer (still needed for peer ID generation)
*/
app.post('/register', async (c) => {
try {
const peerId = generatePeerId();
const secret = await encryptPeerId(peerId, config.authSecret);
return c.json({
peerId,
secret
version: config.version,
}, 200);
} catch (err) {
console.error('Error registering peer:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
// ===== Username Management =====
/**
* POST /usernames/claim
* Claim a username with cryptographic proof
* POST /rpc
* RPC endpoint - accepts single or batch method calls
*/
app.post('/usernames/claim', async (c) => {
app.post('/rpc', async (c) => {
try {
const body = await c.req.json();
const { username, publicKey, signature, message } = body;
if (!username || !publicKey || !signature || !message) {
return c.json({ error: 'Missing required parameters: username, publicKey, signature, message' }, 400);
// Support both single request and batch array
const requests: RpcRequest[] = Array.isArray(body) ? body : [body];
// Validate requests
if (requests.length === 0) {
return c.json({ error: 'Empty request array' }, 400);
}
// Validate claim
const validation = await validateUsernameClaim(username, publicKey, signature, message);
if (!validation.valid) {
return c.json({ error: validation.error }, 400);
if (requests.length > MAX_BATCH_SIZE) {
return c.json({ error: `Too many requests in batch (max ${MAX_BATCH_SIZE})` }, 400);
}
// Attempt to claim username
try {
const claimed = await storage.claimUsername({
username,
publicKey,
signature,
message
});
// Handle RPC
const responses = await handleRpc(requests, storage, config);
// Return single response or array based on input
return c.json(Array.isArray(body) ? responses : responses[0], 200);
} catch (err) {
console.error('RPC error:', err);
return c.json({
username: claimed.username,
claimedAt: claimed.claimedAt,
expiresAt: claimed.expiresAt
}, 200);
} catch (err: any) {
if (err.message?.includes('already claimed')) {
return c.json({ error: 'Username already claimed by different public key' }, 409);
}
throw err;
}
} catch (err) {
console.error('Error claiming username:', err);
return c.json({ error: 'Internal server error' }, 500);
success: false,
error: 'Invalid request format',
}, 400);
}
});
/**
* GET /usernames/:username
* Check if username is available or get claim info
*/
app.get('/usernames/:username', async (c) => {
try {
const username = c.req.param('username');
const claimed = await storage.getUsername(username);
if (!claimed) {
// 404 for all other routes
app.all('*', (c) => {
return c.json({
username,
available: true
}, 200);
}
return c.json({
username: claimed.username,
available: false,
claimedAt: claimed.claimedAt,
expiresAt: claimed.expiresAt,
publicKey: claimed.publicKey
}, 200);
} catch (err) {
console.error('Error checking username:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* GET /usernames/:username/services
* List services for a username (privacy-preserving)
*/
app.get('/usernames/:username/services', async (c) => {
try {
const username = c.req.param('username');
const services = await storage.listServicesForUsername(username);
return c.json({
username,
services
}, 200);
} catch (err) {
console.error('Error listing services:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
// ===== Service Management =====
/**
* POST /services
* Publish a service
*/
app.post('/services', authMiddleware, async (c) => {
try {
const body = await c.req.json();
const { username, serviceFqn, sdp, ttl, isPublic, metadata, signature, message } = body;
if (!username || !serviceFqn || !sdp) {
return c.json({ error: 'Missing required parameters: username, serviceFqn, sdp' }, 400);
}
// Validate service FQN
const fqnValidation = validateServiceFqn(serviceFqn);
if (!fqnValidation.valid) {
return c.json({ error: fqnValidation.error }, 400);
}
// Verify username ownership (signature required)
if (!signature || !message) {
return c.json({ error: 'Missing signature or message for username verification' }, 400);
}
const usernameRecord = await storage.getUsername(username);
if (!usernameRecord) {
return c.json({ error: 'Username not claimed' }, 404);
}
// Verify signature matches username's public key
const signatureValidation = await validateServicePublish(username, serviceFqn, usernameRecord.publicKey, signature, message);
if (!signatureValidation.valid) {
return c.json({ error: 'Invalid signature for username' }, 403);
}
// Validate SDP
if (typeof sdp !== 'string' || sdp.length === 0) {
return c.json({ error: 'Invalid SDP' }, 400);
}
if (sdp.length > 64 * 1024) {
return c.json({ error: 'SDP too large (max 64KB)' }, 400);
}
// Calculate expiry
const peerId = getAuthenticatedPeerId(c);
const offerTtl = Math.min(
Math.max(ttl || config.offerDefaultTtl, config.offerMinTtl),
config.offerMaxTtl
);
const expiresAt = Date.now() + offerTtl;
// Create offer first
const offers = await storage.createOffers([{
peerId,
sdp,
expiresAt
}]);
if (offers.length === 0) {
return c.json({ error: 'Failed to create offer' }, 500);
}
const offer = offers[0];
// Create service
const result = await storage.createService({
username,
serviceFqn,
offerId: offer.id,
expiresAt,
isPublic: isPublic || false,
metadata: metadata ? JSON.stringify(metadata) : undefined
});
return c.json({
serviceId: result.service.id,
uuid: result.indexUuid,
offerId: offer.id,
expiresAt: result.service.expiresAt
}, 201);
} catch (err) {
console.error('Error creating service:', err);
console.error('Error details:', {
message: (err as Error).message,
stack: (err as Error).stack,
username,
serviceFqn,
offerId: offers[0]?.id
});
return c.json({
error: 'Internal server error',
details: (err as Error).message
}, 500);
}
});
/**
* GET /services/:uuid
* Get service details by index UUID
*/
app.get('/services/:uuid', async (c) => {
try {
const uuid = c.req.param('uuid');
const service = await storage.getServiceByUuid(uuid);
if (!service) {
return c.json({ error: 'Service not found' }, 404);
}
// Get associated offer
const offer = await storage.getOfferById(service.offerId);
if (!offer) {
return c.json({ error: 'Associated offer not found' }, 404);
}
return c.json({
serviceId: service.id,
username: service.username,
serviceFqn: service.serviceFqn,
offerId: service.offerId,
sdp: offer.sdp,
isPublic: service.isPublic,
metadata: service.metadata ? JSON.parse(service.metadata) : undefined,
createdAt: service.createdAt,
expiresAt: service.expiresAt
}, 200);
} catch (err) {
console.error('Error getting service:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* DELETE /services/:serviceId
* Delete a service (requires ownership)
*/
app.delete('/services/:serviceId', authMiddleware, async (c) => {
try {
const serviceId = c.req.param('serviceId');
const body = await c.req.json();
const { username } = body;
if (!username) {
return c.json({ error: 'Missing required parameter: username' }, 400);
}
const deleted = await storage.deleteService(serviceId, username);
if (!deleted) {
return c.json({ error: 'Service not found or not owned by this username' }, 404);
}
return c.json({ success: true }, 200);
} catch (err) {
console.error('Error deleting service:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* POST /index/:username/query
* Query service by FQN (returns UUID)
*/
app.post('/index/:username/query', async (c) => {
try {
const username = c.req.param('username');
const body = await c.req.json();
const { serviceFqn } = body;
if (!serviceFqn) {
return c.json({ error: 'Missing required parameter: serviceFqn' }, 400);
}
const uuid = await storage.queryService(username, serviceFqn);
if (!uuid) {
return c.json({ error: 'Service not found' }, 404);
}
return c.json({
uuid,
allowed: true
}, 200);
} catch (err) {
console.error('Error querying service:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
// ===== Offer Management (Core WebRTC) =====
/**
* POST /offers
* Create offers (direct, no service - for testing/advanced users)
*/
app.post('/offers', authMiddleware, async (c) => {
try {
const body = await c.req.json();
const { offers } = body;
if (!Array.isArray(offers) || offers.length === 0) {
return c.json({ error: 'Missing or invalid required parameter: offers (must be non-empty array)' }, 400);
}
if (offers.length > config.maxOffersPerRequest) {
return c.json({ error: `Too many offers (max ${config.maxOffersPerRequest})` }, 400);
}
const peerId = getAuthenticatedPeerId(c);
// Validate and prepare offers
const validated = offers.map((offer: any) => {
const { sdp, ttl, secret } = offer;
if (typeof sdp !== 'string' || sdp.length === 0) {
throw new Error('Invalid SDP in offer');
}
if (sdp.length > 64 * 1024) {
throw new Error('SDP too large (max 64KB)');
}
const offerTtl = Math.min(
Math.max(ttl || config.offerDefaultTtl, config.offerMinTtl),
config.offerMaxTtl
);
return {
peerId,
sdp,
expiresAt: Date.now() + offerTtl,
secret: secret ? String(secret).substring(0, 128) : undefined
};
});
const created = await storage.createOffers(validated);
return c.json({
offers: created.map(offer => ({
id: offer.id,
peerId: offer.peerId,
expiresAt: offer.expiresAt,
createdAt: offer.createdAt,
hasSecret: !!offer.secret
}))
}, 201);
} catch (err: any) {
console.error('Error creating offers:', err);
return c.json({ error: err.message || 'Internal server error' }, 500);
}
});
/**
* GET /offers/mine
* Get authenticated peer's offers
*/
app.get('/offers/mine', authMiddleware, async (c) => {
try {
const peerId = getAuthenticatedPeerId(c);
const offers = await storage.getOffersByPeerId(peerId);
return c.json({
offers: offers.map(offer => ({
id: offer.id,
sdp: offer.sdp,
createdAt: offer.createdAt,
expiresAt: offer.expiresAt,
lastSeen: offer.lastSeen,
hasSecret: !!offer.secret,
answererPeerId: offer.answererPeerId,
answered: !!offer.answererPeerId
}))
}, 200);
} catch (err) {
console.error('Error getting offers:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* DELETE /offers/:offerId
* Delete an offer
*/
app.delete('/offers/:offerId', authMiddleware, async (c) => {
try {
const offerId = c.req.param('offerId');
const peerId = getAuthenticatedPeerId(c);
const deleted = await storage.deleteOffer(offerId, peerId);
if (!deleted) {
return c.json({ error: 'Offer not found or not owned by this peer' }, 404);
}
return c.json({ success: true }, 200);
} catch (err) {
console.error('Error deleting offer:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* POST /offers/:offerId/answer
* Answer an offer
*/
app.post('/offers/:offerId/answer', authMiddleware, async (c) => {
try {
const offerId = c.req.param('offerId');
const body = await c.req.json();
const { sdp, secret } = body;
if (!sdp) {
return c.json({ error: 'Missing required parameter: sdp' }, 400);
}
if (typeof sdp !== 'string' || sdp.length === 0) {
return c.json({ error: 'Invalid SDP' }, 400);
}
if (sdp.length > 64 * 1024) {
return c.json({ error: 'SDP too large (max 64KB)' }, 400);
}
const answererPeerId = getAuthenticatedPeerId(c);
const result = await storage.answerOffer(offerId, answererPeerId, sdp, secret);
if (!result.success) {
return c.json({ error: result.error }, 400);
}
return c.json({ success: true }, 200);
} catch (err) {
console.error('Error answering offer:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* GET /offers/answers
* Get answers for authenticated peer's offers
*/
app.get('/offers/answers', authMiddleware, async (c) => {
try {
const peerId = getAuthenticatedPeerId(c);
const offers = await storage.getAnsweredOffers(peerId);
return c.json({
answers: offers.map(offer => ({
offerId: offer.id,
answererPeerId: offer.answererPeerId,
answerSdp: offer.answerSdp,
answeredAt: offer.answeredAt
}))
}, 200);
} catch (err) {
console.error('Error getting answers:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
// ===== ICE Candidate Exchange =====
/**
* POST /offers/:offerId/ice-candidates
* Add ICE candidates for an offer
*/
app.post('/offers/:offerId/ice-candidates', authMiddleware, async (c) => {
try {
const offerId = c.req.param('offerId');
const body = await c.req.json();
const { candidates } = body;
if (!Array.isArray(candidates) || candidates.length === 0) {
return c.json({ error: 'Missing or invalid required parameter: candidates' }, 400);
}
const peerId = getAuthenticatedPeerId(c);
// Get offer to determine role
const offer = await storage.getOfferById(offerId);
if (!offer) {
return c.json({ error: 'Offer not found' }, 404);
}
// Determine role
const role = offer.peerId === peerId ? 'offerer' : 'answerer';
const count = await storage.addIceCandidates(offerId, peerId, role, candidates);
return c.json({ count }, 200);
} catch (err) {
console.error('Error adding ICE candidates:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* GET /offers/:offerId/ice-candidates
* Get ICE candidates for an offer
*/
app.get('/offers/:offerId/ice-candidates', authMiddleware, async (c) => {
try {
const offerId = c.req.param('offerId');
const since = c.req.query('since');
const peerId = getAuthenticatedPeerId(c);
// Get offer to determine role
const offer = await storage.getOfferById(offerId);
if (!offer) {
return c.json({ error: 'Offer not found' }, 404);
}
// Get candidates for opposite role
const targetRole = offer.peerId === peerId ? 'answerer' : 'offerer';
const sinceTimestamp = since ? parseInt(since, 10) : undefined;
const candidates = await storage.getIceCandidates(offerId, targetRole, sinceTimestamp);
return c.json({
candidates: candidates.map(c => ({
candidate: c.candidate,
createdAt: c.createdAt
}))
}, 200);
} catch (err) {
console.error('Error getting ICE candidates:', err);
return c.json({ error: 'Internal server error' }, 500);
}
error: 'Not found. Use POST /rpc for all API calls.',
}, 404);
});
return app;
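The /rpc endpoint mirrors the shape of the input: a single request object yields a single response object, an array yields an array (capped at MAX_BATCH_SIZE). A minimal client-side sketch of a batch call, assuming a local server; the signature value is a placeholder (signing is sketched alongside verifyAuth() in src/rpc.ts further down):

const batch = [
  {
    method: 'getUser',
    message: `getUser:alice:${Date.now()}`,
    signature: '<signature placeholder>',
    params: { username: 'alice' },
  },
];

const res = await fetch('http://localhost:3000/rpc', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(batch),
});
const responses = await res.json(); // an array, because the input was an array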

View File

@@ -1,66 +0,0 @@
/**
* Bloom filter utility for testing if peer IDs might be in a set
* Used to filter out known peers from discovery results
*/
export class BloomFilter {
private bits: Uint8Array;
private size: number;
private numHashes: number;
/**
* Creates a bloom filter from a base64 encoded bit array
*/
constructor(base64Data: string, numHashes: number = 3) {
// Decode base64 to Uint8Array (works in both Node.js and Workers)
const binaryString = atob(base64Data);
const bytes = new Uint8Array(binaryString.length);
for (let i = 0; i < binaryString.length; i++) {
bytes[i] = binaryString.charCodeAt(i);
}
this.bits = bytes;
this.size = this.bits.length * 8;
this.numHashes = numHashes;
}
/**
* Test if a peer ID might be in the filter
* Returns true if possibly in set, false if definitely not in set
*/
test(peerId: string): boolean {
for (let i = 0; i < this.numHashes; i++) {
const hash = this.hash(peerId, i);
const index = hash % this.size;
const byteIndex = Math.floor(index / 8);
const bitIndex = index % 8;
if (!(this.bits[byteIndex] & (1 << bitIndex))) {
return false;
}
}
return true;
}
/**
* Simple hash function (FNV-1a variant)
*/
private hash(str: string, seed: number): number {
let hash = 2166136261 ^ seed;
for (let i = 0; i < str.length; i++) {
hash ^= str.charCodeAt(i);
hash += (hash << 1) + (hash << 4) + (hash << 7) + (hash << 8) + (hash << 24);
}
return hash >>> 0;
}
}
/**
* Helper to parse bloom filter from base64 string
*/
export function parseBloomFilter(base64: string): BloomFilter | null {
try {
return new BloomFilter(base64);
} catch {
return null;
}
}

View File

@@ -1,5 +1,3 @@
import { generateSecretKey } from './crypto.ts';
/**
* Application configuration
* Reads from environment variables with sensible defaults
@@ -10,28 +8,17 @@ export interface Config {
storagePath: string;
corsOrigins: string[];
version: string;
authSecret: string;
offerDefaultTtl: number;
offerMaxTtl: number;
offerMinTtl: number;
cleanupInterval: number;
maxOffersPerRequest: number;
maxTopicsPerOffer: number;
}
/**
* Loads configuration from environment variables
*/
export function loadConfig(): Config {
// Generate or load auth secret
let authSecret = process.env.AUTH_SECRET;
if (!authSecret) {
authSecret = generateSecretKey();
console.warn('WARNING: No AUTH_SECRET provided. Generated temporary secret:', authSecret);
console.warn('All peer credentials will be invalidated on server restart.');
console.warn('Set AUTH_SECRET environment variable to persist credentials across restarts.');
}
return {
port: parseInt(process.env.PORT || '3000', 10),
storageType: (process.env.STORAGE_TYPE || 'sqlite') as 'sqlite' | 'memory',
@@ -40,12 +27,10 @@ export function loadConfig(): Config {
? process.env.CORS_ORIGINS.split(',').map(o => o.trim())
: ['*'],
version: process.env.VERSION || 'unknown',
authSecret,
offerDefaultTtl: parseInt(process.env.OFFER_DEFAULT_TTL || '60000', 10),
offerMaxTtl: parseInt(process.env.OFFER_MAX_TTL || '86400000', 10),
offerMinTtl: parseInt(process.env.OFFER_MIN_TTL || '60000', 10),
cleanupInterval: parseInt(process.env.CLEANUP_INTERVAL || '60000', 10),
maxOffersPerRequest: parseInt(process.env.MAX_OFFERS_PER_REQUEST || '100', 10),
maxTopicsPerOffer: parseInt(process.env.MAX_TOPICS_PER_OFFER || '50', 10),
maxOffersPerRequest: parseInt(process.env.MAX_OFFERS_PER_REQUEST || '100', 10)
};
}

View File

@@ -1,7 +1,7 @@
/**
* Crypto utilities for stateless peer authentication
* Uses Web Crypto API for compatibility with both Node.js and Cloudflare Workers
* Crypto utilities for Ed25519-based authentication
* Uses @noble/ed25519 for Ed25519 signature verification
* Uses Web Crypto API for compatibility with both Node.js and Cloudflare Workers
*/
import * as ed25519 from '@noble/ed25519';
@@ -12,10 +12,6 @@ ed25519.hashes.sha512Async = async (message: Uint8Array) => {
return new Uint8Array(await crypto.subtle.digest('SHA-512', message as BufferSource));
};
const ALGORITHM = 'AES-GCM';
const IV_LENGTH = 12; // 96 bits for GCM
const KEY_LENGTH = 32; // 256 bits
// Username validation
const USERNAME_REGEX = /^[a-z0-9][a-z0-9-]*[a-z0-9]$/;
const USERNAME_MIN_LENGTH = 3;
@@ -25,30 +21,15 @@ const USERNAME_MAX_LENGTH = 32;
const TIMESTAMP_TOLERANCE_MS = 5 * 60 * 1000;
/**
* Generates a random peer ID (16 bytes = 32 hex chars)
* Generates an anonymous username for users who don't want to claim one
* Format: anon-{timestamp}-{random}
* This reduces collision probability to near-zero
*/
export function generatePeerId(): string {
const bytes = crypto.getRandomValues(new Uint8Array(16));
return Array.from(bytes).map(b => b.toString(16).padStart(2, '0')).join('');
}
/**
* Generates a random secret key for encryption (32 bytes = 64 hex chars)
*/
export function generateSecretKey(): string {
const bytes = crypto.getRandomValues(new Uint8Array(KEY_LENGTH));
return Array.from(bytes).map(b => b.toString(16).padStart(2, '0')).join('');
}
/**
* Convert hex string to Uint8Array
*/
function hexToBytes(hex: string): Uint8Array {
const bytes = new Uint8Array(hex.length / 2);
for (let i = 0; i < hex.length; i += 2) {
bytes[i / 2] = parseInt(hex.substring(i, i + 2), 16);
}
return bytes;
export function generateAnonymousUsername(): string {
const timestamp = Date.now().toString(36);
const random = crypto.getRandomValues(new Uint8Array(3));
const hex = Array.from(random).map(b => b.toString(16).padStart(2, '0')).join('');
return `anon-${timestamp}-${hex}`;
}
/**
@@ -70,99 +51,40 @@ function base64ToBytes(base64: string): Uint8Array {
}
/**
* Encrypts a peer ID using the server secret key
* Returns base64-encoded encrypted data (IV + ciphertext)
* Validates a generic auth message format
* Expected format: action:username:params:timestamp
* Validates that the message contains the expected username and has a valid timestamp
*/
export async function encryptPeerId(peerId: string, secretKeyHex: string): Promise<string> {
const keyBytes = hexToBytes(secretKeyHex);
export function validateAuthMessage(
expectedUsername: string,
message: string
): { valid: boolean; error?: string } {
const parts = message.split(':');
if (keyBytes.length !== KEY_LENGTH) {
throw new Error(`Secret key must be ${KEY_LENGTH * 2} hex characters (${KEY_LENGTH} bytes)`);
if (parts.length < 3) {
return { valid: false, error: 'Invalid message format: must have at least action:username:timestamp' };
}
// Import key
const key = await crypto.subtle.importKey(
'raw',
keyBytes,
{ name: ALGORITHM, length: 256 },
false,
['encrypt']
);
// Extract username (second part) and timestamp (last part)
const messageUsername = parts[1];
const timestamp = parseInt(parts[parts.length - 1], 10);
// Generate random IV
const iv = crypto.getRandomValues(new Uint8Array(IV_LENGTH));
// Encrypt peer ID
const encoder = new TextEncoder();
const data = encoder.encode(peerId);
const encrypted = await crypto.subtle.encrypt(
{ name: ALGORITHM, iv },
key,
data
);
// Combine IV + ciphertext and encode as base64
const combined = new Uint8Array(iv.length + encrypted.byteLength);
combined.set(iv, 0);
combined.set(new Uint8Array(encrypted), iv.length);
return bytesToBase64(combined);
// Validate username matches
if (messageUsername !== expectedUsername) {
return { valid: false, error: 'Username in message does not match authenticated username' };
}
/**
* Decrypts an encrypted peer ID secret
* Returns the plaintext peer ID or throws if decryption fails
*/
export async function decryptPeerId(encryptedSecret: string, secretKeyHex: string): Promise<string> {
try {
const keyBytes = hexToBytes(secretKeyHex);
if (keyBytes.length !== KEY_LENGTH) {
throw new Error(`Secret key must be ${KEY_LENGTH * 2} hex characters (${KEY_LENGTH} bytes)`);
// Validate timestamp
if (isNaN(timestamp)) {
return { valid: false, error: 'Invalid timestamp in message' };
}
// Decode base64
const combined = base64ToBytes(encryptedSecret);
// Extract IV and ciphertext
const iv = combined.slice(0, IV_LENGTH);
const ciphertext = combined.slice(IV_LENGTH);
// Import key
const key = await crypto.subtle.importKey(
'raw',
keyBytes,
{ name: ALGORITHM, length: 256 },
false,
['decrypt']
);
// Decrypt
const decrypted = await crypto.subtle.decrypt(
{ name: ALGORITHM, iv },
key,
ciphertext
);
const decoder = new TextDecoder();
return decoder.decode(decrypted);
} catch (err) {
throw new Error('Failed to decrypt peer ID: invalid secret or secret key');
}
const timestampCheck = validateTimestamp(timestamp);
if (!timestampCheck.valid) {
return timestampCheck;
}
/**
* Validates that a peer ID and secret match
* Returns true if valid, false otherwise
*/
export async function validateCredentials(peerId: string, encryptedSecret: string, secretKey: string): Promise<boolean> {
try {
const decryptedPeerId = await decryptPeerId(encryptedSecret, secretKey);
return decryptedPeerId === peerId;
} catch {
return false;
}
return { valid: true };
}
// ===== Username and Ed25519 Signature Utilities =====
@@ -192,31 +114,32 @@ export function validateUsername(username: string): { valid: boolean; error?: st
}
/**
* Validates service FQN format (service-name@version)
* Service name: reverse domain notation (com.example.service)
* Validates service FQN format (service:version@username or service:version)
* Service name: lowercase alphanumeric with dots/dashes (e.g., chat, file-share, com.example.chat)
* Version: semantic versioning (1.0.0, 2.1.3-beta, etc.)
* Username: optional, lowercase alphanumeric with dashes
*/
export function validateServiceFqn(fqn: string): { valid: boolean; error?: string } {
if (typeof fqn !== 'string') {
return { valid: false, error: 'Service FQN must be a string' };
}
// Split into service name and version
const parts = fqn.split('@');
if (parts.length !== 2) {
return { valid: false, error: 'Service FQN must be in format: service-name@version' };
// Parse the FQN
const parsed = parseServiceFqn(fqn);
if (!parsed) {
return { valid: false, error: 'Service FQN must be in format: service:version[@username]' };
}
const [serviceName, version] = parts;
const { serviceName, version, username } = parsed;
// Validate service name (reverse domain notation)
const serviceNameRegex = /^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+$/;
// Validate service name (alphanumeric with dots/dashes)
const serviceNameRegex = /^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$/;
if (!serviceNameRegex.test(serviceName)) {
return { valid: false, error: 'Service name must be reverse domain notation (e.g., com.example.service)' };
return { valid: false, error: 'Service name must be lowercase alphanumeric with optional dots/dashes' };
}
if (serviceName.length < 3 || serviceName.length > 128) {
return { valid: false, error: 'Service name must be 3-128 characters' };
if (serviceName.length < 1 || serviceName.length > 128) {
return { valid: false, error: 'Service name must be 1-128 characters' };
}
// Validate version (semantic versioning)
@@ -225,9 +148,97 @@ export function validateServiceFqn(fqn: string): { valid: boolean; error?: strin
return { valid: false, error: 'Version must be semantic versioning (e.g., 1.0.0, 2.1.3-beta)' };
}
// Validate username if present
if (username) {
const usernameCheck = validateUsername(username);
if (!usernameCheck.valid) {
return usernameCheck;
}
}
return { valid: true };
}
/**
* Parse semantic version string into components
*/
export function parseVersion(version: string): { major: number; minor: number; patch: number; prerelease?: string } | null {
const match = version.match(/^([0-9]+)\.([0-9]+)\.([0-9]+)(-[a-z0-9.-]+)?$/);
if (!match) return null;
return {
major: parseInt(match[1], 10),
minor: parseInt(match[2], 10),
patch: parseInt(match[3], 10),
prerelease: match[4]?.substring(1), // Remove leading dash
};
}
/**
* Check if two versions are compatible (same major version)
* Following semver rules: ^1.0.0 matches 1.x.x but not 2.x.x
*/
export function isVersionCompatible(requested: string, available: string): boolean {
const req = parseVersion(requested);
const avail = parseVersion(available);
if (!req || !avail) return false;
// Major version must match
if (req.major !== avail.major) return false;
// If major is 0, minor must also match (0.x.y is unstable)
if (req.major === 0 && req.minor !== avail.minor) return false;
// Available version must be >= requested version
if (avail.minor < req.minor) return false;
if (avail.minor === req.minor && avail.patch < req.patch) return false;
// Prerelease versions are only compatible with exact matches
if (req.prerelease && req.prerelease !== avail.prerelease) return false;
return true;
}
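// Worked examples of the rule above (each pair is requested vs. available):
//   isVersionCompatible('1.2.0', '1.5.3')      // true  - same major, newer minor
//   isVersionCompatible('1.2.0', '1.1.9')      // false - available older than requested
//   isVersionCompatible('1.2.0', '2.0.0')      // false - major version mismatch
//   isVersionCompatible('0.2.0', '0.3.0')      // false - 0.x requires matching minor
//   isVersionCompatible('1.0.0-beta', '1.0.0') // false - prerelease must match exactly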
/**
* Parse service FQN into components
* Formats supported:
* - service:version@username (e.g., "chat:1.0.0@alice")
* - service:version (e.g., "chat:1.0.0") for discovery
*/
export function parseServiceFqn(fqn: string): { serviceName: string; version: string; username: string | null } | null {
if (!fqn || typeof fqn !== 'string') return null;
// Check if username is present
const atIndex = fqn.lastIndexOf('@');
let serviceVersion: string;
let username: string | null = null;
if (atIndex > 0) {
// Format: service:version@username
serviceVersion = fqn.substring(0, atIndex);
username = fqn.substring(atIndex + 1);
} else {
// Format: service:version (no username)
serviceVersion = fqn;
}
// Split service:version
const colonIndex = serviceVersion.indexOf(':');
if (colonIndex <= 0) return null; // No colon or colon at start
const serviceName = serviceVersion.substring(0, colonIndex);
const version = serviceVersion.substring(colonIndex + 1);
if (!serviceName || !version) return null;
return {
serviceName,
version,
username,
};
}
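// Example parses (illustrative):
//   parseServiceFqn('chat:1.0.0@alice') // { serviceName: 'chat', version: '1.0.0', username: 'alice' }
//   parseServiceFqn('chat:1.0.0')       // { serviceName: 'chat', version: '1.0.0', username: null }
//   parseServiceFqn('chat')             // null - missing ':' version separator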
/**
* Validates timestamp is within acceptable range (prevents replay attacks)
*/
@@ -336,16 +347,24 @@ export async function validateServicePublish(
}
// Parse message format: "publish:{username}:{serviceFqn}:{timestamp}"
// Note: serviceFqn can contain colons (e.g., "chat:2.0.0@user"), so we need careful parsing
const parts = message.split(':');
if (parts.length !== 4 || parts[0] !== 'publish' || parts[1] !== username || parts[2] !== serviceFqn) {
if (parts.length < 4 || parts[0] !== 'publish' || parts[1] !== username) {
return { valid: false, error: 'Invalid message format (expected: publish:{username}:{serviceFqn}:{timestamp})' };
}
const timestamp = parseInt(parts[3], 10);
// The timestamp is the last part
const timestamp = parseInt(parts[parts.length - 1], 10);
if (isNaN(timestamp)) {
return { valid: false, error: 'Invalid timestamp in message' };
}
// The serviceFqn is everything between username and timestamp
const extractedServiceFqn = parts.slice(2, parts.length - 1).join(':');
if (extractedServiceFqn !== serviceFqn) {
return { valid: false, error: `Service FQN mismatch (expected: ${serviceFqn}, got: ${extractedServiceFqn})` };
}
// Validate timestamp
const timestampCheck = validateTimestamp(timestamp);
if (!timestampCheck.valid) {

View File

@@ -20,7 +20,6 @@ async function main() {
offerMinTtl: `${config.offerMinTtl}ms`,
cleanupInterval: `${config.cleanupInterval}ms`,
maxOffersPerRequest: config.maxOffersPerRequest,
maxTopicsPerOffer: config.maxTopicsPerOffer,
corsOrigins: config.corsOrigins,
version: config.version,
});

View File

@@ -1,51 +0,0 @@
import { Context, Next } from 'hono';
import { validateCredentials } from '../crypto.ts';
/**
* Authentication middleware for Rondevu
* Validates Bearer token in format: {peerId}:{encryptedSecret}
*/
export function createAuthMiddleware(authSecret: string) {
return async (c: Context, next: Next) => {
const authHeader = c.req.header('Authorization');
if (!authHeader) {
return c.json({ error: 'Missing Authorization header' }, 401);
}
// Expect format: Bearer {peerId}:{secret}
const parts = authHeader.split(' ');
if (parts.length !== 2 || parts[0] !== 'Bearer') {
return c.json({ error: 'Invalid Authorization header format. Expected: Bearer {peerId}:{secret}' }, 401);
}
const credentials = parts[1].split(':');
if (credentials.length !== 2) {
return c.json({ error: 'Invalid credentials format. Expected: {peerId}:{secret}' }, 401);
}
const [peerId, encryptedSecret] = credentials;
// Validate credentials (async operation)
const isValid = await validateCredentials(peerId, encryptedSecret, authSecret);
if (!isValid) {
return c.json({ error: 'Invalid credentials' }, 401);
}
// Attach peer ID to context for use in handlers
c.set('peerId', peerId);
await next();
};
}
/**
* Helper to get authenticated peer ID from context
*/
export function getAuthenticatedPeerId(c: Context): string {
const peerId = c.get('peerId');
if (!peerId) {
throw new Error('No authenticated peer ID in context');
}
return peerId;
}

725
src/rpc.ts Normal file
View File

@@ -0,0 +1,725 @@
import { Context } from 'hono';
import { Storage } from './storage/types.ts';
import { Config } from './config.ts';
import {
validateUsernameClaim,
validateServicePublish,
validateServiceFqn,
parseServiceFqn,
isVersionCompatible,
verifyEd25519Signature,
validateAuthMessage,
validateUsername,
} from './crypto.ts';
// Constants
const MAX_PAGE_SIZE = 100;
/**
* RPC request format
*/
export interface RpcRequest {
method: string;
message: string;
signature: string;
publicKey?: string; // Optional: for auto-claiming usernames
params?: any;
}
/**
* RPC response format
*/
export interface RpcResponse {
success: boolean;
result?: any;
error?: string;
}
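// Example wire format (illustrative values):
//   success: { "success": true,  "result": { "username": "alice", "available": true } }
//   failure: { "success": false, "error": "Unknown method: getFoo" }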
/**
* RPC method handler
*/
type RpcHandler = (
params: any,
message: string,
signature: string,
publicKey: string | undefined,
storage: Storage,
config: Config
) => Promise<any>;
/**
* Verify authentication for a method call
* Automatically claims username if it doesn't exist
*/
async function verifyAuth(
username: string,
message: string,
signature: string,
publicKey: string | undefined,
storage: Storage
): Promise<{ valid: boolean; error?: string }> {
// Get username record to fetch public key
let usernameRecord = await storage.getUsername(username);
// Auto-claim username if it doesn't exist
if (!usernameRecord) {
if (!publicKey) {
return {
valid: false,
error: `Username "${username}" is not claimed and no public key provided for auto-claim.`,
};
}
// Validate username format before claiming
const usernameValidation = validateUsername(username);
if (!usernameValidation.valid) {
return usernameValidation;
}
// Verify signature against the current message (not a claim message)
const signatureValid = await verifyEd25519Signature(publicKey, signature, message);
if (!signatureValid) {
return { valid: false, error: 'Invalid signature for auto-claim' };
}
// Auto-claim the username
const expiresAt = Date.now() + 365 * 24 * 60 * 60 * 1000; // 365 days
await storage.claimUsername({
username,
publicKey,
expiresAt,
});
usernameRecord = await storage.getUsername(username);
if (!usernameRecord) {
return { valid: false, error: 'Failed to claim username' };
}
}
// Verify Ed25519 signature
const isValid = await verifyEd25519Signature(
usernameRecord.publicKey,
signature,
message
);
if (!isValid) {
return { valid: false, error: 'Invalid signature' };
}
// Validate message format and timestamp
const validation = validateAuthMessage(username, message);
if (!validation.valid) {
return { valid: false, error: validation.error };
}
return { valid: true };
}
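// Client-side sketch of what verifyAuth() expects: the caller signs the raw message
// string with an Ed25519 key and, the first time a username is used, also sends the
// public key so the name can be auto-claimed. Helper names assume @noble/ed25519's
// async API; the toHex() helper and encoding are illustrative only - the server's
// verifyEd25519Signature() defines the actual wire encoding.
//
//   import * as ed25519 from '@noble/ed25519';
//
//   const privateKey = crypto.getRandomValues(new Uint8Array(32));
//   const publicKey = await ed25519.getPublicKeyAsync(privateKey);
//   const message = `publishService:alice:chat:1.0.0@alice:${Date.now()}`;
//   const signature = await ed25519.signAsync(new TextEncoder().encode(message), privateKey);
//
//   const request = {
//     method: 'publishService',
//     message,
//     signature: toHex(signature),
//     publicKey: toHex(publicKey), // only needed for auto-claim
//     params: { serviceFqn: 'chat:1.0.0@alice', offers: [{ sdp: offerSdp }] },
//   };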
/**
* Extract username from message
*/
function extractUsername(message: string): string | null {
// Message format: method:username:...
const parts = message.split(':');
if (parts.length < 2) return null;
return parts[1];
}
/**
* RPC Method Handlers
*/
const handlers: Record<string, RpcHandler> = {
/**
* Check if username is available
*/
async getUser(params, message, signature, publicKey, storage, config) {
const { username } = params;
const claimed = await storage.getUsername(username);
if (!claimed) {
return {
username,
available: true,
};
}
return {
username: claimed.username,
available: false,
claimedAt: claimed.claimedAt,
expiresAt: claimed.expiresAt,
publicKey: claimed.publicKey,
};
},
/**
* Get service by FQN - supports 3 modes, checked in this order:
* 1. Paginated discovery: limit (and optional offset) provided
* 2. Direct lookup: FQN includes @username
* 3. Random discovery: FQN without @username, no limit
*/
async getService(params, message, signature, publicKey, storage, config) {
const { serviceFqn, limit, offset } = params;
const username = extractUsername(message);
// Verify authentication
if (username) {
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
}
// Parse and validate FQN
const fqnValidation = validateServiceFqn(serviceFqn);
if (!fqnValidation.valid) {
throw new Error(fqnValidation.error || 'Invalid service FQN');
}
const parsed = parseServiceFqn(serviceFqn);
if (!parsed) {
throw new Error('Failed to parse service FQN');
}
// Helper: Filter services by version compatibility
const filterCompatibleServices = (services) => {
return services.filter((s) => {
const serviceVersion = parseServiceFqn(s.serviceFqn);
return (
serviceVersion &&
isVersionCompatible(parsed.version, serviceVersion.version)
);
});
};
// Helper: Find available offer for service
const findAvailableOffer = async (service) => {
const offers = await storage.getOffersForService(service.id);
return offers.find((o) => !o.answererUsername);
};
// Helper: Build service response object
const buildServiceResponse = (service, offer) => ({
serviceId: service.id,
username: service.username,
serviceFqn: service.serviceFqn,
offerId: offer.id,
sdp: offer.sdp,
createdAt: service.createdAt,
expiresAt: service.expiresAt,
});
// Mode 1: Paginated discovery
if (limit !== undefined) {
const pageLimit = Math.min(Math.max(1, limit), MAX_PAGE_SIZE);
const pageOffset = Math.max(0, offset || 0);
const allServices = await storage.getServicesByName(parsed.serviceName, parsed.version);
const compatibleServices = filterCompatibleServices(allServices);
// Get unique services per username with available offers
const usernameSet = new Set<string>();
const uniqueServices: any[] = [];
for (const service of compatibleServices) {
if (!usernameSet.has(service.username)) {
usernameSet.add(service.username);
const availableOffer = await findAvailableOffer(service);
if (availableOffer) {
uniqueServices.push(buildServiceResponse(service, availableOffer));
}
}
}
// Paginate results
const paginatedServices = uniqueServices.slice(pageOffset, pageOffset + pageLimit);
return {
services: paginatedServices,
count: paginatedServices.length,
limit: pageLimit,
offset: pageOffset,
};
}
// Mode 2: Direct lookup with username
if (parsed.username) {
const service = await storage.getServiceByFqn(serviceFqn);
if (!service) {
throw new Error('Service not found');
}
const availableOffer = await findAvailableOffer(service);
if (!availableOffer) {
throw new Error('Service has no available offers');
}
return buildServiceResponse(service, availableOffer);
}
// Mode 3: Random discovery without username
const allServices = await storage.getServicesByName(parsed.serviceName, parsed.version);
const compatibleServices = filterCompatibleServices(allServices);
if (compatibleServices.length === 0) {
throw new Error('No services found');
}
const randomService = compatibleServices[Math.floor(Math.random() * compatibleServices.length)];
const availableOffer = await findAvailableOffer(randomService);
if (!availableOffer) {
throw new Error('Service has no available offers');
}
return buildServiceResponse(randomService, availableOffer);
},
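// Illustrative params for the three modes above:
//   { serviceFqn: 'chat:1.0.0', limit: 20, offset: 0 }  -> paginated discovery
//   { serviceFqn: 'chat:1.0.0@alice' }                  -> direct lookup of alice's service
//   { serviceFqn: 'chat:1.0.0' }                        -> one random compatible service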
/**
* Publish a service
*/
async publishService(params, message, signature, publicKey, storage, config) {
const { serviceFqn, offers, ttl } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required for service publishing');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
// Validate service FQN
const fqnValidation = validateServiceFqn(serviceFqn);
if (!fqnValidation.valid) {
throw new Error(fqnValidation.error || 'Invalid service FQN');
}
const parsed = parseServiceFqn(serviceFqn);
if (!parsed || !parsed.username) {
throw new Error('Service FQN must include username');
}
if (parsed.username !== username) {
throw new Error('Service FQN username must match authenticated username');
}
// Validate offers
if (!offers || !Array.isArray(offers) || offers.length === 0) {
throw new Error('Must provide at least one offer');
}
if (offers.length > config.maxOffersPerRequest) {
throw new Error(
`Too many offers (max ${config.maxOffersPerRequest})`
);
}
// Validate each offer has valid SDP
offers.forEach((offer, index) => {
if (!offer || typeof offer !== 'object') {
throw new Error(`Invalid offer at index ${index}: must be an object`);
}
if (!offer.sdp || typeof offer.sdp !== 'string') {
throw new Error(`Invalid offer at index ${index}: missing or invalid SDP`);
}
if (!offer.sdp.trim()) {
throw new Error(`Invalid offer at index ${index}: SDP cannot be empty`);
}
});
// Create service with offers
const now = Date.now();
const offerTtl =
ttl !== undefined
? Math.min(
Math.max(ttl, config.offerMinTtl),
config.offerMaxTtl
)
: config.offerDefaultTtl;
const expiresAt = now + offerTtl;
// Prepare offer requests with TTL
const offerRequests = offers.map(offer => ({
username,
serviceFqn,
sdp: offer.sdp,
expiresAt,
}));
const result = await storage.createService({
serviceFqn,
expiresAt,
offers: offerRequests,
});
return {
serviceId: result.service.id,
username: result.service.username,
serviceFqn: result.service.serviceFqn,
offers: result.offers.map(offer => ({
offerId: offer.id,
sdp: offer.sdp,
createdAt: offer.createdAt,
expiresAt: offer.expiresAt,
})),
createdAt: result.service.createdAt,
expiresAt: result.service.expiresAt,
};
},
/**
* Delete a service
*/
async deleteService(params, message, signature, publicKey, storage, config) {
const { serviceFqn } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
const parsed = parseServiceFqn(serviceFqn);
if (!parsed || !parsed.username) {
throw new Error('Service FQN must include username');
}
const service = await storage.getServiceByFqn(serviceFqn);
if (!service) {
throw new Error('Service not found');
}
const deleted = await storage.deleteService(service.id, username);
if (!deleted) {
throw new Error('Service not found or not owned by this username');
}
return { success: true };
},
/**
* Answer an offer
*/
async answerOffer(params, message, signature, publicKey, storage, config) {
const { serviceFqn, offerId, sdp } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
if (!sdp || typeof sdp !== 'string' || sdp.length === 0) {
throw new Error('Invalid SDP');
}
if (sdp.length > 64 * 1024) {
throw new Error('SDP too large (max 64KB)');
}
const offer = await storage.getOfferById(offerId);
if (!offer) {
throw new Error('Offer not found');
}
if (offer.answererUsername) {
throw new Error('Offer already answered');
}
await storage.answerOffer(offerId, username, sdp);
return { success: true, offerId };
},
/**
* Get answer for an offer
*/
async getOfferAnswer(params, message, signature, publicKey, storage, config) {
const { serviceFqn, offerId } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
const offer = await storage.getOfferById(offerId);
if (!offer) {
throw new Error('Offer not found');
}
if (offer.username !== username) {
throw new Error('Not authorized to access this offer');
}
if (!offer.answererUsername || !offer.answerSdp) {
throw new Error('Offer not yet answered');
}
return {
sdp: offer.answerSdp,
offerId: offer.id,
answererId: offer.answererUsername,
answeredAt: offer.answeredAt,
};
},
/**
* Combined polling for answers and ICE candidates
*/
async poll(params, message, signature, publicKey, storage, config) {
const { since } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
const sinceTimestamp = since || 0;
// Get all answered offers
const answeredOffers = await storage.getAnsweredOffers(username);
const filteredAnswers = answeredOffers.filter(
(offer) => offer.answeredAt && offer.answeredAt > sinceTimestamp
);
// Get all user's offers
const allOffers = await storage.getOffersByUsername(username);
// For each offer, get ICE candidates from both sides
const iceCandidatesByOffer: Record<string, any[]> = {};
for (const offer of allOffers) {
const offererCandidates = await storage.getIceCandidates(
offer.id,
'offerer',
sinceTimestamp
);
const answererCandidates = await storage.getIceCandidates(
offer.id,
'answerer',
sinceTimestamp
);
const allCandidates = [
...offererCandidates.map((c: any) => ({
...c,
role: 'offerer' as const,
})),
...answererCandidates.map((c: any) => ({
...c,
role: 'answerer' as const,
})),
];
if (allCandidates.length > 0) {
const isOfferer = offer.username === username;
const filtered = allCandidates.filter((c) =>
isOfferer ? c.role === 'answerer' : c.role === 'offerer'
);
if (filtered.length > 0) {
iceCandidatesByOffer[offer.id] = filtered;
}
}
}
return {
answers: filteredAnswers.map((offer) => ({
offerId: offer.id,
serviceId: offer.serviceId,
answererId: offer.answererUsername,
sdp: offer.answerSdp,
answeredAt: offer.answeredAt,
})),
iceCandidates: iceCandidatesByOffer,
};
},
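// Illustrative poll() result shape (offer id is hypothetical): answers to this
// user's offers plus the other side's ICE candidates, keyed by offer id:
//   {
//     answers: [{ offerId, serviceId, answererId, sdp, answeredAt }],
//     iceCandidates: { 'a1b2c3': [{ candidate, role, createdAt, ... }] },
//   }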
/**
* Add ICE candidates
*/
async addIceCandidates(params, message, signature, publicKey, storage, config) {
const { serviceFqn, offerId, candidates } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
if (!Array.isArray(candidates) || candidates.length === 0) {
throw new Error('Missing or invalid required parameter: candidates');
}
// Validate each candidate is an object (don't enforce structure per CLAUDE.md)
candidates.forEach((candidate, index) => {
if (!candidate || typeof candidate !== 'object') {
throw new Error(`Invalid candidate at index ${index}: must be an object`);
}
});
const offer = await storage.getOfferById(offerId);
if (!offer) {
throw new Error('Offer not found');
}
const role = offer.username === username ? 'offerer' : 'answerer';
const count = await storage.addIceCandidates(
offerId,
username,
role,
candidates
);
return { count, offerId };
},
/**
* Get ICE candidates
*/
async getIceCandidates(params, message, signature, publicKey, storage, config) {
const { serviceFqn, offerId, since } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
const sinceTimestamp = since || 0;
const offer = await storage.getOfferById(offerId);
if (!offer) {
throw new Error('Offer not found');
}
const isOfferer = offer.username === username;
const role = isOfferer ? 'answerer' : 'offerer';
const candidates = await storage.getIceCandidates(
offerId,
role,
sinceTimestamp
);
return {
candidates: candidates.map((c: any) => ({
candidate: c.candidate,
createdAt: c.createdAt,
})),
offerId,
};
},
};
/**
* Handle RPC batch request
*/
export async function handleRpc(
requests: RpcRequest[],
storage: Storage,
config: Config
): Promise<RpcResponse[]> {
const responses: RpcResponse[] = [];
for (const request of requests) {
try {
const { method, message, signature, publicKey, params } = request;
// Validate request
if (!method || typeof method !== 'string') {
responses.push({
success: false,
error: 'Missing or invalid method',
});
continue;
}
if (!message || typeof message !== 'string') {
responses.push({
success: false,
error: 'Missing or invalid message',
});
continue;
}
if (!signature || typeof signature !== 'string') {
responses.push({
success: false,
error: 'Missing or invalid signature',
});
continue;
}
// Get handler
const handler = handlers[method];
if (!handler) {
responses.push({
success: false,
error: `Unknown method: ${method}`,
});
continue;
}
// Execute handler
const result = await handler(
params || {},
message,
signature,
publicKey,
storage,
config
);
responses.push({
success: true,
result,
});
} catch (err) {
responses.push({
success: false,
error: (err as Error).message || 'Internal server error',
});
}
}
return responses;
}

View File

@@ -8,9 +8,9 @@ import {
ClaimUsernameRequest,
Service,
CreateServiceRequest,
ServiceInfo,
} from './types.ts';
import { generateOfferHash } from './hash-id.ts';
import { parseServiceFqn } from '../crypto.ts';
const YEAR_IN_MS = 365 * 24 * 60 * 60 * 1000; // 365 days
@@ -34,30 +34,31 @@ export class D1Storage implements Storage {
*/
async initializeDatabase(): Promise<void> {
await this.db.exec(`
-- Offers table (no topics)
-- WebRTC signaling offers
CREATE TABLE IF NOT EXISTS offers (
id TEXT PRIMARY KEY,
peer_id TEXT NOT NULL,
username TEXT NOT NULL,
service_id TEXT,
sdp TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
secret TEXT,
answerer_peer_id TEXT,
answerer_username TEXT,
answer_sdp TEXT,
answered_at INTEGER
);
CREATE INDEX IF NOT EXISTS idx_offers_peer ON offers(peer_id);
CREATE INDEX IF NOT EXISTS idx_offers_username ON offers(username);
CREATE INDEX IF NOT EXISTS idx_offers_service ON offers(service_id);
CREATE INDEX IF NOT EXISTS idx_offers_expires ON offers(expires_at);
CREATE INDEX IF NOT EXISTS idx_offers_last_seen ON offers(last_seen);
CREATE INDEX IF NOT EXISTS idx_offers_answerer ON offers(answerer_peer_id);
CREATE INDEX IF NOT EXISTS idx_offers_answerer ON offers(answerer_username);
-- ICE candidates table
CREATE TABLE IF NOT EXISTS ice_candidates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
offer_id TEXT NOT NULL,
peer_id TEXT NOT NULL,
username TEXT NOT NULL,
role TEXT NOT NULL CHECK(role IN ('offerer', 'answerer')),
candidate TEXT NOT NULL,
created_at INTEGER NOT NULL,
@@ -65,7 +66,7 @@ export class D1Storage implements Storage {
);
CREATE INDEX IF NOT EXISTS idx_ice_offer ON ice_candidates(offer_id);
CREATE INDEX IF NOT EXISTS idx_ice_peer ON ice_candidates(peer_id);
CREATE INDEX IF NOT EXISTS idx_ice_username ON ice_candidates(username);
CREATE INDEX IF NOT EXISTS idx_ice_created ON ice_candidates(created_at);
-- Usernames table
@@ -82,39 +83,23 @@ export class D1Storage implements Storage {
CREATE INDEX IF NOT EXISTS idx_usernames_expires ON usernames(expires_at);
CREATE INDEX IF NOT EXISTS idx_usernames_public_key ON usernames(public_key);
-- Services table
-- Services table (new schema with extracted fields for discovery)
CREATE TABLE IF NOT EXISTS services (
id TEXT PRIMARY KEY,
username TEXT NOT NULL,
service_fqn TEXT NOT NULL,
offer_id TEXT NOT NULL,
service_name TEXT NOT NULL,
version TEXT NOT NULL,
username TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
is_public INTEGER NOT NULL DEFAULT 0,
metadata TEXT,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
FOREIGN KEY (offer_id) REFERENCES offers(id) ON DELETE CASCADE,
UNIQUE(username, service_fqn)
UNIQUE(service_fqn)
);
CREATE INDEX IF NOT EXISTS idx_services_username ON services(username);
CREATE INDEX IF NOT EXISTS idx_services_fqn ON services(service_fqn);
CREATE INDEX IF NOT EXISTS idx_services_discovery ON services(service_name, version);
CREATE INDEX IF NOT EXISTS idx_services_username ON services(username);
CREATE INDEX IF NOT EXISTS idx_services_expires ON services(expires_at);
CREATE INDEX IF NOT EXISTS idx_services_offer ON services(offer_id);
-- Service index table (privacy layer)
CREATE TABLE IF NOT EXISTS service_index (
uuid TEXT PRIMARY KEY,
service_id TEXT NOT NULL,
username TEXT NOT NULL,
service_fqn TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_service_index_username ON service_index(username);
CREATE INDEX IF NOT EXISTS idx_service_index_expires ON service_index(expires_at);
`);
}
@@ -125,34 +110,35 @@ export class D1Storage implements Storage {
// D1 doesn't support true transactions yet, so we do this sequentially
for (const offer of offers) {
const id = offer.id || await generateOfferHash(offer.sdp, []);
const id = offer.id || await generateOfferHash(offer.sdp);
const now = Date.now();
await this.db.prepare(`
INSERT INTO offers (id, peer_id, sdp, created_at, expires_at, last_seen, secret)
INSERT INTO offers (id, username, service_id, sdp, created_at, expires_at, last_seen)
VALUES (?, ?, ?, ?, ?, ?, ?)
`).bind(id, offer.peerId, offer.sdp, now, offer.expiresAt, now, offer.secret || null).run();
`).bind(id, offer.username, offer.serviceId || null, offer.sdp, now, offer.expiresAt, now).run();
created.push({
id,
peerId: offer.peerId,
username: offer.username,
serviceId: offer.serviceId,
serviceFqn: offer.serviceFqn,
sdp: offer.sdp,
createdAt: now,
expiresAt: offer.expiresAt,
lastSeen: now,
secret: offer.secret,
});
}
return created;
}
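As a usage sketch (not part of the diff): after this change the request objects carry username and an optional serviceId instead of peerId. The `storage` variable and the SDP value below are placeholders.

// Minimal sketch, assuming `storage` is a D1Storage instance; field names
// follow the new CreateOfferRequest shape from this changeset.
const [offer] = await storage.createOffers([
  {
    username: 'alice',                  // owner identity, inferred from auth upstream
    sdp: 'v=0\r\n...',                  // placeholder SDP
    expiresAt: Date.now() + 60_000,     // absolute timestamp, not a TTL
    // serviceId omitted: standalone offer not attached to a service
  },
]);
// offer.id is the SHA-256 content hash when no explicit id was supplied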
async getOffersByPeerId(peerId: string): Promise<Offer[]> {
async getOffersByUsername(username: string): Promise<Offer[]> {
const result = await this.db.prepare(`
SELECT * FROM offers
WHERE peer_id = ? AND expires_at > ?
WHERE username = ? AND expires_at > ?
ORDER BY last_seen DESC
`).bind(peerId, Date.now()).all();
`).bind(username, Date.now()).all();
if (!result.results) {
return [];
@@ -174,11 +160,11 @@ export class D1Storage implements Storage {
return this.rowToOffer(result as any);
}
async deleteOffer(offerId: string, ownerPeerId: string): Promise<boolean> {
async deleteOffer(offerId: string, ownerUsername: string): Promise<boolean> {
const result = await this.db.prepare(`
DELETE FROM offers
WHERE id = ? AND peer_id = ?
`).bind(offerId, ownerPeerId).run();
WHERE id = ? AND username = ?
`).bind(offerId, ownerUsername).run();
return (result.meta.changes || 0) > 0;
}
@@ -193,9 +179,8 @@ export class D1Storage implements Storage {
async answerOffer(
offerId: string,
answererPeerId: string,
answerSdp: string,
secret?: string
answererUsername: string,
answerSdp: string
): Promise<{ success: boolean; error?: string }> {
// Check if offer exists and is not expired
const offer = await this.getOfferById(offerId);
@@ -207,16 +192,8 @@ export class D1Storage implements Storage {
};
}
// Verify secret if offer is protected
if (offer.secret && offer.secret !== secret) {
return {
success: false,
error: 'Invalid or missing secret'
};
}
// Check if offer already has an answerer
if (offer.answererPeerId) {
if (offer.answererUsername) {
return {
success: false,
error: 'Offer already answered'
@@ -226,9 +203,9 @@ export class D1Storage implements Storage {
// Update offer with answer
const result = await this.db.prepare(`
UPDATE offers
SET answerer_peer_id = ?, answer_sdp = ?, answered_at = ?
WHERE id = ? AND answerer_peer_id IS NULL
`).bind(answererPeerId, answerSdp, Date.now(), offerId).run();
SET answerer_username = ?, answer_sdp = ?, answered_at = ?
WHERE id = ? AND answerer_username IS NULL
`).bind(answererUsername, answerSdp, Date.now(), offerId).run();
if ((result.meta.changes || 0) === 0) {
return {
@@ -240,12 +217,12 @@ export class D1Storage implements Storage {
return { success: true };
}
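The conditional UPDATE above doubles as the concurrency guard: because the write is gated on answerer_username IS NULL, only one answer can ever land. A hedged sketch of the observable behavior (`offerId` and the SDP variables are placeholders):

// Two peers race to answer the same offer; the guarded UPDATE lets only the
// first write through, the second sees zero changed rows.
const first = await storage.answerOffer(offerId, 'bob', bobAnswerSdp);
const second = await storage.answerOffer(offerId, 'carol', carolAnswerSdp);
// first.success  === true
// second.success === false   (rejected as already answered / claimed concurrently)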
async getAnsweredOffers(offererPeerId: string): Promise<Offer[]> {
async getAnsweredOffers(offererUsername: string): Promise<Offer[]> {
const result = await this.db.prepare(`
SELECT * FROM offers
WHERE peer_id = ? AND answerer_peer_id IS NOT NULL AND expires_at > ?
WHERE username = ? AND answerer_username IS NOT NULL AND expires_at > ?
ORDER BY answered_at DESC
`).bind(offererPeerId, Date.now()).all();
`).bind(offererUsername, Date.now()).all();
if (!result.results) {
return [];
@@ -258,7 +235,7 @@ export class D1Storage implements Storage {
async addIceCandidates(
offerId: string,
peerId: string,
username: string,
role: 'offerer' | 'answerer',
candidates: any[]
): Promise<number> {
@@ -266,11 +243,11 @@ export class D1Storage implements Storage {
for (let i = 0; i < candidates.length; i++) {
const timestamp = Date.now() + i;
await this.db.prepare(`
INSERT INTO ice_candidates (offer_id, peer_id, role, candidate, created_at)
INSERT INTO ice_candidates (offer_id, username, role, candidate, created_at)
VALUES (?, ?, ?, ?, ?)
`).bind(
offerId,
peerId,
username,
role,
JSON.stringify(candidates[i]),
timestamp
@@ -308,7 +285,7 @@ export class D1Storage implements Storage {
return result.results.map((row: any) => ({
id: row.id,
offerId: row.offer_id,
peerId: row.peer_id,
username: row.username,
role: row.role,
candidate: JSON.parse(row.candidate),
createdAt: row.created_at,
@@ -321,6 +298,7 @@ export class D1Storage implements Storage {
const now = Date.now();
const expiresAt = now + YEAR_IN_MS;
try {
// Try to insert or update
const result = await this.db.prepare(`
INSERT INTO usernames (username, public_key, claimed_at, expires_at, last_used, metadata)
@@ -351,6 +329,13 @@ export class D1Storage implements Storage {
expiresAt,
lastUsed: now,
};
} catch (err: any) {
// Handle UNIQUE constraint on public_key
if (err.message?.includes('UNIQUE constraint failed: usernames.public_key')) {
throw new Error('This public key has already claimed a different username');
}
throw err;
}
}
async getUsername(username: string): Promise<Username | null> {
@@ -375,18 +360,6 @@ export class D1Storage implements Storage {
};
}
async touchUsername(username: string): Promise<boolean> {
const now = Date.now();
const expiresAt = now + YEAR_IN_MS;
const result = await this.db.prepare(`
UPDATE usernames
SET last_used = ?, expires_at = ?
WHERE username = ? AND expires_at > ?
`).bind(now, expiresAt, username, now).run();
return (result.meta.changes || 0) > 0;
}
async deleteExpiredUsernames(now: number): Promise<number> {
const result = await this.db.prepare(`
@@ -400,58 +373,99 @@ export class D1Storage implements Storage {
async createService(request: CreateServiceRequest): Promise<{
service: Service;
indexUuid: string;
offers: Offer[];
}> {
const serviceId = crypto.randomUUID();
const indexUuid = crypto.randomUUID();
const now = Date.now();
// Insert service
await this.db.prepare(`
INSERT INTO services (id, username, service_fqn, offer_id, created_at, expires_at, is_public, metadata)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
`).bind(
serviceId,
request.username,
request.serviceFqn,
request.offerId,
now,
request.expiresAt,
request.isPublic ? 1 : 0,
request.metadata || null
).run();
// Parse FQN to extract components
const parsed = parseServiceFqn(request.serviceFqn);
if (!parsed) {
throw new Error(`Invalid service FQN: ${request.serviceFqn}`);
}
if (!parsed.username) {
throw new Error(`Service FQN must include username: ${request.serviceFqn}`);
}
// Insert service index
const { serviceName, version, username } = parsed;
// Delete existing service with same (service_name, version, username) and its related offers (upsert behavior)
// First get the existing service
const existingService = await this.db.prepare(`
SELECT id FROM services
WHERE service_name = ? AND version = ? AND username = ?
`).bind(serviceName, version, username).first();
if (existingService) {
// Delete related offers first (no FK cascade from offers to services)
await this.db.prepare(`
INSERT INTO service_index (uuid, service_id, username, service_fqn, created_at, expires_at)
VALUES (?, ?, ?, ?, ?, ?)
DELETE FROM offers WHERE service_id = ?
`).bind(existingService.id).run();
// Delete the service
await this.db.prepare(`
DELETE FROM services WHERE id = ?
`).bind(existingService.id).run();
}
// Insert new service with extracted fields
await this.db.prepare(`
INSERT INTO services (id, service_fqn, service_name, version, username, created_at, expires_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
`).bind(
indexUuid,
serviceId,
request.username,
request.serviceFqn,
serviceName,
version,
username,
now,
request.expiresAt
).run();
// Touch username to extend expiry
await this.touchUsername(request.username);
// Create offers with serviceId
const offerRequests = request.offers.map(offer => ({
...offer,
serviceId,
}));
const offers = await this.createOffers(offerRequests);
// Touch username to extend expiry (inline logic)
const expiresAt = now + YEAR_IN_MS;
await this.db.prepare(`
UPDATE usernames
SET last_used = ?, expires_at = ?
WHERE username = ? AND expires_at > ?
`).bind(now, expiresAt, username, now).run();
return {
service: {
id: serviceId,
username: request.username,
serviceFqn: request.serviceFqn,
offerId: request.offerId,
serviceName,
version,
username,
createdAt: now,
expiresAt: request.expiresAt,
isPublic: request.isPublic || false,
metadata: request.metadata,
},
indexUuid,
offers,
};
}
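parseServiceFqn comes from ../crypto.ts and is not shown in this diff; the return shape sketched below is inferred from the destructuring in createService() and should be read as an assumption. The request literal follows the new single-FQN, multi-offer shape; the SDP variables are placeholders.

// Assumed behavior, inferred from `const { serviceName, version, username } = parsed`:
const parsed = parseServiceFqn('chat:1.0.0@alice');
// parsed?.serviceName === 'chat', parsed?.version === '1.0.0', parsed?.username === 'alice'

const { service, offers } = await storage.createService({
  serviceFqn: 'chat:1.0.0@alice',
  expiresAt: Date.now() + 24 * 60 * 60 * 1000,
  offers: [
    { username: 'alice', sdp: sdpA, expiresAt: Date.now() + 60_000 },  // placeholder SDPs
    { username: 'alice', sdp: sdpB, expiresAt: Date.now() + 60_000 },
  ],
});
// Republishing the same name/version/username replaces the old service row and its offers.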
async getOffersForService(serviceId: string): Promise<Offer[]> {
const result = await this.db.prepare(`
SELECT * FROM offers
WHERE service_id = ? AND expires_at > ?
ORDER BY created_at ASC
`).bind(serviceId, Date.now()).all();
if (!result.results) {
return [];
}
return result.results.map(row => this.rowToOffer(row as any));
}
async getServiceById(serviceId: string): Promise<Service | null> {
const result = await this.db.prepare(`
SELECT * FROM services
@@ -465,12 +479,11 @@ export class D1Storage implements Storage {
return this.rowToService(result as any);
}
async getServiceByUuid(uuid: string): Promise<Service | null> {
async getServiceByFqn(serviceFqn: string): Promise<Service | null> {
const result = await this.db.prepare(`
SELECT s.* FROM services s
INNER JOIN service_index si ON s.id = si.service_id
WHERE si.uuid = ? AND s.expires_at > ?
`).bind(uuid, Date.now()).first();
SELECT * FROM services
WHERE service_fqn = ? AND expires_at > ?
`).bind(serviceFqn, Date.now()).first();
if (!result) {
return null;
@@ -479,35 +492,56 @@ export class D1Storage implements Storage {
return this.rowToService(result as any);
}
async listServicesForUsername(username: string): Promise<ServiceInfo[]> {
async discoverServices(
serviceName: string,
version: string,
limit: number,
offset: number
): Promise<Service[]> {
// Query for unique services with available offers
// We join with offers and filter for available ones (answerer_username IS NULL)
const result = await this.db.prepare(`
SELECT si.uuid, s.is_public, s.service_fqn, s.metadata
FROM service_index si
INNER JOIN services s ON si.service_id = s.id
WHERE si.username = ? AND si.expires_at > ?
SELECT DISTINCT s.* FROM services s
INNER JOIN offers o ON o.service_id = s.id
WHERE s.service_name = ?
AND s.version = ?
AND s.expires_at > ?
AND o.answerer_username IS NULL
AND o.expires_at > ?
ORDER BY s.created_at DESC
`).bind(username, Date.now()).all();
LIMIT ? OFFSET ?
`).bind(serviceName, version, Date.now(), Date.now(), limit, offset).all();
if (!result.results) {
return [];
}
return result.results.map((row: any) => ({
uuid: row.uuid,
isPublic: row.is_public === 1,
serviceFqn: row.is_public === 1 ? row.service_fqn : undefined,
metadata: row.is_public === 1 ? row.metadata || undefined : undefined,
}));
return result.results.map(row => this.rowToService(row as any));
}
async queryService(username: string, serviceFqn: string): Promise<string | null> {
async getRandomService(serviceName: string, version: string): Promise<Service | null> {
// Get a random service with an available offer
const result = await this.db.prepare(`
SELECT si.uuid FROM service_index si
INNER JOIN services s ON si.service_id = s.id
WHERE si.username = ? AND si.service_fqn = ? AND si.expires_at > ?
`).bind(username, serviceFqn, Date.now()).first();
SELECT s.* FROM services s
INNER JOIN offers o ON o.service_id = s.id
WHERE s.service_name = ?
AND s.version = ?
AND s.expires_at > ?
AND o.answerer_username IS NULL
AND o.expires_at > ?
ORDER BY RANDOM()
LIMIT 1
`).bind(serviceName, version, Date.now(), Date.now()).first();
return result ? (result as any).uuid : null;
if (!result) {
return null;
}
return this.rowToService(result as any);
}
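Taken together, the two queries above support a paginated browse and a one-shot random match. A short usage sketch (service name and version are placeholders):

// Paginated discovery: up to 20 services that still have unanswered offers.
const page = await storage.discoverServices('chat', '1.0.0', 20, 0);

// Random match: pick one such service, then fetch its open offers to answer.
const pick = await storage.getRandomService('chat', '1.0.0');
if (pick) {
  const openOffers = await storage.getOffersForService(pick.id);
  // choose an offer whose answererUsername is still unset and answer it
}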
async deleteService(serviceId: string, username: string): Promise<boolean> {
@@ -540,13 +574,14 @@ export class D1Storage implements Storage {
private rowToOffer(row: any): Offer {
return {
id: row.id,
peerId: row.peer_id,
username: row.username,
serviceId: row.service_id || undefined,
serviceFqn: row.service_fqn || undefined,
sdp: row.sdp,
createdAt: row.created_at,
expiresAt: row.expires_at,
lastSeen: row.last_seen,
secret: row.secret || undefined,
answererPeerId: row.answerer_peer_id || undefined,
answererUsername: row.answerer_username || undefined,
answerSdp: row.answer_sdp || undefined,
answeredAt: row.answered_at || undefined,
};
@@ -558,13 +593,12 @@ export class D1Storage implements Storage {
private rowToService(row: any): Service {
return {
id: row.id,
username: row.username,
serviceFqn: row.service_fqn,
offerId: row.offer_id,
serviceName: row.service_name,
version: row.version,
username: row.username,
createdAt: row.created_at,
expiresAt: row.expires_at,
isPublic: row.is_public === 1,
metadata: row.metadata || undefined,
};
}
}

View File

@@ -1,22 +1,17 @@
/**
* Generates a content-based offer ID using SHA-256 hash
* Creates deterministic IDs based on offer content (sdp, topics)
* Creates deterministic IDs based on offer SDP content
* PeerID is not included as it's inferred from authentication
* Uses Web Crypto API for compatibility with both Node.js and Cloudflare Workers
*
* @param sdp - The WebRTC SDP offer
* @param topics - Array of topic strings
* @returns SHA-256 hash of the sanitized offer content
* @returns SHA-256 hash of the SDP content
*/
export async function generateOfferHash(
sdp: string,
topics: string[]
): Promise<string> {
export async function generateOfferHash(sdp: string): Promise<string> {
// Sanitize and normalize the offer content
// Only include core offer content (not peerId - that's inferred from auth)
const sanitizedOffer = {
sdp,
topics: [...topics].sort(), // Sort topics for consistency
sdp
};
// Create non-prettified JSON string
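The hunk is cut off before the hashing itself; a minimal sketch of how the remaining steps plausibly look, assuming the digest is hex-encoded (the encoding is not confirmed by this diff):

// Sketch only: deterministic JSON of the sanitized offer, hashed with the
// Web Crypto API (available in both Node and Cloudflare Workers).
const json = JSON.stringify(sanitizedOffer);
const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(json));
return Array.from(new Uint8Array(digest))
  .map((b) => b.toString(16).padStart(2, '0'))
  .join('');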

View File

@@ -9,9 +9,9 @@ import {
ClaimUsernameRequest,
Service,
CreateServiceRequest,
ServiceInfo,
} from './types.ts';
import { generateOfferHash } from './hash-id.ts';
import { parseServiceFqn } from '../crypto.ts';
const YEAR_IN_MS = 365 * 24 * 60 * 60 * 1000; // 365 days
@@ -36,30 +36,32 @@ export class SQLiteStorage implements Storage {
*/
private initializeDatabase(): void {
this.db.exec(`
-- Offers table (no topics)
-- WebRTC signaling offers
CREATE TABLE IF NOT EXISTS offers (
id TEXT PRIMARY KEY,
peer_id TEXT NOT NULL,
username TEXT NOT NULL,
service_id TEXT,
sdp TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
secret TEXT,
answerer_peer_id TEXT,
answerer_username TEXT,
answer_sdp TEXT,
answered_at INTEGER
answered_at INTEGER,
FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_offers_peer ON offers(peer_id);
CREATE INDEX IF NOT EXISTS idx_offers_username ON offers(username);
CREATE INDEX IF NOT EXISTS idx_offers_service ON offers(service_id);
CREATE INDEX IF NOT EXISTS idx_offers_expires ON offers(expires_at);
CREATE INDEX IF NOT EXISTS idx_offers_last_seen ON offers(last_seen);
CREATE INDEX IF NOT EXISTS idx_offers_answerer ON offers(answerer_peer_id);
CREATE INDEX IF NOT EXISTS idx_offers_answerer ON offers(answerer_username);
-- ICE candidates table
CREATE TABLE IF NOT EXISTS ice_candidates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
offer_id TEXT NOT NULL,
peer_id TEXT NOT NULL,
username TEXT NOT NULL,
role TEXT NOT NULL CHECK(role IN ('offerer', 'answerer')),
candidate TEXT NOT NULL,
created_at INTEGER NOT NULL,
@@ -67,7 +69,7 @@ export class SQLiteStorage implements Storage {
);
CREATE INDEX IF NOT EXISTS idx_ice_offer ON ice_candidates(offer_id);
CREATE INDEX IF NOT EXISTS idx_ice_peer ON ice_candidates(peer_id);
CREATE INDEX IF NOT EXISTS idx_ice_username ON ice_candidates(username);
CREATE INDEX IF NOT EXISTS idx_ice_created ON ice_candidates(created_at);
-- Usernames table
@@ -84,39 +86,23 @@ export class SQLiteStorage implements Storage {
CREATE INDEX IF NOT EXISTS idx_usernames_expires ON usernames(expires_at);
CREATE INDEX IF NOT EXISTS idx_usernames_public_key ON usernames(public_key);
-- Services table
-- Services table (new schema with extracted fields for discovery)
CREATE TABLE IF NOT EXISTS services (
id TEXT PRIMARY KEY,
username TEXT NOT NULL,
service_fqn TEXT NOT NULL,
offer_id TEXT NOT NULL,
service_name TEXT NOT NULL,
version TEXT NOT NULL,
username TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
is_public INTEGER NOT NULL DEFAULT 0,
metadata TEXT,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
FOREIGN KEY (offer_id) REFERENCES offers(id) ON DELETE CASCADE,
UNIQUE(username, service_fqn)
UNIQUE(service_fqn)
);
CREATE INDEX IF NOT EXISTS idx_services_username ON services(username);
CREATE INDEX IF NOT EXISTS idx_services_fqn ON services(service_fqn);
CREATE INDEX IF NOT EXISTS idx_services_discovery ON services(service_name, version);
CREATE INDEX IF NOT EXISTS idx_services_username ON services(username);
CREATE INDEX IF NOT EXISTS idx_services_expires ON services(expires_at);
CREATE INDEX IF NOT EXISTS idx_services_offer ON services(offer_id);
-- Service index table (privacy layer)
CREATE TABLE IF NOT EXISTS service_index (
uuid TEXT PRIMARY KEY,
service_id TEXT NOT NULL,
username TEXT NOT NULL,
service_fqn TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_service_index_username ON service_index(username);
CREATE INDEX IF NOT EXISTS idx_service_index_expires ON service_index(expires_at);
`);
// Enable foreign keys
@@ -132,14 +118,14 @@ export class SQLiteStorage implements Storage {
const offersWithIds = await Promise.all(
offers.map(async (offer) => ({
...offer,
id: offer.id || await generateOfferHash(offer.sdp, []),
id: offer.id || await generateOfferHash(offer.sdp),
}))
);
// Use transaction for atomic creation
const transaction = this.db.transaction((offersWithIds: (CreateOfferRequest & { id: string })[]) => {
const offerStmt = this.db.prepare(`
INSERT INTO offers (id, peer_id, sdp, created_at, expires_at, last_seen, secret)
INSERT INTO offers (id, username, service_id, sdp, created_at, expires_at, last_seen)
VALUES (?, ?, ?, ?, ?, ?, ?)
`);
@@ -149,22 +135,23 @@ export class SQLiteStorage implements Storage {
// Insert offer
offerStmt.run(
offer.id,
offer.peerId,
offer.username,
offer.serviceId || null,
offer.sdp,
now,
offer.expiresAt,
now,
offer.secret || null
now
);
created.push({
id: offer.id,
peerId: offer.peerId,
username: offer.username,
serviceId: offer.serviceId || undefined,
serviceFqn: offer.serviceFqn,
sdp: offer.sdp,
createdAt: now,
expiresAt: offer.expiresAt,
lastSeen: now,
secret: offer.secret,
});
}
});
@@ -173,14 +160,14 @@ export class SQLiteStorage implements Storage {
return created;
}
async getOffersByPeerId(peerId: string): Promise<Offer[]> {
async getOffersByUsername(username: string): Promise<Offer[]> {
const stmt = this.db.prepare(`
SELECT * FROM offers
WHERE peer_id = ? AND expires_at > ?
WHERE username = ? AND expires_at > ?
ORDER BY last_seen DESC
`);
const rows = stmt.all(peerId, Date.now()) as any[];
const rows = stmt.all(username, Date.now()) as any[];
return rows.map(row => this.rowToOffer(row));
}
@@ -199,13 +186,13 @@ export class SQLiteStorage implements Storage {
return this.rowToOffer(row);
}
async deleteOffer(offerId: string, ownerPeerId: string): Promise<boolean> {
async deleteOffer(offerId: string, ownerUsername: string): Promise<boolean> {
const stmt = this.db.prepare(`
DELETE FROM offers
WHERE id = ? AND peer_id = ?
WHERE id = ? AND username = ?
`);
const result = stmt.run(offerId, ownerPeerId);
const result = stmt.run(offerId, ownerUsername);
return result.changes > 0;
}
@@ -217,9 +204,8 @@ export class SQLiteStorage implements Storage {
async answerOffer(
offerId: string,
answererPeerId: string,
answerSdp: string,
secret?: string
answererUsername: string,
answerSdp: string
): Promise<{ success: boolean; error?: string }> {
// Check if offer exists and is not expired
const offer = await this.getOfferById(offerId);
@@ -231,16 +217,8 @@ export class SQLiteStorage implements Storage {
};
}
// Verify secret if offer is protected
if (offer.secret && offer.secret !== secret) {
return {
success: false,
error: 'Invalid or missing secret'
};
}
// Check if offer already has an answerer
if (offer.answererPeerId) {
if (offer.answererUsername) {
return {
success: false,
error: 'Offer already answered'
@@ -250,11 +228,11 @@ export class SQLiteStorage implements Storage {
// Update offer with answer
const stmt = this.db.prepare(`
UPDATE offers
SET answerer_peer_id = ?, answer_sdp = ?, answered_at = ?
WHERE id = ? AND answerer_peer_id IS NULL
SET answerer_username = ?, answer_sdp = ?, answered_at = ?
WHERE id = ? AND answerer_username IS NULL
`);
const result = stmt.run(answererPeerId, answerSdp, Date.now(), offerId);
const result = stmt.run(answererUsername, answerSdp, Date.now(), offerId);
if (result.changes === 0) {
return {
@@ -266,14 +244,14 @@ export class SQLiteStorage implements Storage {
return { success: true };
}
async getAnsweredOffers(offererPeerId: string): Promise<Offer[]> {
async getAnsweredOffers(offererUsername: string): Promise<Offer[]> {
const stmt = this.db.prepare(`
SELECT * FROM offers
WHERE peer_id = ? AND answerer_peer_id IS NOT NULL AND expires_at > ?
WHERE username = ? AND answerer_username IS NOT NULL AND expires_at > ?
ORDER BY answered_at DESC
`);
const rows = stmt.all(offererPeerId, Date.now()) as any[];
const rows = stmt.all(offererUsername, Date.now()) as any[];
return rows.map(row => this.rowToOffer(row));
}
@@ -281,12 +259,12 @@ export class SQLiteStorage implements Storage {
async addIceCandidates(
offerId: string,
peerId: string,
username: string,
role: 'offerer' | 'answerer',
candidates: any[]
): Promise<number> {
const stmt = this.db.prepare(`
INSERT INTO ice_candidates (offer_id, peer_id, role, candidate, created_at)
INSERT INTO ice_candidates (offer_id, username, role, candidate, created_at)
VALUES (?, ?, ?, ?, ?)
`);
@@ -295,7 +273,7 @@ export class SQLiteStorage implements Storage {
for (let i = 0; i < candidates.length; i++) {
stmt.run(
offerId,
peerId,
username,
role,
JSON.stringify(candidates[i]),
baseTimestamp + i
@@ -332,7 +310,7 @@ export class SQLiteStorage implements Storage {
return rows.map(row => ({
id: row.id,
offerId: row.offer_id,
peerId: row.peer_id,
username: row.username,
role: row.role,
candidate: JSON.parse(row.candidate),
createdAt: row.created_at,
@@ -425,66 +403,98 @@ export class SQLiteStorage implements Storage {
async createService(request: CreateServiceRequest): Promise<{
service: Service;
indexUuid: string;
offers: Offer[];
}> {
const serviceId = randomUUID();
const indexUuid = randomUUID();
const now = Date.now();
// Parse FQN to extract components
const parsed = parseServiceFqn(request.serviceFqn);
if (!parsed) {
throw new Error(`Invalid service FQN: ${request.serviceFqn}`);
}
if (!parsed.username) {
throw new Error(`Service FQN must include username: ${request.serviceFqn}`);
}
const { serviceName, version, username } = parsed;
const transaction = this.db.transaction(() => {
// Insert service
const serviceStmt = this.db.prepare(`
INSERT INTO services (id, username, service_fqn, offer_id, created_at, expires_at, is_public, metadata)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
`);
// Delete existing service with same (service_name, version, username) and its related offers (upsert behavior)
const existingService = this.db.prepare(`
SELECT id FROM services
WHERE service_name = ? AND version = ? AND username = ?
`).get(serviceName, version, username) as any;
serviceStmt.run(
if (existingService) {
// Delete related offers first (no FK cascade from offers to services)
this.db.prepare(`
DELETE FROM offers WHERE service_id = ?
`).run(existingService.id);
// Delete the service
this.db.prepare(`
DELETE FROM services WHERE id = ?
`).run(existingService.id);
}
// Insert new service with extracted fields
this.db.prepare(`
INSERT INTO services (id, service_fqn, service_name, version, username, created_at, expires_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
`).run(
serviceId,
request.username,
request.serviceFqn,
request.offerId,
now,
request.expiresAt,
request.isPublic ? 1 : 0,
request.metadata || null
);
// Insert service index
const indexStmt = this.db.prepare(`
INSERT INTO service_index (uuid, service_id, username, service_fqn, created_at, expires_at)
VALUES (?, ?, ?, ?, ?, ?)
`);
indexStmt.run(
indexUuid,
serviceId,
request.username,
request.serviceFqn,
serviceName,
version,
username,
now,
request.expiresAt
);
// Touch username to extend expiry
this.touchUsername(request.username);
// Touch username to extend expiry (inline logic)
const expiresAt = now + YEAR_IN_MS;
this.db.prepare(`
UPDATE usernames
SET last_used = ?, expires_at = ?
WHERE username = ? AND expires_at > ?
`).run(now, expiresAt, username, now);
});
transaction();
// Create offers with serviceId (after transaction)
const offerRequests = request.offers.map(offer => ({
...offer,
serviceId,
}));
const offers = await this.createOffers(offerRequests);
return {
service: {
id: serviceId,
username: request.username,
serviceFqn: request.serviceFqn,
offerId: request.offerId,
serviceName,
version,
username,
createdAt: now,
expiresAt: request.expiresAt,
isPublic: request.isPublic || false,
metadata: request.metadata,
},
indexUuid,
offers,
};
}
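One design note worth spelling out: better-sqlite3 transaction callbacks are synchronous, so the awaited createOffers() call cannot sit inside the transaction; the offers are inserted in a second step that is not atomic with the service upsert. A condensed sketch of the ordering (`db` stands for the better-sqlite3 handle):

// Condensed from the method above (not a drop-in replacement).
const tx = db.transaction(() => {
  // delete any existing (service_name, version, username) row and its offers,
  // insert the new service row, extend the username expiry
});
tx();                                                     // commits synchronously
const offers = await storage.createOffers(offerRequests); // separate, non-atomic step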
async getOffersForService(serviceId: string): Promise<Offer[]> {
const stmt = this.db.prepare(`
SELECT * FROM offers
WHERE service_id = ? AND expires_at > ?
ORDER BY created_at ASC
`);
const rows = stmt.all(serviceId, Date.now()) as any[];
return rows.map(row => this.rowToOffer(row));
}
async getServiceById(serviceId: string): Promise<Service | null> {
const stmt = this.db.prepare(`
SELECT * FROM services
@@ -500,14 +510,13 @@ export class SQLiteStorage implements Storage {
return this.rowToService(row);
}
async getServiceByUuid(uuid: string): Promise<Service | null> {
async getServiceByFqn(serviceFqn: string): Promise<Service | null> {
const stmt = this.db.prepare(`
SELECT s.* FROM services s
INNER JOIN service_index si ON s.id = si.service_id
WHERE si.uuid = ? AND s.expires_at > ?
SELECT * FROM services
WHERE service_fqn = ? AND expires_at > ?
`);
const row = stmt.get(uuid, Date.now()) as any;
const row = stmt.get(serviceFqn, Date.now()) as any;
if (!row) {
return null;
@@ -516,35 +525,51 @@ export class SQLiteStorage implements Storage {
return this.rowToService(row);
}
async listServicesForUsername(username: string): Promise<ServiceInfo[]> {
async discoverServices(
serviceName: string,
version: string,
limit: number,
offset: number
): Promise<Service[]> {
// Query for unique services with available offers
// We join with offers and filter for available ones (answerer_username IS NULL)
const stmt = this.db.prepare(`
SELECT si.uuid, s.is_public, s.service_fqn, s.metadata
FROM service_index si
INNER JOIN services s ON si.service_id = s.id
WHERE si.username = ? AND si.expires_at > ?
SELECT DISTINCT s.* FROM services s
INNER JOIN offers o ON o.service_id = s.id
WHERE s.service_name = ?
AND s.version = ?
AND s.expires_at > ?
AND o.answerer_username IS NULL
AND o.expires_at > ?
ORDER BY s.created_at DESC
LIMIT ? OFFSET ?
`);
const rows = stmt.all(username, Date.now()) as any[];
return rows.map(row => ({
uuid: row.uuid,
isPublic: row.is_public === 1,
serviceFqn: row.is_public === 1 ? row.service_fqn : undefined,
metadata: row.is_public === 1 ? row.metadata || undefined : undefined,
}));
const rows = stmt.all(serviceName, version, Date.now(), Date.now(), limit, offset) as any[];
return rows.map(row => this.rowToService(row));
}
async queryService(username: string, serviceFqn: string): Promise<string | null> {
async getRandomService(serviceName: string, version: string): Promise<Service | null> {
// Get a random service with an available offer
const stmt = this.db.prepare(`
SELECT si.uuid FROM service_index si
INNER JOIN services s ON si.service_id = s.id
WHERE si.username = ? AND si.service_fqn = ? AND si.expires_at > ?
SELECT s.* FROM services s
INNER JOIN offers o ON o.service_id = s.id
WHERE s.service_name = ?
AND s.version = ?
AND s.expires_at > ?
AND o.answerer_username IS NULL
AND o.expires_at > ?
ORDER BY RANDOM()
LIMIT 1
`);
const row = stmt.get(username, serviceFqn, Date.now()) as any;
const row = stmt.get(serviceName, version, Date.now(), Date.now()) as any;
return row ? row.uuid : null;
if (!row) {
return null;
}
return this.rowToService(row);
}
async deleteService(serviceId: string, username: string): Promise<boolean> {
@@ -575,13 +600,14 @@ export class SQLiteStorage implements Storage {
private rowToOffer(row: any): Offer {
return {
id: row.id,
peerId: row.peer_id,
username: row.username,
serviceId: row.service_id || undefined,
serviceFqn: row.service_fqn || undefined,
sdp: row.sdp,
createdAt: row.created_at,
expiresAt: row.expires_at,
lastSeen: row.last_seen,
secret: row.secret || undefined,
answererPeerId: row.answerer_peer_id || undefined,
answererUsername: row.answerer_username || undefined,
answerSdp: row.answer_sdp || undefined,
answeredAt: row.answered_at || undefined,
};
@@ -593,13 +619,12 @@ export class SQLiteStorage implements Storage {
private rowToService(row: any): Service {
return {
id: row.id,
username: row.username,
serviceFqn: row.service_fqn,
offerId: row.offer_id,
serviceName: row.service_name,
version: row.version,
username: row.username,
createdAt: row.created_at,
expiresAt: row.expires_at,
isPublic: row.is_public === 1,
metadata: row.metadata || undefined,
};
}
}

View File

@@ -1,16 +1,16 @@
/**
* Represents a WebRTC signaling offer (no topics)
* Represents a WebRTC signaling offer
*/
export interface Offer {
id: string;
peerId: string;
username: string;
serviceId?: string; // Optional link to service (null for standalone offers)
serviceFqn?: string; // Denormalized service FQN for easier queries
sdp: string;
createdAt: number;
expiresAt: number;
lastSeen: number;
secret?: string;
info?: string;
answererPeerId?: string;
answererUsername?: string;
answerSdp?: string;
answeredAt?: number;
}
@@ -22,7 +22,7 @@ export interface Offer {
export interface IceCandidate {
id: number;
offerId: string;
peerId: string;
username: string;
role: 'offerer' | 'answerer';
candidate: any; // Full candidate object as JSON - don't enforce structure
createdAt: number;
@@ -33,11 +33,11 @@ export interface IceCandidate {
*/
export interface CreateOfferRequest {
id?: string;
peerId: string;
username: string;
serviceId?: string; // Optional link to service
serviceFqn?: string; // Optional service FQN
sdp: string;
expiresAt: number;
secret?: string;
info?: string;
}
/**
@@ -63,51 +63,26 @@ export interface ClaimUsernameRequest {
}
/**
* Represents a published service
* Represents a published service (can have multiple offers)
* New format: service:version@username (e.g., chat:1.0.0@alice)
*/
export interface Service {
id: string; // UUID v4
username: string;
serviceFqn: string; // com.example.chat@1.0.0
offerId: string; // Links to offers table
serviceFqn: string; // Full FQN: chat:1.0.0@alice
serviceName: string; // Extracted: chat
version: string; // Extracted: 1.0.0
username: string; // Extracted: alice
createdAt: number;
expiresAt: number;
isPublic: boolean;
metadata?: string; // JSON service description
}
/**
* Request to create a service
* Request to create a single service
*/
export interface CreateServiceRequest {
username: string;
serviceFqn: string;
offerId: string;
serviceFqn: string; // Full FQN with username: chat:1.0.0@alice
expiresAt: number;
isPublic?: boolean;
metadata?: string;
}
/**
* Represents a service index entry (privacy layer)
*/
export interface ServiceIndex {
uuid: string; // Random UUID for privacy
serviceId: string;
username: string;
serviceFqn: string;
createdAt: number;
expiresAt: number;
}
/**
* Service info for discovery (privacy-aware)
*/
export interface ServiceInfo {
uuid: string;
isPublic: boolean;
serviceFqn?: string; // Only present if public
metadata?: string; // Only present if public
offers: CreateOfferRequest[]; // Multiple offers per service
}
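As a quick illustration of the new FQN convention described above (all values made up):

// "chat:1.0.0@alice" decomposed into the extracted Service fields.
const svc: Service = {
  id: 'c0ffee00-0000-4000-8000-000000000000',
  serviceFqn: 'chat:1.0.0@alice',
  serviceName: 'chat',
  version: '1.0.0',
  username: 'alice',
  createdAt: Date.now(),
  expiresAt: Date.now() + 24 * 60 * 60 * 1000,
};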
/**
@@ -125,11 +100,11 @@ export interface Storage {
createOffers(offers: CreateOfferRequest[]): Promise<Offer[]>;
/**
* Retrieves all offers from a specific peer
* @param peerId Peer identifier
* @returns Array of offers from the peer
* Retrieves all offers from a specific user
* @param username Username identifier
* @returns Array of offers from the user
*/
getOffersByPeerId(peerId: string): Promise<Offer[]>;
getOffersByUsername(username: string): Promise<Offer[]>;
/**
* Retrieves a specific offer by ID
@@ -141,10 +116,10 @@ export interface Storage {
/**
* Deletes an offer (with ownership verification)
* @param offerId Offer identifier
* @param ownerPeerId Peer ID of the owner (for verification)
* @param ownerUsername Username of the owner (for verification)
* @returns true if deleted, false if not found or not owned
*/
deleteOffer(offerId: string, ownerPeerId: string): Promise<boolean>;
deleteOffer(offerId: string, ownerUsername: string): Promise<boolean>;
/**
* Deletes all expired offers
@@ -156,36 +131,35 @@ export interface Storage {
/**
* Answers an offer (locks it to the answerer)
* @param offerId Offer identifier
* @param answererPeerId Answerer's peer ID
* @param answererUsername Answerer's username
* @param answerSdp WebRTC answer SDP
* @param secret Optional secret for protected offers
* @returns Success status and optional error message
*/
answerOffer(offerId: string, answererPeerId: string, answerSdp: string, secret?: string): Promise<{
answerOffer(offerId: string, answererUsername: string, answerSdp: string): Promise<{
success: boolean;
error?: string;
}>;
/**
* Retrieves all answered offers for a specific offerer
* @param offererPeerId Offerer's peer ID
* @param offererUsername Offerer's username
* @returns Array of answered offers
*/
getAnsweredOffers(offererPeerId: string): Promise<Offer[]>;
getAnsweredOffers(offererUsername: string): Promise<Offer[]>;
// ===== ICE Candidate Management =====
/**
* Adds ICE candidates for an offer
* @param offerId Offer identifier
* @param peerId Peer ID posting the candidates
* @param role Role of the peer (offerer or answerer)
* @param username Username posting the candidates
* @param role Role of the user (offerer or answerer)
* @param candidates Array of candidate objects (stored as plain JSON)
* @returns Number of candidates added
*/
addIceCandidates(
offerId: string,
peerId: string,
username: string,
role: 'offerer' | 'answerer',
candidates: any[]
): Promise<number>;
@@ -219,13 +193,6 @@ export interface Storage {
*/
getUsername(username: string): Promise<Username | null>;
/**
* Updates the last_used timestamp for a username (extends expiry)
* @param username Username to update
* @returns true if updated, false if not found
*/
touchUsername(username: string): Promise<boolean>;
/**
* Deletes all expired usernames
* @param now Current timestamp
@@ -236,15 +203,23 @@ export interface Storage {
// ===== Service Management =====
/**
* Creates a new service
* @param request Service creation request
* @returns Created service with generated ID and index UUID
* Creates a new service with offers
* @param request Service creation request (includes offers)
* @returns Created service with generated ID and created offers
*/
createService(request: CreateServiceRequest): Promise<{
service: Service;
indexUuid: string;
offers: Offer[];
}>;
/**
* Gets all offers for a service
* @param serviceId Service ID
* @returns Array of offers for the service
*/
getOffersForService(serviceId: string): Promise<Offer[]>;
/**
* Gets a service by its service ID
* @param serviceId Service ID
@@ -253,26 +228,40 @@ export interface Storage {
getServiceById(serviceId: string): Promise<Service | null>;
/**
* Gets a service by its index UUID
* @param uuid Index UUID
* Gets a service by its fully qualified name (FQN)
* @param serviceFqn Full service FQN (e.g., "chat:1.0.0@alice")
* @returns Service if found, null otherwise
*/
getServiceByUuid(uuid: string): Promise<Service | null>;
getServiceByFqn(serviceFqn: string): Promise<Service | null>;
/**
* Lists all services for a username (with privacy filtering)
* @param username Username to query
* @returns Array of service info (UUIDs only for private services)
* Discovers services by name and version with pagination
* Returns unique services that still have available offers (answerer_username IS NULL)
* @param serviceName Service name (e.g., 'chat')
* @param version Version string for semver matching (e.g., '1.0.0')
* @param limit Maximum number of unique services to return
* @param offset Number of services to skip
* @returns Array of services with available offers
*/
listServicesForUsername(username: string): Promise<ServiceInfo[]>;
discoverServices(
serviceName: string,
version: string,
limit: number,
offset: number
): Promise<Service[]>;
/**
* Queries a service by username and FQN
* @param username Username
* @param serviceFqn Service FQN
* @returns Service index UUID if found, null otherwise
* Gets a random available service by name and version
* Returns a single random service that still has an available offer (answerer_username IS NULL)
* @param serviceName Service name (e.g., 'chat')
* @param version Version string for semver matching (e.g., '1.0.0')
* @returns Random service with available offer, or null if none found
*/
queryService(username: string, serviceFqn: string): Promise<string | null>;
getRandomService(serviceName: string, version: string): Promise<Service | null>;
/**
* Deletes a service (with ownership verification)

View File

@@ -1,6 +1,5 @@
import { createApp } from './app.ts';
import { D1Storage } from './storage/d1.ts';
import { generateSecretKey } from './crypto.ts';
import { Config } from './config.ts';
/**
@@ -8,12 +7,10 @@ import { Config } from './config.ts';
*/
export interface Env {
DB: D1Database;
AUTH_SECRET?: string;
OFFER_DEFAULT_TTL?: string;
OFFER_MAX_TTL?: string;
OFFER_MIN_TTL?: string;
MAX_OFFERS_PER_REQUEST?: string;
MAX_TOPICS_PER_OFFER?: string;
CORS_ORIGINS?: string;
VERSION?: string;
}
@@ -26,9 +23,6 @@ export default {
// Initialize D1 storage
const storage = new D1Storage(env.DB);
// Generate or use provided auth secret
const authSecret = env.AUTH_SECRET || generateSecretKey();
// Build config from environment
const config: Config = {
port: 0, // Not used in Workers
@@ -38,13 +32,11 @@ export default {
? env.CORS_ORIGINS.split(',').map(o => o.trim())
: ['*'],
version: env.VERSION || 'unknown',
authSecret,
offerDefaultTtl: env.OFFER_DEFAULT_TTL ? parseInt(env.OFFER_DEFAULT_TTL, 10) : 60000,
offerMaxTtl: env.OFFER_MAX_TTL ? parseInt(env.OFFER_MAX_TTL, 10) : 86400000,
offerMinTtl: env.OFFER_MIN_TTL ? parseInt(env.OFFER_MIN_TTL, 10) : 60000,
cleanupInterval: 60000, // Not used in Workers (scheduled handler instead)
maxOffersPerRequest: env.MAX_OFFERS_PER_REQUEST ? parseInt(env.MAX_OFFERS_PER_REQUEST, 10) : 100,
maxTopicsPerOffer: env.MAX_TOPICS_PER_OFFER ? parseInt(env.MAX_TOPICS_PER_OFFER, 10) : 50,
maxOffersPerRequest: env.MAX_OFFERS_PER_REQUEST ? parseInt(env.MAX_OFFERS_PER_REQUEST, 10) : 100
};
// Create Hono app
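For context, a sketch of how the string [vars] from the wrangler.toml shown at the end of this changeset flow into the typed Config built here; the intFromEnv helper is illustrative only and not part of the diff.

// Illustrative only: the repeated "env var or default" pattern above.
const intFromEnv = (value: string | undefined, fallback: number): number =>
  value ? parseInt(value, 10) : fallback;

// With OFFER_MIN_TTL = "60000", MAX_OFFERS_PER_REQUEST = "100", CORS_ORIGINS = "*":
//   offerMinTtl         === intFromEnv(env.OFFER_MIN_TTL, 60000)        === 60000
//   maxOffersPerRequest === intFromEnv(env.MAX_OFFERS_PER_REQUEST, 100) === 100
//   corsOrigins         === ["*"]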

View File

@@ -7,7 +7,7 @@ compatibility_flags = ["nodejs_compat"]
[[d1_databases]]
binding = "DB"
database_name = "rondevu-offers"
database_id = "b94e3f71-816d-455b-a89d-927fa49532d0"
database_id = "3d469855-d37f-477b-b139-fa58843a54ff"
# Environment variables
[vars]
@@ -17,7 +17,7 @@ OFFER_MIN_TTL = "60000" # Min offer TTL: 1 minute
MAX_OFFERS_PER_REQUEST = "100" # Max offers per request
MAX_TOPICS_PER_OFFER = "50" # Max topics per offer
CORS_ORIGINS = "*" # Comma-separated list of allowed origins
VERSION = "0.1.0" # Semantic version
VERSION = "0.4.0" # Semantic version
# AUTH_SECRET should be set as a secret, not a var
# Run: npx wrangler secret put AUTH_SECRET
@@ -39,7 +39,7 @@ command = ""
[observability]
[observability.logs]
enabled = false
enabled = true
head_sampling_rate = 1
invocation_logs = true
persist = true