46 Commits

Author SHA1 Message Date
Bas 83fc5d89d6 "Claude Code Review workflow" 2025-12-14 11:00:45 +01:00
Bas 59b008983d "Claude PR Assistant workflow" 2025-12-14 11:00:44 +01:00
6a1ca14bf8 Update VERSION to 0.5.2 in wrangler.toml 2025-12-13 18:25:16 +01:00
350d15591a v0.5.2 - Allow periods in usernames 2025-12-13 18:24:26 +01:00
7246d4c723 Allow periods in usernames
Update username validation regex to allow periods (.) in addition to alphanumeric characters and dashes. Usernames must still start and end with alphanumeric characters.
2025-12-13 18:24:16 +01:00
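A minimal sketch of the rule this commit describes, assuming lowercase usernames and the 3-32 character limit documented elsewhere in the repo; the server's actual regex may differ.

```typescript
// Illustrative only: alphanumerics, dashes and periods allowed in the middle,
// but the username must start and end with an alphanumeric character.
const USERNAME_PATTERN = /^[a-z0-9](?:[a-z0-9.-]*[a-z0-9])?$/;

function isValidUsername(username: string): boolean {
  return username.length >= 3 && username.length <= 32 && USERNAME_PATTERN.test(username);
}

console.log(isValidUsername("alice.smith")); // true
console.log(isValidUsername(".alice"));      // false: starts with a period
console.log(isValidUsername("alice-"));      // false: ends with a dash
```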
f53d3024c7 Update VERSION to 0.5.1 in wrangler.toml 2025-12-13 15:09:11 +01:00
384b80ef0d v0.5.1 - Fix base64 encoding for Node.js compatibility 2025-12-13 15:09:02 +01:00
b5799fa9ac Fix base64 encoding for Node.js compatibility
Use Buffer for base64 encoding/decoding to ensure compatibility with Node.js clients using NodeCryptoAdapter. The previous btoa/atob implementation caused signature verification failures when clients used Buffer-based encoding.

Fixes: Invalid signature errors when using NodeCryptoAdapter
2025-12-13 15:08:56 +01:00
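A sketch of the Buffer-based approach the commit describes; the function names are illustrative, not the server's actual helpers.

```typescript
// Encode/decode raw bytes as base64 using Buffer so that Node.js clients
// (e.g. one signing with Uint8Array keys via NodeCryptoAdapter) and the
// server agree on the exact bytes being signed and verified.
function bytesToBase64(bytes: Uint8Array): string {
  return Buffer.from(bytes).toString("base64");
}

function base64ToBytes(b64: string): Uint8Array {
  return new Uint8Array(Buffer.from(b64, "base64"));
}

// btoa/atob, by contrast, operate on "binary strings"; mixing them with
// Buffer-based encoding on the other side can yield mismatched bytes and
// therefore failed signature verification.
```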
2a648c5818 Update VERSION to 0.5.0 in wrangler.toml 2025-12-13 14:43:42 +01:00
9e93ed506e Release v0.5.0: Phase 1 & 2 improvements, documentation restructuring 2025-12-13 13:19:14 +01:00
9002fe3f6d Make README more concise with ADVANCED.md
Restructure documentation for better discoverability:

Changes:
- README.md: 624 → 259 lines (58% reduction)
- ADVANCED.md: New comprehensive guide (502 lines)

README.md now contains:
- Features and architecture overview
- Quick start commands
- RPC interface basics
- Core method examples
- Configuration quick reference
- Links to advanced docs

ADVANCED.md contains:
📚 Complete RPC method reference (8 methods)
📚 Full configuration table
📚 Database schema documentation
📚 Security implementation details
📚 Migration guides

Benefits:
- Faster onboarding for API consumers
- Essential examples in README
- Detailed reference still accessible
- Consistent documentation structure

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-12 23:17:06 +01:00
88a038a12a Refactor getService() method for better maintainability
Break the 138-line method down into three helper functions:

1. filterCompatibleServices(): Eliminates duplicate filtering logic
   - Used in both paginated and random discovery modes
   - Centralizes version compatibility checking

2. findAvailableOffer(): Encapsulates offer lookup logic
   - Used across all three modes
   - Ensures consistent offer selection

3. buildServiceResponse(): Standardizes response formatting
   - Single source of truth for response structure
   - Used in all return paths

Benefits:
- Eliminates 30+ lines of duplicate code
- Three modes now clearly separated and documented
- Easier to maintain and test each mode independently
- Consistent response formatting across all modes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-12 23:09:24 +01:00
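A sketch of the decomposition this commit describes. The helper names come from the commit message; the types, signatures, and the simplified major-version compatibility check are assumptions.

```typescript
interface Service { id: string; serviceName: string; version: string; username: string; }
interface Offer { id: string; serviceId: string; sdp: string; answererUsername: string | null; }

// Shared by paginated and random discovery: keep only services whose version
// is compatible with the requested one (major-version match shown here).
function filterCompatibleServices(services: Service[], requestedVersion: string): Service[] {
  const major = requestedVersion.split(".")[0];
  return services.filter((s) => s.version.split(".")[0] === major);
}

// Shared by all three modes: pick the first offer that has not been answered yet.
function findAvailableOffer(offers: Offer[], serviceId: string): Offer | undefined {
  return offers.find((o) => o.serviceId === serviceId && o.answererUsername === null);
}

// Single source of truth for the response shape used by every return path.
function buildServiceResponse(service: Service, offer: Offer) {
  return {
    serviceId: service.id,
    username: service.username,
    serviceFqn: `${service.serviceName}:${service.version}@${service.username}`,
    offerId: offer.id,
    sdp: offer.sdp,
  };
}
```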
3c0d1c8411 Replace magic numbers with named constants in server
Refactoring: Extract magic numbers to named constants
- MAX_BATCH_SIZE = 100 (batch request limit)
- MAX_PAGE_SIZE = 100 (pagination limit)

Replaced in:
- app.ts: Batch size validation (lines 68-69)
- rpc.ts: Page size limit (line 184)

Impact: Improves code clarity and makes limits configurable
2025-12-12 22:56:55 +01:00
53a576670e Add candidate validation to addIceCandidates()
Validation: Add basic candidate validation
- Validate each candidate is an object
- Don't enforce specific structure (per CLAUDE.md guidelines)
- Provides clear error messages with index

Impact: Prevents runtime errors from null/primitive values
Note: Intentionally keeps candidate structure flexible per design
2025-12-12 22:53:29 +01:00
7e2e8c703e Add SDP validation to publishService()
Validation: Add comprehensive offer validation
- Validate each offer is an object
- Validate each offer has sdp property
- Validate sdp is a string
- Validate sdp is not empty/whitespace

Impact: Prevents runtime errors from malformed offers
Improves error messages with specific index information
2025-12-12 22:52:46 +01:00
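An illustrative version of the checks this commit lists; the server's actual error messages and types may differ.

```typescript
function validateOffers(offers: unknown): asserts offers is { sdp: string }[] {
  if (!Array.isArray(offers)) {
    throw new Error("offers must be an array");
  }
  offers.forEach((offer, i) => {
    // Each offer must be an object...
    if (typeof offer !== "object" || offer === null) {
      throw new Error(`Offer at index ${i} must be an object`);
    }
    // ...with a non-empty, non-whitespace sdp string.
    const sdp = (offer as { sdp?: unknown }).sdp;
    if (typeof sdp !== "string" || sdp.trim().length === 0) {
      throw new Error(`Offer at index ${i} must have a non-empty sdp string`);
    }
  });
}
```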
05fe34be01 Remove explicit claimUsername RPC handler - claiming now fully implicit
Username claiming is now handled automatically in verifyAuth() when a username
doesn't exist. The separate claimUsername RPC method is no longer needed.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-12 21:56:56 +01:00
68bb28bbc2 Fix: Replace all remaining getOffersByService with getOffersForService 2025-12-12 21:13:02 +01:00
677bbbb37e Fix: Correct method name from getOffersByService to getOffersForService
The storage interface defines getOffersForService(), but the RPC
handler was calling getOffersByService(), causing a runtime error.
2025-12-12 21:11:56 +01:00
caae10bcac Fix: Pass offers to createService method
The createService storage method expects offers in the request,
but publishService wasn't passing them. This caused an undefined
error when d1.ts tried to call request.offers.map().

Now correctly passes offers to createService which handles
creating both the service and all offers atomically.
2025-12-12 21:09:15 +01:00
34babd036e Fix: Auto-claim should not validate claim message format
Auto-claim was incorrectly using validateUsernameClaim(), which
expects the 'claim:{username}:{timestamp}' message format. This failed
when users tried to auto-claim via publishService or getService.

Now auto-claim only:
- Validates username format
- Verifies signature against the actual message
- Claims the username

This allows implicit username claiming on first authenticated request.
2025-12-12 21:03:44 +01:00
876ac2602c Fix: Correct validateUsernameClaim function calls
The function expects 4 separate parameters, not an object.
This was causing 'Username must be a string' errors because
the entire object was being passed as the username parameter.
2025-12-12 21:00:11 +01:00
df9f3311e9 Fix: Add missing continue statement in message validation
The message validation was missing a continue statement, causing
the handler to continue executing even after pushing an error response.
This led to undefined errors when trying to map over undefined values.
2025-12-12 20:52:24 +01:00
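A sketch of the batch-validation loop pattern this fix restores: after pushing an error response for a malformed entry, `continue` skips the handler for that entry. The types and handler parameter are illustrative.

```typescript
type RpcResponse = { success: boolean; result?: unknown; error?: string };

// `handle` stands in for the real per-method handler.
async function handleBatch(
  requests: Array<{ message?: unknown }>,
  handle: (req: { message: string }) => Promise<RpcResponse>
): Promise<RpcResponse[]> {
  const responses: RpcResponse[] = [];
  for (const req of requests) {
    if (typeof req.message !== "string") {
      responses.push({ success: false, error: "Missing or invalid message" });
      continue; // without this, the handler below would still run on the invalid entry
    }
    responses.push(await handle({ ...req, message: req.message }));
  }
  return responses;
}
```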
9f30f8b46d Implement implicit username claiming in RPC handler
Modified verifyAuth() to automatically claim usernames on first use.
When a username is not claimed and a publicKey is provided in the
RPC request, the server will validate and auto-claim it.

Changes:
- Added publicKey parameter to verifyAuth() function
- Added publicKey field to RpcRequest interface
- Updated RpcHandler type to include publicKey parameter
- Modified all method handlers to pass publicKey to verifyAuth()
- Updated handleRpc() to extract publicKey from requests

🤖 Generated with Claude Code
https://claude.com/claude-code

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-12 20:22:23 +01:00
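A hedged sketch of the verifyAuth() flow this commit describes. The storage interface and crypto helpers are assumed shapes, not the server's actual definitions.

```typescript
interface UserStore {
  getUser(username: string): Promise<{ publicKey: string } | null>;
  claimUsername(username: string, publicKey: string): Promise<void>;
}

interface AuthDeps {
  store: UserStore;
  verifySignature(publicKey: string, message: string, signature: string): Promise<boolean>;
  isValidUsername(username: string): boolean;
}

async function verifyAuth(
  deps: AuthDeps,
  username: string,
  message: string,
  signature: string,
  publicKey?: string
): Promise<boolean> {
  const existing = await deps.store.getUser(username);

  if (!existing) {
    // First authenticated request for this username: auto-claim when a public
    // key is supplied and the signature verifies against the actual message.
    if (!publicKey || !deps.isValidUsername(username)) return false;
    if (!(await deps.verifySignature(publicKey, message, signature))) return false;
    await deps.store.claimUsername(username, publicKey);
    return true;
  }

  // Known username: verify the signature against the stored public key.
  return deps.verifySignature(existing.publicKey, message, signature);
}
```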
17765a9f4f refactor: Convert to RPC interface with single /rpc endpoint
BREAKING CHANGES:
- Replaced REST API with RPC interface
- Single POST /rpc endpoint for all operations
- Removed auth middleware (per-method auth instead)
- Support for batch operations
- Message format changed for all methods

Changes:
- Created src/rpc.ts with all method handlers
- Simplified src/app.ts to only handle /rpc endpoint
- Removed src/middleware/auth.ts
- Updated README.md with complete RPC documentation
2025-12-12 19:51:58 +01:00
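A minimal sketch of a single `/rpc` endpoint that accepts either one request object or an array, as described above. The `handleRpc` parameter and request/response types are assumptions; Hono is the framework the repo already uses.

```typescript
import { Hono } from "hono";

type RpcRequest = { method: string; message?: string; signature?: string; params?: unknown };
type RpcResponse = { success: boolean; result?: unknown; error?: string };

export function createRpcApp(handleRpc: (req: RpcRequest) => Promise<RpcResponse>) {
  const app = new Hono();

  app.post("/rpc", async (c) => {
    const body = await c.req.json();
    const batch = Array.isArray(body);
    const requests: RpcRequest[] = batch ? body : [body];

    // The real server also caps batch size (MAX_BATCH_SIZE = 100, per a later commit).
    const responses = await Promise.all(requests.map(handleRpc));

    // Batch requests get an array back in request order; single requests get one object.
    return c.json(batch ? responses : responses[0]);
  });

  return app;
}
```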
4e73157a16 migration: Convert peer_id to username in offers and ice_candidates
This migration aligns the D1 database schema with the unified Ed25519
authentication system that replaced the dual peerId/secret system.

Changes:
- Renames peer_id to username in offers table
- Renames answerer_peer_id to answerer_username in offers table
- Renames peer_id to username in ice_candidates table
- Adds service_fqn column to offers table
- Updates all indexes and foreign keys
2025-12-12 19:20:54 +01:00
8d47424a82 fix: Remove authSecret reference from worker config
The authSecret variable was removed but still referenced in the config
object, causing the worker to crash on all requests.
2025-12-12 19:18:24 +01:00
1612bd78b7 refactor: Unify polling endpoint and remove AUTH_SECRET
BREAKING CHANGES:
- Renamed /offers/poll to /poll (generic polling endpoint)
- Removed /offers/answered endpoint (use /poll instead)
- Removed AUTH_SECRET environment variable (Ed25519 auth only)
- Updated auth message format from 'pollOffers' to 'poll'
2025-12-12 19:13:11 +01:00
01b751afc3 docs: Fix DELETE endpoint auth and anonymous users description
- DELETE /services/:fqn uses request body for auth, not query parameters
- Updated anonymous users description to reflect server capabilities
  (not client auto-claiming behavior which was removed)
2025-12-12 17:44:35 +01:00
0a98ace6f7 docs: Update README for unified Ed25519 authentication
- Remove POST /register endpoint documentation
- Update all endpoints to show signature-based auth (username, signature, message)
- Remove Authorization header examples (replaced with body/query params)
- Add anonymous username documentation (anon-{timestamp}-{random})
- Update database schema to show username-based tables
- Remove AUTH_SECRET from configuration
- Update security section with Ed25519 authentication details
2025-12-10 22:19:06 +01:00
51fe405440 Unified Ed25519 authentication - remove peer_id/credentials system
BREAKING CHANGE: Remove dual authentication system

- Remove POST /register endpoint - no longer needed
- Remove peer_id/secret credential-based auth
- All authentication now uses username + Ed25519 signatures
- Anonymous users can generate random usernames (anon-{timestamp}-{hex})

Database schema:
- Rename peer_id → username in offers table
- Rename answerer_peer_id → answerer_username in offers table
- Rename peer_id → username in ice_candidates table
- Remove secret column from offers table
- Add FK constraints for username columns

Storage layer:
- Update D1 and SQLite implementations
- All methods use username instead of peerId
- Remove secret-related code

Auth middleware:
- Replace validateCredentials() with Ed25519 signature verification
- Extract auth from request body (POST) or query params (GET)
- Verify signature against username's public key
- Validate message format and timestamp

Crypto utilities:
- Remove generatePeerId(), encryptPeerId(), decryptPeerId(), validateCredentials()
- Add generateAnonymousUsername() - creates anon-{timestamp}-{random}
- Add validateAuthMessage() - validates auth message format

Config:
- Remove authSecret from Config interface (no longer needed)

All server endpoints updated to use getAuthenticatedUsername()
2025-12-10 22:06:45 +01:00
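A client-side sketch of the signature scheme this commit introduces, using Node's built-in Ed25519 support. The exact key and signature encodings the server expects are assumptions here; the documented message format for claiming is `claim:{username}:{timestamp}`.

```typescript
import { generateKeyPairSync, sign } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const username = "alice";
const message = `claim:${username}:${Date.now()}`;

// Ed25519 uses no separate digest algorithm, hence `null`.
const signature = sign(null, Buffer.from(message, "utf8"), privateKey);

const request = {
  method: "claimUsername",
  message,
  signature: signature.toString("base64"),
  params: {
    username,
    // SPKI/DER export shown for illustration; the server may expect raw key bytes.
    publicKey: publicKey.export({ type: "spki", format: "der" }).toString("base64"),
  },
};

console.log(JSON.stringify(request, null, 2));
```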
95596dd462 Update README to document current v0.4 API
- Remove outdated UUID-based endpoint documentation
- Document actual service:version@username FQN format
- Add /offers/poll combined polling endpoint
- Update all endpoint paths to match actual implementation
- Document ICE candidate role filtering
- Add migration notes from v0.3.x
2025-12-10 21:03:51 +01:00
1bf21d7df8 Include both offerer and answerer ICE candidates in polling endpoint
- Add role and peerId to ICE candidate responses for matching
- Offerers can now see their own candidates (for debugging/sync)
- Answerers can poll same endpoint to get offerer candidates
- Each candidate tagged with role ('offerer' or 'answerer') and peerId
- Enables proper bidirectional ICE candidate exchange
2025-12-10 19:51:31 +01:00
e3ede0033e Fix UNIQUE constraint: Use (service_name, version, username) instead of service_fqn
- Change UNIQUE constraint to composite key on separate columns
- Move upsert logic into D1Storage.createService() for atomic operation
- Delete existing service and its offers before inserting new one
- Remove redundant delete logic from app.ts endpoint
- Fixes 'UNIQUE constraint failed: services.service_fqn' error when republishing
2025-12-10 19:42:03 +01:00
cfa58f1dfa Add combined polling endpoint for answers and ICE candidates
- Add GET /offers/poll endpoint for efficient batch polling
- Returns both answered offers and ICE candidates in single request
- Supports timestamp-based filtering with 'since' parameter
- Reduces HTTP overhead from 2N requests to 1 request
- Filters ICE candidates by role (answerer candidates for offerer)
2025-12-10 19:32:52 +01:00
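A sketch of the role filtering this endpoint performs: an offerer polling for candidates receives only the answerer's candidates, and vice versa. The types are illustrative.

```typescript
type Role = "offerer" | "answerer";

interface StoredCandidate {
  offerId: string;
  username: string;
  role: Role;
  candidate: unknown;
  createdAt: number;
}

function candidatesForPeer(
  all: StoredCandidate[],
  offerId: string,
  myRole: Role,
  since = 0
): StoredCandidate[] {
  // Return only the other side's candidates, optionally newer than `since`.
  const wantedRole: Role = myRole === "offerer" ? "answerer" : "offerer";
  return all.filter(
    (c) => c.offerId === offerId && c.role === wantedRole && c.createdAt > since
  );
}
```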
c14a8c24fc Add efficient batch polling endpoint for answered offers
Added GET /offers/answered endpoint that returns all answered offers
for the authenticated peer with optional 'since' timestamp filtering.

This allows offerers to efficiently poll for all incoming connections
in a single request instead of polling each offer individually.
2025-12-10 19:17:19 +01:00
b282bf6470 Fix D1 storage: Insert service_id when creating offers
The createOffers function was not inserting the service_id column even
though it was passed in the CreateOfferRequest. This caused all offers
to have NULL service_id, making getOffersForService return empty results.

Fixed:
- Added service_id to INSERT statement in createOffers
- Added serviceId to created offer objects
- Added serviceId to rowToOffer mapping

This resolves the 'No available offers' error when trying to connect
to a published service.
2025-12-10 18:52:11 +01:00
9088abe305 Fix fresh schema to match D1 storage expectations
Changed offers table to use service_id (nullable) instead of service_fqn.
This matches the actual D1 storage implementation in d1.ts which expects:
- service_id TEXT (optional link to service)
- NOT service_fqn (that's only in the services table)

Resolves 'NOT NULL constraint failed: offers.service_fqn' error.
2025-12-10 18:32:43 +01:00
00c5bbc501 Update database configuration and add fresh schema
- Update wrangler.toml with new D1 database ID
- Add fresh_schema.sql for clean database initialization
- Applied schema to fresh D1 database
- Server redeployed with correct database binding

This resolves the 'table services has no column named service_name' error
by ensuring the database has the correct v0.4.1+ schema.
2025-12-10 18:17:53 +01:00
85a3de65e2 Fix signature validation bug for serviceFqn with colons
The validateServicePublish function was incorrectly parsing the signature
message when serviceFqn contained colons (e.g., 'chat:2.0.0@user').

Old logic: Split by ':' and expected exactly 4 parts
Problem: serviceFqn 'chat:2.0.0@user' contains a colon, so we get 5 parts

Fixed:
- Allow parts.length >= 4
- Extract timestamp from the last part
- Reconstruct serviceFqn from all middle parts (parts[2] to parts[length-2])

This fixes the '403 Invalid signature for username' error that was
preventing service publication.
2025-12-09 22:59:02 +01:00
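A sketch of the parsing fix described above, for messages of the form `publish:{username}:{serviceFqn}:{timestamp}` where the serviceFqn itself may contain colons (e.g. `chat:2.0.0@user`). The function name is illustrative.

```typescript
function parsePublishMessage(message: string): {
  action: string;
  username: string;
  serviceFqn: string;
  timestamp: number;
} {
  const parts = message.split(":");
  if (parts.length < 4) {
    throw new Error("Invalid message format");
  }
  const [action, username] = parts;
  // The timestamp is always the last part; the serviceFqn is everything in
  // between, re-joined so embedded colons survive.
  const timestamp = Number(parts[parts.length - 1]);
  const serviceFqn = parts.slice(2, parts.length - 1).join(":");
  return { action, username, serviceFqn, timestamp };
}

// parsePublishMessage("publish:alice:chat:2.0.0@alice:1733404800000")
// -> { action: "publish", username: "alice", serviceFqn: "chat:2.0.0@alice", timestamp: 1733404800000 }
```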
8111cb9cec v0.5.0: Service discovery and FQN format refactoring
- Changed service FQN format to service:version@username (a colon now separates name and version instead of @)
- Added service discovery: direct lookup, random selection, paginated queries
- Updated parseServiceFqn to handle optional username for discovery
- Removed UUID privacy layer (service_index table)
- Updated storage interface with discovery methods (discoverServices, getRandomService, getServiceByFqn)
- Removed deprecated methods (getServiceByUuid, queryService, listServicesForUsername, findServicesByName, touchUsername, batchCreateServices)
- Updated API routes: /services/:fqn with three modes (direct, random, paginated)
- Changed offer/answer/ICE routes to offer-specific: /services/:fqn/offers/:offerId/*
- Added extracted fields to services table (service_name, version, username) for efficient discovery
- Created migration 0007 to update schema and migrate existing data
- Added discovery indexes for performance

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-09 22:22:37 +01:00
b446adaee4 fix: better error handling for public key constraint
- Add try/catch in claimUsername to handle UNIQUE constraint
- Return meaningful error: 'This public key has already claimed a different username'
- Enable observability logs for better debugging
2025-12-08 21:31:36 +01:00
163e1f73d4 fix: update D1 schema to match v0.4.0 service-to-offers relationship
- Add service_id column to offers table
- Remove offer_id column from services table
- Add index for service_id in offers
2025-12-07 22:31:34 +01:00
1d47d47ef7 feat: add database migration for service-to-offers refactor
- Add service_id column to offers table
- Remove offer_id column from services table
- Update VERSION to 0.4.0 in wrangler.toml
2025-12-07 22:28:14 +01:00
1d70cd79e8 feat: refactor to service-based WebRTC signaling endpoints
BREAKING CHANGE: Replace offer-based endpoints with service-based signaling

- Add POST /services/:uuid/answer
- Add GET /services/:uuid/answer
- Add POST /services/:uuid/ice-candidates
- Add GET /services/:uuid/ice-candidates
- Remove all /offers/* endpoints (POST /offers, GET /offers/mine, etc.)
- Server auto-detects peer's offer when offerId is omitted
- Update README with new service-based API documentation
- Bump version to 0.4.0

This change simplifies the API by focusing on services rather than individual offers.
WebRTC signaling (answer/ICE) now operates at the service level, with automatic
offer detection when needed.
2025-12-07 22:17:24 +01:00
2aa1fee4d6 docs: update server README to remove outdated sections
- Remove obsolete POST /index/:username/query endpoint
- Remove non-existent PUT /offers/:offerId/heartbeat endpoint
- Update architecture diagram to reflect semver discovery
- Update database schema to show service-to-offers relationship
2025-12-07 22:07:16 +01:00
d564e2250f docs: Update README with semver matching and offers array 2025-12-07 22:00:40 +01:00
20 changed files with 2282 additions and 1649 deletions


@@ -0,0 +1,57 @@
name: Claude Code Review

on:
  pull_request:
    types: [opened, synchronize]
    # Optional: Only run on specific file changes
    # paths:
    #   - "src/**/*.ts"
    #   - "src/**/*.tsx"
    #   - "src/**/*.js"
    #   - "src/**/*.jsx"

jobs:
  claude-review:
    # Optional: Filter by PR author
    # if: |
    #   github.event.pull_request.user.login == 'external-contributor' ||
    #   github.event.pull_request.user.login == 'new-developer' ||
    #   github.event.pull_request.author_association == 'FIRST_TIME_CONTRIBUTOR'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Run Claude Code Review
        id: claude-review
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          prompt: |
            REPO: ${{ github.repository }}
            PR NUMBER: ${{ github.event.pull_request.number }}

            Please review this pull request and provide feedback on:
            - Code quality and best practices
            - Potential bugs or issues
            - Performance considerations
            - Security concerns
            - Test coverage

            Use the repository's CLAUDE.md for guidance on style and conventions. Be constructive and helpful in your feedback.

            Use `gh pr comment` with your Bash tool to leave your review as a comment on the PR.

          # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
          # or https://code.claude.com/docs/en/cli-reference for available options
          claude_args: '--allowed-tools "Bash(gh issue view:*),Bash(gh search:*),Bash(gh issue list:*),Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*),Bash(gh pr list:*)"'

.github/workflows/claude.yml

@@ -0,0 +1,50 @@
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
      actions: read # Required for Claude to read CI results on PRs
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}

          # This is an optional setting that allows Claude to read CI results on PRs
          additional_permissions: |
            actions: read

          # Optional: Give a custom prompt to Claude. If this is not specified, Claude will perform the instructions specified in the comment that tagged it.
          # prompt: 'Update the pull request description to include a summary of changes.'

          # Optional: Add claude_args to customize behavior and configuration
          # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
          # or https://code.claude.com/docs/en/cli-reference for available options
          # claude_args: '--allowed-tools Bash(gh pr:*)'

ADVANCED.md

@@ -0,0 +1,502 @@
# Rondevu Server - Advanced Usage
Comprehensive API reference, configuration guide, database schema, and security details.
## Table of Contents
- [RPC Methods](#rpc-methods)
- [Configuration](#configuration)
- [Database Schema](#database-schema)
- [Security](#security)
- [Migration Guide](#migration-guide)
---
## RPC Methods
### `getUser`
Check username availability
**Parameters:**
- `username` - Username to check
**Message format:** `getUser:{username}:{timestamp}` (no authentication required)
**Example:**
```json
{
"method": "getUser",
"message": "getUser:alice:1733404800000",
"signature": "base64-signature",
"params": { "username": "alice" }
}
```
**Response:**
```json
{
"success": true,
"result": {
"username": "alice",
"available": false,
"claimedAt": 1733404800000,
"expiresAt": 1765027200000,
"publicKey": "base64-encoded-public-key"
}
}
```
### `claimUsername`
Claim a username with cryptographic proof
**Parameters:**
- `username` - Username to claim
- `publicKey` - Base64-encoded Ed25519 public key
**Message format:** `claim:{username}:{timestamp}`
**Example:**
```json
{
"method": "claimUsername",
"message": "claim:alice:1733404800000",
"signature": "base64-signature",
"params": {
"username": "alice",
"publicKey": "base64-encoded-public-key"
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"success": true,
"username": "alice"
}
}
```
### `getService`
Get service by FQN (direct lookup, random discovery, or paginated)
**Parameters:**
- `serviceFqn` - Service FQN (e.g., `chat:1.0.0` or `chat:1.0.0@alice`)
- `limit` - (optional) Number of results for paginated mode
- `offset` - (optional) Offset for paginated mode
**Message format:** `getService:{username}:{serviceFqn}:{timestamp}`
**Modes:**
1. **Direct lookup** (with @username): Returns specific user's service
2. **Random** (without @username, no limit): Returns random service
3. **Paginated** (without @username, with limit): Returns multiple services
**Example:**
```json
{
"method": "getService",
"message": "getService:bob:chat:1.0.0:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice"
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"serviceId": "uuid",
"username": "alice",
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-hash",
"sdp": "v=0...",
"createdAt": 1733404800000,
"expiresAt": 1733405100000
}
}
```
### `publishService`
Publish a service with offers
**Parameters:**
- `serviceFqn` - Service FQN with username (e.g., `chat:1.0.0@alice`)
- `offers` - Array of offers, each with `sdp` field
- `ttl` - (optional) Time to live in milliseconds
**Message format:** `publishService:{username}:{serviceFqn}:{timestamp}`
**Example:**
```json
{
"method": "publishService",
"message": "publishService:alice:chat:1.0.0@alice:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offers": [
{ "sdp": "v=0..." },
{ "sdp": "v=0..." }
],
"ttl": 300000
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"serviceId": "uuid",
"username": "alice",
"serviceFqn": "chat:1.0.0@alice",
"offers": [
{
"offerId": "offer-hash-1",
"sdp": "v=0...",
"createdAt": 1733404800000,
"expiresAt": 1733405100000
}
],
"createdAt": 1733404800000,
"expiresAt": 1733405100000
}
}
```
### `deleteService`
Delete a service
**Parameters:**
- `serviceFqn` - Service FQN with username
**Message format:** `deleteService:{username}:{serviceFqn}:{timestamp}`
**Example:**
```json
{
"method": "deleteService",
"message": "deleteService:alice:chat:1.0.0@alice:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice"
}
}
```
**Response:**
```json
{
"success": true,
"result": { "success": true }
}
```
### `answerOffer`
Answer a specific offer
**Parameters:**
- `serviceFqn` - Service FQN
- `offerId` - Offer ID
- `sdp` - Answer SDP
**Message format:** `answerOffer:{username}:{offerId}:{timestamp}`
**Example:**
```json
{
"method": "answerOffer",
"message": "answerOffer:bob:offer-hash:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-hash",
"sdp": "v=0..."
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"success": true,
"offerId": "offer-hash"
}
}
```
### `getOfferAnswer`
Get answer for an offer (offerer polls this)
**Parameters:**
- `serviceFqn` - Service FQN
- `offerId` - Offer ID
**Message format:** `getOfferAnswer:{username}:{offerId}:{timestamp}`
**Example:**
```json
{
"method": "getOfferAnswer",
"message": "getOfferAnswer:alice:offer-hash:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-hash"
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"sdp": "v=0...",
"offerId": "offer-hash",
"answererId": "bob",
"answeredAt": 1733404800000
}
}
```
### `poll`
Combined polling for answers and ICE candidates
**Parameters:**
- `since` - (optional) Timestamp to get only new data
**Message format:** `poll:{username}:{timestamp}`
**Example:**
```json
{
"method": "poll",
"message": "poll:alice:1733404800000",
"signature": "base64-signature",
"params": {
"since": 1733404800000
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"answers": [
{
"offerId": "offer-hash",
"serviceId": "service-uuid",
"answererId": "bob",
"sdp": "v=0...",
"answeredAt": 1733404800000
}
],
"iceCandidates": {
"offer-hash": [
{
"candidate": { "candidate": "...", "sdpMid": "0", "sdpMLineIndex": 0 },
"role": "answerer",
"username": "bob",
"createdAt": 1733404800000
}
]
}
}
}
```
### `addIceCandidates`
Add ICE candidates to an offer
**Parameters:**
- `serviceFqn` - Service FQN
- `offerId` - Offer ID
- `candidates` - Array of ICE candidates
**Message format:** `addIceCandidates:{username}:{offerId}:{timestamp}`
**Example:**
```json
{
"method": "addIceCandidates",
"message": "addIceCandidates:alice:offer-hash:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-hash",
"candidates": [
{
"candidate": "candidate:...",
"sdpMid": "0",
"sdpMLineIndex": 0
}
]
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"count": 1,
"offerId": "offer-hash"
}
}
```
### `getIceCandidates`
Get ICE candidates for an offer
**Parameters:**
- `serviceFqn` - Service FQN
- `offerId` - Offer ID
- `since` - (optional) Timestamp to get only new candidates
**Message format:** `getIceCandidates:{username}:{offerId}:{timestamp}`
**Example:**
```json
{
"method": "getIceCandidates",
"message": "getIceCandidates:alice:offer-hash:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-hash",
"since": 1733404800000
}
}
```
**Response:**
```json
{
"success": true,
"result": {
"candidates": [
{
"candidate": {
"candidate": "candidate:...",
"sdpMid": "0",
"sdpMLineIndex": 0
},
"createdAt": 1733404800000
}
],
"offerId": "offer-hash"
}
}
```
## Configuration
Environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3000` | Server port (Node.js/Docker) |
| `CORS_ORIGINS` | `*` | Comma-separated allowed origins |
| `STORAGE_PATH` | `./rondevu.db` | SQLite database path (use `:memory:` for in-memory) |
| `VERSION` | `0.5.0` | Server version (semver) |
| `OFFER_DEFAULT_TTL` | `60000` | Default offer TTL in ms (1 minute) |
| `OFFER_MIN_TTL` | `60000` | Minimum offer TTL in ms (1 minute) |
| `OFFER_MAX_TTL` | `86400000` | Maximum offer TTL in ms (24 hours) |
| `CLEANUP_INTERVAL` | `60000` | Cleanup interval in ms (1 minute) |
| `MAX_OFFERS_PER_REQUEST` | `100` | Maximum offers per create request |
## Database Schema
### usernames
- `username` (PK): Claimed username
- `public_key`: Ed25519 public key (base64)
- `claimed_at`: Claim timestamp
- `expires_at`: Expiry timestamp (365 days)
- `last_used`: Last activity timestamp
- `metadata`: Optional JSON metadata
### services
- `id` (PK): Service ID (UUID)
- `username` (FK): Owner username
- `service_fqn`: Fully qualified name (chat:1.0.0@alice)
- `service_name`: Service name component (chat)
- `version`: Version component (1.0.0)
- `created_at`, `expires_at`: Timestamps
- UNIQUE constraint on (service_name, version, username)
### offers
- `id` (PK): Offer ID (hash of SDP)
- `username` (FK): Owner username
- `service_id` (FK): Link to service
- `service_fqn`: Denormalized service FQN
- `sdp`: WebRTC offer SDP
- `answerer_username`: Username of answerer (null until answered)
- `answer_sdp`: WebRTC answer SDP (null until answered)
- `answered_at`: Timestamp when answered
- `created_at`, `expires_at`, `last_seen`: Timestamps
### ice_candidates
- `id` (PK): Auto-increment ID
- `offer_id` (FK): Link to offer
- `username`: Username who sent the candidate
- `role`: 'offerer' or 'answerer'
- `candidate`: JSON-encoded candidate
- `created_at`: Timestamp
## Security
### Ed25519 Signature Authentication
All authenticated requests require:
- **message**: Signed message with format-specific structure
- **signature**: Base64-encoded Ed25519 signature of the message
- Username is extracted from the message
### Username Claiming
- **Algorithm**: Ed25519 signatures
- **Message Format**: `claim:{username}:{timestamp}`
- **Replay Protection**: Timestamp must be within 5 minutes
- **Key Management**: Private keys never leave the client
- **Validity**: 365 days, auto-renewed on use
### Anonymous Users
- **Format**: `anon-{timestamp}-{random}` (e.g., `anon-lx2w34-a3f501`)
- **Generation**: Can be generated by client for testing
- **Behavior**: Same as regular usernames, must be explicitly claimed like any username
### Service Publishing
- **Ownership Verification**: Every publish requires username signature
- **Message Format**: `publishService:{username}:{serviceFqn}:{timestamp}`
- **Auto-Renewal**: Publishing a service extends username expiry
### ICE Candidate Filtering
- Server filters candidates by role to prevent peers from receiving their own candidates
- Offerers receive only answerer candidates
- Answerers receive only offerer candidates
## Migration from v0.4.x
See [MIGRATION.md](../MIGRATION.md) for detailed migration guide.
**Key Changes:**
- Moved from REST API to RPC interface with single `/rpc` endpoint
- All methods now use POST with JSON body
- Batch operations supported
- Authentication is per-method instead of per-endpoint middleware
## License
MIT

README.md

@@ -2,9 +2,9 @@
[![npm version](https://img.shields.io/npm/v/@xtr-dev/rondevu-server)](https://www.npmjs.com/package/@xtr-dev/rondevu-server)
🌐 **DNS-like WebRTC signaling with username claiming and service discovery**
🌐 **Simple WebRTC signaling with RPC interface**
Scalable WebRTC signaling server with cryptographic username claiming, service publishing, and privacy-preserving discovery.
Scalable WebRTC signaling server with cryptographic username claiming, service publishing with semantic versioning, and efficient offer/answer exchange via JSON-RPC interface.
**Related repositories:**
- [@xtr-dev/rondevu-client](https://github.com/xtr-dev/rondevu-client) - TypeScript client library ([npm](https://www.npmjs.com/package/@xtr-dev/rondevu-client))
@@ -15,12 +15,14 @@ Scalable WebRTC signaling server with cryptographic username claiming, service p
## Features
- **RPC Interface**: Single endpoint for all operations with batching support
- **Username Claiming**: Cryptographic username ownership with Ed25519 signatures (365-day validity, auto-renewed on use)
- **Service Publishing**: Package-style naming with semantic versioning (com.example.chat@1.0.0)
- **Privacy-Preserving Discovery**: UUID-based service index prevents enumeration
- **Public/Private Services**: Control service visibility
- **Stateless Authentication**: AES-256-GCM encrypted credentials, no server-side sessions
- **Service Publishing**: Service:version@username naming (e.g., `chat:1.0.0@alice`)
- **Service Discovery**: Random and paginated discovery for finding services without knowing usernames
- **Semantic Versioning**: Compatible version matching (chat:1.0.0 matches any 1.x.x)
- **Signature-Based Authentication**: All authenticated requests use Ed25519 signatures
- **Complete WebRTC Signaling**: Offer/answer exchange and ICE candidate relay
- **Batch Operations**: Execute multiple operations in a single HTTP request
- **Dual Storage**: SQLite (Node.js/Docker) and Cloudflare D1 (Workers) backends
## Architecture
@@ -30,11 +32,13 @@ Username Claiming → Service Publishing → Service Discovery → WebRTC Connec
alice claims "alice" with Ed25519 signature
alice publishes com.example.chat@1.0.0 → receives UUID abc123
alice publishes chat:1.0.0@alice with offers
bob queries alice's services → gets UUID abc123
bob queries chat:1.0.0@alice (direct) or chat:1.0.0 (discovery) → gets offer SDP
bob connects to UUID abc123 → WebRTC connection established
bob posts answer SDP → WebRTC connection established
ICE candidates exchanged via server relay
```
## Quick Start
@@ -46,7 +50,7 @@ npm install && npm start
**Docker:**
```bash
docker build -t rondevu . && docker run -p 3000:3000 -e STORAGE_PATH=:memory: -e AUTH_SECRET=$(openssl rand -hex 32) rondevu
docker build -t rondevu . && docker run -p 3000:3000 -e STORAGE_PATH=:memory: rondevu
```
**Cloudflare Workers:**
@@ -54,331 +58,201 @@ docker build -t rondevu . && docker run -p 3000:3000 -e STORAGE_PATH=:memory: -e
npx wrangler deploy
```
## API Endpoints
## RPC Interface
### Public Endpoints
All API calls are made to `POST /rpc` with JSON-RPC format.
#### `GET /`
Returns server version and info
### Request Format
#### `GET /health`
Health check endpoint with version
#### `POST /register`
Register a new peer and receive credentials (peerId + secret)
Generates a cryptographically random 128-bit peer ID.
**Response:**
**Single method call:**
```json
{
"peerId": "f17c195f067255e357232e34cf0735d9",
"secret": "DdorTR8QgSn9yngn+4qqR8cs1aMijvX..."
}
```
### User Management (RESTful)
#### `GET /users/:username`
Check username availability and claim status
**Response:**
```json
{
"username": "alice",
"available": false,
"claimedAt": 1733404800000,
"expiresAt": 1765027200000,
"publicKey": "..."
}
```
#### `POST /users/:username`
Claim a username with cryptographic proof
**Request:**
```json
{
"publicKey": "base64-encoded-ed25519-public-key",
"method": "getUser",
"message": "getUser:alice:1733404800000",
"signature": "base64-encoded-signature",
"message": "claim:alice:1733404800000"
"params": {
"username": "alice"
}
}
```
**Response:**
**Batch calls:**
```json
{
"username": "alice",
"claimedAt": 1733404800000,
"expiresAt": 1765027200000
}
```
**Validation:**
- Username format: `^[a-z0-9][a-z0-9-]*[a-z0-9]$` (3-32 characters)
- Signature must be valid Ed25519 signature
- Timestamp must be within 5 minutes (replay protection)
- Expires after 365 days, auto-renewed on use
#### `GET /users/:username/services`
List all services for a username (privacy-preserving)
**Response:**
```json
{
"username": "alice",
"services": [
{
"uuid": "abc123",
"isPublic": false
},
{
"uuid": "def456",
"isPublic": true,
"serviceFqn": "com.example.public@1.0.0",
"metadata": { "description": "Public service" }
[
{
"method": "getUser",
"message": "getUser:alice:1733404800000",
"signature": "base64-encoded-signature",
"params": { "username": "alice" }
},
{
"method": "claimUsername",
"message": "claim:bob:1733404800000",
"signature": "base64-encoded-signature",
"params": {
"username": "bob",
"publicKey": "base64-encoded-public-key"
}
]
}
]
```
### Response Format
**Single response:**
```json
{
"success": true,
"result": { /* method-specific data */ }
}
```
#### `GET /users/:username/services/:fqn`
Get specific service by username and FQN (single request)
**Response:**
**Error response:**
```json
{
"uuid": "abc123",
"serviceId": "service-id",
"username": "alice",
"serviceFqn": "chat.app@1.0.0",
"offerId": "offer-hash",
"sdp": "v=0...",
"isPublic": true,
"metadata": {},
"createdAt": 1733404800000,
"expiresAt": 1733405100000
"success": false,
"error": "Error message"
}
```
### Service Management (RESTful)
**Batch responses:** Array of responses matching request array order.
#### `POST /users/:username/services`
Publish a service (requires authentication and username signature)
## Core Methods
**Headers:**
- `Authorization: Bearer {peerId}:{secret}`
### Username Management
**Request:**
```json
```typescript
// Check username availability
POST /rpc
{
"serviceFqn": "com.example.chat@1.0.0",
"sdp": "v=0...",
"ttl": 300000,
"isPublic": false,
"metadata": { "description": "Chat service" },
"signature": "base64-encoded-signature",
"message": "publish:alice:com.example.chat@1.0.0:1733404800000"
"method": "getUser",
"params": { "username": "alice" }
}
// Claim username (requires signature)
POST /rpc
{
"method": "claimUsername",
"message": "claim:alice:1733404800000",
"signature": "base64-signature",
"params": {
"username": "alice",
"publicKey": "base64-public-key"
}
}
```
**Response (Full service details):**
```json
### Service Publishing
```typescript
// Publish service (requires signature)
POST /rpc
{
"uuid": "uuid-v4-for-index",
"serviceId": "uuid-v4",
"username": "alice",
"serviceFqn": "com.example.chat@1.0.0",
"offerId": "offer-hash-id",
"sdp": "v=0...",
"isPublic": false,
"metadata": { "description": "Chat service" },
"createdAt": 1733404800000,
"expiresAt": 1733405100000
}
```
**Service FQN Format:**
- Service name: Reverse domain notation (e.g., `com.example.chat`)
- Version: Semantic versioning (e.g., `1.0.0`, `2.1.3-beta`)
- Complete FQN: `service-name@version` (e.g., `com.example.chat@1.0.0`)
**Validation:**
- Service name pattern: `^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+$`
- Length: 3-128 characters
- Version pattern: `^[0-9]+\.[0-9]+\.[0-9]+(-[a-z0-9.-]+)?$`
#### `GET /services/:uuid`
Get service details by UUID
**Response:**
```json
{
"serviceId": "...",
"username": "alice",
"serviceFqn": "com.example.chat@1.0.0",
"offerId": "...",
"sdp": "v=0...",
"isPublic": false,
"metadata": { ... },
"createdAt": 1733404800000,
"expiresAt": 1733405100000
}
```
#### `DELETE /users/:username/services/:fqn`
Unpublish a service (requires authentication and ownership)
**Headers:**
- `Authorization: Bearer {peerId}:{secret}`
**Request:**
```json
{
"username": "alice"
"method": "publishService",
"message": "publishService:alice:chat:1.0.0@alice:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offers": [{ "sdp": "webrtc-offer-sdp" }],
"ttl": 300000
}
}
```
### Service Discovery
#### `POST /index/:username/query`
Query a service by FQN
**Request:**
```json
```typescript
// Get specific service
POST /rpc
{
"serviceFqn": "com.example.chat@1.0.0"
"method": "getService",
"params": { "serviceFqn": "chat:1.0.0@alice" }
}
// Random discovery
POST /rpc
{
"method": "getService",
"params": { "serviceFqn": "chat:1.0.0" }
}
// Paginated discovery
POST /rpc
{
"method": "getService",
"params": {
"serviceFqn": "chat:1.0.0",
"limit": 10,
"offset": 0
}
}
```
**Response:**
```json
### WebRTC Signaling
```typescript
// Answer offer (requires signature)
POST /rpc
{
"uuid": "abc123",
"allowed": true
"method": "answerOffer",
"message": "answer:bob:offer-id:1733404800000",
"signature": "base64-signature",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-id",
"sdp": "webrtc-answer-sdp"
}
}
// Add ICE candidates (requires signature)
POST /rpc
{
"method": "addIceCandidates",
"params": {
"serviceFqn": "chat:1.0.0@alice",
"offerId": "offer-id",
"candidates": [{ /* RTCIceCandidateInit */ }]
}
}
// Poll for answers and ICE candidates (requires signature)
POST /rpc
{
"method": "poll",
"params": { "since": 1733404800000 }
}
```
### Offer Management (Low-level)
#### `POST /offers`
Create one or more offers (requires authentication)
**Headers:**
- `Authorization: Bearer {peerId}:{secret}`
**Request:**
```json
{
"offers": [
{
"sdp": "v=0...",
"ttl": 300000
}
]
}
```
#### `GET /offers/mine`
List all offers owned by authenticated peer
#### `PUT /offers/:offerId/heartbeat`
Update last_seen timestamp for an offer
#### `DELETE /offers/:offerId`
Delete a specific offer
#### `POST /offers/:offerId/answer`
Answer an offer (locks it to answerer)
**Request:**
```json
{
"sdp": "v=0..."
}
```
#### `GET /offers/:offerId/answer`
Get answer for a specific offer
#### `POST /offers/:offerId/ice-candidates`
Post ICE candidates for an offer
**Request:**
```json
{
"candidates": ["candidate:1 1 UDP..."]
}
```
#### `GET /offers/:offerId/ice-candidates?since=1234567890`
Get ICE candidates from the other peer
## Configuration
Environment variables:
Quick reference for common environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3000` | Server port (Node.js/Docker) |
| `CORS_ORIGINS` | `*` | Comma-separated allowed origins |
| `STORAGE_PATH` | `./rondevu.db` | SQLite database path (use `:memory:` for in-memory) |
| `VERSION` | `2.0.0` | Server version (semver) |
| `AUTH_SECRET` | Random 32-byte hex | Secret key for credential encryption (required for production) |
| `OFFER_DEFAULT_TTL` | `300000` | Default offer TTL in ms (5 minutes) |
| `OFFER_MIN_TTL` | `60000` | Minimum offer TTL in ms (1 minute) |
| `OFFER_MAX_TTL` | `3600000` | Maximum offer TTL in ms (1 hour) |
| `MAX_OFFERS_PER_REQUEST` | `10` | Maximum offers per create request |
## Database Schema
📚 See [ADVANCED.md](./ADVANCED.md#configuration) for complete configuration reference.
### usernames
- `username` (PK): Claimed username
- `public_key`: Ed25519 public key (base64)
- `claimed_at`: Claim timestamp
- `expires_at`: Expiry timestamp (365 days)
- `last_used`: Last activity timestamp
- `metadata`: Optional JSON metadata
## Documentation
### services
- `id` (PK): Service ID (UUID)
- `username` (FK): Owner username
- `service_fqn`: Fully qualified name (com.example.chat@1.0.0)
- `offer_id` (FK): WebRTC offer ID
- `is_public`: Public/private flag
- `metadata`: JSON metadata
- `created_at`, `expires_at`: Timestamps
### service_index (privacy layer)
- `uuid` (PK): Random UUID for discovery
- `service_id` (FK): Links to service
- `username`, `service_fqn`: Denormalized for performance
📚 **[ADVANCED.md](./ADVANCED.md)** - Comprehensive guide including:
- Complete RPC method reference with examples
- Full configuration options
- Database schema documentation
- Security implementation details
- Migration guides
## Security
### Username Claiming
- **Algorithm**: Ed25519 signatures
- **Message Format**: `claim:{username}:{timestamp}`
- **Replay Protection**: Timestamp must be within 5 minutes
- **Key Management**: Private keys never leave the client
All authenticated operations require Ed25519 signatures:
- **Message Format**: `{method}:{username}:{context}:{timestamp}`
- **Signature**: Base64-encoded Ed25519 signature of the message
- **Replay Protection**: Timestamps must be within 5 minutes
- **Username Ownership**: Verified via public key signature
### Service Publishing
- **Ownership Verification**: Every publish requires username signature
- **Message Format**: `publish:{username}:{serviceFqn}:{timestamp}`
- **Auto-Renewal**: Publishing a service extends username expiry
### Privacy
- **Private Services**: Only UUID exposed, FQN hidden
- **Public Services**: FQN and metadata visible
- **No Enumeration**: Cannot list all services without knowing FQN
## Migration from V1
V2 is a **breaking change** that removes topic-based discovery. See [MIGRATION.md](../MIGRATION.md) for detailed migration guide.
**Key Changes:**
- ❌ Removed: Topic-based discovery, bloom filters, public peer listings
- ✅ Added: Username claiming, service publishing, UUID-based privacy
See [ADVANCED.md](./ADVANCED.md#security) for detailed security documentation.
## License


@@ -0,0 +1,40 @@
-- V0.4.0 Migration: Refactor service-to-offer relationship
-- Change from one-to-one (service has offer_id) to one-to-many (offer has service_id)
-- Step 1: Add service_id column to offers table
ALTER TABLE offers ADD COLUMN service_id TEXT;
-- Step 2: Create new services table without offer_id
CREATE TABLE services_new (
id TEXT PRIMARY KEY,
username TEXT NOT NULL,
service_fqn TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
is_public INTEGER NOT NULL DEFAULT 0,
metadata TEXT,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
UNIQUE(username, service_fqn)
);
-- Step 3: Copy data from old services table (if any exists)
INSERT INTO services_new (id, username, service_fqn, created_at, expires_at, is_public, metadata)
SELECT id, username, service_fqn, created_at, expires_at, is_public, metadata
FROM services;
-- Step 4: Drop old services table
DROP TABLE services;
-- Step 5: Rename new table to services
ALTER TABLE services_new RENAME TO services;
-- Step 6: Recreate indexes
CREATE INDEX IF NOT EXISTS idx_services_username ON services(username);
CREATE INDEX IF NOT EXISTS idx_services_fqn ON services(service_fqn);
CREATE INDEX IF NOT EXISTS idx_services_expires ON services(expires_at);
-- Step 7: Add index for service_id in offers
CREATE INDEX IF NOT EXISTS idx_offers_service ON offers(service_id);
-- Step 8: Add foreign key constraint (D1 doesn't enforce FK in ALTER, but good for documentation)
-- FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE


@@ -0,0 +1,54 @@
-- V0.4.1 Migration: Simplify schema and add service discovery
-- Remove privacy layer (service_index) and add extracted fields for discovery
-- Step 1: Drop service_index table (privacy layer removal)
DROP TABLE IF EXISTS service_index;
-- Step 2: Create new services table with extracted fields for discovery
CREATE TABLE services_new (
id TEXT PRIMARY KEY,
service_fqn TEXT NOT NULL,
service_name TEXT NOT NULL,
version TEXT NOT NULL,
username TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
UNIQUE(service_fqn)
);
-- Step 3: Migrate existing data (if any) - parse FQN to extract components
-- Note: This migration assumes FQN format is already "service:version@username"
-- If there's old data with different format, manual intervention may be needed
INSERT INTO services_new (id, service_fqn, service_name, version, username, created_at, expires_at)
SELECT
id,
service_fqn,
-- Extract service_name: everything before first ':'
substr(service_fqn, 1, instr(service_fqn, ':') - 1) as service_name,
-- Extract version: between ':' and '@'
substr(
service_fqn,
instr(service_fqn, ':') + 1,
instr(service_fqn, '@') - instr(service_fqn, ':') - 1
) as version,
username,
created_at,
expires_at
FROM services
WHERE service_fqn LIKE '%:%@%'; -- Only migrate properly formatted FQNs
-- Step 4: Drop old services table
DROP TABLE services;
-- Step 5: Rename new table to services
ALTER TABLE services_new RENAME TO services;
-- Step 6: Create indexes for efficient querying
CREATE INDEX idx_services_fqn ON services(service_fqn);
CREATE INDEX idx_services_discovery ON services(service_name, version);
CREATE INDEX idx_services_username ON services(username);
CREATE INDEX idx_services_expires ON services(expires_at);
-- Step 7: Create index on offers for available offer filtering
CREATE INDEX IF NOT EXISTS idx_offers_available ON offers(answerer_peer_id) WHERE answerer_peer_id IS NULL;


@@ -0,0 +1,67 @@
-- Migration: Convert peer_id to username in offers and ice_candidates tables
-- This migration aligns the database with the unified Ed25519 authentication system
-- Step 1: Recreate offers table with username instead of peer_id
CREATE TABLE offers_new (
id TEXT PRIMARY KEY,
username TEXT NOT NULL,
service_id TEXT,
service_fqn TEXT,
sdp TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
answerer_username TEXT,
answer_sdp TEXT,
answered_at INTEGER,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
FOREIGN KEY (answerer_username) REFERENCES usernames(username) ON DELETE SET NULL
);
-- Step 2: Migrate data (if any) - peer_id becomes username
-- Note: This assumes peer_id values were already usernames in practice
INSERT INTO offers_new (id, username, service_id, service_fqn, sdp, created_at, expires_at, last_seen, answerer_username, answer_sdp, answered_at)
SELECT id, peer_id as username, service_id, NULL as service_fqn, sdp, created_at, expires_at, last_seen, answerer_peer_id as answerer_username, answer_sdp, answered_at
FROM offers;
-- Step 3: Drop old offers table
DROP TABLE offers;
-- Step 4: Rename new table
ALTER TABLE offers_new RENAME TO offers;
-- Step 5: Recreate indexes
CREATE INDEX idx_offers_username ON offers(username);
CREATE INDEX idx_offers_service ON offers(service_id);
CREATE INDEX idx_offers_expires ON offers(expires_at);
CREATE INDEX idx_offers_last_seen ON offers(last_seen);
CREATE INDEX idx_offers_answerer ON offers(answerer_username);
-- Step 6: Recreate ice_candidates table with username instead of peer_id
CREATE TABLE ice_candidates_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
offer_id TEXT NOT NULL,
username TEXT NOT NULL,
role TEXT NOT NULL CHECK(role IN ('offerer', 'answerer')),
candidate TEXT NOT NULL,
created_at INTEGER NOT NULL,
FOREIGN KEY (offer_id) REFERENCES offers(id) ON DELETE CASCADE,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE
);
-- Step 7: Migrate ICE candidates data
INSERT INTO ice_candidates_new (offer_id, username, role, candidate, created_at)
SELECT offer_id, peer_id as username, role, candidate, created_at
FROM ice_candidates;
-- Step 8: Drop old ice_candidates table
DROP TABLE ice_candidates;
-- Step 9: Rename new table
ALTER TABLE ice_candidates_new RENAME TO ice_candidates;
-- Step 10: Recreate indexes
CREATE INDEX idx_ice_offer ON ice_candidates(offer_id);
CREATE INDEX idx_ice_username ON ice_candidates(username);
CREATE INDEX idx_ice_role ON ice_candidates(role);
CREATE INDEX idx_ice_created ON ice_candidates(created_at);


@@ -0,0 +1,81 @@
-- Fresh schema for Rondevu v0.5.0+
-- Unified Ed25519 authentication - username/keypair only
-- This is the complete schema without migration steps
-- Drop existing tables if they exist
DROP TABLE IF EXISTS ice_candidates;
DROP TABLE IF EXISTS services;
DROP TABLE IF EXISTS offers;
DROP TABLE IF EXISTS usernames;
-- Usernames table (now required for all users, even anonymous)
CREATE TABLE usernames (
username TEXT PRIMARY KEY,
public_key TEXT NOT NULL UNIQUE,
claimed_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
last_used INTEGER NOT NULL,
metadata TEXT,
CHECK(length(username) >= 3 AND length(username) <= 32)
);
CREATE INDEX idx_usernames_expires ON usernames(expires_at);
CREATE INDEX idx_usernames_public_key ON usernames(public_key);
-- Services table with discovery fields
CREATE TABLE services (
id TEXT PRIMARY KEY,
service_fqn TEXT NOT NULL,
service_name TEXT NOT NULL,
version TEXT NOT NULL,
username TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
UNIQUE(service_name, version, username)
);
CREATE INDEX idx_services_fqn ON services(service_fqn);
CREATE INDEX idx_services_discovery ON services(service_name, version);
CREATE INDEX idx_services_username ON services(username);
CREATE INDEX idx_services_expires ON services(expires_at);
-- Offers table (now uses username instead of peer_id)
CREATE TABLE offers (
id TEXT PRIMARY KEY,
username TEXT NOT NULL,
service_id TEXT,
service_fqn TEXT,
sdp TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
answerer_username TEXT,
answer_sdp TEXT,
answered_at INTEGER,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
FOREIGN KEY (answerer_username) REFERENCES usernames(username) ON DELETE SET NULL
);
CREATE INDEX idx_offers_username ON offers(username);
CREATE INDEX idx_offers_service ON offers(service_id);
CREATE INDEX idx_offers_expires ON offers(expires_at);
CREATE INDEX idx_offers_last_seen ON offers(last_seen);
CREATE INDEX idx_offers_answerer ON offers(answerer_username);
-- ICE candidates table (now uses username instead of peer_id)
CREATE TABLE ice_candidates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
offer_id TEXT NOT NULL,
username TEXT NOT NULL,
role TEXT NOT NULL CHECK(role IN ('offerer', 'answerer')),
candidate TEXT NOT NULL,
created_at INTEGER NOT NULL,
FOREIGN KEY (offer_id) REFERENCES offers(id) ON DELETE CASCADE,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE
);
CREATE INDEX idx_ice_offer ON ice_candidates(offer_id);
CREATE INDEX idx_ice_username ON ice_candidates(username);
CREATE INDEX idx_ice_role ON ice_candidates(role);
CREATE INDEX idx_ice_created ON ice_candidates(created_at);

package-lock.json (generated)

@@ -1,15 +1,16 @@
{
"name": "@xtr-dev/rondevu-server",
"version": "0.3.0",
"version": "0.5.2",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@xtr-dev/rondevu-server",
"version": "0.3.0",
"version": "0.5.2",
"dependencies": {
"@hono/node-server": "^1.19.6",
"@noble/ed25519": "^3.0.0",
"@xtr-dev/rondevu-client": "^0.13.0",
"better-sqlite3": "^12.4.1",
"hono": "^4.10.4"
},
@@ -23,9 +24,9 @@
}
},
"node_modules/@cloudflare/workers-types": {
"version": "4.20251115.0",
"resolved": "https://registry.npmjs.org/@cloudflare/workers-types/-/workers-types-4.20251115.0.tgz",
"integrity": "sha512-aM7jp7IfKhqKvfSaK1IhVTbSzxB6KQ4gX8e/W29tOuZk+YHlYXuRd/bMm4hWkfd7B1HWNWdsx1GTaEUoZIuVsw==",
"version": "4.20251209.0",
"resolved": "https://registry.npmjs.org/@cloudflare/workers-types/-/workers-types-4.20251209.0.tgz",
"integrity": "sha512-O+cbUVwgb4NgUB39R1cITbRshlAAPy1UQV0l8xEy2xcZ3wTh3fMl9f5oBwLsVmE9JRhIZx6llCLOBVf53eI5xA==",
"dev": true,
"license": "MIT OR Apache-2.0"
},
@@ -485,9 +486,9 @@
}
},
"node_modules/@hono/node-server": {
"version": "1.19.6",
"resolved": "https://registry.npmjs.org/@hono/node-server/-/node-server-1.19.6.tgz",
"integrity": "sha512-Shz/KjlIeAhfiuE93NDKVdZ7HdBVLQAfdbaXEaoAVO3ic9ibRSLGIQGkcBbFyuLr+7/1D5ZCINM8B+6IvXeMtw==",
"version": "1.19.7",
"resolved": "https://registry.npmjs.org/@hono/node-server/-/node-server-1.19.7.tgz",
"integrity": "sha512-vUcD0uauS7EU2caukW8z5lJKtoGMokxNbJtBiwHgpqxEXokaHCBkQUmCHhjFB1VUTWdqj25QoMkMKzgjq+uhrw==",
"license": "MIT",
"engines": {
"node": ">=18.14.1"
@@ -572,15 +573,24 @@
}
},
"node_modules/@types/node": {
"version": "24.10.1",
"resolved": "https://registry.npmjs.org/@types/node/-/node-24.10.1.tgz",
"integrity": "sha512-GNWcUTRBgIRJD5zj+Tq0fKOJ5XZajIiBroOF0yvj2bSU1WvNdYS/dn9UxwsujGW4JX06dnHyjV2y9rRaybH0iQ==",
"version": "24.10.2",
"resolved": "https://registry.npmjs.org/@types/node/-/node-24.10.2.tgz",
"integrity": "sha512-WOhQTZ4G8xZ1tjJTvKOpyEVSGgOTvJAfDK3FNFgELyaTpzhdgHVHeqW8V+UJvzF5BT+/B54T/1S2K6gd9c7bbA==",
"dev": true,
"license": "MIT",
"dependencies": {
"undici-types": "~7.16.0"
}
},
"node_modules/@xtr-dev/rondevu-client": {
"version": "0.13.0",
"resolved": "https://registry.npmjs.org/@xtr-dev/rondevu-client/-/rondevu-client-0.13.0.tgz",
"integrity": "sha512-oauCveLga4lploxpoW8U0Fd9Fyz+SAsNQzIDvAIG1fkAnAJu9eajmLsZ5JfzzDi7h2Ew1ClZ7MOrmlRfG4vaBg==",
"license": "MIT",
"dependencies": {
"@noble/ed25519": "^3.0.0"
}
},
"node_modules/acorn": {
"version": "8.15.0",
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.15.0.tgz",
@@ -635,9 +645,9 @@
"license": "MIT"
},
"node_modules/better-sqlite3": {
"version": "12.4.1",
"resolved": "https://registry.npmjs.org/better-sqlite3/-/better-sqlite3-12.4.1.tgz",
"integrity": "sha512-3yVdyZhklTiNrtg+4WqHpJpFDd+WHTg2oM7UcR80GqL05AOV0xEJzc6qNvFYoEtE+hRp1n9MpN6/+4yhlGkDXQ==",
"version": "12.5.0",
"resolved": "https://registry.npmjs.org/better-sqlite3/-/better-sqlite3-12.5.0.tgz",
"integrity": "sha512-WwCZ/5Diz7rsF29o27o0Gcc1Du+l7Zsv7SYtVPG0X3G/uUI1LqdxrQI7c9Hs2FWpqXXERjW9hp6g3/tH7DlVKg==",
"hasInstallScript": true,
"license": "MIT",
"dependencies": {
@@ -645,7 +655,7 @@
"prebuild-install": "^7.1.1"
},
"engines": {
"node": "20.x || 22.x || 23.x || 24.x"
"node": "20.x || 22.x || 23.x || 24.x || 25.x"
}
},
"node_modules/bindings": {
@@ -827,9 +837,9 @@
"license": "MIT"
},
"node_modules/hono": {
"version": "4.10.6",
"resolved": "https://registry.npmjs.org/hono/-/hono-4.10.6.tgz",
"integrity": "sha512-BIdolzGpDO9MQ4nu3AUuDwHZZ+KViNm+EZ75Ae55eMXMqLVhDFqEMXxtUe9Qh8hjL+pIna/frs2j6Y2yD5Ua/g==",
"version": "4.10.8",
"resolved": "https://registry.npmjs.org/hono/-/hono-4.10.8.tgz",
"integrity": "sha512-DDT0A0r6wzhe8zCGoYOmMeuGu3dyTAE40HHjwUsWFTEy5WxK1x2WDSsBPlEXgPbRIFY6miDualuUDbasPogIww==",
"license": "MIT",
"engines": {
"node": ">=16.9.0"

View File

@@ -1,6 +1,6 @@
{
"name": "@xtr-dev/rondevu-server",
"version": "0.3.0",
"version": "0.5.2",
"description": "DNS-like WebRTC signaling server with username claiming and service discovery",
"main": "dist/index.js",
"scripts": {
@@ -22,6 +22,7 @@
"dependencies": {
"@hono/node-server": "^1.19.6",
"@noble/ed25519": "^3.0.0",
"@xtr-dev/rondevu-client": "^0.13.0",
"better-sqlite3": "^12.4.1",
"hono": "^4.10.4"
}

View File

@@ -2,20 +2,17 @@ import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { Storage } from './storage/types.ts';
import { Config } from './config.ts';
import { createAuthMiddleware, getAuthenticatedPeerId } from './middleware/auth.ts';
import { generatePeerId, encryptPeerId, validateUsernameClaim, validateServicePublish, validateServiceFqn, parseServiceFqn, isVersionCompatible } from './crypto.ts';
import type { Context } from 'hono';
import { handleRpc, RpcRequest } from './rpc.ts';
// Constants
const MAX_BATCH_SIZE = 100;
/**
* Creates the Hono application with username and service-based WebRTC signaling
* RESTful API design - v0.11.0
* Creates the Hono application with RPC interface
*/
export function createApp(storage: Storage, config: Config) {
const app = new Hono();
// Create auth middleware
const authMiddleware = createAuthMiddleware(config.authSecret);
// Enable CORS
app.use('/*', cors({
origin: (origin) => {
@@ -27,706 +24,70 @@ export function createApp(storage: Storage, config: Config) {
}
return config.corsOrigins[0];
},
allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS'],
allowHeaders: ['Content-Type', 'Origin', 'Authorization'],
allowMethods: ['GET', 'POST', 'OPTIONS'],
allowHeaders: ['Content-Type', 'Origin'],
exposeHeaders: ['Content-Type'],
maxAge: 600,
credentials: true,
credentials: false,
maxAge: 86400,
}));
// ===== General Endpoints =====
/**
* GET /
* Returns server information
*/
// Root endpoint - server info
app.get('/', (c) => {
return c.json({
version: config.version,
name: 'Rondevu',
description: 'DNS-like WebRTC signaling with username claiming and service discovery'
});
description: 'WebRTC signaling with RPC interface and Ed25519 authentication',
}, 200);
});
/**
* GET /health
* Health check endpoint
*/
// Health check
app.get('/health', (c) => {
return c.json({
status: 'ok',
timestamp: Date.now(),
version: config.version
});
version: config.version,
}, 200);
});
/**
* POST /register
* Register a new peer
* POST /rpc
* RPC endpoint - accepts single or batch method calls
*/
app.post('/register', async (c) => {
try {
const peerId = generatePeerId();
const secret = await encryptPeerId(peerId, config.authSecret);
return c.json({
peerId,
secret
}, 200);
} catch (err) {
console.error('Error registering peer:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
// ===== User Management (RESTful) =====
/**
* GET /users/:username
* Check if username is available or get claim info
*/
app.get('/users/:username', async (c) => {
try {
const username = c.req.param('username');
const claimed = await storage.getUsername(username);
if (!claimed) {
return c.json({
username,
available: true
}, 200);
}
return c.json({
username: claimed.username,
available: false,
claimedAt: claimed.claimedAt,
expiresAt: claimed.expiresAt,
publicKey: claimed.publicKey
}, 200);
} catch (err) {
console.error('Error checking username:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* POST /users/:username
* Claim a username with cryptographic proof
*/
app.post('/users/:username', async (c) => {
try {
const username = c.req.param('username');
const body = await c.req.json();
const { publicKey, signature, message } = body;
if (!publicKey || !signature || !message) {
return c.json({ error: 'Missing required parameters: publicKey, signature, message' }, 400);
}
// Validate claim
const validation = await validateUsernameClaim(username, publicKey, signature, message);
if (!validation.valid) {
return c.json({ error: validation.error }, 400);
}
// Attempt to claim username
try {
const claimed = await storage.claimUsername({
username,
publicKey,
signature,
message
});
return c.json({
username: claimed.username,
claimedAt: claimed.claimedAt,
expiresAt: claimed.expiresAt
}, 201);
} catch (err: any) {
if (err.message?.includes('already claimed')) {
return c.json({ error: 'Username already claimed by different public key' }, 409);
}
throw err;
}
} catch (err) {
console.error('Error claiming username:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* GET /users/:username/services/:fqn
* Get service by username and FQN with semver-compatible matching
*/
app.get('/users/:username/services/:fqn', async (c) => {
try {
const username = c.req.param('username');
const serviceFqn = decodeURIComponent(c.req.param('fqn'));
// Parse the requested FQN
const parsed = parseServiceFqn(serviceFqn);
if (!parsed) {
return c.json({ error: 'Invalid service FQN format' }, 400);
}
const { serviceName, version: requestedVersion } = parsed;
// Find all services with matching service name
const matchingServices = await storage.findServicesByName(username, serviceName);
if (matchingServices.length === 0) {
return c.json({ error: 'Service not found' }, 404);
}
// Filter to compatible versions
const compatibleServices = matchingServices.filter(service => {
const serviceParsed = parseServiceFqn(service.serviceFqn);
if (!serviceParsed) return false;
return isVersionCompatible(requestedVersion, serviceParsed.version);
});
if (compatibleServices.length === 0) {
return c.json({
error: 'No compatible version found',
message: `Requested ${serviceFqn}, but no compatible versions available`
}, 404);
}
// Use the first compatible service (most recently created)
const service = compatibleServices[0];
// Get the UUID for this service
const uuid = await storage.queryService(username, service.serviceFqn);
if (!uuid) {
return c.json({ error: 'Service index not found' }, 500);
}
// Get all offers for this service
const serviceOffers = await storage.getOffersForService(service.id);
if (serviceOffers.length === 0) {
return c.json({ error: 'No offers found for this service' }, 404);
}
// Find an unanswered offer
const availableOffer = serviceOffers.find(offer => !offer.answererPeerId);
if (!availableOffer) {
return c.json({
error: 'No available offers',
message: 'All offers from this service are currently in use. Please try again later.'
}, 503);
}
return c.json({
uuid: uuid,
serviceId: service.id,
username: service.username,
serviceFqn: service.serviceFqn,
offerId: availableOffer.id,
sdp: availableOffer.sdp,
isPublic: service.isPublic,
metadata: service.metadata ? JSON.parse(service.metadata) : undefined,
createdAt: service.createdAt,
expiresAt: service.expiresAt
}, 200);
} catch (err) {
console.error('Error getting service:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* POST /users/:username/services
* Publish a service with one or more offers (RESTful endpoint)
*/
app.post('/users/:username/services', authMiddleware, async (c) => {
let serviceFqn: string | undefined;
let createdOffers: any[] = [];
try {
const username = c.req.param('username');
const body = await c.req.json();
serviceFqn = body.serviceFqn;
const { offers, ttl, isPublic, metadata, signature, message } = body;
if (!serviceFqn || !offers || !Array.isArray(offers) || offers.length === 0) {
return c.json({ error: 'Missing required parameters: serviceFqn, offers (must be non-empty array)' }, 400);
}
// Validate service FQN
const fqnValidation = validateServiceFqn(serviceFqn);
if (!fqnValidation.valid) {
return c.json({ error: fqnValidation.error }, 400);
}
// Verify username ownership (signature required)
if (!signature || !message) {
return c.json({ error: 'Missing signature or message for username verification' }, 400);
}
const usernameRecord = await storage.getUsername(username);
if (!usernameRecord) {
return c.json({ error: 'Username not claimed' }, 404);
}
// Verify signature matches username's public key
const signatureValidation = await validateServicePublish(username, serviceFqn, usernameRecord.publicKey, signature, message);
if (!signatureValidation.valid) {
return c.json({ error: 'Invalid signature for username' }, 403);
}
// Delete existing service if one exists (upsert behavior)
const existingUuid = await storage.queryService(username, serviceFqn);
if (existingUuid) {
const existingService = await storage.getServiceByUuid(existingUuid);
if (existingService) {
await storage.deleteService(existingService.id, username);
}
}
// Validate all offers
for (const offer of offers) {
if (!offer.sdp || typeof offer.sdp !== 'string' || offer.sdp.length === 0) {
return c.json({ error: 'Invalid SDP in offers array' }, 400);
}
if (offer.sdp.length > 64 * 1024) {
return c.json({ error: 'SDP too large (max 64KB)' }, 400);
}
}
// Calculate expiry
const peerId = getAuthenticatedPeerId(c);
const offerTtl = Math.min(
Math.max(ttl || config.offerDefaultTtl, config.offerMinTtl),
config.offerMaxTtl
);
const expiresAt = Date.now() + offerTtl;
// Prepare offer requests
const offerRequests = offers.map(offer => ({
peerId,
sdp: offer.sdp,
expiresAt
}));
// Create service with offers
const result = await storage.createService({
username,
serviceFqn,
expiresAt,
isPublic: isPublic || false,
metadata: metadata ? JSON.stringify(metadata) : undefined,
offers: offerRequests
});
createdOffers = result.offers;
// Return full service details with all offers
return c.json({
uuid: result.indexUuid,
serviceFqn: serviceFqn,
username: username,
serviceId: result.service.id,
offers: result.offers.map(o => ({
offerId: o.id,
sdp: o.sdp,
createdAt: o.createdAt,
expiresAt: o.expiresAt
})),
isPublic: result.service.isPublic,
metadata: metadata,
createdAt: result.service.createdAt,
expiresAt: result.service.expiresAt
}, 201);
} catch (err) {
console.error('Error creating service:', err);
console.error('Error details:', {
message: (err as Error).message,
stack: (err as Error).stack,
username: c.req.param('username'),
serviceFqn,
offerIds: createdOffers.map(o => o.id)
});
return c.json({
error: 'Internal server error',
details: (err as Error).message
}, 500);
}
});
/**
* DELETE /users/:username/services/:fqn
* Delete a service by username and FQN (RESTful)
*/
app.delete('/users/:username/services/:fqn', authMiddleware, async (c) => {
try {
const username = c.req.param('username');
const serviceFqn = decodeURIComponent(c.req.param('fqn'));
// Find service by username and FQN
const uuid = await storage.queryService(username, serviceFqn);
if (!uuid) {
return c.json({ error: 'Service not found' }, 404);
}
const service = await storage.getServiceByUuid(uuid);
if (!service) {
return c.json({ error: 'Service not found' }, 404);
}
const deleted = await storage.deleteService(service.id, username);
if (!deleted) {
return c.json({ error: 'Service not found or not owned by this username' }, 404);
}
return c.json({ success: true }, 200);
} catch (err) {
console.error('Error deleting service:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
// ===== Service Management (Legacy - for UUID-based access) =====
/**
* GET /services/:uuid
* Get service details by index UUID (kept for privacy)
*/
app.get('/services/:uuid', async (c) => {
try {
const uuid = c.req.param('uuid');
const service = await storage.getServiceByUuid(uuid);
if (!service) {
return c.json({ error: 'Service not found' }, 404);
}
// Get all offers for this service
const serviceOffers = await storage.getOffersForService(service.id);
if (serviceOffers.length === 0) {
return c.json({ error: 'No offers found for this service' }, 404);
}
// Find an unanswered offer
const availableOffer = serviceOffers.find(offer => !offer.answererPeerId);
if (!availableOffer) {
return c.json({
error: 'No available offers',
message: 'All offers from this service are currently in use. Please try again later.'
}, 503);
}
return c.json({
uuid: uuid,
serviceId: service.id,
username: service.username,
serviceFqn: service.serviceFqn,
offerId: availableOffer.id,
sdp: availableOffer.sdp,
isPublic: service.isPublic,
metadata: service.metadata ? JSON.parse(service.metadata) : undefined,
createdAt: service.createdAt,
expiresAt: service.expiresAt
}, 200);
} catch (err) {
console.error('Error getting service:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
// ===== Offer Management (Core WebRTC) =====
/**
* POST /offers
* Create offers (direct, no service - for testing/advanced users)
*/
app.post('/offers', authMiddleware, async (c) => {
app.post('/rpc', async (c) => {
try {
const body = await c.req.json();
const { offers } = body;
if (!Array.isArray(offers) || offers.length === 0) {
return c.json({ error: 'Missing or invalid required parameter: offers (must be non-empty array)' }, 400);
// Support both single request and batch array
const requests: RpcRequest[] = Array.isArray(body) ? body : [body];
// Validate requests
if (requests.length === 0) {
return c.json({ error: 'Empty request array' }, 400);
}
if (offers.length > config.maxOffersPerRequest) {
return c.json({ error: `Too many offers (max ${config.maxOffersPerRequest})` }, 400);
if (requests.length > MAX_BATCH_SIZE) {
return c.json({ error: `Too many requests in batch (max ${MAX_BATCH_SIZE})` }, 400);
}
const peerId = getAuthenticatedPeerId(c);
// Validate and prepare offers
const validated = offers.map((offer: any) => {
const { sdp, ttl, secret } = offer;
if (typeof sdp !== 'string' || sdp.length === 0) {
throw new Error('Invalid SDP in offer');
}
if (sdp.length > 64 * 1024) {
throw new Error('SDP too large (max 64KB)');
}
const offerTtl = Math.min(
Math.max(ttl || config.offerDefaultTtl, config.offerMinTtl),
config.offerMaxTtl
);
return {
peerId,
sdp,
expiresAt: Date.now() + offerTtl,
secret: secret ? String(secret).substring(0, 128) : undefined
};
});
const created = await storage.createOffers(validated);
// Handle RPC
const responses = await handleRpc(requests, storage, config);
// Return single response or array based on input
return c.json(Array.isArray(body) ? responses : responses[0], 200);
} catch (err) {
console.error('RPC error:', err);
return c.json({
offers: created.map(offer => ({
id: offer.id,
peerId: offer.peerId,
expiresAt: offer.expiresAt,
createdAt: offer.createdAt,
hasSecret: !!offer.secret
}))
}, 201);
} catch (err: any) {
console.error('Error creating offers:', err);
return c.json({ error: err.message || 'Internal server error' }, 500);
success: false,
error: 'Invalid request format',
}, 400);
}
});
/**
* GET /offers/mine
* Get authenticated peer's offers
*/
app.get('/offers/mine', authMiddleware, async (c) => {
try {
const peerId = getAuthenticatedPeerId(c);
const offers = await storage.getOffersByPeerId(peerId);
return c.json({
offers: offers.map(offer => ({
id: offer.id,
sdp: offer.sdp,
createdAt: offer.createdAt,
expiresAt: offer.expiresAt,
lastSeen: offer.lastSeen,
hasSecret: !!offer.secret,
answererPeerId: offer.answererPeerId,
answered: !!offer.answererPeerId
}))
}, 200);
} catch (err) {
console.error('Error getting offers:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* GET /offers/:offerId
* Get offer details (added for completeness)
*/
app.get('/offers/:offerId', authMiddleware, async (c) => {
try {
const offerId = c.req.param('offerId');
const offer = await storage.getOfferById(offerId);
if (!offer) {
return c.json({ error: 'Offer not found' }, 404);
}
return c.json({
id: offer.id,
peerId: offer.peerId,
sdp: offer.sdp,
createdAt: offer.createdAt,
expiresAt: offer.expiresAt,
answererPeerId: offer.answererPeerId,
answered: !!offer.answererPeerId,
answerSdp: offer.answerSdp
}, 200);
} catch (err) {
console.error('Error getting offer:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* DELETE /offers/:offerId
* Delete an offer
*/
app.delete('/offers/:offerId', authMiddleware, async (c) => {
try {
const offerId = c.req.param('offerId');
const peerId = getAuthenticatedPeerId(c);
const deleted = await storage.deleteOffer(offerId, peerId);
if (!deleted) {
return c.json({ error: 'Offer not found or not owned by this peer' }, 404);
}
return c.json({ success: true }, 200);
} catch (err) {
console.error('Error deleting offer:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* POST /offers/:offerId/answer
* Answer an offer
*/
app.post('/offers/:offerId/answer', authMiddleware, async (c) => {
try {
const offerId = c.req.param('offerId');
const body = await c.req.json();
const { sdp, secret } = body;
if (!sdp) {
return c.json({ error: 'Missing required parameter: sdp' }, 400);
}
if (typeof sdp !== 'string' || sdp.length === 0) {
return c.json({ error: 'Invalid SDP' }, 400);
}
if (sdp.length > 64 * 1024) {
return c.json({ error: 'SDP too large (max 64KB)' }, 400);
}
const answererPeerId = getAuthenticatedPeerId(c);
const result = await storage.answerOffer(offerId, answererPeerId, sdp, secret);
if (!result.success) {
return c.json({ error: result.error }, 400);
}
return c.json({ success: true }, 200);
} catch (err) {
console.error('Error answering offer:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* GET /offers/:offerId/answer
* Get answer for a specific offer (RESTful endpoint)
*/
app.get('/offers/:offerId/answer', authMiddleware, async (c) => {
try {
const offerId = c.req.param('offerId');
const peerId = getAuthenticatedPeerId(c);
const offer = await storage.getOfferById(offerId);
if (!offer) {
return c.json({ error: 'Offer not found' }, 404);
}
// Verify ownership
if (offer.peerId !== peerId) {
return c.json({ error: 'Not authorized to view this answer' }, 403);
}
// Check if answered
if (!offer.answererPeerId || !offer.answerSdp) {
return c.json({ error: 'Offer not yet answered' }, 404);
}
return c.json({
offerId: offer.id,
answererId: offer.answererPeerId,
sdp: offer.answerSdp,
answeredAt: offer.answeredAt
}, 200);
} catch (err) {
console.error('Error getting answer:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
// ===== ICE Candidate Exchange =====
/**
* POST /offers/:offerId/ice-candidates
* Add ICE candidates for an offer
*/
app.post('/offers/:offerId/ice-candidates', authMiddleware, async (c) => {
try {
const offerId = c.req.param('offerId');
const body = await c.req.json();
const { candidates } = body;
if (!Array.isArray(candidates) || candidates.length === 0) {
return c.json({ error: 'Missing or invalid required parameter: candidates' }, 400);
}
const peerId = getAuthenticatedPeerId(c);
// Get offer to determine role
const offer = await storage.getOfferById(offerId);
if (!offer) {
return c.json({ error: 'Offer not found' }, 404);
}
// Determine role
const role = offer.peerId === peerId ? 'offerer' : 'answerer';
const count = await storage.addIceCandidates(offerId, peerId, role, candidates);
return c.json({ count }, 200);
} catch (err) {
console.error('Error adding ICE candidates:', err);
return c.json({ error: 'Internal server error' }, 500);
}
});
/**
* GET /offers/:offerId/ice-candidates
* Get ICE candidates for an offer
*/
app.get('/offers/:offerId/ice-candidates', authMiddleware, async (c) => {
try {
const offerId = c.req.param('offerId');
const since = c.req.query('since');
const peerId = getAuthenticatedPeerId(c);
// Get offer to determine role
const offer = await storage.getOfferById(offerId);
if (!offer) {
return c.json({ error: 'Offer not found' }, 404);
}
// Get candidates for opposite role
const targetRole = offer.peerId === peerId ? 'answerer' : 'offerer';
const sinceTimestamp = since ? parseInt(since, 10) : undefined;
const candidates = await storage.getIceCandidates(offerId, targetRole, sinceTimestamp);
return c.json({
candidates: candidates.map(c => ({
candidate: c.candidate,
createdAt: c.createdAt
}))
}, 200);
} catch (err) {
console.error('Error getting ICE candidates:', err);
return c.json({ error: 'Internal server error' }, 500);
}
// 404 for all other routes
app.all('*', (c) => {
return c.json({
error: 'Not found. Use POST /rpc for all API calls.',
}, 404);
});
return app;
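A hedged client-side sketch of the new interface: POST /rpc takes either one RpcRequest object or an array of up to MAX_BATCH_SIZE of them, and the response mirrors the input shape (object in, object out; array in, array out). The URL and signature values below are placeholders; real messages must be signed with the caller's Ed25519 key as described in src/rpc.ts.

const RPC_URL = 'https://rondevu.example.com/rpc'; // placeholder

async function callRpc(body: object | object[]): Promise<any> {
  const res = await fetch(RPC_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  return res.json(); // single response for an object, array of responses for a batch
}

async function demo() {
  const now = Date.now();
  const responses = await callRpc([
    { method: 'getUser', message: `getUser:alice:${now}`, signature: '<sig>', params: { username: 'alice' } },
    { method: 'getService', message: `getService:alice:${now}`, signature: '<sig>', params: { serviceFqn: 'chat:1.0.0@alice' } },
  ]);
  console.log(responses); // [{ success, result | error }, { success, result | error }]
}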

View File

@@ -1,5 +1,3 @@
import { generateSecretKey } from './crypto.ts';
/**
* Application configuration
* Reads from environment variables with sensible defaults
@@ -10,7 +8,6 @@ export interface Config {
storagePath: string;
corsOrigins: string[];
version: string;
authSecret: string;
offerDefaultTtl: number;
offerMaxTtl: number;
offerMinTtl: number;
@@ -22,15 +19,6 @@ export interface Config {
* Loads configuration from environment variables
*/
export function loadConfig(): Config {
// Generate or load auth secret
let authSecret = process.env.AUTH_SECRET;
if (!authSecret) {
authSecret = generateSecretKey();
console.warn('WARNING: No AUTH_SECRET provided. Generated temporary secret:', authSecret);
console.warn('All peer credentials will be invalidated on server restart.');
console.warn('Set AUTH_SECRET environment variable to persist credentials across restarts.');
}
return {
port: parseInt(process.env.PORT || '3000', 10),
storageType: (process.env.STORAGE_TYPE || 'sqlite') as 'sqlite' | 'memory',
@@ -39,7 +27,6 @@ export function loadConfig(): Config {
? process.env.CORS_ORIGINS.split(',').map(o => o.trim())
: ['*'],
version: process.env.VERSION || 'unknown',
authSecret,
offerDefaultTtl: parseInt(process.env.OFFER_DEFAULT_TTL || '60000', 10),
offerMaxTtl: parseInt(process.env.OFFER_MAX_TTL || '86400000', 10),
offerMinTtl: parseInt(process.env.OFFER_MIN_TTL || '60000', 10),
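A small sketch of how these TTL settings combine with a caller-supplied ttl; the clamp mirrors the logic used by the offer and publish handlers, and the env values are illustrative only.

import { loadConfig } from './config.ts';

process.env.OFFER_MIN_TTL = '60000';      // 1 minute (illustrative)
process.env.OFFER_MAX_TTL = '86400000';   // 24 hours (illustrative)
process.env.OFFER_DEFAULT_TTL = '60000';

const config = loadConfig();

// Requested TTLs are clamped into [offerMinTtl, offerMaxTtl]; a missing ttl uses the default.
function clampTtl(requested?: number): number {
  return requested !== undefined
    ? Math.min(Math.max(requested, config.offerMinTtl), config.offerMaxTtl)
    : config.offerDefaultTtl;
}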

View File

@@ -1,10 +1,11 @@
/**
* Crypto utilities for stateless peer authentication
* Uses Web Crypto API for compatibility with both Node.js and Cloudflare Workers
* Crypto utilities for Ed25519-based authentication
* Uses @noble/ed25519 for Ed25519 signature verification
* Uses Web Crypto API for compatibility with both Node.js and Cloudflare Workers
*/
import * as ed25519 from '@noble/ed25519';
import { Buffer } from 'node:buffer';
// Set SHA-512 hash function for ed25519 (required in @noble/ed25519 v3+)
// Uses Web Crypto API (compatible with both Node.js and Cloudflare Workers)
@@ -12,12 +13,8 @@ ed25519.hashes.sha512Async = async (message: Uint8Array) => {
return new Uint8Array(await crypto.subtle.digest('SHA-512', message as BufferSource));
};
const ALGORITHM = 'AES-GCM';
const IV_LENGTH = 12; // 96 bits for GCM
const KEY_LENGTH = 32; // 256 bits
// Username validation
const USERNAME_REGEX = /^[a-z0-9][a-z0-9-]*[a-z0-9]$/;
const USERNAME_REGEX = /^[a-z0-9][a-z0-9.-]*[a-z0-9]$/;
const USERNAME_MIN_LENGTH = 3;
const USERNAME_MAX_LENGTH = 32;
@@ -25,144 +22,68 @@ const USERNAME_MAX_LENGTH = 32;
const TIMESTAMP_TOLERANCE_MS = 5 * 60 * 1000;
/**
* Generates a random peer ID (16 bytes = 32 hex chars)
* Generates an anonymous username for users who don't want to claim one
* Format: anon-{timestamp}-{random}
* This reduces collision probability to near-zero
*/
export function generatePeerId(): string {
const bytes = crypto.getRandomValues(new Uint8Array(16));
return Array.from(bytes).map(b => b.toString(16).padStart(2, '0')).join('');
}
/**
* Generates a random secret key for encryption (32 bytes = 64 hex chars)
*/
export function generateSecretKey(): string {
const bytes = crypto.getRandomValues(new Uint8Array(KEY_LENGTH));
return Array.from(bytes).map(b => b.toString(16).padStart(2, '0')).join('');
}
/**
* Convert hex string to Uint8Array
*/
function hexToBytes(hex: string): Uint8Array {
const bytes = new Uint8Array(hex.length / 2);
for (let i = 0; i < hex.length; i += 2) {
bytes[i / 2] = parseInt(hex.substring(i, i + 2), 16);
}
return bytes;
export function generateAnonymousUsername(): string {
const timestamp = Date.now().toString(36);
const random = crypto.getRandomValues(new Uint8Array(3));
const hex = Array.from(random).map(b => b.toString(16).padStart(2, '0')).join('');
return `anon-${timestamp}-${hex}`;
}
/**
* Convert Uint8Array to base64 string
* Uses Buffer for compatibility with Node.js-based clients
*/
function bytesToBase64(bytes: Uint8Array): string {
const binString = Array.from(bytes, (byte) =>
String.fromCodePoint(byte)
).join('');
return btoa(binString);
return Buffer.from(bytes).toString('base64');
}
/**
* Convert base64 string to Uint8Array
* Uses Buffer for compatibility with Node.js-based clients
*/
function base64ToBytes(base64: string): Uint8Array {
const binString = atob(base64);
return Uint8Array.from(binString, (char) => char.codePointAt(0)!);
return new Uint8Array(Buffer.from(base64, 'base64'));
}
/**
* Encrypts a peer ID using the server secret key
* Returns base64-encoded encrypted data (IV + ciphertext)
* Validates a generic auth message format
* Expected format: action:username:params:timestamp
* Validates that the message contains the expected username and has a valid timestamp
*/
export async function encryptPeerId(peerId: string, secretKeyHex: string): Promise<string> {
const keyBytes = hexToBytes(secretKeyHex);
export function validateAuthMessage(
expectedUsername: string,
message: string
): { valid: boolean; error?: string } {
const parts = message.split(':');
if (keyBytes.length !== KEY_LENGTH) {
throw new Error(`Secret key must be ${KEY_LENGTH * 2} hex characters (${KEY_LENGTH} bytes)`);
if (parts.length < 3) {
return { valid: false, error: 'Invalid message format: must have at least action:username:timestamp' };
}
// Import key
const key = await crypto.subtle.importKey(
'raw',
keyBytes,
{ name: ALGORITHM, length: 256 },
false,
['encrypt']
);
// Extract username (second part) and timestamp (last part)
const messageUsername = parts[1];
const timestamp = parseInt(parts[parts.length - 1], 10);
// Generate random IV
const iv = crypto.getRandomValues(new Uint8Array(IV_LENGTH));
// Encrypt peer ID
const encoder = new TextEncoder();
const data = encoder.encode(peerId);
const encrypted = await crypto.subtle.encrypt(
{ name: ALGORITHM, iv },
key,
data
);
// Combine IV + ciphertext and encode as base64
const combined = new Uint8Array(iv.length + encrypted.byteLength);
combined.set(iv, 0);
combined.set(new Uint8Array(encrypted), iv.length);
return bytesToBase64(combined);
}
/**
* Decrypts an encrypted peer ID secret
* Returns the plaintext peer ID or throws if decryption fails
*/
export async function decryptPeerId(encryptedSecret: string, secretKeyHex: string): Promise<string> {
try {
const keyBytes = hexToBytes(secretKeyHex);
if (keyBytes.length !== KEY_LENGTH) {
throw new Error(`Secret key must be ${KEY_LENGTH * 2} hex characters (${KEY_LENGTH} bytes)`);
}
// Decode base64
const combined = base64ToBytes(encryptedSecret);
// Extract IV and ciphertext
const iv = combined.slice(0, IV_LENGTH);
const ciphertext = combined.slice(IV_LENGTH);
// Import key
const key = await crypto.subtle.importKey(
'raw',
keyBytes,
{ name: ALGORITHM, length: 256 },
false,
['decrypt']
);
// Decrypt
const decrypted = await crypto.subtle.decrypt(
{ name: ALGORITHM, iv },
key,
ciphertext
);
const decoder = new TextDecoder();
return decoder.decode(decrypted);
} catch (err) {
throw new Error('Failed to decrypt peer ID: invalid secret or secret key');
// Validate username matches
if (messageUsername !== expectedUsername) {
return { valid: false, error: 'Username in message does not match authenticated username' };
}
}
/**
* Validates that a peer ID and secret match
* Returns true if valid, false otherwise
*/
export async function validateCredentials(peerId: string, encryptedSecret: string, secretKey: string): Promise<boolean> {
try {
const decryptedPeerId = await decryptPeerId(encryptedSecret, secretKey);
return decryptedPeerId === peerId;
} catch {
return false;
// Validate timestamp
if (isNaN(timestamp)) {
return { valid: false, error: 'Invalid timestamp in message' };
}
const timestampCheck = validateTimestamp(timestamp);
if (!timestampCheck.valid) {
return timestampCheck;
}
return { valid: true };
}
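To make the accepted message shape concrete, a few illustrative calls to validateAuthMessage (action in the first slot, username in the second, timestamp last; timestamps outside TIMESTAMP_TOLERANCE_MS are rejected):

import { validateAuthMessage } from './crypto.ts';

const now = Date.now();

validateAuthMessage('alice', `publishService:alice:chat:1.0.0@alice:${now}`);
// -> { valid: true }

validateAuthMessage('alice', `poll:bob:${now}`);
// -> { valid: false, error: 'Username in message does not match authenticated username' }

validateAuthMessage('alice', `poll:alice:${now - 10 * 60 * 1000}`);
// -> { valid: false, error: ... }  (fails the timestamp tolerance check)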
// ===== Username and Ed25519 Signature Utilities =====
@@ -192,31 +113,32 @@ export function validateUsername(username: string): { valid: boolean; error?: string } {
}
/**
* Validates service FQN format (service-name@version)
* Service name: reverse domain notation (com.example.service)
* Validates service FQN format (service:version@username or service:version)
* Service name: lowercase alphanumeric with dots/dashes (e.g., chat, file-share, com.example.chat)
* Version: semantic versioning (1.0.0, 2.1.3-beta, etc.)
* Username: optional, lowercase alphanumeric with dashes
*/
export function validateServiceFqn(fqn: string): { valid: boolean; error?: string } {
if (typeof fqn !== 'string') {
return { valid: false, error: 'Service FQN must be a string' };
}
// Split into service name and version
const parts = fqn.split('@');
if (parts.length !== 2) {
return { valid: false, error: 'Service FQN must be in format: service-name@version' };
// Parse the FQN
const parsed = parseServiceFqn(fqn);
if (!parsed) {
return { valid: false, error: 'Service FQN must be in format: service:version[@username]' };
}
const [serviceName, version] = parts;
const { serviceName, version, username } = parsed;
// Validate service name (reverse domain notation)
const serviceNameRegex = /^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+$/;
// Validate service name (alphanumeric with dots/dashes)
const serviceNameRegex = /^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$/;
if (!serviceNameRegex.test(serviceName)) {
return { valid: false, error: 'Service name must be reverse domain notation (e.g., com.example.service)' };
return { valid: false, error: 'Service name must be lowercase alphanumeric with optional dots/dashes' };
}
if (serviceName.length < 3 || serviceName.length > 128) {
return { valid: false, error: 'Service name must be 3-128 characters' };
if (serviceName.length < 1 || serviceName.length > 128) {
return { valid: false, error: 'Service name must be 1-128 characters' };
}
// Validate version (semantic versioning)
@@ -225,6 +147,14 @@ export function validateServiceFqn(fqn: string): { valid: boolean; error?: string } {
return { valid: false, error: 'Version must be semantic versioning (e.g., 1.0.0, 2.1.3-beta)' };
}
// Validate username if present
if (username) {
const usernameCheck = validateUsername(username);
if (!usernameCheck.valid) {
return usernameCheck;
}
}
return { valid: true };
}
@@ -270,15 +200,41 @@ export function isVersionCompatible(requested: string, available: string): boole
}
/**
* Parse service FQN into service name and version
* Parse service FQN into components
* Formats supported:
* - service:version@username (e.g., "chat:1.0.0@alice")
* - service:version (e.g., "chat:1.0.0") for discovery
*/
export function parseServiceFqn(fqn: string): { serviceName: string; version: string } | null {
const parts = fqn.split('@');
if (parts.length !== 2) return null;
export function parseServiceFqn(fqn: string): { serviceName: string; version: string; username: string | null } | null {
if (!fqn || typeof fqn !== 'string') return null;
// Check if username is present
const atIndex = fqn.lastIndexOf('@');
let serviceVersion: string;
let username: string | null = null;
if (atIndex > 0) {
// Format: service:version@username
serviceVersion = fqn.substring(0, atIndex);
username = fqn.substring(atIndex + 1);
} else {
// Format: service:version (no username)
serviceVersion = fqn;
}
// Split service:version
const colonIndex = serviceVersion.indexOf(':');
if (colonIndex <= 0) return null; // No colon or colon at start
const serviceName = serviceVersion.substring(0, colonIndex);
const version = serviceVersion.substring(colonIndex + 1);
if (!serviceName || !version) return null;
return {
serviceName: parts[0],
version: parts[1],
serviceName,
version,
username,
};
}
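The two accepted shapes and a couple of rejects, as the parser above splits them (outputs shown in comments):

import { parseServiceFqn } from './crypto.ts';

parseServiceFqn('chat:1.0.0@alice');
// -> { serviceName: 'chat', version: '1.0.0', username: 'alice' }

parseServiceFqn('com.example.files:2.1.3-beta');
// -> { serviceName: 'com.example.files', version: '2.1.3-beta', username: null }

parseServiceFqn('chat@alice');  // -> null (missing ':version')
parseServiceFqn('chat');        // -> null (no colon at all)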
@@ -390,16 +346,24 @@ export async function validateServicePublish(
}
// Parse message format: "publish:{username}:{serviceFqn}:{timestamp}"
// Note: serviceFqn can contain colons (e.g., "chat:2.0.0@user"), so we need careful parsing
const parts = message.split(':');
if (parts.length !== 4 || parts[0] !== 'publish' || parts[1] !== username || parts[2] !== serviceFqn) {
if (parts.length < 4 || parts[0] !== 'publish' || parts[1] !== username) {
return { valid: false, error: 'Invalid message format (expected: publish:{username}:{serviceFqn}:{timestamp})' };
}
const timestamp = parseInt(parts[3], 10);
// The timestamp is the last part
const timestamp = parseInt(parts[parts.length - 1], 10);
if (isNaN(timestamp)) {
return { valid: false, error: 'Invalid timestamp in message' };
}
// The serviceFqn is everything between username and timestamp
const extractedServiceFqn = parts.slice(2, parts.length - 1).join(':');
if (extractedServiceFqn !== serviceFqn) {
return { valid: false, error: `Service FQN mismatch (expected: ${serviceFqn}, got: ${extractedServiceFqn})` };
}
// Validate timestamp
const timestampCheck = validateTimestamp(timestamp);
if (!timestampCheck.valid) {

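To make the colon handling in validateServicePublish concrete: the service FQN itself contains colons, so the server reassembles it from the middle parts and always reads the timestamp from the last slot. Illustrative values:

const username = 'alice';
const serviceFqn = 'chat:2.0.0@alice';
const message = `publish:${username}:${serviceFqn}:${Date.now()}`;
// e.g. "publish:alice:chat:2.0.0@alice:1765700000000"

const parts = message.split(':');
// parts[0]                                   -> 'publish'
// parts[1]                                   -> 'alice'
// parts.slice(2, parts.length - 1).join(':') -> 'chat:2.0.0@alice'  (the FQN)
// parts[parts.length - 1]                    -> '1765700000000'     (the timestamp)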
View File

@@ -1,51 +0,0 @@
import { Context, Next } from 'hono';
import { validateCredentials } from '../crypto.ts';
/**
* Authentication middleware for Rondevu
* Validates Bearer token in format: {peerId}:{encryptedSecret}
*/
export function createAuthMiddleware(authSecret: string) {
return async (c: Context, next: Next) => {
const authHeader = c.req.header('Authorization');
if (!authHeader) {
return c.json({ error: 'Missing Authorization header' }, 401);
}
// Expect format: Bearer {peerId}:{secret}
const parts = authHeader.split(' ');
if (parts.length !== 2 || parts[0] !== 'Bearer') {
return c.json({ error: 'Invalid Authorization header format. Expected: Bearer {peerId}:{secret}' }, 401);
}
const credentials = parts[1].split(':');
if (credentials.length !== 2) {
return c.json({ error: 'Invalid credentials format. Expected: {peerId}:{secret}' }, 401);
}
const [peerId, encryptedSecret] = credentials;
// Validate credentials (async operation)
const isValid = await validateCredentials(peerId, encryptedSecret, authSecret);
if (!isValid) {
return c.json({ error: 'Invalid credentials' }, 401);
}
// Attach peer ID to context for use in handlers
c.set('peerId', peerId);
await next();
};
}
/**
* Helper to get authenticated peer ID from context
*/
export function getAuthenticatedPeerId(c: Context): string {
const peerId = c.get('peerId');
if (!peerId) {
throw new Error('No authenticated peer ID in context');
}
return peerId;
}

725
src/rpc.ts Normal file
View File

@@ -0,0 +1,725 @@
import { Context } from 'hono';
import { Storage } from './storage/types.ts';
import { Config } from './config.ts';
import {
validateUsernameClaim,
validateServicePublish,
validateServiceFqn,
parseServiceFqn,
isVersionCompatible,
verifyEd25519Signature,
validateAuthMessage,
validateUsername,
} from './crypto.ts';
// Constants
const MAX_PAGE_SIZE = 100;
/**
* RPC request format
*/
export interface RpcRequest {
method: string;
message: string;
signature: string;
publicKey?: string; // Optional: for auto-claiming usernames
params?: any;
}
/**
* RPC response format
*/
export interface RpcResponse {
success: boolean;
result?: any;
error?: string;
}
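// Illustration (hedged, not part of this module): a request/response pair under
// these interfaces, with placeholder key material and made-up timestamps.
//
//   const request: RpcRequest = {
//     method: 'getUser',
//     message: `getUser:alice:${Date.now()}`,   // verified against the username's Ed25519 key
//     signature: '<base64 signature of message>',
//     publicKey: '<base64 public key>',         // only needed to auto-claim an unclaimed username
//     params: { username: 'alice' },
//   };
//
//   const response: RpcResponse = {
//     success: true,
//     result: { username: 'alice', available: false, claimedAt: 1765700000000, expiresAt: 1797236000000, publicKey: '<base64>' },
//   };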
/**
* RPC method handler
*/
type RpcHandler = (
params: any,
message: string,
signature: string,
publicKey: string | undefined,
storage: Storage,
config: Config
) => Promise<any>;
/**
* Verify authentication for a method call
* Automatically claims username if it doesn't exist
*/
async function verifyAuth(
username: string,
message: string,
signature: string,
publicKey: string | undefined,
storage: Storage
): Promise<{ valid: boolean; error?: string }> {
// Get username record to fetch public key
let usernameRecord = await storage.getUsername(username);
// Auto-claim username if it doesn't exist
if (!usernameRecord) {
if (!publicKey) {
return {
valid: false,
error: `Username "${username}" is not claimed and no public key provided for auto-claim.`,
};
}
// Validate username format before claiming
const usernameValidation = validateUsername(username);
if (!usernameValidation.valid) {
return usernameValidation;
}
// Verify signature against the current message (not a claim message)
const signatureValid = await verifyEd25519Signature(publicKey, signature, message);
if (!signatureValid) {
return { valid: false, error: 'Invalid signature for auto-claim' };
}
// Auto-claim the username
const expiresAt = Date.now() + 365 * 24 * 60 * 60 * 1000; // 365 days
await storage.claimUsername({
username,
publicKey,
expiresAt,
});
usernameRecord = await storage.getUsername(username);
if (!usernameRecord) {
return { valid: false, error: 'Failed to claim username' };
}
}
// Verify Ed25519 signature
const isValid = await verifyEd25519Signature(
usernameRecord.publicKey,
signature,
message
);
if (!isValid) {
return { valid: false, error: 'Invalid signature' };
}
// Validate message format and timestamp
const validation = validateAuthMessage(username, message);
if (!validation.valid) {
return { valid: false, error: validation.error };
}
return { valid: true };
}
/**
* Extract username from message
*/
function extractUsername(message: string): string | null {
// Message format: method:username:...
const parts = message.split(':');
if (parts.length < 2) return null;
return parts[1];
}
/**
* RPC Method Handlers
*/
const handlers: Record<string, RpcHandler> = {
/**
* Check if username is available
*/
async getUser(params, message, signature, publicKey, storage, config) {
const { username } = params;
const claimed = await storage.getUsername(username);
if (!claimed) {
return {
username,
available: true,
};
}
return {
username: claimed.username,
available: false,
claimedAt: claimed.claimedAt,
expiresAt: claimed.expiresAt,
publicKey: claimed.publicKey,
};
},
/**
* Get service by FQN - Supports 3 modes:
* 1. Direct lookup: FQN includes @username
* 2. Paginated discovery: FQN without @username, with limit/offset
* 3. Random discovery: FQN without @username, no limit
*/
async getService(params, message, signature, publicKey, storage, config) {
const { serviceFqn, limit, offset } = params;
const username = extractUsername(message);
// Verify authentication
if (username) {
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
}
// Parse and validate FQN
const fqnValidation = validateServiceFqn(serviceFqn);
if (!fqnValidation.valid) {
throw new Error(fqnValidation.error || 'Invalid service FQN');
}
const parsed = parseServiceFqn(serviceFqn);
if (!parsed) {
throw new Error('Failed to parse service FQN');
}
// Helper: Filter services by version compatibility
const filterCompatibleServices = (services) => {
return services.filter((s) => {
const serviceVersion = parseServiceFqn(s.serviceFqn);
return (
serviceVersion &&
isVersionCompatible(parsed.version, serviceVersion.version)
);
});
};
// Helper: Find available offer for service
const findAvailableOffer = async (service) => {
const offers = await storage.getOffersForService(service.id);
return offers.find((o) => !o.answererUsername);
};
// Helper: Build service response object
const buildServiceResponse = (service, offer) => ({
serviceId: service.id,
username: service.username,
serviceFqn: service.serviceFqn,
offerId: offer.id,
sdp: offer.sdp,
createdAt: service.createdAt,
expiresAt: service.expiresAt,
});
// Mode 1: Paginated discovery
if (limit !== undefined) {
const pageLimit = Math.min(Math.max(1, limit), MAX_PAGE_SIZE);
const pageOffset = Math.max(0, offset || 0);
const allServices = await storage.getServicesByName(parsed.serviceName, parsed.version);
const compatibleServices = filterCompatibleServices(allServices);
// Get unique services per username with available offers
const usernameSet = new Set<string>();
const uniqueServices: any[] = [];
for (const service of compatibleServices) {
if (!usernameSet.has(service.username)) {
usernameSet.add(service.username);
const availableOffer = await findAvailableOffer(service);
if (availableOffer) {
uniqueServices.push(buildServiceResponse(service, availableOffer));
}
}
}
// Paginate results
const paginatedServices = uniqueServices.slice(pageOffset, pageOffset + pageLimit);
return {
services: paginatedServices,
count: paginatedServices.length,
limit: pageLimit,
offset: pageOffset,
};
}
// Mode 2: Direct lookup with username
if (parsed.username) {
const service = await storage.getServiceByFqn(serviceFqn);
if (!service) {
throw new Error('Service not found');
}
const availableOffer = await findAvailableOffer(service);
if (!availableOffer) {
throw new Error('Service has no available offers');
}
return buildServiceResponse(service, availableOffer);
}
// Mode 3: Random discovery without username
const allServices = await storage.getServicesByName(parsed.serviceName, parsed.version);
const compatibleServices = filterCompatibleServices(allServices);
if (compatibleServices.length === 0) {
throw new Error('No services found');
}
const randomService = compatibleServices[Math.floor(Math.random() * compatibleServices.length)];
const availableOffer = await findAvailableOffer(randomService);
if (!availableOffer) {
throw new Error('Service has no available offers');
}
return buildServiceResponse(randomService, availableOffer);
},
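// Illustration (hedged, not part of this module): the three request shapes this
// handler distinguishes; the auth fields (message, signature) are omitted here.
//
//   Direct lookup (FQN carries @username):
//     params: { serviceFqn: 'chat:1.0.0@alice' }                  -> one service with an available offer
//
//   Paginated discovery (no @username, limit given):
//     params: { serviceFqn: 'chat:1.0.0', limit: 20, offset: 0 }  -> { services, count, limit, offset }
//
//   Random discovery (no @username, no limit):
//     params: { serviceFqn: 'chat:1.0.0' }                        -> one randomly chosen compatible service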
/**
* Publish a service
*/
async publishService(params, message, signature, publicKey, storage, config) {
const { serviceFqn, offers, ttl } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required for service publishing');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
// Validate service FQN
const fqnValidation = validateServiceFqn(serviceFqn);
if (!fqnValidation.valid) {
throw new Error(fqnValidation.error || 'Invalid service FQN');
}
const parsed = parseServiceFqn(serviceFqn);
if (!parsed || !parsed.username) {
throw new Error('Service FQN must include username');
}
if (parsed.username !== username) {
throw new Error('Service FQN username must match authenticated username');
}
// Validate offers
if (!offers || !Array.isArray(offers) || offers.length === 0) {
throw new Error('Must provide at least one offer');
}
if (offers.length > config.maxOffersPerRequest) {
throw new Error(
`Too many offers (max ${config.maxOffersPerRequest})`
);
}
// Validate each offer has valid SDP
offers.forEach((offer, index) => {
if (!offer || typeof offer !== 'object') {
throw new Error(`Invalid offer at index ${index}: must be an object`);
}
if (!offer.sdp || typeof offer.sdp !== 'string') {
throw new Error(`Invalid offer at index ${index}: missing or invalid SDP`);
}
if (!offer.sdp.trim()) {
throw new Error(`Invalid offer at index ${index}: SDP cannot be empty`);
}
});
// Create service with offers
const now = Date.now();
const offerTtl =
ttl !== undefined
? Math.min(
Math.max(ttl, config.offerMinTtl),
config.offerMaxTtl
)
: config.offerDefaultTtl;
const expiresAt = now + offerTtl;
// Prepare offer requests with TTL
const offerRequests = offers.map(offer => ({
username,
serviceFqn,
sdp: offer.sdp,
expiresAt,
}));
const result = await storage.createService({
serviceFqn,
expiresAt,
offers: offerRequests,
});
return {
serviceId: result.service.id,
username: result.service.username,
serviceFqn: result.service.serviceFqn,
offers: result.offers.map(offer => ({
offerId: offer.id,
sdp: offer.sdp,
createdAt: offer.createdAt,
expiresAt: offer.expiresAt,
})),
createdAt: result.service.createdAt,
expiresAt: result.service.expiresAt,
};
},
/**
* Delete a service
*/
async deleteService(params, message, signature, publicKey, storage, config) {
const { serviceFqn } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
const parsed = parseServiceFqn(serviceFqn);
if (!parsed || !parsed.username) {
throw new Error('Service FQN must include username');
}
const service = await storage.getServiceByFqn(serviceFqn);
if (!service) {
throw new Error('Service not found');
}
const deleted = await storage.deleteService(service.id, username);
if (!deleted) {
throw new Error('Service not found or not owned by this username');
}
return { success: true };
},
/**
* Answer an offer
*/
async answerOffer(params, message, signature, publicKey, storage, config) {
const { serviceFqn, offerId, sdp } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
if (!sdp || typeof sdp !== 'string' || sdp.length === 0) {
throw new Error('Invalid SDP');
}
if (sdp.length > 64 * 1024) {
throw new Error('SDP too large (max 64KB)');
}
const offer = await storage.getOfferById(offerId);
if (!offer) {
throw new Error('Offer not found');
}
if (offer.answererUsername) {
throw new Error('Offer already answered');
}
await storage.answerOffer(offerId, username, sdp);
return { success: true, offerId };
},
/**
* Get answer for an offer
*/
async getOfferAnswer(params, message, signature, publicKey, storage, config) {
const { serviceFqn, offerId } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
const offer = await storage.getOfferById(offerId);
if (!offer) {
throw new Error('Offer not found');
}
if (offer.username !== username) {
throw new Error('Not authorized to access this offer');
}
if (!offer.answererUsername || !offer.answerSdp) {
throw new Error('Offer not yet answered');
}
return {
sdp: offer.answerSdp,
offerId: offer.id,
answererId: offer.answererUsername,
answeredAt: offer.answeredAt,
};
},
/**
* Combined polling for answers and ICE candidates
*/
async poll(params, message, signature, publicKey, storage, config) {
const { since } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
const sinceTimestamp = since || 0;
// Get all answered offers
const answeredOffers = await storage.getAnsweredOffers(username);
const filteredAnswers = answeredOffers.filter(
(offer) => offer.answeredAt && offer.answeredAt > sinceTimestamp
);
// Get all user's offers
const allOffers = await storage.getOffersByUsername(username);
// For each offer, get ICE candidates from both sides
const iceCandidatesByOffer: Record<string, any[]> = {};
for (const offer of allOffers) {
const offererCandidates = await storage.getIceCandidates(
offer.id,
'offerer',
sinceTimestamp
);
const answererCandidates = await storage.getIceCandidates(
offer.id,
'answerer',
sinceTimestamp
);
const allCandidates = [
...offererCandidates.map((c: any) => ({
...c,
role: 'offerer' as const,
})),
...answererCandidates.map((c: any) => ({
...c,
role: 'answerer' as const,
})),
];
if (allCandidates.length > 0) {
const isOfferer = offer.username === username;
const filtered = allCandidates.filter((c) =>
isOfferer ? c.role === 'answerer' : c.role === 'offerer'
);
if (filtered.length > 0) {
iceCandidatesByOffer[offer.id] = filtered;
}
}
}
return {
answers: filteredAnswers.map((offer) => ({
offerId: offer.id,
serviceId: offer.serviceId,
answererId: offer.answererUsername,
sdp: offer.answerSdp,
answeredAt: offer.answeredAt,
})),
iceCandidates: iceCandidatesByOffer,
};
},
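// Illustration (hedged, not part of this module): a poll exchange using `since`
// as a cursor; values are made up and extra storage fields are elided with "…".
//
//   params: { since: 1765700000000 }
//
//   result: {
//     answers: [
//       { offerId: 'a1b2…', serviceId: 's1…', answererId: 'bob', sdp: '<answer SDP>', answeredAt: 1765700012345 },
//     ],
//     iceCandidates: {
//       'a1b2…': [ { candidate: { … }, role: 'answerer', createdAt: 1765700012400, … } ],
//     },
//   }
//
// Callers typically feed the largest timestamp they have seen back in as the next `since`.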
/**
* Add ICE candidates
*/
async addIceCandidates(params, message, signature, publicKey, storage, config) {
const { serviceFqn, offerId, candidates } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
if (!Array.isArray(candidates) || candidates.length === 0) {
throw new Error('Missing or invalid required parameter: candidates');
}
// Validate each candidate is an object (don't enforce structure per CLAUDE.md)
candidates.forEach((candidate, index) => {
if (!candidate || typeof candidate !== 'object') {
throw new Error(`Invalid candidate at index ${index}: must be an object`);
}
});
const offer = await storage.getOfferById(offerId);
if (!offer) {
throw new Error('Offer not found');
}
const role = offer.username === username ? 'offerer' : 'answerer';
const count = await storage.addIceCandidates(
offerId,
username,
role,
candidates
);
return { count, offerId };
},
/**
* Get ICE candidates
*/
async getIceCandidates(params, message, signature, publicKey, storage, config) {
const { serviceFqn, offerId, since } = params;
const username = extractUsername(message);
if (!username) {
throw new Error('Username required');
}
// Verify authentication
const auth = await verifyAuth(username, message, signature, publicKey, storage);
if (!auth.valid) {
throw new Error(auth.error);
}
const sinceTimestamp = since || 0;
const offer = await storage.getOfferById(offerId);
if (!offer) {
throw new Error('Offer not found');
}
const isOfferer = offer.username === username;
const role = isOfferer ? 'answerer' : 'offerer';
const candidates = await storage.getIceCandidates(
offerId,
role,
sinceTimestamp
);
return {
candidates: candidates.map((c: any) => ({
candidate: c.candidate,
createdAt: c.createdAt,
})),
offerId,
};
},
};
/**
* Handle RPC batch request
*/
export async function handleRpc(
requests: RpcRequest[],
storage: Storage,
config: Config
): Promise<RpcResponse[]> {
const responses: RpcResponse[] = [];
for (const request of requests) {
try {
const { method, message, signature, publicKey, params } = request;
// Validate request
if (!method || typeof method !== 'string') {
responses.push({
success: false,
error: 'Missing or invalid method',
});
continue;
}
if (!message || typeof message !== 'string') {
responses.push({
success: false,
error: 'Missing or invalid message',
});
continue;
}
if (!signature || typeof signature !== 'string') {
responses.push({
success: false,
error: 'Missing or invalid signature',
});
continue;
}
// Get handler
const handler = handlers[method];
if (!handler) {
responses.push({
success: false,
error: `Unknown method: ${method}`,
});
continue;
}
// Execute handler
const result = await handler(
params || {},
message,
signature,
publicKey,
storage,
config
);
responses.push({
success: true,
result,
});
} catch (err) {
responses.push({
success: false,
error: (err as Error).message || 'Internal server error',
});
}
}
return responses;
}
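A hedged end-to-end sketch of a client producing a signed publishService call for this handler. The @noble/ed25519 helpers and the base64 transport are assumed to match the setup in crypto.ts; the username, SDP string, and key are placeholders.

import * as ed25519 from '@noble/ed25519';
import { Buffer } from 'node:buffer';

// Same SHA-512 wiring crypto.ts uses for @noble/ed25519 v3+.
ed25519.hashes.sha512Async = async (msg: Uint8Array) =>
  new Uint8Array(await crypto.subtle.digest('SHA-512', msg as BufferSource));

async function buildPublishRequest(privateKey: Uint8Array, username: string, sdp: string) {
  const serviceFqn = `chat:1.0.0@${username}`;
  const message = `publishService:${username}:${serviceFqn}:${Date.now()}`;

  // Assumed @noble/ed25519 API; keys and signatures travel as base64 strings.
  const publicKey = await ed25519.getPublicKeyAsync(privateKey);
  const signature = await ed25519.signAsync(new TextEncoder().encode(message), privateKey);

  return {
    method: 'publishService',
    message,
    signature: Buffer.from(signature).toString('base64'),
    publicKey: Buffer.from(publicKey).toString('base64'), // lets the server auto-claim an unclaimed username
    params: { serviceFqn, offers: [{ sdp }], ttl: 60_000 },
  };
}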

View File

@@ -8,9 +8,9 @@ import {
ClaimUsernameRequest,
Service,
CreateServiceRequest,
ServiceInfo,
} from './types.ts';
import { generateOfferHash } from './hash-id.ts';
import { parseServiceFqn } from '../crypto.ts';
const YEAR_IN_MS = 365 * 24 * 60 * 60 * 1000; // 365 days
@@ -37,27 +37,28 @@ export class D1Storage implements Storage {
-- WebRTC signaling offers
CREATE TABLE IF NOT EXISTS offers (
id TEXT PRIMARY KEY,
peer_id TEXT NOT NULL,
username TEXT NOT NULL,
service_id TEXT,
sdp TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
secret TEXT,
answerer_peer_id TEXT,
answerer_username TEXT,
answer_sdp TEXT,
answered_at INTEGER
);
CREATE INDEX IF NOT EXISTS idx_offers_peer ON offers(peer_id);
CREATE INDEX IF NOT EXISTS idx_offers_username ON offers(username);
CREATE INDEX IF NOT EXISTS idx_offers_service ON offers(service_id);
CREATE INDEX IF NOT EXISTS idx_offers_expires ON offers(expires_at);
CREATE INDEX IF NOT EXISTS idx_offers_last_seen ON offers(last_seen);
CREATE INDEX IF NOT EXISTS idx_offers_answerer ON offers(answerer_peer_id);
CREATE INDEX IF NOT EXISTS idx_offers_answerer ON offers(answerer_username);
-- ICE candidates table
CREATE TABLE IF NOT EXISTS ice_candidates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
offer_id TEXT NOT NULL,
peer_id TEXT NOT NULL,
username TEXT NOT NULL,
role TEXT NOT NULL CHECK(role IN ('offerer', 'answerer')),
candidate TEXT NOT NULL,
created_at INTEGER NOT NULL,
@@ -65,7 +66,7 @@ export class D1Storage implements Storage {
);
CREATE INDEX IF NOT EXISTS idx_ice_offer ON ice_candidates(offer_id);
CREATE INDEX IF NOT EXISTS idx_ice_peer ON ice_candidates(peer_id);
CREATE INDEX IF NOT EXISTS idx_ice_username ON ice_candidates(username);
CREATE INDEX IF NOT EXISTS idx_ice_created ON ice_candidates(created_at);
-- Usernames table
@@ -82,39 +83,23 @@ export class D1Storage implements Storage {
CREATE INDEX IF NOT EXISTS idx_usernames_expires ON usernames(expires_at);
CREATE INDEX IF NOT EXISTS idx_usernames_public_key ON usernames(public_key);
-- Services table
-- Services table (new schema with extracted fields for discovery)
CREATE TABLE IF NOT EXISTS services (
id TEXT PRIMARY KEY,
username TEXT NOT NULL,
service_fqn TEXT NOT NULL,
offer_id TEXT NOT NULL,
service_name TEXT NOT NULL,
version TEXT NOT NULL,
username TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
is_public INTEGER NOT NULL DEFAULT 0,
metadata TEXT,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
FOREIGN KEY (offer_id) REFERENCES offers(id) ON DELETE CASCADE,
UNIQUE(username, service_fqn)
UNIQUE(service_fqn)
);
CREATE INDEX IF NOT EXISTS idx_services_username ON services(username);
CREATE INDEX IF NOT EXISTS idx_services_fqn ON services(service_fqn);
CREATE INDEX IF NOT EXISTS idx_services_discovery ON services(service_name, version);
CREATE INDEX IF NOT EXISTS idx_services_username ON services(username);
CREATE INDEX IF NOT EXISTS idx_services_expires ON services(expires_at);
CREATE INDEX IF NOT EXISTS idx_services_offer ON services(offer_id);
-- Service index table (privacy layer)
CREATE TABLE IF NOT EXISTS service_index (
uuid TEXT PRIMARY KEY,
service_id TEXT NOT NULL,
username TEXT NOT NULL,
service_fqn TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_service_index_username ON service_index(username);
CREATE INDEX IF NOT EXISTS idx_service_index_expires ON service_index(expires_at);
`);
}
@@ -129,30 +114,31 @@ export class D1Storage implements Storage {
const now = Date.now();
await this.db.prepare(`
INSERT INTO offers (id, peer_id, sdp, created_at, expires_at, last_seen, secret)
INSERT INTO offers (id, username, service_id, sdp, created_at, expires_at, last_seen)
VALUES (?, ?, ?, ?, ?, ?, ?)
`).bind(id, offer.peerId, offer.sdp, now, offer.expiresAt, now, offer.secret || null).run();
`).bind(id, offer.username, offer.serviceId || null, offer.sdp, now, offer.expiresAt, now).run();
created.push({
id,
peerId: offer.peerId,
username: offer.username,
serviceId: offer.serviceId,
serviceFqn: offer.serviceFqn,
sdp: offer.sdp,
createdAt: now,
expiresAt: offer.expiresAt,
lastSeen: now,
secret: offer.secret,
});
}
return created;
}
async getOffersByPeerId(peerId: string): Promise<Offer[]> {
async getOffersByUsername(username: string): Promise<Offer[]> {
const result = await this.db.prepare(`
SELECT * FROM offers
WHERE peer_id = ? AND expires_at > ?
WHERE username = ? AND expires_at > ?
ORDER BY last_seen DESC
`).bind(peerId, Date.now()).all();
`).bind(username, Date.now()).all();
if (!result.results) {
return [];
@@ -174,11 +160,11 @@ export class D1Storage implements Storage {
return this.rowToOffer(result as any);
}
async deleteOffer(offerId: string, ownerPeerId: string): Promise<boolean> {
async deleteOffer(offerId: string, ownerUsername: string): Promise<boolean> {
const result = await this.db.prepare(`
DELETE FROM offers
WHERE id = ? AND peer_id = ?
`).bind(offerId, ownerPeerId).run();
WHERE id = ? AND username = ?
`).bind(offerId, ownerUsername).run();
return (result.meta.changes || 0) > 0;
}
@@ -193,9 +179,8 @@ export class D1Storage implements Storage {
async answerOffer(
offerId: string,
answererPeerId: string,
answerSdp: string,
secret?: string
answererUsername: string,
answerSdp: string
): Promise<{ success: boolean; error?: string }> {
// Check if offer exists and is not expired
const offer = await this.getOfferById(offerId);
@@ -207,16 +192,8 @@ export class D1Storage implements Storage {
};
}
// Verify secret if offer is protected
if (offer.secret && offer.secret !== secret) {
return {
success: false,
error: 'Invalid or missing secret'
};
}
// Check if offer already has an answerer
if (offer.answererPeerId) {
if (offer.answererUsername) {
return {
success: false,
error: 'Offer already answered'
@@ -226,9 +203,9 @@ export class D1Storage implements Storage {
// Update offer with answer
const result = await this.db.prepare(`
UPDATE offers
SET answerer_peer_id = ?, answer_sdp = ?, answered_at = ?
WHERE id = ? AND answerer_peer_id IS NULL
`).bind(answererPeerId, answerSdp, Date.now(), offerId).run();
SET answerer_username = ?, answer_sdp = ?, answered_at = ?
WHERE id = ? AND answerer_username IS NULL
`).bind(answererUsername, answerSdp, Date.now(), offerId).run();
if ((result.meta.changes || 0) === 0) {
return {
@@ -240,12 +217,12 @@ export class D1Storage implements Storage {
return { success: true };
}
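
The UPDATE above only matches rows where answerer_username IS NULL, so the first answer wins and any later attempt falls through to the 'Offer already answered' path. A minimal caller-side sketch (the interface below just mirrors the answerOffer() signature; IDs and SDP strings are placeholders):

// Hedged usage sketch of the single-answerer semantics above.
interface AnswersOffers {
  answerOffer(offerId: string, answererUsername: string, answerSdp: string): Promise<{ success: boolean; error?: string }>;
}

async function demoAnswerRace(storage: AnswersOffers): Promise<void> {
  const first = await storage.answerOffer('offer-123', 'bob', 'v=0...answer sdp...');
  console.log(first); // { success: true } when the offer exists and has not been answered

  const second = await storage.answerOffer('offer-123', 'carol', 'v=0...answer sdp...');
  console.log(second); // { success: false, error: 'Offer already answered' }
}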
async getAnsweredOffers(offererPeerId: string): Promise<Offer[]> {
async getAnsweredOffers(offererUsername: string): Promise<Offer[]> {
const result = await this.db.prepare(`
SELECT * FROM offers
WHERE peer_id = ? AND answerer_peer_id IS NOT NULL AND expires_at > ?
WHERE username = ? AND answerer_username IS NOT NULL AND expires_at > ?
ORDER BY answered_at DESC
`).bind(offererPeerId, Date.now()).all();
`).bind(offererUsername, Date.now()).all();
if (!result.results) {
return [];
@@ -258,7 +235,7 @@ export class D1Storage implements Storage {
async addIceCandidates(
offerId: string,
peerId: string,
username: string,
role: 'offerer' | 'answerer',
candidates: any[]
): Promise<number> {
@@ -266,11 +243,11 @@ export class D1Storage implements Storage {
for (let i = 0; i < candidates.length; i++) {
const timestamp = Date.now() + i;
await this.db.prepare(`
INSERT INTO ice_candidates (offer_id, peer_id, role, candidate, created_at)
INSERT INTO ice_candidates (offer_id, username, role, candidate, created_at)
VALUES (?, ?, ?, ?, ?)
`).bind(
offerId,
peerId,
username,
role,
JSON.stringify(candidates[i]),
timestamp
@@ -308,7 +285,7 @@ export class D1Storage implements Storage {
return result.results.map((row: any) => ({
id: row.id,
offerId: row.offer_id,
peerId: row.peer_id,
username: row.username,
role: row.role,
candidate: JSON.parse(row.candidate),
createdAt: row.created_at,
@@ -321,36 +298,44 @@ export class D1Storage implements Storage {
const now = Date.now();
const expiresAt = now + YEAR_IN_MS;
// Try to insert or update
const result = await this.db.prepare(`
INSERT INTO usernames (username, public_key, claimed_at, expires_at, last_used, metadata)
VALUES (?, ?, ?, ?, ?, NULL)
ON CONFLICT(username) DO UPDATE SET
expires_at = ?,
last_used = ?
WHERE public_key = ?
`).bind(
request.username,
request.publicKey,
now,
expiresAt,
now,
expiresAt,
now,
request.publicKey
).run();
try {
// Try to insert or update
const result = await this.db.prepare(`
INSERT INTO usernames (username, public_key, claimed_at, expires_at, last_used, metadata)
VALUES (?, ?, ?, ?, ?, NULL)
ON CONFLICT(username) DO UPDATE SET
expires_at = ?,
last_used = ?
WHERE public_key = ?
`).bind(
request.username,
request.publicKey,
now,
expiresAt,
now,
expiresAt,
now,
request.publicKey
).run();
if ((result.meta.changes || 0) === 0) {
throw new Error('Username already claimed by different public key');
if ((result.meta.changes || 0) === 0) {
throw new Error('Username already claimed by different public key');
}
return {
username: request.username,
publicKey: request.publicKey,
claimedAt: now,
expiresAt,
lastUsed: now,
};
} catch (err: any) {
// Handle UNIQUE constraint on public_key
if (err.message?.includes('UNIQUE constraint failed: usernames.public_key')) {
throw new Error('This public key has already claimed a different username');
}
throw err;
}
return {
username: request.username,
publicKey: request.publicKey,
claimedAt: now,
expiresAt,
lastUsed: now,
};
}
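
From the caller's side, the upsert means re-claiming your own name is an idempotent renewal, while the two thrown messages distinguish a taken name from a key that already owns a different one. A hedged wrapper sketch (error strings copied from the code above, everything else illustrative):

// Illustrative wrapper; the parameter type mirrors only what claimUsername() reads here.
type ClaimOutcome = 'claimed-or-renewed' | 'name-taken' | 'key-already-used' | 'error';

interface ClaimsUsernames {
  claimUsername(request: { username: string; publicKey: string }): Promise<unknown>;
}

async function tryClaim(storage: ClaimsUsernames, username: string, publicKey: string): Promise<ClaimOutcome> {
  try {
    await storage.claimUsername({ username, publicKey });
    return 'claimed-or-renewed'; // upsert path: claiming your own name again just extends expiry
  } catch (err: any) {
    if (err?.message === 'Username already claimed by different public key') return 'name-taken';
    if (err?.message === 'This public key has already claimed a different username') return 'key-already-used';
    return 'error';
  }
}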
async getUsername(username: string): Promise<Username | null> {
@@ -375,18 +360,6 @@ export class D1Storage implements Storage {
};
}
async touchUsername(username: string): Promise<boolean> {
const now = Date.now();
const expiresAt = now + YEAR_IN_MS;
const result = await this.db.prepare(`
UPDATE usernames
SET last_used = ?, expires_at = ?
WHERE username = ? AND expires_at > ?
`).bind(now, expiresAt, username, now).run();
return (result.meta.changes || 0) > 0;
}
async deleteExpiredUsernames(now: number): Promise<number> {
const result = await this.db.prepare(`
@@ -400,36 +373,51 @@ export class D1Storage implements Storage {
async createService(request: CreateServiceRequest): Promise<{
service: Service;
indexUuid: string;
offers: Offer[];
}> {
const serviceId = crypto.randomUUID();
const indexUuid = crypto.randomUUID();
const now = Date.now();
// Insert service
// Parse FQN to extract components
const parsed = parseServiceFqn(request.serviceFqn);
if (!parsed) {
throw new Error(`Invalid service FQN: ${request.serviceFqn}`);
}
if (!parsed.username) {
throw new Error(`Service FQN must include username: ${request.serviceFqn}`);
}
const { serviceName, version, username } = parsed;
// Delete existing service with same (service_name, version, username) and its related offers (upsert behavior)
// First get the existing service
const existingService = await this.db.prepare(`
SELECT id FROM services
WHERE service_name = ? AND version = ? AND username = ?
`).bind(serviceName, version, username).first();
if (existingService) {
// Delete related offers first (no FK cascade from offers to services)
await this.db.prepare(`
DELETE FROM offers WHERE service_id = ?
`).bind(existingService.id).run();
// Delete the service
await this.db.prepare(`
DELETE FROM services WHERE id = ?
`).bind(existingService.id).run();
}
// Insert new service with extracted fields
await this.db.prepare(`
INSERT INTO services (id, username, service_fqn, created_at, expires_at, is_public, metadata)
INSERT INTO services (id, service_fqn, service_name, version, username, created_at, expires_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
`).bind(
serviceId,
request.username,
request.serviceFqn,
now,
request.expiresAt,
request.isPublic ? 1 : 0,
request.metadata || null
).run();
// Insert service index
await this.db.prepare(`
INSERT INTO service_index (uuid, service_id, username, service_fqn, created_at, expires_at)
VALUES (?, ?, ?, ?, ?, ?)
`).bind(
indexUuid,
serviceId,
request.username,
request.serviceFqn,
serviceName,
version,
username,
now,
request.expiresAt
).run();
@@ -441,36 +429,28 @@ export class D1Storage implements Storage {
}));
const offers = await this.createOffers(offerRequests);
// Touch username to extend expiry
await this.touchUsername(request.username);
// Touch username to extend expiry (inline logic)
const expiresAt = now + YEAR_IN_MS;
await this.db.prepare(`
UPDATE usernames
SET last_used = ?, expires_at = ?
WHERE username = ? AND expires_at > ?
`).bind(now, expiresAt, username, now).run();
return {
service: {
id: serviceId,
username: request.username,
serviceFqn: request.serviceFqn,
serviceName,
version,
username,
createdAt: now,
expiresAt: request.expiresAt,
isPublic: request.isPublic || false,
metadata: request.metadata,
},
indexUuid,
offers,
};
}
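
createService() relies on parseServiceFqn() from crypto.ts, whose implementation isn't shown in this diff. A rough sketch of the contract the code above assumes; the regex is illustrative only, not the real parser:

// Sketch of the assumed FQN contract: "<serviceName>:<version>[@<username>]".
interface ParsedFqn { serviceName: string; version: string; username?: string }

function parseServiceFqnSketch(fqn: string): ParsedFqn | null {
  const match = /^([^:@]+):([^@]+?)(?:@([^@]+))?$/.exec(fqn);
  if (!match) return null;
  return { serviceName: match[1], version: match[2], username: match[3] };
}

// parseServiceFqnSketch('chat:1.0.0@alice') -> { serviceName: 'chat', version: '1.0.0', username: 'alice' }
// parseServiceFqnSketch('chat:1.0.0')       -> username is undefined, so createService() above throws
// parseServiceFqnSketch('not a fqn')        -> null, so createService() above throws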
async batchCreateServices(requests: CreateServiceRequest[]): Promise<Array<{
service: Service;
indexUuid: string;
offers: Offer[];
}>> {
const results = [];
for (const request of requests) {
const result = await this.createService(request);
results.push(result);
}
return results;
}
async getOffersForService(serviceId: string): Promise<Offer[]> {
const result = await this.db.prepare(`
@@ -499,12 +479,11 @@ export class D1Storage implements Storage {
return this.rowToService(result as any);
}
async getServiceByUuid(uuid: string): Promise<Service | null> {
async getServiceByFqn(serviceFqn: string): Promise<Service | null> {
const result = await this.db.prepare(`
SELECT s.* FROM services s
INNER JOIN service_index si ON s.id = si.service_id
WHERE si.uuid = ? AND s.expires_at > ?
`).bind(uuid, Date.now()).first();
SELECT * FROM services
WHERE service_fqn = ? AND expires_at > ?
`).bind(serviceFqn, Date.now()).first();
if (!result) {
return null;
@@ -513,43 +492,29 @@ export class D1Storage implements Storage {
return this.rowToService(result as any);
}
async listServicesForUsername(username: string): Promise<ServiceInfo[]> {
async discoverServices(
serviceName: string,
version: string,
limit: number,
offset: number
): Promise<Service[]> {
// Query for unique services with available offers
// We join with offers and filter for available ones (answerer_username IS NULL)
const result = await this.db.prepare(`
SELECT si.uuid, s.is_public, s.service_fqn, s.metadata
FROM service_index si
INNER JOIN services s ON si.service_id = s.id
WHERE si.username = ? AND si.expires_at > ?
SELECT DISTINCT s.* FROM services s
INNER JOIN offers o ON o.service_id = s.id
WHERE s.service_name = ?
AND s.version = ?
AND s.expires_at > ?
AND o.answerer_username IS NULL
AND o.expires_at > ?
ORDER BY s.created_at DESC
`).bind(username, Date.now()).all();
if (!result.results) {
return [];
}
return result.results.map((row: any) => ({
uuid: row.uuid,
isPublic: row.is_public === 1,
serviceFqn: row.is_public === 1 ? row.service_fqn : undefined,
metadata: row.is_public === 1 ? row.metadata || undefined : undefined,
}));
}
async queryService(username: string, serviceFqn: string): Promise<string | null> {
const result = await this.db.prepare(`
SELECT si.uuid FROM service_index si
INNER JOIN services s ON si.service_id = s.id
WHERE si.username = ? AND si.service_fqn = ? AND si.expires_at > ?
`).bind(username, serviceFqn, Date.now()).first();
return result ? (result as any).uuid : null;
}
async findServicesByName(username: string, serviceName: string): Promise<Service[]> {
const result = await this.db.prepare(`
SELECT * FROM services
WHERE username = ? AND service_fqn LIKE ? AND expires_at > ?
ORDER BY created_at DESC
`).bind(username, `${serviceName}@%`, Date.now()).all();
LIMIT ? OFFSET ?
`).bind(serviceName, version, Date.now(), Date.now(), limit, offset).all();
if (!result.results) {
return [];
@@ -558,6 +523,27 @@ export class D1Storage implements Storage {
return result.results.map(row => this.rowToService(row as any));
}
async getRandomService(serviceName: string, version: string): Promise<Service | null> {
// Get a random service with an available offer
const result = await this.db.prepare(`
SELECT s.* FROM services s
INNER JOIN offers o ON o.service_id = s.id
WHERE s.service_name = ?
AND s.version = ?
AND s.expires_at > ?
AND o.answerer_username IS NULL
AND o.expires_at > ?
ORDER BY RANDOM()
LIMIT 1
`).bind(serviceName, version, Date.now(), Date.now()).first();
if (!result) {
return null;
}
return this.rowToService(result as any);
}
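
Together, discoverServices() and getRandomService() cover the two discovery modes: paginated listing and a random pick, both limited to services that still have an unanswered, unexpired offer. A hedged usage sketch with placeholder names and a narrowed structural type:

// Minimal sketch; the interface mirrors just the two discovery members used here.
interface DiscoversServices {
  discoverServices(serviceName: string, version: string, limit: number, offset: number): Promise<Array<{ id: string; serviceFqn: string; username: string }>>;
  getRandomService(serviceName: string, version: string): Promise<{ serviceFqn: string } | null>;
}

async function demoDiscovery(storage: DiscoversServices): Promise<void> {
  // Paginated mode: walk "chat" services that still have an available offer, 25 per page.
  const firstPage = await storage.discoverServices('chat', '1.0.0', 25, 0);
  const secondPage = await storage.discoverServices('chat', '1.0.0', 25, 25);
  console.log(firstPage.length, secondPage.length);

  // Random mode: let the server pick one available service, e.g. for simple load spreading.
  const picked = await storage.getRandomService('chat', '1.0.0');
  console.log(picked?.serviceFqn ?? 'no available service'); // e.g. "chat:1.0.0@alice"
}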
async deleteService(serviceId: string, username: string): Promise<boolean> {
const result = await this.db.prepare(`
DELETE FROM services
@@ -588,13 +574,14 @@ export class D1Storage implements Storage {
private rowToOffer(row: any): Offer {
return {
id: row.id,
peerId: row.peer_id,
username: row.username,
serviceId: row.service_id || undefined,
serviceFqn: row.service_fqn || undefined,
sdp: row.sdp,
createdAt: row.created_at,
expiresAt: row.expires_at,
lastSeen: row.last_seen,
secret: row.secret || undefined,
answererPeerId: row.answerer_peer_id || undefined,
answererUsername: row.answerer_username || undefined,
answerSdp: row.answer_sdp || undefined,
answeredAt: row.answered_at || undefined,
};
@@ -606,12 +593,12 @@ export class D1Storage implements Storage {
private rowToService(row: any): Service {
return {
id: row.id,
username: row.username,
serviceFqn: row.service_fqn,
serviceName: row.service_name,
version: row.version,
username: row.username,
createdAt: row.created_at,
expiresAt: row.expires_at,
isPublic: row.is_public === 1,
metadata: row.metadata || undefined,
};
}
}

View File

@@ -9,9 +9,9 @@ import {
ClaimUsernameRequest,
Service,
CreateServiceRequest,
ServiceInfo,
} from './types.ts';
import { generateOfferHash } from './hash-id.ts';
import { parseServiceFqn } from '../crypto.ts';
const YEAR_IN_MS = 365 * 24 * 60 * 60 * 1000; // 365 days
@@ -39,30 +39,29 @@ export class SQLiteStorage implements Storage {
-- WebRTC signaling offers
CREATE TABLE IF NOT EXISTS offers (
id TEXT PRIMARY KEY,
peer_id TEXT NOT NULL,
username TEXT NOT NULL,
service_id TEXT,
sdp TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
secret TEXT,
answerer_peer_id TEXT,
answerer_username TEXT,
answer_sdp TEXT,
answered_at INTEGER,
FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_offers_peer ON offers(peer_id);
CREATE INDEX IF NOT EXISTS idx_offers_username ON offers(username);
CREATE INDEX IF NOT EXISTS idx_offers_service ON offers(service_id);
CREATE INDEX IF NOT EXISTS idx_offers_expires ON offers(expires_at);
CREATE INDEX IF NOT EXISTS idx_offers_last_seen ON offers(last_seen);
CREATE INDEX IF NOT EXISTS idx_offers_answerer ON offers(answerer_peer_id);
CREATE INDEX IF NOT EXISTS idx_offers_answerer ON offers(answerer_username);
-- ICE candidates table
CREATE TABLE IF NOT EXISTS ice_candidates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
offer_id TEXT NOT NULL,
peer_id TEXT NOT NULL,
username TEXT NOT NULL,
role TEXT NOT NULL CHECK(role IN ('offerer', 'answerer')),
candidate TEXT NOT NULL,
created_at INTEGER NOT NULL,
@@ -70,7 +69,7 @@ export class SQLiteStorage implements Storage {
);
CREATE INDEX IF NOT EXISTS idx_ice_offer ON ice_candidates(offer_id);
CREATE INDEX IF NOT EXISTS idx_ice_peer ON ice_candidates(peer_id);
CREATE INDEX IF NOT EXISTS idx_ice_username ON ice_candidates(username);
CREATE INDEX IF NOT EXISTS idx_ice_created ON ice_candidates(created_at);
-- Usernames table
@@ -87,36 +86,23 @@ export class SQLiteStorage implements Storage {
CREATE INDEX IF NOT EXISTS idx_usernames_expires ON usernames(expires_at);
CREATE INDEX IF NOT EXISTS idx_usernames_public_key ON usernames(public_key);
-- Services table (one service can have multiple offers)
-- Services table (new schema with extracted fields for discovery)
CREATE TABLE IF NOT EXISTS services (
id TEXT PRIMARY KEY,
username TEXT NOT NULL,
service_fqn TEXT NOT NULL,
service_name TEXT NOT NULL,
version TEXT NOT NULL,
username TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
is_public INTEGER NOT NULL DEFAULT 0,
metadata TEXT,
FOREIGN KEY (username) REFERENCES usernames(username) ON DELETE CASCADE,
UNIQUE(username, service_fqn)
UNIQUE(service_fqn)
);
CREATE INDEX IF NOT EXISTS idx_services_username ON services(username);
CREATE INDEX IF NOT EXISTS idx_services_fqn ON services(service_fqn);
CREATE INDEX IF NOT EXISTS idx_services_discovery ON services(service_name, version);
CREATE INDEX IF NOT EXISTS idx_services_username ON services(username);
CREATE INDEX IF NOT EXISTS idx_services_expires ON services(expires_at);
-- Service index table (privacy layer)
CREATE TABLE IF NOT EXISTS service_index (
uuid TEXT PRIMARY KEY,
service_id TEXT NOT NULL,
username TEXT NOT NULL,
service_fqn TEXT NOT NULL,
created_at INTEGER NOT NULL,
expires_at INTEGER NOT NULL,
FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_service_index_username ON service_index(username);
CREATE INDEX IF NOT EXISTS idx_service_index_expires ON service_index(expires_at);
`);
// Enable foreign keys
@@ -139,8 +125,8 @@ export class SQLiteStorage implements Storage {
// Use transaction for atomic creation
const transaction = this.db.transaction((offersWithIds: (CreateOfferRequest & { id: string })[]) => {
const offerStmt = this.db.prepare(`
INSERT INTO offers (id, peer_id, service_id, sdp, created_at, expires_at, last_seen, secret)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
INSERT INTO offers (id, username, service_id, sdp, created_at, expires_at, last_seen)
VALUES (?, ?, ?, ?, ?, ?, ?)
`);
for (const offer of offersWithIds) {
@@ -149,24 +135,23 @@ export class SQLiteStorage implements Storage {
// Insert offer
offerStmt.run(
offer.id,
offer.peerId,
offer.username,
offer.serviceId || null,
offer.sdp,
now,
offer.expiresAt,
now,
offer.secret || null
now
);
created.push({
id: offer.id,
peerId: offer.peerId,
username: offer.username,
serviceId: offer.serviceId || undefined,
serviceFqn: offer.serviceFqn,
sdp: offer.sdp,
createdAt: now,
expiresAt: offer.expiresAt,
lastSeen: now,
secret: offer.secret,
});
}
});
@@ -175,14 +160,14 @@ export class SQLiteStorage implements Storage {
return created;
}
async getOffersByPeerId(peerId: string): Promise<Offer[]> {
async getOffersByUsername(username: string): Promise<Offer[]> {
const stmt = this.db.prepare(`
SELECT * FROM offers
WHERE peer_id = ? AND expires_at > ?
WHERE username = ? AND expires_at > ?
ORDER BY last_seen DESC
`);
const rows = stmt.all(peerId, Date.now()) as any[];
const rows = stmt.all(username, Date.now()) as any[];
return rows.map(row => this.rowToOffer(row));
}
@@ -201,13 +186,13 @@ export class SQLiteStorage implements Storage {
return this.rowToOffer(row);
}
async deleteOffer(offerId: string, ownerPeerId: string): Promise<boolean> {
async deleteOffer(offerId: string, ownerUsername: string): Promise<boolean> {
const stmt = this.db.prepare(`
DELETE FROM offers
WHERE id = ? AND peer_id = ?
WHERE id = ? AND username = ?
`);
const result = stmt.run(offerId, ownerPeerId);
const result = stmt.run(offerId, ownerUsername);
return result.changes > 0;
}
@@ -219,9 +204,8 @@ export class SQLiteStorage implements Storage {
async answerOffer(
offerId: string,
answererPeerId: string,
answerSdp: string,
secret?: string
answererUsername: string,
answerSdp: string
): Promise<{ success: boolean; error?: string }> {
// Check if offer exists and is not expired
const offer = await this.getOfferById(offerId);
@@ -233,16 +217,8 @@ export class SQLiteStorage implements Storage {
};
}
// Verify secret if offer is protected
if (offer.secret && offer.secret !== secret) {
return {
success: false,
error: 'Invalid or missing secret'
};
}
// Check if offer already has an answerer
if (offer.answererPeerId) {
if (offer.answererUsername) {
return {
success: false,
error: 'Offer already answered'
@@ -252,11 +228,11 @@ export class SQLiteStorage implements Storage {
// Update offer with answer
const stmt = this.db.prepare(`
UPDATE offers
SET answerer_peer_id = ?, answer_sdp = ?, answered_at = ?
WHERE id = ? AND answerer_peer_id IS NULL
SET answerer_username = ?, answer_sdp = ?, answered_at = ?
WHERE id = ? AND answerer_username IS NULL
`);
const result = stmt.run(answererPeerId, answerSdp, Date.now(), offerId);
const result = stmt.run(answererUsername, answerSdp, Date.now(), offerId);
if (result.changes === 0) {
return {
@@ -268,14 +244,14 @@ export class SQLiteStorage implements Storage {
return { success: true };
}
async getAnsweredOffers(offererPeerId: string): Promise<Offer[]> {
async getAnsweredOffers(offererUsername: string): Promise<Offer[]> {
const stmt = this.db.prepare(`
SELECT * FROM offers
WHERE peer_id = ? AND answerer_peer_id IS NOT NULL AND expires_at > ?
WHERE username = ? AND answerer_username IS NOT NULL AND expires_at > ?
ORDER BY answered_at DESC
`);
const rows = stmt.all(offererPeerId, Date.now()) as any[];
const rows = stmt.all(offererUsername, Date.now()) as any[];
return rows.map(row => this.rowToOffer(row));
}
@@ -283,12 +259,12 @@ export class SQLiteStorage implements Storage {
async addIceCandidates(
offerId: string,
peerId: string,
username: string,
role: 'offerer' | 'answerer',
candidates: any[]
): Promise<number> {
const stmt = this.db.prepare(`
INSERT INTO ice_candidates (offer_id, peer_id, role, candidate, created_at)
INSERT INTO ice_candidates (offer_id, username, role, candidate, created_at)
VALUES (?, ?, ?, ?, ?)
`);
@@ -297,7 +273,7 @@ export class SQLiteStorage implements Storage {
for (let i = 0; i < candidates.length; i++) {
stmt.run(
offerId,
peerId,
username,
role,
JSON.stringify(candidates[i]),
baseTimestamp + i
@@ -334,7 +310,7 @@ export class SQLiteStorage implements Storage {
return rows.map(row => ({
id: row.id,
offerId: row.offer_id,
peerId: row.peer_id,
username: row.username,
role: row.role,
candidate: JSON.parse(row.candidate),
createdAt: row.created_at,
@@ -427,87 +403,96 @@ export class SQLiteStorage implements Storage {
async createService(request: CreateServiceRequest): Promise<{
service: Service;
indexUuid: string;
offers: Offer[];
}> {
const serviceId = randomUUID();
const indexUuid = randomUUID();
const now = Date.now();
// Create offers with serviceId
const offerRequests: CreateOfferRequest[] = request.offers.map(offer => ({
...offer,
serviceId,
}));
// Parse FQN to extract components
const parsed = parseServiceFqn(request.serviceFqn);
if (!parsed) {
throw new Error(`Invalid service FQN: ${request.serviceFqn}`);
}
if (!parsed.username) {
throw new Error(`Service FQN must include username: ${request.serviceFqn}`);
}
const offers = await this.createOffers(offerRequests);
const { serviceName, version, username } = parsed;
const transaction = this.db.transaction(() => {
// Insert service (no offer_id column anymore)
const serviceStmt = this.db.prepare(`
INSERT INTO services (id, username, service_fqn, created_at, expires_at, is_public, metadata)
// Delete existing service with same (service_name, version, username) and its related offers (upsert behavior)
const existingService = this.db.prepare(`
SELECT id FROM services
WHERE service_name = ? AND version = ? AND username = ?
`).get(serviceName, version, username) as any;
if (existingService) {
// Delete related offers first (no FK cascade from offers to services)
this.db.prepare(`
DELETE FROM offers WHERE service_id = ?
`).run(existingService.id);
// Delete the service
this.db.prepare(`
DELETE FROM services WHERE id = ?
`).run(existingService.id);
}
// Insert new service with extracted fields
this.db.prepare(`
INSERT INTO services (id, service_fqn, service_name, version, username, created_at, expires_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
`);
serviceStmt.run(
`).run(
serviceId,
request.username,
request.serviceFqn,
now,
request.expiresAt,
request.isPublic ? 1 : 0,
request.metadata || null
);
// Insert service index
const indexStmt = this.db.prepare(`
INSERT INTO service_index (uuid, service_id, username, service_fqn, created_at, expires_at)
VALUES (?, ?, ?, ?, ?, ?)
`);
indexStmt.run(
indexUuid,
serviceId,
request.username,
request.serviceFqn,
serviceName,
version,
username,
now,
request.expiresAt
);
// Touch username to extend expiry
this.touchUsername(request.username);
// Touch username to extend expiry (inline logic)
const expiresAt = now + YEAR_IN_MS;
this.db.prepare(`
UPDATE usernames
SET last_used = ?, expires_at = ?
WHERE username = ? AND expires_at > ?
`).run(now, expiresAt, username, now);
});
transaction();
// Create offers with serviceId (after transaction)
const offerRequests = request.offers.map(offer => ({
...offer,
serviceId,
}));
const offers = await this.createOffers(offerRequests);
return {
service: {
id: serviceId,
username: request.username,
serviceFqn: request.serviceFqn,
serviceName,
version,
username,
createdAt: now,
expiresAt: request.expiresAt,
isPublic: request.isPublic || false,
metadata: request.metadata,
},
indexUuid,
offers,
};
}
async batchCreateServices(requests: CreateServiceRequest[]): Promise<Array<{
service: Service;
indexUuid: string;
offers: Offer[];
}>> {
const results = [];
async getOffersForService(serviceId: string): Promise<Offer[]> {
const stmt = this.db.prepare(`
SELECT * FROM offers
WHERE service_id = ? AND expires_at > ?
ORDER BY created_at ASC
`);
for (const request of requests) {
const result = await this.createService(request);
results.push(result);
}
return results;
const rows = stmt.all(serviceId, Date.now()) as any[];
return rows.map(row => this.rowToOffer(row));
}
async getServiceById(serviceId: string): Promise<Service | null> {
@@ -525,14 +510,13 @@ export class SQLiteStorage implements Storage {
return this.rowToService(row);
}
async getServiceByUuid(uuid: string): Promise<Service | null> {
async getServiceByFqn(serviceFqn: string): Promise<Service | null> {
const stmt = this.db.prepare(`
SELECT s.* FROM services s
INNER JOIN service_index si ON s.id = si.service_id
WHERE si.uuid = ? AND s.expires_at > ?
SELECT * FROM services
WHERE service_fqn = ? AND expires_at > ?
`);
const row = stmt.get(uuid, Date.now()) as any;
const row = stmt.get(serviceFqn, Date.now()) as any;
if (!row) {
return null;
@@ -541,49 +525,53 @@ export class SQLiteStorage implements Storage {
return this.rowToService(row);
}
async listServicesForUsername(username: string): Promise<ServiceInfo[]> {
async discoverServices(
serviceName: string,
version: string,
limit: number,
offset: number
): Promise<Service[]> {
// Query for unique services with available offers
// We join with offers and filter for available ones (answerer_username IS NULL)
const stmt = this.db.prepare(`
SELECT si.uuid, s.is_public, s.service_fqn, s.metadata
FROM service_index si
INNER JOIN services s ON si.service_id = s.id
WHERE si.username = ? AND si.expires_at > ?
SELECT DISTINCT s.* FROM services s
INNER JOIN offers o ON o.service_id = s.id
WHERE s.service_name = ?
AND s.version = ?
AND s.expires_at > ?
AND o.answerer_username IS NULL
AND o.expires_at > ?
ORDER BY s.created_at DESC
LIMIT ? OFFSET ?
`);
const rows = stmt.all(username, Date.now()) as any[];
return rows.map(row => ({
uuid: row.uuid,
isPublic: row.is_public === 1,
serviceFqn: row.is_public === 1 ? row.service_fqn : undefined,
metadata: row.is_public === 1 ? row.metadata || undefined : undefined,
}));
}
async queryService(username: string, serviceFqn: string): Promise<string | null> {
const stmt = this.db.prepare(`
SELECT si.uuid FROM service_index si
INNER JOIN services s ON si.service_id = s.id
WHERE si.username = ? AND si.service_fqn = ? AND si.expires_at > ?
`);
const row = stmt.get(username, serviceFqn, Date.now()) as any;
return row ? row.uuid : null;
}
async findServicesByName(username: string, serviceName: string): Promise<Service[]> {
const stmt = this.db.prepare(`
SELECT * FROM services
WHERE username = ? AND service_fqn LIKE ? AND expires_at > ?
ORDER BY created_at DESC
`);
const rows = stmt.all(username, `${serviceName}@%`, Date.now()) as any[];
const rows = stmt.all(serviceName, version, Date.now(), Date.now(), limit, offset) as any[];
return rows.map(row => this.rowToService(row));
}
async getRandomService(serviceName: string, version: string): Promise<Service | null> {
// Get a random service with an available offer
const stmt = this.db.prepare(`
SELECT s.* FROM services s
INNER JOIN offers o ON o.service_id = s.id
WHERE s.service_name = ?
AND s.version = ?
AND s.expires_at > ?
AND o.answerer_username IS NULL
AND o.expires_at > ?
ORDER BY RANDOM()
LIMIT 1
`);
const row = stmt.get(serviceName, version, Date.now(), Date.now()) as any;
if (!row) {
return null;
}
return this.rowToService(row);
}
async deleteService(serviceId: string, username: string): Promise<boolean> {
const stmt = this.db.prepare(`
DELETE FROM services
@@ -612,14 +600,14 @@ export class SQLiteStorage implements Storage {
private rowToOffer(row: any): Offer {
return {
id: row.id,
peerId: row.peer_id,
username: row.username,
serviceId: row.service_id || undefined,
serviceFqn: row.service_fqn || undefined,
sdp: row.sdp,
createdAt: row.created_at,
expiresAt: row.expires_at,
lastSeen: row.last_seen,
secret: row.secret || undefined,
answererPeerId: row.answerer_peer_id || undefined,
answererUsername: row.answerer_username || undefined,
answerSdp: row.answer_sdp || undefined,
answeredAt: row.answered_at || undefined,
};
@@ -631,26 +619,12 @@ export class SQLiteStorage implements Storage {
private rowToService(row: any): Service {
return {
id: row.id,
username: row.username,
serviceFqn: row.service_fqn,
serviceName: row.service_name,
version: row.version,
username: row.username,
createdAt: row.created_at,
expiresAt: row.expires_at,
isPublic: row.is_public === 1,
metadata: row.metadata || undefined,
};
}
/**
* Get all offers for a service
*/
async getOffersForService(serviceId: string): Promise<Offer[]> {
const stmt = this.db.prepare(`
SELECT * FROM offers
WHERE service_id = ? AND expires_at > ?
ORDER BY created_at ASC
`);
const rows = stmt.all(serviceId, Date.now()) as any[];
return rows.map(row => this.rowToOffer(row));
}
}

View File

@@ -3,14 +3,14 @@
*/
export interface Offer {
id: string;
peerId: string;
username: string;
serviceId?: string; // Optional link to service (null for standalone offers)
serviceFqn?: string; // Denormalized service FQN for easier queries
sdp: string;
createdAt: number;
expiresAt: number;
lastSeen: number;
secret?: string;
answererPeerId?: string;
answererUsername?: string;
answerSdp?: string;
answeredAt?: number;
}
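
For orientation, a hedged example of how an Offer record looks under this shape, before and after someone answers it (IDs, SDP strings, and timestamps are placeholders):

// Illustrative values only; the answerer fields stay undefined until answerOffer() succeeds.
const publishedOffer: Offer = {
  id: 'offer-123',
  username: 'alice',                  // the offerer
  serviceId: 'svc-1',                 // set when the offer belongs to a service
  serviceFqn: 'chat:1.0.0@alice',
  sdp: 'v=0...offer sdp...',
  createdAt: 1734100000000,
  expiresAt: 1734100060000,
  lastSeen: 1734100000000,
};

const answeredOffer: Offer = {
  ...publishedOffer,
  answererUsername: 'bob',            // filled in by answerOffer()
  answerSdp: 'v=0...answer sdp...',
  answeredAt: 1734100030000,
};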
@@ -22,7 +22,7 @@ export interface Offer {
export interface IceCandidate {
id: number;
offerId: string;
peerId: string;
username: string;
role: 'offerer' | 'answerer';
candidate: any; // Full candidate object as JSON - don't enforce structure
createdAt: number;
@@ -33,11 +33,11 @@ export interface IceCandidate {
*/
export interface CreateOfferRequest {
id?: string;
peerId: string;
username: string;
serviceId?: string; // Optional link to service
serviceFqn?: string; // Optional service FQN
sdp: string;
expiresAt: number;
secret?: string;
}
/**
@@ -64,58 +64,27 @@ export interface ClaimUsernameRequest {
/**
* Represents a published service (can have multiple offers)
* New format: service:version@username (e.g., chat:1.0.0@alice)
*/
export interface Service {
id: string; // UUID v4
username: string;
serviceFqn: string; // com.example.chat@1.0.0
serviceFqn: string; // Full FQN: chat:1.0.0@alice
serviceName: string; // Extracted: chat
version: string; // Extracted: 1.0.0
username: string; // Extracted: alice
createdAt: number;
expiresAt: number;
isPublic: boolean;
metadata?: string; // JSON service description
}
/**
* Request to create a single service
*/
export interface CreateServiceRequest {
username: string;
serviceFqn: string;
serviceFqn: string; // Full FQN with username: chat:1.0.0@alice
expiresAt: number;
isPublic?: boolean;
metadata?: string;
offers: CreateOfferRequest[]; // Multiple offers per service
}
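
A hedged example request under the new shape, assuming the dropped fields (username, isPublic, metadata) really are gone as the storage changes above suggest; SDP strings and TTLs are placeholders:

// Illustrative CreateServiceRequest: one service publishing two offers for "chat:1.0.0@alice".
const exampleRequest: CreateServiceRequest = {
  serviceFqn: 'chat:1.0.0@alice',            // the username now travels inside the FQN
  expiresAt: Date.now() + 60 * 60 * 1000,    // placeholder service TTL
  offers: [
    { username: 'alice', sdp: 'v=0...offer 1...', expiresAt: Date.now() + 60_000 },
    { username: 'alice', sdp: 'v=0...offer 2...', expiresAt: Date.now() + 60_000 },
  ],
};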
/**
* Request to create multiple services in batch
*/
export interface BatchCreateServicesRequest {
services: CreateServiceRequest[];
}
/**
* Represents a service index entry (privacy layer)
*/
export interface ServiceIndex {
uuid: string; // Random UUID for privacy
serviceId: string;
username: string;
serviceFqn: string;
createdAt: number;
expiresAt: number;
}
/**
* Service info for discovery (privacy-aware)
*/
export interface ServiceInfo {
uuid: string;
isPublic: boolean;
serviceFqn?: string; // Only present if public
metadata?: string; // Only present if public
}
/**
* Storage interface for rondevu DNS-like system
* Implementations can use different backends (SQLite, D1, etc.)
@@ -131,11 +100,11 @@ export interface Storage {
createOffers(offers: CreateOfferRequest[]): Promise<Offer[]>;
/**
* Retrieves all offers from a specific peer
* @param peerId Peer identifier
* @returns Array of offers from the peer
* Retrieves all offers from a specific user
* @param username Username identifier
* @returns Array of offers from the user
*/
getOffersByPeerId(peerId: string): Promise<Offer[]>;
getOffersByUsername(username: string): Promise<Offer[]>;
/**
* Retrieves a specific offer by ID
@@ -147,10 +116,10 @@ export interface Storage {
/**
* Deletes an offer (with ownership verification)
* @param offerId Offer identifier
* @param ownerPeerId Peer ID of the owner (for verification)
* @param ownerUsername Username of the owner (for verification)
* @returns true if deleted, false if not found or not owned
*/
deleteOffer(offerId: string, ownerPeerId: string): Promise<boolean>;
deleteOffer(offerId: string, ownerUsername: string): Promise<boolean>;
/**
* Deletes all expired offers
@@ -162,36 +131,35 @@ export interface Storage {
/**
* Answers an offer (locks it to the answerer)
* @param offerId Offer identifier
* @param answererPeerId Answerer's peer ID
* @param answererUsername Answerer's username
* @param answerSdp WebRTC answer SDP
* @param secret Optional secret for protected offers
* @returns Success status and optional error message
*/
answerOffer(offerId: string, answererPeerId: string, answerSdp: string, secret?: string): Promise<{
answerOffer(offerId: string, answererUsername: string, answerSdp: string): Promise<{
success: boolean;
error?: string;
}>;
/**
* Retrieves all answered offers for a specific offerer
* @param offererPeerId Offerer's peer ID
* @param offererUsername Offerer's username
* @returns Array of answered offers
*/
getAnsweredOffers(offererPeerId: string): Promise<Offer[]>;
getAnsweredOffers(offererUsername: string): Promise<Offer[]>;
// ===== ICE Candidate Management =====
/**
* Adds ICE candidates for an offer
* @param offerId Offer identifier
* @param peerId Peer ID posting the candidates
* @param role Role of the peer (offerer or answerer)
* @param username Username posting the candidates
* @param role Role of the user (offerer or answerer)
* @param candidates Array of candidate objects (stored as plain JSON)
* @returns Number of candidates added
*/
addIceCandidates(
offerId: string,
peerId: string,
username: string,
role: 'offerer' | 'answerer',
candidates: any[]
): Promise<number>;
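
Candidates are stored verbatim as JSON, so callers can pass whatever RTCIceCandidate-shaped objects they have. A minimal sketch (the interface below just mirrors the signature above; all values are placeholders):

// Hedged trickle-ICE usage sketch.
interface AddsIceCandidates {
  addIceCandidates(offerId: string, username: string, role: 'offerer' | 'answerer', candidates: any[]): Promise<number>;
}

async function demoTrickle(storage: AddsIceCandidates): Promise<void> {
  // Placeholder candidate objects; the server stores them as opaque JSON.
  const candidates = [
    { candidate: 'candidate:1 1 udp 2122260223 192.0.2.10 54400 typ host', sdpMid: '0', sdpMLineIndex: 0 },
  ];
  const added = await storage.addIceCandidates('offer-123', 'alice', 'offerer', candidates);
  console.log(`stored ${added} candidate(s)`);
}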
@@ -225,13 +193,6 @@ export interface Storage {
*/
getUsername(username: string): Promise<Username | null>;
/**
* Updates the last_used timestamp for a username (extends expiry)
* @param username Username to update
* @returns true if updated, false if not found
*/
touchUsername(username: string): Promise<boolean>;
/**
* Deletes all expired usernames
* @param now Current timestamp
@@ -244,24 +205,13 @@ export interface Storage {
/**
* Creates a new service with offers
* @param request Service creation request (includes offers)
* @returns Created service with generated ID, index UUID, and created offers
* @returns Created service with generated ID and created offers
*/
createService(request: CreateServiceRequest): Promise<{
service: Service;
indexUuid: string;
offers: Offer[];
}>;
/**
* Creates multiple services with offers in batch
* @param requests Array of service creation requests
* @returns Array of created services with IDs, UUIDs, and offers
*/
batchCreateServices(requests: CreateServiceRequest[]): Promise<Array<{
service: Service;
indexUuid: string;
offers: Offer[];
}>>;
/**
* Gets all offers for a service
@@ -278,34 +228,40 @@ export interface Storage {
getServiceById(serviceId: string): Promise<Service | null>;
/**
* Gets a service by its index UUID
* @param uuid Index UUID
* Gets a service by its fully qualified name (FQN)
* @param serviceFqn Full service FQN (e.g., "chat:1.0.0@alice")
* @returns Service if found, null otherwise
*/
getServiceByUuid(uuid: string): Promise<Service | null>;
getServiceByFqn(serviceFqn: string): Promise<Service | null>;
/**
* Lists all services for a username (with privacy filtering)
* @param username Username to query
* @returns Array of service info (UUIDs only for private services)
* Discovers services by name and version with pagination
* Returns unique available offers (where answerer_username IS NULL)
* @param serviceName Service name (e.g., 'chat')
* @param version Version string for semver matching (e.g., '1.0.0')
* @param limit Maximum number of unique services to return
* @param offset Number of services to skip
* @returns Array of services with available offers
*/
listServicesForUsername(username: string): Promise<ServiceInfo[]>;
discoverServices(
serviceName: string,
version: string,
limit: number,
offset: number
): Promise<Service[]>;
/**
* Queries a service by username and FQN
* @param username Username
* @param serviceFqn Service FQN
* @returns Service index UUID if found, null otherwise
* Gets a random available service by name and version
* Returns a single random offer that is available (answerer_username IS NULL)
* @param serviceName Service name (e.g., 'chat')
* @param version Version string for semver matching (e.g., '1.0.0')
* @returns Random service with available offer, or null if none found
*/
queryService(username: string, serviceFqn: string): Promise<string | null>;
/**
* Finds all services by username and service name (without version)
* @param username Username
* @param serviceName Service name (e.g., 'com.example.chat')
* @returns Array of services with matching service name
*/
findServicesByName(username: string, serviceName: string): Promise<Service[]>;
getRandomService(serviceName: string, version: string): Promise<Service | null>;
/**
* Deletes a service (with ownership verification)

View File

@@ -1,6 +1,5 @@
import { createApp } from './app.ts';
import { D1Storage } from './storage/d1.ts';
import { generateSecretKey } from './crypto.ts';
import { Config } from './config.ts';
/**
@@ -8,7 +7,6 @@ import { Config } from './config.ts';
*/
export interface Env {
DB: D1Database;
AUTH_SECRET?: string;
OFFER_DEFAULT_TTL?: string;
OFFER_MAX_TTL?: string;
OFFER_MIN_TTL?: string;
@@ -25,9 +23,6 @@ export default {
// Initialize D1 storage
const storage = new D1Storage(env.DB);
// Generate or use provided auth secret
const authSecret = env.AUTH_SECRET || generateSecretKey();
// Build config from environment
const config: Config = {
port: 0, // Not used in Workers
@@ -37,7 +32,6 @@ export default {
? env.CORS_ORIGINS.split(',').map(o => o.trim())
: ['*'],
version: env.VERSION || 'unknown',
authSecret,
offerDefaultTtl: env.OFFER_DEFAULT_TTL ? parseInt(env.OFFER_DEFAULT_TTL, 10) : 60000,
offerMaxTtl: env.OFFER_MAX_TTL ? parseInt(env.OFFER_MAX_TTL, 10) : 86400000,
offerMinTtl: env.OFFER_MIN_TTL ? parseInt(env.OFFER_MIN_TTL, 10) : 60000,

View File

@@ -7,7 +7,7 @@ compatibility_flags = ["nodejs_compat"]
[[d1_databases]]
binding = "DB"
database_name = "rondevu-offers"
database_id = "b94e3f71-816d-455b-a89d-927fa49532d0"
database_id = "3d469855-d37f-477b-b139-fa58843a54ff"
# Environment variables
[vars]
@@ -17,7 +17,7 @@ OFFER_MIN_TTL = "60000" # Min offer TTL: 1 minute
MAX_OFFERS_PER_REQUEST = "100" # Max offers per request
MAX_TOPICS_PER_OFFER = "50" # Max topics per offer
CORS_ORIGINS = "*" # Comma-separated list of allowed origins
VERSION = "0.1.0" # Semantic version
VERSION = "0.5.2" # Semantic version
# AUTH_SECRET should be set as a secret, not a var
# Run: npx wrangler secret put AUTH_SECRET
@@ -39,7 +39,7 @@ command = ""
[observability]
[observability.logs]
enabled = false
enabled = true
head_sampling_rate = 1
invocation_logs = true
persist = true