Performance Guide
Optimize Opacus Protocol for high-throughput agent communication.
Performance Benchmarks
Message Throughput
⚡ TypeScript SDK
- WebSocket: ~10,000 msg/s
- WebTransport: ~15,000 msg/s
- Latency: 2-5ms avg
🚀 Rust SDK
- QUIC: ~50,000 msg/s
- Direct: ~100,000 msg/s
- Latency: <1ms avg
🔐 Encryption
- X25519 ECDH: ~20,000 ops/s
- AES-256-GCM: ~500 MB/s
- Ed25519: ~25,000 signs/s
⚙️ On-Chain
- Register: ~100k gas
- Attestation: ~80k gas
- Block Time: ~3s
Benchmarks run on: M2 MacBook Pro, 16GB RAM, 1Gbps network
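These figures depend heavily on hardware, network, and message size, so it is worth reproducing them in your own environment. A rough client-side check might look like the sketch below, using the OpacusClient API from the examples that follow; note that a sequential loop measures round-trip-bound throughput, so pipeline or batch sends to approach the figures above.
// TypeScript: Rough client-side throughput check (illustrative sketch)
async function measureThroughput(client: OpacusClient, to: string, count = 1_000) {
  const start = performance.now();
  for (let i = 0; i < count; i++) {
    await client.sendMessage({ to, content: `ping-${i}` });
  }
  const seconds = (performance.now() - start) / 1000;
  console.log(`${Math.round(count / seconds)} msg/s over ${count} messages`);
}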
Optimization Strategies
1. Connection Pooling
Reuse connections to reduce handshake overhead:
// TypeScript: Connection pool
class ConnectionPool {
  private connections = new Map<string, OpacusClient>();

  async getConnection(gatewayUrl: string): Promise<OpacusClient> {
    if (!this.connections.has(gatewayUrl)) {
      const client = new OpacusClient({ gateway: gatewayUrl });
      await client.connect();
      this.connections.set(gatewayUrl, client);
    }
    return this.connections.get(gatewayUrl)!;
  }
}
const pool = new ConnectionPool();
const client = await pool.getConnection('wss://gateway.opacus.ai');
// ✅ Reuses existing connection
// ✅ No handshake overhead
// ✅ 10x faster than reconnecting
// Rust: Connection pool with lazy_static
use std::collections::HashMap;
use std::sync::Arc;

use lazy_static::lazy_static;
use tokio::sync::Mutex;

lazy_static! {
    static ref CLIENT_POOL: Arc<Mutex<HashMap<String, OpacusClient>>> =
        Arc::new(Mutex::new(HashMap::new()));
}

async fn get_connection(gateway: &str) -> OpacusClient {
    let mut pool = CLIENT_POOL.lock().await;
    // `or_insert_with` cannot await, so insert explicitly on a cache miss
    if !pool.contains_key(gateway) {
        let client = OpacusClient::new(gateway).connect().await.unwrap();
        pool.insert(gateway.to_string(), client);
    }
    pool.get(gateway).unwrap().clone()
}
2. Message Batching
Send multiple messages in one batch:
// TypeScript: Batch messages
const messages = [
  { to: agent1, content: 'Hello' },
  { to: agent2, content: 'World' },
  { to: agent3, content: 'Batch' }
];

// Bad: Send individually (3 round trips)
for (const msg of messages) {
  await client.sendMessage(msg);
}

// Good: Batch send (1 round trip)
await client.sendBatch(messages);
// ✅ 3x fewer round trips
// ✅ Lower latency
// ✅ Better throughput
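When messages are produced one at a time rather than in ready-made arrays, a small write-behind queue can accumulate them and flush on a short timer, trading a few milliseconds of latency for fewer round trips. A minimal sketch using the sendBatch API above; flushMs and maxSize are illustrative knobs, not SDK options:
// TypeScript: Micro-batching queue (illustrative sketch)
class BatchQueue {
  private queue: Array<{ to: string; content: string }> = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private client: OpacusClient,
    private flushMs = 10,   // maximum added latency
    private maxSize = 100   // flush early when the batch is full
  ) {}

  send(msg: { to: string; content: string }): void {
    this.queue.push(msg);
    if (this.queue.length >= this.maxSize) {
      void this.flush(); // flush early when full
    } else if (!this.timer) {
      // Start a flush timer on the first queued message
      this.timer = setTimeout(() => void this.flush(), this.flushMs);
    }
  }

  private async flush() {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    const batch = this.queue.splice(0, this.queue.length);
    if (batch.length > 0) await this.client.sendBatch(batch);
  }
}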
3. Caching
Cache frequently accessed data:
// TypeScript: Cache agent public keys
interface AgentKeys {
  edKey: string;
  xKey: string;
}

class AgentCache {
  private cache = new Map<string, AgentKeys>();
  private ttl = 5 * 60 * 1000; // 5 minutes

  async getKeys(agentId: string): Promise<AgentKeys> {
    // Check cache first
    const cached = this.cache.get(agentId);
    if (cached) {
      return cached;
    }

    // Fetch from contract
    const agent = await agentRegistry.getAgent(agentId);
    const keys: AgentKeys = {
      edKey: agent.edPublicKey,
      xKey: agent.xPublicKey
    };

    // Cache for future use, evicting after the TTL
    this.cache.set(agentId, keys);
    setTimeout(() => this.cache.delete(agentId), this.ttl);
    return keys;
  }
}
// ✅ First call: ~100ms (RPC)
// ✅ Cached calls: <1ms (memory)
// ✅ 100x faster for repeated lookups
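One caveat with the cache above: if several callers look up the same uncached agent concurrently, each miss triggers its own RPC call. Caching the in-flight promise deduplicates them. A sketch, where fetchKeys is a hypothetical stand-in for the contract lookup inside getKeys:
// TypeScript: Deduplicate concurrent cache misses (illustrative sketch)
declare function fetchKeys(agentId: string): Promise<AgentKeys>; // hypothetical contract lookup

const pending = new Map<string, Promise<AgentKeys>>();

function getKeysDeduped(agentId: string): Promise<AgentKeys> {
  let inFlight = pending.get(agentId);
  if (!inFlight) {
    // First caller starts the fetch; concurrent callers share the same promise
    inFlight = fetchKeys(agentId).finally(() => pending.delete(agentId));
    pending.set(agentId, inFlight);
  }
  return inFlight;
}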
4. Parallel Processing
Process independent messages concurrently rather than one at a time:
// TypeScript: Process messages in parallel
async function processMessages(messages: Message[]) {
  // Bad: Sequential processing
  for (const msg of messages) {
    await handleMessage(msg);
  }

  // Good: Parallel processing
  await Promise.all(
    messages.map(msg => handleMessage(msg))
  );
}
// ✅ I/O waits overlap instead of queueing
// ✅ Latency reduced
// ✅ Throughput increased
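Promise.all starts every handler at once, which can overwhelm downstream services on large batches. A bounded-concurrency variant keeps a fixed number of workers busy; a hand-rolled sketch, reusing the handleMessage handler from above:
// TypeScript: Parallel processing with a concurrency limit (illustrative sketch)
async function processWithLimit(messages: Message[], limit = 16) {
  let next = 0;
  // Each worker pulls the next unprocessed message until none remain
  const worker = async () => {
    while (next < messages.length) {
      const msg = messages[next++];
      await handleMessage(msg);
    }
  };
  await Promise.all(Array.from({ length: limit }, worker));
}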
// Rust: Parallel processing with Rayon
use rayon::prelude::*;

fn process_messages(messages: Vec<Message>) {
    messages.par_iter()
        .for_each(|msg| {
            handle_message(msg);
        });
}
// ✅ Automatic parallelization
// ✅ Work-stealing scheduler
// ✅ Maximum CPU utilization
5. Stream Processing
Stream large payloads in fixed-size chunks instead of buffering them whole:
// Rust: Stream large payloads
async fn stream_large_message(
    client: &mut OpacusClient,
    data: &[u8]
) -> Result<()> {
    const CHUNK_SIZE: usize = 64 * 1024; // 64KB chunks

    for chunk in data.chunks(CHUNK_SIZE) {
        client.send_chunk(chunk).await?;
    }
    client.finish().await?;
    Ok(())
}
// ✅ Memory efficient (constant memory)
// ✅ Handles GB-sized messages
// ✅ Progressive processing
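The TypeScript SDK could mirror the same pattern. This sketch assumes hypothetical sendChunk and finish methods matching the Rust client above:
// TypeScript: Stream large payloads in chunks (illustrative sketch)
async function streamLargeMessage(client: OpacusClient, data: Uint8Array) {
  const CHUNK_SIZE = 64 * 1024; // 64KB, matching the Rust example

  for (let offset = 0; offset < data.length; offset += CHUNK_SIZE) {
    // subarray avoids copying; each chunk is sent as soon as it is sliced
    await client.sendChunk(data.subarray(offset, offset + CHUNK_SIZE));
  }
  await client.finish();
}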
Gas Optimization
Batch Contract Calls
// TypeScript: Batch on-chain operations
import { Multicall3 } from '@/contracts/Multicall3';

const multicall = new Multicall3(MULTICALL_ADDRESS, provider);

// Bad: 3 separate transactions
await agentRegistry.updateMetadata(agent1, meta1);
await agentRegistry.updateMetadata(agent2, meta2);
await agentRegistry.updateMetadata(agent3, meta3);
// Cost: 3 × 50k gas = 150k gas

// Good: 1 multicall transaction
const calls = [
  agentRegistry.interface.encodeFunctionData('updateMetadata', [agent1, meta1]),
  agentRegistry.interface.encodeFunctionData('updateMetadata', [agent2, meta2]),
  agentRegistry.interface.encodeFunctionData('updateMetadata', [agent3, meta3])
];
await multicall.aggregate(calls);
// Cost: ~120k gas (20% savings)
Optimize Metadata Size
// Bad: Large JSON metadata
const metadata = JSON.stringify({
  name: 'WeatherBot',
  version: '1.0.0',
  description: 'A comprehensive weather forecasting agent...',
  capabilities: ['weather', 'climate', 'forecasting'],
  author: 'Opacus Team',
  license: 'MIT'
});
// Size: ~200 bytes, Gas: ~100k

// Good: Minimal metadata
const minimalMetadata = JSON.stringify({
  name: 'WeatherBot',
  v: '1.0.0',
  cap: ['weather']
});
// Size: ~50 bytes, Gas: ~75k (25% savings)
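Because registration gas scales with calldata size, it can help to assert the encoded byte length before submitting. A small guard sketch; the 256-byte budget is an illustrative threshold, not a protocol limit:
// TypeScript: Guard against oversized metadata (illustrative sketch)
function assertMetadataSize(metadata: string, maxBytes = 256): number {
  const bytes = new TextEncoder().encode(metadata).length;
  if (bytes > maxBytes) {
    throw new Error(`Metadata is ${bytes} bytes; keep it under ${maxBytes} to save gas`);
  }
  return bytes;
}

assertMetadataSize(minimalMetadata); // ~50 bytes: passes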
Network Optimization
Choose the Right Protocol
- WebSocket: Browser clients, moderate throughput
- WebTransport: Modern browsers, better performance
- QUIC (Rust): Server-to-server, highest performance
// TypeScript: Protocol selection
const client = new OpacusClient({
  gateway: 'gateway.opacus.ai',
  // Auto-detect best protocol
  protocol: 'auto' // Tries WebTransport → WebSocket
});

// Or explicitly choose
const fastClient = new OpacusClient({
  gateway: 'gateway.opacus.ai',
  protocol: 'webtransport' // Force WebTransport
});
Connection Tuning
// Rust: QUIC connection tuning
use std::sync::Arc;
use std::time::Duration;

let mut config = quinn::ClientConfig::new(Arc::new(crypto));
let mut transport = quinn::TransportConfig::default();

// Increase window sizes
transport.max_concurrent_bidi_streams(1000u32.into());
transport.send_window(10_000_000); // 10MB
transport.receive_window(10_000_000u32.into());

// Reduce latency
transport.max_idle_timeout(Some(Duration::from_secs(30).try_into()?));
transport.keep_alive_interval(Some(Duration::from_secs(5)));

config.transport_config(Arc::new(transport));
// ✅ Higher throughput
// ✅ Lower latency
// ✅ Better connection stability
Monitoring & Profiling
Measure Performance
// TypeScript: Performance monitoring
class PerformanceMonitor {
  private startTime = Date.now();
  private metrics = {
    messagesSent: 0,
    messagesReceived: 0,
    avgLatency: 0,
    errors: 0
  };

  measureSend(fn: () => Promise<unknown>) {
    const start = performance.now();
    return fn().then(() => {
      const latency = performance.now() - start;
      this.metrics.messagesSent++;
      this.updateLatency(latency);
    }).catch(err => {
      this.metrics.errors++;
      throw err;
    });
  }

  private updateLatency(latency: number) {
    // Running average over all sent messages
    const n = this.metrics.messagesSent;
    this.metrics.avgLatency += (latency - this.metrics.avgLatency) / n;
  }

  getMetrics() {
    // Throughput relative to when monitoring started, not the Unix epoch
    const elapsedSeconds = (Date.now() - this.startTime) / 1000;
    return {
      ...this.metrics,
      throughput: this.metrics.messagesSent / elapsedSeconds
    };
  }
}
const monitor = new PerformanceMonitor();

// Use with monitoring
await monitor.measureSend(() =>
  client.sendMessage({ to: agentId, content: 'Hello' })
);

// Check metrics
console.log(monitor.getMetrics());
// e.g. { messagesSent: 1234, avgLatency: 3.5, throughput: 123 } (ms / msg/s)
Profiling
// Rust: Profiling with criterion
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_encryption(c: &mut Criterion) {
    let plaintext = vec![0u8; 1024]; // 1KB message

    c.bench_function("encrypt_message", |b| {
        b.iter(|| {
            let ciphertext = encrypt(black_box(&plaintext));
            black_box(ciphertext);
        });
    });
}

criterion_group!(benches, benchmark_encryption);
criterion_main!(benches);

// Run: cargo bench
// Results:
// encrypt_message  time:  [12.5 µs 12.7 µs 12.9 µs]
//                  thrpt: [77.5 MiB/s 78.7 MiB/s 80.0 MiB/s]
Scalability
Horizontal Scaling
# Deploy multiple gateway instances
docker-compose up --scale gateway=3

# Configure load balancer (nginx)
upstream opacus_gateways {
    least_conn;  # Route to least busy
    server gateway1:8080;
    server gateway2:8080;
    server gateway3:8080;
}

server {
    listen 443 ssl;
    location / {
        proxy_pass http://opacus_gateways;
        proxy_http_version 1.1;
        # Forward WebSocket upgrade headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Database Optimization
-- Index frequently queried fields
CREATE INDEX idx_agent_owner ON agents(owner);
CREATE INDEX idx_attestation_subject ON attestations(subject_id);
CREATE INDEX idx_messages_recipient ON messages(recipient_id, timestamp);

// TypeScript: Use read replicas
const primaryDb = new Database(PRIMARY_URL);
const replicaDb = new Database(REPLICA_URL);

// Writes go to the primary
await primaryDb.insertMessage(message);

// Reads come from the replica
const messages = await replicaDb.getMessages(agentId);
Best Practices Summary
🔌 Connections
- Pool connections
- Reuse clients
- Choose the right protocol
📦 Messages
- Batch when possible
- Stream large data
- Compress payloads (see the sketch below)
💾 Caching
- Cache public keys
- Cache agent data
- Use TTL wisely
⛽ Gas
- Batch transactions
- Minimize metadata
- Use multicall
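Payload compression is recommended above but not shown elsewhere in this guide. On modern runtimes it can be sketched with the standard CompressionStream API (available in current browsers and Node 18+; the payload shape and the 1KB threshold are illustrative):
// TypeScript: Compress a payload before sending (illustrative sketch)
async function compressPayload(data: string): Promise<Uint8Array> {
  const stream = new Blob([data]).stream().pipeThrough(new CompressionStream('gzip'));
  return new Uint8Array(await new Response(stream).arrayBuffer());
}

const payload = JSON.stringify({ readings: Array(1000).fill({ temp: 21.5 }) });
// Only worthwhile above a size threshold; gzip adds ~20 bytes of header overhead
if (payload.length > 1024) {
  const compressed = await compressPayload(payload);
  console.log(`Compressed ${payload.length} -> ${compressed.length} bytes`);
}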
Performance Checklist
- ✅ Connection pooling implemented
- ✅ Message batching where applicable
- ✅ Caching configured with appropriate TTL
- ✅ Parallel processing for CPU-bound tasks
- ✅ Streaming for large payloads
- ✅ Gas optimization applied
- ✅ Monitoring and profiling in place
- ✅ Load testing completed
- ✅ Horizontal scaling configured
- ✅ Database indexes optimized
🎯 Performance Targets
Recommended targets for production:
- Throughput: >10,000 msg/s per gateway
- Latency: <10ms p99
- Availability: >99.9%
- Gas Cost: <0.001 0G per operation
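To check the latency target above, record per-send latencies during a load test and compute the p99 directly. A minimal sketch reusing client and agentId from earlier examples; warm-up and ramp-up handling are omitted:
// TypeScript: Verify the p99 latency target (illustrative sketch)
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

const latencies: number[] = [];
for (let i = 0; i < 10_000; i++) {
  const start = performance.now();
  await client.sendMessage({ to: agentId, content: `load-${i}` });
  latencies.push(performance.now() - start);
}
console.log(`p99 latency: ${percentile(latencies, 99).toFixed(2)}ms`);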
Related Documentation
- Architecture - System design
- Troubleshooting - Common issues
- Deployment - Production setup