Performance Guide

Optimize the Opacus Protocol for high-throughput agent communication.

Performance Benchmarks

Message Throughput

⚡

TypeScript SDK

WebSocket: ~10,000 msg/s
WebTransport: ~15,000 msg/s
Latency: 2-5ms avg

🚀

Rust SDK

QUIC: ~50,000 msg/s
Direct: ~100,000 msg/s
Latency: <1ms avg

🔐

Encryption

X25519 ECDH: ~20,000 ops/s
AES-256-GCM: ~500 MB/s
Ed25519: ~25,000 signs/s

⛓️

On-Chain

Register: ~100k gas
Attestation: ~80k gas
Block Time: ~3s

Benchmarks run on: M2 MacBook Pro, 16GB RAM, 1Gbps network

Optimization Strategies

1. Connection Pooling

Reuse connections to reduce handshake overhead:

// TypeScript: Connection pool
class ConnectionPool {
  private connections = new Map<string, OpacusClient>();
  
  async getConnection(gatewayUrl: string): Promise<OpacusClient> {
    if (!this.connections.has(gatewayUrl)) {
      const client = new OpacusClient({ gateway: gatewayUrl });
      await client.connect();
      this.connections.set(gatewayUrl, client);
    }
    return this.connections.get(gatewayUrl)!;
  }
}

const pool = new ConnectionPool();
const client = await pool.getConnection('wss://gateway.opacus.ai');

// ✅ Reuses existing connection
// ✅ No handshake overhead
// ✅ 10x faster than reconnecting

// Rust: Connection pool with lazy_static
use std::collections::HashMap;
use std::sync::Arc;

use lazy_static::lazy_static;
use tokio::sync::Mutex;

lazy_static! {
    static ref CLIENT_POOL: Arc<Mutex<HashMap<String, OpacusClient>>> =
        Arc::new(Mutex::new(HashMap::new()));
}

async fn get_connection(gateway: &str) -> OpacusClient {
    let mut pool = CLIENT_POOL.lock().await;
    
    // Entry::or_insert_with takes a sync closure, so we can't
    // await inside it; insert explicitly instead
    if !pool.contains_key(gateway) {
        let client = OpacusClient::new(gateway).connect().await.unwrap();
        pool.insert(gateway.to_string(), client);
    }
    
    pool.get(gateway).unwrap().clone()
}

2. Message Batching

Send multiple messages in one batch:

// TypeScript: Batch messages
const messages = [
  { to: agent1, content: 'Hello' },
  { to: agent2, content: 'World' },
  { to: agent3, content: 'Batch' }
];

// Bad: Send individually (3 round trips)
for (const msg of messages) {
  await client.sendMessage(msg);
}

// Good: Batch send (1 round trip)
await client.sendBatch(messages);

// ✅ 3x fewer round trips
// ✅ Lower latency
// ✅ Better throughput
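
For very large fan-outs, it can also help to cap batch size so a single call does not grow unbounded. A minimal sketch, assuming the sendBatch method from the example above; the MAX_BATCH limit and OutgoingMessage type are hypothetical:

// TypeScript: Split a large send into fixed-size batches (sketch)
const MAX_BATCH = 100; // hypothetical per-call limit

async function sendInBatches(client: OpacusClient, messages: OutgoingMessage[]) {
  for (let i = 0; i < messages.length; i += MAX_BATCH) {
    // One round trip per MAX_BATCH messages instead of per message
    await client.sendBatch(messages.slice(i, i + MAX_BATCH));
  }
}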

3. Caching

Cache frequently accessed data:

// TypeScript: Cache agent public keys
class AgentCache {
  private cache = new Map<string, { edKey: string; xKey: string }>();
  private ttl = 5 * 60 * 1000; // 5 minutes
  
  async getKeys(agentId: string) {
    // Check cache first
    if (this.cache.has(agentId)) {
      return this.cache.get(agentId);
    }
    
    // Fetch from contract
    const agent = await agentRegistry.getAgent(agentId);
    const keys = {
      edKey: agent.edPublicKey,
      xKey: agent.xPublicKey
    };
    
    // Cache for future use
    this.cache.set(agentId, keys);
    setTimeout(() => this.cache.delete(agentId), this.ttl);
    
    return keys;
  }
}

// ✅ First call: ~100ms (RPC)
// ✅ Cached calls: <1ms (memory)
// ✅ 100x faster for repeated lookups

4. Parallel Processing

Process independent messages concurrently instead of one at a time:

// TypeScript: Process messages in parallel
async function processMessages(messages: Message[]) {
  // Bad: Sequential processing
  for (const msg of messages) {
    await handleMessage(msg);
  }
  
  // Good: Parallel processing
  await Promise.all(
    messages.map(msg => handleMessage(msg))
  );
}

// ✅ CPU utilization optimized
// ✅ Latency reduced
// ✅ Throughput increased

// Rust: Parallel processing with Rayon
use rayon::prelude::*;

fn process_messages(messages: Vec<Message>) {
    messages.par_iter()
        .for_each(|msg| {
            handle_message(msg);
        });
}

// ✅ Automatic parallelization
// ✅ Work-stealing scheduler
// ✅ Maximum CPU utilization

5. Stream Processing

Stream large payloads in fixed-size chunks to keep memory usage constant:

// Rust: Stream large payloads
use tokio::io::{AsyncReadExt, AsyncWriteExt};

async fn stream_large_message(
    client: &mut OpacusClient,
    data: &[u8]
) -> Result<()> {
    const CHUNK_SIZE: usize = 64 * 1024; // 64KB chunks
    
    for chunk in data.chunks(CHUNK_SIZE) {
        client.send_chunk(chunk).await?;
    }
    
    client.finish().await?;
    Ok(())
}

// ✅ Memory efficient (constant memory)
// ✅ Handles GB-sized messages
// ✅ Progressive processing
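
The same chunking pattern applies on the TypeScript side. A minimal sketch, assuming hypothetical sendChunk/finish methods that mirror the Rust API above:

// TypeScript: Stream a large payload in 64KB chunks (sketch)
async function streamLargeMessage(client: OpacusClient, data: Uint8Array) {
  const CHUNK_SIZE = 64 * 1024;
  
  for (let offset = 0; offset < data.length; offset += CHUNK_SIZE) {
    // sendChunk/finish are assumed analogues of the Rust methods above
    await client.sendChunk(data.subarray(offset, offset + CHUNK_SIZE));
  }
  
  await client.finish();
}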

Gas Optimization

Batch Contract Calls

// TypeScript: Batch on-chain operations
import { Multicall3 } from '@/contracts/Multicall3';

const multicall = new Multicall3(MULTICALL_ADDRESS, provider);

// Bad: 3 separate transactions
await agentRegistry.updateMetadata(agent1, meta1);
await agentRegistry.updateMetadata(agent2, meta2);
await agentRegistry.updateMetadata(agent3, meta3);
// Cost: 3 × 50k gas = 150k gas

// Good: 1 multicall transaction
const calls = [
  agentRegistry.interface.encodeFunctionData('updateMetadata', [agent1, meta1]),
  agentRegistry.interface.encodeFunctionData('updateMetadata', [agent2, meta2]),
  agentRegistry.interface.encodeFunctionData('updateMetadata', [agent3, meta3])
];

await multicall.aggregate(calls);
// Cost: ~120k gas (20% savings)

Optimize Metadata Size

// Bad: Large JSON metadata
const metadata = JSON.stringify({
  name: 'WeatherBot',
  version: '1.0.0',
  description: 'A comprehensive weather forecasting agent...',
  capabilities: ['weather', 'climate', 'forecasting'],
  author: 'Opacus Team',
  license: 'MIT'
});
// Size: ~200 bytes, Gas: ~100k

// Good: Minimal metadata
const metadata = JSON.stringify({
  name: 'WeatherBot',
  v: '1.0.0',
  cap: ['weather']
});
// Size: ~50 bytes, Gas: ~75k (25% savings)
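
Because gas scales with calldata size, it is worth measuring the encoded metadata before registering. A minimal sketch using the standard TextEncoder; the 100-byte warning threshold is an arbitrary example:

// TypeScript: Check metadata size before submitting on-chain
const size = new TextEncoder().encode(metadata).length;
if (size > 100) {
  console.warn(`Metadata is ${size} bytes; trim fields to save gas`);
}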

Network Optimization

Choose the Right Protocol

// TypeScript: Protocol selection
const client = new OpacusClient({
  gateway: 'gateway.opacus.ai',
  // Auto-detect best protocol
  protocol: 'auto'  // Tries WebTransport → WebSocket
});

// Or explicitly choose
const fastClient = new OpacusClient({
  gateway: 'gateway.opacus.ai',
  protocol: 'webtransport'  // Force WebTransport
});
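
Since 'auto' depends on runtime support, you can also branch yourself: WebTransport is feature-detectable in the browser. A minimal sketch, reusing the protocol option values from the example above:

// TypeScript: Feature-detect WebTransport before forcing a protocol
const protocol = 'WebTransport' in globalThis ? 'webtransport' : 'websocket';

const client = new OpacusClient({
  gateway: 'gateway.opacus.ai',
  protocol
});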

Connection Tuning

// Rust: QUIC connection tuning
let mut config = quinn::ClientConfig::new(Arc::new(crypto));
let mut transport = quinn::TransportConfig::default();

// Increase window sizes
transport.max_concurrent_bidi_streams(1000u32.into());
transport.send_window(10_000_000);  // 10MB
transport.receive_window(10_000_000);

// Reduce latency
transport.max_idle_timeout(Some(Duration::from_secs(30).try_into()?));
transport.keep_alive_interval(Some(Duration::from_secs(5)));

config.transport = Arc::new(transport);

// ✅ Higher throughput
// ✅ Lower latency
// ✅ Better connection stability

Monitoring & Profiling

Measure Performance

// TypeScript: Performance monitoring
class PerformanceMonitor {
  private startTime = Date.now();
  private metrics = {
    messagesSent: 0,
    messagesReceived: 0,
    avgLatency: 0,
    errors: 0
  };
  
  measureSend<T>(fn: () => Promise<T>): Promise<T> {
    const start = performance.now();
    
    return fn().then(result => {
      const latency = performance.now() - start;
      this.metrics.messagesSent++;
      this.updateLatency(latency);
      return result;
    }).catch(err => {
      this.metrics.errors++;
      throw err;
    });
  }
  
  private updateLatency(latency: number) {
    // Incremental running average across all sends
    const n = this.metrics.messagesSent;
    this.metrics.avgLatency += (latency - this.metrics.avgLatency) / n;
  }
  
  getMetrics() {
    // Throughput measured since the monitor was created
    const elapsedSec = (Date.now() - this.startTime) / 1000;
    return {
      ...this.metrics,
      throughput: this.metrics.messagesSent / elapsedSec
    };
  }
}

const monitor = new PerformanceMonitor();

// Use with monitoring
await monitor.measureSend(() => 
  client.sendMessage({ to: agentId, content: 'Hello' })
);

// Check metrics
console.log(monitor.getMetrics());
// { messagesSent: 1234, avgLatency: 3.5ms, throughput: 123 msg/s }

Profiling

// Rust: Profiling with criterion
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_encryption(c: &mut Criterion) {
    let plaintext = vec![0u8; 1024];  // 1KB message
    
    c.bench_function("encrypt_message", |b| {
        b.iter(|| {
            let ciphertext = encrypt(black_box(&plaintext));
            black_box(ciphertext);
        });
    });
}

criterion_group!(benches, benchmark_encryption);
criterion_main!(benches);

// Run: cargo bench
// Results:
// encrypt_message         time:   [12.5 µs 12.7 µs 12.9 µs]
//                         thrpt:  [77.5 MiB/s 78.7 MiB/s 80.0 MiB/s]

Scalability

Horizontal Scaling

Load balancing architecture (diagram): a load balancer fronts Gateway 1, Gateway 2, and Gateway 3, which share state via Redis.
# Deploy multiple gateway instances
docker-compose up --scale gateway=3

# Configure load balancer (nginx)
upstream opacus_gateways {
    least_conn;  # Route to least busy
    server gateway1:8080;
    server gateway2:8080;
    server gateway3:8080;
}

server {
    listen 443 ssl;
    location / {
        proxy_pass http://opacus_gateways;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

Database Optimization

-- Index frequently queried fields
CREATE INDEX idx_agent_owner ON agents(owner);
CREATE INDEX idx_attestation_subject ON attestations(subject_id);
CREATE INDEX idx_messages_recipient ON messages(recipient_id, timestamp);

// Use read replicas
const primaryDb = new Database(PRIMARY_URL);
const replicaDb = new Database(REPLICA_URL);

// Writes go to primary
await primaryDb.insertMessage(message);

// Reads from replica
const messages = await replicaDb.getMessages(agentId);

Best Practices Summary

🔌

Connections

  • Pool connections
  • Reuse clients
  • Choose the right protocol
📦

Messages

  • Batch when possible
  • Stream large data
  • Compress payloads (see the sketch after this list)
💾

Caching

  • Cache public keys
  • Cache agent data
  • Use TTL wisely
⛽

Gas

  • Batch transactions
  • Minimize metadata
  • Use multicall
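
Payload compression trades CPU for bandwidth and pays off most for large, text-heavy messages. A minimal Node.js sketch using the built-in zlib module; note that compression must happen before encryption, since encrypted bytes do not compress:

// TypeScript: Compress payloads before encrypting (Node.js)
import { gzipSync, gunzipSync } from 'node:zlib';

const payload = JSON.stringify(largeObject); // largeObject is hypothetical
const compressed = gzipSync(payload);        // JSON typically compresses well

// ... encrypt and send `compressed` ...

// Receiver side, after decryption
const original = JSON.parse(gunzipSync(compressed).toString('utf8'));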

Performance Checklist

🎯 Performance Targets

Recommended targets for production:

  • Throughput: >10,000 msg/s per gateway
  • Latency: <10ms p99
  • Availability: >99.9%
  • Gas Cost: <0.001 0G per operation
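
These targets can be checked against live metrics; a minimal sketch reusing the PerformanceMonitor from Monitoring & Profiling (avgLatency stands in for p99, which that simple monitor does not track):

// TypeScript: Warn when live metrics miss the targets above
const { throughput, avgLatency } = monitor.getMetrics();

if (throughput < 10_000) console.warn(`Throughput below target: ${throughput.toFixed(0)} msg/s`);
if (avgLatency > 10) console.warn(`Latency above target: ${avgLatency.toFixed(1)} ms`);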

Related Documentation