L{CORE} Troubleshooting Guide

Solutions for common L{CORE} issues and debugging procedures.

Last Updated: 2025-01-14


Table of Contents

  1. Quick Diagnostics
  2. EigenCloud Deployment Issues
  3. Health Check Failures
  4. Encryption Issues
  5. Attestation Problems
  6. Grant Issues
  7. Query Failures
  8. Performance Issues
  9. Cartesi Node Issues
  10. Network Issues
  11. Debugging Tools
  12. Error Reference
  13. Getting Help

Quick Diagnostics

Run these checks first to quickly identify the problem area.

1. System Health Check

#!/bin/bash
# save as: lcore-health-check.sh

echo "=== L\{CORE\} Health Check ==="
echo ""

# Check Attestor
echo "1. Attestor Health:"
curl -s http://localhost:8001/api/health | jq . || echo "FAILED"
echo ""

# Check L{CORE} Status
echo "2. L{CORE} Status:"
curl -s http://localhost:8001/api/lcore/status | jq . || echo "FAILED"
echo ""

# Check L{CORE} Health
echo "3. L{CORE} Health:"
curl -s http://localhost:8001/api/lcore/health | jq . || echo "FAILED"
echo ""

# Check Cartesi Node (if accessible)
echo "4. Cartesi Node:"
curl -s http://localhost:5004/health 2>/dev/null | jq . || echo "NOT ACCESSIBLE"
echo ""

# Check Environment
echo "5. Environment Variables:"
echo " LCORE_ENABLED: ${LCORE_ENABLED:-NOT SET}"
echo " LCORE_ROLLUP_URL: ${LCORE_ROLLUP_URL:-NOT SET}"
echo " LCORE_ADMIN_PUBLIC_KEY: ${LCORE_ADMIN_PUBLIC_KEY:+SET (hidden)}"
echo " LCORE_ADMIN_PRIVATE_KEY: ${LCORE_ADMIN_PRIVATE_KEY:+SET (hidden)}"

2. Identify Problem Category

| Symptom | Likely Category |
| --- | --- |
| Attestor returns 503 | Health Check Failures |
| "Decryption failed" | Encryption Issues |
| Attestations not saving | Attestation Problems |
| "No grant found" | Grant Issues |
| Queries timing out | Performance Issues |
| Cartesi node not syncing | Cartesi Node Issues |

EigenCloud Deployment Issues

These issues are specific to deploying L{CORE} on EigenCloud's TEE infrastructure.

Symptom: Port 10000 Not Accessible

Error: Connection refused or timeout when querying Cartesi Node at http://IP:10000/inspect/...

Cause: The Cartesi Node is binding to 127.0.0.1 instead of 0.0.0.0.

Solution:

Add CARTESI_HTTP_ADDRESS=0.0.0.0 to the Cartesi Node environment variables:

# CRITICAL: Without this, port 10000 is only accessible from inside the container
CARTESI_HTTP_ADDRESS=0.0.0.0

Then redeploy the container in EigenCloud.

Verification:

# Should return data, not connection refused
curl "http://${CARTESI_NODE_IP}:10000/inspect/%7B%22type%22%3A%22all_provider_schemas%22%2C%22params%22%3A%7B%7D%7D"

Symptom: "Exposed Ports: map[]" in EigenCloud

Error: EigenCloud dashboard shows the Cartesi container has no exposed ports.

Cause: The base Cartesi Docker image doesn't include an EXPOSE directive, which EigenCloud needs for port mapping.

Solution:

Create a wrapper Dockerfile that adds the EXPOSE directive:

# cartesi-node.dockerfile
FROM modernsociety/lcore-cartesi-node:latest
USER root
EXPOSE 10000

Build and push:

docker build -f cartesi-node.dockerfile -t modernsociety/lcore-cartesi-node:eigencloud .
docker push modernsociety/lcore-cartesi-node:eigencloud

Then redeploy using the :eigencloud tag.


Symptom: "Unknown Query Type" Error

Error:

{"error": "Unknown query type", "received": "0x7b2274797065..."}

Cause: The inspect query payload is hex-encoded instead of URL-encoded.

Solution:

Use URL-encoded JSON, NOT hex-encoded:

# CORRECT: URL-encoded JSON
curl "http://${CARTESI_NODE_IP}:10000/inspect/$(python3 -c "import urllib.parse; print(urllib.parse.quote('{\"type\":\"all_provider_schemas\",\"params\":{}}'))")"

# Also correct: Pre-encoded URL
curl "http://${CARTESI_NODE_IP}:10000/inspect/%7B%22type%22%3A%22all_provider_schemas%22%2C%22params%22%3A%7B%7D%7D"

# WRONG: Hex-encoded (this is what causes the error)
curl "http://${CARTESI_NODE_IP}:10000/inspect/0x7b2274797065223a22616c6c5f70726f76696465725f736368656d6173222c22706172616d73223a7b7d7d"

In JavaScript/TypeScript:

// CORRECT
const query = { type: 'all_provider_schemas', params: {} };
const url = `http://${CARTESI_NODE_IP}:10000/inspect/${encodeURIComponent(JSON.stringify(query))}`;

// WRONG - don't use hex encoding
const hexPayload = Buffer.from(JSON.stringify(query)).toString('hex');
const wrongUrl = `http://${CARTESI_NODE_IP}:10000/inspect/0x${hexPayload}`;

Symptom: Input Rejected by Cartesi

Error: Inputs submitted via InputBox contract are rejected or have no effect.

Possible Causes:

  1. Provider schema not registered - The attestation's provider/flowType combination isn't in the database
  2. Invalid payload format - The input doesn't match expected structure
  3. Invalid TEE signature - The attestation wasn't signed correctly

Debugging:

# Check if provider schema exists
curl "http://${CARTESI_NODE_IP}:10000/inspect/$(python3 -c "import urllib.parse; print(urllib.parse.quote('{\"type\":\"all_provider_schemas\",\"params\":{}}'))")"

# Check Cartesi node logs (if you have access)
# Look for rejection messages

Solution:

Register the provider schema before submitting attestations:

curl -X POST http://${ATTESTOR_IP}:8001/api/lcore/provider-schema \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "your-provider",
    "flowType": "web_request",
    "domain": "lending",
    "bucketDefinitions": { ... },
    "dataKeys": ["parameters", "context"],
    "freshnessHalfLife": 604800
  }'
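
The same registration can be issued from TypeScript. This is a sketch only; ATTESTOR_IP and ADMIN_TOKEN mirror the shell variables above, and the schema object is passed through unchanged:

// Hypothetical helper: POST a provider schema to the Attestor's registration endpoint.
async function registerProviderSchema(schema: Record<string, unknown>): Promise<void> {
  const res = await fetch(`http://${process.env.ATTESTOR_IP}:8001/api/lcore/provider-schema`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.ADMIN_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(schema),
  });
  if (!res.ok) throw new Error(`Schema registration failed with HTTP ${res.status}`);
}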

Symptom: Container Restart Loop in EigenCloud

Error: Container keeps restarting, never reaches "Running" state.

Possible Causes:

  1. Invalid environment variables - Typo or missing required variable
  2. Database connection failure - PostgreSQL endpoint unreachable
  3. RPC endpoint unreachable - Blockchain RPC not accessible
  4. Out of memory - Container needs more RAM

Debugging Steps:

  1. Check container logs in EigenCloud dashboard
  2. Verify all required environment variables are set:
# Required for Cartesi Node
CARTESI_HTTP_ADDRESS=0.0.0.0
CARTESI_CONTRACTS_APPLICATION_ADDRESS=0x...
CARTESI_CONTRACTS_INPUT_BOX_ADDRESS=0x...
CARTESI_BLOCKCHAIN_HTTP_ENDPOINT=https://...
CARTESI_AUTH_MNEMONIC=...
  3. Test RPC endpoint connectivity from your local machine
  4. Try increasing container resources (4GB RAM minimum recommended)

Symptom: Attestor Cannot Reach Cartesi Node

Error:

{"error": "L\{CORE\} unavailable", "code": "LCORE_UNAVAILABLE"}

Cause: The Attestor's LCORE_NODE_URL is incorrect or the Cartesi Node isn't accessible.

Solution:

  1. Verify the Cartesi Node is running and accessible:
curl http://${CARTESI_NODE_IP}:10000/inspect/%7B%22type%22%3A%22all_provider_schemas%22%2C%22params%22%3A%7B%7D%7D
  2. Update the Attestor's environment variable:
LCORE_NODE_URL=http://${CARTESI_NODE_IP}:10000
  3. Ensure both containers are in the same network or the Cartesi Node's port is publicly accessible (a quick connectivity probe is sketched below).
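
If the steps above still fail, a small probe from your own tooling confirms whether the node is reachable at all. This is a sketch; CARTESI_NODE_IP is the same placeholder used in the commands above, and the 5-second timeout is an arbitrary choice:

// Probe the Cartesi Node's inspect endpoint with a timeout.
async function probeCartesiNode(nodeUrl: string): Promise<boolean> {
  const query = encodeURIComponent(JSON.stringify({ type: 'all_provider_schemas', params: {} }));
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5000); // give up after 5s
  try {
    const res = await fetch(`${nodeUrl}/inspect/${query}`, { signal: controller.signal });
    return res.ok;
  } catch {
    return false; // unreachable or timed out
  } finally {
    clearTimeout(timer);
  }
}

// Example: await probeCartesiNode(`http://${process.env.CARTESI_NODE_IP}:10000`)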

Symptom: Cartesi Node Not Syncing Blocks

Error: Node logs show "waiting for inputs" but no new blocks are processed.

Possible Causes:

  1. Wrong chain ID - Should be 421614 for Arbitrum Sepolia
  2. RPC endpoint rate limited - Use a dedicated RPC provider
  3. Wrong InputBox deployment block - Node starting from wrong block

Solution:

Verify configuration:

CARTESI_BLOCKCHAIN_ID=421614
CARTESI_CONTRACTS_INPUT_BOX_DEPLOYMENT_BLOCK_NUMBER=2838409
CARTESI_BLOCKCHAIN_HTTP_ENDPOINT=https://arb-sepolia.g.alchemy.com/v2/YOUR_KEY

Health Check Failures

Symptom: L{CORE} Health Returns Unhealthy

{ "healthy": false, "reason": "connection_failed" }

Cause 1: Cartesi Node Not Running

# Check if node is running
docker ps | grep lcore-node

# If not running, start it
docker-compose -f docker-compose.lcore.yml up -d

# Check logs
docker logs lcore-node --tail 50

Cause 2: Wrong Rollup URL

# Verify environment variable
echo $LCORE_ROLLUP_URL

# Test direct connection (local dev only)
curl $ROLLUP_HTTP_SERVER_URL/health

# If using Docker networking, ensure correct hostname
# Wrong: http://localhost:5004 (from inside container)
# Right: http://lcore-node:5004 (Docker network name)

Cause 3: Network Connectivity

# From Attestor container, test connectivity
docker exec -it attestor sh -c "curl -v http://lcore-node:5004/health"

# Check Docker network
docker network inspect attestor_network

Symptom: L{CORE} Status Shows Disabled

{ "enabled": false, "status": "disabled" }

Solution: Enable L{CORE}

# Check environment
echo $LCORE_ENABLED # Should be "1"

# Update .env
echo "LCORE_ENABLED=1" >> .env

# Restart Attestor
docker-compose restart attestor

Encryption Issues

Symptom: "Decryption failed"

{ "error": "Decryption failed", "code": "DECRYPTION_ERROR" }

Cause 1: Key Mismatch

The private key doesn't match the public key registered in Cartesi.

# Verify keys match
node -e "
const nacl = require('tweetnacl');
const publicKey = Buffer.from('$LCORE_ADMIN_PUBLIC_KEY', 'base64');
const privateKey = Buffer.from('$LCORE_ADMIN_PRIVATE_KEY', 'base64');

const derivedPublic = nacl.box.keyPair.fromSecretKey(privateKey).publicKey;

if (Buffer.compare(publicKey, derivedPublic) === 0) {
  console.log('✓ Keys match');
} else {
  console.log('✗ Keys DO NOT match!');
  console.log('Expected public key:', Buffer.from(derivedPublic).toString('base64'));
}
"

Fix: Either update the private key to match, or re-register the public key in Cartesi.

Cause 2: Wrong Public Key in Cartesi

# Check what public key is registered
curl -X POST http://localhost:8001/api/lcore/inspect \
-H "Authorization: Bearer $ADMIN_TOKEN" \
-d '{"type": "encryption_config"}'

# If different, re-register
curl -X POST http://localhost:8001/api/lcore/encryption-key \
-H "Authorization: Bearer $ADMIN_TOKEN" \
-d "{\"publicKey\": \"$LCORE_ADMIN_PUBLIC_KEY\", \"algorithm\": \"nacl-box\"}"

Cause 3: Corrupted Key Data

# Verify keys are valid base64
echo $LCORE_ADMIN_PUBLIC_KEY | base64 -d | wc -c # Should be 32 bytes
echo $LCORE_ADMIN_PRIVATE_KEY | base64 -d | wc -c # Should be 32 bytes

Symptom: "Encryption key not set"

{ "error": "Encryption key not configured", "code": "NO_ENCRYPTION_KEY" }

Solution: Register Encryption Key

# Register the public key in Cartesi
curl -X POST http://localhost:8001/api/lcore/encryption-key \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "publicKey": "'$LCORE_ADMIN_PUBLIC_KEY'",
    "algorithm": "nacl-box"
  }'

# Verify it was set
curl http://localhost:8001/api/lcore/status -H "Authorization: Bearer $ADMIN_TOKEN"

Attestation Problems

Symptom: Attestations Not Being Stored

After claim creation, the attestation doesn't appear in L{CORE}.

Cause 1: Provider Schema Not Registered

# Check registered schemas
curl http://localhost:8001/api/lcore/provider-schemas \
-H "Authorization: Bearer $ADMIN_TOKEN"

# If missing, register the schema
curl -X POST http://localhost:8001/api/lcore/provider-schema \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "chase",
    "flowType": "web_request",
    "domain": "lending",
    "bucketDefinitions": {
      "balance": {
        "boundaries": [0, 1000, 5000, 10000, 25000, 50000, 100000, 500000, 1000000000],
        "labels": ["<1k", "1k-5k", "5k-10k", "10k-25k", "25k-50k", "50k-100k", "100k-500k", ">500k"]
      }
    },
    "dataKeys": ["parameters", "context"],
    "freshnessHalfLife": 604800
  }'

Cause 2: L{CORE} Submission Disabled

# Check if L{CORE} is enabled
grep LCORE_ENABLED .env

# Check Attestor logs for submission attempts
docker logs attestor 2>&1 | grep -i "lcore"

Cause 3: Submission Rejected

# Check Cartesi node logs for rejections
docker logs lcore-node 2>&1 | grep -i "reject"

# Common rejection reasons:
# - Invalid TEE signature
# - Schema not found
# - Duplicate attestation hash

Symptom: "Provider schema not registered"

{ "error": "Provider schema not registered", "provider": "chase", "flow_type": "web_request" }

Solution: Register the Schema

See LCORE-DEPLOYMENT-GUIDE.md for schema registration.

Symptom: Attestation Stored But Missing Buckets

{
  "attestation": {
    "id": "...",
    "buckets": {}  // Empty!
  }
}

Cause: Discretization Failed

# Check attestor logs for discretization errors
docker logs attestor 2>&1 | grep -i "discreti"

# Verify the claim has required parameters
# The provider schema defines which keys to bucketize

Grant Issues

Symptom: "No grant found"

{ "error": "No grant found for this attestation", "code": "NO_GRANT" }

Cause 1: Grant Was Never Created

# Check grants for a specific attestation (as admin)
curl "http://localhost:8001/api/lcore/admin/grants?attestation_id=$ATTESTATION_ID" \
-H "Authorization: Bearer $ADMIN_TOKEN"

Cause 2: Grant for Different dApp

# Check the grantee address
# Your dApp must be querying with the same address the grant was issued to

Solution: User Must Create Grant

Direct the user to create a grant via your frontend. See LCORE-DAPP-INTEGRATION.md.

Symptom: "Grant expired"

{ "error": "Grant has expired", "code": "GRANT_EXPIRED" }

Solution: Request New Grant

Grants have an expiration date. Prompt the user to create a new grant.

// Check grant status before querying
async function ensureValidGrant(attestationId: string) {
  const grants = await lcore.getMyGrants();
  const grant = grants.find(g => g.attestationId === attestationId);

  if (!grant || new Date(grant.expiresAt) < new Date()) {
    throw new Error('Grant expired or not found - please request new access');
  }
  return grant;
}
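
A usage sketch: call the helper before querying and surface the thrown error to the user as a re-grant prompt (attestationId is whatever identifier your dApp already tracks).

const grant = await ensureValidGrant(attestationId);
// Safe to query now; grant.expiresAt is in the future.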

Symptom: "Grant revoked"

{ "error": "Grant has been revoked", "code": "GRANT_REVOKED" }

Explanation

The user intentionally revoked your access. You cannot restore it - they must create a new grant.


Query Failures

Symptom: Queries Returning Empty Results

{ "attestations": [] }

Cause 1: Wrong Owner Address

# Verify the owner address format (lowercase, checksummed, etc.)
# L{CORE} normalizes addresses to lowercase
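
A minimal normalization sketch; rawAddress and adminToken are placeholders for your own values, and the admin attestations endpoint is the one used elsewhere in this guide:

// Lowercase the owner address before querying, matching L{CORE}'s normalization.
const owner = rawAddress.trim().toLowerCase();
const res = await fetch(
  `http://localhost:8001/api/lcore/admin/attestations?owner=${owner}`,
  { headers: { Authorization: `Bearer ${adminToken}` } }
);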

Cause 2: Attestations Expired

# Check if attestations exist but are expired
curl "http://localhost:8001/api/lcore/admin/attestations?owner=$ADDRESS&include_expired=true" \
-H "Authorization: Bearer $ADMIN_TOKEN"

Cause 3: Wrong Domain Filter

# Query without domain filter
curl "http://localhost:8001/api/lcore/admin/attestations?owner=$ADDRESS" \
-H "Authorization: Bearer $ADMIN_TOKEN"

Symptom: Aggregate Queries Return Zeros

{
  "bucketKey": "balance",
  "counts": [],
  "total": 0
}

Cause: K-Anonymity Threshold

Buckets with fewer than k users are suppressed. If all buckets have < 5 users, results appear empty.
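
A small client-side check for this case, based on the response shape shown above (the shape is assumed from this guide's example, not a formal schema):

interface AggregateResult {
  bucketKey: string;
  counts: number[];
  total: number;
}

// Treat an empty, zero-total result as "suppressed by k-anonymity" rather than "no data".
function looksSuppressed(result: AggregateResult): boolean {
  return result.total === 0 && result.counts.length === 0;
}

If a result looks suppressed, widen the query (fewer filters, broader domain) or wait until more users have attestations in those buckets.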

Cause: Wrong Domain

# List available domains
curl http://localhost:8001/api/lcore/provider-schemas -H "Authorization: Bearer $ADMIN_TOKEN" | jq '.[].domain' | sort -u

Performance Issues

Symptom: Queries Taking > 5 Seconds

Cause 1: Cartesi Node Overloaded

# Check node resource usage
docker stats lcore-node

# If CPU/memory high, increase resources
# docker-compose.yml:
# services:
#   lcore-node:
#     deploy:
#       resources:
#         limits:
#           cpus: '2'
#           memory: 4G

Cause 2: Large Result Sets

# Use pagination
curl "http://localhost:8001/api/lcore/admin/attestations?limit=100&offset=0" \
-H "Authorization: Bearer $ADMIN_TOKEN"

Cause 3: Database Needs Optimization

# Check if Cartesi database needs vacuum (advanced)
# This requires accessing the Cartesi machine directly

Symptom: Rate Limiting

{ "error": "Rate limit exceeded", "code": "RATE_LIMITED", "retryAfter": 60 }

Solution: Implement Backoff

async function withBackoff<T>(fn: () => Promise<T>): Promise<T> {
  for (let attempt = 0; attempt < 5; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error.code === 'RATE_LIMITED') {
        const waitMs = (error.retryAfter || 60) * 1000;
        console.log(`Rate limited, waiting ${waitMs}ms`);
        await new Promise(r => setTimeout(r, waitMs));
        continue;
      }
      throw error;
    }
  }
  throw new Error('Max retries exceeded');
}
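
Usage sketch: queryAttestations below is a placeholder for your own client call, assumed to reject with an error carrying code: 'RATE_LIMITED' and retryAfter on HTTP 429.

const attestations = await withBackoff(() => queryAttestations(ownerAddress));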

Cartesi Node Issues

Symptom: Node Not Syncing

docker logs lcore-node | tail -20
# Shows: "waiting for new inputs" but no progress

Cause 1: Wrong Chain Configuration

# Verify chain settings
echo "Application Address: $LCORE_APPLICATION_ADDRESS"
echo "RPC URL: $ARBITRUM_RPC_URL"

# Test RPC connection
curl -X POST $ARBITRUM_RPC_URL \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'

Cause 2: Node Behind Firewall

# Check if the node can reach the configured Arbitrum RPC endpoint
docker exec lcore-node curl -v "$ARBITRUM_RPC_URL"

Cause 3: Corrupted State

# Reset node state (WARNING: loses local data)
docker-compose -f docker-compose.lcore.yml down
rm -rf lcore-data/
docker-compose -f docker-compose.lcore.yml up -d

Symptom: Node Crashes Repeatedly

docker logs lcore-node | grep -i "error\|panic\|fatal"

Cause: Out of Memory

# Increase memory limit
# docker-compose.yml:
# services:
#   lcore-node:
#     deploy:
#       resources:
#         limits:
#           memory: 8G

Cause: Disk Full

# Check disk space
df -h

# Clean up Docker if needed
docker system prune -f

Network Issues

Symptom: "Connection refused" to Attestor

curl: (7) Failed to connect to localhost port 8001: Connection refused

Solution: Check Attestor is Running

# Check process
docker ps | grep attestor

# Check port binding
netstat -tlnp | grep 8001

# Restart if needed
docker-compose restart attestor

Symptom: CORS Errors (Browser)

Access to fetch at 'http://localhost:8001/api/...' has been blocked by CORS policy

Solution: Configure CORS

# Add to .env
CORS_ORIGINS=https://your-dapp.com,http://localhost:3000

# Or allow all (development only!)
CORS_ORIGINS=*

Symptom: SSL/TLS Errors

SSL certificate problem: unable to get local issuer certificate

Solution: Use Valid SSL Certificates

# For production, use a valid certificate
# For development, you can disable verification (NOT FOR PRODUCTION)
curl -k https://...

Debugging Tools

1. L{CORE} Inspector

Query Cartesi state directly:

# Inspect all attestations for an address
curl -X POST http://localhost:5004/input/inspect \
  -H "Content-Type: application/json" \
  -d '{
    "type": "user_attestations",
    "params": {
      "owner": "0x1234..."
    }
  }'

# Inspect provider schemas
curl -X POST http://localhost:5004/input/inspect \
  -H "Content-Type: application/json" \
  -d '{"type": "all_provider_schemas"}'

# Inspect schema admins
curl -X POST http://localhost:5004/input/inspect \
  -H "Content-Type: application/json" \
  -d '{"type": "all_schema_admins"}'

2. Log Analysis

# Attestor logs - L{CORE} specific
docker logs attestor 2>&1 | grep -i "lcore\|cartesi\|encrypt\|decrypt"

# Cartesi node logs
docker logs lcore-node 2>&1 | grep -i "error\|reject\|accept"

# Combined timeline
docker logs attestor --timestamps 2>&1 | grep lcore | sort

3. Test Encryption/Decryption

# Test encryption roundtrip
node -e "
const nacl = require('tweetnacl');
const util = require('tweetnacl-util');

const publicKey = Buffer.from(process.env.LCORE_ADMIN_PUBLIC_KEY, 'base64');
const privateKey = Buffer.from(process.env.LCORE_ADMIN_PRIVATE_KEY, 'base64');

// Encrypt
const message = JSON.stringify({ test: 'hello', timestamp: Date.now() });
const ephemeral = nacl.box.keyPair();
const nonce = nacl.randomBytes(24);
const ciphertext = nacl.box(
  util.decodeUTF8(message),
  nonce,
  publicKey,
  ephemeral.secretKey
);

console.log('Encrypted:', util.encodeBase64(ciphertext).substring(0, 50) + '...');

// Decrypt
const decrypted = nacl.box.open(
  ciphertext,
  nonce,
  ephemeral.publicKey,
  privateKey
);

if (decrypted) {
  console.log('Decrypted:', util.encodeUTF8(decrypted));
  console.log('✓ Encryption/decryption working');
} else {
  console.log('✗ Decryption failed!');
}
"

4. Simulate Input

# Submit a test input to Cartesi (dev mode only)
curl -X POST http://localhost:5004/input/advance \
  -H "Content-Type: application/json" \
  -d '{
    "sender": "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266",
    "payload": {
      "action": "test_ping"
    }
  }'

Error Reference

| Error Code | HTTP Status | Description | Solution |
| --- | --- | --- | --- |
| LCORE_DISABLED | 503 | L{CORE} not enabled | Set LCORE_ENABLED=1 |
| LCORE_UNAVAILABLE | 503 | Cannot reach Cartesi | Check node, network |
| NO_ENCRYPTION_KEY | 500 | Public key not set | Register key |
| DECRYPTION_ERROR | 500 | Cannot decrypt | Check key match |
| SCHEMA_NOT_FOUND | 400 | Provider not registered | Register schema |
| DUPLICATE_ATTESTATION | 409 | Already exists | Use existing |
| NO_GRANT | 403 | No access grant | User must grant |
| GRANT_EXPIRED | 403 | Grant past expiry | Request new grant |
| GRANT_REVOKED | 403 | User revoked | Request new grant |
| INVALID_SIGNATURE | 401 | Bad TEE/user sig | Check signing |
| RATE_LIMITED | 429 | Too many requests | Implement backoff |
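
The table above maps cleanly onto a client-side handler. This sketch groups the documented codes by recommended action; it is illustrative, not an official SDK API:

type LcoreAction = 'retry' | 'reauthorize' | 'configure' | 'fail';

function actionFor(code: string): LcoreAction {
  switch (code) {
    case 'RATE_LIMITED':
    case 'LCORE_UNAVAILABLE':
      return 'retry';        // back off, then try again
    case 'NO_GRANT':
    case 'GRANT_EXPIRED':
    case 'GRANT_REVOKED':
      return 'reauthorize';  // the user must create a new grant
    case 'LCORE_DISABLED':
    case 'NO_ENCRYPTION_KEY':
    case 'SCHEMA_NOT_FOUND':
      return 'configure';    // operator-side configuration is missing
    default:
      return 'fail';         // surface DECRYPTION_ERROR, INVALID_SIGNATURE, etc.
  }
}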

Getting Help

If you can't resolve the issue:

  1. Collect diagnostics:

    ./lcore-health-check.sh > diagnostics.txt
    docker logs attestor --tail 500 >> diagnostics.txt
    docker logs lcore-node --tail 500 >> diagnostics.txt
  2. Check documentation: LCORE-DEPLOYMENT-GUIDE.md and LCORE-DAPP-INTEGRATION.md

  3. Open an issue with diagnostics attached


Document History

| Version | Date | Changes |
| --- | --- | --- |
| 2.0 | 2025-01-14 | Added EigenCloud deployment issues section |
| 1.0 | 2025-01-12 | Initial troubleshooting guide |