Overview

Use this guide to benchmark a selected staking entity against the network average and then drill down into sub-entities. You can view the currently supported public entity list at beaconcha.in/entities.
API Endpoints: This guide uses /api/v2/ethereum/entities, /api/v2/ethereum/performance-aggregate, /api/v2/ethereum/entity/sub-entities, and /api/v2/ethereum/validators/metadata.
Premium access: /api/v2/ethereum/entities, /api/v2/ethereum/entity/sub-entities, and /api/v2/ethereum/validators/metadata require a Scale or Enterprise plan.
Public labels vs private sets: Entity/sub-entity labels are public mappings. If labels look missing or incorrect, see Validator Tagging (label coverage depends on upstream public datasets, such as Hildobby’s), and use Dashboard as Private Sets to define your own validator sets.

Why Benchmark vs Network?

Baseline Performance Check

Compare one entity against network-wide average performance over the same window.

Sub-Entity Diagnosis

Identify which operators are helping or hurting the parent entity score.

Incident Validation

Verify whether drops are entity-specific or market-wide.

Stakeholder Reporting

Produce clear entity-vs-network deltas for internal and external reporting.
Metric         Best For               Luck Impact
BeaconScore    Relative performance   No
APY / ROI      Absolute returns       Yes
For how BeaconScore is calculated and why it can differ from weighted alternatives, see BeaconScore vs. 3rd Party Metrics.

1. Get the Entity Score

Benchmarking with BeaconScore isolates validator operational quality by normalizing for proposal luck. Fetch entities and select your target row:
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/entities \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "chain": "mainnet",
  "range": { "evaluation_window": "30d" }
}
'
Use the target entity’s beaconscore.

2. Get the Network Baseline

Request the same evaluation window for network aggregate:
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/performance-aggregate \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "chain": "mainnet",
  "range": { "evaluation_window": "30d" }
}
'
Use data.beaconscore.total as the network benchmark.
Keep the same chain and evaluation_window for both entity and network requests to avoid skewed comparisons.
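One way to keep the two requests in sync is to build the request body once and send the identical payload to both endpoints. A minimal sketch; the field names match the request bodies shown above, and the helper name is illustrative:

```python
def benchmark_payload(chain="mainnet", window="30d"):
    """Shared request body for the entities and performance-aggregate calls."""
    return {"chain": chain, "range": {"evaluation_window": window}}

# Send the same object to both endpoints so chain and window always match.
payload = benchmark_payload(window="30d")
print(payload)
```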

3. Compare Entity vs Network

Compute the delta:

delta = entity_beaconscore - network_beaconscore

Reference thresholds:
  • Green: delta >= +0.0025
  • Yellow: -0.0025 < delta < +0.0025
  • Red: delta <= -0.0025
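The thresholds above can be applied with a small helper. A minimal sketch; the 0.0025 cutoff comes directly from the reference thresholds in this guide:

```python
def classify_delta(delta, threshold=0.0025):
    """Map an entity-vs-network BeaconScore delta to a traffic-light status."""
    if delta >= threshold:
        return "green"
    if delta <= -threshold:
        return "red"
    return "yellow"

print(classify_delta(0.0030))   # green
print(classify_delta(0.0010))   # yellow
print(classify_delta(-0.0040))  # red
```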

4. Drill Into Sub-Entities

Compare operator-level performance within the selected entity:
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/entity/sub-entities \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "chain": "mainnet",
  "entity": "Lido",
  "range": { "evaluation_window": "30d" },
  "sort_by": "beaconscore",
  "sort_order": "desc"
}
'
Sortable fields: beaconscore, net_share, validator_count.

5. Discover Entity Associations (Optional)

If you start from validator identifiers, map them first:
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/validators/metadata \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "chain": "mainnet",
  "validator": {
    "validator_identifiers": [1, 2, 3]
  },
  "page_size": 10
}
'
The response includes entity and sub_entity.
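Once the metadata response is back, you can bucket validators by their entity and sub_entity before benchmarking. A sketch, assuming the response carries a data list of per-validator rows with entity and sub_entity fields as described above; the sample payload is illustrative, not real data:

```python
from collections import defaultdict

def group_by_entity(metadata_response):
    """Bucket validator metadata rows by (entity, sub_entity)."""
    groups = defaultdict(list)
    for row in metadata_response.get("data", []):
        groups[(row.get("entity"), row.get("sub_entity"))].append(row)
    return dict(groups)

# Illustrative response shape (not real data):
sample = {"data": [
    {"index": 1, "entity": "Lido", "sub_entity": "OperatorA"},
    {"index": 2, "entity": "Lido", "sub_entity": "OperatorB"},
    {"index": 3, "entity": "Lido", "sub_entity": "OperatorA"},
]}
print({k: len(v) for k, v in group_by_entity(sample).items()})
```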

Example: Python Script

import requests

API_KEY = "<YOUR_API_KEY>"
BASE = "https://beaconcha.in"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

def post(endpoint, payload):
    resp = requests.post(f"{BASE}{endpoint}", headers=HEADERS, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

window = "30d"
entity_name = "Lido"

entities = post("/api/v2/ethereum/entities", {
    "chain": "mainnet",
    "range": {"evaluation_window": window},
})
entity = next((e for e in entities["data"] if e["entity"] == entity_name), None)
if not entity:
    raise ValueError(f"Entity not found: {entity_name}")

network = post("/api/v2/ethereum/performance-aggregate", {
    "chain": "mainnet",
    "range": {"evaluation_window": window},
})
network_score = network["data"]["beaconscore"]["total"]
entity_score = entity["beaconscore"]
delta = entity_score - network_score

print(f"Entity:  {entity_name}")
print(f"Score:   {entity_score:.4f}")
print(f"Network: {network_score:.4f}")
print(f"Delta:   {delta:+.4f}")

subs = post("/api/v2/ethereum/entity/sub-entities", {
    "chain": "mainnet",
    "entity": entity_name,
    "range": {"evaluation_window": window},
    "sort_by": "beaconscore",
    "sort_order": "desc",
})

for row in subs["data"][:10]:
    sub_delta = row["beaconscore"] - network_score
    print(f"- {row['sub_entity']:<25} {row['beaconscore']:.4f} ({sub_delta:+.4f})")

Best Practices

Use 30d or 90d

Prefer longer windows for stable benchmarking and trend analysis.

Track Delta History

Store entity - network deltas over time to detect persistent underperformance.
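One lightweight way to keep that history is to append a dated row per benchmarking run, for example to a CSV file. A sketch; the file name and column layout are illustrative:

```python
import csv
from datetime import date

def record_delta(path, entity, delta, day=None):
    """Append one benchmarking run (date, entity, delta) to a history CSV."""
    if day is None:
        day = date.today().isoformat()
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([day, entity, f"{delta:+.4f}"])

record_delta("delta_history.csv", "Lido", 0.0012)
```

Re-plotting this file over time makes persistent underperformance stand out even when any single window looks acceptable.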

Inspect Sub-Entity Mix

Re-check sub-entity rankings when parent performance changes materially.

Map Unknown Validators

Use validator metadata to classify validators before benchmarking.

Data Freshness

  • Entity and sub-entity overview data is precomputed and updated hourly.
  • Validator-to-entity assignments are updated once per day.

For endpoint details, see the Entities and Network sections in the V2 API Docs sidebar.