
Overview

Compare recognized Ethereum staking entities by operational performance, returns, and missed rewards. Reproduce the metrics published on beaconcha.in/entities.
What is an entity? An entity is a named staking operator (e.g. Lido, Coinbase, Figment) whose validators are publicly labeled in the beaconcha.in dataset. The Entity Benchmarking APIs let you query performance, APY, missed rewards, and network-relative standing for any labeled entity, or for your own private validator set.

Guides in This Section


Key Metrics

The leaderboard and benchmarking guides use four metric families. Each is fetched from a different endpoint:
| Metric | Endpoint | What it measures | Used in |
|---|---|---|---|
| BeaconScore | `entities` / `performance-aggregate` | Attestation, proposal, and sync committee efficiency (luck-normalized) | Rank entities, Benchmark vs network, Private benchmarking |
| APY / ROI | `validators/apy-roi` | Annualized return, split by CL and EL | Rank entities, Benchmark vs network, Private benchmarking |
| Missed rewards | `validators/rewards-aggregate` | ETH opportunity cost by duty type | Rank entities, Benchmark vs network, Private benchmarking |
| Network baseline | `performance-aggregate` (no entity filter) | Network-wide BeaconScore for delta computation | Rank entities, Benchmark vs network, Private benchmarking |

BeaconScore Components

BeaconScore integrates three validator duty types weighted by their contribution to rewards. These fields are returned by the entities and performance-aggregate endpoints:
| Component | Weight | API field |
|---|---|---|
| Attestation efficiency | 84.4% | `data.beaconscore.attestation` |
| Proposal efficiency | 12.5% | `data.beaconscore.proposal` |
| Sync committee efficiency | 3.1% | `data.beaconscore.sync_committee` |
Per-component efficiency can also be derived from rewards-aggregate earned/missed ratios. See Rank entities: Step 5 and Benchmark vs network: Step 5 for the calculation.

BeaconScore and APY measure fundamentally different things. BeaconScore is a duty efficiency metric (0-100%): a score of 100% means the validator earned the maximum possible rewards for every assigned duty. APY is a financial return metric that expresses the annualized yield on staked ETH (e.g., 3%). A validator can have a perfect BeaconScore of 100% and an APY of 3% at the same time; these are not contradictory. BeaconScore normalizes for proposal luck, making it the recommended metric for cross-entity performance comparison. APY is better suited for reporting absolute returns.
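As a rough sketch, the component weights in the table above can be combined into a total score. This is an illustrative approximation, not the authoritative formula (see BeaconScore Calculation for that); the dictionary keys mirror the API field names listed above.

```python
# Illustrative sketch: combine per-duty efficiencies into a total BeaconScore
# using the documented weights. The authoritative formula is defined in the
# BeaconScore Calculation docs; this is an approximation for intuition only.

WEIGHTS = {
    "attestation": 0.844,     # 84.4%
    "proposal": 0.125,        # 12.5%
    "sync_committee": 0.031,  # 3.1%
}

def total_beaconscore(components: dict) -> float:
    """Weighted sum of per-duty efficiencies (each in 0.0-1.0)."""
    return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

# Example: near-perfect attestations, one underperforming proposal
score = total_beaconscore({
    "attestation": 0.998,
    "proposal": 0.95,
    "sync_committee": 1.0,
})
print(f"{score:.2%}")
```

Note that the three weights sum to 1.0, so a validator that is perfect on every duty scores exactly 100%.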
See BeaconScore Calculation for the full formula and BeaconScore vs. 3rd Party Metrics for how BeaconScore compares to other metrics.

Missed Rewards Methodology

Missed rewards quantify the opportunity cost of a validator failing a duty, not a balance penalty. All missed reward data comes from the rewards-aggregate endpoint. The leaderboard uses:
CL missed = attestation missed (head + source + target)
          + sync committee missed
          + proposal missed (CL portion)

EL missed = proposal missed (EL portion, foregone MEV + tips)

Total missed = CL missed + EL missed

% of earned = total_missed / (total_missed + earned_gross) x 100
earned_gross is data.total_reward from rewards-aggregate (gross, before penalties). Do not use data.total (net) for this calculation.
All wei values returned by rewards-aggregate are JSON strings. Cast with int(str(v)) before dividing by 1e18.
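The formulas above can be sketched in Python. The per-duty field names used here (`attestation_missed`, `sync_missed`, etc.) are hypothetical placeholders for illustration; check the rewards-aggregate API reference for the actual response schema. The string-to-int cast and the gross-earned denominator follow the notes above.

```python
# Sketch of the leaderboard missed-reward formulas, assuming a
# rewards-aggregate response with HYPOTHETICAL field names.
# Wei values arrive as JSON strings and must be cast before arithmetic.

WEI_PER_ETH = 10**18

def wei(v) -> int:
    """rewards-aggregate returns wei values as JSON strings; cast first."""
    return int(str(v))

def missed_summary(data: dict) -> dict:
    cl_missed = (
        wei(data["attestation_missed"])      # head + source + target
        + wei(data["sync_missed"])           # sync committee
        + wei(data["proposal_cl_missed"])    # proposal, CL portion
    )
    el_missed = wei(data["proposal_el_missed"])  # foregone MEV + tips
    total_missed = cl_missed + el_missed
    earned_gross = wei(data["total_reward"])     # gross, before penalties
    pct_of_earned = total_missed / (total_missed + earned_gross) * 100
    return {
        "cl_missed_eth": cl_missed / WEI_PER_ETH,
        "el_missed_eth": el_missed / WEI_PER_ETH,
        "total_missed_eth": total_missed / WEI_PER_ETH,
        "pct_of_earned": pct_of_earned,
    }

example = {
    "attestation_missed": "120000000000000000",  # 0.12 ETH
    "sync_missed": "10000000000000000",          # 0.01 ETH
    "proposal_cl_missed": "20000000000000000",   # 0.02 ETH
    "proposal_el_missed": "50000000000000000",   # 0.05 ETH
    "total_reward": "49800000000000000000",      # 49.8 ETH gross
}
print(missed_summary(example))
```

Keeping the arithmetic in integer wei until the final division avoids floating-point loss on large balances.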
For step-by-step implementation, see Rank entities: Step 3, Benchmark vs network: Step 5, or Private benchmarking: Step 6. For per-validator missed reward analysis, see Analyze Missed Rewards.

vs Average Thresholds

The leaderboard uses the following thresholds consistently across all comparisons (BeaconScore total, all three components, and APY deltas). See these applied in Rank entities: Step 5, Benchmark vs network: Step 3, and Private benchmarking: Step 4:
| Color | Condition |
|---|---|
| 🟢 Green | delta >= +0.0025 (+0.25 percentage points above network) |
| 🟡 Yellow | -0.0025 < delta < +0.0025 (within ±0.25pp of network) |
| 🔴 Red | delta <= -0.0025 (more than 0.25pp below network) |
For missed reward efficiency (lower = better):
| Color | Condition |
|---|---|
| 🟢 Green | < 0.40% of earned |
| 🟡 Yellow | 0.40% – 0.60% of earned |
| 🔴 Red | > 0.60% of earned |

Evaluation Windows

All benchmarking endpoints support the same rolling windows:
| Window | Typical use |
|---|---|
| 24h | Incident detection |
| 7d | Weekly review |
| 30d | Monthly leaderboard (recommended) |
| 90d | Long-term trend analysis |
Use consistent windows when comparing entities. all_time is not supported for entity endpoints.
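Since all_time is rejected by entity endpoints, it can help to validate the window client-side before making a request. A minimal sketch (the helper name is an assumption; the supported values come from the table above):

```python
# Guard against unsupported window values before calling an entity endpoint.
# Supported windows per the table above; "all_time" is not accepted.

SUPPORTED_WINDOWS = {"24h", "7d", "30d", "90d"}

def validate_window(window: str) -> str:
    if window not in SUPPORTED_WINDOWS:
        raise ValueError(
            f"unsupported window {window!r}; "
            f"use one of {sorted(SUPPORTED_WINDOWS)}"
        )
    return window

validate_window("30d")  # ok; validate_window("all_time") would raise
```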

API Plan Requirements

| Endpoint | Minimum plan |
|---|---|
| `entities` | Scale |
| `entity/sub-entities` | Scale |
| `validators/apy-roi` with entity selector | Scale |
| `validators/rewards-aggregate` with entity selector | Scale |
| `performance-aggregate` (network baseline) | Any |
| `validators/performance-aggregate` with `dashboard_id` | Any |
See API Pricing for plan details.

Attribution

If you display BeaconScore publicly, follow the BeaconScore License and License Materials.