
Overview

The APY & ROI endpoint provides standardized return metrics for your validators, broken down by execution layer (EL) and consensus layer (CL) contributions.
API Endpoint: This guide uses /api/v2/ethereum/validators/apy-roi for aggregated return metrics.
When to use APY/ROI vs BeaconScore: APY and ROI measure absolute returns in ETH/percentage terms. BeaconScore measures relative efficiency normalized for luck. For comparing validator performance across operators, BeaconScore is recommended.
See Also: For detailed per-epoch missed reward analysis (which duty types are causing losses), see Analyze Missed Rewards. This endpoint focuses on aggregated return metrics.

Understanding the Metrics

Metric | Description
ROI    | Return on Investment — actual return during the evaluation period (not annualized)
APY    | Annual Percentage Yield — ROI extrapolated to a yearly rate
Both metrics are calculated separately for:
Component       | Source
Execution Layer | Block proposals, MEV rewards, transaction fees
Consensus Layer | Attestations, sync committees, CL block rewards
Combined        | Total return across both layers
APY is an extrapolation based on the selected evaluation window. Past performance does not guarantee future results, especially for execution layer rewards which are highly variable.
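The extrapolation can be sketched as simple (non-compounded) annualization, which reproduces the figures in the Quick Start sample response below; the API's exact method is not documented here, so treat this as an approximation:

```python
# Sketch: extrapolating a period ROI to an annual rate (APY).
# Assumes simple, non-compounded annualization.

SECONDS_PER_YEAR = 365 * 86400

def annualize(roi: float, window_seconds: int) -> float:
    """Scale a return earned over window_seconds up to a full year."""
    return roi * SECONDS_PER_YEAR / window_seconds

# Combined ROI of 0.41% over a 30-day window:
window = 30 * 86400
print(round(annualize(0.0041, window), 4))  # → 0.0499
```

Note how a 0.41% 30-day return annualizes to roughly 4.99%, matching the combined APY in the sample response.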

Quick Start: Fetch APY & ROI

curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/validators/apy-roi \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "chain": "mainnet",
  "validator": {
    "dashboard_id": 123
  },
  "range": {
    "evaluation_window": "30d"
  }
}
'

Response

{
  "data": {
    "execution_layer": {
      "roi": "0.00125",
      "apy": "0.0152"
    },
    "consensus_layer": {
      "roi": "0.00285",
      "apy": "0.0347"
    },
    "combined": {
      "roi": "0.0041",
      "apy": "0.0499"
    },
    "finality": "finalized"
  },
  "range": {
    "epoch": { "start": 407453, "end": 414202 },
    "timestamp": { "start": 1763285975, "end": 1765877974 }
  }
}
The response shows:
  • Execution Layer: 1.52% APY from block proposals/MEV
  • Consensus Layer: 3.47% APY from attestations/sync committees
  • Combined: 4.99% total APY
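To turn the string-encoded fields into display values, you can parse the `data` object directly. A minimal sketch using the sample response from this guide (the API returns rates as decimal strings, so convert before formatting):

```python
# Sketch: converting the string-encoded APY fields in the response
# to percentage values per layer. `response` is the sample payload
# shown above.

response = {
    "data": {
        "execution_layer": {"roi": "0.00125", "apy": "0.0152"},
        "consensus_layer": {"roi": "0.00285", "apy": "0.0347"},
        "combined": {"roi": "0.0041", "apy": "0.0499"},
        "finality": "finalized",
    }
}

def apy_percentages(resp: dict) -> dict:
    """Return {layer: APY as a percent} for each layer in the response."""
    return {
        layer: round(float(vals["apy"]) * 100, 2)
        for layer, vals in resp["data"].items()
        if isinstance(vals, dict)  # skip the "finality" string field
    }

print(apy_percentages(response))
# → {'execution_layer': 1.52, 'consensus_layer': 3.47, 'combined': 4.99}
```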

Evaluation Windows

Window   | Best For
24h      | Quick daily check (high variance)
7d       | Weekly reporting
30d      | Monthly reports, smoothed variance
90d      | Quarterly analysis
all_time | Lifetime performance since activation
Longer evaluation windows provide more stable APY estimates. Short windows are heavily influenced by luck (block proposals, MEV).
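To compare APY stability across horizons, one request body per window can be built from the Quick Start payload. A sketch (dashboard_id 123 is the placeholder used throughout this guide):

```python
# Sketch: building one request body per evaluation window so the
# same dashboard can be queried across horizons and the resulting
# APYs compared for stability.

WINDOWS = ["24h", "7d", "30d", "90d", "all_time"]

def payload(window: str, dashboard_id: int = 123) -> dict:
    """Request body for the APY & ROI endpoint at a given window."""
    return {
        "chain": "mainnet",
        "validator": {"dashboard_id": dashboard_id},
        "range": {"evaluation_window": window},
    }

bodies = [payload(w) for w in WINDOWS]
print(bodies[2]["range"])  # → {'evaluation_window': '30d'}
```

Each body can then be POSTed to /api/v2/ethereum/validators/apy-roi exactly as in the curl example above.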

APY vs BeaconScore

Metric      | Use Case                    | Affected By
APY         | Absolute return calculation | Luck (proposals, MEV), network conditions
BeaconScore | Performance comparison      | Only validator behavior (normalized for luck)
A validator with lower APY might still have a higher BeaconScore if they were unlucky with block proposals. BeaconScore removes luck from the equation, making it ideal for comparing operator performance.

Best Practices

Use 30d+ Windows

Short windows are noisy. Use 30 days or longer for meaningful APY estimates.

Track EL Separately

EL rewards (MEV) are highly variable. Track them separately from CL rewards.

Compare Apples to Apples

When comparing validators, ensure they use the same MEV-boost configuration.

Use BeaconScore for Ranking

For comparing validator quality, BeaconScore is more reliable than APY.