Overview
The APY & ROI endpoint provides standardized return metrics for your validators, broken down by execution layer (EL) and consensus layer (CL) contributions.
API Endpoint: This guide uses /api/v2/ethereum/validators/apy-roi for aggregated return metrics.
When to use APY/ROI vs BeaconScore: APY and ROI measure absolute returns in ETH/percentage terms. BeaconScore measures relative efficiency normalized for luck. For comparing validator performance across operators, BeaconScore is recommended.
See Also: For detailed per-epoch missed reward analysis (which duty types are causing losses), see Analyze Missed Rewards. This endpoint focuses on aggregated return metrics.
Understanding the Metrics
| Metric | Description |
| --- | --- |
| ROI | Return on Investment: actual return during the evaluation period (not annualized) |
| APY | Annual Percentage Yield: ROI extrapolated to a yearly rate |
Both metrics are calculated separately for:
| Component | Source |
| --- | --- |
| Execution Layer | Block proposals, MEV rewards, transaction fees |
| Consensus Layer | Attestations, sync committees, CL block rewards |
| Combined | Total return across both layers |
APY is an extrapolation based on the selected evaluation window. Past performance does not guarantee future results, especially for execution layer rewards, which are highly variable.
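The exact annualization method isn't documented here, but the example figures in this guide are consistent with simple linear extrapolation: ROI scaled by 365 over the window length. A minimal Python sketch, under that assumption:

# Assumed relationship (inferred from the example figures in this guide,
# not a documented formula): APY ≈ ROI * (365 / window_days).
def annualize(roi: float, window_days: int) -> float:
    return roi * 365 / window_days

# A 30-day combined ROI of 0.41% extrapolates to roughly 4.99% APY:
print(f"{annualize(0.0041, 30):.4f}")  # -> 0.0499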
Quick Start: Fetch APY & ROI
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/validators/apy-roi \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "validator": {
      "dashboard_id": 123
    },
    "range": {
      "evaluation_window": "30d"
    }
  }'
Response
{
  "data": {
    "execution_layer": {
      "roi": "0.00125",
      "apy": "0.0152"
    },
    "consensus_layer": {
      "roi": "0.00285",
      "apy": "0.0347"
    },
    "combined": {
      "roi": "0.0041",
      "apy": "0.0499"
    },
    "finality": "finalized"
  },
  "range": {
    "epoch": { "start": 407453, "end": 414202 },
    "timestamp": { "start": 1763285975, "end": 1765877974 }
  }
}
The response shows:
Execution Layer: 1.52% APY from block proposals/MEV
Consensus Layer: 3.47% APY from attestations/sync committees
Combined: 4.99% total APY (in this example, simply the sum of the EL and CL figures)
Evaluation Windows
| Window | Best For |
| --- | --- |
| 24h | Quick daily check (high variance) |
| 7d | Weekly reporting |
| 30d | Monthly reports, smoothed variance |
| 90d | Quarterly analysis |
| all_time | Lifetime performance since activation |
Longer evaluation windows provide more stable APY estimates. Short windows are heavily influenced by luck (block proposals, MEV).
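To see why, it helps to estimate how many proposals a single validator can even expect inside a window. The sketch below assumes mainnet's 12-second slots (7,200 slots per day) and a round one million active validators; the validator count is an illustrative assumption, not a live figure:

# Expected proposals per validator over a window: each slot picks one
# proposer (roughly uniformly for equal effective balances), so the
# expectation is about slots_in_window / active_validators.
SLOTS_PER_DAY = 7200           # 86,400 s / 12 s slots on mainnet
ACTIVE_VALIDATORS = 1_000_000  # assumed round figure for illustration

for window_days in (1, 7, 30, 90, 365):
    expected = SLOTS_PER_DAY * window_days / ACTIVE_VALIDATORS
    print(f"{window_days:>3}d: ~{expected:.3f} expected proposals")
# Over 24h a validator expects ~0.007 proposals, so the EL part of a
# short-window APY mostly reflects whether it happened to propose at all.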
Compare APY Across Groups
Compare returns across different validator groups or nodes:
import requests

API_KEY = "<YOUR_API_KEY>"
DASHBOARD_ID = 123

def get_apy_roi(dashboard_id: int, group_id: int | None = None, window: str = "30d"):
    """Fetch APY and ROI for validators."""
    payload = {
        "chain": "mainnet",
        "validator": {"dashboard_id": dashboard_id},
        "range": {"evaluation_window": window}
    }
    if group_id is not None:
        payload["validator"]["group_id"] = group_id
    response = requests.post(
        "https://beaconcha.in/api/v2/ethereum/validators/apy-roi",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json"
        },
        json=payload
    )
    return response.json()

# Compare groups
GROUPS = {
    "Node A": 1,
    "Node B": 2,
    "Node C": 3
}

print("30-Day APY Comparison")
print("=" * 60)
print(f"{'Group':<15} {'CL APY':>10} {'EL APY':>10} {'Total APY':>10}")
print("-" * 60)
for name, group_id in GROUPS.items():
    data = get_apy_roi(DASHBOARD_ID, group_id, "30d").get("data", {})
    cl_apy = float(data.get("consensus_layer", {}).get("apy", 0)) * 100
    el_apy = float(data.get("execution_layer", {}).get("apy", 0)) * 100
    total_apy = float(data.get("combined", {}).get("apy", 0)) * 100
    print(f"{name:<15} {cl_apy:>9.2f}% {el_apy:>9.2f}% {total_apy:>9.2f}%")
Example Output:

30-Day APY Comparison
============================================================
Group               CL APY     EL APY  Total APY
------------------------------------------------------------
Node A                3.47%      1.52%      4.99%
Node B                3.45%      1.89%      5.34%
Node C                3.48%      1.21%      4.69%
const API_KEY = '<YOUR_API_KEY>';
const DASHBOARD_ID = 123;

async function getApyRoi(dashboardId, groupId = null, window = '30d') {
  const payload = {
    chain: 'mainnet',
    validator: { dashboard_id: dashboardId },
    range: { evaluation_window: window }
  };
  if (groupId !== null) {
    payload.validator.group_id = groupId;
  }
  const response = await fetch(
    'https://beaconcha.in/api/v2/ethereum/validators/apy-roi',
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    }
  );
  return response.json();
}

async function compareGroups() {
  const groups = { 'Node A': 1, 'Node B': 2, 'Node C': 3 };
  console.log('30-Day APY Comparison');
  console.log('='.repeat(60));
  for (const [name, groupId] of Object.entries(groups)) {
    const data = (await getApyRoi(DASHBOARD_ID, groupId, '30d')).data || {};
    const clApy = (parseFloat(data.consensus_layer?.apy || 0) * 100).toFixed(2);
    const elApy = (parseFloat(data.execution_layer?.apy || 0) * 100).toFixed(2);
    const totalApy = (parseFloat(data.combined?.apy || 0) * 100).toFixed(2);
    console.log(`${name}: CL ${clApy}%, EL ${elApy}%, Total ${totalApy}%`);
  }
}

compareGroups();
Track APY Over Time
Monitor APY trends to detect changes in staking returns:
def track_apy_history(dashboard_id: int, windows: tuple = ("7d", "30d", "90d")):
    """Compare APY across evaluation windows (reuses get_apy_roi from above)."""
    print("APY by Evaluation Window")
    print("=" * 50)
    for window in windows:
        data = get_apy_roi(dashboard_id, window=window).get("data", {})
        combined = data.get("combined", {})
        apy = float(combined.get("apy", 0)) * 100
        roi = float(combined.get("roi", 0)) * 100
        print(f"{window:>5}: APY {apy:.2f}% (ROI: {roi:.4f}%)")

track_apy_history(dashboard_id=123)
Example Output:
APY by Evaluation Window
==================================================
   7d: APY 5.23% (ROI: 0.1001%)
  30d: APY 4.99% (ROI: 0.4100%)
  90d: APY 4.85% (ROI: 1.1940%)
EL vs CL Breakdown
Understand where your returns come from:
def analyze_yield_sources(dashboard_id: int, window: str = "30d"):
    """Analyze the breakdown of yield by source."""
    data = get_apy_roi(dashboard_id, window=window).get("data", {})
    cl = float(data.get("consensus_layer", {}).get("apy", 0)) * 100
    el = float(data.get("execution_layer", {}).get("apy", 0)) * 100
    total = cl + el
    cl_pct = (cl / total * 100) if total else 0
    el_pct = (el / total * 100) if total else 0
    print(f"Yield Source Analysis ({window})")
    print("=" * 40)
    print(f"Consensus Layer: {cl:.2f}% APY ({cl_pct:.1f}% of total)")
    print(f"Execution Layer: {el:.2f}% APY ({el_pct:.1f}% of total)")
    print(f"Total: {total:.2f}% APY")
    return {"cl_apy": cl, "el_apy": el, "cl_share": cl_pct, "el_share": el_pct}

analyze_yield_sources(dashboard_id=123, window="30d")
APY vs BeaconScore
| Metric | Use Case | Affected By |
| --- | --- | --- |
| APY | Absolute return calculation | Luck (proposals, MEV), network conditions |
| BeaconScore | Performance comparison | Only validator behavior (normalized for luck) |
A validator with lower APY might still have a higher BeaconScore if they were unlucky with block proposals. BeaconScore removes luck from the equation, making it ideal for comparing operator performance.
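A rough numeric sketch of that effect, using reward sizes chosen to mirror the Quick Start response (the amounts are assumptions, and this is not the BeaconScore formula): take two validators that attest identically over 30 days, only one of which is assigned a block proposal.

# Illustrative only: reward sizes are assumed, not documented values.
STAKE_ETH = 32
cl_rewards_eth = 0.0912  # identical 30d attestation rewards for both validators
proposal_eth = {"with proposal": 0.04, "no proposal": 0.0}

for name, extra in proposal_eth.items():
    roi = (cl_rewards_eth + extra) / STAKE_ETH
    apy = roi * 365 / 30  # same linear extrapolation as sketched earlier
    print(f"{name}: 30d ROI {roi:.4%}, APY {apy:.2%}")
# Duties were performed identically; only proposal luck differs, so APY
# diverges (~4.99% vs ~3.47%) while a luck-normalized score would not.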
Best Practices
Use 30d+ Windows: Short windows are noisy. Use 30 days or longer for meaningful APY estimates.
Track EL Separately: EL rewards (MEV) are highly variable. Track them separately from CL rewards.
Compare Apples to Apples: When comparing validators, ensure they use the same MEV-boost configuration.
Use BeaconScore for Ranking: For comparing validator quality, BeaconScore is more reliable than APY.