
#Pruning the Execution Client

NOTE

This is meant for Geth and Nethermind users. Besu does not need to be pruned.

If you use Geth or Nethermind as your primary Execution client, you will likely notice that your node's free disk space slowly decreases over time. The Execution client is by far the biggest contributor to this; depending on how much RAM you allocated to its cache during rocketpool service config, it can grow at a rate of several gigabytes per day!

To handle this, Execution clients provide a special function called pruning that lets them scan and clean up their database safely to reclaim some free space. Every node operator using Geth or Nethermind will have to prune it eventually.

If you have a 2 TB SSD, you can usually go for months between rounds of pruning. If you have a 1 TB SSD, you will have to prune more frequently.

If you have the Grafana dashboard enabled, a good rule of thumb is to start thinking about pruning your Execution client when your node's used disk space exceeds 80%.
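
If you don't run Grafana, a quick command-line check works too. The sketch below uses only standard tools and makes a few assumptions: df is pointed at the root filesystem (adjust the path if your client data lives on a separate drive), and the Docker volume name is assumed to contain eth1clientdata, which matches the default Docker-mode volume shown later on this page:

df -h /
docker system df -v | grep eth1clientdata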

When you decide that it's time, the Smartnode can prune the Execution client for you upon request. Read below to learn how it works, and what to expect.

NOTE

Pruning your Execution client is only possible in Docker Mode.

If you use your own Execution client, such as an external client in Hybrid mode or Native mode, you cannot use the Smartnode to prune the Execution client. You will need to do it manually. Please refer to the documentation for your Execution client to learn how to prune it.
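
For reference, Geth exposes its offline pruning through the snapshot prune-state subcommand. The sketch below is only a hedged illustration for an externally-managed Geth in Hybrid or Native mode: the systemd unit name and data directory are assumptions, you must stop Geth before pruning, and newer Geth versions that use the path-based state scheme prune continuously and do not need this step at all, so check the Geth documentation for your version first.

# Stop the externally-managed Geth service (unit name "geth" is an assumption)
sudo systemctl stop geth

# Offline state prune for the hash-based database scheme; the --datadir path is an assumption
geth snapshot prune-state --datadir /var/lib/goethereum

# Restart Geth once pruning has finished
sudo systemctl start geth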

#Prerequisites

Select the client you're using from the tabs below.

Geth
Nethermind

Pruning Geth means taking the primary Execution client offline so it can clean itself up. When this happens, the Smartnode (and your Consensus client) will need some other way to access the ETH1 chain in order to function properly.

The easiest way to provide this is with a fallback node. If you have already configured a fallback node using rocketpool service config, the Smartnode will automatically switch over to it when your Geth container goes down for maintenance, and it will point your Consensus client at the fallback as well.

WARNING

If you don't have a fallback node configured, your node will stop validating during the pruning process. It will miss all attestations and block proposals until pruning is finished and Geth has resynced with the network. You will leak ETH due to these missed duties during this time!

With that in mind, the following two conditions are required to successfully prune Geth:

  • A working fallback node configured
  • At least 50 GB of free space remaining on your SSD

#Starting a Prune

Select the client you're using from the tabs below.

Geth
Nethermind

When you want to prune Geth, simply run this command:

rocketpool service prune-eth1

If you do not have a fallback client pair enabled, you will receive the following warning:

This will shut down your main execution client and prune its database, freeing up disk space.
Once pruning is complete, your execution client will restart automatically.

You do not have a fallback execution client configured.
Your node will no longer be able to perform any validation duties (attesting or proposing blocks) until Geth is done pruning and has synced again.
Please configure a fallback client with `rocketpool service config` before running this.
Are you sure you want to prune your main execution client? [y/n]

If you do have one enabled, you will see the following prompt instead:

This will shut down your main execution client and prune its database, freeing up disk space.
Once pruning is complete, your execution client will restart automatically.

You have fallback clients enabled. Rocket Pool (and your consensus client) will use that while the main client is pruning.
Are you sure you want to prune your main execution client? [y/n]

If you accept, you'll see a few details as the Smartnode prepares things; it should end with a success message:

Are you sure you want to prune your main ETH1 client? [y/n]
y

Your disk has 303 GiB free, which is enough to prune.
Stopping rocketpool_eth1...
Provisioning pruning on volume rocketpool_eth1clientdata...
Restarting rocketpool_eth1...

Done! Your main ETH1 client is now pruning. You can follow its progress with `rocketpool service logs eth1`.
Once it's done, it will restart automatically and resume normal operation.
NOTE: While pruning, you **cannot** interrupt the client (e.g. by restarting) or you risk corrupting the database!
You must let it run to completion!

With that, Geth is now pruning and you're all set! You can follow its progress with:

rocketpool service logs eth1

Once it's done pruning, it will restart automatically and the Smartnode will resume using it again instead of your fallback.
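
If you'd like to confirm that disk space is actually being reclaimed while you wait, you can also poll free space periodically. This is just a generic watch/df loop and assumes your client data lives on the root filesystem; adjust the path if yours doesn't:

watch -n 60 df -h /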