The All Core Devs Consensus (ACDC) call #151, held on February 20, 2025, provided crucial updates on devnet stability, testnet readiness, attestation handling, and upcoming network upgrades. The discussions also covered key technical challenges, including PeerDAS devnet testing, blob transaction scalability, mempool efficiency, and validator hardware requirements.

Pectra Updates

Devnet 6 was reported as stable, with about 97% participation across clients. All clients are now updated to their latest versions, and no significant new changes were reported. The network appeared to be in good health, with most processes running smoothly. However, there were concerns about a builder-related issue in the Flashbots setup, which required further debugging.

Another major point of discussion revolved around attestation aggregation, particularly how different clients handle attestations. Client diversity was considered a significant advantage here: if one client fails to include an attestation, another may still pick it up. There was ongoing debate on the best way to manage these attestations across different slots while maintaining efficiency.
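
To make the trade-off concrete, here is a minimal Python sketch of the aggregation rule in question; it is an illustration, not any client's actual code. Two aggregates over the same AttestationData can only be merged when their participation bitfields are disjoint:

```python
# Minimal sketch (not any client's actual code) of attestation aggregation:
# two aggregates with identical AttestationData can be merged only if their
# participation bitfields do not overlap.
from dataclasses import dataclass
from typing import List

@dataclass
class Attestation:
    data_root: bytes   # hash of the AttestationData (slot, target, ...)
    bits: int          # participation bitfield packed into an int
    # BLS aggregate signature omitted in this sketch

def try_merge(a: Attestation, b: Attestation) -> Attestation | None:
    """Merge two aggregates if they attest to the same data with disjoint bits."""
    if a.data_root != b.data_root:
        return None            # different vote, cannot aggregate
    if a.bits & b.bits:
        return None            # overlapping participants, unsafe to merge
    # Real clients would also aggregate the BLS signatures here.
    return Attestation(a.data_root, a.bits | b.bits)

def pack_greedily(atts: List[Attestation]) -> List[Attestation]:
    """Greedy packing: fold each attestation into the first compatible aggregate."""
    packed: List[Attestation] = []
    for att in atts:
        for i, agg in enumerate(packed):
            merged = try_merge(agg, att)
            if merged is not None:
                packed[i] = merged
                break
        else:
            packed.append(att)
    return packed
```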

PeerDAS Updates

The PeerDAS discussion centered on execution-layer (EL) changes and EOF (Ethereum Object Format) testing. One of the primary issues raised was that almost all participants in Devnet 4 were running Geth as their execution client. This raised concerns about a lack of client diversity, as having a single EL client across the network could introduce centralization risks and fail to uncover bugs in other clients.

Another major topic was whether EOF should be tested in conjunction with PeerDAS or temporarily turned off and tested separately. This led to a discussion on whether the current EL clients had an easy mechanism to disable EOF, and it was noted that this could be done with a simple two-line change in client code.
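
For illustration, a hedged sketch of what such a "two-line" toggle could look like; the flag and function names below are hypothetical and not taken from any real client:

```python
# Hypothetical sketch of the "two-line" toggle mentioned on the call: gating
# EOF validation behind a chain-config flag so a devnet can run with it off.
# Field and function names are illustrative, not from any real client.
from dataclasses import dataclass

@dataclass
class ChainConfig:
    eof_enabled: bool = True   # flip to False to run a devnet without EOF

EOF_MAGIC = b"\xef\x00"        # prefix identifying EOF-formatted bytecode

def accept_contract_code(cfg: ChainConfig, code: bytes) -> bool:
    """Reject EOF-formatted deployments when the feature flag is off."""
    if code.startswith(EOF_MAGIC) and not cfg.eof_enabled:
        return False           # treat EOF code as invalid on this network
    return True                # legacy bytecode (or EOF with the flag on) passes
```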

One of the main discussions concerned how KZG proof computation for blobs would shift from the consensus layer (CL) to the EL, allowing for more efficient verification and transaction processing. However, this transition required modifications to how blob transactions were gossiped across the network.
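
A rough sketch of that flow, using the helper name from the EIP-7594 consensus specs (compute_cells_and_kzg_proofs); how an EL client wires this into transaction gossip is an assumption here, not a finalized design:

```python
# Sketch of the CL-to-EL proof shift. compute_cells_and_kzg_proofs is the
# EIP-7594 consensus-specs helper; the wrapping function below is an
# illustrative assumption, not a finalized gossip format.
from typing import List, Tuple

Blob = bytes        # 4096 field elements in the real spec
Cell = bytes        # one of the 128 extended-data columns per blob
KZGProof = bytes    # proof that a cell belongs to the blob's commitment

def compute_cells_and_kzg_proofs(blob: Blob) -> Tuple[List[Cell], List[KZGProof]]:
    """Spec-defined crypto helper; a real client calls into a KZG library here."""
    raise NotImplementedError

def wrap_blob_tx_for_gossip(tx: bytes, blobs: List[Blob]) -> dict:
    """EL-side wrapping: attach per-cell proofs instead of one proof per blob,
    so CL nodes can verify sampled columns without recomputing anything."""
    all_proofs = []
    for blob in blobs:
        cells, proofs = compute_cells_and_kzg_proofs(blob)
        all_proofs.append(proofs)   # 128 cell proofs per blob, vs. 1 pre-PeerDAS
    return {"tx": tx, "blobs": blobs, "cell_proofs": all_proofs}
```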

A critical question was whether a new transaction type should be introduced for PeerDAS or whether the existing blob transaction type (EIP-4844, type 0x03) should be repurposed. Some developers preferred introducing a separate transaction type, as this would make it easier to track and debug issues. Others argued that repurposing the existing type and handling the new proof structures with network-level modifications would be a cleaner solution.
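
A sketch of the repurposing option: keep type 0x03 and signal the new proof layout with a wrapper version byte on the networking side. The encoding below is purely illustrative, not a finalized wire format:

```python
# Sketch of the "repurpose type-0x03" option: keep the transaction type and
# distinguish proof layouts via a wrapper version byte in the pooled
# (network-level) encoding. Illustrative only, not a finalized wire format.
BLOB_TX_TYPE = 0x03

def decode_pooled_blob_tx(payload: bytes) -> dict:
    assert payload[0] == BLOB_TX_TYPE, "not a blob transaction"
    wrapper_version = payload[1]
    if wrapper_version == 0:
        # Pre-PeerDAS wrapper: tx || blobs || commitments || one proof per blob
        return {"format": "legacy", "body": payload[2:]}
    elif wrapper_version == 1:
        # PeerDAS wrapper: tx || blobs || commitments || cell proofs per blob
        return {"format": "cells", "body": payload[2:]}
    raise ValueError(f"unknown blob wrapper version {wrapper_version}")
```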

Another topic discussed was how roll-up protocols (Layer 2 solutions) would adapt to the new blob transaction format. Today, most roll-ups use the existing blob transaction type, and it was unclear if they would need to modify their transaction validation mechanisms to align with PeerDAS changes. Some developers suggested discussing this issue in a future roll-up call to ensure that all Layer 2 projects had adequate time to prepare.

To ensure a smooth transition for PeerDAS, a readiness checklist was created, outlining the critical steps needed before it could be fully deployed.

Blob Count Adjustments in Fulu

The discussion on blob count adjustments in Fulu revolved around how the network should approach scaling blob transactions. The original plan for Fulu included an average blob count of 12, significantly below the network’s theoretical limits. This conservative number was chosen to ensure stability during early testing. However, as developers gained confidence in the system, there was growing pressure to increase this number to take advantage of full network capacity.

Theoretically, if the network were to scale fully, the maximum blob count would be roughly eight times the Pectra target of 6 blobs, i.e. around 48 blobs per slot. However, concerns were raised about infrastructure readiness, particularly regarding node storage, bandwidth requirements, and mempool behavior.

One of the proposed solutions was the introduction of Blob Parameter Only (BPO) forks, which would allow for blob count adjustments without requiring a full hard fork. This concept was attractive because it would reduce the complexity of network upgrades while still allowing developers to gradually increase blob capacity based on performance data.
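
Conceptually, a BPO schedule could be as simple as a config table consulted at runtime, as in this sketch (all epoch numbers and blob counts below are made up for illustration):

```python
# Sketch of how a Blob Parameter Only (BPO) schedule could live in client
# config: a list of (activation_epoch, target, max) entries consulted at
# runtime, so raising blob counts needs a config change, not a hard fork.
# The schedule values below are made up for illustration.
BLOB_SCHEDULE = [
    # (activation_epoch, target_blobs, max_blobs)
    (0,       6,  9),    # Pectra-era parameters
    (350_000, 12, 16),   # first hypothetical BPO bump
    (420_000, 24, 32),   # second hypothetical BPO bump
]

def blob_params_at(epoch: int) -> tuple[int, int]:
    """Return the (target, max) blob counts active at the given epoch."""
    target, max_blobs = BLOB_SCHEDULE[0][1], BLOB_SCHEDULE[0][2]
    for activation, t, m in BLOB_SCHEDULE:
        if epoch >= activation:
            target, max_blobs = t, m
    return target, max_blobs
```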

There was also a discussion on whether the community should use on-chain governance to determine blob count adjustments, similar to how Ethereum’s gas limit is managed. However, most developers preferred a simpler config-based approach, as it would be easier to implement and would avoid adding unnecessary governance mechanisms.

Ultimately, the group agreed that Fulu should gradually scale up to 48 blobs per slot, with developers monitoring performance along the way. If issues arose, the BPO fork mechanism could be used to fine-tune blob parameters without requiring a major network upgrade.

Mempool & Validator Considerations

One of the main discussions in this section revolved around the behavior of the mempool and how validators should handle the propagation of blobs. The mempool plays a critical role in transaction inclusion, but as Ethereum scales, its limitations become more apparent, especially in relation to blob transactions. The challenge is ensuring that blobs are efficiently propagated and included in blocks without causing network congestion.

A key concern was whether all validators should be required to have access to all blobs at the time of block proposal. Some developers argued that if the system assumes every validator has all blobs in its local mempool, it contradicts the entire purpose of distributed data availability: the point of data availability sampling, the basis of Ethereum's sharding roadmap, is precisely that validators should not need to store or download the full dataset.

An alternative proposal suggested introducing a throttling mechanism where the mempool would limit the number of blobs that each validator receives. This would prevent unnecessary congestion while still ensuring that enough blobs are available for block production. However, some concerns were raised about whether such throttling could lead to uneven blob distribution, potentially resulting in inefficiencies in block production.
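
A minimal sketch of such a throttle, assuming a per-slot blob budget; the structure and thresholds here are illustrative assumptions, not a proposed spec:

```python
# Sketch of the throttling idea: cap how many blobs a node admits to its
# local mempool per slot-length window, so no validator is assumed to hold
# every blob. Thresholds and structure are assumptions for illustration.
import time
from collections import deque

class BlobThrottle:
    def __init__(self, max_blobs_per_window: int = 64, window_secs: float = 12.0):
        self.max_blobs = max_blobs_per_window   # budget per slot-length window
        self.window = window_secs
        self.arrivals = deque()                 # timestamps of admitted blobs

    def admit(self, blob_count: int) -> bool:
        """Admit a blob tx carrying `blob_count` blobs if the budget allows."""
        now = time.monotonic()
        while self.arrivals and now - self.arrivals[0] > self.window:
            self.arrivals.popleft()             # drop arrivals outside the window
        if len(self.arrivals) + blob_count > self.max_blobs:
            return False                        # over budget: leave it to other peers
        self.arrivals.extend([now] * blob_count)
        return True
```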

There was also a broader discussion on how different clients handle blobs. Each Ethereum client has its own method of propagating transactions, and differences in implementation could lead to inconsistencies in blob availability across the network. Some developers proposed creating a standardized approach for blob propagation to ensure fairness and reliability.

Another major issue discussed was private vs. public mempools. In Ethereum, some transactions and blobs are propagated through public mempools, while others are distributed through private relays or builder networks. This raised the question: if most blobs end up in private mempools, how does that impact validators who rely on public data? It was noted that private mempools could create a situation where only certain validators have access to specific blobs, leading to potential centralization risks.

To address this, some developers suggested requiring all blobs to be made publicly available within a specific time frame before block proposals. This would ensure that all validators have equal access to the data, rather than giving an advantage to those with private connections.

The discussion concluded with a recognition that the current mempool structure might not be sufficient for handling blobs efficiently, and further research was needed to explore alternative solutions. Some developers suggested using the upcoming testnets to experiment with different blob propagation methods to determine the most effective approach.

Hardware Requirements for Validators & Nodes

The discussion on hardware requirements was particularly important because it impacts the cost of running an Ethereum validator or full node. The key question was: what should be the minimum hardware specifications needed to operate effectively, and how do we define a sustainable cost model?

One of the primary considerations was how much storage, CPU power, and bandwidth a validator or full node should require. The Ethereum network has grown significantly, and running a full node now requires considerably more resources than in the past. However, setting the requirements too high could lead to centralization, as only well-funded participants would be able to afford the necessary hardware.

To ensure that validators remain accessible to a broad audience, some developers proposed setting a cost benchmark based on validator profitability. The idea was that a validator should be able to recover the cost of its hardware within 6 to 12 months through staking rewards. However, this raised additional concerns about the volatility of Ethereum staking returns.
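
As a back-of-the-envelope check of that benchmark (all inputs here, including the APR and ETH price, are illustrative assumptions rather than figures from the call):

```python
# Back-of-the-envelope version of the 6-to-12-month benchmark. All inputs
# (hardware cost, APR, ETH price) are assumptions for illustration only.
def payback_months(hardware_usd: float, stake_eth: float,
                   apr: float, eth_usd: float) -> float:
    """Months to recover hardware cost from staking rewards at a given APR."""
    yearly_rewards_usd = stake_eth * apr * eth_usd
    return hardware_usd / (yearly_rewards_usd / 12)

# Example: a $1,000 machine, one 32 ETH validator, 3% APR, ETH at $2,500
# -> 32 * 0.03 * 2500 = $2,400/year, so ~5 months to break even.
print(round(payback_months(1_000, 32, 0.03, 2_500), 1))  # ~5.0
```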

A significant factor in this discussion was the impact of liquid staking. Liquid staking allows users to stake their ETH while retaining liquidity, meaning they can still trade or use their funds elsewhere. Some developers argued that as liquid staking draws more ETH into staking, per-validator rewards will naturally decline (issuance per validator falls as total stake rises), making it harder for validators to recover their hardware costs within the proposed 6-to-12-month timeframe.

Another concern was the potential for network centralization due to expensive hardware requirements. If only large staking providers can afford to run validators, it could lead to a concentration of power among a small number of operators. To counter this, some suggested establishing a baseline hardware requirement that remains accessible to individual validators.

The debate also touched on whether hardware costs should be denominated in USD or ETH. Some developers believed that setting a fixed USD cost limit would be more practical, as it provides a stable benchmark regardless of ETH price fluctuations. Others argued that ETH-based pricing makes more sense, as Ethereum staking rewards are paid in ETH, and the economics should be based on network incentives rather than fiat currency.

One of the key proposals was to cap hardware costs at approximately $1,000, ensuring that running a validator remains affordable. However, this proposal was met with some skepticism, as hardware requirements are likely to increase over time. Some developers suggested instead focusing on performance benchmarks rather than price, ensuring that hardware requirements are dictated by network efficiency rather than arbitrary cost limits.

The discussion ended with a general consensus that Ethereum needs to find a balance between performance, decentralization, and accessibility. Some members suggested creating a formal working group to further refine the hardware requirements and conduct real-world benchmarking to determine the best approach.
