
ethereum/pm 399

Project Management: Meeting notes and agenda items

vbuterin/bitcoinjs-lib 88

Bitcoin-related functions implemented in pure JavaScript

vbuterin/cult-of-craig 42

Facts about CSW's Involvement in Bitcoin

vbuterin/btckeysplit 36

Bitcoin private key splitter

vbuterin/2fawallet 32

Two-factor authentication wallet with multisig

vbuterin/go-ethereum 14

Official Go implementation of the Ethereum protocol

vbuterin/DAO 13

The Standard DAO Framework, inc. Whitepaper

vbuterin/fdtree 6

Fixed-depth tree. Uses a novel strategy for a tree data structure to achieve a red-black-tree-style hard depth limit.

issue comment ethereum/eth2.0-specs

Unexpected multi-deposit incentive

The main solutions I see are:

  1. Change the increment/decrement thresholds to x-0.25 and x+0.75
  2. Allow deposits up to 32.5 ETH instead of 32.
JustinDrake

comment created time in 19 days

Pull request review comment ethereum/EIPs

Eip1559 updates

```diff
 Ethereum currently prices transaction fees using a simple auction mechanism, whe
 * **Inefficiencies of first price auctions**: see https://ethresear.ch/t/first-and-second-price-auctions-and-improved-transaction-fee-markets/2410 for a detailed writeup. In short, the current approach, where transaction senders publish a transaction with a fee, miners choose the highest-paying transactions, and everyone pays what they bid, is well-known in mechanism design literature to be highly inefficient, and so complex fee estimation algorithms are required, and even these algorithms often end up not working very well, leading to frequent fee overpayment. See also https://blog.bitgo.com/the-challenges-of-bitcoin-transaction-fee-estimation-e47a64a61c72 for a Bitcoin core developer's description of the challenges involved in fee estimation in the status quo.
 * **Instability of blockchains with no block reward**: in the long run, blockchains where there is no issuance (including Bitcoin and Zcash) at present intend to switch to rewarding miners entirely through transaction fees. However, there are [known results](http://randomwalker.info/publications/mining_CCS.pdf) showing that this likely leads to a lot of instability, incentivizing mining "sister blocks" that steal transaction fees, opening up much stronger selfish mining attack vectors, and more. There is at present no good mitigation for this.
-The proposal in this EIP is to start with a BASEFEE amount which is adjusted up and down by the protocol based on how congested the network is. To accommodate this system, the network capacity would be increased to 16 million gas, so that 50% utilization matches up with our current 8 million gas limit. Then, when the network is at >50% capacity, the BASEFEE increments up slightly and when capacity is at <50%, it decrements down slightly. Because these increments are constrained, the maximum difference in BASEFEE from block to block is predictable. This then allows wallets to auto-set the gas fees for users in a highly reliable fashion. It is expected that most users will not have to manually adjust gas fees, even in periods of high network activity. For most users, the BASEFEE will be automatically set by their wallet, along with the addition of a small fixed amount, called a ‘tip’, to compensate miners (e.g. 0.5 gwei).
+The proposal in this EIP is to start with a base fee amount which is adjusted up and down by the protocol based on how congested the network is. To accommodate this system, the total network capacity would be increased to 16 million gas. When the network exceeds the target 10 million gas usage, the base fee increments up slightly and when capacity is below the target, it decrements down slightly. Because these increments are constrained, the maximum difference in base fee from block to block is predictable. This then allows wallets to auto-set the gas fees for users in a highly reliable fashion. It is expected that most users will not have to manually adjust gas fees, even in periods of high network activity. For most users post 1559 implementation the base fee will be estimated by their wallet and a small gas premium- which acts as a 'tip' to compensate miners (e.g. 0.5 gwei)- will be automatically set. Users can also manually set the transaction fee cap to bound their total costs.
+
+An important aspect of this upgraded fee system is that miners only get to keep the tips. The base fee is always burned (i.e. it is destroyed by the protocol). Burning this is important because it prevents miners from manipulating the fee in order to extract more fees from users. It also ensures that only ETH can ever be used to pay for transactions on Ethereum, cementing the economic value of ETH within the Ethereum platform. Additionally, this burn counterbalances Ethereum inflation without greatly diminishing miner rewards.
-An important aspect of this upgraded fee system is that miners only get to keep the tips. The BASEFEE is always burned (i.e. it is destroyed by the protocol). Burning this is important because it prevents miners from manipulating the fee in order to extract more fees from users. It also ensures that only ETH can ever be used to pay for transactions on Ethereum, cementing the economic value of ETH within the Ethereum platform.
+The transition to this gas price system will occur in two phases, in the first phase both legacy and EIP1559 transactions will be accepted by the protocol. Over the course of this first phase the amount of gas available for processing legacy transactions will decrease while the amount of gas available for processing EIP1559 transactions will increase, moving gas from the legacy pool into the EIP1559 pool until the legacy pool is depleted and the EIP1559 pool contains the entire gas maximum. After all of the gas has transitioned to the EIP1559 pool, the second- finalized- phase is entered and legacy transactions will no longer be accepted on the network.
 
 ## Specification
 <!--The technical specification should describe the syntax and semantics of any new feature. The specification should be detailed enough to allow competing, interoperable implementations for any of the current Ethereum platforms (go-ethereum, parity, cpp-ethereum, ethereumj, ethereumjs, and [others](https://github.com/ethereum/wiki/wiki/Clients)).-->
 **Parameters**
-* `FORK_BLKNUM`: TBD
+* `INITIAL_FORK_BLKNUM`: TBD
 * `BASEFEE_MAX_CHANGE_DENOMINATOR`: 8
-* `SLACK_COEFFICIENT`: 3
-* `TARGET_GASUSED`: 8,000,000
+* `TARGET_GAS_USED`: 10,000,000
+* `MAX_GAS_EIP1559`: 16,000,000
+* `FINAL_FORK_BLKNUM`: `INITIAL_FORK_BLKNUM + (MAX_GAS_EIP1559 / (10 * SLACK_COEFFICIENT))`
+* `EIP1559_GAS_INCREMENT_AMOUNT`: `(MAX_GAS_EIP1559 / 2) / (FINAL_FORK_BLKNUM - INITIAL_FORK_BLKNUM)`
+* `INITIAL_BASEFEE`: 1,000,000,000 wei (1 gwei)
 
 **Proposal**
-For all blocks where `block.number >= FORK_BLKNUM`:
-
-* Impose a hard in-protocol gas limit of `SLACK_COEFFICIENT * TARGET_GASUSED`, used instead of the gas limit calculated using the previously existing formulas
-* Replace the `GASLIMIT` field in the block header with a BASEFEE field (the same field can be used)
-* Let `PARENT_BASEFEE` be the parent block's `BASEFEE` (or 1 billion wei if `block.number == FORK_BLKNUM`). A valid `BASEFEE` is one such that `abs(BASEFEE - PARENT_BASEFEE) <= max(1, PARENT_BASEFEE // BASEFEE_MAX_CHANGE_DENOMINATOR)`
-* Redefine the way the `tx.gasprice` field is used: define `tx.fee_premium = tx.gasprice // 2**128` and `tx.fee_cap = tx.gasprice % 2**128`
-* During transaction execution, we calculate the cost to the `tx.origin` and the gain to the `block.coinbase` as follows:
-  * Let `gasprice = min(BASEFEE + tx.fee_premium, tx.fee_cap)`. The `tx.origin` initially pays `gasprice * tx.gas`, and gets refunded `gasprice * (tx.gas - gasused)`.
-  * The `block.coinbase` gains `(gasprice - BASEFEE) * gasused`. If `gasprice < BASEFEE` (due to the `fee_cap`), this means that the `block.coinbase` _loses_ funds from this operation; in this case, check that the post-balance is non-negative and throw an exception if it is negative.
-As a default strategy, miners set `BASEFEE` as follows. Let `delta = block.gas_used - TARGET_GASUSED` (possibly negative). Set `BASEFEE = PARENT_BASEFEE + PARENT_BASEFEE * delta // TARGET_GASUSED // BASEFEE_MAX_CHANGE_DENOMINATOR`, clamping this result inside of the allowable bounds if needed (with the parameter setting above clamping will not be required).
+For all blocks where `block.number >= INITIAL_FORK_BLKNUM`:
+
+For the gas limit:
+
+* `MAX_GAS_EIP1559` acts as the hard in-protocol gas limit, instead of the gas limit calculated using the previously existing formulas
+* The `GASLIMIT` field in the block header is the gas limit for the EIP1559 gas pool, and over the transition period this value increases until it reaches `MAX_GAS_EIP1559` at `FINAL_FORK_BLKNUM`
+* The gas limit for the legacy gas pool is `MAX_GAS_EIP1559 - GASLIMIT`, as `GASLIMIT` increases towards `MAX_GAS_EIP1559` gas is moved from the legacy pool into the EIP1559 pool until all of the gas is in the EIP1559 pool
+* At `block.number == INITIAL_FORK_BLKNUM`, let `GASLIMIT = (MAX_GAS_EIP1559 / 2)` so that the gas maximum is split evenly between the legacy and EIP1559 gas pools
+* As `block.number` increases towards `FINAL_FORK_BLKNUM`, at every block we shift `EIP1559_GAS_INCREMENT_AMOUNT` from the legacy pool into the EIP1559 gas pool
+* At `block.number >= FINAL_FORK_BLKNUM` the entire `MAX_GAS_EIP1559` is assigned to the EIP1559 gas pool and the legacy pool is empty
+* We enforce a per-transaction gas limit of 8,000,000.
+
+For the gas price:
+
+* We add a new field to the block header, `BASEFEE`
+  * `BASEFEE` is maintained under consensus by the ethash engine
+* At `block.number == INITIAL_FORK_BLKNUM` we set `BASEFEE = INITIAL_BASEFEE`
+* As a default strategy, miners set `BASEFEE` as follows.
+  * Let `delta = block.gas_used - TARGET_GASUSED` (possibly negative).
+  * Set `BASEFEE = PARENT_BASEFEE + PARENT_BASEFEE * delta // TARGET_GASUSED // BASEFEE_MAX_CHANGE_DENOMINATOR`
+  * Clamp the resulting `BASEFEE` inside of the allowable bounds if needed, where a valid `BASEFEE` is one such that `abs(BASEFEE - PARENT_BASEFEE) <= max(1, PARENT_BASEFEE // BASEFEE_MAX_CHANGE_DENOMINATOR)`
```
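As a rough illustration, the default BASEFEE-setting strategy and clamping rule quoted above boil down to the following minimal sketch (using the parameter values from this revision and Python floor division; the function and variable names are illustrative, not taken from the EIP text):

```python
TARGET_GAS_USED = 10_000_000
BASEFEE_MAX_CHANGE_DENOMINATOR = 8

def next_basefee(parent_basefee: int, block_gas_used: int) -> int:
    # delta may be negative when the block is below the usage target
    delta = block_gas_used - TARGET_GAS_USED
    basefee = parent_basefee + parent_basefee * delta // TARGET_GAS_USED // BASEFEE_MAX_CHANGE_DENOMINATOR
    # Clamp so that abs(BASEFEE - PARENT_BASEFEE) <= max(1, PARENT_BASEFEE // BASEFEE_MAX_CHANGE_DENOMINATOR)
    max_change = max(1, parent_basefee // BASEFEE_MAX_CHANGE_DENOMINATOR)
    return max(parent_basefee - max_change, min(parent_basefee + max_change, basefee))

# A completely full 16M-gas block moves a 1 gwei base fee up by 7.5%,
# within the 12.5% per-block bound, so no clamping is needed here.
assert next_basefee(1_000_000_000, 16_000_000) == 1_075_000_000
```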

Are we letting miners adjust the basefee? I thought we were forcing the specific change in the protocol?

i-norden

comment created time in 20 days

Pull request review comment ethereum/EIPs

Eip1559 updates

```diff
+For the gas limit:
+
+* `MAX_GAS_EIP1559` acts as the hard in-protocol gas limit, instead of the gas limit calculated using the previously existing formulas
+* The `GASLIMIT` field in the block header is the gas limit for the EIP1559 gas pool, and over the transition period this value increases until it reaches `MAX_GAS_EIP1559` at `FINAL_FORK_BLKNUM`
+* The gas limit for the legacy gas pool is `MAX_GAS_EIP1559 - GASLIMIT`, as `GASLIMIT` increases towards `MAX_GAS_EIP1559` gas is moved from the legacy pool into the EIP1559 pool until all of the gas is in the EIP1559 pool
+* At `block.number == INITIAL_FORK_BLKNUM`, let `GASLIMIT = (MAX_GAS_EIP1559 / 2)` so that the gas maximum is split evenly between the legacy and EIP1559 gas pools
+* As `block.number` increases towards `FINAL_FORK_BLKNUM`, at every block we shift `EIP1559_GAS_INCREMENT_AMOUNT` from the legacy pool into the EIP1559 gas pool
+* At `block.number >= FINAL_FORK_BLKNUM` the entire `MAX_GAS_EIP1559` is assigned to the EIP1559 gas pool and the legacy pool is empty
+* We enforce a per-transaction gas limit of 8,000,000.
```

8,000,000 should be a parameter, no?

i-norden

comment created time in 20 days

Pull request review comment ethereum/EIPs

Eip1559 updates

```diff
 **Parameters**
-* `FORK_BLKNUM`: TBD
+* `INITIAL_FORK_BLKNUM`: TBD
 * `BASEFEE_MAX_CHANGE_DENOMINATOR`: 8
-* `SLACK_COEFFICIENT`: 3
-* `TARGET_GASUSED`: 8,000,000
+* `TARGET_GAS_USED`: 10,000,000
+* `MAX_GAS_EIP1559`: 16,000,000
+* `FINAL_FORK_BLKNUM`: `INITIAL_FORK_BLKNUM + (MAX_GAS_EIP1559 / (10 * SLACK_COEFFICIENT))`
```

Slack coefficient was removed, but is still used here.

Also, why a slack coefficient of only 1.6?

i-norden

comment created time in 20 days

issue comment ethereum/eth2.0-specs

Dual-key voluntary exits

Another option would be for the hot key to just pre-sign a VoluntaryExit message.

JustinDrake

comment created time in a month

PR opened ethereum/eth2.0-specs

Per-committee anti-correlation penalties

The basic rationale to have per-committee anti-correlation penalties is that in phases 1+, there is a lot of harm that can be caused by even a single slashed committee, and so the harm from multiple validators acting harmfully is best viewed as a percentage of the committees that those validators were part of, and not a percentage of the whole validator set.

This is a maximally simple proposal for accomplishing this.

Note that there may be better ways of doing this, for example identifying for every slashing that uses an attestation what committee that attestation was for, and scanning through that entire committee at that time, removing the need for an independent object. Also, this code should be moved into phase 1 after the merge of the phase 1 PR.

+27 -16

0 comment

1 changed file

pr created time in a month

create branch ethereum/eth2.0-specs

branch : vbuterin-patch-3

created branch time in a month

pull request comment ethereum/EIPs

ERC: Secret Multisig Recovery

In private chats I proposed an even simpler version; I'll copy it here as well for more public discussion.

Hash tree structure

leaves = [(address, weight, is_ens, hash_to_peer1), (address, weight, is_ens, hash_to_peer2) ...]
leaf_hashes = [hash(leaves[0]), hash(leaves[1]) ...]
leaf_root = hash(leaf_hashes)
commitment_hash = hash(leaf_root, execute_hash)

SETUP PROCEDURE (Unchanged from above)

  • The owner of W determines the (address, weight, is_ens) values and computes a private_hash
  • The owner of W computes commitment_hash. W calls R's method register, passing commitment_hash as an argument. R saves self.commitments[W] = commitment_hash in its storage

RECOVERY PROCEDURE

  • Each participant submits the full contents of their leaf along with secret_call (hash of calldata + destination + execute_hash) and a signature to R's submit_leaf method. Upon verifying the signature, R saves self.approved_weights[hash(hash(leaf), secret_call)] = leaf.weight
  • When the weight supporting some execute hash for some address is sufficient, the owner of W (technically, anyone can do this, but in practice it would be the owner of W) can call R's execute method, revealing execute_hash, calldata and destination (= W) along with a list of all leaf_hash values. R computes commitment_hash = hash(hash(leaf_hashes), execute_hash) and checks that self.commitments[W] = commitment_hash. It also computes secret_call, computes hash(leaf_hash + secret_call) for every leaf_hash, accesses those positions in self.approved_weights and verifies that the sum of the returned values is sufficient. If both checks pass, R clears those values out of storage and calls W with the given calldata.

This removes the need for Merkle trees, decreases total costs in many cases, and creates a higher degree of privacy because it is not known on-chain whether or not two recovery approvals even refer to the same wallet, or which wallet it is, before the finalization step.
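A minimal sketch of this structure, assuming sha3-256 as a stand-in for the hash function and simple byte concatenation for combining inputs (the comment fixes neither, and the leaf encoding below is purely illustrative):

```python
from hashlib import sha3_256

def h(*parts: bytes) -> bytes:
    return sha3_256(b"".join(parts)).digest()

# Illustrative leaf encoding: (address, weight, is_ens, hash_to_peer) packed as bytes.
leaves = [
    bytes(20) + (1).to_bytes(32, "big") + b"\x00" + bytes(32),
    bytes(20) + (2).to_bytes(32, "big") + b"\x01" + bytes(32),
]
leaf_hashes = [h(leaf) for leaf in leaves]
leaf_root = h(*leaf_hashes)                    # hash of the concatenated leaf hashes
execute_hash = h(b"private hash preimage")     # stands in for hash(private_hash)
commitment_hash = h(leaf_root, execute_hash)   # stored as self.commitments[W]

# During recovery, approvals are keyed so that they reveal neither the wallet
# nor whether two approvals refer to the same recovery.
calldata, destination = b"<recovery calldata>", bytes(20)   # destination = W
secret_call = h(calldata, destination, execute_hash)
approval_key = h(leaf_hashes[0], secret_call)  # self.approved_weights[approval_key] = weight
print(commitment_hash.hex(), approval_key.hex())
```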

Optimizations

As written, the gas cost for the execute step is 512 for each leaf_hash + 800 for SLOAD + ~100 for SHA3 and other logistics, so ~1400 total, plus 5000 for storage clearing (which is refunded). If desired, we could require execute to provide a bitfield of which leaves are nonzero, reducing the number of SLOAD operations required by restricting them to nonzero leaves; this would reduce per-user gas costs by more than half and hence facilitate eg. 10-of-200 recoveries. This would also be good forward-planning for a future where SLOAD becomes even more expensive for stateless client reasons.
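A rough sanity check of these figures, treating the per-leaf costs quoted above (512 gas of calldata per leaf_hash, 800 per SLOAD, ~100 for SHA3 and logistics, ignoring the refunded storage clearing) as assumptions rather than exact protocol values:

```python
# Assumed per-leaf costs from the paragraph above.
CALLDATA, SLOAD, MISC = 512, 800, 100

def execute_cost(total_leaves: int, approving_leaves: int, with_bitfield: bool) -> int:
    if with_bitfield:
        # SLOAD only for leaves the bitfield marks as nonzero (i.e. leaves that approved).
        return total_leaves * (CALLDATA + MISC) + approving_leaves * SLOAD
    return total_leaves * (CALLDATA + SLOAD + MISC)

# e.g. a 10-of-200 recovery
print(execute_cost(200, 10, False))  # ~282,400 gas
print(execute_cost(200, 10, True))   # ~130,400 gas, less than half
```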

We could optimize further by using a multiproof instead of requiring every leaf, but this greatly increases complexity and is IMO unlikely to be worth it.

3esmit

comment created time in a month

PR closed ethereum/eth2.0-specs

Added data availability proofs, earliest draft phase 2

Includes description of binary fields, link to sample python implementation, and structure of extended data roots and slashings for incorrect construction. Depends on some pieces that are not fully specified yet.

+825 -0

1 comment

2 changed files

vbuterin

pr closed time in a month

push event ethereum/eth2.0-specs

Vitalik Buterin

commit sha 8c6b2b6f8dbcbb8101d4f78104fc635e9bad9d16

proof -> claim

view details

push time in a month

push event ethereum/eth2.0-specs

vbuterin

commit sha 3b446196eee3fd0344f8cc0f36b79f04ab253652

Update specs/core/2_data-availability-proofs.md

view details

push time in a month

Pull request review comment ethereum/eth2.0-specs

Added data availability checks

```diff
+# Ethereum 2.0 Phase 2 -- Data Availability Checks
+
+**Notice**: This document is a work-in-progress for researchers and implementers.
+
+## Table of contents
+
+<!-- TOC -->
+
+- [Ethereum 2.0 Phase 2 -- Data Availability Checks](#ethereum-20-phase-2----data-availability-checks)
+    - [Table of contents](#table-of-contents)
+    - [Introduction](#introduction)
+    - [Constants](#constants)
+        - [Misc](#misc)
+    - [Data structures](#data-structures)
+        - [`DataExtensionSlashing`](#dataextensionslashing)
+    - [Helper functions](#helper-functions)
+        - [`badd`](#badd)
+        - [`bmul`](#bmul)
+        - [`eval_polynomial_at`](#eval-polynomial-at)
+        - [`interpolate`](#interpolate)
+        - [`fill`](#fill)
+        - [`fill_axis`](#fill-axis)
+        - [`get_data_sqiare`](#get-data-square)
+        - [`extend_data_square`](#extend-data-square)
+        - [`mk_data_root`](#mk-data-root)
+        - [`process_data_extension_slashing`](#process-data-extension-slashing)
+
+<!-- /TOC -->
+
+## Introduction
+
+This document describes the expected formula for calculating data availability proofs and the beacon chain changes (namely, slashing conditions) needed to enforce them.
```
This document describes the expected formula for calculating the Merkle root used for data availability checks and the beacon chain changes (namely, slashing conditions) needed to enforce them.
vbuterin

comment created time in a month

push event ethereum/eth2.0-specs

vbuterin

commit sha 643055b384c59a4c7b503b9c6c405e2aa6e0a73b

Update specs/core/2_data-availability-proofs.md

view details

push time in a month

push event ethereum/eth2.0-specs

vbuterin

commit sha f74e06ba6765f0593fdcb5515862299a9680b442

Update specs/core/2_data-availability-proofs.md

view details

push time in a month

Pull request review comment ethereum/eth2.0-specs

Added data availability checks

```diff
+# Ethereum 2.0 Phase 2 -- Data Availability Proofs
+
+**Notice**: This document is a work-in-progress for researchers and implementers.
+
+## Table of contents
+
+<!-- TOC -->
+
+- [Ethereum 2.0 Phase 2 -- Data Availability Proofs](#ethereum-20-phase-2----data-availability-proofs)
```
- [Ethereum 2.0 Phase 2 -- Data Availability Checks](#ethereum-20-phase-2----data-availability-checks)
vbuterin

comment created time in a month

Pull request review comment ethereum/eth2.0-specs

Added data availability checks

+# Ethereum 2.0 Phase 2 -- Data Availability Proofs
# Ethereum 2.0 Phase 2 -- Data Availability Checks
vbuterin

comment created time in a month

pull request comment ethereum/eth2.0-specs

Added data availability proofs

@vbuterin let me know if you agree with this and I would be happy to change it

Sounds reasonable to me! Agree that data availability checks is better than proofs.

vbuterin

comment created time in a month

pull request comment ethereum/EIPs

ERC: Secret Multisig Recovery

Here's my proposal for reworking the design to achieve a few more properties:

  • One single recovery contract
  • It's not known which address is recovering until recovery time
  • Different hash_to_peer for each user, providing more privacy

First, the "secrets" hash structure:

hash(private_hash) = execute_hash
hash(execute_hash) = public_hash
hash(execute_hash + peer_address) = hash_to_peer

This way, each leaf has a different hash_to_peer so revealing one leaf won't allow brute force attacks that can unmask other leaves, as those other leaves will still have their own still-unknown hash_to_peer values.
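A minimal sketch of that hash chain, assuming sha3-256 as a stand-in for the hash function and byte concatenation for the `+` operation (the ERC fixes neither; addresses and preimages below are dummy values):

```python
from hashlib import sha3_256

def h(*parts: bytes) -> bytes:
    return sha3_256(b"".join(parts)).digest()

private_hash = h(b"only the wallet owner knows this preimage")
execute_hash = h(private_hash)   # revealed only at execute time
public_hash = h(execute_hash)    # safe to publish

peer_addresses = [bytes([i]) * 20 for i in range(3)]
# Each peer gets its own hash_to_peer, so unmasking one leaf does not
# help brute-force the hash_to_peer values of the other leaves.
hashes_to_peers = [h(execute_hash, addr) for addr in peer_addresses]
assert len(set(hashes_to_peers)) == len(peer_addresses)
```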

Now, the hash structure for commitments:

         [commitment_hash]
        /                 \
[merkle_root]        [execute_hash]
      /  \
    ...  /\    
       ... \
          [address, weight, hash_to_peer, is_ens]

There is one single recovery contract, R. Suppose that a wallet W wants to set up a recovery.

SETUP PROCEDURE

  1. The owner of W determines the (address, weight, is_ens) values and computes a private_hash
  2. W computes commitment_hash. W calls R's method register, passing commitment_hash as an argument. R saves self.commitments[W] = commitment_hash in its storage

RECOVERY PROCEDURE

  1. Each participant publishes their merkle branches (going up to merkle_root only) in a call to R's approve method. The contract counts the total weight supporting each (merkle_root, secret_call) pair, and stores a bitfield to prevent double calls from one address
  2. When the weight supporting some pair is sufficient, the owner of W (technically, anyone can do this, but in practice it would be the owner of W) can call R's execute method, revealing execute_hash, calldata and destination (= W). R checks that h(execute_hash, calldata, W) actually equals secret_call and checks that self.commitments[W] = h(execute_hash, merkle_root). If both checks pass, R calls W with the given calldata.
3esmit

comment created time in a month

pull request comment ethereum/EIPs

ERC: Secret Multisig Recovery

Should we just make a telegram chat or some other more synchronous medium to go through all the details? Feels like there's a lot of moving parts here and it would be good to quickly get on the same page.

3esmit

comment created time in a month

push event ethereum/eth2.0-specs

vbuterin

commit sha 9fe92788a8902701841e65b78ad589f01ac68de6

Update specs/core/2_data-availability-proofs.md

view details

push time in a month

Pull request review comment ethereum/eth2.0-specs

Added data availability proofs

+# Ethereum 2.0 Phase 2 -- Data Availability Proofs++**Notice**: This document is a work-in-progress for researchers and implementers.++## Table of contents++<!-- TOC -->++- [Ethereum 2.0 Phase 2 -- Data Availability Proofs](#ethereum-20-phase-2----data-availability-proofs)+    - [Table of contents](#table-of-contents)+    - [Introduction](#introduction)+    - [Constants](#constants)+        - [Misc](#misc)+    - [Data structures](#data-structures)+        - [`DataExtensionSlashing`](#dataextensionslashing)+    - [Helper functions](#helper-functions)+        - [`badd`](#badd)+        - [`bmul`](#bmul)+        - [`eval_polynomial_at`](#eval-polynomial-at)+        - [`interpolate`](#interpolate)+        - [`fill`](#fill)+        - [`fill_axis`](#fill-axis)+        - [`get_data_sqiare`](#get-data-square)+        - [`extend_data_square`](#extend-data-square)+        - [`mk_data_root`](#mk-data-root)+        - [`process_data_extension_slashing`](#process-data-extension-slashing)++<!-- /TOC -->++## Introduction++This document describes the expected formula for calculating data availability proofs and the beacon chain changes (namely, slashing conditions) needed to enforce them.++Not yet in scope: the procedure for verifying that data in a crosslink is available, and modifications to the fork choice rule around this.++## Constants++### Misc++| Name | Value | Description |+| - | - | - |+| `FIELD_ELEMENT_BITS` | `16` | 2 bytes |+| `FIELD_MODULUS` | `65579` | |+| `CHUNKS_PER_ROW` | `2**13 = 8,192` | 256 kB |+| `ROWS_PER_BLOCK` | `MAX_SHARD_BLOCK_SIZE // 32 // CHUNKS_PER_ROW = 4` | |+| `CHUNKS_PER_BLOCK` | `CHUNKS_PER_ROW * ROWS_PER_BLOCK = 32,768` | |+| `DOMAIN_DATA_AVAILABILITY` | `144` | |+| `DOMAIN_DATA_AVAILABILITY_SLASH` | `145` | |++## Data structures++### `DataAvailabilityProof`++```python+{+    'rows': List[Vector[Bytes32, CHUNKS_PER_ROW], ROWS_PER_BLOCK * MAX_SHARDS * MAX_SHARD_BLOCKS_PER_ATTESTATION],+    'extension': List[Vector[Bytes32, CHUNKS_PER_ROW], ROWS_PER_BLOCK * MAX_SHARDS * MAX_SHARD_BLOCKS_PER_ATTESTATION],+    'columns': Vector[List[Bytes32, ROWS_PER_BLOCK * MAX_SHARDS * MAX_SHARD_BLOCKS_PER_ATTESTATION], CHUNKS_PER_ROW * 2],+    'row_cutoff': uint64+    'row_count': uint64+}+```++### `SignedDataAvailabilityProof`++```python+{+    'data': DataAvailabilityProof,+    'signature': BLSSignature+}+```++### `DataExtensionSlashing`++```python+{+    'is_column': bool,+    'checking_extension': bool,+    'indices': List[int],+    'axis_index': int,+    'source_is_column': List[bool],+    'values': List[Bytes32],+    'row_cutoff': uint64,+    'row_count': uint64,+    'actual_full_axis_root': Bytes32,+    'data_availability_root': Bytes32,+    'data_availability_root_signature': BLSSignature,+    'proof': SSZMultiProof,+    'slasher_index': ValidatorIndex,+}+```++### `SignedDataExtensionSlashing`++```python+{+    'data': DataExtensionSlashing,+    'slasher_signature': BLSSignature+}+```++## Helper functions (binary fields)++### `badd`++```python+def badd(a: int, b:int) -> int:+    return a ^ b+```++### `bmul`++```python+def bmul(a: int, b: int) -> int:+    if a*b == 0:+        return 0+    o = 0+    for i in range(FIELD_ELEMENT_BITS):+        if b & (1<<i):+            o ^= a<<i+    for h in range(FIELD_ELEMENT_BITS * 2 - 1, FIELD_ELEMENT_BITS - 1, -1):+        if o & (1<<h):+            o ^= (FIELD_MODULUS << (h - FIELD_ELEMENT_BITS))+    return o+```++### `eval_polynomial_at`++```python+def eval_polynomial_at(polynomial: List[int], x: int) -> int:+    o = 0+    power_of_x = 1+    for 
coefficient in polynomial:+        o = badd(o, bmul(power_of_x, coefficient))+        power_of_x = bmul(power_of_x, x)+    return o+```++### `multi_evaluate`++`multi_evaluate` is defined as the function `multi_evaluate(xs: List[int], polynomial: List[int]) -> List[int]` that returns `[eval_polynomial_at(polynomial, x) for x in xs]`, though there are faster Fast Fourier Transform-based algorithms.++### `interpolate`++`interpolate` is defined as the function `interpolate(xs: List[int], values: List[int]) -> List[int]` that returns the `polynomial` such that for all `0 <= i < len(xs)`, `eval_polynomial_at(polynomial, xs[i]) == values[i]`. This can be implemented via Lagrange interpolation in `O(N**2)` time or Fast Fourier transform in `O(N * log(N))` time.++You can find a sample implementation here: [https://github.com/ethereum/research/tree/master/binary_fft](https://github.com/ethereum/research/tree/master/binary_fft)++### `fill`++```python+def fill(xs: List[int], values: List[int], length: int) -> List[int]:+    """+    Takes the minimal polynomial that returns values[i] at xs[i] and computes+    its outputs for all values in range(0, length)+    """+    poly = interpolate(xs, values)+    return multi_evaluate(list(range(length)), poly)+```++### `fill_axis`++```python+def fill_axis(xs: List[int], values: List[Bytes32], length: int) -> List[Bytes32]:+    """+    Interprets a series of 32-byte chunks as a series of ordered packages of field+    elements. For each i, treats the set of i'th field elements in each chunk as+    evaluations of a polynomial. Evaluates the polynomials on the extended domain+    range(0, length) and provides the 32-byte chunks that are the packages of the+    extended evaluations of every polynomial at each coordinate.+    """+    data = [[bytes_to_int(a[i: FIELD_ELEMENT_BITS//8]) for a in values] for i in range(0, 32, FIELD_ELEMENT_BITS)]+    newdata = [fill(xs, d, length) for d in data]+    return [b''.join([int_to_bytes(n[i], FIELD_ELEMENT_BITS//8) for n in newdata]) for i in range(length)]+```++### `get_full_data_availability_proof`++```python+def get_full_data_availability_proof(blocks: List[Bytes[MAX_SHARD_BLOCK_SIZE]]) -> DataAvailabilityProof:+    """+    Converts data into a row_count * ROW_SIZE rectangle, padding with zeroes if necessary,+    and then extends both dimensions by 2x, in the row case only adding one new row per+    nonzero row+    """+    # Split blocks into rows, express rows as sequences of 32-byte chunks+    rows = []+    for block in blocks:+        block = block + b'\x00' * (MAX_SHARD_BLOCK_SIZE - len(block))+        for pos in range(0, CHUNKS_PER_BLOCK, CHUNKS_PER_ROW):+            rows.append([block[i: i+32] for i in range(pos, pos + CHUNKS_PER_ROW, 32)])+    # Number of rows that are not entirely zero bytes+    nonzero_row_count = len([row for row in rows if row != [ZERO_HASH] * CHUNKS_PER_ROW])+    # Add one vertical extension row for every nonzero row+    new_rows = [[] for _ in range(nonzero_row_count)]+    for i in range(CHUNKS_PER_ROW):+        vertical_extension = fill_axis(list(range(len(rows))), [row[i] for row in rows], len(rows) + nonzero_row_count)+        for new_row, new_value in zip(new_rows, vertical_extension[len(rows):]):+            new_row.append(new_value)+    rows.extend(new_rows)+    # Extend all rows horizontally+    extension = []+    for row in rows:+        extension.append(fill_axis(list(range(CHUNKS_PER_ROW)), row, CHUNKS_PER_ROW * 2)[CHUNKS_PER_ROW:])+    # Compute columns+    columns = (+        [[row[i] 
for row in rows] for i in range(CHUNKS_PER_ROW)] ++        [[row[i] for row in extension] for i in range(CHUNKS_PER_ROW)]+    )+    return DataAvailabilityProof(rows, extension, columns, len(blocks) * ROWS_PER_BLOCK, len(rows))+```++### `process_data_extension_slashing`++```python+def process_data_extension_slashing(state: BeaconState, signed_proof: SignedDataExtensionSlashing):+    """+    Slashes for an invalid extended data root. Covers both mismatches within a+    row and column and mismatches between rows and columns.++    This is done by allowing the prover to provide >=1/2 of the data in an axis by+    arbitrarily mix-and-matching merkle proofs from the row-then-column tree and+    the column-then-row tree.+    """+    # Verify the signature+    assert bls_verify(+        pubkey=state.validators[get_signer_index(proof)].pubkey,+        message_hash=proof.data_availability_root,+        signature=proof.data_availability_root_signature,+        domain=get_domain(state, DOMAIN_DATA_AVAILABILITY, get_current_epoch(state))+    )+    proof = signed_proof.data+    assert bls_verify(+        pubkey=state.validators[proof.slasher_index].pubkey,+        message_hash=proof,+        signature=signed_proof.signature,+        domain=get_domain(state, DOMAIN_DATA_AVAILABILITY_SLASH, get_current_epoch(state))+    )+    # Get generalized indices (for proof verification)+    generalized_indices = [get_generalized_index(DataAvailabilityProof, 'row_cutoff')]+    if proof.is_column:+        # Case 1: proof is getting indices along a column+        assert proof.checking_extension is False+        generalized_indices.append(get_generalized_index(DataAvailabilityProof, 'columns', proof.axis_index))+        coordinates = [(index, proof.axis_index) for index in proof.indices]+    elif proof.checking_extension:+        # Case 2: proof is getting indices along a row, we are checking against the extension root+        generalized_indices.append(get_generalized_index(DataAvailabilityProof, 'extension', proof.axis_index - CHUNKS_PER_ROW))+        coordinates = [(proof.axis_index, index) for index in proof.indices]+    else:+        # Case 3: proof is getting indices along a row, we are checking against the row root+        generalized_indices.append(get_generalized_index(DataAvailabilityProof, 'rows', proof.axis_index))+        coordinates = [(proof.axis_index, index) for index in proof.indices]+    # Each individual element can be come from the row/extension trees, or from the column tree
    # Each individual element can be come from the row/extension trees, or from the column tree
    assert proof.indices == sorted(set(proof.indices))
vbuterin

comment created time in a month

push event ethereum/eth2.0-specs

Vitalik Buterin

commit sha 524b38afeb7c598598aaf3a275fc4d9c3fd5f77d

A few fixes

view details

push time in a month

push event ethereum/eth2.0-specs

vbuterin

commit sha a3f67fd606e5fecf75c805750f2d33f0845f7dd4

Update specs/core/1_beacon-chain.md

view details

push time in a month

Pull request review comment ethereum/eth2.0-specs

Remove shard block chunking

```diff
 def apply_shard_transition(state: BeaconState, shard: Shard, transition: ShardTr
 def process_crosslink_for_shard(state: BeaconState,
                                 shard: Shard,
                                 shard_transition: ShardTransition,
-                                attestations: Sequence[Attestation]) -> Root:
+                                attestations: Sequence[Attestation]) -> Root:US
```
                                attestations: Sequence[Attestation]) -> Root:
vbuterin

comment created time in a month

PR opened ethereum/eth2.0-specs

Added data availability proofs

Replacement for #1083

+243 -0

0 comment

1 changed file

pr created time in a month

create branch ethereum/eth2.0-specs

branch : vitalik61

created branch time in a month

push event ethereum/eth2.0-specs

Danny Ryan

commit sha 748165cc03772d37b8cad334928367dbe83a9c26

Merge pull request #1140 from 0xKiwi/patch-1 Remove mentions of current_shuffling_epoch

view details

Danny Ryan

commit sha e8b4c4c57f1f07a8eae6c88249bad6c19335017f

Merge pull request #1077 from ethereum/ssz-impl-rework SSZ implementation for exec. spec - Support for Python 3 typing.

view details

Ivan Martinez

commit sha e83500cef8ec9a936dd566f797af2f746358032b

Reorganize data structures to mirror beacon state order

view details

Ivan Martinez

commit sha 65d2a502191f4a70edc3d0906f9cb4d5ae6c953f

Change data structure to match beacon state order

view details

Ivan Martinez

commit sha c250296d8ae2176ddf6297a47a2d86fffc62ba38

Move crosslink above attestation data

view details

Carl Beekhuizen

commit sha d761b6f041db05d2dda58972510875c7705d0163

Implements new SSZ types

view details

Carl Beekhuizen

commit sha e5fb91c4a2ba2eaa24db1840bf4e85a079918a23

Make test generators work with phase 1 execution

view details

Danny Ryan

commit sha 853c34eb60a707ede5339597af666f380f1218e4

add beaconblockheader back to toc

view details

Danny Ryan

commit sha 6feede7f6bdcbbf3e294fdd5287775a0c3a20cc0

Merge pull request #1141 from 0xKiwi/patch-2 Change data structure order to mirror beacon state property order

view details

protolambda

commit sha a7554d503c10d1bb423731486ae45c3d1258168a

fix for typing check of vector elements with non-type element type (annotation)

view details

Danny Ryan

commit sha 1daff359ba8bccb02c1d28bb935bf1eaf0277e63

Merge pull request #1139 from terencechain/patch-76 Use get_total_balance for get_attestation_deltas

view details

protolambda

commit sha 1cc7c7309d25bd9c26c09824ec9898e8ee22e5dd

change to issubclass, hope parametrized external type is ok

view details

protolambda

commit sha b9abc5f2cf349cf0ec211458df36b86b92b42edf

List[uint64] is not like a type but just for annotation, same for other Generics with __args__, Vector/BytesN work, because their metaclasses produce non-parametrized types, so don't check vector values when type contains args

view details

terence tsao

commit sha eefd3062539e058519e19cc1c50fa97284665d77

Update 0_beacon-chain-validator.md

view details

Carl Beekhuizen

commit sha 38414c2e4ebc2049a5edab14c6bcec35e6bdb468

Merge branch 'dev' into dankrad-patch-7 * dev: add beaconblockheader back to toc Move crosslink above attestation data Change data structure to match beacon state order Reorganize data structures to mirror beacon state order Update 0_beacon-chain.md

view details

Danny Ryan

commit sha 71ab58a53038362b70af9673bb9a1233024e63ec

Merge pull request #1142 from terencechain/patch-77 Inline Attestations Variables

view details

Carl Beekhuizen

commit sha e498ff7e94ee47ea69d390860a20c8d2a7061ca3

Separates tests into phases

view details

Carl Beekhuizen

commit sha 60d9dc68c4d95f28b50a1d144985b3a6cfdfd726

Apply suggestions from @djrtwo's code review

view details

Carl Beekhuizen

commit sha 58a137e81c229f88d8d9f709a5819a11e30eb885

Merge branch 'dev' into dankrad-patch-7 * dev: Update 0_beacon-chain-validator.md

view details

Carl Beekhuizen

commit sha 4c1b9ef6d6fc3b844eeb9f7c8bff8c7a614ba48d

Fixes custody key reveal test bug

view details

push time in 2 months

PR opened ethereum/eth2.0-specs

Remove shard block chunking

Only store a 32 byte root for every shard block

Rationale: originally, I added shard block chunking (store 4 chunks for every shard block instead of one root) to facilitate construction of data availability roots. However, it turns out that there is an easier technique. Set the width of the data availability rectangle's rows to be 1/4 the max size of a shard block, so each block would fill multiple rows. Then, non-full blocks will generally create lots of zero rows. For example if the block bodies are 31415926535 and 897932 with a max size of 24 bytes, the rows might look like this:

31415926
53500000
00000000
89793200
00000000
00000000

Zero rows extend rightward into further complete zero rows, and when extending downward we can count the number of zero rows and reduce the number of extra rows that we make, so we only make a new row for every nonzero row in the original data. This way we get only a close-to-optimal ~4-5x blowup in the data even if the data has zero rows in the middle. (A small sketch of this layout follows below.)
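The sketch uses the toy values from the example above (24-byte max block size, 8-byte rows) rather than the real `MAX_SHARD_BLOCK_SIZE`/`CHUNKS_PER_ROW` constants, and pads with the character '0' to mirror the digit example:

```python
def to_rows(blocks, max_block_size=24, row_size=8):
    """Pad each block to max_block_size and split it into fixed-width rows."""
    rows = []
    for block in blocks:
        padded = block + b"0" * (max_block_size - len(block))
        rows += [padded[i:i + row_size] for i in range(0, max_block_size, row_size)]
    return rows

rows = to_rows([b"31415926535", b"897932"])
nonzero_rows = [r for r in rows if r != b"0" * 8]
# Only nonzero rows get a new extension row when extending downward,
# which is what keeps the blowup around 4-5x.
print([r.decode() for r in rows])    # ['31415926', '53500000', '00000000', '89793200', ...]
print(len(rows), len(nonzero_rows))  # 6 3
```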

+8 -25

0 comment

1 changed file

pr created time in 2 months

create branch ethereum/eth2.0-specs

branch : vbuterin-patch-1

created branch time in 2 months

Pull request review comment ethereum/eth2.0-specs

Phase 1 rebase

```diff
 This document details the beacon chain additions and changes in Phase 1 of Ether
 | - | - |
 | `MAX_CUSTODY_KEY_REVEALS` | `2**4` (= 16) |
```

I think we need more than 16 MAX_CUSTODY_KEY_REVEALS. Every validator reveals once per 2048-epoch period, so in the worst case that's 2**22 validators revealing once every 2**16 slots, or 64 per slot. So the max should probably be 256.
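For reference, the arithmetic behind that worst case, assuming 32 slots per epoch and the 2**22 worst-case validator count from the comment:

```python
MAX_VALIDATORS = 2**22            # assumed worst-case validator count
EPOCHS_PER_CUSTODY_PERIOD = 2048
SLOTS_PER_EPOCH = 32

slots_per_period = EPOCHS_PER_CUSTODY_PERIOD * SLOTS_PER_EPOCH   # 2**16 slots
reveals_per_slot = MAX_VALIDATORS // slots_per_period
print(reveals_per_slot)  # 64, so a cap of 16 reveals per block is too low
```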

djrtwo

comment created time in 2 months

Pull request review comment ethereum/eth2.0-specs

Phase 1 rebase

(review diff context: the custody game spec, with the old chunk challenge, bit challenge and custody response logic replaced by a single `process_custody_slashing` function)

I think the goal is precisely to protect against network-level DoS.

djrtwo

comment created time in 2 months

pull request comment ethereum/EIPs

ERC: Secret Multisig Recovery

Is what's on the github above right now intended to be the latest version?

3esmit

comment created time in 2 months

pull request comment ethereum/EIPs

ERC: Secret Multisig Recovery

When the merkle tree is revealed, it becomes public (or partially public), because you need to prove it to the contract. We expect that after a recovery, people keep using recovery, but we don't want to lose any of the security assumptions, such as the most basic one: the privacy of the multisig merkle tree. So, the standard requires that every recovery setup derives a unique merkle tree.

Agree that this privacy protection is important! Though it is achieved just by having any kind of secret in the leaves, and changing it when you re-do the setup, no?

3esmit

comment created time in 2 months

pull request comment ethereum/EIPs

ERC: Secret Multisig Recovery

I feel like revealing the threshold when the first recovery leaf is submitted is okay; the time period between that and the final recovery leaf is going to be very small relative to the entire time that the account lives.

To enable that I think we should add a "weight" value to the leaves

So this actually opens up another approach: you can set the threshold to be a fixed 100 points, and use weights to determine the actual threshold. This way you could support differing weights, and you even get a bit of uncertainty around the threshold, because you could do a 3-of-5 using weights like eg. 50 33 20 48 48.
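A quick sanity check of those numbers (illustrative only): against a fixed 100-point threshold, no two of the weights reach it but any three do, so the set behaves exactly like a 3-of-5.

```python
from itertools import combinations

weights = [50, 33, 20, 48, 48]
threshold = 100

# No pair of participants can recover on their own...
assert all(sum(pair) < threshold for pair in combinations(weights, 2))
# ...but any three of them can.
assert all(sum(trio) >= threshold for trio in combinations(weights, 3))
```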

3esmit

comment created time in 2 months

pull request comment ethereum/EIPs

ERC: Secret Multisig Recovery

reuse the merkle tree of friends (that's why the peer hash is used inside the merkle leaves),

Can you elaborate on what you mean by this?

to obfuscate the threshold until the execution is possible.

You mean obfuscate it from the world, or obfuscate from the participants themselves?

3esmit

comment created time in 2 months

pull request comment ethereum/EIPs

ERC: Secret Multisig Recovery

Ah, I see. So the recovery data is not even provided to the friends until the recovery is warranted. I didn't even realize that this was a desired objective. IMO it's not clear that this is a good idea, because it limits some use cases of recovery, particularly where the account holder is incapacitated or dies. Sure, you could just give the recovery data to someone else, but if you do that, then why not just make that person one of the recovery participants?

Basically, the challenge is that you're IMO needlessly increasing complexity because you're splitting one functionality (authorization) across two different mechanisms: knowledge of the secret, and control over the address. You can simplify greatly by just putting all the authorization into control over addresses, and then use the secrets mechanism only to solve the problem of ensuring privacy of who the guardians for some address are (without secrets, an attacker with medium computation power can just loop over all combinations of addresses until they find a matching Merkle root).
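To make the brute-force point concrete, here is a toy sketch (my own construction; it uses a flat sha256 commitment in place of a real Merkle tree) of how an attacker could enumerate candidate guardian sets against a public commitment whose leaves contain no secret:

```python
from hashlib import sha256
from itertools import combinations

def commit(addresses) -> bytes:
    # Toy commitment over a sorted address set; a real wallet would use a Merkle root.
    return sha256(b"".join(sorted(addresses))).digest()

def deanonymize(target: bytes, candidate_addresses, k: int):
    # Try every k-subset of plausible addresses until one matches the on-chain commitment.
    for combo in combinations(candidate_addresses, k):
        if commit(combo) == target:
            return combo
    return None
```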

For example, suppose that you want a scheme with the following property. An account can be recovered if 3 of 5 of (Alice, Bob, Charlie, David, Evan) approve the recovery, but only after a secret S is revealed and the holder of secret S approves the request (S could be stored by the account holder or in a drawer). Here is how you would replicate this within my framework.

Make an 8 of 10 recovery, between (Alice, Bob, Charlie, David, Evan, S_1, S_2, S_3, S_4, S_5), where S_i is an address derived from a private key derived from S. That is: S_i = privtoaddr(hash(S, i)). Then, to start a recovery, you would need to generate the 5 S_i keys and generate signatures for them, and then ask your friends to recover. The friends by themselves cannot do anything without S.
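A minimal sketch of that derivation, assuming the `eth_utils`/`eth_keys` libraries for the keccak hash and the private-key-to-address step (the `privtoaddr` above); the exact hash choice is a wallet-level detail:

```python
from eth_utils import keccak
from eth_keys import keys

def derive_secret_addresses(S: bytes, n: int = 5) -> list:
    addresses = []
    for i in range(1, n + 1):
        priv = keccak(S + i.to_bytes(32, "big"))  # private key for S_i = hash(S, i)
        addresses.append(keys.PrivateKey(priv).public_key.to_checksum_address())
    return addresses

# The 8-of-10 participant set is then the 5 friends plus derive_secret_addresses(S),
# so a recovery needs S (to sign with every S_i) plus any 3 of the 5 friends.
```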

Glad that ERC 1271 exists! I somehow did not know about it :D That definitely solves that problem.

3esmit

comment created time in 2 months

pull request comment ethereum/EIPs

ERC: Secret Multisig Recovery

BTW it seems worthwhile to make checkAuthorized a separate ERC, generalizing the notion of ethereum signed messages to contract accounts. It could be usable in many other cases. Not sure if this has been done already.

3esmit

comment created time in 2 months

pull request comment ethereum/EIPs

ERC: Secret Multisig Recovery

This feels like it could be simplified more. Why not this version:

  • We save into a contract a Merkle root recovery_root = merkle_root([address1, threshold, secret1], [address2, threshold, secret2] ...)
  • We give every recovery participant the sister leaves to provide a Merkle proof of their own address, the threshold and secret (BTW we also need to standardize the encoding of this, so that you can eg. paste the data to someone via chat application if they're not using the same app as you)
  • To recover, you call their contract and provide the Merkle proof, along with some recoverTo data and an authorizationWitness. If your address is an EOA, then the authorizationWitness must be an ethereum signed message. If your address is a contract, then the recovery should call a checkAuthorized(recoverTo, authorizationWitness) -> bool method of your address.

All other details, eg. how the secret per user gets computed, can be left to individual wallets to figure out.
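A non-normative sketch of the root computation from the first bullet (the leaf encoding and hash function here are placeholders; as noted above, those are exactly the parts that would need standardizing):

```python
from hashlib import sha256

def h(data: bytes) -> bytes:
    return sha256(data).digest()

def leaf(address: bytes, threshold: int, secret: bytes) -> bytes:
    # Placeholder encoding: address || threshold || per-participant secret
    return h(address + threshold.to_bytes(32, "big") + secret)

def merkle_root(leaves) -> bytes:
    nodes = list(leaves)
    if not nodes:
        return h(b"")
    while len(nodes) > 1:
        if len(nodes) % 2 == 1:
            nodes.append(nodes[-1])  # pad odd layers by repeating the last node
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

# recovery_root = merkle_root([leaf(addr, threshold, secret) for addr, secret in participants])
```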

3esmit

comment created time in 2 months

Pull request review comment ethereum/eth2.0-specs

Phase 1 rebase

(review diff context: `process_custody_slashing` in the custody game spec)

Suggested change, renaming `malefactor_key` to `malefactor_secret`:

```python
    assert bls_verify(malefactor.pubkey, hash_tree_root(epoch_to_sign), custody_slashing.malefactor_secret, domain)
```
djrtwo

comment created time in 2 months

Pull request review comment ethereum/eth2.0-specs

Phase 1 rebase

(review diff context: the `DOMAIN_CUSTODY_BIT_SLASHING` domain type and the new `CustodySlashing` container)

Suggested change, renaming `malefactor_key` to `malefactor_secret`:

```python
    malefactor_secret: BLSSignature
```
djrtwo

comment created time in 2 months

push event ethereum/research

Vitalik Buterin

commit sha c4e1bdd137e2bea7346b2e523de607618320dfa2

Added out-of-domain evaluations in evaluation form


push time in 3 months

issue comment ethereum/eth2.0-specs

Sparse Merkle Trees (SMTs) designs

Navigate the SMT till you hit the bottom (zero right hand

Ah, are we assuming that the leaf in an SMT is (0, real_leaf)? Then where would the key go?

protolambda

comment created time in 3 months

issue comment ethereum/eth2.0-specs

Sparse Merkle Trees (SMTs) designs

You would also need to specify the length of the branches. Otherwise if you just provide a branch it's not clear what part is supposed to go down to the leaf and what part is doing something further after that leaf. One possible case to think about would be a direct 2-level SMT (ie. an SMT of SMTs); you need to declare where the first one ends and the second begins.
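To illustrate (a toy sketch with names of my own choosing, sha256 standing in for the real hash): each proof segment carries an explicit depth, so the verifier knows where the inner SMT's branch ends and the outer tree's branch begins.

```python
from dataclasses import dataclass
from hashlib import sha256
from typing import List

def h(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

@dataclass
class ProofSegment:
    depth: int            # declared number of levels in this (sub-)tree
    path_bits: int        # leaf position within this sub-tree
    branch: List[bytes]   # sibling hashes, ordered leaf-to-root

def ascend(leaf: bytes, seg: ProofSegment) -> bytes:
    assert len(seg.branch) == seg.depth
    node = leaf
    for i, sibling in enumerate(seg.branch):
        node = h(sibling, node) if (seg.path_bits >> i) & 1 else h(node, sibling)
    return node

def verify_nested(leaf: bytes, segments: List[ProofSegment], root: bytes) -> bool:
    # Innermost segment first: its root becomes the leaf of the next segment out.
    node = leaf
    for seg in segments:
        node = ascend(node, seg)
    return node == root
```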

protolambda

comment created time in 3 months

issue comment ethereum/eth2.0-specs

Sparse Merkle Trees (SMTs) designs

Say, for some type there is a SMT with its root anchored at 0b10010011, then it would be sufficient to pass that to the multiproof checker. The checker can check if it is in this subtree, and if so, check that the 32 bytes after this prefix in the gindex of a SMT leaf match the key part of the SMT leaf. (Assuming we standardize on 32 byte keys, and hash-tree-root if the actual key is bigger).

Ah, I was thinking of the case where the SMT is fully embedded into the SSZ suite, so you could have the SMT be an element of other structures, other structures be an element of the SMT, etc, with possibly multiple SMTs in one path. Yes, if you have a gindex for just an SMT leaf from its own root then that is sufficient.
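For reference, the prefix check itself is cheap under the usual generalized-index convention (root = 1, left child = 2g, right child = 2g + 1); a rough sketch with my own naming:

```python
def is_under_anchor(leaf_gindex: int, anchor_gindex: int) -> bool:
    # True if the leaf's gindex lies in the subtree rooted at the anchor gindex.
    depth_below = leaf_gindex.bit_length() - anchor_gindex.bit_length()
    return depth_below >= 0 and (leaf_gindex >> depth_below) == anchor_gindex

def key_bits_below_anchor(leaf_gindex: int, anchor_gindex: int) -> int:
    # The path bits after the anchor prefix; for a 32-byte-keyed SMT these are the
    # 256 bits that should match the key stored in the leaf.
    assert is_under_anchor(leaf_gindex, anchor_gindex)
    depth_below = leaf_gindex.bit_length() - anchor_gindex.bit_length()
    return leaf_gindex & ((1 << depth_below) - 1)
```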

protolambda

comment created time in 3 months

issue comment ethereum/eth2.0-specs

Signing root problems

Support alternative 1.

Optional: less repetitive definitions, add a Signed[MsgType] helper type to wrap any container with a signature

This would definitely be useful!

protolambda

comment created time in 3 months

Pull request review comment ethereum/eth2.0-specs

Move new phase 1 changes into the spec

(review diff context: the new Phase 1 beacon chain document, ending at the `get_shard_committee` accessor)

Consider a rename to get_proposer_committee. Also consider re-adding staggering.

protolambda

comment created time in 3 months

Pull request review comment ethereum/eth2.0-specs

Move new phase 1 changes into the spec

(review diff context: the new Phase 1 beacon chain document, ending at the `get_online_indices` accessor and the `online_countdown` field)

BTW did we decide that the uint8 list for online_countdown is okay? @djrtwo @JustinDrake @hwwhww

The alternative that doesn't introduce a unique use of uint8 is my proposal to have two bitlists.

protolambda

comment created time in 3 months

Pull request review comment ethereum/eth2.0-specs

Move new phase 1 changes into the spec

(review diff context: the extended `Validator` container, ending at the `next_custody_secret_to_reveal` and `max_reveal_lateness` fields)

Yeah, we still need these two.

protolambda

comment created time in 3 months

Pull request review comment ethereum/eth2.0-specs

Move new phase 1 changes into the spec

(review diff context: the extended `Validator` container, ending at the TODO about keeping the pre-proposal custody fields)

Yeah, we still need these two.

protolambda

comment created time in 3 months

Pull request review comment ethereum/eth2.0-specs

Move new phase 1 changes into the spec

```diff
 def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
     # Verify that outstanding deposits are processed up to the maximum number of deposits
     assert len(body.deposits) == min(MAX_DEPOSITS, state.eth1_data.deposit_count - state.eth1_deposit_index)

-    for operations, function in (
-        (body.proposer_slashings, process_proposer_slashing),
-        (body.attester_slashings, process_attester_slashing),
-        (body.attestations, process_attestation),
-        (body.deposits, process_deposit),
-        (body.voluntary_exits, process_voluntary_exit),
-        # @process_shard_receipt_proofs
-    ):
-        for operation in operations:
-            function(state, operation)
+    process_operations(body.proposer_slashings, process_proposer_slashing)
+    process_operations(body.attester_slashings, process_attester_slashing)
+    process_operations(body.attestations, process_attestations)
```

Are we sure that this works? Can process_operations just take a singular or plural function like that?
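For concreteness, a minimal sketch (my own naming and signature, not necessarily what the PR intends) of the kind of generic helper those plural call sites would need:

```python
from typing import Callable, Sequence, TypeVar

Operation = TypeVar("Operation")

def for_ops(state: object, operations: Sequence[Operation],
            fn: Callable[[object, Operation], None]) -> None:
    # Apply the per-operation processing function to each operation in the list.
    for operation in operations:
        fn(state, operation)

# e.g. for_ops(state, body.proposer_slashings, process_proposer_slashing)
#      for_ops(state, body.attestations, process_attestation)
```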

protolambda

comment created time in 3 months

Pull request review comment ethereum/eth2.0-specs

Move new phase 1 changes into the spec

(review diff context: the new Phase 1 beacon chain document)
   current_epoch_attestations: List[PendingAttestation, MAX_ATTESTATIONS * SLOTS_PER_EPOCH]+    # Finality+    justification_bits: Bitvector[JUSTIFICATION_BITS_LENGTH]  # Bit set for every recent justified epoch+    previous_justified_checkpoint: Checkpoint  # Previous epoch snapshot+    current_justified_checkpoint: Checkpoint+    finalized_checkpoint: Checkpoint+    # Phase 1+    shard_states: Vector[ShardState, MAX_SHARDS]+    online_countdown: Bytes[VALIDATOR_REGISTRY_LIMIT]+    current_light_committee: CompactCommittee+    next_light_committee: CompactCommittee+    +    # TODO: custody game refactor, no challenge-records, immediate processing.+    custody_challenge_index: uint64+    # Future derived secrets already exposed; contains the indices of the exposed validator+    # at RANDAO reveal period % EARLY_DERIVED_SECRET_PENALTY_MAX_FUTURE_EPOCHS+    exposed_derived_secrets: Vector[List[ValidatorIndex, PLACEHOLDER],+                                    EARLY_DERIVED_SECRET_PENALTY_MAX_FUTURE_EPOCHS]+```++## Helper functions++### Crypto++#### `bls_verify_multiple`++`bls_verify_multiple` is a function for verifying a BLS signature constructed from multiple messages, as defined in the [BLS Signature spec](../bls_signature.md#bls_verify_multiple).+++### Misc++#### `pack_compact_validator`++```python+def pack_compact_validator(index: int, slashed: bool, balance_in_increments: int) -> int:+    """+    Creates a compact validator object representing index, slashed status, and compressed balance.+    Takes as input balance-in-increments (// EFFECTIVE_BALANCE_INCREMENT) to preserve symmetry with+    the unpacking function.+    """+    return (index << 16) + (slashed << 15) + balance_in_increments+```++#### `committee_to_compact_committee`++```python+def committee_to_compact_committee(state: BeaconState, committee: Sequence[ValidatorIndex]) -> CompactCommittee:+    """+    Given a state and a list of validator indices, outputs the CompactCommittee representing them.+    """+    validators = [state.validators[i] for i in committee]+    compact_validators = [+        pack_compact_validator(i, v.slashed, v.effective_balance // EFFECTIVE_BALANCE_INCREMENT)+        for i, v in zip(committee, validators)+    ]+    pubkeys = [v.pubkey for v in validators]+    return CompactCommittee(pubkeys=pubkeys, compact_validators=compact_validators)+```++#### `chunks_to_body_root`++```python+def chunks_to_body_root(chunks):+    return hash_tree_root(chunks + [EMPTY_CHUNK_ROOT] * (MAX_SHARD_BLOCK_CHUNKS - len(chunks)))+```++### Beacon state accessors++#### `get_online_validators`++```python+def get_online_indices(state: BeaconState) -> Set[ValidatorIndex]:+    active_validators = get_active_validator_indices(state, get_current_epoch(state))+    return set([i for i in active_validators if state.online_countdown[i] != 0])+```++#### `get_shard_committee`++```python+def get_shard_committee(beacon_state: BeaconState, epoch: Epoch, shard: Shard) -> Sequence[ValidatorIndex]:+    source_epoch = epoch - epoch % SHARD_COMMITTEE_PERIOD +    if source_epoch > 0:+        source_epoch -= SHARD_COMMITTEE_PERIOD+    active_validator_indices = get_active_validator_indices(beacon_state, source_epoch)+    seed = get_seed(beacon_state, source_epoch, DOMAIN_SHARD_COMMITTEE)+    return compute_committee(active_validator_indices, seed, 0, ACTIVE_SHARDS)+```++#### `get_shard_proposer_index`++```python+def get_shard_proposer_index(beacon_state: BeaconState, slot: Slot, shard: Shard) -> ValidatorIndex:+    committee = 
get_shard_committee(beacon_state, slot_to_epoch(slot), shard)+    r = bytes_to_int(get_seed(beacon_state, get_current_epoch(state), DOMAIN_SHARD_COMMITTEE)[:8])+    return committee[r % len(committee)]+```++#### `get_light_client_committee`++```python+def get_light_client_committee(beacon_state: BeaconState, epoch: Epoch) -> Sequence[ValidatorIndex]:+    source_epoch = epoch - epoch % LIGHT_CLIENT_COMMITTEE_PERIOD +    if source_epoch > 0:+        source_epoch -= LIGHT_CLIENT_COMMITTEE_PERIOD+    active_validator_indices = get_active_validator_indices(beacon_state, source_epoch)+    seed = get_seed(beacon_state, source_epoch, DOMAIN_SHARD_LIGHT_CLIENT)+    return compute_committee(active_validator_indices, seed, 0, ACTIVE_SHARDS)[:TARGET_COMMITTEE_SIZE]+```++#### `get_indexed_attestation`++```python+def get_indexed_attestation(beacon_state: BeaconState, attestation: Attestation) -> AttestationAndCommittee:+    committee = get_beacon_committee(beacon_state, attestation.data.slot, attestation.data.index)+    return AttestationAndCommittee(committee, attestation)+```++#### `get_updated_gasprice`++```python+def get_updated_gasprice(prev_gasprice: Gwei, length: uint8) -> Gwei:+    if length > BLOCK_SIZE_TARGET:+        delta = prev_gasprice * (length - BLOCK_SIZE_TARGET) // BLOCK_SIZE_TARGET // GASPRICE_ADJUSTMENT_COEFFICIENT+        return min(prev_gasprice + delta, MAX_GASPRICE)+    else:+        delta = prev_gasprice * (BLOCK_SIZE_TARGET - length) // BLOCK_SIZE_TARGET // GASPRICE_ADJUSTMENT_COEFFICIENT+        return max(prev_gasprice, MIN_GASPRICE + delta) - delta+```++#### `get_shard`++```python+def get_shard(state: BeaconState, attestation: Attestation) -> Shard:+    return Shard((attestation.data.index + get_start_shard(state, attestation.data.slot)) % ACTIVE_SHARDS)+```++#### `get_offset_slots`++```python+def get_offset_slots(state: BeaconState, start_slot: Slot) -> Sequence[Slot]:+    return [start_slot + x for x in SHARD_BLOCK_OFFSETS if start_slot + x < state.slot]+```+++### Predicates++#### New `is_valid_indexed_attestation`++```python+def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: AttestationAndCommittee) -> bool:+    """+    Check if ``indexed_attestation`` has valid indices and signature.+    """++    # Verify aggregate signature+    all_pubkeys = []+    all_message_hashes = []+    aggregation_bits = indexed_attestation.attestation.aggregation_bits+    assert len(aggregation_bits) == len(indexed_attestation.committee)+    for i, custody_bits in enumerate(indexed_attestation.attestation.custody_bits):+        assert len(custody_bits) == len(indexed_attestation.committee)+        for participant, abit, cbit in zip(indexed_attestation.committee, aggregation_bits, custody_bits):+            if abit:+                all_pubkeys.append(state.validators[participant].pubkey)+                # Note: only 2N distinct message hashes+                all_message_hashes.append(hash_tree_root(+                    AttestationCustodyBitWrapper(hash_tree_root(indexed_attestation.data), i, cbit)+                ))+            else:+                assert cbit == False+        +    return bls_verify_multiple(+        pubkeys=all_pubkeys,+        message_hashes=all_message_hashes,+        signature=indexed_attestation.signature,+        domain=get_domain(state, DOMAIN_BEACON_ATTESTER, indexed_attestation.data.target.epoch),+    )+```+++### Block processing++```python+def process_block(state: BeaconState, block: BeaconBlock) -> None:+    process_block_header(state, block)+   
 process_randao(state, block.body)+    process_eth1_data(state, block.body)+    verify_shard_transition_false_positives(state, block)+    process_light_client_signatures(state, block)+    process_operations(state, block.body)+```+++#### Operations++```python+def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:+    # Verify that outstanding deposits are processed up to the maximum number of deposits+    assert len(body.deposits) == min(MAX_DEPOSITS, state.eth1_data.deposit_count - state.eth1_deposit_index)+    +    def process_operations(operations, fn):

This looks confusing: the outer and inner functions should not have the same name.
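For illustration only, a sketch of how the inner helper could be renamed so it no longer shadows the outer function (the name `for_ops` and the elided operation lists are assumptions, not the final spec text):

```python
def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
    # Verify that outstanding deposits are processed up to the maximum number of deposits
    assert len(body.deposits) == min(MAX_DEPOSITS, state.eth1_data.deposit_count - state.eth1_deposit_index)

    def for_ops(operations, fn):
        # Inner helper under a distinct name: apply fn to each operation in order
        for operation in operations:
            fn(state, operation)

    for_ops(body.proposer_slashings, process_proposer_slashing)
    for_ops(body.attester_slashings, process_attester_slashing)
    # ... remaining operation lists elided ...
```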

protolambda

comment created time in 3 months

PR merged ethereum/eth2.0-specs

Backport master (v0.9.1) to dev

Backport v0.9.1 to dev, so that the new shard work can then be updated on top of it.

+376 -222

0 comment

17 changed files

protolambda

pr closed time in 3 months

push eventethereum/eth2.0-specs

Diederik Loerakker

commit sha b15669b7a51dc460deeaef3d563de89746f486fa

Backport master (v0.9.1) to dev (#1482)
* p2p-interface: clarify that signing_root is used for block requests
* hash cleanups
* one more hash tree root gone for blocks - block hashes are always signing roots!
* use simple serialize data types consistently
* Describe which finalized root/epoch to use
* remove custody_bits from attestation
* remove AttestationDataAndCustodyBit
* Specify inclusive range for genesis deposits
* add initial fork choice bounce prevention and tests
* PR feedback
* further test bounce attack
* wipe queued justified after epoch transition
* remove extra var
* minor fmt
* only allow attestations to be considered from current and previous epoch
* use best_justified_checkpoint instead of queued_justified_checkpoints
* use helper for slots since epoch start
* be explicit about use of genesis epoch for previous epoch in fork choice on_block
* pr feedback
* add note about genesis attestations
* cleanup get_eth1_vote
* make eth1_follow_distance clearer
* Update the expected proposer period, since `SECONDS_PER_SLOT` is now `12`
* minor fix to comment in mainnet config
* Update 0_beacon-chain.md

view details

push time in 3 months

Pull request review commentethereum/eth2.0-specs

[WIP] New shard proposal

+# Ethereum 2.0 Phase 1 -- Crosslinks and Shard Data++**Notice**: This document is a work-in-progress for researchers and implementers.++## Table of contents++<!-- TOC -->++- [Ethereum 2.0 Phase 1 -- Crosslinks and Shard Data](#ethereum-20-phase-1----crosslinks-and-shard-data)+    - [Table of contents](#table-of-contents)+    - [Introduction](#introduction)+    - [Configuration](#configuration)+        - [Misc](#misc)+    - [Containers](#containers)+        - [`ShardBlockWrapper`](#shardblockwrapper)+        - [`ShardSignedHeader`](#shardsignedheader)+        - [`ShardState`](#shardstate)+        - [`AttestationData`](#attestationdata)+        - [`ShardTransition`](#shardtransition)+        - [`Attestation`](#attestation)+        - [`IndexedAttestation`](#indexedattestation)+        - [`CompactCommittee`](#compactcommittee)+        - [`AttestationCustodyBitWrapper`](#attestationcustodybitwrapper)+        - [`PendingAttestation`](#pendingattestation)+    - [Helpers](#helpers)+        - [`get_online_validators`](#get_online_validators)+        - [`pack_compact_validator`](#pack_compact_validator)+        - [`committee_to_compact_committee`](#committee_to_compact_committee)+        - [`get_light_client_committee`](#get_light_client_committee)+        - [`get_indexed_attestation`](#get_indexed_attestation)+        - [`update_gasprice`](#update_gasprice)+        - [`is_valid_indexed_attestation`](#is_valid_indexed_attestation)+        - [`get_attestation_shard`](#get_attestation_shard)+    - [Beacon Chain Changes](#beacon-chain-changes)+        - [New beacon state fields](#new-beacon-state-fields)+        - [New beacon block data fields](#new-beacon-block-data-fields)+        - [Attestation processing](#attestation-processing)+            - [`validate_attestation`](#validate_attestation)+            - [`apply_shard_transition`](#apply_shard_transition)+            - [`process_attestations`](#process_attestations)+        - [Misc block post-processing](#misc-block-post-processing)+        - [Light client processing](#light-client-processing)+        - [Epoch transition](#epoch-transition)+    - [Fraud proofs](#fraud-proofs)+    - [Shard state transition function](#shard-state-transition-function)+    - [Honest committee member behavior](#honest-committee-member-behavior)++<!-- /TOC -->++## Introduction++This document describes the shard transition function (data layer only) and the shard fork choice rule as part of Phase 1 of Ethereum 2.0.++## Configuration++### Misc++| Name | Value | Unit | Duration |+| - | - | - | - | +| `MAX_SHARDS` | `2**10` (= 1024) |+| `ACTIVE_SHARDS` | `2**6` (= 64) |+| `ONLINE_PERIOD` | `2**3` (= 8) | epochs | ~51 min |+| `LIGHT_CLIENT_COMMITTEE_SIZE` | `2**7` (= 128) |+| `LIGHT_CLIENT_COMMITTEE_PERIOD` | `2**8` (= 256) | epochs | ~27 hours |+| `SHARD_BLOCK_CHUNK_SIZE` | `2**18` (= 262,144) | |+| `MAX_SHARD_BLOCK_CHUNKS` | `2**2` (= 4) | |+| `BLOCK_SIZE_TARGET` | `3 * 2**16` (= 196,608) | |+| `SHARD_BLOCK_OFFSETS` | `[1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]` | |+| `MAX_SHARD_BLOCKS_PER_ATTESTATION` | `len(SHARD_BLOCK_OFFSETS)` | |+| `EMPTY_CHUNK_ROOT` | `hash_tree_root(BytesN[SHARD_BLOCK_CHUNK_SIZE]())` | |+| `MAX_GASPRICE` | `2**14` (= 16,384) | Gwei | |+| `GASPRICE_ADJUSTMENT_COEFFICIENT` | `2**3` (= 8) | |+| `DOMAIN_SHARD_LIGHT_CLIENT` | `192` | |+| `DOMAIN_SHARD_PROPOSAL` | `193` | |++## Containers++### `ShardBlockWrapper`++```python+class ShardBlockWrapper(Container):+    shard_parent_root: Hash+    beacon_parent_root: Hash+    slot: Slot+    body: 
BytesN[SHARD_BLOCK_CHUNK_SIZE]+    signature: BLSSignature+```++### `ShardSignedHeader`++```python+class ShardSignedHeader(Container):+    shard_parent_root: Hash+    beacon_parent_root: Hash+    slot: Slot+    body_root: Hash+```++### `ShardState`++```python+class ShardState(Container):+    slot: Slot+    gasprice: Gwei

I think there's no need: it's zero by default, and after one slot get_updated_gasprice pushes it up to MIN_GASPRICE.
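A quick illustration of that behaviour, reusing the constants and the get_updated_gasprice logic from the current draft (self-contained Python sketch, type aliases dropped):

```python
MIN_GASPRICE = 2**5                   # 32 Gwei
MAX_GASPRICE = 2**14                  # 16,384 Gwei
BLOCK_SIZE_TARGET = 3 * 2**16
GASPRICE_ADJUSTMENT_COEFFICIENT = 2**3

def get_updated_gasprice(prev_gasprice, length):
    if length > BLOCK_SIZE_TARGET:
        delta = prev_gasprice * (length - BLOCK_SIZE_TARGET) // BLOCK_SIZE_TARGET // GASPRICE_ADJUSTMENT_COEFFICIENT
        return min(prev_gasprice + delta, MAX_GASPRICE)
    else:
        delta = prev_gasprice * (BLOCK_SIZE_TARGET - length) // BLOCK_SIZE_TARGET // GASPRICE_ADJUSTMENT_COEFFICIENT
        return max(prev_gasprice, MIN_GASPRICE + delta) - delta

# A default-initialized ShardState has gasprice == 0; since delta is proportional to
# prev_gasprice, the first update returns max(0, MIN_GASPRICE) - 0 == MIN_GASPRICE.
assert get_updated_gasprice(0, 0) == MIN_GASPRICE
assert get_updated_gasprice(0, BLOCK_SIZE_TARGET) == MIN_GASPRICE
```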

vbuterin

comment created time in 3 months

issue commentethereum/eth2.0-specs

Sparse Merkle Trees (SMTs) designs

Out of all the options, I think I like leaf-level mixins the most. The main challenge (really, with any of these schemes except my modified-hash-function proposal) is that generalized indices would not work, because the path to any given object would be dynamic. I would suggest we just bite the bullet: in objects that contain sets/dicts, require a path to consist of a generalized index (one that pretends every key is the full 256 bits) together with a second object, a list of pairs (depth of dict root, depth of leaf relative to that root). Note that this list would need to be dynamically generated and passed along with each proof.

From this list of pairs, you can translate the generalized index into a "modified generalized index" (basically, the bit positions corresponding to levels that were elided in the tree get snipped out of the generalized index), and generate a list of mixin checks ("check that the value at generalized index x is y").
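For illustration, a minimal sketch of the bit-snipping step (Python; the helper is purely hypothetical and assumes the set of elided path depths has already been derived from the (dict-root depth, leaf depth) pairs):

```python
def to_modified_generalized_index(generalized_index: int, elided_depths: set) -> int:
    """
    Toy illustration: drop the path bits at depths that the actual tree elides,
    turning a 'full 256-bit key' generalized index into a modified one.
    """
    path_bits = bin(generalized_index)[3:]  # strip '0b' and the leading 1-bit
    kept_bits = [b for depth, b in enumerate(path_bits) if depth not in elided_depths]
    return int('1' + ''.join(kept_bits), 2)

# Example: index 0b1101 (path right, left, right) with depth 1 elided -> 0b111
assert to_modified_generalized_index(0b1101, {1}) == 0b111
```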

protolambda

comment created time in 3 months

issue commentethereum/eth2.0-specs

Phase 0 to 1 upgradability: fork/version boundary deserialization issue

My instinct is to just disallow phase 0 objects from being included in phase 1 and vice versa. A few validators would lose a small amount of rewards, but the losses are tiny. Also, from the protocol's point of view, finality will happen in phase 1 just fine. The main risk is that fraud proofs from the end of phase 0 would not carry over into phase 1, so validators could get away with some mischief in the last few slots of phase 0; but especially between phase 0 and phase 1, I don't see even that risk as being anywhere close to high enough to justify the complexity of working around it. Relying on an honest majority there is fine.

hwwhww

comment created time in 3 months

push eventethereum/eth2.0-specs

Vitalik Buterin

commit sha 7c831729614f5c9b7d35ab211973289cb4c5e5e5

Added get_shard_proposer_index

view details

push time in 3 months

push eventethereum/eth2.0-specs

Vitalik Buterin

commit sha 736fe983b60a88b4a96dd4369a7f089a62babf8f

Some fixes

view details

push time in 3 months

Pull request review commentethereum/eth2.0-specs

[WIP] New shard proposal

+# Ethereum 2.0 Phase 1 -- Crosslinks and Shard Data++**Notice**: This document is a work-in-progress for researchers and implementers.++## Table of contents++<!-- TOC -->++- [Ethereum 2.0 Phase 1 -- Crosslinks and Shard Data](#ethereum-20-phase-1----crosslinks-and-shard-data)+    - [Table of contents](#table-of-contents)+    - [Introduction](#introduction)+    - [Configuration](#configuration)+        - [Misc](#misc)+    - [Containers](#containers)+        - [`ShardBlockWrapper`](#shardblockwrapper)+        - [`ShardSignedHeader`](#shardsignedheader)+        - [`ShardState`](#shardstate)+        - [`AttestationData`](#attestationdata)+        - [`ShardTransition`](#shardtransition)+        - [`Attestation`](#attestation)+        - [`IndexedAttestation`](#indexedattestation)+        - [`CompactCommittee`](#compactcommittee)+        - [`AttestationCustodyBitWrapper`](#attestationcustodybitwrapper)+        - [`PendingAttestation`](#pendingattestation)+    - [Helpers](#helpers)+        - [`get_online_validators`](#get_online_validators)+        - [`pack_compact_validator`](#pack_compact_validator)+        - [`committee_to_compact_committee`](#committee_to_compact_committee)+        - [`get_light_client_committee`](#get_light_client_committee)+        - [`get_indexed_attestation`](#get_indexed_attestation)+        - [`update_gasprice`](#update_gasprice)+        - [`is_valid_indexed_attestation`](#is_valid_indexed_attestation)+        - [`get_attestation_shard`](#get_attestation_shard)+    - [Beacon Chain Changes](#beacon-chain-changes)+        - [New beacon state fields](#new-beacon-state-fields)+        - [New beacon block data fields](#new-beacon-block-data-fields)+        - [Attestation processing](#attestation-processing)+            - [`validate_attestation`](#validate_attestation)+            - [`apply_shard_transition`](#apply_shard_transition)+            - [`process_attestations`](#process_attestations)+        - [Misc block post-processing](#misc-block-post-processing)+        - [Light client processing](#light-client-processing)+        - [Epoch transition](#epoch-transition)+    - [Fraud proofs](#fraud-proofs)+    - [Shard state transition function](#shard-state-transition-function)+    - [Honest committee member behavior](#honest-committee-member-behavior)++<!-- /TOC -->++## Introduction++This document describes the shard transition function (data layer only) and the shard fork choice rule as part of Phase 1 of Ethereum 2.0.++## Configuration++### Misc++| Name | Value | Unit | Duration |+| - | - | - | - | +| `MAX_SHARDS` | `2**10` (= 1024) |+| `ACTIVE_SHARDS` | `2**6` (= 64) |+| `ONLINE_PERIOD` | `2**3` (= 8) | epochs | ~51 min |+| `LIGHT_CLIENT_COMMITTEE_SIZE` | `2**7` (= 128) |+| `LIGHT_CLIENT_COMMITTEE_PERIOD` | `2**8` (= 256) | epochs | ~27 hours |+| `SHARD_BLOCK_CHUNK_SIZE` | `2**18` (= 262,144) | |+| `MAX_SHARD_BLOCK_CHUNKS` | `2**2` (= 4) | |+| `BLOCK_SIZE_TARGET` | `3 * 2**16` (= 196,608) | |+| `SHARD_BLOCK_OFFSETS` | `[1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]` | |+| `MAX_SHARD_BLOCKS_PER_ATTESTATION` | `len(SHARD_BLOCK_OFFSETS)` | |+| `EMPTY_CHUNK_ROOT` | `hash_tree_root(BytesN[SHARD_BLOCK_CHUNK_SIZE]())` | |+| `MAX_GASPRICE` | `2**14` (= 16,384) | Gwei | |+| `GASPRICE_ADJUSTMENT_COEFFICIENT` | `2**3` (= 8) | |+| `DOMAIN_SHARD_LIGHT_CLIENT` | `192` | |+| `DOMAIN_SHARD_PROPOSAL` | `193` | |++## Containers++### `ShardBlockWrapper`++```python+class ShardBlockWrapper(Container):+    shard_parent_root: Hash+    beacon_parent_root: Hash+    slot: Slot+    body: 
BytesN[SHARD_BLOCK_CHUNK_SIZE]+    signature: BLSSignature+```++### `ShardSignedHeader`++```python+class ShardSignedHeader(Container):+    shard_parent_root: Hash+    beacon_parent_root: Hash+    slot: Slot+    body_root: Hash+```++### `ShardState`++```python+class ShardState(Container):+    slot: Slot+    gasprice: Gwei+    root: Hash+    latest_block_hash: Hash+```++### `AttestationData`++```python+class AttestationData(Container):+    slot: Slot+    index: CommitteeIndex+    # LMD GHOST vote+    beacon_block_root: Hash+    # FFG vote+    source: Checkpoint+    target: Checkpoint+    # Shard transition root+    shard_transition_root: Hash+```++### `ShardTransition`++```python+class ShardTransition(Container):+    # Starting from slot+    start_slot: Slot+    # Shard block lengths+    shard_block_lengths: List[uint64, MAX_SHARD_BLOCKS_PER_ATTESTATION]+    # Shard data roots+    shard_data_roots: List[List[Hash, MAX_SHARD_BLOCK_CHUNKS], MAX_SHARD_BLOCKS_PER_ATTESTATION]+    # Intermediate state roots+    shard_state_roots: List[ShardState, MAX_SHARD_BLOCKS_PER_ATTESTATION]+    # Proposer signature aggregate+    proposer_signature_aggregate: BLSSignature+```++### `Attestation`++```python+class Attestation(Container):+    aggregation_bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]+    data: AttestationData+    custody_bits: List[Bitlist[MAX_VALIDATORS_PER_COMMITTEE], MAX_SHARD_BLOCKS_PER_ATTESTATION]+    signature: BLSSignature+```++### `IndexedAttestation`++```python+class IndexedAttestation(Container):+    committee: List[ValidatorIndex, MAX_VALIDATORS_PER_COMMITTEE]+    attestation: Attestation+```++### `CompactCommittee`++```python+class CompactCommittee(Container):+    pubkeys: List[BLSPubkey, MAX_VALIDATORS_PER_COMMITTEE]+    compact_validators: List[uint64, MAX_VALIDATORS_PER_COMMITTEE]+```++### `AttestationCustodyBitWrapper`++```python+class AttestationCustodyBitWrapper(Container):+    attestation_root: Hash+    block_index: uint64+    bit: bool+```++### `PendingAttestation`++```python+class PendingAttestation(Container):+    aggregation_bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]+    data: AttestationData+    inclusion_delay: Slot+    proposer_index: ValidatorIndex+    crosslink_success: bool+```++## Helpers++### `get_online_validators`++```python+def get_online_indices(state: BeaconState) -> Set[ValidatorIndex]:+    active_validators = get_active_validator_indices(state, get_current_epoch(state))+    return set([i for i in active_validators if state.online_countdown[i] != 0])+```++### `pack_compact_validator`++```python+def pack_compact_validator(index: int, slashed: bool, balance_in_increments: int) -> int:+    """+    Creates a compact validator object representing index, slashed status, and compressed balance.+    Takes as input balance-in-increments (// EFFECTIVE_BALANCE_INCREMENT) to preserve symmetry with+    the unpacking function.+    """+    return (index << 16) + (slashed << 15) + balance_in_increments+```++### `committee_to_compact_committee`++```python+def committee_to_compact_committee(state: BeaconState, committee: Sequence[ValidatorIndex]) -> CompactCommittee:+    """+    Given a state and a list of validator indices, outputs the CompactCommittee representing them.+    """+    validators = [state.validators[i] for i in committee]+    compact_validators = [+        pack_compact_validator(i, v.slashed, v.effective_balance // EFFECTIVE_BALANCE_INCREMENT)+        for i, v in zip(committee, validators)+    ]+    pubkeys = [v.pubkey for v in validators]+    return 
CompactCommittee(pubkeys=pubkeys, compact_validators=compact_validators)+```++### `get_light_client_committee`++```python+def get_light_client_committee(beacon_state: BeaconState, epoch: Epoch) -> Sequence[ValidatorIndex]:+    source_epoch = epoch - epoch % LIGHT_CLIENT_COMMITTEE_PERIOD +    if source_epoch > 0:+        source_epoch -= LIGHT_CLIENT_COMMITTEE_PERIOD+    active_validator_indices = get_active_validator_indices(beacon_state, source_epoch)+    seed = get_seed(beacon_state, source_epoch, DOMAIN_SHARD_LIGHT_CLIENT)+    return compute_committee(active_validator_indices, seed, 0, ACTIVE_SHARDS)[:TARGET_COMMITTEE_SIZE]+```++### `get_indexed_attestation`++```python+def get_indexed_attestation(beacon_state: BeaconState, attestation: Attestation) -> IndexedAttestation:+    committee = get_beacon_committee(beacon_state, attestation.data.slot, attestation.data.index)+    return IndexedAttestation(committee, attestation)+```++### `update_gasprice`++```python+def update_gasprice(prev_gasprice: Gwei, length: uint8) -> Gwei:+    if length > BLOCK_SIZE_TARGET:+        delta = prev_gasprice * (length - BLOCK_SIZE_TARGET) // BLOCK_SIZE_TARGET // GASPRICE_ADJUSTMENT_COEFFICIENT+        return min(prev_gasprice + delta, MAX_GASPRICE)+    else:+        delta = prev_gasprice * (BLOCK_SIZE_TARGET - length) // BLOCK_SIZE_TARGET // GASPRICE_ADJUSTMENT_COEFFICIENT+        if delta > prev_gasprice - GASPRICE_ADJUSTMENT_COEFFICIENT:+            return GASPRICE_ADJUSTMENT_COEFFICIENT+        else:+            return prev_gasprice - delta+```++### `is_valid_indexed_attestation`++```python+def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: IndexedAttestation) -> bool:+    """+    Check if ``indexed_attestation`` has valid indices and signature.+    """++    # Verify aggregate signature+    all_pubkeys = []+    all_message_hashes = []+    aggregation_bits = indexed_attestation.attestation.aggregation_bits+    assert len(aggregation_bits) == len(indexed_attestation.committee)+    for i, custody_bits in enumerate(indexed_attestation.attestation.custody_bits):+        assert len(custody_bits) == len(indexed_attestation.committee)+        for participant, abit, cbit in zip(indexed_attestation.committee, aggregation_bits, custody_bits):+            if abit:+                all_pubkeys.append(state.validators[participant].pubkey)+                # Note: only 2N distinct message hashes+                all_message_hashes.append(hash_tree_root(+                    AttestationCustodyBitWrapper(hash_tree_root(indexed_attestation.data), i, cbit)+                ))+            else:+                assert cbit == False+        +    return bls_verify_multiple(+        pubkeys=all_pubkeys,+        message_hashes=all_message_hashes,+        signature=indexed_attestation.signature,+        domain=get_domain(state, DOMAIN_BEACON_ATTESTER, indexed_attestation.data.target.epoch),+    )+```++### `get_attestation_shard`++```python+def get_shard(state: BeaconState, attestation: Attestation) -> Shard:+    return Shard((attestation.data.index + get_start_shard(state, data.slot)) % ACTIVE_SHARDS)+```++## Beacon Chain Changes++### New beacon state fields++```python+    shard_states: Vector[ShardState, MAX_SHARDS]+    online_countdown: Bytes[VALIDATOR_REGISTRY_LIMIT]+    current_light_committee: CompactCommittee+    next_light_committee: CompactCommittee+```++### New beacon block data fields++```python+    shard_transitions: Vector[ShardTransition, MAX_SHARDS]+    light_client_signature_bitfield: 
Bitlist[LIGHT_CLIENT_COMMITTEE_SIZE]+    light_client_signature: BLSSignature+```++### Attestation processing++#### `validate_attestation`++```python+def validate_attestation(state: BeaconState, attestation: Attestation) -> None:+    data = attestation.data+    assert data.index < ACTIVE_SHARDS+    shard = get_shard(state, attestation)+    proposer_index = get_beacon_proposer_index(state)++    # Signature check+    assert is_valid_indexed_attestation(state, get_indexed_attestation(state, attestation))+    # Type 1: on-time attestations+    if data.custody_bits != []:+        # Correct slot+        assert data.slot == state.slot+        # Correct data root count+        start_slot = state.shard_next_slots[shard]+        offset_slots = [start_slot + x for x in SHARD_BLOCK_OFFSETS if start_slot + x < state.slot]+        assert len(attestation.custody_bits) == len(offset_slots)+        # Correct parent block root+        assert data.beacon_block_root == get_block_root_at_slot(state, state.slot - 1)+    # Type 2: delayed attestations+    else:+        assert state.slot - slot_to_epoch(data.slot) < EPOCH_LENGTH+        assert data.shard_transition_root == Hash()

Yes, it's a reduced form, and late attesters do have to rebroadcast.

vbuterin

comment created time in 3 months

push eventethereum/eth2.0-specs

Vitalik Buterin

commit sha d98141fa50b2c2f6336c1ea29d6ba170bd9c0d02

Simplified gasprice update

view details

push time in 4 months

push eventethereum/eth2.0-specs

Vitalik Buterin

commit sha 6daee55f22241e7dd02f13e2db77ecaf419f076c

Some initial changes

view details

push time in 4 months

Pull request review commentethereum/eth2.0-specs

[WIP] New shard proposal

+# Ethereum 2.0 Phase 1 -- Crosslinks and Shard Data++**Notice**: This document is a work-in-progress for researchers and implementers.++## Table of contents++<!-- TOC -->++- [Ethereum 2.0 Phase 1 -- Crosslinks and Shard Data](#ethereum-20-phase-1----crosslinks-and-shard-data)+    - [Table of contents](#table-of-contents)+    - [Introduction](#introduction)+    - [Configuration](#configuration)+        - [Misc](#misc)+    - [Containers](#containers)+        - [`ShardBlockWrapper`](#shardblockwrapper)+        - [`ShardSignedHeader`](#shardsignedheader)+        - [`ShardState`](#shardstate)+        - [`AttestationData`](#attestationdata)+        - [`ShardTransition`](#shardtransition)+        - [`Attestation`](#attestation)+        - [`IndexedAttestation`](#indexedattestation)+        - [`CompactCommittee`](#compactcommittee)+        - [`AttestationCustodyBitWrapper`](#attestationcustodybitwrapper)+        - [`PendingAttestation`](#pendingattestation)+    - [Helpers](#helpers)+        - [`get_online_validators`](#get_online_validators)+        - [`pack_compact_validator`](#pack_compact_validator)+        - [`committee_to_compact_committee`](#committee_to_compact_committee)+        - [`get_light_client_committee`](#get_light_client_committee)+        - [`get_indexed_attestation`](#get_indexed_attestation)+        - [`update_gasprice`](#update_gasprice)+        - [`is_valid_indexed_attestation`](#is_valid_indexed_attestation)+        - [`get_attestation_shard`](#get_attestation_shard)+    - [Beacon Chain Changes](#beacon-chain-changes)+        - [New beacon state fields](#new-beacon-state-fields)+        - [New beacon block data fields](#new-beacon-block-data-fields)+        - [Attestation processing](#attestation-processing)+            - [`validate_attestation`](#validate_attestation)+            - [`apply_shard_transition`](#apply_shard_transition)+            - [`process_attestations`](#process_attestations)+        - [Misc block post-processing](#misc-block-post-processing)+        - [Light client processing](#light-client-processing)+        - [Epoch transition](#epoch-transition)+    - [Fraud proofs](#fraud-proofs)+    - [Shard state transition function](#shard-state-transition-function)+    - [Honest committee member behavior](#honest-committee-member-behavior)++<!-- /TOC -->++## Introduction++This document describes the shard transition function (data layer only) and the shard fork choice rule as part of Phase 1 of Ethereum 2.0.++## Configuration++### Misc++| Name | Value | Unit | Duration |+| - | - | - | - | +| `MAX_SHARDS` | `2**10` (= 1024) |+| `ACTIVE_SHARDS` | `2**6` (= 64) |+| `ONLINE_PERIOD` | `2**3` (= 8) | epochs | ~51 min |+| `LIGHT_CLIENT_COMMITTEE_SIZE` | `2**7` (= 128) |+| `LIGHT_CLIENT_COMMITTEE_PERIOD` | `2**8` (= 256) | epochs | ~27 hours |+| `SHARD_BLOCK_CHUNK_SIZE` | `2**18` (= 262,144) | |+| `MAX_SHARD_BLOCK_CHUNKS` | `2**2` (= 4) | |+| `BLOCK_SIZE_TARGET` | `3 * 2**16` (= 196,608) | |+| `SHARD_BLOCK_OFFSETS` | `[1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]` | |+| `MAX_SHARD_BLOCKS_PER_ATTESTATION` | `len(SHARD_BLOCK_OFFSETS)` | |+| `EMPTY_CHUNK_ROOT` | `hash_tree_root(BytesN[SHARD_BLOCK_CHUNK_SIZE]())` | |+| `MAX_GASPRICE` | `2**14` (= 16,384) | Gwei | |+| `GASPRICE_ADJUSTMENT_COEFFICIENT` | `2**3` (= 8) | |+| `DOMAIN_SHARD_LIGHT_CLIENT` | `192` | |+| `DOMAIN_SHARD_PROPOSAL` | `193` | |++## Containers++### `ShardBlockWrapper`++```python+class ShardBlockWrapper(Container):+    shard_parent_root: Hash+    beacon_parent_root: Hash+    slot: Slot+    body: 
BytesN[SHARD_BLOCK_CHUNK_SIZE]+    signature: BLSSignature+```++### `ShardSignedHeader`++```python+class ShardSignedHeader(Container):+    shard_parent_root: Hash+    beacon_parent_root: Hash+    slot: Slot+    body_root: Hash+```++### `ShardState`++```python+class ShardState(Container):+    slot: Slot+    gasprice: Gwei+    root: Hash+    latest_block_hash: Hash+```++### `AttestationData`++```python+class AttestationData(Container):+    slot: Slot+    index: CommitteeIndex+    # LMD GHOST vote+    beacon_block_root: Hash+    # FFG vote+    source: Checkpoint+    target: Checkpoint+    # Shard transition root+    shard_transition_root: Hash+```++### `ShardTransition`++```python+class ShardTransition(Container):+    # Starting from slot+    start_slot: Slot+    # Shard block lengths+    shard_block_lengths: List[uint64, MAX_SHARD_BLOCKS_PER_ATTESTATION]+    # Shard data roots+    shard_data_roots: List[List[Hash, MAX_SHARD_BLOCK_CHUNKS], MAX_SHARD_BLOCKS_PER_ATTESTATION]+    # Intermediate state roots+    shard_state_roots: List[ShardState, MAX_SHARD_BLOCKS_PER_ATTESTATION]+    # Proposer signature aggregate+    proposer_signature_aggregate: BLSSignature+```++### `Attestation`++```python+class Attestation(Container):+    aggregation_bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]+    data: AttestationData+    custody_bits: List[Bitlist[MAX_VALIDATORS_PER_COMMITTEE], MAX_SHARD_BLOCKS_PER_ATTESTATION]+    signature: BLSSignature+```++### `IndexedAttestation`++```python+class IndexedAttestation(Container):+    committee: List[ValidatorIndex, MAX_VALIDATORS_PER_COMMITTEE]+    attestation: Attestation+```++### `CompactCommittee`++```python+class CompactCommittee(Container):+    pubkeys: List[BLSPubkey, MAX_VALIDATORS_PER_COMMITTEE]+    compact_validators: List[uint64, MAX_VALIDATORS_PER_COMMITTEE]+```++### `AttestationCustodyBitWrapper`++```python+class AttestationCustodyBitWrapper(Container):+    attestation_root: Hash+    block_index: uint64

It's needed to distinguish the 0/1 values for each position; otherwise you'd just be adding up 1 bits without regard for which positions they come from.
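A toy illustration of the failure mode (SHA-256-as-integer and integer addition stand in for hash-to-point and G2 addition; this models only the structure, not real BLS arithmetic): if the per-block message omitted block_index, custody bit vectors that differ only in position would produce the same signed point.

```python
import hashlib

def toy_hash_to_point(message: bytes) -> int:
    # Stand-in for hash-to-point; the real scheme hashes into G2
    return int.from_bytes(hashlib.sha256(message).digest(), 'little')

R = b'\x11' * 32  # some attestation data root

def point_sum_without_index(bits):
    # Hypothetical broken variant: the wrapper omits the block index
    return sum(toy_hash_to_point(R + bytes([b])) for b in bits)

# [1, 0] and [0, 1] hash to the same multiset of messages, so their sums collide:
# the positions of the custody bits are lost without block_index in the wrapper.
assert point_sum_without_index([1, 0]) == point_sum_without_index([0, 1])
```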

vbuterin

comment created time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha 5100548d91385c79d644236fa8dbe8f479cfedc3

Update specs/core/1_new_shards.md Co-Authored-By: Danny Ryan <dannyjryan@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha a8a76ffe6442127fd02cfd55063ce74cf49470d4

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha 45ff8c8803c4cb2247cb59a7d4e5197df9ed9bc1

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha b9c9faed30e2ec7e354ab608d455c4fe61b6c5cc

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha 563126d6de25deb496aab471ff0c9b4b2d80f048

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha 420f56fd18ebbc0be77653885df2fc779870069e

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha 011df4fc04ee3fb9a426e5ce34f219253f628a81

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha dce1117854e9fe9beaea4346f68fcca3cf93f783

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha 14c44c596e993f79fffc796801ef34ae11647dda

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha 3dc29ea790be5ac91aa378ff20a132e5a5128ece

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha 58bb5fc4d1d0c42c89a95b0a871192364c5a23ce

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha 8e3cf322e341f4850af5d5809c6fa7d5b1bb694f

Update specs/core/1_new_shards.md

view details

push time in 4 months

Pull request review commentethereum/eth2.0-specs

[WIP] New shard proposal

+# Ethereum 2.0 Phase 1 -- Crosslinks and Shard Data++**Notice**: This document is a work-in-progress for researchers and implementers.++## Table of contents++<!-- TOC -->++- [Ethereum 2.0 Phase 1 -- Crosslinks and Shard Data](#ethereum-20-phase-1----crosslinks-and-shard-data)+    - [Table of contents](#table-of-contents)+    - [Introduction](#introduction)+    - [Configuration](#configuration)+        - [Misc](#misc)+    - [Containers](#containers)+        - [`ShardBlockWrapper`](#shardblockwrapper)+        - [`ShardSignedHeader`](#shardsignedheader)+        - [`ShardState`](#shardstate)+        - [`AttestationData`](#attestationdata)+        - [`ShardTransition`](#shardtransition)+        - [`Attestation`](#attestation)+        - [`IndexedAttestation`](#indexedattestation)+        - [`CompactCommittee`](#compactcommittee)+        - [`AttestationCustodyBitWrapper`](#attestationcustodybitwrapper)+        - [`PendingAttestation`](#pendingattestation)+    - [Helpers](#helpers)+        - [`get_online_validators`](#get_online_validators)+        - [`pack_compact_validator`](#pack_compact_validator)+        - [`committee_to_compact_committee`](#committee_to_compact_committee)+        - [`get_light_client_committee`](#get_light_client_committee)+        - [`get_indexed_attestation`](#get_indexed_attestation)+        - [`update_gasprice`](#update_gasprice)+        - [`is_valid_indexed_attestation`](#is_valid_indexed_attestation)+        - [`get_attestation_shard`](#get_attestation_shard)+    - [Beacon Chain Changes](#beacon-chain-changes)+        - [New beacon state fields](#new-beacon-state-fields)+        - [New beacon block data fields](#new-beacon-block-data-fields)+        - [Attestation processing](#attestation-processing)+            - [`validate_attestation`](#validate_attestation)+            - [`apply_shard_transition`](#apply_shard_transition)+            - [`process_attestations`](#process_attestations)+        - [Misc block post-processing](#misc-block-post-processing)+        - [Light client processing](#light-client-processing)+        - [Epoch transition](#epoch-transition)+    - [Fraud proofs](#fraud-proofs)+    - [Shard state transition function](#shard-state-transition-function)+    - [Honest committee member behavior](#honest-committee-member-behavior)++<!-- /TOC -->++## Introduction++This document describes the shard transition function (data layer only) and the shard fork choice rule as part of Phase 1 of Ethereum 2.0.++## Configuration++### Misc++| Name | Value | Unit | Duration |+| - | - | - | - | +| `MAX_SHARDS` | `2**10` (= 1024) |+| `ACTIVE_SHARDS` | `2**6` (= 64) |+| `ONLINE_PERIOD` | `2**3` (= 8) | epochs | ~51 min |+| `LIGHT_CLIENT_COMMITTEE_SIZE` | `2**7` (= 128) |+| `LIGHT_CLIENT_COMMITTEE_PERIOD` | `2**8` (= 256) | epochs | ~27 hours |+| `SHARD_BLOCK_CHUNK_SIZE` | `2**18` (= 262,144) | |+| `MAX_SHARD_BLOCK_CHUNKS` | `2**2` (= 4) | |+| `BLOCK_SIZE_TARGET` | `3 * 2**16` (= 196,608) | |+| `SHARD_BLOCK_OFFSETS` | `[1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]` | |+| `MAX_SHARD_BLOCKS_PER_ATTESTATION` | `len(SHARD_BLOCK_OFFSETS)` | |+| `EMPTY_CHUNK_ROOT` | `hash_tree_root(BytesN[SHARD_BLOCK_CHUNK_SIZE]())` | |+| `MAX_GASPRICE` | `2**14` (= 16,384) | Gwei | |+| `GASPRICE_ADJUSTMENT_COEFFICIENT` | `2**3` (= 8) | |+| `DOMAIN_SHARD_LIGHT_CLIENT` | `192` | |+| `DOMAIN_SHARD_PROPOSAL` | `193` | |++## Containers++### `ShardBlockWrapper`++```python+class ShardBlockWrapper(Container):+    shard_parent_root: Hash+    beacon_parent_root: Hash+    slot: Slot+    body: 
BytesN[SHARD_BLOCK_CHUNK_SIZE]+    signature: BLSSignature+```++### `ShardSignedHeader`++```python+class ShardSignedHeader(Container):+    shard_parent_root: Hash+    beacon_parent_root: Hash+    slot: Slot+    body_root: Hash+```++### `ShardState`++```python+class ShardState(Container):+    slot: Slot+    gasprice: Gwei+    root: Hash+    latest_block_hash: Hash+```++### `AttestationData`++```python+class AttestationData(Container):+    slot: Slot+    index: CommitteeIndex+    # LMD GHOST vote+    beacon_block_root: Hash+    # FFG vote+    source: Checkpoint+    target: Checkpoint+    # Shard transition root+    shard_transition_root: Hash+```++### `ShardTransition`++```python+class ShardTransition(Container):+    # Starting from slot+    start_slot: Slot+    # Shard block lengths+    shard_block_lengths: List[uint64, MAX_SHARD_BLOCKS_PER_ATTESTATION]+    # Shard data roots+    shard_data_roots: List[List[Hash, MAX_SHARD_BLOCK_CHUNKS], MAX_SHARD_BLOCKS_PER_ATTESTATION]+    # Intermediate state roots+    shard_state_roots: List[ShardState, MAX_SHARD_BLOCKS_PER_ATTESTATION]+    # Proposer signature aggregate+    proposer_signature_aggregate: BLSSignature+```++### `Attestation`++```python+class Attestation(Container):+    aggregation_bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]+    data: AttestationData+    custody_bits: List[Bitlist[MAX_VALIDATORS_PER_COMMITTEE], MAX_SHARD_BLOCKS_PER_ATTESTATION]+    signature: BLSSignature+```++### `IndexedAttestation`++```python+class IndexedAttestation(Container):+    committee: List[ValidatorIndex, MAX_VALIDATORS_PER_COMMITTEE]+    attestation: Attestation+```++### `CompactCommittee`++```python+class CompactCommittee(Container):+    pubkeys: List[BLSPubkey, MAX_VALIDATORS_PER_COMMITTEE]+    compact_validators: List[uint64, MAX_VALIDATORS_PER_COMMITTEE]+```++### `AttestationCustodyBitWrapper`++```python+class AttestationCustodyBitWrapper(Container):+    attestation_root: Hash+    block_index: uint64+    bit: bool+```++### `PendingAttestation`++```python+class PendingAttestation(Container):+    aggregation_bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]+    data: AttestationData+    inclusion_delay: Slot+    proposer_index: ValidatorIndex+    crosslink_success: bool+```++## Helpers++### `get_online_validators`++```python+def get_online_indices(state: BeaconState) -> Set[ValidatorIndex]:+    active_validators = get_active_validator_indices(state, get_current_epoch(state))+    return set([i for i in active_validators if state.online_countdown[i] != 0])+```++### `pack_compact_validator`++```python+def pack_compact_validator(index: int, slashed: bool, balance_in_increments: int) -> int:+    """+    Creates a compact validator object representing index, slashed status, and compressed balance.+    Takes as input balance-in-increments (// EFFECTIVE_BALANCE_INCREMENT) to preserve symmetry with+    the unpacking function.+    """+    return (index << 16) + (slashed << 15) + balance_in_increments+```++### `committee_to_compact_committee`++```python+def committee_to_compact_committee(state: BeaconState, committee: Sequence[ValidatorIndex]) -> CompactCommittee:+    """+    Given a state and a list of validator indices, outputs the CompactCommittee representing them.+    """+    validators = [state.validators[i] for i in committee]+    compact_validators = [+        pack_compact_validator(i, v.slashed, v.effective_balance // EFFECTIVE_BALANCE_INCREMENT)+        for i, v in zip(committee, validators)+    ]+    pubkeys = [v.pubkey for v in validators]+    return 
CompactCommittee(pubkeys=pubkeys, compact_validators=compact_validators)+```++### `get_light_client_committee`++```python+def get_light_client_committee(beacon_state: BeaconState, epoch: Epoch) -> Sequence[ValidatorIndex]:+    source_epoch = epoch - epoch % LIGHT_CLIENT_COMMITTEE_PERIOD +    if source_epoch > 0:+        source_epoch -= LIGHT_CLIENT_COMMITTEE_PERIOD+    active_validator_indices = get_active_validator_indices(beacon_state, source_epoch)+    seed = get_seed(beacon_state, source_epoch, DOMAIN_SHARD_LIGHT_CLIENT)+    return compute_committee(active_validator_indices, seed, 0, ACTIVE_SHARDS)[:TARGET_COMMITTEE_SIZE]+```++### `get_indexed_attestation`++```python+def get_indexed_attestation(beacon_state: BeaconState, attestation: Attestation) -> IndexedAttestation:+    committee = get_beacon_committee(beacon_state, attestation.data.slot, attestation.data.index)+    return IndexedAttestation(committee, attestation)+```++### `update_gasprice`++```python+def update_gasprice(prev_gasprice: Gwei, length: uint8) -> Gwei:+    if length > BLOCK_SIZE_TARGET:+        delta = prev_gasprice * (length - BLOCK_SIZE_TARGET) // BLOCK_SIZE_TARGET // GASPRICE_ADJUSTMENT_COEFFICIENT+        return min(prev_gasprice + delta, MAX_GASPRICE)+    else:+        delta = prev_gasprice * (BLOCK_SIZE_TARGET - length) // BLOCK_SIZE_TARGET // GASPRICE_ADJUSTMENT_COEFFICIENT+        if delta > prev_gasprice - GASPRICE_ADJUSTMENT_COEFFICIENT:+            return GASPRICE_ADJUSTMENT_COEFFICIENT+        else:+            return prev_gasprice - delta+```++### `is_valid_indexed_attestation`++```python+def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: IndexedAttestation) -> bool:+    """+    Check if ``indexed_attestation`` has valid indices and signature.+    """++    # Verify aggregate signature+    all_pubkeys = []+    all_message_hashes = []+    aggregation_bits = indexed_attestation.attestation.aggregation_bits+    assert len(aggregation_bits) == len(indexed_attestation.committee)+    for i, custody_bits in enumerate(indexed_attestation.attestation.custody_bits):+        assert len(custody_bits) == len(indexed_attestation.committee)+        for participant, abit, cbit in zip(indexed_attestation.committee, aggregation_bits, custody_bits):+            if abit:+                all_pubkeys.append(state.validators[participant].pubkey)+                # Note: only 2N distinct message hashes+                all_message_hashes.append(hash_tree_root(+                    AttestationCustodyBitWrapper(hash_tree_root(indexed_attestation.data), i, cbit)+                ))+            else:+                assert cbit == False+        +    return bls_verify_multiple(+        pubkeys=all_pubkeys,+        message_hashes=all_message_hashes,+        signature=indexed_attestation.signature,+        domain=get_domain(state, DOMAIN_BEACON_ATTESTER, indexed_attestation.data.target.epoch),+    )+```++### `get_attestation_shard`++```python+def get_shard(state: BeaconState, attestation: Attestation) -> Shard:+    return Shard((attestation.data.index + get_start_shard(state, data.slot)) % ACTIVE_SHARDS)+```++## Beacon Chain Changes++### New beacon state fields++```python+    shard_states: Vector[ShardState, MAX_SHARDS]+    online_countdown: Bytes[VALIDATOR_REGISTRY_LIMIT]+    current_light_committee: CompactCommittee+    next_light_committee: CompactCommittee+```++### New beacon block data fields++```python+    shard_transitions: Vector[ShardTransition, MAX_SHARDS]+    light_client_signature_bitfield: 
Bitlist[LIGHT_CLIENT_COMMITTEE_SIZE]+    light_client_signature: BLSSignature+```++### Attestation processing++#### `validate_attestation`++```python+def validate_attestation(state: BeaconState, attestation: Attestation) -> None:+    data = attestation.data+    assert data.index < ACTIVE_SHARDS+    shard = get_shard(state, attestation)+    proposer_index = get_beacon_proposer_index(state)++    # Signature check+    assert is_valid_indexed_attestation(state, get_indexed_attestation(state, attestation))+    # Type 1: on-time attestations+    if data.custody_bits != []:+        # Correct slot+        assert data.slot == state.slot+        # Correct data root count+        start_slot = state.shard_next_slots[shard]+        offset_slots = [start_slot + x for x in SHARD_BLOCK_OFFSETS if start_slot + x < state.slot]+        assert len(attestation.custody_bits) == len(offset_slots)+        # Correct parent block root+        assert data.beacon_block_root == get_block_root_at_slot(state, state.slot - 1)+    # Type 2: delayed attestations+    else:+        assert state.slot - slot_to_epoch(data.slot) < EPOCH_LENGTH+        assert data.shard_transition_root == Hash()+        assert len(attestation.custody_bits) == 0+```++#### `apply_shard_transition`++```python+def apply_shard_transition(state: BeaconState, shard: Shard, transition: ShardTransition) -> None:+    # Slot the attestation starts counting from+    start_slot = state.shard_next_slots[shard]++    # Correct data root count+    offset_slots = [start_slot + x for x in SHARD_BLOCK_OFFSETS if start_slot + x < state.slot]+    assert len(transition.shard_data_roots) == len(transition.shard_states) == len(transition.shard_block_lengths) == len(offset_slots)+    assert transition.start_slot == start_slot++    def chunks_to_body_root(chunks):+        return hash_tree_root(chunks + [EMPTY_CHUNK_ROOT] * (MAX_SHARD_BLOCK_CHUNKS - len(chunks)))++    # Reonstruct shard headers+    headers = []+    proposers = []+    shard_parent_root = state.shard_states[shard].latest_block_hash+    for i in range(len(offset_slots)):+        if any(transition.shard_data_roots):+            headers.append(ShardSignedHeader(+                shard_parent_root=shard_parent_root,+                parent_hash=get_block_root_at_slot(state, state.slot-1),+                slot=offset_slots[i],+                body_root=chunks_to_body_root(transition.shard_data_roots[i])+            ))+            proposers.append(get_shard_proposer(state, shard, offset_slots[i]))+            shard_parent_root = hash_tree_root(headers[-1])++    # Verify correct calculation of gas prices and slots and chunk roots+    prev_gasprice = state.shard_states[shard].gasprice+    for i in range(len(offset_slots)):+        shard_state, block_length, chunks = transition.shard_states[i], transition.shard_block_lengths[i], transition.shard_data_roots[i]+        assert shard_state.gasprice == update_gasprice(prev_gasprice, block_length)+        assert shard_state.slot == offset_slots[i]+        assert len(chunks) == block_length // SHARD_BLOCK_CHUNK_SIZE+        prev_gasprice = shard_state.gasprice++    # Verify combined proposer signature+    assert bls_verify_multiple(+        pubkeys=[state.validators[proposer].pubkey for proposer in proposers],+        message_hashes=[hash_tree_root(header) for header in headers],+        signature=proposer.proposer_signature_aggregate,+        domain=DOMAIN_SHARD_PROPOSAL+    )++    # Save updated state+    state.shard_states[shard] = transition.shard_states[-1]+    
state.shard_states[shard].slot = state.slot - 1+```++#### `process_attestations`++```python+def process_attestations(state: BeaconState, block: BeaconBlock, attestations: Sequence[Attestation]) -> None:+    pending_attestations = []+    # Basic validation+    for attestation in attestations:+       validate_attestation(state, attestation)+    # Process crosslinks+    online_indices = get_online_indices(state)+    winners = set()+    for shard in range(ACTIVE_SHARDS):+        success = False+        # All attestations in the block for this shard+        this_shard_attestations = [attestation for attestation in attestations if get_shard(state, attestation) == shard and attestation.data.slot == state.slot]+        # The committee for this shard+        this_shard_committee = get_beacon_committee(state, get_current_epoch(state), shard)+        # Loop over all shard transition roots+        for shard_transition_root in sorted(set([attestation.data.shard_transition_root for attestation in this_shard_attestations])):+            all_participants = set()+            participating_attestations = []+            for attestation in this_shard_attestations:+                participating_attestations.append(attestation)+                if attestation.data.shard_transition_root == shard_transition_root:+                    all_participants = all_participants.union(get_attesting_indices(state, attestation.data, attestation.aggregation_bits))+            if (+                get_total_balance(state, online_indices.intersection(all_participants)) * 3 >=+                get_total_balance(state, online_indices.intersection(this_shard_committee)) * 2+                and success is False+            ):+                assert shard_transition_root == hash_tree_root(block.shard_transition)+                process_crosslink(state, shard, block.shard_transition)
                apply_shard_transition(state, shard, block.shard_transition)
vbuterin

comment created time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha b30eba30d136787619eb862b216d47f643ce8e7e

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha e4dbdbbcf023d10beb6e0ba5b4d8e022a19ed8a3

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha 6d7ea191551921902b5a8ff7b681b5973a35b458

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>

view details

push time in 4 months

push eventethereum/eth2.0-specs

vbuterin

commit sha 896815632972bd53d8f664ff38a1fc11940d52fa

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>


push time in 4 months

push event ethereum/eth2.0-specs

vbuterin

commit sha b9c42ffdd3dd624111cdcd0a05f34af05f1375fe

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>


push time in 4 months

push event ethereum/eth2.0-specs

vbuterin

commit sha 8f9aaf6350e242e2d1d856cf1862a5cf49218d2b

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>


push time in 4 months

push event ethereum/eth2.0-specs

vbuterin

commit sha 0e32314cb23b38c525188b5462da67efb20ddad7

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>


push time in 4 months

push event ethereum/eth2.0-specs

vbuterin

commit sha e3d61abb55b2e8423c0b36c9402db9625b4fb572

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>


push time in 4 months

push event ethereum/eth2.0-specs

vbuterin

commit sha 7e9f044086d5a5e8cfd9c0b81b9a9ed11f816ea0

Update specs/core/1_new_shards.md Co-Authored-By: Hsiao-Wei Wang <hwwang156@gmail.com>


push time in 4 months

push event ethereum/eth2.0-specs

Vitalik Buterin

commit sha 0732327c9756c9e01dfe56cf114888ae5848bbbc

A few cleanups


push time in 4 months

push event ethereum/eth2.0-specs

Vitalik Buterin

commit sha 760430fb54ad1f80832e3b8b5d6116f4299da9d9

Fixed pending attestation handling and added empty transition check


push time in 4 months

push event ethereum/eth2.0-specs

Vitalik Buterin

commit sha 6919571ca2e14fc2093e9acde30418050e80d64c

Restructured shard blocks


push time in 4 months

issue closed ethereum/eth2.0-specs

Attestations and custody bits proposal for phase 1

Challenges

  • An attestation is signing over a maximum of 256 shard blocks and state roots (maximum 1024 shard blocks and state roots signed over per block in total)
  • This is a lot of data, and we want to only store the data we need to store (eg. we don't need to store the shard blocks and state roots signed over by losing attestations)
  • We want verifying attestations to be efficient (i) in beacon blocks, (ii) over the wire during the aggregation process.

Proposal

We define two data structures, Attestation and AttestationShardData. A beacon block contains one AttestationShardData for each shard that it is processing (min 1, max 64). Attestation objects only contain a hash of the AttestationShardData (the hash is still needed to verify signatures). We verify that the AttestationShardData hashes to the hash carried in the winning attestation, i.e. the one that gets >=2/3 and updates that shard (or is empty if there is no winner).
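A rough sketch of the split, in the spec's container notation (the field names and list bounds here are illustrative only, not the actual proposal):

```python
class AttestationShardData(Container):
    # Roots of the up-to-256 shard blocks this attestation signs over
    shard_block_roots: List[Hash, 256]
    # Intermediate shard state roots, one per signed-over block
    shard_state_roots: List[Hash, 256]


class Attestation(Container):
    aggregation_bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]
    data: AttestationData
    # Only the hash of the shard data; the full AttestationShardData is
    # carried once per shard in the beacon block (for the winning attestation)
    shard_data_hash: Hash
    signature: BLSSignature
```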

An Attestation contains a 2D custody bits array, 1 bit per committee member (max 2048) per block processed by that attestation (max 256).

When an attestation is broadcast over the wire, it does not come with explicit AttestationShardData; instead, it is broadcast in a wrapper that contains a pointer to a single "head". This is because AttestationShardData can be inferred from the head, assuming that a given client has downloaded all of the shard blocks.

We define the message hash that a validator is expected to sign over as follows. Let bits[0]....bits[n-1] be the validator's custody bits of the n blocks. Let R be the attestation data root. The validator signs hash_to_point((R, 0, bits[0])) + ... + hash_to_point((R, n-1, bits[n-1])), where + is elliptic curve addition over G2.
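A pseudocode sketch of that signing rule, in the same style as the spec snippets above; `hash_to_point` and `g2_add` are stand-in helpers for the underlying hash-to-G2 and G2 addition operations, not actual spec functions:

```python
def compute_signing_point(R: Hash, bits: Sequence[bool]) -> G2Point:
    # Sum of hash_to_point((R, i, bits[i])) over all n custody bits;
    # the validator's BLS signature is then produced over this point.
    acc = hash_to_point((R, 0, bits[0]))
    for i in range(1, len(bits)):
        acc = g2_add(acc, hash_to_point((R, i, bits[i])))
    return acc
```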

Note that verifying this signature "on the wire" requires n hash to point operations and additions, though the hash-to-point operations need only be computed once for any particular attestation data; only G2 additions are per-attestation work.

Verifying an aggregate signature in a block can be done as follows. Let custody_subset(i, b) be the subset of validators whose i'th custody bit is b. We do e(hash_to_point((R, 0, 0)), group_pubkey(custody_subset(0, 0))) * e(hash_to_point((R, 0, 1)), group_pubkey(custody_subset(0, 1))) * ... * e(hash_to_point((R, n-1, 1)), group_pubkey(custody_subset(n-1, 1))) to compute the Fp12 element the signature is checked against. This requires 2n pairings.
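A sketch of the corresponding in-block check, again with stand-in helpers (`pairing`, `group_pubkey`, `hash_to_point`); the point is that the product runs over at most 2n non-empty custody subsets:

```python
def expected_signature_check_element(R: Hash, n: int, committee: Sequence[ValidatorIndex], custody_bits) -> Fp12:
    result = Fp12.one()
    for i in range(n):
        for b in (0, 1):
            # custody_subset(i, b): committee members whose i'th custody bit equals b
            subset = [m for j, m in enumerate(committee) if custody_bits[j][i] == b]
            if len(subset) > 0:
                result = result * pairing(hash_to_point((R, i, b)), group_pubkey(subset))
    return result  # compared against the pairing of the aggregate signature
```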

closed time in 4 months

vbuterin

issue comment ethereum/eth2.0-specs

Attestations and custody bits proposal for phase 1

The IndexedAttestation (used by slashings) would need to be redesigned to be a list of indices, each with an associated max-256-bit bitfield, which ups the max size of AttesterSlashings to 2 * (MAX_COMMITTEE_SIZE * BYTES_PER_VAL_INDEX + MAX_COMMITTEE_SIZE * MAX_SIZE_OF_CUSTODY_BITFIELD) = 2 * (2048 * 8 + 2048 * 64) = ~300kb from the previous ~66kb, so a moderately substantial increase here.

Yep, this is how I changed indexed attestations to work.

vbuterin

comment created time in 4 months

issue closed ethereum/eth2.0-specs

Slowing down shard activity if crosslinking fails

In the case where blocks from a shard are not crosslinked for a long time, there is a chain of shard blocks of growing length that must eventually be crosslinked. Currently, the intention is that these blocks can be included a maximum of 256 at a time.

However, there are challenges to this approach:

  • In an attestation, we have to choose between either (i) creating an attestation with many custody bits, adding complexity and overhead to handle this or (ii) using one custody bit over many blocks, requiring a 2-round game for fraud proofs.
  • In an attestation, a large amount of data representing shard blocks and intermediate state roots would need to be included
  • There are two cases in which crosslinking can be delayed:
    1. There are very few validators online so only a few shards are processed every slot, requiring many slots to cycle back to any given shard (eg. at 256k ETH, it takes the maximum 64 slots)
    2. There is an active attack due to committees being >1/3 inactive
  • In the first case, there are very few validators, so it is not reasonable to ask them to verify so much data, so a reduction in the shard chain capacity seems correct. In the second case, reduction in capacity during an attack also seems like a reasonable safety measure, especially in the case where an attack happens due to network DoS.
  • There is limited value in preserving high performance for intra-shard communication when cross-shard communication is stalled.

To remedy these issues, I suggest adding the following rule. Suppose the current slot is CURRENT_SLOT and the shard chain on a given shard was included up to LAST_INCLUDED_SLOT. We define a set of offsets, eg. offsets = [1, 2, 4, 8, 16, 32, 64, 128, 256], and require an attestation to include blocks for slot LAST_INCLUDED_SLOT + offset for any offset in offsets such that LAST_INCLUDED_SLOT + offset < CURRENT_SLOT. That is, if crosslinks stop happening, the slots on a shard chain that are allowed to be nonempty start to back-off exponentially.

Offsets can be chosen to be more "forgiving" at the beginning, e.g. one could choose Fibonacci numbers instead of powers of two: [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233], or even [1, 2, 3, 4, 5, 6, 7, 8, 12, 16, 32, 64, 128].
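As a concrete illustration, this is what the rule looks like in code; it mirrors the offset_slots computation in the spec drafts above, though the helper name here is made up:

```python
SHARD_BLOCK_OFFSETS = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]

def allowed_nonempty_slots(last_included_slot: Slot, current_slot: Slot) -> Sequence[Slot]:
    # Until the next successful crosslink, only these shard slots may carry
    # non-empty blocks; the gaps back off as the offsets grow.
    return [
        last_included_slot + offset
        for offset in SHARD_BLOCK_OFFSETS
        if last_included_slot + offset < current_slot
    ]
```

For example, with the last included slot at 100 and the current slot at 110, the allowed slots would be 101, 102, 103, 105, and 108.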

Why not shrink shard blocks instead of increasing inter-shard-block distances?

  • Simpler: does not require logic for fitting multiple shard blocks into a single custody bit or fraud proof object
  • Removing capability of including large transactions may well be more harmful than increasing block times, eg. consider large fraud proofs, STARKs, etc
  • We lose fast cross-shard tx times in these circumstances anyway

closed time in 4 months

vbuterin

push event ethereum/eth2.0-specs

Vitalik Buterin

commit sha 8a2cdbebbca7a87633da491120bc3ffc83089b31

Reformed attestations


push time in 4 months

push event ethereum/eth2.0-specs

Vitalik Buterin

commit sha f6e658bab4991780c05ee628c39003093cf8bb2f

Changes to make Danny happy


push time in 4 months

Pull request review comment ethereum/eth2.0-specs

[WIP] New shard proposal

+# Ethereum 2.0 Phase 1 -- Crosslinks and Shard Data++**Notice**: This document is a work-in-progress for researchers and implementers.++## Table of contents++<!-- TOC -->++- [Ethereum 2.0 Phase 1 -- Crosslinks and Shard Data](#ethereum-20-phase-1----crosslinks-and-shard-data)+    - [Table of contents](#table-of-contents)+    - [Introduction](#introduction)+    - [Configuration](#configuration)+        - [Misc](#misc)+    - [Containers](#containers)+        - [Aliases](#aliases)+        - [`AttestationData`](#attestationdata)+        - [`AttestationShardData`](#attestationsharddata)+        - [`ReducedAttestationData`](#reducedattestationdata)+        - [`Attestation`](#attestation)+        - [`ReducedAttestation`](#reducedattestation)+        - [`IndexedAttestation`](#indexedattestation)+        - [`CompactCommittee`](#compactcommittee)+        - [`AttestationCustodyBitWrapper`](#attestationcustodybitwrapper)+    - [Helpers](#helpers)+        - [`get_online_validators`](#get_online_validators)+        - [`pack_compact_validator`](#pack_compact_validator)+        - [`committee_to_compact_committee`](#committee_to_compact_committee)+        - [`get_light_client_committee`](#get_light_client_committee)+        - [`get_indexed_attestation`](#get_indexed_attestation)+        - [`is_valid_indexed_attestation`](#is_valid_indexed_attestation)+    - [Beacon Chain Changes](#beacon-chain-changes)+        - [New state variables](#new-state-variables)+        - [New block data structures](#new-block-data-structures)+        - [Attestation processing](#attestation-processing)+        - [Light client processing](#light-client-processing)+        - [Epoch transition](#epoch-transition)+        - [Fraud proofs](#fraud-proofs)+    - [Shard state transition function](#shard-state-transition-function)+    - [Honest committee member behavior](#honest-committee-member-behavior)++<!-- /TOC -->++## Introduction++This document describes the shard transition function (data layer only) and the shard fork choice rule as part of Phase 1 of Ethereum 2.0.++## Configuration++### Misc++| Name | Value | Unit | Duration |+| - | - | - | - | +| `MAX_SHARDS` | `2**10` (= 1024) |+| `ACTIVE_SHARDS` | `2**6` (= 64) |+| `ONLINE_PERIOD` | `2**3` (= 8) | epochs | ~51 min |+| `LIGHT_CLIENT_COMMITTEE_SIZE` | `2**7` (= 128) |+| `LIGHT_CLIENT_COMMITTEE_PERIOD` | `2**8` (= 256) | epochs | ~29 hours |+| `SHARD_BLOCK_CHUNK_SIZE` | `2**18` (= 262,144) | |+| `MAX_SHARD_BLOCK_CHUNKS` | `2**2` (= 4) | |+| `BLOCK_SIZE_TARGET` | `3 * 2**16` (= 196,608) | |+| `SHARD_BLOCK_OFFSETS` | `[1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]` | |+| `MAX_SHARD_BLOCKS_PER_ATTESTATION` | `len(SHARD_BLOCK_OFFSETS)` | |+| `EMPTY_CHUNK_ROOT` | `hash_tree_root(BytesN[SHARD_BLOCK_CHUNK_SIZE]())` | |+| `MAX_GASPRICE` | `2**14` (= 16,384) | Gwei | |+| `GASPRICE_ADJUSTMENT_COEFFICIENT` | `2**3` (= 8) | |++## Containers++### `ShardState`++```python+class ShardState(Container):+    slot: Slot+    gasprice: Gwei+    root: Hash+    latest_block_hash: Hash+```++### `AttestationData`++```python+class AttestationData(Container):+    slot: Slot+    index: CommitteeIndex+    # LMD GHOST vote+    beacon_block_root: Hash+    # FFG vote+    source: Checkpoint+    target: Checkpoint+    # Shard transition hash+    shard_transition_hash: Hash+```++### `ShardTransition`++```python+class ShardTransition(Container):+    # Starting from slot+    start_slot: Slot+    # Shard block lengths+    shard_block_lengths: List[uint8, MAX_SHARD_BLOCKS_PER_ATTESTATION]+    # Shard data roots+    
shard_data_roots: List[Hash, List[Hash, MAX_SHARD_BLOCK_CHUNKS], MAX_SHARD_BLOCKS_PER_ATTESTATION]+    # Intermediate state roots+    shard_state_roots: List[ShardState, MAX_SHARD_BLOCKS_PER_ATTESTATION]+```++### `Attestation`++```python+class Attestation(Container):+    aggregation_bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]+    data: AttestationData+    custody_bits: List[Bitlist[MAX_VALIDATORS_PER_COMMITTEE], MAX_SHARD_BLOCKS_PER_ATTESTATION]+    signature: BLSSignature+```++### `IndexedAttestation`++```python+class IndexedAttestation(Container):+    participants: List[ValidatorIndex, MAX_COMMITTEE_SIZE]+    data: AttestationData+    custody_bits: List[Bitlist[MAX_VALIDATORS_PER_COMMITTEE], MAX_SHARD_BLOCKS_PER_ATTESTATION]+    signature: BLSSignature+```++### `CompactCommittee`++```python+class CompactCommittee(Container):+    pubkeys: List[BLSPubkey, MAX_VALIDATORS_PER_COMMITTEE]+    compact_validators: List[uint64, MAX_VALIDATORS_PER_COMMITTEE]+```++### `AttestationCustodyBitWrapper`++```python+class AttestationCustodyBitWrapper(Container):+    attestation_root: Hash+    index: uint64+    bit: bool+```++## Helpers++### `get_online_validators`++```python+def get_online_indices(state: BeaconState) -> Set[ValidatorIndex]:+    active_validators = get_active_validator_indices(state, get_current_epoch(state))+    return set([i for i in active_validators if state.online_countdown[i] != 0])+```++### `pack_compact_validator`++```python+def pack_compact_validator(index: int, slashed: bool, balance_in_increments: int) -> int:+    """+    Creates a compact validator object representing index, slashed status, and compressed balance.+    Takes as input balance-in-increments (// EFFECTIVE_BALANCE_INCREMENT) to preserve symmetry with+    the unpacking function.+    """+    return (index << 16) + (slashed << 15) + balance_in_increments+```++### `committee_to_compact_committee`++```python+def committee_to_compact_committee(state: BeaconState, committee: Sequence[ValidatorIndex]) -> CompactCommittee:+    """+    Given a state and a list of validator indices, outputs the CompactCommittee representing them.+    """+    validators = [state.validators[i] for i in committee]+    compact_validators = [+        pack_compact_validator(i, v.slashed, v.effective_balance // EFFECTIVE_BALANCE_INCREMENT)+        for i, v in zip(committee, validators)+    ]+    pubkeys = [v.pubkey for v in validators]+    return CompactCommittee(pubkeys=pubkeys, compact_validators=compact_validators)+```++### `get_light_client_committee`++```python+def get_light_client_committee(beacon_state: BeaconState, epoch: Epoch) -> Sequence[ValidatorIndex]:+    assert epoch % LIGHT_CLIENT_COMMITTEE_PERIOD == 0+    active_validator_indices = get_active_validator_indices(beacon_state, epoch)+    seed = get_seed(beacon_state, epoch, DOMAIN_SHARD_LIGHT_CLIENT)+    return compute_committee(active_validator_indices, seed, 0, ACTIVE_SHARDS)[:TARGET_COMMITTEE_SIZE]+```++### `get_indexed_attestation`++```python+def get_indexed_attestation(beacon_state: BeaconState, attestation: Attestation) -> IndexedAttestation:+    attesting_indices = get_attesting_indices(state, attestation.data, attestation.aggregation_bits)+    return IndexedAttestation(attesting_indices, data, custody_bits, signature)+```++### `update_gasprice`++```python+def update_gasprice(prev_gasprice: Gwei, length: uint8) -> Gwei:+    if length > BLOCK_SIZE_TARGET:+        delta = prev_gasprice * (length - BLOCK_SIZE_TARGET) // BLOCK_SIZE_TARGET // GASPRICE_ADJUSTMENT_COEFFICIENT+        
return min(prev_gasprice + delta, MAX_GASPRICE)+    else:+        delta = prev_gasprice * (BLOCK_SIZE_TARGET - length) // BLOCK_SIZE_TARGET // GASPRICE_ADJUSTMENT_COEFFICIENT+        if delta > prev_gasprice - GASPRICE_ADJUSTMENT_COEFFICIENT:+            return GASPRICE_ADJUSTMENT_COEFFICIENT+        else:+            return prev_gasprice - delta+```++### `is_valid_indexed_attestation`++```python+def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: IndexedAttestation) -> bool:+    """+    Check if ``indexed_attestation`` has valid indices and signature.+    """++    # Verify indices are sorted+    if indexed_attestation.participants != sorted(indexed_attestation.participants):+        return False+    +    # Verify aggregate signature+    all_pubkeys = []+    all_message_hashes = []+    for participant, custody_bits in zip(participants, indexed_attestation.custody_bits):+        for i, bit in enumerate(custody_bits):+            all_pubkeys.append(state.validators[participant].pubkey)+            # Note: only 2N distinct message hashes+            all_message_hashes.append(AttestationCustodyBitWrapper(hash_tree_root(indexed_attestation.data), i, bit))+        +    return bls_verify_multiple(+        pubkeys=all_pubkeys,+        message_hashes=all_message_hashes,+        signature=indexed_attestation.signature,+        domain=get_domain(state, DOMAIN_BEACON_ATTESTER, indexed_attestation.data.target.epoch),+    )+```++## Beacon Chain Changes++### New state variables++```python+    shard_states: Vector[ShardState, MAX_SHARDS]+    online_countdown: Bytes[VALIDATOR_REGISTRY_LIMIT]+    current_light_committee: CompactCommittee+    next_light_committee: CompactCommittee+```++### New block data structures++```python+    shard_transitions: Vector[ShardTransition, MAX_SHARDS]+    light_client_signature_bitfield: Bitlist[LIGHT_CLIENT_COMMITTEE_SIZE]+    light_client_signature: BLSSignature+```++### Attestation processing++```python+def process_attestation(state: BeaconState, attestation: Attestation) -> None:+    data = attestation.data+    assert data.index < ACTIVE_SHARDS+    shard = (data.index + get_start_shard(state, data.slot)) % ACTIVE_SHARDS+    proposer_index=get_beacon_proposer_index(state)++    # Signature check+    committee = get_crosslink_committee(state, get_current_epoch(state), shard)+    for bits in attestation.custody_bits + [attestation.aggregation_bits]:+        assert len(bits) == len(committee)+    # Check signature+    assert is_valid_indexed_attestation(state, get_indexed_attestation(state, attestation))+    # Get attesting indices+    attesting_indices = get_attesting_indices(state, attestation.data, attestation.aggregation_bits)++    # Prepare pending attestation object+    pending_attestation = PendingAttestation(+        slot=data.slot,+        shard=shard,+        aggregation_bits=attestation.aggregation_bits,+        inclusion_delay=state.slot - data.slot,+        crosslink_success=False,+        proposer_index=proposer_index+    )+    +    # Type 1: on-time attestations+    if data.custody_bits != []:+        # Correct slot+        assert data.slot == state.slot+        # Slot the attestation starts counting from+        start_slot = state.shard_next_slots[shard]+        # Correct data root count+        offset_slots = [start_slot + x for x in SHARD_BLOCK_OFFSETS if start_slot + x < state.slot]+        assert len(attestation.custody_bits) == len(offset_slots)+        # Correct parent block root+        assert data.beacon_block_root == 
get_block_root_at_slot(state, state.slot - 1)+        # Apply+        online_indices = get_online_indices(state)+        if get_total_balance(state, online_indices.intersection(attesting_indices)) * 3 >= get_total_balance(state, online_indices) * 2:+            # Check correct formatting of shard transition data+            transition = block.shard_transitions[shard]+            assert data.shard_transition_hash == hash_tree_root(transition)+            assert len(transition.shard_data_roots) == len(transition.shard_states) == len(transition.shard_block_lengths) == len(offset_slots)+            assert transition.start_slot == start_slot++            # Verify correct calculation of gas prices and slots and chunk roots+            prev_gasprice = state.shard_states[shard].gasprice+            for i in range(len(offset_slots)):+                shard_state, block_length, chunks = transition.shard_states[i], transition.shard_block_lengths[i], transition.shard_data_roots[i]+                block_length = transition.shard+                assert shard_state.gasprice == update_gasprice(prev_gasprice, block_length)+                assert shard_state.slot == offset_slots[i]+                assert len(chunks) == block_length // SHARD_BLOCK_CHUNK_SIZE+                filled_roots = chunks + [EMPTY_CHUNK_ROOT] * (MAX_SHARD_BLOCK_CHUNKS - len(chunks))+                assert shard_state.latest_block_hash == hash_tree_root(filled_roots)+                prev_gasprice = shard_state.gasprice++            # Save updated state+            state.shard_states[shard] = transition.shard_states[-1]+            state.shard_states[shard].slot = state.slot - 1++            # Save success (for end-of-epoch rewarding)+            pending_attestation.crosslink_success = True++            # Apply proposer reward and cost+            estimated_attester_reward = sum([get_base_reward(state, attester) for attester in attesting_indices])+            increase_balance(state, proposer, estimated_attester_reward // PROPOSER_REWARD_COEFFICIENT)+            for state, length in zip(transition.shard_states, transition.shard_block_lengths):+                decrease_balance(state, proposer, state.gasprice * length)+            +    # Type 2: delayed attestations+    else:+        assert slot_to_epoch(data.slot) in (get_current_epoch(state), get_previous_epoch(state))+        assert data.shard_transition_hash == Hash()+        assert len(attestation.custody_bits) == 0++    for index in attesting_indices:+        online_countdown[index] = ONLINE_PERIOD+++    if data.target.epoch == get_current_epoch(state):+        assert data.source == state.current_justified_checkpoint+        state.current_epoch_attestations.append(pending_attestation)+    else:+        assert data.source == state.previous_justified_checkpoint+        state.previous_epoch_attestations.append(pending_attestation)+```++### Misc block post-processing++```python+def misc_block_post_process(state: BeaconState, block: BeaconBlock):+    # Verify that a `shard_transition` in a block is empty if an attestation was not processed for it+    for shard in range(MAX_SHARDS):+        if state.shard_states[shard].slot != state.slot - 1:+            assert block.shard_transition[shard] == ShardTransition()+```++### Light client processing++```python+def verify_light_client_signatures(state: BeaconState, block: BeaconBlock):+    period_start = get_current_epoch(state) - get_current_epoch(state) % LIGHT_CLIENT_COMMITTEE_PERIOD+    committee = get_light_client_committee(state, 
period_start - min(period_start, LIGHT_CLIENT_COMMITTEE_PERIOD))

We need to have logic somewhere to say "the light client committee used at slot N takes data from slot N - N % PERIOD - PERIOD" (with one special case where N < PERIOD to avoid underflow). I just fixed this and moved all that logic into get_light_client_committee itself.
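A minimal sketch of that rule, mirroring the min() trick in the quoted code (the helper name is illustrative, not the final spec function):

```python
def light_client_committee_source_epoch(epoch: Epoch) -> Epoch:
    # The committee serving during this period is sampled one full period
    # earlier, except near genesis where the current period start is used
    # to avoid underflow.
    period_start = epoch - epoch % LIGHT_CLIENT_COMMITTEE_PERIOD
    return period_start - min(period_start, LIGHT_CLIENT_COMMITTEE_PERIOD)
```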

vbuterin

comment created time in 4 months

Pull request review comment ethereum/eth2.0-specs

[WIP] New shard proposal

```python
            # Apply proposer reward and cost
            estimated_attester_reward = sum([get_base_reward(state, attester) for attester in attesting_indices])
            increase_balance(state, proposer, estimated_attester_reward // PROPOSER_REWARD_COEFFICIENT)
```

I suppose we could, though if we do that we would also need to thread through the per-byte block fee.

vbuterin

comment created time in 4 months

Pull request review comment ethereum/eth2.0-specs

[WIP] New shard proposal

```python
        # Apply
        online_indices = get_online_indices(state)
        if get_total_balance(state, online_indices.intersection(attesting_indices)) * 3 >= get_total_balance(state, online_indices) * 2:
```

Interesting! We should discuss this; definitely open to considering going back to the "loop through attestations first, then finish in a second round" option.

vbuterin

comment created time in 4 months

Pull request review comment ethereum/eth2.0-specs

[WIP] New shard proposal

```python
    shard_states: Vector[ShardState, MAX_SHARDS]
    online_countdown: Bytes[VALIDATOR_REGISTRY_LIMIT]
```

I was keeping it as a list of bytes because that way the hashing overhead is lower. Hashing over a uint64 for every validator is currently only done in the validator balances list and is a primary component of epoch transition overhead; adding yet more hashing of this type would add significant overhead.

If we think using bytes/uint8 is too impure, I have another suggestion: have a bitlist "online during previous 4-epoch period" and a bitlist "online during current 4-epoch period". Validators would be considered online if their bit is flipped on for either one. This would have even less hashing overhead and would use existing types.
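A rough sketch of that alternative, with hypothetical field names (this illustrates only the suggestion above, not anything in the current draft):

```python
class BeaconState(Container):
    # ... existing fields ...
    # Hypothetical replacement for online_countdown:
    previous_period_online: Bitlist[VALIDATOR_REGISTRY_LIMIT]
    current_period_online: Bitlist[VALIDATOR_REGISTRY_LIMIT]


def is_online(state: BeaconState, index: ValidatorIndex) -> bool:
    # Online if the validator's bit is set for either the previous or the
    # current 4-epoch period
    return state.previous_period_online[index] or state.current_period_online[index]
```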

vbuterin

comment created time in 4 months

Pull request review comment ethereum/eth2.0-specs

[WIP] New shard proposal

return min(prev_gasprice + delta, MAX_GASPRICE)+    else:+        delta = prev_gasprice * (BLOCK_SIZE_TARGET - length) // BLOCK_SIZE_TARGET // GASPRICE_ADJUSTMENT_COEFFICIENT+        if delta > prev_gasprice - GASPRICE_ADJUSTMENT_COEFFICIENT:+            return GASPRICE_ADJUSTMENT_COEFFICIENT+        else:+            return prev_gasprice - delta+```++### `is_valid_indexed_attestation`++```python+def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: IndexedAttestation) -> bool:+    """+    Check if ``indexed_attestation`` has valid indices and signature.+    """++    # Verify indices are sorted+    if indexed_attestation.participants != sorted(indexed_attestation.participants):+        return False+    +    # Verify aggregate signature+    all_pubkeys = []+    all_message_hashes = []+    for participant, custody_bits in zip(participants, indexed_attestation.custody_bits):

The problem is that in the normal case, the number of blocks in the attestation is only ~1-4, so representing the custody bits as a list of 1-4 separate bitlists would add a huge amount of hashing overhead. But I fixed the loops to do things in the right order; good catch.
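
As a rough sketch of the loop order being described (not the PR text itself), the aggregate-signature check can walk the per-block custody bitlists in the outer loop and the participants in the inner loop, so each (block, validator) custody bit contributes exactly one message. The helper name `collect_custody_signature_inputs` is hypothetical; the containers come from the quoted diff, and `hash_tree_root` plus `state.validators` are assumed from the Phase 0 spec.

```python
# Hypothetical sketch, assuming the containers from the quoted diff
# (IndexedAttestation, AttestationCustodyBitWrapper) and Phase 0 helpers.
def collect_custody_signature_inputs(state: BeaconState, indexed_attestation: IndexedAttestation):
    all_pubkeys = []
    all_message_hashes = []
    data_root = hash_tree_root(indexed_attestation.data)
    # Outer loop: one custody bitlist per shard block covered by the attestation (~1-4 in practice).
    for block_index, custody_bits in enumerate(indexed_attestation.custody_bits):
        # Inner loop: one custody bit per participant, in the same order as `participants`.
        for participant, bit in zip(indexed_attestation.participants, custody_bits):
            all_pubkeys.append(state.validators[participant].pubkey)
            all_message_hashes.append(hash_tree_root(AttestationCustodyBitWrapper(
                attestation_root=data_root,
                index=block_index,
                bit=bit,
            )))
    return all_pubkeys, all_message_hashes
```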

vbuterin

comment created time in 4 months

Pull request review comment ethereum/eth2.0-specs

[WIP] New shard proposal

### `is_valid_indexed_attestation`

```python
def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: IndexedAttestation) -> bool:
    """
    Check if ``indexed_attestation`` has valid indices and signature.
    """
```

The length verification is already done by the SSZ datatype.
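
For readers unfamiliar with SSZ typing, here is a toy illustration (not the eth2 SSZ implementation) of why an explicit length check is redundant: a bounded list type enforces its maximum length when the value is constructed or decoded, so any `IndexedAttestation` that reaches `is_valid_indexed_attestation` already has `participants` within bounds. The `BoundedList` and `Participants` classes and the limit value below are made up for the example.

```python
# Toy model of an SSZ bounded list: over-long values are rejected before any
# spec-level validation function ever sees them.
class BoundedList(list):
    LIMIT = 0  # set by subclassing

    def __init__(self, items=()):
        items = list(items)
        if len(items) > self.LIMIT:
            raise ValueError(f"list exceeds SSZ limit {self.LIMIT}")
        super().__init__(items)

class Participants(BoundedList):
    LIMIT = 2048  # stand-in for MAX_COMMITTEE_SIZE

ok = Participants(range(10))    # accepted: 10 <= 2048
# Participants(range(4096))     # raises ValueError at construction/decoding time
```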

vbuterin

comment created time in 4 months

Pull request review comment ethereum/eth2.0-specs

[WIP] New shard proposal

### `get_light_client_committee`

```python
def get_light_client_committee(beacon_state: BeaconState, epoch: Epoch) -> Sequence[ValidatorIndex]:
    assert epoch % LIGHT_CLIENT_COMMITTEE_PERIOD == 0
    active_validator_indices = get_active_validator_indices(beacon_state, epoch)
    seed = get_seed(beacon_state, epoch, DOMAIN_SHARD_LIGHT_CLIENT)
```

That's true, though we have expressed a desire for more domain separation before. I'd be ok either way.
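
To make the trade-off concrete, here is a small self-contained sketch (not spec code) of what a dedicated domain buys: mixing a distinct domain constant into the seed makes the light client committee shuffle independent of shuffles derived for other duties at the same epoch and randao mix. The domain values and `toy_get_seed` below are invented for the illustration.

```python
from hashlib import sha256

# Made-up domain constants for the example only.
DOMAIN_BEACON_ATTESTER = (0x01).to_bytes(4, "little")
DOMAIN_SHARD_LIGHT_CLIENT = (0x82).to_bytes(4, "little")

def toy_get_seed(randao_mix: bytes, epoch: int, domain: bytes) -> bytes:
    # Domain is mixed into the hash, so each duty gets an independent seed.
    return sha256(domain + epoch.to_bytes(8, "little") + randao_mix).digest()

mix = b"\x42" * 32
assert toy_get_seed(mix, 7, DOMAIN_BEACON_ATTESTER) != toy_get_seed(mix, 7, DOMAIN_SHARD_LIGHT_CLIENT)
```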

vbuterin

comment created time in 4 months

push event ethereum/eth2.0-specs

Vitalik Buterin

commit sha 54f963c50326b77c4ef302ca15b662c9c0ab2de1

Small edits

view details

push time in 4 months
