
drand/drand 376

🎲 A Distributed Randomness Beacon Daemon - Go implementation

ipfs/go-graphsync 57

Initial Implementation Of GraphSync Wire Protocol

filecoin-project/go-fil-markets 46

Shared Implementation of Storage and Retrieval Markets for Filecoin Node Implementations

filecoin-project/chain-validation 10

(DEPRECATED) See https://github.com/filecoin-project/test-vectors instead. (was: chain validation tools)

anorth/go-dar 4

Indexed content-addressed archive file format

ZenGround0/ent 4

Herd filecoin state trees for upgrade migrations

anorth/expl 3

An expression language for rapid, explorable, explainable programming [WIP]

ngalin/Illumimateys 2

Interview Prep.

anorth/gflows 1

Git/GitHub workflow scripts for rapid and parallel team development

pull request comment filecoin-project/specs-actors

Remove cc upgrade

ZenGround0

comment created time in 2 days

issue comment filecoin-project/FIPs

Add return value to WithdrawBalance

Thanks, sounds good to me

On Sat, 7 Aug 2021 at 10:40 am, Kaitlin Beegle ***@***.***> wrote:

Hi @anorth https://github.com/anorth! I'm currently working to audit all open FIP discussion items so that we can triage existing issues and flag priorities that may have been missed.

It looks like this issue was lost in the bundle of FIPs that were implemented prior to the Actors v5 launch. Is this the case? If so, we should reintroduce the idea and write a FIP draft.


steven004

comment created time in 2 months

issue comment filecoin-project/FIPs

Explicit FIL+ subsidy

[For a follow-up proposal, I think the final step in decoupling storage power from deal market is to move the sector->deal mapping from sector metadata into some equivalent but reversed deal->sector mapping in the market actor]

anorth

comment created time in 2 months

issue opened filecoin-project/FIPs

Explicit FIL+ subsidy

This proposal is an idea of mine to reduce coupling between the storage power mechanism and deal market. Coupling between these two limits design freedom around problems like supporting exponential growth in storage capacity or making deals much more scalable. This proposal is written independently of proposals like Neutron (#119). An analogous idea would apply to supersectors in order to remove linear-in-deals costs from storage maintenance. This proposal could be made either prior to Neutron (simplifying it) or subsequently.


Background

Filecoin network storage growth will eventually be limited by the chain computation costs associated with maintaining storage if a linear amount of state is accessed or retained per sector (where sectors are a fixed size). The Neutron proposal (#119) resolves this for committed-capacity sectors, but sectors with deals still have per-sector (and per-deal) metadata. Per-sector on-chain data includes the sector->deal mapping and per-sector heterogeneous weight, pledge and penalty values. These are all required because verified deals alter a sector’s power as a means of adjusting reward distribution.

Maintaining the facility for heterogeneous sectors constrains freedom for more scalable designs, even though most sectors don’t have deals. That premise is not something to rely on, though: in the long term we aim for many more deals and a much greater proportion of committed storage to be in active use.

Goals

This proposal aims to reduce deal market limitations on the scale and efficiency of onboarding and maintaining exponentially larger amounts of capacity.

  • Remove per-sector on-chain information from state
  • Normalise sectors as far as possible, enabling summarised/aggregate accounting with sub-linear state
  • Decouple network security from reward distribution policy

These goals are to be sought within existing product and crypto-economic constraints around sound storage and economic security.

Design ideas

The current storage power mechanism assigns a different power value to equal-size sectors based on the size and duration of Filecoin Plus (FIL+) verified deals. This is a means of incentivising useful storage, delegating the definition of “useful” to an off-chain notary. The incentive for storing verified deals is the increased block reward that may be earned from the increased power of those sectors, despite constant physical storage and infrastructure costs. In essence, the network subsidises useful storage by taxing the other power.

Storage power has two different roles, currently coupled. One is to secure the network through raising the economic cost of some party maintaining a significant fraction of the total power, and the other is to determine the distribution of rewards. The block reward is split between rewarding security and subsidising useful storage. This coupling theoretically means the verified deal power boost reduces the hardware cost to attack the network by a factor of 10, if colluding notaries would bless all of a malicious miner’s storage (this is an impractical attack today, though).

This subsidy could be made much more direct, reducing complexity and coupling between the storage market and individual sectors, and clearly separating security from reward distribution policy.

Uniform sector power

Every sector has uniform power corresponding to its raw committed storage size, regardless of the presence of deals. This removes the concepts of sector quality and quality-adjusted power, and makes all bytes equal in the eyes of the power table. This would remove the DealWeight and VerifiedDealWeight fields from SectorOnChainInfo. Network power and committed storage are now the same concept and need not be tracked separately by the power actor.

Uniform power also means that all similarly-sized sectors committed at the same epoch would require the same initial pledge. Similarly the expected day reward and storage pledge (parameters to a possible future termination fee calculation) depend only on activation epoch. The complicated values maintained to track penalties for replaced CC sectors become unnecessary. Historical pledge/reward values may be maintained just once for the network by the power and reward actors. We currently store each of these numbers in chain state some ~500 times per epoch (@ 50PiB/day growth).
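(Checking that figure, assuming 32 GiB sectors: 50 PiB/day = 52,428,800 GiB/day ÷ 32 GiB ≈ 1,638,400 sectors/day, and at 2,880 epochs/day that is ≈ 570 sector commitments per epoch.)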

With uniform sector power, the power of groups of sectors may be trivially calculated by multiplication. Window PoSt deadline and partition metadata no longer need to maintain values for live, unproven, faulty and recovering power, but the sector number bitfields remain. Processing or recovering from faults does not require loading each sector’s specific values. The complexity of code and scope for error in these various derived data is much reduced.
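For illustration only (all names here are hypothetical, not actor code), power for any group of uniform sectors reduces to a multiplication:

```go
package main

import "fmt"

const sectorSize uint64 = 32 << 30 // assuming 32 GiB sectors

// partitionPower derives a partition's live and active (non-faulty) raw
// byte power from sector counts alone: no per-sector records are loaded.
func partitionPower(liveSectors, faultySectors uint64) (live, active uint64) {
	live = liveSectors * sectorSize
	active = (liveSectors - faultySectors) * sectorSize
	return
}

func main() {
	live, active := partitionPower(2349, 7) // 2349 live sectors, 7 faulty
	fmt.Printf("live: %d bytes, active: %d bytes\n", live, active)
}
```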

This ability to aggregate by multiplying becomes instrumental to Neutron-like ideas around hierarchical storage accounting with deals. Without this, supporting partial faults requires on-chain metadata about the individual sectors (with deals) that comprise a supersector, restoring a linear-in-power network cost.

Note that this proposal does leave the per-sector deal IDs on chain. After this proposal and Neutron, this would be the only piece of per-sector data retained.

A miner’s chance of winning the election at each epoch is proportional to their share of the raw byte power committed to the network and non-faulty. Winning the consensus reward remains a lottery.

Market actor tracks deal space-time

The market actor maintains:

  • a current total of active verified deal space (optionally unverified, too);
  • a short list of “reward periods”, each aggregating a period of, say, 24 hours, and comprising:
    • a table of verified deal space-time totals provided by each miner during the period;
    • a record of the total deal subsidy earned during the period (see below)

As payments are periodically processed for each deal (currently every 100 epochs), the market actor adds the deal’s size multiplied by the update period epochs to the provider’s space-time tally for the current reward period.

After a reward period completes, the ratio between verified deal space-time of each provider gives their share of the deal subsidy to be claimable. A miner can claim a corresponding share of the total deal subsidy earned at any point up until the reward period expires (e.g. 60 days). Upon claiming a deal subsidy, the reward funds are inserted into the miner’s vesting schedule, alongside block rewards.
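A minimal sketch of this accounting (all names hypothetical; units are bytes and epochs): accrue space-time at each deal payment step, then pay out pro-rata on claim.

```go
package main

import (
	"fmt"
	"math/big"
)

// rewardPeriod is a hypothetical aggregate covering one ~24-hour period.
type rewardPeriod struct {
	spaceTimeByMiner map[string]*big.Int // verified deal bytes*epochs per provider
	totalSpaceTime   *big.Int
	totalSubsidy     *big.Int // subsidy earned by the period (see next section)
}

// accrue records dealSize bytes held for periodEpochs epochs by miner,
// as would happen at each deal payment processing step.
func (p *rewardPeriod) accrue(miner string, dealSize, periodEpochs int64) {
	st := new(big.Int).Mul(big.NewInt(dealSize), big.NewInt(periodEpochs))
	if p.spaceTimeByMiner[miner] == nil {
		p.spaceTimeByMiner[miner] = new(big.Int)
	}
	p.spaceTimeByMiner[miner].Add(p.spaceTimeByMiner[miner], st)
	p.totalSpaceTime.Add(p.totalSpaceTime, st)
}

// claim returns miner's pro-rata share of the period's subsidy.
func (p *rewardPeriod) claim(miner string) *big.Int {
	st := p.spaceTimeByMiner[miner]
	if st == nil || p.totalSpaceTime.Sign() == 0 {
		return new(big.Int)
	}
	share := new(big.Int).Mul(p.totalSubsidy, st)
	return share.Div(share, p.totalSpaceTime)
}

func main() {
	p := &rewardPeriod{
		spaceTimeByMiner: map[string]*big.Int{},
		totalSpaceTime:   new(big.Int),
		totalSubsidy:     big.NewInt(1_000_000),
	}
	p.accrue("f01234", 32<<30, 100) // one 32 GiB verified deal, one 100-epoch update
	p.accrue("f05678", 96<<30, 100)
	fmt.Println(p.claim("f01234")) // 250000: a quarter of the period's space-time
}
```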

FIL+ subsidy paid to market actor

At every epoch, the reward actor queries the power actor for the total network power (= storage) and the market actor for the total verified deal space. The total block reward to be paid is then split into the power reward and the verified deal subsidy according to the ratio of storage to 9*deal space. The block reward is paid as usual, and the deal subsidy is sent to the market actor.
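A sketch of that split, reading "ratio of storage to 9*deal space" literally (names hypothetical; the 9 presumably reflecting the extra weight of the current 10x verified-deal multiplier):

```go
package main

import (
	"fmt"
	"math/big"
)

// splitReward divides an epoch's total reward between the power reward and
// the verified deal subsidy in the ratio rawPower : 9*verifiedDealSpace.
func splitReward(total, rawPower, verifiedDealSpace *big.Int) (powerReward, dealSubsidy *big.Int) {
	weighted := new(big.Int).Mul(verifiedDealSpace, big.NewInt(9))
	denom := new(big.Int).Add(rawPower, weighted)
	if denom.Sign() == 0 {
		return total, new(big.Int) // no storage, no deals: nothing to subsidise
	}
	dealSubsidy = new(big.Int).Div(new(big.Int).Mul(total, weighted), denom)
	powerReward = new(big.Int).Sub(total, dealSubsidy)
	return
}

func main() {
	// E.g. 10240 PiB of raw power, 100 PiB of verified deal space:
	// the weighted deal share is 900/11140 of a 1000-token reward.
	power, subsidy := splitReward(big.NewInt(1000), big.NewInt(10240), big.NewInt(100))
	fmt.Println(power, subsidy) // 920 80
}
```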

Discussion

The primary motivation for this proposal is to remove per-sector account-keeping metadata in order to unlock exponential scaling of storage. It changes reward behaviour in a couple of ways that we must verify as being beneficial.

  • Earning the verified deal subsidy doesn’t depend on winning blocks. This is a deviation from the current protocol that requires a miner to win a block (with a Winning PoSt) in order to claim rewards. This is more reliable for smaller parties that might win only occasional blocks, and might suffer from less robust blockchain node infrastructure and connectivity.
  • The market actor does not track temporary faults in sectors: a provider is eligible for the full client payment if the corresponding sector faults, so long as it is recovered soon thereafter. The power table does track temporary faults, suspending rewards until a sector is recovered (necessary for economic arguments about security). This change moves the deal subsidy from being subject to suspension during faults to the same more tolerant treatment of deals. Verified deals no longer subvert the economic security arguments, but it’s not obvious that this makes the more tolerant treatment acceptable. If not, we might need to communicate faults to the market actor, making them a bit more expensive and exposing the market actor to a concept of sectors it’s currently mostly abstracted from.
  • We probably need a separate pledge for the deal subsidy, so that collateral remains proportional to future earnings. This pledge would be about (verified) deals, rather than sectors, because this proposal separates those reward streams. This pledge would likely be owned by the market actor and forfeit when a deal defaults.
  • A potentially tricky situation to analyse is what happens when the power reward is significantly less than the deal subsidy. Does it create any unwanted incentives for the block producing miner to deviate from the protocol (assuming they don’t have many deals)?

created time in 2 months

pull request comment filecoin-project/go-state-types

Properly guard against 'hollow' objects

Ok, so one position would be that Lotus and other APIs shouldn't be so tightly coupled to the actor state representation. In this instance, due to the very strict compatibility requirements, Lotus UX or implementation are not in a position to push changes on this library without (themselves) incurring large overhead due to versioning. You sorta have to take this as given, and decide if you want to re-use it as-is or not. If you decide not to, then you need to adapt to/from this bigint to interact with chain/state libraries. Since it's roughly the same work to either (a) adapt to a v2 of this library, or (b) adapt to a local bigint library independent of actors, I'd say (b) is better since Lotus would gain control and reduce coupling. I can of course see the unpalatable nature of having two very similar libraries for very similar needs; the justification would be the uniquely strict blockchain compatibility needs.

But let's zoom out a bit to a discussion of how to solve this problem, rather than how to make this solution work. As I understand it, another possible resolution is to change all callers of IsZero to use IsNilOrZero. I don't see anything standing in the way of this today. I'd be willing to mark IsZero deprecated to warn against future naive use, but we can't remove it. One might object to this as being imperfectly final as a fix, but I think it's quite pragmatic in light of, again, the uniquely strict blockchain compatibility needs.

Revising the problem statement further:

These are often instantiated as part of a larger struct unmarshall

(and from elsewhere)

There isn't much I can ... != nil as the objects are there. They have just been instantiated incomplete by the marshaller

This sounds like the problem - your unmarshaller needs fixing! The actors and chain manipulation code more or less have no problem here because the CBOR de/serialization layer ensures these values are always instantiated non-nil. Your context is implicitly deserialising JSON, as I understand it. You want a policy of interpreting missing bigint values as zero. Can you fix whatever deserialization system is in use to enforce that, instead of changing the semantics of the library after nil initialisation?
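For illustration, a minimal sketch of enforcing that policy at the decode boundary (`params` and `decodeParams` are hypothetical; this assumes go-state-types' big.Int embeds a *big.Int and exposes big.Zero()):

```go
package main

import (
	"encoding/json"
	"fmt"

	big "github.com/filecoin-project/go-state-types/big"
)

// params is a hypothetical API payload that embeds a chain-state bigint.
type params struct {
	Amount big.Int
}

// decodeParams applies the "missing means zero" policy at the decode
// boundary, leaving the library's own semantics untouched.
func decodeParams(b []byte) (params, error) {
	var p params
	if err := json.Unmarshal(b, &p); err != nil {
		return p, err
	}
	if p.Amount.Int == nil { // field absent, so UnmarshalJSON never ran
		p.Amount = big.Zero()
	}
	return p, nil
}

func main() {
	p, _ := decodeParams([]byte("{}"))
	fmt.Println(p.Amount) // 0, not a wrapped nil
}
```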

ribasushi

comment created time in 2 months

pull request comment filecoin-project/specs-actors

Create CODEOWNERS

Nice improvement to test coverage! 😕

BigLep

comment created time in 2 months


pull request comment filecoin-project/go-state-types

Properly guard against 'hollow' objects

Actually I have no solid opinion on what specs-actors should do here.

I understand where you are coming from, but you can't change this library without having an opinion about actors. This library is logically part of actors, it's the definition of chain state. It was broken out specifically to reduce client overheads of versioning infrequently-changing nearly-primitive types like bigints.

Versioning this library will be quite painful. Lotus and other clients of actors will need to adapt bigints wherever they come from chain state. @Stebalien would have a better idea than me of the overhead likely in doing so (it might not be that bad). But if you were going to do that, you could abstract the bigints anyway, without touching this library, into a local version with a more hand-holdy attitude toward uninitialised values.

surface APIs are failing left and right due to missing payloads

I don't have a very good idea of the problem details you're describing. If code is interacting with chain state, I think it needs to use this library, and be careful. Code that is not interacting with chain state maybe shouldn't? I'm not sure if this is reasonable.

ribasushi

comment created time in 2 months

pull request comment ipfs/go-bs-sqlite3

fix: require cgo for Open

Thanks!

Stebalien

comment created time in 2 months

pull request comment ipfs/go-bs-sqlite3

sync: update CI config files

The error doesn't make much sense to me.

conn.Exec undefined (type *sqlite3.SQLiteConn has no field or method Exec)

There certainly is an Exec method in the code.

web3-bot

comment created time in 2 months

PR opened ipfs/go-mfs

Fix lint errors
+14 -94

0 comments

3 changed files

pr created time in 2 months

create branch ipfs/go-mfs

branch : anorth/lint

created branch time in 2 months

push event ipfs/go-ipld-cbor

anorth

commit sha 796083620f285a909640b95938bacf5db8dd15d0

Fix lint errors

view details

push time in 2 months

PR opened ipfs/go-ipld-cbor

Fix lint errors

Clean on `go test ./... && go test -race ./... && go vet ./... && staticcheck ./...`

+15 -15

0 comments

4 changed files

pr created time in 2 months

create branch ipfs/go-ipld-cbor

branch : anorth/lint

created branch time in 2 months

PR opened ipfs/go-bs-sqlite3

Fix lint by removing unused lock

Clean on `go test ./... && go test -race ./... && go vet ./... && staticcheck ./...`

+0 -4

0 comments

2 changed files

pr created time in 2 months

create branch ipfs/go-bs-sqlite3

branch : anorth/lint

created branch time in 2 months

PR opened ipfs/go-dagwriter

Fix lint errors
+14 -20

0 comments

4 changed files

pr created time in 2 months

create branch ipfs/go-dagwriter

branch : anorth/lint

created branch time in 2 months


Pull request review comment filecoin-project/go-state-types

Properly guard against 'hollow' objects

```diff
 func TestOperations(t *testing.T) {
 	assert.True(t, ta.Nil())
 }
+func TestInt_NilUnmarshal(t *testing.T) {
+
+	parsedStruct := func() (s struct{ Big Int }) {
+		require.NoError(t, json.Unmarshal([]byte("{}"), &s))
+		return s
+	}
```

This test is kinda conflating two different things. It would be clearer as two different tests:

  • What do you get when you unmarshal nothing into an Int. Show exactly that you get a wrapped nil, separate from how that behaves.
  • How do various operations deal with nil parameters. Just use an uninitialised Int without jumping through the hoops.
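A sketch of that split (test names hypothetical, assuming the package's existing testify imports):

```go
package big

import (
	"encoding/json"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// What unmarshalling nothing into an Int yields: exactly a wrapped nil.
func TestInt_UnmarshalEmptyObject(t *testing.T) {
	var s struct{ Big Int }
	require.NoError(t, json.Unmarshal([]byte("{}"), &s))
	assert.Nil(t, s.Big.Int)
}

// How operations treat a nil operand: just use an uninitialised Int.
func TestInt_NilOperand(t *testing.T) {
	var n Int
	assert.True(t, n.Nil())
}
```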
ribasushi

comment created time in 2 months


issue comment filecoin-project/FIPs

Support non-deal data in sectors (off-chain deals part 1)

We've had some time to consider this more now, with related proposals and discussion from miners and other community members. I am going to present a counterargument: why we should not attempt this.

Supporting non-deal data in committed-capacity sectors sidesteps the Filecoin deal market. Whether or not the goal is stated up front as enabling an off-chain deal market, that is what it will do. Any miner who commits non-deal data into a sector is doing an off-chain deal of some sort, even if with themselves. It's not committed-capacity any more, it's a storage market that is outside the Filecoin network's cryptoeconomic island and would probably be settled in fiat/stablecoin. The long-term health of the Filecoin economy depends on the innovation and growth happening within that economy, data about economic activity being transparent to the participants, and trade being settled in the native token.

It is very likely that any off-chain or alternative deal market could be more cost-efficient than the "official" one, at least in the short term. There's no argument right now that deal-making is expensive and that it's important for the Filecoin network that we work on developing more efficient marketplaces. In the short term we can mitigate costs with off-chain aggregation of deals, and the Filecoin Plus program provides a significant subsidy to early providers as we bootstrap this economy. In the mid and long term we need to develop much more scalable representation and upkeep of markets. Much of the work could indeed move off-chain, with techniques like state channels, rollups, ZK-proofs etc, but be tied back to a settlement layer in FIL.

As a supporting point, from a more technical point of view, the enforced zero data in CC sectors presents a valuable constraint that future development work can depend on. For example:

  • Early ideas for upgrading CC sectors without resealing (#114) depend on the upgraded sector data being known zero (I think because otherwise we can't guarantee physical uniqueness).
  • The Neutron proposal (#119) for exponentially scaling network capacity depends on removing per-sector heterogeneity so that sectors can be aggregated into large groups. Different data in different sectors reintroduces per-sector linearities that eventually overwhelm chain capacity.

CC within Filecoin needs to remain that: capacity that is intended to be replaced/upgraded into useful storage deals. Sidestepping the on-chain storage market in the short term seems attractive on the surface, but in the long term undermines Filecoin development and the economy.

magik6k

comment created time in 2 months

issue comment filecoin-project/FIPs

FIP Proposal: Client can make an off-chain deal and miner can use CC sectors to storage data without resealing

There are two different ideas in this proposal.

  • enabling off-chain deals
  • upgrading CC sectors without resealing

FIP-0016 lists #57 as the venue for discussion about supporting off-chain deals. I think it best that this discussion be focussed around CC upgrades without resealing.

andrewzhao87

comment created time in 2 months

Pull request review comment filecoin-project/FIPs

add impl description for fip16

````diff
 This proposal introduces the possibilities for further improvement (which could

 ## Implementation

-WIP
+As the design descripted above, we need a new package version of v6 and network migration to v14.
+
+As we determined to limit same data type in one sector, either pieces with deals or pieces without deals, but cannot combines all together, I adjusted the method of unsealedCID to more simplicity.
+
+1. add PieceInfos field to SectorPrecommitInfo and SectorOnChainInfo struct to store piece data cid and piece size for forward verify.
+```
+type SectorPreCommitInfo struct {
+	SealProof       abi.RegisteredSealProof
+	SectorNumber    abi.SectorNumber
+	SealedCID       cid.Cid `checked:"true"` // CommR
+	SealRandEpoch   abi.ChainEpoch
+	DealIDs         []abi.DealID
+	Expiration      abi.ChainEpoch
+	PieceInfos      []abi.PieceInfo
+	...
+}
+
+type SectorOnChainInfo struct {
+	SectorNumber          abi.SectorNumber
+	SealProof             abi.RegisteredSealProof // The seal proof type implies the PoSt proof/s
+	SealedCID             cid.Cid                 // CommR
+	DealIDs               []abi.DealID
+	Activation            abi.ChainEpoch  // Epoch during which the sector proof was accepted
+	Expiration            abi.ChainEpoch  // Epoch during which the sector expires
+	DealWeight            abi.DealWeight  // Integral of active deals over sector lifetime
+	VerifiedDealWeight    abi.DealWeight  // Integral of active verified deals over sector lifetime
+	PieceInfos            []abi.PieceInfo //PieceInfos
+	...
+}
+```
+
+Rightnow, the `PieceInfos  []abi.PieceInfo` just present meaning of this field, for real environmnet it will be a cid.Cid to save onchain space.
+```
+PieceInfos     cid.Cid
+```
+
+2. change getVerifyInfo for diffrent way to get unsealedCIDs of cc sectors with piece data inside, if in single sector proven verify
+```
+builtin.RequireState(rt, len(params.DealIDs) > 0 && len(params.PieceInfos) > 0, "deals and raw pieces cannot exist at the same time in one sector")
+
+commDs := requestUnsealedSectorCIDs(rt, &market.SectorDataSpec{
+	SectorType: params.RegisteredSealProof,
+	DealIDs:    params.DealIDs,
+	Pieces:     params.PieceInfos,
+})
````

Calling the market actor for something that is explicitly trying to sidestep the on-chain market seems wrong. We should instead either refactor that method to return the PieceInfos, or otherwise invoke rt.ComputeUnsealedSectorCID here by duplicating the relevant logic. This comment applies to all the code in (3) too.
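A minimal sketch of the second option (signature assumed to mirror the specs-actors runtime syscall; error handling abbreviated):

```go
// Hypothetical: compute CommD directly from the declared pieces,
// without consulting the market actor.
commD, err := rt.ComputeUnsealedSectorCID(params.RegisteredSealProof, params.PieceInfos)
builtin.RequireNoErr(rt, err, exitcode.ErrIllegalArgument, "failed to compute unsealed sector CID")
```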

lyswifter

comment created time in 2 months

Pull request review comment filecoin-project/FIPs

add impl description for fip16

````diff
 This proposal introduces the possibilities for further improvement (which could

 ## Implementation

-WIP
+As the design descripted above, we need a new package version of v6 and network migration to v14.
+
+As we determined to limit same data type in one sector, either pieces with deals or pieces without deals, but cannot combines all together, I adjusted the method of unsealedCID to more simplicity.
+
+1. add PieceInfos field to SectorPrecommitInfo and SectorOnChainInfo struct to store piece data cid and piece size for forward verify.
+```
+type SectorPreCommitInfo struct {
+	SealProof       abi.RegisteredSealProof
+	SectorNumber    abi.SectorNumber
+	SealedCID       cid.Cid `checked:"true"` // CommR
+	SealRandEpoch   abi.ChainEpoch
+	DealIDs         []abi.DealID
+	Expiration      abi.ChainEpoch
+	PieceInfos      []abi.PieceInfo
+	...
+}
+
+type SectorOnChainInfo struct {
+	SectorNumber          abi.SectorNumber
+	SealProof             abi.RegisteredSealProof // The seal proof type implies the PoSt proof/s
+	SealedCID             cid.Cid                 // CommR
+	DealIDs               []abi.DealID
+	Activation            abi.ChainEpoch  // Epoch during which the sector proof was accepted
+	Expiration            abi.ChainEpoch  // Epoch during which the sector expires
+	DealWeight            abi.DealWeight  // Integral of active deals over sector lifetime
+	VerifiedDealWeight    abi.DealWeight  // Integral of active verified deals over sector lifetime
+	PieceInfos            []abi.PieceInfo //PieceInfos
+	...
+}
+```
+
+Rightnow, the `PieceInfos  []abi.PieceInfo` just present meaning of this field, for real environmnet it will be a cid.Cid to save onchain space.
+```
+PieceInfos     cid.Cid
+```
````

The size is a necessary part of PieceInfo for the CommD calculations.

lyswifter

comment created time in 2 months


issue opened filecoin-project/FIPs

Scalable storage onboarding and maintenance (project Neutron)

Some people from the Filecoin team have been working on the next iteration of scalable storage growth and capacity for the Filecoin network. The recent Hyperdrive network upgrade unlocked a big multiple of capacity, but we expect mining demand to rise over time to meet this and again be limited by blockchain throughput. In the next iteration of improvements we aim to solve this problem for the long term, enabling exponential network growth. This effort is known as project Neutron (after the density of neutron stars).

We're still fleshing out many details ahead of a full FIP, but I'm filing this issue to show where we're headed and as a reference for other efforts. We'll publish more extensive design documents once we're more confident in the approach.

@nicola @Kubuxu @nikkolasg


Background

The Filecoin network’s capacity to onboard new storage and to maintain proofs of committed storage are limited by blockchain transaction processing throughput. The recent Hyperdrive network upgrade raised onboarding capacity to about 500-1000 PiB/day, but we expect this capacity to become saturated.

As onboarding rates increase, the fixed amount of network throughput consumed by maintaining the proofs of already-committed storage will increase, eventually toward a significant cost for the network.

Problem detail

Validation of the Filecoin blockchain is subject to a fixed amount of computational work per epoch (including state access), enforced as the block gas limit. Many parts of the application logic for onboarding and maintaining storage incur a constant computational and/or state cost per sector. This results in blockchain validation costs that are linear in the rate of growth of storage, and in the total amount of storage committed.

Linearities exist in:

  • Pre-committing and proving new sectors (both state access and proof verification)
  • Proving Window PoSt of all storage daily (state access, proof validation off-chain)
  • Detecting and accounting for faults and recoveries
  • Cron processing checking for missed proofs, expiring sectors, and other housekeeping

We wish to remove or reduce all such linear costs from the blockchain validation process in order to remove limitations on rate of growth, now and in the long future when power and growth are significantly (exponentially) higher. SNARKPack goes a long way toward addressing the linear cost of PoRep and PoSt proof verification. However, there remain linear costs associated with providing the public inputs for each sector’s proof.

Goals

Our goal is to enable arbitrary amounts of storage to be committed and maintained by miners within a fixed network transaction throughput.

This means redesigning storage onboarding and maintenance state and processes to remove linear per-sector costs, or dramatically reduce constants below practical needs. We want to do this while maintaining security, micro- and macro-economic attractiveness, discoverable and verifiable information about deals, and reasonable miner operational requirements.

This effort is seeking a solution that is in reach for implementation in the next 3-6 months (which means relying on PoRep and ZK proof technologies that already exist today), and that is good enough that we won’t have to re-solve the problem within a few years.

Of course there exist other, orthogonal approaches to the general problem of scaling, but these are generally longer and harder propositions (e.g. sharding, layer 2 state).

Out of scope

This proposal does not attempt to solve exponential growth in deals, except by making it no harder to solve that problem later. We think this sequencing is reasonable because (a) deals are in practice rare at present, and (b) off-chain aggregation into whole-sector-size deals mitigates costs in the near term. We expect exponential deal growth to be a challenge to address in 2022.

Key ideas

The premise behind this proposal is that we cannot store or access a fixed-size piece of state for each 32 or 64 GiB sector of storage, either while onboarding or maintaining storage. Specifically, we cannot store or access a replica commitment (CommR) per sector, nor mutate per-partition state when accounting Window PoSt. CommR in aggregate today accounts for over half of the state tree at a single epoch, and Window PoSt partition state manipulation dominates the cost of maintenance.

The key design idea is to maintain largely the same data and processes we have today, but applied to an arbitrary number of sectors as a unit. The proposal will redesign state, proofs and algorithms to enable a miner to commit to and maintain units of storage larger than one sector, with cost that is logarithmic or better in the amount of storage. Thus, with a fixed chain processing capacity, the unit of accounting and proof can increase in size over time to support unbounded storage growth and capacity. We will assume that miners will increase their unit of commitment if blockchain transaction throughput is near capacity.
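To make the fixed-throughput argument concrete (illustrative numbers, not part of the design): if chain validation can afford roughly N commitment messages per epoch and each message commits a unit of k sectors of 32 GiB, onboarding capacity is N × k × 32 GiB per epoch. Raising k rather than N absorbs growth at roughly constant chain cost, which is what lets a fixed gas budget support exponentially more storage.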

created time in 2 months