raulk @protocol, Lisboa, Portugal

libp2p/rust-libp2p 1757

The Rust Implementation of the libp2p networking stack.

libp2p/libp2p 1598

A modular and extensible networking stack which solves many challenges of peer-to-peer applications.

ConsenSys/ethql 564

A GraphQL interface to Ethereum :fire:

libp2p/cpp-libp2p 138

C++17 implementation of libp2p

ipfs-inactive/blog 100

[ARCHIVED] Source for the IPFS Blog

libp2p/go-libp2p-webrtc-direct 60

A libp2p transport that enables browser-to-server, and server-to-server, direct communication over WebRTC without requiring signalling servers

ipfs/dht-node 38

[ARCHIVED] Run just an ipfs dht node (Or many nodes at once!)

libp2p/devgrants 35

want to hack on libp2p? this repo tracks libp2p endeavors eligible for incentivization.

libp2p/workspace-go-libp2p 15

workspace for go-libp2p contributors

filecoin-project/test-vectors 13

💎 VM and Chain test vectors for Filecoin implementations

push event filecoin-project/dealbot

Will Scott

commit sha 9de8e5711aef1fec23bdf8fb74d34025db1b54be

car already made


pushed 26 minutes ago

Pull request review comment filecoin-project/dealbot

remove retrieved files before completion.

 func (de *retrievalDealExecutor) executeAndMonitorDeal(ctx context.Context, upda
 				}

 				if de.task.CARExport.x {
-					return errors.New("car export not implemented")
+					carPath := filepath.Join(de.config.NodeDataDir, de.task.PayloadCID.x+".car")

@willscott I believe https://github.com/filecoin-project/dealbot/pull/235/files#diff-6714f1aa5772e99d5e914178bb06c11557a44ff9d6934cfbad3ff02e1dcffd39R160 will actually make Lotus generate a CAR for this retrieval automatically. This code was originally meant to double-check the result, I guess by scanning the CAR file? I'm not sure. I think you can just remove this if statement entirely.
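For context, a minimal sketch of the cleanup this PR is about, assuming hypothetical dealbot-style inputs (the parameters mirror the `de.config.NodeDataDir` and `de.task.PayloadCID` fields in the diff above, but are plain strings here); this is illustrative, not the PR's actual code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// removeRetrievedCAR deletes the CAR left behind by a retrieval, tolerating
// the file already being gone. The path is derived the same way as in the
// diff above: <nodeDataDir>/<payloadCID>.car.
func removeRetrievedCAR(nodeDataDir, payloadCID string) error {
	carPath := filepath.Join(nodeDataDir, payloadCID+".car")
	if err := os.Remove(carPath); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("removing retrieved CAR %s: %w", carPath, err)
	}
	return nil
}
```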

willscott

comment created 30 minutes ago

pull request comment libp2p/js-libp2p-interfaces

chore: update to new multiformats

Pointing out that there is a browser issue in CI

achingbrain

comment created 38 minutes ago

push event filecoin-project/lotus

Anton Evangelatov

commit sha aa584475cc92a514e9edd5bddc8b300a3c301c61

fix paych and sdr tests


pushed 39 minutes ago

PR opened libp2p/go-routing-language

Middleware and context interfaces

Goal

Draft of the middleware and context interfaces to get a sense of the client UX, and the work required to implement custom middlewares. Feedback more than welcome.
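The diff itself isn't shown in this feed, so as orientation only, here is the common Go shape such middleware/context interfaces tend to follow; `Handler`, `Middleware`, `withLogging`, and `Chain` are hypothetical names, not this PR's API:

```go
package main

import (
	"context"
	"fmt"
)

// Handler resolves a routing query.
type Handler func(ctx context.Context, query string) (string, error)

// Middleware wraps a Handler, adding behavior before/after the call.
type Middleware func(Handler) Handler

// withLogging is a sample middleware that logs each query before delegating.
func withLogging(next Handler) Handler {
	return func(ctx context.Context, query string) (string, error) {
		fmt.Println("routing query:", query)
		return next(ctx, query)
	}
}

// Chain composes middlewares right-to-left around a base handler, so the
// first middleware in the list is the outermost wrapper.
func Chain(h Handler, mws ...Middleware) Handler {
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}
```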

+241 -9

0 comments

9 changed files

PR created 43 minutes ago

create branch libp2p/go-routing-language

branch: adlrocha/v0.1

branch created an hour ago

issue opened filecoin-project/lotus

[BUG] Unable to exit sealing state "TerminateFailed"

Describe the bug: After applying the "lotus-miner sectors terminate" command, the sector is stuck in a Terminating loop.

Version (run lotus version): Happens both in 1.10.0-rc6 and lotus-miner version 1.11.0-dev+mainnet+git.6e1cf6fdc

To Reproduce: Steps to reproduce the behavior:

  1. Run 'lotus-miner sectors terminate --really-do-it 1276' against a broken sealing task
  2. The sector remains in TerminateFailed state
  3. Attempt 'lotus-miner sectors remove --really-do-it 1276' (see errors below)

Expected behavior: I expect the sector to be removed from the sectors list after a lotus-miner restart.

Logs:

2021-06-22T13:56:26.028Z INFO stores stores/remote.go:338 Delete http://192.168.1.96:3456/remote/sealed/s-t01278-1276
2021-06-22T13:56:26.065Z ERROR evtsm go-statemachine@v0.0.0-20200925024713-05bd7c71fbfe/machine.go:83 Executing event planner failed: running planner for state Terminating failed:
    github.com/filecoin-project/lotus/extern/storage-sealing.(*Sealing).plan
        /home/stuart/lotus/extern/storage-sealing/fsm.go:296

  • planner for state Terminating received unexpected event sealing.SectorRemoved ({User:{}}):
    github.com/filecoin-project/lotus/extern/storage-sealing.planOne.func1
        /home/stuart/lotus/extern/storage-sealing/fsm.go:625

Screenshots (sector event log):

7. 2021-06-15 15:54:27 +0000 UTC: [event;sealing.SectorSealPreCommit2Failed] {"User":{}} seal pre commit(2) failed: storage call error 0: task aborted
8. 2021-06-15 15:55:27 +0000 UTC: [event;sealing.SectorRetrySealPreCommit2] {"User":{}}
9. 2021-06-15 17:35:07 +0000 UTC: [event;sealing.SectorSealPreCommit2Failed] {"User":{}} seal pre commit(2) failed: storage call error 101: worker restarted
37. 2021-06-16 16:56:49 +0000 UTC: [event;sealing.SectorSealPreCommit1Failed] {"User":{}} ticket expired: ticket expired: seal height: 848408, head: 851873
38. 2021-06-16 16:57:49 +0000 UTC: [event;sealing.SectorRetrySealPreCommit1] {"User":{}}
39. 2021-06-16 16:57:49 +0000 UTC: [event;sealing.SectorOldTicket] {"User":{}}
40. 2021-06-16 16:57:49 +0000 UTC: [event;sealing.SectorTicket] {"User":{"TicketValue":"ESAlNc/5Rfzf4fVi1/GsR1zWnzn+o5lMvEjn0UwWeTo=","TicketEpoch":847508}}
7070. 2021-06-22 13:55:26 +0000 UTC: [event;sealing.SectorTerminate] {"User":{}}
7071. 2021-06-22 13:55:26 +0000 UTC: [event;sealing.SectorTerminateFailed] {"User":{}} checking precommit presence: sectorNumber is allocated, but PreCommit info wasn't found on chain
7072. 2021-06-22 13:56:26 +0000 UTC: [event;sealing.SectorRemove] {"User":{}}
7073. 2021-06-22 13:56:26 +0000 UTC: [event;sealing.SectorTerminate] {"User":{}}
7074. 2021-06-22 13:56:26 +0000 UTC: [event;sealing.SectorTerminate] {"User":{}}
7075. 2021-06-22 13:56:26 +0000 UTC: [event;sealing.SectorRemoved] {"User":{}}

lotus-miner sectors list | grep 1276
1276  Terminating  NO  NO  n/a

Additional context: I have restarted the miner several times after trying to use the remove command.

created an hour ago

issue closed filecoin-project/lotus

[Mining Issue] go-libp2p-nat: SOAP request got HTTP 500 Internal Server Error

Note: For security-related bugs/issues, please follow the security policy.

Please provide all the information requested here to help us troubleshoot "mining/WinningPoSt failed" issues. If the information requested is missing, you may be asked to provide it.

Describe the problem

After a successful lotus-miner init operation, the following warning is issued when lotus-miner run is performed:

A libp2p-related warning occurs after the miner is run (screenshot omitted).

Mining doesn't seem to be working normally.

Also, nonce values increase slightly when checked with lotus wallet list -i (screenshot omitted).

Version

The output of lotus --version: lotus version 1.9.0+mainnet+git.ada7f97ba

Setup

Your miner and daemon setup, including what hardware you use, your environment variable settings, how you run your miner and worker, whether you use a GPU, etc.

Swap: 255Gi

Storage: /filecoine: 3.6T

CPU info, environment variables, and config path: (screenshots omitted)

lotus config.toml (/filecoin/lotus/node/folder/config.toml)

# Default config:
[API]
# Binding address for the Lotus API
# ListenAddress = "/ip4/127.0.0.1/tcp/9200/http"
# Not used by lotus daemon
# RemoteListenAddress = "221.229.101.4:9200"
# General network timeout value
# Timeout = "30s"

#
[Backup]
#  DisableMetadataLog = false
#
[Libp2p]
#  ListenAddresses = ["/ip4/0.0.0.0/tcp/24001"]
#  AnnounceAddresses = ["/ip4/211.229.101.4/tcp/24001"]
#  NoAnnounceAddresses = []
#  ConnMgrLow = 150
#  ConnMgrHigh = 180
#  ConnMgrGrace = "20s"
#
[Pubsub]
#  Bootstrapper = false
#  RemoteTracer = "/dns4/pubsub-tracer.filecoin.io/tcp/4001/p2p/QmTd6UvR47vUidRNZ1ZKXHrAFhqTJAD27rKL9XYghEKgKX"
#
[Client]
#  UseIpfs = false
#  IpfsOnlineMode = false
#  IpfsMAddr = ""
#  IpfsUseForRetrieval = false
#  SimultaneousTransfers = 20

[Metrics]
#  Nickname = ""
#  HeadNotifs = false

[Wallet]
#  RemoteBackend = ""
#  EnableLedger = false
#  DisableLocal = false
#
[Fees]
#  #DefaultMaxFee = "0.07 FIL"

[Chainstore]
#  EnableSplitstore = false
#  [Chainstore.Splitstore]
#    HotStoreType = "badger"
#    TrackingStoreType = ""
#    MarkSetType = ""
#    EnableFullCompaction = false
#    EnableGC = false
#    Archival = false
#

lotus-miner config.toml (/filecoin/miner/storage/config.toml)

# Default config:
[API]
  ListenAddress = "/ip4/127.0.0.1/tcp/2345/http"
  RemoteListenAddress = "127.0.0.1:2345"
  Timeout = "30s"
#
[Backup]
#  DisableMetadataLog = false
#
[Libp2p]
  ListenAddresses = ["/ip4/0.0.0.0/tcp/24001"]
  AnnounceAddresses = ["/ip4/211.229.101.4/tcp/24001"]
#  NoAnnounceAddresses = []
  ConnMgrLow = 150
  ConnMgrHigh = 180
  ConnMgrGrace = "20s"
#
[Pubsub]
  Bootstrapper = false
  RemoteTracer = "/dns4/pubsub-tracer.filecoin.io/tcp/4001/p2p/QmTd6UvR47vUidRNZ1ZKXHrAFhqTJAD27rKL9XYghEKgKX"
#
[Dealmaking]
  ConsiderOnlineStorageDeals = true
  ConsiderOfflineStorageDeals = true
  ConsiderOnlineRetrievalDeals = true
  ConsiderOfflineRetrievalDeals = true
  ConsiderVerifiedStorageDeals = true
  ConsiderUnverifiedStorageDeals = true
#  PieceCidBlocklist = []
  ExpectedSealDuration = "24h0m0s"
  PublishMsgPeriod = "1h0m0s"
  MaxDealsPerPublishMsg = 8
  MaxProviderCollateralMultiplier = 2
#  Filter = ""
#  RetrievalFilter = ""
#
[Sealing]
  MaxWaitDealsSectors = 2
  MaxSealingSectors = 0
  MaxSealingSectorsForDeals = 0
  WaitDealsDelay = "6h0m0s"
  AlwaysKeepUnsealedCopy = true
#
[Storage]
  ParallelFetchLimit = 10
  AllowAddPiece = true
  AllowPreCommit1 = true
  AllowPreCommit2 = true
  AllowCommit = true
  AllowUnseal = true
#
[Fees]
#  MaxPreCommitGasFee = "0.025 FIL"
#  MaxCommitGasFee = "0.05 FIL"
#  MaxTerminateGasFee = "0.5 FIL"
#  MaxWindowPoStGasFee = "5 FIL"
#  MaxPublishDealsFee = "0.05 FIL"
#  MaxMarketBalanceAddFee = "0.007 FIL"
#
[Addresses]
  PreCommitControl = ["f3sfwubigzqrrve6q4t3fpgwl26wmm5m7cftlw7crhyuc2umcqschv3kdu47jqbu44xj2l7y3c7twre3e55mga"]
  CommitControl = ["f3u2ww4xs74jahcdkddibkarfrqzg43tx5ixroiclybr2bd644vyj4mrdcoqqndjpwue6nmxdqtsfyzuyapika"]
  DisableOwnerFallback = false
  DisableWorkerFallback = false
#

If you have modified parts of lotus, please describe which areas were modified, and the scope of those modifications.

closed an hour ago

KimJongYoon

issue comment filecoin-project/lotus

[Mining Issue] go-libp2p-nat: SOAP request got HTTP 500 Internal Server Error

@cwhiggins This issue has been resolved since the router was replaced. Thank you.

KimJongYoon

comment created an hour ago

Pull request review comment filecoin-project/dagstore

initial design doc.

## CARv2 + DAG store implementation notes

## Overview

The purpose of the CARv2 + DAG store endeavour is to eliminate overhead from the deal-making processes, with the mission of unlocking scalability, performance, and resource frugality on both the miner and client side within the Filecoin network.

Despite serving Lotus/Filecoin's immediate needs, we envision the DAG store to be a common interplanetary building block across IPFS, Filecoin, and IPLD-based projects.

The scalability of the former two is bottlenecked by the usage of Badger as a monolithic blockstore. The DAG store represents an alternative that leverages sharding and the concept of detachable "data cartridges" to feed and manage data as transactional units.

For the project that immediately concerns us, the definition of done is the removal of the Badger blockstore from all client and miner codepaths involved in storage and retrieval deals, and the transition towards interfacing directly with low-level data repository files known as CARs (Content ARchives), on both the read and the write sides.

Note that Badger is generally a fitting database for write-intensive and iterator-heavy workloads. It just doesn't withstand the pattern of usage we subject it to in large IPLD-based software components, especially past the hundreds-of-gigabytes data volume mark.

For more information on motivation and architecture, please refer to [CAR-native DAG store](https://docs.google.com/document/d/118fJdf8HGK8tHYOCQfMVURW9gTF738D0weX-hbG-BvY/edit#). This document is recommended as a pre-read.

## Overview

The DAG store comprises three layers:

1. Storage layer (manages shards)
2. Index repository (manages indices)
3. DAG access layer (manages queries)

## Storage layer

The DAG store holds shards of data. In Filecoin, shard = deal. Each shard is a repository of data capable of acting as a standalone blockstore.

### Shards

Shards are identified by an opaque byte string: the shard key. In the case of Filecoin, the shard key is the `PieceCID` of the storage deal.

A shard contains:

1. the shard key (identifier).
2. the means to obtain shard data (a `Mount` that provides the CAR either locally, remotely, or from some other location).
3. the shard index (both acting as a manifest of the contents of the shard, and a means to efficiently perform random reads).

### CAR formats

A shard can be filled with CARv1 and CARv2 data. CARv2 can be indexed or indexless.

The choice of version and characteristics affects how the shard index is populated:

1. CARv1: the index is calculated upon shard activation.
2. Indexless CARv2: the index is calculated upon shard activation.
3. Indexed CARv2: the inline index is adopted as-is as the shard index.

### Shard states

1. **Available:** the system knows about this shard and is capable of serving data from it instantaneously, because (a) an index exists and is loaded, and (b) data is locally accessible (e.g. it doesn't need to be fetched from a remote mount).
2. **Unavailable:** the system knows about this shard, but is not capable of serving data from it because the shard is being initialized, or the mount is not available locally, but still accessible with work (e.g. fetching from a remote location).
3. **Destroyed:** the shard is no longer available; this is a permanent condition.

### Operations

#### Shard registration

To register a new shard, call:

```go
dagst.RegisterShard(key []byte, mount Mount, opts ...RegisterOpts) error
```

1. This method takes a shard key and a `Mount`.
2. It initializes the shard in `Unavailable` state.
3. Calls `mount.Info()` to determine if the mount is of local or remote type.
4. Calls `mount.Stat()` to determine if the mount target exists. If not, it errors the registration.
5. If remote, it fetches the remote resource into the scrap area.
6. It determines the kind of CAR it is, and populates the index accordingly.
7. Sets the state to `Available`.
8. Returns.

This method is _synchronous_. It returns only when the shard is fully indexed and available for serving data. This embodies the consistency property of the ACID model, whereby an INSERT is immediately queryable upon return.

_RegisterOpts is an extension point._ In the future it can be used to pass in an unseal/decryption function to be used on access (e.g. when fast, random unseal is available).

#### Shard destruction

To destroy a shard, call:

```go
dagst.DestroyShard(key []byte) (destroyed bool, err error)
```

#### Other operations

* `[Pin/Unpin]Shard()`: retains the shard data in the local scrap area.
* `ReleaseShard()`: dispose of / release local scrap copies or other resources on demand (e.g. unsealed copy).
* `[Lock/Unlock]Shard()`: for exclusive access.
* `[Incr/Decr]Shard()`: refcounting on shards.

### Mounts

Shards can be located anywhere, and can come and go dynamically, e.g. Filecoin deals expire, removable media is attached/detached, or the IPFS user purges content.

It is possible to mount shards with CARs accessible through the local filesystem, detachable mounts, NFS mounts, distributed filesystems like Ceph/GlusterFS, HTTP, FTP, etc.

This versatility is provided by an abstraction called `mount.Mount`, a pluggable component which encapsulates operations/logic to:

1. `Fetch() (io.ReadCloser, error)`: load a CAR from its origin.
2. `Info() mount.Info`: provides info about the mount, e.g. whether it's local or remote. This is used to determine whether the fetched CAR needs to be copied to a scrap area.
3. `Stat() (mount.Stat, error)`: equivalent to a filesystem stat, provides information about the target of the mount: whether it exists, size, etc.

When instantiating `Mount` implementations, one can provide credentials, access tokens, or other parameters through the implementation constructors. This is necessary if access to the CAR is permissioned/authenticated.

**Local filesystem Mount**

A local filesystem `Mount` implementation loads the CAR directly from a filesystem file. It is of `local` type and therefore requires no usage of the scrap area.

*This `Mount` is provided out of the box by the DAG store.*

**Lotus Mount implementation**

A Lotus `Mount` implementation would be instantiated with a sector ID and a byte range within the sealed sector file (i.e. the deal segment).

Loading the CAR consists of calling the worker HTTP API to fetch the unsealed piece. Because the mount is of `remote` type, the DAG store will need to store it in a local scrap area. Currently, this may lead to actual unsealing on the Lotus worker cluster through the current PoRep (slow) if the piece is sealed.

With a future (cheap, snappy) PoRep, unsealing can be performed _just_ for the blocks that are effectively accessed, potentially at IPLD block access time. A transformation function may be provided in the future as a `RegisterOpt` that conducts the unsealing.

A prerequisite to make unsealing-on-demand possible is PoRep and index symmetry, i.e. the byte positions of blocks in the sealed piece must be identical to those in the unsealed CAR.

*This `Mount` is provided by Lotus, as it's implementation-specific.*

### Shard representation and persistence

The shard catalogue needs to survive restarts. Thus, it needs to be persistent. Options to explore here include LMDB, BoltDB, or others. Here's what the persisted shard entry could look like:

```go
type PersistedShard struct {
   Key   []byte
   // Mount is a URL representation of the Mount, e.g.
   //   file://path/to/file?opt=1&opt=2
   //   lotus://sectorID?offset=1&length=2
   Mount string
   // LocalPath is the path to the local replica of the CAR in the scrap area,
   // if the mount is of remote type.
   LocalPath string
}
```

Upon starting, the DAG store will load the catalogue from disk and will reinstantiate the shard catalogue, the mounts, and the shard states.

### Scrap area

When dealing with remote mounts (e.g. a Filecoin storage cluster), the DAG store will need to copy the remote CAR into local storage to be able to serve DAG access queries. These copies are called _transient copies_.

Readers access shards from the storage layer by calling `Acquire/ReleaseShard(key []byte)` methods, which drive the copies into scrap storage and the deletion of resources.

These methods will need to do refcounting. When no readers are accessing a shard, the DAG store is free to release local resources. In a first version, this may happen instantly. In future versions, we may introduce some active management of the scrap area through usage monitoring + GC. Storage space assigned to the scrap area may be set by configuration in the future.

## Index repository

The index repository is the subcomponent that owns and manages the indices in the DAG store.

There exist three kinds of indices:

1. **Full shard indices.** Consisting of `{ CID: offset }` mappings. In indexed CARv2, these are extracted from the inline indices. In unindexed CARv2 and CARv1, these are computed using the [Carbs library](https://github.com/willscott/carbs), or the CARv2 upgrade path.

   Full shard indices are protected, and not writable externally. Every available/unavailable shard MUST have a full shard index. Upon shard destruction, its associated full shard index can be disposed.

2. **Semantic indices.** Manifests of externally-relevant CIDs contained within a shard, i.e. `[]CID` with no offset indication; in other words, subsets of the full shard index.

   These are calculated externally (e.g. by a semantic indexing service) and supplied to the DAG store for storage and safekeeping.

   A shard can have an unbounded number of semantic indices associated with it. Each semantic index is identified and stamped with its generation data (rule and version).

   We acknowledge that semantic indices perhaps don't belong in the DAG store long-term. Despite that, we decide to incubate them here, to potentially spin them off in the future.

3. **Top-level cross-shard index.** Aggregates of full shard indices that enable shard routing of reads for concrete CIDs.

### Interactions

This component receives the queries coming in from the miner-side indexing sidecar, which in turn come from network indexers.

It also serves the DAG access layer. When a shard is registered/acquired in the storage layer and a Blockstore is demanded for it, the full index that provides random-access capability is obtained from the index repo.

In the future, this subcomponent will also serve the top-level cross-shard index.

### Interface

```go
type IndexRepo interface {
   FullIndexRepo
   ManifestRepo
}

type FullIndexRepo interface {
   // public
   GetFullIndex(key shard.Key) (FullIndex, error)

   // private
   InsertFullIndex(key shard.Key, index FullIndex) error
   DeleteFullIndex(key shard.Key) (bool, error)
}

type ManifestRepo interface {
   // public
   GetManifest(key ManifestKey) (Manifest, error)
   InsertManifest(key ManifestKey, manifest Manifest) error
   DeleteManifest(key ManifestKey) (bool, error)
   ExistsManifest(key ManifestKey) (bool, error)
}

type ManifestKey struct {
   Shard   key.Shard
   Rule    string
   Version string
}

type FullIndex interface {
   Offset(c cid.Cid) (offset int64, err error) // nonexistent: offset=-1, err=nil
   Contains(c cid.Cid) (bool, error)
   Len() (l int64, err error)
   ForEach(func(c cid.Cid, offset int64) (ok bool, err error)) error
}
```

## DAG access layer

This layer is responsible for serving DAGs or sub-DAGs from one or many shards. Initially, queries will require the shard key. That is, queries will be bounded to a single identifiable shard.

In the future, when the cross-shard top-level index is implemented, the DAG store will be capable of resolving the shard for any given root CID.

The DAG access layer allows various access abstractions:

1. Obtaining a Blockstore bound to a single shard.
2. Obtaining a Blockstore bound to a specific set of shards.
3. Obtaining a global Blockstore.
4. `ipld.Node` -- TBD.

At this point, we are only concerned with (1). The remaining access patterns will be elaborated on in the future.

```go
type DAGAccessor interface {
   Shard(key shard.Key) ShardAccessor
}

type ShardAccessor interface {
   Blockstore() (ReadBlockstore, error)
}
```

## Requirements for CARv2

- The index needs to be detachable.
- Index offsets need to be relative to the CARv1, and not absolute in the physical CARv2 file.
- The index needs to be iterable.
- Given any CAR file, we should be able to determine its version and characteristics (concretely, indexed or not indexed).
- Given a CARv1 file, it should be possible to generate a CARv2 index for it in detached form.
- Given a CARv2 file, we should be able to decompose it to the corresponding CARv1 payload and the Carbs index. The CAR payload should be exposed with `io.ReaderAt` and `io.Reader` abstractions.
- Given an indexed CARv2 file, it should be possible to instantiate a `ReadBlockstore` on it in a self-sufficient manner.
- Given a CARv1 or unindexed CARv2, it should be possible to instantiate a `ReadBlockstore` with an index provided by us.
- It should be possible to write an indexed CARv2 in streaming fashion. The CARv2 library should do the right thing depending on the declared write characteristics and the desired output characteristics. For example:
   1. the writer may declare that blocks are provided in depth-first order and without uniqueness guarantees, but may state they want the output to be in depth-first order AND deduplicated. In this case, the CARv2 library must

The current carbon will write a block every time you call Put, so it won't automatically dedup for you. Does it make more sense for you as the caller to do the Has check to skip duplicates, or do we want that logic in the library?

@masih - I haven't looked closely at the method for opening the CARv2 as a blockstore. Do we anticipate there'll be a reasonably ergonomic place to pass in an option for deduplicating puts when we do that?

I don't think this is hard, it's just a matter of where we do it.
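For the caller-side option, a minimal sketch of the Has-before-Put check under discussion, assuming the era's go-ipfs-blockstore interface (Has/Put without a context parameter); the helper name is hypothetical:

```go
package main

import (
	blocks "github.com/ipfs/go-block-format"
	bstore "github.com/ipfs/go-ipfs-blockstore"
)

// putDeduped writes blk only if the blockstore doesn't already hold it,
// trading an extra Has lookup for avoiding duplicate blocks in the CAR.
func putDeduped(bs bstore.Blockstore, blk blocks.Block) error {
	has, err := bs.Has(blk.Cid())
	if err != nil {
		return err
	}
	if has {
		return nil // duplicate; skip the write
	}
	return bs.Put(blk)
}
```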

raulk

comment created an hour ago

Pull request review comment filecoin-project/dagstore

initial design doc.

(This review quotes the same design doc excerpt reproduced above, through the start of the Lotus Mount implementation section.)

Yes, we are on the same page.

raulk

comment created 2 hours ago

push event filecoin-project/lotus

Anton Evangelatov

commit sha 218c9199f3303a22bcb2133288c3053115f4f480

fix testNonGenesisMiner


pushed 2 hours ago

issue closed filecoin-project/dealbot

Task record should include miner and client multiaddrs used

Currently, we don't track the IP / multiaddr resolved for the miner, or the IP that the client is visible from. These should be collected and filled into the task record.
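A sketch of what those collected fields might look like on the task record; the names below are hypothetical, not dealbot's actual schema:

```go
package main

// TaskNetworkInfo captures the endpoints observed while executing a deal task.
type TaskNetworkInfo struct {
	MinerMultiaddr string // multiaddr resolved for the miner, e.g. /ip4/1.2.3.4/tcp/24001
	ClientIP       string // IP address the client is visible from
}
```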

closed 2 hours ago

willscott

issue comment filecoin-project/dealbot

Task record should include miner and client multiaddrs used

These are now present

willscott

comment created 2 hours ago

push event filecoin-project/lotus

Anton Evangelatov

commit sha 4f2d8b0a856db53a908312c8fcded8920fd902bf

add all subsystems to deadlines and wdpost_dispute tests


pushed 2 hours ago

issue comment filecoin-project/lotus

[Feature Request] Prebuilt binaries for lotus in brew

Probably depends on https://github.com/filecoin-project/filecoin-ffi/pull/179

atopal

comment created 2 hours ago

issue opened filecoin-project/lotus

[Feature Request] Prebuilt binaries for lotus in brew

Is your feature request related to a problem? Please describe: Now that lotus is in brew, the installation process for users who want to get started on macOS is much simpler. Unfortunately, brew still needs to build Lotus from scratch, which can take a significant amount of time.

Describe the solution you'd like: Please offer prebuilt binaries for package managers like brew.

Describe alternatives you've considered: The alternative is currently building from scratch. It works, but doesn't compare favorably with the getting-started experience of other solutions in this space.

created 2 hours ago

push event filecoin-project/lotus

Anton Evangelatov

commit sha a828b15fbead16a30d4f0329ad526602d525ab34

revert MockSectorMgr


pushed 2 hours ago

push event protocol/web3-dev-team

jnthnvctr

commit sha 629fca18f819ab59f2c2dbe2a40af3445a1befcf

Update fil-storage-homepage.md


pushed 2 hours ago

push event protocol/web3-dev-team

jnthnvctr

commit sha 84d1a3516b8347dd51c4e89db72b833fa1ce846a

Rename fil-storage-homepage to fil-storage-homepage.md


pushed 2 hours ago

push event protocol/web3-dev-team

jnthnvctr

commit sha cfa9e73492e66058daf3dc0323d2cb073a01f4ff

Create fil-storage-homepage


pushed 2 hours ago

PR opened filecoin-project/dealbot

remove retrieved files before completion.

fix #221

+24 -2

0 comments

1 changed file

PR created 2 hours ago

create branch filecoin-project/dealbot

branch: retrieveunlink

branch created 2 hours ago

pull request comment filecoin-project/dagstore

skeleton: mounts.

Overall LGTM 👍

I'm wondering if we need the registry - do we anticipate needing to load mount types dynamically at runtime?
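For reference, the kind of registry being questioned would presumably map persisted mount URL schemes (e.g. the file:// and lotus:// forms in the design doc above) back to live mounts at startup; a sketch under that assumption, with illustrative names rather than the dagstore's actual API:

```go
package main

import (
	"fmt"
	"net/url"
)

// Mount stands in for the dagstore mount abstraction.
type Mount interface{}

// MountFactory rehydrates a Mount from its persisted URL form.
type MountFactory func(u *url.URL) (Mount, error)

var registry = map[string]MountFactory{}

// RegisterMountType associates a URL scheme with a factory.
func RegisterMountType(scheme string, f MountFactory) {
	registry[scheme] = f
}

// RehydrateMount reconstructs a Mount from a persisted URL string, e.g.
// "lotus://sectorID?offset=1&length=2".
func RehydrateMount(raw string) (Mount, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return nil, err
	}
	f, ok := registry[u.Scheme]
	if !ok {
		return nil, fmt.Errorf("unknown mount scheme: %q", u.Scheme)
	}
	return f(u)
}
```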

raulk

comment created 3 hours ago

create branch protocol/web3-dev-team

branch: jvictor/fil-storage-homepage

branch created 3 hours ago

Pull request review comment filecoin-project/lotus

revamped integration test kit (aka. Operation Sparks Joy)

package kit

import (
	"bytes"
	"context"
	"crypto/rand"
	"io/ioutil"
	"sync"
	"testing"
	"time"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/go-state-types/exitcode"
	"github.com/filecoin-project/go-state-types/network"
	"github.com/filecoin-project/go-storedcounter"
	"github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/api/v1api"
	"github.com/filecoin-project/lotus/build"
	"github.com/filecoin-project/lotus/chain"
	"github.com/filecoin-project/lotus/chain/actors"
	"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
	"github.com/filecoin-project/lotus/chain/actors/builtin/power"
	"github.com/filecoin-project/lotus/chain/gen"
	genesis2 "github.com/filecoin-project/lotus/chain/gen/genesis"
	"github.com/filecoin-project/lotus/chain/messagepool"
	"github.com/filecoin-project/lotus/chain/types"
	"github.com/filecoin-project/lotus/chain/wallet"
	"github.com/filecoin-project/lotus/cmd/lotus-seed/seed"
	sectorstorage "github.com/filecoin-project/lotus/extern/sector-storage"
	"github.com/filecoin-project/lotus/extern/sector-storage/ffiwrapper"
	"github.com/filecoin-project/lotus/extern/sector-storage/mock"
	"github.com/filecoin-project/lotus/genesis"
	lotusminer "github.com/filecoin-project/lotus/miner"
	"github.com/filecoin-project/lotus/node"
	"github.com/filecoin-project/lotus/node/config"
	"github.com/filecoin-project/lotus/node/modules"
	"github.com/filecoin-project/lotus/node/modules/dtypes"
	testing2 "github.com/filecoin-project/lotus/node/modules/testing"
	"github.com/filecoin-project/lotus/node/repo"
	"github.com/filecoin-project/lotus/storage/mockstorage"
	miner2 "github.com/filecoin-project/specs-actors/v2/actors/builtin/miner"
	power2 "github.com/filecoin-project/specs-actors/v2/actors/builtin/power"
	"github.com/ipfs/go-datastore"
	libp2pcrypto "github.com/libp2p/go-libp2p-core/crypto"
	"github.com/libp2p/go-libp2p-core/peer"
	mocknet "github.com/libp2p/go-libp2p/p2p/net/mock"
	"github.com/stretchr/testify/require"
)

func init() {
	chain.BootstrapPeerThreshold = 1
	messagepool.HeadChangeCoalesceMinDelay = time.Microsecond
	messagepool.HeadChangeCoalesceMaxDelay = 2 * time.Microsecond
	messagepool.HeadChangeCoalesceMergeInterval = 100 * time.Nanosecond
}

// Ensemble is a collection of nodes instantiated within a test.
//
// Create a new ensemble with:
//
//   ens := kit.NewEnsemble()
//
// Create full nodes and miners:
//
//   var full TestFullNode
//   var miner TestMiner
//   ens.FullNode(&full, opts...)       // populates a full node
//   ens.Miner(&miner, &full, opts...)  // populates a miner, using the full node as its chain daemon
//
// It is possible to pass functional options to set initial balances,
// presealed sectors, owner keys, etc.
//
// After the initial nodes are added, call `ens.Start()` to forge genesis
// and start the network. Mining will NOT be started automatically. It needs
// to be started explicitly by calling `BeginMining`.
//
// Nodes also need to be connected with one another, either via `ens.Connect()`
// or `ens.InterconnectAll()`. A common incantation for simple tests is to do:
//
//   ens.InterconnectAll().BeginMining(blocktime)
//
// You can continue to add more nodes, but you must always follow with
// `ens.Start()` to activate the new nodes.
//
// The API is chainable, so it's possible to do a lot in a very succinct way:
//
//   kit.NewEnsemble().FullNode(&full).Miner(&miner, &full).Start().InterconnectAll().BeginMining()
//
// You can also find convenient fullnode:miner presets, such as 1:1, 1:2,
// and 2:1, e.g.:
//
//   kit.EnsembleMinimal()
//   kit.EnsembleOneTwo()
//   kit.EnsembleTwoOne()
//
type Ensemble struct {
	t            *testing.T
	bootstrapped bool
	genesisBlock bytes.Buffer
	mn           mocknet.Mocknet
	options      *ensembleOpts

	inactive struct {
		fullnodes []*TestFullNode
		miners    []*TestMiner
	}
	active struct {
		fullnodes []*TestFullNode
		miners    []*TestMiner
	}
	genesis struct {
		miners   []genesis.Miner
		accounts []genesis.Actor
	}
}

// NewEnsemble instantiates a new blank Ensemble.
func NewEnsemble(t *testing.T, opts ...EnsembleOpt) *Ensemble {
	options := DefaultEnsembleOpts
	for _, o := range opts {
		err := o(&options)
		require.NoError(t, err)
	}

	n := &Ensemble{t: t, options: &options}

	// add accounts from ensemble options to genesis.
	for _, acc := range options.accounts {
		n.genesis.accounts = append(n.genesis.accounts, genesis.Actor{
			Type:    genesis.TAccount,
			Balance: acc.initialBalance,
			Meta:    (&genesis.AccountMeta{Owner: acc.key.Address}).ActorMeta(),
		})
	}

	return n
}

// FullNode enrolls a new full node.
func (n *Ensemble) FullNode(full *TestFullNode, opts ...NodeOpt) *Ensemble {
	options := DefaultNodeOpts
	for _, o := range opts {
		err := o(&options)
		require.NoError(n.t, err)
	}

	key, err := wallet.GenerateKey(types.KTBLS)
	require.NoError(n.t, err)

	if !n.bootstrapped && !options.balance.IsZero() {
		// if we still haven't forged genesis, create a key+address, and assign
		// it some FIL; this will be set as the default wallet when the node is
		// started.
		genacc := genesis.Actor{
			Type:    genesis.TAccount,
			Balance: options.balance,
			Meta:    (&genesis.AccountMeta{Owner: key.Address}).ActorMeta(),
		}

		n.genesis.accounts = append(n.genesis.accounts, genacc)
	}

	*full = TestFullNode{t: n.t, options: options, DefaultKey: key}
	n.inactive.fullnodes = append(n.inactive.fullnodes, full)
	return n
}

// Miner enrolls a new miner, using the provided full node for chain
// interactions.
func (n *Ensemble) Miner(miner *TestMiner, full *TestFullNode, opts ...NodeOpt) *Ensemble {
	require.NotNil(n.t, full, "full node required when instantiating miner")

	options := DefaultNodeOpts
	for _, o := range opts {
		err := o(&options)
		require.NoError(n.t, err)
	}

	privkey, _, err := libp2pcrypto.GenerateEd25519Key(rand.Reader)
	require.NoError(n.t, err)

	peerId, err := peer.IDFromPrivateKey(privkey)
	require.NoError(n.t, err)

	tdir, err := ioutil.TempDir("", "preseal-memgen")
	require.NoError(n.t, err)

	minerCnt := len(n.inactive.miners) + len(n.active.miners)

	actorAddr, err := address.NewIDAddress(genesis2.MinerStart + uint64(minerCnt))
	require.NoError(n.t, err)

	ownerKey := options.ownerKey
	if !n.bootstrapped {
		var (
			sectors = options.sectors
			k       *types.KeyInfo
			genm    *genesis.Miner
		)

		// create the preseal commitment.
		if n.options.mockProofs {
			genm, k, err = mockstorage.PreSeal(abi.RegisteredSealProof_StackedDrg2KiBV1, actorAddr, sectors)
		} else {
			genm, k, err = seed.PreSeal(actorAddr, abi.RegisteredSealProof_StackedDrg2KiBV1, 0, sectors, tdir, []byte("make genesis mem random"), nil, true)
		}
		require.NoError(n.t, err)

		genm.PeerId = peerId

		// create an owner key, and assign it some FIL.
		ownerKey, err = wallet.NewKey(*k)
		require.NoError(n.t, err)

		genacc := genesis.Actor{
			Type:    genesis.TAccount,
			Balance: options.balance,
			Meta:    (&genesis.AccountMeta{Owner: ownerKey.Address}).ActorMeta(),
		}

		n.genesis.miners = append(n.genesis.miners, *genm)
		n.genesis.accounts = append(n.genesis.accounts, genacc)
	} else {
		require.NotNil(n.t, ownerKey, "worker key can't be null if initializing a miner after genesis")
	}

	*miner = TestMiner{
		t:          n.t,
		ActorAddr:  actorAddr,
		OwnerKey:   ownerKey,
		FullNode:   full,
		PresealDir: tdir,
		options:    options,
	}

	miner.Libp2p.PeerID = peerId
	miner.Libp2p.PrivKey = privkey

	n.inactive.miners = append(n.inactive.miners, miner)

	return n
}

// Start starts all enrolled nodes.
func (n *Ensemble) Start() *Ensemble {
	ctx, cancel := context.WithCancel(context.Background())
	n.t.Cleanup(cancel)

	var gtempl *genesis.Template
	if !n.bootstrapped {
		// We haven't been bootstrapped yet, we need to generate genesis and
		// create the networking backbone.
		gtempl = n.generateGenesis()
		n.mn = mocknet.New(ctx)
	}

	// ---------------------
	//  FULL NODES
	// ---------------------

	// Create all inactive full nodes.
	for i, full := range n.inactive.fullnodes {
		opts := []node.Option{
			node.FullAPI(&full.FullNode, node.Lite(full.options.lite)),
			node.Online(),
			node.Repo(repo.NewMemory(nil)),
			node.MockHost(n.mn),
			node.Test(),

			// so that we subscribe to pubsub topics immediately
			node.Override(new(dtypes.Bootstrapper), dtypes.Bootstrapper(true)),
		}

		// append any node builder options.
		opts = append(opts, full.options.extraNodeOpts...)

		// Either generate the genesis or inject it.
		if i == 0 && !n.bootstrapped {
			opts = append(opts, node.Override(new(modules.Genesis), testing2.MakeGenesisMem(&n.genesisBlock, *gtempl)))
		} else {
			opts = append(opts, node.Override(new(modules.Genesis), modules.LoadGenesis(n.genesisBlock.Bytes())))
		}

		// Are we mocking proofs?
		if n.options.mockProofs {
			opts = append(opts,
				node.Override(new(ffiwrapper.Verifier), mock.MockVerifier),
				node.Override(new(ffiwrapper.Prover), mock.MockProver),
			)
		}

		// Call option builders, passing active nodes as the parameter.
		for _, bopt := range full.options.optBuilders {
			opts = append(opts, bopt(n.active.fullnodes))
		}

		// Construct the full node.
		stop, err := node.New(ctx, opts...)

		require.NoError(n.t, err)

		addr, err := full.WalletImport(context.Background(), &full.DefaultKey.KeyInfo)
		require.NoError(n.t, err)

		err = full.WalletSetDefault(context.Background(), addr)
		require.NoError(n.t, err)

		// Are we hitting this node through its RPC?
		if full.options.rpc {
			withRPC := fullRpc(n.t, full)
			n.inactive.fullnodes[i] = withRPC
		}

		n.t.Cleanup(func() { _ = stop(context.Background()) })

		n.active.fullnodes = append(n.active.fullnodes, full)
	}

	// If we are here, we have processed all inactive fullnodes and moved them
	// to active, so clear the slice.
	n.inactive.fullnodes = n.inactive.fullnodes[:0]

	// Link all the nodes.
	err := n.mn.LinkAll()
	require.NoError(n.t, err)

	// ---------------------
	//  MINERS
	// ---------------------

	// Create all inactive miners.
	for i, m := range n.inactive.miners {
		if n.bootstrapped {
			// this is a miner created after genesis, so it won't have a preseal.
			// we need to create it on chain.
			params, aerr := actors.SerializeParams(&power2.CreateMinerParams{
				Owner:         m.OwnerKey.Address,
				Worker:        m.OwnerKey.Address,
				SealProofType: m.options.proofType,
				Peer:          abi.PeerID(m.Libp2p.PeerID),
			})
			require.NoError(n.t, aerr)

			createStorageMinerMsg := &types.Message{
				From:  m.OwnerKey.Address,
				To:    power.Address,
				Value: big.Zero(),

				Method: power.Methods.CreateMiner,
				Params: params,

				GasLimit:   0,
				GasPremium: big.NewInt(5252),
			}
			signed, err := m.FullNode.FullNode.MpoolPushMessage(ctx, createStorageMinerMsg, nil)
			require.NoError(n.t, err)

			mw, err := m.FullNode.FullNode.StateWaitMsg(ctx, signed.Cid(), build.MessageConfidence, api.LookbackNoLimit, true)
			require.NoError(n.t, err)
			require.Equal(n.t, exitcode.Ok, mw.Receipt.ExitCode)

			var retval power2.CreateMinerReturn
			err = retval.UnmarshalCBOR(bytes.NewReader(mw.Receipt.Return))
			require.NoError(n.t, err, "failed to create miner")

			m.ActorAddr = retval.IDAddress
		}

		has, err := m.FullNode.WalletHas(ctx, m.OwnerKey.Address)
		require.NoError(n.t, err)

		// Only import the owner's full key into our companion full node, if we
		// don't have it yet.
		if !has {
			_, err = m.FullNode.WalletImport(ctx, &m.OwnerKey.KeyInfo)
			require.NoError(n.t, err)
		}

		// // Set it as the default address.
		// err = m.FullNode.WalletSetDefault(ctx, m.OwnerAddr.Address)
		// require.NoError(n.t, err)

		r := repo.NewMemory(nil)

		lr, err := r.Lock(repo.StorageMiner)
		require.NoError(n.t, err)

		ks, err := lr.KeyStore()
		require.NoError(n.t, err)

		pk, err := m.Libp2p.PrivKey.Bytes()
		require.NoError(n.t, err)

		err = ks.Put("libp2p-host", types.KeyInfo{
			Type:       "libp2p-host",
			PrivateKey: pk,
		})
		require.NoError(n.t, err)

		ds, err := lr.Datastore(context.TODO(), "/metadata")
		require.NoError(n.t, err)

		err = ds.Put(datastore.NewKey("miner-address"), m.ActorAddr.Bytes())
		require.NoError(n.t, err)

		nic := storedcounter.New(ds, datastore.NewKey(modules.StorageCounterDSPrefix))
		for i := 0; i < m.options.sectors; i++ {
			_, err := nic.Next()
			require.NoError(n.t, err)
		}
		_, err = nic.Next()
		require.NoError(n.t, err)

		err = lr.Close()
		require.NoError(n.t, err)

		enc, err := actors.SerializeParams(&miner2.ChangePeerIDParams{NewID: abi.PeerID(m.Libp2p.PeerID)})
		require.NoError(n.t, err)

		msg := &types.Message{
			From:   m.OwnerKey.Address,
			To:     m.ActorAddr,
			Method: miner.Methods.ChangePeerID,
			Params: enc,
			Value:  types.NewInt(0),
		}

		_, err = m.FullNode.MpoolPushMessage(ctx, msg, nil)
		require.NoError(n.t, err)

		var mineBlock = make(chan lotusminer.MineReq)
		opts := []node.Option{
			node.StorageMiner(&m.StorageMiner),
			node.Online(),
			node.Repo(r),
			node.Test(),

			node.MockHost(n.mn),

			node.Override(new(v1api.FullNode), m.FullNode.FullNode),
			node.Override(new(*lotusminer.Miner), lotusminer.NewTestMiner(mineBlock, m.ActorAddr)),

			// disable resource filtering so that local worker gets assigned tasks
			// regardless of system pressure.
			node.Override(new(sectorstorage.SealerConfig), func() sectorstorage.SealerConfig {
				scfg := config.DefaultStorageMiner()
				scfg.Storage.ResourceFiltering = sectorstorage.ResourceFilteringDisabled
				return scfg.Storage
			}),
		}

		// append any node builder options.
		opts = append(opts, m.options.extraNodeOpts...)

		idAddr, err := address.IDFromAddress(m.ActorAddr)
		require.NoError(n.t, err)

		// preload preseals if the network still hasn't bootstrapped.
		var presealSectors []abi.SectorID
		if !n.bootstrapped {
			sectors := n.genesis.miners[i].Sectors
			for _, sector := range sectors {
				presealSectors = append(presealSectors, abi.SectorID{

It is used here; it wasn't being used in my PR, since I am not using the MockSectorMgr.
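For flavor, a usage sketch assembled from the Ensemble doc comment above; the import path and blocktime are assumptions, not necessarily what this PR ships:

```go
package kit_test

import (
	"testing"
	"time"

	"github.com/filecoin-project/lotus/itests/kit" // assumed path for the revamped kit
)

func TestEnsembleSketch(t *testing.T) {
	var (
		full  kit.TestFullNode
		miner kit.TestMiner
	)
	// One full node and one miner, connected, with mining started, following
	// the chainable style shown in the doc comment.
	kit.NewEnsemble(t).
		FullNode(&full).
		Miner(&miner, &full).
		Start().
		InterconnectAll().
		BeginMining(100 * time.Millisecond)
}
```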

raulk

comment created 3 hours ago

push event filecoin-project/lotus

Dirk McCormick

commit sha da789939b198c2391d1ca946406d52f43daac37c

fix: bump blocktime of TestQuotePriceForUnsealedRetrieval to 1 second


Anton Evangelatov

commit sha 9567807144efc55a4933a6526998882d5ba29fd0

Merge branch 'raulk/itests-refactor-kit' into nonsense/split-market-miner-processes


pushed 3 hours ago