Alex Fox (neondragon), Filecoin Slack: @NeonixAF

push event NeonixWS/filecoin-discover-dealer

Alex Fox

commit sha fec3f5f6281a62ad46706e3fbbd4bfaee1f702e0

Support IPv6 API_HOST address in fil-spid.bash

Alex Fox

commit sha 0fb00783ea3d167b277e6adf9fe75d31e722dd0e

Fix jq error 'string () and number (2) cannot be added' in fil-spid.bash

push time in 5 days

issue comment filecoin-project/lotus

Standardize per-customer pricing implementation with a JSON payload in the proposal rejection response

Making a deal - client uses existing query-ask system

  1. Client performs query-ask against storage provider.
  2. Storage provider returns their default verified/unverified prices and min/max sizes from set-ask as normal. The prices are high enough for the storage provider to accept them.
  3. Client proposes deal.

Making a deal - client uses magic PieceCID

  1. Client proposes deal with magic PieceCID, and all other deal parameters the same as the real deal they intend to propose.
  2. Storage provider invokes deal filter to return JSON response. Deal filter is able to return a price for deals that the regular 'query-ask' criteria would have rejected.
  3. Client parses JSON response and if it is happy, it sends the real deal proposal.
neondragon

comment created time in 17 days

issue comment filecoin-project/lotus

Standardize per-customer pricing implementation with a JSON payload in the proposal rejection response

@ribasushi Fantastic suggestion!

I think two things should happen:

1. Implement the 'just tell me stuff' magic API call

The lotus client should allow sending a deal proposal using this magic PieceCID without actual data existing.

When lotus receives a deal proposal with magic PieceCID "baga6ea4seaqfilecoinpricingrequestversiononexxxxxxxxxxxxxxxxqqqq" it:

  • Passes this full deal proposal to the deal filter script, if present.
  • The deal filter script is responsible for returning the JSON payload as the rejection message.
  • Lotus then rejects the deal itself if the deal filter script did not already reject it.

The JSON payload format is:

{
  "response_type": "deal_criteria",
  "version": "1",
  "criteria": {
    "price": 2000000,
    "validity": 1800,
    "x_nonstandard_key": "storage providers can send any extra non-standard data in the response by prefixing the custom key's name with 'x_'"
  }
}

Price in attoFIL, validity in seconds.
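
For illustration, a minimal deal filter along these lines could look like the sketch below (bash). The stdin format, the jq field paths, and the convention that a non-zero exit status rejects the deal with stdout used as the rejection message are assumptions for illustration, not a precise description of the lotus deal filter interface, and this is not the production script mentioned later in this comment.

#!/usr/bin/env bash
# Hypothetical deal filter sketch: when the proposal carries the magic pricing-request
# PieceCID, print the deal_criteria JSON and reject; otherwise accept.
# Assumed contract: lotus passes the deal proposal as JSON on stdin, exit 0 accepts,
# and a non-zero exit rejects with stdout returned to the client as the rejection reason.

MAGIC="baga6ea4seaqfilecoinpricingrequestversiononexxxxxxxxxxxxxxxxqqqq"

proposal="$(cat)"
piece_cid="$(echo "$proposal" | jq -r '.Proposal.PieceCID["/"]')"   # field path is an assumption
client="$(echo "$proposal" | jq -r '.Proposal.Client')"

if [ "$piece_cid" = "$MAGIC" ]; then
  # Look up the agreed price for $client however you like (flat file, database, ...).
  price=2000000   # attoFIL, example value only
  printf '{"response_type":"deal_criteria","version":"1","criteria":{"price":%s,"validity":1800}}\n' "$price"
  exit 1   # reject: the JSON above becomes the rejection message the client sees
fi

exit 0   # accept: a real filter would apply its normal business logic here

The important part is only that the rejection message is machine-parseable JSON rather than free text.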

2. Clients that don't understand this protocol should be able to query-ask a storage miner as usual and receive a usable response. We don't have to do this, but if we don't, the existing query-ask system will remain unusable. Currently, storage providers are setting their ask prices to 0. This breaks existing tools and is fragmenting the ecosystem. It is only necessary because lotus will reject deals priced lower than the ask price before sending them to the deal filter script.

go-fil-markets needs the price test to be moved to after the deal filter script is executed. If the deal filter script returns ACCEPT, then the price test should be skipped. ACCEPT should signal that the deal filter has performed all business logic and that go-fil-markets should only then reject the deal due to technical reasons.

I have been using this technique for about 8 months now. It works perfectly. It just needs to be made default behaviour in Lotus in a way that doesn't break existing installs (my current code will).

neondragon

comment created time in 17 days

issue comment filecoin-project/lotus

Standardize per-customer pricing implementation with a JSON payload in the proposal rejection response

I agree with @s0nik42 's idea to return just a price and validity, applicable for the type of deal proposed. That way, the logic that the deal filter uses to decide prices is irrelevant. The client just needs to know what the price is for a specific deal.

I suggested earlier that the special signal for a storage provider to return the JSON would be a deal proposal with price 0 FIL and piece size 0. However, we would need to specify a real piece size so that the deal filter script can be given the proposed size, and proposing a deal with price 0 FIL is valid on its own, so we can't use that as the signal.

Proposing a deal with price -1 attofil may be the best way to signal the storage provider to reject the proposal and return the JSON payload? That should be compatible with existing implementations (the proposal should just be rejected as invalid).

Lotus would need to be altered so that an incoming deal with a price of -1 attoFIL would be sent to the deal filter script. The deal filter script can return REJECT {"response_type":"deal_price", ...}.

neondragon

comment created time in 17 days

issue opened filecoin-project/lotus

Standardise per-customer pricing implementation with a JSON payload in the proposal rejection response

Checklist

  • [X] This is not a new feature or an enhancement to the Filecoin protocol. If it is, please open an FIP issue.
  • [X] This is not a new feature request. If it is, please file a feature request instead.
  • [X] This is not brainstorming ideas. If you have an idea you'd like to discuss, please open a new discussion on the lotus forum and select the category as Ideas.
  • [X] I have a specific, actionable, and well motivated improvement to propose.

Lotus component

  • [ ] lotus daemon - chain sync
  • [ ] lotus miner - mining and block production
  • [ ] lotus miner/worker - sealing
  • [ ] lotus miner - proving(WindowPoSt)
  • [X] lotus miner/market - storage deal
  • [X] lotus miner/market - retrieval deal
  • [ ] lotus miner/market - data transfer
  • [X] lotus client
  • [X] lotus JSON-RPC API
  • [ ] lotus message management (mpool)
  • [ ] Other

Improvement Suggestion

This is a suggested improvement to how storage providers present their storage prices to customers using Lotus.

User Story: Alice runs a successful Filecoin storage provider service. Alice would like to offer different prices to different customers for business reasons.

Existing implementation in Lotus: Storage providers set a single set of deal acceptance criteria that is applied to all customers, using lotus-miner storage-deals set-ask.

Problem: Lotus does not provide a way for a storage provider to offer a different price to different customers.

Storage providers work around this restriction by agreeing prices with customers outside of lotus. They then provide the customer specific pricing in non-standard ways e.g.:

  • Temporarily lowering their global ask price to allow a customer to send deals, and then raising their prices back to normal. Other customers might also send deals and will unintentionally receive the lower price.
  • Setting their ask price to 0, and fully relying on customers to manually specify the agreed prices in their deal proposals.

Both solutions break automation. A customer using the price a storage provider returns in their query-ask response may either pay more than they could have, or attempt to make a deal with a price of 0, which is rejected.

Improvement: A proposal to allow clients to access their custom deal acceptance criteria:

  1. A client using this convention does not use query-ask.
  2. The client proposes a deal with the storage provider for 0 FIL and for 0 bytes in size.
  3. The storage provider recognises this deal proposal as a special case, rejects the deal, and responds with a well-formed JSON payload describing the deal acceptance criteria for that specific client and wallet address.
  4. The client parses the standards-based part of the information and sends deal proposals if it accepts the criteria.

This implementation retains compatibility with existing tooling.

The JSON payload returned to the client could be a set of keys and values like this:

{
  "response_type": "deal_criteria",
  "version": "1",
  "criteria": {
    "price": 2000000,
    "x_nonstandard_key": "storage providers can send any extra non-standard data in the response by prefixing the custom key's name with 'x_'"
  }
}
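
To illustrate step 4, the client-side handling could be as small as a jq call over the rejection payload. How the client tooling surfaces the rejection message text, and the price ceiling used below, are assumptions for illustration only.

# Hypothetical client-side sketch: $rejection holds the JSON payload returned as the
# rejection reason of the 0 FIL / 0 byte probe deal.
rejection='{"response_type":"deal_criteria","version":"1","criteria":{"price":2000000}}'
max_price=5000000   # attoFIL, example ceiling this client is willing to pay

type="$(echo "$rejection" | jq -r '.response_type')"
price="$(echo "$rejection" | jq -r '.criteria.price')"

if [ "$type" = "deal_criteria" ] && [ "$price" -le "$max_price" ]; then
  echo "Provider quotes ${price} attoFIL; sending the real deal proposal."
  # e.g. lotus client deal <dataCid> <miner> <price> <duration>
fi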

I would like to ask the community what other data it may be useful to return in this payload, so that we can agree on the standard way to format it. Please comment in this thread with suggestions.

created time in 17 days

issue closed filecoin-project/lotus

Query-ask version 2 - standardize per-customer pricing (with new query-ask format)

Checklist

  • [X] This is not a new feature or an enhancement to the Filecoin protocol. If it is, please open an FIP issue.
  • [X] This is not a new feature request. If it is, please file a feature request instead.
  • [X] This is not brainstorming ideas. If you have an idea you'd like to discuss, please open a new discussion on the lotus forum and select the category as Ideas.
  • [X] I have a specific, actionable, and well motivated improvement to propose.

Lotus component

  • [ ] lotus daemon - chain sync
  • [ ] lotus miner - mining and block production
  • [ ] lotus miner/worker - sealing
  • [ ] lotus miner - proving(WindowPoSt)
  • [X] lotus miner/market - storage deal
  • [X] lotus miner/market - retrieval deal
  • [ ] lotus miner/market - data transfer
  • [X] lotus client
  • [X] lotus JSON-RPC API
  • [ ] lotus message management (mpool)
  • [ ] Other

Improvement Suggestion

This is a suggested improvement to how storage providers present their storage prices to customers using Lotus.

User Story: Alice runs a successful Filecoin storage provider service. Alice would like to offer different prices to different customers for business reasons.

Existing implementation in Lotus: Storage providers set a single set of deal acceptance criteria that is applied to all customers, using lotus-miner storage-deals set-ask.

Problem: Lotus does not provide a way for a storage provider to offer a different price to different customers.

Storage providers work around this restriction by agreeing prices with customers outside of lotus. They then provide the customer specific pricing in non-standard ways e.g.:

  • Temporarily lowering their global ask price to allow a customer to send deals, and then raising their prices back to normal. Other customers might also send deals and will unintentionally receive the lower price.
  • Setting their ask price to 0, and fully relying on customers to manually specify the agreed prices in their deal proposals.

Both solutions break automation. A customer using the price a storage provider returns in their query-ask response may either pay more than they could have, or attempt to make a deal with a price of 0, which is rejected.

Improvement: My proposal for query-ask version 2:

  1. Send the default wallet address in a query-ask request, or allow it to be specified: lotus client query-ask --from <walletAddress>
  2. Sign the query-ask request with the provided wallet address, to prove the client has ownership of the address
  3. On the storage provider, by default, respond with the acceptance criteria set in lotus-miner storage-deals set-ask.
  4. Add an 'ask filter' script option to the storage provider. If an ask filter is present, first verify the query-ask message signature. Then, run the ask filter and provide to standard input a JSON representation of the query-ask parameters including the requesting wallet address. The ask filter script should return a price, minimum deal size, and maximum deal size. That data is sent back to the client as the query-ask response.
  5. When a deal is received, call the ask filter with the deal's wallet address to retrieve the current deal acceptance criteria, and make the decision to accept or reject. If ask filter not present, use the default lotus criteria from set-ask.
  6. Add extra metadata key-value pairs such as: storage provider name, storage provider geolocation, storage provider jurisdiction, website URL, a URL link to terms of service. I welcome discussion from people in this thread as to what metadata would be useful to include.
  7. Additional arbitrary key-value pairs should be accepted from the ask filter script and passed to the client in the query-ask response. These could be used by the community to communicate custom data to clients that could more easily allow development of new software for the ecosystem.

closed time in 17 days

neondragon

issue comment filecoin-project/lotus

Query-ask version 2 - standardize per-customer pricing (with new query-ask format)

I changed my mind and am going to write a new suggestion based on providing deal acceptance criteria as well-formed JSON in the deal rejection reason payload.

neondragon

comment created time in 17 days

issue opened filecoin-project/lotus

Query-ask version 2 - standardize per-customer pricing

Checklist

  • [X] This is not a new feature or an enhancement to the Filecoin protocol. If it is, please open an FIP issue.
  • [X] This is not a new feature request. If it is, please file a feature request instead.
  • [X] This is not brainstorming ideas. If you have an idea you'd like to discuss, please open a new discussion on the lotus forum and select the category as Ideas.
  • [X] I have a specific, actionable, and well motivated improvement to propose.

Lotus component

  • [ ] lotus daemon - chain sync
  • [ ] lotus miner - mining and block production
  • [ ] lotus miner/worker - sealing
  • [ ] lotus miner - proving(WindowPoSt)
  • [X] lotus miner/market - storage deal
  • [X] lotus miner/market - retrieval deal
  • [ ] lotus miner/market - data transfer
  • [X] lotus client
  • [X] lotus JSON-RPC API
  • [ ] lotus message management (mpool)
  • [ ] Other

Improvement Suggestion

This is a suggested improvement to how storage providers present their storage prices to customers using Lotus.

User Story: Alice runs a successful Filecoin storage provider service. Alice would like to offer different prices to different customers for business reasons.

Existing implementation in Lotus: Storage providers set a single set of deal acceptance criteria that is applied to all customers, using lotus-miner storage-deals set-ask.

Problem: Lotus does not provide a way for a storage provider to offer a different price to different customers.

Storage providers work around this restriction by agreeing prices with customers outside of lotus. They then provide the customer specific pricing in non-standard ways e.g.:

  • Temporarily lowering their global ask price to allow a customer to send deals, and then raising their prices back to normal. Other customers might also send deals and will unintentionally receive the lower price.
  • Setting their ask price to 0, and fully relying on customers to manually specify the agreed prices in their deal proposals.

Both solutions break automation. A customer using the price a storage provider returns in their query-ask response may either pay more than they could have, or attempt to make a deal with a price of 0, which is rejected.

Improvement: My proposal for query-ask version 2:

  1. Send the default wallet address in a query-ask request, or allow it to be specified: lotus client query-ask --from <walletAddress>
  2. Sign the query-ask request with the provided wallet address, to prove the client has ownership of the address
  3. On the storage provider, by default, respond with the acceptance criteria set in lotus-miner storage-deals set-ask.
  4. Add an 'ask filter' script option to the storage provider. If an ask filter is present, first verify the query-ask message signature. Then, run the ask filter and provide to standard input a JSON representation of the query-ask parameters including the requesting wallet address. The ask filter script should return a price, minimum deal size, and maximum deal size. That data is sent back to the client as the query-ask response.
  5. When a deal is received, call the ask filter with the deal's wallet address to retrieve the current deal acceptance criteria, and make the decision to accept or reject. If ask filter not present, use the default lotus criteria from set-ask.
  6. Add extra metadata key-value pairs such as: storage provider name, storage provider geolocation, storage provider jurisdiction, website URL, a URL link to terms of service. I welcome discussion from people in this thread as to what metadata would be useful to include.
  7. Additional arbitrary key-value pairs should be accepted from the ask filter script and passed to the client in the query-ask response. These could be used by the community to communicate custom data to clients that could more easily allow development of new software for the ecosystem.
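
As a rough sketch of the ask filter described in step 4 (bash): the stdin format, the field names, and the response shape below are assumptions for discussion, not an existing lotus interface.

#!/usr/bin/env bash
# Hypothetical ask filter: read the query-ask parameters as JSON on stdin and print
# the acceptance criteria for the requesting wallet as JSON on stdout.

request="$(cat)"
wallet="$(echo "$request" | jq -r '.Client')"   # field name is an assumption

case "$wallet" in
  f1exampleclient*)          # placeholder for a customer with an agreed discount
    price=1000000 ;;
  *)
    price=2000000 ;;         # default ask price in attoFIL (example values)
esac

# Example sizes: 1 MiB minimum, 32 GiB maximum.
jq -n --argjson price "$price" \
  '{Price: $price, MinPieceSize: 1048576, MaxPieceSize: 34359738368}'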

created time in 17 days

issue comment filecoin-project/lotus

Retry AddPiece if it fails

About 20% of my deals result in AddPieceFailed. I haven't narrowed down the cause. It doesn't happen to CC sectors; it just seems to happen while transferring piece data to the AP worker over the network. It seems to happen less when the lotus-miner markets process is less busy, but it still happens, and it causes the deal to move into an Error state when it should be trivial to retry AddPiece instead.

Example sector log

SectorID: 466 Status: Removed CIDcommD: <nil> CIDcommR: <nil> Ticket: TicketH: 0 Seed: SeedH: 0 Precommit: <nil> Commit: <nil> Proof: Deals: [] Retries: 0

Event Log:
0.  2021-09-03 18:25:00 +0000 UTC:  [event;sealing.SectorStart]  {"User":{"ID":466,"SectorType":9}}
1.  2021-09-03 18:25:00 +0000 UTC:  [event;sealing.SectorAddPiece]  {"User":{}}
2.  2021-09-03 18:43:28 +0000 UTC:  [event;sealing.SectorAddPiece]  {"User":{}}
3.  2021-09-03 18:43:28 +0000 UTC:  [event;sealing.SectorAddPieceFailed]  {"User":{}}
    writing piece: storage call error 0: pr read error: read tcp 10.5.12.34:3456->185.37.2.3:57504: read: connection timed out
4.  2021-09-04 01:10:13 +0000 UTC:  [event;sealing.SectorRemove]  {"User":{}}
5.  2021-09-04 01:10:14 +0000 UTC:  [event;sealing.SectorRemoved]  {"User":{}}
    
jennijuju

comment created time in 21 days

issue comment filecoin-project/lotus

WARN sectors storage-sealing/states_failed.go:357 piece 3 (of 8) of sector 16366 refers deal 2343888 with wrong PieceCID: baga6ea4seaqmzq6acl23axubdiv37xipnaz3qqtvwr57ekoaauviescphqnfwpi != baga6ea4seaqkloqjjw5nkhz5azw4g4bz5ylcshoh5ewbx6ex4tconddpmyttcda

{"level":"info","ts":"2021-08-31T12:39:39.849+0000","logger":"filcrypto::proofs::api","caller":"src/proofs/api.rs:987","msg":"generate_winning_post: finish"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x13b137d]

goroutine 1106 [running]:
github.com/filecoin-project/lotus/extern/storage-sealing.(*Sealing).HandleRecoverDealIDs(0xc0006c3080, 0x377cb60, 0xc019559698, 0xc011f840d0, 0xe, 0x1a3, 0x9, 0x61262f3d, 0xc00eebeb40, 0x2, ...)
    /root/lotus/extern/storage-sealing/states_failed.go:397 +0x65d
github.com/filecoin-project/lotus/extern/storage-sealing.(*Sealing).plan.func2.1(0x37976d0, 0xc0000528f0, 0xc000c80730, 0xc011f840d0, 0xe, 0x1a3, 0x9, 0x61262f3d, 0xc00eebeb40, 0x2, ...)
    /root/lotus/extern/storage-sealing/fsm.go:357 +0xd5
github.com/filecoin-project/lotus/extern/storage-sealing.(*Sealing).Plan.func1(0x37976d0, 0xc0000528f0, 0xc000c80730, 0xc011f840d0, 0xe, 0x1a3, 0x9, 0x61262f3d, 0xc00eebeb40, 0x2, ...)
    /root/lotus/extern/storage-sealing/fsm.go:26 +0x95
reflect.Value.call(0x2e98ce0, 0xc000c80720, 0x13, 0x3279185, 0x4, 0xc000630f30, 0x2, 0x2, 0x54723f, 0x2fcfc00, ...)
    /usr/local/go/src/reflect/value.go:476 +0x8e7
reflect.Value.Call(0x2e98ce0, 0xc000c80720, 0x13, 0xc000cc4730, 0x2, 0x2, 0x0, 0x0, 0x0)
    /usr/local/go/src/reflect/value.go:337 +0xb9
github.com/filecoin-project/go-statemachine.(*StateMachine).run.func3(0xc00eeece88, 0xc00ee9f650, 0x37976d0, 0xc0000528f0, 0xc000c80730, 0xc00ee9f660, 0xc0155d0300)
    /root/go/pkg/mod/github.com/filecoin-project/go-statemachine@v1.0.1/machine.go:108 +0x3cc
created by github.com/filecoin-project/go-statemachine.(*StateMachine).run
    /root/go/pkg/mod/github.com/filecoin-project/go-statemachine@v1.0.1/machine.go:103 +0x451

cryptowhizzard

comment created time in 25 days

issue comment filecoin-project/lotus

AP (Add Piece) multi core performance issues / not always desirable

The scheduler also schedules one AP task per available CPU core on a worker, a left-over behaviour from when AP was single-threaded: it will schedule e.g. 16 multi-core AP tasks on a 16-CPU worker, and up to 64 parallel AP tasks on a 64-core worker.

As running parallel AP appears to decrease overall throughput, I think we either need performance improvements to AP so that it scales linearly, or a way to limit the maximum number of parallel AP tasks per worker.

benjaminh83

comment created time in a month

issue comment filecoin-project/lotus

Lotus Market node backup does not work

It works with MINER_API_INFO set to the API info string for the markets process, so that the lotus-miner client command runs the backup API call on the markets node. --call-on-markets doesn't appear to have the same result.
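
For reference, the workaround looks roughly like this; the token, multiaddr, and backup path are placeholders, and the exact invocation may differ per setup.

# Point the CLI at the markets process rather than the mining/sealing process,
# then run the backup against it.
export MINER_API_INFO="<markets-api-token>:/ip4/127.0.0.1/tcp/2345/http"
lotus-miner backup /srv/backups/markets-backup.cbor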

stuberman

comment created time in a month

issue comment filecoin-project/lotus

Improve/Fix PreCommit Ticket Logic

Magik6k 1 hour ago But we could always be getting fresh tickets for sectors which aren't precomitted yet

Magik6k 1 hour ago Which should fix most of issues like this

neondragon

comment created time in a month

issue opened filecoin-project/lotus

Improve PreCommit Ticket Logic

Checklist

  • [X] This is not a security-related bug/issue. If it is, please follow the security policy.
  • [X] This is not a question or a support request. If you have any lotus related questions, please ask in the lotus forum.
  • [X] This is not a new feature request. If it is, please file a feature request instead.
  • [X] This is not an enhancement request. If it is, please file a improvement suggestion instead.
  • [X] I have searched on the issue tracker and the lotus forum, and there is no existing related issue or discussion.
  • [X] I am running the latest release, or the most recent RC (release candidate) for the upcoming release, or the dev branch (master), or have an issue updating to any of these.
  • [X] I did not make any code changes to lotus.

Lotus component

  • [ ] lotus daemon - chain sync
  • [ ] lotus miner - mining and block production
  • [X] lotus miner/worker - sealing
  • [ ] lotus miner - proving(WindowPoSt)
  • [ ] lotus miner/market - storage deal
  • [ ] lotus miner/market - retrieval deal
  • [ ] lotus miner/market - data transfer
  • [ ] lotus client
  • [ ] lotus JSON-RPC API
  • [ ] lotus message management (mpool)
  • [ ] Other

Lotus Version

Daemon:  1.11.1-m1.3.5+mainnet+git.7be207bc5.dirty+api1.2.0
Local: lotus-miner version 1.11.1-m1.3.5+mainnet+git.7be207bc5.dirty

Describe the Bug

Problem: Lotus currently wastes time computing PC1 and PC2 on sectors with an expired ticket or a ticket that will shortly expire.

When I have more sectors in state PreCommit1 than I can seal simultaneously (most recently due to receiving a high volume of deals from Estuary), those sectors have to wait. The wait can mean that by the time lotus does start the PC1 task for a sector, its ticket has expired or is about to expire. I have noticed that lotus will fill the sealing pipeline with such PC1 tasks that are never going to pass PC2, and while it is computing these pointless tasks, the tickets in other PreCommit1 sectors age and expire. Lotus then starts PC1 on those sectors even though they are not going to succeed either.

I consider this to be a bug rather than an enhancement request because if more new PreCommit1 sectors are continuously created due to new deals, lotus just keeps starting PC1 tasks on the newer PreCommit1 sectors and the deals in the earlier PreCommit1 sectors eventually pass StartEpoch.

Suggestions: Possible ways to improve this:

  • When a sector is packed, don't immediately get a new ticket, because the PC1 task may not be invoked immediately. Get a ticket just-in-time before the PC1 task is actually scheduled, to minimise the chance that the sector doesn't complete PC1 and PC2 in time.
  • Do not start PC1/PC2 on a sector with an already expired ticket.
  • Abort running PC1/PC2 tasks if the sector's ticket expires. Letting them continue is a waste of resources and can delay other sectors and compound the problem.
  • If sector precommit is not on chain and PreCommit1 is being re-run, get a fresh ticket (so we don't use one with 2 hours time remaining that will never work).

Logging Information

Event log of a sector that pointlessly started PC1 when its ticket had already expired, and is now waiting for Lotus to complete other PC1 tasks before trying again, with its deals getting increasingly stale.

Event Log:
0.	2021-08-16 21:02:01 +0000 UTC:	[event;sealing.SectorStart]	{"User":{"ID":335,"SectorType":9}}
1.	2021-08-16 21:02:01 +0000 UTC:	[event;sealing.SectorAddPiece]	{"User":{}}
2.	2021-08-16 21:05:34 +0000 UTC:	[event;sealing.SectorAddPiece]	{"User":{}}
3.	2021-08-16 21:05:34 +0000 UTC:	[event;sealing.SectorAddPiece]	{"User":{}}
4.	2021-08-16 21:05:34 +0000 UTC:	[event;sealing.SectorAddPiece]	{"User":{}}
5.	2021-08-16 21:05:34 +0000 UTC:	[event;sealing.SectorPieceAdded]	{"User":{"NewPieces":[{"Piece":{"Size":17179869184,"PieceCID":{"/":"baga6ea4seaqkflajhmtag3seipdq7zi3xzywysi66aahjdyvrkqzut3f3t4vooa"}},"DealInfo":{"PublishCid":{"/":"bafy2bzacea5qy3lw65r4jdecectkrcm2ch6wpjg7p3yp5vqsw352fszi2haf6"},"DealID":2285388,"DealProposal":{"PieceCID":{"/":"baga6ea4seaqkflajhmtag3seipdq7zi3xzywysi66aahjdyvrkqzut3f3t4vooa"},"PieceSize":17179869184,"VerifiedDeal":true,"Client":"f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a","Provider":"f0694396","Label":"QmWFoHAd9tgSYCHi3CzHyRPDNPTJY1upUV8GQqWxWhibRS","StartEpoch":1048051,"EndEpoch":2542771,"StoragePricePerEpoch":"0","ProviderCollateral":"3130900769749360","ClientCollateral":"0"},"DealSchedule":{"StartEpoch":1048051,"EndEpoch":2542771},"KeepUnsealed":true}}]}}
6.	2021-08-16 21:05:34 +0000 UTC:	[event;sealing.SectorAddPiece]	{"User":{}}
7.	2021-08-16 21:12:39 +0000 UTC:	[event;sealing.SectorPieceAdded]	{"User":{"NewPieces":[{"Piece":{"Size":17179869184,"PieceCID":{"/":"baga6ea4seaqdialqfvotmwt4z63bkwdof65izp7tqaax25domlm3kuwczsluqcq"}},"DealInfo":{"PublishCid":{"/":"bafy2bzacea5qy3lw65r4jdecectkrcm2ch6wpjg7p3yp5vqsw352fszi2haf6"},"DealID":2285384,"DealProposal":{"PieceCID":{"/":"baga6ea4seaqdialqfvotmwt4z63bkwdof65izp7tqaax25domlm3kuwczsluqcq"},"PieceSize":17179869184,"VerifiedDeal":true,"Client":"f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a","Provider":"f0694396","Label":"QmSzEzsaTCuA1XbZCZdUvKAfruzHjmgdFvGZVdxzLR58pt","StartEpoch":1047982,"EndEpoch":2542702,"StoragePricePerEpoch":"0","ProviderCollateral":"3131116738456543","ClientCollateral":"0"},"DealSchedule":{"StartEpoch":1047982,"EndEpoch":2542702},"KeepUnsealed":true}},{"Piece":{"Size":17179869184,"PieceCID":{"/":"baga6ea4seaqmxp55p5te2wyymio2vk7nkjxfbgm2fu6aqhcr24h3txspylbcsca"}},"DealInfo":{"PublishCid":{"/":"bafy2bzacea5qy3lw65r4jdecectkrcm2ch6wpjg7p3yp5vqsw352fszi2haf6"},"DealID":2285387,"DealProposal":{"PieceCID":{"/":"baga6ea4seaqmxp55p5te2wyymio2vk7nkjxfbgm2fu6aqhcr24h3txspylbcsca"},"PieceSize":17179869184,"VerifiedDeal":true,"Client":"f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a","Provider":"f0694396","Label":"Qmc4BNbHzsmhm5MqWeDpuxo9eQHY7hcPALctuRopDgnZTM","StartEpoch":1048037,"EndEpoch":2542757,"StoragePricePerEpoch":"0","ProviderCollateral":"3130903266809760","ClientCollateral":"0"},"DealSchedule":{"StartEpoch":1048037,"EndEpoch":2542757},"KeepUnsealed":true}},{"Piece":{"Size":17179869184,"PieceCID":{"/":"baga6ea4seaqaafwgm4g5duzggkddn4igwet3n3coamvzoywfrcooxyadlxujonq"}},"DealInfo":{"PublishCid":{"/":"bafy2bzacea5qy3lw65r4jdecectkrcm2ch6wpjg7p3yp5vqsw352fszi2haf6"},"DealID":2285386,"DealProposal":{"PieceCID":{"/":"baga6ea4seaqaafwgm4g5duzggkddn4igwet3n3coamvzoywfrcooxyadlxujonq"},"PieceSize":17179869184,"VerifiedDeal":true,"Client":"f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a","Provider":"f0694396","Label":"QmR9tj9d2NFVsq7VZNqENMkBXeA7LyoYLxqfbCgyXExUV3","StartEpoch":1048036,"EndEpoch":2542756,"StoragePricePerEpoch":"0","ProviderCollateral":"3130902205364065","ClientCollateral":"0"},"DealSchedule":{"StartEpoch":1048036,"EndEpoch":2542756},"KeepUnsealed":true}}]}}
8.	2021-08-16 21:12:39 +0000 UTC:	[event;sealing.SectorStartPacking]	{"User":{}}
9.	2021-08-16 21:12:39 +0000 UTC:	[event;sealing.SectorPacked]	{"User":{"FillerPieces":null}}
10.	2021-08-16 21:12:39 +0000 UTC:	[event;sealing.SectorTicket]	{"User":{"TicketValue":"xYIZxyR2AkvAPKPMXdKxSIXXEMMg1TA4WycH/iffXI8=","TicketEpoch":1027165}}
11.	2021-08-17 22:27:03 +0000 UTC:	[event;sealing.SectorPreCommit1]	{"User":{"PreCommit1Out":"eyJyZWdpc3RlcmVkX3Byb29mIjoiU3RhY2tlZERyZzY0R2lCVjFfMSIsImxhYmVscyI6eyJTdGFja2VkRHJnNjRHaUJWMSI6eyJsYWJlbHMiOlt7InBhdGgiOiIvc3J2L2xvdHVzL3NlYWw0XzEvc3RvcmFnZS02NC9jYWNoZS9zLXQwNjk0Mzk2LTMzNSIsImlkIjoibGF5ZXItMSIsInNpemUiOjIxNDc0ODM2NDgsInJvd3NfdG9fZGlzY2FyZCI6N30seyJwYXRoIjoiL3Nydi9sb3R1cy9zZWFsNF8xL3N0b3JhZ2UtNjQvY2FjaGUvcy10MDY5NDM5Ni0zMzUiLCJpZCI6ImxheWVyLTIiLCJzaXplIjoyMTQ3NDgzNjQ4LCJyb3dzX3RvX2Rpc2NhcmQiOjd9LHsicGF0aCI6Ii9zcnYvbG90dXMvc2VhbDRfMS9zdG9yYWdlLTY0L2NhY2hlL3MtdDA2OTQzOTYtMzM1IiwiaWQiOiJsYXllci0zIiwic2l6ZSI6MjE0NzQ4MzY0OCwicm93c190b19kaXNjYXJkIjo3fSx7InBhdGgiOiIvc3J2L2xvdHVzL3NlYWw0XzEvc3RvcmFnZS02NC9jYWNoZS9zLXQwNjk0Mzk2LTMzNSIsImlkIjoibGF5ZXItNCIsInNpemUiOjIxNDc0ODM2NDgsInJvd3NfdG9fZGlzY2FyZCI6N30seyJwYXRoIjoiL3Nydi9sb3R1cy9zZWFsNF8xL3N0b3JhZ2UtNjQvY2FjaGUvcy10MDY5NDM5Ni0zMzUiLCJpZCI6ImxheWVyLTUiLCJzaXplIjoyMTQ3NDgzNjQ4LCJyb3dzX3RvX2Rpc2NhcmQiOjd9LHsicGF0aCI6Ii9zcnYvbG90dXMvc2VhbDRfMS9zdG9yYWdlLTY0L2NhY2hlL3MtdDA2OTQzOTYtMzM1IiwiaWQiOiJsYXllci02Iiwic2l6ZSI6MjE0NzQ4MzY0OCwicm93c190b19kaXNjYXJkIjo3fSx7InBhdGgiOiIvc3J2L2xvdHVzL3NlYWw0XzEvc3RvcmFnZS02NC9jYWNoZS9zLXQwNjk0Mzk2LTMzNSIsImlkIjoibGF5ZXItNyIsInNpemUiOjIxNDc0ODM2NDgsInJvd3NfdG9fZGlzY2FyZCI6N30seyJwYXRoIjoiL3Nydi9sb3R1cy9zZWFsNF8xL3N0b3JhZ2UtNjQvY2FjaGUvcy10MDY5NDM5Ni0zMzUiLCJpZCI6ImxheWVyLTgiLCJzaXplIjoyMTQ3NDgzNjQ4LCJyb3dzX3RvX2Rpc2NhcmQiOjd9LHsicGF0aCI6Ii9zcnYvbG90dXMvc2VhbDRfMS9zdG9yYWdlLTY0L2NhY2hlL3MtdDA2OTQzOTYtMzM1IiwiaWQiOiJsYXllci05Iiwic2l6ZSI6MjE0NzQ4MzY0OCwicm93c190b19kaXNjYXJkIjo3fSx7InBhdGgiOiIvc3J2L2xvdHVzL3NlYWw0XzEvc3RvcmFnZS02NC9jYWNoZS9zLXQwNjk0Mzk2LTMzNSIsImlkIjoibGF5ZXItMTAiLCJzaXplIjoyMTQ3NDgzNjQ4LCJyb3dzX3RvX2Rpc2NhcmQiOjd9LHsicGF0aCI6Ii9zcnYvbG90dXMvc2VhbDRfMS9zdG9yYWdlLTY0L2NhY2hlL3MtdDA2OTQzOTYtMzM1IiwiaWQiOiJsYXllci0xMSIsInNpemUiOjIxNDc0ODM2NDgsInJvd3NfdG9fZGlzY2FyZCI6N31dLCJfaCI6bnVsbH19LCJjb25maWciOnsicGF0aCI6Ii9zcnYvbG90dXMvc2VhbDRfMS9zdG9yYWdlLTY0L2NhY2hlL3MtdDA2OTQzOTYtMzM1IiwiaWQiOiJ0cmVlLWQiLCJzaXplIjo0Mjk0OTY3Mjk1LCJyb3dzX3RvX2Rpc2NhcmQiOjd9LCJjb21tX2QiOls4NCw5NCwyMTUsMTE1LDU4LDE1NywxNTksMjA0LDI1NCwxNTgsMzIsNzUsMjU0LDE3NiwxOTQsNTgsMzcsNTAsMTAyLDM5LDcwLDIxMyw3Miw4MCw1NiwwLDE5MiwxMzksOCwyMjAsMjUsMzVdfQ=="}}
12.	2021-08-17 23:21:51 +0000 UTC:	[event;sealing.SectorPreCommit2]	{"User":{"Sealed":{"/":"bagboea4b5abcbsjhcx6r3tzvckvi5ykq3ghtr6j6sxrlczlwjrfmjg7owvuz6t2f"},"Unsealed":{"/":"baga6ea4seaqfixwxom5j3h6m72pcas76wdbdujjsmytunvkika4abqelbdobsiy"}}}
13.	2021-08-17 23:21:51 +0000 UTC:	[event;sealing.SectorSealPreCommit1Failed]	{"User":{}}
	ticket expired: ticket expired: seal height: 1028065, head: 1031203
14.	2021-08-17 23:22:51 +0000 UTC:	[event;sealing.SectorRetrySealPreCommit1]	{"User":{}}
15.	2021-08-17 23:22:51 +0000 UTC:	[event;sealing.SectorOldTicket]	{"User":{}}
16.	2021-08-17 23:22:51 +0000 UTC:	[event;sealing.SectorTicket]	{"User":{"TicketValue":"37XN4xGMyzjUP8TJufktBKgzNndla7+O3jb/JIp3jSM=","TicketEpoch":1030305}}

Repo Steps

  1. Run '...'
  2. Do '...'
  3. See error '...' ...

created time in a month

issue comment filecoin-project/lotus

[BUG] Piece Cid mismatch between deal proposal in Actor state and deal Piece added to Sector

7 sectors and 63 deals stuck with RecoverDealIDs on f019551 (1.11.1-m1.3.5+mainnet+git.7be207bc5.dirty+api1.2.0).

https://github.com/filecoin-project/lotus/pull/7117

aarshkshah1992

comment created time in a month

pull request comment filecoin-project/lotus

sealing: Fix RecoverDealIDs loop with changed PieceCID

"rather unlikely edge-case" -- I have 7 sectors and 63 deals stuck with RecoverDealIDs on f019551.

magik6k

comment created time in a month

issue comment filecoin-project/lotus

Command for removing expired sectors

Related: Terminated sectors are also hard to locate and remove.

benjaminh83

comment created time in a month

issue comment filecoin-project/lotus

Allow manual retry of deal publishing

+1 It would be great if Lotus could better differentiate between recoverable and unrecoverable storage deal errors. Currently when a PublishStorageDeals message fails for any reason, all deals published in the message are destroyed by Lotus, even when only one deal in the message has caused an error (such as insufficient DataCap). Deals and data are lost unnecessarily.

Unrecoverable errors -> StorageDealFailing -> StorageDealError
Potentially recoverable publishing errors -> perhaps transition to StorageDealPublishFailed, and provide 'storage-deals recover' and 'storage-deals cancel' commands to allow the operator to push the deal back into the publish queue.

Automatically re-queueing and retrying publishing deals would be even better, but only where there is no chance of a loop of failing messages.

f8-ptrk

comment created time in a month

issue comment filecoin-project/lotus

Occasional StorageDealError due to 'AddPiece failed: piece [...] assigned to sector [...] with not enough space'

The markets process is split out, and runs on the same machine as the main miner.

lotus-miner [Mining Sealing SectorStorage]

  • On machine: gb-wlv1-filecoin2 (512G RAM, 6.4G SSD)
  • storage.json
{
  "StoragePaths": [
    {
      "Path": "/srv/cephfs/hoststorage/gb-wlv1-filecoin1/store1"
    }
  ]
}

lotus-miner [Markets]

  • On machine: gb-wlv1-filecoin2 (512G RAM, 6.4G SSD)
  • storage.json
{
  "StoragePaths": [
    {
      "Path": "/srv/cephfs/hoststorage/gb-wlv1-filecoin1/store1"
    }
  ]
}
  • /srv/cephfs/hoststorage/gb-wlv1-filecoin1/store1/sectorstore.json
{
  "ID": "2aaec1c2-9ad6-4563-8f59-0ac57179649e",
  "Weight": 10,
  "CanSeal": false,
  "CanStore": true
}

lotus-worker [AddPiece]

  • On machine: gb-wlv1-filecoin-seal4 (19T NVME)
  • storage.json

{
  "StoragePaths": [
    {
      "Path": "/srv/lotus/seal4_1/storage"
    }
  ]
}
  • /srv/lotus/seal4_1/storage/sectorstore.json
{
  "ID": "fe33a74a-ca27-467b-b956-31b15259fe64",
  "Weight": 10,
  "CanSeal": true,
  "CanStore": false
}
neondragon

comment created time in a month

issue opened filecoin-project/lotus

Occasional StorageDealError due to 'AddPiece failed: piece [...] assigned to sector [...] with not enough space'

Checklist

  • [X] This is not a question or a support request. If you have any lotus related questions, please ask in the lotus forum.
  • [X] I am reporting a bug w.r.t one of the M1 tags. If not, choose another issue option here.
  • [X] I am reporting a bug around deal making. If not, create a M1 Bug Report For Non Deal Making Issue.
  • [X] I have my log level set as instructed here and have logs available for troubleshooting.
  • [X] The deal is coming from one of the M1 clients (communicated in the coordination slack channel).
  • [X] I have searched on the issue tracker and the lotus forum, and there is no existing related issue or discussion.

Lotus Component

lotus miner market subsystem - storage deal

Lotus Tag and Version

# lotus-miner version
Daemon:  1.11.1-m1.3.5+mainnet+git.7be207bc5.dirty+api1.2.0
Local: lotus-miner version 1.11.1-m1.3.5+mainnet+git.7be207bc5.dirty
# lotus version
Daemon:  1.11.1-m1.3.4+mainnet+git.ccf7f9a2f.dirty+api1.3.1
Local: lotus version 1.11.1-m1.3.5+mainnet+git.7be207bc5.dirty

Describe the Bug

Problem: Some deals on my miner f019551 have transitioned to StorageDealError with an error such as:

handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzacedwtizyuzhollaotqtz532lqgz7nzmt4khb6etxav2e7n4onhhbb2 assigned to sector 3846 with not enough space

Expected behaviour: Sectors should not be assigned more deal data than they can store.

Actual behaviour: First occurrence of this error was at 2021-08-14 22:25:12, and it has happened 11 times in total.

Deal Status

# lotus-miner storage-deals list -v | grep "not enough space"

Aug 14 22:25:12  true   bafyreickl3ssoofl4jshlsz6cfymnxcmtpabtq3gujicnviplrhazbt2t4  2275606  StorageDealError                         f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a  512MiB  0 FIL             1494720  12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooWKAd5C78zMyqbaMCm7Pt9CMyAy6eoJNzedadUuiDBkfhY-1628966926747882413  handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzacecog23hk5faixunulnkayw7qsufizt6usr7ed24jysviroafe4ezq assigned to sector 3846 with not enough space
Aug 14 22:25:35  true   bafyreihqsikaxufxhew24lio5epennotczdltiytq34yyedteobucau4su  2275640  StorageDealError                         f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a  1GiB    0 FIL             1494720  12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooWKAd5C78zMyqbaMCm7Pt9CMyAy6eoJNzedadUuiDBkfhY-1628966926747882555  handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzacebr6h2bxknqih2p5ioxtjy3cr6xcdtm544l3df433l4ix7ljllv6g assigned to sector 3846 with not enough space
Aug 14 22:25:38  true   bafyreidvm5kpdunfwml7pp5xn4jfv6quwje2d2fu2nmgdj2cjq64kew67i  2275642  StorageDealError                         f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a  512MiB  0 FIL             1494720  12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooWKAd5C78zMyqbaMCm7Pt9CMyAy6eoJNzedadUuiDBkfhY-1628966926747882556  handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzacedhuheqhtxlr5qongkbmirsqdfwuumslqmwkrxbgv2docgknnbfb2 assigned to sector 3846 with not enough space
Aug 14 22:29:20  true   bafyreiepey5htrtwsbglvbq5gzj3qpqsdyk75ykrpxxvlebnnsk4mp5nqu  2275620  StorageDealError                         f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a  512MiB  0 FIL             1494720  12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooWKAd5C78zMyqbaMCm7Pt9CMyAy6eoJNzedadUuiDBkfhY-1628966926747882654  handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzacea3p6bvogtdzdgefkru6r7lb7nz4hnvvounlkzqlo5nxutrm47f7k assigned to sector 3846 with not enough space
Aug 14 22:31:57  true   bafyreiergadea5vts6jkhhwlf4ywbvmidijj7iewplvw7sgvs43ftja3vq  2275611  StorageDealError                         f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a  512MiB  0 FIL             1494720  12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooWKAd5C78zMyqbaMCm7Pt9CMyAy6eoJNzedadUuiDBkfhY-1628966926747882778  handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzacedwtizyuzhollaotqtz532lqgz7nzmt4khb6etxav2e7n4onhhbb2 assigned to sector 3846 with not enough space
Aug 14 23:41:22  true   bafyreidcfuf5rvkz6a423jwxj5ri4pmdsqbclu7jfmueb22xxyix2qckee  2275651  StorageDealError                         f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a  512MiB  0 FIL             1494720  12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooWKAd5C78zMyqbaMCm7Pt9CMyAy6eoJNzedadUuiDBkfhY-1628966926747882919  handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzacec44ltgoh7mahryrl3556gagslkgzz5ubgyvzbeyxwluh5ge7mxug assigned to sector 3846 with not enough space
Aug 15 03:55:07  true   bafyreib4m7tnalkbxleeowukukzrkinzwajupqveantv4bew2eup37ks6m  2276828  StorageDealError                         f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a  512MiB  0 FIL             1494720  12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooWKAd5C78zMyqbaMCm7Pt9CMyAy6eoJNzedadUuiDBkfhY-1628966926747885097  handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzaceanvmejy7j3qh34wcrqtrc7uzsmvehqf7xvp2hns4jhde6r45cbbk assigned to sector 3850 with not enough space
Aug 15 04:00:39  true   bafyreicazgtyuwxqkvwzjeztnbfvziwdk5zku7pwdpr5zoqxt5ub5diyiy  2276831  StorageDealError                         f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a  2GiB    0 FIL             1494720  12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooWKAd5C78zMyqbaMCm7Pt9CMyAy6eoJNzedadUuiDBkfhY-1628966926747885118  handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzaceaw3j7vnsevm6m4e353p5w7yh5j7wx5c3dzmh2eybdddnkhajph6g assigned to sector 3850 with not enough space
Aug 15 04:00:58  true   bafyreihxbqsor2nbqpeo3bjvsuxuopqowssxqb6t7j2a354qyvvnq4qayy  2276829  StorageDealError                         f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a  512MiB  0 FIL             1494720  12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooWKAd5C78zMyqbaMCm7Pt9CMyAy6eoJNzedadUuiDBkfhY-1628966926747885133  handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzacebncmxl5vedv5zo2dkvs7ehamvfb4qb6rbyw4jnb72fg5olfcp6t6 assigned to sector 3850 with not enough space
Aug 15 04:10:05  true   bafyreifbmtekmdejs2ylk4lxduostg6phro2jkvsjb6spbcq4kz3o643um  2276830  StorageDealError                         f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a  512MiB  0 FIL             1494720  12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooWKAd5C78zMyqbaMCm7Pt9CMyAy6eoJNzedadUuiDBkfhY-1628966926747885157  handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzacedrmthgt6asgzsiphcxudfswaxtrw3l2wejfnugz6csejh6somymw assigned to sector 3850 with not enough space
Aug 16 04:27:25  true   bafyreigtkppdehdtqvao7g6zfygruxrj5yszxfd4hzoby4wyzkin5nzp34  2282721  StorageDealError                         f3vnq2cmwig3qjisnx5hobxvsd4drn4f54xfxnv4tciw6vnjdsf5xipgafreprh5riwmgtcirpcdmi3urbg36a  4GiB    0 FIL             1494720  12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien-12D3KooWKAd5C78zMyqbaMCm7Pt9CMyAy6eoJNzedadUuiDBkfhY-1629050718006021557  handing off deal to node: packing piece <nil>: AddPiece failed: piece bafy2bzacea5rz4p7vi4mow4hazexjmhsv375zkiphbms5a27dqbqy2pfhjem2 assigned to sector 3856 with not enough space

Data Transfer Status

# lotus-miner data-transfers list -v --completed | grep -E 'bafyreickl3ssoofl4jshlsz6cfymnxcmtpabtq3gujicnviplrhazbt2t4|bafyreihqsikaxufxhew24lio5epennotczdltiytq34yyedteobucau4su|bafyreidvm5kpdunfwml7pp5xn4jfv6quwje2d2fu2nmgdj2cjq64kew67i|bafyreiepey5htrtwsbglvbq5gzj3qpqsdyk75ykrpxxvlebnnsk4mp5nqu|bafyreiergadea5vts6jkhhwlf4ywbvmidijj7iewplvw7sgvs43ftja3vq|bafyreidcfuf5rvkz6a423jwxj5ri4pmdsqbclu7jfmueb22xxyix2qckee|bafyreib4m7tnalkbxleeowukukzrkinzwajupqveantv4bew2eup37ks6m|bafyreicazgtyuwxqkvwzjeztnbfvziwdk5zku7pwdpr5zoqxt5ub5diyiy|bafyreihxbqsor2nbqpeo3bjvsuxuopqowssxqb6t7j2a354qyvvnq4qayy|bafyreifbmtekmdejs2ylk4lxduostg6phro2jkvsjb6spbcq4kz3o643um|bafyreigtkppdehdtqvao7g6zfygruxrj5yszxfd4hzoby4wyzkin5nzp34'

1628966926747882413  Completed        12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien  QmUjjK87gqofzVUCjDz5yaH9Q3tKNNL4wNrxrdxfWCY8iL                                                                                                                                                            N           264.1MiB     {"Proposal":{"/":"bafyreickl3ssoofl4jshlsz6cfymnxcmtpabtq3gujicnviplrhazbt2t4"}}
1628966926747882555  Completed        12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien  QmSihZbrthrBkPcpVBua2Z6Be3ZHStDgjWBytCNAEpwHhM                                                                                                                                                            N           606.3MiB     {"Proposal":{"/":"bafyreihqsikaxufxhew24lio5epennotczdltiytq34yyedteobucau4su"}}
1628966926747882556  Completed        12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien  QmTWduH26LTAMwen4KXiERtNcUC1dr5UGv9i95a4xjXrid                                                                                                                                                            N           499.8MiB     {"Proposal":{"/":"bafyreidvm5kpdunfwml7pp5xn4jfv6quwje2d2fu2nmgdj2cjq64kew67i"}}
1628966926747882654  Completed        12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien  QmV2mAJ7CPFteCNf3q5zCLo24pgvLBibECCknp7DoVMD6a                                                                                                                                                            N           279.6MiB     {"Proposal":{"/":"bafyreiepey5htrtwsbglvbq5gzj3qpqsdyk75ykrpxxvlebnnsk4mp5nqu"}}
1628966926747882778  Completed        12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien  Qmd5rrVeyQfQeVdswZzbSharEUypkvNhX25XtCzqBHXdkn                                                                                                                                                            N           309MiB       {"Proposal":{"/":"bafyreiergadea5vts6jkhhwlf4ywbvmidijj7iewplvw7sgvs43ftja3vq"}}
1628966926747882919  Completed        12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien  QmXoCoLCGwxadTU9oGKizHEEuB8wjyCXFAM7A44ENjdiaJ                                                                                                                                                            N           302.7MiB     {"Proposal":{"/":"bafyreidcfuf5rvkz6a423jwxj5ri4pmdsqbclu7jfmueb22xxyix2qckee"}}
1628966926747885097  Completed        12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien  QmNwnEjMWKTAw1kBPZKi3m82shcoRx948GG88RF7UcaesY                                                                                                                                                            N           284.9MiB     {"Proposal":{"/":"bafyreib4m7tnalkbxleeowukukzrkinzwajupqveantv4bew2eup37ks6m"}}
1628966926747885118  Completed        12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien  QmcFieRL6sfubWe4exmkEHjGj3vmp1deLpkBEzxZum7wmu                                                                                                                                                            N           1.084GiB     {"Proposal":{"/":"bafyreicazgtyuwxqkvwzjeztnbfvziwdk5zku7pwdpr5zoqxt5ub5diyiy"}}
1628966926747885133  Completed        12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien  QmbA1BkwY2neKACWYdmUhAFgq9BnwSaEi1UCpaCSNppcUW                                                                                                                                                            N           257.1MiB     {"Proposal":{"/":"bafyreihxbqsor2nbqpeo3bjvsuxuopqowssxqb6t7j2a354qyvvnq4qayy"}}
1628966926747885157  Completed        12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien  QmczNm2N6yiZpjm6o8qFJwP7UeRtRwxqECGa7V6J9jnhsB                                                                                                                                                            N           279.7MiB     {"Proposal":{"/":"bafyreifbmtekmdejs2ylk4lxduostg6phro2jkvsjb6spbcq4kz3o643um"}}
1629050718006021557  Completed        12D3KooWGBWx9gyUFTVQcKMTenQMSyE2ad9m7c9fpjS4NMjoDien  QmRspTvMhBa9R6fG1RxkigRA4JaegxEufVaq35dvz5wzBT                                                                                                                                                            N           3.466GiB     {"Proposal":{"/":"bafyreigtkppdehdtqvao7g6zfygruxrj5yszxfd4hzoby4wyzkin5nzp34"}}

Logging Information

https://www.dropbox.com/s/e9ercg8uce8utj4/addpiece-sector-not-enough-space-bugreport.log?raw=1

Repo Steps (optional)

  1. Run '...'
  2. Do '...'
  3. See error '...' ...

created time in a month

issue comment filecoin-project/lotus

[BUG] lotus-worker doesn't execute unseal for initializing shard with --include-sealed.

I am experiencing the exact same behavior.

Daemon:  1.11.1-m1.3.5+mainnet+git.7be207bc5.dirty+api1.2.0
Local: lotus-miner version 1.11.1-m1.3.5+mainnet+git.7be207bc5.dirty
tmyuu

comment created time in a month

issue comment filecoin-project/lotus

Separate Max SimultaneousTransfers configuration for storage and retrieval deals

This makes sense to me. It increases flexibility and has no significant disadvantage that I can think of.

jennijuju

comment created time in a month

created tag NeonixWS/go-fil-markets

tag v1.6.0-rc1

Shared Implementation of Storage and Retrieval Markets for Filecoin Node Implementations

created time in 2 months

created tag NeonixWS/go-fil-markets

tag v1.5.0

Shared Implementation of Storage and Retrieval Markets for Filecoin Node Implementations

created time in 2 months

push event NeonixWS/go-fil-markets

dirkmc

commit sha 0f8f468a065953548c6c8b6c07f3bcd744d12c71

fix: close the reader after unsealing into blockstore (#507)

Whyrusleeping

commit sha 46f73ec722078184e1ba2e79f079696003562dd1

always try to return some message to the client (#498)

* always try to return some message to the client
* fix: better error handling

Co-authored-by: Dirk McCormick <dirkmdev@gmail.com>

dirkmc

commit sha 1f259af31727888969baf7ffd483fca70cd4b355

feat: update to go-data-transfer v1.2.9 (#508)

dirkmc

commit sha 20da4f952b875280d1b943f1be1730021fe03e74

feat: update to go-data-transfer v1.2.9 (#508) (#504)

dirkmc

commit sha 2779d3dc582494277a232ef54eda177d1fea8764

release: v1.2.0 (#509)

dirkmc

commit sha 99e0b3bdc54bfd61acde31432ddb11ef5a202622

feat: update tests for go-data-transfer 1.3.0 (#510)

dirkmc

commit sha 7f8d9b6a2aedfb6194a15f57569ccf88d7096ba7

release: v1.2.1 (#511)

dirkmc

commit sha dec1a28a914b53f2d55007d966547b72d2505c03

feat: update to go-data-transfer v1.4.0 (#514)

dirkmc

commit sha 7ff7573a1da2cbf8e5596aa7b10cdba049f285ed

release: v1.2.2 (#515)

dirkmc

commit sha 84c7b986b9062d0922ba6a1ff87b270d0009a1e1

fix: remove LocatePieceForDealWithinSector (no longer used) (#518)

dirkmc

commit sha cf830ee0459221d4ba7e91f105a0f19b6d5a453e

fix: process payment request from provider while initiating payment channel (#520)

dirkmc

commit sha a247678e5968226a62156fb595a3e76276f60c4f

release: v1.2.3 (#521)

Anton Evangelatov

commit sha 39a8025d35332ab72b1fd4a9004956584fca4456

Add DealStages to track and log Deal status updates (#502)

* Add DealStages field to track and keep history of lifecycle of deal
* fixup
* fixup
* Update storagemarket/impl/clientstates/client_fsm.go
  Co-authored-by: dirkmc <dirkmdev@gmail.com>
* Update storagemarket/impl/clientstates/client_fsm.go
  Co-authored-by: dirkmc <dirkmdev@gmail.com>
* Update storagemarket/impl/clientstates/client_fsm.go
  Co-authored-by: dirkmc <dirkmdev@gmail.com>
* Update storagemarket/impl/clientstates/client_fsm.go
  Co-authored-by: dirkmc <dirkmdev@gmail.com>
* Update storagemarket/impl/clientstates/client_fsm.go
  Co-authored-by: dirkmc <dirkmdev@gmail.com>
* Update storagemarket/impl/clientstates/client_fsm.go
  Co-authored-by: dirkmc <dirkmdev@gmail.com>
* Update storagemarket/impl/clientstates/client_fsm.go
  Co-authored-by: dirkmc <dirkmdev@gmail.com>
* Update storagemarket/impl/clientstates/client_fsm.go
  Co-authored-by: dirkmc <dirkmdev@gmail.com>
* decrease log level to debug
* Update storagemarket/impl/clientstates/client_fsm.go
  Co-authored-by: dirkmc <dirkmdev@gmail.com>
* Update storagemarket/impl/clientstates/client_fsm.go
  Co-authored-by: dirkmc <dirkmdev@gmail.com>
* explicit set of log with empty value
* fix test
* fix: dont panic when adding log to nil DealStages
* add godocs to deal stages objects.

Co-authored-by: dirkmc <dirkmdev@gmail.com>
Co-authored-by: raulk <raul@protocol.ai>

Aarsh Shah

commit sha 1203c12c6de9faabb0ff7772ac550993647b51ef

Poll Provider for acceptance only till (deal start epoch + grace period) has elapsed (#516)

* poll only till deal elapses
* Apply suggestions from code review

Co-authored-by: dirkmc <dirkmdev@gmail.com>
Co-authored-by: dirkmc <dirkmdev@gmail.com>

dirkmc

commit sha b5de1930279605b69c2b64f14e1f60725c0c00f4

feat: update to go-data-transfer v1.4.1 (#523)

dirkmc

commit sha 262e47f5e37028af311f896b9f517af7f765c01d

release: v1.2.4 (#524)

dirkmc

commit sha 7ff273ec3fcc9ad6c228b74c6cd97d1e11072619

fix: use time-based deal ID instead of stored counter (#529)

Aarsh Shah

commit sha dd9b0da6555b679ffb0d996f4e73783f6a8540d1

Flush out & fix retrieval bugs (#525)

* remove extern file
* removed stages file
* fix test
* go mod tidy
* fix: tidy up go sum
* return ErrResume when we want to resume
* merged
* fix: go.sum
* feat: go-data-transfer v1.4.2
* fix: flaky test TestClient_DuplicateRetrieve

Co-authored-by: Dirk McCormick <dirkmdev@gmail.com>

dirkmc

commit sha ccf567f2bbddd05f79b55052f47ab35c831fdd47

release: v1.2.5-rc1 (#530)

Co-authored-by: Aarsh Shah <aarshkshah1992@gmail.com>

dirkmc

commit sha ff7bf5a261ceec50d4ab5c81c013513f7fd5b2a3

add timeout for sending cancel message to peer when retrieval cancelled (#531)

* fix: add timeout for sending cancel message to peer on close
* fix: remove extraneous import

push time in 2 months