Pieces could get queued for hashing multiple times when we receive chunks, if the piece starts getting hashed before we're done writing all of its chunks out. This was only found because piece hashing currently checks just the incomplete data: after the first piece hash passes, the data is marked complete, and the subsequently queued hash then has nothing to read.
c.onDirtiedPiece(pieceIndex(req.Index))
- if t.pieceAllDirty(pieceIndex(req.Index)) {
+ // We need to ensure the piece is only queued once, so only the last chunk writer gets this job.
+ if t.pieceAllDirty(pieceIndex(req.Index)) && piece.pendingWrites == 0 {
t.queuePieceCheck(pieceIndex(req.Index))
// We don't pend all chunks here anymore because we don't want code dependent on the dirty
// chunk status (such as the haveChunk call above) to have to check all the various other
publicPieceState PieceState
priority piecePriority
+ // This can be locked when the Client lock is taken, but probably not vice versa.
pendingWritesMutex sync.Mutex
pendingWrites int
noPendingWrites sync.Cond