commit 983a4c3686faad99ba691a59612d87d024b571e8
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Thu Feb 12 16:34:06 2026 +0000

    azureblob: add server side copy real time accounting

commit d516515dfe9d0ef5445027317cef9b2bb132d4b7
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Thu Feb 12 16:32:09 2026 +0000

    operations: add method to account server side copies in real time
    
    Before this change server side copies would show at 0% until they
    were done, then jump to 100%.

    With support from the backend, server side copies can now be
    accounted in real time. This only works for backends which have
    been modified to receive feedback on how their copies are going.

commit d17425eb1f08cd3fecc45a8f0cc97f0124c571d6
Author: Duncan F <131309315+duncanaf@users.noreply.github.com>
Date:   Tue Feb 10 18:09:35 2026 -0800

    azureblob: add --azureblob-copy-total-concurrency to limit total multipart copy concurrency

commit 83d0c186a74bd2e44e17e297f4bc81a1ce028e1c
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Fri Feb 6 13:02:18 2026 +0000

    pacer: re-read the sleep time as it may be stale
    
    Before this change we read sleepTime before acquiring the pacer token
    and used that possibly stale value to schedule the token return. When
    many goroutines entered while sleepTime was high (e.g. 10s), each
    goroutine cached this 10s value. Even if successful calls rapidly
    decayed the pacer state to 0, the queued goroutines still scheduled
    10s token returns, so the queue drained at 1 req/10s for the entire
    herd. This could create multi-minute delays even after the pacer had
    dropped to 0.
    
    After this change we refresh the sleep time after getting the token.
    
    This problem was introduced by the desire to skip reading the pacer
    token entirely when sleepTime is 0 in high performance backends
    (e.g. s3, azure blob).

commit 2887806f33898209dc3ecec343b59820f9c0f801
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Tue Feb 3 16:40:54 2026 +0000

    pacer: fix deadlock between pacer token and --max-connections
    
    It was possible in the presence of --max-connections and recursive
    calls to the pacer to deadlock it, leaving all connections waiting
    on either a max connection token or a pacer token.
    
    This fixes the problem by making sure we return the pacer token on
    schedule if we take it.
    
    This also short circuits the pacer token if sleepTime is 0.

commit 9ed4295e34445c389800159b3bd0a5d466fbeb54
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Sat Aug 30 11:06:40 2025 +0100

    pacer: fix deadlock with --max-connections
    
    If the pacer was used recursively and --max-connections was in use
    then it could deadlock if all the connections were in use at the
    time of the recursive call (likely).
    
    This affected the azureblob backend because when it receives an
    InvalidBlockOrBlob error it attempts to clear the condition before
    retrying. This in turn involves recursively calling the pacer.
    
    This fixes the problem by skipping the --max-connections check if the
    pacer is called recursively.
    
    The recursive detection is done by stack inspection which isn't
    ideal, but the alternative would be to add ctx to all >1,000 pacer
    calls. The benchmark reveals stack inspection takes about 55ns per
    stack level, so it is relatively cheap.

commit 2fa1a52f22cf052d7ab0a9ab313b203bed9b39df
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Sat Aug 30 10:29:45 2025 +0100

    Revert "azureblob: fix deadlock with --max-connections with InvalidBlockOrBlob errors"
    
    This reverts commit 0c1902cc6037d81eaf95e931172879517a25d529.
    
    This turns out not to be sufficient, so we need a better approach.
