commit 4c8bfb75007e5376d466dccc0f107eb12a8e864e
Author: Leon Brocard <lbrocard@fastly.com>
Date:   Thu May 7 08:32:36 2026 +0100

    s3: add new Fastly Object Storage regions
    
    Add three new regions and their endpoints for Fastly Object Storage:
    
    - eu-west-1 (Paris)
    - us-east-1 (Virginia)
    - us-west-1 (Oregon)
    
    These are distinct from the existing us-east, us-west and eu-central
    endpoints, which are kept in place.

commit 0c8d098b7f7707383566113263b5191cead9da22
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 15:47:17 2026 +0100

    cloudinary: fix retrying every error and fix pacer sleep units
    
    shouldRetry treated every non-nil error as retryable, so permanent
    failures (auth, 4xx, not-found) burned through the LowLevelRetries
    budget instead of returning fast.
    
    This also fixes the pacer sleep units: pacer.MinSleep(1000) and
    pacer.MaxSleep(10000) take time.Duration arguments, so they were 1µs
    and 10µs - almost certainly intended as 10ms and 2s.

commit 895e56e401c45f8d6a5800f68ef3345ef8eff923
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 15:30:02 2026 +0100

    test_all: give Cloudinary more list retries to fix flaky tests

commit daacfb6035ac08fe5a640773c18740d33f5c9de0
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 15:19:06 2026 +0100

    sync: fix flaky transform tests with retries
    
    The TransformFile tests in fs/sync call operations.TransformFile
    immediately after MoveDir. On eventually-consistent backends the
    internal NewObject lookup can momentarily fail with "object not
    found", making the tests flaky.
    
    This wraps the two operations.TransformFile calls in TestTransformFile
    and TestManualTransformFile with fstest.Retry.

commit 384e9053736b5fbef4863421c6490a9c0018cf5c
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 15:19:06 2026 +0100

    fstest: make Retry helper public
    
    Add a public fstest.Retry helper that retries any function with
    exponential backoff up to *ListRetries attempts.

commit de67f29b3fe3693739c8d431ce4156dce73a7d6d
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 17:01:07 2026 +0100

    sync: fix --fix-case rename failing on backends that can't update modtime
    
    When --fix-case was used (e.g. by bisync) on backends that can't set
    modification times in place - such as Dropbox - files whose content
    matched but whose modtimes differed would fail to rename with a
    "from_lookup/not_found" error and abort the operation.
    
    This happened because operations.NeedTransfer was called before the
    fix-case rename. NeedTransfer's equality check would delete the
    destination as a precursor to re-uploading it (the standard way to
    update a modtime on these backends), so by the time the rename ran the
    file no longer existed on the remote.
    
    Fix by running the fix-case rename first, so that any subsequent
    delete/re-upload happens at the correctly-cased destination path.
    
    See: #8881

commit 40642fee0128a95daa1a4b97d7d2f123697475ea
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 17:47:23 2026 +0100

    Add Tim Schumacher to contributors

commit fdda89ae63f95f2924e4401a90c0fa8323e45ebc
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 17:47:23 2026 +0100

    Add kkocdko to contributors

commit d86b72c405e54bbf301d7e791ae23ff7d7141f1a
Author: kkocdko <31189892+kkocdko@users.noreply.github.com>
Date:   Wed May 6 19:41:15 2026 +0800

    serve: support custom http response headers
    
    Co-authored-by: Tim Schumacher <tim@tschumacher.net>

commit 03b06ac459b77daf4c8fe13e927a76a81dd33ace
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 11:35:18 2026 +0100

    ftp: fix flaky UploadTimeout test on slow integration servers
    
    The test set the short idle timeout before creating the test Fs, which
    made fs.NewFs fail to read the FTP welcome banner within 1s on slow CI
    hosts. Restore the long timeout while NewFs dials the control
    connection, then apply the short idle timeout before the upload so the
    data connection still exercises the close race that shut_timeout fixes.

commit 38926f43be333bf59b8c9b7c125732e60524dd10
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 11:13:00 2026 +0100

    test_all: give Drime more list retries to fix flaky tests

commit 3e78426fc9608ed4fbadac4c2011b5f6de5cc6e0
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 11:00:37 2026 +0100

    test_all: remove Webdav Infinite Scale integration tests
    
    These are broken and the submitter has not shown any interest in
    fixing them.

commit dac3fa851e47bef2356944ac134f20cda1ed6804
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 10:59:01 2026 +0100

    test_all: remove Seafile V6 tests as they are broken and V6 is 10 years old

commit 7b54c7a6e6e3adcf0c3b1512229cdfa3d8261b05
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Wed May 6 11:39:57 2026 +0100

    Add KTibow to contributors

commit 7200e377dd403630a2daf8bfb11d56641a6e58eb
Author: KTibow <KTibow@users.noreply.github.com>
Date:   Sat May 2 16:05:39 2026 -0700

    oauthutil: clarify token replacement prompt wording
    
    The previous wording "Already have a token - refresh?" was misleading
    because answering yes triggers a full re-authorization flow, not an
    OAuth2 refresh token grant. Updated to "Token already configured -
    replace it?" to accurately describe what happens.
    
    Also updated the SugarSync backend which has its own copy of the prompt,
    and the docs for box, drive, and onedrive that reference it.

commit 6f1678419fc8d8cd7cb2250367e89069d6eca7b2
Author: Leon Brocard <acme@astray.com>
Date:   Wed May 6 10:43:55 2026 +0100

    serve webdav: add gzip compression for compressible responses
    
    Enable on-the-fly response compression for WebDAV when the client sends
    Accept-Encoding and the response content type is suitable for
    compression.
    
    This adds compression for the WebDAV responses that benefit most in
    practice, notably PROPFIND XML responses and text file downloads.
    I tested this with Cyberduck, which sends
    `Accept-Encoding: gzip,deflate` and accepted the compressed responses.
    
    Range requests are explicitly left uncompressed.
    
    Fixes #5777

commit 6e99f8b301f0db7c9f74c0f8a37eab53c266f10e
Author: Leon Brocard <acme@astray.com>
Date:   Wed May 6 10:40:34 2026 +0100

    gui: serve static files with gzip/deflate compression
    
    Before this change, the GUI server sent all static files uncompressed,
    meaning the browser had to download the full size of every JS, CSS,
    and HTML asset.
    
    After this change, the GUI server uses chi's Compress middleware at
    level 5, which negotiates gzip or deflate encoding based on the
    client's Accept-Encoding header.
    
    This reduces transfer sizes significantly for the web UI assets, for
    example assets/index-CvfdU_RR.js is 874 KB uncompressed, and
    265 KB compressed.
    
    This is consistent with how rclone serve http, webdav, and restic
    already compress their responses.

commit 9d4c912e0e479dc459464cf3559088bb67385655
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Sat May 2 21:11:50 2026 +0100

    s3: fix STS call per request by caching AssumeRole credentials
    
    The stscreds.AssumeRoleProvider from AWS SDK Go v2 does not cache
    credentials by itself. The SDK only auto-wraps providers with
    aws.CredentialsCache when they are loaded via
    config.LoadDefaultConfig; when assigned directly to
    aws.Config.Credentials it must be wrapped manually, as documented on
    stscreds.NewAssumeRoleProvider.
    
    Without the cache, configurations using role_arn would call AssumeRole
    once per S3 request, flooding STS and CloudTrail.
    
    See: https://forum.rclone.org/t/aws-iam-roles-credentials-arent-cached/53732

commit 0737599cd4b4ecd74b37721a401b0891d04bb6f6
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Tue Apr 28 10:57:02 2026 +0100

    protondrive: fix segfault when copying files missing revision metadata
    
    When a Proton Drive file has no active revision attributes,
    readMetaDataForLink returns a nil FileSystemAttrs and Object.originalSize
    is left as nil. Object.Open then dereferenced this nil pointer when
    calling fs.FixRangeOption, causing a SIGSEGV during copy.
    
    Use Object.Size() instead, which already implements the correct fallback
    to the link size when originalSize is unavailable.
    
    This updates the github.com/rclone/Proton-API-Bridge package to fix a
    segfault when reading files with no metadata.
    
    Fixes #9377
    Fixes #9117

commit 3b2011c7a09fc32d8a62036891bbd47a36175e4d
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Mon May 4 18:25:56 2026 +0100

    protondrive: route library logging through rclone's logger
    
    Previously all log output produced by Proton-API-Bridge (stdlib log)
    and go-proton-api (logrus + resty's logger) bypassed rclone's
    logging: it ignored -v / -vv levels and didn't reach --log-file.
    
    Add a small adapter implementing the resty.Logger / bridge Logger
    shape that calls fs.Errorf / fs.Logf / fs.Debugf, and pass it via
    the new Config.Logger hook. The bridge in turn forwards the same
    value to go-proton-api's WithLogger option, so HTTP-layer warnings
    and the formerly-hardcoded logrus warnings inside go-proton-api
    also surface through rclone's log levels.

commit ef26e6d26d3b6ccb989fe69e2c78eb8fe707b88d
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Mon May 4 18:19:10 2026 +0100

    protondrive: route HTTP through rclone's transport
    
    The Proton Drive backend constructed the upstream Proton-API-Bridge
    without ever passing rclone's HTTP transport. As a result none of
    rclone's HTTP flags reached Proton: --dump headers, --dump bodies,
    --no-check-certificate, --user-agent, --bind, --ca-cert, --header,
    --tpslimit etc. all silently did nothing for this remote, and HTTP
    traffic was invisible to -vv.
    
    Pass fshttp.NewTransport(ctx) through the new Config.Transport hook on
    the bridge, which forwards it to the updated go-proton-api's
    WithTransport option and so to the underlying resty client.

commit c0a8b2597d6957c4ffe88174fa2038b6a3b82b32
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Tue May 5 09:43:22 2026 +0100

    Add Copilot to contributors

commit 74d65bf670503d26ca1b2c5fe163859e558d269b
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Tue May 5 09:43:22 2026 +0100

    Add Sven Rebhan to contributors

commit a4e647664221c84861bdbf66f9f73640efce53f9
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Tue May 5 09:43:22 2026 +0100

    Add Gustavo V. F. to contributors

commit 6c66b4985dbc4b8eeaa4a239de133897aa56c30a
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Tue May 5 09:43:22 2026 +0100

    Add 王一赫 to contributors

commit b8b3346499139ead43474530bbac8ab2ef542492
Author: Sven Rebhan <36194019+srebhan@users.noreply.github.com>
Date:   Mon May 4 12:06:30 2026 +0200

    log: fix side effects when importing rclone as a library
    
    Avoid side effects by using a dedicated logger instance:
    
    - Importing fs/log only sets rclone's private logger via fs.SetLogger,
      so internal rclone logging works from the moment the package is
      imported but the process-wide slog default is left untouched.
    
    - slog.SetDefault and slog.SetLogLoggerLevel move into InitLogging,
      which is called explicitly from the CLI (cmd/cmd.go), the librclone
      wrapper and the integration test framework. So rclone-as-a-program
      keeps capturing log.Print/log.Fatal and slog.Default() output as
      before.
    
    Library consumers that import fs/log without calling InitLogging now
    keep their own slog default and can safely route rclone output back
    into it via log.Handler.SetOutput without recursing.
    
    Fixes #8907
    
    Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>

commit 9f89102a574c841dbb00f2e8acdaa77c1b3b8a60
Author: Gustavo V. F. <31892323+Gustavo-V-F@users.noreply.github.com>
Date:   Sat May 2 12:47:07 2026 -0300

    bisync: fix "retryable without --resync" error message when --resync has a critical failure

commit 075552367ef240d90aea7f894948ff6485269909
Author: Leon Brocard <acme@astray.com>
Date:   Sat May 2 12:28:30 2026 +0100

    cmd/serve/s3: return object listings in key order
    
    The S3 ListObjects response from `rclone serve s3` was sorting object
    contents by modification time instead of object key. This made the
    listing order incompatible with S3 clients, which expect
    lexicographic key ordering.
    
    In particular, `aws s3 sync` assumes both source and destination
    iterators are ordered by key. With the old modtime ordering it could
    misidentify files as missing or outdated and re-download objects that
    were already up to date.
    
    Change the pager to sort returned objects by key and add a regression
    test which uses keys and modtimes arranged so the old behaviour would
    fail.
    
    Fixes #9002

commit ada5559fe115385d28b0db02d85f413ca3c91d51
Author: Nick Craig-Wood <nick@craig-wood.com>
Date:   Fri May 1 17:15:20 2026 +0100

    Start v1.75.0-DEV development
