
Verify Cloud File Integrity with RcloneView's Check and Compare Features

Tayson · Senior Engineer · 6 min read

Copying files to the cloud is only half the job. Verifying that every byte arrived intact is what separates a reliable workflow from a hopeful one.

Moving terabytes across providers, running nightly backups, or archiving important datasets all share a common risk: silent corruption. A file can appear present in the destination yet differ from the source due to interrupted transfers, provider-side bugs, or plain bit rot over time. Rclone provides a dedicated check command that compares source and destination file by file, and RcloneView makes that process visual and accessible. This guide explains when and how to verify your cloud files.

RcloneView app preview

Manage & Sync All Clouds in One Place

RcloneView is a cross-platform GUI for rclone. Compare folders, transfer or sync files, and automate multi-cloud workflows with a clean, visual interface.

  • One-click jobs: Copy · Sync · Compare
  • Schedulers & history for reliable automation
  • Works with Google Drive, OneDrive, Dropbox, S3, WebDAV, SFTP and more
Windows · macOS · Linux
Get Started Free →

Free core features. Plus automations available.

Why File Integrity Verification Matters

Cloud providers replicate data internally, but no system is immune to errors. Here are the most common scenarios where verification catches real problems:

  • Interrupted transfers -- a network drop during a large copy can leave partial files on the destination that look complete by name alone.
  • Bit rot -- storage media can degrade over months or years, flipping bits in rarely accessed files.
  • Provider bugs -- occasional API errors can result in zero-byte uploads or truncated writes that pass without raising an error.
  • Migration validation -- after moving hundreds of thousands of files between providers, you need proof that nothing was lost or altered.

Without a verification step, these issues go undetected until you actually need the file.

How Rclone Check Works

The rclone check command compares a source and destination path and reports files that differ. Depending on the providers involved, it uses one of these methods:

| Method | How It Works | When Used |
| --- | --- | --- |
| Hash check | Compares checksums (MD5, SHA1, etc.) reported by both providers | Both providers support a common hash |
| Size check | Compares file sizes only | No common hash is available |
| Download check | Downloads both files and compares them byte by byte | Forced with the --download flag |

Hash-based checking is the fastest and most reliable when both providers support it. Google Drive, OneDrive, S3-compatible providers, and Backblaze B2 all report file hashes, though not always the same type.
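To see why a hash check catches what a size check misses, here is a small local sketch using only coreutils (no rclone required; the file names are illustrative). Two files of identical size pass a size-only comparison but produce different MD5 sums:

```shell
# Sketch: same-size corruption passes a size check but fails a hash check.
printf 'hello world\n' > original.txt
printf 'hellO world\n' > corrupted.txt   # one flipped character, same length

# Size check: both files are 12 bytes, so a size-only compare reports a match.
wc -c original.txt corrupted.txt

# Hash check: the MD5 sums differ, exposing the corruption.
md5sum original.txt corrupted.txt
```

This is the same trade-off rclone makes: size comparison is cheap but blind to in-place corruption, while checksums catch it.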

RcloneView compare folders showing file differences

Using Compare in RcloneView

RcloneView's built-in Compare feature gives you a visual side-by-side view of source and destination folders:

  1. Open the Explorer pane and select your source remote on one side and destination on the other.
  2. Navigate to the folders you want to compare.
  3. Click Compare to run the analysis.
  4. Review the results -- files are color-coded by status: matching, source-only, destination-only, or different.

This visual approach is ideal for spot-checking specific folders or reviewing post-migration results without memorizing command-line output.

RcloneView two-pane explorer with source and destination

Running Rclone Check via the Terminal

For a full integrity scan across an entire remote, open the Terminal in RcloneView and run:

rclone check source:path dest:path

Useful flags to know:

| Flag | Purpose |
| --- | --- |
| --size-only | Compare by size only, skipping the hash check |
| --download | Force byte-by-byte comparison by downloading both copies |
| --one-way | Only check that source files exist in the destination (not vice versa) |
| --combined report.txt | Write a combined report of matches and mismatches to a file |
| --missing-on-src missing.txt | Log files present in the destination but missing from the source |
| --missing-on-dst missing.txt | Log files present in the source but missing from the destination |
| --error errors.txt | Log files that failed the check |

Example for a thorough post-migration check:

rclone check gdrive:/Archive s3-backup:archive-bucket --combined /tmp/check-report.txt
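Several report flags can be combined in a single run when you need a fuller audit trail; the remote names and paths below are illustrative:

```shell
# Hypothetical post-migration audit: one check run, three report files.
rclone check gdrive:/Archive s3-backup:archive-bucket \
  --one-way \
  --combined /tmp/check-combined.txt \
  --missing-on-dst /tmp/check-missing.txt \
  --error /tmp/check-errors.txt
```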

Post-Migration Verification Workflow

After migrating data between providers, follow this workflow to confirm success:

  1. Run a one-way check from source to destination to confirm all source files arrived:
    rclone check source:path dest:path --one-way
  2. Review mismatches -- any reported differences indicate files that need re-copying.
  3. Re-transfer failed files using RcloneView's Copy or Sync, without the --ignore-existing flag, so that mismatched files on the destination are overwritten.
  4. Re-run the check to confirm all differences are resolved.
  5. Save the report for audit purposes using the --combined flag.

RcloneView job history showing completed check operations
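The verify, re-copy, re-verify loop above can be sketched locally with plain shell (no rclone needed; directory and file names are illustrative). A checksum manifest of the source catches the corrupted file, and the second pass confirms the fix:

```shell
# Sketch of the workflow: checksum the source, verify the destination,
# re-copy anything that differs, then verify again.
mkdir -p src dst
printf 'alpha\n' > src/a.txt; cp src/a.txt dst/a.txt
printf 'beta\n'  > src/b.txt; printf 'BETA\n' > dst/b.txt   # simulated corruption

( cd src && md5sum *.txt > ../manifest.md5 )

# First check: b.txt is reported as FAILED.
( cd dst && md5sum -c ../manifest.md5 ) || true

# Re-transfer the failed file, then re-run the check: everything now passes.
cp src/b.txt dst/b.txt
( cd dst && md5sum -c ../manifest.md5 )
```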

Detecting Bit Rot Over Time

For long-term archives, schedule periodic integrity checks:

  1. Create a Job in RcloneView that runs rclone check against your archive.
  2. Schedule it weekly or monthly using the Job Scheduler.
  3. Review the job history after each run to catch any newly corrupted files.
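Outside RcloneView, the same periodic check can be wired up with plain cron; this entry is purely illustrative (the remote names and log path are assumptions):

```shell
# Illustrative crontab entry: weekly archive check every Sunday at 03:00,
# writing a dated combined report for audit history.
0 3 * * 0 rclone check archive:cold-data b2:cold-backup --one-way --combined /var/log/rclone-check-$(date +\%F).txt
```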

This is especially important for cold storage tiers (S3 Glacier, Backblaze B2 archives) where files are written once and read rarely.

Schedule periodic integrity check in RcloneView

Checksum Compatibility Between Providers

Not every provider supports the same hash algorithm. Here is a quick reference:

| Provider | MD5 | SHA1 | Other |
| --- | --- | --- | --- |
| Google Drive | Yes | No | -- |
| OneDrive | No | No | QuickXorHash |
| Amazon S3 | Yes (ETag for single-part uploads) | No | -- |
| Backblaze B2 | No | Yes | SHA1 is native |
| Dropbox | No | No | Dropbox content hash |
| SFTP / Local | Yes | Yes | Multiple others |

When two providers share no common hash, rclone falls back to size-only comparison. Use --download for byte-level certainty in those cases.
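Conceptually, --download does what cmp does locally: it compares the two copies byte by byte. A small sketch with coreutils only (file names are illustrative):

```shell
# Sketch: byte-by-byte comparison, the local analogue of a --download check.
printf 'payload\n' > copy-a.bin
printf 'payLoad\n' > copy-b.bin   # same size, one differing byte

# cmp exits non-zero and reports the first differing byte offset.
cmp copy-a.bin copy-b.bin || echo "files differ"
```

This is the slowest method, since both copies must be fetched in full, but it is the only one that works when the providers share no hash at all.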

Best Practices

  • Always verify after large migrations -- a successful copy command does not guarantee every file is intact.
  • Use --combined reports to create an auditable record of what matched and what did not.
  • Schedule periodic checks for archival data that sits untouched for months.
  • Prefer hash checks over size-only when possible -- same-size corruption is rare but real.
  • Preview fixes with a dry-run sync after a check, then run the sync for real to resolve the mismatches -- a dry run alone changes nothing.
