Issue

Christopher
Technical Support
StableBit CloudDrive
1.0.0.486
Windows Server 2008 R2
Public
Alex

I ran the 800 GB data consistency test and didn't find any data consistency issues, with the settings suggested here. Every bit of every file written to the cloud drive was downloaded again with no issues.

However, the question of what's going on with the chunk deletions remains. I've added additional safeguards in the latest build (1.0.0.536) to protect against inadvertent chunk deletions due to unforeseen bugs.

Starting with 1.0.0.536, encrypted drives (whether using a public key or a private key) will never allow a full chunk that contains all NULLs to be uploaded. It is statistically improbable that a chunk on an encrypted drive would contain all zeros. If such a condition is ever encountered, the upload will not be allowed to continue and an error will be shown to the user: "Cannot upload a chunk of NULLs on an encrypted drive."
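The idea behind the guard can be sketched roughly like this (a minimal illustration with hypothetical names, not the actual StableBit CloudDrive implementation, which is internal):

```python
def assert_uploadable(chunk: bytes, drive_encrypted: bool) -> None:
    """Refuse to upload an all-NULL chunk on an encrypted drive.

    On an encrypted drive every stored chunk is ciphertext, so a chunk
    of all zeros is statistically improbable. Seeing one suggests a bug
    somewhere upstream, so the upload is aborted rather than risk
    propagating corrupted data to the provider.
    """
    if drive_encrypted and chunk.count(0) == len(chunk):
        raise IOError("Cannot upload a chunk of NULLs on an encrypted drive.")
```

Note that the check only applies to encrypted drives; on an unencrypted drive an all-zero chunk is perfectly legitimate.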

As you're probably aware, there are only two cases in which StableBit CloudDrive can delete a chunk:
  1. When the chunk contains all NULLs, it is more efficient to simply delete it than to re-upload it.
  2. When Google Drive returns the undocumented MIME type error, we attempt to delete the chunk and re-upload it. This is specific to Google Drive only. If the upload still fails, the chunk remains deleted, but this is fine because the data that needs to be uploaded is still stored locally in the cache, and we will retry at a later time.
#1 is now eliminated as a possibility with the new changes for encrypted drives. #2 will now explicitly write a warning-level log entry: "[W] Conflict. Deleting chunk [...], and will attempt to re-upload." So starting with 1.0.0.536, you will now be able to definitively tell why chunks are being deleted on Google Drive.
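For illustration, the two deletion paths above could look something like this (a hypothetical sketch; the provider interface, exception name, and function names are assumptions, not the actual code):

```python
import logging

log = logging.getLogger("clouddrive")


class MimeTypeConflictError(Exception):
    """Stand-in for the undocumented Google Drive MIME type error."""


def handle_chunk_write(provider, chunk_id: str, data: bytes) -> None:
    """Sketch of the only two paths that can delete a chunk."""
    if data.count(0) == len(data):
        # Case 1: all-NULL chunk. Deleting is cheaper than uploading,
        # since a missing chunk is read back as all zeros anyway.
        # (For encrypted drives this path no longer occurs as of
        # 1.0.0.536.)
        provider.delete(chunk_id)
        return
    try:
        provider.upload(chunk_id, data)
    except MimeTypeConflictError:
        # Case 2 (Google Drive only): log a warning, delete the chunk,
        # and attempt to re-upload it.
        log.warning("Conflict. Deleting chunk %s, and will attempt to "
                    "re-upload.", chunk_id)
        provider.delete(chunk_id)
        try:
            provider.upload(chunk_id, data)
        except MimeTypeConflictError:
            # The chunk remains deleted; the data is still in the local
            # cache, so the upload will be retried later.
            pass
```

With the warning log in place, any deletion on Google Drive leaves an explicit trace in the service log.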

Because of these changes, I will re-run my 800 GB data consistency test one more time and look not only for data consistency issues but also for NULL upload errors and MIME type upload failures.

If this test passes, I will consider this issue resolved.
Public
Alex

That's really odd. I'm actively looking into reproducing this issue.

Normally, a chunk deletion means that we need to write 0s to the entire chunk (as in the case where all 0s were written to that location on the disk). Instead of uploading a whole chunk containing nothing but 0s, it's more efficient to simply delete the chunk and assume on download that any missing chunk contains all 0s. Some providers don't support permanent deletions (e.g. they always keep revisions or always delete to trash), so for those providers we don't make this optimization.
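The download side of that optimization can be sketched like this (again a hypothetical illustration; the provider interface and function name are assumptions):

```python
def read_chunk(provider, chunk_id: str, chunk_size: int) -> bytes:
    """Read a chunk back, treating a missing chunk as all zeros.

    A chunk that was deleted by the all-NULL optimization is simply
    absent from the provider; reconstructing it as a zero-filled
    buffer recovers exactly the data that would have been uploaded.
    """
    data = provider.download(chunk_id)  # assumed to return None if absent
    if data is None:
        return bytes(chunk_size)  # missing chunk => all zeros
    return data
```

This is why the optimization is safe only on providers that support true permanent deletion: the read path must be able to trust that "absent" means "was all zeros," not "lingering in the trash."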

The whole thing with upload pausing making a difference is very odd. Upload pausing is a service-level activity and doesn't affect the write map, which lives in the kernel; if this is a bug, that's potentially where the problem would be.

Currently I'm trying to reproduce this issue as closely as possible:
  • Based on the submitted data (thank you), I've recreated a drive with the exact same settings.
  • I'm using the data consistency test to upload and verify 800 GB of data.
So far, no observed issues, but the test is still running, and I'll have a full report generated at the end.

Other than a bug, the only thing that I can think of that would cause something like this is a tool writing 0s to the disk (such as SDelete -z).

SDelete: https://technet.microsoft.com/en-us/sysinternals/sdelete.aspx

I'm still actively looking into this. I'll let you know what I find as soon as I have something substantial to report.