The local cache is an integral part of StableBit CloudDrive's design. StableBit CloudDrive's primary use case is high-latency providers, such as networked providers, which means that most providers benefit from an extra level of cache between the RAM cache and the cloud data.
Right now, the I/O path for all providers is as follows:
- Read / Write I/O
- OS RAM Cache (small)
- StableBit CloudDrive disk cache (larger)
- Read / write provider data (all the data)
The StableBit CloudDrive disk cache is effective at speeding things up when that last step, accessing the provider data, is slow compared to everything above it.
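The read side of this tiered path can be sketched as follows. This is a minimal illustration, not CloudDrive's actual implementation; the class and method names (`RamCache`, `TieredReader`, `provider.read`) are assumptions made for the example:

```python
from collections import OrderedDict

class RamCache:
    """Small in-memory LRU cache, standing in for the OS RAM cache."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, block):
        if block in self.entries:
            self.entries.move_to_end(block)  # mark as most recently used
            return self.entries[block]
        return None

    def put(self, block, data):
        self.entries[block] = data
        self.entries.move_to_end(block)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

class TieredReader:
    """Read path: RAM cache -> disk cache -> provider data."""
    def __init__(self, ram, disk, provider):
        self.ram = ram
        self.disk = disk          # larger cache; a plain dict here
        self.provider = provider  # slowest tier: all the data

    def read(self, block):
        data = self.ram.get(block)
        if data is not None:
            return data                       # fastest: RAM hit
        data = self.disk.get(block)
        if data is None:
            data = self.provider.read(block)  # slowest: provider fetch
            self.disk[block] = data           # populate the disk cache
        self.ram.put(block, data)             # promote into RAM
        return data
```

A read only reaches the provider when both caches miss, which is why the disk tier pays off whenever the provider is the slowest layer.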
With the Local Disk provider this is not necessarily the case, and the cache can be unnecessary. Note that if the cache is on an SSD and the provider data is stored on a spinning drive, the cache is still useful.
In theory, we could provide a special direct I/O path for data going to a local disk, bypassing the cache, but this would have to be a path engineered specifically for the Local Disk provider. Perhaps we will add this feature in the future.
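Such a bypass might look like the following sketch. This is purely hypothetical, since the feature does not exist; the names (`DirectReader`, `choose_read_path`, `is_local_disk`) are invented for illustration:

```python
class DirectReader:
    """Hypothetical direct path: read straight from provider storage,
    skipping the disk-cache lookup and fill entirely."""
    def __init__(self, provider):
        self.provider = provider

    def read(self, block):
        # One I/O per read; no cache-miss check, no cache write-back.
        return self.provider.read(block)

def choose_read_path(provider, cached_reader, is_local_disk):
    """Pick the I/O path per provider: direct for a fast local disk,
    the normal cached path for everything else."""
    if is_local_disk:
        return DirectReader(provider)
    return cached_reader
```

The point of the selector is that the bypass is a per-provider decision made once, so the common cached path stays untouched for high-latency providers.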