
dannybuoy

Members · 21 posts · 696 profile views
Achievements: Member (2/3) · Reputation: 1

  1. Cheers, I will enable it then. Expecting it to take a looong time to encrypt an 8TB disk though!
  2. The pop-up help tooltip for this option suggests that it may be incompatible with certain filter drivers, such as those that deal with disk encryption. I was planning on enabling BitLocker on my pooled drives; does anyone know if enabling this setting could cause a problem? (A sketch for checking which filter drivers are loaded follows this list.)
  3. I just tried this plugin and it did nothing until I rebooted my machine. After that, files went to the cache drive first, then to the archive drives in the order I selected.
  4. Ah, I get it now! If I test this again I will try running sdelete on the drive to see what happens... but if that tries to download every chunk in order to analyse the disk, perhaps some other utility that can securely delete files by zeroing the unused bits would be more appropriate (a sketch of that zero-fill approach follows this list). Personally I would prefer that CloudDrive had an option to ignore chunks that aren't referenced by an actual file, if that were architecturally possible.
  5. It kept happening from last night until this afternoon, when I had to manually destroy the drive. The upload eventually resumed and no more errors appeared in the UI, but I didn't want to wait for it to upload 300GB of deleted files when my cloud drive volume was in fact empty!
  6. Currently trialling 1.0.0.723. I created a 2TB drive on a trial unlimited Google Drive account and tried to copy a >300GB music library to it. However, it kept stopping due to I/O errors, believing there to be a problem with the cache disk: "I/O error: Cloud drive (H:\) - Google Drive is having trouble writing to the local disk DISK4 (C:\MNT\DISK4\). This error has occurred 46,138 times. Your local disk may be failing. Detach the cloud drive from this disk. Continuing to get this error can affect the data integrity of your cloud drive." StableBit Scanner is not reporting any issues with the drive, and I can copy the same set of files to the cache drive no problem.

     So I destroyed the drive and started again, choosing a different cache disk this time. The files copied over fine, but when I returned to it I saw that hardly anything had been uploaded and the upload had stopped. The error this time read: "I/O error: Cloud drive (H:\) - Google Drive is having trouble uploading data to Google Drive in a timely manner. Error: Thread was being aborted. This error has occurred 477 times. This error can be caused by having insufficient bandwidth or bandwidth throttling. Your data will be cached locally if this persists." I have about 15Mbps upload and 45Mbps download. I checked the logs and they are full of 'User Rate Limit Exceeded' errors. Does anyone know if this is a rate limit Google applies to trial accounts that improves if purchased? (A sketch of the backoff pattern such rate limits call for follows this list.)

     After a reboot I saw some upload activity for a while before it stopped again with the same error as above. I then deleted the files from the drive and copied over 2 or 3 GB of video files. Now, even after a few reboots, the UI still says there is around 300GB to upload. I've tried wiping the cache to no avail. Is this expected behaviour - that once a file is queued for upload it stays in that queue even if deleted from the drive? Right now I can't even detach the drive, as it still believes it has 300GB to upload and is unable to upload it anywhere, so I would be forced to stop the CloudDrive service and delete the CloudDrive folder on my Google Drive if I wanted to destroy this disk.
  7. Well, Duplicati appears to work fine, even when backing up to the same cloud that CloudDrive is attached to. I tested a small batch of different types of media and was pretty impressed with the speeds - might be a different story with a large library however!
  8. I had assumed so, since all activity on the disk in the CloudDrive UI had stopped after the upload had completed. I've now tried it on another machine though and did not experience the same problem, so I shall keep an eye on it!
  9. I am just evaluating CloudDrive 1.0.0.723 against a 10GB free Google Drive account. My bandwidth is around 45Mbps down and 15Mbps up. I have copied over a few movies and some folders of mp3s and photos to test, and when I browse into a folder I have not been into before, it takes a long time before I see any files. It was taking around 10 seconds or more to show me the contents of an mp3 album, and opening a folder containing 36 photos / 225MB took around 40 seconds before I could see thumbnails and file names. Is this performance expected? I can understand the photos taking a while to generate thumbnails, but having to wait that long just to see under 10 files in a list view is not really going to work for me. (A quick sketch for timing the bare directory listing follows this list.)
  10. Ah, of course, but there is always the possibility of getting something via a 0-day exploit from a hijacked site, so it's for peace of mind really. I didn't know that about Google Drive; as far as I know, OneDrive does not support the same feature.
  11. Open the Crashplan config and disable dedup - I did this and my upload went from around 1Mbps to 6Mbps. I would only do that for the initial upload though, as their dedup engine saves you from re-uploading if you move a file! It still uses a stupid amount of RAM though: almost 4GB for backing up 5TB!
  12. Has anybody tried any solutions to this? I like the idea of having everything fully off-site with a single provider, encrypted, and versioned, with no need for Crashplan or external backups. Built-in file history for Windows shares might work, but modern cryptoware apparently trashes the backups before encrypting. One idea I had is Cloudberry Backup - it could read from a CloudDrive hooked up to an unlimited Amazon Cloud Drive account (once it's all working properly), then write encrypted, versioned backups directly back to the same cloud provider!
  13. I was very interested in Plex Cloud as a replacement for my NAS, until I realised that everything would be stored unencrypted. A good alternative though is a low power NUC device acting as a download box and Plex server, with CloudDrive encrypting the lot!
  14. I think I will be using Cloudberry. Other options are Arq, Cloudbacko, Duplicati and SyncBackPro. A simple mirror/sync is not reliable as a backup: what if you delete stuff or get shafted by ransomware? The changes would just be mirrored to Amazon Cloud Drive and you would be stuck without another offline backup. The ones I listed keep a file history to work around this. Cloudberry also does cloud-to-cloud backups, so I can back up my OneDrive to Amazon with full file history too. The basic version is free, but it looks like you have to pay for the next level up if you want to install it on a server OS. Worth it though! When a pooled drive goes down with unduplicated files, you may end up with a random dispersal of missing files, so you need a restore option that can restore just the missing files rather than restoring EVERYTHING or manually selecting each missing file. It looks like Cloudberry can do this; Arq and Cloudbacko cannot; I've not tried the last two yet.
  15. I did back up just my logical DrivePool. Let me explain: I have a P: drive for my pool, which in reality is 4 drives. Over time, as I used the pool, files were spread randomly across the disks in a balanced fashion. Then a disk dies; because the files were spread randomly, random files are now missing from the pool. Crashplan allows me to select exactly what I want to restore. I don't want to restore everything, as there is no 'do not overwrite existing files' option, and I'm not going to restore multiple terabytes unnecessarily if I only have a few GB of files gone. If I want to restore only the actual missing files, I have to work out what is missing and select each file individually; this is what took me several days of agony. (A sketch of a restore-only-missing-files pass follows this list.)

     Anyway, I have just been testing various cloud backup solutions now that Amazon offer unlimited Cloud Drive storage: Cloudberry, Arq, Cloudbacko, Duplicati and SyncBackPro. Arq and Cloudbacko are similarly restrictive in restore options. I haven't tried the last two yet, but after trying Cloudberry I don't think I'll even bother! Cloudberry looks amazing: great UI, tons of options, including the 'do not overwrite existing files' option I was desperately seeking from Crashplan. With this I should be able to just select the entire root, restore the lot, and it will retrieve only the missing files rather than everything. It's very light on CPU and RAM, and the upload is saturating my bandwidth at 13Mbps. With Crashplan I was getting around 1Mbps, and then 4Mbps after disabling dedup and compression. Check it out!
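On the filter-driver question in post 2: one way to see what could conflict before enabling BitLocker is to list the filter drivers Windows currently has loaded. A minimal sketch in Python (my own suggestion, not a StableBit-provided check), wrapping the built-in `fltmc` tool; it needs an elevated prompt:

```python
import subprocess

# "fltmc filters" lists the loaded file-system minifilter drivers and
# their altitudes; third-party pooling or encryption filters show up here.
# Volume-level filters (BitLocker's fvevol.sys among them) can be listed
# with the separate built-in "driverquery" command instead.
filters = subprocess.run(["fltmc", "filters"],
                         capture_output=True, text=True, check=True)
print(filters.stdout)
```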
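On the secure-delete idea in post 4: the standard trick (what `sdelete -z` does) is to fill the volume's free space with zeros and then delete the filler, which overwrites the unused bits without reading live files. A rough Python sketch of the same idea, assuming the cloud drive is mounted at H: and you can tolerate the volume briefly filling up:

```python
import os

# Fill free space with zeros, then delete the filler file. Blocks that
# held deleted data now hold zeros; live files are never touched.
filler = r"H:\zerofill.tmp"  # hypothetical temp file on the target volume
block = b"\0" * (4 * 1024 * 1024)  # write in 4 MiB blocks

with open(filler, "wb", buffering=0) as f:
    try:
        while True:
            f.write(block)
    except OSError:  # disk full: the free space is now zeroed
        pass
os.remove(filler)
```

Note that on a CloudDrive volume this writes to every free chunk, so it would queue a large upload rather than a download; whether that is cheaper than sdelete's analysis pass depends on the provider.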
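On the 'User Rate Limit Exceeded' errors in post 6: Google applies per-user API request quotas to Drive regardless of account type, and its documented guidance for clients is to retry with exponential backoff plus jitter. CloudDrive does its own retrying internally, but for anyone scripting against the API directly, the pattern looks roughly like this (`upload_chunk` is a hypothetical callable standing in for whatever call gets the 403):

```python
import random
import time

def upload_with_backoff(upload_chunk, chunk, max_retries=8):
    """Retry a rate-limited call with exponential backoff and jitter.

    upload_chunk is a hypothetical callable assumed to raise
    RuntimeError on a 'User Rate Limit Exceeded' response.
    """
    for attempt in range(max_retries):
        try:
            return upload_chunk(chunk)
        except RuntimeError:
            # Sleep 1, 2, 4, ... seconds plus up to 1 s of jitter so
            # parallel upload threads don't retry in lockstep.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError(f"gave up after {max_retries} rate-limited attempts")
```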
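On the slow folder listings in post 9, it helps to separate enumerating names from reading file contents: Explorer reads media headers to build thumbnails, and on a cloud drive that means downloading chunks. A quick way to check whether the bare listing is the slow part (a sketch assuming the drive is mounted at H: and a test folder exists there):

```python
import os
import time

# Time a bare directory enumeration without opening any files. If this
# is fast but Explorer is slow, the wait is thumbnail/metadata reads
# pulling chunks down, not the directory listing itself.
path = r"H:\Photos"  # hypothetical test folder on the cloud drive
start = time.monotonic()
names = [entry.name for entry in os.scandir(path)]
print(f"{len(names)} entries listed in {time.monotonic() - start:.2f}s")
```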
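On the restore-only-missing-files problem in posts 14 and 15: conceptually all that is needed is a walk of the backup tree that copies a file only when the same relative path no longer exists in the pool, i.e. a 'do not overwrite existing files' restore. A minimal sketch of that pass, assuming the backup is reachable as a local path (both paths below are hypothetical):

```python
import shutil
from pathlib import Path

def restore_missing(backup_root: str, pool_root: str) -> int:
    """Copy only files present in the backup but absent from the pool.

    Existing pool files are never overwritten, so after a single-disk
    failure this retrieves just the randomly missing files.
    """
    backup, pool = Path(backup_root), Path(pool_root)
    restored = 0
    for src in backup.rglob("*"):
        if not src.is_file():
            continue
        dst = pool / src.relative_to(backup)
        if not dst.exists():
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # preserves timestamps
            restored += 1
    return restored

print(restore_missing(r"B:\Backup", "P:\\"), "files restored")
```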