Covecube Inc.

Chris Downs

  1. Google Drive, and many others (especially free tiers), have upload bandwidth restrictions (an amount-per-hour cap) that you may be exceeding. The first stage is a speed limiter that kicks in if you go over it; I believe the time window is 2 hours, but I don't know the current figures. There is also a per-day limit of 750 GB, but that's a hard cap that blocks uploading completely for the rest of the day. What speeds do you get if you use the native Google Drive app? As my upstream Internet is only 20 Mbit, I can't help by testing mine. I stopped using Google Drive in my CloudDrive setup a while back, as even with my lower up…
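For context on that 750 GB/day hard cap, here is a quick back-of-the-envelope calculation (my own arithmetic; only the daily cap is a published Google figure) showing the fastest sustained upload rate that stays under it:

```python
# Rough arithmetic for Google Drive's 750 GB/day upload cap.
# The shorter rolling-window limits mentioned above are not
# documented publicly, so they are left out of this sketch.

CAP_GB_PER_DAY = 750
SECONDS_PER_DAY = 24 * 60 * 60

# Fastest sustained rate that stays under the daily cap
max_mb_per_s = CAP_GB_PER_DAY * 1000 / SECONDS_PER_DAY  # ~8.7 MB/s
max_mbit_per_s = max_mb_per_s * 8                       # ~69 Mbit/s

print(f"~{max_mb_per_s:.1f} MB/s (~{max_mbit_per_s:.0f} Mbit/s)")
```

So a 20 Mbit upstream can never reach the daily cap on its own; the hourly limiter is the one slower connections are more likely to hit.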
  2. Try this: Get hold of "Geek Uninstaller" here: https://geekuninstaller.com/download Use that to a) see if the plugin is in the list and b) remove all trace of it if it's there. Then disable your AV software and re-install SSD Optimizer, and reboot. (This particularly applies if you use Windows Defender, as W10 20H2 has some significant security changes that have caused issues for many non-Microsoft programs.)
  3. I wonder, is this something that could be implemented by means of a plugin at all? Or does the API not expose/allow things at that level? If it can, then perhaps a community led solution might be possible, with a little guidance?
  4. It can't be added. The "RAID" setup presents the resulting drive to the OS as a volume - it has no SMART data. The individual disks may well be accessible as hardware devices, but they do not present as accessible volumes/drives. WD Dashboard works because it is specifically designed to access the relevant hardware at a low level. Each manufacturer does this a little differently, so it's probably not feasible to add to Scanner, which works on a software level.
  5. You'll need to find another cloud provider - Google Drive has a bandwidth and speed limiter. If you go over it, they throttle you further.
  6. I see, so then you need the opposite option in the balancer - "Equalize by free space remaining", I think? This should then give you what you want, for incoming new files?
  7. I'm confused - if you have real time balancing turned on, then this option has existed since... forever? Just make sure the disk space equalizer is above all other plugins besides Scanner (if you use that).
  8. No, that won't work at all. You can't duplicate and then force reads to come only from one disk. You would need a second SSD, and some file placement rules so that only your game folders are allowed on those disks. edit: why not just make a scheduled task to run a backup of the relevant game folders on the SSD, and copy them to the DrivePool in a "Game Backups" folder? Just a little batch file with some copy commands, then Task Scheduler to run it once a week?
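A minimal sketch of that scheduled backup, written in Python rather than a batch file for illustration; the folder paths are hypothetical placeholders, not anything DrivePool creates:

```python
import shutil
from pathlib import Path

# Hypothetical paths - replace with your SSD game folder
# and the "Game Backups" folder on your pool drive.
SOURCE = Path("C:/SSD/Games")
DEST = Path("P:/Game Backups")

def backup_games(source: Path, dest: Path) -> None:
    """Copy each game folder from the SSD into the pool's backup folder."""
    dest.mkdir(parents=True, exist_ok=True)
    for game_dir in source.iterdir():
        if game_dir.is_dir():
            # dirs_exist_ok=True lets the weekly run refresh an older copy
            shutil.copytree(game_dir, dest / game_dir.name, dirs_exist_ok=True)

if __name__ == "__main__":
    backup_games(SOURCE, DEST)
```

Point Task Scheduler at the script on a weekly trigger; a batch file using `robocopy` would do the same job.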
  9. [1] Yes, it will only move the data from disk 2 if a balancing rule causes it to be moved (if you have disk space equalisation turned on, for example). Otherwise, it will stay put. [2] You could just set the Prevent Drive Overfill balancer to 75-80%? Then if any disk reaches that capacity, it'll move files out. Personally, I assign a pair of landing disks for my pools: two cheap SSDs where incoming files get dumped, and DrivePool then moves them out later, or when they fill up. Note that the landing disks should be larger than the largest single file you would put on the pool. If cost is an issue, you could try the fo…
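The overfill threshold is just a fill-percentage check per disk. A small sketch of the same idea (my own illustration, not DrivePool's internal logic):

```python
import shutil

def disk_fill_percent(path: str) -> float:
    """Return how full the volume containing `path` is, as a percentage."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def over_threshold(path: str, threshold: float = 80.0) -> bool:
    # Same idea as the overfill balancer: flag a disk once it passes
    # the configured fill level, so files can be moved off it.
    return disk_fill_percent(path) > threshold

print(f"{disk_fill_percent('.'):.1f}% full")
```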
  10. You should be able to pass the iGPU, but I am using old Xeons so I can't test it. Keep in mind that you lose the host video output if you do that. I'm no Hyper-V expert though - I use Proxmox. I do know that all four of the PCIe options need to be ticked when adding a GPU in Proxmox. Is the Hyper-V equivalent obvious to you? I've only ever used Hyper-V on my local PC for testing, with no passthrough. I have been testing VM pass-through of my HBAs for over a year now, and just moved my bare-metal machines over to VMs once the setup had passed my testing to my satisfaction. Remember to…
  11. No problem. Here is a somewhat old thread on ReFS: https://community.covecube.com/index.php?/topic/3296-refs/#comment-22766 There seemed to be some concern over different ReFS versions not being compatible (yikes!), so not sure if that is still a concern?
  12. I pass through the HBA for Drivepool. I use Dell Perc H310 cards and the SMART data is all visible, as it should be because my Windows VM has direct access to the HBA. edit: Wrong Chris I know, but hopefully helpful?
  13. Just move the files you see in "E:", to the new drive "K:". You only have one disk in the pool so far?
  14. It's odd that the form isn't working for you - I literally just used it the other day. Try another browser to be sure? But just tag @Christopher (Drashna), or use one of the contact methods in his profile: He will help you sort it out.
  15. It literally duplicates the data, so a 30 TB pool has space for 15 TB of data if you select 2x, 10 TB if you select 3x, etc. It will still show as 30 TB in Explorer, though. So yes, if you have 30 TB of actual data, you need 60 TB of disks for 2x duplication. The files in the pool are stored in a hidden folder in the root of each drive. Change your Explorer options to show hidden files, and navigate into the long-named folder starting with "PoolPart." and you will see all your files and a ".covefs" folder at the top. Simply move everything except the ".covefs" folder out to another location…
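The capacity arithmetic above is plain division - usable space is pool size divided by the duplication level:

```python
def usable_capacity(pool_tb: float, duplication: int) -> float:
    """Usable space in a pool when every file is stored `duplication` times."""
    return pool_tb / duplication

# The examples from above:
print(usable_capacity(30, 2))  # 15.0 TB usable at 2x
print(usable_capacity(30, 3))  # 10.0 TB usable at 3x
```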