Covecube Inc.

Christopher (Drashna)

Administrators
  • Content Count
    10975
  • Joined
  • Last visited
  • Days Won
    305

Christopher (Drashna) last won the day on September 30

Christopher (Drashna) had the most liked content!

1 Follower

About Christopher (Drashna)

  • Rank
    Customer and Technical Support
  • Birthday 06/25/1983

Contact Methods

  • MSN
    drashna@live.com
  • Website URL
    https://drashna.net/blog/
  • Jabber
    christopher@covecube.com
  • Skype
    drashnajaelre

Profile Information

  • Gender
    Male
  • Location
    San Diego, CA, USA

Recent Profile Visitors

7077 profile views
  1. Have you tried changing the auto-tuning level? netsh int tcp set global autotuninglevel=highlyrestricted This can have a significant impact, though you'd want to run it on each Windows system that connects to this machine. Also, on the network adapter(s) in Device Manager, try disabling any option that has "checksum" or "offload" in the name, as well as "Green Ethernet" and Interrupt Moderation. Tweaking jumbo frames may also help, and as above, do this on all of the Windows systems connecting. There is also a "Network I/O Boost" option in the performance options for the pool. Try toggling this, as it prioritizes network access over local access, at the cost of CPU cycles.
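
    A minimal PowerShell sketch of those tweaks (run elevated), assuming an adapter named "Ethernet" (adjust the adapter name for your hardware; the advanced property names vary by driver):

      # Restrict TCP receive window auto-tuning (repeat on each client machine)
      netsh int tcp set global autotuninglevel=highlyrestricted

      # Disable checksum and large-send offloads on the adapter
      Disable-NetAdapterChecksumOffload -Name "Ethernet"
      Disable-NetAdapterLso -Name "Ethernet"

      # List the driver's advanced properties to find entries such as
      # "Green Ethernet", "Interrupt Moderation", or "Jumbo Packet"
      Get-NetAdapterAdvancedProperty -Name "Ethernet"

      # Example: turn off interrupt moderation if the driver exposes it
      Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Interrupt Moderation" -DisplayValue "Disabled"
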
  2. For anyone experiencing this issue: this is a known issue, due to changes on Google's end. The fix is in the beta version, but that hasn't shipped yet. The simplest way to fix this is to grab the beta version: http://dl.covecube.com/CloudDriveWindows/beta/download/StableBit.CloudDrive_1.2.0.1356_x64_BETA.exe Once a stable release is out, you should get that, and you will be switched back to the stable channel for updates (unless you explicitly set it otherwise).
  3. This is a known issue. It's not CloudDrive specifically; it's a change in Google's APIs. We've seen this mostly with personal API keys, but it seems that Google has "flipped the switch" and enabled this limitation globally now. The public release version does not have the fix to handle this right now; however, the latest beta versions do. But upgrading to the beta reorganizes the drive's contents, so you can't downgrade and expect to have the drive working afterwards. You can wait for a new release with this fix (which will likely be soon, because we're seeing a significant number of people running into this issue), or you can grab the beta here: http://dl.covecube.com/CloudDriveWindows/beta/download/StableBit.CloudDrive_1.2.0.1356_x64_BETA.exe
  4. Hi,

    I have created 6 CloudDrives using 6 different GSuite accounts. I have not formatted them yet; the only option available during drive creation is NTFS.

    How do I format them as ReFS and pool all of them together with DrivePool?
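
    Not from the original exchange, but one way to handle the formatting step is a short PowerShell sketch like the one below (the drive letter and label are placeholders, and formatting erases anything already on the volume); the pooling itself is done afterwards in the StableBit DrivePool UI by adding each formatted drive to a pool:

      # Format one of the CloudDrive volumes as ReFS (repeat per drive letter)
      # Note: creating ReFS volumes is only available on Windows editions that support it
      Format-Volume -DriveLetter G -FileSystem ReFS -NewFileSystemLabel "CloudDrive1"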

  5. If the drives are in a dock, it may actually be the chipset in the enclosure that is returning the wrong information. We've seen this on JMicron and other controllers in the past.
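
    As an illustration (not from the original post), you can check what identity and bus type the controller is actually handing to Windows with PowerShell; a USB dock will typically show a BusType of "USB" and sometimes a generic or translated model name:

      # Show what each disk controller/enclosure reports to Windows
      Get-PhysicalDisk | Select-Object FriendlyName, SerialNumber, BusType, MediaType, Size
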
  6. Welcome! And Read Striping is enabled by default in DrivePool, but it is worth double-checking.
  7. Restoring to the pool drive would be best, skipping existing files if they exist. Otherwise, it gets complicated, and duplication may be simpler.
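
    If the restore is just a straight file copy, a hedged example with robocopy (the source and pool paths here are placeholders; the /XC /XN /XO switches skip any file that already exists on the destination):

      # Copy a backup back onto the pool, skipping files that are already there
      robocopy D:\Backup P:\ /E /XC /XN /XO /R:1 /W:1
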
  8. I believe that it doesn't grow. The main limit is that the drive needs to be as large as the largest disk it's protecting, IIRC.
  9. For the file placement rules, you'd want to add "Thumbs.db" and whatever else to the rules, and have it limited to just the local disk. One entry per file/type. https://stablebit.com/Support/DrivePool/2.X/Manual?Section=File Placement
  10. Looks good, and you are very welcome!
  11. You could do so by having the CloudDrive disk in a pool with a local disk, and using the file placement rules, though.
  12. Well, StableBit DrivePool can use 100+ drives without any issues. As for SnapRAID, I'm not too familiar with it, but you may want to take a look at this link: http://www.snapraid.it/faq.html#howmanypar Basically, more == better, but they've only had reports of 4 disks failing at a time. So, you may be able to get away with 5-6 disks for parity. But again, I'm not too familiar with it.
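
    For reference (not from the post), this is roughly what multiple parity levels look like in a snapraid.conf on Windows; the drive letters and paths are placeholders, and SnapRAID supports up to six parity entries:

      # Each additional parity file lets the array survive one more simultaneous disk failure
      parity   E:\snapraid.parity
      2-parity F:\snapraid.2-parity
      3-parity G:\snapraid.3-parity

      # Content files (SnapRAID recommends keeping copies on more than one disk)
      content C:\snapraid\snapraid.content
      content H:\snapraid.content

      # Data disks being protected
      data d1 D:\
      data d2 H:\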