Covecube Inc.


Popular Content

Showing content with the highest reputation since 09/21/20 in all areas

  1. 2 points
    Managed to fix this today as my client was giving errors also.
    1. Install the beta version from here: http://dl.covecube.com/CloudDriveWindows/beta/download/ (I used 1344).
    2. Reboot. Don't start CloudDrive and/or the service.
    3. Add the below to this file: C:\ProgramData\StableBit CloudDrive\Service\Settings.json
       "GoogleDrive_ForceUpgradeChunkOrganization": {
           "Default": true,
           "Override": true
       }
    4. Start the service & CloudDrive.
    It should kick in straight away. I have 42TB in GDrive and it went through immediately. Back to uploading as usual now. Hope this helps.
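For clarity: the fragment above is not valid JSON on its own. Assuming Settings.json contains no other overrides, the complete file would look like this (a sketch based only on the snippet in the post):

```json
{
  "GoogleDrive_ForceUpgradeChunkOrganization": {
    "Default": true,
    "Override": true
  }
}
```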
  2. 1 point
    Hi, I have a Google G Suite account with unlimited space. Now that G Suite is changing to Google Workspace, I am concerned that my unlimited space will be limited. Does someone know how my account will change? Will users like me be lucky and not see any changes? Or will I be limited to, for example, 1TB, and get a warning that anything above that 1TB will be destroyed? And if so, are there people here willing to group together to buy, for example, a "Dropbox Business Advanced" account, which is 15€ per month (paying yearly) for unlimited space, but only if at least 3 accounts are bought?
  3. 1 point
    CloudDrive itself operates at a block level, so it isn't aware of which files in the file system are being written to or read by the applications on your computer, much like the firmware for an HDD or SSD is unaware of that information as well. That is to say, it isn't possible via CloudDrive. Windows Resource Monitor, Process Explorer, or another tool that looks at Windows' file system usage would be required--as it sounds like you discovered.
  4. 1 point

    Bad drive causing BSOD

    1. Manually move all files outside of the PoolPart.xxx folder on the bad disk so that DrivePool can't see them any more.
    2. Remove the old disk from DrivePool (removal will be instantly complete since the PoolPart folder is empty).
    3. Insert the new disk (a new, different PoolPart.xxx folder is created).
    4. Manually copy the files from the old disk to the new PoolPart folder on the new disk.
    I found that by doing this I didn't get a BSOD, compared to moving the disk using DrivePool. The above is why I think the BSOD is related to DrivePool. What I'm worried about is that if one of my drives goes bad, it would just throw random BSODs and screw up my OS.
  5. 1 point
    I don't have any info on this other than to say that I am not experiencing these issues, and that I haven't experienced any blue screens related to those settings. That user isn't suggesting a rollback; they're suggesting that you edit the advanced settings to force your drive to convert to the newer hierarchical format.
    I should also note that I do not work for Covecube--so aside from a lot of technical experience with the product, I'm probably not the person to consult about new issues. I think we might need to wait on Christopher here. My understanding, though, was that those errors were fixed with release .1314. It presumes that existing data is fine as-is, and begins using a hierarchical structure for any NEW data that you add to the drive. That should solve the problem. So make sure that you're on .1314 or later for sure.
    Relevant changelog:
    .1314
    * [Issue #28415] Created a new chunk organization for Google Drive called Hierarchical Pure.
      - All new drives will be Hierarchical Pure.
      - Flat upgraded drives will be Hierarchical, which is now a hybrid Flat / Hierarchical mode.
      - Upgrading from Flat -> Hierarchical is very fast and involves no file moves.
    * Tweaked Web UI object synchronization throttling rules.
    .1312
    * Added the drive status bar to the Web UI.
    .1310
    * Tuned statistics reporting intervals to enable additional statistics in the StableBit Cloud.
    .1307
    * Added detailed logging to the Google Drive migration process that is enabled by default.
    * Redesigned the Google Drive migration process to be quicker in most cases:
      - For drives that have not run into the 500,000 files per folder limit, the upgrade will be nearly instantaneous.
      - Is able to resume from where the old migration left off.
  6. 1 point
    Reid Rankin

    WSL 2 support

    Here's the DrivePool tracking issue; it appears to have been resolved in version
  7. 1 point
    Reid Rankin

    WSL 2 support

    I've been following up on this with some disassembly and debugging to try and figure out what precisely is going wrong.
    WSL2's "drvfs" is just a 9P2000.L file server implementation (yep, that's the protocol from Plan 9) exposed over a Hyper-V Socket. (On the Windows side, this is known as a VirtIO socket; on the Linux side, however, that means something different and they're called AF_VSOCK.) The 9P server itself is hard to find because it's not in WSL-specific code -- it's baked into the Hyper-V "VSMB" infrastructure for running Linux containers, which predates WSL entirely. The actual server code is in vp9fs.dll, which is loaded by both the WSL2 VM's vmwp.exe instance and a copy of dllhost.exe which it starts with the token of the user who started the WSL2 distro. Because the actual file system operations occur in the dllhost.exe instance, they can use the proper security token instead of doing everything as SYSTEM.
    The relevant ETW GUID is e13c8d52-b153-571f-78c5-1d4098af2a1e. This took way too long to find out, but it allows you to build debugging logs of what the 9P server is doing by using the tracelog utility:
    tracelog -start p9trace -guid "#e13c8d52-b153-571f-78c5-1d4098af2a1e" -f R:\p9trace.etl
    <do the stuff>
    tracelog -stop p9trace
    The directory listing failure is reported with a "Reply_Rlerror" message with an error code of 5. Unfortunately, the server has conveniently translated the Windows-side NTSTATUS error code into a Linux-style error.h code, turning anything it doesn't recognize into a catch-all "I/O error" in the process. Luckily, debugging reveals that the underlying error in this case is an NTSTATUS of 0xC00000E5 (STATUS_INTERNAL_ERROR) returned by a call to ZwQueryDirectoryFile.
    This ZwQueryDirectoryFile call requests the new-to-Win10 FileInformationClass of FileIdExtdDirectoryInformation (60), which is supposed to return a structure with an extra ReparsePointTag field -- which will be zero in almost all cases because most things aren't reparse points. Changing the FileInformationClass parameter to the older FileIdFullDirectoryInformation (38) prevents the error, though it results in several letters being chopped off of the front of each filename, because the 9P server expects the larger struct and has the wrong offsets baked in. So things would probably work much better if CoveFs supported that newfangled FileIdExtdDirectoryInformation option and the associated FILE_ID_EXTD_DIR_INFO struct; it looks like that should be fairly simple. That's not to say that other WSL2-specific issues aren't also present, but being able to list directories would give us a fighting chance to work around other issues on the Linux side of things.
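Not from the original post, but the offset mismatch is easy to see by reproducing the two Windows SDK layouts with Python's ctypes (field order per the SDK; FILE_ID_128 is modeled as 16 raw bytes and WCHAR as a 16-bit integer, since ctypes.c_wchar is 4 bytes on Linux):

```python
# Sketch: why substituting FileIdFullDirectoryInformation (38) for
# FileIdExtdDirectoryInformation (60) chops letters off filenames.
import ctypes

# Fields shared by both directory-information structures.
_common = [
    ("NextEntryOffset", ctypes.c_uint32),
    ("FileIndex",       ctypes.c_uint32),
    ("CreationTime",    ctypes.c_int64),
    ("LastAccessTime",  ctypes.c_int64),
    ("LastWriteTime",   ctypes.c_int64),
    ("ChangeTime",      ctypes.c_int64),
    ("EndOfFile",       ctypes.c_int64),
    ("AllocationSize",  ctypes.c_int64),
    ("FileAttributes",  ctypes.c_uint32),
    ("FileNameLength",  ctypes.c_uint32),
    ("EaSize",          ctypes.c_uint32),
]

class FILE_ID_FULL_DIR_INFO(ctypes.Structure):      # class 38
    _fields_ = _common + [
        ("FileId",   ctypes.c_int64),               # LARGE_INTEGER
        ("FileName", ctypes.c_uint16 * 1),          # WCHAR[1]
    ]

class FILE_ID_EXTD_DIR_INFO(ctypes.Structure):      # class 60
    _fields_ = _common + [
        ("ReparsePointTag", ctypes.c_uint32),       # the extra field
        ("FileId",   ctypes.c_ubyte * 16),          # FILE_ID_128
        ("FileName", ctypes.c_uint16 * 1),          # WCHAR[1]
    ]

full = FILE_ID_FULL_DIR_INFO.FileName.offset
extd = FILE_ID_EXTD_DIR_INFO.FileName.offset
# A server expecting the EXTD layout reads the name (extd - full) bytes
# too far in, i.e. that many bytes / 2 UTF-16 characters into the name.
print(full, extd, (extd - full) // 2)  # 80 88 4
```

Eight bytes of offset difference is four UTF-16 characters, which matches the "several letters chopped off" symptom described above.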
  8. 1 point

    how to determine what drive failed

    I read this as asking how to identify the actual physical drives in the case. I physically label my drives and stack them in the server according to their labels. Without something like that, I have no clue how you would be able to identify the physical drives...
  9. 1 point
    Umfriend is correct. The service should be stopped to prevent any chance of balancing occurring during the migration when using that method. And that method is fine so long as your existing arrangement is compatible with DrivePool's pooling structure. E.g. if you have:
    drive D:\FolderA\FileB moved to D:\PoolPart.someguid\FolderA\FileB
    drive E:\FolderA\FileB moved to E:\PoolPart.someguid\FolderA\FileB
    drive F:\FolderA\FileC moved to F:\PoolPart.someguid\FolderA\FileC
    then your drivepool drive (in this example the P: drive) will show:
    P:\FolderA\FileB
    P:\FolderA\FileC
    as DrivePool will presume that FileB is the same file duplicated on two drives. As Umfriend has warned, when it next performs consistency checking DrivePool will create/remove copies as necessary to match your chosen settings (e.g. "I want all files in FolderA to exist on three drives"), and will warn if it finds a "duplicated" file that does not match its duplicate(s) on the other drives. As to Snapraid, I'd follow Umfriend's advice there too.
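The layout above can be simulated with plain directories standing in for the drives (a rough sketch; the PoolPart.someguid name is taken from the example, everything else is invented):

```python
# Rough simulation of the seeding example: three directories stand in
# for drives D:, E: and F:; the pool view is the union of the PoolPart
# folders, so FileB (present on two drives) appears only once.
import os, shutil, tempfile

root = tempfile.mkdtemp()
layout = {"D": ["FolderA/FileB"], "E": ["FolderA/FileB"], "F": ["FolderA/FileC"]}

drives = {}
for letter, files in layout.items():
    d = os.path.join(root, letter)
    for f in files:
        path = os.path.join(d, f)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        open(path, "w").close()
    drives[letter] = d

# Seeding: move each drive's existing content into its hidden PoolPart
# folder (done with the DrivePool service stopped, per the advice above).
for d in drives.values():
    pool = os.path.join(d, "PoolPart.someguid")
    os.makedirs(pool)
    for entry in os.listdir(d):
        if not entry.startswith("PoolPart."):
            shutil.move(os.path.join(d, entry), os.path.join(pool, entry))

# The pool drive (P:) shows the union of all the PoolPart folders.
pooled = set()
for d in drives.values():
    pool = os.path.join(d, "PoolPart.someguid")
    for dirpath, _, files in os.walk(pool):
        rel = os.path.relpath(dirpath, pool)
        pooled.update(os.path.normpath(os.path.join(rel, f)) for f in files)

print(sorted(pooled))  # on Linux: ['FolderA/FileB', 'FolderA/FileC']
```

FileB appears once in the union view even though it exists on two drives, which is exactly why DrivePool treats it as a duplicated file during consistency checking.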
  10. 1 point
    Moving data to the Pool while retaining the data on the same drive is called seeding, and it is advised to stop the service first (https://wiki.covecube.com/StableBit_DrivePool_Q4142489). I think this is because otherwise DP might start balancing while you are in the process of moving drive-by-drive. I am not sure, but I would think you would first set settings, then do the seeding. (I am pretty sure that) DP does not "index" the files. Whenever you query a folder, DP will on the spot read the drives and indeed show the "sum". Duplicate filenames will be an issue, I think: when DP measures the Pool it will either delete one copy (if the name, size and timestamp are the same, I think) or otherwise inform you of some sort of file conflict. This is something you could actually test before you do the real move (stop the service, create a spreadsheet "Test.xlsx", save it directly to a PoolPart.*/some folder on one of the drives, edit the file, save it directly to PoolPart.*/some folder on another drive, start the service and see what it does). DP does not go mad over same folder names, some empty, some containing data. In fact, as a result of balancing, it can cause this to occur itself. I have no clue about Snapraid. I would speculate that you first create and populate the Pool, let DP measure and rebalance, and then implement Snapraid. Not sure though. You may have to read up on this a bit and there is plenty to find, e.g. https://community.covecube.com/index.php?/topic/1579-best-practice-for-drivepool-and-snapraid/.
  11. 1 point
    I'd suggest a tool called Everything, by Voidtools. It'll scan the disks (defaults to all NTFS volumes); then just type in a string (e.g. "exam 2020" or ".od3") and it shows all files that have that string in the name, with the complete path (you can also set it to search folder names as well). Also useful for "I can't remember what I called that file or where I saved it, but I know I saved it on the 15th..." problems.
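Everything gets its speed by reading the NTFS MFT directly, but the matching behaviour described above (every space-separated term must appear somewhere in the filename) can be sketched in a few lines (function names invented, tree-walking instead of MFT indexing):

```python
# Toy version of filename search as described above: each space-separated
# term in the query must appear in the file's name (case-insensitive).
import os

def build_index(root):
    """Collect the full paths of all files under root."""
    index = []
    for dirpath, _, filenames in os.walk(root):
        index.extend(os.path.join(dirpath, name) for name in filenames)
    return index

def search(index, query):
    """Return paths whose basename contains every query term."""
    terms = query.lower().split()
    return [p for p in index
            if all(t in os.path.basename(p).lower() for t in terms)]
```

For example, `search(build_index(r"D:\\"), "exam 2020")` would list every file whose name contains both "exam" and "2020", with its complete path.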
  12. 1 point

    WSL2 Support for drive mounting

    I ran WSL2 + Storage Spaces from August 2019 to March 2020. It works well. I ended up switching from Storage Spaces to DrivePool because it's really hard to diagnose Storage Spaces when it misbehaves. And when it does fail (and it will), it does so in spectacular fashion. I was bummed out about losing parity functionality, but DrivePool + Snapraid is fine for my use case.
    Anyway, I was able to use DrivePool with WSL2 (Docker) by mounting cifs volumes (windows shares). Here's an example:
    version: '3.7'
    services:
      sonarr:
        build: .
        container_name: sonarr
        volumes:
          - type: bind
            source: C:\AppData\sonarr\config
            target: /config
          - downloads:/downloads
        cap_add:
          - SYS_ADMIN
          - DAC_READ_SEARCH
    volumes:
      downloads:
        driver_opts:
          type: cifs
          o: username=user,password=somethingsecure,iocharset=utf8,rw,nounix,file_mode=0777,dir_mode=0777
          device: \\IP\H$$\Downloads
    Note that I'm building the image. This is because I need to inject cifs-utils into it. Here's the dockerfile:
    FROM linuxserver/sonarr
    RUN \
      apt update && \
      apt install -y cifs-utils && \
      apt-get clean
    There are security considerations with this solution:
    1. Adding the SYS_ADMIN capability to the docker container is dangerous.
    2. You need to expose your drives/folders on the network. Depending on how your windows shares are configured, this may be less than ideal.
    Hope this helps!
  13. 1 point

    My Rackmount Server

    So, nearly two and a half years down the line and a few small changes have been made:
    Main ESXi/Storage Server
    Case: LogicCase SC-4324S
    OS: VMWare ESXi 6.7
    CPU: Xeon E5-2650L v2 (deca-core)
    Mobo: Supermicro X9SRL-F
    RAM: 96GB (6 x 16GB) ECC RAM
    GFX: Onboard Matrox (+ Nvidia Quadro P400 passed through to Media Server VM for hardware encode/decode)
    LAN: Quad-port Gigabit PCIe NIC + dual on-board Gigabit NIC
    PSU: Corsair CX650
    OS Drive: 16GB USB Stick
    IBM M5015 SAS RAID Controller with 4 x Seagate Ironwolf 1TB RAID5 array for ESXi datastores (Bays 1-4)
    Dell H310 (IT Mode - passed through to Windows VM) + Intel RES2SV240 Expander for Drivepool drives (Bays 5-24)
    Onboard SATA Controller with 240GB SSD (passed through to Media Server VM)
    ESXi Server (test & tinker box)
    HP DL380 G6
    OS: VMWare ESXi 6.5 (custom HP image)
    CPU: 2 x Xeon L5520 (quad core)
    RAM: 44GB ECC DDR3
    2 x 750W Redundant PSU
    3 x 72GB + 3 x 300GB SAS drives (2 RAID5 arrays)
    Network Switch
    TP-Link SG-1016D 16-port Gigabit switch
    UPS
    APC SmartUPS SMT1000RMI2U
    Storage pools on the Windows storage VM now total 34TB (a mixture of 1, 2 and 4TB drives) and I've still got 6 bays free in the new 24-bay chassis for future expansion. There's always room for more tinkering and expansion, but no more servers unless I get a bigger rack!
  14. 1 point

    Optimal settings for Plex

    Nope. No need to change anything at all. Just use DrivePool to create a pool using your existing CloudDrive drive, expand your CloudDrive using the CloudDrive UI, format the new volume with Windows Disk Management, and add the new volume to the pool. You'll want to MOVE (not copy) all of the data that exists on your CloudDrive to the hidden directory that DrivePool creates ON THE SAME DRIVE, and that will make the content immediately available within the pool. You will also want to disable most if not all of DrivePool's balancers because a) they don't matter, and b) you don't want DrivePool wasting bandwidth downloading and moving data around between the drives.
    So let's say you have an existing CloudDrive volume at E:.
    First you'll use DrivePool to create a new pool, D:, and add E:.
    Then you'll use the CloudDrive UI to expand the CloudDrive by 55TB. This will create 55TB of unmounted free space.
    Then you'll use Disk Management to create a new 55TB volume, F:, from the free space on your CloudDrive.
    Then you go back to DrivePool and add F: to your D: pool. The pool now contains both E: and F:.
    Now you'll want to navigate to E:, find the hidden directory that DrivePool has created for the pool (ex: PoolPart.4a5d6340-XXXX-XXXX-XXXX-cf8aa3944dd6), and move ALL of the existing data on E: to that directory. This will place all of your existing data in the pool.
    Then just navigate to D: and all of your content will be there, as well as plenty of room for more. You can now point Plex and any other application at D: just like E: and it will work as normal. You could also replace the drive letter for the pool with whatever you used to use for your CloudDrive drive to make things easier.
    NOTE: Once your CloudDrive volumes are pooled, they do NOT need drive letters. You're free to remove them to clean things up, and you don't need to create volume labels for any future volumes you format either.
  15. 1 point
    Deactivate the license on the old system, move the drives over to the new system, install the software, and activate the license. That's it. The software will see the pooled drives and automatically recreate the pool. Duplication information will be retained, but balancing information won't be. You may want to reset the permissions on the pool, but that depends on whether you customized them or not. For StableBit Scanner, just deactivate the license and activate it on the new system. To do so:
    StableBit DrivePool 2.X/StableBit CloudDrive: Open the UI on the system that the software is installed on, click on the "Gear" icon in the top right corner, and select the "Manage license" option.
    StableBit Scanner: Open the UI on the system that Scanner is installed on. Click on "Settings" and select "Scanner Settings". Open the "Licensing" tab, and click on the "Manage license" link.
    This will open a window that shows you the Activation ID, as well as a big button to "Deactivate" the license. Once you've done this, you can activate the license on the new system. Otherwise, activate the trial period on the new system, and contact us at https://stablebit.com/contact and let us know.
  16. 0 points
    Christopher (Drashna)

    WSL 2 support

    Unfortunately, we don't have any plans on adding support for WSL2 at this time.

