XAlpha

Members · Posts: 33 · Days Won: 1
Everything posted by XAlpha

  1. Is this in the roadmap/strategy for future releases?
  2. Any updates? Lack of individual drive scans within a storage pool is the only thing preventing me from using them...
  3. XAlpha

    Empty json file

    Never mind - deleting it apparently resolves the issue after a service restart.
  4. Has there been any progress with allowing Scanner to see individual drives within a Storage Pool? SMART health and disk surface scanning would be awesome within a Storage Pool, and I believe this was planned at one time.
  5. So I'm primarily using ReFS storage pools on a fresh Server 2019 build with dedup turned on. There are several pools of individual drives in a Simple pool configuration, so that data will be duplicated between sets of storage pools, and then dedup is enabled for each pool. Great - it works beautifully with 9 internal drives in four pools. But I need to add a Drobo CloudDrive into the pool, which could be NTFS or ReFS. That's also fine, assuming the JSON isn't blank, but the StableBit wiki recommends against mixed-format pools for "compatibility reasons." Is this still an issue? What's the risk?
  6. XAlpha

    Empty json file

    I tried opening the JSON settings file in both Notepad and WordPad - both open the file, but it's all white, with no text or content. I'm using the fresh DP Beta 2.3.0.1278 on a fresh Server 2019 install. I tried opening it on a different computer, with the same results. The file is attached. Settings.json (A quick check of whether the file is truly empty is sketched after this list.)
  7. It's a Drobo NAS, and it's set up via CD as a Windows file share; I don't think Drobo NASes support iSCSI, and without a dedicated network connection it would trash the WiFi mesh network that it runs on. I'm using Server 2016, so I have some options, but I need something that can be added to DP. At one point Christopher mentioned that native access to the CD data was one of the most often requested features, and that was years ago. I'm still waiting... and hoping... As for the recovery, I have no idea why it takes so long. File transfers to/from the NAS run around 115 MB/sec, so I'm saturating the network connections. It shouldn't take that long, but it does, consistently. Router firmware updates are the worst.
  8. Yes, but I want it to duplicate as part of my DP. The total DrivePool size is 192 TB, and I don't want to manually determine which media files are copied to the NAS. I want a solution where the NAS is part of the DP and included in the duplicated files, and where I can specify which folders get duplicated onto the NAS. So I need a way to add a NAS to the DP and have it be readable from third-party devices. Example: the DrivePool has 5 folders, 2 of which are media folders. Four folders are set up for 2x duplication, and one folder is set up for 3x duplication. I want my NAS to count towards the duplication count and its spare capacity to be leveraged by the DP. I can set balancing rules so that only the media folders from the DP land on the NAS, or at least so that it's the primary location for them; then I don't have to store the files manually, copy them around when I want to watch them, and manage their duplication on top of that. I believe that was part of the original design intent of CD, or at least an option thereof. (A rough sketch of the capacity math for this kind of setup follows after this list.) And it takes way, way too long to rebuild/recover when CD loses network connectivity. Days is not okay, even for 13 TB.
  9. I need some help - I've been waiting for CloudDrive to allow non-encrypted storage. I have a main server with ~20 drives attached, all using DrivePool, and I have a media player at my TV that physically connects to a local mesh access point. The speed between the mesh access points is good, but I still get a lot of "buffering" when streaming from the DrivePool on the server. So when CloudDrive first came out, I was like: great! I can install a local NAS device, have the DrivePool server put media on the local NAS via CloudDrive, and the server would count it towards duplicated storage, while the media player could play straight off the local NAS. Except it can't, because the media player can't read anything in the CloudDrive. My second issue with the encryption is that occasionally a hiccup in the mesh network, a power flicker, or whatever causes a disconnect between the server and the NAS. Recovering 13 TB on a NAS takes days, and during those days my DrivePool won't allow writes. So - while yes, it would defeat the purpose of cloud storage to allow turning off encryption, it would be incredibly handy if NAS and Windows file shares could use non-encrypted storage. Can it be done? I've been watching this thread and waiting for this capability for 5+ years.
  10. Bump - any updates? I've been waiting since CD came out to do this... would love to set up a NAS to do this.
  11. Is there an update for this? This would be great for remote NAS devices being used for local media caches.
  12. Measured and rebalancing with 701, but the GUI has some glitches. And for whatever reason, when I reboot the server, any volumes without drive letters lose their labels and won't show up again in the pool until I assign a drive letter and restart the DP service.
  13. I may be missing the issue - what's the downside of having both types in the pool?
  14. The issue also seems to involve paths longer than 255 characters... Win10 is using the pool for File History, and some of those directory structures get pretty long. That also caused the measurements to hang. I'm waiting for the pool to "heal" & rebalance after removing the long file/directory names, along with re-populating the dedup'd drives. That will probably take a week or so... so stay tuned. (A small script for finding over-length paths is sketched after this list.)
  15. For the future, can we make sure both types work within the pool? I have data-aware SANs with tiered data on them that has to stay in NTFS to maintain compatibility with the SANs, but those SANs are pooled with stand-alone drives that are ReFS. That's working as it is, but if the pool had to use a single file system, it would prevent me from operating the pool as it is, and I could never move to ReFS in the foreseeable future...
  16. Ditto. I had to undo all deduplication (that took about a week). DP does not like dedup yet, so it would seem.
  17. With build 692 I seem to be having issues with dedup - it's set up on a couple of the larger disks, and direct drive access through Windows is working fine. However, DP will not successfully finish measuring the pool. It will measure all the drives, except the drives with dedup enabled.
  18. XAlpha

    Measuring Speed

    I was using StableBit.DrivePool_2.2.0.652_x64_BETA.exe on WHS2011. I just upgraded to 655.
  19. XAlpha

    Measuring Speed

    "Measuring" is taking a very, very long time (days). Is there a setting that would allow it to measure multiple drives at once or to increase the threads to get this done faster? DP Pool is ~90 TB with 30-32 drives.
  20. XAlpha

    Duplication Warnings

    Remeasuring with 561 didn't fix it. And the problem originally occurred while using 634. I've re-installed 634; the pool has been measuring for over 24 hours. I'll let you know...
  21. XAlpha

    Duplication Warnings

    Yes - even after the file and folder were removed, and I had checked all the individual drives (the file and folder were gone from every individual drive), the duplication warning still persisted.
  22. XAlpha

    Duplication Warnings

    Load cycle count, and yes, I'm using Scanner. No bad sectors; Scanner reports the drive as healthy apart from the SMART warning. And maybe? But I'm not sure, as I successfully moved the folder off the pool onto a Drobo, and even with the folder gone and not in the pool, DP continued giving me the duplication error for that folder & file.
  23. XAlpha

    Duplication Warnings

    Kind of - one of the drives threw a SMART warning, and I have been having issues getting the pool to complete measuring, checking, and duplicating. Assuming the two might be related, I increased the duplication from 2x to 3x until the problem could be isolated, and to ensure duplication was re-checked for all folders. And then I got that warning. I'm currently on Windows 8.1 Pro. I had previously used WHS 2011 and saved the directory in case I needed one of the backups. So no, the files were not in use by the OS at the time of this warning.
  24. XAlpha

    Duplication Warnings

    Never mind - I recreated the directory with a dummy text file with the same file name, and that resolved the discrepancy. It's still odd that it happened to begin with.
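
Regarding the blank Settings.json in post 6 above: below is a minimal Python sketch, nothing official, for checking whether such a file is genuinely zero bytes or has been padded with NUL bytes (a common result of an unclean shutdown, which editors render as a blank white page). The file path is a placeholder; point it at the actual Settings.json.

    # Minimal sketch: is the "blank" JSON file zero bytes, NUL-filled, or
    # does it contain text that the editor simply failed to show?
    from pathlib import Path

    settings = Path("Settings.json")  # placeholder; use the real file path

    data = settings.read_bytes()
    print(f"Size: {len(data)} bytes")
    if not data:
        print("File is zero bytes - there is nothing to recover.")
    elif set(data) == {0}:
        print("File is entirely NUL bytes - the contents were lost, not hidden.")
    else:
        print("File has real content; first 80 bytes shown below:")
        print(data[:80])

If the file turns out to be empty or NUL-filled, deleting it and restarting the service (as post 3 notes) apparently resolves the issue.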
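
Regarding the duplication layout in post 8 above: a back-of-envelope sketch of the space math when a NAS counts toward duplication. The folder names, sizes, and duplication factors below are made up for illustration; they are not the actual pool's numbers.

    # Hypothetical example: raw capacity needed for a set of pooled folders
    # at different duplication levels, and how much of that total consists of
    # duplicate copies of the media folders - the share a NAS could absorb
    # if balancing rules steer media duplicates onto it.
    folders_tb = {
        # name: (size in TB, duplication factor)
        "Media-A":   (40, 2),
        "Media-B":   (30, 3),
        "Backups":   (20, 2),
        "Documents": (5, 2),
        "Photos":    (10, 2),
    }

    total_raw = sum(size * dup for size, dup in folders_tb.values())
    media_dupes = sum(size * (dup - 1)
                      for name, (size, dup) in folders_tb.items()
                      if name.startswith("Media"))

    print(f"Raw capacity needed for all folders: {total_raw} TB")
    print(f"Duplicate copies of the media folders alone: {media_dupes} TB")

With these made-up numbers, 100 TB of the 240 TB of raw usage is duplicate media data, which is the portion a NAS with enough spare capacity could hold.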
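
Regarding the over-length paths in post 14 above: a small Python sketch that lists files and folders whose full path meets or exceeds the classic 260-character Windows MAX_PATH limit. The root path is a placeholder; in practice it would be pointed at the pooled drive or folder being cleaned up.

    # Sketch: find paths at or over the classic Win32 MAX_PATH limit.
    import os

    MAX_PATH = 260  # classic limit; lower this to flag paths that are merely close

    def find_long_paths(root):
        """Yield (length, full_path) for entries whose full path is too long."""
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                full = os.path.join(dirpath, name)
                if len(full) >= MAX_PATH:
                    yield len(full), full

    if __name__ == "__main__":
        root = "D:\\"  # placeholder; point at the drive or folder to scan
        for length, path in sorted(find_long_paths(root), reverse=True):
            print(f"{length:4d}  {path}")

Shortening or relocating anything this turns up is the same cleanup described in post 14, just automated.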