All Activity

  1. Today
  2. Shane

    Drive question.

    When you say chkdsk, was that a basic chkdsk or a chkdsk /b (performs bad sector testing)? I think I'd try - on a different machine - cleaning (using the CLEAN ALL feature of command line DISKPART), reinitialising (disk management) and long formatting (using command line FORMAT) it to see whether the problem is the drive or the original machine. If I didn't have a different machine, I'd still try just in case CLEAN ALL got rid of something screwy in the existing partition structure. I'd then run disk-filltest (a third party util) or similar on it to see if that shows any issues. If it passes both, I'd call it good. If it can only be quick-formatted, but passes disk-filltest, I'd still call it okay for anything I didn't care strongly about (because backups, duplications, apathy, whatever). If it fails both, it's RMA time (or just binning it). YMMV, IMO, etc. Hope this helps!
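    For reference, roughly what that sequence looks like from an elevated Command Prompt (a hypothetical walk-through only - it assumes the suspect drive shows up as Disk 2 and gets the letter X:, so double-check the disk number first, because CLEAN ALL destroys everything on it):

        diskpart
        list disk
        select disk 2
        clean all
        convert gpt
        create partition primary
        assign letter=X
        exit
        format X: /FS:NTFS /V:TEST
        chkdsk X: /B

    Leaving /Q off the FORMAT line gives the full (long) format, and /B on CHKDSK re-evaluates bad sectors. If both of those pass, a fill-and-verify pass with disk-filltest (or similar) is the final sanity check.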
  3. Yesterday
  4. fleggett1

    Drive question.

    What do you guys think about a drive that fails a long format (for unknown reasons), but passes all of Scanner's tests, isn't throwing out SMART errors, and comes up fine when running chkdsk?
  5. I recently replaced two 4TB drives that were starting to fail with two 8TB drives, one at a time over a span of 4 days. The first drive replacement was flawless. After the second, I found an unallocated 2048.00 GB Disk 6 (screenshot 1). When looking at it via DrivePool's interface it shows up as a COVECUBEcoveFsDisk__ (Picture 2). Windows wants to initialize it. I don't want to lose data, but I'm not entirely sure what's going on. Any insight or instructions on how to fix it? Thank you. *Update: My DrivePool is only 21.8TB large, so I think it is missing some space. Forgive the simple math: 8+8+4+4 is about 24TB and I'm showing a DrivePool of 21.8TB. That's about what my unallocated space is, give or take.
  6. Last week
  7. Sorry, I didn't mention: Upload verification was disabled. I opened a ticket.
  8. If that's 100GB (gigabytes) a day then you'd only get about another 3TB done by the deadline (100gb - gigabits - would be much worse), so unless you can obtain your own API key to finish the other 4TB post-deadline (and hopefully Google doesn't do anything to break the API during that time), that won't be enough to finish before May 15th. So with no way to discern empty chunks I'd second Christopher's recommendation to instead begin manually downloading the cloud drive's folder now (I'd also be curious to know what download rate you get for that).
  9. I have been using robocopy with move and overwrite. It 'deletes' the file from the cloud drive: it's not shown any longer, and the 'space' is shown to be free. The mapping is being done to some degree, even as a read-only path. If there were some way to scan the Google Drive path and 'blank' it - as in, bad-map it locally in the BAM map - that would force a deletion of the relevant files from the Google path; this would do wonders. GlassWire reads that I'm getting about 2mb down. This is nowhere near the maximum, but it looks like I get about 100gb a day.
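    As an illustration of the kind of robocopy move described above (a sketch only; G: and D:\Evacuated are placeholder paths, not the actual ones used):

        robocopy G:\ D:\Evacuated /E /MOV /R:2 /W:5

    /E walks the whole tree and /MOV deletes each file from the source (the cloud drive) once it has been copied, which is what frees the space; /R and /W just keep retries short.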
  10. That's a hard question to answer. SSDs tend to be fast, including in how they fail. But given this, I suspect that you might have some time. However, back up the drive, for sure! Even if you can't restore/clone the drive, you'd be able to recover settings and such from it. But ideally, you could restore from it. And cloning the drive should be fine. Most everything is SSD-aware these days, so that shouldn't be an issue. But also, sometimes a clean install is nice.
  11. Would you mind opening a ticket about this? https://stablebit.com/contact/ This is definitely diving into the more technical aspects of the software, and I'm not as comfortable with how well I understand it, and would prefer to point Alex, the developer, to this discussion directly. However, I think that some of the "twice" is part of the upload verification process, which can be changed/disabled. Also, the file system has duplicate blocks enabled for the storage, for extra redundancy in case of provider issues (*cough* google drive *cough*). But it also sounds like this may not be related to that.
  12. This is mainly an informational post, concerning Windows 10. I have 13 drives pooled and I have every power management function set so as to not allow Windows to control power or in any way shut down the drives or anything else. I do not allow anything on my server to sleep either. I received a security update from Windows about 5 days ago. After the update I began receiving daily notices that my drives were disconnected. Shortly after any of those notices (within 2 minutes) I received a notice that all drives had been reconnected. There were never any errors resulting from whatever triggered the notices. I decided to check and found that one of my USB controllers had its power control status changed. I changed it back to not allowing Windows to control its power and I have not received any notices since. I do not know for sure, but I am 99% sure that the Windows update toggled that one controller's power control status to allow Windows to turn it off when not being used. I cannot be 100% sure that I have always had it turned off but, until the update, I received none of the notices I started receiving after the update. I suggest, if anyone starts receiving weird notices about several drives becoming lost from the pool, that you check the power management status of your drives. Sometimes Windows updates are just not able to resist changing things. They also introduce gremlins. You just have to be careful not to feed them after midnight, and under no circumstances should you get an infested computer wet.
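    For anyone who wants to spot-check this without clicking through Device Manager, one possible way (a sketch; the root\wmi MSPower_DeviceEnable class is what sits behind the "Allow the computer to turn off this device to save power" checkbox) is a PowerShell query like:

        Get-CimInstance -Namespace root\wmi -ClassName MSPower_DeviceEnable |
            Select-Object InstanceName, Enable |
            Sort-Object InstanceName

    Enable = True means Windows is allowed to power that device down; match the InstanceName against the device instance path shown in Device Manager to find your USB controllers.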
  13. Make sure that you're on the latest release version. There are some changes to better handle when the provider is read only (should be at least version 1.2.5.1670). As for the empty chunks, there isn't really a way to discern that, unfortunately. If you have the space, the best bet is to download the entire directory from Google Drive, and convert the data/drive, and then move the data out.
  14. I'm not aware of a way to manually discern+delete "empty" CD chunk files. @Christopher (Drashna) is that possible without compromising the ability to continue using a (read-only) cloud drive? Would it prevent later converting the cloud drive to local storage? I take it there's something (download speeds? a quick calc suggests 7TB over 30 days would need an average of 22 Mbps, but that's not including overhead) preventing you from finishing copying the remaining uncopied 7TB on the cloud drive to local storage before the May 15th deadline?
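    (Working that quick calc through: 7 TB is roughly 56,000,000 megabits, 30 days is about 2,592,000 seconds, and 56,000,000 / 2,592,000 ≈ 21.6, so call it 22 Mbps sustained, before overhead.)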
  15. I need some assistance, fast. About a year ago, Google changed their storage policy. I had exceeded what was then the unlimited storage by about 12 TB. I immediately started evacuating it. I only got it down 10 gig from 16, and Google suspended me with read-only mode. It can delete, but it cannot write. I have continued to download the drive. It is now down to about 7 TB so far as the pool drive is concerned, but unless DrivePool has a mode to 'bad map' and delete the sector files from the pool, I am still at 12 TB and way over its capacity. I do appreciate that CloudDrive is discontinuing Google Drive, and I still have some time, but I need to delete as much from the Google path as possible, and I can't do that without being able to write back to it, which I cannot do. A 'delete bad/read-only but empty' option would be a big help.
  16. Thanks, fortunately no issues once all drive scans completed...but it took a while. Potentially related question - and I can post a new ticket if you prefer: If I upgrade my server to W11 from W10, is there anything I need to be concerned about re: DrivePool/Scanner? I'm assuming all should migrate over w/o any other work required? I also just noticed there are new versions for both (see screenshot). Haven't updated in ages, but it looks like I can just follow the "click here" prompts to update? Should I shut down the software first?
  17. Hi, isn't there some prioritization taking place under the hood when deciding which chunk to upload first? I just did a few experiments with Google Cloud Storage and 100 MB chunk size, cache size 1 GB (initially empty except pinned metadata and folders), no prefetch, latest public CloudDrive: a) pause upload, b) copy a 280 MB file to the cloud drive, c) resume upload. With this sequence, the whole plan of actions seems to be well defined before the actual transfer starts, so there is plenty of opportunity for CloudDrive to batch, queue in order, etc. Observing the "Technical Details" window for the latest try, the actual provider I/O (in this order) was:

        Chunk 3x1 Read 100 MB because "WholeChunkIoPartialMasterRead", length 72 MB
        Chunk 3x1 Write 100 MB because "WholeChunkIoPartialMasterWrite", length 72 MB
        Chunk 4x1 Write 100 MB because "WholeChunkIoPartialMasterWrite", length 80 MB
        Chunk 10x1 Read 100 MB because "WholeChunkIoPartialMasterRead", length 4 KB, + 3 "WholeChunkIoPartialSharedWaitForRead" with a few KB each (4 KB, 4 KB, 8 KB)
        Chunk 10x1 Write 100 MB because "WholeChunkIoPartialMasterWrite", length 4 KB, + 3 "WholeChunkIoPartialSharedWaitForCompletion" with a few KB each (4 KB, 4 KB, 8 KB)
        Chunk 0x1 Read 100 MB because "WholeChunkIoPartialMasterRead", length 4 KB
        Chunk 0x1 Write 100 MB because "WholeChunkIoPartialMasterWrite", length 4 KB
        Chunk 4x1 Read 100 MB because "WholeChunkIoPartialMasterRead", length 23 MB
        Chunk 4x1 Write 100 MB because "WholeChunkIoPartialMasterWrite", length 23 MB
        Chunk 5x1 Write 100 MB, length 100 MB
        Chunk 6x1 Write 100 MB because "WholeChunkIoPartialMasterWrite", length 11 MB
        Chunk 10x1 Read 100 MB because "WholeChunkIoPartialMasterRead", length 16 KB, + 4 "WholeChunkIoPartialSharedWaitForRead" with a few KB each (4 KB, 4 KB, 4 KB, 12 KB)
        Chunk 10x1 Write 100 MB because "WholeChunkIoPartialMasterWrite", length 16 KB, + 4 "WholeChunkIoPartialSharedWaitForCompletion" with a few KB each (4 KB, 4 KB, 4 KB, 12 KB)

    So my questions / suggestions / hints at things that shouldn't happen, in my opinion:

    Chunk 10x1 is obviously just filesystem metadata or something; it's a few KB, for which a 100 MB chunk has to be downloaded and uploaded - so far, so unavoidable (as described here). Now the elephant in the room: why is it downloaded and uploaded TWICE? The whole copy operation and all changes were clear from the beginning of the transmission (that's what I paused the upload for, until copying completely finished). OK, maybe Windows decided to write some desktop.ini or similar while CloudDrive was doing the work. But then why did it have to be read again and wasn't in the cache on the second read? Caching was enabled with enough space, and metadata pinning was enabled, so shouldn't it be one of the first chunks to cache?

    Why is chunk 4x1 uploaded TWICE (2 x 100 MB), with 80 MB of productive data the first time and 23 MB the second?! Isn't this an obvious candidate for batching?

    If chunk 5x1 is known to be fully new data (100 MB of actual upload worth), why does it come after 3x1, 4x1 and 10x1, which were all only "partial" writes that needed the full chunk to be downloaded first, only to write the full chunk back with only a fraction of it actually being new data? Wouldn't it be more efficient to upload completely new chunks first? Especially the filesystem chunks (10x1 and 0x1, I'm looking at you) are very likely to change *very* often, so prioritizing them (with 2 x 99 MB of wasted transferred bytes) over 100 MB of actual new data (e.g. in chunk 5x1) seems a bad decision for finishing the job fast, doesn't it? Also, each upload triggers a new file version of 100 MB at e.g. Google Cloud Storage, which gets billed (storage, early deletion charges, ops...) without any actual benefit for me.

    So regarding network use (which is billed by cloud providers!): Naive point of view: I want to upload 280 MB of productive data. Justified because of chunking etc.: 300 MB download (partial chunks 0x1, 3x1, 10x1) + 600 MB upload (4x1, 5x1, 6x1, 0x1, 3x1, 10x1). Actually transferred in the end: 500 MB download + 800 MB upload. That's 66% and 33% more than needed, respectively.
  18. Wouldn't it be more efficient to just load the parts of the chunk that were not modified by the user, instead of the whole chunk? One could on average save half the downloaded volume, if I'm correct. Egress is expensive with cloud providers. 😅 Think: I modify the first 50 MB of a 100 MB chunk. Why is the whole 100 MB chunk downloaded just to overwrite (= throw away) the first 50 MB after downloading?
  19. hello

    1. Enable 'show hidden files and folders' in File Explorer settings. This will allow you to see the hidden 'PoolPart.xxxx' folder on each of the underlying drives that comprise your pool.
    2. In File Explorer, navigate to the PoolPart.xxxx folder on each underlying drive where the file in question is stored, find the file, right-click it and choose 'copy' (do not just drag-n-drop). Paste the files elsewhere outside the pool (you may need to rename one of them if copying both to the same folder/directory). You have now safely (for the moment) backed up the file(s) as per your original post.
    3. Open each file on each underlying drive and determine which one you want to keep. Then delete the file from the underlying drive that you DO NOT want to keep. Then right-click COPY the good file from your backup location and paste it right back to where you just deleted the bad file. (Make sure both files on each underlying drive are named the same.)
    4. In the DrivePool error/notification box, choose the 'fix it myself' option, or hit the X in the corner and just close the box.
    5. In the DrivePool GUI: upper-right corner cog wheel with down-pointing arrow > Troubleshooting > Recheck duplication...
    6. Wait for it to finish and voilà... you're done.

    This is a micro-managey way to do it, but safer than allowing DrivePool to default to keeping the newer file, which may in fact be the bad one. cheers
  20. My DrivePool reports inconsistent files in a duplicated folder, probably as a result of an occasional system crash. How can I save both copies of the DrivePool-stored duplicates before I start the resolving process? I do not want to copy the whole drive or so.... Result of "dpcmd check-pool-fileparts "L:\_FAMILY\00_ALLE\_Familie" 4" - relevant extract:

        ** [2x/2x] L:\_FAMILY\00_ALLE\_Familie\Karteiordner.txt (128 KB - 131.550 B)
        -> \Device\HarddiskVolume8\PoolPart.cc193de9-fd45-41f6-8a68-37835871574d\_FAMILY\00_ALLE\_Familie\Karteiordner.txt [Device 0]
        -> \Device\HarddiskVolume7\PoolPart.4a0c374a-4b21-4362-acb1-712dd4230fe7\_FAMILY\00_ALLE\_Familie\Karteiordner.txt [Device 5]

        Summary of dpcmd:
        Directories: (4) - [Streams: (0) 0 B (0 B)]
        Directory parts: (8) - [Streams: (0) 0 B (0 B)]
        Files: (16) 8,43 MB (8.836.169 B) - [Streams: (11) 890 B (890 B)]
        File parts: (32) 16,9 MB (17.672.338 B) - [Streams: (22) 1,74 KB (1.780 B)]
        File parts by pool part UID:
        - 4a0c374a-4b21-4362-acb1-712dd4230fe7: x2 - (16) 8,43 MB (8.836.169 B) [Streams: (11) 890 B (890 B)]
        - cc193de9-fd45-41f6-8a68-37835871574d: x2 - (16) 8,43 MB (8.836.169 B) [Streams: (11) 890 B (890 B)]
        Directory parts by pool part UID:
        - 4a0c374a-4b21-4362-acb1-712dd4230fe7: x2 - (4) [Streams: (0) 0 B (0 B)]
        - cc193de9-fd45-41f6-8a68-37835871574d: x2 - (4) [Streams: (0) 0 B (0 B)]
        * Inconsistent directories: 0
        * Inconsistent files: 1
        * Missing file parts: (0) 0 B (0 B)
        * Some attributes were inconsistent, StableBit DrivePool has been notified.
        ! Error reading directories: 0
        ! Error reading files: 0
  21. Earlier
  22. Thanks for the reply! Mmm, yep, I thought that would be the only option... but do I have time? Do you think I can wait about 15 days? And to replace the drive, can I clone this old 980 onto a new 990? Or is it better to install everything again?
  23. The problem is Aomei Backupper Pro doesn't see the PoolParts.xxxxxxx (as it's hidden). I think I'll stick with my method as mentioned above; far less hassle. Thanks anyway
  24. In general, this shouldn't happen, but certain scenarios seem to trigger this more, namely the large semi-annual updates for Windows 10/11. But if you do see this happening, open a ticket, then run the StableBit Troubleshooter, and let us know when it's complete.
  25. Given the read errors and the NVMe Health warning, I would definitely recommend replacing the drive.
  26. Mostly, it's just accessing the disks via the PoolPart folders. Otherwise, it is theoretically possible: you could use DISM tools to install the drivers for the pool into the WinPE image, but you'd also need to mount the image and manually add the DrivePool service. This is complicated, and I'm not entirely sure that it will work.
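    For the DISM portion, the general shape would be something like the following (a sketch only; the image path, mount directory and driver folder are placeholders, and it does not cover adding the DrivePool service, which is the part I'm least sure about):

        dism /Mount-Image /ImageFile:C:\WinPE\media\sources\boot.wim /Index:1 /MountDir:C:\WinPE\mount
        dism /Image:C:\WinPE\mount /Add-Driver /Driver:"C:\Drivers\DrivePool" /Recurse
        dism /Unmount-Image /MountDir:C:\WinPE\mount /Commit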
  27. I use Aomei Backupper Pro to create a system image of Windows 10 Pro into the same PC's drivepool. I ran a test restore on this Windows 10 PC from Aomei, but on boot into the WinPE environment the drivepool isn't listed, only the HDDs that make up the pool. Is there a way to access the contents of my drivepool to allow a system restore? My workaround at the moment is to place a copy of the system backup image from the drivepool onto one of the HDDs that belong to the pool, but outside the PoolPart.xxxxxxxxxxx folder, thus making it visible to the backup software.
  28. Hi all! Last night StableBit Scanner sent me 2 emails saying that my 980 Pro is damaged. It's the NVMe where Windows is installed. This is the report: And this is the NVMe Health report: It's the first time this has happened, and I don't know what to do. Should I start the file scan? What else can I do? Thanks for the support!