Covecube Inc.

JohnKimble

Members · Content Count: 18 · Days Won: 2
  1. I think I found the source of all my problems: because I had no more free SATA ports on my motherboard, I bought a "Delock PCI Express Card > Hybrid 4 x internal SATA 6 Gb/s RAID". I removed it, and so far so good. Delock is going on my blacklist; I also had a Delock USB 3.0 to SATA adapter that was shady as hell. Never again am I going to expose my data to cheap/unknown brands... Now I have to reset StableBit Scanner, because I'm pretty sure all these "unreadable sectors/blocks" are perfectly fine. A Dell H310 with an Intel RES2SV240 is in transit; good thing I already ordered it, since it can take 15 days to get here.
  2. I hope someone can help me pinpoint the problem I am having with my server. Basically, whenever I RDP into my server I see that it has randomly rebooted. Searching through Event Viewer I find errors/warnings like this: Now I am in the process of removing Disk 3 from my DrivePool, but can a bad disk in my pool cause my entire system to crash? Or am I looking at hardware trouble elsewhere?
  3. Hi, I enjoy StableBit Scanner in combination with DrivePool. I recently switched from a Fractal Design R5 to a 24-bay hot-swap rack-mount case, since the R5 was so filled up I could barely get the side panel on. In the new case the fans can't push air over the drives; they have to pull it in from the front, which is less efficient than the R5. I don't want to keep the 3 inner 120mm fans running at 100%, since that results in jet-engine noise. StableBit Scanner can read the HDD temps, so is there any software out there that can dynamically adjust the fan speeds based on the temperature of the HDDs? Let's say at 45 degrees I want them to fully blast to keep the drives within a normal temperature. In the R5 the drives sat at 30 degrees (Celsius); now they are pushing 40-50 since they are stacked closer together.
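For what it's worth, the temperature-to-fan-speed logic is simple to script. The sketch below is only an illustration, assuming smartctl (from smartmontools) is installed to read drive temps; StableBit Scanner has no documented temperature API that I know of, and the actual fan-speed write depends entirely on your motherboard or fan controller (tools like SpeedFan, or IPMI on server boards, handle that part):

```python
import subprocess

def read_hdd_temp(device):
    """Read a drive's temperature with smartctl (assumes smartmontools
    is installed). Returns degrees Celsius, or None if not found."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        # SMART attribute 194 (Temperature_Celsius) on most drives;
        # the raw value is column 10 of smartctl's attribute table
        if "Temperature_Celsius" in line:
            return int(line.split()[9])
    return None

def temp_to_fan_percent(temp_c, low=30, high=45):
    """Linear fan curve: quiet floor at/below `low` C, 100% at/above
    `high` C (the "fully blast" threshold from the post)."""
    if temp_c <= low:
        return 20   # keep a little airflow even when cool
    if temp_c >= high:
        return 100
    return int(20 + (temp_c - low) / (high - low) * 80)
```

You would poll `read_hdd_temp` for each drive every minute or so, take the hottest value, and feed `temp_to_fan_percent` into whatever fan-control interface your board exposes.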
  4. From here: http://blog.covecube.com/2017/09/stablebit-drivepool-2-2-0-847-beta/ I am a bit confused. This example is about existing data that is already stored locally. But let's say I want to add a CloudDrive as backup within a new hierarchical pool; do I need to move everything from the current pool into the hierarchical pool? (Since it is a new pool, I guess so.) Is there a way to leave my 40 TB of data in place and add a CloudDrive as a backup without moving the data at all?
  5. Well, it works again here, but my download speed went from 200 Mbit to around 20 Mbit max. Going to be fun restoring a backup at this speed.
  6. So far only the WebUI works; I can't do anything else. CloudDrive doesn't even connect anymore. Not sure why; it had been solid since I first started using it, and now all of a sudden it's unusable. I tried Google Drive File Stream (their own product): same thing. I have a G Suite account of which I'm the only user.
  7. Been getting those since this morning as well. It basically started when Windows decided to reboot for an update; my drive got closed uncleanly and started re-uploading my entire cache again. The log is getting flooded with these: Exception: CloudDriveService.Cloud.Providers.Apis.GoogleDrive.GoogleDriveHttpProtocolException: User rate limit exceeded. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden. at System.Net.HttpWebRequest.GetResponse() at CloudDriveService.Cloud.Providers.Apis.Base.Parts.HttpApi.HttpApiBase.GetJsonResponse[T](HttpWebRequest request) at CloudDriveService.Cloud.Providers.Apis.GoogleDrive.Files.<>c__DisplayClass14_0.<UploadNew>b__1(HttpWebRequest request) at CloudDriveService.Cloud.Providers.Apis.Base.Parts.HttpApi.OAuth2HttpApiBase`1.<>c__DisplayClass6_0`1.<RequestBlock>b__0(HttpWebRequest request) at CloudDriveService.Cloud.Providers.Apis.Base.Parts.HttpApi.HttpApiBase.<>c__DisplayClass19_0`1.<RequestBlock>b__0() --- End of inner exception stack trace --- at CoveUtil.RetryBlock.Run[TException,TResult](Func`2 Func, Action`1 Control, Boolean ThrowOnOperationCanceled) at CoveUtil.RetryBlock.Run[TException,TResult](Func`1 Func, Action`1 Control) at CloudDriveService.Cloud.Providers.Apis.Base.Parts.HttpApi.HttpApiBase.RequestBlock[TResponse](String fullUrl, Func`2 func) at CloudDriveService.Cloud.Providers.Apis.Base.Parts.HttpApi.OAuth2HttpApiBase`1.RequestBlock[TResponse](String fullUrl, Func`2 func) at CloudDriveService.Cloud.Providers.Apis.GoogleDrive.Files.UploadNew(String fileName, Stream buffer, String parentFolderId, Action`1 generatedId) at CloudDriveService.Cloud.Providers.Io.GoogleDrive.GoogleDriveIoProvider.#bjg(#UBf #rxd, ChunkInfo #fAe, #Xtf #gb) at CloudDriveService.Cloud.Providers.Io.ChunkIo.ChunkIdIoProvider`2.#4af.#8tf(#UBf #rxd) at CloudDriveService.Cloud.Providers.Io.ChunkIo.Helpers.ChunkId.ChunkIdHelper.#Gqe(UInt64 #HAe, Action`1 #wBf) at 
CloudDriveService.Cloud.Providers.Io.ChunkIo.ChunkIdIoProvider`2.#Gqe(ChunkInfo #fAe, #Xtf #gb) at #Iqe.#PAe.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#UEf.#Saf.#9Lf(Stream #oYb) at #AKf.#zKf.#yKf(ChunkInfo #fAe, Stream #gb, Action`1 #wBf) at #Iqe.#UEf.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#9Qg.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#6Uf.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#3Pf.#jbf.#9Lf(Stream #oYb) at #AKf.#zKf.#yKf(ChunkInfo #fAe, Stream #gb, Action`1 #wBf) at #Iqe.#3Pf.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#NRf.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#LCf.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#1Sf.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hzf.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#CAe.#Saf.#9Lf(Stream #oYb) at #AKf.#zKf.#yKf(ChunkInfo #fAe, Stream #gb, Action`1 #wBf) at #Iqe.#CAe.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#vzf.#U9e.#9Lf(Stream #EQf) at #AKf.#zKf.#yKf(ChunkInfo #fAe, Stream #gb, Action`1 #wBf) at #Iqe.#vzf.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#UAe.#adf.#9Lf(Stream #oYb) at #AKf.#zKf.#yKf(ChunkInfo #fAe, Stream #gb, Action`1 #wBf) at #Iqe.#UAe.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hzf.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hzf.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Iqe.#TEf.#OLf(ChunkInfo #fAe, Stream #gb) at #Iqe.#Hqe.#Gqe(ChunkInfo #fAe, Stream #gb) at #Kqe.#Tqe.#Wc(UInt64 #Y, Int32 
#59f, Stream #vAe, IoType #NOe) at CloudDriveService.Cloud.IoManager.#Wc(WriteRequest #89f, RetryInformation`1 #7tf) at CloudDriveService.Cloud.IoManager.#iTf.#3bf(RetryInformation`1 #7tf) at CoveUtil.RetryBlock..(RetryInformation`1 ) at CoveUtil.RetryBlock.Run[TException,TResult](Func`2 Func, Action`1 Control, Boolean ThrowOnOperationCanceled)
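For what it's worth, the standard client-side remedy for Google's 403 "User rate limit exceeded" response is exponential backoff with jitter, which is presumably what CloudDrive's internal RetryBlock is doing. A minimal sketch of the idea (RateLimitError is a stand-in I made up for the 403 response above):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the 403 'User rate limit exceeded' response."""

def with_backoff(func, max_tries=5, base_delay=1.0):
    """Retry `func` with exponential backoff plus jitter: wait roughly
    1s, 2s, 4s, ... between attempts, re-raising after the last one."""
    for attempt in range(max_tries):
        try:
            return func()
        except RateLimitError:
            if attempt == max_tries - 1:
                raise
            # double the delay each attempt, plus proportional random jitter
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)
```

The jitter matters: many clients retrying in lockstep after a rate-limit burst will just trip the limit again at the same instant.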
  8. Adjusting "CloudFsDisk_MaximumConsecutiveIoFailures" helped a bit; I don't want to overdo it. Once or twice a week it still disconnects, usually at exactly the same time, 8:00 in the morning here, so I guess it's on Google's end (a short disruption of their service)? My internet has been rock solid so far. Too bad you can't make a script that checks whether the drive is detached and then checks your internet connection before attempting a re-attach; that would make my life a bit easier, but hey, first-world problems. I've now made a habit of quickly checking whether CloudDrive is still working every time I wake up.
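The "check the internet connection before re-attaching" half of that wish is easy to script; only the re-attach step itself is the problem, since CloudDrive doesn't document a public CLI as far as I know. A sketch, with the re-attach left as an explicit placeholder:

```python
import socket
import time

def internet_up(host="8.8.8.8", port=53, timeout=3):
    """Cheap connectivity probe: can we open a TCP connection to a
    public DNS server? Returns True/False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def wait_for_internet(poll_seconds=30):
    """Block until the connectivity probe succeeds."""
    while not internet_up():
        time.sleep(poll_seconds)

def reattach_drive():
    # Placeholder: CloudDrive has no documented CLI to call here, so this
    # would need whatever automation hook you can get (a UI macro, or a
    # command from StableBit support).
    print("re-attach CloudDrive here")
```

Run as a scheduled task: detect the detach (e.g. by checking whether the drive letter is present), call `wait_for_internet()`, then `reattach_drive()`.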
  9. Hmm, you were right; apparently SABnzbd was hogging all my memory for some reason, and my OS drive was close to full. Fixed this; it wasn't a CloudDrive issue.
  10. Once every X days my CloudDrive disconnects from G Suite (my internet connection hasn't dropped; it's quite stable, but the error said it can't read from G Suite after 5 retries), so I have to manually click re-attach after logging in to my server. My question is whether there is a way to do this automatically, or to get notified on critical errors?
  11. Every once in a while I get this error: What does it mean, and how can I prevent it?
  12. Does CloudDrive automatically retry the connection?
  13. JohnKimble

    Is there a way?

    To see what files CloudDrive is currently downloading, or which process is trying to read from the CloudDrive? It would be handy to see, because something is reading from the drive and causing constant prefetching.
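Short of watching Resource Monitor or Sysinternals Process Monitor by hand, a small script with the third-party psutil package can list which processes currently hold files open on the CloudDrive's drive letter (X: below is an assumed letter; it won't catch reads that don't keep a handle open):

```python
def on_drive(path, drive_letter="X:"):
    """True if `path` lives on the given drive letter (case-insensitive)."""
    return path.upper().startswith(drive_letter.upper())

def readers_of_drive(drive_letter="X:"):
    """List (pid, name, path) for every open file on `drive_letter`.
    Requires `pip install psutil`; run elevated to see other users'
    processes."""
    import psutil  # third-party; imported here so on_drive stays standalone
    hits = []
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            for f in proc.open_files():
                if on_drive(f.path, drive_letter):
                    hits.append((proc.info["pid"], proc.info["name"], f.path))
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
    return hits
```

Running it while the prefetching is happening should name the culprit; indexing services and antivirus scanners are the usual suspects.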
  14. JohnKimble

    File copy very slow

    OK, I reverted back to the last stable build and am running a drive trace while performing a copy. I will upload the logs later. 6-10 MB/s seems to be the average, which is quite slow if I need to upload a couple of Blu-ray remuxes. Edit: I have uploaded a drive trace; copy speed was limited by my upload speed. Hopefully it was of use; otherwise I can redo it.