
nssbackup-team team mailing list archive

Re: [Question #116342]: tar takes a long time, 100% CPU, with no output


Question #116342 on NSsbackup changed:
https://answers.launchpad.net/nssbackup/+question/116342

    Status: Answered => Open

B.J. Herbison is still having a problem:
Thank you for the response. I appreciate that you did not write and do
not maintain tar, but I hope you have insight into how nssbackup is
using tar.

I don't have insight into the NAS device, but a simple two-window
experiment with cat and ls -l showed that data written to a file shows
up in its reported size before the file is closed, so it appears that
the backup process is hung before gzip has enough data to start
writing to disk.
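
For the record, the check was essentially the following (the path is
only a placeholder for a file on the NAS mount):

    # window 1: keep writing to a file on the NAS mount without closing it
    cat /dev/urandom > /mnt/nas/test.bin

    # window 2: the size reported by ls -l grows while the file is still open
    while true; do ls -l /mnt/nas/test.bin; sleep 2; done

So if tar were feeding gzip any data, the archive file on the NAS
should already be visibly growing.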

Looking at top, tar is taking 100% of a CPU and gzip 0%. Looking at ps,
tar has accumulated over nine minutes of CPU time in ten minutes, while
gzip still shows "0:00".  There was a burst of network traffic at the
start of the backup (writing the nine files relating to the backup),
but the network traffic has stayed under 4 KiB/s since then.  I did not
catch the start of the backup with iotop, but I never saw the tar
process in its list.  (I should have read the man page and used
"iotop -o", or pressed "o", before the backup started.)
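
For anyone who wants to repeat the observation, the checks amount to
roughly this (I was reading top and ps interactively; the exact ps
fields below are just the ones I would pick if scripting it):

    # cumulative CPU time: tar's TIME keeps climbing, gzip stays at 00:00:00
    ps -o pid,etime,time,args -C tar
    ps -o pid,etime,time,args -C gzip

    # show only the processes actually doing disk I/O (needs root)
    sudo iotop -o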

Do you have any suggestions for what I could try next?

I remember tar having some problems with long file names quite a while
ago (over a decade), but I don't remember the symptoms, and I don't
remember doing anything strange that would add unusually long file
names between when the backups worked and when they stopped.
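
If it would help rule that out, I can scan the backup source for
unusually long paths with something like this (the directory and the
200-character threshold are just examples):

    # print any path longer than 200 characters under the backup source
    find /home | awk 'length($0) > 200 { print length($0), $0 }'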

(By the way, do the old nssback.<date>.log files left by a failed run
get cleaned up automatically at some point, or will I need to clean
them up manually once this issue is resolved?)

Thank you for your help with this problem.

You received this question notification because you are a member of
NSsbackup team, which is an answer contact for NSsbackup.
