Greetings!
For version 3.6.3
I would expect the binlog files of the faststore service to get rotated, or some other mechanism to take care of old records. Instead, the data directory has grown to 25 GB, which I consider a lot for the system disk. I tried to run some tests, but they kept failing after a few minutes. At first the service was running in a 12 GB cgroup slice; I later increased it to 16 GB, and even at 23 GB the service cannot even start up properly on a cluster node. I guess the daemon tries to load all the content of /opt/fastcfs/fstore/data/slice and replica into RAM, which is 25 GB in my case.
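For reference, this is roughly how I measure the on-disk growth; the paths are from my installation (defaults on other setups may differ), and the `binlog*` file-name pattern is an assumption on my part:

```shell
#!/bin/sh
# Sketch: inspect how much space the fstore data directories consume.
# DATA_DIR defaults to the path from my installation; override as needed.
DATA_DIR="${DATA_DIR:-/opt/fastcfs/fstore/data}"

# Total size of the slice and replica directories (the ones that keep growing)
du -sh "$DATA_DIR/slice" "$DATA_DIR/replica" 2>/dev/null

# Count files matching a binlog-like name, to see whether anything ever
# gets rotated or pruned (pattern is my guess, adjust to the actual naming)
find "$DATA_DIR" -type f -name 'binlog*' | wc -l
```

Running this periodically while the cluster is under write load shows the numbers only ever go up.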
Version 3.7.1 has the same flaw, and I think I managed to find the cause: the fstore data files mentioned above (which grow constantly while the storage is under write load) seem to mirror the steadily growing space consumed on the SSDs given to fstore as underlying storage disks. It doesn't matter whether I clean the fcfs-fuse mounted folder or just write to the same volume in a loop; the underlying disks keep filling up. As a result, I end up with an almost empty FUSE mount and full SSDs.
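My reproduction is essentially this loop (mount point and data path are from my setup; the iteration count and file size are just the values I happened to use):

```shell
#!/bin/sh
# Sketch: overwrite the same file on the FUSE mount in a loop.
# Logically the mount should stay at ~SIZE_MB, but the underlying
# SSDs (and /opt/fastcfs/fstore/data/slice) keep growing anyway.
MNT="${MNT:-/mnt/fcfs-fuse}"        # fcfs-fuse mount point (my setup)
ITER="${ITER:-100}"                 # number of overwrite passes
SIZE_MB="${SIZE_MB:-256}"           # size of the test file in MB

i=1
while [ "$i" -le "$ITER" ]; do
    # Same target path every pass, so no net growth is expected on the mount
    dd if=/dev/zero of="$MNT/testfile" bs=1M count="$SIZE_MB" conv=fsync 2>/dev/null
    i=$((i + 1))
done

# Compare the (small) mount usage against the (large) backing storage
df -h "$MNT"
du -sh /opt/fastcfs/fstore/data/slice 2>/dev/null
```

After enough passes the FUSE mount still reports only the one test file, while the backing disks are nearly full.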
And if /opt/fastcfs/fstore/data/slice reflects the SSDs' content, it makes sense why it grows steadily no matter what.