I thought you stopped supporting 32 bit.
lxq@triton:/usr$ ls
bin games include lib lib32 lib64 libexec libx32 local sbin share src
Used daily image dated 28 Nov to install 20.04.1.
We did. But there are a rare few 32 bit packages left in the archive, mostly libraries.
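If you want to see whether any of those actually ended up on your machine, something like this should tell you (assuming stock dpkg on 20.04; the first command may print i386 even when nothing 32-bit is installed, and on a default install the second usually prints nothing):
# architectures dpkg is configured to accept packages for
dpkg --print-foreign-architectures
# any installed packages built for i386
dpkg -l | awk '/^ii/ && $2 ~ /:i386$/ {print $2}'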
Since you're here, let me get an expert opinion. Do these figures look reasonable to you:
lxq@triton:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1,9G     0  1,9G   0% /dev
tmpfs           382M  1,5M  381M   1% /run
/dev/sda3       292G   13G  265G   5% /
tmpfs           1,9G   33M  1,9G   2% /dev/shm
tmpfs           5,0M  4,0K  5,0M   1% /run/lock
tmpfs           1,9G     0  1,9G   0% /sys/fs/cgroup
/dev/sda2       512M  4,0K  512M   1% /boot/EFI
tmpfs           382M  8,0K  382M   1% /run/user/1000
Also, /usr has 4G of software, which made me think the system is multi-lib now. That's why I asked.
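(A quick way to check how much of that is actually 32-bit material is to measure the multilib directories from the ls above; if they come back in the kilobyte range, the 4G is essentially all 64-bit software. This is just an illustrative check with standard du:)
du -sh /usr/lib32 /usr/libx32 /usr/lib64 2>/dev/null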
In a relatively fresh 20.04 I got:
Filesystem      Size  Used Avail Use% Mounted on
udev            446M     0  446M   0% /dev
tmpfs            99M  1.1M   98M   2% /run
/dev/sda1       9.8G  5.2G  4.1G  57% /
tmpfs           491M     0  491M   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           491M     0  491M   0% /sys/fs/cgroup
tmpfs            99M  8.0K   99M   1% /run/user/1000
While /usr is 4.2G.
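A rough breakdown of where that space goes can be had with du (standard GNU coreutils flags; the redirect just hides permission noise):
# per-directory totals under /usr, smallest to largest
du -sh /usr/* 2>/dev/null | sort -h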
I am guessing the sizes of udev and tmpfs are dynamic and allocated with respect to disk size. This whole thing started when I noticed the file manager showed 20G of space used after a clean install. Maybe it's reserved for /root. Well, have space, will use.
Let me also thank you for your efforts. Lubuntu is lightning fast and fun to use.
I wouldn't come to so simple a conclusion, but you're on the right track. Since udev is responsible for device management, I would imagine, rather, that it increases with the number of devices. /run/lock isn't different at all, though anything in /run is likely to increase with the number of processes. Since cgroups are also process related, I would imagine that increases similarly. And /dev/shm is a ramdisk, which certainly could be increased by the number of processes using it, too.
That said, I don't know about your whole 20G business. You can see above that even if I include everything there, there's only a total of 11.4G. And in many cases I don't think that storage is "real" per se. For example, /dev/shm is a bit like /tmp, i.e. it's ephemeral.
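If you want to sanity-check the sizing itself, the tmpfs limits are easy to list; by default /dev/shm is capped at half of RAM and the /run mounts at a smaller fraction, so they scale with memory rather than with the disk. findmnt and free are standard util-linux/procps tools:
# every tmpfs/devtmpfs mount with its size limit and current usage
findmnt -t tmpfs,devtmpfs -o TARGET,FSTYPE,SIZE,USED,AVAIL
# compare the limits against installed RAM
free -h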
I did a bit more digging on the issue. It looks like ext4 reserves 5% of the disk for the root user, partly to avoid fragmentation once the disk gets close to full. Lubuntu does not install anything extra; the filesystem simply sets that space aside:
/dev/sda3       292G   13G  265G   5% /
It reports the used space correctly, but the available space comes out about 5% lower than expected:
292 - 13 = 279, whereas the system reports 265, approximately 14GB (5% of the disk) less.
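The reservation can be confirmed, and if desired shrunk, with tune2fs (the device name matches the df output above; adjust it to suit your install):
# show the reserved block count and block size for the root filesystem
sudo tune2fs -l /dev/sda3 | grep -Ei 'reserved block count|block size'
# reserved blocks x block size is the space df hides from "Avail";
# here roughly 3.8 million 4K blocks, i.e. about 14.5G
# optional: lower the reservation to 1% of the partition
sudo tune2fs -m 1 /dev/sda3
Leaving at least a small reservation on the root filesystem is generally wise, since it lets root log in and clean up if the disk ever fills completely.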