On my Dell OptiPlex 780 I have the following installed:
- Lubuntu kinetic (22.10)
- Lubuntu 22.04 LTS (jammy)
- Lubuntu 20.04 LTS (focal)
- Debian testing
Each has GRUB installed, found in /boot/grub/ on all systems; but because the machine boots in legacy (BIOS or CSM) mode, the only active one is the one written to the MBR, or first sector, of whichever internal drive the BIOS setting tells the machine to boot from.
I use this machine for QA-test installs (install using existing partition), meaning I don't upgrade the installed OSes at all, but re-install using the existing partition, which is non-destructive, so my music etc. is untouched… As 20.04.5 was the last (expected) ISO we'll have for focal/20.04, that partition is likely to become stale (no new dailies to refresh it), but the kinetic & 22.04/jammy partitions will get refreshed ~weekly.
When I perform these re-installs, whatever OS I (re-)install takes over ownership of the MBR and its GRUB controls boot; the others are ignored. On that box I just assess (post-install QA checks), checking Debian last, and then manually have it run `update-grub` and `grub-install` so it retakes ownership of boot. I consider that my Debian workstation (which is why it controls boot).
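The retake-ownership step is just two commands; here's a minimal sketch, shown as a dry run (drop the `echo`s to actually run it). `/dev/sda` is my assumption for the boot drive - confirm yours with `lsblk` first:

```shell
# Hand boot ownership back to the currently running OS on a legacy-BIOS box.
# BOOT_DISK is an assumption -- check `lsblk` for your real boot drive, and
# note grub-install targets the whole disk (its MBR), never a partition.
BOOT_DISK=/dev/sda
CMD1="sudo update-grub"              # rescan OSes, rebuild /boot/grub/grub.cfg
CMD2="sudo grub-install $BOOT_DISK"  # write this OS's GRUB into the MBR
echo "$CMD1"
echo "$CMD2"
```

Run in that order: the config rebuild first, then the MBR write, so the freshly written GRUB points at an up-to-date menu.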
Another box, an HP 8200, uses uEFI, but the effect is the same. It contained:
- Lubuntu 20.04 LTS
- Fedora (whatever latest is)
- openSUSE (Tumbleweed)
I used that box for re-install tests of Lubuntu… and had a few failures… As long as I can get the box to grub rescue, I'll boot another OS (openSUSE Tumbleweed is my usual fallback) & have it take ownership of boot, meaning its GRUB is the one that gets used. For years Lubuntu 20.04 LTS's GRUB was the one I used, but recent failures have had me switch to openSUSE's instead.
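For reference, getting from `grub rescue>` back to a bootable menu usually takes only a handful of commands. A sketch, where `(hd0,msdos6)` is purely a placeholder - use `ls` to find the partition that actually holds a /boot/grub (openSUSE puts it at /boot/grub2):

```
grub rescue> ls                                # probe partitions for one with /boot/grub
grub rescue> set prefix=(hd0,msdos6)/boot/grub
grub rescue> set root=(hd0,msdos6)
grub rescue> insmod normal                     # load the full GRUB from that partition
grub rescue> normal                            # normal menu appears; boot the fallback OS
```

Once the fallback OS is up, it can take ownership of boot as described above.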
Personally I'd not remove the other GRUB packages; I sure don't. My point is just that only one will be used by the system.
(Here I'll skip uEFI, as it can be a little more complex; some boxes aren't 100% compliant with uEFI standards anyway, so boxes can be unique.)
The standard set by IBM in 1983 says each drive has an MBR (they built on their earlier standard for floppies of the 1981 IBM PC); it's the first sector of the drive, so if you have three physical drives you can have three systems with an active GRUB on them. The BIOS settings themselves control which of those drives' MBR (first sector of drive) is actually loaded into RAM & executed - i.e. controls boot. If you were using PCs in the '80s, you may even recall it was jumpers on the drive that we first used to control boot… but then cable standards & BIOS configuration took over from that.
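You can see that first-sector convention for yourself: an MBR ends with the two-byte boot signature 0x55 0xAA, which is what the BIOS checks before executing the sector. A small sketch below builds a scratch 512-byte "first sector" in a temp file so it's safe to run; on a real drive the same `dd`/`od` read works against the disk device (e.g. /dev/sda, with sudo - the device name being your own box's detail):

```shell
# Build a blank 512-byte "first sector" and stamp the MBR boot signature
# (0x55 0xAA) at offsets 510-511, then read it back the way a BIOS would.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=512 count=1 status=none
printf '\125\252' | dd of="$IMG" bs=1 seek=510 conv=notrunc status=none  # octal 125 252 = hex 55 AA
SIG=$(od -An -tx1 -j510 -N2 "$IMG" | tr -d ' ')
echo "boot signature: $SIG"   # 55aa marks the sector as bootable to the BIOS
rm -f "$IMG"
```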
I'd not remove other GRUBs; just remember that a re-install, or the last OS installed, usually takes ownership of boot - and if you're not happy with that (as I'm often not), just give it back to the OS you prefer to have controlling boot of your box.
I have had other OSes perform a `grub-install`-type command during normal package upgrades when a new patched kernel appears; however, most don't, instead running the equivalent of Ubuntu's `update-grub`, meaning no change of boot ownership should occur (only a re-scan & update of the GRUB configuration file) - but it's possible, and OS-specific.
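For context, on Debian/Ubuntu `update-grub` is nothing more than this thin wrapper (quoting the stock /usr/sbin/update-grub script), which is why running it only rewrites the config file and never touches the MBR:

```
#!/bin/sh
set -e
exec grub-mkconfig -o /boot/grub/grub.cfg "$@"
```

It's `grub-install` alone that writes boot code to the drive and so changes ownership.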
As for boxes not detecting an openSUSE install: that's likely related to BTRFS, as a stock Ubuntu/Debian (and many others too) doesn't handle BTRFS well in the `update-grub` search, so that file-system is ignored (meaning any OS on a BTRFS file-system won't likely appear in the GRUB listing). The issue is BTRFS-related, not the OS on it; but I bet that's your issue, as BTRFS is a liked default of openSUSE. It's somewhat easily (but messily) dealt with, though yeah, it can be a little annoying. (I suspect my 8200 install doesn't use BTRFS, as I didn't want to fight with that issue again.)
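The messy-but-workable fix I'm alluding to is a hand-written entry in /etc/grub.d/40_custom that hands control to the BTRFS install's own GRUB. A sketch, where the UUID is a placeholder (look up the real one with `sudo blkid`) and /boot/grub2/grub.cfg is openSUSE's usual config location:

```
#!/bin/sh
exec tail -n +3 $0
# Hand-written entry for an OS that update-grub's search skipped
# (e.g. openSUSE on BTRFS). UUID below is a placeholder -- see: sudo blkid
menuentry 'openSUSE Tumbleweed (via its own GRUB)' {
    insmod btrfs
    search --no-floppy --fs-uuid --set=root PLACEHOLDER-UUID
    configfile /boot/grub2/grub.cfg
}
```

Run `sudo update-grub` afterwards so the entry lands in the active grub.cfg; since it chainloads openSUSE's own config, its menu stays current even when that OS updates its kernels.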