Slow x86 BIOS load with certain partition CHS values


From: ValdikSS
Subject: Slow x86 BIOS load with certain partition CHS values
Date: Mon, 3 Jul 2023 03:10:51 +0300
User-agent: Mozilla/5.0 (Windows NT 10.0; rv:78.5.0) Gecko/20100101 Thunderbird/78.5.0

Hello list,

I noticed that GRUB 2.06 menu and Linux kernel and initrd load times are greatly influenced by the "last sector" CHS data in the first MBR partition entry (byte 0x06 of the partition entry, i.e. offset 0x1c4 from the beginning of the disk for the first partition) on my WYSE C10LE x86 thin client (32-bit VIA Eden Esther with a Phoenix BIOS, from 2008).
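
For reference, here is a minimal standalone sketch (not GRUB code) that dumps the raw start/end CHS fields of the first partition entry; the /dev/sda path is only an example:

/* Minimal sketch: print the start/end CHS fields of the first MBR
   partition entry.  Standard layout: the first entry starts at offset
   0x1be; bytes 1-3 hold the start CHS and bytes 5-7 the end CHS
   (head, then sector in bits 0-5 with cylinder bits 8-9 in bits 6-7,
   then cylinder bits 0-7).  Byte 6 of the entry is the byte at disk
   offset 0x1c4 mentioned above. */
#include <stdio.h>
#include <stdint.h>

static void
print_chs (const char *label, const uint8_t *p)
{
  unsigned head = p[0];
  unsigned sector = p[1] & 0x3f;
  unsigned cylinder = ((p[1] & 0xc0) << 2) | p[2];
  printf ("%s: C/H/S = %u/%u/%u (raw bytes %02x %02x %02x)\n",
          label, cylinder, head, sector, p[0], p[1], p[2]);
}

int
main (int argc, char **argv)
{
  uint8_t mbr[512];
  FILE *f = fopen (argc > 1 ? argv[1] : "/dev/sda", "rb");

  if (!f || fread (mbr, 1, sizeof mbr, f) != sizeof mbr)
    {
      perror ("read MBR");
      return 1;
    }

  const uint8_t *entry = mbr + 0x1be;  /* first partition entry */
  print_chs ("start", entry + 1);
  print_chs ("end  ", entry + 5);
  fclose (f);
  return 0;
}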

A fresh Debian 12 installation takes about 5 seconds to show the GRUB menu ("GRUB loading..." stays on screen for 5 seconds), the kernel (4.5 MB) loads in 27 seconds, and the initrd takes ages to load.

However, if I use gdisk to perform an MBR→GPT→MBR partition table conversion, without touching any partitions per se, GRUB speeds up by an order of magnitude: the menu is shown instantly, the kernel loads in under 3 seconds, and the initrd takes about 8 seconds.

Several hours of debugging resulted in the following observations:

1. My BIOS returns strange C/H/S values from INT 13h, influenced by the "last sector" data in the first partition entry of the MBR. With the stock Debian MBR it reports 301/255/2; after the MBR→GPT→MBR conversion it reports 480/255/63.

Debian's MBR contains a 4/4/1 CHS start and a 255/254/194 CHS end.
The converted MBR contains a 1/0/1 CHS start and a 255/254/255 CHS end.

Is this a BIOS bug, or some kind of quirk? I don't see this behavior with SeaBIOS in QEMU. Does the Debian installer fill the MBR correctly? Does gdisk fill it correctly?
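
For context, INT 13h AH=08h ("get drive parameters") packs the geometry into CL/CH/DH; the conventional decoding is shown in the sketch below (the register values are made up purely for illustration):

/* Sketch of the conventional INT 13h AH=08h decoding: DH = maximum
   head number, CL bits 0-5 = sectors per track, CH together with CL
   bits 6-7 = maximum cylinder number.  Register values here are
   hypothetical. */
#include <stdio.h>

int
main (void)
{
  unsigned char ch = 0xff, cl = 0xff, dh = 0xfe;  /* hypothetical */

  unsigned sectors = cl & 0x3f;
  unsigned heads = dh + 1;
  unsigned cylinders = (((cl & 0xc0) << 2) | ch) + 1;

  printf ("C/H/S = %u/%u/%u\n", cylinders, heads, sectors);  /* 1024/255/63 */
  return 0;
}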

2. The slow loading times are due to the low sectors-per-track value reported with the stock MBR: GRUB is forced to read only 2 sectors at a time.
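
As a back-of-the-envelope illustration (assuming each INT 13h transfer is capped at the reported sectors-per-track value, which matches the 2-sectors-at-a-time behavior I observe), the number of BIOS calls needed for the 4.5 MB kernel looks like this:

/* Rough call counts for loading a 4.5 MB kernel (9216 512-byte
   sectors), assuming each transfer is capped at the reported
   sectors-per-track value. */
#include <stdio.h>

int
main (void)
{
  const unsigned kernel_sectors = 9216;  /* 4.5 MB / 512 bytes */
  const unsigned caps[] = { 2, 63, 64 };

  for (unsigned i = 0; i < sizeof caps / sizeof caps[0]; i++)
    printf ("cap %2u sectors -> %4u calls\n", caps[i],
            (kernel_sectors + caps[i] - 1) / caps[i]);
  return 0;
}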

3. GRUB obeys the CHS geometry even when LBA via the IBM/MS INT13 Extensions is used. My BIOS supports the INT13 extensions and the reads are performed through them, but GRUB still reads only 2 sectors at a time.

In grub-core/disk/i386/pc/biosdisk.c, the function grub_biosdisk_open() contains the following logic for hard disks:

if (grub_biosdisk_get_diskinfo_standard() != 0) /* if it fails */ {
  if (data->flags & GRUB_BIOSDISK_FLAG_LBA) {
    data->sectors = 63;
    data->heads = 255;
    ...
  }
}

That means it rewrites the values for LBA only if the CHS query fails. Shouldn't LBA mode ignore the CHS values and read as many sectors as possible? Is this a GRUB bug? I've changed the function to always rewrite the data for LBA to 64 (not 63) sectors, to fill a whole segment in a single read, and it seems to work fine. With 63 sectors GRUB reads 63 sectors, then 1 sector, then 63 sectors, then 1 sector again, which decreases performance, but not by much.
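
For reference, the change I'm testing looks roughly like this (a sketch only, with the surrounding grub_biosdisk_open() code abridged in the same way as above):

if (data->flags & GRUB_BIOSDISK_FLAG_LBA) {
  /* INT13 extensions available: ignore the BIOS CHS reply and use a
     fixed geometry; 64 sectors fills a whole segment-sized transfer
     in one read. */
  data->sectors = 64;
  data->heads = 255;
  ...
} else if (grub_biosdisk_get_diskinfo_standard() != 0) /* if it fails */ {
  ...
}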


P.S. nativedisk doesn't have any performance problems.
Thanks.
