From: Jens Axboe
Subject: Re: [Qemu-devel] virtio_blk: fix defaults for max_hw_sectors and max_segment_size
Date: Wed, 26 Nov 2014 12:48:42 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.2.0

On 11/21/2014 08:49 AM, Mike Snitzer wrote:
> On Fri, Nov 21 2014 at  4:54am -0500,
> Christoph Hellwig <address@hidden> wrote:
> 
>> On Thu, Nov 20, 2014 at 02:00:59PM -0500, Mike Snitzer wrote:
>>> virtio_blk incorrectly established -1U as the default for these
>>> queue_limits.  Set these limits to sane default values to avoid crashing
>>> the kernel.  But the virtio-blk protocol should probably be extended to
>>> allow proper stacking of the disk's limits from the host.
>>>
>>> This change fixes a crash that was reported when virtio-blk was used to
>>> test linux-dm.git commit 604ea90641b4 ("dm thin: adjust max_sectors_kb
>>> based on thinp blocksize"), which initially sets max_sectors to
>>> max_hw_sectors and then rounds that down to the first power-of-2 factor
>>> of the DM thin-pool's blocksize.  Basically that commit assumes drivers
>>> don't suck when establishing max_hw_sectors, so it acted like a canary
>>> in the coal mine.
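
The fix described above amounts to giving the queue bounded defaults at
probe time instead of -1U.  A rough sketch of that idea (the helper name
and the cap values are illustrative only, not taken from the actual
patch):

    #include <linux/blkdev.h>

    /*
     * Illustrative sketch: advertise bounded queue limits instead of
     * -1U (UINT_MAX).  BLK_SAFE_MAX_SECTORS (255 sectors, ~127 KB) is
     * the block layer's conservative default; 64 KB per segment is an
     * arbitrary but sane bound.
     */
    static void virtblk_sane_limits(struct request_queue *q)
    {
            blk_queue_max_hw_sectors(q, BLK_SAFE_MAX_SECTORS);
            blk_queue_max_segment_size(q, 64 * 1024);
    }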
>>
>> Is that a crash in the host or guest?  What kind of mishandling did you
>> see?  Unless the recent virtio standard changed anything, the host
>> should be able to handle our arbitrary limits, and even if it doesn't,
>> that's something we need to hash out with qemu and the virtio standards
>> folks.
> 
> Some good news: this guest crash isn't an issue with recent kernels (so
> upstream, Fedora 20, RHEL7, etc. aren't impacted -- Jens, feel free to
> drop my virtio_blk patch; even though some of its limits are clearly
> broken, I'll defer to the virtio_blk developers on the best way forward
> -- sorry for the noise!).
> 
> The BUG I saw only seems to impact RHEL6 kernels so far (note to self:
> actually _test_ on upstream before reporting a crash against upstream!).
> 
> address@hidden ~]# echo 1073741824 > /sys/block/vdc/queue/max_sectors_kb
> address@hidden ~]# lvs
> 
> Message from address@hidden at Nov 21 15:32:15 ...
>  kernel:Kernel panic - not syncing: Fatal exception
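
A value that large is accepted only because the sysfs store bounds
max_sectors_kb by the driver's max_hw_sectors, and with virtio_blk
advertising -1U that bound is effectively unlimited.  Roughly, as a
simplified sketch of queue_max_sectors_store() (not the verbatim code):

    max_hw_sectors_kb = queue_max_hw_sectors(q) >> 1;  /* -1U => ~2^31 KB */
    if (max_sectors_kb > max_hw_sectors_kb || max_sectors_kb < page_kb)
            return -EINVAL;                            /* never trips here */
    q->limits.max_sectors = max_sectors_kb << 1;       /* absurdly large */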
> 
> Here is the RHEL6 guest crash, just for full disclosure:
> 
> kernel BUG at fs/direct-io.c:696!
> invalid opcode: 0000 [#1] SMP
> last sysfs file: /sys/devices/virtual/block/dm-4/dev
> CPU 0
> Modules linked in: nfs lockd fscache auth_rpcgss nfs_acl sunrpc ipv6 ext2 
> dm_thin_pool dm_bio_prison dm_persistent_data dm_bufio libcrc32c dm_mirror 
> dm_region_hash dm_log dm_mod microcode virtio_balloon i2c_piix4 i2c_core 
> virtio_net ext4 jbd2 mbcache virtio_blk virtio_pci virtio_ring virtio 
> pata_acpi ata_generic ata_piix [last unloaded: speedstep_lib]
> 
> Pid: 1679, comm: lvs Not tainted 2.6.32 #6 Bochs Bochs
> RIP: 0010:[<ffffffff811ce336>]  [<ffffffff811ce336>] 
> __blockdev_direct_IO_newtrunc+0x986/0x1270
> RSP: 0018:ffff88011a11ba48  EFLAGS: 00010287
> RAX: 0000000000000000 RBX: ffff8801192fbd28 RCX: 0000000000001000
> RDX: ffffea0003b3d218 RSI: ffff88011aac4300 RDI: ffff880118572378
> RBP: ffff88011a11bbe8 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000000 R12: ffff8801192fbd00
> R13: 0000000000000000 R14: ffff880118c3cac0 R15: 0000000000000000
> FS:  00007fde78bc37a0(0000) GS:ffff880028200000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00000000012706f0 CR3: 000000011a432000 CR4: 00000000000407f0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process lvs (pid: 1679, threadinfo ffff88011a11a000, task ffff8801185a4aa0)
> Stack:
>  ffff88011a11bb48 ffff88011a11baa8 ffff88010000000c ffff88011a11bb18
> <d> 0000000000000000 0000000000000000 ffff88011a11bdc8 ffff88011a11beb8
> <d> 0000000c1a11baa8 ffff880118c3cb98 0000000000000000 0000000018c3ccb8
> Call Trace:
>  [<ffffffff811c9e90>] ? blkdev_get_block+0x0/0x20
>  [<ffffffff811cec97>] __blockdev_direct_IO+0x77/0xe0
>  [<ffffffff811c9e90>] ? blkdev_get_block+0x0/0x20
>  [<ffffffff811caf17>] blkdev_direct_IO+0x57/0x60
>  [<ffffffff811c9e90>] ? blkdev_get_block+0x0/0x20
>  [<ffffffff8112619b>] generic_file_aio_read+0x6bb/0x700
>  [<ffffffff811cba60>] ? blkdev_get+0x10/0x20
>  [<ffffffff811cba70>] ? blkdev_open+0x0/0xc0
>  [<ffffffff8118af4f>] ? __dentry_open+0x23f/0x360
>  [<ffffffff811ca2d1>] blkdev_aio_read+0x51/0x80
>  [<ffffffff8118dc6a>] do_sync_read+0xfa/0x140
>  [<ffffffff8109eaf0>] ? autoremove_wake_function+0x0/0x40
>  [<ffffffff811ca22c>] ? block_ioctl+0x3c/0x40
>  [<ffffffff811a34c2>] ? vfs_ioctl+0x22/0xa0
>  [<ffffffff811a3664>] ? do_vfs_ioctl+0x84/0x580
>  [<ffffffff8122cee6>] ? security_file_permission+0x16/0x20
>  [<ffffffff8118e625>] vfs_read+0xb5/0x1a0
>  [<ffffffff8118e761>] sys_read+0x51/0x90
>  [<ffffffff810e5aae>] ? __audit_syscall_exit+0x25e/0x290
>  [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
> Code: fe ff ff c7 85 fc fe ff ff 00 00 00 00 48 89 95 10 ff ff ff 8b 95 34 ff 
> ff ff e8 46 ac ff ff 3b 85 34 ff ff ff 0f 84 fc 02 00 00 <0f> 0b eb fe 8b 9d 
> 34 ff ff ff 8b 85 30 ff ff ff 01 d8 85 c0 0f
> RIP  [<ffffffff811ce336>] __blockdev_direct_IO_newtrunc+0x986/0x1270
>  RSP <ffff88011a11ba48>
> ---[ end trace 73be5dcaf8939399 ]---
> Kernel panic - not syncing: Fatal exception

That code isn't even in mainline, as far as I can tell...

-- 
Jens Axboe



