

From: lampahome
Subject: Re: [Qemu-devel] How do you do when write more than 16TB data to qcow2 on ext4?
Date: Fri, 17 Aug 2018 16:05:30 +0800

Really? How do you attach a block device to /dev/nbdN?
I only ever find tips for mounting a file-like image to /dev/nbdN.
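(For reference: a qcow2 image is exposed as an NBD block device with qemu-nbd. The device node and image path below are placeholders, not names from this thread.)

```shell
# Load the nbd kernel driver; max_part enables partition scanning
sudo modprobe nbd max_part=8

# Export the qcow2 image through the NBD kernel driver
sudo qemu-nbd --connect=/dev/nbd0 /path/to/disk.qcow2

# /dev/nbd0 now behaves like an ordinary block device, e.g.:
sudo mount /dev/nbd0p1 /mnt

# Tear down when finished
sudo umount /mnt
sudo qemu-nbd --disconnect /dev/nbd0
```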

2018-08-16 19:46 GMT+08:00 Eric Blake <address@hidden>:

> On 08/16/2018 03:22 AM, Daniel P. Berrangé wrote:
>> On Thu, Aug 16, 2018 at 09:35:52AM +0800, lampahome wrote:
>>> We all know ext4 has a 16TB file size limit, and other filesystems
>>> have their own limits too.
>>> If I create a 20TB qcow2 on ext4 and write more than 16TB to it, the
>>> data beyond 16TB can't be written to the qcow2.
>>> So, is there a better way to handle this situation?
>> I'd really just recommend using a different filesystem, in particular XFS
>> has massively higher file size limit - tested to 500 TB in RHEL-7, with a
>> theoretical max size of 8 EB. It is a very mature filesystem & the default
>> in RHEL-7.
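(Editor's sketch of the XFS route; the device node and mount point below are placeholders:)

```shell
# Format a spare partition as XFS; /dev/sdb1 is an example device
sudo mkfs.xfs /dev/sdb1
sudo mount /dev/sdb1 /var/lib/images

# A 20TB qcow2 is fine here: XFS has no 16TB per-file ceiling
qemu-img create -f qcow2 /var/lib/images/big.qcow2 20T
```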
> Or target raw block devices instead of using a filesystem. LVM works great.
> --
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.           +1-919-301-3266
> Virtualization:  qemu.org | libvirt.org
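(A minimal sketch of the raw-block-device approach Eric suggests, assuming an existing LVM volume group named vg0; all names are placeholders:)

```shell
# Carve a 20TB logical volume out of the assumed volume group vg0
sudo lvcreate --name guest-disk --size 20T vg0

# Hand the raw LV to the guest directly, bypassing any host
# filesystem and therefore any per-file size limit
qemu-system-x86_64 \
    -drive file=/dev/vg0/guest-disk,format=raw,if=virtio
```

Because the guest writes straight to the logical volume, the host filesystem's 16TB file limit never comes into play.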
