
Re: [Qemu-block] [ovirt-users] Re: Debugging ceph access


From: Jason Dillaman
Subject: Re: [Qemu-block] [ovirt-users] Re: Debugging ceph access
Date: Wed, 6 Jun 2018 09:46:07 -0400

On Wed, Jun 6, 2018 at 7:20 AM, Bernhard Dick <address@hidden> wrote:
> Hi,
>
> Am 05.06.2018 um 22:11 schrieb Nir Soffer:
>>
>> On Fri, Jun 1, 2018 at 3:54 PM Stefan Hajnoczi <address@hidden
>> <mailto:address@hidden>> wrote:
>>
>>     On Thu, May 31, 2018 at 11:02:01PM +0300, Nir Soffer wrote:
>>      > On Thu, May 31, 2018 at 1:55 AM Bernhard Dick <address@hidden
>>     <mailto:address@hidden>> wrote:
>>      >
>>      > > Hi,
>>      > >
>>      > > I found the reason for my timeout problems: It is the version
>>     of librbd1
>>      > > (which is 0.94.5) in conjunction with my CEPH test-environment
>>     which is
>>      > > running the luminous release.
>>      > > When I install the librbd1 (and librados2) packages from the
>>      > > centos-ceph-luminous repository (version 12.2.5) I'm able to
>>     start and
>>      > > migrate VMs inbetween the hosts.
>>      > >
>>      >
>>      > vdsm does not require librbd since qemu brings this dependency,
>>     and vdsm
>>      > does not access ceph directly yet.
>>      >
>>      > Maybe qemu should require newer version of librbd?
>>
>>     Upstream QEMU builds against any librbd version that exports the
>>     necessary APIs.
>>
>>     The choice of library versions is mostly up to distro package
>>     maintainers.
>>
>>     Have you filed a bug against Ceph on the distro you are using?
>>
>>
>> Thanks for clearing this up Stefan.
>>
>> Bernhard, can you give more info about your Linux version and
>> installed packages (.e.g qemu-*)?
>
> Sure. I have two test-systems. The first is running a stock oVirt Node 4.3
> which states "CentOS Linux release 7.5.1804 (Core)" as version string. The
> qemu and ceph packages are:
> Name        : qemu-img-ev
> Arch        : x86_64
> Epoch       : 10
> Version     : 2.10.0
> Release     : 21.el7_5.3.1
>
> Name        : qemu-kvm-common-ev
> Arch        : x86_64
> Epoch       : 10
> Version     : 2.10.0
> Release     : 21.el7_5.3.1
>
> Name        : qemu-kvm-ev
> Arch        : x86_64
> Epoch       : 10
> Version     : 2.10.0
> Release     : 21.el7_5.3.1
>
> Name        : librados2
> Arch        : x86_64
> Epoch       : 1
> Version     : 0.94.5
> Release     : 2.el7
>
> Name        : librbd1
> Arch        : x86_64
> Epoch       : 1
> Version     : 0.94.5
> Release     : 2.el7
>
> The CentOS 7 system is a CentOS minimal installation with the following
> repos enabled:
> CentOS-7 - Base
> CentOS-7 - Updates
> CentOS-7 - Extras
> ovirt-4.2-epel
> ovirt-4.2-centos-gluster123
> ovirt-4.2-virtio-win-latest
> ovirt-4.2-centos-qemu-ev
> ovirt-4.2-centos-opstools
> centos-sclo-rh-release
> ovirt-4.2-centos-ovirt42
> ovirt-4.2
>
> The version numbers for the qemu packages are the same as above, since
> they come from the ovirt-4.2-centos-qemu-ev repository. The version numbers
> for librados2 and librbd1 also match, although those come from the CentOS
> base (instead of an oVirt) repository.
>
> When I activate the centos-ceph-luminous repository, librbd1 and librados2
> get upgraded to the following versions (leaving the qemu packages untouched,
> which is as expected):
> Name        : librados2
> Arch        : x86_64
> Epoch       : 2
> Version     : 12.2.5
> Release     : 0.el7
>
> Name        : librbd1
> Arch        : x86_64
> Epoch       : 2
> Version     : 12.2.5
> Release     : 0.el7
>
> So from my perspective, on the oVirt side some thought should be given to
> shipping a more recent Ceph library version in the oVirt Node image, as
> adding extra repositories there is not a common task (and I'm not sure
> whether that might break the image-based upgrade path).
>
> I will go for CentOS-based hosts in my case, as I'm a bit more flexible
> there, so at least for me there is no real need to get the above implemented
> quickly :-)

Without knowing more about your setup, I suspect you might be using
unsupported tunables which prevent the Hammer clients from connecting
to the Luminous cluster. Note that upstream, we really only test
client/server backwards compatibility across N - 2 releases (it used
to be N - 1) [1]. Therefore, in a perfect world, you should select the
Ceph client repo that best matches your Ceph cluster version.
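As a quick local sanity check, the version mismatch above can be spotted
with a plain version-sort comparison. This is only an illustrative sketch
using the versions from this thread; the variable names and the
"client older than cluster" test are assumptions, not a Ceph tool:

```shell
# Compare the installed client library version against the cluster release
# using a version-aware sort (coreutils `sort -V`).
client_ver="0.94.5"    # hammer-era librbd1 from the CentOS base repo
cluster_ver="12.2.5"   # luminous cluster

# The version-largest of the two ends up last in the sorted output.
newest=$(printf '%s\n%s\n' "$client_ver" "$cluster_ver" | sort -V | tail -n1)

if [ "$newest" != "$client_ver" ]; then
  echo "client older than cluster"
fi
```

With the hammer client against a luminous cluster this prints
"client older than cluster", i.e. well outside the N - 2 compatibility
window mentioned above.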

There is an open BZ against RHEL 7 to upgrade the base Ceph client
RPMs to a more recent version [2], but in the interim if you need
newer client RPMs for CentOS 7, you will need to get them from
upstream Ceph by picking the appropriate release repository.
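Picking the upstream release repository on CentOS 7 amounts to dropping a
yum repo file in place and updating the client packages. The sketch below
assumes the historical download.ceph.com layout; the repo id, file name,
and URLs are illustrative and should be checked against the release you
actually need:

```shell
# Sketch only: write a yum repo definition for the Luminous client
# packages (el7). The baseurl/gpgkey values are assumed from the
# upstream Ceph download layout, not taken from this thread.
cat > /tmp/ceph-luminous.repo <<'EOF'
[ceph-luminous]
name=Ceph Luminous packages for el7
baseurl=https://download.ceph.com/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF

# Then, as root, move it into place and update only the client libraries,
# leaving the qemu packages untouched:
#   cp /tmp/ceph-luminous.repo /etc/yum.repos.d/
#   yum update librados2 librbd1
```

Swapping `rpm-luminous` for another release name would select a different
stable repo, which is the "appropriate release repository" choice above.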

>   Regards
>     Bernhard
>

[1] http://docs.ceph.com/docs/master/releases/schedule/#stable-releases-x-2-z
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1421783

-- 
Jason


