gnash-dev

Re: [Gnash-dev] Hardware acceleration support


From: John Gilmore
Subject: Re: [Gnash-dev] Hardware acceleration support
Date: Tue, 02 Mar 2010 18:16:12 -0800

Whatever happened to the licensing issues discussion?  I did a pretty
exhaustive analysis of VA API in September, and concluded that there
was virtually no free software support for it, the spec only specifies
nonfree codecs, the implementations only accelerate nonfree codecs, it
was not designed to be easily extensible to new codecs, and that there
was lots of concern among free software developers that it's an Intel
trojan horse to get proprietary drivers embedded in the free
ecosystem.  And that half the work was done by Gwenole, presumably
under contract to Intel.

What progress (toward freedom and extensibility) has happened since
then?  Gwenole, you recommended that we wait through 2009 for more
secret stuff to get released; did that happen?

Why are we now drawing this Trojan Horse inside our gates?

Here's a copy of our correspondence from back then.

        John

To: Bastiaan Jacques <address@hidden>, Rob Savoye <address@hidden>
cc: Gwenole Beauchesne <address@hidden>, address@hidden,
   address@hidden
Subject: Re: [Gnash-dev] Flash HD (H.264) video decoding acceleration 
In-reply-to: <address@hidden> 
References: <address@hidden> <address@hidden>
Comments: In-reply-to Bastiaan Jacques <address@hidden>
   message dated "Wed, 23 Sep 2009 05:39:10 -0700."
Date: Wed, 23 Sep 2009 14:37:37 -0700
From: John Gilmore <address@hidden>

Thank you for working on improving gnash.

> First off, I have to express some concern that apparently only
> proprietary GPU drivers are supported for use with VA API -- at least,
> for the codecs that are commonly used in Flash videos. The last
> thing we want to do is encourage people to use proprietary drivers for
> use with Gnash. I would be interested in merging the VA API integration
> if we have some indication that the free GPU driver community (i.e.,
> Nouveau or AMD-free) will soon be supported by VA API.

I have the same concern.  I don't think we should let Intel use Gnash
to slide proprietary licenses into the free ecosystem.  Instead, we
should support the best of the all-free acceleration APIs.

Here's the best index page I found for the VA API:

  http://www.freedesktop.org/wiki/Software/vaapi

Intel has published a truly free VA API driver here, for their G45 and
GM45 chips:

  git clone git://git.freedesktop.org/git/libva
  http://lists.freedesktop.org/archives/intel-gfx/2009-August/003708.html

It's a really early driver and it only includes full decoding of the
MPEG2 bitstream; it doesn't include any of the sub-stuff like the IDCT
or Motion Compensation interfaces (which would let applications or
libraries accelerate any software-rendered video format, other than
MPEG2).  They are working on supporting H.264 in that driver
eventually.  I don't think they've cut a release that includes even
the existing code, it's just a working repo so far.

It's unclear to me why there isn't an Ogg Theora video profile in the
VA API itself.  I find it unlikely that Theora decoding or encoding
can't be accelerated by a bunch of the hardware out there.  It seems
like the people who defined the API weren't thinking much about free
formats -- which is a red flag.

Corbin Simpson (free Radeon driver developer) says that the free
Radeon and free AMD developers have agreed to implement VDPAU (but
have been busy with other things):

  http://article.gmane.org/gmane.comp.freedesktop.xorg/40358

In theory, using Gwenole's GPLv2 adapter code, a free application that
calls VA API can end up using a free VDPAU driver, when one exists:

  http://www.splitted-desktop.com/~gbeauchesne/vdpau-video/

But Gwenole's adapter also doesn't support anything but full decoding,
and doesn't support any free video formats.  He also has a VA API
to XvBA adapter, but it's proprietary (it sits behind a password-wall).

There's a summary in Wikipedia, to which I recommend that Gwenole (or
someone else who is clued in to the graphics-api-wars) add a section
comparing VA API, VDPAU, XvBA, and XvMC:

  http://en.wikipedia.org/wiki/Graphics_hardware_and_FOSS

LWN has an article on VA API that describes many of Gwenole's contributions:

  http://lwn.net/Articles/338581/

  "... a "wait and see" attitude was taken by most projects during the
  first year or so of VA API development. Some of that is, no doubt,
  caution that would be taken towards any entirely new API; perhaps
  some of it is attributable to the perception that VA API was an
  Intel-only project or at best fated to duke it out against VDPAU and
  XvBA until one of them becomes dominant."

The Fedora steering committee has concerns about VA API:

  http://www.opensubscriber.com/message/address@hidden/12677631.html

Ultimately, they decided to allow VDPAU into Fedora, but not VA API:

  https://fedorahosted.org/fesco/ticket/238

Younes Manton is doing generic GPU-accelerated video decoding, which
started as his Google Summer of Code project.  He's using mplayer and
the XvMC interface.  But he's doing it right -- he's accelerating the
low layers (motion compensation) first, then working up to the higher
layers (inverse DCT).  He's "been looking at VDPAU" too.  Gwenole, you
could probably slap a VA API front end onto his code, which would make
the first free VA API implementation that does the low layers (and it
would speed up NVidia video playback all in free software, and give
you clues about how to do that for the Intel chips).  If the mplayer
or gstreamer VA API interface would call these layers (the way the
mplayer XvMC interface currently does), this would provide
acceleration for codecs that aren't fully decoded -- like all the ones
Gnash cares about.  See:

  http://www.bitblit.org/gsoc/g3dvl/
  http://bitblitter.blogspot.com/

Gwenole, if you're interested in pushing VA API further into the free
software community, get Intel to explain why its main released VA API
driver is proprietary -- and how and when they're going to replace
that with free software.  If they're never going to replace it, the
free community will probably say "never mind VA API then, we'll go our
own way".  It would suck if AMD and NVidia hardware had an all-free
path, but most Intel hardware ended up using proprietary drivers.
Intel has been very good at making its drivers free in the past; this
departure from that policy makes VA API look like Intel's attempt to
find a path by which they can make lots more drivers proprietary.
Thus, the resistance among people who care about freedom.

Gwenole, once those Intel issues are resolved, I recommend raising
VA API inclusion with the Fedora steering committee again.  Having a
released all-free-software path from a high level application that
calls VA API, down to a hardware driver that actually accelerates,
should encourage them to adopt it.  Also having a free VDPAU adapter
would let high level apps have a free path to much more hardware.

>    I assume this would be a corporate copyright assignment or a personal 
> one ? I can send you the appropriate paperwork. On occasion I've had 
> corporations refuse to assign copyright to the FSF (we're a GNU 
> project), which has forced me to remove code I wish I didn't have to. I 
> can think of a few other projects (OLPC's new hardware is Via based) 
> that would benefit from this patch, so hopefully there is no problem.

Half the people at Splitted-Desktop come from Mandriva, so they know
a lot about free software.

        John Gilmore


Cc: Bastiaan Jacques <address@hidden>, Rob Savoye <address@hidden>,
   address@hidden
Message-Id: <address@hidden>
From: Gwenole Beauchesne <address@hidden>
To: John Gilmore <address@hidden>
In-Reply-To: <address@hidden>
Subject: Re: [Gnash-dev] Flash HD (H.264) video decoding acceleration 
Date: Thu, 24 Sep 2009 01:26:39 +0200

Hi,

Le 23 sept. 09 à 23:37, John Gilmore a écrit :

> It's unclear to me why there isn't an Ogg Theora video profile in the
> VA API itself.  I find it unlikely that Theora decoding or encoding
> can't be accelerated by a bunch of the hardware out there.  It seems
> like the people who defined the API weren't thinking much about free
> formats -- which is a red flag.

I had asked some Intel people about other formats. The main reason was
that only broadly accepted, standard formats are available in silicon.
Even VP6 did not match their criteria. Besides, the people who
designed the API worked with existing HW implementations, not with
hypothetical future implementations.

> But Gwenole's adapter also doesn't support anything but full decoding,
> and doesn't support any free video formats.  He also has a VA API
> to XvBA adapter, but it's proprietary (it sits behind a password- 
> wall).

I will implement MPEG-2 MoComp and iDCT paths when I understand how  
this works and how I can manage to get this accepted through the  
FFmpeg AVHWAccel infrastructure.

> Younes Manton is doing generic GPU-accelerated video decoding, which
> started as his Google Summer of Code project.  He's using mplayer and
> the XvMC interface.  But he's doing it right -- he's accelerating the
> low layers (motion compensation) first, then working up to the higher
> layers (inverse DCT).  He's "been looking at VDPAU" too.  Gwenole, you
> could probably slap a VA API front end onto his code, which would make
> the first free VA API implementation that does the low layers

One of my plans was to actually write an XvMC backend to VA-API. ;-)

> (and it
> would speed up NVidia video playback all in free software, and give
> you clues about how to do that for the Intel chips).  If the mplayer
> or gstreamer VA API interface would call these layers (the way the
> mplayer XvMC interface currently does), this would provide
> acceleration for codecs that aren't fully decoded -- like all the ones
> Gnash cares about.

I am not sure I fully understand what you mean, but I will think about
it again tomorrow or overnight. Isn't Gnash/Flash limited to VP6
and H.264? How would MPEG-2 MoComp/iDCT support help Gnash?

> Gwenole, if you're interested in pushing VA API further into the free
> software community, get Intel to explain why its main released VA API
> driver is proprietary -- and how and when they're going to replace
> that with free software.

Are you talking about the Poulsbo driver? If I were to do some
reasoning based on public information, and thus without being tainted
by privileged information, I would say: there could be something
available by Q4 2009. However, it's questionable what form this would
take and what precise chipset (current or future) it would be
available for. I would also say GMA500, more precisely the VXD370
in current or future chips, is not in-house Intel technology. However,
their recently increased stake in ImgTech was probably a sign that they
will {,be allowed to} do something, but Apple also increased its stake
significantly afterwards, thus probably inhibiting some actions?

Nobody can really know or talk about anything related to that, I am  
afraid. Doing so would only feed bad or good rumors. Of course, my  
interpretations of facts I exposed here are probably not the reality.  
For sure, we can only wait for Q4 (the whole 3 months) and see what  
actually happens.

BTW, the G45 VA driver is fully Open Source right now because it is
Intel technology as a whole, unlike US15W.

Regards,
Gwenole.


To: Gwenole Beauchesne <address@hidden>
cc: John Gilmore <address@hidden>, Bastiaan Jacques <address@hidden>,
   Rob Savoye <address@hidden>, address@hidden
Subject: Re: [Gnash-dev] Flash HD (H.264) video decoding acceleration 
In-reply-to: <address@hidden> 
References: <address@hidden> <address@hidden> <address@hidden> <address@hidden>
Comments: In-reply-to Gwenole Beauchesne <address@hidden>
   message dated "Thu, 24 Sep 2009 01:26:39 +0200."
Date: Wed, 23 Sep 2009 20:45:21 -0700
From: John Gilmore <address@hidden>

> I am not sure I fully understand what you mean, but I will think about
> it again tomorrow or overnight. Isn't Gnash/Flash limited to VP6
> and H.264? How would MPEG-2 MoComp/iDCT support help Gnash?

All existing video codecs (except Dirac) work basically the same way.
They use the same building blocks inside.  VP6 and H.264 have motion
compensation and do iDCT.  See:

  http://www.dspdesignline.com/211100053?printableArticle=true
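To make the "same building blocks" point concrete, here is a minimal
sketch (mine, not from any codec's source) of the 1-D DCT/iDCT pair
that MPEG-style codecs apply per block of samples; the iDCT stage is
exactly the layer an accelerator can take over regardless of which
codec produced the coefficients:

```python
import math

def dct(x):
    """Forward 1-D DCT-II (orthonormal), the transform an encoder applies."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct(X):
    """Inverse DCT (DCT-III): the 'iDCT' stage a decoder runs per block."""
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] * math.sqrt(1.0 / N)
        s += sum(X[k] * math.sqrt(2.0 / N) * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

block = [52.0, 55, 61, 66, 70, 61, 64, 73]  # one row of an 8x8 block
recovered = idct(dct(block))                # round-trips to the original
```

A hardware iDCT entrypoint would replace `idct()` while the rest of the
decoder stays in software, which is why accelerating that layer helps
every DCT-based codec at once.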

> I had asked some Intel people about other formats. The main reason was
> that only broadly accepted, standard formats are available in silicon.
> Even VP6 did not match their criteria. Besides, the people who
> designed the API worked with existing HW implementations, not with
> hypothetical future implementations.

This is another red flag.

It's typical of the "PC industry" to keep designing products that look
backwards, not forwards.  Then they are surprised (!) when after half
a generation they need yet another new interface.  Obvious examples are the
buses that only supported 640K of RAM (PC-AT), or 4GB of flash (SD), or
8-character file names; the list goes on and on and on and on and on.
They create self-fulfilling prophecies of obsolescence.  No wonder their
hardware is full of backward-looking compatibility kludges.

Let me guess, nobody is ever going to invent a codec again.  And no
codec will ever become popular except the ones that Intel(TM) chips
implement in 2009.  And that's why this API is not extensible.  Right?
Right!

If the VA API isn't designed to work with the next generation of video
hardware, what's the point of rewriting all our software to use it?  A
well designed protocol would work with the next TWO or THREE
generations of hardware.  The Linux community is still working with
clean APIs that were designed in the 1970s (open/close/read/write,
fork/exec, etc) and 1980s (socket/connect/bind).  TCP/IP was also
designed in the 1970s, as was Ethernet.  *IT STILL WORKS!* All of
these were designed outside the shortsighted "PC industry".  Learn
from real standards with real longevity, not from the people at Intel
who can't see beyond their own noses.

If the free community can't implement chips that accelerate OUR OWN
protocols and standards, and call them using Intel's protocol, why
should we bother to use this protocol rather than making our own?
Designing chips isn't rocket science.  High school students do it.
Motherboards come with accelerator sockets (at least AMD motherboards do).
And with programmable rather than hardcoded accelerators, which are an
obvious trend even today, anyone could write microcode to accelerate
any video format.  "Oh, but there's no way to tell the API that we
accelerated that -- so let's not bother."  Wrong.

The VA API makes callers pick a "profile", which is what video format
they're working with, and an "entrypoint", which is how much of the
work will get done by software versus hardware.  It should be possible
to tell the VA API that you're using *ANY* video format.  These
formats should be specified by character strings, not by short binary
numbers.  If there's no driver for that video format, the API already
has clean ways to tell you there's no driver.  What it doesn't have
clean ways to do is to make a new driver for a new format -- nor, for
an application which knows the name of the video format it wants to
play, to figure out the short binary number for that format.  Let me
guess: every application will need to make a stupid little table that
maps video formats to VA API's stupid little numbers?
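For illustration, such a per-application table might look like the
sketch below; the VAProfile-style names echo libva's enum, but the
numeric values and the `profile_for()` helper are hypothetical, not
taken from any va.h:

```python
# Hypothetical per-application table: codec-name strings to VA API
# profile numbers.  Names mirror libva's enum style; values are invented
# for illustration only.
VAProfileMPEG2Main   = 1    # assumed value
VAProfileH264High    = 7    # assumed value
VAProfileVC1Advanced = 10   # assumed value

PROFILE_BY_NAME = {
    "mpeg2": VAProfileMPEG2Main,
    "h264":  VAProfileH264High,
    "vc1":   VAProfileVC1Advanced,
}

def profile_for(codec_name):
    """String-keyed lookup that, arguably, the API itself should provide."""
    return PROFILE_BY_NAME.get(codec_name.lower())

assert profile_for("H264") == VAProfileH264High
assert profile_for("theora") is None  # free formats absent, as noted above
```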

I would even make a profile whose arguments are "I have this video
file and here's the first 8 kbytes of it -- please tell me if you can
play it, and if so, set up the right profile."  Without knowing the
NAME of the format!  Most applications, like gnash, have no control
over the format of the video they'll be asked to play.
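A probe of that kind could start from standard container magic numbers.
In this sketch the magics are the real, published ones, but the
`sniff_container()` function itself is made up to show the idea:

```python
def sniff_container(head: bytes):
    """Guess the container from the first bytes of a file -- a sketch of
    the 'here are the first few KB, can you play it?' probe proposed
    above.  The magic numbers are standard; the function is hypothetical."""
    if head[:3] == b"FLV":
        return "flv"        # Flash Video
    if head[:4] == b"OggS":
        return "ogg"        # Ogg (e.g. Theora/Vorbis)
    if head[:4] == b"\x1aE\xdf\xa3":
        return "matroska"   # Matroska EBML header
    if len(head) >= 8 and head[4:8] == b"ftyp":
        return "mp4"        # ISO base media (MP4/MOV)
    return None             # unknown: fall back to software probing

assert sniff_container(b"FLV\x01rest-of-header") == "flv"
assert sniff_container(b"\x00\x00\x00\x18ftypisom") == "mp4"
```

A driver answering such a probe could then pick the right profile
itself, without the application ever knowing the format's name.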

So fix that.  In the API.  They've asked for community input, they
want it to evolve.

The same with "entrypoints".  A "bitstream" entrypoint, i.e.  feed the
whole movie to hardware, should be standardized.  The other entrypoint
strings should be specific to the codecs involved, though "motion
compensation" is clearly one of them, and "inverse discrete cosine
transform" is another.
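The negotiation this implies can be sketched as a simple preference
order: take the entrypoint that moves the most work into hardware, and
fall back toward pure software.  The entrypoint strings below follow
this letter's wording; they are not symbols from any shipped header:

```python
# Prefer the entrypoint that offloads the most work, falling back
# toward software.  Names are illustrative, per the text above.
PREFERENCE = [
    "bitstream",                          # feed the whole stream to hardware
    "inverse discrete cosine transform",  # iDCT (and everything after it)
    "motion compensation",                # MC only
]

def pick_entrypoint(supported):
    """Choose the deepest acceleration a driver advertises."""
    for ep in PREFERENCE:
        if ep in supported:
            return ep
    return "software"  # no acceleration available

assert pick_entrypoint({"bitstream", "motion compensation"}) == "bitstream"
assert pick_entrypoint({"motion compensation"}) == "motion compensation"
```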

Clue, meet Intel.  Intel, meet clue.  Hulk smash clue into Intel head.
Run, clue, run!

        John



