Re: [st-ericsson] v4l2 vs omx for camera


Re: [st-ericsson] v4l2 vs omx for camera

Robert Fekete
Hi,

In order to expand this knowledge outside of Linaro I took the liberty of inviting both [hidden email] and [hidden email]. For any newcomer I really recommend doing some catch-up reading on http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html ("v4l2 vs omx for camera" thread) before making any comments. And sign up for Linaro-dev while you are at it :-)

To make a long story short:
Different vendors provide custom OpenMAX solutions for, say, Camera/ISP. In the Linux ecosystem, V4L2 already does much of this work and is evolving with the media controller framework as well. Then there is the integration in GStreamer... Which solution is the best way forward? The discussion so far strongly favors V4L2 over OMX.
Please keep in mind that OpenMAX as a concept is in many senses more like GStreamer. The question is whether camera drivers should have OMX or V4L2 as the driver front end. This may perhaps apply to video codecs as well. Then there is the question of how best to make use of this in GStreamer in order to achieve highly efficient, zero-copy multimedia pipelines. Is gst-omx the way forward?

Let the discussion continue...


On 17 February 2011 14:48, Laurent Pinchart <[hidden email]> wrote:
On Thursday 10 February 2011 08:47:15 Hans Verkuil wrote:
> On Thursday, February 10, 2011 08:17:31 Linus Walleij wrote:
> > On Wed, Feb 9, 2011 at 8:44 PM, Harald Gustafsson wrote:
> > > OMX's main purpose is to handle multimedia hardware and offer an
> > > interface to that HW that looks identical independent of the vendor
> > > delivering that hardware, much like the v4l2 or USB subsystems try to
> > > do. And yes, optimally it should be implemented in drivers/omx in Linux
> > > and a user space library on top of that.
> >
> > Thanks for clarifying this part, it was unclear to me. The reason being
> > that it seems OMX does not imply userspace/kernelspace separation, and
> > I was thinking more of it as a userspace lib. Now my understanding is
> > that if e.g. OpenMAX defines a certain data structure, say for a PCM
> > frame or whatever, then that exact struct is supposed to be used by the
> > kernelspace/userspace interface, and defined in the include file exported
> > by the kernel.
> >
> > > It might be that some alignment also needs to be made between v4l2 and
> > > other OSes' implementations, to ease developing drivers for many OSes
> > > (sorry I don't know these details, but you ST-E guys should know).
> >
> > The basic conflict I would say is that Linux has its own API+ABI, which
> > is defined by V4L and ALSA through a community process without much
> > thought about any existing standard APIs. (In some cases also predating
> > them.)
> >
> > > By the way, IL is about to finalize version 1.2 of OpenMAX IL, which is
> > > more than a year's work of aligning all vendors and fixing unclear and
> > > buggy parts.
> >
> > I suspect that the basic problem with Khronos OpenMAX right now is
> > how to handle communities - for example the X consortium had
> > something like the same problem a while back: only member companies
> > could partake in the standards process, and they of course need to pay
> > an upfront fee for that, and the majority of these companies didn't
> > exactly send Linux community members to the meetings.
> >
> > And now all the companies who took part in OpenMAX somehow
> > end up having to do a lot of upfront community work if they want
> > to drive the APIs in a certain direction, discuss it again with the V4L
> > and ALSA maintainers and so on. Which takes a lot of time and
> > patience with uncertain outcome, since this process is autonomous
> > from Khronos. Nobody seems to be doing this; I haven't seen a single
> > patch aimed at trying to unify the APIs so far. I don't know if it'd be
> > welcome.
> >
> > This, coupled with strict delivery deadlines and a marketing desire
> > to state conformance to OpenMAX, of course leads companies into
> > solutions breaking the Linux kernelspace API to be able to present
> > this.

From my experience with OMX, one of the issues is that companies usually
extend the API to fulfill their platform's needs, without going through any
standardization process. Coupled with the lack of an open and free reference
implementation and test tools, this more or less means that OMX
implementations are not really compatible with each other, making OMX-based
solutions no better than proprietary solutions.

> > Now I think we have a pretty clear view of the problem, I don't
> > know what could be done about it though :-/
>
> One option might be to create an OMX wrapper library around the V4L2 API.
> Something similar is already available for the old V4L1 API (now removed
> from the kernel) that allows apps that still speak V4L1 only to use the
> V4L2 API. This is done in the libv4l1 library. The various v4l libraries
> are maintained here: http://git.linuxtv.org/v4l-utils.git
>
> Adding a libomx might not be such a bad idea. Linaro might be the
> appropriate organization to look into this. Any missing pieces in V4L2
> needed to create a fully functioning omx API can be discussed and solved.
>
> Making this part of v4l-utils means that it is centrally maintained and
> automatically picked up by distros.
>
> It will certainly be a non-trivial exercise, but it is a one-time job that
> should solve a lot of problems. But someone has to do it...

It's an option, but why would that be needed? Again from my (probably
limited) OMX experience, platforms expose higher-level APIs to applications,
implemented on top of OMX. If the OMX layer is itself implemented on top of
V4L2, it would just be an extraneous internal layer that could (should?) be
removed completely.


[Robert F]
This would be the case in a GStreamer-driven multimedia stack, i.e. implement GStreamer elements using V4L2 directly (or camerabin using v4l2 directly). Perhaps some vendors would provide a library in between as well, but that could be libv4l in that case. If someone had an OpenMAX AL/IL media framework, an OMX component would make sense to have, but in this case it would be a thinner OMX component which in turn is implemented using V4L2. But it might be that Khronos provides OS-independent components that vendors implement as the actual HW driver, forgetting that there is a big difference between the driver model of an RTOS and that of Linux (user/kernel space) or any other OS... never mind.
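To illustrate, here is a minimal sketch of such a V4L2-fronted GStreamer pipeline using the 0.10-era C API; the device path and caps are illustrative assumptions only:

#include <gst/gst.h>

/* Minimal sketch: a capture pipeline fronted by v4l2src, with no OMX
 * layer involved. Device path and caps string are assumptions. */
int main(int argc, char *argv[])
{
    GstElement *pipeline;
    GError *err = NULL;

    gst_init(&argc, &argv);
    pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 ! "
        "video/x-raw-yuv,width=640,height=480 ! "
        "ffmpegcolorspace ! autovideosink", &err);
    if (!pipeline) {
        g_printerr("failed to create pipeline: %s\n", err->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}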

The question is whether the Linux kernel and V4L2 are ready to incorporate several HW blocks (DSP, CPU, ISP, xxHW) in an imaging pipeline, for instance. The reason embedded vendors provide custom solutions is to implement low-power pipelines with no (or minimal) CPU intervention, where dedicated HW does the work most of the time (like full-screen video playback).

A common way of managing memory would of course also be necessary, like hwmem handles (search for hwmem on linux-mm), to pass buffers between different drivers and processes all the way from sources (camera, video parser/decoder) to sinks (display, HDMI, video encoders (record)).

Perhaps GStreamer experts would like to comment on the future plans for zero-copy/IPC and low-power HW use cases? Could GStreamer adopt some ideas from OMX IL, making OMX IL obsolete? The answers to these questions could become improved guidelines on what embedded device vendors should provide as hw-driver front-ends in the future. OMX is just one of these. Perhaps we could do better to fit and evolve the Linux ecosystem?

 
> Regarding using V4L to communicate with DSPs/other processors: that too
> could be something for Linaro to pick up: experiment with it for one
> particular board, see what (if anything) is needed to make this work. I
> expect it to be pretty easy, but again, nobody has actually done the
> initial work.

The main issue with the V4L2 API compared with the OMX API is that V4L2 is a
kernelspace/userspace API only, while OMX can live in userspace. When the need
to communicate with other processors (CPUs, DSP, dedicated image processing
hardware blocks, ...) arises, platforms usually ship with a thin kernel layer
to handle low-level communication protocols, and a userspace OMX library that
does the bulk of the work. We would need to be able to do something similar
with V4L2.

[Robert F]
OK, doesn't media controller/subdevices solve many of these issues?
 

> Once you have an example driver, then it should be much easier for others
> to follow.
>
> As Linus said, companies are unlikely to start doing this by themselves,
> but it seems that this work would exactly fit the Linaro purpose. From the
> Linaro homepage:
>
> "Linaro™ brings together the open source community and the electronics
> industry to work on key projects, deliver great tools, reduce industry
> wide fragmentation and provide common foundations for Linux software
> distributions and stacks to land on."
>
> Spot on, I'd say :-)
>
> Just for the record, let me say again that the V4L2 community will be very
> happy to assist with this when it comes to extending/improving the V4L2 API
> to make all this possible.

The first step would probably be to decide what Linux needs. Then I'll also be
happy to assist with the implementation phase :-)

--
Regards,

Laurent Pinchart

_______________________________________________
linaro-dev mailing list
[hidden email]
http://lists.linaro.org/mailman/listinfo/linaro-dev

BR
/Robert Fekete


_______________________________________________
gstreamer-devel mailing list
[hidden email]
http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: [st-ericsson] v4l2 vs omx for camera

Clark, Rob
On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete
<[hidden email]> wrote:

> Hi,
>
> In order to expand this knowledge outside of Linaro I took the liberty of
> inviting both [hidden email] and
> [hidden email]. For any newcomer I really recommend
> doing some catch-up reading on
> http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html
> ("v4l2 vs omx for camera" thread) before making any comments. And sign up
> for Linaro-dev while you are at it :-)
>
> To make a long story short:
> Different vendors provide custom OpenMAX solutions for, say, Camera/ISP. In
> the Linux ecosystem, V4L2 already does much of this work and is evolving
> with the media controller framework as well. Then there is the integration
> in GStreamer... Which solution is the best way forward? The discussion so
> far strongly favors V4L2 over OMX.
> Please keep in mind that OpenMAX as a concept is in many senses more like
> GStreamer. The question is whether camera drivers should have OMX or V4L2
> as the driver front end. This may perhaps apply to video codecs as well.
> Then there is the question of how best to make use of this in GStreamer in
> order to achieve highly efficient, zero-copy multimedia pipelines. Is
> gst-omx the way forward?

Just FWIW, there were some patches to make v4l2src work with userptr
buffers in case the camera has an MMU and can handle any random
non-physically-contiguous buffer... so in theory there is no reason
why a gst capture pipeline could not be zero-copy and capture directly
into buffers allocated from the display.

Certainly a more general way to allocate buffers that any of the hw
blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc.)
could use, and possibly share across processes for some zero-copy DRI
style rendering, would be nice.  Perhaps V4L2_MEMORY_GEM?
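(Concretely, the userptr path looks roughly like this at the V4L2 level -- a sketch only, with error handling dropped; in a real zero-copy pipeline 'mem' would come from the display allocator rather than malloc:)

#include <stddef.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch: hand an externally allocated buffer to a V4L2 capture device
 * via V4L2_MEMORY_USERPTR. Error checks omitted; 'mem' stands in for a
 * buffer obtained from the display/GPU allocator. */
static void queue_userptr_buffer(int fd, void *mem, size_t length)
{
    struct v4l2_requestbuffers req;
    struct v4l2_buffer buf;

    memset(&req, 0, sizeof(req));
    req.count  = 1;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_USERPTR;
    ioctl(fd, VIDIOC_REQBUFS, &req);

    memset(&buf, 0, sizeof(buf));
    buf.type      = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory    = V4L2_MEMORY_USERPTR;
    buf.index     = 0;
    buf.m.userptr = (unsigned long)mem;
    buf.length    = length;
    ioctl(fd, VIDIOC_QBUF, &buf);
}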

>
> Let the discussion continue...
[snip]
>
> [Robert F]
> This would be the case in a GStreamer-driven multimedia stack, i.e.
> implement GStreamer elements using V4L2 directly (or camerabin using v4l2
> directly). Perhaps some vendors would provide a library in between as well,
> but that could be libv4l in that case. If someone had an OpenMAX AL/IL
> media framework, an OMX component would make sense to have, but in this
> case it would be a thinner OMX component which in turn is implemented using
> V4L2. But it might be that Khronos provides OS-independent components that
> vendors implement as the actual HW driver, forgetting that there is a big
> difference between the driver model of an RTOS and that of Linux
> (user/kernel space) or any other OS... never mind.
>

Not even different vendors' OMX camera implementations are
compatible... there seems to be too much variance in ISP architecture
and features for this.

Another point, and possibly the reason that TI went the OMX camera
route, was that a userspace API made it possible to move the camera
driver entirely to a co-processor (with the advantages of reduced
interrupt latency for SIMCOP processing and a larger part of the code
being OS-independent)... doing this in a kernel mode driver would have
required even more of syslink in the kernel.

But maybe it would be nice to have a way to have the sensor driver on the
Linux side, pipelined with hw and imaging drivers on a co-processor
for various algorithms and filters, with configuration all exposed to
userspace through MCF.. I'm not immediately sure how this would work, but
it sounds nice at least ;-)

> The question is whether the Linux kernel and V4L2 are ready to incorporate
> several HW blocks (DSP, CPU, ISP, xxHW) in an imaging pipeline, for
> instance. The reason embedded vendors provide custom solutions is to
> implement low-power pipelines with no (or minimal) CPU intervention, where
> dedicated HW does the work most of the time (like full-screen video
> playback).
>
> A common way of managing memory would of course also be necessary, like
> hwmem handles (search for hwmem on linux-mm), to pass buffers between
> different drivers and processes all the way from sources (camera, video
> parser/decoder) to sinks (display, HDMI, video encoders (record)).

(ahh, ok, you have some of the same thoughts as I do regarding sharing
buffers between various drivers)

> Perhaps GStreamer experts would like to comment on the future plans for
> zero-copy/IPC and low-power HW use cases? Could GStreamer adopt some ideas
> from OMX IL, making OMX IL obsolete?

perhaps OMX should adapt some of the ideas from GStreamer ;-)

OpenMAX is missing some very obvious stuff to make it an API for
portable applications, like autoplugging, discovery of supported
capabilities/formats, etc... At least with gst I can drop in some
hw-specific plugins and have apps continue to work without code
changes.
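(For example, a complete playback chain can be autoplugged in a couple of lines -- a sketch against the 0.10-era API, and the URI is a made-up assumption; a hw-specific decoder plugin gets picked up by rank with no change here:)

#include <gst/gst.h>

/* Sketch: playbin2 autoplugs demuxers/decoders/sinks by plugin rank,
 * so a hw-specific decoder drops in without application changes. */
int main(int argc, char *argv[])
{
    GstElement *play;

    gst_init(&argc, &argv);
    play = gst_element_factory_make("playbin2", NULL);
    g_object_set(play, "uri", "file:///tmp/test.mp4", NULL);
    gst_element_set_state(play, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}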

Anyway, it would be an easier argument to make if GStreamer were the
one true framework across different OSes, or at least across Linux and
Android.

BR,
-R

[snip]
_______________________________________________
gstreamer-devel mailing list
[hidden email]
http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: [st-ericsson] v4l2 vs omx for camera

Sachin Gupta
Hi All,
 
   Just wanted to add one last point in this discussion.
The imaging coprocessor in today's platforms has a general-purpose DSP attached to it. I have seen some work being done to use this DSP for graphics/audio processing when no camera use case is running, or when the camera use cases do not consume the full bandwidth of this DSP. I am not sure how v4l2 would fit into such an architecture.
 
I am not sure if that is the case with all platforms today, but my feeling is that this is going to be exercised more in future architectures, where a single dedicated DSP/ARM processor is used to control the video/imaging-specific hardware blocks, and other tasks could be offloaded to this dedicated DSP/ARM processor when it has free bandwidth to support them.
 
Thanks
Sachin

On Tue, Feb 22, 2011 at 8:14 AM, Clark, Rob <[hidden email]> wrote:
[snip]


_______________________________________________
gstreamer-devel mailing list
[hidden email]
http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: [st-ericsson] v4l2 vs omx for camera

Linus Walleij
2011/2/23 Sachin Gupta <[hidden email]>:

> The imaging coprocessor in today's platforms has a general-purpose DSP
> attached to it. I have seen some work being done to use this DSP for
> graphics/audio processing when no camera use case is running, or when the
> camera use cases do not consume the full bandwidth of this DSP. I am not
> sure how v4l2 would fit into such an architecture.

Earlier in this thread I discussed TI's DSPbridge.

In drivers/staging/tidspbridge
http://omappedia.org/wiki/DSPBridge_Project
you find the TI hackers happy at work with providing a DSP accelerator
subsystem.

Isn't it possible for a V4L2 component to use this interface (or something
more evolved, generic) as backend for assorted DSP offloading?

So using one kernel framework does not exclude using another one
at the same time. Whereas something like DSPbridge will load firmware
into DSP accelerators and provide control/datapath for that, this can
in turn be used by some camera or codec which in turn presents a
V4L2 or ALSA interface.
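In rough code terms the layering could look like the sketch below. Note that this is hypothetical: dsp_load_firmware() is a made-up name standing in for whatever control path the bridge actually exposes, while request_firmware() and the V4L2 registration are the real kernel APIs.

#include <linux/firmware.h>
#include <linux/platform_device.h>
#include <media/v4l2-device.h>

/* Hypothetical bridge call -- a stand-in for the DSPbridge control path. */
extern int dsp_load_firmware(const u8 *data, size_t size);

static struct v4l2_device dspcam_v4l2_dev;

static int dspcam_probe(struct platform_device *pdev)
{
    const struct firmware *fw;
    int ret;

    /* Load the imaging pipeline firmware into the DSP accelerator... */
    ret = request_firmware(&fw, "isp-pipeline.bin", &pdev->dev);
    if (ret)
        return ret;
    ret = dsp_load_firmware(fw->data, fw->size);
    release_firmware(fw);
    if (ret)
        return ret;

    /* ...and then present a perfectly normal V4L2 device on top of it. */
    return v4l2_device_register(&pdev->dev, &dspcam_v4l2_dev);
}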

Yours,
Linus Walleij
_______________________________________________
gstreamer-devel mailing list
[hidden email]
http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: [st-ericsson] v4l2 vs omx for camera

Hans Verkuil
On Thursday, February 24, 2011 13:29:56 Linus Walleij wrote:

> 2011/2/23 Sachin Gupta <[hidden email]>:
>
> > The imaging coprocessor in today's platforms has a general-purpose DSP
> > attached to it. I have seen some work being done to use this DSP for
> > graphics/audio processing when no camera use case is running, or when the
> > camera use cases do not consume the full bandwidth of this DSP. I am not
> > sure how v4l2 would fit into such an architecture.
>
> Earlier in this thread I discussed TI's DSPbridge.
>
> In drivers/staging/tidspbridge
> http://omappedia.org/wiki/DSPBridge_Project
> you find the TI hackers happy at work with providing a DSP accelerator
> subsystem.
>
> Isn't it possible for a V4L2 component to use this interface (or something
> more evolved, generic) as backend for assorted DSP offloading?
>
> So using one kernel framework does not exclude using another one
> at the same time. Whereas something like DSPbridge will load firmware
> into DSP accelerators and provide control/datapath for that, this can
> in turn be used by some camera or codec which in turn presents a
> V4L2 or ALSA interface.

Yes, something along those lines can be done.

While normally V4L2 talks to hardware it is perfectly fine to talk to a DSP
instead.

The hardest part will be to identify the missing V4L2 API pieces and design
and add them. I don't think the actual driver code will be particularly hard.
It should be nothing more than a thin front-end for the DSP. Of course, that's
just theory at the moment :-)

The problem is that someone has to do the actual work for the initial driver.
And I expect that it will be a substantial amount of work. Future drivers should
be *much* easier, though.

A good argument for doing this work is that this API can hide which parts of
the video subsystem are hardware and which are software. The application really
doesn't care how it is organized. What is done in hardware on one SoC might be
done on a DSP instead on another SoC. But the end result is pretty much the same.
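From the application's point of view it is just another video node. A trivial sketch (the device path is an assumption):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch: the same capability query works whether the pipeline behind
 * the node is dedicated hardware or a DSP. */
int main(void)
{
    struct v4l2_capability cap;
    int fd = open("/dev/video0", O_RDWR);

    if (fd < 0)
        return 1;
    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
        printf("driver: %s, card: %s\n", cap.driver, cap.card);
    return 0;
}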

Regards,

        Hans

--
Hans Verkuil - video4linux developer - sponsored by Cisco
_______________________________________________
gstreamer-devel mailing list
[hidden email]
http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: [st-ericsson] v4l2 vs omx for camera

Laurent Pinchart
On Thursday 24 February 2011 14:04:19 Hans Verkuil wrote:

> On Thursday, February 24, 2011 13:29:56 Linus Walleij wrote:
> > 2011/2/23 Sachin Gupta <[hidden email]>:
> > > The imaging coprocessor in today's platforms has a general-purpose DSP
> > > attached to it. I have seen some work being done to use this DSP for
> > > graphics/audio processing when no camera use case is running, or when
> > > the camera use cases do not consume the full bandwidth of this DSP. I
> > > am not sure how v4l2 would fit into such an architecture.
> >
> > Earlier in this thread I discussed TI's DSPbridge.
> >
> > In drivers/staging/tidspbridge
> > http://omappedia.org/wiki/DSPBridge_Project
> > you find the TI hackers happy at work with providing a DSP accelerator
> > subsystem.
> >
> > Isn't it possible for a V4L2 component to use this interface (or
> > something more evolved, generic) as backend for assorted DSP offloading?
> >
> > So using one kernel framework does not exclude using another one
> > at the same time. Whereas something like DSPbridge will load firmware
> > into DSP accelerators and provide control/datapath for that, this can
> > in turn be used by some camera or codec which in turn presents a
> > V4L2 or ALSA interface.
>
> Yes, something along those lines can be done.
>
> While normally V4L2 talks to hardware it is perfectly fine to talk to a DSP
> instead.
>
> The hardest part will be to identify the missing V4L2 API pieces and design
> and add them. I don't think the actual driver code will be particularly
> hard. It should be nothing more than a thin front-end for the DSP. Of
> course, that's just theory at the moment :-)
>
> The problem is that someone has to do the actual work for the initial
> driver. And I expect that it will be a substantial amount of work. Future
> drivers should be *much* easier, though.
>
> A good argument for doing this work is that this API can hide which parts
> of the video subsystem are hardware and which are software. The
> application really doesn't care how it is organized. What is done in
> hardware on one SoC might be done on a DSP instead on another SoC. But the
> end result is pretty much the same.

I think the biggest issue we will have here is that part of the
inter-processor communication stack lives in userspace in most recent SoCs
(OMAP4 comes to mind, for instance). This will make implementing a V4L2
driver that relies on IPC difficult.

It's probably time to start seriously thinking about userspace
drivers/libraries/middleware/frameworks/whatever, at least to clearly tell
chip vendors what the Linux community expects.

--
Regards,

Laurent Pinchart
_______________________________________________
gstreamer-devel mailing list
[hidden email]
http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: [st-ericsson] v4l2 vs omx for camera

Hans Verkuil
On Tuesday, February 22, 2011 03:44:19 Clark, Rob wrote:

> On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete
> <[hidden email]> wrote:
[snip]
>
> Just FWIW, there were some patches to make v4l2src work with userptr
> buffers in case the camera has an MMU and can handle any random
> non-physically-contiguous buffer... so in theory there is no reason
> why a gst capture pipeline could not be zero-copy and capture directly
> into buffers allocated from the display.

V4L2 also allows userspace to pass pointers to contiguous physical memory.
On TI systems this memory is usually obtained via the out-of-tree cmem module.

> Certainly a more general way to allocate buffers that any of the hw
> blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc.)
> could use, and possibly share across processes for some zero-copy DRI
> style rendering, would be nice.  Perhaps V4L2_MEMORY_GEM?

There are two parts to this: first of all you need a way to allocate large
buffers. The CMA patch series is available (but not yet merged) that does this.
I'm not sure of the latest status of this series.

The other part is that everyone can use and share these buffers. There isn't
anything for this yet. We have discussed this in the past and we need something
generic for this that all subsystems can use. It's not a good idea to tie this
to any specific framework like GEM. Instead any subsystem should be able to use
the same subsystem-independent buffer pool API.

The actual code is probably not too bad, but trying to coordinate this over all
subsystems is not an easy task.
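Just to sketch the idea -- every name below is hypothetical, nothing like this exists in the kernel yet; the point is simply a handle that any subsystem can import:

#include <linux/types.h>

/* Hypothetical subsystem-independent buffer pool API (all names made
 * up). One allocator hands out handles that V4L2, DRM, codec and
 * display drivers could all import without copying. */
struct bufpool_buffer;

/* Allocate a buffer satisfying the given size/alignment constraints. */
struct bufpool_buffer *bufpool_alloc(size_t size, unsigned long align);

/* Export a handle that can cross process and subsystem boundaries. */
int bufpool_export(struct bufpool_buffer *buf);

/* Import an exported handle in another driver (e.g. a display or video
 * encoder) and resolve it to something DMA-able. */
struct bufpool_buffer *bufpool_import(int handle);

/* Drop a reference; the buffer is freed when the last user is gone. */
void bufpool_put(struct bufpool_buffer *buf);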

[snip]
> > [Robert F]
> > This would be the case in a GStreamer-driven multimedia stack, i.e.
> > implement GStreamer elements using V4L2 directly (or camerabin using v4l2
> > directly). Perhaps some vendors would provide a library in between as
> > well, but that could be libv4l in that case. If someone had an OpenMAX
> > AL/IL media framework, an OMX component would make sense to have, but in
> > this case it would be a thinner OMX component which in turn is
> > implemented using V4L2. But it might be that Khronos provides
> > OS-independent components that vendors implement as the actual HW driver,
> > forgetting that there is a big difference between the driver model of an
> > RTOS and that of Linux (user/kernel space) or any other OS... never mind.
> >
>
> Not even different vendors' OMX camera implementations are
> compatible... there seems to be too much variance in ISP architecture
> and features for this.
>
> Another point, and possibly the reason that TI went the OMX camera
> route, was that a userspace API made it possible to move the camera
> driver entirely to a co-processor (with the advantages of reduced
> interrupt latency for SIMCOP processing and a larger part of the code
> being OS-independent)... doing this in a kernel mode driver would have
> required even more of syslink in the kernel.
>
> But maybe it would be nice to have a way to have the sensor driver on the
> Linux side, pipelined with hw and imaging drivers on a co-processor
> for various algorithms and filters, with configuration all exposed to
> userspace through MCF.. I'm not immediately sure how this would work, but
> it sounds nice at least ;-)

MCF? What does that stand for?

>
> > The question is whether the Linux kernel and V4L2 are ready to
> > incorporate several HW blocks (DSP, CPU, ISP, xxHW) in an imaging
> > pipeline, for instance. The reason embedded vendors provide custom
> > solutions is to implement low-power pipelines with no (or minimal) CPU
> > intervention, where dedicated HW does the work most of the time (like
> > full-screen video playback).
> >
> > A common way of managing memory would of course also be necessary, like
> > hwmem handles (search for hwmem on linux-mm), to pass buffers between
> > different drivers and processes all the way from sources (camera, video
> > parser/decoder) to sinks (display, HDMI, video encoders (record)).
>
> (ahh, ok, you have some of the same thoughts as I do regarding sharing
> buffers between various drivers)

Perhaps the time is right for someone to start working on this?

Regards,

        Hans

[snip]

--
Hans Verkuil - video4linux developer - sponsored by Cisco

Re: [st-ericsson] v4l2 vs omx for camera

Laurent Pinchart
In reply to this post by Clark, Rob
On Tuesday 22 February 2011 03:44:19 Clark, Rob wrote:

> On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete wrote:
> > [snip]
>
> just fwiw, there were some patches to make v4l2src work with userptr
> buffers in case the camera has an mmu and can handle any random
> non-physically-contiguous buffer..  so there is in theory no reason
> why a gst capture pipeline could not be zero copy and capture directly
> into buffers allocated from the display
>
> Certainly a more general way to allocate buffers that any of the hw
> blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc)
> could use, and possibly share across-process for some zero copy DRI
> style rendering, would be nice.  Perhaps V4L2_MEMORY_GEM?

This is something we first discussed at the end of 2009. We need to get people
from different subsystems around the same table, with memory management
specialists (especially for ARM), and lay the ground for a common memory
management system. Discussions on the V4L2 side called this the global buffers
pool (see http://lwn.net/Articles/353044/ for instance; more information can
be found in the linux-media list archives).

[snip]


> > Let the discussion continue...
> >
> > On 17 February 2011 14:48, Laurent Pinchart wrote:
> >> On Thursday 10 February 2011 08:47:15 Hans Verkuil wrote:
> >> > On Thursday, February 10, 2011 08:17:31 Linus Walleij wrote:
> >> > > [snip]
> >> > > I suspect that the basic problem with Khronos OpenMAX right now is
> >> > > how to handle communities - for example the X consortium had
> >> > > something like the same problem a while back, only member companies
> >> > > could partake in the standard process, and they need of course to
> >> > > pay an upfront fee for that, and the majority of these companies
> >> > > didn't exactly send Linux community members to the meetings.
> >> > >
> >> > > And now all the companies who took part in OpenMAX somehow end up
> >> > > having to do a lot of upfront community work if they want to drive
> >> > > the API:s in a certain direction, discuss it again with the V4L and
> >> > > ALSA maintainers and so on. Which takes a lot of time and patience
> >> > > with uncertain outcome, since this process is autonomous from
> >> > > Khronos. Nobody seems to be doing this, I haven't seen a single patch
> >> > > aimed at trying to unify the APIs so far. I don't know if it'd be
> >> > > welcome.

Patches are usually welcome, but one issue with OMX is that it doesn't feel
like a real Linux API. Linux developers usually don't like to be forced to use
alien APIs that originate in other worlds (such as Windows) and don't feel
good on Linux.

> >> > > This coupled with strict delivery deadlines and a marketing will
> >> > > to state conformance to OpenMAX of course leads companies into
> >> > > solutions breaking the Linux kernelspace API to be able to present
> >> > > this.

The end result is that Khronos publishes an API spec that chip vendors
implement, but nobody in the community is interested in it. OMX is something
the Linux community mostly ignores. And I don't see this changing any time
soon, or even ever. OMX was designed without the community. If Khronos really
want good Linux support, they need to ditch OMX and design something new with
the community. I don't see this happening anytime soon though, so the
community will keep working on its APIs and pushing vendors to implement them
(or even create community-supported implementations). That's a complete waste
of resources for everybody.

> >> From my experience with OMX, one of the issues is that companies usually
> >> extend the API to fulfill their platforms' needs, without going through
> >> any standardization process. Coupled with the lack of an open and free
> >> reference implementation and test tools, this more or less means that
> >> OMX implementations are not really compatible with each other, making
> >> OMX-based solutions no better than proprietary solutions.
> >>
> >> > > Now I think we have a pretty clear view of the problem, I don't
> >> > > know what could be done about it though :-/
> >> >
> >> > One option might be to create an OMX wrapper library around the V4L2
> >> > API. Something similar is already available for the old V4L1 API (now
> >> > removed from the kernel) that allows apps that still speak V4L1 only
> >> > to use the V4L2 API. This is done in the libv4l1 library. The various
> >> > v4l libraries are maintained here:
> >> > http://git.linuxtv.org/v4l-utils.git
> >> >
> >> > Adding a libomx might not be such a bad idea. Linaro might be the
> >> > appropriate organization to look into this. Any missing pieces in V4L2
> >> > needed to create a fully functioning omx API can be discussed and
> >> > solved.
> >> >
> >> > Making this part of v4l-utils means that it is centrally maintained
> >> > and automatically picked up by distros.
> >> >
> >> > It will certainly be a non-trivial exercise, but it is a one-time job
> >> > that should solve a lot of problems. But someone has to do it...
> >>
> >> It's an option, but why would that be needed ? Again from my (probably
> >> limited) OMX experience, platforms expose higher-level APIs to
> >> applications, implemented on top of OMX. If the OMX layer is itself
> >> implemented on top of V4L2, it would just be an extraneous useless
> >> internal layer that could (should ?) be removed completely.
> >
> > This would be the case in a GStreamer-driven multimedia stack, i.e.
> > implement GStreamer elements using V4L2 directly (or camerabin using
> > v4l2 directly). Perhaps some vendors would provide a library in between
> > as well, but that could be libv4l in that case. If someone had an
> > OpenMAX AL/IL media framework, an OMX component would make sense, but
> > in this case it would be a thinner OMX component which in turn is
> > implemented using V4L2. But it might be that Khronos provides OS-
> > independent components that vendors implement as the actual HW driver,
> > forgetting that there is a big difference between the driver model of
> > an RTOS and that of Linux (user/kernel space) or any OS... never mind.
>
> Not even different vendors' omx camera implementations are
> compatible.. there seems to be too much variance in ISP architecture
> and features for this.
>
> Another point, and possibly the reason that TI went the OMX camera
> route, was that a userspace API made it possible to move the camera
> driver all to a co-processor (with advantages of reduced interrupt
> latency for SIMCOP processing, and a larger part of the code being OS
> independent)..  doing this in a kernel mode driver would have required
> even more of syslink in the kernel.

That's a very valid point. This is why we need to think about what we want as
a Linux middleware for multimedia devices. The conclusion might be that
everything needs to be pushed in the kernel (although I doubt that), but the
goal is to give a clear message to chip vendors. This is in my opinion one of
the most urgent tasks.

> But maybe it would be nice to have a way to have sensor driver on the
> linux side, pipelined with hw and imaging drivers on a co-processor
> for various algorithms and filters with configuration all exposed to
> userspace thru MCF.. I'm not immediately sure how this would work, but
> it sounds nice at least ;-)

If the IPC communication layer is in the kernel, that shouldn't be very
difficult. If it's in userspace, we need the help of userspace libraries with
some kind of userspace driver (in my opinion at least).

> > The question is whether the Linux kernel and V4L2 are ready to incorporate
> > several HW blocks (DSP, CPU, ISP, xxHW) in an imaging pipeline, for
> > instance. The reason embedded vendors provide custom solutions is to
> > implement low-power pipelines with no (or minimal) CPU intervention,
> > where dedicated HW does the work most of the time (like full-screen
> > video playback).
> >
> > A common way of managing memory would of course also be necessary, like
> > hwmem (search for hwmem in linux-mm) handles to pass buffers between
> > different drivers and processes all the way from sources (camera, video
> > parser/decoder) to sinks (display, hdmi, video encoders (record)).
>
> (ahh, ok, you have some of the same thoughts as I do regarding sharing
> buffers between various drivers)
>
> > Perhaps GStreamer experts would like to comment on the future plans ahead
> > for zero copying/IPC and low power HW use cases? Could Gstreamer adapt
> > some ideas from OMX IL making OMX IL obsolete?
>
> perhaps OMX should adapt some of the ideas from GStreamer ;-)

I'd very much like to see GStreamer (or something else, maybe lower level, but
community-maintained) replace OMX.

Does anyone have any GStreamer vs. OMX memory and CPU usage numbers ? I
suppose it depends on the actual OMX implementations, but what I'd like to
know is if GStreamer is too heavy for platforms on which OMX works fine.

> OpenMAX is missing some very obvious stuff to make it an API for
> portable applications like autoplugging, discovery of
> capabilities/formats supported, etc..  at least with gst I can drop in
> some hw specific plugins and have apps continue to work without code
> changes.
>
> Anyways, it would be an easier argument to make if GStreamer was the
> one true framework across different OSs, or at least across linux and
> android.

Let's push for GStreamer on Android then :-)

--
Regards,

Laurent Pinchart

Re: [st-ericsson] v4l2 vs omx for camera

Laurent Pinchart
In reply to this post by Hans Verkuil
On Thursday 24 February 2011 14:17:12 Hans Verkuil wrote:

> On Tuesday, February 22, 2011 03:44:19 Clark, Rob wrote:
> > On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete wrote:
> > > Hi,
> > >
> > > In order to expand this knowledge outside of Linaro I took the Liberty
> > > of inviting both [hidden email] and
> > > [hidden email]. For any newcomer I really
> > > recommend to do some catch-up reading on
> > > http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html
> > > ("v4l2 vs omx for camera" thread) before making any comments. And sign
> > > up for Linaro-dev while you are at it :-)
> > >
> > > To make a long story short:
> > > Different vendors provide custom OpenMax solutions for say Camera/ISP.
> > > In the Linux eco-system there is V4L2 doing much of this work already
> > > and is evolving with mediacontroller as well. Then there is the
> > > integration in Gstreamer...Which solution is the best way forward.
> > > Current discussions so far puts V4L2 greatly in favor of OMX.
> > > Please have in mind that OpenMAX as a concept is more like GStreamer in
> > > many senses. The question is whether Camera drivers should have OMX or
> > > V4L2 as the driver front end? This may perhaps apply to video codecs
> > > as well. Then there is how to in best of ways make use of this in
> > > GStreamer in order to achieve no copy highly efficient multimedia
> > > pipelines. Is gst-omx the way forward?
> >
> > just fwiw, there were some patches to make v4l2src work with userptr
> > buffers in case the camera has an mmu and can handle any random
> > non-physically-contiguous buffer..  so there is in theory no reason
> > why a gst capture pipeline could not be zero copy and capture directly
> > into buffers allocated from the display
>
> V4L2 also allows userspace to pass pointers to contiguous physical memory.
> On TI systems this memory is usually obtained via the out-of-tree cmem
> module.

On the OMAP3 the ISP doesn't require physically contiguous memory. User
pointers can be used quite freely, except that they introduce cache management
issues on ARM when speculative prefetching comes into play (those issues are
currently ignored completely).
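
To make the USERPTR path concrete, here is a minimal sketch (assuming an
already-opened and configured capture device at fd; error handling mostly
omitted, and the buffer behind ptr could come from any allocator, including
a display one):

    /* Sketch: hand a user-allocated buffer to a V4L2 capture device with
     * V4L2_MEMORY_USERPTR. Nothing here copies pixel data, which is what
     * makes a zero-copy capture pipeline possible. */
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int queue_userptr(int fd, void *ptr, size_t length)
    {
        struct v4l2_requestbuffers req;
        struct v4l2_buffer buf;

        memset(&req, 0, sizeof(req));
        req.count = 1;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_USERPTR;
        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
            return -1;

        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_USERPTR;
        buf.index = 0;
        buf.m.userptr = (unsigned long)ptr;
        buf.length = length;
        return ioctl(fd, VIDIOC_QBUF, &buf);
    }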

> > Certainly a more general way to allocate buffers that any of the hw
> > blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc)
> > could use, and possibly share across-process for some zero copy DRI
> > style rendering, would be nice.  Perhaps V4L2_MEMORY_GEM?
>
> There are two parts to this: first of all you need a way to allocate large
> buffers. There is a CMA patch series available (but not yet merged) that
> does this. I'm not sure of the latest status of this series.

Some platforms don't require contiguous memory. What we need is a way to
allocate memory in the kernel with various options, and use that memory in
various drivers (V4L2, GPU, ...)

> The other part is that everyone can use and share these buffers. There
> isn't anything for this yet. We have discussed this in the past and we
> need something generic for this that all subsystems can use. It's not a
> good idea to tie this to any specific framework like GEM. Instead any
> subsystem should be able to use the same subsystem-independent buffer pool
> API.
>
> The actual code is probably not too bad, but trying to coordinate this over
> all subsystems is not an easy task.

[snip]

> > Not even different vendors' omx camera implementations are
> > compatible.. there seems to be too much variance in ISP architecture
> > and features for this.
> >
> > Another point, and possibly the reason that TI went the OMX camera
> > route, was that a userspace API made it possible to move the camera
> > driver all to a co-processor (with advantages of reduced interrupt
> > latency for SIMCOP processing, and a larger part of the code being OS
> > independent)..  doing this in a kernel mode driver would have required
> > even more of syslink in the kernel.
> >
> > But maybe it would be nice to have a way to have sensor driver on the
> > linux side, pipelined with hw and imaging drivers on a co-processor
> > for various algorithms and filters with configuration all exposed to
> > userspace thru MCF.. I'm not immediately sure how this would work, but
> > it sounds nice at least ;-)
>
> MCF? What does that stand for?

Media Controller Framework I guess.

> > > The question is whether the Linux kernel and V4L2 are ready to
> > > incorporate several HW blocks (DSP, CPU, ISP, xxHW) in an imaging
> > > pipeline, for instance. The reason embedded vendors provide custom
> > > solutions is to implement low-power pipelines with no (or minimal)
> > > CPU intervention, where dedicated HW does the work most of the time
> > > (like full-screen video playback).
> > >
> > > A common way of managing memory would of course also be necessary,
> > > like hwmem (search for hwmem in linux-mm) handles to pass buffers
> > > between different drivers and processes all the way from
> > > sources (camera, video parser/decoder) to sinks (display, hdmi, video
> > > encoders (record)).
> >
> > (ahh, ok, you have some of the same thoughts as I do regarding sharing
> > buffers between various drivers)
>
> Perhaps the time is right for someone to start working on this?

Totally. It's time to start working on lots of things :-)

--
Regards,

Laurent Pinchart

Re: [st-ericsson] v4l2 vs omx for camera

Kyungmin Park
In reply to this post by Hans Verkuil
On Thu, Feb 24, 2011 at 10:17 PM, Hans Verkuil <[hidden email]> wrote:

> On Tuesday, February 22, 2011 03:44:19 Clark, Rob wrote:
>> On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete
>> <[hidden email]> wrote:
>> > [snip]
>>
>> just fwiw, there were some patches to make v4l2src work with userptr
>> buffers in case the camera has an mmu and can handle any random
>> non-physically-contiguous buffer..  so there is in theory no reason
>> why a gst capture pipeline could not be zero copy and capture directly
>> into buffers allocated from the display
>
> V4L2 also allows userspace to pass pointers to contiguous physical memory.
> On TI systems this memory is usually obtained via the out-of-tree cmem module.
>
>> Certainly a more general way to allocate buffers that any of the hw
>> blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc)
>> could use, and possibly share across-process for some zero copy DRI
>> style rendering, would be nice.  Perhaps V4L2_MEMORY_GEM?
>
> There are two parts to this: first of all you need a way to allocate large
> buffers. There is a CMA patch series available (but not yet merged) that
> does this. I'm not sure of the latest status of this series.
The ARM maintainer still doesn't agree with these patches, since they don't
solve the ARM problem of mapping the same memory with different attributes,
but we will try to send the CMA v9 patches soon.

We really do require a physical memory management module. Each chip
vendor uses their own implementation: our approach is called CMA, others
call theirs cmem, carveout, hwmem and so on.

I think Laurent's approach is a similar one.

We will try again to merge CMA.

Thank you,
Kyungmin Park
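
(To illustrate the kind of allocation such a module has to provide, here is
a sketch using the standard kernel DMA API; a CMA/cmem/hwmem-style allocator
would sit behind a call like this, carving the buffer out of a reserved
region. The function name is illustrative only.)

    /* Sketch: allocate physically contiguous, device-visible memory via
     * the standard DMA mapping API. dma_alloc_coherent() returns a kernel
     * virtual address and fills in the bus address for the device. */
    #include <linux/dma-mapping.h>

    static void *alloc_video_buffer(struct device *dev, size_t size,
                                    dma_addr_t *dma)
    {
        return dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
    }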


>
> [snip]

Re: [st-ericsson] v4l2 vs omx for camera

Laurent Pinchart
Hi,

On Thursday 24 February 2011 15:48:20 Kyungmin Park wrote:

> On Thu, Feb 24, 2011 at 10:17 PM, Hans Verkuil <[hidden email]> wrote:
> > On Tuesday, February 22, 2011 03:44:19 Clark, Rob wrote:
> >> On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete wrote:
> >> > [snip]
> >>
> >> just fwiw, there were some patches to make v4l2src work with userptr
> >> buffers in case the camera has an mmu and can handle any random
> >> non-physically-contiguous buffer..  so there is in theory no reason
> >> why a gst capture pipeline could not be zero copy and capture directly
> >> into buffers allocated from the display
> >
> > V4L2 also allows userspace to pass pointers to contiguous physical
> > memory. On TI systems this memory is usually obtained via the
> > out-of-tree cmem module.
> >
> >> Certainly a more general way to allocate buffers that any of the hw
> >> blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc)
> >> could use, and possibly share across-process for some zero copy DRI
> >> style rendering, would be nice.  Perhaps V4L2_MEMORY_GEM?
> >
> > There are two parts to this: first of all you need a way to allocate
> > large buffers. There is a CMA patch series available (but not yet merged)
> > that does this. I'm not sure of the latest status of this series.
>
> The ARM maintainer still doesn't agree with these patches, since they don't
> solve the ARM problem of mapping the same memory with different attributes,
> but we will try to send the CMA v9 patches soon.
>
> We really do require a physical memory management module. Each chip
> vendor uses their own implementation: our approach is called CMA, others
> call theirs cmem, carveout, hwmem and so on.
>
> I think Laurent's approach is a similar one.

Just for the record, my global buffers pool RFC didn't try to solve the
contiguous memory allocation problem. It aimed at providing drivers (and
applications) with an API to allocate and use buffers. How the memory is
allocated is outside the scope of the global buffers pool; CMA makes perfect
sense for that.

> We will try again to merge CMA.

--
Regards,

Laurent Pinchart

Re: [st-ericsson] v4l2 vs omx for camera

Edward Hervey
In reply to this post by Robert Fekete
Hi,

On Fri, 2011-02-18 at 17:39 +0100, Robert Fekete wrote:

> [snip]

  I'll try to summarize my perspective from a GStreamer point of view
here. You wanted some, here it is :) This answers everything in this
mail thread at this time. You can go straight to the last paragraphs for
a summary.

  The question to be asked, imho, is not "omx or v4l2 or gstreamer", but
rather "what purpose does each of those APIs/interfaces serve, when do
they make sense, and how can they interact in the most efficient way
possible".

  Looking at the bigger picture, the end goal for all of us is to make
the best usage of what hardware/IP/silicon is available, all the way up
to end-user applications/use-cases, and to do so in the most efficient
way possible (in terms of memory/cpu/power usage at the lower levels,
but also in terms of manpower and flexibility at the higher levels).

  Will GStreamer be as cpu/memory efficient as a pure OMX solution ? No,
I seriously doubt we'll break down all the fundamental notions in
GStreamer to make it use 0 cpu when running some processing.

  Can GStreamer provide higher flexibility than a pure OMX solution ?
Definitely, unless you have all the plugins for accessing all the other
hw systems out there, plus the (de)muxers, rtp (de)payloaders, jitter
buffers, network components, auto-pluggers, convenience elements and
application interaction that GStreamer has been improving over the past
10 years. All that is far from trivial.
  And as Rob Clark said, you can drop HW-specific gst plugins in and
have apps continue to work; the same applies to all the other peripheral
existing *and* future plugins you need to make a final application. So
there you benefit from all the work done by the non-hw-centric
community.

  Can we make GStreamer use as little cpu/overhead as possible without
breaking the fundamental concepts it provides ? Definitely.
  There are quite a few examples out there of zero-memcpy gst plugins
wrapping hw-accelerated systems at a ridiculously low cpu cost (they
just take an opaque buffer and pass it down; that's 300-500 cpu
instructions for a properly configured setup, if my memory serves me
right). And efforts have been going on for the past 2 years to make
GStreamer overall consume as little cpu as possible, making it as
lockless as possible and so forth. The ongoing GStreamer 0.11/1.0
effort will allow breaking down even more barriers for even more
efficient usage.
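
(As an illustration of how little such a wrapper does on the data path, here
is a sketch of a hw-wrapping element's chain function in GStreamer 0.10
terms; the element type and its srcpad field are hypothetical.)

    #include <gst/gst.h>

    /* Sketch: the GstBuffer only carries an opaque handle to device
     * memory, so forwarding it downstream is a function call, not a
     * memcpy. */
    typedef struct {
        GstElement element;
        GstPad *srcpad;   /* hypothetical: set up in the element's init */
    } MyHwElement;

    static GstFlowReturn
    my_hw_chain (GstPad *pad, GstBuffer *buf)
    {
        MyHwElement *self = (MyHwElement *) GST_OBJECT_PARENT (pad);

        /* push the opaque buffer straight to the next element */
        return gst_pad_push (self->srcpad, buf);
    }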

  Can OMX provide a better interface than v4l2 for video sources ?
Possible, but doubtful. The V4L2 people have been working on it for ages
and it works for a *lot* of devices out there. It is the interface one
expects to use on Linux-based systems: you write your kernel drivers
with a v4l2 interface and people can use them straight away on any linux
setup.

  Do hardware/silicon vendors want to write kernel/userspace drivers for
their hw-accelerated codecs in all the variants available out there ? No
way, they've got better things to do; they need to choose one.
  Is OMX the best API out there for providing hw-accelerated codecs ?
Not in my opinion. Efforts like libva/vdpau are better in that regard,
but for most ARM SoCs ... OMX is the closest thing to a '''standard'''.
And they (Khronos) don't even provide reference implementations, so you
end up with a bunch of header files that everybody {mis|ab}uses.



  So where does this leave us ?

  * OMX is here for HW-accelerated codecs and vendors are unlikely to
switch away from it, but there are other systems popping up that will
use other APIs (libva, vdpau, ...).
  * V4L2 has a long-standing and evolving interface people expect for
video sources on linux-based systems. Making OMX provide an interface
as robust and tested as that is going to be hard.
  * GStreamer can wrap all existing APIs (including the two mentioned
above) and adds the missing blocks to go from standalone components to
full-blown future-looking applications/use-cases.

  * The main problem... is making all those components talk to each
other in the most cpu/mem-efficient way possible.

  No, GStreamer can't solve all of that last problem. We are working
hard on reducing as much as possible the overhead GStreamer brings in
while offering the most flexible solution out there, and you can join in
making sure the plugins exposing the various APIs mentioned above make
the best usage of it. There is a point where we are going to reach our
limit.

  What *needs* to be solved is an API for data allocation/passing at the
kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
userspace (like GStreamer) can pass around, monitor and know about.
  That is a *massive* challenge on its own. The choice of using
GStreamer or not ... is what you want to do once that challenge is
solved.
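
(For illustration only, the shape such an API could take, with invented
names; none of these calls exist today: one driver exports a buffer as a
file descriptor, userspace passes the fd around, and another subsystem
imports it without any copy.)

    /* Purely hypothetical sketch of a cross-subsystem buffer-passing API;
     * media_buffer_export()/media_buffer_import() are invented names. */
    static int share_capture_buffer(int v4l2_fd, int gpu_fd, int index)
    {
        int buf_fd = media_buffer_export(v4l2_fd, index); /* invented */
        if (buf_fd < 0)
            return buf_fd;
        /* the fd can cross process boundaries like any other fd */
        return media_buffer_import(gpu_fd, buf_fd);       /* invented */
    }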

  Regards,

    Edward

P.S. GStreamer for Android already works :
http://www.elinux.org/images/a/a4/Android_and_Gstreamer.ppt


Re: [st-ericsson] v4l2 vs omx for camera

Edward Hervey
On Thu, 2011-02-24 at 21:19 +0100, Edward Hervey wrote:
>
>   Will GStreamer be as cpu/memory efficient as a pure OMX solution ?
> No,
> I seriously doubt we'll break down all the fundamental notions in
> GStreamer to make it use 0 cpu when running some processing.

  I blame late night mails...

  I meant "Will GStreamer be capable of zero-cpu usage like OMX is
capable in some situation". The answer still stands.

  But regarding memory usage, GStreamer can do zero-memcpy provided the
underlying layers have a mechanism it can use.

   Edward



Re: [st-ericsson] v4l2 vs omx for camera

Clark, Rob
In reply to this post by Laurent Pinchart
On Thu, Feb 24, 2011 at 7:10 AM, Laurent Pinchart
<[hidden email]> wrote:

> On Thursday 24 February 2011 14:04:19 Hans Verkuil wrote:
>> On Thursday, February 24, 2011 13:29:56 Linus Walleij wrote:
>> > 2011/2/23 Sachin Gupta <[hidden email]>:
>> > > The imaging coprocessor in today's platforms has a general purpose DSP
>> > > attached to it. I have seen some work being done to use this DSP for
>> > > graphics/audio processing in case the camera use case is not being
>> > > run, or if the camera use cases do not consume the full bandwidth of
>> > > this DSP. I am not sure how v4l2 would fit in such an
>> > > architecture,
>> >
>> > Earlier in this thread I discussed TI:s DSPbridge.
>> >
>> > In drivers/staging/tidspbridge
>> > http://omappedia.org/wiki/DSPBridge_Project
>> > you find the TI hackers happy at work with providing a DSP accelerator
>> > subsystem.
>> >
>> > Isn't it possible for a V4L2 component to use this interface (or
>> > something more evolved, generic) as backend for assorted DSP offloading?
>> >
>> > So using one kernel framework does not exclude using another one
>> > at the same time. Whereas something like DSPbridge will load firmware
>> > into DSP accelerators and provide control/datapath for that, this can
>> > in turn be used by some camera or codec which in turn presents a
>> > V4L2 or ALSA interface.
>>
>> Yes, something along those lines can be done.
>>
>> While normally V4L2 talks to hardware it is perfectly fine to talk to a DSP
>> instead.
>>
>> The hardest part will be to identify the missing V4L2 API pieces and design
>> and add them. I don't think the actual driver code will be particularly
>> hard. It should be nothing more than a thin front-end for the DSP. Of
>> course, that's just theory at the moment :-)
>>
>> The problem is that someone has to do the actual work for the initial
>> driver. And I expect that it will be a substantial amount of work. Future
>> drivers should be *much* easier, though.
>>
>> A good argument for doing this work is that this API can hide which parts
>> of the video subsystem are hardware and which are software. The
>> application really doesn't care how it is organized. What is done in
>> hardware on one SoC might be done on a DSP instead on another SoC. But the
>> end result is pretty much the same.
>
> I think the biggest issue we will have here is that part of the inter-
> processor communication stack lives in userspace in most recent SoCs (OMAP4
> comes to mind for instance). This will make implementing a V4L2 driver that
> relies on IPC difficult.
>
> It's probably time to start seriously thinking about userspace
> drivers/libraries/middleware/frameworks/whatever, at least to clearly tell
> chip vendors what the Linux community expects.
>

I suspect more of the IPC framework needs to move down to the kernel..
this is the only way I can see to move the virt->phys address
translation to a trusted layer.  I'm not sure how others would feel
about pushing more of the IPC stack down to the kernel, but at least
it would make it easier for a v4l2 driver to leverage the
coprocessors..

BR,
-R

> --
> Regards,
>
> Laurent Pinchart
>

Re: [st-ericsson] v4l2 vs omx for camera

Clark, Rob
In reply to this post by Edward Hervey
On Thu, Feb 24, 2011 at 2:19 PM, Edward Hervey <[hidden email]> wrote:
>
>  What *needs* to be solved is an API for data allocation/passing at the
> kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
> userspace (like GStreamer) can pass around, monitor and know about.

yes yes yes yes!!

vaapi/vdpau are half way there, as they cover sharing buffers with
X/GL..  but sadly they ignore camera.  There are a few other
inconveniences with vaapi and possibly vdpau.. at least we'd prefer to
have an API that covered decoding config data like SPS/PPS and not just
slice data, since config data NALUs are already decoded by our
accelerators..

>  That is a *massive* challenge on its own. The choice of using
> GStreamer or not ... is what you want to do once that challenge is
> solved.
>
>  Regards,
>
>    Edward
>
> P.S. GStreamer for Android already works :
> http://www.elinux.org/images/a/a4/Android_and_Gstreamer.ppt
>

yeah, I'm aware of that.. someone please convince google to pick it up
and drop stagefright so we can only worry about a single framework
between android and linux  (and then I look forward to playing with
pitivi on an android phone :-))

BR,
-R


Re: [st-ericsson] v4l2 vs omx for camera

Clark, Rob
In reply to this post by Hans Verkuil
On Thu, Feb 24, 2011 at 7:17 AM, Hans Verkuil <[hidden email]> wrote:
> There are two parts to this: first of all you need a way to allocate large
> buffers. The CMA patch series is available (but not yet merged) that does this.
> I'm not sure of the latest status of this series.
>
> The other part is that everyone can use and share these buffers. There isn't
> anything for this yet. We have discussed this in the past and we need something
> generic for this that all subsystems can use. It's not a good idea to tie this
> to any specific framework like GEM. Instead any subsystem should be able to use
> the same subsystem-independent buffer pool API.

yeah, doesn't need to be GEM.. but should at least inter-operate so we
can share buffers with the display/gpu..

[snip]
>> But maybe it would be nice to have a way to have sensor driver on the
>> linux side, pipelined with hw and imaging drivers on a co-processor
>> for various algorithms and filters with configuration all exposed to
>> userspace thru MCF.. I'm not immediately sure how this would work, but
>> it sounds nice at least ;-)
>
> MCF? What does that stand for?
>

sorry, v4l2 media controller framework

BR,
-R

Re: [st-ericsson] v4l2 vs omx for camera

Linus Walleij
In reply to this post by Edward Hervey
2011/2/24 Edward Hervey <[hidden email]>:

>  What *needs* to be solved is an API for data allocation/passing at the
> kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
> userspace (like GStreamer) can pass around, monitor and know about.

I think the patches sent out from ST-Ericsson's Johan Mossberg to
linux-mm for "HWMEM" (hardware memory) deal exactly with buffer
passing, pinning of buffers and so on. The CMA (Contiguous Memory
Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
so CMA provides buffers and HWMEM passes them around.

Johan, when you re-spin the HWMEM patchset, can you include
linaro-dev and linux-media in the CC? I think there is *much* interest
in this mechanism, people just don't know from the name what it
really does. Maybe it should be called mediamem or something
instead...

Yours,
Linus Walleij

Re: [st-ericsson] v4l2 vs omx for camera

Hans Verkuil
On Friday, February 25, 2011 18:22:51 Linus Walleij wrote:

> 2011/2/24 Edward Hervey <[hidden email]>:
>
> >  What *needs* to be solved is an API for data allocation/passing at the
> > kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that
> > userspace (like GStreamer) can pass around, monitor and know about.
>
> I think the patches sent out from ST-Ericsson's Johan Mossberg to
> linux-mm for "HWMEM" (hardware memory) deal exactly with buffer
> passing, pinning of buffers and so on. The CMA (Contiguous Memory
> Allocator) has been slightly modified to fit hand-in-glove with HWMEM,
> so CMA provides buffers and HWMEM passes them around.
>
> Johan, when you re-spin the HWMEM patchset, can you include
> linaro-dev and linux-media in the CC?

Yes, please. This sounds promising and we at linux-media would very much like
to take a look at this. I hope that the CMA + HWMEM combination is exactly
what we need.

Regards,

        Hans

> I think there is *much* interest
> in this mechanism, people just don't know from the name what it
> really does. Maybe it should be called mediamem or something
> instead...
>
> Yours,
> Linus Walleij
>
>

--
Hans Verkuil - video4linux developer - sponsored by Cisco

Re: [st-ericsson] v4l2 vs omx for camera

Felipe Contreras
In reply to this post by Robert Fekete
Hi,

On Fri, Feb 18, 2011 at 6:39 PM, Robert Fekete <[hidden email]> wrote:

> To make a long story short:
> Different vendors provide custom OpenMax solutions for, say, Camera/ISP. In
> the Linux eco-system there is V4L2 doing much of this work already, and it
> is evolving with mediacontroller as well. Then there is the integration in
> GStreamer... Which solution is the best way forward? Current discussions so
> far greatly favor V4L2 over OMX.
> Please bear in mind that OpenMAX as a concept is more like GStreamer in many
> senses. The question is whether camera drivers should have OMX or V4L2 as
> the driver front end. This may perhaps apply to video codecs as well. Then
> there is the question of how best to make use of this in GStreamer in order
> to achieve zero-copy, highly efficient multimedia pipelines. Is gst-omx the
> way forward?
>
> Let the discussion continue...

We are talking about 3 different layers here which don't necessarily
overlap. You could have a v4l2 driver, which is wrapped in an OpenMAX
IL library, which is wrapped again by gst-openmax. Each layer is
different. The problem here is the OMX layer, which is often
ill-conceived.

First of all, you have to remember that whatever OMX is supposed to
provide, that doesn't apply to camera; you can argue that there's some
value in audio/video encoding/decoding, as the interfaces are very
simple and easy to standardize, but that's not the case with camera. I
haven't worked with OMX camera interfaces, but AFAIK it's very
incomplete and vendors have to implement their own interfaces, which
defeats the purpose of OMX. So OMX provides nothing in the camera
case.

Secondly, there's no OMX kernel interface. You still need something
between kernel and user-space, and the only established interface is v4l2.
So, even if you choose OMX in user-space, the sensible choice in
kernel-space is v4l2, otherwise you would end up with some custom
interface, which is never good.

And third, as Laurent already pointed out, OpenMAX is _not_ open. The
community has no say in what happens; everything is decided by a
consortium, and you need to pay money to be in it, to access their
bugzilla, to subscribe to their mailing lists, and to get access to
their conformance test.

If you forget all the marketing mumbo jumbo about OMX, at the end of the
day what is provided is a bunch of headers (and a document explaining
how to use them). We (the linux community) can come up with a bunch of
headers too; in fact, we already do much more than that with v4l2. The
only part missing is encoders/decoders, which if needed could be added
very easily (Samsung already does AFAIK). Right?
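
(For instance, such a decoder could be exposed through the V4L2
memory-to-memory model: compressed bitstream on the OUTPUT queue, decoded
frames on the CAPTURE queue of a single device node. A sketch, with the
device path and pixel formats as placeholders:)

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Sketch: configure a memory-to-memory V4L2 decoder node. */
    static int open_decoder(void)
    {
        int fd = open("/dev/video0", O_RDWR);   /* placeholder node */
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;  /* compressed bitstream in */
        fmt.fmt.pix.pixelformat = v4l2_fourcc('H', '2', '6', '4');
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; /* decoded frames out */
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* ...then REQBUFS/QBUF/STREAMON on both queues as usual. */
        return fd;
    }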

Cheers.

--
Felipe Contreras

Re: [st-ericsson] v4l2 vs omx for camera

Felipe Contreras
In reply to this post by Hans Verkuil
Hi,

On Thu, Feb 24, 2011 at 3:04 PM, Hans Verkuil <[hidden email]> wrote:

> On Thursday, February 24, 2011 13:29:56 Linus Walleij wrote:
>> 2011/2/23 Sachin Gupta <[hidden email]>:
>>
>> > The imaging coprocessor in today's platforms has a general purpose DSP
>> > attached to it. I have seen some work being done to use this DSP for
>> > graphics/audio processing in case the camera use case is not being run,
>> > or if the camera use cases do not consume the full bandwidth of this
>> > DSP. I am not sure how v4l2 would fit in such an architecture,
>>
>> Earlier in this thread I discussed TI:s DSPbridge.
>>
>> In drivers/staging/tidspbridge
>> http://omappedia.org/wiki/DSPBridge_Project
>> you find the TI hackers happy at work with providing a DSP accelerator
>> subsystem.
>>
>> Isn't it possible for a V4L2 component to use this interface (or something
>> more evolved, generic) as backend for assorted DSP offloading?

Yes it is, and it has been part of my to-do list for some time now.

>> So using one kernel framework does not exclude using another one
>> at the same time. Whereas something like DSPbridge will load firmware
>> into DSP accelerators and provide control/datapath for that, this can
>> in turn be used by some camera or codec which in turn presents a
>> V4L2 or ALSA interface.
>
> Yes, something along those lines can be done.
>
> While normally V4L2 talks to hardware it is perfectly fine to talk to a DSP
> instead.
>
> The hardest part will be to identify the missing V4L2 API pieces and design
> and add them. I don't think the actual driver code will be particularly hard.
> It should be nothing more than a thin front-end for the DSP. Of course, that's
> just theory at the moment :-)

The pieces are known already. I started a project called gst-dsp,
which I plan to split into the gst part and the part that
communicates with the DSP; the latter can move to the kernel side with a
v4l2 interface.

It's easier to identify the code in the patches for FFmpeg:
http://article.gmane.org/gmane.comp.video.ffmpeg.devel/116798

> The problem is that someone has to do the actual work for the initial driver.
> And I expect that it will be a substantial amount of work. Future drivers should
> be *much* easier, though.
>
> A good argument for doing this work is that this API can hide which parts of
> the video subsystem are hardware and which are software. The application really
> doesn't care how it is organized. What is done in hardware on one SoC might be
> done on a DSP instead on another SoC. But the end result is pretty much the same.

Exactly.

--
Felipe Contreras