Advice for soft-synth game audio project


Mike Gran
Hi-
 
Since the last line of http://gstreamer.freedesktop.org/apps/ says that
this is the place to ask for advice, I've got a question.
 
I'm writing a game, and it is meant to have a simple, 4-channel soft
synth as well as a library of dozens of short (< 1 sec) sound clips
that are stored as uncompressed 8-bit waveforms in memory.  Each
channel of the soft-synth will generate an in-memory 8-bit waveform
of the next note or noise.
 
So, as far as I can tell, if I wanted to use GStreamer to make the
audio work on GNU/Linux, I'd
- use a different instance of the appsrc plugin for each channel
  of the soft-synth and for each sound clip to make a source
- somehow mix the various synth and sound clip channels into
  left and right channels (rough sketch below)
- then sink it to ALSA
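
Roughly, here is the kind of thing I'm picturing, in C.  This is an
untested sketch: "adder" for the mixer is just my guess, and I've left
out caps, error checking and the main loop.

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline, *mixer, *convert, *sink;
  int i;

  gst_init (&argc, &argv);

  pipeline = gst_pipeline_new ("game-audio");
  mixer    = gst_element_factory_make ("adder", "mix");        /* guess */
  convert  = gst_element_factory_make ("audioconvert", "conv");
  sink     = gst_element_factory_make ("alsasink", "out");

  gst_bin_add_many (GST_BIN (pipeline), mixer, convert, sink, NULL);
  gst_element_link_many (mixer, convert, sink, NULL);

  /* one appsrc per soft-synth channel (the sound clips would get the
   * same treatment); caps for the 8-bit format would be set on each */
  for (i = 0; i < 4; i++) {
    gchar *name = g_strdup_printf ("synth%d", i);
    GstElement *src = gst_element_factory_make ("appsrc", name);
    gst_bin_add (GST_BIN (pipeline), src);
    gst_element_link (src, mixer);   /* grabs a request pad on the mixer */
    g_free (name);
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* ... push buffers from the synth/clip code and run a main loop ... */
  return 0;
}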
 
So I have a couple of starter questions.
 
First, I'm not clear on how you mix channels together.  Obviously
some sort of mixer element is needed that takes N input channels
and produces a 2-channel (left/right) output.
 
Second, it seems like the way to get in-memory waveforms into a
pipeline is to use appsrc, but the docs also say that appsrc is
strongly discouraged.  Is there some other way to do it?
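
For reference, this is roughly what I was picturing for appsrc
(untested, written against the 1.0-style buffer API; clip_data and
clip_size stand in for one of my stored waveforms):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

static void
push_clip (GstAppSrc *appsrc, const guint8 *clip_data, gsize clip_size)
{
  /* copy the in-memory waveform into a GstBuffer */
  GstBuffer *buf = gst_buffer_new_allocate (NULL, clip_size, NULL);
  gst_buffer_fill (buf, 0, clip_data, clip_size);

  /* appsrc takes ownership of the buffer */
  if (gst_app_src_push_buffer (appsrc, buf) != GST_FLOW_OK)
    g_warning ("push-buffer failed");
}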
 
Thanks,
 
Mike Gran

Re: Advice for soft-synth game audio project

Philippe Normand
On Fri, 2012-10-26 at 05:26 -0700, Mike Gran wrote:

> Hi-
>  
> Since the last line of http://gstreamer.freedesktop.org/apps/ says that
> this is the place to ask for advice, I've got a question.
>  
> I'm writing a game, and it is meant to have a simple, 4-channel soft
> synth as well as a library of dozens of short (< 1 sec) sound clips
> that are stored as uncompressed 8-bit waveforms in memory.  Each
> channel of the soft-synth will generate an in-memory 8-bit waveform
> of the next note or noise.
>  
> So, as far as I can tell, if I wanted to use GStreamer to make the
> audio work on GNU/Linux, I'd
> - use a different instance of the appsrc plugin for each channel
>   of the soft-synth and for each sound clip to make a source
> - somehow mix the various synth and sound clip channels into
>   left and right channels
> - then sink it to ALSA
>  
> So I have a couple of starter questions.
>  
> First, I'm not clear on how you mix channels together.  Obviously
> some sort of mixer element is needed that takes N input channels
> and produces a 2-channel (left/right) output.
>  

This is what the interleave element does. It has request sink pads and
produces data on a single src pad. The deinterleave element does the
opposite: it splits a stream into n mono channels :)
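
Roughly (untested, and assuming the 1.0-style pad template name
"sink_%u"), with left_src and right_src standing in for two mono
elements already added to the bin:

#include <gst/gst.h>

static GstElement *
make_stereo_interleave (GstBin *bin, GstElement *left_src,
    GstElement *right_src)
{
  GstElement *il = gst_element_factory_make ("interleave", NULL);
  GstPad *sink_l, *sink_r, *src_l, *src_r;

  gst_bin_add (bin, il);

  /* ask interleave for two sink pads, one per mono channel */
  sink_l = gst_element_get_request_pad (il, "sink_%u");
  sink_r = gst_element_get_request_pad (il, "sink_%u");

  src_l = gst_element_get_static_pad (left_src, "src");
  src_r = gst_element_get_static_pad (right_src, "src");
  gst_pad_link (src_l, sink_l);
  gst_pad_link (src_r, sink_r);
  gst_object_unref (src_l);
  gst_object_unref (src_r);

  /* il's single "src" pad now carries a 2-channel stream */
  return il;
}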

> Second, it seems like the way to get in-memory waveforms into a
> pipeline is to use appsrc, but the docs also say that appsrc is
> strongly discouraged.  Is there some other way to do it?

One alternative is to write your own source element as a bin that
internally uses interleave.
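
Very roughly (untested sketch):

#include <gst/gst.h>

/* a bin with n mono appsrc elements feeding interleave; interleave's
 * src pad is exposed as a ghost pad, so the bin behaves like a single
 * source element in the rest of the pipeline */
static GstElement *
make_synth_source (guint n_channels)
{
  GstElement *bin = gst_bin_new ("synth-source");
  GstElement *il  = gst_element_factory_make ("interleave", NULL);
  GstPad *src_pad;
  guint i;

  gst_bin_add (GST_BIN (bin), il);

  for (i = 0; i < n_channels; i++) {
    GstElement *src = gst_element_factory_make ("appsrc", NULL);
    gst_bin_add (GST_BIN (bin), src);
    gst_element_link (src, il);   /* requests an interleave sink pad */
  }

  /* expose interleave's output as the bin's own src pad */
  src_pad = gst_element_get_static_pad (il, "src");
  gst_element_add_pad (bin, gst_ghost_pad_new ("src", src_pad));
  gst_object_unref (src_pad);

  return bin;
}

Your game code would then set caps on each appsrc and push the
generated waveforms into them as before.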

Philippe
