Hello friends.
I'm working on an RTMP ingestion server, and right now I'm trying to figure out how to handle the audio packets. In this scenario the audio arrives as HE-AAC at 44.1 kHz inside RTMP messages. Each payload is packed in what I believe is the FLV audio tag format: the first byte indicates the codec (AAC here) and the second byte indicates whether the packet is a configuration packet or raw AAC data. The configuration packets don't seem very useful, so I just ignore them for now (the very first audio packet is a configuration packet).

So far so good - the rest of the RTMP message is the AAC data. It doesn't carry an ADTS header, though, so I'm generating one myself, based on the FFmpeg code (there's a rough sketch of what I'm doing further down).

I'm now writing these packets (with the generated ADTS header prepended) into the following pipeline and was expecting it to dump decoded audio into the file, but it fails with error code 1: Internal data stream error:
appsrc name=audioInput is-live=true caps=audio/mpeg ! faad ! filesink name=output location=/the/location.data
When I use the following pipeline, the data makes it into the file, but the result is unplayable (mplayer just produces static):
appsrc name=audioInput is-live=true caps=audio/mpeg ! filesink name=output location=/the/location.data
I've even tried both pipelines without the is-live and caps configuration options on the appsrc.
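In case it helps, here's roughly what my tag handling and ADTS header generation look like. This is just a C sketch of the logic rather than my actual code; the constants are assumptions for this stream (AAC-LC profile signalling, 44.1 kHz, stereo) and the function names are placeholders:

#include <stddef.h>
#include <stdint.h>

/* The first two bytes of the FLV audio tag body:
 * byte 0: sound format in the high nibble (10 = AAC),
 * byte 1: AAC packet type (0 = configuration / sequence header, 1 = raw AAC). */
static int is_aac_config_packet(const uint8_t *body, size_t len)
{
    if (len < 2 || (body[0] >> 4) != 10)
        return -1;               /* not AAC at all */
    return body[1] == 0;         /* 1 if configuration packet, 0 if raw frame */
}

/* Build a 7-byte ADTS header (no CRC) for one raw AAC frame, modelled loosely
 * on FFmpeg's ADTS muxing code. Assumed field values: profile 1 = AAC-LC,
 * sampling frequency index 4 = 44100 Hz, channel configuration 2 = stereo. */
static void make_adts_header(uint8_t header[7], size_t aac_payload_len)
{
    const unsigned profile     = 1;                    /* AAC-LC */
    const unsigned freq_index  = 4;                    /* 44100 Hz */
    const unsigned chan_config = 2;                    /* stereo */
    const size_t   frame_len   = aac_payload_len + 7;  /* header + payload */

    header[0] = 0xFF;                                  /* syncword, high 8 bits */
    header[1] = 0xF1;                                  /* syncword low bits, MPEG-4, no CRC */
    header[2] = (profile << 6) | (freq_index << 2) | ((chan_config >> 2) & 0x1);
    header[3] = ((chan_config & 0x3) << 6) | ((frame_len >> 11) & 0x3);
    header[4] = (frame_len >> 3) & 0xFF;
    header[5] = ((frame_len & 0x7) << 5) | 0x1F;       /* + buffer fullness high bits */
    header[6] = 0xFC;                                  /* buffer fullness low bits, 1 raw block */
}

Each non-configuration packet then gets the 7-byte header prepended before it goes to the appsrc.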
I'm hoping someone out there has dealt with raw AAC and/or RTMP before and knows what I'm overlooking.
Oh, and just in case it matters: I'm setting the presentation timestamp on each Buffer, using the timestamp from the RTMP client, before writing it to the AppSrc.
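Here's roughly what that looks like - again a C sketch against the GStreamer 1.x API rather than my real code, and push_audio_packet / the variable names are just placeholders:

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

/* Wrap one ADTS-framed AAC packet in a GstBuffer, stamp it with the RTMP
 * timestamp (milliseconds) converted to nanoseconds, and push it into the
 * appsrc. gst_app_src_push_buffer() takes ownership of the buffer. */
static GstFlowReturn push_audio_packet(GstElement *audio_appsrc,
                                       const guint8 *data, gsize len,
                                       guint32 rtmp_timestamp_ms)
{
    GstBuffer *buf = gst_buffer_new_allocate(NULL, len, NULL);
    gst_buffer_fill(buf, 0, data, len);

    GST_BUFFER_PTS(buf) = (GstClockTime) rtmp_timestamp_ms * GST_MSECOND;

    return gst_app_src_push_buffer(GST_APP_SRC(audio_appsrc), buf);
}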
Thanks!