java - Audio Buffering in Android


I am trying to implement efficient audio buffering for radio streaming. Roughly, this is my current program flow:

Initially I have an InputStream obtained from an HttpURLConnection. A circular buffer reads data from that InputStream and fills its buffers; in this case I have 3 buffers, but the number and size of the buffers can easily be changed. Once all the buffers are filled, I start writing data from the first buffer to an OutputStream that is connected (piped) to another InputStream, so whatever I write to the OutputStream can then be read from that InputStream. A MediaCodec reads from this InputStream, decodes the data, and passes it on to an AudioTrack.
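For concreteness, here is a minimal sketch of that flow in Java, using piped streams as the OutputStream/InputStream bridge; the class name, buffer sizes, and method names are illustrative assumptions, not taken from your code:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.PipedInputStream;
    import java.io.PipedOutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class StreamBufferPipeline {

        private static final int BUFFER_COUNT = 3;          // assumed: three fixed-size buffers
        private static final int BUFFER_SIZE  = 64 * 1024;  // assumed slot size in bytes

        private final byte[][] buffers = new byte[BUFFER_COUNT][BUFFER_SIZE];
        private final int[] filled = new int[BUFFER_COUNT]; // bytes actually stored per buffer

        public void start(String streamUrl) throws IOException {
            // 1. Open the radio stream over HTTP.
            HttpURLConnection conn = (HttpURLConnection) new URL(streamUrl).openConnection();
            InputStream netIn = conn.getInputStream();

            // 2. Piped streams: whatever is written to pipeOut becomes readable from
            //    pipeIn, which the decoder side (MediaCodec -> AudioTrack) reads.
            PipedInputStream pipeIn = new PipedInputStream(BUFFER_SIZE);
            PipedOutputStream pipeOut = new PipedOutputStream(pipeIn);

            // 3. Fill all buffers once from the network before playback starts.
            for (int i = 0; i < BUFFER_COUNT; i++) {
                filled[i] = fill(netIn, buffers[i]);
            }

            // 4. Drain the first buffer into the pipe; the decoder thread reads pipeIn.
            //    In the real flow, buffer 0 would be refilled from netIn on another
            //    thread while buffers 1 and 2 are drained, and so on around the ring.
            pipeOut.write(buffers[0], 0, filled[0]);
            pipeOut.flush();
        }

        /** Reads until the buffer is full or the stream ends; returns bytes read. */
        private static int fill(InputStream in, byte[] buf) throws IOException {
            int total = 0;
            while (total < buf.length) {
                int n = in.read(buf, total, buf.length - total);
                if (n < 0) break;
                total += n;
            }
            return total;
        }
    }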

The problem with this setup is that after some time the OutputStream eventually catches up with the end of the circular buffer, i.e. there is no additional "extra buffer" left between "OutputStream.giveMeNextBufferWithData" and "CircularBuffer.readDataFromInputStreamAndCreateBuffer" (pseudo-code).
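That starvation is the classic producer/consumer condition: the drain position catches up with the fill position. A minimal sketch of a slot-based circular buffer that makes both the underrun (your "hungry" case) and the opposite overrun visible; all names here are hypothetical:

    /**
     * Minimal slot-based circular buffer. The starvation described above
     * corresponds to the drain position catching up with the fill position:
     * the slot the drainer wants has not been refilled yet.
     */
    class CircularByteBuffers {
        private final byte[][] slots;
        private final boolean[] ready;   // true = slot holds fresh, undrained data
        private int nextToFill = 0;
        private int nextToDrain = 0;

        CircularByteBuffers(int slotCount, int slotSize) {
            slots = new byte[slotCount][slotSize];
            ready = new boolean[slotCount];
        }

        /** Network thread: slot to fill next, or null if the drainer is too slow (overrun). */
        synchronized byte[] slotToFill() {
            return ready[nextToFill] ? null : slots[nextToFill];
        }

        synchronized void markFilled() {
            ready[nextToFill] = true;
            nextToFill = (nextToFill + 1) % slots.length;
        }

        /** Drainer thread: slot to write out next, or null if it is not filled yet (underrun). */
        synchronized byte[] slotToDrain() {
            return ready[nextToDrain] ? slots[nextToDrain] : null;
        }

        synchronized void markDrained() {
            ready[nextToDrain] = false;
            nextToDrain = (nextToDrain + 1) % slots.length;
        }
    }

When slotToDrain() returns null you are in exactly the "hungry" state you describe; making the drainer wait there only helps if the producer can actually catch up again.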

I have tried increasing the number and size of the buffers, but that does not help; it only delays the moment when the OutputStream gets "hungry" for more data.

There is a great library out there that does buffered audio, but unfortunately I could not understand how its buffering is done, and it does not work well in some situations.

Is there a "complete" algorithm for doing such a task? I was thinking about double/triple buffering, but I am not sure about it, and since I have been searching around the net without success for the last few days, I finally decided to ask here.

I hope I have explained everything clearly.

Thank you all!

Buffering, by itself, is inadequate for fixed-rate broadcast (as opposed to on-demand) stream sources, where you cannot guarantee that the source and device sample rates match exactly.

For short programs, what you can do is buffer a second or two of audio: if your playback rate runs slightly fast, you start eating into that reserve, but hopefully it lasts until the program ends. Conversely, you need spare buffer capacity if you are consuming the data more slowly than it is generated. Either way, such schemes eventually fail once the accumulated sample-rate difference exceeds the size of the buffer reserve.
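A rough back-of-the-envelope calculation (with assumed numbers) shows why a fixed reserve only postpones the failure for a continuous radio stream:

    public class DriftEstimate {
        public static void main(String[] args) {
            double sourceRate = 44_100.0;   // samples/s produced by the broadcast (assumed)
            double deviceRate = 44_110.0;   // samples/s actually consumed by playback (assumed)
            double reserveSeconds = 2.0;    // pre-buffered audio reserve

            double driftPerSecond = deviceRate - sourceRate;      // 10 samples/s
            double reserveSamples = reserveSeconds * sourceRate;  // 88,200 samples
            double secondsToUnderrun = reserveSamples / driftPerSecond;

            System.out.printf("Reserve exhausted after ~%.0f s (~%.1f minutes)%n",
                    secondsToUnderrun, secondsToUnderrun / 60.0);
            // ~8820 s, i.e. about two and a half hours: fine for a short program,
            // but guaranteed to underrun on a continuous radio stream.
        }
    }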

Instead, to make streaming work indefinitely, you need a way to equalize the production and consumption rates.

The "lazy" way to do this is to watch the trend of the buffering error and, depending on its direction, add or drop one sample per some long unit of time during the copy. Technically, this introduces distortion.
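A sketch of that add/drop approach, with an illustrative adjustment interval and trend signal (nothing here is a standard Android API):

    /**
     * "Lazy" rate matching: while copying decoded samples, occasionally drop or
     * duplicate one sample depending on whether the buffer fill level is trending
     * up or down. Interval and trend handling are illustrative assumptions.
     */
    class SampleAddDrop {
        // Adjust one sample per this many copied samples (a "long unit of time").
        private static final int ADJUST_INTERVAL = 44_100 * 10; // every ~10 s at 44.1 kHz
        private int sinceLastAdjust = 0;

        /**
         * fillTrend > 0 means the buffer keeps growing (we consume too slowly),
         * fillTrend < 0 means it keeps shrinking (we consume too fast).
         * 'out' needs a little headroom beyond inLen for duplicated samples.
         * Returns the number of samples written to 'out'.
         */
        int copy(short[] in, int inLen, short[] out, int fillTrend) {
            int outLen = 0;
            for (int i = 0; i < inLen; i++) {
                sinceLastAdjust++;
                if (sinceLastAdjust >= ADJUST_INTERVAL && fillTrend > 0) {
                    sinceLastAdjust = 0;
                    continue;                      // drop this sample
                }
                out[outLen++] = in[i];
                if (sinceLastAdjust >= ADJUST_INTERVAL && fillTrend < 0) {
                    sinceLastAdjust = 0;
                    out[outLen++] = in[i];         // duplicate this sample
                }
            }
            return outLen;
        }
    }

One dropped or duplicated sample every several seconds is usually inaudible on radio material, which is why this crude form of rate matching is often good enough in practice.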

The "fancy" way to do this is to use a variable-ratio sample rate converter whose ratio can be adjusted on the fly so that the audio consumption rate and the output rate stay matched.
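A sketch of that idea with a simple linear-interpolation converter whose ratio is nudged by the buffer fill level; a real implementation would use a proper resampling library, and the constants here are only illustrative:

    /**
     * "Fancy" rate matching: a variable-ratio linear-interpolation resampler.
     * The ratio is nudged around 1.0 according to how full the buffer is, so the
     * consumption rate tracks the production rate. Sketch only; a production
     * resampler would use a polyphase/sinc design for better quality.
     */
    class VariableRateResampler {
        private double ratio = 1.0;   // input samples consumed per output sample
        private double pos = 0.0;     // fractional read position into the input

        /** Nudge the ratio so the buffer settles around half full. */
        void adjust(double bufferFillFraction) {
            // e.g. 55% full -> ratio slightly above 1.0 -> consume input a bit faster
            ratio = 1.0 + 0.002 * (bufferFillFraction - 0.5);
        }

        /** Resamples as much of 'in' as fits into 'out'; returns samples written. */
        int resample(short[] in, short[] out) {
            int written = 0;
            while (written < out.length && pos + 1 < in.length) {
                int i = (int) pos;
                double frac = pos - i;
                // Linear interpolation between two neighbouring input samples.
                out[written++] = (short) Math.round(in[i] * (1.0 - frac) + in[i + 1] * frac);
                pos += ratio;
            }
            int consumed = (int) pos;
            pos -= consumed;   // keep the fractional part; the caller is assumed to carry
                               // the (in.length - consumed) leftover samples over to the
                               // front of the next input block
            return written;
        }
    }

Because the ratio stays within a fraction of a percent of 1.0, the pitch shift is far below anything audible, while the buffer level is kept from drifting away.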
