Hi,
Thanks for the reply. Could you kindly help me with a few more queries?
I tried to understand the audio implementation in QEMU from the sources you suggested, but I am still confused about how audio emulation actually works.
My goal is to read audio streams, sampled at a particular frequency, into device buffers, do some processing on those data buffers, and then generate an output audio stream.
There are some APIs and structs that I believe are useful for this, but I could not understand what exactly they do, or simply how they work:
1) AUD_read --> As far as I understand, this reads data from some HWVoiceIn structure member associated with a SWVoiceIn structure.
Please help me understand what it actually does. Does it read data from real hardware (i.e., the host microphone), and how can I test this in QEMU?
2) AUD_write -> Does it send data to the hardware (i.e., the speaker)? Again, how can I test this in QEMU?
From the above, my understanding is that each piece of hardware has an associated buffer, the AUD_read/AUD_write APIs read/write data from/to those buffers, and the driver associated with the hardware manages them. Is that correct?
Is it possible to associate multiple SWVoiceOut instances with a single HWVoiceOut?
What is the use of these structures:
SWVoiceIn/Out
HWVoiceIn/Out
3) Reading audio streams: how does it differ if we read data from a file (e.g., *.wav, *.mp4)? Do we still need to use AUD_read?
4) What does the audfmt_e enumeration signify? Is it for marking audio data as signed or unsigned?
5) What is the significance of the channels parameter in the audsettings structure?
Thanks in advance!