commit 26b1d0fc84 (parent b683350856)

    merge and deduplicate some pa_buffer_attr documentation

2 changed files with 44 additions and 78 deletions
@@ -401,13 +401,18 @@ typedef struct pa_buffer_attr {
     uint32_t tlength;
     /**< Playback only: target length of the buffer. The server tries
      * to assure that at least tlength bytes are always available in
-     * the per-stream server-side playback buffer. It is recommended
-     * to set this to (uint32_t) -1, which will initialize this to a
-     * value that is deemed sensible by the server. However, this
-     * value will default to something like 2s, i.e. for applications
-     * that have specific latency requirements this value should be
-     * set to the maximum latency that the application can deal
-     * with. When PA_STREAM_ADJUST_LATENCY is not set this value will
+     * the per-stream server-side playback buffer. The server will
+     * only send requests for more data as long as the buffer has
+     * less than this number of bytes of data.
+     *
+     * It is recommended to set this to (uint32_t) -1, which will
+     * initialize this to a value that is deemed sensible by the
+     * server. However, this value will default to something like 2s;
+     * for applications that have specific latency requirements
+     * this value should be set to the maximum latency that the
+     * application can deal with.
+     *
+     * When PA_STREAM_ADJUST_LATENCY is not set this value will
      * influence only the per-stream playback buffer size. When
      * PA_STREAM_ADJUST_LATENCY is set the overall latency of the sink
      * plus the playback buffer size is configured to this value. Set
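To illustrate the tlength guidance in the hunk above, here is a minimal C sketch. It is not part of the commit; the function and parameter names (request_low_latency_playback, latency_ms) are invented, while pa_buffer_attr, pa_usec_to_bytes(), pa_stream_connect_playback() and PA_STREAM_ADJUST_LATENCY are real PulseAudio API.

#include <pulse/pulseaudio.h>

/* Hypothetical helper: request roughly latency_ms of playback latency
 * by sizing tlength, as the rewritten comment recommends. */
int request_low_latency_playback(pa_stream *s, const pa_sample_spec *ss,
                                 unsigned latency_ms) {
    pa_buffer_attr attr;

    attr.maxlength = (uint32_t) -1;  /* let the server choose the maximum */
    attr.tlength   = (uint32_t) pa_usec_to_bytes(
                         (pa_usec_t) latency_ms * PA_USEC_PER_MSEC, ss);
    attr.prebuf    = (uint32_t) -1;  /* default: same value as tlength */
    attr.minreq    = (uint32_t) -1;  /* server-chosen */
    attr.fragsize  = (uint32_t) -1;  /* recording only, unused here */

    /* With PA_STREAM_ADJUST_LATENCY set, tlength is interpreted as the
     * overall sink latency plus playback buffer size (see above). */
    return pa_stream_connect_playback(s, NULL, &attr,
                                      PA_STREAM_ADJUST_LATENCY, NULL, NULL);
}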
@@ -421,11 +426,19 @@ typedef struct pa_buffer_attr {
      * playback before at least prebuf bytes are available in the
      * buffer. It is recommended to set this to (uint32_t) -1, which
      * will initialize this to the same value as tlength, whatever
-     * that may be. Initialize to 0 to enable manual start/stop
-     * control of the stream. This means that playback will not stop
-     * on underrun and playback will not start automatically. Instead
-     * pa_stream_cork() needs to be called explicitly. If you set
-     * this value to 0 you should also set PA_STREAM_START_CORKED. */
+     * that may be.
+     *
+     * Initialize to 0 to enable manual start/stop control of the stream.
+     * This means that playback will not stop on underrun and playback
+     * will not start automatically, instead pa_stream_cork() needs to
+     * be called explicitly. If you set this value to 0 you should also
+     * set PA_STREAM_START_CORKED. Should underrun occur, the read index
+     * of the output buffer overtakes the write index, and hence the
+     * fill level of the buffer is negative.
+     *
+     * Start of playback can be forced using pa_stream_trigger() even
+     * though the prebuffer size hasn't been reached. If a buffer
+     * underrun occurs, this prebuffering will be again enabled. */
 
     uint32_t minreq;
     /**< Playback only: minimum request. The server does not request
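The manual start/stop control described in the rewritten prebuf comment looks roughly like the following sketch on the client side. The function names are illustrative; pa_stream_connect_playback(), pa_stream_cork() and PA_STREAM_START_CORKED are the real calls and flag named in the comment.

#include <pulse/pulseaudio.h>

/* Hypothetical helper: connect with prebuf = 0 for manual start/stop
 * control, paired with PA_STREAM_START_CORKED as the comment advises. */
int connect_manually_controlled(pa_stream *s) {
    pa_buffer_attr attr;

    attr.maxlength = (uint32_t) -1;
    attr.tlength   = (uint32_t) -1;
    attr.prebuf    = 0;              /* playback neither starts nor stops
                                      * automatically */
    attr.minreq    = (uint32_t) -1;
    attr.fragsize  = (uint32_t) -1;

    return pa_stream_connect_playback(s, NULL, &attr,
                                      PA_STREAM_START_CORKED, NULL, NULL);
}

/* Hypothetical helper: start playback by uncorking explicitly. */
void start_playback(pa_stream *s) {
    pa_operation *o = pa_stream_cork(s, 0, NULL, NULL);
    if (o)
        pa_operation_unref(o);
}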
@@ -444,11 +457,12 @@ typedef struct pa_buffer_attr {
      * but decrease control overhead. It is recommended to set this to
      * (uint32_t) -1, which will initialize this to a value that is
      * deemed sensible by the server. However, this value will default
-     * to something like 2s, i.e. for applications that have specific
+     * to something like 2s; For applications that have specific
      * latency requirements this value should be set to the maximum
-     * latency that the application can deal with. If
-     * PA_STREAM_ADJUST_LATENCY is set the overall source latency will
-     * be adjusted according to this value. If it is not set the
+     * latency that the application can deal with.
+     *
+     * If PA_STREAM_ADJUST_LATENCY is set the overall source latency
+     * will be adjusted according to this value. If it is not set the
      * source latency is left unmodified. */
 
 } pa_buffer_attr;
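The fragsize field documented in this hunk is the record-side counterpart of tlength; a sketch under the same assumptions as above (invented function name, real API calls):

#include <pulse/pulseaudio.h>

/* Hypothetical helper: bound capture latency via fragsize, per the
 * rewritten fragsize comment. */
int connect_low_latency_record(pa_stream *s, const pa_sample_spec *ss,
                               unsigned latency_ms) {
    pa_buffer_attr attr;

    attr.maxlength = (uint32_t) -1;
    attr.tlength   = (uint32_t) -1;  /* playback only, unused here */
    attr.prebuf    = (uint32_t) -1;  /* playback only, unused here */
    attr.minreq    = (uint32_t) -1;  /* playback only, unused here */
    attr.fragsize  = (uint32_t) pa_usec_to_bytes(
                         (pa_usec_t) latency_ms * PA_USEC_PER_MSEC, ss);

    /* If PA_STREAM_ADJUST_LATENCY is set, the overall source latency is
     * adjusted to this value; otherwise only the fragment size of the
     * server-side record buffer is affected. */
    return pa_stream_connect_record(s, NULL, &attr, PA_STREAM_ADJUST_LATENCY);
}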
@@ -92,68 +92,20 @@
  * overflows/underruns.
  *
  * The buffer metrics may be controlled by the application. They are
- * described with a pa_buffer_attr structure which contains a number
- * of fields:
- *
- * \li maxlength - The absolute maximum number of bytes that can be
- *                 stored in the buffer. If this value is exceeded
- *                 then data will be lost. It is recommended to pass
- *                 (uint32_t) -1 here which will cause the server to
- *                 fill in the maximum possible value.
- *
- * \li tlength - The target fill level of the playback buffer. The
- *               server will only send requests for more data as long
- *               as the buffer has less than this number of bytes of
- *               data. If you pass (uint32_t) -1 (which is
- *               recommended) here the server will choose the longest
- *               target buffer fill level possible to minimize the
- *               number of necessary wakeups and maximize drop-out
- *               safety. This can exceed 2s of buffering. For
- *               low-latency applications or applications where
- *               latency matters you should pass a proper value here.
- *
- * \li prebuf - Number of bytes that need to be in the buffer before
- *              playback will commence. Start of playback can be
- *              forced using pa_stream_trigger() even though the
- *              prebuffer size hasn't been reached. If a buffer
- *              underrun occurs, this prebuffering will be again
- *              enabled. If the playback shall never stop in case of a
- *              buffer underrun, this value should be set to 0. In
- *              that case the read index of the output buffer
- *              overtakes the write index, and hence the fill level of
- *              the buffer is negative. If you pass (uint32_t) -1 here
- *              (which is recommended) the server will choose the same
- *              value as tlength here.
- *
- * \li minreq - Minimum number of free bytes in the playback
- *              buffer before the server will request more data. It is
- *              recommended to fill in (uint32_t) -1 here. This value
- *              influences how much time the sound server has to move
- *              data from the per-stream server-side playback buffer
- *              to the hardware playback buffer.
- *
- * \li fragsize - Maximum number of bytes that the server will push in
- *                one chunk for record streams. If you pass (uint32_t)
- *                -1 (which is recommended) here, the server will
- *                choose the longest fragment setting possible to
- *                minimize the number of necessary wakeups and
- *                maximize drop-out safety. This can exceed 2s of
- *                buffering. For low-latency applications or
- *                applications where latency matters you should pass a
- *                proper value here.
+ * described with a pa_buffer_attr structure.
  *
  * If PA_STREAM_ADJUST_LATENCY is set, then the tlength/fragsize
- * parameters will be interpreted slightly differently than described
- * above when passed to pa_stream_connect_record() and
- * pa_stream_connect_playback(): the overall latency that is comprised
- * of both the server side playback buffer length, the hardware
- * playback buffer length and additional latencies will be adjusted in
- * a way that it matches tlength resp. fragsize. Set
- * PA_STREAM_ADJUST_LATENCY if you want to control the overall
- * playback latency for your stream. Unset it if you want to control
- * only the latency induced by the server-side, rewritable playback
- * buffer. The server will try to fulfill the client's latency requests
- * as good as possible. However if the underlying hardware cannot
+ * parameters of the pa_buffer_attr structure will be interpreted
+ * slightly differently than otherwise when passed to
+ * pa_stream_connect_record() and pa_stream_connect_playback(): the
+ * overall latency that is comprised of both the server side playback
+ * buffer length, the hardware playback buffer length and additional
+ * latencies will be adjusted in a way that it matches tlength resp.
+ * fragsize. Set PA_STREAM_ADJUST_LATENCY if you want to control the
+ * overall playback latency for your stream. Unset it if you want to
+ * control only the latency induced by the server-side, rewritable
+ * playback buffer. The server will try to fulfill the client's latency
+ * requests as good as possible. However if the underlying hardware cannot
  * change the hardware buffer length or only in a limited range, the
  * actually resulting latency might be different from what the client
  * requested. Thus, for synchronization clients always need to check
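Since the hunk ends by noting that clients always need to check the actually resulting latency, here is a small sketch of that check. It assumes the stream was connected with PA_STREAM_AUTO_TIMING_UPDATE (and ideally PA_STREAM_INTERPOLATE_TIMING) so timing data is available; the function name is invented, pa_stream_get_latency() is real API.

#include <stdio.h>
#include <pulse/pulseaudio.h>

/* Hypothetical helper: print the actually resulting stream latency,
 * which may differ from the requested tlength/fragsize. */
void report_latency(pa_stream *s) {
    pa_usec_t latency;
    int negative = 0;

    if (pa_stream_get_latency(s, &latency, &negative) == 0)
        printf("resulting latency: %s%llu usec\n",
               negative ? "-" : "", (unsigned long long) latency);
}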
@@ -164,8 +116,8 @@
  * tlength/fragsize, regardless whether PA_STREAM_ADJUST_LATENCY is
  * set or not.
  *
- * The server-side per-stream playback buffers are indexed by a write and a read
- * index. The application writes to the write index and the sound
+ * The server-side per-stream playback buffers are indexed by a write and
+ * a read index. The application writes to the write index and the sound
  * device reads from the read index. The read index is increased
  * monotonically, while the write index may be freely controlled by
  * the application. Subtracting the read index from the write index
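The fill-level arithmetic described in this last hunk (write index minus read index) can be read off the stream's timing info. A speculative sketch, again assuming timing updates are enabled and with an invented function name; pa_stream_get_timing_info() and the pa_timing_info write_index/read_index fields are real API:

#include <pulse/pulseaudio.h>

/* Hypothetical helper: current buffer fill level in bytes. Returns 0
 * before the first timing update has arrived. */
int64_t playback_fill_level(pa_stream *s) {
    const pa_timing_info *ti = pa_stream_get_timing_info(s);

    if (!ti)
        return 0;

    /* Negative after an underrun with prebuf == 0: the read index has
     * overtaken the write index. */
    return ti->write_index - ti->read_index;
}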