Use spa_steal_ptr to transfer props ownership when we can.
This fixes a problem in the upload stream where the props would be freed
twice when buffer allocation failed, once with properties_free and
then with stream_free.
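A minimal sketch of the steal-pointer idea; `steal_ptr` is an illustrative stand-in (spa_steal_ptr works the same way: hand the pointer to the new owner and clear the old one, so only one side ever frees it):

```c
#include <assert.h>
#include <stdlib.h>

/* Heap-based sketch: after the steal, the old owner holds NULL, so an
 * error path that frees "props" and a teardown path that frees the
 * stream can no longer both free the same pointer. */
#define steal_ptr(p) ({ __typeof__(p) tmp_ = (p); (p) = NULL; tmp_; })
```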
A client-supplied device name ending in ".monitor" was stack-allocated
via strndupa without any size limit. Since protocol messages can be up
to 16MB, a malicious client could send a very long device name and
overflow the stack, crashing the daemon.
Cap the strndupa length at MAX_NAME (1024) in both find_device and
do_set_default.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
For playback and capture streams we allocate MAXLENGTH (4M) buffers but
for upload streams we must allow space for the total upload stream, which
can be up to the max allowed sample size (16M).
Keep the allocated size for the stream around in a variable so that we
can use it when writing/reading to/from the ringbuffer.
This could later also be extended to use the attr.maxlength variable to
size the buffer (but it's usually 4M anyway). That would require growing
the buffer when new attributes are set, which is probably more trouble
than it's worth.
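Roughly, with hypothetical names for the helper and the 16M constant:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAXLENGTH       (4u * 1024 * 1024)   /* playback/capture buffers */
#define MAX_SAMPLE_SIZE (16u * 1024 * 1024)  /* illustrative name for the 16M cap */

/* Pick and remember the per-stream allocation size: upload streams get
 * room for a whole sample, other streams keep the usual 4M ringbuffer.
 * The result is stored on the stream so ringbuffer reads/writes use the
 * real size instead of assuming MAXLENGTH. */
static uint32_t stream_buffer_size(bool is_upload)
{
        return is_upload ? MAX_SAMPLE_SIZE : MAXLENGTH;
}
```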
Use the JSON builder to prepare arguments for modules and metadata
instead of custom memopen and fprintf. This makes it easier to ensure
the strings are all properly escaped.
This removes the use of spa_json_encode_string(), which could return a
truncated, non-zero-terminated result that we then needed to check
everywhere.
do_cork_stream, do_flush_trigger_prebuf_stream, and do_set_stream_name
did not check whether the stream had completed format negotiation.
Add create_tag guards matching the pattern in do_set_stream_buffer_attr.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
If a client sends UPDATE_PLAYBACK_STREAM_SAMPLE_RATE before format
negotiation completes, stream->ss.rate could be 0, causing a
floating-point division by zero. Add the same create_tag guard used
in do_set_stream_buffer_attr.
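A sketch of the guard, assuming create_tag is only cleared (set to SPA_ID_INVALID) once format negotiation has completed and the create reply was sent:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define SPA_ID_INVALID 0xffffffffu

/* While create_tag is still set, negotiation is pending and ss.rate may
 * still be 0, so bail out before any division by the rate. */
static int guard_rate_update(uint32_t create_tag)
{
        if (create_tag != SPA_ID_INVALID)
                return -EIO;    /* negotiation not finished yet */
        return 0;               /* ss.rate is valid from here on */
}
```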
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Client-supplied int64_t offset was multiplied by 1000 without overflow
check. Use spa_overflow_mul to detect and reject values that would
overflow.
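A sketch of the check; `offset_mul_1000` is a hypothetical name, and the compiler builtin stands in for the spa_overflow_mul helper used in the fix:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Multiply a client-supplied offset by 1000 with overflow detection;
 * returns false (reject) instead of wrapping on overflow. */
static bool offset_mul_1000(int64_t offset, int64_t *result)
{
        return !__builtin_mul_overflow(offset, (int64_t)1000, result);
}
```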
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
If module_create succeeded but the subsequent calloc for
pending_module failed, the module was leaked in the modules map.
Move the calloc before module_create so failure cleanup is trivial.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The device name was interpolated into a JSON metadata string without
escaping. A node with crafted name containing quote characters could
inject arbitrary JSON keys into the default sink/source metadata.
Use spa_json_encode_string to properly escape the value.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
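To illustrate why the escaping matters, here is a tiny stand-in for spa_json_encode_string; the real helper handles more cases (control characters etc.), this one only quotes the value and escapes '"' and '\':

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Append one byte if it fits (reserving room for the terminator). */
static size_t n_put(char *dst, size_t size, size_t n, char c)
{
        if (n + 1 < size)
                dst[n] = c;
        return n + 1;
}

/* Write src as a JSON string into dst. Returns the number of bytes
 * needed, snprintf-style, and always 0-terminates when size > 0, so a
 * crafted name with quotes can no longer inject JSON keys. */
static size_t json_escape(char *dst, size_t size, const char *src)
{
        size_t n = 0;
        n = n_put(dst, size, n, '"');
        for (; *src; src++) {
                if (*src == '"' || *src == '\\')
                        n = n_put(dst, size, n, '\\');
                n = n_put(dst, size, n, *src);
        }
        n = n_put(dst, size, n, '"');
        if (size > 0)
                dst[n < size ? n : size - 1] = '\0';
        return n;
}
```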
A client can create a stream with invalid sample_spec (rate=0) via
format_info negotiation, then send SET_STREAM_BUFFER_ATTR before
negotiation completes. fix_playback_buffer_attr divides by ss.rate,
crashing the daemon. Reject buffer attr changes on streams that
have not completed format negotiation.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The client-provided rate was used without validation. A zero or
excessively large rate produces extreme correction values passed
to pw_stream_set_control. Reject rates that are zero or exceed
RATE_MAX.
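A sketch of the validation; the RATE_MAX value here is illustrative, not the real definition:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define RATE_MAX (48000u * 32)  /* illustrative cap */

/* Reject a client-provided rate before deriving a correction factor
 * from it: 0 breaks the math, huge values produce extreme corrections. */
static int check_update_rate(uint32_t rate)
{
        if (rate == 0 || rate > RATE_MAX)
                return -EINVAL;
        return 0;
}
```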
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
There was no limit on the total size of the sample cache. A client
could upload many samples to exhaust server memory. Add a configurable
pulse.max-sample-cache property (default 64MB) to cap the total size
of all cached samples.
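The accounting can be sketched like this, with illustrative names and signature:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Charge an uploaded sample against the configured cache budget
 * (pulse.max-sample-cache); reject the upload when the cap would be
 * exceeded, leaving the counter untouched. */
static int sample_cache_charge(size_t *cache_used, size_t length, size_t max)
{
        if (*cache_used > max || length > max - *cache_used)
                return -ENOBUFS;
        *cache_used += length;
        return 0;
}
```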
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
There was no limit on the number of streams a single client could
create. Each stream allocates a 4MB ring buffer, allowing a malicious
client to exhaust server memory. Add a configurable pulse.max-streams
property (default 64) to limit streams per client.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Memory Safety: High
The stream_control_info() callback copied control->n_values floats
into stream->volume.values without checking bounds. The source allows
up to MAX_VALUES (256) entries but the destination volume array is
only CHANNELS_MAX (64) entries, so a stream with more than 64 channel
volumes would overflow the buffer. Clamp n_values to CHANNELS_MAX
before the copy.
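A sketch of the clamp, using the constants from the description:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CHANNELS_MAX 64

/* Clamp the source count to the destination capacity before copying:
 * a control reporting up to MAX_VALUES (256) volumes can no longer
 * overflow the CHANNELS_MAX-sized destination array. */
static uint32_t copy_volumes(float *dst, const float *src, uint32_t n_values)
{
        uint32_t n = n_values < CHANNELS_MAX ? n_values : CHANNELS_MAX;
        memcpy(dst, src, n * sizeof(float));
        return n;
}
```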
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Make a function like alloca but with overflow checks and a max
allocation size.
Use this function where we can and also make sure that all alloca calls
are in some way limited.
Memory Safety: Medium
The strlen() return value (size_t) is stored in an int before being
passed to alloca(). If a malicious client sets an extremely long
PW_KEY_NODE_NAME property, the addition could overflow the int,
resulting in a small or negative alloca size and a subsequent buffer
overflow in snprintf().
Change the type to size_t and add a bounds check to prevent
excessively large stack allocations.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Memory Safety: Medium
Two alloca() calls in the PulseAudio protocol server were missed by the
previous alloca bounds-checking fix (commit 0d2877c0d):
1. fill_node_info_proplist() adds n_items counts from node properties
and client properties without checking the total before alloca().
A client with a very large number of properties can exhaust the stack.
2. fill_card_info() uses pi->n_props from port info for an alloca()
without bounds checking. A card object with many port properties can
similarly exhaust the stack.
Add MAX_ALLOCA_SIZE checks consistent with the existing pattern to
prevent stack overflow from large property counts.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Memory Safety: Medium
Several functions in the PulseAudio protocol implementation use alloca()
to allocate arrays of port_info, profile_info, or dict_item structs
based on counts derived from card parameters or client property lists.
These counts have no upper bounds, so a card object with a very large
number of parameters or a client sending many properties can cause
alloca() to exhaust the stack, resulting in a stack overflow crash.
Add a MAX_ALLOCA_SIZE (64KB) limit and check element counts before each
alloca() call. If the requested allocation exceeds the limit, the
function returns -ENOMEM instead of crashing.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
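The check-before-alloca pattern can be sketched like this (hypothetical helper name; the compiler builtin does the overflow-checked multiply):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_ALLOCA_SIZE (64 * 1024)

/* Validate an element count before the actual alloca() call: compute
 * the byte size with overflow detection and enforce the 64KB cap,
 * returning -ENOMEM instead of letting a huge count smash the stack. */
static int check_alloca_size(size_t n_items, size_t item_size, size_t *size)
{
        if (__builtin_mul_overflow(n_items, item_size, size) ||
            *size > MAX_ALLOCA_SIZE)
                return -ENOMEM;
        return 0;
}
```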
When a node is inactive but linked to a driver, the only reason it is
not being scheduled is that it is inactive.
We already set up the links and negotiate the format and buffers to
prepare going to RUNNING. This patch now also makes the node go to IDLE,
which makes the adapter negotiate a format and buffers with the internal
node.
This makes things more symmetrical: when linking a node, it becomes IDLE;
when activating, it becomes RUNNABLE; when inactive, it goes back to
IDLE.
The switch to RUNNING will also be faster when things are already set up
in the IDLE state.
The main advantage is that it allows us to implement the startup of
corked streams in pulseaudio better. Before this patch we had to set the
stream to active to make it go through the Format and buffer negotiation
and then quickly set it back to inactive, hopefully without skipping a
cycle. After this patch, the corked stream goes all the way to IDLE,
where it then waits to become active.
See #4991
We can just concatenate the stream and client dict on the stack, reusing
all the strings without having to do allocations or copies.
Also filter out the object.id and object.serial from the client props,
since we want to keep the ones from the streams.
Instead of adding the client props to the stream props when we create
it, add them when we enumerate the streams.
This makes it possible to also have the client props in the stream props
for streams that are not created with the pulse API.
Fixes #5090
WirePlumber recently added a mechanism to force mono mixdown on audio
outputs, which is a useful feature for accessibility. Let's also expose
that setting via libpulse so that existing audio settings UIs can use
it.
The dont-inhibit-auto-suspend flag does not do anything when using
direct-on-input-idx (capturing from a stream) in pulseaudio, so also
make it do nothing on pulse-server.
See #4991
Use the timer queue for scheduling stream and object data timeouts.
This avoids allocating timerfds for these timeouts and the timer queue
can handle many timeouts more efficiently.
Only send out SUSPENDED event when there is a change in the suspended
state. This avoids sending out unsuspend events when we simply uncork.
Implement the fail_on_suspend flag for capture and playback streams.
Instead of suspending those streams, we need to kill them.
When we simply need to change some state for the code executed in the
loop, we can use locked() instead of invoke(). This is more efficient
and avoids some context switches in the normal case.
Make the state_changed event and _get_state() function set errno to the
current error value if the state is in error, so that applications can
use this to give more detailed error reporting.
Use this in alsa, v4l2 and pulse to give some other error codes than
EIO.
Fixes#4574
Set the corked state on streams so that we can use this in sink-input
and source-output info without guessing.
The problem is that when a stream starts uncorked, the state is less
than RUNNING, so before this patch pulse-server reported a corked
stream, which is not what pulseaudio reports.
Make `client_queue_subscribe_event()` take the facility and type
separately, and calculate the mask itself, so that the caller
does not need to be concerned with that.
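With PulseAudio's subscription encoding (mask bit = 1 << facility, wire value = facility | type), a sketch of what the helper computes itself; the struct and names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

struct sub_event {
        uint32_t mask;   /* subscription mask bit */
        uint32_t event;  /* wire event value */
};

/* The caller passes facility and type separately; the mask and the
 * combined event value are derived here instead of at every call site. */
static struct sub_event make_sub_event(uint32_t facility, uint32_t type)
{
        struct sub_event ev;
        ev.mask = 1u << facility;   /* e.g. SINK_INPUT (2) -> mask 0x4 */
        ev.event = facility | type; /* type bits live at 0x30 */
        return ev;
}
```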
This gets the next key and value from an object. This function is better
because it will skip key/value pairs that don't fit in the array to hold
the key.
The previous code pattern would stop parsing the object as soon as a
key larger than the available space was found.
Add spa_json_begin_array/object to replace
spa_json_init+spa_json_begin_array/object
These functions are better because they do not need a separate spa_json
structure as an iterator. The relaxed versions also error out when the
container is mismatched, because parsing a mismatched container is not
going to give any results anyway.