We might overflow the path buffer when we strcat() the provided filename
into it, which might crash or cause unexpected behaviour.
Instead, use spa_scnprintf(), which avoids overflow and properly truncates
and null-terminates the string.
Found by Claude Code.
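A minimal sketch of the bounded-copy pattern, using plain snprintf() as a stand-in for spa_scnprintf() (the helper name here is illustrative):

```c
#include <stdio.h>

/* Build a path with a bounded formatter instead of strcat(): the output
 * never exceeds `size` bytes and is always null-terminated, truncating
 * the result if dir + name would overflow the buffer. */
static int build_path(char *buf, size_t size, const char *dir, const char *name)
{
	return snprintf(buf, size, "%s/%s", dir, name);
}
```

Unlike snprintf(), spa_scnprintf() returns the number of bytes actually written rather than the would-be length, but the truncation and null-termination behaviour is the same.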
Check that the number of fds for the message does not exceed the number
of received fds with SCM_RIGHTS.
The check was simply doing an array bounds check. This could still lead
to out-of-sync fds or usage of uninitialized/invalid fds when the
message header claims more fds than there were passed with SCM_RIGHTS.
Found by Claude Code.
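A sketch of the stricter check (names are hypothetical): count the fds actually delivered in the SCM_RIGHTS control message and validate the header's claim against that count, not just against the local array bounds:

```c
#include <stdbool.h>
#include <stdint.h>
#include <sys/socket.h>

/* The fds delivered with SCM_RIGHTS live in the cmsg payload, so the
 * number actually received is the payload length over sizeof(int). */
static int cmsg_fd_count(const struct cmsghdr *cmsg)
{
	if (cmsg->cmsg_level != SOL_SOCKET || cmsg->cmsg_type != SCM_RIGHTS)
		return 0;
	return (cmsg->cmsg_len - CMSG_LEN(0)) / sizeof(int);
}

/* Reject a message whose header claims more fds than were received. */
static bool msg_fds_valid(uint32_t claimed_fds, int received_fds)
{
	return claimed_fds <= (uint32_t)received_fds;
}
```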
spa_poll_event should have exactly the same layout as epoll_event to be
compatible across platforms. The structure is packed only on x86-64.
Fix the packing and replace the data member with a union similar to
epoll_data, to fix compatibility on 32-bit etc.
This reverts commit bb0efd777f.
It is unclear what the problem was before this commit. If there are any
pending operations, the suspend should simply cancel them.
See #5207
The FDK-AAC encoder uses a band-pass filter, which is automatically
applied at all bitrates.
For CBR encoding mode, its values are as follows (for stereo):
* 0-12 kb/s: 5 kHz
* 12-20 kb/s: 6.4 kHz
* 20-28 kb/s: 9.6 kHz
* 40-56 kb/s: 13 kHz
* 56-72 kb/s: 16 kHz
* 72-576 kb/s: 17 kHz
VBR uses the following table (stereo):
* Mode 1: 13 kHz
* Mode 2: 13 kHz
* Mode 3: 15.7 kHz
* Mode 4: 16.5 kHz
* Mode 5: 19.3 kHz
17 kHz is the limiting CBR value even at high bitrates.
Assume >110 kbit/s is "high bitrate" CBR and raise the
band-pass cutoff to 19.3 kHz (as in mode 5 VBR).
Link: d8e6b1a3aa/libAACenc/src/bandwidth.cpp (L114-L160)
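The high-bitrate tweak can be sketched as follows (the function name is hypothetical, and only the upper end of the table is modeled):

```c
/* For stereo CBR, the table above caps the band-pass at 17 kHz from
 * 72 kb/s up; treat anything above 110 kb/s as "high bitrate" and lift
 * the cutoff to 19.3 kHz, the VBR mode 5 value. */
static int aac_cbr_cutoff_hz(int bitrate_bps)
{
	if (bitrate_bps > 110000)
		return 19300;
	return 17000; /* table value for the 72-576 kb/s range */
}
```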
This makes it the same size as epoll_event and we don't need to copy the
results over.
However, this technically causes an ABI break, in case someone was
using the system interface directly.
Using connect() on a UDP receiver creates a strict filter based on
the sender's _source_ port, not the sender's destination port. The
source port specifies at what sender port the packet exits the sender.
The destination port specifies at what receiver port the packet enters
the receiver. But, the RTP sink uses an ephemeral (= random) port as the
source port. Consequently, connect() at the receiver will cause a
comparison of that ephemeral port with the fixated one (which is actually
the number of the _destination_ port). This incorrect filtering causes
all packets to be dropped.
Use bind() to filter for the local destination port, and use recvmsg()
with manual IP comparison to filter for the sender's identity.
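The replacement filter can be sketched like this (a minimal IPv4-only sketch; names are illustrative): bind() pins the local destination port, and the sender is then matched by address only, since its source port is ephemeral:

```c
#include <netinet/in.h>
#include <stdbool.h>

/* Accept a datagram only if it came from the expected sender address.
 * Deliberately ignore got->sin_port: that is the sender's ephemeral
 * source port, which connect() would wrongly have filtered on. */
static bool sender_matches(const struct sockaddr_in *got,
			   const struct sockaddr_in *want)
{
	return got->sin_addr.s_addr == want->sin_addr.s_addr;
}
```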
Remove support for changing `SPA_PROP_live` in node implementations
that supported it, and hard-code `SPA_PROP_live = true`. If a mode
of operation is desired where the data is processed as fast as possible,
it can be achieved by implementing non-driver operation and using the
freewheel driver in pipewire.
libcamera is planning to move to C++20 and drop the custom `libcamera::Span`
type at some point in the future. Since pipewire already uses C++20, remove
all uses of it and instead use `std::span` so that things will compile
after the removal.
Make the notify buffer larger: it was 8K but we can make it 64K. Also
reorder the notify struct fields to make the struct smaller.
This should avoid "notify queue full" warnings. Ideally we would size
this queue dynamically and not lose any messages.
They are emitted from the streaming thread and can therefore be emitted
concurrently with the events on the main thread. This can cause crashes
when the hook list is iterated.
Instead, turn those events into callbacks, which are more efficient
and thread-safe.
Add a control.ump port property. When true, the port wants UMP and the
mixer will convert to it. When false, the port supports both UMP and
Midi1 and no conversions will happen. When unset, the mixer will always
convert UMP to midi1.
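The tri-state can be sketched like this (enum and helper names are hypothetical):

```c
#include <stdbool.h>

enum port_ump {
	PORT_UMP_UNSET,  /* property not set */
	PORT_UMP_TRUE,   /* port wants UMP */
	PORT_UMP_FALSE,  /* port handles both UMP and MIDI1 */
};

/* The mixer converts to UMP only when the port asks for it ... */
static bool mixer_converts_to_ump(enum port_ump mode)
{
	return mode == PORT_UMP_TRUE;
}

/* ... and converts UMP to MIDI1 only when the property is unset. */
static bool mixer_converts_to_midi1(enum port_ump mode)
{
	return mode == PORT_UMP_UNSET;
}
```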
Remove the CONTROL_types property from the filter. This causes problems
because this is the format negotiated with peers, which might not
support the types but can still be linked because the mixer will
convert.
The control.ump port property is supposed to be a temporary fix until we
can negotiate the mixer ports properly with the CONTROL_types.
Remove UMP handling from Bluetooth MIDI; just use the raw MIDI1 events
now that the mixer will provide those and we are supposed to output our
unconverted format.
Fix midi events in-place in netjack because we can.
Update docs and pw-mididump to note that we are back to midi1 as the
default format.
With this, most of the midi<->UMP conversion should be gone again and we
should be able to avoid conversion problems in ALSA and PipeWire.
Fixes #5183
Since abf37dbdde the param enumeration in
the client-node can return 0 when the parameter is supported but there
are no params uploaded.
When negotiating buffers we need to treat a 0 result as a NULL filter
as well, or else we will error out.
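The handling can be sketched as follows (names are illustrative; the real code deals in spa_pod params):

```c
#include <stddef.h>

/* A result of 0 means "param supported, but nothing uploaded": then
 * negotiate with a NULL filter (match anything) instead of failing. */
static const void *buffer_filter(int n_params, const void *param)
{
	if (n_params == 0)
		return NULL;
	return param;
}
```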
Avoid doing conversions in the nodes between MIDI formats; just assume
the input is what we expect and output what we naturally produce.
For ALSA this means we produce and consume MIDI1 or MIDI2 depending on the
configuration.
All of the other modules (ffado, RTP, netjack and VBAN) really only
produce and consume MIDI1.
Set the default MIDI format to MIDI1 in ALSA.
With this change, almost everything now produces and consumes MIDI1
again (previously the buffer format was forced to MIDI2).
The problem is that conversion between MIDI2 and MIDI1 has problems in
some cases in PipeWire and ALSA and breaks compatibility with some
hardware.
The idea is to let elements produce their preferred format and have the
control mixer also negotiate and convert to the node's preferred format.
There is then a mix of MIDI2 and MIDI1 on ports, but with the control
port adapting, this should not be a problem.
There is one remaining problem to make this work: the port format is
taken from the node port and not the mixer port, which would then expose
the preferred format on the port and force negotiation to it with the
peer instead of in the mixer.
See #5183
Since c02cdcb5ce ("audioconvert: add avx2 optimized s32_to f32d")
`conv_s32_to_f32d_avx2()` reads `convert::cpu_flags`, which was
previously uninitialized; fix that by setting it to 0.
Since SBC is mandatory in all devices that support A2DP, we don't need
to include it in the priority tables.
This change also increases the priority of OPUS_G codec as it has better latency
and quality than SBC.
Previously, if a remote node was set to running and immediately reverted
to suspended state, the remote node stayed in running state. This occurred
because suspend_node sent the suspend command only when the locally
cached state was "idle" or "running".
Modified to send suspend to a node whenever its pending state is not
"suspended," ensuring the command is sent during state transitions.
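The changed condition can be sketched as (state names are illustrative):

```c
#include <stdbool.h>

enum node_state { NODE_SUSPENDED, NODE_IDLE, NODE_RUNNING };

/* Old check: suspend only from idle/running, which missed a node whose
 * cached state had already moved on. New check: suspend whenever the
 * pending state is anything but suspended. */
static bool should_send_suspend(enum node_state pending)
{
	return pending != NODE_SUSPENDED;
}
```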
Fixes #5026
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
Add an alternative avx2 s32_to_f32d implementation that doesn't use the
gather function for when gather is slow.
Don't overwrite the original cpu_flags but store the selected flags in a
new variable. Use this to debug the selected function's cpu flags.
Build libraries with defines from previous libraries so that we can
reuse functions from them.
We can then remove the SSE2 | SLOW_GATHER function selection from the
list. We will now select avx2 and it will then switch implementations
based on the CPU flags.
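The selection can be sketched as follows (flag values and function names are illustrative; the stand-in conversion functions just return labels):

```c
#include <stdint.h>

#define CPU_FLAG_SLOW_GATHER (1u << 0)

typedef const char *(*conv_func_t)(void);

static const char *conv_avx2_gather(void)   { return "avx2-gather"; }
static const char *conv_avx2_nogather(void) { return "avx2-nogather"; }

/* One avx2 entry in the table; the implementation is switched at
 * runtime on the SLOW_GATHER flag instead of keeping a separate
 * SSE2 | SLOW_GATHER entry in the list. */
static conv_func_t select_conv(uint32_t cpu_flags)
{
	if (cpu_flags & CPU_FLAG_SLOW_GATHER)
		return conv_avx2_nogather;
	return conv_avx2_gather;
}
```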
These 2 new profiles will select the highest quality and lowest latency A2DP
codecs respectively, making it easier for users to know which codec is the best
based on their needs.
The priority for these 2 new profiles is 0, so the default behavior should not
change.
Intel Skylake (level 0x16) is the first model with fast gather
opcodes. Mark lower versions with the SLOW_GATHER flag.
Prefer the SSE2 version of the format conversion without gather when
SLOW_GATHER is set. Makes the conversion much faster on my Ivy
Bridge.
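The flagging can be sketched as (the flag value and helper name are illustrative; 0x16 is the Skylake level quoted above):

```c
#include <stdint.h>

#define CPU_FLAG_SLOW_GATHER (1u << 0)

/* Skylake (level 0x16) is the first Intel model with fast gather
 * opcodes; everything below it gets the SLOW_GATHER flag. */
static uint32_t intel_gather_flags(unsigned int level)
{
	return level < 0x16 ? CPU_FLAG_SLOW_GATHER : 0;
}
```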