This reverts commit 9ae89b4247.
All invokes should be paired with a lock/unlock if the loop requires
this. For internal calls of invoke this will also be true, because all
PipeWire functions should be called with the lock held.
Fixes #4215
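As a reminder, a blocking invoke on a thread loop is expected to look
roughly like this (a minimal sketch; thread_loop and do_work() are
made-up names):

    /* Sketch: the caller holds the thread-loop lock around the invoke,
     * as the thread-loop API expects. */
    static int
    do_work(struct spa_loop *loop, bool async, uint32_t seq,
            const void *data, size_t size, void *user_data)
    {
            /* runs in the context of the loop thread */
            return 0;
    }

    /* ... in some caller, on another thread ... */
    pw_thread_loop_lock(thread_loop);
    pw_loop_invoke(pw_thread_loop_get_loop(thread_loop),
                   do_work, 0, NULL, 0, true, NULL);
    pw_thread_loop_unlock(thread_loop);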
`pw_link_info::error` was previously not cleared when a link was destroyed,
leading to a memory leak if an error message had been set, for example
when format negotiation fails and the link is destroyed as a result.
Claim that call waiting notifications are supported.
Required for some devices (e.g. Soundcore Motion 300),
as they stop sending commands if the reply to CCWA is not OK.
Check if the node is FINISHED instead of checking the refcounts. It's
possible that the refcounts are 0 but the node was not scheduled or
finished yet.
If the node is not FINISHED but TRIGGERED, we can run the recovery
without reporting an error.
Any other state is an error; we need to log it and recover.
See #4182
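Roughly, the new check looks like this (a sketch; the status values follow
the node activation states mentioned above, recover_node() is a made-up
helper):

    uint32_t status = SPA_ATOMIC_LOAD(activation->status);

    if (status == PW_NODE_ACTIVATION_FINISHED) {
            /* node completed normally, nothing to recover */
    } else if (status == PW_NODE_ACTIVATION_TRIGGERED) {
            /* scheduled but not run yet: recover without an error */
            recover_node(node);
    } else {
            /* any other state is unexpected: log and recover */
            pw_log_warn("(%s-%u) unexpected state %u",
                        node->name, node->id, status);
            recover_node(node);
    }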
Use ATOMIC_LOAD to get status.
Debug the pending state after decrementing so we debug the value we
are actually going to test.
Add node id to debug lines to better track things.
Don't unconditionally overwrite the state with FINISHED; only do this when
the state was AWAKE.
The server might already have started a new cycle and placed
NOT_TRIGGERED as the state. Or, it might have changed the state to
INACTIVE. In all cases, we should not overwrite the state unless it was
AWAKE and we should only trigger peers when we were AWAKE.
This fixes some spurious xruns and glitches.
See #4182
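The state update then becomes a compare-and-swap (a sketch;
SPA_ATOMIC_CAS is the helper from spa/utils/atomic.h, trigger_peers()
is a made-up name):

    /* Only move AWAKE -> FINISHED; if the server already changed the
     * state to NOT_TRIGGERED or INACTIVE we leave it alone and do not
     * trigger the peers. */
    if (SPA_ATOMIC_CAS(activation->status,
                       PW_NODE_ACTIVATION_AWAKE,
                       PW_NODE_ACTIVATION_FINISHED))
            trigger_peers(node);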
Now that start_monitor() (which calls start_inotify()) is called before
enum_devices(), it is no longer necessary to call start_watching_device()
for devices which have been enumerated before start_inotify() gets
called, since there will no longer be any such devices.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
This fixes two races with probing v4l2 devices:
1. Before this change there was a window in which a new udev device could
get added between the udev_enumerate_scan_devices() call in enum_devices()
and the udev_monitor_enable_receiving(this->umonitor) call. If this window
was hit, enum_devices() would not see the device and no udev event for it
would be received either, so the device was never seen.
Enabling udev event monitoring before calling udev_enumerate_scan_devices()
fixes this. Note that the code is already prepared to deal with getting
multiple add/change events for the same udev device, so hitting the new
race window, where PipeWire may receive an add or change event and also
see and probe the device from enum_devices(), is not a problem.
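The fixed ordering, as a minimal libudev sketch (error handling omitted):

    /* Enable the monitor before scanning; a device that appears in
     * between is then seen by both paths, which the code already
     * copes with. */
    struct udev *udev = udev_new();
    struct udev_monitor *monitor = udev_monitor_new_from_netlink(udev, "udev");
    udev_monitor_filter_add_match_subsystem_devtype(monitor, "video4linux", NULL);
    udev_monitor_enable_receiving(monitor);

    struct udev_enumerate *enumerate = udev_enumerate_new(udev);
    udev_enumerate_add_match_subsystem(enumerate, "video4linux");
    udev_enumerate_scan_devices(enumerate);
    /* ... walk udev_enumerate_get_list_entry(enumerate) and probe each device ... */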
2. Before this change, devices added by enum_devices() would not have
inotify monitoring activated right away, because notify.fd was still -1
at that point, turning start_watching_device() into a no-op.
These devices without inotify monitoring would then have their access
checked by process_device() calling check_access().
Then, after all devices have been enumerated, start_monitor() would call
start_inotify(), which calls start_watching_device() for all devices added
by enum_devices(). This leaves a window in which the ACL can change
without there being an inotify watch for it.
Calling start_monitor() before enum_devices() makes start_inotify() run
before enum_devices(), so that the add_device() calls done by
enum_devices() now successfully call start_watching_device(), closing
this window.
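The resulting startup order, sketched with the function names used above:

    /* start_monitor() sets up the udev monitor and calls start_inotify(),
     * so notify.fd is valid before any device gets added. */
    start_monitor(this);
    enum_devices(this);   /* add_device() can now call start_watching_device() */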
Because of this second race, PipeWire was somewhat likely to not notice
ACL changes: PipeWire is part of the systemd user default.target, whereas
logind only starts applying the ACLs after GNOME has created the seat
for the GNOME session. So on first login we have PipeWire starting
and logind applying the ACLs at the same time, which allows the ACL
change to hit the small race window where PipeWire is not yet monitoring
for ACL changes. Fixing this second race should hopefully resolve
issue #3960.
Closes: #3960
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Some complex camera pipelines, like the IPU6, can involve many /dev/video#
nodes (32 in the IPU6 case), and the current size of 128 chars is not
enough to hold all /dev/video# nodes in such cases, causing
SPA_KEY_DEVICE_DEVIDS to get truncated. This in turn breaks wireplumber's
filtering of V4L2 devices which are used by a libcamera-driven camera.
Fix this by increasing the size of devices_str[] to 256.
This fixes wireplumber adding a bunch of non-functional V4L2 video sources,
e.g. before this fix "wpctl status" outputs the following video sources:
Video
├─ Devices:
...
├─ Sources:
│ 90. ov2740
│ * 115. ipu6 (V4L2)
...
│ 135. ipu6 (V4L2)
│
├─ Filters:
After this fix the output is:
Video
├─ Devices:
...
├─ Sources:
│ * 92. ov2740
│
├─ Filters:
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Before this patch, when the queue was full we used to go into usleep in
the hope that the other thread would run and empty the queue so that we
could retry after the usleep.
This however does not always work, because the other thread might be
waiting for the thread that does the invoke call, and we then deadlock.
Therefore we should always try to make progress in some way. Instead of
waiting, allocate an overflow queue (or use the previously allocated one)
and write to that. We can chain together as many overflow queues as we
need (but we might want to bound that as well).
The loop.retry-timeout property is now deprecated.
See #4114
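A sketch of the overflow-queue idea, with made-up names (the real code
lives in the loop implementation):

    struct queue {
            struct spa_ringbuffer ring;   /* from spa/utils/ringbuffer.h */
            uint8_t *buffer_data;
            uint32_t buffer_size;
            struct queue *overflow;       /* chained when this queue is full */
    };

    static int queue_invoke_item(struct queue *q, const void *item, size_t size)
    {
            /* walk to a queue with room, allocating overflow queues as
             * needed, instead of sleeping and retrying */
            while (!queue_has_room(q, size)) {                /* made-up helper */
                    if (q->overflow == NULL)
                            q->overflow = queue_new(q->buffer_size); /* made-up helper */
                    q = q->overflow;
            }
            return queue_push(q, item, size);                 /* made-up helper */
    }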
The control hooks of a loop are called before the loop starts polling
and after it has finished polling. Currently, this is used to implement
the locking in pw_thread_loop: it guarantees that the thread loop's lock
is held while the thread loop is dispatching, and that the lock can be
taken while the loop is polling, when it is not running any user code.
However, calling the thread control hooks of thread A when doing a
blocking invoke from thread B serves little purpose, and in fact
can cause issues: for example, issuing a blocking invoke on a
pw_thread_loop does not work unless its lock is taken.
This behaviour, of calling the control hooks from other threads,
is also not documented, and goes contrary to what is currently
stated in the loop.h header file:
/** Executed right before waiting for events. It is typically used to
* release locks. */
...
/** Executed right after waiting for events. It is typically used to
* reacquire locks. */
At the moment the implementation allows any thread to queue invoke
items on any other thread without restrictions; calling the control
hooks only places extra restrictions on the usability of this mechanism
(in case of pw_thread_loop, having to take the loop's lock).
So do not call the control hooks when doing a blocking invoke.
We have various modules that set the priority higher than the dummy and
freewheel drivers (ffado, netjack, ...). This makes it impossible to use
the freewheel driver for them.
While the spec allows for 1ppm changes, our rate matching logic applies
these changes quite often, which can be spammy on USB. I haven't seen
hosts mind this, but it seems like it might be a problem at some point.
Additionally, if we also have bind ctls enabled, every pitch update is
also a wakeup for ourselves, whether or not we're listening for the
pitch ctls: the mixer fd does not distinguish between ctls, so they are
only filtered after we wake up.
The 10ppm threshold is empirically tested as being not "too noisy" (i.e.
when updates happen, I can see them scroll by with `amixer events`).
If necessary, we can make this configurable in the future.
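A sketch of the thresholding (names are illustrative, not the actual
plugin code):

    #define PITCH_CHANGE_THRESHOLD_PPM 10.0

    /* pitch is a ratio around 1.0; only write the ctl when it moved by
     * at least 10ppm since the last value we wrote */
    if (fabs(new_pitch - state->last_pitch) * 1e6 >= PITCH_CHANGE_THRESHOLD_PPM) {
            write_pitch_ctl(state, new_pitch);   /* made-up helper */
            state->last_pitch = new_pitch;
    }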
Use a memstream to collect the arguments so that it can dynamically
allocate as much memory as necessary.
Use a dynamic pod builder to construct the pods so that they can be of
arbitrary size.
Fixes #4166
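For the argument collection, a minimal open_memstream() sketch (POSIX;
the surrounding code is illustrative):

    /* Grow the argument string as needed instead of using a fixed buffer. */
    char *args = NULL;
    size_t args_size = 0;
    FILE *f = open_memstream(&args, &args_size);

    for (int i = 0; i < n_args; i++)
            fprintf(f, i == 0 ? "%s" : " %s", arg[i]);

    fclose(f);          /* args/args_size now contain the full string */
    /* ... parse/use args ... */
    free(args);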
Only use the graph rate and duration when the ffado.sample-rate
and ffado.period-size properties are set to 0. Otherwise use the
configured values.
Without this patch, it would just ignore the settings and always use the
graph rate.
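Roughly (illustrative field names):

    /* Fall back to the graph values only when the properties are 0. */
    uint32_t rate = props->sample_rate != 0 ? props->sample_rate : graph_rate;
    uint32_t period_size = props->period_size != 0 ? props->period_size : graph_duration;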
Commit d04a28daef moved the configuration
of the IO_Position to after we remove the node from the old driver, but
forgot to move the code that updates the pending_state.
See #4094
Make sure we clear IO_Buffers on the port and mixer before we clear the
buffers or the format. The IO_Buffers area is used to check whether the
port should be processed, and its update is synchronized with the
data thread.
Set IO_Buffers on the mixer and node only after we have configured the
buffers on the node.
See #4094
The IO_Buffers area is used in the data thread to check whether the port
should be scheduled. Make sure it is only set after we set buffers on the
port, and cleared before the buffers are cleared.
Make sure we sync the port->io with the data thread.
See #4094
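The intended ordering, sketched (port_set_io(), clear_buffers() and the
other helpers are illustrative names; SPA_IO_Buffers is the io area id):

    /* teardown: detach the io area first, sync, then drop buffers/format */
    port_set_io(port, SPA_IO_Buffers, NULL, 0);
    sync_with_data_thread(port);          /* e.g. an invoke on the data loop */
    clear_buffers(port);
    clear_format(port);

    /* setup: configure buffers first, attach the io area last */
    use_buffers(port, buffers, n_buffers);
    port_set_io(port, SPA_IO_Buffers, io, sizeof(*io));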
Due to how the kernel part of BlueZ computes the extended
advertising interval for a Broadcast Source, a sync_factor smaller
than 2 will result in an invalid interval value (too small).
We simply cannot schedule async nodes properly if we don't have the
async link. This change was done to make sure that driver sources don't
end up with async buffers and cause an unnecessary 1-cycle delay in
async clients. But we can fix this in a better way, like this:
Increment the cycle counter after we copy the output port buffers. This
ensures the async clients immediately pick up the new buffers (or the
output buffers from the previous cycle).
Also remove some old compatibility code that is no longer useful.
Fixes #4138
See #4133
jack_port_get_buffer() can be called with 0 frames; this is used to
restrict the available space in the returned midi buffer after mixdown.
While we mix down, we should not check timestamps, so that all midi
events are added to the mixdown buffer.
Fixes qsynth.
Unloading the module on stream errors is a bit too much, because a
suspend can clear the stream error again (or the error might not be
fatal).
This can happen for example when negotiation fails on some stream ports
(wireplumber tries to link the midi ports to audio ports) and it's
better to not completely fail on that.
Fixes #4121
A remote node is prepared when the Start command sync reply has been
received.
If, however, we quickly switch from active to inactive, the
pending reply is cancelled but the remote node will have set the
FINISHED status and will be ready to be scheduled.
Make it so that we always set the INACTIVE status when the node is
canceled and unprepared, even if we didn't get the reply and the node
was not prepared.
Fixes #4122
dlopen() does not set errno on failure; instead, you're supposed to call
dlerror() to get the latest error. dlerror() returns a string, so
return -ENOENT from weakjack_load_by_path() instead.
Depending on errno, weakjack_load() could think it successfully loaded
the library, and later module-jack-tunnel would crash because it calls
a NULL function pointer.
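For reference, the fixed pattern (standard dlfcn.h API; the log call is
illustrative):

    void *handle = dlopen(path, RTLD_NOW);
    if (handle == NULL) {
            /* dlopen() does not set errno; dlerror() has the real reason */
            pw_log_info("failed to open '%s': %s", path, dlerror());
            return -ENOENT;
    }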
They don't work on all HDMI output devices, and availability is
not detected, so they're offered also when they don't work.
Selecting the profiles on non-working devices results in
spa.alsa: plug:{SLAVE="a52:0,'hw:0,3'"}p: snd_pcm_start: Broken pipe
and noise output to the speakers. Revert these profiles from the stable
branch for now as they break things.
This reverts commit 916d2cdb28.
This reverts commit d6c17681da.
Make a new flag that is set when the process function is called because
of a recovery from a graph xrun.
Use this flag in the freewheel driver to detect a recovery and to avoid
scheduling a new timeout. We should schedule a new timeout only when the
process function was called after completion.
This fixes export in ardour some more when the initial driver timeout
didn't complete (when, for example, some nodes were still starting up).
When spa-plugins is enabled, the gio-2.0 global dependency is
overwritten.
When bluez support is enabled, OR when gsettings is enabled, the gio-2.0
dependency is then detected as found. This means that
pipewire-module-protocol-pulse can end up enabling gsettings support
even if it has been forcibly turned off.
Rename the meson variables to ensure they are looked up separately.