The AAC-ELD support has not been properly tested on real devices. In theory
it should work, but it is unverified.
Bump it down in priority so it won't be selected by default.
Also log info on FDK-AAC AAC-ELD support status.
Reduce the fallback delay values used when the BT device doesn't provide the
information itself.
It may be better to have audio late than early, so use values that are
probably close to or below the delays of the majority of headsets.
Don't include the quantum in latency: the latency relative to graph
cycle start doesn't depend on the quantum. Instead, the audio packet
size determines it.
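As a rough illustration (not the actual PipeWire code; names are hypothetical),
the latency contribution can be computed from the packet duration alone:

    #include <stdint.h>
    #include <stdio.h>

    /* Latency relative to the graph cycle start is determined by the
     * audio packet size, independent of the quantum. */
    static uint64_t packet_latency_ns(uint32_t frames_per_packet, uint32_t rate)
    {
        return (uint64_t)frames_per_packet * 1000000000ULL / rate;
    }

    int main(void)
    {
        /* e.g. a 240-frame packet at 48 kHz -> 5 ms, whatever the quantum is */
        printf("%llu ns\n", (unsigned long long)packet_latency_ns(240, 48000));
        return 0;
    }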
Enable the /Internal media class hack also for SCO.
Session manager can use this to adjust SCO sink/source media.class when
it is going to emit front-end nodes hiding the hardware ones.
The rfcomm list may contain entries for both the AG and HF roles, so the
profile must be checked wherever they are looked up.
Fix the rfcomm lookups everywhere to do this.
Fixes PipeWire<->PipeWire HFP connections, and sending HFP HF commands
to HSP or AG.
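A minimal sketch of the idea, using hypothetical stand-ins for the real spa_bt
structures: every lookup over the rfcomm list matches the profile as well as
the device.

    #include <stddef.h>

    enum profile { PROFILE_HSP_AG, PROFILE_HSP_HS, PROFILE_HFP_AG, PROFILE_HFP_HF };

    struct rfcomm {
        struct rfcomm *next;
        const void *device;
        enum profile profile;
    };

    /* The list can hold AG and HF entries for the same device at the same
     * time, so match both the device and the profile. */
    static struct rfcomm *rfcomm_find(struct rfcomm *list, const void *device,
                                      enum profile profile)
    {
        for (struct rfcomm *r = list; r != NULL; r = r->next)
            if (r->device == device && r->profile == profile)
                return r;
        return NULL;
    }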
HFP 1.9 adds LC3 as a possible codec in addition to CVSD & mSBC.
E.g. the latest Pixel Buds Pro firmware supports it.
Add the RFCOMM side and codec selection for it.
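A hedged sketch of the HF-side codec advertisement: AT+BAC lists the supported
codec IDs, the AG picks one with +BCS, and the HF confirms with AT+BCS. CVSD
is ID 1 and mSBC is ID 2 per the HFP spec; the LC3-SWB ID used below is an
assumption.

    #include <stdio.h>

    #define HFP_CODEC_CVSD    1
    #define HFP_CODEC_MSBC    2
    #define HFP_CODEC_LC3_SWB 3   /* assumption: the ID added by HFP 1.9 */

    /* Build the AT+BAC command listing the codecs this HF supports. */
    static void build_bac(char *buf, size_t len, int have_msbc, int have_lc3)
    {
        snprintf(buf, len, "AT+BAC=%d%s%s\r", HFP_CODEC_CVSD,
                 have_msbc ? ",2" : "",   /* HFP_CODEC_MSBC */
                 have_lc3  ? ",3" : "");  /* HFP_CODEC_LC3_SWB */
    }

    int main(void)
    {
        char buf[32];
        build_bac(buf, sizeof(buf), 1, 1);
        printf("%s\n", buf);   /* -> AT+BAC=1,2,3 */
        return 0;
    }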
Devices may advertise other values, but it is not certain they will work well
in a duplex configuration.
E.g. my Samsung Galaxy Buds2 Pro emits a buzzing sound with 48 kHz duplex
input.
Don't believe QoS values recommended by the device, which may be
suboptimal. Instead, pick the values from the BAP v1.0.1 Table 5.2.
Link: https://github.com/bluez/bluez/issues/713
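A hedged sketch of the approach: look up a fixed preset for the chosen
configuration instead of trusting the remote's recommendation. The single
entry below is illustrative; all numbers should be taken from BAP v1.0.1
Table 5.2 rather than from this sketch.

    #include <stdint.h>
    #include <string.h>

    struct bap_qos {
        const char *name;        /* codec configuration, e.g. "16_2_1" */
        uint32_t interval_us;    /* SDU interval */
        uint16_t sdu_size;       /* max SDU size in octets */
        uint8_t  retransmission; /* retransmission number */
        uint16_t latency_ms;     /* max transport latency */
    };

    static const struct bap_qos qos_table[] = {
        { "16_2_1", 10000, 40, 2, 10 },  /* illustrative values; verify against the spec */
        /* ... remaining presets from Table 5.2 ... */
    };

    static const struct bap_qos *bap_qos_lookup(const char *name)
    {
        for (size_t i = 0; i < sizeof(qos_table) / sizeof(qos_table[0]); i++)
            if (strcmp(qos_table[i].name, name) == 0)
                return &qos_table[i];
        return NULL;
    }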
The PAC profile UUIDs do not appear in the UUID list, but are still
useful to know before SelectProperties.
Set them ahead of time based on the visible remote endpoints.
The "default" codec is the one with fill_caps != NULL, and should be
picked if we don't know which one we are using.
Fixes showing AAC-ELD as supported when it's not, which happened because
it's ordered before the default AAC in the codec list unlike the other
"shared endpoint" codecs.
Not waiting for HFP when there is no HFP backend should be decided via
adapter_connectable_profiles in spa_bt_device_check_profiles, where the
relevant logic is.
Clean up by moving the checks there.
Unknown transports visible in DBus usually belong to a different
sound server instance that is talking to BlueZ.
Explain this in the warning message that we log, so that people can more
easily understand why things are not working.
In multi-ASE configurations there can be multiple transports per device,
each corresponding to different channels.
Emit sink/source nodes for each BAP transport present.
Combine them into a single sink/source in the same way as we do for
device sets.
For multi-ASE configurations, BlueZ does the channel allocation itself,
and passes us the result in the ChannelAllocation parameter.
If it is present, don't do the allocation ourselves but use that value
instead.
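A hedged sketch of the intent (names are illustrative, not the real transport
code): use the BlueZ-provided allocation when it exists, and derive the
channel count from its bitmask.

    #include <stdbool.h>
    #include <stdint.h>

    /* Each set bit in a BAP Audio_Channel_Allocation bitmask is one audio
     * location (e.g. Front Left, Front Right). */
    static unsigned channels_from_allocation(uint32_t allocation)
    {
        unsigned n = 0;
        for (; allocation != 0; allocation >>= 1)
            n += allocation & 1;
        return n;
    }

    static uint32_t effective_allocation(bool have_bluez_value,
                                         uint32_t bluez_value, uint32_t fallback)
    {
        /* If BlueZ passed ChannelAllocation, use it; otherwise allocate ourselves. */
        return have_bluez_value ? bluez_value : fallback;
    }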
If Supported_Max_Codec_Frames_Per_SDU is less than what is required by
Supported_Audio_Channel_Counts, override its value, assuming the device
actually supports at least that much. Needed for the Creative Zen Hybrid Pro.
Also fix the default value for the channel count bitmask.
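A hedged sketch of the workaround (illustrative names): derive the minimum
frame count the channel count requires and raise the advertised value to it.

    #include <stdint.h>

    /* Supported_Audio_Channel_Counts: bit N set means N+1 channels supported. */
    static uint8_t max_channels_from_counts(uint8_t channel_counts_bitmask)
    {
        uint8_t max = 0;
        for (uint8_t n = 0; n < 8; n++)
            if (channel_counts_bitmask & (1u << n))
                max = n + 1;
        return max;
    }

    static uint8_t effective_frames_per_sdu(uint8_t advertised, uint8_t channel_counts)
    {
        uint8_t required = max_channels_from_counts(channel_counts);
        /* e.g. a device advertising 2 channels but only 1 frame per SDU */
        return advertised < required ? required : advertised;
    }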
Do relaxed parsing of RFCOMM commands for the AG & HF roles, allowing
multiple commands in the same buffer.
Use the same parser code for all HFP/HSP AG/HF roles. Parse input in a
relaxed way, as some devices emit spurious \n characters.
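A minimal sketch of the relaxed parsing, assuming nothing about the real
parser beyond the above: split the receive buffer on CR/LF so several
commands in one buffer are handled, and spurious newlines are ignored.

    #include <stdio.h>
    #include <string.h>

    static void handle_command(const char *cmd)
    {
        printf("command: '%s'\n", cmd);
    }

    /* strtok() treats runs of delimiters as a single separator, so stray
     * '\n' characters and empty lines are skipped naturally. */
    static void parse_rfcomm_buffer(char *buf)
    {
        for (char *tok = strtok(buf, "\r\n"); tok != NULL;
             tok = strtok(NULL, "\r\n"))
            handle_command(tok);
    }

    int main(void)
    {
        char buf[] = "AT+BRSF=959\r\nAT+BAC=1,2\r\n\n";
        parse_rfcomm_buffer(buf);
        return 0;
    }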
The Primark True Wireless earbuds don't support sbc-xq. Having it enabled
causes BlueZ to enter a loop, enabling/disabling the device dozens of times
per minute and making it unusable.
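The fix presumably lands in the hardware quirk table; a hedged sketch of what
such an entry could look like in bluez-hardware.conf (the device name string
and exact keys are assumptions):

    bluez5.features.device = [
        { name = "Primark True Wireless", no-features = [ sbc-xq ] },
    ]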
Don't have separate input routes for A2DP and HFP, as it is generally not
necessary.
When in A2DP mode and HFP is also possible, emit the input route in
SPA_PARAM_Route, even though no corresponding input node is emitted.
The host may then emit a loopback microphone node and switch profiles
according to its status. Having the input route available at all times
makes it possible to retain changes to volume settings made when there is
no real input node.
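A hedged sketch with hypothetical helpers (not the real SPA route code): the
input route is listed whenever HFP is possible, even if the active profile
has no input node.

    #include <stdbool.h>

    enum direction { DIR_INPUT, DIR_OUTPUT };

    struct route {
        enum direction dir;
        const char *name;
        bool has_node;   /* whether a real node currently backs this route */
    };

    /* Always expose the input route when HFP is possible, so its volume can
     * be stored and the host can switch profiles when the route is used. */
    static int enum_routes(bool a2dp_active, bool hfp_possible,
                           struct route routes[2])
    {
        int n = 0;
        routes[n++] = (struct route){ DIR_OUTPUT, "bluetooth-output", true };
        if (hfp_possible)
            routes[n++] = (struct route){ DIR_INPUT, "bluetooth-input", !a2dp_active };
        return n;
    }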
We delay the audio a bit to keep packet intervals equal, which keeps some
data in buffers.
In theory the calculation keeps one buffer free, but it doesn't explicitly
reserve "extra" buffer space, so it might flush too late and the next
process() might not have any free buffers. However, as we encode the next
packet right away, this shouldn't really occur.
Try to keep one extra spare buffer free so that the flush time is certainly
early enough.
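A hedged sketch of the flush condition (illustrative structure, not the
actual buffer accounting): force a flush while two buffers are still free,
so at least one spare remains for the next cycle.

    #include <stdbool.h>
    #include <stdint.h>

    static bool should_flush(unsigned buffers_total, unsigned buffers_queued,
                             uint64_t now_ns, uint64_t next_packet_time_ns)
    {
        /* Flush when two or fewer buffers remain free: after queuing one more,
         * at least one spare buffer is still available for the next process(). */
        if (buffers_total - buffers_queued <= 2)
            return true;
        /* Otherwise wait for the paced packet time that keeps intervals equal. */
        return now_ns >= next_packet_time_ns;
    }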
Some devices appear to set Supported_Max_Codec_Frames_Per_SDU == 1 while
claiming they support two channels per stream, which is then not
possible.
In this case, limit the number of channels by the number of frames per
SDU when selecting.
Also adjust PAC sorting.
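A hedged sketch of the limiting step (illustrative names): a stream cannot
carry more channels than codec frames per SDU, so cap the selection.

    #include <stdint.h>

    /* E.g. a device claiming 2 channels per stream with 1 frame per SDU
     * gets limited to 1 channel. */
    static uint8_t select_channels(uint8_t wanted_channels, uint8_t max_frames_per_sdu)
    {
        return wanted_channels > max_frames_per_sdu ? max_frames_per_sdu
                                                    : wanted_channels;
    }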