Remove the node buffers reply again. We don't need it. Instead add a
new method to the client-node to upload an array of buffer data.
This method is called after the client has allocated buffer memory. It
will update the buffers on the server side with the client-allocated
memory.
Wait for the async reply of use_buffers when doing alloc_buffers so
that we can get the updated buffer memory before we continue.
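A sketch of what such an upload method could look like; the name and
the exact signature are assumptions for illustration:

    #include <spa/buffer/buffer.h>
    #include <spa/utils/defs.h>

    /* Hypothetical: called after the client allocated the memory; the
     * server updates its buffers with the client-allocated spa_data. */
    int client_node_port_buffers(void *object,
                                 enum spa_direction direction,
                                 uint32_t port_id, uint32_t mix_id,
                                 uint32_t n_buffers,
                                 struct spa_data **buffers);
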
Let the link follow the states of the ports.
Add error codes to the port error states.
Add a PW_STREAM_FLAG_ALLOC_BUFFERS flag to make the client allocate
the buffer memory.
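From the client, usage could look like this, following the current
pw_stream API:

    #include <pipewire/pipewire.h>

    /* Sketch: with ALLOC_BUFFERS set, the client is expected to
     * allocate the buffer memory itself. */
    static int connect_stream(struct pw_stream *stream,
                              const struct spa_pod **params,
                              uint32_t n_params)
    {
        return pw_stream_connect(stream,
                                 PW_DIRECTION_OUTPUT, PW_ID_ANY,
                                 PW_STREAM_FLAG_MAP_BUFFERS |
                                 PW_STREAM_FLAG_ALLOC_BUFFERS,
                                 params, n_params);
    }
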
Remove the CAN_USE_BUFFERS flag; it is redundant because the same
information can be derived from the IO params and buffer params.
Add flags to the port_use_buffer call. We also want this call to
replace port_alloc_buffer. Together with a new result event we can
ask the node to (a)synchronously fill in the buffer data for us. This
is part of a plan to let remote nodes provide buffer data.
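A sketch assuming the flag as it ended up in the spa_node interface,
where use_buffers with SPA_NODE_BUFFERS_FLAG_ALLOC subsumes the old
alloc_buffers:

    #include <spa/node/node.h>

    /* Sketch: with ALLOC set, the node fills in the buffer data,
     * possibly asynchronously, and reports it back with a result
     * event. */
    static int use_buffers(struct spa_node *node, uint32_t port_id,
                           struct spa_buffer **buffers, uint32_t n_buffers)
    {
        return spa_node_port_use_buffers(node,
                                         SPA_DIRECTION_OUTPUT, port_id,
                                         SPA_NODE_BUFFERS_FLAG_ALLOC,
                                         buffers, n_buffers);
    }
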
Add a memory pool to manage blocks of memory. Make it possible
to allocate and import blocks.
Add add_mem and remove_mem to the core events to notify a client
of a block of memory. Remove the client-node add_mem.
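A client would listen for the new events roughly like this, with the
signatures of the current pw_core interface:

    #include <pipewire/pipewire.h>

    /* Sketch: the core tells the client about shared memory blocks. */
    static void on_add_mem(void *data, uint32_t id, uint32_t type,
                           int fd, uint32_t flags)
    {
        /* remember/import the block under this id */
    }

    static void on_remove_mem(void *data, uint32_t id)
    {
        /* forget the block with this id */
    }

    static const struct pw_core_events core_events = {
        PW_VERSION_CORE_EVENTS,
        .add_mem = on_add_mem,
        .remove_mem = on_remove_mem,
    };
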
Make a global pool for memory and a per client pool where we
import and share the memory we need with the client.
Use the new memory pool to track and map memory in clients.
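A sketch of the pool usage, assuming the pw_mempool API that grew out
of this change; exact names and signatures may differ:

    #include <pipewire/pipewire.h>

    static void pool_example(int fd)
    {
        struct pw_mempool *pool = pw_mempool_new(NULL);

        /* allocate a new block of memory in the pool */
        struct pw_memblock *block = pw_mempool_alloc(pool,
                PW_MEMBLOCK_FLAG_READWRITE | PW_MEMBLOCK_FLAG_SEAL,
                SPA_DATA_MemFd, 4096);

        /* import a block received from another pool, e.g. via add_mem */
        struct pw_memblock *imported = pw_mempool_import(pool,
                PW_MEMBLOCK_FLAG_READWRITE, SPA_DATA_MemFd, fd);
    }
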
Make a set-prop command to set a property from the config file
into a pw_properties. Pass this to pw_core_new() and the main-loop
so that their behavior can be tweaked from the config file.
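The config command could look something like this; the syntax and the
key/value are assumptions for illustration:

    set-prop library.name.system support/libspa-support
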
Change some warning logs to errors.
Move the epoll functions to the system functions and make the loop
use those. Use a simple mask for events instead of an enum.
Add the system API in use to pw_loop.
Add System API to spa_support and use it where possible.
Pass the system API used in the realtime loops in spa_support as
well and use this in the realtime paths.
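A sketch going through the system API carried by the loop instead of
calling eventfd()/write() directly; names follow the current
spa_system interface:

    #include <pipewire/pipewire.h>
    #include <spa/support/system.h>

    /* Sketch: the system implementation can be swapped out because we
     * never call the OS directly. */
    static int signal_event(struct pw_loop *loop)
    {
        int fd = spa_system_eventfd_create(loop->system, SPA_FD_CLOEXEC);
        if (fd < 0)
            return fd;
        return spa_system_eventfd_write(loop->system, fd, 1);
    }
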
Improve bootstrapping: load only the log and CPU interfaces, because
those can and need to be shared between instances. Let the core load
the other interfaces.
Add keys to configure the System and Loop implementations used in
pw_loop.
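With the library-name keys from pipewire/keys.h this could look like
the following; the plugin names are illustrative:

    #include <pipewire/pipewire.h>

    /* Sketch: pick the System and Loop implementations via properties. */
    static struct pw_properties *make_props(void)
    {
        return pw_properties_new(
                PW_KEY_LIBRARY_NAME_SYSTEM, "support/libspa-support",
                PW_KEY_LIBRARY_NAME_LOOP, "support/libspa-support",
                NULL);
    }
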
Remove the override for resources; it can't work in general.
Rename method to add_object_listener to add a listener for
events/methods from the remote object.
Rename some methods to _call to call the interface and _notify
to notify the listeners.
Remove the unused client event for being notified of resource
implementations.
The interface struct has the type, version and methods of the
interface.
Make spa interfaces extend from spa_interface and make a
separate structure for the methods.
Pass a generic void* as the first argument of methods, like
we do in PipeWire.
Bundle the methods + implementation in a versioned interface
and use that to invoke methods. This way we can do version
checks on the methods.
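A sketch of the resulting shape, using a made-up 'foo' interface; the
spa_interface/spa_callbacks structs and the call macro are the ones
from spa/utils/hook.h:

    #include <errno.h>
    #include <spa/utils/hook.h>

    /* The methods live in their own versioned struct. */
    struct foo_methods {
        uint32_t version;       /* version of this method struct */
        int (*bar) (void *object, int arg);
    };

    /* Invoke a method through the bundled interface: the macro passes
     * the implementation data as the generic first argument and only
     * calls bar when the implementation version is high enough. */
    static inline int foo_bar(struct spa_interface *iface, int arg)
    {
        int res = -ENOTSUP;
        spa_interface_call_res(iface, struct foo_methods, res, bar, 0, arg);
        return res;
    }
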
Make resource and proxy interfaces that we can call. We
can then make the core interfaces independent of proxy/resource and
hide them in the lower layers.
Add add_listener method to methods of core interfaces, just
like SPA.
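A sketch with the current names; the listener is installed through
the interface itself, like spa_node_add_listener() in SPA:

    #include <pipewire/pipewire.h>

    static struct spa_hook core_listener;
    static const struct pw_core_events core_events = {
        PW_VERSION_CORE_EVENTS,
    };

    static void install_core_listener(struct pw_core *core)
    {
        pw_core_add_listener(core, &core_listener, &core_events, NULL);
    }
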
Destroy all resources (except the core) for a client when it
does a hello. This typically needs to be done after passing the
connection fd from one client to another.
Add a function to recalculate all nodes associated with a driver by
iterating the graph for each driver node. We used to do this in an
incremental way, which made joining graphs easy but splitting them
expensive.
A full scan simplifies some things, and we can't avoid it when we
need to calculate latencies later. It also simplifies assigning
master and slave roles to drivers when graphs are joined/split.
When we link/unlink or add/remove nodes, recalc the graph.
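A hypothetical sketch of the full scan; the types and the
collect_nodes() helper are made up for illustration and are not the
daemon's actual internals:

    #include <spa/utils/list.h>

    struct node {
        struct spa_list link;          /* in the driver list */
        struct spa_list driver_link;   /* in a driver's follower list */
        struct spa_list follower_list; /* nodes this driver schedules */
        struct node *driver_node;      /* assigned driver */
    };

    static void collect_nodes(struct node *driver, struct spa_list *result);

    static void recalc_graph(struct spa_list *driver_list)
    {
        struct node *driver, *n;

        spa_list_for_each(driver, driver_list, link) {
            /* collect every node reachable from this driver by
             * following the links of the graph */
            spa_list_init(&driver->follower_list);
            collect_nodes(driver, &driver->follower_list);

            /* everything collected is scheduled by this driver */
            spa_list_for_each(n, &driver->follower_list, driver_link)
                n->driver_node = driver;
        }
    }
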
Make a get node method that binds to the server-side node of the
client-node immediately. Use this in remote_export and always
return a node proxy.
Use the node proxy to get property updates and signal those in the
stream.
Output controls can be linked to many input controls, and an input
control can receive input from many output controls. Keep the control
link information inside the link.
Destroying a local proxy will also attempt to destroy the remote
resource unless the server already did that and told us to remove
the proxy.
Fix some cleanup issues.
Simplify the scheduling by using simple lists and removing the
subgraphs etc.
Make the driver node trigger all nodes it manages and when they
complete, trigger the driver node to finish the graph.
Remove the mix states; we can get rid of them because the format is
the same for all mixer ports. Set the existing format on new mixer
ports. When the first format is set, the port becomes READY. When
all mixer ports are cleared, the port goes back to CONFIGURE.
Only output ports allocate and manage buffers, input ports share
the buffers of the peer output port on the link.
Pass the complete spa_node_info to update node information. Remove
the redundant max/min port info.
Update the remote node based on port and node update events.
Make a struct spa_node_events for events emitted from the main thread
and keep the spa_node_callbacks for the data-thread callbacks.
The add_listener method installs the events and it is possible to
install multiple handlers. Adding a listener first emits the info
and port_info events, similar to how the PipeWire proxy bind works.
This removes the need for the spa_pending_queue and makes it easier
to implement the _sync versions.
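A sketch with the current spa_node_events layout; adding the listener
replays the current info and port_info, so a new listener starts
fully informed:

    #include <spa/node/node.h>

    static void on_info(void *data, const struct spa_node_info *info)
    {
        /* node properties and params changed or became available */
    }

    static void on_port_info(void *data, enum spa_direction direction,
                             uint32_t port, const struct spa_port_info *info)
    {
        /* a port appeared or changed; NULL info means it was removed */
    }

    static const struct spa_node_events node_events = {
        SPA_VERSION_NODE_EVENTS,
        .info = on_info,
        .port_info = on_port_info,
    };

    static struct spa_hook listener;

    static void install_listener(struct spa_node *node)
    {
        spa_node_add_listener(node, &listener, &node_events, NULL);
    }
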
Add some helpers to make it easier for plugins to emit all the info
to new listeners.
Use the listeners for devices as well.
Add a new struct spa_param_info that lists the available params on
a node/port and whether they are readable/writable/updated. We can
use this to replace and improve the PARAM_List and also to notify
of property changes and updates.
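A sketch of declaring the params of a node; a change is signaled by
flipping the flags and emitting the info event again:

    #include <spa/node/node.h>
    #include <spa/param/param.h>

    static void declare_params(struct spa_node_info *info,
                               struct spa_param_info *params)
    {
        params[0] = SPA_PARAM_INFO(SPA_PARAM_EnumFormat, SPA_PARAM_INFO_READ);
        params[1] = SPA_PARAM_INFO(SPA_PARAM_Format, SPA_PARAM_INFO_READWRITE);
        params[2] = SPA_PARAM_INFO(SPA_PARAM_Props, SPA_PARAM_INFO_READWRITE);
        info->params = params;
        info->n_params = 3;
    }
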
Update elements and code to deal with this new param information. Add
port and node info to most elements and signal changes.
Use async enum_params in -inspect and use the param info to know
which ones to enumerate.
Use the port info to know what parameters to update in the
remote-node.
Make the bind function a callback instead of an event. We can then
get a return value and use that to clean up the pending proxy and
generate an error.
Don't use a special callback in node to receive the results. Instead,
use a generic result callback to receive the result. This makes things
a bit more symmetric and generic again, because you can choose how
to match the result to the request and you have a generic way to handle
both the sync and async case. We can then also remove the wait method.
This also makes the remote interface and the spa interface to objects
very similar.
Make a helper object to receive and dispatch results. Use this in the
helper for enum_params.
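A sketch of such a generic result callback, installed with the result
event of the current spa_node_events; struct spa_result_node_params
is the payload used for enum_params:

    #include <spa/node/node.h>

    /* Sketch: one callback handles both the sync and the async case;
     * the seq number matches the reply to the request. */
    static void on_result(void *data, int seq, int res,
                          uint32_t type, const void *result)
    {
        if (type == SPA_RESULT_TYPE_NODE_PARAMS) {
            const struct spa_result_node_params *r = result;
            /* r->id, r->index, r->next and r->param describe one
             * match of the enum_params call identified by seq */
            (void)r;
        }
    }
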
Make device use the same result callbacks.
Add return values to events and method callbacks. This makes it
possible to pass any error or async return value.
Add sync/done/error in both directions so that both proxy and resource
can perform/reply to sync and produce errors.
Return SPA_ASYNC from remote method calls (and events) and keep the
sequence number in the connection.
With the core sync/done we can remove the client-node done method and
its internal sequence number along with the seq number in method calls.
We can also use the async return value of a method/event as the unique
sequence number to perform a sync against.
Add sync method and done/error event to proxy and resource.
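Putting it together, a sketch with the current names; the exact call
sites may differ:

    #include <pipewire/pipewire.h>
    #include <spa/node/node.h>
    #include <spa/utils/defs.h>

    /* Sketch: an async method returns SPA_ASYNC with the sequence
     * number of the message; syncing on that number tells us when it
     * completed. proxy is the proxy that wraps the remote node. */
    static void sync_enum_params(struct spa_node *node, struct pw_proxy *proxy)
    {
        int res = spa_node_enum_params(node, 0, SPA_PARAM_Format, 0, 1, NULL);
        if (SPA_RESULT_IS_ASYNC(res))
            /* the done event will fire with this seq when complete */
            pw_proxy_sync(proxy, SPA_RESULT_ASYNC_SEQ(res));
    }
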
Add a port_info event. With this, we get updates to ports pushed to
us, which is more convenient and also allows for better dynamic
add/remove of ports.
We don't need the PortChanged event anymore.
We can remove the get_port_ids/get_n_ports/port_get_info methods.
Update plugins
Make some more varargs error functions
Make pw_resource_error always just send the error to the resource id.
Make sure we send errors to the right destination.
Add proxy error event and emit it when the core finds an error for
the given proxy id.
The client error is supposed to be sent to all resources of a client
for the given global.
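A sketch of both sides, assuming the current signatures:

    #include <errno.h>
    #include <pipewire/pipewire.h>

    /* Server side: report an error on the resource that caused it. */
    static void fail(struct pw_resource *resource)
    {
        pw_resource_error(resource, -EINVAL, "invalid parameters");
    }

    /* Client side: the core routes the error to the proxy with the
     * matching id; this is the proxy error event handler. */
    static void on_proxy_error(void *data, int seq, int res, const char *message)
    {
        /* res is a negative errno-style code, message the description */
    }
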
Make an eventfd for each node and listen for events when the node
is activated.
Reorganize some graph links to make it possible to activate nodes
by signaling the eventfd.
Pass the peer node to each remote node and let the remote node
directly activate the peer when needed.
Let each node signal the driver node when finished.
With this we don't need to go through the daemon to schedule the
graph; nodes will simply activate each other. We only go to the
server when there is a server node to schedule.
Keep stats about the state of each node and the time it was
triggered, running and finished.
Allocate per node a piece of shared memory where we place the
activation structure with the graph state and io_position.
We can then give this info to nodes so that they can get the position
in the graph directly, but also, later, activate the next node in
the graph.
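A hypothetical, much simplified sketch of the shared activation
record and the handshake; the real structures live in PipeWire's
private headers and carry more state:

    #include <spa/node/io.h>
    #include <spa/support/system.h>

    struct activation {
        int status;                      /* triggered/awake/finished */
        uint64_t signal_time;            /* stats: when we were triggered */
        uint64_t awake_time;             /* when we started processing */
        uint64_t finish_time;            /* when we completed */
        int pending;                     /* deps left before we may run */
        struct spa_io_position position; /* graph position, shared */
    };

    struct peer {
        struct activation *activation;   /* mapped shared memory */
        int eventfd;                     /* signaled to wake the node */
    };

    /* A node activates a peer by decrementing its pending count and,
     * when it reaches 0, signaling the peer's eventfd. */
    static void trigger(struct spa_system *sys, struct peer *p)
    {
        if (__atomic_sub_fetch(&p->activation->pending, 1,
                               __ATOMIC_SEQ_CST) == 0)
            spa_system_eventfd_write(sys, p->eventfd, 1);
    }
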
Make the code to export objects more generic. Make it possible for
modules to register a type to export.
Make the client-node also able to export plain spa_nodes.
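Exporting a plain spa_node could then look like this; the signature
of the export call is given from memory and may differ in detail:

    #include <pipewire/pipewire.h>

    static struct pw_proxy *export_node(struct pw_remote *remote,
                                        struct spa_node *node)
    {
        return pw_remote_export(remote,
                SPA_TYPE_INTERFACE_Node, /* the registered export type */
                NULL,                    /* extra properties */
                node,                    /* the object to export */
                0);                      /* user_data size */
    }
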
Let the remote signal the global of the exported object, if any. We can
then remove the (unused) remote_id from the proxy.