Add a memory pool to manage blocks of memory. Make it possible
to allocate and import blocks.
Add add_mem and remove_mem to the core events to signal a client
of a block of memory. Remove the client-node add_mem.
Make a global pool for memory and a per client pool where we
import and share the memory we need with the client.
Use the new memory pool to track and map memory in clients.
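A minimal sketch of the intended use, assuming pw_mempool-style names (the exact API at this revision may differ):

    #include <pipewire/pipewire.h>

    /* sketch only: allocate a block in the global pool and import it
     * into a per-client pool; the client is told about the block with
     * the new add_mem core event */
    static void share_block(struct pw_mempool *global, struct pw_mempool *client)
    {
        struct pw_memblock *block, *imported;

        block = pw_mempool_alloc(global,
                PW_MEMBLOCK_FLAG_READWRITE | PW_MEMBLOCK_FLAG_SEAL,
                SPA_DATA_MemFd, 4096);

        imported = pw_mempool_import_block(client, block);
        (void)imported;
    }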
The interface struct has the type, version and methods of the
interface.
Make spa interfaces extend from spa_interface and make a
separate structure for the methods.
Pass a generic void* as the first argument of methods, like
we do in PipeWire.
Bundle the methods + implementation in a versioned interface
and use that to invoke methods. This way we can do version
checks on the methods.
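Roughly, the layout and a versioned method invocation look like this
(a sketch following the spa/utils/hook.h pattern; details may differ):

    #include <errno.h>
    #include <stdint.h>

    /* sketch of the interface layout described above */
    struct spa_callbacks {
        const void *funcs;   /* versioned struct of method pointers */
        void *data;          /* passed as first argument to each method */
    };

    struct spa_interface {
        const char *type;    /* interface type name */
        uint32_t version;    /* interface version */
        struct spa_callbacks cb;
    };

    /* a hypothetical method table; version comes first */
    struct node_methods {
        uint32_t version;
        int (*process) (void *object);
    };
    #define NODE_METHODS_VERSION 0

    /* invoke a method with a version check, roughly what the
     * spa_interface_call() macros expand to */
    static inline int node_process(struct spa_interface *iface)
    {
        const struct node_methods *m = iface->cb.funcs;
        if (m && m->version >= NODE_METHODS_VERSION && m->process)
            return m->process(iface->cb.data);
        return -ENOTSUP;
    }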
Make resource and proxy interfaces that we can call. We
can then make the core interfaces independent of proxy/resource and
hide them in the lower layers.
Add add_listener method to methods of core interfaces, just
like SPA.
Make a get node method that binds to the server side node of the
client-node immediately. Use this in the remote_export and always
return a node proxy.
Use the node proxy to get property updates and signal those in the
stream.
Just prepare the output on the port and signal ready. When the graph
completes we will be signaled again to recycle the buffer and
prepare more output if we can.
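A sketch of that cycle in a node's process function (struct impl and
prepare_output() are hypothetical stand-ins; status names follow
current SPA and may differ at this revision):

    #include <spa/node/io.h>

    struct impl {
        struct spa_io_buffers *io;   /* io area shared with the graph */
    };

    static uint32_t prepare_output(struct impl *this)
    {
        /* fill the next buffer and return its id */
        return 0;
    }

    static int impl_node_process(void *object)
    {
        struct impl *this = object;
        struct spa_io_buffers *io = this->io;

        /* previous output not consumed yet; wait until the graph
         * completes and we are signaled again to recycle */
        if (io->status == SPA_STATUS_HAVE_DATA)
            return SPA_STATUS_HAVE_DATA;

        io->buffer_id = prepare_output(this);
        io->status = SPA_STATUS_HAVE_DATA;
        return SPA_STATUS_HAVE_DATA;
    }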
Improve the bookkeeping a little when activating nodes.
Fix race with moving nodes between drivers.
Simplify the scheduling by using simple lists and removing the
subgraphs, etc.
Make the driver node trigger all nodes it manages and when they
complete, trigger the driver node to finish the graph.
Remove the mix states; we can get rid of them when the format is
the same for all mixer ports. Set the existing format on new mixer
ports. When the first format is set, the port becomes READY. When
all mixer ports are cleared, the port goes back to CONFIGURE.
Only output ports allocate and manage buffers, input ports share
the buffers of the peer output port on the link.
Pass the complete spa_node_info to update node information. Remove
the redundant max/min port info.
Update the remote node based on port and node update events.
Make struct spa_node_events for events emitted from the main thread
and keep the spa_node_callbacks for the data thread callbacks.
The add_listener method installs the events and it's possible to
install multiple handlers. Adding a listener first emits the info
and port_info events when installed, similar to how the PipeWire
proxy bind works.
This removes the need for the spa_pending_queue and makes it easier
to implement the _sync versions.
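A sketch of installing such a listener, following the spa_node_events
in spa/node/node.h:

    #include <spa/node/node.h>
    #include <spa/utils/hook.h>

    static void on_info(void *data, const struct spa_node_info *info)
    {
        /* emitted immediately with the current info, then on changes */
    }

    static void on_port_info(void *data, enum spa_direction direction,
                             uint32_t port_id, const struct spa_port_info *info)
    {
        /* one call per existing port, then on every port change */
    }

    static const struct spa_node_events node_events = {
        SPA_VERSION_NODE_EVENTS,
        .info = on_info,
        .port_info = on_port_info,
    };

    static void install(struct spa_node *node, struct spa_hook *listener,
                        void *user_data)
    {
        spa_node_add_listener(node, listener, &node_events, user_data);
    }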
Add some helpers to make it easier for plugins to emit all the info
to new listeners.
Use the listeners for devices as well.
Add a new struct spa_param_info that lists the available params on
a node/port and if they are readable/writable/updated. We can use
this to replace and improve the PARAM_List and also to notify
property change and updates.
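For example, a port could advertise its params like this (a sketch
using the spa/param/param.h definitions):

    #include <spa/param/param.h>

    static const struct spa_param_info params[] = {
        /* readable list of supported formats */
        SPA_PARAM_INFO(SPA_PARAM_EnumFormat, SPA_PARAM_INFO_READ),
        /* current format, readable and writable; re-signaling this
         * entry in the info event notifies a value update */
        SPA_PARAM_INFO(SPA_PARAM_Format, SPA_PARAM_INFO_READWRITE),
    };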
Update elements and code to deal with this new param stuff. Add
port and node info to most elements and signal changes.
Use async enum_params in -inspect and use the param info to know
which ones to enumerate.
Use the port info to know what parameters to update in the
remote-node.
Don't use a special callback in node to receive the results. Instead,
use a generic result callback to receive the result. This makes things
a bit more symmetric and generic again because then you can choose how
to match the result to the request and you have a generic way to handle
both the sync and async case. We can then also remove the wait method.
This also makes the remote interface and the spa interface to objects
very similar.
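A sketch of such a generic result callback, with the signature of the
result event in spa_node_events:

    #include <spa/node/node.h>

    static void on_result(void *data, int seq, int res,
                          uint32_t type, const void *result)
    {
        if (type == SPA_RESULT_TYPE_NODE_PARAMS) {
            const struct spa_result_node_params *r = result;
            /* match r (and seq) against our pending request; the same
             * path handles both the sync and the async case */
            (void)r;
        }
    }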
Make a helper object to receive and dispatch results. Use this in the
helper for enum_params.
Make device use the same result callbacks.
Remove the done and error callbacks. The error is now delivered in an
error message. The done callback is replaced with spa_pending.
Make enum_params take a callback and data for the results. This allows
us to push the results one after another to the app and avoids ownership
issues of the passed data. We can then extend this to handle the async
case by doing a _wait call with a spa_pending+callback+data that will
be called when the _enum_params returns an async result.
Add a sync method.
All methods can now return async return values (check with
SPA_RESULT_IS_ASYNC) and you can use spa_node_wait() to register a
callback, with optional extra parameters, for when they complete.
This makes it easier to sync and handle the reply.
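A sketch of the async check (the spa_node_wait() from this stage was
later reworked, so only the return-value handling is shown):

    #include <spa/node/node.h>
    #include <spa/utils/defs.h>

    static void start_sync(struct spa_node *node)
    {
        int res = spa_node_sync(node, 0);
        if (SPA_RESULT_IS_ASYNC(res)) {
            int seq = SPA_RESULT_ASYNC_SEQ(res);
            /* register a callback to run when seq completes instead
             * of blocking */
            (void)seq;
        }
    }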
Make helper methods to simulate the sync enum_params behaviour for
sync nodes.
Let the transport generate the sequence number for pw_resource_sync()
and pw_proxy_sync(). That way we don't need to keep track of numbers
ourselves and we can match the reply to the request easily.
Add return values to events and method callbacks. This makes it
possible to pass any error or async return value.
Add sync/done/error in both directions so that both proxy and resource
can perform/reply to sync and produce errors.
Return a SPA_ASYNC from remote method calls (and events), keep the
sequence number in the connection.
With the core sync/done we can remove the client-node done method and
its internal sequence number along with the seq number in method calls.
We can also use the method/event async return value as the unique
sequence number to perform a sync for this method.
Add sync method and done/error event to proxy and resource.
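A sketch of the proxy side of this handshake (struct roundtrip is a
hypothetical helper; names follow the later pw_proxy API):

    #include <pipewire/pipewire.h>

    struct roundtrip {
        int pending_seq;
    };

    static void on_done(void *data, int seq)
    {
        struct roundtrip *rt = data;
        if (seq == rt->pending_seq) {
            /* all methods issued before the sync have been handled */
        }
    }

    static void start_roundtrip(struct pw_proxy *proxy, struct roundtrip *rt)
    {
        /* the transport generates and returns the sequence number */
        rt->pending_seq = pw_proxy_sync(proxy, 0);
    }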
Add a port_info event. With this, we get updates to ports pushed to
us, which is more convenient and also allows for better dynamic
add/remove of ports.
We don't need the PortChanged event anymore.
We can remove the get_port_ids/get_n_ports/port_get_info methods.
Update plugins
Make an eventfd for each node and listen for events when the node
is activated.
Reorganize some graph links to make it possible to activate nodes
by signaling the eventfd.
Pass the peer node to each remote node and let the remote node
directly activate the peer when needed.
Let each node signal the driver node when finished.
With this we don't need to go through the daemon to schedule the
graph, nodes will simply activate each other. We only go to the
server when there is a server node to schedule.
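The activation path then reduces to plain eventfd signaling, roughly:

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/eventfd.h>

    static int make_activation_fd(void)
    {
        /* one eventfd per node, polled by the node's data loop */
        return eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
    }

    static void trigger_node(int fd)
    {
        uint64_t count = 1;
        write(fd, &count, sizeof(count));   /* wake the peer node */
    }

    static void on_node_signaled(int fd)
    {
        uint64_t count;
        read(fd, &count, sizeof(count));    /* consume the wakeup */
        /* process the node, then trigger the driver's fd when done */
    }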
Keep stats about the state of each node and the time it was
triggered, running and finished.
Allocate for each node a piece of shared memory where we place the
activation structure with the graph state and io_position.
We can then give this info to nodes so that they can get the position
in the graph directly but also later, activate the next node in
the graph.
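A hypothetical layout of that per-node activation area (field names
are illustrative, not the exact struct):

    #include <stdint.h>
    #include <spa/node/io.h>

    struct activation {
        int32_t status;                  /* e.g. TRIGGERED/RUNNING/FINISHED */
        uint64_t signal_time;            /* when the node was triggered */
        uint64_t awake_time;             /* when it started running */
        uint64_t finish_time;            /* when it completed */
        struct spa_io_position position; /* graph position for the node */
    };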
Make the code to export objects more generic. Make it possible for
modules to register a type to export.
Make the client-node also able to export plain spa_nodes.
Let the remote signal the global of the exported object if any. We can
then remove the (unused) remote_id from the proxy.
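A sketch of exporting a plain spa_node through this path (the exact
pw_remote_export() signature at this revision may differ):

    #include <pipewire/pipewire.h>
    #include <spa/node/node.h>

    static struct pw_proxy *export_node(struct pw_remote *remote,
                                        struct spa_node *node)
    {
        /* the remote signals the global of the exported object and
         * returns a proxy for it */
        return pw_remote_export(remote, SPA_TYPE_INTERFACE_Node,
                                NULL, node, 0);
    }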