doc: consistently indent the xml files by 2 spaces

2 spaces is enough for xml, otherwise we end up with too little room for the
actual text.

Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Author: Peter Hutterer <peter.hutterer@who-t.net>
Date:   2012-03-29 10:50:13 +10:00
Committed-by: Kristian Høgsberg
parent 966e8ec9c0
commit f68e156b8f
5 changed files with 383 additions and 383 deletions


@@ -4,315 +4,315 @@
%BOOK_ENTITIES;
]>
<chapter id="chap-Wayland-Architecture">
  <title>Wayland Architecture</title>
  <section id="sect-Wayland-Architecture-wayland_architecture">
    <title>X vs. Wayland Architecture</title>
    <para>
      A good way to understand the wayland architecture
      and how it is different from X is to follow an event
      from the input device to the point where the change
      it affects appears on screen.
    </para>
    <para>
      This is where we are now with X:
    </para>
    <mediaobject>
      <imageobject>
        <imagedata fileref="images/x-architecture.png" format="PNG" />
      </imageobject>
    </mediaobject>
    <para>
      <orderedlist>
        <listitem>
          <para>
            The kernel gets an event from an input
            device and sends it to X through the evdev
            input driver.  The kernel does all the hard
            work here by driving the device and
            translating the different device specific
            event protocols to the linux evdev input
            event standard.
          </para>
        </listitem>
        <listitem>
          <para>
            The X server determines which window the
            event affects and sends it to the clients
            that have selected for the event in question
            on that window.  The X server doesn't
            actually know how to do this right, since
            the window location on screen is controlled
            by the compositor and may be transformed in
            a number of ways that the X server doesn't
            understand (scaled down, rotated, wobbling,
            etc).
          </para>
        </listitem>
        <listitem>
          <para>
            The client looks at the event and decides
            what to do.  Often the UI will have to change
            in response to the event - perhaps a check
            box was clicked or the pointer entered a
            button that must be highlighted.  Thus the
            client sends a rendering request back to the
            X server.
          </para>
        </listitem>
        <listitem>
          <para>
            When the X server receives the rendering
            request, it sends it to the driver to let it
            program the hardware to do the rendering.
            The X server also calculates the bounding
            region of the rendering, and sends that to
            the compositor as a damage event.
          </para>
        </listitem>
        <listitem>
          <para>
            The damage event tells the compositor that
            something changed in the window and that it
            has to recomposite the part of the screen
            where that window is visible.  The compositor
            is responsible for rendering the entire
            screen contents based on its scenegraph and
            the contents of the X windows.  Yet, it has
            to go through the X server to render this.
          </para>
        </listitem>
        <listitem>
          <para>
            The X server receives the rendering requests
            from the compositor and either copies the
            compositor back buffer to the front buffer
            or does a pageflip.  In the general case, the
            X server has to do this step so it can
            account for overlapping windows, which may
            require clipping and determine whether or
            not it can page flip.  However, for a
            compositor, which is always fullscreen, this
            is another unnecessary context switch.
          </para>
        </listitem>
      </orderedlist>
    </para>
    <para>
      As suggested above, there are a few problems with this
      approach.  The X server doesn't have the information to
      decide which window should receive the event, nor can it
      transform the screen coordinates to window-local
      coordinates.  And even though X has handed responsibility for
      the final painting of the screen to the compositing manager,
      X still controls the front buffer and modesetting.  Most of
      the complexity that the X server used to handle is now
      available in the kernel or self-contained libraries (KMS,
      evdev, mesa, fontconfig, freetype, cairo, Qt, etc).  In
      general, the X server is now just a middle man that
      introduces an extra step between applications and the
      compositor and an extra step between the compositor and the
      hardware.
    </para>
    <para>
      In wayland the compositor is the display server.  We transfer
      the control of KMS and evdev to the compositor.  The wayland
      protocol lets the compositor send the input events directly
      to the clients and lets the client send the damage event
      directly to the compositor:
    </para>
    <mediaobject>
      <imageobject>
        <imagedata fileref="images/wayland-architecture.png" format="PNG" />
      </imageobject>
    </mediaobject>
    <para>
      <orderedlist>
        <listitem>
          <para>
            The kernel gets an event and sends
            it to the compositor.  This
            is similar to the X case, which is
            great, since we get to reuse all the
            input drivers in the kernel.
          </para>
        </listitem>
        <listitem>
          <para>
            The compositor looks through its
            scenegraph to determine which window
            should receive the event.  The
            scenegraph corresponds to what's on
            screen and the compositor
            understands the transformations that
            it may have applied to the elements
            in the scenegraph.  Thus, the
            compositor can pick the right window
            and transform the screen coordinates
            to window-local coordinates, by
            applying the inverse
            transformations.  The types of
            transformation that can be applied
            to a window are restricted only by
            what the compositor can do, as long
            as it can compute the inverse
            transformation for the input events.
          </para>
        </listitem>
        <listitem>
          <para>
            As in the X case, when the client
            receives the event, it updates the
            UI in response.  But in the wayland
            case, the rendering happens in the
            client, and the client just sends a
            request to the compositor to
            indicate the region that was
            updated.
          </para>
        </listitem>
        <listitem>
          <para>
            The compositor collects damage
            requests from its clients and then
            recomposites the screen.  The
            compositor can then directly issue
            an ioctl to schedule a pageflip with
            KMS.
          </para>
        </listitem>
      </orderedlist>
    </para>
  </section>
  <section id="sect-Wayland-Architecture-wayland_rendering">
    <title>Wayland Rendering</title>
    <para>
      One of the details I left out in the above overview
      is how clients actually render under wayland.  By
      removing the X server from the picture we also
      removed the mechanism by which X clients typically
      render.  But there's another mechanism that we're
      already using with DRI2 under X: direct rendering.
      With direct rendering, the client and the server
      share a video memory buffer.  The client links to a
      rendering library such as OpenGL that knows how to
      program the hardware and renders directly into the
      buffer.  The compositor in turn can take the buffer
      and use it as a texture when it composites the
      desktop.  After the initial setup, the client only
      needs to tell the compositor which buffer to use and
      when and where it has rendered new content into it.
    </para>
    <para>
      This leaves an application with two ways to update its window contents:
    </para>
    <para>
      <orderedlist>
        <listitem>
          <para>
            Render the new content into a new buffer and tell the compositor
            to use that instead of the old buffer.  The application can
            allocate a new buffer every time it needs to update the window
            contents or it can keep two (or more) buffers around and cycle
            between them.  The buffer management is entirely under
            application control.
          </para>
        </listitem>
        <listitem>
          <para>
            Render the new content into the buffer that it previously
            told the compositor to use.  While it's possible to just
            render directly into the buffer shared with the compositor,
            this might race with the compositor.  What can happen is that
            repainting the window contents could be interrupted by the
            compositor repainting the desktop.  If the application gets
            interrupted just after clearing the window but before
            rendering the contents, the compositor will texture from a
            blank buffer.  The result is that the application window
            flickers between a blank window and half-rendered content.
            The traditional way to avoid this is to render the new
            content into a back buffer and then copy from there into the
            compositor surface.  The back buffer can be allocated on the
            fly and just big enough to hold the new content, or the
            application can keep a buffer around.  Again, this is under
            application control.
          </para>
        </listitem>
      </orderedlist>
    </para>
    <para>
      In either case, the application must tell the compositor
      which area of the surface holds new contents.  When the
      application renders directly to the shared buffer, the
      compositor needs to be notified that there is new content.
      But also when exchanging buffers, the compositor doesn't
      assume anything changed, and needs a request from the
      application before it will repaint the desktop.  The idea is
      that even if an application passes a new buffer to the
      compositor, only a small part of the buffer may be
      different, like a blinking cursor or a spinner.
    </para>
    <bridgehead>Hardware Enabling for Wayland</bridgehead>
    <para>
      Typically, hardware enabling includes modesetting/display
      and EGL/GLES2.  On top of that, Wayland needs a way to share
      buffers efficiently between processes.  There are two sides
      to that, the client side and the server side.
    </para>
    <para>
      On the client side we've defined a Wayland EGL platform.  In
      the EGL model, that consists of the native types
      (EGLNativeDisplayType, EGLNativeWindowType and
      EGLNativePixmapType) and a way to create those types.  In
      other words, it's the glue code that binds the EGL stack and
      its buffer sharing mechanism to the generic Wayland API.  The
      EGL stack is expected to provide an implementation of the
      Wayland EGL platform.  The full API is in the wayland-egl.h
      header.  The open source implementation in the mesa EGL stack
      is in wayland-egl.c and platform_wayland.c.
    </para>
    <para>
      Under the hood, the EGL stack is expected to define a
      vendor-specific protocol extension that lets the client side
      EGL stack communicate buffer details with the compositor in
      order to share buffers.  The point of the wayland-egl.h API
      is to abstract that away and just let the client create an
      EGLSurface for a Wayland surface and start rendering.  The
      open source stack uses the drm Wayland extension, which lets
      the client discover the drm device to use and authenticate
      and then share drm (GEM) buffers with the compositor.
    </para>
    <para>
      The server side of Wayland is the compositor and core UX for
      the vertical, typically integrating the task switcher, app
      launcher and lock screen in one monolithic application.  The
      server runs on top of a modesetting API (kernel modesetting,
      OpenWF Display or similar) and composites the final UI using
      a mix of EGL/GLES2 compositor and hardware overlays if
      available.  Enabling modesetting, EGL/GLES2 and overlays is
      something that should be part of standard hardware bringup.
      The extra requirement for Wayland enabling is the
      EGL_WL_bind_wayland_display extension that lets the
      compositor create an EGLImage from a generic Wayland shared
      buffer.  It's similar to the EGL_KHR_image_pixmap extension
      to create an EGLImage from an X pixmap.
    </para>
    <para>
      The extension has a setup step where you have to bind the
      EGL display to a Wayland display.  Then as the compositor
      receives generic Wayland buffers from the clients (typically
      when the client calls eglSwapBuffers), it will be able to
      pass the struct wl_buffer pointer to eglCreateImageKHR as
      the EGLClientBuffer argument and with EGL_WAYLAND_BUFFER_WL
      as the target.  This will create an EGLImage, which can then
      be used by the compositor as a texture or passed to the
      modesetting code to use as an overlay plane.  Again, this is
      implemented by the vendor-specific protocol extension, which
      on the server side will receive the driver-specific details
      about the shared buffer and turn that into an EGLImage when
      the user calls eglCreateImageKHR.
    </para>
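The compositor-side flow just described might look roughly like the following C fragment. This is a non-runnable sketch, not a complete implementation: it assumes an already-initialized EGL display and wl_display, omits all error handling, and in practice the extension entry points would be resolved through eglGetProcAddress rather than called directly.

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>

static EGLImageKHR
image_from_wayland_buffer(EGLDisplay egl_display,
                          struct wl_display *wl_display,
                          struct wl_buffer *buffer)
{
    /* One-time setup step: bind the EGL display to the Wayland
     * display, as required by EGL_WL_bind_wayland_display. */
    static EGLBoolean bound = EGL_FALSE;
    if (!bound)
        bound = eglBindWaylandDisplayWL(egl_display, wl_display);

    /* Turn the client's shared buffer into an EGLImage; the result
     * can be used as a GL texture or handed to an overlay plane. */
    return eglCreateImageKHR(egl_display, EGL_NO_CONTEXT,
                             EGL_WAYLAND_BUFFER_WL,
                             (EGLClientBuffer) buffer, NULL);
}
```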
  </section>
</chapter>


@@ -4,13 +4,13 @@
%BOOK_ENTITIES;
]>
<authorgroup>
  <author>
    <firstname>Kristian</firstname>
    <surname>Høgsberg</surname>
    <affiliation>
      <orgname>Intel Corporation</orgname>
    </affiliation>
    <email>krh@bitplanet.net</email>
  </author>
</authorgroup>


@@ -4,31 +4,31 @@
%BOOK_ENTITIES;
]>
<bookinfo id="book-Wayland-Wayland">
  <title>Wayland</title>
  <subtitle>The Wayland display server</subtitle>
  <productname>Documentation</productname>
  <productnumber>0.1</productnumber>
  <edition>0</edition>
  <pubsnumber>0</pubsnumber>
  <abstract>
    <para>
      Wayland is a protocol for a compositor to talk to
      its clients as well as a C library implementation of
      that protocol.  The compositor can be a standalone
      display server running on Linux kernel modesetting
      and evdev input devices, an X application, or a
      wayland client itself.  The clients can be
      traditional applications, X servers (rootless or
      fullscreen) or other display servers.
    </para>
  </abstract>
  <corpauthor>
    <inlinemediaobject>
      <imageobject>
        <imagedata fileref="images/wayland.png" format="PNG" />
      </imageobject>
    </inlinemediaobject>
  </corpauthor>
  <xi:include href="Common_Content/Legal_Notice.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
  <xi:include href="Author_Group.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
</bookinfo>


@@ -23,23 +23,23 @@
<section id="sect-Wayland-Overview-Replacing-X11">
  <title>Replacing X11</title>
  <para>
    In Linux and other Unix-like systems, the X stack has grown to
    encompass functionality arguably belonging in client libraries,
    helper libraries, or the host operating system kernel.  Support for
    things like PCI resource management, display configuration management,
    direct rendering, and memory management has been integrated into the X
    stack, imposing limitations such as limited support for standalone
    applications, duplication in other projects (e.g. the Linux fb layer
    or the DirectFB project), and high levels of complexity for systems
    combining multiple elements (for example radeon memory map handling
    between the fb driver and X driver, or VT switching).
  </para>
  <para>
    Moreover, X has grown to incorporate modern features like offscreen
    rendering and scene composition, but subject to the limitations of the
    X architecture.  For example, the X implementation of composition adds
    additional context switches and makes things like input redirection
    difficult.
  </para>
  <mediaobject>
    <imageobject>
@@ -52,22 +52,22 @@ difficult.
    the screen.
  </para>
  <para>
    Over time, X developers came to understand the shortcomings of this
    approach and worked to split things up.  Over the past several years,
    a lot of functionality has moved out of the X server and into
    client-side libraries or kernel drivers.  One of the first components
    to move out was font rendering, with freetype and fontconfig providing
    an alternative to the core X fonts.  Direct rendering OpenGL as a
    graphics driver in a client-side library went through some iterations,
    ending up as DRI2, which abstracted most of the direct rendering
    buffer management from client code.  Then cairo came along and provided
    a modern 2D rendering library independent of X, and compositing
    managers took over control of the rendering of the desktop as toolkits
    like GTK+ and Qt moved away from using X APIs for rendering.  Recently,
    memory and display management have moved to the Linux kernel, further
    reducing the scope of X and its driver stack.  The end result is a
    highly modular graphics stack.
  </para>
</section>


@@ -4,12 +4,12 @@
%BOOK_ENTITIES;
]>
<book>
  <xi:include href="Book_Info.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
  <xi:include href="Overview.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
  <xi:include href="Architecture.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
  <xi:include href="Protocol.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
  <xi:include href="Compositors.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
  <xi:include href="ProtocolSpec.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
  <index />
</book>