LINX for Linux User's Guide

Copyright © 2006-2007 Enea Software AB.
 
LINX, OSE, OSEck, OSE Epsilon, Optima, NASP, Element, Polyhedra are trademarks of Enea Software AB. Linux is a registered trademark of Linus Torvalds. All other copyrights, trade names and trademarks used herein are the property of their respective owners and are used for identification purposes only. The source code included in LINX for Linux is released partly under the GPL (see COPYING file) and partly under a BSD type license - see license text in each source file.

Disclaimer. The information in this document is subject to change without notice and should not be construed as a commitment by Enea Software AB.

Document version: 2.0.0


1. LINX Overview

1.1 Introduction

LINX is an open inter-process communications (IPC) protocol, designed to be platform and interconnect independent. It enables applications to communicate transparently regardless of whether they are running on the same CPU or are located on different nodes in a cluster. Any type of cluster configuration is supported, from a single multi-core board to large systems with many nodes interconnected by any network topology. LINX is based on the traditional message passing technology used in the Enea OSE / OSEck family of real-time operating systems.

LINX consists of a set of Linux kernel modules, a LINX library to be linked with applications and a few command tools for configuration of inter-node links and statistics reports.

There is one main LINX kernel module that implements the IPC mechanisms and the Rapid Link Handler (RLNH) protocol, which allows LINX functionality to span multiple nodes transparently over logical links. To use LINX for inter-node communication, a Connection Manager (CM) kernel module that supports the underlying interconnect must be loaded as well. Currently, LINX contains two CMs, one for raw Ethernet and one for TCP/IP. The CM is located below the main LINX kernel module in the protocol stack and its main task is to provide reliable, in-order delivery of messages. LINX can be adapted to any underlying transport by adding new CMs.

The RLNH protocol and the CM protocols are specified in a separate document (see section 9.2).

The LINX kernel module provides a standard socket based interface using its own protocol family, PF_LINX.

The LINX library provides a set of function calls to applications. Application programmers should normally use the LINX API provided by this library, but it is possible to use the underlying socket interface too if necessary. Information on how to use the socket interface directly is found in the LINX reference documentation.

1.2 LINX Concepts

Endpoint - An endpoint is an entity that can participate in LINX communication. Each endpoint is assigned a name by the application creating it.
SPID - A binary identifier assigned to each endpoint by LINX. The SPID is used to refer to an endpoint when communicating with it.
LINX Signal - Endpoints communicate by exchanging messages called LINX signals. When sending a LINX signal, the application specifies the SPID of the destination endpoint.
Connection - A LINX connection provides reliable, in-order delivery of LINX data between two nodes over an underlying media or protocol stack.
Connection Manager - A LINX component that implements support for setting up connections over a particular type of interconnect.
Link - A logical association between two LINX nodes. Each link uses an underlying connection as transport. LINX IPC services are transparent across links.
Hunting - A LINX mechanism that allows applications to look up the SPID of an endpoint by name. A LINX signal is sent back to the application when a matching endpoint is found or created. Applications can search for endpoints on remote nodes by specifying a path of links to traverse.
Attaching - A LINX mechanism that allows applications to supervise endpoints in order to find out when they are terminated. A LINX signal is sent back to the application when the supervised endpoint is terminated.

2. Installation

Download the LINX distribution linx-n.n.n.tar.gz, where n.n.n is the LINX version. See section 9.3 for information on where to download LINX. Extract the contents of the archive at a suitable place in your Linux system:

   $ tar -zxvf linx-n.n.n.tar.gz

This creates a LINX directory called linx-n.n.n/. The file linx-n.n.n/doc/index.html contains pointers to all documentation available in the release. Make sure to read the README, RELEASE_NOTES and Changelog files for information about this version. Reference documentation is available as man pages and in HTML format. There is also a document describing the LINX protocols.

The following is found under the top level LINX directory:

Makefile, config.mk, common.mk - Make files for building LINX
bmark/ - Example benchmark application
example/ - Example client/server application
doc/ - LINX documentation
drivers/ - Dummy network driver for LINX message trace
include/ - LINX include files
liblinx/ - LINX library source code
linxcfg/, linxdisc/, linxstat/ - LINX commands source code
net/linx/ - LINX source code
patch/ - Patches for libpcap and tcpdump
script/ - Build scripts

3. Building LINX

To build LINX self-hosted, e.g. for the running kernel, just go to the top level LINX directory and run make:

   $ cd /path/to/linx-n.n.n/
   $ make

This will build the entire LINX package.

Note that the headers of the target Linux kernel source tree must be available in order to compile LINX. This is also needed when compiling for the running Linux kernel.

Cross-compiling LINX for another target requires a few variables to be set, either as environment variables or by changing the config.mk file in the top level LINX directory.

In addition, the PATH environment variable must be set to reach the cross compiler tool kit. When this has been set up correctly, go to the top level LINX directory and do make.
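
As a rough illustration, a cross build could be set up along the lines of the sketch below. The variable names are illustrative assumptions only; the actual variables to set are found in the config.mk file.

   $ export ARCH=powerpc                             # target architecture (example value)
   $ export CROSS_COMPILE=powerpc-linux-gnu-         # cross toolchain prefix (example value)
   $ export KERNEL_SRC=/path/to/target/kernel        # target kernel source tree (assumed name)
   $ export PATH=/path/to/toolchain/bin:$PATH        # make the cross compiler reachable
   $ cd /path/to/linx-n.n.n/
   $ make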

When building the entire LINX package, the LINX kernel modules, the LINX library, the LINX command tools (linxcfg, linxdisc, linxstat) and the example applications are built.


4. Using LINX

This section describes the fundamental concepts of LINX communication. The examples show how to use the LINX API, defined in the file linx.h.

See section 5 for information on how to load and configure LINX for your system.

4.1 LINX Endpoints

An application that wants to communicate using LINX first creates a LINX endpoint by calling linx_open(). linx_open() assigns a name to the endpoint; the name is a null-terminated string and is not required to be unique. On Linux, the name may be of any length, but note that there may be restrictions on other platforms. One thread may own multiple LINX endpoints simultaneously.

   LINX *client = linx_open("client", 0, NULL);

LINX assigns each endpoint a binary identifier called a SPID. The SPID is used to refer to the endpoint when communicating with it. SPIDs are unique within the node on which they are created.

Each endpoint is internally associated with a LINX socket. An application can obtain the socket descriptor of a LINX endpoint using the linx_get_descriptor() call if needed, e.g. for generic poll() or select() calls together with other descriptors. Note that a LINX socket descriptor retrieved this way must not be closed by calling close().
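
A minimal sketch of this usage (error handling omitted):

   #include <poll.h>
   #include <linx.h>

   LINX *client = linx_open("client", 0, NULL);

   struct pollfd pfd;
   pfd.fd = linx_get_descriptor(client); /* underlying LINX socket descriptor */
   pfd.events = POLLIN;

   /* Wait for a LINX signal together with any other descriptors of interest. */
   if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
      /* A LINX signal can now be received with linx_receive(), see section 4.2.
         Note: do not close() the descriptor itself. */
   }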

A LINX endpoint is closed by calling linx_close(). This frees all resources owned by the endpoint. If a thread exits, all of its owned LINX endpoints will automatically be closed.

4.2 LINX Signals

Applications communicate by exchanging messages called LINX signals between endpoints. A LINX signal buffer contains a mandatory leading 4-byte signal number, optionally followed by data that the sender wishes to convey to the destination. Thus, the minimum size of a LINX signal is 4 bytes. Signal numbers are used to identify different types of signals and are mainly defined by applications. The signal number range 0x10000000 to 0xFFFFFFFF is available for user applications.

New LINX signals are easily defined. It is simply a matter of declaring a struct (see the example below). Each application shall also declare the union LINX_SIGNAL type to contain all LINX signals used in that particular application. LINX signals shall be cast to pointers to this generic structure in LINX API function calls.

   #define REQUEST_SIG 0x5001 /* Signal number */
   #define REPLY_SIG   0x5002

   struct request_sig
   {
      LINX_SIGSELECT sig_no;
      int code;
   };

   struct reply_sig
   {
      LINX_SIGSELECT sig_no;
      int status;
   };

   union LINX_SIGNAL
   {
      LINX_SIGSELECT     sig_no;
      struct request_sig request;
      struct reply_sig   reply;
   };

Before a LINX signal can be sent, it must be allocated and initialized. The linx_alloc() call returns a LINX signal buffer and initializes the signal number with a provided value.

   union LINX_SIGNAL *sig;

   sig = linx_alloc(endpoint, sizeof(struct request_sig), REQUEST_SIG);
   sig->request.code = 1;

The returned LINX signal buffer is owned by the LINX endpoint that allocated it and may not be used by other endpoints. Sending a LINX signal transfers its ownership to the destination endpoint. A LINX signal is never shared between different threads or endpoints. When a LINX signal buffer is not needed anymore, it should be freed by calling linx_free_buf().

Before sending a LINX signal, the SPID of the destination endpoint must be known. LINX provides a method to obtain the SPID of an endpoint by searching for its name, this is called hunting and is described in the next section. The receiver of a LINX signal can look up the SPID of the sender using the linx_sender() call. When the destination SPID is known, the LINX signal can be sent:

   linx_send(endpoint, &sig, server_spid);

Transferred LINX signals are stored in a receive queue associated with the destination endpoint. The destination endpoint chooses when to receive a LINX signal and what signal numbers to accept at any given time. This means that an endpoint may choose to receive LINX signals in a different order than they were sent, based on signal number filtering. A received LINX signal may be reused, for example to send a reply, if the buffer is large enough. Just overwrite the signal number field with the new value.

   union LINX_SIGNAL *sig;
   LINX_SIGSELECT any_sig[] = { 0 }; /* Signal filter. Here any signal is allowed */

   linx_receive(endpoint, &sig, any_sig);

LINX provides automatic endian conversion of the signal number if needed when a LINX signal is sent to an endpoint located on a remote node. The rest of the signal data is not converted; this must be taken care of by the applications.
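
As an illustration of signal filtering and buffer reuse, a simple server loop could be sketched as follows, assuming a LINX endpoint called server and the request/reply signals defined above (error handling omitted):

   union LINX_SIGNAL *sig;
   LINX_SIGSELECT req_filter[] = { 1, REQUEST_SIG }; /* accept only REQUEST_SIG */
   LINX_SPID client_spid;

   for (;;) {
      linx_receive(server, &sig, req_filter);
      client_spid = linx_sender(server, &sig);

      /* Reuse the received buffer for the reply; request and reply have the same size. */
      sig->sig_no = REPLY_SIG;
      sig->reply.status = 0;
      linx_send(server, &sig, client_spid); /* ownership passes to the client */
   }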

4.3 Hunting for an Endpoint

Before sending a LINX signal, the sender must know the SPID of the destination endpoint. The SPID of a peer endpoint is obtained by asking LINX to hunt for its name using the linx_hunt() call. When the peer endpoint has been found, LINX makes sure that a LINX signal is sent to the hunting endpoint. This LINX signal appears to have been sent from the found peer endpoint, i.e. the SPID can be obtained by looking at the sender of the LINX signal. The hunting endpoint may provide a LINX signal to be sent back when the sought endpoint has been found. If no LINX signal is provided, a default LINX signal of type LINX_OS_HUNT_SIG is sent instead.

   union LINX_SIGNAL *sig;
   LINX_SPID server_spid;
   LINX_SIGSELECT sig_sel_hunt[] = { 1, LINX_OS_HUNT_SIG };

   linx_hunt(endpoint, "server", NULL);
   linx_receive(endpoint, &sig, sig_sel_hunt);
   server_spid = linx_sender(endpoint, &sig);

If the peer endpoint does not exist when hunted for, LINX stores the hunt internally as pending. The LINX hunt signal is sent back to the hunting endpoint when an endpoint with a matching name is created.

If there are several LINX endpoints with the same name, it is not defined which one is used to resolve a hunt call.

4.4 Attaching to an Endpoint

If a LINX endpoint sends a LINX signal to another endpoint, but the receiving endpoint has terminated for some reason, the LINX signal will be thrown away (freed) by LINX.

LINX provides a mechanism to supervise a peer endpoint, i.e. to request notification when it is terminated. The linx_attach() call is used to attach to an endpoint. When a supervised endpoint terminates, LINX makes sure that a LINX signal is sent back to the supervising endpoint. This LINX signal appears to have been sent from the supervised (terminated) endpoint, i.e. the SPID can be obtained by looking at the sender of the LINX signal. The endpoint that attaches may provide a LINX signal to be sent back when the supervised endpoint terminates. If no LINX signal is provided, a default LINX signal of type LINX_OS_ATTACH_SIG is sent instead.

   linx_attach(endpoint, server_spid, NULL);

If the supervising endpoint wants to resume communication, it should issue a new hunt for the peer endpoint name, in order to be notified when a new endpoint with the same name is found or created.
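
A sketch combining attach and hunt, assuming a client endpoint that has already obtained server_spid as in the previous section (error handling omitted):

   union LINX_SIGNAL *sig;
   LINX_SIGSELECT attach_filter[] = { 1, LINX_OS_ATTACH_SIG };

   /* Supervise the server; the default LINX_OS_ATTACH_SIG is used since no signal is provided. */
   linx_attach(client, server_spid, NULL);

   /* ... normal communication with the server ... */

   /* The attach signal arrives when the server endpoint terminates. */
   linx_receive(client, &sig, attach_filter);
   linx_free_buf(client, &sig);

   /* Hunt again to be notified when a new endpoint named "server" appears. */
   linx_hunt(client, "server", NULL);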

4.5 Inter-node Communication

LINX endpoints are able to communicate transparently regardless of whether they are located on the same node or on different nodes interconnected in a LINX cluster. A cluster may contain nodes running different operating systems that support LINX; for example, a LINX endpoint on a Linux node may communicate with a process on a connected DSP running the Enea OSEck real-time operating system.

A LINX cluster should be seen as a logical network established between a set of nodes interconnected by some underlying transport that is supported by LINX, such as Ethernet. For two nodes to be able to communicate, a LINX link must first be established between them. Each node may set up any number of links to other nodes which are directly reachable on the underlying transport. Links can be manually set up by using the linxcfg command, or dynamically established using the LINX discovery daemon, linxdisc.

Each link has a name that is unique within the node. The name of a link may be (and usually is) different on the two sides of the link, i.e. the link between nodes A and B may be called “LinkToB” on A and “LinkToA” on B. Often the link name is the same as the name of the remote node connected via the link.

Note that nodes do not have addresses in LINX. To reach a remote node, the complete path of link names to be used is specified.

To hunt for a LINX endpoint located on a remote node, the name of the endpoint is prepended with the path of link names that shall be used to reach that node, separated by “/”.

Example:

Hunting for “LinkToB/LinkToC/EndpointName” tells LINX to search for “EndpointName” on the node two hops away from us that is reachable by traversing first “LinkToB” and then “LinkToC”.
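
In code, hunting across links only differs from a local hunt in the name passed to linx_hunt(). A minimal sketch, assuming a link named "LinkToB" and a client endpoint:

   union LINX_SIGNAL *sig;
   LINX_SIGSELECT hunt_filter[] = { 1, LINX_OS_HUNT_SIG };
   LINX_SPID remote_spid;

   /* Hunt for "server" on the node reachable via the link "LinkToB". */
   linx_hunt(client, "LinkToB/server", NULL);
   linx_receive(client, &sig, hunt_filter);
   remote_spid = linx_sender(client, &sig); /* SPID of a local virtual endpoint, see section 4.6 */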

4.6 Virtual Endpoints

Since LINX SPIDs are unique within a single node only, it is not possible to address remote endpoints by using their remote SPIDs directly. LINX inter-node communication is based on automatic creation of local virtual endpoints that represent remote endpoints. Each LINX endpoint involved in inter-node communication has a virtual endpoint, internally created by LINX, representing it on the peer node. A virtual endpoint acts as a proxy for a particular remote endpoint and is communicated with in the same way as normal (user-created) endpoints. This way, applications do not need to know the true SPIDs of endpoints on other nodes - they always communicate with local virtual endpoints, which have local SPIDs. The life span of a virtual endpoint matches the life span of the remote endpoint it represents.

A LINX signal sent to a virtual endpoint is intercepted by LINX and automatically forwarded to the remote node where the endpoint it represents is located. On the destination node, LINX delivers the LINX signal to its intended destination and makes it appear as if it was sent from a virtual endpoint representing the true sender.

A LINX signal received from an endpoint on a remote node always appears to have been sent from the corresponding local virtual endpoint.

Virtual endpoints are created by LINX when needed, typically when a remote hunt call has been made and the peer endpoint has been found (or created) on the remote node. Virtual endpoints use the same naming syntax as the hunt path described above. This means that when LINX creates a virtual endpoint, any pending hunt requests for its name are resolved and the hunt LINX signal is sent to the hunting endpoint from the SPID of the virtual endpoint.

LINX also creates virtual endpoints representing links to remote nodes. These endpoints carry the same name as the link, prepended with "/". An application can monitor the state of a link by hunting for and attaching to the virtual endpoint representing it.
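
For example, an application could supervise the link "LinkToB" like this (a sketch; error handling omitted):

   union LINX_SIGNAL *sig;
   LINX_SIGSELECT hunt_filter[] = { 1, LINX_OS_HUNT_SIG };
   LINX_SPID link_spid;

   /* The virtual endpoint "/LinkToB" exists while the link is up. */
   linx_hunt(endpoint, "/LinkToB", NULL);
   linx_receive(endpoint, &sig, hunt_filter);
   link_spid = linx_sender(endpoint, &sig);

   /* An attach signal is delivered if the link goes down. */
   linx_attach(endpoint, link_spid, NULL);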

When an endpoint terminates, LINX makes sure that all virtual endpoints representing it on remote nodes are terminated too (so that attach LINX signals are sent to supervising endpoints). If a remote node is shut down or becomes unreachable, LINX will detect this and terminate all virtual endpoints that represent endpoints located on that node.

Example:

An application on node A hunts for "LinkToB/server". This tells LINX to search for the endpoint "server" on node B, reachable by traversing link "LinkToB". When an endpoint named "server" has been found (or created) on B, LINX creates a virtual endpoint on node A named "LinkToB/server" and sends the hunt LINX signal from this virtual endpoint to the hunting endpoint. After receiving the hunt LINX signal, the application is able to communicate with the remote endpoint "server" on node B by sending LINX signals to the virtual endpoint "LinkToB/server".

Note that the scenario above will also create a virtual endpoint named "LinkToA/client" on node B (if "client" is the name of the hunting endpoint and "LinkToA" is the name of the link on node B).


5. Getting Started

5.1 Loading the LINX Kernel Modules

To enable LINX, simply load the LINX kernel module into the Linux kernel (requires root permissions):

   $ insmod net/linx/linx.ko

Applications are now able to use LINX, but only to communicate within the node.

To use LINX for communication between several interconnected nodes, also load the appropriate LINX Connection Manager kernel module depending on which underlying transport to use. LINX currently supports raw Ethernet and TCP/IP.

To use raw Ethernet as transport, load the Ethernet CM kernel module:

   $ insmod net/linx/linx_eth_cm.ko

To use TCP/IP as transport, load the TCP CM kernel module:

   $ insmod net/linx/linx_tcp_cm.ko

5.2 Setting Up a LINX Cluster

To set up a LINX cluster with two participating nodes, here called A and B, start by loading the appropriate kernel modules as described above on both nodes. Then use the linxcfg command on each node to create a link to the other node. The syntax of the command differs depending on which CM is used.

When using Ethernet, the MAC address of the remote node, the device name to use and a suitable link name shall be provided.

On node A (replace the MAC address with the actual MAC address of node B):

   $ bin/linxcfg create 00:18:4D:72:13:1B eth0 LinkToB

On node B (replace the MAC address with the actual MAC address of node A):

   $ bin/linxcfg create 00:0C:6E:C3:FB:A2 eth0 LinkToA

When using TCP/IP, the IP address of the remote node and a suitable link name shall be provided:

On node A (replace the IP address with the actual IP address of node B):

   $ bin/linxcfg -t tcp create 192.168.1.2 LinkToB

On node B (replace the IP address with the actual IP address of node A):

   $ bin/linxcfg -t tcp create 192.168.1.1 LinkToA

After these steps, the LINX cluster is available and applications can communicate with each other transparently, regardless of which node they are located on.

5.3 Running the LINX Example Application

The LINX example application is found in the example/ directory. It is a simple client / server based application that serves both as an introduction to the LINX API programming model, and as a quick way of testing that LINX is up and running in a system with one or more nodes.

See above for information on how to build the LINX kernel modules and binaries (including the example).

Doing make example in the top level LINX directory builds only the example. This produces two executables in the example/bin directory: linx_basic_client and linx_basic_server.

The actual operation of the application is simple; the client sends LINX signals to the server and the server sends reply LINX signals back. There can be any number of clients distributed on different nodes. Each client will hunt for the server, either on a given link name (path of link names) or on the local machine if no link name is provided. The server can be terminated and restarted; the clients use LINX attach to detect when the server disappears and resume operation when the server becomes available again.

To run the example on a single node, the following steps are needed:

  1. Build LINX
  2. Load the LINX kernel module into the kernel
  3. Start the example server in the background: example/bin/linx_basic_server &
  4. Start the example client: example/bin/linx_basic_client

To run the example on two nodes, the following steps are needed:

  1. Build LINX
  2. Load the LINX kernel module on both nodes
  3. Load the appropriate LINX CM kernel module on each node
  4. On each node, use the linxcfg command to establish the link
  5. On one node, start the example server: example/bin/linx_basic_server
  6. On the other node, start the example client: example/bin/linx_basic_client linkname
    where linkname is either LinkToA or LinkToB, according to the link names given in section 5.2 above.

6. LINX Utilities

6.1 linxcfg

The linxcfg command creates or destroys LINX links to remote nodes. The type of CM is specified using the -t option. If no CM type is given, the Ethernet CM is assumed.

Examples:

   $ bin/linxcfg create 00:18:4D:72:13:1B eth0 LinkToB
   $ bin/linxcfg destroy LinkToB

The linxcfg command must be used on both participating nodes for a link to be established.

Many optional parameters depending on CM type can be given when creating a link with linxcfg. See the linxcfg(1) reference documentation for details.

6.2 linxdisc

On Ethernet, a LINX cluster can be automatically established and supervised by running the linxdisc daemon on all participating nodes. The daemon periodically broadcasts advertisements and waits for advertisements from remote nodes. Each node advertises a cluster name and a node name. These values are defined in a configuration file provided to linxdisc. Each cluster and each node must have a unique name. The configuration file also defines filtering rules for which network interfaces to use and which remote node names to connect to. When an advertisement is received from a node that belongs to the same cluster and is allowed according to the node name filtering rules, a link is automatically set up between the nodes.
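
As a rough illustration only, a configuration file could look something like the sketch below. The parameter names are assumptions made for this example and may not match the real syntax; see linxdisc.conf(5) for the authoritative format.

   # Illustrative sketch of a linxdisc configuration (parameter names are assumed)
   NODE_NAME=nodeA          # unique name advertised for this node
   CLUSTER_NAME=mycluster   # only nodes advertising the same cluster name connect
   IFACE=eth0               # network interface(s) to send/receive advertisements on
   ALLOW=*                  # filtering rule for which remote node names to connect to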

See the linxdisc(8) and linxdisc.conf(5) reference documentation for details.

6.3 linxstat

LINX status information can be fetched from the LINX kernel module and displayed using the linxstat command. Status is shown for local and virtual (remote) endpoints as well as links to other nodes. The information includes names, SPIDs, queued LINX signals and pending hunts and attaches. See the linxstat(1) reference documentation for details.
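
Example (assuming the binary is under bin/ as in the linxcfg examples above):

   $ bin/linxstat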

6.4 LINX Message Trace

When using an external protocol analyzer such as tcpdump, only LINX messages sent between nodes are visible. Intra-node transmissions never reach the point where the Linux kernel duplicates messages for listening protocol analyzers (the Linux General Device Driver Interface). LINX provides a feature to make these messages visible as well. This is done by duplicating all LINX signals that are sent within LINX to a special dummy network driver called linx0.

There are also patches for tcpdump and libpcap included in the LINX release (since LINX 1.2). These patches make tcpdump and libpcap understand the messages that are sent to linx0.

To start using LINX message trace, the following steps are required:

  1. Compile the LINX kernel module with the LINX_MESSAGE_TRACE=yes option and load it into the kernel.
  2. Compile the LINX dummy network driver and load it into the kernel. Enable it by doing: ifconfig linx0 up
  3. Compile tcpdump and libpcap with the patches provided in the LINX release patch/ directory.
  4. Start tcpdump and configure it to listen on linx0.
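
The steps above could look roughly as follows. The dummy driver module path and file name are assumptions made for illustration; the actual file is built from the drivers/ directory.

   $ make LINX_MESSAGE_TRACE=yes      # rebuild LINX with message trace enabled
   $ insmod net/linx/linx.ko
   $ insmod drivers/linx0.ko          # dummy network driver (path and name assumed)
   $ ifconfig linx0 up
   $ tcpdump -i linx0                 # patched tcpdump, see the patch/ directory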

7. LINX Kernel Module Configuration

Parameters can be passed to the LINX kernel module at load time. Example:

   $ insmod linx.ko linx_max_links=64

To list available parameters and types, the modinfo command can be used:

   $ modinfo -p linx.ko

The following parameters can be passed to the LINX kernel module:

linx_max_links

The maximum number of links that can be established to remote nodes. When the limit is reached, LINX will refuse to create new links. The default value is 32 and the maximum value is 1024.

linx_max_sockets_per_link

The maximum number of endpoints (sockets) that are allowed to communicate over one link. If the limit is exceeded, the link will be disconnected and reestablished. The value must be a power of 2. The default value is 1024 and the maximum value is 65536.

linx_max_spids

The maximum number of LINX endpoints (sockets) that can be created. When the limit is reached, LINX will refuse to create new endpoints. The value must be a power of 2. The default value is 512 and the maximum value is 65536.

linx_max_attrefs

The maximum number of pending attaches. When the limit is reached, attach() calls will fail. The value needs to be a power of 2. The default value is 1024 and the maximum value is 65536.

linx_max_tmorefs

The maximum number of timeout references. When the limit is reached, linx_request_tmo() will fail. The value must be a power of 2. The default value is 1024 and the maximum value is 65536.

linx_sockbuf_size

Specifies the send and receive buffer queue size of a LINX socket. This value is passed to the socket struct fields sk_sndbuf and sk_rcvbuf, which control the amount of memory a socket may use for sending and receiving packets. The default value and maximum value are 1073741824.


8. LINX Statistics

8.1 Per-endpoint Statistics

Statistics per endpoint can be enabled at compile-time for the LINX kernel module by using the -DSOCK_STAT flag when building.

Statistics are collected for both ordinary and virtual LINX endpoints, as well as for links, independent of which Connection Manager is used. The results can be found in the procfs file system in the file /proc/net/linx/sockets and are also available through the linx_get_stat() function call or by using the linxstat -S command.
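
For example, the collected statistics can be inspected from the shell:

   $ cat /proc/net/linx/sockets
   $ bin/linxstat -S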

The following statistics are presented:

no_recv_bytes - Number of received bytes.
no_sent_bytes - Number of sent bytes.
no_recv_signals - Number of received LINX signals.
no_sent_signals - Number of sent LINX signals.
no_recv_remote_bytes - Number of bytes received from remote endpoints.
no_sent_remote_bytes - Number of bytes sent to remote endpoints.
no_recv_remote_signals - Number of LINX signals received from remote endpoints.
no_sent_remote_signals - Number of LINX signals sent to remote endpoints.
no_recv_local_bytes - Number of bytes received from local endpoints.
no_sent_local_bytes - Number of bytes sent to local endpoints.
no_recv_local_signals - Number of LINX signals received from local endpoints.
no_sent_local_signals - Number of LINX signals sent to local endpoints.
no_queued_bytes - Number of bytes waiting in the receive queue of the endpoint.
no_queued_signals - Number of LINX signals waiting in the receive queue of the endpoint.

A sent LINX signal is shown as queued until the destination application has received it with a linx_receive() call (or a recvfrom() / recvmsg() call if using the socket interface directly). When a hunt for a remote endpoint is resolved, the hunt reply will be counted as a “received remote” LINX signal even though the hunt reply itself is not sent over the link. From the LINX endpoint's perspective, the hunt reply is received from a virtual endpoint which represents a remote endpoint. The same applies to attach LINX signals from virtual endpoints.

When a LINX endpoint is destroyed, so are the statistics for that endpoint. If an application needs to save the statistics, it should do so before calling linx_close() (or close() if using the socket interface directly).

8.2 Ethernet CM Statistics

Statistics can be enabled at compile-time for the LINX Ethernet Connection Manager by using the following flags when building LINX:

-DSTAT_SEND_PKTS
Shows the number of sent packets per link, stored in /proc/net/linx/cm/eth/send_pkts
-DSTAT_RECV_PKTS
Shows the number of received packets per link, stored in /proc/net/linx/cm/eth/recv_pkts
-DSTAT_SEND_RETRANS
Shows the number of retransmitted packets per link, stored in /proc/net/linx/cm/eth/send_retrans
-DSTAT_BASIC
Turns on all of the above statistics fields.

9. Where to Find More Information

9.1 Reference Documentation

The reference manuals can be found under the doc/ directory, in the man1 - man8 subdirectories. The linx(7) manual page is the top document. To be able to read the reference documentation with the man command in Linux, the path to the LINX doc/ directory needs to be added to the MANPATH environment variable. Alternatively, index.html in the LINX doc/ directory contains pointers to the HTML versions of the reference manual pages, which have been generated from the man page format files.

The LINX API is described in the reference manual pages, see linx.h(3) and linx_types.h(3).

How to use the LINX socket interface directly is described in the linx(7) manual page.

The linxcfg(1) command, the linxdisc(8) daemon and the linxstat(1) command are also described in reference manual pages.

9.2 LINX Protocols

Specifications of all LINX protocols for inter-node communication are found in the separate LINX protocols document.

9.3 The LINX Project

The LINX project can be found on SourceForge at the following addresses:

Email: linx@enea.com

9.4 Other Information

See www.enea.com for more general information about LINX. You will find a LINX datasheet, a description of the LINX protocols, questions and answers about LINX, and other information.