A transport is the layer between the packet interface and a device IO interface. The advanced user can pass optional parameters into the underlying transport layer through the device address. These optional parameters control how the transport object allocates memory, resizes kernel buffers, spawns threads, etc. When not specified, the transport layer will use values for these parameters that are known to perform well on a variety of systems. The transport parameters are defined below for the various transports in the UHD software:
The UDP transport is implemented with user-space sockets. This means standard Berkeley sockets API using send()/recv().
The following parameters can be used to alter the transport's default behavior (these options can be passed to a USRP device as arguments at initialization time; see also Device Configuration through address string, and the example after the parameter lists below):
- recv_frame_size: The size of a single receive buffer in bytes
- num_recv_frames: The number of receive buffers to allocate
- send_frame_size: The size of a single send buffer in bytes
- num_send_frames: The number of send buffers to allocate
- recv_buff_fullness: The targeted fullness factor of the buffer (typically around 90%)
- ups_per_sec: USRP2 only. Flow control ACKs per second on TX.
- ups_per_fifo: USRP2 only. Flow control ACKs per total buffer size (in packets) on TX.

Notes:
- num_recv_frames does not affect performance.
- num_send_frames does not affect performance.
- recv_frame_size and send_frame_size can be used to increase or decrease the maximum number of samples per packet. The frame sizes default to 1472 bytes of payload per IP/UDP packet (fitting a standard 1500-byte MTU) and may be increased if permitted by your network hardware.

The host-based flow control expects periodic update packets from the device. These update packets inform the host of the last packet consumed by the device, which allows the host to determine throttling conditions for the transmission of packets. The following mechanisms affect the transmission of periodic update packets:
- ups_per_fifo: The number of update packets for each FIFO's worth of bytes sent into the device
- ups_per_sec: The number of update packets per second (defaults to 20 updates per second)

It may be useful to increase the size of the socket buffers to move the burden of buffering samples into the kernel, or to buffer incoming samples faster than they can be processed. However, if your application cannot process samples fast enough, no amount of buffering can save you. The following parameters can be used to alter the socket buffer sizes:
- recv_buff_size: The desired size of the receive buffer in bytes
- send_buff_size: The desired size of the send buffer in bytes

Note: Large send buffers tend to decrease transmit performance.
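The transport parameters above are passed as comma-separated key/value pairs in the device address string. A minimal sketch using the benchmark_rate example shipped with UHD (the device address, frame size, buffer size, and rate are illustrative assumptions, not recommendations; the example path is where UHD typically installs its examples):

# Illustrative values only: a larger receive frame requires jumbo-frame support on the
# network interface, and a larger socket buffer may require raising the kernel limits
# described in the Linux notes below.
<install-path>/lib/uhd/examples/benchmark_rate \
    --args="addr=192.168.10.2,recv_frame_size=8000,recv_buff_size=50000000" \
    --rx_rate=25e6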
Latency is a measurement of the time it takes a sample to travel between the host and device. Most computer hardware and software is bandwidth optimized, which may negatively affect latency. If your application has strict latency requirements, please consider the following notes:
Note1: The time the device takes to fill a packet is proportional to the frame size and inversely proportional to the sample rate. Therefore, to improve receive latency, configure the transport for a smaller frame size.
Note2: For overall latency improvements, look for "Interrupt Coalescing" settings for your OS and Ethernet chipset. The Intel Ethernet chipsets in particular appear to offer fine-grained control on Linux.
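For example, a sketch of requesting a smaller receive frame through the device address string (the address, frame size, and rate are illustrative assumptions):

# Illustrative values: a 512-byte receive frame holds fewer samples, so each packet is
# handed to the host sooner than with the default 1472-byte frame.
<install-path>/lib/uhd/examples/benchmark_rate --args="addr=192.168.10.2,recv_frame_size=512" --rx_rate=1e6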
On Linux, the maximum buffer sizes are capped by the sysctl values net.core.rmem_max and net.core.wmem_max. To change the maximum values, run the following commands:

sudo sysctl -w net.core.rmem_max=<new value>
sudo sysctl -w net.core.wmem_max=<new value>
Set the values permanently by editing /etc/sysctl.conf.
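For example, the following lines (a sketch; use whatever values you chose above) can be added to /etc/sysctl.conf and applied with sudo sysctl -p:

net.core.rmem_max=<new value>
net.core.wmem_max=<new value>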
It is also possible to tune the network interface controller (NIC) by using ethtool. Increasing the number of descriptors for TX or RX can dramatically boost performance on some hosts.
To change the number of TX/RX descriptors, run the following command:
sudo ethtool -G <interface> tx <N> rx <N>
One can query the maximums and current settings with the following command:
ethtool -g <interface>
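A sketch with hypothetical values (the interface name and ring sizes are assumptions; the usable maximum is whatever ethtool -g reports for your NIC):

# Example only: raise the TX and RX rings of eth0 to 4096 descriptors each.
sudo ethtool -G eth0 tx 4096 rx 4096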
UDP send fast-path: It is important to change the default UDP behavior such that 1500 byte packets still travel through the fast path of the sockets stack. This can be adjusted with the FastSendDatagramThreshold registry key. A registry file that applies this setting ships with UHD:

<install-path>/share/uhd/FastSendDatagramThreshold.reg
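One way to apply it, assuming an administrator command prompt (double-clicking the .reg file in Explorer also works):

reg import "<install-path>\share\uhd\FastSendDatagramThreshold.reg"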
Power profile: The Windows power profile can seriously impact instantaneous bandwidth, and applications can take time to ramp up to full performance capability. It is recommended that users set the power profile to "high performance".
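A sketch of doing this from an elevated command prompt (SCHEME_MIN is the built-in alias for the "High performance" plan; the Control Panel power options work equally well):

powercfg /setactive SCHEME_MIN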
The USB transport is implemented with LibUSB. LibUSB provides an asynchronous API for USB bulk transfers.
The following parameters can be used to alter the transport's default behavior (see the example after the Udev commands below):
- recv_frame_size: The size of a single receive transfer in bytes
- num_recv_frames: The number of simultaneous receive transfers
- send_frame_size: The size of a single send transfer in bytes
- num_send_frames: The number of simultaneous send transfers

On Linux, Udev handles USB plug and unplug events. The following commands install a Udev rule so that non-root users may access the device:
cd <install-path>/lib/uhd/utils
sudo cp uhd-usrp.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
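As with the UDP transport, the USB transport parameters above are passed in the device address string. A sketch, assuming a B2xx-series device and illustrative values:

# Example only: request more simultaneous USB receive transfers for high-rate streaming.
<install-path>/lib/uhd/examples/benchmark_rate --args="type=b200,num_recv_frames=128" --rx_rate=10e6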
A driver package must be installed to use a USB-based product with UHD software. Download the driver package and unzip it into a known location; we will refer to this location as the <directory>. When Windows prompts for a driver (or when updating the driver through the Device Manager), browse to the <directory>, and select the .inf file.

The NI-RIO-based PCIe transport is only used with the X300/X310. It uses a separate driver stack (NI-RIO) which must be installed separately (see also NI RIO Kernel Modules for X-Series PCIe Connectivity).
More information on how to set it up can be found here: PCI Express (Desktop).
The X3x0 PCIe transport has 6 separate bidirectional DMA channels, and UHD will use two of those for command, control, and asynchronous messages. That means a total of four DMA channels can be used for streaming (either 4xRX, for TwinRX operations, or 2xRX + 2xTX for full-duplex operation).
The following parameters can be used to alter the transport's default behavior (see the example at the end of this section):
- recv_frame_size: The size of a single receive transfer in bytes
- num_recv_frames: The number of simultaneous receive transfers
- recv_buff_size: The receive buffer size in bytes. Must be a multiple of the page size
- send_frame_size: The size of a single send transfer in bytes
- num_send_frames: The number of simultaneous send transfers
- send_buff_size: The send buffer size in bytes. Must be a multiple of the page size
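A sketch of selecting the PCIe transport and adjusting its parameters through the device address string (the resource name RIO0, the frame count, and the rate are illustrative assumptions; the actual resource name comes from the NI-RIO enumeration on your host):

# Example only: address the X3x0 by its NI-RIO resource and request more receive transfers.
<install-path>/lib/uhd/examples/benchmark_rate --args="resource=RIO0,num_recv_frames=128" --rx_rate=100e6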