

3.1 Directed Point

Directed Point (DP) is a kernel-level lightweight communication system that aims to support high-performance communication in a multiprogramming environment over a broad spectrum of commodity interconnects. DP adopts the message-passing programming model, in which data are exchanged between communicating pairs by explicit send and receive operations. Although the DP abstraction model supports group communications, these are realized with matching send and receive calls, e.g. dp_write() and dp_read(). This send-receive paradigm supports unreliable, asynchronous communication between communicating processes. Users who want to work directly on the DP messaging layer need to implement a reliability layer atop DP. It is also the user's responsibility to handle the fragmentation, re-assembly, multiplexing and demultiplexing of messages. DP provides a simple but flexible interface for system developers to build efficient high-level communication interfaces.
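As a minimal sketch of these send-receive semantics, the following user-space stub mimics a matching dp_write()/dp_read() pair. The prototypes and the single-slot "network" are assumptions for illustration only; they are not the actual kernel-level DP calls.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical prototypes: the real DP calls are kernel-level and their
 * signatures are not given here.  This stub simulates only the matching
 * send/receive semantics, using one in-memory message slot. */
#define DP_MAX_MSG 1500

static char msg_slot[DP_MAX_MSG];
static int  msg_len = -1;            /* -1 means no message pending */

/* dp_write(): inject one message toward the peer (stubbed as a local slot).
 * Messages longer than DP_MAX_MSG are rejected: fragmentation is the
 * user's responsibility. */
int dp_write(const void *buf, int len)
{
    if (len < 0 || len > DP_MAX_MSG)
        return -1;
    memcpy(msg_slot, buf, len);
    msg_len = len;
    return len;
}

/* dp_read(): asynchronous receive; returns -1 when nothing has arrived,
 * mirroring DP's unreliable, non-blocking semantics. */
int dp_read(void *buf, int maxlen)
{
    if (msg_len < 0 || msg_len > maxlen)
        return -1;
    memcpy(buf, msg_slot, msg_len);
    int n = msg_len;
    msg_len = -1;                    /* slot consumed */
    return n;
}
```

A caller would pair the two operations, e.g. dp_write("hello", 5) on one side and dp_read(buf, sizeof buf) on the other; a read with no pending message simply fails rather than blocking.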

The system architecture of DP consists of three layers: the DP Application Programming Interface (DP API) Layer in the user space, and the DP Services Layer and the DP Network Interface Layer in the kernel space. The DP API Layer consists of lightweight system calls and user-level function calls, the operations with which users program their communication code. The DP Network Interface Layer consists of network driver modules. This layer is responsible for all hardware-specific messaging setup, and for signaling the hardware to receive messages from and inject messages into the network. Currently supported network driver modules include the Intel EEPro, Digital 21140A, and 3Com 905C Fast Ethernet, the Hamachi Gigabit Ethernet, and the FORE PCA-200E ATM.

The DP Services Layer implements the services for passing packets from user space to the network hardware, as well as for delivering incoming packets to the user-space buffers of the receiving processes. This layer realizes the DP messaging protocol and is hardware independent. To support asynchronous communication, a dedicated buffer is pre-allocated to each DP endpoint; it stores the incoming messages directed to that endpoint. In the DP abstraction model, an endpoint is the network abstraction for addressing a communication partner. Although DP supports dynamic binding of the same endpoint to different partners, at any particular instant each endpoint corresponds to exactly one communicating partner.
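A hypothetical endpoint structure can illustrate this binding rule. The field and function names below are assumptions made for illustration; they are not DP's actual definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative endpoint: it addresses exactly one partner at a time,
 * and it owns a dedicated buffer for incoming messages. */
struct dp_endpoint {
    int   partner_id;   /* current communicating partner */
    char *buffer;       /* dedicated per-endpoint receive buffer */
};

/* Dynamically rebind the endpoint to a different partner.  After the
 * call the endpoint addresses the new partner only. */
void dp_bind(struct dp_endpoint *ep, int partner_id)
{
    ep->partner_id = partner_id;
}
```

Rebinding overwrites the previous partner, so at any instant the endpoint still names a single communication target, matching the one-partner-per-endpoint rule above.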

The dedicated buffer is named the Token Buffer Pool (TBP) and is a fixed-size memory area. Each TBP is shared by the kernel and the associated process through page remapping, which eliminates the delay caused by copying data from kernel space to user space during a reception event. On a transmission event, however, DP writes the messages directly to the NIC address space without using the TBP. One of the design strategies of DP is the efficient utilization of memory resources: incoming messages to the same endpoint are queued in the TBP as a FIFO linked list, with each segment corresponding to a variable-length message.
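The FIFO queuing of variable-length messages can be sketched as follows. The structure layout and function names are assumptions for illustration; the real TBP is a fixed-size area shared with the kernel through page remapping, not heap-allocated memory.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* One segment of the queue: a variable-length message plus a link to
 * the next segment in FIFO order. */
struct tbp_seg {
    struct tbp_seg *next;
    int             len;     /* message length, variable per segment */
    char            data[];  /* message payload (C99 flexible array) */
};

/* The per-endpoint queue: head is the oldest message, tail the newest. */
struct tbp {
    struct tbp_seg *head, *tail;
};

/* Append an incoming message at the tail of the FIFO list. */
void tbp_enqueue(struct tbp *q, const void *msg, int len)
{
    struct tbp_seg *s = malloc(sizeof *s + len);
    s->next = NULL;
    s->len  = len;
    memcpy(s->data, msg, len);
    if (q->tail)
        q->tail->next = s;
    else
        q->head = s;
    q->tail = s;
}

/* Remove the oldest message into buf; returns its length, or -1 if the
 * queue is empty or buf is too small. */
int tbp_dequeue(struct tbp *q, void *buf, int maxlen)
{
    struct tbp_seg *s = q->head;
    if (!s || s->len > maxlen)
        return -1;
    memcpy(buf, s->data, s->len);
    q->head = s->next;
    if (!q->head)
        q->tail = NULL;
    int n = s->len;
    free(s);
    return n;
}
```

Because each segment records its own length, messages of different sizes share the same queue, and delivery order matches arrival order.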

With the DP messaging protocol, to adhere to the message-passing semantics of sending messages from any arbitrary address and delivering them to any arbitrary address, we need one memory copy operation on the sender side to move the data to a DMA-able memory region, and two memory copies on the receiver side to move the data from the DMA-able memory region to the destination address. The whole data transfer scenario of DP is summarized in Table 2.2 of Section 2.2.
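The three copies of this data path can be modeled with plain buffers; the buffer and function names below are illustrative assumptions, standing in for the NIC's DMA region and the endpoint's TBP.

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-ins for the staging areas on the data path. */
enum { LEN = 8 };
static char dma_region[LEN];   /* DMA-able memory region */
static char tbp[LEN];          /* receiver endpoint's Token Buffer Pool */

/* Move len bytes from an arbitrary source address to an arbitrary
 * destination address, using the three copies of the DP data path. */
void dp_transfer(const char *src, char *dst, int len)
{
    memcpy(dma_region, src, len);  /* copy 1 (sender): src -> DMA region  */
    memcpy(tbp, dma_region, len);  /* copy 2 (receiver): DMA region -> TBP */
    memcpy(dst, tbp, len);         /* copy 3 (receiver): TBP -> dst        */
}
```

The first copy stages the sender's arbitrary source buffer into DMA-able memory; the remaining two land the data in the TBP and then in the receiver's arbitrary destination buffer, which is what lets both endpoints use any address they like.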

