
Linux Kernel Packet Interfaces for Transmission and Reception

Packet ingress and egress in the Linux Kernel are implemented with differing, yet related, objectives.

On ingress, the traditional NIC/interrupt model is modified to incorporate "bulk" processing. If more than a WEIGHT number of packets have arrived and are available to be processed (WEIGHT is specified when the struct napi_struct described below is created), the Kernel "polls" the device (with its interrupts disabled) and processes the packets in bulk using the New API (NAPI), represented by struct napi_struct. Otherwise interrupts remain enabled and each packet is processed on a per-interrupt basis. The objective is to reduce the effect of interrupt latencies under high-traffic conditions.

However, limits are placed on "polling" in one shot, without "deferral" (see below), so that the Kernel does not "hog" processor resources at the expense of its other duties. When that limit is reached (as specified by the configurable BUDGET), or when polling has gone on for a set amount of time (measured in jiffies), the Kernel's "deferred processing" softirq mechanism kicks in (NET_RX_SOFTIRQ, whose "action" is net_rx_action). A per-NAPI "poll" function is registered at device-initialization time (the "poll" member of struct napi_struct) to process packets and pass them up to the higher layers, and it is this function that checks the WEIGHT of the device it is servicing to determine how many packets to process.
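
To make this concrete, here is a minimal, hedged sketch of how a typical driver wires itself into NAPI. The my_adapter structure and the my_hw_* helpers are hypothetical placeholders for driver-specific code; the NAPI calls themselves (netif_napi_add, napi_schedule, napi_gro_receive, napi_complete) are the standard kernel API, though their exact signatures have shifted across kernel versions.

```c
/* Sketch only: my_adapter, my_irq_handler, my_poll and all my_hw_* helpers
 * are hypothetical; the NAPI calls are the real kernel API. */
#include <linux/netdevice.h>
#include <linux/interrupt.h>

#define MY_NAPI_WEIGHT 64              /* the WEIGHT discussed above */

struct my_adapter {
	struct napi_struct napi;
	struct net_device *netdev;
};

/* Interrupt handler: mask the device's RX interrupts and hand the
 * remaining work to NAPI polling (this raises NET_RX_SOFTIRQ). */
static irqreturn_t my_irq_handler(int irq, void *data)
{
	struct my_adapter *adap = data;

	my_hw_disable_rx_irq(adap);        /* hypothetical hardware helper */
	napi_schedule(&adap->napi);
	return IRQ_HANDLED;
}

/* Poll callback: process at most 'budget' packets, then either keep
 * polling (return budget) or complete and re-enable interrupts. */
static int my_poll(struct napi_struct *napi, int budget)
{
	struct my_adapter *adap = container_of(napi, struct my_adapter, napi);
	int done = 0;

	while (done < budget) {
		struct sk_buff *skb = my_hw_next_rx_skb(adap);  /* hypothetical */

		if (!skb)
			break;
		napi_gro_receive(napi, skb);   /* pass the packet up the stack */
		done++;
	}

	if (done < budget) {                   /* ring drained: back to interrupts */
		napi_complete(napi);
		my_hw_enable_rx_irq(adap);     /* hypothetical hardware helper */
	}
	return done;
}

/* At device-initialization time, register the poll function and weight
 * (the argument list of netif_napi_add varies by kernel version). */
static void my_setup_napi(struct my_adapter *adap)
{
	netif_napi_add(adap->netdev, &adap->napi, my_poll, MY_NAPI_WEIGHT);
}
```

The interrupt handler does as little as possible: it masks the device's RX interrupts and schedules the poll, which then runs from softirq context under the WEIGHT and BUDGET limits described above.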

Packet egress is significantly more complex than ingress, since queue management and QoS (perhaps even packet shaping) are implemented on that path. "Queueing disciplines" are used to implement user-specifiable QoS policies (configurable with the "tc" command, for example). struct Qdisc (the queueing discipline) is the central actor, with deferred processing implemented via NET_TX_SOFTIRQ and its action, net_tx_action. The driver-side end of that path is sketched below.
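
On the driver side of the egress path, the qdisc's dequeue (run either inline from dev_queue_xmit or deferred via net_tx_action) ultimately invokes the device's ndo_start_xmit hook. Below is a minimal sketch of such a hook; my_start_xmit and the my_hw_* helpers are hypothetical names, while ndo_start_xmit, netif_stop_queue and the NETDEV_TX_* return codes are the standard kernel interfaces.

```c
/* Sketch only: my_hw_* helpers are hypothetical driver internals. */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	if (my_hw_tx_ring_full(dev)) {         /* hypothetical helper */
		netif_stop_queue(dev);         /* tell the queueing layer to hold off */
		return NETDEV_TX_BUSY;         /* the qdisc will requeue the skb */
	}

	my_hw_post_tx_descriptor(dev, skb);    /* hypothetical DMA handoff */
	return NETDEV_TX_OK;
}

static const struct net_device_ops my_netdev_ops = {
	.ndo_start_xmit = my_start_xmit,
	/* ... other callbacks ... */
};
```

Returning NETDEV_TX_BUSY causes the queueing layer to requeue the skb and retry later, which is how back-pressure from a full hardware ring propagates up into the qdisc.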

I explain Linux Kernel concepts and more in my classes (Advanced Linux Kernel Programming @ UCSC-Extension, and also in other classes that I teach independently). As always, feedback, questions and comments are appreciated and will be responded to.

-Anand


Demystifying the Linux Kernel Socket File System (Sockfs)

All Linux networking begins with a System Call creating a network socket (the socket System Call). The socket System Call returns an integer, the socket descriptor.

"Writing" to or "reading" from that socket descriptor (as though it were a file), using the generic System Calls write and read respectively, creates TCP network traffic rather than file-system writes and reads.

Note: A file-system descriptor would have been created by the "open" System Call if the descriptor were a "regular" file-system descriptor, intended for ordinary file-system writes and reads (via the System Calls write and read respectively) to files and the like.

Further Note: This implies that the network socket descriptor created by the "socket" System Call will be used by the systems programmer to write and read with the same System Calls write and read that are used for "regular" file-system writes and reads (System Calls that would, under ordinary circumstances, move data to and from files).

Further further Note: A "write" System Call (to the descriptor that was created by the socket System Call) must translate "magically" into a TCP transaction that "writes" the data across the network (ostensibly to the peer on the other end), with the data "written" encapsulated within the payload section of a TCP packet.
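
A short user-space example illustrates the point of these notes: the integer returned by socket() is handed to the very same write and read System Calls used on regular files, and the bytes written become the payload of TCP segments. The address 192.0.2.1:80 is only a placeholder, and error handling is kept to a minimum.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
	/* The "socket" System Call returns an ordinary-looking descriptor. */
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) { perror("socket"); return 1; }

	struct sockaddr_in srv = { .sin_family = AF_INET, .sin_port = htons(80) };
	inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr);   /* placeholder address */

	if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
		perror("connect");
		return 1;
	}

	/* The same write/read used on files now produces TCP traffic. */
	const char *req = "GET / HTTP/1.0\r\n\r\n";
	write(fd, req, strlen(req));                  /* becomes TCP payload */

	char buf[512];
	ssize_t n = read(fd, buf, sizeof(buf) - 1);   /* reads TCP data back */
	if (n > 0) {
		buf[n] = '\0';
		printf("%s", buf);
	}

	close(fd);
	return 0;
}
```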

This process of adapting, even hijacking, the kernel's file-system infrastructure to incorporate network/socket operations is called SOCKFS (the socket file system).

So how does the Linux kernel accomplish this, where a file-system write is "faked" into a network-system "write", if indeed it can be called that?

Well, as is usually the case, the Linux kernel's method begins at system/kernel initialization, when a special socket file system (the statically defined sock_fs_type) is "registered" with register_filesystem. This happens in sock_init. In general, file systems are registered so that partitions (or pseudo-filesystems) of that type can later be mounted.

The kernel registers the file system type sock_fs_type so that it can create a fake mount point using kern_mount (for the file system sock_fs_type). This mount point is necessary if the kernel is to later create a "fake file" (a struct file) using the existing, generic mechanisms and infrastructure of the Virtual File System (VFS). That infrastructure assumes, among other things, that a mount point is available.
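
A simplified sketch of that registration, loosely following net/socket.c (the exact fields of sock_fs_type and the mount callback differ across kernel versions, and error handling is omitted here):

```c
/* Condensed sketch of the sockfs registration done in net/socket.c;
 * treat it as illustrative rather than the exact source. */
static struct vfsmount *sock_mnt;

static struct file_system_type sock_fs_type = {
	.name    = "sockfs",
	.kill_sb = kill_anon_super,
	/* mount / init_fs_context callback omitted; it varies by kernel version */
};

static int __init sock_init(void)
{
	/* ... protocol and skb caches are initialized here ... */

	register_filesystem(&sock_fs_type);    /* make "sockfs" known to the VFS */
	sock_mnt = kern_mount(&sock_fs_type);  /* in-kernel, invisible mount point */
	return 0;
}
```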

Note: No "actual" on-disk mount point exists, not in the sense of an inode on a disk partition and so on.

We will blog on file systems later.

Then, when the socket System Call is initiated (to create the socket descriptor), the kernel executes sock_create to create a new socket (a struct socket). The kernel then executes sock_map_fd, which allocates an unused descriptor (the socket descriptor) and creates a "fake file" (a struct file) for the socket. The "fake" file's ops (file->f_op) are initialized to socket_file_ops (statically defined at compile time in net/socket.c).

The kernel binds the socket descriptor to the new "fake" file using fd_install.

This socket descriptor is returned by the socket System Call to the user program (exactly as the socket System Call's man page specifies).
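
Condensed, this is roughly what the socket System Call path in net/socket.c looks like (error handling and flag processing trimmed; treat it as a sketch rather than the exact source):

```c
/* Sketch of the socket() system call: build a struct socket, wrap it in a
 * "fake" struct file bearing socket_file_ops, install it as a descriptor. */
SYSCALL_DEFINE3(socket, int, family, int, type, int, protocol)
{
	struct socket *sock;
	int fd;

	sock_create(family, type, protocol, &sock);  /* allocate the struct socket */
	fd = sock_map_fd(sock, 0);                   /* fake file + fd_install     */
	return fd;                                   /* the socket descriptor      */
}
```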

I call it a "fake" file only because a write System Call executed against that socket descriptor will use the VFS infrastructure just created, but the data will not be written to a disk file anywhere. It will instead be translated into a network operation because of the f_ops assigned to the "fake" file (socket_file_ops).

The kernel is now set up to create network traffic when the System Calls write and read are executed against the "fake" file descriptor (the socket descriptor) that was returned to the user when the socket System Call was executed.

In point of fact, a write System Call on the "fake" file's socket descriptor will translate into a call to __sock_sendmsg within the kernel, instead of a write into the "regular" file system, because that is how socket_file_ops is statically defined before being assigned to the "fake" file.
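
For reference, an abridged view of how socket_file_ops is statically defined in net/socket.c (field names vary by kernel version; older kernels routed writes through sock_aio_write, newer ones through sock_write_iter, but either way the destination is the socket send path rather than a disk file):

```c
/* Abridged sketch of the statically defined socket_file_ops in net/socket.c.
 * Because the "fake" file's f_op points here, write() on a socket descriptor
 * is routed to the socket send path (sock_sendmsg / __sock_sendmsg). */
static const struct file_operations socket_file_ops = {
	.owner          = THIS_MODULE,
	.read_iter      = sock_read_iter,    /* read()  -> socket receive path */
	.write_iter     = sock_write_iter,   /* write() -> socket send path    */
	.poll           = sock_poll,
	.unlocked_ioctl = sock_ioctl,
	.mmap           = sock_mmap,
	.release        = sock_close,
	/* ... remaining callbacks omitted ... */
};
```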

And then we are into networking space, and the promised LAN of milk, honey, TCP traffic, SOCKFS and file systems.

No one said understanding the kernel was easy. But extreme gratification awaits those who work on it, and it also creates enormous opportunities for innovation. I explain Linux Kernel concepts and more in my classes (Advanced Linux Kernel Programming @ UCSC-Extension, and also in other classes that I teach independently).

As always, feedback, questions and comments are appreciated and will be responded to. I would also like to hear gripes, especially if you PayPal me some. Thanks.

-Anand
