
Multipage support for the netback driver + Xen

Here are some of my learnings from my work on multipage support for the netback driver: how we make use of multiple pages for the shared rings, and how netback and netfront communicate.

drivers/xen/xenbus/xenbus_probe.c keeps monitoring the backend and frontend states and calls the appropriate driver's otherend_changed() callback (drivers/net/xen-netfront.c on the frontend side, drivers/net/xen-netback/xenbus.c on the backend side).
xenbus.c:

static DEFINE_XENBUS_DRIVER(netback, ,
	.probe = netback_probe,
	.remove = netback_remove,
	.uevent = netback_uevent,
	.otherend_changed = frontend_changed, /* callback for frontend state changes */
);

xenbus.c: frontend_changed() is the callback invoked whenever the frontend changes state; it drives the backend state machine: frontend_changed() -> set_backend_state() -> backend_connect() -> connect() -> connect_rings()
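Concretely, the dispatch looks roughly like this. This is a simplified sketch of drivers/net/xen-netback/xenbus.c from that era, not the literal code; the shutdown/unregister details are omitted:

static void frontend_changed(struct xenbus_device *dev,
			     enum xenbus_state frontend_state)
{
	struct backend_info *be = dev_get_drvdata(&dev->dev);

	switch (frontend_state) {
	case XenbusStateInitialising:
		set_backend_state(be, XenbusStateInitWait);
		break;
	case XenbusStateInitialised:
	case XenbusStateConnected:
		/* frontend has published its ring refs; connect */
		set_backend_state(be, XenbusStateConnected);
		break;
	case XenbusStateClosing:
		set_backend_state(be, XenbusStateClosing);
		break;
	case XenbusStateClosed:
		set_backend_state(be, XenbusStateClosed);
		break;
	default:
		xenbus_dev_fatal(dev, -EINVAL, "saw state %d at frontend",
				 frontend_state);
		break;
	}
}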

xenbus.c: connect_rings() -> xenvif_connect()

interface.c: xenvif_connect() -> xenvif_map_frontend_rings()

netback.c: xenvif_map_frontend_rings() -> xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif), tmp, ring_ref_count, vaddr)

int xenbus_map_ring_valloc(struct xenbus_device *dev, grant_ref_t *gnt_refs,
			   unsigned int nr_grefs, void **vaddr)
{
	return ring_ops->map(dev, gnt_refs, nr_grefs, vaddr);
}

ring_ops->map is xenbus_map_ring_valloc_pv for PV domains and xenbus_map_ring_valloc_hvm for HVM (auto-translated) domains; the ops table is picked once at init time.
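For reference, the selection logic in drivers/xen/xenbus/xenbus_client.c looks roughly like this (a sketch from memory of that era's code; treat the exact unmap names as assumptions):

static const struct xenbus_ring_ops ring_ops_pv = {
	.map = xenbus_map_ring_valloc_pv,
	.unmap = xenbus_unmap_ring_vfree_pv,
};

static const struct xenbus_ring_ops ring_ops_hvm = {
	.map = xenbus_map_ring_valloc_hvm,
	.unmap = xenbus_unmap_ring_vfree_hvm,
};

void __init xenbus_ring_ops_init(void)
{
	/* PV domains map grants via PTE manipulation; HVM (auto-translated)
	 * domains map them into ballooned pages. */
	if (!xen_feature(XENFEAT_auto_translated_physmap))
		ring_ops = &ring_ops_pv;
	else
		ring_ops = &ring_ops_hvm;
}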

#define XENBUS_MAX_RING_PAGE_ORDER 4
#define XENBUS_MAX_RING_PAGES      (1U << XENBUS_MAX_RING_PAGE_ORDER)

There is no specific architectural reason why the number of shared pages is restricted to 16 (I have confirmed this); the chosen limit just needs to be advertised via xenstore.

Important function, netback.c:

int xenvif_map_frontend_rings(struct xenvif_queue *queue, void **vaddr,
			      unsigned long *ring_ref,
			      unsigned int ring_ref_count)
{
	grant_ref_t tmp[NETBK_MAX_RING_PAGES];
	unsigned int i;

	for (i = 0; i < ring_ref_count; i++)
		tmp[i] = ring_ref[i];

	return xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
				      tmp, ring_ref_count, vaddr);
}
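Where does the ring_ref array come from? connect_rings() reads it back from xenstore. A hedged sketch, with read_ring_refs() as a hypothetical helper and the "tx-ring-ref<N>" key naming an assumption carried over from the multipage patches:

static int read_ring_refs(struct xenbus_device *dev, const char *prefix,
			  unsigned long *ring_ref, unsigned int nr_pages)
{
	char name[32];
	unsigned int i;
	int err;

	for (i = 0; i < nr_pages; i++) {
		/* e.g. "tx-ring-ref0", "tx-ring-ref1", ... */
		snprintf(name, sizeof(name), "%s%u", prefix, i);
		err = xenbus_scanf(XBT_NIL, dev->otherend, name,
				   "%lu", &ring_ref[i]);
		if (err < 0)
			return err;
	}
	return 0;
}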

Frontend side: xennet_connect() -> setup_netfront()

setup_netfront()
{
	allocate the shared rings (one each for tx and rx) with a simple page
	allocation, then call xenbus_grant_ring() for each to populate the
	grant table; the resulting refs are then written to xenstore.
}

xenbus_grant_ring() calls gnttab_grant_foreign_access(dev->otherend_id, virt_to_mfn(addr), 0), which writes a new entry into the guest's grant table. The grant table is memory shared between the guest and the hypervisor, so the entry is written directly: it records the machine frame number (mfn), the domain being granted access, and some flags. The index of this entry is the grant reference handed back to the caller.
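Putting those two steps together, here is a hedged sketch of the tx half, close in spirit to the single-page xen-netfront code of that era; info, dev, and the error path are assumed context, and the multipage variant allocates a higher-order area and grants each page:

	struct xen_netif_tx_sring *txs;
	int err;

	/* allocate one zeroed page for the shared tx ring */
	txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
	if (!txs)
		return -ENOMEM;

	SHARED_RING_INIT(txs);                      /* reset producer/consumer indices */
	FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE); /* attach the frontend half */

	/* grant the backend access; on success the return value is the grant ref */
	err = xenbus_grant_ring(dev, virt_to_mfn(txs));
	if (err < 0)
		goto grant_tx_ring_fail;
	info->tx_ring_ref = err;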
Now, as mentioned above, xenbus_probe detects the frontend state change and informs the backend (the state change signals that the ring buffers are available). Netback then maps these pages into dom0. Initially I was confused about what data the backend is handed to get this mapping done: does the frontend share raw mfns, or something else?

Frontend question: how are the ring buffers allocated in the frontend made known to the netback backend in dom0, so that netback can map them into its own kernel address space? The answer is grant references: the frontend publishes them in xenstore, and the backend reads them back and maps by reference, never by raw mfn.
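The frontend side of that handshake is just a loop of xenbus_printf() calls inside the connection transaction. A hedged sketch (publish_ring_refs() is a hypothetical helper, and the key naming is the same assumption as above):

static int publish_ring_refs(struct xenbus_device *dev,
			     struct xenbus_transaction xbt,
			     grant_ref_t *refs, unsigned int nr_pages)
{
	char name[32];
	unsigned int i;
	int err;

	for (i = 0; i < nr_pages; i++) {
		snprintf(name, sizeof(name), "tx-ring-ref%u", i);
		err = xenbus_printf(xbt, dev->nodename, name, "%u", refs[i]);
		if (err) {
			xenbus_dev_fatal(dev, err, "writing %s", name);
			return err;
		}
	}
	return 0;
}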

Being new to the Xen architecture, I first wondered what this call to xenbus_map_ring_valloc() was really doing: it maps an area allocated in the domU into dom0 via grant refs, backing the dom0 side with a vmalloc'd kernel region. Then I realized it is simply mapping the rings that the netfront driver allocated in the guest into the address space of the netback driver running in dom0.
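Under the hood, the PV map path boils down to one grant-map operation per page and a single hypercall for the whole batch. A conceptual sketch, assuming vaddrs[] already points at a vmalloc'd area set up by alloc_vm_area() (the real code in xenbus_client.c also juggles PTEs and unwinds partial failures):

static int map_frontend_grants(struct xenbus_device *dev,
			       grant_ref_t *gnt_refs, unsigned int nr_grefs,
			       phys_addr_t *vaddrs)
{
	struct gnttab_map_grant_ref op[XENBUS_MAX_RING_PAGES];
	unsigned int i;

	for (i = 0; i < nr_grefs; i++)
		gnttab_set_map_op(&op[i], vaddrs[i], GNTMAP_host_map,
				  gnt_refs[i], dev->otherend_id);

	/* one hypercall maps the whole batch of granted pages */
	if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, op, nr_grefs))
		BUG();

	for (i = 0; i < nr_grefs; i++)
		if (op[i].status != GNTST_okay)
			return op[i].status; /* map failed; real code unwinds */

	return 0;
}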

In this multipage-ring work for the network drivers, what we are doing is simply increasing the ring size: instead of limiting a ring to one 4K page, it can span multiple pages. Concretely, the single ring ref becomes an array:

int tx_ring_ref[XENNET_MAX_RING_PAGES];   /* XENNET_MAX_RING_PAGES = 16 */

union skb_entry {
	struct sk_buff *skb;
	unsigned long link;
} tx_skbs[XENNET_MAX_TX_RING_SIZE];

So essentially we now have a much larger area available to store pending skbs on the transmit path:

#define XENNET_MAX_TX_RING_SIZE XENNET_TX_RING_SIZE(XENNET_MAX_RING_PAGES)

i.e. the number of xen_netif_tx entries that fit in PAGE_SIZE * XENNET_MAX_RING_PAGES (16 pages) of ring space, computed via __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE * XENNET_MAX_RING_PAGES).
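To get a feel for the numbers, here is a small standalone sketch of the arithmetic __CONST_RING_SIZE() performs; the 64-byte header and 12-byte slot size are assumptions for illustration, and the macro rounds the entry count down to a power of two:

#include <stdio.h>

/* Assumed sizes for illustration: the shared ring header (producer/consumer
 * indices plus padding) and one tx slot (union of request/response). */
#define RING_HDR_BYTES 64
#define TX_SLOT_BYTES  12

/* Largest power-of-two entry count fitting in `bytes` of ring space,
 * mirroring what __CONST_RING_SIZE() in xen/interface/io/ring.h computes. */
static unsigned int ring_entries(unsigned int bytes)
{
	unsigned int n = (bytes - RING_HDR_BYTES) / TX_SLOT_BYTES;
	unsigned int p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

int main(void)
{
	printf("1 page  : %u entries\n", ring_entries(4096));      /* 256  */
	printf("16 pages: %u entries\n", ring_entries(16 * 4096)); /* 4096 */
	return 0;
}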

Earlier it was just:

int tx_ring_ref;

union skb_entry {
	struct sk_buff *skb;
	unsigned long link;
} tx_skbs[NET_TX_RING_SIZE];

where NET_TX_RING_SIZE is derived from a single PAGE_SIZE of ring space.

Thanks
Anshul Makkar


