This project updates XenLoop, originally built for the Xen hypervisor on Linux, to a modern
kernel version and makes minor performance enhancements. The most recent
kernel version it was tested with is 5.4.0-42.
The original paper can be found here -> https://kartikgopalan.github.io/publications/wang08xenloop.pdf
The original source can be found here -> http://osnet.cs.binghamton.edu/projects/xenloop.html
The 'baseline_port' branch contains the port to Linux kernel 5.4.0-42 without changing the design.
The 'main' and 'ip_lookup' branches contain the performance improvements described below.
The performance improvement was to use IP address lookups rather than MAC address lookups,
so that outgoing packets can be hooked higher in the networking stack. Rather than hooking at
the Netfilter NF_INET_POST_ROUTING hook like the original XenLoop, packets are intercepted at
the NF_INET_LOCAL_OUT hook, after the route lookup but before neighbor resolution. The
justification for hooking higher in the stack is that the datapath between domUs is shorter,
since a packet has to traverse less of the stack. Experimentally, IP lookup gave roughly 1%
better throughput and latency for almost no overhead. Determining the IP addresses of
co-resident guests is done with another Netfilter hook on incoming ARP traffic. It reuses the
original XenLoop mechanism of storing co-resident domUs' MAC addresses: when an ARP packet
resolves one of those MAC addresses to an IPv4 address, the address is stored in a separate
hash table that is then used for lookups on outgoing packets. This way the module stays above
the link layer and can hook higher in the stack, at the expense of only being able to send
IPv4 traffic through the XenLoop FIFOs.
Future improvements include:
- Fixing a page fault bug that sometimes occurs when the xenloop.ko module is unloaded in a
domU after a connection has been suspended.
- Further performance improvements: packets received over the FIFO currently use the
'netif_rx' function, but since only IP traffic is routed, 'ip_local_deliver' or a similar
function could be used instead, skipping some of the lower parts of the stack. Currently this
is impossible because 'ip_local_deliver' and similar functions are not exported for linking
with kernel modules.
- IPv6 support: NDP packets would have to be hooked in addition to ARP.
The two kernel modules have an additional 'nic' parameter that the original source did not, so the modules
must be loaded differently.
The discovery.ko module uses the 'nic' parameter to specify the device name for the virtual interface that
can send packets to the guest domUs. For example the module can be loaded on the dom0 as follows:
insmod discovery.ko nic=virbr0
The xenloop.ko module uses the 'nic' parameter to specify the device name for the virtual device that can
communicate with the dom0 and other domUs through the standard netback/netfront interface. This device is
needed to send messages to create and suspend connections before the FIFOs are constructed and mapped. For
example the module can be loaded on the domU as follows:
insmod xenloop.ko nic=eth0
The original XenLoop source hardcoded these device names or searched for a device with name 'ethN'.
The above modules were tested with an Ubuntu 18.04 (kernel version 5.4.0-42) dom0 and two domU guests
with Ubuntu 18.04 (kernel version 5.4.0-42).
Additionally, in order to use the xenloop.ko module some XenStore permissions need to be set on the
dom0. After starting up the guests use the 'xl list' command to get the domids of the started guests,
then execute the following command for each domid:
xl xenstore-chmod /local/domain/[DOMID] b
This allows each guest domU to advertise in the XenStore that it has the XenLoop module loaded.
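The permission setup above can be scripted on the dom0. This is a hypothetical snippet: it
assumes 'xl' is on the PATH, the guests are already running, and that 'xl list' prints one
header line followed by Domain-0 and then the guests, with the domid in the second column.

```shell
# Grant each running guest (skipping the header line and Domain-0)
# permission to write its XenLoop advertisement into the XenStore.
for DOMID in $(xl list | awk 'NR > 2 { print $2 }'); do
    xl xenstore-chmod /local/domain/"$DOMID" b
done
```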
Please email Will Merges (wdm7973@rit.edu) with questions about the port or modifications.
The original README that was packaged with XenLoop is pasted below for reference.
/*
* XenLoop -- A High Performance Inter-VM Network Loopback
*
* Installation and Usage instructions
*
* Authors:
* Jian Wang - Binghamton University (jianwang@cs.binghamton.edu)
* Kartik Gopalan - Binghamton University (kartik@cs.binghamton.edu)
*
* Copyright (C) 2007-2009 Kartik Gopalan, Jian Wang
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
* files (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use,
* copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following
* conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
===================
XENLOOP Version 2.0
===================
http://osnet.cs.binghamton.edu/projects/xenloop.html
Contents
========
What is XenLoop?
Where to Get XenLoop?
XenLoop Binary Components
Compatibility
Building XenLoop
Installing XenLoop
Using XenLoop
Testing XenLoop
Some Adjustable Parameters
Some Other Useful Components of XenLoop
Feedback Welcome
=====================================
What is XenLoop?
================
XenLoop is a transparent inter-VM network loopback
mechanism to communicate between Linux guest VMs
that are co-resident on the same physical machine.
XenLoop uses shared memory to bypass the standard
network data path via netback/netfront and Domain 0
and their associated performance overheads.
There is no need to modify the code in Linux guests
or the hypervisor or the networking applications
in order to use XenLoop. XenLoop also transparently
adapts to guest migration without disrupting any
ongoing network communication.
This file contains the installation and usage instructions
for XenLoop. More detailed documentation regarding design
and implementation of XenLoop is available in our
technical report at
http://osnet.cs.binghamton.edu/projects/xenloop.html
Where to Get XenLoop?
=====================
The latest release of XenLoop is available at
http://osnet.cs.binghamton.edu/projects/xenloop.html
XenLoop Binary Components
=========================
XenLoop is designed as two kernel modules
- xenloop.ko
This module is meant to be inserted in each Linux guest
in which you want to enable the inter-VM loopback
functionality.
- discovery.ko
This module is to be inserted in Domain 0, and helps in
discovering and announcing co-resident Linux guests that
have an active XenLoop module.
Xenloop does not require any changes to either
the Linux guest code or the Xen hypervisor code.
Compatibility
=============
This distribution contains sources that are known to work with
the following:
- paravirtualized Linux 2.6.18 and 2.6.18.8 guests
- Xen 3.2.
All Linux guests using XenLoop need to have "netfilter"
enabled in the kernel.
http://www.netfilter.org/
This is standard in most Linux kernel distributions, but
you may want to check.
We have not yet tested XenLoop with later versions of Xen
and Linux, nor have we tested it with HVMs.
Also, interaction of XenLoop with any firewalls
(such as iptables) has not been tested.
We welcome feedback on any porting related issues.
Building XenLoop
================
To compile XenLoop, simply execute
$ make
in the source directory. This should generate the two
modules, xenloop.ko and discovery.ko.
You will need to compile the modules against your specific
Linux kernel versions of both the Linux guest and Domain 0.
Installing XenLoop
==================
Transfer discovery.ko to Domain 0 and do the following:
$ insmod discovery.ko
In each Linux guest where you wish to run xenloop,
transfer xenloop.ko and do the following:
$ insmod xenloop.ko
The order of above operations does not matter.
What matters is that all modules be installed
for everything to work.
Alternatively, you can set up your system to load
the modules at boot time, depending upon your system
configuration. Only make sure that the modules
are loaded **AFTER** the network interfaces are
brought up. This is because XenLoop uses the
network interface to communicate out-of-band
control messages.
Using XenLoop
=============
Once the modules are installed as described above,
you don't have to do anything special to use xenloop,
other than to run your unmodified networking
applications -- TCP/UDP/ICMP/raw sockets etc.
Currently XenLoop forwards only IP traffic via
the inter-VM channel. All other traffic passes
via the standard netfront/netback interface.
This could likely be extended to other protocols
with minor coding effort.
XenLoop modules automatically discover each other's
presence or absence in co-resident guests.
In the former case, they will transparently set up
and use the inter-VM loopback channel. In the latter
case, they will transparently use
the netfront/netback interface.
Any normal network communication between guests and
outside machines remains unaffected.
Running guests can also be migrated without
interrupting applications using XenLoop channels
and without removing the xenloop and discovery
modules. XenLoop modules automatically discover
when a co-resident guest migrates away, or a remote
guest becomes co-resident; correspondingly they
tear down or set up inter-VM loopback channels.
Testing XenLoop
===============
To test whether your installed XenLoop is working,
ping between co-resident guests without the modules and
note the RTT. Then insert the modules
and ping again. The RTT with XenLoop
should be many times smaller than without
XenLoop. We observed a factor of 5 to 6 improvement
in RTT on our dual-core machines.
You can also test XenLoop throughput performance on
your system using standard unmodified benchmarks such
as netperf, lmbench, netpipe-mpich etc.
Some Adjustable Parameters in The Code
======================================
MAX_MAC_NUM:
If you want to install XenLoop on more than 10
guests in one physical machine, you can
adjust the "MAX_MAC_NUM" parameter accordingly
in discovery.h and main.h.
XENLOOP_ENTRY_ORDER:
(This parameter is rendered less useful with
XenLoop release 2.0 onwards)
"XENLOOP_ENTRY_ORDER" in xenloop.h determines
the number of FIFO entries in each direction.
Number of entries = 2 ^ XENLOOP_ENTRY_ORDER
MAX_FIFO_PAGES
"MAX_FIFO_PAGES" in xenfifo.h defines the maximum
number of shared memory pages you can use for
each FIFO. Again this can be changed, but you may
hit a hypervisor-imposed limit at some point.
Feel free to browse the code for other parameters you
may want to tweak, such as periodic discovery
announcement interval from Domain 0, timeout intervals
for control message replies etc.
Some Other Useful Components of XenLoop
========================================
There are a few components in XenLoop that may be
useful in their own right.
- xenfifo.c and xenfifo.h provide a unidirectional
FIFO functionality using inter-domain shared memory.
They implement a lockless producer-consumer interaction,
assuming that one guest is the producer and another
is the consumer. Nothing enforces these roles, though,
and both guests can act as both producers and consumers.
If there can be multiple producer or consumer threads
in a guest, you may want to use your own producer-local
and consumer-local locks, as we do in XenLoop. The FIFO
manipulation primitives, we hope, are self-contained
enough to be used in other contexts with little or no
modification.
- The discovery module could also be used, with
minor modifications, in other contexts where one
wishes to transparently discover other co-resident
domains.
Feedback Welcome
================
Please email the authors at
kartik@cs.binghamton.edu
jianwang@cs.binghamton.edu
We are particularly eager to hear about usage feedback such as
- What applications did you use XenLoop for?
- What kind of performance improvements did you obtain (or not obtain)?
- What problems did you face?
- What improvements can be made?
If you use XenLoop in a product or to publish any
research results, please do let us know as courtesy so that
we can link to your work from our project webpage.
We also welcome any constructive suggestions, bug reports,
bug fixes, feature enhancements, and ports.