For veth to work, one end of the tunnel must be bridged with another interface. Since you want to keep this all virtual, you can bridge the vm1 end of the tunnel (vm2 is the other end of the tunnel) with a tap-type virtual interface, in a bridge called brm. Now you give IP addresses to brm and to vm2 (10.0.0.1 and 10.0.0.2, respectively), enable IPv4 forwarding by means of

echo 1 > /proc/sys/net/ipv4/ip_forward

bring all interfaces up, and add a route instructing the kernel how to reach the addresses in 10.0.0.0/24.

If you want to create more pairs, repeat the steps below with different subnets, for instance 10.0.1.0/24, 10.0.2.0/24, and so on. Since you enabled IPv4 forwarding and added the appropriate routes to the kernel routing table, they will be able to talk to each other right away. Also, remember that most of the commands you may be used to (brctl, ifconfig, ...) are obsolete: the iproute2 suite has commands to do all of this, see below my use of the ip command.

This is a correct sequence of commands for the use of interfaces of type veth. First create all required interfaces (the tap interface and the bridge are needed alongside the veth pair):

ip link add dev vm1 type veth peer name vm2
ip link set dev vm1 up
ip tuntap add tapm mode tap
ip link set dev tapm up
ip link add brm type bridge

Notice we did not bring up brm and vm2, because we still have to assign them IP addresses, but we did bring up tapm and vm1, which is necessary to include them in the bridge brm. Now enslave the interfaces tapm and vm1 to the bridge brm:

ip link set tapm master brm
ip link set vm1 master brm

Now give addresses to the bridge and to the remaining veth interface vm2:

ip addr add 10.0.0.1/24 dev brm
ip addr add 10.0.0.2/24 dev vm2

Now bring vm2 and brm up:

ip link set brm up
ip link set vm2 up

There is no need to add the route to the subnet 10.0.0.0/24 explicitly; it is generated automatically, as you may check with ip route show.

The most useful application of NICs of the veth kind is a network namespace, which is what is used in Linux containers (LXC). You start one called nnsm as follows:

ip netns add nnsm

Then we transfer vm2 to it:

ip link set vm2 netns nnsm

We endow the new network namespace with a lo interface (absolutely necessary):

ip netns exec nnsm ip link set dev lo up

We allow NATting in the main machine:

iptables -t nat -A POSTROUTING -o brm -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

(if you are connected to the Internet via eth0, otherwise change accordingly), and start a shell in the new network namespace:

ip netns exec nnsm xterm &

Now, if you start typing in the new xterm, you will find you are in a separate virtual machine with IP address 10.0.0.2, but you can reach the Internet. The advantage of this is that the new network namespace has its own stack, which means, for instance, you can start a VPN in it while the rest of your PC is not on the VPN. This is the contraption LXCs are based on.

One correction: I made a mistake above, because moving the vm2 interface into the namespace brings it down and clears its address. Thus you need to run these commands from within the xterm:

ip addr add 10.0.0.2/24 dev vm2
ip link set dev vm2 up
ip route add default via 10.0.0.1
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
echo "nameserver 8.8.4.4" >> /etc/resolv.conf

and now you can navigate from within the xterm. The ip commands can also be run before starting the xterm, using the -netns option:

ip -netns nnsm addr add 10.0.0.2/24 dev vm2
ip -netns nnsm link set dev vm2 up
ip -netns nnsm route add default via 10.0.0.1
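Before moving on, it is worth verifying the setup and knowing how to undo it. The following is a minimal sketch under the assumptions of this post (the names nnsm, brm, vm1, vm2 and tapm from above, eth0 as the uplink; 8.8.8.8 is just a convenient external address to ping):

# Verify: reach the namespace end from the host, then the host
# and the Internet from inside the namespace.
ping -c 3 10.0.0.2
ip netns exec nnsm ping -c 3 10.0.0.1
ip netns exec nnsm ping -c 3 8.8.8.8

# The namespace has its own stack: compare these with the host's
# ip addr and ip route output.
ip -netns nnsm addr show
ip -netns nnsm route show

# Tear down: deleting the namespace destroys vm2, and with it its peer vm1.
ip netns del nnsm
ip link del brm
ip link del tapm
iptables -t nat -D POSTROUTING -o brm -j MASQUERADE
iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE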
Docker (and probably any container technology) uses Linux network namespaces in exactly this way, to isolate the container network from the host network.

[Figure: a 5-node bridge setup that I use that works.]

When Docker creates and runs a container, it creates a separate network namespace (the container network) and puts the container into it. Then Docker connects the new container network to the Linux bridge docker0 using a veth pair. This also enables the container to be connected to the host network and to the other container networks on the same bridge.

So let's try to define network namespace, veth pair and Linux bridge in one sentence each: a "Linux network namespace" is a virtual network barrier encapsulating a process to isolate its network connectivity (in/out) and resources (i.e. its own interfaces, addresses, routes and firewall rules); a "veth pair" is a tunnel of two connected virtual Ethernet interfaces, where whatever enters one end comes out the other; and a "Linux bridge" is a virtual layer-2 switch that forwards traffic between the interfaces enslaved to it.
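To watch the same machinery at work in Docker itself, here is a short sketch; it assumes a stock Docker installation with the default bridge network, and the container name web is just an example:

# Start a throwaway container on the default bridge network.
docker run -d --name web nginx

# The host-side ends of the containers' veth pairs are enslaved to docker0,
# exactly like vm1 was enslaved to brm above.
ip link show master docker0

# Docker records where it keeps the container's network namespace.
docker inspect -f '{{.NetworkSettings.SandboxKey}}' web

# The bridge network's view: attached containers and their addresses
# (172.17.0.0/16 by default).
docker network inspect bridge

# Clean up.
docker rm -f web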