Border Routers
We are using 2 hosts for routing Production requests: rb1v.itim.vn and rb2v.itim.vn,
and 1 router for hosts in Production and Dev that require outgoing connections to the Internet: rb3v.itim.vn.
Network Interfaces
rb{1,2} hosts have 2 integrated 1G ports plus 4x1G ports from an installed Intel ET2 quad-port NIC.
All 6 ports are aggregated into a LACP etherchannel (bond0 interface on the server):
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1 eth2 eth3 eth4 eth5
    bond-mode 4
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    bond-lacp-rate fast
    bond-xmit-hash-policy layer3+4
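Once the bond is up, the negotiated LACP state can be verified in /proc/net/bonding/bond0. A minimal sketch of such a check (the sample output in the here-doc is illustrative, not captured from rb1v — on a live router you would read the real file instead):

```shell
# Parse a sample of /proc/net/bonding/bond0 output; on a live router
# replace the here-doc with: sample=$(cat /proc/net/bonding/bond0)
sample=$(cat <<'EOF'
Ethernet Channel Bonding Driver: v3.7.1
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
LACP rate: fast
EOF
)

# Confirm the bond negotiated 802.3ad with the expected hash policy
echo "$sample" | grep -q '802.3ad'  && echo "mode ok"
echo "$sample" | grep -q 'layer3+4' && echo "hash ok"
```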
SMP Affinity
RX/TX Queue tuning
NIC offloads
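The affinity tuning above amounts to pinning each NIC queue's IRQ to its own core. A hedged sketch of the bitmask arithmetic (the queue-to-CPU mapping and the idea of writing to /proc/irq/*/smp_affinity are standard, but the IRQ numbers and queue naming on rb1v/rb2v are not taken from this document):

```shell
# Compute a one-core smp_affinity bitmask per queue: queue i -> CPU i.
# On a live host each mask would be written to
# /proc/irq/$IRQ/smp_affinity for the matching per-queue interrupt.
for i in 0 1 2 3 4 5; do
    mask=$(printf '%x' $((1 << i)))
    echo "queue $i -> cpu mask $mask"
done
```

On the routers themselves the offload toggles would be handled with `ethtool -K` (e.g. disabling LRO/GRO, which hurt routed traffic) and ring sizes with `ethtool -G`; treat the exact flags as site-specific.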
Failover
- rb3v works as the primary default gateway for:
- Core routers – rc*N (it injects default-information originate metric 10 metric-type 1 into OSPF Area 1 (vlan 998); OSPF-based failover)
- Internet-required hosts inside 172.16.16.0/24, Frontend's vlan 999
- rb1v acts as the PRIMARY router for non-Internet-required hosts inside 172.16.16.0/24, Frontend's vlan 999 (keepalived 1.2.12 – VRRP-based failover); see keepalived.conf:
- rb2v acts as the BACKUP router for non-Internet-required hosts inside 172.16.16.0/24, Frontend's vlan 999 (keepalived 1.2.12 – VRRP-based failover); see keepalived.conf:
virtual_ipaddress { 172.16.16.1/24 dev eth0.999 }
Resulting address as shown by ip addr: eth0.999 inet 172.16.16.1/24 scope global secondary eth0.999
- If rb1v goes down, keepalived on rb2v must change its state to MASTER
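The VRRP pair described above can be sketched as the following keepalived.conf fragment for rb1v (the virtual_router_id, priorities, and authentication are illustrative assumptions, not the production values; rb2v would carry state BACKUP and a lower priority):

```
vrrp_instance VI_FRONTEND {
    state MASTER              # BACKUP on rb2v
    interface eth0.999
    virtual_router_id 99      # illustrative value
    priority 150              # e.g. 100 on rb2v
    advert_int 1
    virtual_ipaddress {
        172.16.16.1/24 dev eth0.999
    }
}
```

With this layout, hosts in vlan 999 use 172.16.16.1 as their gateway and never need reconfiguring when the MASTER role moves between rb1v and rb2v.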
Routing
We are using the quagga package (0.99.17-2+squeeze3, BGP/OSPF/RIP routing daemon) from the Debian Squeeze repo for OSPF.
OSPF-based failover schema:
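The default-route injection mentioned above corresponds to an ospfd.conf fragment along these lines (the router-id and network statement are illustrative assumptions; only the default-information line is taken from this document):

```
router ospf
 ospf router-id 172.16.98.3       ! illustrative
 network 172.16.98.0/24 area 1    ! vlan 998, illustrative subnet
 default-information originate metric 10 metric-type 1
```

Because the route is injected as a type-1 external, the core routers fall back to another default origin automatically if rb3v stops advertising.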

Traffic from/to all subnets (excluding lb*v hosts and hosts with external IP addresses) goes via rb1v, so SNAT and
Firewall
All rules are in /etc/itim/firewall/:
- firewall – main rules
- firewall-itim-inet-allowed – rules that mark traffic (in PREROUTING) from local subnets allowed for SNAT (relevant for rb3v)
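On rb3v, the mark-then-SNAT flow described above could look like the following iptables-restore fragment (the mark value, subnet selection, uplink interface eth1, and external address 203.0.113.1 are placeholders, not the production rules):

```
*mangle
:PREROUTING ACCEPT [0:0]
# firewall-itim-inet-allowed: mark traffic from local subnets
# that is allowed to reach the Internet
-A PREROUTING -s 172.16.16.0/24 -j MARK --set-mark 0x1
COMMIT

*nat
:POSTROUTING ACCEPT [0:0]
# SNAT only marked traffic leaving via the uplink (eth1 is an assumption)
-A POSTROUTING -m mark --mark 0x1 -o eth1 -j SNAT --to-source 203.0.113.1
COMMIT
```

Splitting the mark step from the SNAT step keeps the "who may go out" policy in one file (firewall-itim-inet-allowed) while the NAT rule itself stays generic.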
Sysctl
rp_filter must be disabled!
Without it, load-balancing will NOT work!
Also, on rb3v net.nf_conntrack_max is increased and the conntrack table timeouts are tuned:
# Networking tuning
net.ipv4.tcp_orphan_retries = 1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.all.rp_filter = 0
net.core.somaxconn = 65535
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_ecn = 0
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
# core backlog
net.core.netdev_max_backlog = 10000
# conntrack
net.netfilter.nf_conntrack_generic_timeout = 120
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 30
net.netfilter.nf_conntrack_tcp_timeout_established = 300
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 30
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.nf_conntrack_max = 2048576
net.netfilter.nf_conntrack_acct = 0
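As a sanity check on the table size above: each conntrack entry costs roughly 300 bytes of kernel memory (an approximation; the exact size is kernel-version dependent), so nf_conntrack_max = 2048576 implies several hundred MB with a full table. A quick sketch of the arithmetic:

```shell
# Rough memory footprint of a completely full conntrack table.
max=2048576        # net.nf_conntrack_max from the sysctl above
entry_bytes=300    # approximate per-entry cost; kernel-dependent
echo "$((max * entry_bytes / 1024 / 1024)) MB"   # prints "586 MB"
```

This is worth re-checking whenever nf_conntrack_max is raised further, since conntrack memory is not swappable.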