6 related articles were found.
DN42&OneManISP - Troubleshooting OSPF Source Address in a Coexistence Environment
Backstory As mentioned in the previous post of this series, because the VRF solution was too isolating, the DNS service I deployed on the HKG node (172.20.234.225) became inaccessible from the DN42 network. Research indicated this could be achieved by setting up veth or NAT forwarding, but due to the scarcity of available documentation, I ultimately abandoned the VRF approach. Structure Analysis This time, I planned to place both DN42 and clearnet BGP routes into the system's main routing table, then separate them for export using filters to distinguish which should be exported. For clarity, I stored the configuration for the DN42 part and the clearnet part (hereinafter referred to as inet) separately, and then included them from the main configuration file. Also, since there should ideally only be one kernel configuration per routing table, I merged the DN42 and inet kernel parts, keeping only one instance. After multiple optimizations and revisions, my final directory structure is as follows: /etc/bird/ ├─envvars ├─bird.conf: Main Bird config file, defines basic info (ASN, IP, etc.), includes sub-configs below ├─kernel.conf: Kernel config, imports routes into the system routing table ├─dn42 | ├─defs.conf: DN42 function definitions, e.g., is_self_dn42_net() | ├─ibgp.conf: DN42 iBGP template | ├─rpki.conf: DN42 RPKI route validation | ├─ospf.conf: DN42 OSPF internal network | ├─static.conf: DN42 static routes | ├─ebgp.conf: DN42 Peer template | ├─ibgp | | └<ibgp configs>: DN42 iBGP configs for each node | ├─ospf | | └backbone.conf: OSPF area | ├─peers | | └<ibgp configs>: DN42 Peer configs for each node ├─inet | ├─peer.conf: Clearnet Peer | ├─ixp.conf: Clearnet IXP connection | ├─defs.conf: Clearnet function definitions, e.g., is_self_inet_v6() | ├─upstream.conf: Clearnet upstream | └static.conf: Clearnet static routes I separated the function definitions because I needed to reference them in the filters within kernel.conf, so I isolated them for early inclusion. After filling in the respective configurations and setting up the include relationships, I ran birdc configure and it started successfully. So, case closed... right? Problems occurred After running for a while, I suddenly found that I couldn't ping the HKG node from my internal devices, nor could I ping my other internal nodes from the HKG node. Strangely, external ASes could ping my other nodes or other external ASes through my HKG node, and my internal nodes could also ping other non-directly connected nodes (e.g., 226(NKG)->225(HKG)->229(LAX)) via the HKG node. Using ip route get <other internal node address> revealed: root@iYoRoyNetworkHKG:/etc/bird# ip route get 172.20.234.226 172.20.234.226 via 172.20.234.226 dev dn42_nkg src 23.149.120.51 uid 0 cache See the problem? The src address should have been the HKG node's own DN42 address (configured on the OSPF stub interface), but here it showed the HKG node's clearnet address instead. Attempting to read the route learned by Bird using birdc s r for 172.20.234.226: root@iYoRoyNetworkHKGBGP:/etc/bird/dn42/ospf# birdc s r for 172.20.234.226 BIRD 2.17.1 ready. Table master4: 172.20.234.226/32 unicast [dn42_ospf_iyoroynet_v4 00:30:29.307] * I (150/50) [172.20.234.226] via 172.20.234.226 on dn42_nkg onlink Looks seemingly normal...? 
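Before blaming Bird, it can help to compare the route Bird computed with what actually landed in the kernel. A minimal check (a sketch; the exact output will vary with your setup) is to list the installed route and look for a prefsrc/src attribute:

```
# Show the /32 route Bird installed into the kernel's main table
ip route show 172.20.234.226/32
# If a preferred source was attached, the entry carries "src <DN42 address>";
# without it the kernel performs its own source-address selection, which is
# exactly what the krt_prefsrc discussion below is about.
```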
Theoretically, although the DN42 source IP is different from the usual, DN42 rewrites krt_prefsrc when exporting to the kernel to inform the kernel of the correct source address, so this issue shouldn't occur: protocol kernel kernel_v4{ ipv4 { import none; export filter { if source = RTS_STATIC then reject; + if is_valid_dn42_network() then krt_prefsrc = DN42_OWNIP; accept; }; }; } protocol kernel kernel_v6 { ipv6 { import none; export filter { if source = RTS_STATIC then reject; + if is_valid_dn42_network_v6() then krt_prefsrc = DN42_OWNIPv6; accept; }; }; } Regarding krt_prefsrc, it stands for Kernel Route Preferred Source. This attribute doesn't manipulate the route directly but instead attaches a piece of metadata to it. This metadata directly instructs the Linux kernel to prioritize the specified IP address as the source address for packets sent via this route. I was stuck on this for a long time. The Solution Finally, during an unintentional attempt, I added the krt_prefsrc rewrite to the OSPF import configuration as well: protocol ospf v3 dn42_ospf_iyoroynet_v4 { router id DN42_OWNIP; ipv4 { - import where is_self_dn42_net() && source != RTS_BGP; + import filter { + if is_self_dn42_net() && source != RTS_BGP then { + krt_prefsrc=DN42_OWNIP; + accept; + } + reject; + }; export where is_self_dn42_net() && source != RTS_BGP; }; include "ospf/*"; }; protocol ospf v3 dn42_ospf_iyoroynet_v6 { router id DN42_OWNIP; ipv6 { - import where is_self_dn42_net_v6() && source != RTS_BGP; + import filter { + if is_self_dn42_net_v6() && source != RTS_BGP then { + krt_prefsrc=DN42_OWNIPv6; + accept; + } + reject; + }; export where is_self_dn42_net_v6() && source != RTS_BGP; }; include "ospf/*"; }; After running this, the src address became correct, and mutual pinging worked. Configuration files for reference: KaguraiYoRoy/Bird2-Configuration
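For completeness, re-running the earlier checks after the reload should now show the DN42 address as the preferred source; a quick sketch using the addresses from this article:

```
birdc configure                 # reload the updated OSPF import filters
ip route get 172.20.234.226     # src should now be 172.20.234.225 (DN42_OWNIP)
ping -c 3 172.20.234.226        # internal nodes should answer again
```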
29/10/2025 · 59 Views · 0 Comments · 0 Stars
DN42&OneManISP - Using VRF to Run Clearnet BGP and DN42 on the Same Machine
Background Currently, clearnet BGP and DN42 each use a separate VPS in the same region, meaning two machines are required per region. After learning about VRF from a group member, I explored using VRF to enable a single machine to handle both clearnet BGP and DN42 simultaneously. Note: Due to its isolation nature, the VRF solution will prevent DN42 from accessing services on the host. If you need to run services (like DNS) on the server for DN42, you might need additional port forwarding or veth configuration, which is beyond the scope of this article. (This is also the reason why I ultimately did not adopt VRF in my production environment). Advantages of VRF Although DN42 uses private IP ranges and internal ASNs, which theoretically shouldn't interfere with clearnet BGP, sharing the same routing table can lead to issues like route pollution and management complexity. VRF (Virtual Routing and Forwarding) allows creating multiple routing tables on a single machine. This means we can isolate DN42 routes into a separate routing table, keeping them apart from the clearnet routing table. The advantages include: Absolute Security and Policy Isolation: The DN42 routing table is isolated from the clearnet routing table, fundamentally preventing route leaks. Clear Operation and Management: Use commands like birdc show route table t_dn42 and birdc show route table t_inet to view and debug two completely independent routing tables, making things clear at a glance. Fault Domain Isolation: If a DN42 peer flaps, the impact is confined to the dn42 routing table. It won't consume routing computation resources for the clearnet instance nor affect clearnet forwarding performance. Alignment with Modern Network Design Principles: Using VRF for different routing domains (production, testing, customer, partner) is standard practice in modern network engineering. It logically divides your device into multiple virtual routers. Configuration System Part Creating the VRF Interface Use the following commands to create a VRF device named dn42-vrf and associate it with the system's routing table number 1042: ip link add dn42-vrf type vrf table 1042 ip link set dev dn42-vrf up # Enable it You can change the routing table number according to your preference, but avoid the following reserved routing table IDs: Name ID Description unspec 0 Unspecified, rarely used main 254 Main routing table, where most ordinary routes reside default 253 Generally unused, reserved local 255 Local routing table, contains 127.0.0.1/8, local IPs, broadcast addresses, etc. Cannot be modified Associating Existing Network Interfaces with VRF In my current DN42 setup, several WireGuard interfaces and a dummy interface are used for DN42. Therefore, associate these interfaces with the VRF: ip link set dev <interface_name> master dn42-vrf Note: After associating an interface with a VRF, it might lose its IP addresses. 
Therefore, you need to re-add the addresses, for example: ip addr add 172.20.234.225 dev dn42 After completion, ip a should show the corresponding interface's master as dn42-vrf: 156: dn42: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master dn42-vrf state UNKNOWN group default qlen 1000 link/ether b6:f5:28:ed:23:04 brd ff:ff:ff:ff:ff:ff inet 172.20.234.225/32 scope global dn42 valid_lft forever preferred_lft forever inet6 fd18:3e15:61d0::1/128 scope global valid_lft forever preferred_lft forever inet6 fe80::b4f5:28ff:feed:2304/64 scope link valid_lft forever preferred_lft forever Persistence I use ifupdown2 to automatically load the dummy interface and VRF device on boot. auto dn42-vrf iface dn42-vrf inet manual vrf-table 1042 auto dn42 iface dn42 inet static pre-up ip link add $IFACE type dummy || true vrf dn42-vrf address <IPv4 Address>/32 address <IPv6 Address>/128 post-down ip link del $IFACE My dummy interface is named dn42; modify accordingly if yours is different. After creation, use ifup dn42-vrf && ifup dn42 to start the dummy interface. Note: The number prefix for the VRF device file should be smaller than that of the dummy interface file, ensuring the VRF device starts first. WireGuard Tunnels Add PostUp commands to associate them with the VRF and re-add their addresses. Example: [Interface] PrivateKey = [Data Redacted] ListenPort = [Data Redacted] Table = off Address = fe80::2024/64 + PostUp = ip link set dev %i master dn42-vrf + PostUp = ip addr add fe80::2024/64 dev %i PostUp = sysctl -w net.ipv6.conf.%i.autoconf=0 [Peer] PublicKey = [Data Redacted] Endpoint = [Data Redacted] AllowedIPs = 10.0.0.0/8, 172.20.0.0/14, 172.31.0.0/16, fd00::/8, fe00::/8 Then restart the tunnel. Bird2 Part First, define two routing tables for DN42's IPv4 and IPv6: ipv4 table dn42_table_v4; ipv6 table dn42_table_v6; Then, specify the VRF and system routing table number in the kernel protocol, and specify the previously created v4/v6 routing tables in the IPv4/IPv6 sections: protocol kernel dn42_kernel_v6{ + vrf "dn42-vrf"; + kernel table 1042; scan time 20; ipv6 { + table dn42_table_v6; import none; export filter { if source = RTS_STATIC then reject; krt_prefsrc = DN42_OWNIPv6; accept; }; }; }; protocol kernel dn42_kernel_v4{ + vrf "dn42-vrf"; + kernel table 1042; scan time 20; ipv4 { + table dn42_table_v4; import none; export filter { if source = RTS_STATIC then reject; krt_prefsrc = DN42_OWNIP; accept; }; }; } For protocols other than kernel, add the VRF and the independent IPv4/IPv6 tables, but do not specify the system routing table number: protocol static dn42_static_v4{ + vrf "dn42-vrf"; route DN42_OWNNET reject; ipv4 { + table dn42_table_v4; import all; export none; }; } protocol static dn42_static_v6{ + vrf "dn42-vrf"; route DN42_OWNNETv6 reject; ipv6 { + table dn42_table_v6; import all; export none; }; } In summary: Configure a VRF and the previously defined routing tables for everything related to DN42. Only the kernel protocol needs the system routing table number specified; others do not. Apply the same method to BGP, OSPF, etc. 
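Before moving on to Bird, it is worth sanity-checking the VRF wiring itself. A minimal sketch, assuming the dn42-vrf name, table 1042 and the dummy interface dn42 used above:

```
ip vrf show                        # should list dn42-vrf with table 1042
ip -br link show master dn42-vrf   # dummy + WireGuard interfaces enslaved to the VRF
ip -br addr show dev dn42          # confirm the dummy kept (or got back) its addresses
ip route show table 1042           # routes that live in the VRF's own table
```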
However, I chose to use separate Router IDs for the clearnet and DN42, so a separate Router ID needs to be configured: # /etc/bird/dn42/ospf.conf protocol ospf v3 dn42_ospf_iyoroynet_v4 { + vrf "dn42-vrf"; + router id DN42_OWNIP; ipv4 { + table dn42_table_v4; import where is_self_dn42_net() && source != RTS_BGP; export where is_self_dn42_net() && source != RTS_BGP; }; include "ospf/*"; }; protocol ospf v3 dn42_ospf_iyoroynet_v6 { + vrf "dn42-vrf"; + router id DN42_OWNIP; ipv6 { + table dn42_table_v6; import where is_self_dn42_net_v6() && source != RTS_BGP; export where is_self_dn42_net_v6() && source != RTS_BGP; }; include "ospf/*"; }; # /etc/bird/dn42/ebgp.conf ... template bgp dnpeers { + vrf "dn42-vrf"; + router id DN42_OWNIP; local as DN42_OWNAS; path metric 1; ipv4 { + table dn42_table_v4; ... }; ipv6 { + table dn42_table_v6; ... }; } include "peers/*"; After completion, reload the configuration with birdc c. Now, we can view the DN42 routing table separately using ip route show vrf dn42-vrf: root@iYoRoyNetworkHKGBGP:~# ip route show vrf dn42-vrf 10.26.0.0/16 via inet6 fe80::ade0 dev dn42_4242423914 proto bird src 172.20.234.225 metric 32 10.29.0.0/16 via inet6 fe80::ade0 dev dn42_4242423914 proto bird src 172.20.234.225 metric 32 10.37.0.0/16 via inet6 fe80::ade0 dev dn42_4242423914 proto bird src 172.20.234.225 metric 32 ... You can also ping through the VRF using the -I dn42-vrf parameter: root@iYoRoyNetworkHKGBGP:~# ping 172.20.0.53 -I dn42-vrf ping: Warning: source address might be selected on device other than: dn42-vrf PING 172.20.0.53 (172.20.0.53) from 172.20.234.225 dn42-vrf: 56(84) bytes of data. 64 bytes from 172.20.0.53: icmp_seq=1 ttl=64 time=3.18 ms 64 bytes from 172.20.0.53: icmp_seq=2 ttl=64 time=3.57 ms 64 bytes from 172.20.0.53: icmp_seq=3 ttl=64 time=3.74 ms 64 bytes from 172.20.0.53: icmp_seq=4 ttl=64 time=2.86 ms ^C --- 172.20.0.53 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3006ms rtt min/avg/max/mdev = 2.863/3.337/3.740/0.341 ms Important Notes If the VRF device is reloaded, all devices originally associated with the VRF need to be reloaded as well, otherwise they won't function correctly. Currently, DN42 cannot access services inside the host configured with VRF. A future article might explain how to allow traffic within the VRF to access host services (Adding to the TODO list). Reference Articles:: Run your MPLS network with BIRD
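Besides ping -I, iproute2 can run an arbitrary command with its sockets bound to the VRF, which is handy for traceroute or DNS lookups while debugging. A sketch, assuming the dn42-vrf device from above (the queried addresses and names are just examples):

```
# Execute a command inside the VRF routing domain
ip vrf exec dn42-vrf traceroute 172.20.0.53
ip vrf exec dn42-vrf dig @172.20.0.53 burble.dn42 +short
```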
16/09/2025 · 86 Views · 0 Comments · 0 Stars
DN42 - Ep.4 Configuring BGP Communities
Foreword I am a novice in BGP. This article may contain imprecise content/naive understandings/elementary mistakes. I kindly ask the experts to be lenient. If you find any issues, you are welcome to contact me via email, and I will correct them as soon as possible. If you find this unacceptable, it is recommended to close this article now. What are BGP Communities? TL;DR: BGP Communities "tag" routes, allowing others to use these tags for route optimization. This concept might be unfamiliar to newcomers (like me), who might not initially understand its purpose. Simply put, BGP Communities are a mechanism for tagging routes, similar to adding labels. They allow network administrators to attach one or more "tags" (i.e., community values) to routes propagated via BGP. These tags themselves do not alter the route's path attributes (like AS_PATH, LOCAL_PREF, MED, etc.), but they provide a signaling mechanism to indicate what policy or processing should be applied to that route by other routers within the same AS or in downstream peer ASes. BGP Communities can be used to: Simplify Policy Configuration: Routers inside the network or in downstream ASes only need to configure policies based on community values (like setting LOCAL_PREF, adding NO_EXPORT, applying route-maps, etc.), without needing to know the specific prefix details. This makes policies more centralized, easier to manage, and less prone to errors. Convey Policy Intent to Downstream ASes: An AS can attach community values to routes it advertises to its downstream customer or peer ASes. These communities convey requirements or suggestions on how these routes should be handled, such as route optimization based on geographic location, latency, or bandwidth. Coordinate Policies within an AS: Inside a large AS, when using IBGP full-mesh or route reflectors, edge routers (receiving EBGP routes or redistributing routes) can tag routes with community values. Core routers or route reflectors within the AS can recognize these communities and apply corresponding internal policies (like setting LOCAL_PREF, MED, deciding whether to advertise to certain IBGP peers, adding other communities, etc.), without needing complex prefix-based policies on every internal router. DN42 has its own set of Communities specifications. For details, please refer to: BGP-communities - DN42 Wiki Configuration The general idea and approach in this article are largely based on Xe_iu's method, focusing on adding BGP Communities for geographic information and performing route optimization. Generally, this is sufficient. (Another reason is that I haven't fully grasped the others yet) Note: We should ONLY add geographic information-related BGP Communities to routes originating from our own AS. We should not add such entries to routes received from neighbors. Adding our own regional Communities to a neighbor's routes constitutes forging the route origin, potentially leading to route hijacking. Downstream networks might misjudge the traffic path, routing traffic that should be direct through your network, increasing latency and consuming your network's bandwidth. (Large Communities are an exception, as they have a verification mechanism to prevent this, but that's beyond the scope of this article). The idea is clear. When exporting routes, we first need to verify if it's our own route. If it is, we add the communities tag to the route. The sample configuration provided by DN42 already includes two functions, is_self_net() and is_self_net_v6(), to check if a route is our own. 
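For reference, in the DN42 sample configuration these helpers boil down to a prefix match against your own allocation; a rough sketch using my prefixes (adjust OWNNETSET/OWNNETSETv6 to your own blocks):

```
define OWNNETSET = [172.20.234.224/28+];
define OWNNETSETv6 = [fd18:3e15:61d0::/48+];

function is_self_net() {
  # True only for prefixes inside our own IPv4 allocation
  return net ~ OWNNETSET;
}

function is_self_net_v6() {
  # True only for prefixes inside our own IPv6 allocation
  return net ~ OWNNETSETv6;
}
```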
Therefore, writing the configuration part is straightforward. Adding Communities to Routes First, we need to define the geographic region information for the current node at the beginning of the node configuration file. Please check BGP-communities - DN42 Wiki for details: define DN42_REGION = 52; # 52 represents East Asia define DN42_COUNTRY= 1344; # 1344 represents Hong Kong Please modify these values according to your node's actual geographic location. Then, modify the export filter in the dnpeers template: }; export filter { - if is_valid_network() && source ~ [RTS_STATIC, RTS_BGP] then accept; + if is_valid_network() && source ~ [RTS_STATIC, RTS_BGP] then{ + if (is_self_net()) then { # Check if it's our own route + bgp_community.add((64511, DN42_REGION)); # Add continent-level region info + bgp_community.add((64511, DN42_COUNTRY)); # Add country/region info + } + accept; + } reject; }; import limit 1000 action block; During export, it checks if it's our own route. If it is, it sets the bgp_community according to the defined DN42_REGION and DN42_COUNTRY. Here, 64511 is the reserved public AS number identifier specifically for geographic tags (Region/Country); you can just copy it. Apply the same method to the IPv6 export rules, but replace is_self_net() with is_self_net_v6(). Route Optimization Based on Communities Here we need to introduce another concept: local_pref (Local Preference). It is used within an AS to indicate the priority of a route. Its default value is 100, and a higher value indicates higher priority. Furthermore, in BGP route selection logic, local_pref has the highest priority, even higher than AS_PATH length. This means that by setting local_pref, we can adjust the priority of routes to achieve route optimization. Combining this with the BGP Communities mentioned above, we can set the corresponding local_pref based on Communities for optimization. Also, since BGP.local_pref is propagated within the AS, we need to modify the import logic for both eBGP and iBGP routes. My logic for handling BGP.local_pref here references (basically copies) Xe_iu's approach: For routes from the same region, priority +10 For routes from the same country, priority +5 additionally For routes received via direct peering with us, priority +20 additionally Create a function to calculate the priority: function ebgp_calculate_priority() { int priority = 100; # Base priority # Same region detection (+10) if bgp_community ~ [(64511, DN42_REGION)] then priority = priority + 10; # Same country detection (+5) if bgp_community ~ [(64511, DN42_COUNTRY)] then priority = priority + 5; # Direct eBGP neighbor detection (+20) if bgp_path.len = 1 then priority = priority + 20; return priority; } Then, in the import filter within the dnpeers template, set bgp_local_pref to the value calculated by the function: template bgp dnpeers { local as OWNAS; path metric 1; ipv4 { import filter { if is_valid_network() && !is_self_net() then { if (roa_check(dn42_roa, net, bgp_path.last) != ROA_VALID) then { print "[dn42] ROA check failed for ", net, " ASN ", bgp_path.last; reject; } + bgp_local_pref = ebgp_calculate_priority(); accept; } reject; }; Apply the same method for IPv6. After running birdc configure, we should be able to see that our routes have been tagged with Communities labels at our neighbors: (Screenshot source:https://lg.milu.moe/route_all/hk/172.20.234.224) Special thanks to Nuro Trace and Xe_iu. They helped deepen my understanding of BGP Communities and provided much assistance. 
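To confirm both the tagging and the preference calculation, dumping a route with all of its attributes is usually enough; a sketch using birdc and an address from my allocation:

```
# Show a route together with all attributes, including BGP.community and BGP.local_pref
birdc show route for 172.20.234.225 all
# On routes imported from neighbors, look for their (64511, <region>) and
# (64511, <country>) tags plus the local_pref set by ebgp_calculate_priority().
```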
Reference Articles: [DN42] bird2 Configuration Files – Xe_iu's Blog | Xe_iu的杂物间 [DN42] A Discussion on How to Configure BGP Communities – Xe_iu's Blog | Xe_iu的杂物间 BGP-communities - DN42 Wiki
17/08/2025 · 70 Views · 0 Comments · 1 Star
DN42 - Ep.3 Registering a Domain and Setting Up Authoritative DNS in DN42
Foreword I am a novice in BGP. This article may contain imprecise content/naive understandings/elementary mistakes. If you find any issues, you are welcome to contact me via email, and I will correct them as soon as possible. If you find this unacceptable, it is recommended to close this article now. Assumption: You have already joined DN42, can normally send and receive routing tables, and can access IPs within DN42. Motivation While debugging the network, I noticed that pinging or tracerouting others' DN42 IPs could display the reverse-resolved domain names, making it clear which nodes the route passed through, rather than just looking at IPs (as shown in the figure below). It's very intuitive, letting others see at a glance if you've taken a detour. Therefore, I decided to register my own DN42 domain and set up an authoritative DNS service. After reading Lantian's article, I saw he used a PowerDNS + MySQL master-slave synchronization solution. However, my server has limited performance (only 1 core, 1GB RAM), so I plan to use KnotDNS as the DNS server, utilizing the standard zone transfer protocol (AXFR/IXFR) for master-slave synchronization. Preparations {alert type="warning"} The domain names and IPs mentioned in this and subsequent chapters are my own. Please replace them with your own during actual deployment; values enclosed in angle brackets need to be changed according to your requirements. {/alert} I chose the domain: yori.dn42, and plan to deploy DNS servers on three machines: 172.20.234.225, fd18:3e15:61d0::1, ns1.yori.dn42 172.20.234.227, fd18:3e15:61d0::3, ns2.yori.dn42 172.20.234.229, fd18:3e15:61d0::5, ns3.yori.dn42 Among them, ns1.yori.dn42 will be the master node, and ns2, ns3 will be the slave nodes. Installing KnotDNS If port 53 on the system is occupied by a process like systemd-resolved, disable it first: systemctl stop systemd-resolved systemctl disable systemd-resolved unlink /etc/resolv.conf echo "nameserver 8.8.8.8" > /etc/resolv.conf I am using Debian 12, so I'll use APT for installation: apt install knot knot-dnsutils -y Set KnotDNS to start automatically: systemctl enable knot Configuring KnotDNS Creating a Key First, create a key for synchronization: keymgr -t key_knsupdate Copy the output: # hmac-sha256:key_knsupdate:<your secret> key: - id: key_knsupdate algorithm: hmac-sha256 secret: <your secret> Editing the Configuration File Master Node Edit /etc/knot/knot.conf and fill in the following content: server: rundir: "/run/knot" user: knot:knot automatic-acl: on listen: [ <listen_address1>@53, <listen_address2>@53, ...
] log: - target: syslog any: info database: storage: "/var/lib/knot" ### Paste the Key generated in the previous step here # hmac-sha256:key_knsupdate:<your secret> key: - id: key_knsupdate algorithm: hmac-sha256 secret: <your secret> remote: - id: <DNS_Node_1_ID> address: <DNS_Node_1_IP>@53 - id: <DNS_Node_2_ID> address: <DNS_Node_2_IP>@53 - id: <DNS_Node_3_ID> address: <DNS_Node_3_IP>@53 acl: - id: acl_slave key: key_knsupdate action: transfer - id: acl_master key: key_knsupdate action: notify - id: acl_knsupdate key: key_knsupdate action: update template: - id: default storage: "/var/lib/knot" file: "%s.zone" zone: - domain: <DN42 Domain> notify: [ <Slave_Node_1_ID>, <Slave_Node_2_ID> ] acl: [ acl_slave, acl_knsupdate ] - domain: <IPv4 Reverse Lookup Domain> notify: [ <Slave_Node_1_ID>, <Slave_Node_2_ID> ] acl: [ acl_slave, acl_knsupdate ] - domain: <IPv6 Reverse Lookup Domain> notify: [ <Slave_Node_1_ID>, <Slave_Node_2_ID> ] acl: [ acl_slave, acl_knsupdate ] The listen addresses should include the machine's DN42 IPv4 and DN42 IPv6 addresses. For local debugging, you can add internal IPs like 127.0.0.1 and ::1. The Slave Node IDs are the IDs set in the remote section for the servers you designated as slave nodes. The address in remote can be an internal address, DN42 IPv4, or DN42 IPv6, used only for master-slave synchronization. If using an internal address, add it to the listen list. The template section sets the Zone file storage location to /var/lib/knot. The IPv4 Reverse Lookup Domain should follow the format specified in RFC 2317 based on your allocated IPv4 block. For example, my IPv4 block is 172.20.234.224/28, so my IPv4 reverse lookup domain should be 224/28.234.20.172.in-addr.arpa. This treats the last octet 224/28 as a whole, reverses the order of the remaining parts, and appends .in-addr.arpa. The IPv6 Reverse Lookup Domain should follow the format specified in RFC 3152 based on your allocated IPv6 block. For example, my IPv6 block is fd18:3e15:61d0::/48, so my IPv6 reverse lookup domain should be 0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa. This involves reversing the order of the nibbles in the network prefix (excluding the /48) and appending .ip6.arpa. Pad with zeros if necessary. {collapse} {collapse-item label="Example"} server: rundir: "/run/knot" user: knot:knot automatic-acl: on listen: [ 172.20.234.225@53, fd18:3e15:61d0::1@53, localhost@53, 127.0.0.2@53 ] log: - target: syslog any: info database: storage: "/var/lib/knot" # hmac-sha256:key_knsupdate:<key> key: - id: key_knsupdate algorithm: hmac-sha256 secret: <key> remote: - id: 225 # Master Node address: 172.20.234.225@53 - id: 227 # Slave Node address: 172.20.234.227@53 - id: 229 # Slave Node address: 172.20.234.229@53 acl: - id: acl_slave key: key_knsupdate action: transfer - id: acl_master key: key_knsupdate action: notify - id: acl_knsupdate key: key_knsupdate action: update template: - id: default storage: "/var/lib/knot" file: "%s.zone" zone: - domain: yori.dn42 notify: [ 227, 229 ] acl: [ acl_slave, acl_knsupdate ] - domain: 224/28.234.20.172.in-addr.arpa notify: [ 227, 229 ] acl: [ acl_slave, acl_knsupdate ] - domain: 0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa notify: [ 227, 229 ] acl: [ acl_slave, acl_knsupdate ] {/collapse-item} {/collapse} Slave Nodes The configuration for slave nodes is largely similar to the master node. 
Just change the listen addresses to the slave node's addresses and modify the zone section configuration as follows: --- a/knot.conf +++ b/knot.conf zone: - domain: <DN42 Domain> - notify: [ <Slave_Node_1_ID>, <Slave_Node_2_ID> ] - acl: [ acl_slave, acl_knsupdate ] + master: <Master_Node_ID> + zonefile-load: whole + acl: acl_master - domain: <IPv4 Reverse Lookup Domain> - notify: [ <Slave_Node_1_ID>, <Slave_Node_2_ID> ] - acl: [ acl_slave, acl_knsupdate ] + master: <Master_Node_ID> + zonefile-load: whole + acl: acl_master - domain: <IPv6 Reverse Lookup Domain> - notify: [ <Slave_Node_1_ID>, <Slave_Node_2_ID> ] - acl: [ acl_slave, acl_knsupdate ] + master: <Master_Node_ID> + zonefile-load: whole + acl: acl_master The Master Node ID is the ID set in the remote section for the server you designated as the master node. {collapse} {collapse-item label="Example"} server: rundir: "/run/knot" user: knot:knot automatic-acl: on listen: [ 172.20.234.227@53, fd18:3e15:61d0::3@53, localhost@53, 127.0.0.1@53 ] log: - target: syslog any: info database: storage: "/var/lib/knot" # hmac-sha256:key_knsupdate:<key> key: - id: key_knsupdate algorithm: hmac-sha256 secret: <key> remote: - id: 225 address: 172.20.234.225@53 - id: 227 address: 172.20.234.227@53 - id: 229 address: 172.20.234.229@53 acl: - id: acl_slave key: key_knsupdate action: transfer - id: acl_master key: key_knsupdate action: notify - id: acl_knsupdate key: key_knsupdate action: update template: - id: default storage: "/var/lib/knot" file: "%s.zone" zone: - domain: yori.dn42 master: 225 zonefile-load: whole acl: acl_master - domain: 224/28.234.20.172.in-addr.arpa master: 225 zonefile-load: whole acl: acl_master - domain: 0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa master: 225 zonefile-load: whole acl: acl_master {/collapse-item} {/collapse} After writing the configuration file, run the following command to restart KnotDNS: systemctl restart knot Editing Zone Files All configurations in this section are done on the primary DNS server. For record values (not hostnames) that require domain names, unless otherwise specified, please follow the RFC 1034 specification and use FQDN format. DN42 Domain Navigate to /var/lib/knot and create a file named <dn42_domain>.zone. SOA Record The first record of the zone must be the SOA record. The SOA record is the Start of Authority record, containing basic information about the domain, such as the primary NS server address. Fill in the following content: @ <TTL> SOA <Primary_NS_Server_Address> <Contact_Email> <Serial_Number> <Refresh_Time> <Retry_Time> <Expire_Time> <Minimum_TTL> @ represents the current domain itself, do not change it. TTL: The TTL (Time To Live) value for this SOA record. Primary NS Server Address: The address of the primary authoritative NS server for this domain. This can be a resolution within the domain. For example, my primary NS server is 172.20.234.225, and I plan to use ns1.yori.dn42. pointing to this address, so here I can fill in ns1.yori.dn42.. Contact Email: The email address, with @ replaced by .. For example, my email is i@iyoroy.cn, so here I can fill in i.iyoroy.cn. Serial Number: A 10-digit number following RFC 1912, representing the version of the zone file. Other DNS servers will refetch the records if they detect an increase in the serial number when querying the SOA. It's commonly encoded using the date + a sequence number, so this value should be incremented after each modification. Refresh Time: The interval for AXFR slave nodes to pull the zone. Retry Time: The retry interval for AXFR slave nodes after a failed pull. Expire Time: The maximum time an AXFR slave node can continue serving with the last successfully pulled records after a failure, after which it stops responding. Minimum TTL: The minimum TTL value for the entire domain, the minimum refresh time for all records. Records won't be refreshed before at least this much time has passed. {collapse} {collapse-item label="Example"} ; SOA @ 3600 SOA ns1.yori.dn42. i.iyoroy.cn. 2025072705 60 60 1800 60 {/collapse-item} {/collapse} NS Records @ <TTL> NS <NS_Server_1> @ <TTL> NS <NS_Server_2> @ <TTL> NS <NS_Server_3> Fill this in according to your actual situation; add as many records as you have servers. {collapse} {collapse-item label="Example"} ; NS @ 3600 NS ns1.yori.dn42. @ 3600 NS ns2.yori.dn42. @ 3600 NS ns3.yori.dn42. {/collapse-item} {/collapse} A, AAAA, CNAME, etc. Records Fill them in according to the following format: <Hostname> <TTL> <Type> <Record_Value> If your NS server values point to hosts within your own DN42 domain, be sure to add A or AAAA resolution records for them. {collapse} {collapse-item label="Example"} ; A ns1 600 A 172.20.234.225 ns2 600 A 172.20.234.227 ns3 600 A 172.20.234.229 hkg-cn.node 600 A 172.20.234.225 nkg-cn.node 600 A 172.20.234.226 tyo-jp.node 600 A 172.20.234.227 hfe-cn.node 600 A 172.20.234.228 lax-us.node 600 A 172.20.234.229 ; AAAA ns1 600 AAAA fd18:3e15:61d0::1 ns2 600 AAAA fd18:3e15:61d0::3 ns3 600 AAAA fd18:3e15:61d0::5 hkg-cn.node 600 AAAA fd18:3e15:61d0::1 nkg-cn.node 600 AAAA fd18:3e15:61d0::2 tyo-jp.node 600 AAAA fd18:3e15:61d0::3 hfe-cn.node 600 AAAA fd18:3e15:61d0::4 lax-us.node 600 AAAA fd18:3e15:61d0::5 {/collapse-item} {collapse-item label="Complete Example"} /var/lib/knot/yori.dn42.zone ; SOA @ 3600 SOA ns1.yori.dn42. i.iyoroy.cn. 2025072705 60 60 1800 60 ; NS @ 3600 NS ns1.yori.dn42. @ 3600 NS ns2.yori.dn42. @ 3600 NS ns3.yori.dn42. ; A ns1 600 A 172.20.234.225 ns2 600 A 172.20.234.227 ns3 600 A 172.20.234.229 hkg-cn.node 600 A 172.20.234.225 nkg-cn.node 600 A 172.20.234.226 tyo-jp.node 600 A 172.20.234.227 hfe-cn.node 600 A 172.20.234.228 lax-us.node 600 A 172.20.234.229 ; AAAA ns1 600 AAAA fd18:3e15:61d0::1 ns2 600 AAAA fd18:3e15:61d0::3 ns3 600 AAAA fd18:3e15:61d0::5 hkg-cn.node 600 AAAA fd18:3e15:61d0::1 nkg-cn.node 600 AAAA fd18:3e15:61d0::2 tyo-jp.node 600 AAAA fd18:3e15:61d0::3 hfe-cn.node 600 AAAA fd18:3e15:61d0::4 lax-us.node 600 AAAA fd18:3e15:61d0::5 {/collapse-item} {/collapse} IPv4 Reverse Lookup Domain Create a file in /var/lib/knot named <IPv4_Reverse_Lookup_Domain>.zone, replacing / with _. For example, my IPv4 block is 172.20.234.224/28, and my IPv4 reverse lookup domain is 224/28.234.20.172.in-addr.arpa, so the filename here would be 224_28.234.20.172.in-addr.arpa.zone. 
Fill in the resolution records: ; SOA @ <TTL> SOA <Primary_NS_Server_Address> <Contact_Email> <Serial_Number> <Refresh_Time> <Retry_Time> <Expire_Time> <Minimum_TTL> ; NS @ <TTL> NS <NS_Server_1> @ <TTL> NS <NS_Server_2> @ <TTL> NS <NS_Server_3> ; PTR <Last_IPv4_Octet> <TTL> PTR <Reverse_DNS_Value> <Last_IPv4_Octet> <TTL> PTR <Reverse_DNS_Value> <Last_IPv4_Octet> <TTL> PTR <Reverse_DNS_Value> ... The SOA and NS records are the same as above. Last IPv4 Octet: The last octet of the DN42 IPv4 address you assigned to the device. For example, my HK node is assigned 172.20.234.225, so here I would put 225. {collapse} {collapse-item label="Example"} 224_28.234.20.172.in-addr.arpa.zone ; SOA @ 3600 SOA ns1.yori.dn42. i.iyoroy.cn. 2025072802 60 60 1800 60 ; NS @ 3600 NS ns1.yori.dn42. @ 3600 NS ns2.yori.dn42. @ 3600 NS ns3.yori.dn42. ; PTR 225 600 PTR hkg-cn.node.yori.dn42. 226 600 PTR nkg-cn.node.yori.dn42. 227 600 PTR tyo-jp.node.yori.dn42. 228 600 PTR hfe-cn.node.yori.dn42. 229 600 PTR lax-us.node.yori.dn42. {/collapse-item} {/collapse} You might wonder why the CIDR mask is needed, which differs from the common Clearnet format of reversed octets (e.g., 234.20.172.in-addr.arpa). Also, if you test locally, you might find that reverse lookups for your own IP addresses fail directly. The reason lies in DN42's distributed registry mechanism: a single zone file cannot cover all reverse query entry points for your address block (i.e., the .in-addr.arpa name for each specific IP address). To solve this, after your PR is merged, the official DN42 DNS will add CNAME redirects for your address block on its authoritative servers, pointing individual IP PTR queries to your CIDR-formatted zone, as shown below: ~$ dig PTR 225.234.20.172.in-addr.arpa +short 225.224/28.234.20.172.in-addr.arpa. # <-- CNAME added by Registry (redirect) hkg-cn.node.yori.dn42. # <-- Final PTR record returned by your DNS When an external resolver queries the reverse record for a specific IP (e.g., 172.20.234.225) (querying 225.234.20.172.in-addr.arpa), the official DN42 DNS returns a CNAME record pointing it to the specific record under the CIDR zone name (225.224/28.234.20.172.in-addr.arpa). Ultimately, the PTR record is provided by your configured authoritative DNS server. IPv6 Reverse Lookup Domain Create a file in /var/lib/knot named <IPv6_Reverse_Lookup_Domain>.zone. For example, my IPv6 block is fd18:3e15:61d0::/48, and my IPv6 reverse lookup domain is 0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa, so the filename here would be 0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa.zone. Fill in the resolution records: ; SOA @ <TTL> SOA <Primary_NS_Server_Address> <Contact_Email> <Serial_Number> <Refresh_Time> <Retry_Time> <Expire_Time> <Minimum_TTL> ; NS @ <TTL> NS <NS_Server_1> @ <TTL> NS <NS_Server_2> @ <TTL> NS <NS_Server_3> ; PTR <Reversed_Last_20_Nibbles> <TTL> PTR <Reverse_DNS_Value> <Reversed_Last_20_Nibbles> <TTL> PTR <Reverse_DNS_Value> <Reversed_Last_20_Nibbles> <TTL> PTR <Reverse_DNS_Value> ... Handle SOA and NS records as above. For the PTR hostname, you need to take the last 80 bits of the host's IPv6 address (after removing the /48 prefix), expand them into 20 hexadecimal characters, and reverse the order of these characters, separating them with dots. For example, my Hong Kong node's IPv6 is fd18:3e15:61d0::1. Expanded, this is fd18:3e15:61d0:0000:0000:0000:0000:0001. The hostname here would be 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0. {collapse} {collapse-item label="Example"} 0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa.zone ; SOA @ 3600 SOA ns1.yori.dn42. 
i.iyoroy.cn. 2025072802 60 60 1800 60 ; NS @ 3600 NS ns1.yori.dn42. @ 3600 NS ns2.yori.dn42. @ 3600 NS ns3.yori.dn42. ; PTR 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 600 PTR hkg-cn.node.yori.dn42. 2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 600 PTR nkg-cn.node.yori.dn42. 3.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 600 PTR tyo-jp.node.yori.dn42. 4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 600 PTR hfe-cn.node.yori.dn42. 5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 600 PTR lax-us.node.yori.dn42. {/collapse-item} {/collapse} Verifying the Setup After saving everything, run knotc reload on each DNS server. If all goes well, you should see the slave nodes synchronizing the zone files from the master node. You can then use dig or nslookup specifying the server to query the resolution records. Registration Domain Clone the DN42 Registry, navigate to data/dns, create a new file named <your_desired_domain>, and fill in the following content: domain: <your_desired_domain> admin-c: <Admin_NIC_Handle> tech-c: <Tech_NIC_Handle> mnt-by: <Maintainer> nserver: <NS1_Server_Domain> <NS1_Server_IP> nserver: <NS2_Server_Domain> <NS2_Server_IP> nserver: <NS3_Server_Domain> <NS3_Server_IP> ... source: DN42 Refer to DN42 - Ep.1 Joining the DN42 Network for admin-c, tech-c, and mnt-by. {collapse} {collapse-item label="Example"} data/dns/yori.dn42 domain: yori.dn42 admin-c: IYOROY-DN42 tech-c: IYOROY-DN42 mnt-by: IYOROY-MNT nserver: ns1.yori.dn42 172.20.234.225 nserver: ns1.yori.dn42 fd18:3e15:61d0::1 nserver: ns2.yori.dn42 172.20.234.227 nserver: ns2.yori.dn42 fd18:3e15:61d0::3 nserver: ns3.yori.dn42 172.20.234.229 nserver: ns3.yori.dn42 fd18:3e15:61d0::5 source: DN42 {/collapse-item} {/collapse} IPv4 Reverse Lookup Domain Navigate to data/inetnum, find the file for your registered address block, and add nserver fields pointing to your own DNS servers: nserver: <your_DNS_server_address> nserver: <your_DNS_server_address> ... {collapse} {collapse-item label="Example"} diff --git a/data/inetnum/172.20.234.224_28 b/data/inetnum/172.20.234.224_28 index 50c800945..5ad60e23d 100644 --- a/data/inetnum/172.20.234.224_28 +++ b/data/inetnum/172.20.234.224_28 @@ -8,3 +8,6 @@ tech-c: IYOROY-DN42 mnt-by: IYOROY-MNT status: ASSIGNED source: DN42 +nserver: ns1.yori.dn42 +nserver: ns2.yori.dn42 +nserver: ns3.yori.dn42 {/collapse-item} {/collapse} IPv6 Reverse Lookup Domain Navigate to data/inet6num, find the file for your registered address block, and add nserver fields pointing to your own DNS servers: nserver: <your_DNS_server_address> nserver: <your_DNS_server_address> ... {collapse} {collapse-item label="Example"} diff --git a/data/inet6num/fd18:3e15:61d0::_48 b/data/inet6num/fd18:3e15:61d0::_48 index 53f0de06d..1ae067b00 100644 --- a/data/inet6num/fd18:3e15:61d0::_48 +++ b/data/inet6num/fd18:3e15:61d0::_48 @@ -8,3 +8,6 @@ tech-c: IYOROY-DN42 mnt-by: IYOROY-MNT status: ASSIGNED source: DN42 +nserver: ns1.yori.dn42 +nserver: ns2.yori.dn42 +nserver: ns3.yori.dn42 {/collapse-item} {/collapse} Submit a PR and Wait for Merge After filling everything out, push your changes and submit a Pull Request. Because anyone in DN42 can run recursive DNS, it might take up to a week for the DNS configuration to fully propagate, although I found that the public DNS (172.20.0.53) could query my records within half a day after merging. Special thanks to たのしい for clarifying the differences between IPv4 reverse lookup in DN42 and the public internet. 
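As a final check on each server, Knot's own tooling can confirm that the zones loaded and that the slaves actually transferred them; a sketch, assuming the zone and addresses used above:

```
knotc reload                          # reload knot.conf and the zone files
knotc zone-status yori.dn42           # serial number, refresh timers, transfer state
kdig @172.20.234.227 yori.dn42 SOA    # query a slave directly: serial should match the master
kdig @172.20.234.225 225.224/28.234.20.172.in-addr.arpa PTR   # PTR inside the CIDR-style zone
```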
Reference Articles: https://www.haiyun.me/archives/1398.html https://www.jianshu.com/p/7d69ec2976c7 https://www.potat0.cc/posts/20220726/Register_DN42_Domain/ https://bbs.csdn.net/topics/393775423 https://blog.snorlax.blue/knot-reverse-dns-kickstart/ http://www.kkdlabs.jp/dns/automatic-dnssec-signing-by-knot-dns/ https://lantian.pub/article/modify-website/register-own-domain-in-dn42.lantian/ https://datatracker.ietf.org/doc/html/rfc2317 https://datatracker.ietf.org/doc/html/rfc3152 https://datatracker.ietf.org/doc/html/rfc1912#section-2.2
03/08/2025 · 69 Views · 0 Comments · 1 Star
DN42 - Ep.2 Building Internal Network with OSPF and Enabling iBGP
Foreword I am a novice in BGP. This article may contain imprecise content/naive understandings/elementary mistakes. I kindly ask the experts to be lenient. If you find any issues, you are welcome to contact me via email, and I will correct them as soon as possible. If you find this unacceptable, it is recommended to close this article now. Article Update Log {timeline} {timeline-item color="#50BFFF"} July 22, 2025: First edition published, using VXLAN over WireGuard tunnel. {/timeline-item} {timeline-item color="#50BFFF"} July 25, 2025: Updated tunneling solution, using type ptp; to support OSPF traffic via WireGuard (Special thanks to Nuro Trance for the guidance!). {/timeline-item} {timeline-item color="#50BFFF"} August 8, 2025: Added explanation and configuration for iBGP. {/timeline-item} {timeline-item color="#4F9E28"} August 27, 2025: Updated node topology diagram. {/timeline-item} {/timeline} Why Do We Need Internal Routing? As the number of nodes increases, we need a proper way to handle internal routing within our AS (Autonomous System). BGP only handles routing to different ASes, which causes a problem: if nodes A and B are both peering with external networks, a request from node A may have its response routed to node B, even though they are part of the same AS. Without internal routing, node A will not receive the reply. To solve this, we need to ensure that all devices within our AS can communicate with each other. The common solutions are: Using network tools like ZeroTier: Simple to set up, just install the client on each node for P2P connectivity. Using P2P tools like WireGuard to manually create $\frac{n(n-1)}{2}$ tunnels, which works like the first solution but becomes cumbersome as nodes grow. Using WireGuard to establish $\frac{n(n-1)}{2}$ tunnels, then using an internal routing protocol like OSPF or Babel to manage the routing. This is more flexible and easier to scale, but it can be risky and could break the DN42 network due to misconfigurations. Thus, I decided to take the risk. Node Topology graph LR A[HKG<br>172.20.234.225<br>fd18:3e15:61d0::1] B[NKG<br>172.20.234.226<br>fd18:3e15:61d0::2] C[TYO<br>172.20.234.227<br>fd18:3e15:61d0::3] D[FRA<br>172.20.234.228<br>fd18:3e15:61d0::4] E[LAX<br>172.20.234.229<br>fd18:3e15:61d0::5] B <--> A C <--> A A <--> E A <--> D C <--> D C <--> E D <--> E Update Bird2 to v2.16 or Above To use IPv6 Link-Local addresses to transmit IPv4 OSPF data, Bird v2.16 or later is required. Here are the steps to update: sudo apt update && sudo apt -y install apt-transport-https ca-certificates wget lsb-release sudo wget -O /usr/share/keyrings/cznic-labs-pkg.gpg https://pkg.labs.nic.cz/gpg echo "deb [signed-by=/usr/share/keyrings/cznic-labs-pkg.gpg] https://pkg.labs.nic.cz/bird2 $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/cznic-labs-bird2.list sudo apt update && sudo apt install bird2 -y Tunnel Configuration [Interface] PrivateKey = <Local WireGuard Private Key> ListenPort = <Listen Port> Table = off Address = <IPv6 LLA>/64 PostUp = sysctl -w net.ipv6.conf.%i.autoconf=0 [Peer] PublicKey = <Peer Public Key> Endpoint = <Peer Public Endpoint> AllowedIPs = 10.0.0.0/8, 172.20.0.0/14, 172.31.0.0/16, fd00::/8, fe00::/8, ff02::5 ff02::5 is the OSPFv3 all-routers (AllSPFRouters) link-local multicast address and should be included in AllowedIPs. If you're using Bird versions earlier than v2.16, you'll need to add an IPv4 address for the tunnel as well. 
See the example below: {collapse} {collapse-item label="WireGuard Configuration Example with IPv4"} [Interface] PrivateKey = <Local WireGuard Private Key> ListenPort = <Listen Port> Table = off Address = <IPv6 LLA>/64 PostUp = ip addr add 100.64.0.225/32 peer 100.64.0.226/32 dev %i PostUp = sysctl -w net.ipv6.conf.%i.autoconf=0 [Peer] PublicKey = <Peer Public Key> Endpoint = <Peer Public Endpoint> AllowedIPs = 10.0.0.0/8, 172.20.0.0/14, 100.64.0.0/16, 172.31.0.0/16, fd00::/8, fe00::/8, ff02::5 Please replace 100.64.0.225 and 100.64.0.226 with your local and peer IPv4 addresses, and remember to add AllowedIPs. {/collapse-item} {/collapse} Enable OSPF You should have already configured basic Bird settings as described in the previous article. Create a new file called ospf.conf under /etc/bird and add the following: protocol ospf v3 <name> { ipv4 { import where is_self_net() && source != RTS_BGP; export where is_self_net() && source != RTS_BGP; }; include "/etc/bird/ospf/*"; }; protocol ospf v3 <name> { ipv6 { import where is_self_net_v6() && source != RTS_BGP; export where is_self_net_v6() && source != RTS_BGP; }; include "/etc/bird/ospf/*"; }; Theoretically, OSPF v2 should be used for handling IPv4, but since we need to communicate IPv4 using IPv6 Link-Local addresses, we are using OSPF v3 for IPv4 in this case as well. The filter rules ensure that only routes within the local network segment are allowed to propagate through OSPF, and routes from external BGP protocols are filtered out. Never use import all; export all; indiscriminately, as this could lead to route hijacking and affect the entire DN42 network. OSPF should only handle internal network routes. {collapse} {collapse-item label="Example"} /etc/bird/ospf.conf protocol ospf v3 dn42_iyoroynet_ospf { ipv4 { import where is_self_net() && source != RTS_BGP; export where is_self_net() && source != RTS_BGP; }; include "/etc/bird/ospf/*"; }; protocol ospf v3 dn42_iyoroynet_ospf6 { ipv6 { import where is_self_net_v6() && source != RTS_BGP; export where is_self_net_v6() && source != RTS_BGP; }; include "/etc/bird/ospf/*"; }; {/collapse-item} {/collapse} Next, create the /etc/bird/ospf folder and then create an area configuration file (e.g., /etc/bird/ospf/backbone.conf) with the following content: area 0.0.0.0 { interface "<DN42 dummy interface>" { stub; }; interface "<wg0 interface>" { cost 80; # Modify according to your network situation type ptp; }; interface "<wg1 interface>" { cost 100; # Modify according to your network situation type ptp; }; # Continue for other interfaces }; The 0.0.0.0 area represents the backbone network. The dummy interface here refers to the DN42 virtual interface mentioned in the previous article. The cost value is what OSPF uses for path cost calculation, but in DN42's case, where bandwidth is less critical and latency matters more, you can directly assign the latency value. OSPF will automatically choose the route with the lowest cost (sum of the cost values). {collapse} {collapse-item label="Example"} /etc/bird/ospf/backbone.conf area 0.0.0.0 { interface "dn42" { stub; }; interface "dn42_hkg" { cost 80; type ptp; }; interface "dn42_hfe" { cost 150; type ptp; }; interface "dn42_lax"{ cost 100; type ptp; }; }; {/collapse-item} {/collapse} Finally, open /etc/bird/bird.conf and add the following to include the OSPF configuration file at the end: include "ospf.conf"; Run birdc configure, and then birdc show protocols should show the OSPF status as Running. If not, check the configuration steps for errors. 
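If the protocol is Running but routes still look wrong, Bird can also show the OSPF adjacencies and per-interface state directly; a quick sketch using the protocol name from the example above:

```
birdc show ospf neighbors dn42_iyoroynet_ospf    # adjacencies should reach the Full/PtP state
birdc show ospf interface dn42_iyoroynet_ospf    # cost and type (ptp/stub) per interface
birdc show ospf topology dn42_iyoroynet_ospf     # the computed intra-area topology
```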
At this point, you should be able to ping between two non-directly connected machines: Enable iBGP Before establishing multiple peer connections, each of your nodes must first have complete knowledge of the internal AS topology. This involves configuring another key component: internal BGP (iBGP). Necessity of iBGP iBGP ensures that all routers within the AS have complete knowledge of external destination routes. It ensures that: Internal routers can select the best exit path. Traffic is correctly routed to the boundary routers responsible for specific external networks. Even if there are multiple boundary routers connected to the same external network, internal routers can choose the best exit based on policies. Compared to using a default route pointing to the border router within the AS, iBGP provides precise external route information, allowing internal routers to make more intelligent forwarding decisions. Disadvantages and Solutions To prevent uncontrolled propagation of routing information within the AS, which could cause loops, an iBGP router will not re-advertise routes learned from one iBGP neighbor to other iBGP neighbors. This means traditional iBGP requires a full mesh of iBGP neighbor relationships between all iBGP-running routers within the same AS. (You still need to establish $\frac{n(n-1)}{2}$ connections, there's no way around it. But configuring iBGP is still easier than configuring tunnels after OSPF is set up.) Solutions include: Using a Route Reflector (RR): An RR router manages all routing information within the entire AS. The disadvantage is that if the RR router fails, the entire network can be paralyzed (which is not very decentralized). Using BGP Confederation: This involves virtually dividing the routers within the AS into sub-ASes, treating the connections between routers as eBGP, and finally stripping the internal AS path information when advertising routes externally. I haven't tried the latter two solutions. Here are some potentially useful reference articles. This article focuses on the configuration of iBGP. DN42 Experimental Network: Intro and Registration (Updated 2022-12) - Lan Tian @ Blog Configure BGP Confederation & Fake Confederation in Bird (Updated 2020-06-07) - Lan Tian @ Blog Writing the iBGP Configuration File Create a new file ibgp.conf in /etc/bird and fill it with the following content: template bgp ibgpeers { local as OWNAS; ipv4 { import where source = RTS_BGP && is_valid_network() && !is_self_net(); export where source = RTS_BGP && is_valid_network() && !is_self_net(); next hop self; extended next hop; }; ipv6 { import where source = RTS_BGP && is_valid_network_v6() && !is_self_net_v6(); export where source = RTS_BGP && is_valid_network_v6() && !is_self_net_v6(); next hop self; }; }; include "ibgp/*"; The import and export filters ensure that iBGP only processes routes learned via the BGP protocol and filters out IGP routes to prevent loops. next hop self is required. It instructs BIRD to rewrite the next hop to the border router's own IP address (instead of the original external next hop) when exporting routes to iBGP neighbors. This is because internal routers cannot directly access the external neighbor's address; without rewriting, the address would be considered unreachable. After rewriting, internal routers only need to send traffic to the border router via IGP routing, and the border router handles the final external forwarding. 
Because I want to use IPv6 addresses to establish MP-BGP and route IPv4 over IPv6, extended next hop is enabled for IPv4. Next, create the /etc/bird/ibgp directory. Inside, create an iBGP Peer configuration file for each node: protocol bgp 'dn42_ibgp_<Node Name>' from ibgpeers{ neighbor <Corresponding Node's IPv6 ULA Address> as OWNAS; }; {collapse} {collapse-item label="Example"} /etc/bird/ibgp/hkg.conf: protocol bgp 'dn42_ibgp_HKG' from ibgpeers{ neighbor fd18:3e15:61d0::1 as OWNAS; }; {/collapse-item} {/collapse} Note: Each node needs to establish (n-1) iBGP connections, ensuring connectivity with all other machines within the AS. This is why ULA addresses are used. Using ULA addresses ensures that even if the WireGuard connection between two nodes goes down, iBGP can still establish connections via the internal routing established by OSPF. Otherwise, it could lead to the collapse of the entire internal network. Finally, add the inclusion of ibgp.conf in /etc/bird/bird.conf: include "ibgp.conf"; And run birdc configure to apply the configuration. References: A Beginner's Introduction to BIRD and BGP - 海上的宫殿 A Newbie's Dive into DN42: Networking with Tailscale + VXLAN + OSPF – 米露小窝 Using Bird2 with WireGuard + OSPF for a Highly Available Network | bs' realm DN42 Experimental Network: Intro and Registration (Updated 2022-12) - Lan Tian @ Blog How to Blow Up the DN42 Network (Updated 2023-05-12) - Lan Tian @ Blog Configure BGP Confederation & Fake Confederation in Bird (Updated 2020-06-07) - Lan Tian @ Blog In-Depth Analysis of OSPF Path Cost, Priority, and Timers - 51CTO New release 2.16 | BIRD Internet Routing Daemon Chapter 1, Section 2: How to Install the Latest Version of BIRD on Linux? | BIRD Chinese Documentation [DN42] Building an Internal Network with OSPF ptp and Configuring iBGP – Xe_iu's Blog | Xe_iu的杂物间 [Translated] iBGP and IGP Configuration in a Multi-Server dn42 Environment | liuzhen932 的小窝
22/07/2025 · 191 Views · 1 Comment · 1 Star