DN42&OneManISP - Troubleshooting OSPF Source Address in a Coexistence Environment
Backstory

As mentioned in the previous post of this series, the VRF solution proved too isolating: the DNS service I deployed on the HKG node (172.20.234.225) became unreachable from the DN42 network. Research suggested this could be worked around with veth pairs or NAT forwarding, but with so little documentation available I ultimately abandoned the VRF approach.

Structure Analysis

This time, the plan was to place both the DN42 and the public internet BGP routes into the system's main routing table, then separate them on export using filters that decide which routes may go where. For clarity, I keep the configuration for the DN42 part and the public internet part (hereinafter "inet") in separate files and include them from the main configuration file. Also, since there should ideally be only one kernel protocol per routing table, I merged the DN42 and inet kernel sections, keeping a single instance. After several rounds of optimization and revision, my final directory structure is:

/etc/bird/
├─envvars
├─bird.conf: main BIRD config file; defines basic info (ASN, IPs, etc.) and includes the sub-configs below
├─kernel.conf: kernel protocol config; exports routes into the system routing table
├─dn42
│ ├─defs.conf: DN42 function definitions, e.g. is_self_dn42_net()
│ ├─ibgp.conf: DN42 iBGP template
│ ├─rpki.conf: DN42 RPKI route validation
│ ├─ospf.conf: DN42 OSPF internal network
│ ├─static.conf: DN42 static routes
│ ├─ebgp.conf: DN42 peer template
│ ├─ibgp
│ │ └─<ibgp configs>: DN42 iBGP configs, one per node
│ ├─ospf
│ │ └─backbone.conf: OSPF area
│ ├─peers
│ │ └─<peer configs>: DN42 peer configs, one per node
├─inet
│ ├─peer.conf: public internet peer template
│ ├─ixp.conf: public internet IXP connections
│ ├─defs.conf: public internet function definitions, e.g. is_self_inet_v6()
│ ├─upstream.conf: public internet upstreams
│ └─static.conf: public internet static routes

The function definitions live in separate files because the filters in kernel.conf need to reference them, so they are split out for early inclusion. After filling in the respective configurations and wiring up the includes, I ran birdc configure and it started successfully. So, case closed... right?

Problems Occurred

After running for a while, I suddenly found that I couldn't ping the HKG node from my internal devices, nor my other internal nodes from the HKG node. Strangely, external ASes could still ping my other nodes (or other external ASes) through the HKG node, and my internal nodes could reach non-directly-connected nodes via HKG (e.g. 226 (NKG) -> 225 (HKG) -> 229 (LAX)). Running ip route get <address of another internal node> revealed:

root@iYoRoyNetworkHKG:/etc/bird# ip route get 172.20.234.226
172.20.234.226 via 172.20.234.226 dev dn42_nkg src 23.149.120.51 uid 0
    cache

See the problem? The src address should have been the HKG node's own DN42 address (configured on the OSPF stub interface), but it shows the node's public internet address instead. Reading the route BIRD learned with birdc s r for 172.20.234.226:

root@iYoRoyNetworkHKGBGP:/etc/bird/dn42/ospf# birdc s r for 172.20.234.226
BIRD 2.17.1 ready.
Table master4:
172.20.234.226/32    unicast [dn42_ospf_iyoroynet_v4 00:30:29.307] * I (150/50) [172.20.234.226]
	via 172.20.234.226 on dn42_nkg onlink

Looks seemingly normal...?
Theoretically this shouldn't happen: even though the DN42 source IP differs from the usual one, the kernel protocol rewrites krt_prefsrc when exporting DN42 routes, telling the kernel the correct source address:

protocol kernel kernel_v4 {
    ipv4 {
        import none;
        export filter {
            if source = RTS_STATIC then reject;
+           if is_valid_dn42_network() then krt_prefsrc = DN42_OWNIP;
            accept;
        };
    };
}

protocol kernel kernel_v6 {
    ipv6 {
        import none;
        export filter {
            if source = RTS_STATIC then reject;
+           if is_valid_dn42_network_v6() then krt_prefsrc = DN42_OWNIPv6;
            accept;
        };
    };
}

I was stuck on this for a long time.

The Solution

Finally, during an unintentional attempt, I added the krt_prefsrc rewrite to the OSPF import configuration as well:

protocol ospf v3 dn42_ospf_iyoroynet_v4 {
    router id DN42_OWNIP;
    ipv4 {
-       import where is_self_dn42_net() && source != RTS_BGP;
+       import filter {
+           if is_self_dn42_net() && source != RTS_BGP then {
+               krt_prefsrc = DN42_OWNIP;
+               accept;
+           }
+           reject;
+       };
        export where is_self_dn42_net() && source != RTS_BGP;
    };
    include "ospf/*";
};

protocol ospf v3 dn42_ospf_iyoroynet_v6 {
    router id DN42_OWNIP;
    ipv6 {
-       import where is_self_dn42_net_v6() && source != RTS_BGP;
+       import filter {
+           if is_self_dn42_net_v6() && source != RTS_BGP then {
+               krt_prefsrc = DN42_OWNIPv6;
+               accept;
+           }
+           reject;
+       };
        export where is_self_dn42_net_v6() && source != RTS_BGP;
    };
    include "ospf/*";
};

After this change the src address became correct and mutual pings worked.

Configuration files for reference: KaguraiYoRoy/Bird2-Configuration
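With this fix the same krt_prefsrc rewrite now lives in two places (the kernel export filters and the OSPF import filters). One way to keep the two from drifting apart would be to factor the rewrite into a shared helper in defs.conf — a minimal sketch, assuming a helper named dn42_set_prefsrc, which is my own invention and not part of the repository above:

# Hypothetical helper for /etc/bird/dn42/defs.conf (not in the original config).
# Sets the kernel preferred source address on any route inside DN42 space.
function dn42_set_prefsrc()
{
    if net.type = NET_IP4 then {
        if is_valid_dn42_network() then krt_prefsrc = DN42_OWNIP;
    }
    if net.type = NET_IP6 then {
        if is_valid_dn42_network_v6() then krt_prefsrc = DN42_OWNIPv6;
    }
}

Each filter above could then call dn42_set_prefsrc(); before its accept, so the source-address logic is defined once.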
29/10/2025
7 Views
0 Comments
0 Stars
DN42&OneManISP - Using VRF to Run Public BGP and DN42 on the Same Machine
Background

Currently, public BGP and DN42 each use a separate VPS in the same region, meaning two machines are required per region. After learning about VRF from a group member, I explored using VRF to let a single machine handle both public BGP and DN42 simultaneously.

Note: Due to its isolating nature, the VRF solution prevents DN42 from accessing services on the host. If you need to run services (like DNS) on the server for DN42, you may need additional port forwarding or veth configuration, which is beyond the scope of this article. (This is also why I ultimately did not adopt VRF in my production environment.)

Advantages of VRF

Although DN42 uses private IP ranges and internal ASNs, which theoretically shouldn't interfere with public BGP, sharing the same routing table can lead to issues like route pollution and management complexity. VRF (Virtual Routing and Forwarding) allows creating multiple routing tables on a single machine, so DN42 routes can be isolated into a separate routing table, apart from the public one. The advantages:

- Strict security and policy isolation: the DN42 routing table is isolated from the public routing table, fundamentally preventing route leaks.
- Clear operation and management: commands like birdc show route table t_dn42 and birdc show route table t_inet inspect two completely independent routing tables, making things clear at a glance.
- Fault-domain isolation: if a DN42 peer flaps, the impact is confined to the DN42 routing table; it neither consumes route-computation resources of the public instance nor affects public forwarding performance.
- Alignment with modern network design: using VRFs for different routing domains (production, testing, customer, partner) is standard practice in modern network engineering — it logically divides one device into multiple virtual routers.

Configuration

System Part

Creating the VRF interface

Use the following commands to create a VRF device named dn42-vrf and associate it with the system's routing table number 1042:

ip link add dn42-vrf type vrf table 1042
ip link set dev dn42-vrf up   # bring it up

You can change the routing table number to your preference, but avoid the following reserved routing table IDs:

Name      ID    Description
unspec    0     Unspecified, rarely used
main      254   Main routing table, where most ordinary routes reside
default   253   Generally unused, reserved
local     255   Local routing table (127.0.0.1/8, local IPs, broadcast addresses); cannot be modified

Associating existing network interfaces with the VRF

My current DN42 setup uses several WireGuard interfaces and a dummy interface. Associate each of these with the VRF:

ip link set dev <interface_name> master dn42-vrf

Note: after associating an interface with a VRF, it may lose its IP addresses.
You therefore need to re-add the addresses, for example:

ip addr add 172.20.234.225 dev dn42

Afterwards, ip a should show the corresponding interface's master as dn42-vrf:

156: dn42: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master dn42-vrf state UNKNOWN group default qlen 1000
    link/ether b6:f5:28:ed:23:04 brd ff:ff:ff:ff:ff:ff
    inet 172.20.234.225/32 scope global dn42
       valid_lft forever preferred_lft forever
    inet6 fd18:3e15:61d0::1/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::b4f5:28ff:feed:2304/64 scope link
       valid_lft forever preferred_lft forever

Persistence

I use ifupdown to bring up the dummy interface and the VRF device on boot. For the VRF device, create the file /etc/network/interfaces.d/01-dn42-vrf and add:

auto dn42-vrf
iface dn42-vrf inet manual
    pre-up ip link add $IFACE type vrf table 1042
    up ip link set dev $IFACE up
    post-down ip link del $IFACE

Then start it with ifup dn42-vrf. For the dummy interface, create the file /etc/network/interfaces.d/90-dn42 and add:

auto dn42
iface dn42 inet static
    address 172.20.234.225
    netmask 32
    pre-up ip link add $IFACE type dummy
    up ip link set $IFACE up master dn42-vrf   # "master" here associates it with dn42-vrf
    down ip link set $IFACE down
    post-down ip link del $IFACE

iface dn42 inet6 static
    address fd18:3e15:61d0::1
    netmask 128

Because ifupdown doesn't support configuring both IPv4 and IPv6 addresses in one iface block, they need to be split. My dummy interface is named dn42; adjust accordingly if yours differs. After creating the file, start the dummy interface with ifup dn42.

Note: the numeric prefix of the VRF device's file must be smaller than that of the dummy interface's file, ensuring the VRF device starts first.

WireGuard Tunnels

Add PostUp commands to associate each tunnel with the VRF and re-add its addresses. Example:

[Interface]
PrivateKey = [Data Redacted]
ListenPort = [Data Redacted]
Table = off
Address = fe80::2024/64
PostUp = sysctl -w net.ipv6.conf.%i.autoconf=0
+ PostUp = ip link set dev %i master dn42-vrf
+ PostUp = ip addr add fe80::2024/64 dev %i

[Peer]
PublicKey = [Data Redacted]
Endpoint = [Data Redacted]
AllowedIPs = 10.0.0.0/8, 172.20.0.0/14, 172.31.0.0/16, fd00::/8, fe00::/8

Then restart the tunnel.

Bird2 Part

First, define two routing tables for DN42's IPv4 and IPv6:

ipv4 table dn42_table_v4;
ipv6 table dn42_table_v6;

Then specify the VRF and the system routing table number in the kernel protocols, and the previously created v4/v6 tables in the IPv4/IPv6 sections:

protocol kernel dn42_kernel_v6 {
+   vrf "dn42-vrf";
+   kernel table 1042;
    scan time 20;
    ipv6 {
+       table dn42_table_v6;
        import none;
        export filter {
            if source = RTS_STATIC then reject;
            krt_prefsrc = DN42_OWNIPv6;
            accept;
        };
    };
};

protocol kernel dn42_kernel_v4 {
+   vrf "dn42-vrf";
+   kernel table 1042;
    scan time 20;
    ipv4 {
+       table dn42_table_v4;
        import none;
        export filter {
            if source = RTS_STATIC then reject;
            krt_prefsrc = DN42_OWNIP;
            accept;
        };
    };
}

For protocols other than kernel, add the VRF and the dedicated IPv4/IPv6 tables, but do not specify the system routing table number:

protocol static dn42_static_v4 {
+   vrf "dn42-vrf";
    route DN42_OWNNET reject;
    ipv4 {
+       table dn42_table_v4;
        import all;
        export none;
    };
}

protocol static dn42_static_v6 {
+   vrf "dn42-vrf";
    route DN42_OWNNETv6 reject;
    ipv6 {
+       table dn42_table_v6;
        import all;
        export none;
    };
}

In summary: configure the VRF and the previously defined routing tables on everything related to DN42.
Only the kernel protocol needs the system routing table number specified; the others do not. Apply the same method to BGP, OSPF, etc. However, I chose to use separate router IDs for the public internet and DN42, so a dedicated router ID also needs to be configured:

# /etc/bird/dn42/ospf.conf
protocol ospf v3 dn42_ospf_iyoroynet_v4 {
+   vrf "dn42-vrf";
+   router id DN42_OWNIP;
    ipv4 {
+       table dn42_table_v4;
        import where is_self_dn42_net() && source != RTS_BGP;
        export where is_self_dn42_net() && source != RTS_BGP;
    };
    include "ospf/*";
};

protocol ospf v3 dn42_ospf_iyoroynet_v6 {
+   vrf "dn42-vrf";
+   router id DN42_OWNIP;
    ipv6 {
+       table dn42_table_v6;
        import where is_self_dn42_net_v6() && source != RTS_BGP;
        export where is_self_dn42_net_v6() && source != RTS_BGP;
    };
    include "ospf/*";
};

# /etc/bird/dn42/ebgp.conf
...
template bgp dnpeers {
+   vrf "dn42-vrf";
+   router id DN42_OWNIP;
    local as DN42_OWNAS;
    path metric 1;
    ipv4 {
+       table dn42_table_v4;
        ...
    };
    ipv6 {
+       table dn42_table_v6;
        ...
    };
}

include "peers/*";

When done, reload the configuration with birdc c. Now the DN42 routing table can be viewed on its own using ip route show vrf dn42-vrf:

root@iYoRoyNetworkHKGBGP:~# ip route show vrf dn42-vrf
10.26.0.0/16 via inet6 fe80::ade0 dev dn42_4242423914 proto bird src 172.20.234.225 metric 32
10.29.0.0/16 via inet6 fe80::ade0 dev dn42_4242423914 proto bird src 172.20.234.225 metric 32
10.37.0.0/16 via inet6 fe80::ade0 dev dn42_4242423914 proto bird src 172.20.234.225 metric 32
...

You can also ping through the VRF using the -I dn42-vrf parameter:

root@iYoRoyNetworkHKGBGP:~# ping 172.20.0.53 -I dn42-vrf
ping: Warning: source address might be selected on device other than: dn42-vrf
PING 172.20.0.53 (172.20.0.53) from 172.20.234.225 dn42-vrf: 56(84) bytes of data.
64 bytes from 172.20.0.53: icmp_seq=1 ttl=64 time=3.18 ms
64 bytes from 172.20.0.53: icmp_seq=2 ttl=64 time=3.57 ms
64 bytes from 172.20.0.53: icmp_seq=3 ttl=64 time=3.74 ms
64 bytes from 172.20.0.53: icmp_seq=4 ttl=64 time=2.86 ms
^C
--- 172.20.0.53 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 2.863/3.337/3.740/0.341 ms

Important Notes

If the VRF device is reloaded, all devices originally associated with it need to be reloaded as well, otherwise they won't function correctly.

Currently, DN42 cannot access services inside a host configured with VRF. A future article may explain how to allow traffic within the VRF to access host services (added to the TODO list).

Reference article: Run your MPLS network with BIRD
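Beyond ping's -I flag, iproute2 can also run a whole command with its sockets bound to the VRF, which is handy for ad-hoc debugging. A small sketch using standard iproute2 commands (the VRF name and table number are the ones created in this article):

ip vrf exec dn42-vrf traceroute 172.20.0.53   # run any program inside the VRF
ip link show master dn42-vrf                  # list interfaces currently enslaved to the VRF
ip route show table 1042                      # raw view of the kernel table behind the VRF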
16/09/2025
45 Views
0 Comments
0 Stars
DN42 - Ep.4 Configuring BGP Communities
Foreword

I am a novice in BGP. This article may contain imprecise content, naive understandings, or elementary mistakes — I kindly ask the experts to be lenient. If you find any issues, you are welcome to contact me via email and I will correct them as soon as possible. If you find this unacceptable, it is recommended to close this article now.

What are BGP Communities?

TL;DR: BGP Communities "tag" routes, allowing others to use these tags for route optimization.

This concept might be unfamiliar to newcomers (like me), who might not initially understand its purpose. Simply put, BGP Communities are a mechanism for tagging routes, similar to adding labels. They allow network administrators to attach one or more "tags" (i.e., community values) to routes propagated via BGP. These tags themselves do not alter the route's path attributes (like AS_PATH, LOCAL_PREF, MED, etc.); they provide a signaling mechanism that tells other routers — within the same AS or in downstream peer ASes — what policy or processing should be applied to the route. BGP Communities can be used to:

- Simplify policy configuration: routers inside the network or in downstream ASes only need policies keyed on community values (setting LOCAL_PREF, adding NO_EXPORT, applying route-maps, etc.), without needing to know specific prefix details. This makes policies more centralized, easier to manage, and less error-prone.
- Convey policy intent to downstream ASes: an AS can attach community values to routes it advertises to downstream customer or peer ASes. These communities convey requirements or suggestions on how the routes should be handled, such as route selection based on geographic location, latency, or bandwidth.
- Coordinate policies within an AS: inside a large AS using an iBGP full mesh or route reflectors, edge routers (receiving eBGP routes or redistributing routes) can tag routes with community values; core routers or route reflectors then recognize these communities and apply the corresponding internal policies (setting LOCAL_PREF or MED, deciding whether to advertise to certain iBGP peers, adding other communities, etc.) without complex prefix-based policies on every internal router.

DN42 has its own set of community specifications. For details, please refer to: BGP-communities - DN42 Wiki

Configuration

The general idea and approach in this article largely follow Xe_iu's method: add BGP communities carrying geographic information and use them for route optimization. Generally, this is sufficient. (Another reason is that I haven't fully grasped the rest yet.)

Note: We should add geography-related BGP communities ONLY to routes originating from our own AS, never to routes received from neighbors. Adding our own regional communities to a neighbor's routes constitutes forging the route origin and can lead to route hijacking: downstream networks may misjudge the traffic path, routing traffic through your network that should have gone directly, increasing latency and consuming your bandwidth. (Large communities are an exception, as they have a verification mechanism preventing this, but that is beyond the scope of this article.)

The idea is clear: when exporting routes, first verify the route is our own; only then attach the community tags. The sample configuration provided by DN42 already includes two functions, is_self_net() and is_self_net_v6(), to check whether a route is our own.
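For reference, in the DN42 sample configuration these checks are essentially prefix-set membership tests. Roughly (sketched here with this network's prefixes substituted in — the wiki's sample config is the authoritative version):

define OWNNETSET = [172.20.234.224/28+];
define OWNNETSETv6 = [fd18:3e15:61d0::/48+];

function is_self_net() {
    return net ~ OWNNETSET;
}

function is_self_net_v6() {
    return net ~ OWNNETSETv6;
}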
With those, writing the configuration is straightforward.

Adding Communities to Routes

First, define the geographic region information for the current node at the beginning of the node configuration file. Please check BGP-communities - DN42 Wiki for the values:

define DN42_REGION = 52;    # 52 represents East Asia
define DN42_COUNTRY = 1344; # 1344 represents Hong Kong

Modify these values according to your node's actual geographic location. Then, modify the export filter in the dnpeers template:

    };
    export filter {
-       if is_valid_network() && source ~ [RTS_STATIC, RTS_BGP] then accept;
+       if is_valid_network() && source ~ [RTS_STATIC, RTS_BGP] then {
+           if (is_self_net()) then {                     # check if it's our own route
+               bgp_community.add((64511, DN42_REGION));  # add continent-level region info
+               bgp_community.add((64511, DN42_COUNTRY)); # add country/region info
+           }
+           accept;
+       }
        reject;
    };
    import limit 1000 action block;

During export, this checks whether the route is our own; if so, it sets bgp_community according to the defined DN42_REGION and DN42_COUNTRY. Here, 64511 is the reserved public AS number used specifically for the geographic (region/country) tags; you can copy it as-is. Apply the same method to the IPv6 export rules, replacing is_self_net() with is_self_net_v6().

Route Optimization Based on Communities

Here we need to introduce another concept: local_pref (Local Preference). It indicates the priority of a route within an AS; the default value is 100, and higher values win. Furthermore, in BGP best-path selection, local_pref has the highest priority — even above AS_PATH length — so by setting local_pref we can adjust route priorities to achieve route optimization. Combining this with the BGP communities above, we can set local_pref based on a route's communities. Also, since BGP.local_pref is propagated within the AS, the import logic of both eBGP and iBGP needs modifying.

My handling of BGP.local_pref references (basically copies) Xe_iu's approach:

- routes from the same continent: priority +10
- routes from the same country: an additional +5
- routes received via direct peering with us: an additional +20

For example, a route learned directly from a peer in the same country scores 100 + 10 + 5 + 20 = 135, while a distant multi-hop route stays at the base 100. Create a function to calculate the priority:

function ebgp_calculate_priority() {
    int priority = 100; # base priority
    # same region detection (+10)
    if bgp_community ~ [(64511, DN42_REGION)] then priority = priority + 10;
    # same country detection (+5)
    if bgp_community ~ [(64511, DN42_COUNTRY)] then priority = priority + 5;
    # direct eBGP neighbor detection (+20)
    if bgp_path.len = 1 then priority = priority + 20;
    return priority;
}

Then, in the import filter within the dnpeers template, set bgp_local_pref to the value calculated by the function:

template bgp dnpeers {
    local as OWNAS;
    path metric 1;
    ipv4 {
        import filter {
            if is_valid_network() && !is_self_net() then {
                if (roa_check(dn42_roa, net, bgp_path.last) != ROA_VALID) then {
                    print "[dn42] ROA check failed for ", net, " ASN ", bgp_path.last;
                    reject;
                }
+               bgp_local_pref = ebgp_calculate_priority();
                accept;
            }
            reject;
        };

Apply the same method for IPv6. After running birdc configure, we should be able to see that our routes arrive at our neighbors tagged with the communities. (Screenshot source: https://lg.milu.moe/route_all/hk/172.20.234.224)

Special thanks to Nuro Trace and Xe_iu, who deepened my understanding of BGP communities and provided much assistance.
Reference Articles:
[DN42] bird2的配置文件 – Xe_iu's Blog | Xe_iu的杂物间
[DN42] 谈一谈如何配置 BGP community – Xe_iu's Blog | Xe_iu的杂物间
BGP-communities - DN42 Wiki
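Since local_pref is assigned at import time, it is stored in the local table and can be checked without a looking glass. A quick sketch (the prefix is a placeholder and the output line is abbreviated from birdc's attribute dump):

birdc show route for <some neighbor's prefix> all | grep -i local_pref
#   BGP.local_pref: 110    <- base 100 + same-region 10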
17/08/2025
52 Views
0 Comments
1 Star
DN42 Exploration Diary - Ep.3 Registering a Domain in DN42 and Building Authoritative DNS
Foreword

I am a novice in BGP. This article may contain imprecise content, naive understandings, or elementary mistakes — I kindly ask the experts to be lenient. If you find any issues, you are welcome to contact me via email and I will correct them as soon as possible. If you find this unacceptable, it is recommended to close this article now.

This assumes you have already joined DN42, can exchange routes normally, and can reach IPs inside DN42.

Background

While debugging the network, I noticed that pinging or tracerouting other people's DN42 IPs shows reverse-resolved domain names, so you can tell which nodes a route passes through instead of just staring at IPs (as shown below). It's very readable at a glance — and lets everyone see immediately that your traffic is taking a detour — so I decided to register a DN42 domain of my own and build authoritative DNS for it.

I consulted Lan Tian's article on this; he uses PowerDNS with MySQL master-slave replication, but my servers are underpowered (only 1 core and 1 GB RAM), so I decided to use Knot DNS as the server and implement master-slave synchronization with the standard zone transfer protocols (AXFR/IXFR).

Preparation

{alert type="warning"}
The domains and IPs mentioned in this and later sections are my own; replace them with yours when actually deploying. Values in angle brackets must be changed to fit your setup.
{/alert}

Pick a domain you like — mine is yori.dn42 — and plan where to deploy the DNS servers. I use three machines:

172.20.234.225, fd18:3e15:61d0::1, ns1.yori.dn42
172.20.234.227, fd18:3e15:61d0::3, ns2.yori.dn42
172.20.234.229, fd18:3e15:61d0::5, ns3.yori.dn42

ns1.yori.dn42 acts as the master; ns2 and ns3 are slaves.

Installing Knot DNS

If port 53 is occupied by a process such as systemd-resolved, disable it first:

systemctl stop systemd-resolved
systemctl disable systemd-resolved
unlink /etc/resolv.conf
echo "nameserver 8.8.8.8" > /etc/resolv.conf

I am on Debian 12, so I install via APT:

apt install knot knot-dnsutils -y

Enable Knot DNS at boot:

systemctl enable knot

Configuring Knot DNS

Creating a key

First create a key for synchronization:

keymgr -t key_knsupdate

Copy the output:

# hmac-sha256:key_knsupdate:<your secret>
key:
  - id: key_knsupdate
    algorithm: hmac-sha256
    secret: <your secret>

Editing the configuration files

Master server

Edit /etc/knot/knot.conf and fill in the following:

server:
    rundir: "/run/knot"
    user: knot:knot
    automatic-acl: on
    listen: [ <listen address 1>@53, <listen address 2>@53, ... ]

log:
  - target: syslog
    any: info

database:
    storage: "/var/lib/knot"

### paste the key generated in the previous step here
# hmac-sha256:key_knsupdate:<your secret>
key:
  - id: key_knsupdate
    algorithm: hmac-sha256
    secret: <your secret>

remote:
  - id: <DNS server 1 ID>
    address: <DNS server 1 IP>@53
  - id: <DNS server 2 ID>
    address: <DNS server 2 IP>@53
  - id: <DNS server 3 ID>
    address: <DNS server 3 IP>@53

acl:
  - id: acl_slave
    key: key_knsupdate
    action: transfer
  - id: acl_master
    key: key_knsupdate
    action: notify
  - id: acl_knsupdate
    key: key_knsupdate
    action: update

template:
  - id: default
    storage: "/var/lib/knot"
    file: "%s.zone"

zone:
  - domain: <DN42 domain>
    notify: [ <slave 1 ID>, <slave 2 ID> ]
    acl: [ acl_slave, acl_knsupdate ]
  - domain: <IPv4 reverse zone>
    notify: [ <slave 1 ID>, <slave 2 ID> ]
    acl: [ acl_slave, acl_knsupdate ]
  - domain: <IPv6 reverse zone>
    notify: [ <slave 1 ID>, <slave 2 ID> ]
    acl: [ acl_slave, acl_knsupdate ]

Notes:

- The listen addresses are the machine's own DN42 IPv4 and IPv6; for local debugging you can also add internal addresses such as 127.0.0.1 and localhost.
- The slave IDs are the server IDs defined under remote — list whichever machine(s) you chose as slaves.
- The address under remote may be an internal address or the DN42 IPv4/IPv6; it is used only for master-slave synchronization. If you use an internal address, add it to the listen list.
- The template stores zone files under /var/lib/knot.
- The IPv4 reverse zone is named after your registered IPv4 block, following the RFC 2317 format: my block is 172.20.234.224/28, so the reverse zone is 224/28.234.20.172.in-addr.arpa — treat the final segment 224/28 as a single label, split the rest on the dots, reverse the parts, and append .in-addr.arpa.
- The IPv6 reverse zone is named after your registered IPv6 block, following the RFC 3152 format: my block is fd18:3e15:61d0::/48, so the reverse zone is 0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa — reverse the nibbles of the address block, separate them with dots, and append .ip6.arpa. Zeros must be written out in full.

{collapse}
{collapse-item label="Example"}
server:
    rundir: "/run/knot"
    user: knot:knot
    automatic-acl: on
    listen: [ 172.20.234.225@53, fd18:3e15:61d0::1@53, localhost@53, 127.0.0.2@53 ]

log:
  - target: syslog
    any: info

database:
    storage: "/var/lib/knot"

# hmac-sha256:key_knsupdate:dyE7/N3CLTE5IFKfpBK0Gaij6aPnXkv/4sp1gUORlsk=
key:
  - id: key_knsupdate
    algorithm: hmac-sha256
    secret: dyE7/N3CLTE5IFKfpBK0Gaij6aPnXkv/4sp1gUORlsk=

remote:
  - id: 225 # master
    address: 172.20.234.225@53
  - id: 227 # slave
    address: 172.20.234.227@53
  - id: 229 # slave
    address: 172.20.234.229@53

acl:
  - id: acl_slave
    key: key_knsupdate
    action: transfer
  - id: acl_master
    key: key_knsupdate
    action: notify
  - id: acl_knsupdate
    key: key_knsupdate
    action: update

template:
  - id: default
    storage: "/var/lib/knot"
    file: "%s.zone"

zone:
  - domain: yori.dn42
    notify: [ 227, 229 ]
    acl: [ acl_slave, acl_knsupdate ]
  - domain: 224/28.234.20.172.in-addr.arpa
    notify: [ 227, 229 ]
    acl: [ acl_slave, acl_knsupdate ]
  - domain: 0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa
    notify: [ 227, 229 ]
    acl: [ acl_slave, acl_knsupdate ]
#  # Secondary zone
#  - domain: example.net
#    master: primary
{/collapse-item}
{/collapse}

Slave servers

A slave's configuration is largely the same as the master's; change the listen addresses to the slave's own and adjust the zone section:

--- a/knot.conf
+++ b/knot.conf
zone:
  - domain: <DN42 domain>
-   notify: [ <slave 1 ID>, <slave 2 ID> ]
-   acl: [ acl_slave, acl_knsupdate ]
+   master: <master ID>
+   zonefile-load: whole
+   acl: acl_master
  - domain: <IPv4 reverse zone>
-   notify: [ <slave 1 ID>, <slave 2 ID> ]
-   acl: [ acl_slave, acl_knsupdate ]
+   master: <master ID>
+   zonefile-load: whole
+   acl: acl_master
  - domain: <IPv6 reverse zone>
-   notify: [ <slave 1 ID>, <slave 2 ID> ]
-   acl: [ acl_slave, acl_knsupdate ]
+   master: <master ID>
+   zonefile-load: whole
+   acl: acl_master

The master ID is the server ID defined under remote for whichever machine you chose as master.

{collapse}
{collapse-item label="Example"}
server:
    rundir: "/run/knot"
    user: knot:knot
    automatic-acl: on
    listen: [ 172.20.234.227@53, fd18:3e15:61d0::3@53, localhost@53, 127.0.0.1@53 ]

log:
  - target: syslog
    any: info

database:
    storage: "/var/lib/knot"

# hmac-sha256:key_knsupdate:<key>
key:
  - id: key_knsupdate
    algorithm: hmac-sha256
    secret: <key>

remote:
  - id: 225
    address: 172.20.234.225@53
  - id: 227
    address: 172.20.234.227@53
  - id: 229
    address: 172.20.234.229@53

acl:
  - id: acl_slave
    key: key_knsupdate
    action: transfer
  - id: acl_master
    key: key_knsupdate
    action: notify
  - id: acl_knsupdate
    key: key_knsupdate
    action: update

template:
  - id: default
    storage: "/var/lib/knot"
    file: "%s.zone"

zone:
  - domain: yori.dn42
    master: 225
    zonefile-load: whole
    acl: acl_master
  - domain: 224/28.234.20.172.in-addr.arpa
    master: 225
    zonefile-load: whole
    acl: acl_master
  - domain: 0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa
    master: 225
    zonefile-load: whole
    acl: acl_master
{/collapse-item}
{/collapse}

After writing the configuration files, restart Knot DNS:

systemctl restart knot

Writing the Zone Files

Unless stated otherwise, any record value (as opposed to owner name) in this section that holds a domain name must be written as an FQDN per RFC 1034. Everything in this section is done on the master DNS server.

DN42 domain

Go to /var/lib/knot and create the file <dn42 domain>.zone.

SOA record

The first record of the zone must be the SOA (Start of Authority) record, which carries basic information about the zone such as the primary NS server. Fill in:

@ <TTL> SOA <primary NS> <contact email> <serial> <AXFR refresh> <AXFR retry> <AXFR expire> <minimum TTL>

- @ stands for the zone apex itself; leave it unchanged.
- TTL: the TTL of this SOA record.
- Primary NS: the zone's primary authoritative NS server; it may be a name inside the zone. My primary server is 172.20.234.225 and I want ns1.yori.dn42. to point at it, so I write ns1.yori.dn42. here.
- Contact email: the email address with the @ replaced by a dot — my address i@iyoroy.cn becomes i.iyoroy.cn.
- Serial: a 10-digit number (per RFC 1912) versioning the zone file. Other DNS servers re-fetch the records when they see the serial increase after pulling the SOA. The usual encoding is date plus sequence number, so increment it after every change.
- AXFR refresh: the interval between two pulls by a slave.
- AXFR retry: how long a slave waits before retrying after a failed pull.
- AXFR expire: after pulls keep failing, the maximum time a slave may keep serving the last successfully pulled data before it stops answering.
- Minimum TTL: the zone-wide minimum TTL; no record refreshes sooner than this.

{collapse}
{collapse-item label="Example"}
; SOA
@ 3600 SOA ns1.yori.dn42. i.iyoroy.cn. 2025072705 60 60 1800 60
{/collapse-item}
{/collapse}

NS records

@ <TTL> NS <NS server 1>
@ <TTL> NS <NS server 2>
@ <TTL> NS <NS server 3>

Add one record per NS server you actually run.

{collapse}
{collapse-item label="Example"}
; NS
@ 3600 NS ns1.yori.dn42.
@ 3600 NS ns2.yori.dn42.
@ 3600 NS ns3.yori.dn42.
{/collapse-item}
{/collapse}

A, AAAA, CNAME and other records

Use the following format:

<hostname> <TTL> <type> <value>

If your NS records point at hosts under your own DN42 domain, be sure to add A and/or AAAA records for them.

{collapse}
{collapse-item label="Example"}
; A
ns1 600 A 172.20.234.225
ns2 600 A 172.20.234.227
ns3 600 A 172.20.234.229
hkg-cn.node 600 A 172.20.234.225
nkg-cn.node 600 A 172.20.234.226
tyo-jp.node 600 A 172.20.234.227
hfe-cn.node 600 A 172.20.234.228
lax-us.node 600 A 172.20.234.229
; AAAA
ns1 600 AAAA fd18:3e15:61d0::1
ns2 600 AAAA fd18:3e15:61d0::3
ns3 600 AAAA fd18:3e15:61d0::5
hkg-cn.node 600 AAAA fd18:3e15:61d0::1
nkg-cn.node 600 AAAA fd18:3e15:61d0::2
tyo-jp.node 600 AAAA fd18:3e15:61d0::3
hfe-cn.node 600 AAAA fd18:3e15:61d0::4
lax-us.node 600 AAAA fd18:3e15:61d0::5
{/collapse-item}
{collapse-item label="Full example"}
/var/lib/knot/yori.dn42.zone
; SOA
@ 3600 SOA ns1.yori.dn42. i.iyoroy.cn. 2025072705 60 60 1800 60
; NS
@ 3600 NS ns1.yori.dn42.
@ 3600 NS ns2.yori.dn42.
@ 3600 NS ns3.yori.dn42.
; A
ns1 600 A 172.20.234.225
ns2 600 A 172.20.234.227
ns3 600 A 172.20.234.229
hkg-cn.node 600 A 172.20.234.225
nkg-cn.node 600 A 172.20.234.226
tyo-jp.node 600 A 172.20.234.227
hfe-cn.node 600 A 172.20.234.228
lax-us.node 600 A 172.20.234.229
; AAAA
ns1 600 AAAA fd18:3e15:61d0::1
ns2 600 AAAA fd18:3e15:61d0::3
ns3 600 AAAA fd18:3e15:61d0::5
hkg-cn.node 600 AAAA fd18:3e15:61d0::1
nkg-cn.node 600 AAAA fd18:3e15:61d0::2
tyo-jp.node 600 AAAA fd18:3e15:61d0::3
hfe-cn.node 600 AAAA fd18:3e15:61d0::4
lax-us.node 600 AAAA fd18:3e15:61d0::5
{/collapse-item}
{/collapse}

IPv4 reverse zone

Under /var/lib/knot, create the zone file named after <IPv4 reverse zone>, replacing / with _. My block is 172.20.234.224/28 and the reverse zone is 224/28.234.20.172.in-addr.arpa, so the filename is 224_28.234.20.172.in-addr.arpa.zone. Fill in the records:

; SOA
@ <TTL> SOA <primary NS> <contact email> <serial> <AXFR refresh> <AXFR retry> <AXFR expire> <minimum TTL>
; NS
@ <TTL> NS <NS server 1>
@ <TTL> NS <NS server 2>
@ <TTL> NS <NS server 3>
; PTR
<last octet of the IPv4 address> <TTL> PTR <reverse DNS value>
<last octet of the IPv4 address> <TTL> PTR <reverse DNS value>
<last octet of the IPv4 address> <TTL> PTR <reverse DNS value>
...

- SOA and NS records are the same as above.
- Last octet: the final one of the four octets of the DN42 IPv4 address bound to the device. My HK node is assigned 172.20.234.225, so I write 225.

{collapse}
{collapse-item label="Example"}
224_28.234.20.172.in-addr.arpa.zone
; SOA
@ 3600 SOA ns1.yori.dn42. i.iyoroy.cn. 2025072802 60 60 1800 60
; NS
@ 3600 NS ns1.yori.dn42.
@ 3600 NS ns2.yori.dn42.
@ 3600 NS ns3.yori.dn42.
; PTR
225 600 PTR hkg-cn.node.yori.dn42.
226 600 PTR nkg-cn.node.yori.dn42.
227 600 PTR tyo-jp.node.yori.dn42.
228 600 PTR hfe-cn.node.yori.dn42.
229 600 PTR lax-us.node.yori.dn42.
{/collapse-item}
{/collapse}

You may wonder why the zone name carries the CIDR mask, unlike the clearnet convention of simply reversing the octets (e.g. 234.20.172.in-addr.arpa) — and if you test locally, you will find that reverse-resolving your own IP directly fails. The reason is DN42's distributed registry mechanism: a single zone file cannot cover every reverse-lookup entry point of your block (that is, the .in-addr.arpa name of each individual IP). To solve this, once your PR is merged the official DN42 DNS adds CNAME redirects for your block on its authoritative servers, pointing each individual IP's PTR name at your CIDR-style zone, like this:

~$ dig PTR 225.234.20.172.in-addr.arpa +short
225.224/28.234.20.172.in-addr.arpa. # <-- CNAME added by the registry (redirect)
hkg-cn.node.yori.dn42.              # <-- the final PTR record returned

When an external resolver looks up the reverse record of a specific IP such as 172.20.234.225 (querying 225.234.20.172.in-addr.arpa), the official DNS returns a CNAME pointing at the corresponding name inside your CIDR-named zone (225.224/28.234.20.172.in-addr.arpa). The final PTR record is then served by the authoritative DNS you configured.

IPv6 reverse zone

Under /var/lib/knot, create the zone file named after <IPv6 reverse zone>. My block is fd18:3e15:61d0::/48 and the reverse zone is 0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa, so the filename is 0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa.zone. Fill in the records:

; SOA
@ <TTL> SOA <primary NS> <contact email> <serial> <AXFR refresh> <AXFR retry> <AXFR expire> <minimum TTL>
; NS
@ <TTL> NS <NS server 1>
@ <TTL> NS <NS server 2>
@ <TTL> NS <NS server 3>
; PTR
<reversed last 20 nibbles> <TTL> PTR <reverse DNS value>
<reversed last 20 nibbles> <TTL> PTR <reverse DNS value>
<reversed last 20 nibbles> <TTL> PTR <reverse DNS value>
...
- SOA and NS records are handled as above.
- For the PTR owner names, take the 80 bits of the host's IPv6 address after removing the /48 prefix, expand them into 20 hexadecimal characters, reverse them, and separate them with dots. My Hong Kong node's IPv6 is fd18:3e15:61d0::1, which expands to fd18:3e15:61d0:0000:0000:0000:0000:0001, so the owner name is 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.

{collapse}
{collapse-item label="Example"}
0.d.1.6.5.1.e.3.8.1.d.f.ip6.arpa.zone
; SOA
@ 3600 SOA ns1.yori.dn42. i.iyoroy.cn. 2025072802 60 60 1800 60
; NS
@ 3600 NS ns1.yori.dn42.
@ 3600 NS ns2.yori.dn42.
@ 3600 NS ns3.yori.dn42.
; PTR
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 600 PTR hkg-cn.node.yori.dn42.
2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 600 PTR nkg-cn.node.yori.dn42.
3.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 600 PTR tyo-jp.node.yori.dn42.
4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 600 PTR hfe-cn.node.yori.dn42.
5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 600 PTR lax-us.node.yori.dn42.
{/collapse-item}
{/collapse}

Verifying the setup

After saving everything, run knotc reload once on each DNS server. Barring surprises, you should see the slaves sync the master's zone files, and querying a specific server with dig or nslookup should now return your records.

Registration

Domain

Clone the DN42 registry, go to data/dns, create a file named after the domain you intend to register, and fill in:

domain: <the domain you intend to register>
admin-c: <admin NIC handle>
tech-c: <tech NIC handle>
mnt-by: <maintainer>
nserver: <NS1 name> <NS1 IP>
nserver: <NS2 name> <NS2 IP>
nserver: <NS3 name> <NS3 IP>
...
source: DN42

For admin-c, tech-c, and mnt-by, see DN42 Exploration Diary - Ep.1 Joining the DN42 Network.

{collapse}
{collapse-item label="Example"}
data/dns/yori.dn42
domain: yori.dn42
admin-c: IYOROY-DN42
tech-c: IYOROY-DN42
mnt-by: IYOROY-MNT
nserver: ns1.yori.dn42 172.20.234.225
nserver: ns1.yori.dn42 fd18:3e15:61d0::1
nserver: ns2.yori.dn42 172.20.234.227
nserver: ns2.yori.dn42 fd18:3e15:61d0::3
nserver: ns3.yori.dn42 172.20.234.229
nserver: ns3.yori.dn42 fd18:3e15:61d0::5
source: DN42
{/collapse-item}
{/collapse}

IPv4 reverse zone

Go to data/inetnum, find your registered block, and add nserver fields pointing at your own DNS servers:

nserver: <your DNS server name>
nserver: <your DNS server name>
...

{collapse}
{collapse-item label="Example"}
diff --git a/data/inetnum/172.20.234.224_28 b/data/inetnum/172.20.234.224_28
index 50c800945..5ad60e23d 100644
--- a/data/inetnum/172.20.234.224_28
+++ b/data/inetnum/172.20.234.224_28
@@ -8,3 +8,6 @@ tech-c: IYOROY-DN42
 mnt-by: IYOROY-MNT
 status: ASSIGNED
 source: DN42
+nserver: ns1.yori.dn42
+nserver: ns2.yori.dn42
+nserver: ns3.yori.dn42
{/collapse-item}
{/collapse}

IPv6 reverse zone

Go to data/inet6num, find your registered block, and add nserver fields pointing at your own DNS servers:

nserver: <your DNS server name>
nserver: <your DNS server name>
...

{collapse}
{collapse-item label="Example"}
diff --git a/data/inet6num/fd18:3e15:61d0::_48 b/data/inet6num/fd18:3e15:61d0::_48
index 53f0de06d..1ae067b00 100644
--- a/data/inet6num/fd18:3e15:61d0::_48
+++ b/data/inet6num/fd18:3e15:61d0::_48
@@ -8,3 +8,6 @@ tech-c: IYOROY-DN42
 mnt-by: IYOROY-MNT
 status: ASSIGNED
 source: DN42
+nserver: ns1.yori.dn42
+nserver: ns2.yori.dn42
+nserver: ns3.yori.dn42
{/collapse-item}
{/collapse}

Submit the PR and wait for the merge

Push the changes and open a Pull Request. Because anyone in DN42 may run their own recursive DNS, it can take about a week for the DNS configuration to take full effect — though in my case the public DNS (172.20.0.53) was already answering for my records within half a day of the merge.

Special thanks to たのしい for showing me how IPv4 reverse resolution in DN42 differs from the public internet.

Reference articles:
https://www.haiyun.me/archives/1398.html
https://www.jianshu.com/p/7d69ec2976c7
https://www.potat0.cc/posts/20220726/Register_DN42_Domain/
https://bbs.csdn.net/topics/393775423
https://blog.snorlax.blue/knot-reverse-dns-kickstart/
http://www.kkdlabs.jp/dns/automatic-dnssec-signing-by-knot-dns/
https://lantian.pub/article/modify-website/register-own-domain-in-dn42.lantian/
https://datatracker.ietf.org/doc/html/rfc2317
https://datatracker.ietf.org/doc/html/rfc3152
https://datatracker.ietf.org/doc/html/rfc1912#section-2.2
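To sanity-check the whole chain after a reload, a few quick queries with kdig (shipped in the knot-dnsutils package installed earlier) against the servers from this article — a sketch; substitute your own zones and addresses:

kdig @172.20.234.227 yori.dn42 SOA                                  # a slave answering means the zone transfer worked
kdig @172.20.234.225 ns1.yori.dn42 A +short                         # forward record on the master
kdig @172.20.234.225 225.224/28.234.20.172.in-addr.arpa PTR +short  # PTR inside the CIDR-style reverse zone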
03/08/2025
49 Views
0 Comments
1 Star
DN42 Exploration Diary - Ep.2 Building an Internal Network with OSPF and Enabling iBGP
Foreword

I am a novice in BGP. This article may contain imprecise content, naive understandings, or elementary mistakes — I kindly ask the experts to be lenient. If you find any issues, you are welcome to contact me via email and I will correct them as soon as possible. If you find this unacceptable, it is recommended to close this article now.

Changelog

{timeline}
{timeline-item color="#50BFFF"}
22 July 2025: first version published, using VXLAN over WireGuard tunnels
{/timeline-item}
{timeline-item color="#50BFFF"}
25 July 2025: updated the tunnel scheme to use type ptp; so OSPF traffic runs directly over WireGuard (special thanks to Nuro Trance for the guidance!)
{/timeline-item}
{timeline-item color="#50BFFF"}
8 August 2025: added the iBGP explanation and configuration
{/timeline-item}
{timeline-item color="#4F9E28"}
27 August 2025: updated the node topology diagram
{/timeline-item}
{/timeline}

Why internal routing is needed

As the number of nodes grows, we need a proper way to handle routing inside our own AS. BGP only routes packets as far as an AS, which creates a problem: if my nodes A and B both peer with others, routers see the two devices as belonging to one AS, and the reply to a request sent from A may come back via B. Without internal routing, A never receives that reply. We therefore need to make sure every device in our internal network can reach every other. Common approaches:

1. Overlay tools such as ZeroTier: fairly simple — run a client on every node and you get P2P connectivity between all of them.
2. Manually building $\frac{n(n-1)}{2}$ tunnels with a P2P tool such as WireGuard: same result as option 1, but the workload grows quadratically as nodes are added.
3. Manually building fewer than $\frac{n(n-1)}{2}$ tunnels with a P2P tool such as WireGuard, then running an interior routing protocol such as OSPF or Babel on top. This is flexible and makes adding new nodes easy later; the downside is that it is riskier — a misconfiguration can blow up DN42.

So I decided to take the risk.

Node topology

graph LR
A[HKG<br>172.20.234.225<br>fd18:3e15:61d0::1]
B[NKG<br>172.20.234.226<br>fd18:3e15:61d0::2]
C[TYO<br>172.20.234.227<br>fd18:3e15:61d0::3]
D[LAX<br>172.20.234.229<br>fd18:3e15:61d0::5]
B <--> A
C <--> A
A <--> D
C <--> D

Updating Bird2 to v2.16 or later

I want to carry IPv4 OSPF data over IPv6 link-local addresses, and BIRD supports this only from 2.16 onward, so an update is required. If you would rather not update, you can skip this step for now.

Install the latest Bird2 with:

sudo apt update && sudo apt -y install apt-transport-https ca-certificates wget lsb-release
sudo wget -O /usr/share/keyrings/cznic-labs-pkg.gpg https://pkg.labs.nic.cz/gpg
echo "deb [signed-by=/usr/share/keyrings/cznic-labs-pkg.gpg] https://pkg.labs.nic.cz/bird2 $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/cznic-labs-bird2.list
sudo apt update && sudo apt install bird2 -y

Configuring the tunnels

[Interface]
PrivateKey = <local WireGuard private key>
ListenPort = <listen port>
Table = off
Address = <IPv6 LLA>/64
PostUp = sysctl -w net.ipv6.conf.%i.autoconf=0

[Peer]
PublicKey = <peer public key>
Endpoint = <peer public endpoint>
AllowedIPs = 10.0.0.0/8, 172.20.0.0/14, 172.31.0.0/16, fd00::/8, fe00::/8, ff02::5

ff02::5 is the link-local scope multicast address reserved for OSPFv3 routers and must be added to AllowedIPs.

If you are running a BIRD older than v2.16, also configure an IPv4 address on the tunnel — it need not be a DN42 IPv4; any other private address works. See:

{collapse}
{collapse-item label="WireGuard example including IPv4"}
[Interface]
PrivateKey = <local WireGuard private key>
ListenPort = <listen port>
Table = off
Address = <IPv6 LLA>/64
PostUp = ip addr add 100.64.0.225/32 peer 100.64.0.226/32 dev %i
PostUp = sysctl -w net.ipv6.conf.%i.autoconf=0

[Peer]
PublicKey = <peer public key>
Endpoint = <peer public endpoint>
AllowedIPs = 10.0.0.0/8, 172.20.0.0/14, 100.64.0.0/16, 172.31.0.0/16, fd00::/8, fe00::/8, ff02::5

Replace 100.64.0.225 and 100.64.0.226 with your own and your peer's IPv4, and remember to include them in AllowedIPs.
{/collapse-item}
{/collapse}

Enabling OSPF

This assumes you already have the basic bird configuration described in the previous article.

Create a file named ospf.conf under /etc/bird and fill in:

protocol ospf v3 <name> {
    ipv4 {
        import where is_self_net() && source != RTS_BGP;
        export where is_self_net() && source != RTS_BGP;
    };
    include "/etc/bird/ospf/*";
};

protocol ospf v3 <name> {
    ipv6 {
        import where is_self_net_v6() && source != RTS_BGP;
        export where is_self_net_v6() && source != RTS_BGP;
    };
    include "/etc/bird/ospf/*";
};

In theory OSPF v2 should handle IPv4, but since IPv4 here is exchanged over IPv6 link-local addresses, OSPF v3 is used for IPv4 as well.

The filter rules ensure that only routes within our own prefixes are carried by OSPF, and that routes from external BGP are filtered out.

Never casually use import all; export all; — it can lead to route hijacking and affect the entire DN42 network. OSPF should only handle routes internal to your own prefixes.

{collapse}
{collapse-item label="Configuration example"}
/etc/bird/ospf.conf
protocol ospf v3 dn42_iyoroynet_ospf {
    ipv4 {
        import where is_self_net() && source != RTS_BGP;
        export where is_self_net() && source != RTS_BGP;
    };
    include "/etc/bird/ospf/*";
};

protocol ospf v3 dn42_iyoroynet_ospf6 {
    ipv6 {
        import where is_self_net_v6() && source != RTS_BGP;
        export where is_self_net_v6() && source != RTS_BGP;
    };
    include "/etc/bird/ospf/*";
};
{/collapse-item}
{/collapse}
Next, create the /etc/bird/ospf directory and an area configuration file inside it (e.g. /etc/bird/ospf/0.conf) describing the area:

area 0.0.0.0 {
    interface "<DN42 dummy interface>" {
        stub;
    };
    interface "<wg0 interface name>" {
        cost 80;  # adjust to your network
        type ptp;
    };
    interface "<wg1 interface name>" {
        cost 100; # adjust to your network
        type ptp;
    };
    # and so on
};

- Area 0.0.0.0 is the backbone.
- The dummy interface here is the DN42 virtual interface from the previous article.
- cost is nominally an input to path-cost calculation, but in a scenario like DN42 — modest bandwidth needs, latency-sensitive — you can simply enter the latency; OSPF automatically takes the route with the lowest total cost.

{collapse}
{collapse-item label="Configuration example"}
/etc/bird/ospf/0.conf
area 0.0.0.0 {
    interface "dn42" {
        stub;
    };
    interface "dn42_hkg" {
        cost 80;
        type ptp;
    };
    interface "dn42_hfe" {
        cost 150;
        type ptp;
    };
    interface "dn42_lax" {
        cost 100;
        type ptp;
    };
};
{/collapse-item}
{/collapse}

Finally, open /etc/bird/bird.conf and include the OSPF configuration at the end:

include "ospf.conf";

Run birdc configure; birdc show protocols should then show the OSPF instances as Running. If not, re-check the configuration steps. At this point, two machines that are not directly connected should be able to ping each other.

Configuring iBGP

Before peering from multiple locations, your nodes must first have a complete picture of your own network's topology. On top of all the external BGP sessions, this requires configuring one more key component: internal BGP, i.e. iBGP.

Why it is needed

iBGP guarantees that every BGP-speaking router inside the AS learns the complete BGP routing information for external destinations, which ensures that:

- internal routers can choose the optimal egress path;
- traffic is correctly steered to the border router responsible for a given external network;
- even with multiple border routers connected to the same external network, internal routers can pick the best exit according to policy.

Compared with pointing a default route at a border router inside the AS, iBGP provides precise external routing information, letting internal routers make smarter forwarding decisions.

Drawbacks and alternatives

To prevent routes from spreading uncontrolled inside the AS and forming loops, an iBGP router does not re-advertise routes learned from one iBGP neighbor to other iBGP neighbors. Traditional iBGP therefore requires a full mesh of iBGP sessions between all iBGP routers in the same AS. (Still $\frac{n(n-1)}{2}$ sessions — no way around it. At least once OSPF is up, configuring iBGP is simpler than configuring tunnels.)

The alternatives are:

- Route Reflectors (RR): the RR router manages all routing information inside the AS; the drawback is that an RR failure paralyzes the entire network (which is not very decentralized).
- BGP Confederations: virtualize the AS's routers into sub-ASes, treat the links between routers as BGP, and strip the internal AS path when advertising outward.

I have not tried the latter two; some possibly useful references are below. This article focuses on configuring iBGP.

DN42 实验网络介绍及注册教程（2022-12 更新） - Lan Tian @ Blog
Bird 配置 BGP Confederation，及模拟 Confederation（2020-06-07 更新） - Lan Tian @ Blog

Writing the iBGP configuration

Create the file ibgp.conf under /etc/bird and fill in:

template bgp ibgpeers {
    local as OWNAS;
    ipv4 {
        import where source = RTS_BGP && is_valid_network() && !is_self_net();
        export where source = RTS_BGP && is_valid_network() && !is_self_net();
        next hop self;
        extended next hop;
    };
    ipv6 {
        import where source = RTS_BGP && is_valid_network_v6() && !is_self_net_v6();
        export where source = RTS_BGP && is_valid_network_v6() && !is_self_net_v6();
        next hop self;
    };
};

include "ibgp/*";

- The import and export rules ensure iBGP only handles routes learned via BGP, and filter out IGP routes to prevent loops.
- next hop self is mandatory: it tells BIRD to rewrite the next hop to the border router's own IP address (rather than the original external next hop) when exporting routes to iBGP neighbors. Internal routers cannot reach the external neighbor addresses directly, so without the rewrite those routes would be treated as unreachable. With it, internal routers only need to deliver traffic to the border router via IGP routes, and the border router performs the final external forwarding.
- Because I want MP-BGP sessions established over IPv6, routing IPv4 over IPv6, extended next hop is enabled on the IPv4 channel.

Next, create the /etc/bird/ibgp directory and add one iBGP peer configuration per node:

protocol bgp 'dn42_ibgp_<node>' from ibgpeers {
    neighbor <that node's IPv6 ULA address> as OWNAS;
};

{collapse}
{collapse-item label="Example"}
/etc/bird/ibgp/hkg.conf:
protocol bgp 'dn42_ibgp_HKG' from ibgpeers {
    neighbor fd18:3e15:61d0::1 as OWNAS;
};
{/collapse-item}
{/collapse}

Note: each node needs (n-1) iBGP sessions, one to every other machine in the AS — and this is also why ULA addresses are used. ULAs ensure that even if the WireGuard tunnel between two nodes drops, the iBGP session can still come up over the internal routes OSPF has built; otherwise the whole internal network would collapse.

Finally, include ibgp.conf in /etc/bird/bird.conf:

include "ibgp.conf";

and run birdc configure to apply the configuration.

Reference articles:
BIRD 与 BGP 的新手开场 - 海上的宫殿
萌新入坑 DN42 之 —— 基于 tailscale + vxlan + OSPF 的组网 – 米露小窝
使用 Bird2 配置 WireGuard + OSPF 实现网络的高可用 | bs' realm
DN42 实验网络介绍及注册教程（2022-12 更新） - Lan Tian @ Blog
如何引爆 DN42 网络（2023-05-12 更新） - Lan Tian @ Blog
Bird 配置 BGP Confederation，及模拟 Confederation（2020-06-07 更新） - Lan Tian @ Blog
深入解析OSPF路径开销、优先级和计时器 - 51CTO
New release 2.16 | BIRD Internet Routing Daemon
第一章·第二节 如何在 Linux 上安装最新版本的 BIRD？ | BIRD 中文文档
[DN42] 使用 OSPF ptp 搭建内网与IBGP配置 – Xe_iu's Blog | Xe_iu的杂物间
[译] dn42 多服务器环境中的 iBGP 与 IGP 配置 | liuzhen932 的小窝
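Once both protocols are up, a couple of birdc queries make a quick health check — a sketch using the protocol names from this article (output formats vary slightly between BIRD versions):

birdc show ospf neighbors dn42_iyoroynet_ospf   # ptp adjacencies should reach state Full
birdc show protocols all dn42_ibgp_HKG          # the iBGP session should be Established
birdc show route protocol dn42_ibgp_HKG         # external routes learned from that node via iBGP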
22/07/2025
135 Views
1 Comment
1 Star