6 related articles were found.
Cross-Platform Service Programming Diary Ep.2 - Inter-Process Communication (IPC)
## Previously

The previous article implemented unified log management. This article implements inter-process message communication, i.e., IPC.

## Analysis

### Windows

On Windows, inter-process communication is primarily achieved through pipes, which come in two kinds: named pipes and anonymous pipes. Anonymous pipes are unidirectional and are typically used for communication between a parent and child process[2]. Named pipes can be unidirectional or duplex and support one-to-many communication[3]. As the name implies, a named pipe is identified solely by its name, so two processes can communicate as long as they connect to a pipe with the same name.

We need bidirectional communication between processes, so we use named pipes. The general idea: a process started in server mode (the receiver) spawns a thread that creates a named pipe and listens on it; when the pipe is connected, it reads data from it. A process started as a sender attempts to connect to the pipe with the same name and writes the message content.

### Linux

On Linux, sockets are typically used for inter-process communication. Unlike listening on a network port, however, IPC usually means listening on a .sock file[5]. Common service applications such as the Docker daemon and MySQL use this method.

The general idea is similar: a process started in server mode creates a socket listener and waits to receive messages from it, while the sender connects to the socket and sends a message. Analogous to the named pipe's name above, the socket maps to a unique .sock file, and the sender only needs to open this file to send the message. (In practice it is not opened like a regular file but through socket-specific calls[5].)

## Code Implementation

### Initialization

To share a common main codebase, I used the same approach as the previous article, differentiating the platforms via macro definitions and placing the Windows and Linux code in the header files service-windows.h and service-linux.h respectively:

```cpp
#ifdef _WIN32
#include "service-windows.h"
#elif defined(__linux__)
#include "service-linux.h"
#endif
```

When the receiver process starts, it creates a thread to handle message reception (using std::thread as the threading library):

```cpp
thread_bind = std::thread(bind_thread_main);
```

## Listener Section

### Windows

On Windows we simply attempt to read data from a named pipe with the specified name. Because the pipe is created in blocking mode (PIPE_WAIT is set in the DWORD dwPipeMode parameter of CreateNamedPipe below), ConnectNamedPipe blocks, so there is no performance loss from busy looping.
```cpp
void bind_thread_main() {
    while (!exit_requested.load()) {
        HANDLE hPipe = CreateNamedPipe(
            PIPE_NAME,
            PIPE_ACCESS_DUPLEX,
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES,
            1024,   // Output buffer size
            1024,   // Input buffer size
            0,      // Default timeout
            NULL);
        if (hPipe == INVALID_HANDLE_VALUE) {
            service_log.push(LEVEL_WARN, "Failed to create pipe: %d", GetLastError());
            continue;
        }
        if (ConnectNamedPipe(hPipe, NULL) || GetLastError() == ERROR_PIPE_CONNECTED) {
            char buffer[1024];
            DWORD bytesRead;
            if (ReadFile(hPipe, buffer, sizeof(buffer) - 1, &bytesRead, NULL)) {
                buffer[bytesRead] = '\0';
                m_queueMsg.push(buffer);
                service_log.push(LEVEL_VERBOSE, "Message received: %s", buffer);
            }
            FlushFileBuffers(hPipe);
            DisconnectNamedPipe(hPipe);
            CloseHandle(hPipe);
        } else {
            CloseHandle(hPipe);
        }
    }
}
```

### Linux

To guard against creation failure, the code first deletes any leftover .sock file that was not cleaned up, i.e., the unlink(SOCKET_PATH) call below; SOCKET_PATH is a global constant holding the path to the socket file. When creating the socket, the address family is AF_UNIX, indicating a UNIX domain socket (the .sock file kind; a network socket would use AF_INET). The timeval block sets a receive timeout: if accept blocks longer than SOCKET_TIMEOUT (in seconds), it stops blocking and returns an error. After creating the socket, binding and listening proceed as usual.

```cpp
void bind_thread_main() {
    unlink(SOCKET_PATH);
    int server_fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (server_fd == -1) {
        service_log.push(LEVEL_FATAL, "Failed to create socket");
        exit_requested.store(true);
        return;
    }
    struct timeval tv;
    tv.tv_sec = SOCKET_TIMEOUT;
    tv.tv_usec = 0;
    setsockopt(server_fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SOCKET_PATH, sizeof(addr.sun_path) - 1);
    if (bind(server_fd, (sockaddr*)&addr, sizeof(addr)) == -1) {
        service_log.push(LEVEL_FATAL, "Bind failed");
        close(server_fd);
        exit_requested.store(true);
        return;
    }
    if (listen(server_fd, 5) == -1) {
        service_log.push(LEVEL_FATAL, "Listen failed");
        close(server_fd);
        exit_requested.store(true);
        return;
    }
    while (!exit_requested.load()) {
        int client_fd = accept(server_fd, nullptr, nullptr);
        if (client_fd != -1) {
            char buffer[1024];
            int bytes_read = read(client_fd, buffer, sizeof(buffer) - 1);
            if (bytes_read > 0) {
                buffer[bytes_read] = '\0';
                m_queueMsg.push(buffer);
                service_log.push(LEVEL_VERBOSE, "Message received: %s", buffer);
            }
            close(client_fd);
        } else {
            if (errno == EWOULDBLOCK || errno == EAGAIN) {
                continue;
            }
            service_log.push(LEVEL_WARN, "Failed to accept socket connection");
        }
    }
}
```

After reading a message, both versions push it into the blocking queue m_queueMsg.
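The consumer side of m_queueMsg is not shown here, so as a minimal sketch of my own (assuming the BlockingQueue from Ep.1 and a hypothetical handle_command() dispatcher, neither taken from the sample code), the main processing thread might drain the queue like this:

```cpp
// Hypothetical consumer loop for the IPC queue; not part of the article's sample code.
// Assumes: BlockingQueue<std::string> m_queueMsg (from Ep.1), std::atomic<bool> exit_requested,
// and a user-supplied handle_command() that interprets the message.
void process_thread_main() {
    while (!exit_requested.load()) {
        std::string msg = m_queueMsg.take(); // blocks until the listener pushes a message
        handle_command(msg);                 // dispatch to application logic
    }
    // Caveat: take() blocks forever on an empty queue, so a real shutdown path
    // would push a sentinel message, as the Log destructor in Ep.1 does.
}
```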
## Sender Section

### Windows

Open the specified pipe and write the message content:

```cpp
bool send_message(const std::string& msg) {
    if (!WaitNamedPipe(PIPE_NAME, NMPWAIT_WAIT_FOREVER)) {
        service_log.push(LEVEL_ERROR, "Failed to find valid pipe: %d", GetLastError());
        return false;
    }
    HANDLE hPipe = CreateFile(
        PIPE_NAME,
        GENERIC_WRITE,
        0,
        NULL,
        OPEN_EXISTING,
        0,
        NULL);
    if (hPipe == INVALID_HANDLE_VALUE) {
        service_log.push(LEVEL_ERROR, "Failed to connect: %d", GetLastError());
        return false;
    }
    DWORD bytesWritten;
    if (WriteFile(hPipe, msg.c_str(), (DWORD)msg.size(), &bytesWritten, NULL)) {
        service_log.push(LEVEL_VERBOSE, "Message sent: %s", msg.c_str());
        CloseHandle(hPipe);
        return true;
    } else {
        service_log.push(LEVEL_ERROR, "Message (%s) send failed: %d", msg.c_str(), GetLastError());
        CloseHandle(hPipe);
        return false;
    }
}
```

### Linux

Similarly, connect to the socket and send the data:

```cpp
bool send_message(const std::string& msg) {
    int sock = socket(AF_UNIX, SOCK_STREAM, 0);
    if (sock == -1) {
        service_log.push(LEVEL_ERROR, "Failed to create socket");
        return false;
    }
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SOCKET_PATH, sizeof(addr.sun_path) - 1);
    if (connect(sock, (sockaddr*)&addr, sizeof(addr)) == -1) {
        service_log.push(LEVEL_ERROR, "Connect failed");
        close(sock);
        return false;
    }
    if (write(sock, msg.c_str(), msg.size()) == -1) {
        service_log.push(LEVEL_ERROR, "Message send failed: %s", msg.c_str());
        close(sock);
        return false;
    } else {
        service_log.push(LEVEL_VERBOSE, "Message sent successfully: %s", msg.c_str());
        close(sock);
        return true;
    }
}
```

## Cleanup

There is little to clean up on Windows, but on Linux the socket file needs to be deleted:

```cpp
unlink(SOCKET_PATH);
```

Demo screenshots (Windows and Linux) omitted here.

Sample code download: IPCTest.zip

References:

1. https://learn.microsoft.com/zh-cn/windows/win32/ipc/pipes
2. https://learn.microsoft.com/zh-cn/windows/win32/ipc/anonymous-pipes
3. https://learn.microsoft.com/zh-cn/windows/win32/ipc/named-pipes
4. https://www.cnblogs.com/alantu2018/p/8493809.html
5. https://blog.csdn.net/dog250/article/details/100998838
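As a closing illustration of how the pieces could fit together, here is a minimal entry-point sketch of my own (not taken from IPCTest.zip) choosing between receiver and sender mode:

```cpp
// Hypothetical entry point, for illustration only: "prog send <msg>" acts as the
// sender; anything else starts the receiver thread. The real sample code may differ.
int main(int argc, char* argv[]) {
    if (argc >= 3 && std::string(argv[1]) == "send") {
        return send_message(argv[2]) ? 0 : 1; // sender mode: deliver one message and exit
    }
    thread_bind = std::thread(bind_thread_main); // receiver mode: start the listener
    // ... service work; message handling drains m_queueMsg as sketched earlier ...
    exit_requested.store(true); // request shutdown (unblocking the listener is glossed over)
    thread_bind.join();
    return 0;
}
```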
19/05/2025 · 150 Views · 0 Comments · 2 Stars
Cross-Platform Service Programming Diary Ep.1 - Unified Logging Management
A while ago, on a whim, I decided to write my own management program for a cross-platform console service application I was using, in order to add some features. So I designed a simple service operation flow.

{alert type="warning"}
The views and solutions in this series are designed by me based on my existing knowledge, with assistance from DeepSeek. They have not been rigorously tested, and I make no guarantees about their feasibility or stability in production environments.
{/alert}

## General Approach

The program is roughly divided into several threads, used for:

- Logging
- Target application instance management (potentially more than one thread)
- Listening for IPC messages
- Processing received IPC messages (main process)

This article focuses on the logging part.

## Design Rationale

Why dedicate a separate thread to logging? My reasoning: since this is inherently a multi-threaded architecture, a unified logging module is necessary. If each thread printed independently, two threads could easily write to the file or the console simultaneously and garble the log. Therefore, the general idea for logging is:

1. Define a queue storing log content and level.
2. Create a thread that continuously takes elements from the queue and, based on the configured log level, decides whether to print to the console or write to a file.
3. External components push log entries onto the queue.

## Some Detailed Considerations

- Ensure portability by preferring the STL, e.g., std::thread instead of pthread.
- Ensure thread safety by protecting shared variables with mutexes or similar mechanisms.
- Make the thread wait when the log queue is empty; I thought of writing a blocking queue similar to Java's BlockingQueue.
- Specify a log level; only entries at or above the threshold are saved or printed.
- Implement variadic arguments via va_list so the logging function feels like sprintf.

## Start Coding

With the above approach, the coding itself is quite simple.

### BlockingQueue

(Got lazy here and let DeepSeek write this part directly.)

To implement a thread-safe blocking queue where calling front() blocks until another thread adds an element, we combine a mutex (std::mutex) and a condition variable (std::condition_variable) to synchronize the threads:

- **Mutex (std::mutex).** All operations on the queue (push, front, pop, empty) acquire the lock first, ensuring only one thread modifies the queue at a time and avoiding data races.
- **Condition variable (std::condition_variable).** When front() is called on an empty queue, the thread releases the lock and blocks in cv_.wait() until another thread calls push() and wakes one waiting thread via cv_.notify_one(). cv_.wait() must be used with std::unique_lock, which releases the lock while waiting to avoid deadlock, and the predicate ([this] { return !queue_.empty(); }) guards against spurious wakeups.
- **Element retrieval and removal.** front() returns a copy of the front element (not a reference), so the caller gets the data after the queue's lock is released, avoiding dangling references. pop() must be called explicitly to remove the element, keeping the queue state controllable.
```cpp
#include <queue>              // queue
#include <mutex>              // mutex
#include <condition_variable> // condition_variable

template<typename T>
class BlockingQueue {
public:
    // Add an element to the queue
    void push(const T& item) {
        std::lock_guard<std::mutex> lock(mtx_);
        queue_.push(item);
        cv_.notify_one(); // Notify one waiting thread
    }

    // Get the front element (blocks until the queue is not empty)
    T front() {
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !queue_.empty(); }); // Block until non-empty
        return queue_.front();
    }

    // Get and remove the front element
    T take() {
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T item = std::move(queue_.front()); // Use move semantics to avoid a copy
        queue_.pop();
        return item;
    }

    // Remove the front element (non-blocking, must be called externally)
    void pop() {
        std::lock_guard<std::mutex> lock(mtx_);
        if (!queue_.empty()) {
            queue_.pop();
        }
    }

    // Check whether the queue is empty
    bool empty() const {
        std::lock_guard<std::mutex> lock(mtx_);
        return queue_.empty();
    }

private:
    mutable std::mutex mtx_;     // Mutex
    std::condition_variable cv_; // Condition variable
    std::queue<T> queue_;        // Internal queue
};
```

### Log Class

Log.h:

```cpp
#pragma once
#include <iostream>
#include <fstream>
#include <cstring>
#include <string>   // std::string, used by LogMsg
#include <vector>   // std::vector, used by Log::push
#include <thread>
#include <chrono>
#include <mutex>
#include <cstdio>
#include <cstdarg>
#include <atomic>
#include "BlockingQueue.h"

enum LogLevel {
    LEVEL_VERBOSE, LEVEL_INFO, LEVEL_WARN, LEVEL_ERROR, LEVEL_FATAL, LEVEL_OFF
};

struct LogMsg {
    short m_LogLevel;
    std::string m_strTimestamp;
    std::string m_strLogMsg;
};

class Log {
private:
    std::ofstream m_ofLogFile;        // Log file output stream
    std::mutex m_lockFile;            // File operation mutex
    std::thread m_threadMain;         // Background log processing thread
    BlockingQueue<LogMsg> m_msgQueue; // Thread-safe blocking queue
    short m_levelLog, m_levelPrint;   // File and console log level thresholds
    std::atomic<bool> m_exit_requested{ false }; // Thread exit flag

    std::string getTime();                                    // Get current timestamp
    std::string level2str(short level, bool character_only);  // Level to string
    void logThread();                                         // Background thread function

public:
    Log(short default_loglevel = LEVEL_WARN, short default_printlevel = LEVEL_INFO);
    ~Log();
    void push(short level, const char* msg, ...);     // Add a log entry (supports formatting)
    void set_level(short loglevel, short printlevel); // Set log levels
    bool open(std::string filename);                  // Open log file
    bool close();                                     // Close log file
};
```

Log.cpp:

```cpp
#include "Log.h"

std::string Log::getTime() {
    using sc = std::chrono::system_clock;
    std::time_t t = sc::to_time_t(sc::now());
    char buf[20];
#ifdef _WIN32
    std::tm timeinfo;
    localtime_s(&timeinfo, &t);
    sprintf_s(buf, "%04d.%02d.%02d-%02d:%02d:%02d",
        timeinfo.tm_year + 1900, timeinfo.tm_mon + 1, timeinfo.tm_mday,
        timeinfo.tm_hour, timeinfo.tm_min, timeinfo.tm_sec);
#else
    strftime(buf, 20, "%Y.%m.%d-%H:%M:%S", localtime(&t));
#endif
    return buf;
}

std::string Log::level2str(short level, bool character_only) {
    switch (level) {
    case LEVEL_VERBOSE: return character_only ? "V" : "Verbose";
    case LEVEL_WARN:    return character_only ? "W" : "Warning";
    case LEVEL_ERROR:   return character_only ? "E" : "Error";
    case LEVEL_FATAL:   return character_only ? "F" : "Fatal";
    }
    return character_only ? "I" : "Info";
}

void Log::logThread() {
    while (true) {
        LogMsg front = m_msgQueue.take(); // Block until a message arrives
        // Handle file writing
        if (front.m_LogLevel >= m_levelLog) {
            std::lock_guard<std::mutex> lock(m_lockFile); // RAII-managed lock
            if (m_ofLogFile) {
                m_ofLogFile << front.m_strTimestamp << ' '
                            << level2str(front.m_LogLevel, true) << ": "
                            << front.m_strLogMsg << std::endl;
            }
        }
        // Handle console printing
        if (front.m_LogLevel >= m_levelPrint) {
            printf("%s %s: %s\n", front.m_strTimestamp.c_str(),
                level2str(front.m_LogLevel, true).c_str(),
                front.m_strLogMsg.c_str());
        }
        // Exit condition: queue drained and exit flag set
        if (m_exit_requested.load() && m_msgQueue.empty())
            break;
    }
}

Log::Log(short default_loglevel, short default_printlevel) {
    set_level(default_loglevel, default_printlevel);
    m_threadMain = std::thread(&Log::logThread, this);
}

Log::~Log() {
    m_exit_requested.store(true);
    m_msgQueue.push({ LEVEL_INFO, getTime(), "Exit." }); // Wake a potentially blocked thread
    if (m_threadMain.joinable())
        m_threadMain.join();
    close(); // Ensure the file is closed
}

void Log::push(short level, const char* msg, ...) {
    va_list args;
    va_start(args, msg);
    const int len = vsnprintf(nullptr, 0, msg, args);
    va_end(args);
    if (len < 0) return;
    std::vector<char> buf(len + 1);
    va_start(args, msg);
    vsnprintf(buf.data(), buf.size(), msg, args);
    va_end(args);
    m_msgQueue.push({ level, getTime(), buf.data() });
}

void Log::set_level(short loglevel, short printlevel) {
    m_levelLog = loglevel;
    m_levelPrint = printlevel;
}

bool Log::open(std::string filename) {
    std::lock_guard<std::mutex> lock(m_lockFile);
    m_ofLogFile.open(filename.c_str(), std::ios::out);
    return (bool)m_ofLogFile;
}

bool Log::close() {
    std::lock_guard<std::mutex> lock(m_lockFile);
    m_ofLogFile.close();
    return false;
}
```

## Explanation

### Classes and Structures

**LogLevel enum.** Defines the log levels VERBOSE, INFO, WARN, ERROR, FATAL, and OFF. OFF should not be used as the level of a recorded entry; it exists only to set the threshold when all logging must be disabled.

**LogMsg struct.** Encapsulates one log message:

- m_LogLevel: the log level.
- m_strTimestamp: the timestamp string.
- m_strLogMsg: the log content.

### Member Variables

| Variable | Explanation |
| --- | --- |
| m_ofLogFile | File output stream for writing to the log file. |
| m_lockFile | Mutex protecting file operations. |
| m_threadMain | Background thread that consumes log messages. |
| m_msgQueue | Blocking queue storing pending log messages. |
| m_levelLog | Minimum level for writing to file (messages at or above this level are recorded). |
| m_levelPrint | Minimum level for printing to the console. |
| m_exit_requested | Atomic flag controlling log thread exit. |

### Functions

| Function | Explanation |
| --- | --- |
| getTime | Gets the current timestamp string (cross-platform implementation). |
| level2str | Converts a log level to a string (e.g., LEVEL_INFO → "I" or "Info"). |
| logThread | Background thread function: consumes queued messages and writes to file or console. |
| Constructor | Initializes the log levels and starts the background thread. |
| Destructor | Sets the exit flag and waits for the thread to finish, ensuring remaining messages are processed. |
| push | Formats a log message (supports variadic arguments) and pushes it onto the queue. |
| set_level | Dynamically sets the file and print levels. |
| open/close | Opens/closes the log file. |

Complete code and test sample download: demo.zip
19/04/2025 · 75 Views · 0 Comments · 2 Stars
Centralized Deployment of EasyTier using Docker
EasyTier is inherently a decentralized P2P tool where any node can act as a relay server. However, each node's configuration file must be edited by hand, which took some getting used to after migrating from Tailscale. Moreover, during the exploration phase frequent configuration changes are needed, so I decided to deploy EasyTier's dashboard centrally for unified device management.

Project repository: https://github.com/easytier/easytier

The official documentation doesn't explicitly describe deploying the config server separately, but it is actually quite straightforward, since the server component is already included in the released binaries. This article covers installation via Docker Compose; for binary installation, see the reference articles below.

## Analysis

The dashboard deployment consists of two main parts: a backend RESTful API and a frontend web console. The easytier-web-embed binary found in the Releases provides both, so running this single binary enables the full functionality.

## Let's Get Started

### Deploying the API and Web Console

Deploying with Docker is straightforward. Two ports need to be exposed:

- 11211/tcp: the HTTP API.
- 22020/udp: communication between clients (easytier-core) and the server.

A volume must be mapped to the container's /app folder to persist data. The Compose file is as follows:

```yaml
services:
  easytier:
    restart: always
    hostname: easytier
    volumes:
      - /opt/easytier/api:/app
    ports:
      - "127.0.0.1:11211:11211"
      - "22020:22020/udp"
    environment:
      - TZ=Asia/Shanghai
    image: easytier/easytier:latest
    entrypoint: easytier-web-embed
```

The image is the same one the official documentation uses to deploy the client via Docker. Its default entrypoint is easytier-core, so running the web API requires overriding the entrypoint with easytier-web-embed.

Since the API must be served over HTTPS, port 11211 is not exposed directly to the public internet; instead it is bound to 127.0.0.1 and published through a reverse proxy that terminates HTTPS.

### Setting up the Reverse Proxy

I use 1Panel, so I simply created a new site in the panel and reverse-proxied it to the API port configured above.

### Registering a Console Account

After deployment, open https://your-domain.com (with the built-in console version, appending /web/ is unnecessary). Change the Api Host to https://your-domain.com. Make sure there is no trailing "/" in the Api Host URL, otherwise strange issues may occur. Click Register below to create an account, then log in with it to access the console.

### Client Configuration

Remove all startup parameters from the client, keeping only --config-server udp://your-ip:22020/your-username. Run the easytier-core binary and the device should appear in the console. Click the settings button on the right, then click Create to create a network for it. The subsequent steps are the same as in the local GUI mode and won't be detailed here. After saving, select the newly created network from the network dropdown to join it.

Because data inside a Docker container is lost when the container is recreated, a client deployed in Docker must map a file to the container path /usr/local/bin/et_machine_id to persist the machine ID; otherwise the network has to be reconfigured every time. Additionally, the container's hostname is used as the device name displayed in the web console.
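Before the client Compose file, a note for readers not using 1Panel: a minimal sketch of the nginx reverse proxy described above might look like this (placeholder domain and certificate paths; adapt to your own setup):

```nginx
server {
    listen 443 ssl;
    server_name your-domain.com;                  # placeholder

    ssl_certificate     /path/to/fullchain.pem;   # placeholder
    ssl_certificate_key /path/to/privkey.pem;     # placeholder

    location / {
        proxy_pass http://127.0.0.1:11211;        # the API port bound to localhost above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```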
Here is my Compose file for the client:

```yaml
services:
  easytier:
    command: '--config-server udp://<ip>:22020/KaguraiYoRoy'
    environment:
      - TZ=Asia/Shanghai
    hostname: truenas
    image: easytier/easytier:latest
    labels:
      com.centurylinklabs.watchtower.enable: 'true'
    mem_limit: 0m
    network_mode: host
    privileged: true
    restart: always
    volumes:
      - >-
        /mnt/systemdata/DockerData/easytier/app/et_machine_id:/usr/local/bin/et_machine_id
  watchtower:
    command: '--interval 3600 --cleanup --label-enable'
    environment:
      - TZ=Asia/Shanghai
      - WATCHTOWER_NO_STARTUP_MESSAGE
    image: containrrr/watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

References:

- https://blog.mitsea.com/1a57bda595c580088006c17d6ba2a744/
- https://github.com/EasyTier/EasyTier/issues/722
- https://github.com/EasyTier/EasyTier/issues/577
- https://github.com/EasyTier/EasyTier/pull/718
15/04/2025 · 2,241 Views · 0 Comments · 4 Stars
Using Alist to Sync TrueNAS to OneDrive
## Background

I have an E5 subscription and originally ran the driveone/onedrive:edge Docker container to synchronize files. That solution had drawbacks: it lacked a GUI/WebUI, and each sync consumed 25%-50% of the CPU. Since TrueNAS's built-in sync can target WebDAV, I thought of using Alist to mount OneDrive and expose it over WebDAV for TrueNAS to mount.

## Process

### Installing Alist

Create a persistent storage folder for Alist and write a Docker Compose file according to the official Alist documentation:

```yaml
services:
  alist:
    environment:
      - PUID=3000
      - PGID=950
      - UMASK=022
    image: xhofe/alist:latest
    ports:
      - '8088:5244'
    restart: always
    volumes:
      - /mnt/systemdata/DockerData/alist/etc:/opt/alist/data
      - /mnt/data/Storage:/mnt/data
```

Here, I exposed the Alist port on 8088. Mapping /mnt/data/Storage lets Alist manage local storage; /mnt/systemdata/DockerData/alist/etc serves as the folder for Alist's own data.

Configuring OneDrive in Alist is beyond the scope of this article; see the official Alist documentation. I mounted my OneDrive at /OneDrive.

After setup, go to the Alist admin panel -> Users, edit your user (or create a new one), and check the Webdav Read and Webdav Manage permissions to enable WebDAV access for that user.

### Configuring TrueNAS Sync

Go to TrueNAS Admin -> Credentials -> Backup Credentials and add a Cloud Credential with the following parameters:

- Provider: WebDAV
- Name: anything you like
- URL: the Alist address plus /dav, e.g., I used http://127.0.0.1:8088/dav
- WebDAV Service: OTHER
- Username and Password: the Alist account credentials

Verify the credential and save it if successful.

Next, go to TrueNAS Admin -> Data Protection and add a Cloud Sync Task. Under Provider, select the WebDAV credential for Alist created earlier. The parameters in detail:

- Direction: PULL (cloud to local) or PUSH (local to cloud).
- Transfer Mode:
  - COPY: copy files; files later deleted from the source are not deleted from the target.
  - MOVE: copy files, then delete them from the source after transfer.
  - SYNC: keep source and target synchronized; files deleted from the source are also deleted from the target.
- Directory/Files: the local file or folder to sync.
- Folder: the target folder in the cloud storage.
- Description: notes.
- Schedule: a Cron-style schedule; use a predefined interval or write your own.

For example, I selected PUSH and SYNC, syncing /mnt/data/Storage to /OneDrive/TrueNAS, scheduled daily at 00:00. After editing, save the task; it will upload local files to OneDrive at the scheduled time.

Old solution project address: https://github.com/abraunegg/onedrive

Reference articles:

- https://alist.nn.ci/zh/guide/install/docker.html
- https://alist.nn.ci/zh/guide/drivers/onedrive.html
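One extra tip of my own, not from the referenced articles: before creating the TrueNAS credential, the WebDAV endpoint can be sanity-checked with a generic PROPFIND request (substitute your Alist account):

```shell
# Lists the WebDAV root via Alist; an XML "multistatus" response means WebDAV works.
curl -u 'username:password' -X PROPFIND -H 'Depth: 1' http://127.0.0.1:8088/dav/
```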
13/03/2025 · 209 Views · 0 Comments · 0 Stars
Installing 1Panel Using Docker on TrueNAS
## Background

My TrueNAS has some performance headroom, so I'm thinking of deploying a web service, and I want a control panel to reduce manual work. Considering the performance overhead of virtual machines, ZFS's high memory appetite for caching, and the fact that the NAS itself isn't very powerful, I decided to deploy with Docker. Moreover, since 1Panel itself is distributed as a Docker image, letting it control the TrueNAS host's Docker daemon is essentially equivalent to deploying websites directly on the TrueNAS host, which simplifies management.

## Analysis

{alert type="warning"}
This article assumes TrueNAS can reach Docker Hub and that the Docker daemon is already configured.
{/alert}

### Environment Information

**Storage pools.** There are two:

- /mnt/data: 1 x MIRROR | 2 wide | 2.73 TiB | HDD
- /mnt/systemdata: 1 x DISK | 1 wide | 223.57 GiB | SSD

Docker data is stored in the second pool.

**Datasets.** There are three:

- Storage: in the data pool; stores cold data.
- DockerData: in the systemdata pool; stores persistent container data.
- KaguraiYoRoy: in systemdata; the user's home directory.

## Installing 1Panel

I used the moelin/1panel:latest image. Much of this process follows the README written by the image author. Project address: okxlin/docker-1panel

I created a folder for 1Panel's data within the DockerData dataset, mounted as /opt/1panel inside the container and located at /mnt/systemdata/DockerData/1panel on the host.

### Persistent Volumes

To let 1Panel manage the host's Docker, map /var/run/docker.sock and the host's Docker directory, plus the data folder created above. Note that the Docker directory on TrueNAS differs from typical Linux systems: it is usually /var/lib/docker, but on TrueNAS it is /mnt/.ix-apps/docker.

### Environment Variables and Port Mapping

The environment variables match the image author's, passing TZ=Asia/Shanghai. Port mapping can be set as needed; the container listens on 10086.

### Docker Compose

With the above information, the Compose file is straightforward:

```yaml
services:
  1panel:
    dns:
      - 223.5.5.5
    environment:
      - TZ=Asia/Shanghai
    image: moelin/1panel:latest
    labels:
      createdBy: Apps
    ports:
      - '8085:10086'
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /mnt/.ix-apps/docker:/var/lib/docker
      - /mnt/systemdata/DockerData/1panel/opt:/opt/1panel
      - /mnt/systemdata/DockerData/1panel/root:/root
      - /etc/docker:/etc/docker
```

/root is mapped because I need to run Git inside the container, and Git's config lives under /root. DNS is set because 1Panel downloads data when building environment images, and it errors out without an explicit DNS server.

After installation, access the port you configured. 1Panel defaults:

- Default username: 1panel
- Default password: 1panel_password
- Default entrance: entrance

## Troubleshooting

### Docker Mirror

During testing I found that without a registry mirror, installing the PHP environment failed even with a proxy configured. Oddly, configuring both a mirror and a proxy also led to installation failure; the reason is unclear.
Open /etc/docker/daemon.json on TrueNAS and add registry-mirrors:

```json
{
  "data-root": "/mnt/.ix-apps/docker",
  "default-address-pools": [
    {
      "base": "172.17.0.0/12",
      "size": 24
    }
  ],
  "exec-opts": [
    "native.cgroupdriver=cgroupfs"
  ],
  "iptables": true,
  "registry-mirrors": [
    "https://docker.1panel.live"
  ],
  "storage-driver": "overlay2"
}
```

Save the file, restart the host's Docker service, then retry installing the environment in 1Panel.

{alert type="warning"}
This configuration might be lost after a reboot. If possible, install all necessary environments and apps in one go.
{/alert}

### Containers Created by 1Panel Fail to Start

This happens because 1Panel's default data folder is the mapped /opt/1panel, but the containers actually run on the TrueNAS host and try to access /opt/1panel there, which doesn't exist by default; moreover, /opt on TrueNAS is read-only by default, causing a "Read-only filesystem" error when containers start.

My solution is straightforward: on the TrueNAS host, remount /opt read-write, then create a symbolic link pointing to 1Panel's data folder:

```shell
cd /opt
mount -o remount,rw /opt
ln -s /mnt/systemdata/DockerData/1panel/opt 1panel
```

After this, everything should work normally.

One more note: when installing OpenResty in 1Panel, avoid ports 80 and 443, as these are the default ports of the TrueNAS web UI.
07/03/2025 · 379 Views · 0 Comments · 0 Stars