01 VMess Protocol
The original V2Ray protocol. VMess is a stateful, custom protocol that bundles its own AEAD encryption, making it self-contained but heavier than modern alternatives.
VMess — Core Philosophy
VMess-TCP, VMess-Websocket, VMess-Websocket-TLS, VMess-HTTP, VMess-HTTP2
LEGACY · BACKWARD COMPAT
The Problem VMess Solved (2017): Early censorship firewalls could trivially identify Shadowsocks by its statistical randomness profile and V2Ray's predecessor by fixed headers. VMess was designed to produce traffic that statistically mimics HTTPS without requiring a real TLS handshake. It bundles authentication (UUID), timestamp-based replay protection, and AEAD-encrypted length fields so the wire format appears as random bytes with no identifiable header signatures.

Why it's now "legacy": VMess does its own encryption on top of TCP, creating a detectable double-encryption pattern when paired with TLS. Modern censorship systems (especially GFW's active probing) have learned to fingerprint VMess's handshake timing, header size patterns, and UUID authentication window. VLESS was created to strip these fingerprint surfaces away.
Flow: client app → Stargate Client (SOCKS/HTTP :10808) → VMess AEAD (auth + encrypt) → wire: opaque VMess stream, no TLS, just the VMess AEAD cipher → Stargate server (VMess inbound, alterId: 0; UUID auth, AEAD decode) → freedom → Internet
1. Authentication Header Construction
The client computes HMAC-MD5 keyed by the shared UUID over the current timestamp to form the 16-byte auth header. The timestamp must be within ±90 seconds of server time (the replay-protection window). There is no certificate or key exchange: the shared UUID is the pre-shared secret. alterId must be 0 to enforce AEAD mode; the legacy non-zero alterId path uses a deprecated MD5-derived header scheme that is cryptographically broken.
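As a rough illustration of the legacy-style auth header computation described above (a sketch, not the exact VMess byte layout; the UUID here is an arbitrary example):

```python
import hashlib
import hmac
import struct
import time
import uuid

def vmess_auth_header(user_id: uuid.UUID, ts: int) -> bytes:
    """Sketch of the VMess auth header: HMAC-MD5 keyed by the user's
    UUID over the 8-byte big-endian UTC timestamp."""
    return hmac.new(user_id.bytes, struct.pack(">Q", ts), hashlib.md5).digest()

user = uuid.UUID("b831381d-6324-4d53-ad4f-8cda48b30811")  # example UUID
header = vmess_auth_header(user, int(time.time()))
print(len(header))  # 16-byte header, looks like random bytes on the wire
```

The server performs the same computation for every timestamp in the ±90 s window and accepts the connection if any result matches.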
2. Request Header Encryption
The target address + port + command are encrypted with AES-128-CFB derived from UUID. The data payload follows using AEAD (AES-128-GCM or ChaCha20-Poly1305). Every chunk has an encrypted length field, so the censor cannot even tell where one chunk ends and another begins.
3. Why VMess + TLS = Double Encryption Problem
When VMess runs inside TLS (VMess-TCP-TLS), the inner VMess encryption is redundant — TLS already provides confidentiality. The double-layer creates a TLS-in-TLS statistical fingerprint where the inner TLS handshake (from the proxied target) is visibly nested inside the outer connection. Modern DPI detects this. VLESS was designed to remove VMess's own encryption entirely and rely solely on the outer transport security.
Auth Mechanism
type: UUID (pre-shared)
replay protection: ±90s timestamp
alterId: must be 0
Encryption
header: AES-128-CFB
data: AES-128-GCM / ChaCha20
mandatory TLS? No
Detection Risk
active probing: high
timing analysis: medium
TLS-in-TLS: detectable
02 VLESS Protocol
VLESS is a stateless, zero-overhead proxy protocol that deliberately has no built-in encryption — it trusts the transport layer completely. This minimalism is intentional: it removes every fingerprint surface that VMess exposed.
VLESS — The "Thin Shell" Philosophy
VLESS-TCP, VLESS-TCP-TLS, VLESS-TCP-XTLS-Vision, VLESS-WSS, VLESS-gRPC, etc.
RECOMMENDED · DHD COMPATIBLE
The Core Insight: If you're running VLESS over TLS or DHD, you already have a strong outer security layer. VMess adding its own AEAD on top is wasteful and — crucially — creates detectable patterns. VLESS says: trust the outer layer, add nothing.

VLESS's "encryption: none" is not a security hole — it's a design statement. The protocol only needs to: (1) authenticate the user via UUID, and (2) tell the server where to forward traffic. Everything else — confidentiality, integrity, anti-fingerprinting — is delegated to TLS, XTLS, or DHD at the transport layer.

Stateless: Unlike VMess, VLESS has no timestamp replay protection window, no MD5-derived keys — just a clean UUID check. This makes it simpler, faster, and harder to fingerprint.
Flow: client (Stargate Client, encryption: none) → VLESS header inside TLS/DHD → wire: TLS / DHD (VLESS is just the inner payload; the outer security layer handles everything) → Stargate server (VLESS inbound, decryption: none; UUID check, route traffic) → freedom → Internet
▸ VLESS Packet Format (simplified)
The VLESS wire format inside TLS is: [1B version][16B UUID][addons length][addons data][1B cmd][2B port][1B addr-type][addr][payload...]. No length obfuscation, no chunk encryption — the TLS record layer handles all that. This is why it's called "thin shell."
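The simplified layout above can be sketched as a request builder; the command and address-type byte values here are illustrative, and the UUID/host/port are placeholders:

```python
import struct
import uuid

def build_vless_request(user_id: uuid.UUID, host: str, port: int,
                        payload: bytes) -> bytes:
    """Build the simplified VLESS request described above:
    [version][UUID][addons len][cmd][port][addr-type][addr][payload].
    Here cmd 1 = TCP and addr-type 2 = length-prefixed domain name."""
    buf = bytes([0])                  # 1B version
    buf += user_id.bytes              # 16B UUID — the only credential
    buf += bytes([0])                 # addons length 0 (no addons)
    buf += bytes([1])                 # 1B command: TCP
    buf += struct.pack(">H", port)    # 2B big-endian port
    host_b = host.encode()
    buf += bytes([2, len(host_b)]) + host_b  # addr-type + domain
    return buf + payload              # raw payload, no chunk framing

req = build_vless_request(uuid.UUID(int=0), "example.com", 443, b"hello")
print(len(req))  # 1 + 16 + 1 + 1 + 2 + 13 + 5 = 39 bytes
```

Note there is no length obfuscation and no per-chunk encryption anywhere in this header, which is exactly the "thin shell" point: everything after the UUID check is delegated to the outer TLS layer.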
Feature | VMess | VLESS
Built-in encryption | AES-GCM / ChaCha20 | None (delegated)
Auth method | UUID + timestamp HMAC | UUID only
Stateful | Yes (replay window) | No
XTLS flow support | Limited | xtls-rprx-vision
DHD compatible | No | Yes
Fingerprint surface | High | Minimal
03 Goa'uld Protocol
Goa'uld's entire strategy is disguise — it doesn't try to encrypt better than TLS, it tries to look exactly like TLS. The protocol is so minimal it's almost invisible.
Goa'uld — The "Perfect Disguise" Model
Goa'uld-TCP-TLS, Goa'uld-gRPC, Goa'uld-WS-TLS, Goa'uld-H2-TLS
HTTPS MIMICRY · FALLBACK RESISTANT
Philosophy: Rather than creating a new encrypted protocol, Goa'uld asks — what if the proxy looked exactly like HTTPS? It puts a real TLS handshake first (using a legitimate domain cert), then sends password-authenticated proxy requests that look like normal HTTPS POST bodies. A DPI system sees: valid TLS cert, standard cipher suites, normal handshake timing. There's nothing obviously "proxy-like."

The Fallback Trick: If someone connects to port 443 without the correct Goa'uld password (e.g., an active prober or a genuine HTTPS user), the server silently falls back to serving a real nginx web page. The server behaves identically whether it's a legitimate user or a censor's probe — the only difference is whether the correct password is embedded in the payload. This makes active probing attacks useless: probing the server returns a real website.
1. TLS Handshake (identical to HTTPS)
Client does a full TLS 1.3 handshake with the server's legitimate domain certificate (e.g., Let's Encrypt). The server responds with a normal ServerHello. At this point the connection looks 100% like normal HTTPS to any observer — valid cert chain, standard cipher, SNI matching the domain.
2. Password Verification (inside TLS)
Inside the TLS tunnel, the Goa'uld client sends: SHA224(password)\r\n + command + target_address\r\n + payload. The server verifies the SHA224 hash. If it matches a registered password → proxy mode. If not → fallback mode. Hashing keeps the raw password off the wire, and a wrong guess triggers the fallback rather than a connection drop, so a failed probe is indistinguishable from ordinary web traffic.
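A sketch of the client-side request framing described above (the command-line format is simplified for illustration; the password, host, and port are placeholders):

```python
import hashlib

def goauld_request(password: str, host: str, port: int,
                   payload: bytes) -> bytes:
    """Sketch of the Goa'uld (Trojan-style) request sent inside the TLS
    tunnel: hex SHA224 of the password, CRLF, a command line naming the
    target, CRLF, then the raw payload."""
    pw_hash = hashlib.sha224(password.encode()).hexdigest()  # 56 hex chars
    command = f"CONNECT {host}:{port}"  # illustrative command encoding
    return pw_hash.encode() + b"\r\n" + command.encode() + b"\r\n" + payload

req = goauld_request("correct horse", "example.com", 443, b"GET / ...")
print(len(req.split(b"\r\n")[0]))  # 56: the hex digest line
```

The server only needs to compare the first 56 bytes against its table of registered password hashes; on mismatch it replays the whole buffer to the fallback handler.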
3. Fallback: Active Probe Resistance
Failed auth → Stargate forwards the connection to 127.0.0.1:80 (or wherever Nginx listens). Nginx serves the real website. The censor gets a real HTTP response. This is fundamentally different from many proxies that drop the connection or send a TCP RST — which itself is a fingerprint. Goa'uld's fallback makes probing inconclusive.
Probe flow: censor (DPI probe, wrong password) → TLS :443 with bad payload → Goa'uld server (auth fail: SHA224 mismatch) → fallback to :80 → Nginx (real website, serves HTML). The probe gets a real site: inconclusive.
04 Shadowsocks 2022
The "pragmatic simplicity" protocol — Shadowsocks 2022 forgoes TLS entirely and relies on its own AEAD cipher for both encryption and authentication. The 2022 edition fixes critical vulnerabilities in older Shadowsocks versions.
Shadowsocks 2022 — Single-Layer AEAD
Shadowsocks-2022, Shadowsocks-AEAD, Shadowsocks-TCP, plus relay configurations
FAST · MULTI-USER · NO TLS
Philosophy: Shadowsocks is the "wire cutter" — it doesn't try to look like HTTPS, it just scrambles everything so efficiently that censors can't classify it. The 2022 edition uses 2022-blake3-aes-128-gcm or 2022-blake3-aes-256-gcm, which are specifically designed to produce traffic indistinguishable from random bytes with no packet-length correlation.

No TLS overhead: This is both a strength and weakness. It's fast (one encryption layer instead of two), but it means there's no certificate chain to present — the traffic looks like encrypted noise, not HTTPS. In deep-blocking environments like Netu, this "random-looking" traffic is itself suspicious. In less aggressive environments, it's perfectly fine.

What changed in 2022: Older Shadowsocks (AEAD-1.0) was vulnerable to replay attacks and had a detectable header structure. SS2022 adds a fixed-size salt-based session key derivation, a 64-bit timestamp in the header (replay protection), and proper AEAD over both header AND data. Each session uses a fresh ephemeral subkey.
Cipher | Key Size | Generate Command | Use Case
2022-blake3-aes-128-gcm | 16 bytes | openssl rand -base64 16 | Standard, fastest
2022-blake3-aes-256-gcm | 32 bytes | openssl rand -base64 32 | Extra security
2022-blake3-chacha20-poly1305 | 32 bytes | openssl rand -base64 32 | ARM / no AES-NI
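The generate commands above just base64-encode the right number of CSPRNG bytes; the same thing in Python, for reference:

```python
import base64
import os

def ss2022_psk(key_bytes: int) -> str:
    """Generate a Shadowsocks-2022 pre-shared key: key_bytes of
    cryptographically secure random data, base64-encoded
    (16 for aes-128-gcm, 32 for aes-256-gcm / chacha20-poly1305)."""
    return base64.b64encode(os.urandom(key_bytes)).decode()

print(ss2022_psk(16))  # 24-char base64 string (different every run)
```

Unlike older Shadowsocks, the key length is strict in SS2022: a PSK that does not decode to exactly the cipher's key size is rejected at startup.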

SS2022 Relay Chain — Multi-Hop Architecture
How relay works: The client holds two keys: relay_psk:user_psk. The relay server strips its own key layer and re-encrypts with server_psk:user_psk. The destination server verifies both keys. This creates a two-hop chain without the destination ever knowing the client's IP — the relay acts as an anonymizing middleman. The user's key propagates through the chain, so multi-user billing/revocation still works at the destination.
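A minimal sketch of how the two-key chain might look in the client's outbound config (field names follow Shadowsocks-2022 convention; the address, port, and keys are placeholders):

```jsonc
// Client outbound: relay PSK first, then user PSK, colon-separated
{
  "protocol": "shadowsocks",
  "settings": {
    "servers": [{
      "address": "relay.example.com",
      "port": 8388,
      "method": "2022-blake3-aes-128-gcm",
      // relay_psk:user_psk — the relay strips its layer,
      // the user key continues to the destination server
      "password": "RELAY_PSK_BASE64:USER_PSK_BASE64"
    }]
  }
}
```

The relay server is configured with only the relay PSK; the destination server holds the server PSK plus the per-user PSKs, which is what keeps per-user revocation working at the end of the chain.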
Relay flow: client (Stargate Client, relay_psk:user_psk) → AEAD layer 1 (relay_psk) → relay server (SS inbound: strips the relay layer, re-encrypts; client IP hidden from destination) → AEAD layer 2 (server_psk) → destination server (SS inbound: verifies server + user PSK) → freedom → Internet
05 Transport Layer Technologies
The transport layer is independent of the protocol — you can pair any protocol (VLESS, VMess, Goa'uld, SS) with any transport. Transport choice determines how the traffic looks on the wire and what infrastructure it passes through.
Transport Comparison — Strategic Selection
The Fundamental Tradeoff: TCP is fastest and lowest-latency but produces traffic that is clearly "not HTTP" unless wrapped with a TLS handshake. WebSocket, gRPC, and HTTP/2 all piggyback on HTTP upgrade or HTTP/2 framing — making the traffic look like normal API calls to a web service. The more HTTP-like your transport, the more Pegasus Gate-compatible it becomes, but each layer adds overhead.
Transport | Wire Appearance | CDN Support | Performance | Obfuscation | Best With
TCP (raw) | Encrypted bytes | No | Highest | Low (needs XTLS/DHD) | XTLS Vision, DHD
WebSocket | HTTP Upgrade → WS frames | Yes (CF, etc.) | Medium | Medium (looks like a WS API) | CDN setups, Nginx front
gRPC | HTTP/2 + protobuf frames | Yes | Medium | High (looks like API calls) | Pegasus Gate, Nginx grpc_pass
HTTP/2 (h2c) | HTTP/2 framing | Yes | Medium-High | Medium | Nginx SNI routing
XHTTP / HTTP/3 | QUIC / HTTP/3 frames | Limited | High | High | DHD, modern edge
SplitHTTP | Split across HTTP reqs | Yes | Medium | Very high | Heavy Pegasus Gate environments
▸ WebSocket: How the HTTP Upgrade Works
A WebSocket connection starts as a normal HTTP/1.1 GET request with headers Upgrade: websocket and Connection: Upgrade. The server responds 101 Switching Protocols. After that, both sides speak the WebSocket frame protocol. To a Pegasus Gate or firewall, this looks like a legitimate long-lived WebSocket API connection — common in chat apps, trading platforms, live dashboards. The proxy payload is inside the WS frames.
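The upgrade exchange is simple enough to show concretely. This sketch builds the client request for an illustrative path and computes the Sec-WebSocket-Accept value the server must echo back (the key/accept pair used here is the well-known RFC 6455 example):

```python
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from RFC 6455

def ws_accept(client_key: str) -> str:
    """Sec-WebSocket-Accept value for the 101 Switching Protocols reply:
    base64(SHA1(client_key + GUID))."""
    digest = hashlib.sha1((client_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

upgrade_request = (
    "GET /vlws HTTP/1.1\r\n"          # path the server's WS inbound expects
    "Host: example.com\r\n"
    "Upgrade: websocket\r\n"
    "Connection: Upgrade\r\n"
    "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n"
    "Sec-WebSocket-Version: 13\r\n\r\n"
)
print(ws_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Everything after the 101 response is opaque WS frames, which is where the proxy payload lives; the handshake itself is byte-for-byte ordinary HTTP.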
▸ gRPC: Why It Passes Pegasus Gate Scrutiny
gRPC uses HTTP/2 with specific content-type application/grpc and trailers. To a Pegasus Gate, it looks like a standard microservice RPC call. Nginx routes gRPC via grpc_pass directive, which understands the framing. The proxy payload is inside protobuf-framed HTTP/2 DATA frames. This is why the All-in-One config uses a Unix domain socket handoff to Nginx for gRPC routing.
06 mKCP — UDP Transport
mKCP — KCP over UDP with Obfuscation
VLESS-mKCPSeed, VMess-mKCPSeed
UDP · HIGH-LOSS TOLERANT
Philosophy: TCP's congestion control (cubic, BBR) backs off when it detects packet loss — which is exactly wrong for high-latency, high-loss paths like satellite or mobile. mKCP implements the KCP reliable protocol over UDP, with its own retransmission logic that aggressively retransmits without waiting for TCP's exponential backoff. This trades bandwidth for latency. A KCP connection uses ~30% more bandwidth than TCP but maintains throughput on lossy paths where TCP would stall.

The seed-based obfuscation: The seed field in mKCP settings generates per-connection obfuscation headers that make the UDP datagrams look like specific protocol traffic (DTLS, uTP, WeChatVideo, SRTP). This is a statistical disguise — a firewall doing UDP fingerprinting sees "DTLS-like" packets rather than unknown UDP.
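The knobs described above sit together in the stream settings; a sketch with illustrative values (field names follow the usual kcpSettings convention, and the seed is a placeholder):

```jsonc
// mKCP stream settings (sketch)
{
  "streamSettings": {
    "network": "kcp",
    "kcpSettings": {
      "mtu": 1350,                    // default MTU
      "uplinkCapacity": 20,           // your real upstream, in Mbps
      "downlinkCapacity": 100,        // your real downstream, in Mbps
      "congestion": true,             // enable when the path is the bottleneck
      "seed": "some-random-string",   // per-deployment obfuscation seed
      "header": { "type": "dtls" }    // disguise datagrams as DTLS
    }
  }
}
```

Client and server must use identical seed and header type, or the obfuscated datagrams will not decode.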
Header Disguises
none: raw KCP
srtp: looks like VoIP
utp: looks like BitTorrent
wechat-video: looks like WeChat
dtls: looks like DTLS
KCP Tuning
uplinkCapacity: upstream Mbps
downlinkCapacity: downstream Mbps
congestion: true = bottleneck link
mtu: 1350 (default)
07 XTLS Vision
XTLS Vision is not a new protocol — it's a flow control mechanism that surgically eliminates the most detectable artifact of proxy usage: the double TLS handshake.
XTLS Vision — Eliminating TLS-in-TLS
flow: xtls-rprx-vision or xtls-rprx-vision-udp443
KERNEL SPLICE · ZERO OVERHEAD
The Problem It Solves: When a browser (inside a VLESS/TLS tunnel) connects to an HTTPS website, the wire looks like: [Outer TLS record] → [Inner TLS ClientHello]. This nesting is statistically anomalous — real TLS connections don't have TLS inside them. GFW's probers learned to detect this "TLS-in-TLS" pattern as a proxy fingerprint.

XTLS Vision's approach: Vision understands the TLS state machine. When it sees the proxied connection is doing its own TLS handshake (inner TLS), it does something clever: it passes the inner TLS bytes directly to the kernel instead of wrapping them in another TLS record. The outer TLS connection hands off to a kernel splice at exactly the right moment. From the wire's perspective, after the splice point, you see: proper outer TLS, then inner TLS handshake that appears to be part of the outer session. The nesting disappears.

Why "Vision"? The flow needs to "see" the inner application's TLS handshake to know when to splice. It adds random padding during the inner TLS handshake phase to further obscure the transition point. The result: near-zero CPU overhead for HTTPS traffic (kernel does the copy) AND no detectable TLS-in-TLS signature.
Phase 1: Outer TLS + VLESS Header
Normal VLESS over TLS handshake. UUID authentication. XTLS Vision adds random padding to the VLESS header to obscure pattern analysis. Server acknowledges the VLESS connection.
Phase 2: Inner TLS Detection + Padding
As the proxied app (browser) begins its own TLS handshake to the target site, Vision detects the inner ClientHello by parsing the TLS record header. During this inner handshake, Vision adds random-length padding chunks between the inner TLS records — this randomizes the packet timing and sizes, defeating traffic analysis that looks for "handshake-sized" bursts.
Phase 3: Kernel Splice (the magic)
Once the inner TLS handshake is complete and the inner session is established (inner ChangeCipherSpec detected), Vision switches to kernel splice mode: splice(2) syscall copies bytes directly between file descriptors in kernel space. Zero userspace copies. Zero Stargate CPU. The kernel moves the bytes. This is why XTLS achieves near-native speed for HTTPS proxying.
Phase 4: uTLS Fingerprint Simulation
The client must also set fingerprint: "chrome" (or firefox, safari, etc.) in its TLS settings. This uses the uTLS library to produce a ClientHello that is byte-for-byte identical to a real Chrome browser's ClientHello — including extension order, GREASE values, and cipher suite selection. Without this, the outer TLS handshake itself is fingerprint-detectable as "Stargate."
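The phases above correspond to a client outbound shaped roughly like this (Xray-style field names assumed; the UUID and domain are placeholders):

```jsonc
// VLESS + XTLS Vision client outbound (sketch)
{
  "protocol": "vless",
  "settings": {
    "vnext": [{
      "address": "your-domain.example.com",
      "port": 443,                         // Vision expects standard HTTPS port
      "users": [{
        "id": "UUID-HERE",
        "encryption": "none",
        "flow": "xtls-rprx-vision"         // enables the splice flow control
      }]
    }]
  },
  "streamSettings": {
    "network": "tcp",
    "security": "tls",
    "tlsSettings": {
      "serverName": "your-domain.example.com",
      "fingerprint": "chrome"              // uTLS: Chrome-identical ClientHello
    }
  }
}
```

Note that flow lives on the user object while fingerprint lives in the TLS settings; forgetting either one silently disables the corresponding protection.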
⚠ XTLS Vision Configuration Requirements
For Vision to function correctly: (1) Must use port 443 — non-standard ports are suspicious. (2) Must block Chinese IP/domain traffic in routing rules — traffic returning to China creates analysis opportunities. (3) Fallback must go to a real website, not a connection drop. (4) Client must set fingerprint to simulate a real browser — without uTLS, the outer TLS is identifiable.
08 DHD Protocol
DHD is the most sophisticated anti-censorship mechanism in the Stargate ecosystem. It doesn't just hide traffic — it makes the proxy server cryptographically indistinguishable from a real website, without owning a domain or certificate.
DHD — Borrowed Identity Protocol
VLESS-TCP-XTLS-Vision-DHD, VLESS-gRPC-DHD, VLESS-XHTTP-Reality
MOST SECURE · NO DOMAIN NEEDED · NO CERT NEEDED
Core Insight: Traditional TLS proxies have a fatal flaw — their server certificate is their certificate. A censor can request the certificate from any IP address and identify it as a proxy server. Even if the domain looks innocent, the certificate fingerprint or the chain of trust may reveal the proxy network.

DHD's solution: don't use your own certificate at all. Instead, act as a pass-through to a legitimate target website (like www.microsoft.com) and literally borrow their TLS handshake. When a censor connects to probe your server, they receive Microsoft's actual certificate chain. They cannot distinguish your server from a Pegasus Gate node or legitimate Microsoft endpoint.

How is the legitimate client authenticated then? This is where X25519 key cryptography comes in. The server embeds its public key in the ClientHello extension fields using a zero-knowledge proof scheme. Only a client that knows the corresponding private key (and the shortId) can recognize the server's embedded auth data. Unauthorized clients see a normal TLS handshake and get forwarded to the real target site.
1. Client Sends ClientHello (forged as Chrome)
The DHD client uses uTLS to generate a Chrome-identical ClientHello. Embedded inside the Session ID or other extension fields is a value derived from the client's shortId and a timestamp. The SNI is set to a valid target site hostname (e.g., www.microsoft.com). To any observer, this ClientHello is indistinguishable from a real Chrome browser connecting to Microsoft.
2. Server Classifies the Connection
The Stargate server reads the incoming ClientHello. It uses its X25519 private key to check whether the embedded auth value matches any known client. If shortId is recognized: this is a legitimate VLESS client. If not recognized: this is either a probe or a real user — forward everything to the real target site.
3A. Legitimate Client Path: Temporary Certificate
Server proxies the real TLS handshake from the target site (e.g., connects to microsoft.com:443 and fetches the ServerHello). It forwards this genuine ServerHello to the client, but wraps a "temporary trusted certificate" signed by its own ephemeral key. The client, knowing the server's X25519 public key, can verify this ephemeral certificate. Result: the client gets a valid TLS session with your server using Microsoft's certificate appearance. VLESS data flows inside this session.
3B. Unauthorized Client Path: Transparent Forwarding
Server literally proxies the connection to the real target site. The censor's probe connects, sees Microsoft's real certificate (not self-signed, not Stargate's), and receives real Microsoft HTML. There is no way to distinguish this from an actual Microsoft Pegasus Gate node. The server's TCP port 443 behaves identically to Microsoft's 443 for unauthorized clients.
4. Traffic Theft Prevention (steal_others config)
A risk: unauthorized users could exploit your server as a free proxy to the target site. The VLESS-TCP-DHD (without being stolen) config adds routing rules to block the target site's own traffic — if someone uses your server to reach microsoft.com directly, routing drops it. You only forward non-target-site traffic. Geo-blocking rules (geoip:cn, geosite:cn) can also prevent your server from being abused as a relay to Chinese government sites.
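The server-side pieces of steps 1–3 map onto an inbound sketch like this (assuming Xray-style realitySettings field names; the UUID, private key, and shortId are placeholders generated by the server keypair command):

```jsonc
// DHD server inbound on :443 (sketch)
{
  "listen": "0.0.0.0",
  "port": 443,
  "protocol": "vless",
  "settings": {
    "clients": [{ "id": "UUID-HERE", "flow": "xtls-rprx-vision" }],
    "decryption": "none"
  },
  "streamSettings": {
    "network": "tcp",
    "security": "reality",                   // the DHD mechanism
    "realitySettings": {
      "dest": "www.microsoft.com:443",       // real site to borrow / forward to
      "serverNames": ["www.microsoft.com"],  // SNIs accepted from clients
      "privateKey": "SERVER_X25519_PRIVATE_KEY",
      "shortIds": ["abc1"]                   // hex, even length, ≤16 chars
    }
  }
}
```

Any connection whose embedded auth value does not verify against privateKey and a listed shortId is transparently proxied to dest, which is what makes active probing return the real site.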
Flow: legit client (Stargate + uTLS; shortId "abc", server pubKey known) → ClientHello (Chrome TLS fingerprint, SNI: microsoft.com) → DHD server (Stargate :443; shortId auth ✓, X25519 privateKey) → ServerHello (Microsoft cert appearance + ephemeral signature) → TLS session established, VLESS data flows → VLESS+Vision → freedom → Internet
▸ CERTIFICATE AT EACH HOP
Hop | Certificate presented | Issued by | Verified by | What a passive observer sees
Client → Server :443 | DHD ephemeral cert (Subject: microsoft.com, or the chosen SNI) | No CA: derived from the VPS X25519 keypair; the server signs a synthetic cert matching the real site | Stargate client, via the out-of-band publicKey field in its config | Indistinguishable from a real TLS 1.3 handshake to microsoft.com. An active probe without the correct shortId receives the real Microsoft cert via fallback pass-through.
Server → Internet (freedom) | Destination's own cert (e.g. Google, Akamai) | Destination's CA (DigiCert, Let's Encrypt, etc.) | VPS system CA store | Normal HTTPS egress from the VPS IP. No proxy fingerprint.
Target Site Requirements
TLS version: 1.3 required
HTTP/2: required (h2)
location: outside your country
no redirect: apex domain
similar IP: to your VPS
Server Keys (stargate x25519)
privateKey: server only
publicKey: share with clients
shortIds: hex, even length, ≤16
empty shortId "": allows any client
Client Settings
fingerprint: chrome / firefox
serverName: SNI (target site)
publicKey: from server
spiderX: unique per client
▸ Why DHD doesn't need a domain or certificate
Traditional TLS proxies must: buy a domain, get a Let's Encrypt cert, set up DNS, maintain cert renewal. DHD eliminates all of this. The server's identity is its X25519 key pair — no PKI, no certificate authority, no domain name. The target site (e.g., microsoft.com) provides the "cover" certificate. This also means: even if the VPS IP is flagged, switching to a new IP doesn't require any cert reissuance — just update client configs.
09 Stargate Fallback Mechanism
Fallbacks are the routing brain of Stargate's server-side multiplexing. When an inbound receives a connection it can't authenticate, instead of dropping it, it forwards it to another handler — silently, with the full original byte stream intact.
How Fallbacks Work — Technical Detail
The Key Insight: A fallback is triggered before the protocol handshake fails visibly. Stargate reads only the first few bytes of the connection to determine if it matches the expected protocol. If not, it hands the connection off — along with the bytes already read — to another handler. The fallback handler receives the connection as if it were a fresh connection; those peeked bytes are replayed.

This is why fallbacks can serve a real website: Nginx receives what looks like a fresh HTTPS (or HTTP) connection. No connection teardown. No RST. No detectable "refused" signal to the prober. The transition is transparent.
1. Connection Accepted on Port 443
Any TCP connection landing on port 443 is accepted by the primary inbound (e.g., VLESS-TCP-XTLS). TLS handshake happens first — the outer TLS layer is always established regardless of what comes after.
2. Protocol Detection (peek, don't consume)
After TLS, Stargate peeks at the decrypted application data. It checks: Is this a valid VLESS header? Does the UUID match? It does this without consuming the stream — the bytes remain buffered. This peek is the basis for all fallback decisions.
3. Fallback Routing by Path / ALPN / SNI / Name
path fallback: If the decrypted data starts with GET /vlws (WebSocket upgrade), route to VLESS-WS inbound.
alpn fallback: If the TLS handshake negotiated h2, route to Goa'uld-TCP (which handles h2 traffic further).
name (SNI) fallback: If SNI was trh2o.example.com AND ALPN was h2, route to Goa'uld-H2 inbound.
default fallback: Anything that doesn't match → Nginx Unix socket.
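The four rules above map onto a fallbacks array on the port-443 inbound roughly like this (Xray-style fields assumed; ports, socket path, and SNI are placeholders matching the examples in this document):

```jsonc
// Fallback dispatch table (sketch) — evaluated against the peeked bytes
"fallbacks": [
  { "path": "/vlws", "dest": 1002, "xver": 1 },   // WS upgrade path
  { "alpn": "h2", "dest": 1103, "xver": 1 },      // negotiated HTTP/2
  { "name": "trh2o.example.com", "alpn": "h2",    // SNI + ALPN match
    "dest": 1104, "xver": 1 },
  { "dest": "/dev/shm/h1.sock", "xver": 1 }       // default → Nginx socket
]
```

More specific rules come first; the catch-all entry with neither path, alpn, nor name is the default that serves the decoy website. xver: 1 prepends a PROXY protocol header carrying the real client IP.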
4. Byte Replay to Downstream
The downstream handler (e.g., Nginx) receives the full original byte stream including any bytes that were peeked. To Nginx, this is a fresh connection from 127.0.0.1. The PROXY protocol header (xver:1) can optionally prepend the real client IP so Nginx logs have accurate source addresses.
▸ Why HTTP/2 Can't Use Path-Based Fallbacks
HTTP/2 multiplexes multiple streams over a single TCP connection using HPACK-compressed headers. The request path is buried deep inside a compressed HEADERS frame — it's not a plain-text URL prefix like in HTTP/1.1. Stargate can't peek the path without fully decoding the HTTP/2 frame. Therefore, H2 inbounds must be identified by SNI (which is available at TLS handshake time, before any application data) rather than by path.
10 All-in-One + Nginx Architecture
The All-in-One-fallbacks-Nginx configuration is the most complex example in the repo — 21 protocol/transport combinations on a single port 443, served by layered fallbacks and Nginx gRPC routing.
Full Decision Tree — Port 443 Traffic Routing
21 COMBOS · SINGLE PORT
Strategy: Use VLESS-TCP-XTLS as the "smart front door" on port 443. It can handle its own native traffic with maximum performance (kernel splice). For everything else, it uses fallbacks as a dispatch table — routing by URL path, ALPN, or SNI to specific sub-inbounds listening on loopback ports. Nginx handles anything that needs to look like a normal website (decoy) or route gRPC (which requires HTTP/2 pass-through).
Trigger | Condition | Destination | Details
VLESS-XTLS native | Valid UUID + VLESS header | Direct serve | Max performance, kernel splice
path=/vlws | GET upgrade to WS | VLESS-WS :1002 | WebSocket clients
path=/vltc | POST /vltc body | VLESS-TCP-TLS :1001 | Non-XTLS VLESS
path=/vlgrpc | gRPC content-type | Nginx → VLESS-gRPC :3002 | gRPC path via Nginx
path=/vmtc | POST /vmtc | VMess-TCP :2001 | VMess TCP clients
path=/vmws | WS upgrade | VMess-WS :2002 | VMess WebSocket
path=/vmgrpc | gRPC | Nginx h2c.sock → VMess-gRPC :3003 | VMess over gRPC
path=/trtc | Goa'uld password | Goa'uld-TCP :1101 | Goa'uld TCP
path=/trws | WS upgrade | Goa'uld-WS :1102 | Goa'uld WebSocket
alpn=h2 | HTTP/2 negotiated | Goa'uld-TCP h2 :1103 | h2 traffic → Goa'uld
alpn=h2 + SNI=trh2o.* | SNI-matched h2 | Goa'uld-H2 :1104 | Goa'uld with H2 transport
path=/ssgrpc | gRPC | Nginx → SS-gRPC :3004 | Shadowsocks gRPC
default (all else) | Any unmatched | Nginx h1.sock → decoy website | Active probe resistant
▸ The Dual Unix Socket Architecture
Nginx listens on two Unix domain sockets: /dev/shm/h1.sock (HTTP/1.1, proxy_protocol) and /dev/shm/h2c.sock (HTTP/2 cleartext, proxy_protocol). Using proxy_protocol on both allows Nginx to see the real client IP (replayed by Stargate's xver:1 header) despite the loopback handoff. The set_real_ip_from unix: + real_ip_header proxy_protocol directives restore the original IP in Nginx's access logs. The h2c socket handles gRPC (which must be h2), and Nginx's grpc_pass routes specific paths to specific backend ports.
▸ Anti-Probing: The HTTP 400 Default Server
Nginx has a catch-all default_server that returns HTTP 400 for any request hitting the Unix socket with an invalid Host header or direct IP access. This prevents attackers from hitting the Nginx backend directly. Only connections that went through Stargate's TLS and fallback machinery will have the right host headers to reach the actual website virtual host.
11 VLESS Reverse Proxy
The reverse proxy pattern solves a specific problem: you have a service running behind NAT (no public IP), but you want to expose it publicly. Stargate's built-in reverse proxy uses a persistent outbound tunnel so the NAT machine initiates — no port forwarding needed.
Reverse Proxy — NAT Traversal via Persistent Tunnel
ReverseProxy/VLESS-TCP-XTLS-WS — portal.jsonc + client_tcp.jsonc
REVERSE TUNNEL · NO PORT FORWARDING
The NAT Problem: A machine inside a home network or corporate firewall can make outbound TCP connections, but cannot receive inbound connections. Traditional solutions: port forwarding (requires router access), ngrok-style services (third-party trust), or WireGuard (requires both sides to have routable IPs).

Stargate's approach: The internal machine ("bridge") establishes a persistent, authenticated VLESS connection to the public server ("portal"). This outbound tunnel is kept alive. When an external user connects to the portal, the portal uses this pre-existing tunnel to reach the internal service — the traffic flows "backwards" through the bridge's outbound connection. No inbound ports needed on the bridge side.

Does the external user authenticate? This is the key nuance: the external user does NOT need Stargate or any proxy auth. The portal's external-facing port can accept raw HTTPS, VLESS, or any protocol you configure. The portal acts as a transparent relay — it accepts the external connection on its public port and pipes it through the pre-established tunnel to the internal service. The authentication happens at the tunnel level (bridge→portal), not at the user→portal level. The bridge's VLESS UUID controls who can register a tunnel; the portal's external port controls what the end user connects with.
1. Bridge Registers Tunnel (outbound from internal machine)
The internal machine runs Stargate with "reverse": {"bridges": [{"tag": "bridge", "domain": "service.internal"}]}. This creates a persistent outbound VLESS connection to the portal server. The "domain" is a virtual label — not a real DNS name — used internally to match which tunnel to use. The connection is authenticated with the bridge's UUID. The portal now has a registered, authenticated tunnel associated with the bridge.
2. Portal Receives External User Request
An external user (browser, curl, etc.) connects to the portal's public port (e.g., 443). The portal does not require any VLESS or proxy authentication from this user — it's configured as a transparent inbound. The traffic type depends on your portal config: it could be raw HTTPS, VLESS, or any protocol. In the VLESS-TCP-XTLS-WS example, the portal has a VLESS inbound that the user connects to (UUID auth), but you could equally expose it as plain HTTP.
3. Portal Routing: Tag "bridge" → Internal Tunnel
Portal routing rule: {"inboundTag": "user-inbound", "outboundTag": "bridge-tunnel"}. This sends the user's traffic into the pre-established bridge tunnel. The virtual domain service.internal is matched in the portal's "reverse": {"portals": [...]} config, which knows which connection corresponds to which bridge.
4. Bridge Receives Traffic, Routes to Local Service
The internal machine's Stargate receives the tunneled traffic. Its routing rule says: traffic tagged as coming from "bridge" → outbound to 127.0.0.1:8080 (or whatever the local service port is). The internal service sees a normal TCP connection from localhost. The response flows back through the same tunnel to the external user. Bidirectional, full-duplex, zero port forwarding.
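Steps 1–4 can be sketched as the two paired configs (Xray-style reverse/routing fields assumed; tags, the virtual domain, and ports are placeholders taken from the description above):

```jsonc
// bridge.jsonc (internal machine): registers the tunnel, serves :8080
{
  "reverse": { "bridges": [{ "tag": "bridge", "domain": "service.internal" }] },
  "routing": { "rules": [
    { // tunnel-control traffic carrying the virtual domain → portal outbound
      "inboundTag": ["bridge"], "domain": ["full:service.internal"],
      "outboundTag": "to-portal" },
    { // user traffic arriving back over the tunnel → the local service
      "inboundTag": ["bridge"], "outboundTag": "local-8080" }
  ]}
}

// portal.jsonc (public server): pairs the external inbound with the tunnel
{
  "reverse": { "portals": [{ "tag": "portal", "domain": "service.internal" }] },
  "routing": { "rules": [
    { "inboundTag": ["user-inbound"], "outboundTag": "portal" }
  ]}
}
```

The "to-portal" outbound on the bridge is an ordinary authenticated VLESS outbound to the portal, and "local-8080" is an outbound pointed at 127.0.0.1:8080; the virtual domain is what lets the portal match external traffic to the right registered tunnel.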
Flow: external user (browser, no proxy needed) → HTTPS :443 (or VLESS) → public server (portal: VLESS inbound + reverse portals) ⇄ pre-established VLESS+XTLS bridge tunnel ⇄ internal machine (bridge: UUID-authenticated persistent connection; behind NAT, no public IP, initiated the outbound) → localhost :8080 → local service (web app on 127.0.0.1:8080)
▸ Multiple Bridges / Services
You can register multiple bridges by adding entries to the bridges array with different domain labels. Each bridge establishes its own persistent tunnel. The portal uses the domain label to route different external requests to different internal services. For example: service.internal → home server app, admin.internal → internal admin panel.
12 Pegasus Gate + WebSocket Architecture
CDN-Fronted VLESS / VMess WebSocket
VLESS-WSS-Nginx, VMess-Websocket-TLS, VLESS-TLS-SplitHTTP-CaddyNginx
Pegasus Gate IP PROTECTION · IP HIDING
Why CDN? If your VPS IP is blocked, a Pegasus Gate (Cloudflare, CDN77, etc.) acts as an intermediary — clients connect to the CDN's IP, which is one of millions of shared IPs that can't be blocked without disrupting huge amounts of legitimate traffic. The Pegasus Gate then proxies the WebSocket connection to your origin VPS. The censor can't block Cloudflare's IP range without breaking most of the internet.

The TLS puzzle: Cloudflare terminates TLS at its edge. This means: (1) Cloudflare sees the decrypted WebSocket frames — the proxy payload is visible to Cloudflare (though they don't inspect it in practice). (2) Your origin server gets a connection from Cloudflare's IP, not the user's IP. (3) You need a second TLS layer (Full Strict mode) between Cloudflare and your origin, otherwise traffic between CF and you is unencrypted.
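The client side of this setup is an ordinary WS+TLS outbound pointed at the domain rather than the VPS IP; a sketch (placeholders throughout, path must match the server's WS inbound and any Nginx location):

```jsonc
// Client outbound for CDN-fronted VLESS-WS (sketch)
{
  "protocol": "vless",
  "settings": {
    "vnext": [{
      "address": "your-domain.com",   // DNS resolves to a Cloudflare edge IP
      "port": 443,
      "users": [{ "id": "UUID-HERE", "encryption": "none" }]
    }]
  },
  "streamSettings": {
    "network": "ws",
    "security": "tls",
    "tlsSettings": { "serverName": "your-domain.com" },
    "wsSettings": { "path": "/vlws" } // must match server/Nginx location
  }
}
```

Because the client only ever talks to the CDN edge, blocking the origin VPS IP has no effect; only blocking the domain or the entire CDN range works, and the latter is expensive for the censor.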
CLIENT
Stargate Client
WS + TLS :443
TLS to CF IP
SNI: domain
CLOUDFLARE EDGE
TLS Termination
WS proxy pass
Full Strict SSL
CF sees decrypted WS frames; re-encrypts to origin
TLS (Full Strict)
CF → origin
NGINX
TLS Termination
proxy_pass
ws_pass :10000
localhost
:10000
STARGATE SERVER
VLESS-WS
127.0.0.1:10000
freedom
DESTINATION
Internet
▸ CERTIFICATE AT EACH HOP
Hop | Certificate presented | Issued by | Verified by | What a passive observer sees
Client → Cloudflare :443 | Cloudflare Universal SSL (Subject: your-domain.com; CF holds the private key at the edge PoP) | DigiCert or Cloudflare CA | Client TLS stack (OS trust store) | Valid HTTPS to your domain. CF terminates TLS — origin IP is hidden. CF sees decrypted WS frames but not VLESS payload.
Cloudflare → Nginx origin :443 | Let's Encrypt / ACME cert (Subject: your-domain.com) | Let's Encrypt (ACME challenge on your VPS) | Cloudflare — rejects self-signed or expired certs under Full Strict mode | CF re-encrypts with new session keys. Nginx decrypts this layer and proxies WS frames to Stargate on localhost.
Nginx → Stargate (localhost) | None — plain TCP on loopback | N/A | N/A — loopback only, never leaves the machine | Unencrypted VLESS+WS frames on 127.0.0.1:10000. Safe because loopback is not externally reachable.
Stargate → Internet (freedom) | Destination's own cert | Destination's CA | VPS system CA store | Normal HTTPS from VPS IP. Destination sees VPS, not the client.
Cloudflare Requirements
WebSocket: must enable in CF
SSL mode: Full (Strict)
gRPC: enable separately
proxy status: orange cloud (on)
Tradeoffs
latency: +20-50ms
IP blocked? No (CF IP)
CF visibility: sees WS payload
free tier? Yes (CF)
13 Gate Fronting
The most technically complex censorship bypass in the repo. Rather than running a proxy server, Gate Fronting lets a single local Stargate instance intercept its own TLS connections and re-route them through Pegasus Gate infrastructure using domain fronting.
Gate Fronting — Local TLS Interception + SNI Spoofing
Gate-Domain-Fronting / config.jsonc — no server required
NO VPS NEEDED / CERT INSTALL REQ.
The Concept: Domain fronting exploits the fact that CDNs use the outer TLS SNI to route traffic to their edge, but then route internally based on the HTTP Host header. You can send a request with SNI=allowed-cdn-domain.com (which passes the censor's SNI filter) but Host=your-actual-target.com (which the Pegasus Gate routes to the real backend). The SNI and Host are different, and the Pegasus Gate doesn't care — it delivers to the Host.

The Gate Part: To modify the Host header inside TLS, you need to terminate and re-originate TLS. Stargate does this locally by: (1) generating a self-signed CA cert and installing it in the OS trust store, (2) intercepting HTTPS connections via a dokodemo-door transparent proxy, (3) decrypting the TLS (the OS trusts your CA), (4) re-encrypting with modified parameters (fake SNI, target IP) and sending to the Pegasus Gate. The Pegasus Gate routes based on Host header and delivers the real content. No VPS required.
1
Prerequisite: Install Local CA
Generate CA with stargate tls cert -ca -file=mycert. Install into OS "Trusted Root Certification Authorities." This gives local Stargate the power to sign certificates for any domain — so when it intercepts example.com, it can present a fake-but-trusted example.com cert to the local browser. Without this step, the browser would show a certificate warning.
2
App Makes HTTPS Request → Intercepted by dokodemo-door
Browser/app points to Stargate's SOCKS proxy (:10801). Routing rule matches target domain (e.g., e-hentai.org) → redirect-out outbound → redirects to local tls-decrypt inbound on :4431. The dokodemo-door inbound with TLS settings terminates the HTTPS connection using the locally-trusted CA cert. Stargate now has the plaintext HTTP request.
3
Re-encryption with Domain-Fronted Parameters
The tls-repack outbound takes the decrypted request and re-encrypts it with: serverName: "11.45.1.4" (fake/empty SNI to avoid SNI filtering), redirect: "104.20.19.168:443" (Cloudflare's IP for the Pegasus Gate that fronts the target), and alpn: ["fromMitm"] (copy ALPN from original). The Pegasus Gate receives a request with a non-descript SNI but the correct Host: e-hentai.org header — it routes internally to the correct backend. The censor sees only Cloudflare's IP and a meaningless SNI.
4
Certificate Verification via verifyPeerCertInNames
Since the SNI is fake, normal TLS cert validation would fail (the cert says e-hentai.org but SNI says 11.45.1.4). The verifyPeerCertInNames: ["e-hentai.org", "fromMitm"] setting tells Stargate to accept the cert if it matches any of these names — not the SNI. This allows cert validation to succeed even with a spoofed SNI. Requires Stargate v25.2.21+.
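The relaxed check can be sketched as a name-list comparison (a toy model, not Stargate's actual validation code; the SAN lists mirror the example above):

```python
"""Sketch of the verifyPeerCertInNames relaxation: accept the peer cert
if any entry in an allow-list appears in the cert's SAN list, instead of
requiring a match against the (spoofed) SNI."""

def normal_validation(cert_sans: list[str], sni: str) -> bool:
    # Standard TLS: the certificate must cover the name in the SNI.
    return sni in cert_sans

def verify_peer_cert_in_names(cert_sans: list[str], allowed: list[str]) -> bool:
    # Relaxed: any allow-listed name present in the cert is enough.
    return any(name in cert_sans for name in allowed)

cert_sans = ["e-hentai.org", "*.e-hentai.org"]  # what the origin presents
spoofed_sni = "11.45.1.4"                       # fake SNI sent on the wire

assert not normal_validation(cert_sans, spoofed_sni)  # would hard-fail
assert verify_peer_cert_in_names(cert_sans, ["e-hentai.org", "fromMitm"])
```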
BROWSER
HTTP GET
via SOCKS :10801
HTTPS to
:4431 (local)
LOCAL STARGATE
TLS Decrypt
dokodemo-door
CA-signed fake cert
plaintext
HTTP
LOCAL STARGATE
TLS Repack
fake SNI
Pegasus Gate IP target
TLS SNI: 11.45...
Host: target.com
CLOUDFLARE CDN
104.20.x.x
routes by Host:
not SNI
real content
ORIGIN
Real Content
▸ CERTIFICATE AT EACH HOP
Hop | Certificate presented | Issued by | Verified by | What a censor / observer sees
Browser → Local Stargate :4431 | Local CA-signed fake cert (Subject: target.com, dynamically generated) | Your local CA (generated by Stargate, installed in browser trust store) | Browser — trusts it because the local CA was manually installed | Looks like a normal HTTPS connection to the browser. Stargate decrypts it as MITM to read the Host header.
Local Stargate → Cloudflare (tls-repack) | Cloudflare Universal SSL (Subject: front-domain.com, any CF-hosted domain) | DigiCert / Cloudflare CA | Stargate via verifyPeerCertInNames — checks that the cert's SAN list includes an expected domain | SNI in ClientHello is a Cloudflare IP or front domain — not target.com. Censor sees traffic to Cloudflare, not the real destination. Host: header (hidden inside TLS) routes to the real origin.
Cloudflare → Real Origin | Origin's own cert (Subject: target.com) | Let's Encrypt or origin CA | Cloudflare (Full Strict or Flexible depending on origin config) | CF routes the request by Host: header to target.com's origin server. The SNI mismatch trick only works because CF ignores SNI for routing and uses Host instead.
14 Serverless for Netu
Serverless + MitM Domain Fronting for Netu
Serverless-for-Netu — no VPS required, Cloudflare Workers relay
NO VPS / NETU-SPECIFIC
The Netu-Specific Context: Netu's internet censorship blocks VPS IPs aggressively, especially in neighboring countries. But it cannot block Cloudflare — too much legitimate Netuian business traffic uses it. The Serverless config uses Cloudflare Workers as a relay: a Worker script running on CF's infrastructure proxies VLESS/WebSocket connections. The user connects to a Cloudflare IP (unfilterable), CF hands the WebSocket to the Worker, and the Worker opens the outbound connection to the destination itself. The Worker is the backend — no VPS needed.

The extended version adds: TCP/TLS fragmentation (breaks TLS ClientHello into tiny fragments to defeat SNI inspection), DoH via domain fronting (DNS queries are fronted through Pegasus Gate), and UDP noise generation (fake UDP traffic to obscure real traffic patterns). Multiple Gate outbounds handle different Pegasus Gate services (Google, Meta, Fastly) with service-specific SNI and Host configurations.
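The ClientHello-fragmentation idea reduces to a small experiment: a scanner that inspects packets one at a time never sees the SNI string once the hello is split. The byte string, fragment size, and per-packet scanner below are illustrative of cheap DPI, not any specific firewall:

```python
"""Toy model of TCP-level TLS-ClientHello fragmentation defeating a
per-packet SNI scanner that does no stream reassembly."""

def fragment(data: bytes, size: int) -> list[bytes]:
    # Split the payload into fixed-size TCP segments.
    return [data[i:i + size] for i in range(0, len(data), size)]

def naive_sni_filter(packets: list[bytes], banned: bytes) -> bool:
    # Per-packet substring scan, no reassembly — how cheap DPI often works.
    return any(banned in pkt for pkt in packets)

client_hello = b"\x16\x03\x01...server_name=target-site.com..."

# Unfragmented: the SNI sits in one segment and is spotted.
assert naive_sni_filter([client_hello], b"target-site.com")

# Fragmented into 4-byte chunks: no single segment contains the full SNI.
assert not naive_sni_filter(fragment(client_hello, 4), b"target-site.com")
```

A DPI box that reassembles the TCP stream still sees the SNI, which is why fragmentation is paired with the other layers above rather than used alone.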
Components
relay: CF Workers free tier
TLS frag: break ClientHello
DoH fronting: DNS via Pegasus Gate
UDP noise: traffic obfuscation
Service-Specific Frontends
Google/YT: gstatic.com front
Meta/IG/FB: fbcdn.net front
X/Reddit: Fastly Pegasus Gate front
16 Nox + Transparent Proxy (TProxy)
Nox is a virtual DNS resolver built into Stargate that intercepts DNS queries and returns fake IPs from a private pool. When the subsequent TCP/UDP connection arrives at the TProxy inbound, sniffing recovers the original domain from the fake-IP lookup table — enabling domain-based routing with zero DNS leakage. Combined with Linux kernel TProxy, every device on the LAN is transparently proxied without any client-side configuration.
§16.1 — The DNS-Before-Routing Problem
GATEWAY MODE / ZERO LEAK DNS
The problem: A device resolves example.com → 93.184.216.34 before opening a TCP connection. Stargate only receives the raw destination IP — domain-based rules like geosite:cn → direct are impossible without re-sniffing the payload bytes.

Nox's solution: Stargate intercepts port 53 UDP at the kernel level and answers with a fake IP from its own pool (e.g. 198.18.7.42), storing the mapping 198.18.7.42 ↔ example.com in an internal LRU table. When the device then opens a TCP connection to the fake IP, the TProxy inbound's destOverride: ["fakedns"] reverse-looks up that table and hands the routing engine a real domain name — before a single byte of the connection is forwarded.
App on Device
DNS query: example.com?
Stargate Nox
198.18.0.0/15 pool
Returns 198.18.7.42
App → TCP SYN
198.18.7.42:443
iptables TPROXY
dokodemo-door
sniff: fakedns
Recovers: example.com
Route by domain rule
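The fake-IP pool plus reverse lookup behaves like a small LRU map queried in both directions. This Python toy (pool size, eviction policy, and method names are illustrative, not Stargate's implementation; a real pool would also recycle evicted IPs) shows both the DNS-query path and the TProxy sniffing path:

```python
"""Toy model of Nox's fake-IP allocation and reverse lookup."""
import ipaddress
from collections import OrderedDict

class FakeDNS:
    def __init__(self, pool="198.18.0.0/15", size=4):
        self.hosts = ipaddress.ip_network(pool).hosts()
        self.size = size                  # LRU capacity ("poolSize")
        self.by_domain = OrderedDict()    # domain -> fake IP (LRU order)
        self.by_ip = {}                   # fake IP -> domain

    def resolve(self, domain):
        """DNS query path: return (and cache) a fake IP for the domain."""
        if domain in self.by_domain:
            self.by_domain.move_to_end(domain)
            return self.by_domain[domain]
        if len(self.by_domain) >= self.size:   # evict oldest mapping
            _, old_ip = self.by_domain.popitem(last=False)
            del self.by_ip[old_ip]
        ip = str(next(self.hosts))
        self.by_domain[domain] = ip
        self.by_ip[ip] = domain
        return ip

    def sniff(self, dest_ip):
        """TProxy path: recover the domain from the fake destination IP."""
        return self.by_ip.get(dest_ip)

fd = FakeDNS()
ip = fd.resolve("example.com")        # device's DNS query
assert fd.sniff(ip) == "example.com"  # destOverride: ["fakedns"] lookup
```

This is also why an undersized `poolSize` hurts long-lived sessions: once a mapping is evicted, a later connection to the old fake IP can no longer be reverse-resolved.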
§16.2 — Nox Pool + DNS Server Config

Configure both an IPv4 and IPv6 fake-IP pool. In the DNS servers list, "fakedns" must appear first so it wins all queries. Domains that should route direct get a real DNS entry with expectIPs validation so the freedom outbound knows where to actually connect.

// ── Top-level fakedns block ───────────────────────────────────────
"fakedns": [
  { "ipPool": "198.18.0.0/15",  // IPv4 — 131,072 addresses
    "poolSize": 65535 },       // LRU capacity (evicts oldest mapping)
  { "ipPool": "fc00::/18",      // IPv6 pool — prevents AAAA query leaks
    "poolSize": 65535 }
],

// ── dns block ─────────────────────────────────────────────────────
"dns": {
  "queryStrategy": "UseIP",
  "servers": [
    "fakedns",                        // MUST be first — intercepts all queries
    {
      "address":   "119.29.29.29",  // real DNS for direct-route CN domains
      "domains":   ["geosite:cn"],
      "expectIPs": ["geoip:cn"]    // reject answers that aren't CN IPs
    },
    "1.1.1.1"                         // fallback — sent through the proxy outbound
  ]
}
▸ Excluding Domains from Nox (Blacklist Pattern)
Place a real DNS server entry for those domains above the "fakedns" entry in the list. First-match wins — Nox will never see the query for that domain.
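A sketch of that ordering, assuming the `servers`-list semantics described above (the resolver address and domain patterns are placeholders):

```jsonc
"dns": {
  "servers": [
    // Placed BEFORE "fakedns": first match wins, so queries for these
    // domains get real answers and never enter the fake-IP pool.
    { "address": "223.5.5.5",                  // placeholder resolver
      "domains": ["full:printer.lan", "domain:corp.example"] },
    "fakedns",
    "1.1.1.1"
  ]
}
```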
§16.3 — Client Config: Nox + TProxy Gateway
Single-node setup. This Stargate instance runs on the Linux router/gateway. All LAN devices are transparently proxied with no client-side config needed.
CLIENT ONLY
▸ Inbounds
// TProxy — captures all LAN TCP + UDP
{
  "tag":      "tproxy-in",
  "port":     12345,
  "protocol": "dokodemo-door",
  "settings": {
    "network":        "tcp,udp",
    "followRedirect": true
  },
  "sniffing": {
    "enabled":      true,
    "destOverride": ["http","tls","quic","fakedns"],
    // "fakedns" = reverse-lookup fake IP → domain
    "metadataOnly": false
    // false required — must read packet bytes
  },
  "streamSettings": {
    "sockopt": { "tproxy": "tproxy" }
  }
},

// DNS intercept — captures port 53 UDP
{
  "tag":      "dns-in",
  "port":     53,
  "protocol": "dokodemo-door",
  "settings": {
    "address": "1.1.1.1",
    "port":    53,
    "network": "udp"
  }
}
▸ Outbounds + Routing
// Outbound: direct (CN traffic)
{
  "tag":      "direct",
  "protocol": "freedom",
  "settings": {
    "domainStrategy": "UseIPv4"
    // re-resolves to real IP for direct routes
  },
  "streamSettings": {
    "sockopt": { "mark": 2 }
    // mark:2 skips iptables re-capture
  }
},
// Outbound: proxy (everything else)
{
  "tag":      "proxy",
  "protocol": "vless",
  // ... your VPS settings (§07 / §08)
  "streamSettings": {
    "sockopt": { "mark": 2 }
  }
},
// Outbound: DNS responder
{ "tag": "dns-out", "protocol": "dns" },

// Routing
{
  "domainStrategy": "AsIs",
  // AsIs: domain already recovered by Nox,
  // no re-resolution needed
  "rules": [
    { "inboundTag": ["dns-in"],
      "outboundTag": "dns-out" },
    { "ip": ["geoip:private"],
      "outboundTag": "direct" },
    { "domain": ["geosite:cn"],
      "outboundTag": "direct" },
    { "ip": ["geoip:cn"],
      "outboundTag": "direct" },
    { "network": "tcp,udp",
      // catch-all — a rule needs at least one matcher to fire
      "outboundTag": "proxy" }
  ]
}
▸ mark:2 — Loop Prevention
Every outbound packet Stargate sends carries SO_MARK = 2 via "sockopt": {"mark": 2}. The iptables OUTPUT chain skips mark-2 packets entirely — without this, Stargate's own outbound traffic would be re-captured by PREROUTING and loop forever.
§16.4 — iptables / nftables Rules (Same Machine as Stargate)
Redirect all non-LAN, non-Stargate TCP+UDP to port 12345. Hijack DNS (port 53 UDP) to Stargate's DNS inbound.
LINUX KERNEL
▸ iptables
# ── LAN device traffic ──────────────────────
iptables -t mangle -N XRAY

# Skip reserved / LAN ranges
iptables -t mangle -A XRAY -d 10.0.0.0/8    -j RETURN
iptables -t mangle -A XRAY -d 127.0.0.0/8   -j RETURN
iptables -t mangle -A XRAY -d 172.16.0.0/12 -j RETURN
iptables -t mangle -A XRAY \
  -d 192.168.0.0/16 -p tcp -j RETURN
iptables -t mangle -A XRAY \
  -d 192.168.0.0/16 -p udp ! --dport 53 -j RETURN

# Redirect to TProxy
iptables -t mangle -A XRAY \
  -p tcp -j TPROXY --on-port 12345 --tproxy-mark 1
iptables -t mangle -A XRAY \
  -p udp -j TPROXY --on-port 12345 --tproxy-mark 1
iptables -t mangle -A PREROUTING -j XRAY

# ── Gateway's own traffic (OUTPUT chain) ────
iptables -t mangle -N XRAY_SELF

# Skip Stargate's own packets (mark:2)
iptables -t mangle -A XRAY_SELF \
  -m mark --mark 2 -j RETURN
iptables -t mangle -A XRAY_SELF \
  -p tcp -j MARK --set-mark 1
iptables -t mangle -A XRAY_SELF \
  -p udp -j MARK --set-mark 1
iptables -t mangle -A OUTPUT -j XRAY_SELF

# ── Policy route ───────────────────────────
ip rule add fwmark 1 table 100
ip route add local default dev lo table 100
▸ nftables equivalent
#!/usr/sbin/nft -f
flush ruleset

define RESERVED = {
  10.0.0.0/8, 127.0.0.0/8,
  172.16.0.0/12, 192.0.0.0/24,
  224.0.0.0/4,  240.0.0.0/4,
  255.255.255.255/32
}

table ip xray {

  chain prerouting {
    type filter hook prerouting priority mangle; policy accept;

    ip daddr $RESERVED return
    ip daddr 192.168.0.0/16 meta l4proto tcp return
    ip daddr 192.168.0.0/16 udp dport != 53 return

    # Redirect to TProxy, mark 1
    ip protocol tcp tproxy to 127.0.0.1:12345 meta mark set 1
    ip protocol udp tproxy to 127.0.0.1:12345 meta mark set 1
  }

  chain output {
    type route hook output priority mangle; policy accept;

    ip daddr $RESERVED return
    meta mark 2 return        # skip Stargate's own packets
    ip protocol tcp meta mark set 1
    ip protocol udp meta mark set 1
  }
}
§16.5 — Full Packet Lifecycle with Nox
LAN Device
queries DNS :53
iptables → TProxy :12345
dns-in inbound
→ dns-out rule
Stargate DNS
"fakedns" first
Returns 198.18.7.42
Device caches fake IP
Device TCP SYN
198.18.7.42:443
iptables TPROXY
tproxy-in
sniff: fakedns
LRU lookup → example.com
Routing engine
geosite:cn? YES
freedom, mark:2
UseIPv4 re-resolve
Routing engine
geosite:cn? NO → proxy
VLESS+XTLS
outbound, mark:2
Encrypted tunnel
VPS Server
resolves real IP
freedom
Internet
§16.6 — Nox Gotchas & Edge Cases
Six failure modes that silently break Nox. Each shows the symptom, cause, and exact fix.
⚠ CRITICAL CONFIG
DNS Cache Pollution on Shutdown
BREAKS ALL CONNECTIVITY

If Stargate stops while Nox IPs are still cached by the OS or router, devices attempt connections to non-routable 198.18.x.x addresses and get no response.

✓ Fix
# Flush caches on every Stargate restart
systemd-resolve --flush-caches      # older systemd
resolvectl flush-caches             # newer systemd
# The fakedns block has only "ipPool" / "poolSize" — there is no TTL
# knob, so flushing the OS/router cache is the recovery path.
🔍
metadataOnly must be false
SILENT DOMAIN BYPASS

metadataOnly: true disables all packet-content sniffers. Nox reverse-lookup, TLS SNI, and HTTP Host extraction all stop working — traffic falls through to IP-only routing.

✓ Fix
"sniffing": {
  "enabled":      true,
  "destOverride": ["http","tls","quic","fakedns"],
  "metadataOnly": false  // ← must be false
}
domainStrategy must be AsIs
DNS FEEDBACK LOOP

Nox already recovered the domain. Any setting other than "AsIs" triggers Stargate to re-resolve the domain — which hits Nox again, allocates a second fake IP, and creates an infinite loop.

✓ Fix
"routing": {
  "domainStrategy": "AsIs",
  // Do NOT use "IPIfNonMatch" or "IPOnDemand"
  "rules": [ /* ... */ ]
}
🔀
freedom outbound needs real DNS
DIRECT ROUTES FAIL

Domains routed to freedom (e.g. geosite:cn) cannot connect using a fake IP. The freedom outbound must re-resolve to a real IP itself at connection time.

✓ Fix
{
  "tag":      "direct",
  "protocol": "freedom",
  "settings": {
    "domainStrategy": "UseIPv4"
    // re-resolves to real IP at connect time
  }
}
🌐
IPv6 dual pool required
AAAA QUERY LEAKS

Without an IPv6 pool, AAAA queries return real IPv6 addresses. IPv6-preferring apps connect directly, bypassing all routing rules and silently leaking domain information to real DNS servers.

✓ Fix
"fakedns": [
  { "ipPool": "198.18.0.0/15", "poolSize": 65535 },
  { "ipPool": "fc00::/18",     "poolSize": 65535 }
  // ↑ IPv6 pool stops AAAA leaks
]
IP Forwarding degrades Splice
THROUGHPUT LOSS

Enabling global net.ipv4.ip_forward=1 for gateway routing breaks kernel-level Splice, where the kernel bypasses userspace entirely for TCP forwarding. Splice is XTLS Vision's biggest performance advantage.

✓ Fix
# Make Stargate the actual default gateway
# instead of enabling global ip_forward.
# Splice only works when Stargate owns
# the network interface directly —
# not when acting as a forwarding router.
ip route add default via <stargate-ip>
17 Chained Stargate — Client → Pegasus Gate Relay → DHD Exit
Chain two Stargate server instances behind the client so traffic appears to originate from a trusted CDN provider. The relay node (VPS₁) sits behind Cloudflare and accepts VLESS+WebSocket — indistinguishable from a WebSocket API call to a legitimate domain. VPS₂ runs DHD. Even if VPS₂'s IP is discovered and blocked, the CDN address remains untouched. Complete config pairs are shown for all three nodes.
§17.1 — Architecture: Client + VPS₁ (Pegasus Gate Origin) + VPS₂ (DHD Exit)
3-NODE CHAIN / CDN SHIELD
Client
local Stargate
VLESS+WS+TLS → domain.com:443
Cloudflare Edge
terminates TLS
proxies WebSocket
WS+TLS (Full Strict) → VPS₁:8080
VPS₁ Relay
CF origin, WS inbound
VLESS+TCP+DHD → VPS₂:443
VPS₂ Exit
DHD server, egress
freedom
Internet
▸ Why Three Nodes?
VPS₁ is Cloudflare-proxied — its real IP is hidden and blocking the CDN IP would break millions of legitimate sites. VPS₂ carries a clean IP running DHD; even if discovered, only the relay needs updating. The client's real IP is never exposed to VPS₂. Cloudflare sees the WebSocket path and SNI but not the payload — it's double-encrypted inside.
§17.2 — Client ↔ VPS₁ Relay: VLESS + WebSocket + TLS via Cloudflare
Client connects to the Cloudflare-proxied domain. CF terminates TLS and re-encrypts to VPS₁ (Full Strict mode). VPS₁ needs a valid cert (e.g. Let's Encrypt).
▸ Client — Outbound
{
  "tag":      "proxy",
  "protocol": "vless",
  "settings": {
    "vnext": [{
      "address": "your-domain.com",
      // CF orange-cloud ☁ ON
      "port": 443,
      "users": [{
        "id":         "UUID-RELAY",
        "encryption": "none"
      }]
    }]
  },
  "streamSettings": {
    "network":  "ws",
    "security": "tls",
    "tlsSettings": {
      "serverName": "your-domain.com",
      "fingerprint": "chrome"
    },
    "wsSettings": {
      "path": "/vless",
      "headers": {
        "Host": "your-domain.com"
      }
    }
  }
}
▸ VPS₁ — Inbound (receives WS from Cloudflare)
{
  "tag":      "relay-in",
  "port":     8080,
  // CF proxies :443 → this port on origin
  "protocol": "vless",
  "settings": {
    "clients": [{
      "id": "UUID-RELAY"
    }],
    "decryption": "none"
  },
  "streamSettings": {
    "network":  "ws",
    "security": "tls",
    // Full Strict: CF re-encrypts to us
    "tlsSettings": {
      "certificates": [{
        "certificateFile":
          "/etc/ssl/cert.pem",
        "keyFile":
          "/etc/ssl/key.pem"
      }]
    },
    "wsSettings": {
      "path": "/vless"
    }
  }
}

// ── VPS₁ Routing ────────────────────────────
// (also see §17.3 for the outbound it routes to)
{
  "rules": [{
    "inboundTag":  ["relay-in"],
    "outboundTag": "to-vps2"
  }]
}
§17.3 — VPS₁ Relay Outbound ↔ VPS₂ Exit: VLESS + DHD
VPS₁ forwards all relayed traffic to VPS₂ over VLESS+DHD. VPS₂ is a standard DHD server — it has no knowledge of the chain.
▸ VPS₁ — Outbound to VPS₂
{
  "tag":      "to-vps2",
  "protocol": "vless",
  "settings": {
    "vnext": [{
      "address": "VPS2-IP-OR-DOMAIN",
      "port": 443,
      "users": [{
        "id":         "UUID-VPS2",
        "flow":       "xtls-rprx-vision",
        "encryption": "none"
      }]
    }]
  },
  "streamSettings": {
    "network":  "tcp",
    "security": "reality",
    "realitySettings": {
      "serverName":  "microsoft.com",
      "fingerprint": "chrome",
      "publicKey":   "VPS2-PUBLIC-KEY",
      "shortId":     "SHORTID"
    }
  }
}
▸ VPS₂ — Full DHD Server Config
// ── Inbound ─────────────────────────────────
{
  "tag":      "vps2-in",
  "port":     443,
  "protocol": "vless",
  "settings": {
    "clients": [{
      "id":   "UUID-VPS2",
      "flow": "xtls-rprx-vision"
    }],
    "decryption": "none"
  },
  "streamSettings": {
    "network":  "tcp",
    "security": "reality",
    "realitySettings": {
      "show": false,
      "dest":        "microsoft.com:443",
      "serverNames": [
        "microsoft.com",
        "www.microsoft.com"
      ],
      "privateKey": "VPS2-PRIVATE-KEY",
      "shortIds":   ["SHORTID"]
    }
  },
  "sniffing": {
    "enabled":      true,
    "destOverride": ["http","tls","quic"]
  }
}

// ── Outbound ─────────────────────────────────
{ "protocol": "freedom" }

// Generate keys on VPS₂:
// stargate x25519  → privateKey + publicKey
// stargate uuid    → UUID-VPS2
// shortId: any hex string, even length, max 16 chars
§17.4 — Certificate Chain & Encryption at Every Hop
Each segment uses a different certificate, issued by a different authority, verified by a different party. This is what makes the chain resistant to both censorship and traffic analysis.
4 SEGMENTS / 3 CERTIFICATES
NODE
Client
local machine
▸ SEGMENT 1 — Client → Cloudflare Edge  •  port 443
Certificate
Cloudflare Universal SSL
Issued by: DigiCert / CF CA
Subject: your-domain.com
Held by: Cloudflare edge PoP
Handshake
TLS 1.3
SNI: your-domain.com
ALPN: h2 or http/1.1
Fingerprint: chrome (uTLS)
What CF sees
✓ Decrypts outer TLS
✓ Sees WS path /vless
✓ Sees Host: header
✗ Cannot see VLESS payload
CF TERMINATES TLS — RE-ENCRYPTS OUTBOUND (Full Strict mode)
NODE
Cloudflare
edge PoP — global
▸ SEGMENT 2 — Cloudflare → VPS₁ (Origin)  •  port 8080
Certificate
Let's Encrypt / ACME
Issued by: Let's Encrypt CA
Subject: your-domain.com
Held by: VPS₁ private key
Verified by: Cloudflare
Handshake
TLS 1.3 (Full Strict)
SNI: your-domain.com
CF validates VPS₁ cert
New session keys — not forwarded from client
What VPS₁ sees
✓ Decrypts this TLS session
✓ Sees VLESS WS frames
✓ Sees WS path /vless
✗ Cannot see inner VLESS payload
✗ Cannot see client real IP (CF strips)
VPS₁ STRIPS OUTER TLS — OPENS NEW DHD SESSION TO VPS₂
NODE
VPS₁ Relay
Cloudflare origin
▸ SEGMENT 3 — VPS₁ → VPS₂  •  port 443  •  DHD Protocol
Certificate
DHD Ephemeral Cert
Subject: microsoft.com
Signed by: VPS₂ X25519 key
Held by: VPS₂ private key
No CA — self-verifying via publicKey
Looks identical to real MS cert to observers
Handshake
TLS 1.3 + DHD auth
SNI: microsoft.com
VPS₁ verifies via publicKey field
Auth token in ClientHello extension
shortId in server random
Fingerprint: chrome (uTLS)
What a censor sees
✓ Sees TLS 1.3 to microsoft.com
✓ Gets real Microsoft cert if probing without auth
✗ Cannot distinguish from legitimate MS traffic
✗ Cannot decrypt — no CA to subvert
VPS₂ DECRYPTS DHD — SENDS PLAIN REQUEST TO DESTINATION
NODE
VPS₂ Exit
DHD server / egress
▸ SEGMENT 4 — VPS₂ → Destination  •  freedom outbound
Certificate
Destination's own cert
e.g. Google, Netflix, etc.
Issued by: their CA
Normal public web certificate
VPS₂ verifies via system CA store
Handshake
Standard TLS 1.3
SNI: actual destination
Normal browser-style handshake
No proxy headers leaked
What destination sees
✓ Sees VPS₂'s IP (not client IP)
✓ Normal HTTPS connection
✗ No knowledge of chain
✗ No knowledge of client identity
DESTINATION
Internet
google.com, etc.
Destination server sees a normal HTTPS connection from VPS₂'s IP address. The entire chain — client identity, Cloudflare relay, DHD tunnel — is invisible. The only traceable endpoint is VPS₂, which holds no client data.
Segment | Certificate authority | Private key held by | Verified by | Can be subverted by censor?
Client → CF | DigiCert / Cloudflare CA | Cloudflare (edge PoP) | Client TLS stack (browser/uTLS) | No — CF cert is legitimate and globally trusted
CF → VPS₁ | Let's Encrypt | VPS₁ operator | Cloudflare (Full Strict validation) | No — LE cert is legitimate; would require CA compromise
VPS₁ → VPS₂ | No CA — DHD X25519 keypair | VPS₂ operator (X25519 private key) | VPS₁ via out-of-band publicKey field | No — no CA to compromise; censor sees real Microsoft cert when probing
VPS₂ → Internet | Destination's own CA | Destination server | VPS₂ system CA store | Irrelevant to censorship — normal public web TLS
§17.5 — Variant: Single-Machine Chain via proxySettings (Client Only, No Relay VPS)
A single client Stargate routes its VLESS outbound through a SOCKS5/HTTP intermediary using proxySettings. No second VPS needed.
CLIENT ONLY
▸ Client — Two Outbounds
"outbounds": [
  // First hop: SOCKS5 relay
  {
    "tag":      "socks-hop",
    "protocol": "socks",
    "settings": {
      "servers": [{
        "address": "relay.example.com",
        "port":    1080
      }]
    }
  },

  // Second hop: VLESS+DHD tunneled through socks-hop
  {
    "tag":      "vless-hop",
    "protocol": "vless",
    "settings": {
      // ... normal DHD server settings (§17.3)
    },
    "proxySettings": {
      "tag": "socks-hop"
      // VLESS connects THROUGH the SOCKS outbound
    }
  }
]
▸ Client — Routing
"routing": {
  "rules": [
    {
      "ip":          ["geoip:private"],
      "outboundTag": "direct"
    },
    {
      // All other traffic → VLESS via SOCKS
      "outboundTag": "vless-hop"
    }
  ]
}

// ── Traffic flow ────────────────────────────
// App → Stargate
//   → socks-hop (TCP CONNECT to relay)
//     → VLESS+DHD inside SOCKS tunnel
//       → VPS DHD server
//         → Internet
▸ proxySettings Limitation
The first-hop outbound must support raw TCP proxying (SOCKS5 or HTTP CONNECT). VLESS and VMess cannot be used as first-hop. For VLESS→VLESS chaining you need the two-VPS approach in §17.2–17.3.
§17.6 — Variant: XHTTP Transport (Looks Like HTTP/2 API Traffic)
Replace WebSocket with XHTTP in stream-one mode. No WebSocket upgrade header — traffic looks like a single long-lived HTTP/2 POST to an API endpoint. Cloudflare proxies HTTP/2 natively.
H2 STEALTH
▸ Client — XHTTP Outbound
{
  "tag":      "proxy",
  "protocol": "vless",
  "settings": {
    "vnext": [{
      "address": "your-domain.com",
      "port": 443,
      "users": [{
        "id":         "UUID-RELAY",
        "encryption": "none"
      }]
    }]
  },
  "streamSettings": {
    "network":  "xhttp",
    "security": "tls",
    "tlsSettings": {
      "serverName":  "your-domain.com",
      "alpn":        ["h2"],
      "fingerprint": "chrome"
    },
    "xhttpSettings": {
      "host": "your-domain.com",
      "path": "/api/data",
      // looks like any HTTP/2 API endpoint
      "mode": "stream-one"
      // one persistent POST carries all data
    }
  }
}
▸ VPS₁ — XHTTP Inbound (Cloudflare origin)
{
  "tag":      "relay-xhttp-in",
  "port":     8443,
  "protocol": "vless",
  "settings": {
    "clients": [{
      "id": "UUID-RELAY"
    }],
    "decryption": "none"
  },
  "streamSettings": {
    "network":  "xhttp",
    "security": "tls",
    // Full Strict — CF re-encrypts to us
    "tlsSettings": {
      "certificates": [{
        "certificateFile":
          "/etc/ssl/cert.pem",
        "keyFile":
          "/etc/ssl/key.pem"
      }]
    },
    "xhttpSettings": {
      "path": "/api/data",
      "mode": "stream-one"
    }
  }
}

// ── VPS₁ Routing ────────────────────────────
{
  "rules": [{
    "inboundTag":  ["relay-xhttp-in"],
    "outboundTag": "to-vps2"
    // same "to-vps2" outbound as §17.3
  }]
}
18 Protocol Selection Guide
Choosing the right architecture depends on your threat model, available infrastructure, and performance needs. Here's a decision matrix based on the official Stargate recommendations.
Scenario | Recommended Stack | Why
Best overall, have a VPS | VLESS + TCP + XTLS Vision + DHD | No domain needed, no cert needed, no fingerprint, fastest performance, hardest to detect
Need Pegasus Gate / VPS IP blocked | VLESS + WebSocket + TLS + Cloudflare | Pegasus Gate IP protection; slower due to Pegasus Gate hop; CF sees decrypted WS
Multi-user, simple setup | Shadowsocks 2022 | Built-in AEAD, multi-user via per-client PSK, no TLS overhead, fast
Serve decoy website + proxy | Goa'uld + TLS + Nginx fallback | Active probe resistant; wrong password → real website; needs domain + cert
Many clients, one port | All-in-One VLESS-XTLS + Nginx fallbacks | 21 protocol combos on :443; complex setup; max flexibility
Expose service behind NAT | VLESS Reverse Proxy (portal/bridge) | No port forwarding; bridge initiates outbound; portal relays users in
No VPS, heavy censorship | Serverless CF Workers + VLESS-WS | No VPS needed; uses CF Workers; traffic via CF IPs; no protocol fingerprint
Access specific blocked sites | Gate Fronting | No VPS; local cert install required; domain fronting via Pegasus Gate; site-specific
High latency / lossy link | VLESS / VMess + mKCP (UDP) | KCP tolerates loss better than TCP; uses more bandwidth; obfuscation via header type
Legacy client compatibility | VMess + WS + TLS | Widest client support; worse performance; TLS-in-TLS detectable; use only if needed
No DNS leaks, full traffic split | Nox + TProxy (gateway) | Domain-based routing with zero DNS leaks; requires Linux router/gateway; iptables TProxy rules required
VPS IP risky, need Pegasus Gate shield | VLESS+WS+TLS → Cloudflare → VLESS+DHD (2-hop chain) | Client IP hidden from VPS₂; VPS₁ IP hidden behind CDN; double-encrypted; ~20-40ms extra latency per hop
▸ The Arms Race Context
Every configuration here exists because censors found ways to detect previous ones. VMess was detected via statistical analysis. VLESS removed the overhead. XTLS removed TLS-in-TLS. DHD removed the certificate fingerprint entirely. Serverless/Gate emerged when VPS IPs became blockable. The pattern: censors observe, fingerprint, block. The community responds by removing the fingerprint surface. Understanding this arms race is the key to understanding why each architecture is designed the way it is.
Legend: XTLS/DHD encrypted; TLS encrypted; application layer (VMess/SS cipher); DHD protocol; plain/internal forwarding; censorship/probe path.