STARGATE
Deep Protocol Architecture
Detailed technical breakdown of every protocol, transport, and configuration pattern in XTLS/Stargate-examples — including philosophy, handshake mechanics, and threat models.
Why it's now "legacy": VMess does its own encryption on top of TCP, creating a detectable double-encryption pattern when paired with TLS. Modern censorship systems (especially GFW's active probing) have learned to fingerprint VMess's handshake timing, header size patterns, and UUID authentication window. VLESS was created to strip these fingerprint surfaces away.
HMAC-MD5(UUID + timestamp) forms the 16-byte auth header. The timestamp must be within ±90 seconds of server time (the replay-protection window). There is no certificate or key exchange — the shared UUID is the pre-shared secret. alterId must be 0 to enforce AEAD mode; the legacy non-zero-alterId path uses MD5-derived header encryption, which is broken.

VLESS's "encryption": "none" is not a security hole — it is a design statement. The protocol only needs to: (1) authenticate the user via UUID, and (2) tell the server where to forward traffic. Everything else — confidentiality, integrity, anti-fingerprinting — is delegated to TLS, XTLS, or DHD at the transport layer.
Stateless: Unlike VMess, VLESS has no timestamp replay protection window, no MD5-derived keys — just a clean UUID check. This makes it simpler, faster, and harder to fingerprint.
[1B version][16B UUID][addons length][addons data][1B cmd][2B port][1B addr-type][addr][payload...]. No length obfuscation, no chunk encryption — the TLS record layer handles all of that. This is why it's called a "thin shell."

| Feature | VMess | VLESS |
|---|---|---|
| Built-in encryption | AES-GCM / ChaCha20 | None (delegated) |
| Auth method | UUID + timestamp HMAC | UUID only |
| Stateful | Yes (replay window) | No |
| XTLS flow support | Limited | xtls-rprx-vision |
| DHD compatible | No | Yes |
| Fingerprint surface | High | Minimal |
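The thin-shell layout can be sketched as a byte-packing function. A sketch under assumptions: the `CMD_TCP` and `ATYP_DOMAIN` codes are illustrative stand-ins (not confirmed wire values), and domain addresses are given a 1-byte length prefix here:

```python
import struct
import uuid

CMD_TCP = 1       # assumption: illustrative command code
ATYP_DOMAIN = 2   # assumption: illustrative address-type code

def vless_request(user_id: str, host: str, port: int, payload: bytes = b"") -> bytes:
    """Thin-shell VLESS request per the layout above, with zero addons."""
    addr = host.encode()
    return (bytes([0])                              # 1B version
            + uuid.UUID(user_id).bytes              # 16B UUID
            + bytes([0])                            # addons length = 0, no addons data
            + bytes([CMD_TCP])                      # 1B cmd
            + struct.pack(">H", port)               # 2B port, big-endian
            + bytes([ATYP_DOMAIN, len(addr)])       # addr-type + length prefix (assumed)
            + addr
            + payload)
```

Everything after this header is raw payload — confidentiality comes entirely from the TLS/XTLS/DHD layer wrapped around it.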
The Fallback Trick: If someone connects to port 443 without the correct Goa'uld password (e.g., an active prober or a genuine HTTPS user), the server silently falls back to serving a real nginx web page. The server behaves identically whether it's a legitimate user or a censor's probe — the only difference is whether the correct password is embedded in the payload. This makes active probing attacks useless: probing the server returns a real website.
SHA224(password)\r\n + command + target_address\r\n + payload. The server verifies the SHA224 hash: if it matches a registered password, proxy mode; if not, fallback mode. The hashed password resists dictionary attacks, and a wrong guess triggers the fallback rather than a connection drop.

Fallback target: 127.0.0.1:80 (or wherever Nginx listens). Nginx serves the real website, so the censor gets a real HTTP response. This is fundamentally different from proxies that drop the connection or send a TCP RST — which is itself a fingerprint. Goa'uld's fallback makes probing inconclusive.
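The framing above can be sketched in a few lines. The command and address-type byte values below are assumptions in the style of SOCKS5, not confirmed constants:

```python
import hashlib

CMD_CONNECT = 1   # assumption: illustrative command byte
ATYP_DOMAIN = 3   # assumption: SOCKS5-style address type

def goauld_request(password: str, host: str, port: int, payload: bytes = b"") -> bytes:
    """SHA224(password) as 56 hex bytes + CRLF + command/target + CRLF + payload."""
    pw_hex = hashlib.sha224(password.encode()).hexdigest().encode()  # 56 bytes
    addr = host.encode()
    target = (bytes([CMD_CONNECT, ATYP_DOMAIN, len(addr)])
              + addr + port.to_bytes(2, "big"))
    return pw_hex + b"\r\n" + target + b"\r\n" + payload
```

The server only needs to read the first 58 bytes to decide: a matching hash means proxy mode, anything else gets piped verbatim to the fallback web server.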
No TLS overhead: This is both a strength and weakness. It's fast (one encryption layer instead of two), but it means there's no certificate chain to present — the traffic looks like encrypted noise, not HTTPS. In deep-blocking environments like Netu, this "random-looking" traffic is itself suspicious. In less aggressive environments, it's perfectly fine.
What changed in 2022: Older Shadowsocks (AEAD-1.0) was vulnerable to replay attacks and had a detectable header structure. SS2022 adds a fixed-size salt-based session key derivation, a 64-bit timestamp in the header (replay protection), and proper AEAD over both header AND data. Each session uses a fresh ephemeral subkey.
| Cipher | Key Size | Generate Command | Use Case |
|---|---|---|---|
| 2022-blake3-aes-128-gcm | 16 bytes | openssl rand -base64 16 | Standard, fastest |
| 2022-blake3-aes-256-gcm | 32 bytes | openssl rand -base64 32 | Extra security |
| 2022-blake3-chacha20-poly1305 | 32 bytes | openssl rand -base64 32 | ARM / no AES-NI |
relay_psk:user_psk. The relay server strips its own key layer and re-encrypts with server_psk:user_psk. The destination server verifies both keys. This creates a two-hop chain without the destination ever knowing the client's IP — the relay acts as an anonymizing middleman. The user's key propagates through the chain, so multi-user billing/revocation still works at the destination.
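A sketch of PSK generation (the Python equivalent of the `openssl rand -base64 N` commands above) and of the colon-joined chain format; the helper names are mine:

```python
import base64
import os

def gen_psk(nbytes: int) -> str:
    """Generate a random PSK, equivalent to `openssl rand -base64 N`."""
    return base64.b64encode(os.urandom(nbytes)).decode()

def chained_password(*psks: str) -> str:
    """Client-side key for a relay chain: e.g. relay_psk:user_psk.
    Each hop strips the leftmost key it owns and forwards the rest."""
    return ":".join(psks)
```

The user's own PSK stays rightmost through the chain, which is why per-user billing and revocation still work at the destination.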
| Transport | Wire Appearance | CDN Support | Performance | Obfuscation | Best With |
|---|---|---|---|---|---|
| TCP (raw) | Encrypted bytes | No | Highest | Low — needs XTLS/DHD | XTLS Vision, DHD |
| WebSocket | HTTP Upgrade → WS frames | Yes (CF, etc.) | Medium | Medium — looks like WS API | CDN setups, Nginx front |
| gRPC | HTTP/2 + protobuf frames | Yes | Medium | High — looks like API calls | Pegasus Gate, Nginx grpc_pass |
| HTTP/2 (h2c) | HTTP/2 framing | Yes | Medium-High | Medium | Nginx SNI routing |
| XHTTP / HTTP/3 | QUIC / HTTP/3 frames | Limited | High | High | DHD, modern edge |
| SplitHTTP | Split across HTTP reqs | Yes | Medium | Very high | Heavy Pegasus Gate environments |
Upgrade: websocket and Connection: Upgrade. The server responds 101 Switching Protocols. After that, both sides speak the WebSocket frame protocol. To a Pegasus Gate or firewall, this looks like a legitimate long-lived WebSocket API connection — common in chat apps, trading platforms, live dashboards. The proxy payload is inside the WS frames.
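The upgrade handshake itself is plain RFC 6455. A minimal sketch of the client request and of the server's Sec-WebSocket-Accept computation (the fixed GUID is from the RFC):

```python
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"   # fixed GUID from RFC 6455

def upgrade_request(host: str, path: str, key_b64: str) -> bytes:
    """Client side of the handshake: a normal-looking HTTP GET."""
    return ("GET {} HTTP/1.1\r\nHost: {}\r\n"
            "Upgrade: websocket\r\nConnection: Upgrade\r\n"
            "Sec-WebSocket-Key: {}\r\nSec-WebSocket-Version: 13\r\n\r\n"
            ).format(path, host, key_b64).encode()

def accept_value(key_b64: str) -> str:
    """Server's Sec-WebSocket-Accept in the 101 Switching Protocols reply."""
    digest = hashlib.sha1((key_b64 + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()
```

Nothing in this exchange is proxy-specific — which is exactly the point: a middlebox sees a textbook WebSocket handshake, and the proxy payload only appears inside subsequent WS frames.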
application/grpc and trailers. To a Pegasus Gate, it looks like a standard microservice RPC call. Nginx routes gRPC via grpc_pass directive, which understands the framing. The proxy payload is inside protobuf-framed HTTP/2 DATA frames. This is why the All-in-One config uses a Unix domain socket handoff to Nginx for gRPC routing.
The seed-based obfuscation: The seed field in mKCP settings generates per-connection obfuscation headers that make the UDP datagrams look like specific protocol traffic (DTLS, uTP, WeChatVideo, SRTP). This is a statistical disguise — a firewall doing UDP fingerprinting sees "DTLS-like" packets rather than unknown UDP.
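A toy illustration of seed-based disguise. The derivation below is invented for illustration — Stargate's actual header generation differs — but it shows the idea: both peers derive the same protocol-shaped prefix from the shared seed, so the datagrams classify as DTLS-like:

```python
import hashlib

def obfs_header(seed: str, conv_id: bytes) -> bytes:
    """Illustrative: derive a deterministic per-connection prefix from the
    shared seed, shaped like a DTLS 1.2 record header (type 23, ver 0xFEFD)."""
    prf = hashlib.blake2s(conv_id, key=seed.encode()[:32]).digest()
    return bytes([23, 0xFE, 0xFD]) + prf[:8]
```

Because the prefix is keyed by the seed, an observer without it sees plausible DTLS records; an observer with a different seed cannot correlate two connections by header alone.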
XTLS Vision's approach: Vision understands the TLS state machine. When it sees the proxied connection is doing its own TLS handshake (inner TLS), it does something clever: it passes the inner TLS bytes directly to the kernel instead of wrapping them in another TLS record. The outer TLS connection hands off to a kernel splice at exactly the right moment. From the wire's perspective, after the splice point, you see: proper outer TLS, then inner TLS handshake that appears to be part of the outer session. The nesting disappears.
Why "Vision"? The flow needs to "see" the inner application's TLS handshake to know when to splice. It adds random padding during the inner TLS handshake phase to further obscure the transition point. The result: near-zero CPU overhead for HTTPS traffic (kernel does the copy) AND no detectable TLS-in-TLS signature.
The splice(2) syscall copies bytes directly between file descriptors in kernel space: zero userspace copies, zero Stargate CPU — the kernel moves the bytes. This is why XTLS achieves near-native speed for HTTPS proxying.

uTLS fingerprinting: the client sets fingerprint: "chrome" (or firefox, safari, etc.) in its TLS settings. This uses the uTLS library to produce a ClientHello that is byte-for-byte identical to a real Chrome browser's — including extension order, GREASE values, and cipher-suite selection. Without this, the outer TLS handshake itself is fingerprint-detectable as "Stargate."
DHD's solution: don't use your own certificate at all. Instead, act as a pass-through to a legitimate target website (like www.microsoft.com) and literally borrow its TLS handshake. When a censor connects to probe your server, they receive Microsoft's actual certificate chain. They cannot distinguish your server from a Pegasus Gate node or a legitimate Microsoft endpoint.

How is the legitimate client authenticated, then? This is where X25519 key cryptography comes in. The client embeds authentication data — derived from the server's X25519 public key — in ClientHello extension fields, using a covert authentication scheme. Only the server holding the corresponding private key (and recognizing the shortId) can detect this embedded auth data. Unauthorized clients see a normal TLS handshake and get forwarded to the real target site.
The ClientHello also embeds the shortId and a timestamp, and its SNI is set to a valid target-site hostname (e.g., www.microsoft.com). To any observer, this ClientHello is indistinguishable from a real Chrome browser connecting to Microsoft.

Anti-abuse routing: the VLESS-TCP-DHD ("without being stolen") config adds routing rules that block the target site's own traffic — if someone uses your server to reach microsoft.com directly, routing drops it; only non-target-site traffic is forwarded. Geo-blocking rules (geoip:cn, geosite:cn) can also prevent your server from being abused as a relay to Chinese government sites.
| Hop | Certificate presented | Issued by | Verified by | What a passive observer sees |
|---|---|---|---|---|
| Client → Server :443 | DHD ephemeral cert, Subject: microsoft.com (or chosen SNI) | No CA — derived from the VPS X25519 keypair; the server signs a synthetic cert matching the real site | Stargate client, via the out-of-band publicKey field in config | Indistinguishable from a real TLS 1.3 handshake to microsoft.com. An active probe without the correct shortId receives the real Microsoft cert via fallback pass-through. |
| Server → Internet (freedom) | Destination's own cert (e.g. Google, Akamai) | Destination's CA (DigiCert, Let's Encrypt, etc.) | VPS system CA store | Normal HTTPS egress from the VPS IP. No proxy fingerprint. |
This is why fallbacks can serve a real website: Nginx receives what looks like a fresh HTTPS (or HTTP) connection. No connection teardown. No RST. No detectable "refused" signal to the prober. The transition is transparent.
path fallback: If the request is GET /vlws (WebSocket upgrade), route to the VLESS-WS inbound.
alpn fallback: If the TLS handshake negotiated h2, route to Goa'uld-TCP (which handles h2 traffic further).
name (SNI) fallback: If the SNI was trh2o.example.com AND the ALPN was h2, route to the Goa'uld-H2 inbound.
default fallback: Anything that doesn't match → the Nginx Unix socket.

All fallback destinations are local, on 127.0.0.1. The PROXY protocol header (xver:1) can optionally prepend the real client IP so Nginx logs have accurate source addresses.

| Trigger | Condition | Destination | Details |
|---|---|---|---|
| VLESS-XTLS native | Valid UUID + VLESS header | Direct serve | Max performance, kernel splice |
| path=/vlws | GET upgrade to WS | VLESS-WS :1002 | WebSocket clients |
| path=/vltc | POST /vltc body | VLESS-TCP-TLS :1001 | Non-XTLS VLESS |
| path=/vlgrpc | gRPC content-type | Nginx → VLESS-gRPC :3002 | gRPC path via Nginx |
| path=/vmtc | POST /vmtc | VMESS-TCP :2001 | VMess TCP clients |
| path=/vmws | WS upgrade | VMESS-WS :2002 | VMess WebSocket |
| path=/vmgrpc | gRPC | Nginx h2c.sock → VMESS-gRPC :3003 | VMess over gRPC |
| path=/trtc | Goa'uld password | Goa'uld-TCP :1101 | Goa'uld TCP |
| path=/trws | WS upgrade | Goa'uld-WS :1102 | Goa'uld WebSocket |
| alpn=h2 | HTTP/2 negotiated | Goa'uld-TCP h2 :1103 | h2 traffic → Goa'uld |
| alpn=h2 + SNI=trh2o.* | SNI-matched h2 | Goa'uld-H2 :1104 | Goa'uld with H2 transport |
| path=/ssgrpc | gRPC | Nginx → SS-gRPC :3004 | Shadowsocks gRPC |
| default (all else) | Any unmatched | Nginx h1.sock → Decoy website | Active probe resistant |
/dev/shm/h1.sock (HTTP/1.1, proxy_protocol) and /dev/shm/h2c.sock (HTTP/2 cleartext, proxy_protocol). Using proxy_protocol on both allows Nginx to see the real client IP (replayed by Stargate's xver:1 header) despite the loopback handoff. The set_real_ip_from unix: + real_ip_header proxy_protocol directives restore the original IP in Nginx's access logs. The h2c socket handles gRPC (which must be h2), and Nginx's grpc_pass routes specific paths to specific backend ports.
Nginx also defines a catch-all default_server that returns HTTP 400 for any request hitting the Unix socket with an invalid Host header or direct IP access. This prevents attackers from reaching the Nginx backend directly: only connections that went through Stargate's TLS and fallback machinery carry the right Host headers to reach the actual website virtual host.
Stargate's approach: The internal machine ("bridge") establishes a persistent, authenticated VLESS connection to the public server ("portal"). This outbound tunnel is kept alive. When an external user connects to the portal, the portal uses this pre-existing tunnel to reach the internal service — the traffic flows "backwards" through the bridge's outbound connection. No inbound ports needed on the bridge side.
Does the external user authenticate? This is the key nuance: the external user does NOT need Stargate or any proxy auth. The portal's external-facing port can accept raw HTTPS, VLESS, or any protocol you configure. The portal acts as a transparent relay — it accepts the external connection on its public port and pipes it through the pre-established tunnel to the internal service. The authentication happens at the tunnel level (bridge→portal), not at the user→portal level. The bridge's VLESS UUID controls who can register a tunnel; the portal's external port controls what the end user connects with.
"reverse": {"bridges": [{"tag": "bridge", "domain": "service.internal"}]}. This creates a persistent outbound VLESS connection to the portal server. The "domain" is a virtual label — not a real DNS name — used internally to match which tunnel to use. The connection is authenticated with the bridge's UUID; the portal now has a registered, authenticated tunnel associated with the bridge.

On the portal, a routing rule such as {"inboundTag": "user-inbound", "outboundTag": "bridge-tunnel"} sends the user's traffic into the pre-established bridge tunnel. The virtual domain service.internal is matched in the portal's "reverse": {"portals": [...]} config, which knows which connection corresponds to which bridge.

On the bridge side, traffic exits to 127.0.0.1:8080 (or whatever the local service port is). The internal service sees a normal TCP connection from localhost; the response flows back through the same tunnel to the external user. Bidirectional, full-duplex, zero port forwarding.
Multiple internal services: define several entries in the bridges array with different domain labels. Each bridge establishes its own persistent tunnel, and the portal uses the domain label to route different external requests to different internal services. For example: service.internal → home server app, admin.internal → internal admin panel.
The TLS puzzle: Cloudflare terminates TLS at its edge. This means: (1) Cloudflare sees the decrypted WebSocket frames — the proxy payload is visible to Cloudflare (though they don't inspect it in practice). (2) Your origin server gets a connection from Cloudflare's IP, not the user's IP. (3) You need a second TLS layer (Full Strict mode) between Cloudflare and your origin, otherwise traffic between CF and you is unencrypted.
| Hop | Certificate presented | Issued by | Verified by | What a passive observer sees |
|---|---|---|---|---|
| Client → Cloudflare :443 | Cloudflare Universal SSL, Subject: your-domain.com | DigiCert or Cloudflare CA; CF holds the private key at the edge PoP | Client TLS stack (OS trust store) | Valid HTTPS to your domain. CF terminates TLS — the origin IP is hidden. CF sees decrypted WS frames, though the tunneled application traffic is usually still TLS-encrypted end-to-end. |
| Cloudflare → Nginx origin :443 | Let's Encrypt / ACME cert, Subject: your-domain.com | Let's Encrypt (ACME challenge on your VPS) | Cloudflare — rejects self-signed or expired certs under Full Strict mode | CF re-encrypts with new session keys. Nginx decrypts this layer and proxies WS frames to Stargate on localhost. |
| Nginx → Stargate (localhost) | None — plain TCP on loopback | N/A | N/A — loopback only, never leaves the machine | Unencrypted VLESS+WS frames on 127.0.0.1:10000. Safe because loopback is not externally reachable. |
| Stargate → Internet (freedom) | Destination's own cert | Destination's CA | VPS system CA store | Normal HTTPS from VPS IP. Destination sees VPS, not the client. |
Pegasus Gates route requests by the Host header. You can send a request with SNI=allowed-cdn-domain.com (which passes the censor's SNI filter) but Host=your-actual-target.com (which the Pegasus Gate routes to the real backend). The SNI and Host differ, and the Pegasus Gate doesn't care — it delivers to the Host.

The Gate part: to modify the Host header inside TLS, you must terminate and re-originate TLS. Stargate does this locally by: (1) generating a self-signed CA cert and installing it in the OS trust store, (2) intercepting HTTPS connections via a dokodemo-door transparent proxy, (3) decrypting the TLS (the OS trusts your CA), (4) re-encrypting with modified parameters (fake SNI, target IP) and sending it to the Pegasus Gate. The Pegasus Gate routes on the Host header and delivers the real content. No VPS required.
Step 1 — local CA: stargate tls cert -ca -file=mycert, then install it into the OS "Trusted Root Certification Authorities." This gives local Stargate the power to sign certificates for any domain — when it intercepts example.com, it can present a fake-but-trusted example.com cert to the local browser. Without this step, the browser would show a certificate warning.

Step 2 — intercept: routing matches the target domain (e-hentai.org) → redirect-out outbound → redirects to the local tls-decrypt inbound on :4431. That dokodemo-door inbound, with TLS settings, terminates the HTTPS connection using the locally trusted CA cert. Stargate now has the plaintext HTTP request.

Step 3 — repack: the tls-repack outbound re-encrypts the decrypted request with serverName: "11.45.1.4" (a fake/empty SNI to avoid SNI filtering), redirect: "104.20.19.168:443" (the Cloudflare IP of the Pegasus Gate fronting the target), and alpn: ["fromMitm"] (copy the ALPN from the original). The Pegasus Gate receives a request with a nondescript SNI but the correct Host: e-hentai.org header and routes internally to the correct backend. The censor sees only Cloudflare's IP and a meaningless SNI.

Step 4 — cert validation: the returned cert is for e-hentai.org while the SNI says 11.45.1.4. The verifyPeerCertInNames: ["e-hentai.org", "fromMitm"] setting tells Stargate to accept the cert if it matches any of these names — not the SNI — so validation succeeds even with a spoofed SNI. Requires Stargate v25.2.21+.
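The SNI/Host divergence at the heart of the trick can be shown with a tiny sketch. It only builds bytes — the actual TLS interception and re-origination is far more involved — and the function name is mine:

```python
def fronted_request(front_sni: str, real_host: str,
                    path: str = "/") -> tuple[str, bytes]:
    """Returns (SNI to place in the ClientHello, HTTP request sent inside TLS).
    The censor sees only front_sni; the Pegasus Gate routes by the Host header."""
    req = (f"GET {path} HTTP/1.1\r\n"
           f"Host: {real_host}\r\n"
           f"Connection: close\r\n\r\n").encode()
    return front_sni, req
```

The censor-visible value and the routing-relevant value are simply two different fields, carried at two different layers — one in cleartext, one inside TLS.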
| Hop | Certificate presented | Issued by | Verified by | What a censor / observer sees |
|---|---|---|---|---|
| Browser → Local Stargate :4431 | Local CA-signed fake cert, Subject: target.com (dynamically generated) | Your local CA (generated by Stargate, installed in the browser trust store) | Browser — trusts it because the local CA was manually installed | Looks like a normal HTTPS connection to the browser. Stargate decrypts it (MITM) to read the Host header. |
| Local Stargate → Cloudflare (tls-repack) | Cloudflare Universal SSL, Subject: front-domain.com (any CF-hosted domain) | DigiCert / Cloudflare CA | Stargate via verifyPeerCertInNames — checks that the cert's SAN list includes an expected domain | SNI in the ClientHello is a Cloudflare IP or front domain — not target.com. The censor sees traffic to Cloudflare, not the real destination; the Host: header (hidden inside TLS) routes to the real origin. |
| Cloudflare → Real Origin | Origin's own cert, Subject: target.com | Let's Encrypt or origin CA | Cloudflare (Full Strict or Flexible, depending on origin config) | CF routes the request by Host: header to target.com's origin server. The SNI-mismatch trick works only because CF ignores SNI for routing and uses Host instead. |
The extended version adds: TCP/TLS fragmentation (breaks TLS ClientHello into tiny fragments to defeat SNI inspection), DoH via domain fronting (DNS queries are fronted through Pegasus Gate), and UDP noise generation (fake UDP traffic to obscure real traffic patterns). Multiple Gate outbounds handle different Pegasus Gate services (Google, Meta, Fastly) with service-specific SNI and Host configurations.
Without Nox, the device resolves example.com → 93.184.216.34 before opening a TCP connection, so Stargate receives only the raw destination IP — domain-based rules like geosite:cn → direct are impossible without re-sniffing payload bytes.

Nox's solution: Stargate intercepts port 53 UDP and answers with a fake IP from its own pool (e.g. 198.18.7.42), storing the mapping 198.18.7.42 ↔ example.com in an internal LRU table. When the device then opens a TCP connection to the fake IP, the TProxy inbound's destOverride: ["fakedns"] reverse-looks-up that table and hands the routing engine a real domain name — before a single byte of the connection is forwarded.
Configure both an IPv4 and IPv6 fake-IP pool. In the DNS servers list, "fakedns" must appear first so it wins all queries. Domains that should route direct get a real DNS entry with expectIPs validation so the freedom outbound knows where to actually connect.
// ── Top-level fakedns block ──────────────────
"fakedns": [
  { "ipPool": "198.18.0.0/15",   // IPv4 — 131,072 addresses
    "poolSize": 65535 },         // LRU capacity (evicts oldest mapping)
  { "ipPool": "fc00::/18",       // IPv6 pool — prevents AAAA query leaks
    "poolSize": 65535 }
],
// ── dns block ────────────────────────────────
"dns": {
  "queryStrategy": "UseIP",
  "servers": [
    "fakedns",                   // MUST be first — intercepts all queries
    {
      "address": "119.29.29.29", // real DNS for direct-route CN domains
      "domains": ["geosite:cn"],
      "expectIPs": ["geoip:cn"]  // reject answers that aren't CN IPs
    },
    "1.1.1.1"                    // fallback — sent through the proxy outbound
  ]
}
Domains pinned to a specific resolver via its "domains" field match before the generic "fakedns" entry in the list. First match wins — Nox will never see the query for such a domain.
// TProxy — captures all LAN TCP + UDP
{
  "tag": "tproxy-in",
  "port": 12345,
  "protocol": "dokodemo-door",
  "settings": { "network": "tcp,udp", "followRedirect": true },
  "sniffing": {
    "enabled": true,
    "destOverride": ["http","tls","quic","fakedns"],
    // "fakedns" = reverse-lookup fake IP → domain
    "metadataOnly": false      // false required — must read packet bytes
  },
  "streamSettings": { "sockopt": { "tproxy": "tproxy" } }
},
// DNS intercept — captures port 53 UDP
{
  "tag": "dns-in",
  "port": 53,
  "protocol": "dokodemo-door",
  "settings": { "address": "1.1.1.1", "port": 53, "network": "udp" }
}
// Outbound: direct (CN traffic)
{
  "tag": "direct",
  "protocol": "freedom",
  "settings": { "domainStrategy": "UseIPv4" },    // re-resolves to real IP for direct routes
  "streamSettings": { "sockopt": { "mark": 2 } }  // mark:2 skips iptables re-capture
},
// Outbound: proxy (everything else)
{
  "tag": "proxy",
  "protocol": "vless",
  // ... your VPS settings (§07 / §08)
  "streamSettings": { "sockopt": { "mark": 2 } }
},
// Outbound: DNS responder
{ "tag": "dns-out", "protocol": "dns" },
// Routing
{
  "domainStrategy": "AsIs",   // domain already recovered by Nox — no re-resolution needed
  "rules": [
    { "inboundTag": ["dns-in"], "outboundTag": "dns-out" },
    { "ip": ["geoip:private"],  "outboundTag": "direct" },
    { "domain": ["geosite:cn"], "outboundTag": "direct" },
    { "ip": ["geoip:cn"],       "outboundTag": "direct" },
    { "outboundTag": "proxy" }
  ]
}
SO_MARK = 2 via "sockopt": {"mark": 2}. The iptables OUTPUT chain skips mark-2 packets entirely — without this, Stargate's own outbound traffic would be re-captured by PREROUTING and loop forever.
# ── LAN device traffic ──────────────────────
iptables -t mangle -N XRAY
# Skip reserved / LAN ranges
iptables -t mangle -A XRAY -d 10.0.0.0/8     -j RETURN
iptables -t mangle -A XRAY -d 127.0.0.0/8    -j RETURN
iptables -t mangle -A XRAY -d 172.16.0.0/12  -j RETURN
iptables -t mangle -A XRAY -d 192.168.0.0/16 -p tcp -j RETURN
iptables -t mangle -A XRAY -d 192.168.0.0/16 -p udp ! --dport 53 -j RETURN
# Redirect to TProxy
iptables -t mangle -A XRAY -p tcp -j TPROXY --on-port 12345 --tproxy-mark 1
iptables -t mangle -A XRAY -p udp -j TPROXY --on-port 12345 --tproxy-mark 1
iptables -t mangle -A PREROUTING -j XRAY

# ── Gateway's own traffic (OUTPUT chain) ────
iptables -t mangle -N XRAY_SELF
# Skip Stargate's own packets (mark:2)
iptables -t mangle -A XRAY_SELF -m mark --mark 2 -j RETURN
iptables -t mangle -A XRAY_SELF -p tcp -j MARK --set-mark 1
iptables -t mangle -A XRAY_SELF -p udp -j MARK --set-mark 1
iptables -t mangle -A OUTPUT -j XRAY_SELF

# ── Policy route ────────────────────────────
ip rule add fwmark 1 table 100
ip route add local default dev lo table 100
#!/usr/sbin/nft -f
flush ruleset

define RESERVED = {
  10.0.0.0/8, 127.0.0.0/8, 172.16.0.0/12,
  192.0.0.0/24, 224.0.0.0/4, 240.0.0.0/4,
  255.255.255.255/32
}

table ip xray {
  chain prerouting {
    type filter hook prerouting priority mangle; policy accept;
    ip daddr $RESERVED return
    ip daddr 192.168.0.0/16 tcp dport != 53 return
    ip daddr 192.168.0.0/16 udp dport != 53 return
    # Redirect to TProxy, mark 1
    ip protocol tcp tproxy to 127.0.0.1:12345 meta mark set 1
    ip protocol udp tproxy to 127.0.0.1:12345 meta mark set 1
  }
  chain output {
    type route hook output priority mangle; policy accept;
    ip daddr $RESERVED return
    meta mark 2 return          # skip Stargate
    ip protocol tcp meta mark set 1
    ip protocol udp meta mark set 1
  }
}
If Stargate stops while Nox IPs are still cached by the OS or router, devices attempt connections to non-routable 198.18.x.x addresses and get no response.
# Flush OS DNS caches on every Stargate restart
systemd-resolve --flush-caches

# A smaller poolSize also recycles stale mappings sooner:
{ "ipPool": "198.18.0.0/15", "poolSize": 65535 }
metadataOnly: true disables all packet-content sniffers. Nox reverse-lookup, TLS SNI, and HTTP Host extraction all stop working — traffic falls through to IP-only routing.
"sniffing": {
  "enabled": true,
  "destOverride": ["http","tls","quic","fakedns"],
  "metadataOnly": false   // ← must be false
}
Nox already recovered the domain. Any setting other than "AsIs" triggers Stargate to re-resolve the domain — which hits Nox again, allocates a second fake IP, and creates an infinite loop.
"routing": {
  "domainStrategy": "AsIs",
  // Do NOT use "IPIfNonMatch" or "IPOnDemand"
  "rules": [ /* ... */ ]
}
Domains routed to freedom (e.g. geosite:cn) cannot connect using a fake IP. The freedom outbound must re-resolve to a real IP itself at connection time.
{
"tag": "direct",
"protocol": "freedom",
"settings": {
"domainStrategy": "UseIPv4"
// re-resolves to real IP at connect time
}
}
Without an IPv6 pool, AAAA queries return real IPv6 addresses. IPv6-preferring apps connect directly, bypassing all routing rules and silently leaking domain information to real DNS servers.
"fakedns": [
  { "ipPool": "198.18.0.0/15", "poolSize": 65535 },
  { "ipPool": "fc00::/18",     "poolSize": 65535 }
  // ↑ IPv6 pool stops AAAA leaks
]
Enabling global net.ipv4.ip_forward=1 for gateway routing breaks kernel-level Splice, where the kernel bypasses userspace entirely for TCP forwarding. Splice is XTLS Vision's biggest performance advantage.
# Make Stargate the actual default gateway
# instead of enabling global ip_forward.
# Splice only works when Stargate owns
# the network interface directly —
# not when acting as a forwarding router.
ip route add default via <stargate-ip>
{
"tag": "proxy",
"protocol": "vless",
"settings": {
"vnext": [{
"address": "your-domain.com",
// CF orange-cloud ☁ ON
"port": 443,
"users": [{
"id": "UUID-RELAY",
"encryption": "none"
}]
}]
},
"streamSettings": {
"network": "ws",
"security": "tls",
"tlsSettings": {
"serverName": "your-domain.com",
"fingerprint": "chrome"
},
"wsSettings": {
"path": "/vless",
"headers": {
"Host": "your-domain.com"
}
}
}
}
{
"tag": "relay-in",
"port": 8080,
// CF proxies :443 → this port on origin
"protocol": "vless",
"settings": {
"clients": [{
"id": "UUID-RELAY"
}],
"decryption": "none"
},
"streamSettings": {
"network": "ws",
"security": "tls",
// Full Strict: CF re-encrypts to us
"tlsSettings": {
"certificates": [{
"certificateFile":
"/etc/ssl/cert.pem",
"keyFile":
"/etc/ssl/key.pem"
}]
},
"wsSettings": {
"path": "/vless"
}
}
}
// ── VPS₁ Routing ────────────────────────────
// (also see §17.3 for the outbound it routes to)
{
"rules": [{
"inboundTag": ["relay-in"],
"outboundTag": "to-vps2"
}]
}
{
"tag": "to-vps2",
"protocol": "vless",
"settings": {
"vnext": [{
"address": "VPS2-IP-OR-DOMAIN",
"port": 443,
"users": [{
"id": "UUID-VPS2",
"flow": "xtls-rprx-vision",
"encryption": "none"
}]
}]
},
"streamSettings": {
"network": "tcp",
"security": "reality",
"realitySettings": {
"serverName": "microsoft.com",
"fingerprint": "chrome",
"publicKey": "VPS2-PUBLIC-KEY",
"shortId": "SHORTID"
}
}
}
// ── Inbound ─────────────────────────────────
{
  "tag": "vps2-in",
  "port": 443,
  "protocol": "vless",
  "settings": {
    "clients": [{
      "id": "UUID-VPS2",
      "flow": "xtls-rprx-vision"
    }],
    "decryption": "none"
  },
  "streamSettings": {
    "network": "tcp",
    "security": "reality",
    "realitySettings": {
      "show": false,
      "dest": "microsoft.com:443",
      "serverNames": [
        "microsoft.com",
        "www.microsoft.com"
      ],
      "privateKey": "VPS2-PRIVATE-KEY",
      "shortIds": ["SHORTID"]
    }
  },
  "sniffing": {
    "enabled": true,
    "destOverride": ["http","tls","quic"]
  }
}

// ── Outbound ────────────────────────────────
{ "protocol": "freedom" }

// Generate keys on VPS₂:
//   stargate x25519 → privateKey + publicKey
//   stargate uuid   → UUID-VPS2
//   shortId: any hex string, even length, max 16 chars
Per-hop certificate summary (Client → Cloudflare → VPS₁ → VPS₂ → Internet):

Hop 1, Client → Cloudflare: cert Subject your-domain.com, held by the Cloudflare edge PoP; ALPN h2 or http/1.1; client fingerprint chrome (uTLS). CF sees the WS path /vless and the Host: header, but cannot see the VLESS payload.

Hop 2, Cloudflare → VPS₁: cert Subject your-domain.com, private key held by VPS₁, verified by Cloudflare; new session keys — nothing is forwarded from the client's handshake. CF sees the VLESS WS frames and the path /vless, but cannot see the inner VLESS payload or the client's real IP (CF strips it).

Hop 3, VPS₁ → VPS₂ (DHD): presented cert Subject microsoft.com, signed via VPS₂'s X25519 key and held by VPS₂ — no CA, self-verifying via publicKey; looks identical to the real MS cert to observers. VPS₁ verifies via the publicKey field, with the auth token in a ClientHello extension and the shortId in the server random; fingerprint chrome (uTLS). A prober without auth gets the real Microsoft cert, cannot distinguish the server from legitimate MS traffic, and cannot decrypt — there is no CA to subvert.

Hop 4, VPS₂ → Internet: the destination's normal public certificate, issued by its own CA, verified against VPS₂'s system CA store. A normal browser-style handshake with no proxy headers leaked; the destination has no knowledge of the chain or the client's identity.
| Segment | Certificate authority | Private key held by | Verified by | Can be subverted by censor? |
|---|---|---|---|---|
| Client → CF | DigiCert / Cloudflare CA | Cloudflare (edge PoP) | Client TLS stack (browser/uTLS) | No — CF cert is legitimate and globally trusted |
| CF → VPS₁ | Let's Encrypt | VPS₁ operator | Cloudflare (Full Strict validation) | No — LE cert is legitimate; would require CA compromise |
| VPS₁ → VPS₂ | No CA — DHD X25519 keypair | VPS₂ operator (X25519 private key) | VPS₁ via out-of-band publicKey field | No — no CA to compromise; censor sees the real Microsoft cert when probing |
| VPS₂ → Internet | Destination's own CA | Destination server | VPS₂ system CA store | Irrelevant to censorship — normal public web TLS |
Chaining is done with proxySettings — no second VPS needed.

"outbounds": [
  // First hop: SOCKS5 relay
  {
    "tag": "socks-hop",
    "protocol": "socks",
    "settings": {
      "servers": [{
        "address": "relay.example.com",
        "port": 1080
      }]
    }
  },
  // Second hop: VLESS+DHD tunneled through socks-hop
  {
    "tag": "vless-hop",
    "protocol": "vless",
    "settings": {
      // ... normal DHD server settings (§17.3)
    },
    "proxySettings": {
      "tag": "socks-hop"   // VLESS connects THROUGH the SOCKS outbound
    }
  }
]
"routing": {
  "rules": [
    { "ip": ["geoip:private"], "outboundTag": "direct" },
    {
      // All other traffic → VLESS via SOCKS
      "outboundTag": "vless-hop"
    }
  ]
}

// ── Traffic flow ────────────────────────────
// App → Stargate
//   → socks-hop (TCP CONNECT to relay)
//   → VLESS+DHD inside SOCKS tunnel
//   → VPS DHD server
//   → Internet
XHTTP runs in stream-one mode: no WebSocket upgrade header — traffic looks like a single long-lived HTTP/2 POST to an API endpoint. Cloudflare proxies HTTP/2 natively.

{
"tag": "proxy",
"protocol": "vless",
"settings": {
"vnext": [{
"address": "your-domain.com",
"port": 443,
"users": [{
"id": "UUID-RELAY",
"encryption": "none"
}]
}]
},
"streamSettings": {
"network": "xhttp",
"security": "tls",
"tlsSettings": {
"serverName": "your-domain.com",
"alpn": ["h2"],
"fingerprint": "chrome"
},
"xhttpSettings": {
"host": "your-domain.com",
"path": "/api/data",
// looks like any HTTP/2 API endpoint
"mode": "stream-one"
// one persistent POST carries all data
}
}
}
{
"tag": "relay-xhttp-in",
"port": 8443,
"protocol": "vless",
"settings": {
"clients": [{
"id": "UUID-RELAY"
}],
"decryption": "none"
},
"streamSettings": {
"network": "xhttp",
"security": "tls",
// Full Strict — CF re-encrypts to us
"tlsSettings": {
"certificates": [{
"certificateFile":
"/etc/ssl/cert.pem",
"keyFile":
"/etc/ssl/key.pem"
}]
},
"xhttpSettings": {
"path": "/api/data",
"mode": "stream-one"
}
}
}
// ── VPS₁ Routing ────────────────────────────
{
"rules": [{
"inboundTag": ["relay-xhttp-in"],
"outboundTag": "to-vps2"
// same "to-vps2" outbound as §17.3
}]
}
| Scenario | Recommended Stack | Why |
|---|---|---|
| Best overall, have a VPS | VLESS + TCP + XTLS Vision + DHD | No domain needed, no cert needed, no fingerprint, fastest performance, hardest to detect |
| Need Pegasus Gate / VPS IP blocked | VLESS + WebSocket + TLS + Cloudflare | Pegasus Gate IP protection; slower due to Pegasus Gate hop; CF sees decrypted WS |
| Multi-user, simple setup | Shadowsocks 2022 | Built-in AEAD, multi-user via per-client PSK, no TLS overhead, fast |
| Serve decoy website + proxy | Goa'uld + TLS + Nginx fallback | Active probe resistant; wrong password → real website; needs domain + cert |
| Many clients, one port | All-in-One VLESS-XTLS + Nginx fallbacks | 21 protocol combos on :443; complex setup; max flexibility |
| Expose service behind NAT | VLESS Reverse Proxy (portal/bridge) | No port forwarding; bridge initiates outbound; portal relays users in |
| No VPS, heavy censorship | Serverless CF Workers + VLESS-WS | No VPS needed; uses CF Workers; traffic via CF IPs; no protocol fingerprint |
| Access specific blocked sites | Gate Fronting | No VPS; local cert install required; domain fronting via Pegasus Gate; site-specific |
| High latency / lossy link | VLESS / VMess + mKCP (UDP) | KCP tolerates loss better than TCP; uses more bandwidth; obfuscation via header type |
| Legacy client compatibility | VMess + WS + TLS | Widest client support; worse performance; TLS-in-TLS detectable; use only if needed |
| No DNS leaks, full traffic split | Nox + TProxy (gateway) | Domain-based routing with zero DNS leaks; requires Linux router/gateway; iptables TProxy rules required |
| VPS IP risky, need Pegasus Gate shield | VLESS+WS+TLS → Cloudflare → VLESS+DHD (2-hop chain) | Client IP hidden from VPS₂; VPS₁ IP hidden behind CDN; double-encrypted; ~20-40ms extra latency per hop |