Releases: therealaleph/MasterHttpRelayVPN-RUST
v1.9.3
• youtube_via_relay toggle in Android Advanced settings (PR #535 by @yyoyoian-pixel, closes #520): the desktop UI has had a youtube_via_relay checkbox for a while, but the Android UI was missing it — Android users had to hand-edit config.json (which requires root on Android). Now there's a Switch toggle in the Advanced settings section matching the desktop UI checkbox. Adds a youtubeViaRelay field to MhrvConfig with JSON serialization (youtube_via_relay as the wire-format key), deserialization, and config-sharing encode. EN + FA string resources for the label and helper text. Pure Android/Kotlin change; no Rust impact. A wire-format sketch follows after these notes.
• CI fix: gh release download in the Telegram publish workflow now uses --clobber so retries survive partial downloads (no user impact, but the v1.9.2 Telegram channel publish failed because of this and required a manual re-trigger).
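For the youtube_via_relay entry above — the wire-format key stays snake_case even though the Kotlin field is camelCase. A minimal shared-config sketch (value illustrative):

```json
{
  "youtube_via_relay": true
}
```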
What's Changed
- feat(android): add youtube_via_relay toggle to Advanced settings by @yyoyoian-pixel in #535
Full Changelog: v1.9.2...v1.9.3
v1.9.2
• Apps Script + Cloudflare Worker alternative backend (PR #533 by @dazzling-no-more): deploy Code.cfw.gs (new GAS variant in assets/apps_script/) plus worker.js (Cloudflare Worker in assets/cloudflare/), and Apps Script becomes a thin auth+forward layer that pushes the outbound fetch to CF's edge. mhrv-rs itself is unchanged — same JSON envelope on the wire, same mode: "apps_script", script_id, auth_key. The only difference is what the deployed Apps Script does after it authenticates. Closes the audit task on the v1.9.x roadmap (#380, #393). Why: recurring Persian-community feedback reports that the GAS+CFW combination feels noticeably faster than plain GAS for browsing and chat-style workloads. Hardened over upstream denuitt1/mhr-cfw: per-request AUTH_KEY check (upstream omitted it → open relay if the URL leaks), fail-closed if AUTH_KEY still equals the placeholder, x-relay-hop loop guard + self-host fetch block, body dropped on GET/HEAD to match Code.gs/UrlFetchApp permissiveness, SKIP_HEADERS parity, batch handler with Promise.all + a soft cap of MAX_BATCH_SIZE = 40. Honest limitations called out in the docs: not compatible with mode: "full" (only the HTTP-relay path is ported; the raw-TCP / UDP tunnel ops that messengers need under Android full mode are not). YouTube long-form gets worse (30 s CF Worker wall vs Apps Script's ~6 min — the SABR cliff arrives sooner). Cloudflare anti-bot does not improve — the exit IP becomes a Workers IP, which CF's anti-bot fingerprints as worker-internal (often stricter than a Google IP). No day-one UrlFetchApp daily-count relief: the batch-aware GAS+Worker path is wired and ready (ceil(N / 40) calls per N-URL batch) but unreachable from any shipping client today (mhrv-rs's HTTP-relay path is single-shape only). Full docs in English + Persian at assets/cloudflare/README.md + README.fa.md covering setup, the three-matching-AUTH_KEYs security model, the trade-off table, and the full-mode incompatibility section. README updated with an alternative-backend callout in both languages. A client-config sketch follows below.
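Because mhrv-rs is unchanged, a client pointed at a CFW-backed deployment uses the same config shape as a plain-GAS one — a sketch with placeholder script_id / auth_key values:

```json
{
  "mode": "apps_script",
  "script_id": "AKfycb...your-deployment-id",
  "auth_key": "your-shared-secret"
}
```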
What's Changed
- feat(cfw): add Apps Script + Cloudflare Worker alternative backend by @dazzling-no-more in #533
Full Changelog: v1.9.1...v1.9.2
v1.9.1
• Tunable auto-blacklist threshold via three new config fields: auto_blacklist_strikes (default 3), auto_blacklist_window_secs (default 30), auto_blacklist_cooldown_secs (default 120) (#391, #444): previously hard-coded at "3 timeouts in 30s = 120s cooldown". Single-deployment users reported this threshold was too aggressive on flaky networks — one cold-start stall plus two transient network blips would lock them out of their only relay path. Now tunable: single-deployment users can set auto_blacklist_strikes: 5 or auto_blacklist_cooldown_secs: 30 to be more forgiving; multi-deployment users with 10+ healthy alternatives can set auto_blacklist_strikes: 2 to fail fast. Defaults preserve existing behavior — nobody notices a change unless they explicitly override in config.json. Not exposed in the UI form yet — a power-user file edit. The duration fields are clamped to [1, 86400] to prevent absurd values.
• request_timeout_secs config field (default 30) to tune the batch HTTP timeout (#430, masterking32 PR #25): previously hard-coded as the BATCH_TIMEOUT = 30s constant. Slow Iran ISP paths may want 45 or 60 to give Apps Script time to respond past throttle windows; networks preferring fail-fast may want 15 to retry sooner when a deployment hangs. Clamped to [5s, 300s] (anything higher just approaches Apps Script's 6-min execution cap with no benefit). Not in the UI form. A combined sketch of these tunables follows after these notes.
• Clearer tunnel-node startup warning for the recurring MHRV_AUTH_KEY typo (#391, #444): several older copy-paste guides used MHRV_AUTH_KEY instead of TUNNEL_AUTH_KEY in docker run. tunnel-node never read that env var and silently fell back to default changeme, producing baffling AUTH_KEY-mismatch decoys on the client. Now if MHRV_AUTH_KEY is set but TUNNEL_AUTH_KEY is not, tunnel-node emits a specific warning: "MHRV_AUTH_KEY is set but TUNNEL_AUTH_KEY is not — tunnel-node only reads TUNNEL_AUTH_KEY (uppercase, with underscores). Rename your env var: docker run ... -e TUNNEL_AUTH_KEY=<your-secret>". Saves users hours of debugging the wrong layer.
• run.bat falls back to CLI after UI renderer failure (#417, #426, #487): when both UI renderers (glow + wgpu) fail on older Windows machines, RDP sessions, or VMs without GPU acceleration, the script previously printed an "open an issue on GitHub" message and exited. Now it prints the diagnostic info AND launches mhrv-rs.exe (CLI) so the user can keep using the proxy without the UI shell. CLI has the same proxy functionality on 127.0.0.1:8085 (HTTP) and 127.0.0.1:8086 (SOCKS5); only the visual UI is missing.
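Combined config.json sketch for the auto-blacklist and request_timeout_secs entries above — the values shown are the forgiving single-deployment profile, not the defaults:

```json
{
  "auto_blacklist_strikes": 5,
  "auto_blacklist_window_secs": 30,
  "auto_blacklist_cooldown_secs": 30,
  "request_timeout_secs": 60
}
```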
Full Changelog: v1.9.0...v1.9.1
v1.9.0
• Minor breaking: mode = "google_only" renamed to mode = "direct" (PR #488 by @dazzling-no-more): the old name no longer described the mode once fronting_groups reached beyond Google. Rust + Android + the UI dropdown are all updated, but google_only is kept as a deprecated alias on parse — existing configs and saved settings don't break. On the next Save, the on-disk file migrates automatically to direct. Docs (README EN+FA, SF_README EN+FA, tunnel-node FA) carry a "was named google_only before v1.9 — the old name still works" note so users following older guides / Telegram posts find their way.
• fronting_groups: multi-edge domain fronting for non-Google CDNs (PR #488 by @dazzling-no-more, credit to @patterniha/MITM-DomainFronting for the original technique): new config field fronting_groups: [{name, ip, sni, domains}]. Each group is (edge IP, front SNI, member domains): when a CONNECT to one of the member domains arrives, the proxy MITMs at the local CA, then re-encrypts upstream against ip with sni as the TLS SNI — the same trick we already do for google_ip + front_domain, now configurable for any multi-tenant CDN edge (Vercel, Fastly, etc.). Wins over the built-in Google SNI-rewrite suffix list; loses to passthrough_hosts and the DoH bypass. Skipped in mode = full (which preserves end-to-end TLS and can't MITM). Working example at config.fronting-groups.example.json. Full doc at docs/fronting-groups.md including the recipe for picking (ip, sni), routing precedence, and an explicit ⚠️ warning about the cross-tenant Host-header leak failure mode (never list domains that aren't actually served by that edge). Review feedback folded in: SNI validation at the config-load gate, Vec<Arc<>> instead of clone-on-match, a byte-level dot-anchored matcher (no per-match format!()), startup warnings for inert combos. A shape sketch follows after these notes.
• Edge-cache DNS in CodeFull.gs to skip the tunnel-node round-trip (PR #494 by @dazzling-no-more): intercepts udp_open / port=53 ops in _doTunnelBatch and serves them from CacheService (cache hit) or DoH (cache miss). Cache hits drop typical first-hop DNS latency from ~600-1200ms to ~200-400ms. Pure server-side change in CodeFull.gs (Full mode only — apps_script mode has no UDP path); zero Rust/client changes. DoH fallback chain: Cloudflare → Google → Quad9 over RFC 8484 GET. Per-qtype cache key keeps A and AAAA from colliding. Min RR TTL clamped to [30s, 6h]; NXDOMAIN/SERVFAIL get a 45s negative cache; NODATA-with-SOA honors the SOA TTL per RFC 2308 §5. Default-on, opt-out via ENABLE_EDGE_DNS_CACHE. Every failure mode (parse error, resolver outage, key-too-long, cache.put rejection) falls through to the existing tunnel-node forward path — zero regression on any failure. CacheService chosen over Sheets (#443's pattern) because Sheets reads/writes are 100-500ms per op (often slower than the DoH lookup we'd be caching), have a daily-quota hazard, and persist a Drive-listed log of every domain users resolve — a real privacy regression for users in censorship contexts. CacheService is ~10ms, volatile, free, no on-disk artifact. 11 pure-JS tests covering parsers, txid non-mutation, TTL clamp, NXDOMAIN-with-SOA TTL extraction, malformed/truncated input rejection, splice correctness for mixed batches. A TTL-policy sketch follows after these notes.
• Default tunnel_doh: true (flipped from false in v1.8.x) (#468): the previous default (DoH bypass active) silently broke for Iranian users because Iran ISPs filter direct connections to dns.google, chrome.cloudflare-dns.com, and the other pinned DoH hosts — exactly the hosts the bypass was routing direct. DNS resolution failed and browsing broke. The safer default keeps DoH inside the tunnel; users on networks where direct DoH works can opt back into the bypass with tunnel_doh: false. Backwards-compatible for existing configs — anyone who explicitly set tunnel_doh keeps their behavior. Iranian users hitting this regression on v1.8.3 through v1.8.5 should upgrade.
• Hotspot sharing for iOS / laptop (PR #483 by @yyoyoian-pixel): default listen_host changed from 127.0.0.1 to 0.0.0.0. Enables the common workflow where an Android phone runs the tunnel and an iPhone/iPad/laptop on the same hotspot uses it as a proxy (HTTP on 192.168.43.1:8080 or SOCKS5 on :1081). For full device-wide coverage on iOS, Shadowrocket or Potatso can create a local VPN that routes all traffic through the SOCKS5 on the Android phone. Old configs with an explicit "listen_host": "127.0.0.1" are honored (not overwritten).
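For the fronting_groups entry above, a sketch of the field's shape — the name/ip/sni/domains values are placeholders, not vetted endpoints; see config.fronting-groups.example.json and docs/fronting-groups.md for the real recipe:

```json
{
  "fronting_groups": [
    {
      "name": "my-fastly-group",
      "ip": "151.101.1.1",
      "sni": "front-tenant.example.com",
      "domains": ["blocked.example.com", "cdn.blocked.example.com"]
    }
  ]
}
```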
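For the edge-cache DNS entry above: the shipped logic is Apps Script JS in CodeFull.gs, but the TTL policy reduces to a small pure function — sketched here in Rust for illustration (function and parameter names are ours, not the PR's):

```rust
use std::time::Duration;

const TTL_MIN: Duration = Duration::from_secs(30);       // clamp floor
const TTL_MAX: Duration = Duration::from_secs(6 * 3600); // clamp ceiling
const NEGATIVE_TTL: Duration = Duration::from_secs(45);  // NXDOMAIN / SERVFAIL

/// Illustrative TTL policy: positive answers clamp the minimum RR TTL,
/// NODATA-with-SOA honors the SOA TTL (RFC 2308 §5), errors get the
/// short negative cache.
fn cache_ttl(answered: bool, min_rr_ttl: Option<Duration>, soa_ttl: Option<Duration>) -> Duration {
    match (answered, min_rr_ttl, soa_ttl) {
        (true, Some(ttl), _) => ttl.clamp(TTL_MIN, TTL_MAX),
        (true, None, Some(soa)) => soa.clamp(TTL_MIN, TTL_MAX), // NODATA + SOA
        _ => NEGATIVE_TTL,
    }
}
```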
What's Changed
- feat: listen on all interfaces, hotspot sharing for iOS/laptop by @yyoyoian-pixel in #483
- feat(codefull.gs): edge-cache DNS to skip tunnel-node round-trip by @dazzling-no-more in #494
- feat: multi-edge fronting_groups + rename google_only to direct by @dazzling-no-more in #488
Full Changelog: v1.8.5...v1.9.0
v1.8.5
• Fix tunnel-node: cap each TCP drain at 16 MiB so batch responses stay under Apps Script's ~50 MiB body ceiling (#460 by @bankbunk): on a high-bandwidth VPS (1 Gbps+), the reader task can stuff the per-session read buffer with tens of MiB between client polls. The old drain_now took the entire buffer in one shot, base64-encoded it (1.33× overhead), wrapped it in JSON, and the resulting body exceeded Apps Script's hard ~50 MiB Web App response limit. Apps Script truncated the body mid-base64; the client failed serde_json parse with EOF while parsing a string at line 1 column 52428685 (≈ 50 MiB) and the stream tore. Most visibly, raw MP4 streams crashed minutes into playback. The fix splits oversized buffers: at most TCP_DRAIN_MAX_BYTES (16 MiB) is returned per drain, and the remainder stays in the buffer for the next poll. EOF is held back until the buffer is fully drained so partial drains don't prematurely close the session. Three regression tests cover the cap, the under-cap pass-through, and the EOF-holdback case (33 tunnel-node tests passing). @bankbunk's wondershaper workaround (rate-limiting the VPS interface to 40 Mbps) is no longer necessary — high-bandwidth VPS users can let throughput run at line rate again. A sketch of the split follows below.
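A minimal sketch of the split, using the names from the note above (drain_now, TCP_DRAIN_MAX_BYTES) and eliding the real session plumbing:

```rust
const TCP_DRAIN_MAX_BYTES: usize = 16 * 1024 * 1024; // keep batches well under ~50 MiB after base64 + JSON

struct Session {
    buf: Vec<u8>,       // filled by the reader task between client polls
    upstream_eof: bool, // reader task saw EOF on the TCP side
}

impl Session {
    /// Return at most one cap's worth of bytes per poll; report EOF only
    /// once the buffer is fully drained so a partial drain can't tear
    /// the session early.
    fn drain_now(&mut self) -> (Vec<u8>, bool) {
        let take = self.buf.len().min(TCP_DRAIN_MAX_BYTES);
        let chunk: Vec<u8> = self.buf.drain(..take).collect();
        let eof = self.upstream_eof && self.buf.is_empty();
        (chunk, eof)
    }
}
```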
Full Changelog: v1.8.4...v1.8.5
v1.8.4
• Adaptive batch coalescing to reduce Apps Script round-trip count (PR #448 by @yyoyoian-pixel): the previous fixed 8ms coalesce window barely caught concurrent ops — most batches were a single op. The new adaptive scheme starts a 40ms timer on first arrival, resets on each new op, fires the batch when the window stays empty, with a hard 1000ms cap. Field testing on a network in Iran showed ops/batch went from ~1.0 to 2-3, P75 RTT 6.2s → 3.0s, fast (<3s) batches 61% → 74-85%, total Apps Script calls roughly halved. Both values configurable via coalesce_step_ms / coalesce_max_ms in config.json and via sliders in the Android UI's Advanced section. Desktop UI sliders are queued for the v1.8.x desktop UI batch alongside the system-proxy toggle (#432). A sketch of the window logic follows after these notes.
• tunnel-node long-poll raised from 5s to 15s for persistent-connection stability (PR #446 by @yyoyoian-pixel): at 5s long-poll, persistent connections like Telegram XMPP (:5222) and Google Push (mtalk.google.com:5228) interpreted the frequent empty-poll returns as connection instability and rotated sessions. Each reconnect cost a full TLS handshake (~4s through Apps Script), causing visible interruptions during Telegram video/voice calls and media playback. The 15s long-poll holds the response open until server data actually arrives, keeping persistent sessions alive without unnecessary re-handshakes.
• Adaptive straggler settle in tunnel-node (PR #446): the previous fixed 30ms straggler settle was too short to catch neighboring sessions with data. Replaced with adaptive 40ms-step / 500ms-max settle that breaks early when all sessions in a batch have data. On high-latency relays (~1.5s Apps Script overhead per call), packing more session responses into one batch saves quota; the early-break prevents wasted time when all data is already ready.
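For the adaptive coalescing entry above, a sketch of the window logic with tokio (not the shipped code); step and cap correspond to coalesce_step_ms (40) and coalesce_max_ms (1000):

```rust
use std::time::Duration;
use tokio::{sync::mpsc, time};

/// Illustrative coalescer: restart a step timer on every arrival, fire
/// when the window goes quiet, and never hold a batch past the hard cap.
async fn coalesce<T>(ops: &mut mpsc::Receiver<T>, step: Duration, cap: Duration) -> Vec<T> {
    let Some(first) = ops.recv().await else { return Vec::new() };
    let mut batch = vec![first];
    let deadline = time::Instant::now() + cap;
    loop {
        tokio::select! {
            _ = time::sleep(step) => break,           // idle for one step: fire
            _ = time::sleep_until(deadline) => break, // hard cap reached: fire anyway
            Some(op) = ops.recv() => batch.push(op),  // arrival resets the step timer
        }
    }
    batch
}
```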
What's Changed
- fix(tunnel-node): raise long-poll to 15s, adaptive straggler settle by @yyoyoian-pixel in #446
- feat(client): adaptive batch coalescing with configurable UI by @yyoyoian-pixel in #448
Full Changelog: v1.8.3...v1.8.4
v1.8.3
• Optional spreadsheet-backed response cache in Code.gs to reduce UrlFetchApp quota consumption (#400, PR #443 by @euvel): public GET requests with Cache-Control headers can now be served from a Google Sheet instead of round-tripping through UrlFetchApp. Cache hit costs ~5-20ms (Sheet read) vs ~250-500ms (UrlFetchApp). Features: TTL-aware caching (respects max-age, no-cache, no-store, private), 35 KB body-size gate (under the Sheets cell limit), header rewriting (Date/Age/Cache-Control/X-Cache/X-Cached-At), circular buffer for O(1) writes, Vary-aware compound keys (Accept-Encoding + Accept-Language). Opt-in via a single CACHE_SPREADSHEET_ID constant — default off, zero overhead for users who don't want it. Setup: create a new Google Sheet, paste its ID into CACHE_SPREADSHEET_ID, redeploy as new version. Run getCacheStats() from the Apps Script editor to see hit/miss/eviction counts.
• Bypass the Apps Script tunnel for DoH endpoints on TCP/443 (#377, PR #439 by @dazzling-no-more): previously every browser DNS-over-HTTPS lookup in Full mode rode through the Apps Script tunnel — chrome.cloudflare-dns.com:443, dns.google:443, etc. each paid the ~2-second UrlFetchApp round-trip per name. But DoH is already encrypted at the transport layer; tunneling it adds no real privacy (it only hides the fact of DoH from the local network, which is marginal). The new bypass (on by default) routes known DoH hosts around the tunnel via plain TCP. Built-in list: Cloudflare (incl. the chrome./mozilla./1dot1dot1dot1. browser-pinned variants), Google, Quad9, AdGuard, NextDNS, OpenDNS, CleanBrowsing, dns.sb, dns0.eu, AliDNS, doh.pub, Mullvad. Users can opt out with tunnel_doh: true or extend the list with bypass_doh_hosts: ["custom1.com", "custom2.com"]. Gated to TCP/443 only — private DoH endpoints on :8443 should use passthrough_hosts instead. ProxyServer warns at startup if tunnel_doh: true is paired with a non-empty bypass_doh_hosts (the otherwise-silent inert combo). 6 unit tests for matches_doh_host covering exact match, case insensitivity, trailing dots, suffix tenant subdomains, user extras extending the default list, and the asymmetric-matching footgun guard. A matcher sketch follows after these notes.
• H1 container keepalive (~240s) to prevent Apps Script V8 cold-start stalls (PR #438 by @dazzling-no-more): Apps Script V8 containers go cold after ~5 minutes idle and cost 1-3s to wake. Most visible as YouTube player stalls after a quiet pause. Now sends a HEAD http://example.com/ ping every 240s through the relay to keep the container warm. Bypasses the response cache and inflight coalescer (otherwise the second iteration would just hit the cache and never reach Apps Script). Skipped in google_only mode. The JoinHandle is captured so shutdown's select! arm can abort it cleanly — without that, hitting Stop in the UI would leave the keepalive holding an Arc<DomainFronter> on stale config (same class of bug as #99 hit for accept loops). A sketch of the abort-on-shutdown shape follows after these notes.
• 431 Request Header Fields Too Large instead of silent drop (PR #438 by @dazzling-no-more): previously header blocks larger than 1 MB were silently dropped at the socket level, causing browsers to retry on connection-reset and loop indefinitely on the same oversized request. Now the cap is tightened to 64 KB (matching upstream Python's MAX_HEADER_BYTES) and oversized requests get an explicit HTTP/1.1 431 Request Header Fields Too Large reply followed by close. Both the plaintext HTTP frontend and the MITM HTTPS relay path now do this. Browsers see the error and don't loop.
• Clearer port-collision error message (PR #438): the same-port validation already existed; only the message was vague. Now reads "both set to 8080 on 127.0.0.1. Change one of them in config.json.", matching upstream Python's clarity.
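For the DoH bypass entry above, in the spirit of matches_doh_host (the shipped matcher's exact shape may differ), a dot-anchored suffix match:

```rust
/// Illustrative matcher: case-insensitive, tolerant of a trailing dot,
/// and dot-anchored so "notdns.google" never matches "dns.google" while
/// "chrome.dns.google" does.
fn matches_doh_host(host: &str, list: &[&str]) -> bool {
    let host = host.trim_end_matches('.').to_ascii_lowercase();
    list.iter().any(|entry| {
        let entry = entry.trim_end_matches('.').to_ascii_lowercase();
        host == entry || host.ends_with(&format!(".{entry}"))
    })
}
```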
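For the keepalive entry above, a minimal sketch of the abort-on-shutdown shape (tokio assumed; the real task sends its HEAD ping through the relay, past the cache and coalescer):

```rust
use std::time::Duration;

/// Illustrative: capture the JoinHandle so shutdown can abort the
/// keepalive instead of leaking a task that pins stale config.
fn spawn_keepalive() -> tokio::task::JoinHandle<()> {
    tokio::spawn(async {
        let mut tick = tokio::time::interval(Duration::from_secs(240));
        loop {
            tick.tick().await;
            // send `HEAD http://example.com/` through the relay here,
            // bypassing the response cache and inflight coalescer
        }
    })
}

// In the shutdown select! arm: keepalive_handle.abort();
```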
What's Changed
- feat: H1 container keepalive + 431 oversized-headers by @dazzling-no-more in #438
- feat: bypass Apps Script tunnel for DoH endpoints on TCP/443 by @dazzling-no-more in #439
- feat(code.gs): optional spreadsheet-backed response cache with TTL by @euvel in #443
Full Changelog: v1.8.2...v1.8.3
v1.8.2
• Fix log level on the UI binary (Windows + Android) (#401): previously mhrv-rs-ui (and Android, which uses the same JNI path) installed its tracing filter from RUST_LOG only — falling back to info,hyper=warn when unset. The log_level field in config.json was effectively ignored, even though the UI form has a combobox that writes to it. The CLI binary (mhrv-rs) read config.log_level correctly via init_logging(); only the UI binary was broken. New precedence: RUST_LOG (explicit override) > config.log_level (what the user picked in the form) > info,hyper=warn (default). The Save button now also reinstalls the filter live via a tracing_subscriber::reload::Handle, so users don't need to restart for a level change to take effect. RUST_LOG still wins if set at boot — explicit override beats config in both directions. A sketch of the pattern follows after these notes.
• Soften the v1.8.1 decoy detection error message — enumerate four candidate causes instead of asserting AUTH_KEY mismatch (#404): @w0l4i reported the same script_id mixing decoy ERROR with successful batches inside a one-minute window — which rules out AUTH_KEY mismatch as the cause (a real mismatch fails 100% of batches against that deployment, never succeeds intermittently). Investigation showed the body string "The script completed but did not return anything" is not unique to our v1.8.0 bad-auth path — Apps Script itself returns the same body in three other unrelated cases: (2) Apps Script execution timeout or per-100s quota tear, (3) Google-side internal runtime hiccup, (4) ISP-side response truncation mid-flight (the #313 pattern). The v1.8.1 error message was a false positive in scenarios 2-4. The v1.8.2 message now reads: "got the v1.8.0 decoy/placeholder body — could be (1) AUTH_KEY mismatch (run a direct curl probe against the deployment to verify), (2) Apps Script execution timeout or per-100s quota tear (try lowering parallel_concurrency), (3) Apps Script internal hiccup (transient, retry next batch), or (4) ISP-side response truncation (#313 pattern, try a different google_ip). To distinguish (1) from the rest: set DIAGNOSTIC_MODE=true at the top of Code.gs + redeploy as new version — only AUTH_KEY mismatch returns this body in diagnostic mode." Users now have an actionable narrowing procedure instead of a confidently-wrong assertion.
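For the log-level fix above, the precedence-plus-live-reload pattern sketched with tracing_subscriber's reload API (function name is ours):

```rust
use tracing_subscriber::{prelude::*, reload, EnvFilter, Registry};

/// Illustrative: RUST_LOG > config.log_level > default, returning a
/// reload handle so Save can swap the filter without a restart.
fn init_ui_logging(config_level: Option<&str>) -> reload::Handle<EnvFilter, Registry> {
    let initial = std::env::var("RUST_LOG").ok()
        .or_else(|| config_level.map(str::to_owned))
        .unwrap_or_else(|| "info,hyper=warn".into());
    let (filter, handle) = reload::Layer::new(EnvFilter::new(initial));
    tracing_subscriber::registry()
        .with(filter)
        .with(tracing_subscriber::fmt::layer())
        .init();
    handle
}

// On Save: handle.reload(EnvFilter::new(new_level)).ok();
```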
Full Changelog: v1.8.1...v1.8.2
v1.8.1
• Client-side decoy detection — clear hint instead of cryptic error (#404, #310): previously when a deployment had a stale/wrong AUTH_KEY, mhrv-rs returned the v1.8.0 bad-auth decoy HTML, and the client logged WARN batch failed: bad response: no json in batch response: <!DOCTYPE html>... — leaving the user to recognize the decoy body string and infer the cause. Now the client string-matches the decoy and emits an explicit error: "got the v1.8.0 bad-auth decoy — your AUTH_KEY in mhrv-rs config does NOT match the AUTH_KEY in this deployment's Code.gs. Either fix the mismatch + redeploy as a NEW VERSION (Apps Script doesn't auto-pick-up AUTH_KEY edits without an explicit redeploy), or set DIAGNOSTIC_MODE=true at the top of Code.gs + redeploy to see the explicit JSON unauthorized error during setup." Saves users hours of staring at "no json in batch response" trying to figure out what's wrong.
• Add script_id to every batch-failure log line (#404): previously WARN batch failed: ... didn't identify which deployment failed. In multi-deployment setups (5-10 deployments where one or two have a stale AUTH_KEY), users couldn't identify the culprit without the per-deployment curl probe loop. Every failure log line now includes the short script_id prefix: batch failed (script AKfycbz4): ..., applied to all four failure paths (timeout, bad response, decoy, missing-response-in-batch). Together with the decoy detection above, the first reliable diagnostic for the multi-deployment-with-one-bad-AUTH_KEY user pattern.
• New disable_padding: true config flag (#391): default false (padding active, full DPI defense). For users on heavily-throttled ISPs where the v1.8.0 random-padding cost (+~25% bandwidth per batch) compounds with the throttle and pushes borderline-working batches into timeouts, setting "disable_padding": true in config.json recovers headroom in exchange for losing the length-distribution DPI defense. Don't flip it on speculatively — where the Apps Script outbound path is uncongested, the padding is free defense. Enable it only after measuring a throughput improvement on your specific ISP path.
Full Changelog: v1.8.0...v1.8.1
v1.8.0
• Random payload padding for Apps Script requests (#313, #365 Section 1): every outbound request to Apps Script now carries a _pad field of uniform-random length 0–1024 bytes (base64 encoded). Before this, request body sizes within each mode were tightly clustered, giving ISP DPI a clean length-distribution fingerprint to match against. Now packet sizes are spread uniformly across the range so length-clustering DPI heuristics can't match. Bandwidth cost: ~512 bytes added to a typical 2 KB tunnel batch = +25%, negligible against Apps Script's per-call latency floor. Backward-compatible: old Code.gs deployments ignore the unknown field. Applied at all three payload-build sites: single relay, single tunnel op, batch tunnel. A sketch of the padding step follows after these notes.
• Active-probing defense: decoy 200 HTML on bad AUTH_KEY in Code.gs and CodeFull.gs (#365 Section 3): previously a request with a missing/wrong AUTH_KEY got {"e":"unauthorized"} as a JSON body — a clear "this is some kind of API endpoint" signal that active scanners can fingerprint. Now bad-auth requests get a benign HTML placeholder page that looks like a forgotten-but-public Apps Script web app, indistinguishable from the millions of stale Apps Script projects on Google's infrastructure. New DIAGNOSTIC_MODE const (default false) restores the old JSON error response for setup/debugging — flip to true while configuring a misconfigured client, then back to false before sharing the deployment widely.
• Active-probing defense: decoy 404 nginx-style HTML on bad auth in tunnel-node (#365 Section 3): previously a bad-auth request got {"e":"unauthorized"}. Now it gets an HTTP 404 with an nginx-style error page body, looking like a vanilla static web server. Active scanners that POST malformed payloads to /tunnel to discover proxy endpoints categorize this host as "boring" and move on. New MHRV_DIAGNOSTIC=1 env var restores the verbose JSON error during setup; default is the production decoy. A sketch of the diagnostic gate follows after these notes.
• Fix "Usage today (estimated) is always 0 in Full mode" (#230, #362): the daily-usage counters (today_calls / today_bytes) were incremented only on the apps_script-mode relay path inside domain_fronter::relay(). Full-mode traffic goes through tunnel_client::fire_batch which never wired the counter. Now fire_batch calls record_today(response_bytes) after each successful batch — bytes are estimated from the sum of per-session d (TCP payload) and pkts (UDP datagrams) lengths in the BatchTunnelResponse, which is a stable proxy for "how much did this batch move." Full mode users now see real usage numbers instead of stuck-at-zero.
• Fix "quota reset countdown shown in UTC instead of Pacific Time" (#230, #362): Apps Script's UrlFetchApp quota actually resets at midnight Pacific Time (PST/PDT — observes DST), not midnight UTC. We were displaying the countdown to UTC midnight, which is 7–8 hours off depending on DST. Fix: new current_pt_day_key() + seconds_until_pacific_midnight() helpers using a hand-rolled US DST detector (2nd Sunday of March → 1st Sunday of November = PDT, otherwise PST) so we don't pull chrono-tz and a ~3 MB IANA tzdb just for one helper. UI label updated from "UTC day" to "PT day". Two new tests pin down the DST window boundaries (March 2024 / 2026 / 2027, November 2024 / 2026) and Sakamoto's day-of-week formula.
Full Changelog: v1.7.11...v1.8.0