diff --git a/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/README.md b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/README.md
new file mode 100644
index 0000000000..473f84218e
--- /dev/null
+++ b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/README.md
@@ -0,0 +1,228 @@
# Record candidate: 11L XSA + LQER + SparseAttnGate + BOS-fixed SmearGate + NEFTune + Phased-TTT (4 phases, prefix=3000, LoRA-128)

**val_bpb = 1.06035** (3-seed mean, std 0.00044) | **~15.16 MB** | 8xH100 SXM

## 3-Seed Results

| Seed | Steps | ms/step | Pre-quant val_bpb | Post-quant val_bpb | **Post-TTT val_bpb** | Artifact (bytes) |
|------|-------|---------|-------------------|--------------------|----------------------|------------------|
| 42 | 4,976 | 120.5 | 1.06787 | 1.07652 | **1.05980** | 15,897,143 |
| 0 | 4,921 | 121.8 | 1.07037 | 1.07912 | **1.06038** | 15,894,185 |
| 314 | 4,940 | 121.4 | 1.06989 | 1.07870 | **1.06087** | 15,893,797 |
| **Mean** | **4,946** | **121.2** | | | **1.06035** | 15,895,042 |

3-seed std: 0.00044 BPB. Eval time per seed: 472–528 s of the 600 s budget (mean 509 s).

## Lineage

Built on top of [`track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611`](track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611/) (val_bpb 1.06108) — the strongest parent and the source of the entire architecture + quantization + per-group compression stack. Other ancestors traversed: [`track_10min_16mb/2026-04-08_SP8192_ParallelResid_ScoreFirstTTT`](track_10min_16mb/2026-04-08_SP8192_ParallelResid_ScoreFirstTTT/) (val_bpb 1.0822), [`track_10min_16mb/2026-03-19_MLP3x_QAT_Int6_SlidingWindow`](track_10min_16mb/2026-03-19_MLP3x_QAT_Int6_SlidingWindow/), [`track_10min_16mb/2026-03-19_Seq2048_FP16Emb_TunedLR`](track_10min_16mb/2026-03-19_Seq2048_FP16Emb_TunedLR/), and `10min_16mb/2026_04_16_SmearGate_Attention_Output_Gate_Score-First_TTT` (external — no folder under `track_10min_16mb/`).

Relative to the strongest parent (1.06108), this submission adds **NEFTune embedding noise** and a **z-loss regularization term** during training, and bumps the phased-TTT settings (LoRA rank 80→128, prefix docs 2500→3000, num phases 3→4). The architecture is unchanged.
## Architecture

| Component | Setting | Source |
|-----------|---------|--------|
| Layers | 11 (512d, 8 GQA heads, 4 KV heads) | Baseline |
| MLP | 4× (2048) with LeakyReLU(0.5)² | [#493](https://github.com/openai/parameter-golf/pull/493) |
| Fused MLP kernel | LeakyReLU-square Triton | [#1530](https://github.com/openai/parameter-golf/pull/1530) |
| Attention | Standard FA3, GQA 2:1 | Baseline |
| XSA | All 11 layers (`xsa_last_n=11`) | [#478](https://github.com/openai/parameter-golf/pull/478) |
| RoPE | Partial (16/64 dims) + YaRN | [#315](https://github.com/openai/parameter-golf/pull/315) |
| LN Scale | 1/√(layer+1) | [#315](https://github.com/openai/parameter-golf/pull/315) |
| QK Gain init | 5.0 (per-head learned) | concept from [#259](https://github.com/openai/parameter-golf/pull/259); 5.0 default from [#1276](https://github.com/openai/parameter-golf/pull/1276) |
| U-Net skips | Encoder-decoder skip connections + skip gates | [#289](https://github.com/openai/parameter-golf/pull/289) |
| Parallel decoder | 2-lane parallel from layer 8+, lane mix learned | [#1530](https://github.com/openai/parameter-golf/pull/1530) (parallel residuals) |
| Depth recurrence | Loop layers 3–5, run 3× once `frac >= 0.35` | [#1344](https://github.com/openai/parameter-golf/pull/1344) |
| Logit softcap | 30 | Gemma2-style; in upstream baseline |
| Sparse attention gate | Narrow head-output gate, gate_window=12 | [#1787](https://github.com/openai/parameter-golf/pull/1787) |
| SmearGate (BOS-fixed) | Position-mixing gate with `not_bos` mask | [#1667](https://github.com/openai/parameter-golf/pull/1667) + parent [track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611](track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611/) (BOS leak fix) |
| Polar-Express Newton-Schulz | Muon, 5 steps, per-iter minimax tuples | [#1344](https://github.com/openai/parameter-golf/pull/1344) → [#1787](https://github.com/openai/parameter-golf/pull/1787) |
| MIN_LR floor | 0.10 (warmdown LR floor) | [#1787](https://github.com/openai/parameter-golf/pull/1787) |
| Fused softcapped CE Triton kernel | Single-pass, training-only | [#1787](https://github.com/openai/parameter-golf/pull/1787) |
| LQER asymmetric int4 | Rank-4 quant-error correction on top-3 tensors | [#1797](https://github.com/openai/parameter-golf/pull/1797) |
| Per-group compression | lrzip zpaq + L1 similarity-sort row reordering on hot tensors + brotli on the remainder | parent [track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611](track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611/) |
| Quantization | GPTQ int6 + int7 embed + int8-per-row attn-gate | int7 embed from [#1586](https://github.com/openai/parameter-golf/pull/1586); int8-per-row attn-gate from [#1736](https://github.com/openai/parameter-golf/pull/1736) (`GATED_ATTN_QUANT_GATE`) |
| TTT | Phased TTT eval, **4** cumulative phases, LoRA per-doc reset, **rank 128**, **prefix=3000 docs** | concept [#1610](https://github.com/openai/parameter-golf/pull/1610) → multi-phase global SGD [#1626](https://github.com/openai/parameter-golf/pull/1626) → adopted in [#1736](https://github.com/openai/parameter-golf/pull/1736); 4-phase / rank-128 / 3000-prefix retune **this work** |
| Tokenizer | sp8192 lossless caps caseops v1 reserved | [#1729](https://github.com/openai/parameter-golf/pull/1729) |
| **NEFTune embedding noise** | **alpha=5.0, training-only, gated off during TTT** | **this work** |
| **Z-loss regularization** | **weight 1e-4 on `mean(LSE^2)`, computed from fused softcapped-CE LSE** | **this work** |

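For readers tracing the SmearGate row above, here is the BOS-fixed position-mixing gate in isolation: a minimal sketch distilled from the `_forward_hidden` path of the included `train_gpt.py`. The standalone module wrapper is ours; in the real model the gate is a zero-init `CastedLinear`, the window is `GATE_WINDOW=12`, and the BOS id of 1 matches the data loader's default.

```python
import torch
import torch.nn as nn

class SmearGateSketch(nn.Module):
    """Position-mixing gate: each token may blend in its predecessor's embedding."""

    def __init__(self, window: int = 12, bos_id: int = 1):
        super().__init__()
        self.window = window
        self.bos_id = bos_id
        self.gate = nn.Linear(window, 1, bias=False)      # zero-init in the real stack
        nn.init.zeros_(self.gate.weight)
        self.smear_lambda = nn.Parameter(torch.zeros(1))  # learned global strength

    def forward(self, x: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
        # Gate each position t >= 1 from the first `window` channels of its embedding.
        g = self.smear_lambda * torch.sigmoid(self.gate(x[:, 1:, : self.window]))
        # BOS fix: a document's first token must not absorb the previous document's
        # last token, so the smear is masked wherever the current token is BOS.
        not_bos = (input_ids[:, 1:] != self.bos_id).to(x.dtype).unsqueeze(-1)
        # Blend in the previous position's embedding, gated and BOS-masked.
        return torch.cat([x[:, :1], x[:, 1:] + g * x[:, :-1] * not_bos], dim=1)
```
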
## What this submission adds over the parents

### NEFTune embedding noise (alpha=5.0)

Adds uniform random noise scaled by `alpha / sqrt(seq_len * dim)` (alpha=5.0) to the token embeddings during the training forward pass. The noise is gated by an `_in_ttt` flag so it is **disabled during phased-TTT** (where the model fine-tunes on the validation prefix, and injected noise would only add loss).

Per the original NEFTune paper (Jain et al., 2023, arXiv:2310.05914), this acts as a strong embedding-level regularizer with negligible training-time overhead. None of the parents in `seed_record_ids` use NEFTune.

Code: [train_gpt.py:249](train_gpt.py#L249), [train_gpt.py:1079](train_gpt.py#L1079), [train_gpt.py:1277–1280](train_gpt.py#L1277-L1280).

```python
neftune_alpha = float(os.environ.get("NEFTUNE_ALPHA", 5.0))
...
if self.training and self.neftune_alpha > 0 and not self._in_ttt:
    seq_len = max_seqlen if max_seqlen > 0 else x.size(1)
    noise = torch.rand_like(x) * 2.0 - 1.0
    x = x + noise * (self.neftune_alpha / math.sqrt(seq_len * x.size(-1)))
```

### Z-loss regularization (weight 1e-4)

Adds an auxiliary `weight * mean(LSE^2)` term to the training loss, where `LSE` is the per-token log-sum-exp of the (softcapped) logits. This penalizes drift in the logit normalization, which keeps `softmax` numerically well-conditioned and stops the partition function from inflating during long FP8/BF16 training runs. This is the standard PaLM-style z-loss (Chowdhery et al., 2022).

The crucial integration detail: when `FUSED_CE_ENABLED=1` (the default on this stack), the fused softcapped-CE Triton kernel already computes the per-token LSE as a byproduct, so the z-loss term is essentially free — no second logits pass. The fallback path (`fused_ce_enabled=False`) computes `torch.logsumexp` explicitly on the materialized FP32 logits; a sketch of that path follows the hyperparameter table below.

None of the parents in `seed_record_ids` use a z-loss term.

Code: [train_gpt.py:251](train_gpt.py#L251), [train_gpt.py:1083](train_gpt.py#L1083), [train_gpt.py:1399–1410](train_gpt.py#L1399-L1410).

```python
z_loss_weight = float(os.environ.get("Z_LOSS_WEIGHT", 1e-4))
...
if self.fused_ce_enabled:
    losses, lse = torch.ops.pgsubmission1draft7fusedce.softcapped_ce(
        logits_proj.reshape(-1, logits_proj.size(-1)),
        flat_targets,
        float(self.logit_softcap),
    )
    return losses.mean() + self.z_loss_weight * (lse**2).mean()
```

## Hyperparameter stack

One training-time regularizer (NEFTune) plus three phased-TTT hparams retuned on top of the 1.06108 parent's stack. Tuning was multi-seed (seeds 42, 0, 314); each change monotonically improves the 3-seed mean.

| hparam | value | default (parent 1.0611) | rationale |
|---|---|---|---|
| NEFTUNE_ALPHA | 5.0 | 0.0 (off) | Training-time embedding regularizer; disabled during TTT via `_in_ttt`. Default alpha from the NEFTune paper. |
| TTT_LORA_RANK | 128 | 80 | Higher-capacity LoRA adapters fit the longer per-phase prefix better; recovers ~0.0006 BPB on the 3-seed mean. |
| PHASED_TTT_PREFIX_DOCS | 3000 | 2500 | Longer per-phase prefix (still fits inside the 600 s eval budget — 472–528 s observed). |
| PHASED_TTT_NUM_PHASES | 4 | 3 | One extra cumulative phase (boundaries at ~750/1500/2250/3000 docs) given the longer prefix. |

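For concreteness, here is the fallback path referenced in the Z-loss section above as a minimal sketch. The function name is ours, not the `train_gpt.py` API; it assumes `softcap=30` and `Z_LOSS_WEIGHT=1e-4` as configured in this record.

```python
import torch

def softcapped_ce_with_zloss(logits, targets, softcap=30.0, z_loss_weight=1e-4):
    # Fallback sketch (FUSED_CE_ENABLED=0): softcap the materialized FP32 logits,
    # then reuse the explicit per-token logsumexp for both the CE and the z-loss.
    z = softcap * torch.tanh(logits.float() / softcap)
    lse = torch.logsumexp(z, dim=-1)                            # per-token log-partition
    ce = lse - z.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # -log softmax[target]
    return ce.mean() + z_loss_weight * (lse ** 2).mean()        # PaLM-style z-loss
```
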
## Training

- 4,946 ± 22 steps in 600 s on 8xH100 SXM (121.2 ms/step mean).
- Optimizer: Polar-Express Muon (5 steps) on matrix params; Adam (β1=0.9, β2=0.99) on tied embeddings (lr=0.03) and scalars (lr=0.02).
- Schedule: warmup=20 steps, warmdown_frac=0.85, MIN_LR=0.10, MATRIX_LR=0.026, GRAD_CLIP_NORM=0.3.
- EMA decay = 0.9965; EMA weights applied at end of training.
- Z-loss weight = 1e-4, computed via the LSE returned by the fused softcapped-CE Triton kernel (training-only auxiliary term).
- NEFTune (α=5.0) injected on `tok_emb` output during training only.

## Quantization

- GPTQ int6 on matrix tensors with per-layer adaptive clipping (MLP_CLIP_SIGMAS=11.5, ATTN_CLIP_SIGMAS=13.0, EMBED_CLIP_SIGMAS=14.0).
- int7 on tied embeddings.
- int8-per-row on the small attention-gate tensors (`GATED_ATTN_QUANT_GATE=1`).
- LQER asymmetric rank-4 quant-error correction on the top-3 tensors with the highest reconstruction error (`LQER_TOP_K=3`, `LQER_FACTOR_BITS=4`, `LQER_ASYM_GROUP=64`).
- Compression: `COMPRESSOR=pergroup` — buckets weights by role, L1 similarity-sorts hot 2D groups, runs `lrzip -z -L 9` (ZPAQ context-mixing) on each blob, and brotli on the remainder plus the code wrapper.

## TTT (Test-Time Training)

- Phased global-SGD TTT with **4 cumulative phases** at doc boundaries (max prefix = 3000 docs); a schematic of the loop appears just before the **Run** block below.
- Per-phase Adam (β1=0, β2=0.99, weight decay=0.5) on **LoRA rank-128** adapters injected on Q/K/V/O + MLP + lm_head.
- Per-doc LoRA reset between phases.
- NEFTune is force-disabled by the `_in_ttt` flag.
- Eval time: 472.8–527.8 s (mean 508.7 s) of the 600 s budget.

## Compliance

Track-B legal-eval conditions:

- **Train ≤ 600 s** ✓ — strict 600 s wallclock cap; all three seeds stop at ~599.5 s.
- **Artifact ≤ 16 MB** ✓ — max artifact 15,897,143 bytes (~15.16 MB).
- **Eval ≤ 600 s** ✓ — max eval 527.8 s.
- **No SLOT** ✓
- **No pre-quant TTT** ✓ — TTT runs *after* GPTQ + LQER.
- **No ETLB** ✓
- **No n-gram cache** ✓
- **Score-first TTT** ✓ — every token scored before the per-phase weight update.
- **3 seeds** ✓ — seeds 42, 0, 314.

## Reproduction

### Environment / install

```bash
# Base: PyTorch 25.03 NGC image (nvcr.io/nvidia/pytorch:25.03-py3)
# or a clean Python 3.12 + CUDA 12.8 env.

# System dep for COMPRESSOR=pergroup
apt-get update && apt-get install -y lrzip

# PyTorch + Python deps
pip install torch --index-url https://download.pytorch.org/whl/cu128
pip install -r requirements.txt

# FlashAttention 2 + 3 (separate wheels for cu128/torch2.9)
pip install "https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp312-cp312-linux_x86_64.whl"
pip install "https://download.pytorch.org/whl/cu128/flash_attn_3-3.0.0-cp39-abi3-manylinux_2_28_x86_64.whl"
```

### Data

The script expects a CaseOps-tokenized FineWeb shard tree at `$DATA_PATH` and the matching SentencePiece tokenizer at `$TOKENIZER_PATH`. The shard tree is produced by the parent record's `prepare_caseops_data.py` (see [`track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611/prepare_caseops_data.py`](track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611/prepare_caseops_data.py)); the tokenizer is the model at [`track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model`](track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model).

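For reference when reading the `PHASED_TTT_*` flags in the eval command below, this is the shape of the 4-phase, cumulative, score-first loop described in the TTT section. It is a schematic only: `score` and `adapt` are illustrative stand-ins, not the `train_gpt.py` API.

```python
def score(model, doc):
    """Stand-in: per-doc loss under the current (possibly adapted) weights."""
    return 0.0

def adapt(model, prefix):
    """Stand-in: reset LoRA adapters, then run one global-SGD phase on `prefix`."""
    pass

def phased_ttt_eval(model, val_docs, num_phases=4, prefix_docs=3000):
    bounds = [prefix_docs * (i + 1) // num_phases for i in range(num_phases)]
    # num_phases=4, prefix_docs=3000 -> phase boundaries at 750/1500/2250/3000 docs
    losses, seen = [], 0
    for end in bounds:
        # Score-first: every doc in this phase is scored *before* the update,
        # so no token is ever scored by weights that were trained on it.
        losses += [score(model, d) for d in val_docs[seen:end]]
        adapt(model, val_docs[:end])  # cumulative: fine-tune on the whole prefix so far
        seen = end
    # Docs beyond the prefix are scored with the final post-phase-4 weights.
    losses += [score(model, d) for d in val_docs[seen:]]
    return sum(losses) / max(len(losses), 1)
```
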
+ +### Run + +```bash +# --- Train --- +SEED=42 \ +DATA_PATH=./data/datasets \ +TOKENIZER_PATH=./data/tokenizers \ +CASEOPS_ENABLED=1 VOCAB_SIZE=8192 \ +ITERATIONS=20000 MAX_WALLCLOCK_SECONDS=600 \ +NEFTUNE_ALPHA=5.0 Z_LOSS_WEIGHT=1e-4 \ +EMBED_BITS=7 MATRIX_LR=0.026 MIN_LR=0.1 \ +MLP_CLIP_SIGMAS=11.5 ATTN_CLIP_SIGMAS=13.0 EMBED_CLIP_SIGMAS=14.0 \ +GRAD_CLIP_NORM=0.3 WARMUP_STEPS=20 WARMDOWN_FRAC=0.85 BETA2=0.99 \ +MUON_BACKEND_STEPS=5 \ +SPARSE_ATTN_GATE_ENABLED=1 SPARSE_ATTN_GATE_SCALE=0.5 GATE_WINDOW=12 \ +SMEAR_GATE_ENABLED=1 GATED_ATTN_QUANT_GATE=1 \ +LQER_ENABLED=1 LQER_ASYM_ENABLED=1 LQER_RANK=4 LQER_FACTOR_BITS=4 LQER_ASYM_GROUP=64 LQER_TOP_K=3 \ +FUSED_CE_ENABLED=1 COMPRESSOR=pergroup \ +torchrun --standalone --nproc_per_node=8 train_gpt.py --mode train + +# --- Eval (phased-TTT) --- +SEED=42 \ +DATA_PATH=./data/datasets \ +TOKENIZER_PATH=./data/tokenizers \ +CASEOPS_ENABLED=1 VOCAB_SIZE=8192 \ +TTT_ENABLED=1 TTT_LORA_RANK=128 TTT_CHUNK_SIZE=48 \ +TTT_BETA2=0.99 TTT_WEIGHT_DECAY=0.5 \ +PHASED_TTT_ENABLED=1 PHASED_TTT_PREFIX_DOCS=3000 PHASED_TTT_NUM_PHASES=4 \ +GLOBAL_TTT_MOMENTUM=0.9 \ +COMPRESSOR=pergroup \ +torchrun --standalone --nproc_per_node=8 train_gpt.py --mode eval +``` + +Re-run with `SEED=0` and `SEED=314` for the other two seeds. + +## Included Files + +- `README.md` — this file. +- `submission.json` — structured metadata. +- `train_gpt.py` — full training/eval script (≈3,830 lines), unchanged from the agent's `solution.py`. +- `requirements.txt` — Python deps for the install block. +- `train_seed42.log`, `train_seed0.log`, `train_seed314.log` — full per-seed run logs (train + GPTQ/LQER + phased-TTT eval). + +## Credits + +- [PR #1797](https://github.com/openai/parameter-golf/pull/1797) by @dexhunter — Smear Gate + LQER asymmetric rank-4 stacked on the PR #1787 base. +- [PR #1787](https://github.com/openai/parameter-golf/pull/1787) by @nprime06 — Polar-Express NS, MIN_LR=0.10, sparse attention gate, fused softcapped CE. +- [PR #1736](https://github.com/openai/parameter-golf/pull/1736) — CaseOps + GatedAttn + QuantGate + Loop4-5 + PhasedTTT integration on top of PR #1530's stack. +- [PR #1729](https://github.com/openai/parameter-golf/pull/1729) by @romeerp — sp8192 lossless caps caseops v1 reserved tokenizer + tapered weight decay infra. +- [PR #1667](https://github.com/openai/parameter-golf/pull/1667) by @MarioPaerle — Reintroduced SmearGate (modded-nanogpt @classiclarryd style) + Attention Output Gate. +- [PR #1626](https://github.com/openai/parameter-golf/pull/1626) by @dexhunter — Multi-phase global SGD phased-TTT. +- [PR #1610](https://github.com/openai/parameter-golf/pull/1610) — VarLenAttn + originator of phased TTT (PhasingTTT). +- [PR #1586](https://github.com/openai/parameter-golf/pull/1586) — Per-Layer Adaptive GPTQ Clip + int7 Embeddings + MATRIX_LR=0.026. +- [PR #1530](https://github.com/openai/parameter-golf/pull/1530) by @samacqua — Variable-length attention, fused LeakyReLU² MLP Triton kernel, parallel residuals, doc-based LoRA TTT. +- [PR #1344](https://github.com/openai/parameter-golf/pull/1344) — Polar-Express Newton-Schulz coefficients + depth recurrence (Loop4-5). +- [PR #1276](https://github.com/openai/parameter-golf/pull/1276) — QK Gain default 5.0. +- [PR #493](https://github.com/openai/parameter-golf/pull/493) — LeakyReLU² activation. +- [PR #478](https://github.com/openai/parameter-golf/pull/478) by @gowtham0992 — XSA-all on all layers. +- [PR #315](https://github.com/openai/parameter-golf/pull/315) — Partial RoPE + LN Scale. 
- [PR #289](https://github.com/openai/parameter-golf/pull/289) — U-Net skip connections.
- [PR #259](https://github.com/openai/parameter-golf/pull/259) — QK Gain concept.
- Parent record [`track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611`](track_10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611/) by @codemath3000 — BOS-fixed SmearGate, per-group lrzip+brotli compression pipeline, and the 9-hparam stack this submission inherits.
- NEFTune embedding noise — concept from Jain et al., 2023 ([arXiv:2310.05914](https://arxiv.org/abs/2310.05914)); first integration on this stack is **this work**.

diff --git a/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/requirements.txt b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/requirements.txt
new file mode 100644
index 0000000000..4227cb26db
--- /dev/null
+++ b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/requirements.txt
@@ -0,0 +1,43 @@
# Core Agent & Pipeline Dependencies
pydantic>=2.0
pydantic-ai
google-genai
tenacity
python-dotenv

# Storage & Infrastructure (from pyproject.toml)
redis>=5.0
psycopg2-binary>=2.9
sqlalchemy>=2.0
celery>=5.3
paramiko>=3.3

# Evaluation & Local Testing Dependencies
numpy
sentencepiece

# Other Dependencies
tqdm
huggingface-hub
kernels
setuptools
typing-extensions
datasets
tiktoken
nvidia-ml-py
black
coolname
shutup
omegaconf
humanize
dataclasses_json
psutil
brotli
zstandard

# kb_extend reproduction deps (pure-python)
python-minifier
blobfile
einops
ninja
flash-linear-attention
\ No newline at end of file
diff --git a/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/submission.json b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/submission.json
new file mode 100644
index 0000000000..f5e6d8bb6f
--- /dev/null
+++ b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/submission.json
@@ -0,0 +1,42 @@
{
  "author": "UniAgent",
  "github_id": "uniagent-alpha",
  "name": "11L XSA + LQER + SparseAttnGate + BOS-fixed SmearGate + NEFTune + Phased-TTT (4 phases, prefix=3000, LoRA-128)",
  "date": "2026-05-09",
  "track": "10min_16mb",
  "val_bpb": 1.06035,
  "val_bpb_std": 0.00044,
  "seeds": [42, 0, 314],
  "seed_results": {
    "42": {"val_bpb": 1.05980, "artifact_bytes": 15897143},
    "0": {"val_bpb": 1.06038, "artifact_bytes": 15894185},
    "314": {"val_bpb": 1.06087, "artifact_bytes": 15893797}
  },
  "hardware": "8xH100 80GB SXM",
  "pytorch_version": "2.9.1+cu128",
  "technique_summary": "11L XSA + LQER int4-rank4 + SparseAttnGate + BOS-fixed SmearGate + Polar-Express Muon + per-group lrzip compression + NEFTune embedding noise (alpha=5.0) + z-loss (1e-4) + phased-TTT (4 phases, prefix=3000 docs, LoRA rank 128).",
  "attribution": {
    "seed_records": [
      "10min_16mb/2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611",
      "10min_16mb/2026_04_16_SmearGate_Attention_Output_Gate_Score-First_TTT",
      "10min_16mb/2026-03-19_MLP3x_QAT_Int6_SlidingWindow",
      "10min_16mb/2026-03-19_Seq2048_FP16Emb_TunedLR",
      "10min_16mb/2026-04-08_SP8192_ParallelResid_ScoreFirstTTT"
    ],
    "novel_contributions": [
      "NEFTune embedding noise (alpha=5.0, training-only, _in_ttt-gated)",
      "Z-loss regularization (weight 1e-4) added to the training cross-entropy, computed via the LSE returned by the fused softcapped-CE Triton kernel"
    ]
  },
  "compliance": {
"train_under_600s": true, + "artifact_under_16mb": true, + "eval_under_600s": true, + "no_slot": true, + "no_pre_quant_ttt": true, + "no_etlb": true, + "no_ngram_cache": true, + "score_first_ttt": true, + "three_seeds": true + } +} diff --git a/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_gpt.py b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_gpt.py new file mode 100644 index 0000000000..09a39ede5d --- /dev/null +++ b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_gpt.py @@ -0,0 +1,3832 @@ +import argparse, base64, collections, copy, fcntl, glob, io, lzma, math, os, sys +from pathlib import Path +import random, re, subprocess, time, uuid, numpy as np, sentencepiece as spm, torch, torch.distributed as dist, torch.nn.functional as F +from torch import Tensor, nn +from flash_attn_interface import ( + flash_attn_func as flash_attn_3_func, + flash_attn_varlen_func, +) +from concurrent.futures import ThreadPoolExecutor +import triton +import triton.language as tl +from triton.tools.tensor_descriptor import TensorDescriptor + +os.environ.setdefault("TORCHINDUCTOR_CACHE_DIR", "/workspace/.torch_inductor") + +# ===== Fused softcapped CE (Triton) ===== +_FUSED_CE_LIBRARY = "pgsubmission1draft7fusedce" +_FUSED_CE_BLOCK_SIZE = 1024 +_FUSED_CE_NUM_WARPS = 4 + + +@triton.jit +def _softcapped_ce_fwd_kernel( + logits_ptr, + losses_ptr, + lse_ptr, + targets_ptr, + stride_logits_n, + stride_logits_v, + n_rows, + n_cols, + softcap, + block_size: tl.constexpr, +): + row_idx = tl.program_id(0).to(tl.int64) + logits_row_ptr = logits_ptr + row_idx * stride_logits_n + max_val = -float("inf") + sum_exp = 0.0 + A = 2.0 * softcap + inv_C = 2.0 / softcap + for off in range(0, n_cols, block_size): + cols = off + tl.arange(0, block_size) + mask = cols < n_cols + val = tl.load( + logits_row_ptr + cols * stride_logits_v, mask=mask, other=-float("inf") + ).to(tl.float32) + z = A * tl.sigmoid(val * inv_C) + z = tl.where(mask, z, -float("inf")) + curr_max = tl.max(z, axis=0) + new_max = tl.maximum(max_val, curr_max) + sum_exp = sum_exp * tl.exp(max_val - new_max) + tl.sum( + tl.exp(z - new_max), axis=0 + ) + max_val = new_max + lse = max_val + tl.log(sum_exp) + tl.store(lse_ptr + row_idx, lse) + target = tl.load(targets_ptr + row_idx).to(tl.int32) + target_val = tl.load(logits_row_ptr + target * stride_logits_v).to(tl.float32) + target_z = A * tl.sigmoid(target_val * inv_C) + tl.store(losses_ptr + row_idx, lse - target_z) + + +@triton.jit +def _softcapped_ce_bwd_kernel( + grad_logits_ptr, + grad_losses_ptr, + lse_ptr, + logits_ptr, + targets_ptr, + stride_logits_n, + stride_logits_v, + stride_grad_n, + stride_grad_v, + n_rows, + n_cols, + softcap, + block_size: tl.constexpr, +): + row_idx = tl.program_id(0).to(tl.int64) + logits_row_ptr = logits_ptr + row_idx * stride_logits_n + grad_row_ptr = grad_logits_ptr + row_idx * stride_grad_n + lse = tl.load(lse_ptr + row_idx) + grad_loss = tl.load(grad_losses_ptr + row_idx).to(tl.float32) + target = tl.load(targets_ptr + row_idx).to(tl.int32) + A = 2.0 * softcap + inv_C = 2.0 / softcap + dz_dx_scale = A * inv_C + for off in range(0, n_cols, block_size): + cols = off + tl.arange(0, block_size) + mask = cols < n_cols + val = tl.load(logits_row_ptr + cols * stride_logits_v, mask=mask, other=0.0).to( + tl.float32 + ) + sigmoid_u = tl.sigmoid(val * inv_C) + z = A * sigmoid_u + probs = tl.exp(z - lse) + grad_z = grad_loss * (probs - tl.where(cols == target, 1.0, 0.0)) + grad_x = 
grad_z * (dz_dx_scale * sigmoid_u * (1.0 - sigmoid_u)) + tl.store(grad_row_ptr + cols * stride_grad_v, grad_x, mask=mask) + + +def _validate_softcapped_ce_inputs(logits, targets, softcap): + if logits.ndim != 2: + raise ValueError(f"Expected logits.ndim=2, got {logits.ndim}") + if targets.ndim != 1: + raise ValueError(f"Expected targets.ndim=1, got {targets.ndim}") + if logits.shape[0] != targets.shape[0]: + raise ValueError("rows mismatch") + if not logits.is_cuda or not targets.is_cuda: + raise ValueError("CUDA required") + if softcap <= 0.0: + raise ValueError("softcap must be positive") + if logits.dtype not in (torch.float16, torch.bfloat16, torch.float32): + raise ValueError("bad dtype") + logits = logits.contiguous() + targets = targets.contiguous() + if targets.dtype != torch.int64: + targets = targets.to(dtype=torch.int64) + return logits, targets + + +@torch.library.custom_op(f"{_FUSED_CE_LIBRARY}::softcapped_ce", mutates_args=()) +def softcapped_ce_op( + logits: Tensor, targets: Tensor, softcap: float +) -> tuple[Tensor, Tensor]: + logits, targets = _validate_softcapped_ce_inputs(logits, targets, float(softcap)) + n_rows, n_cols = logits.shape + losses = torch.empty((n_rows,), device=logits.device, dtype=torch.float32) + lse = torch.empty((n_rows,), device=logits.device, dtype=torch.float32) + _softcapped_ce_fwd_kernel[(n_rows,)]( + logits, + losses, + lse, + targets, + logits.stride(0), + logits.stride(1), + n_rows, + n_cols, + float(softcap), + block_size=_FUSED_CE_BLOCK_SIZE, + num_warps=_FUSED_CE_NUM_WARPS, + ) + return losses, lse + + +@softcapped_ce_op.register_fake +def _(logits, targets, softcap): + n_rows = logits.shape[0] + return ( + logits.new_empty((n_rows,), dtype=torch.float32), + logits.new_empty((n_rows,), dtype=torch.float32), + ) + + +@torch.library.custom_op( + f"{_FUSED_CE_LIBRARY}::softcapped_ce_backward", mutates_args=() +) +def softcapped_ce_backward_op( + logits: Tensor, targets: Tensor, lse: Tensor, grad_losses: Tensor, softcap: float +) -> Tensor: + logits, targets = _validate_softcapped_ce_inputs(logits, targets, float(softcap)) + lse = lse.contiguous() + grad_losses = grad_losses.contiguous().to(dtype=torch.float32) + grad_logits = torch.empty_like(logits) + n_rows, n_cols = logits.shape + _softcapped_ce_bwd_kernel[(n_rows,)]( + grad_logits, + grad_losses, + lse, + logits, + targets, + logits.stride(0), + logits.stride(1), + grad_logits.stride(0), + grad_logits.stride(1), + n_rows, + n_cols, + float(softcap), + block_size=_FUSED_CE_BLOCK_SIZE, + num_warps=_FUSED_CE_NUM_WARPS, + ) + return grad_logits + + +@softcapped_ce_backward_op.register_fake +def _(logits, targets, lse, grad_losses, softcap): + return logits.new_empty(logits.shape) + + +def _softcapped_ce_setup_context(ctx, inputs, output): + logits, targets, softcap = inputs + _losses, lse = output + ctx.save_for_backward(logits, targets, lse) + ctx.softcap = float(softcap) + + +def _softcapped_ce_backward(ctx, grad_losses, grad_lse): + del grad_lse + logits, targets, lse = ctx.saved_tensors + grad_logits = torch.ops.pgsubmission1draft7fusedce.softcapped_ce_backward( + logits, targets, lse, grad_losses, ctx.softcap + ) + return grad_logits, None, None + + +softcapped_ce_op.register_autograd( + _softcapped_ce_backward, setup_context=_softcapped_ce_setup_context +) + + +def softcapped_cross_entropy(logits, targets, softcap, reduction="mean"): + losses, _ = torch.ops.pgsubmission1draft7fusedce.softcapped_ce( + logits, targets, float(softcap) + ) + if reduction == "none": + return losses + if 
reduction == "sum": + return losses.sum() + if reduction == "mean": + return losses.mean() + raise ValueError(reduction) + + +class Hyperparameters: + data_path_root = os.environ.get("DATA_PATH", "/data/datasets/") + tokenizer_path_root = os.environ.get("TOKENIZER_PATH", "/data/tokenizers/") + seed = int(os.environ.get("SEED", 42)) + run_id = os.environ.get("RUN_ID", str(uuid.uuid4())) + iterations = int(os.environ.get("ITERATIONS", 20000)) + warmdown_frac = float(os.environ.get("WARMDOWN_FRAC", 0.85)) + warmup_steps = int(os.environ.get("WARMUP_STEPS", 20)) + train_batch_tokens = int(os.environ.get("TRAIN_BATCH_TOKENS", 786432)) + fused_ce_enabled = bool(int(os.environ.get("FUSED_CE_ENABLED", "1"))) + train_seq_len = int(os.environ.get("TRAIN_SEQ_LEN", 2048)) + train_log_every = int(os.environ.get("TRAIN_LOG_EVERY", 500)) + max_wallclock_seconds = float(os.environ.get("MAX_WALLCLOCK_SECONDS", 600.0)) + val_batch_tokens = int(os.environ.get("VAL_BATCH_TOKENS", 524288)) + eval_seq_len = int(os.environ.get("EVAL_SEQ_LEN", 2048)) + val_loss_every = int(os.environ.get("VAL_LOSS_EVERY", 0)) + vocab_size = int(os.environ.get("VOCAB_SIZE", 8192)) + num_layers = int(os.environ.get("NUM_LAYERS", 11)) + xsa_last_n = int(os.environ.get("XSA_LAST_N", 11)) + model_dim = int(os.environ.get("MODEL_DIM", 512)) + num_kv_heads = int(os.environ.get("NUM_KV_HEADS", 4)) + num_heads = int(os.environ.get("NUM_HEADS", 8)) + mlp_mult = float(os.environ.get("MLP_MULT", 4.0)) + skip_gates_enabled = bool(int(os.environ.get("SKIP_GATES_ENABLED", "1"))) + tie_embeddings = bool(int(os.environ.get("TIE_EMBEDDINGS", "1"))) + neftune_alpha = float(os.environ.get("NEFTUNE_ALPHA", 5.0)) + logit_softcap = float(os.environ.get("LOGIT_SOFTCAP", 3e1)) + z_loss_weight = float(os.environ.get("Z_LOSS_WEIGHT", 1e-4)) + rope_base = float(os.environ.get("ROPE_BASE", 1e4)) + rope_dims = int(os.environ.get("ROPE_DIMS", 16)) + rope_train_seq_len = int(os.environ.get("ROPE_TRAIN_SEQ_LEN", 2048)) + rope_yarn = bool(int(os.environ.get("ROPE_YARN", "0"))) + ln_scale = bool(int(os.environ.get("LN_SCALE", "1"))) + qk_gain_init = float(os.environ.get("QK_GAIN_INIT", 5.0)) + num_loops = int(os.environ.get("NUM_LOOPS", 2)) + loop_start = int(os.environ.get("LOOP_START", 3)) + loop_end = int(os.environ.get("LOOP_END", 5)) + enable_looping_at = float(os.environ.get("ENABLE_LOOPING_AT", 0.35)) + parallel_start_layer = int(os.environ.get("PARALLEL_START_LAYER", 8)) + parallel_final_lane = os.environ.get("PARALLEL_FINAL_LANE", "mean") + min_lr = float(os.environ.get("MIN_LR", 0.1)) + embed_lr = float(os.environ.get("EMBED_LR", 0.6)) + tied_embed_lr = float(os.environ.get("TIED_EMBED_LR", 0.03)) + tied_embed_init_std = float(os.environ.get("TIED_EMBED_INIT_STD", 0.005)) + matrix_lr = float(os.environ.get("MATRIX_LR", 0.026)) + scalar_lr = float(os.environ.get("SCALAR_LR", 0.02)) + muon_momentum = float(os.environ.get("MUON_MOMENTUM", 0.97)) + muon_backend_steps = int(os.environ.get("MUON_BACKEND_STEPS", 5)) + muon_momentum_warmup_start = float( + os.environ.get("MUON_MOMENTUM_WARMUP_START", 0.92) + ) + muon_momentum_warmup_steps = int(os.environ.get("MUON_MOMENTUM_WARMUP_STEPS", 1500)) + muon_row_normalize = bool(int(os.environ.get("MUON_ROW_NORMALIZE", "1"))) + beta1 = float(os.environ.get("BETA1", 0.9)) + beta2 = float(os.environ.get("BETA2", 0.99)) + adam_eps = float(os.environ.get("ADAM_EPS", 1e-08)) + grad_clip_norm = float(os.environ.get("GRAD_CLIP_NORM", 0.3)) + eval_stride = int(os.environ.get("EVAL_STRIDE", 64)) + adam_wd = 
float(os.environ.get("ADAM_WD", 0.02)) + muon_wd = float(os.environ.get("MUON_WD", 0.095)) + embed_wd = float(os.environ.get("EMBED_WD", 0.085)) + ema_decay = float(os.environ.get("EMA_DECAY", 0.9965)) + ttt_enabled = bool(int(os.environ.get("TTT_ENABLED", "1"))) + ttt_lora_rank = int(os.environ.get("TTT_LORA_RANK", 128)) + ttt_lora_lr = float(os.environ.get("TTT_LORA_LR", 0.0001)) + ttt_chunk_size = int(os.environ.get("TTT_CHUNK_SIZE", 48)) + ttt_eval_seq_len = int(os.environ.get("TTT_EVAL_SEQ_LEN", 2048)) + ttt_batch_size = int(os.environ.get("TTT_BATCH_SIZE", 64)) + ttt_grad_steps = int(os.environ.get("TTT_GRAD_STEPS", 1)) + ttt_weight_decay = float(os.environ.get("TTT_WEIGHT_DECAY", 0.5)) + ttt_beta1 = float(os.environ.get("TTT_BETA1", 0)) + ttt_beta2 = float(os.environ.get("TTT_BETA2", 0.99)) + ttt_k_lora = bool(int(os.environ.get("TTT_K_LORA", "1"))) + ttt_mlp_lora = bool(int(os.environ.get("TTT_MLP_LORA", "1"))) + ttt_o_lora = bool(int(os.environ.get("TTT_O_LORA", "1"))) + ttt_optimizer = os.environ.get("TTT_OPTIMIZER", "adam") + ttt_eval_batches = os.environ.get("TTT_EVAL_BATCHES", "") + val_doc_fraction = float(os.environ.get("VAL_DOC_FRACTION", 1.0)) + compressor = os.environ.get("COMPRESSOR", "pergroup") + gptq_calibration_batches = int(os.environ.get("GPTQ_CALIBRATION_BATCHES", 16)) + gptq_reserve_seconds = float(os.environ.get("GPTQ_RESERVE_SECONDS", 0.5)) + phased_ttt_prefix_docs = int(os.environ.get("PHASED_TTT_PREFIX_DOCS", 3000)) + phased_ttt_num_phases = int(os.environ.get("PHASED_TTT_NUM_PHASES", 4)) + global_ttt_lr = float(os.environ.get("GLOBAL_TTT_LR", 0.001)) + global_ttt_momentum = float(os.environ.get("GLOBAL_TTT_MOMENTUM", 0.9)) + global_ttt_epochs = int(os.environ.get("GLOBAL_TTT_EPOCHS", 1)) + global_ttt_chunk_tokens = int(os.environ.get("GLOBAL_TTT_CHUNK_TOKENS", 32768)) + global_ttt_batch_seqs = int(os.environ.get("GLOBAL_TTT_BATCH_SEQS", 32)) + global_ttt_warmup_start_lr = float( + os.environ.get("GLOBAL_TTT_WARMUP_START_LR", 0.0) + ) + global_ttt_warmup_chunks = int(os.environ.get("GLOBAL_TTT_WARMUP_CHUNKS", 0)) + global_ttt_grad_clip = float(os.environ.get("GLOBAL_TTT_GRAD_CLIP", 1.0)) + global_ttt_respect_doc_boundaries = bool( + int(os.environ.get("GLOBAL_TTT_RESPECT_DOC_BOUNDARIES", "1")) + ) + matrix_bits = int(os.environ.get("MATRIX_BITS", 6)) + embed_bits = int(os.environ.get("EMBED_BITS", 7)) + matrix_clip_sigmas = float(os.environ.get("MATRIX_CLIP_SIGMAS", 12.85)) + embed_clip_sigmas = float(os.environ.get("EMBED_CLIP_SIGMAS", 14.0)) + mlp_clip_sigmas = float(os.environ.get("MLP_CLIP_SIGMAS", 11.5)) + attn_clip_sigmas = float(os.environ.get("ATTN_CLIP_SIGMAS", 13.0)) + attn_out_gate_enabled = bool(int(os.environ.get("ATTN_OUT_GATE_ENABLED", "0"))) + attn_out_gate_src = os.environ.get("ATTN_OUT_GATE_SRC", "proj") + smear_gate_enabled = bool(int(os.environ.get("SMEAR_GATE_ENABLED", "1"))) + gate_window = int(os.environ.get("GATE_WINDOW", 12)) + gated_attn_enabled = bool(int(os.environ.get("GATED_ATTN_ENABLED", "0"))) + gated_attn_init_std = float(os.environ.get("GATED_ATTN_INIT_STD", 0.01)) + gated_attn_quant_gate = bool(int(os.environ.get("GATED_ATTN_QUANT_GATE", "1"))) + sparse_attn_gate_enabled = bool( + int(os.environ.get("SPARSE_ATTN_GATE_ENABLED", "1")) + ) + sparse_attn_gate_init_std = float(os.environ.get("SPARSE_ATTN_GATE_INIT_STD", 0.0)) + sparse_attn_gate_scale = float(os.environ.get("SPARSE_ATTN_GATE_SCALE", 0.5)) + lqer_enabled = bool(int(os.environ.get("LQER_ENABLED", "1"))) + lqer_rank = int(os.environ.get("LQER_RANK", 4)) + lqer_top_k 
= int(os.environ.get("LQER_TOP_K", 3)) + lqer_factor_bits = int(os.environ.get("LQER_FACTOR_BITS", 4)) + lqer_asym_enabled = bool(int(os.environ.get("LQER_ASYM_ENABLED", "1"))) + lqer_asym_group = int(os.environ.get("LQER_ASYM_GROUP", "64")) + distributed = "RANK" in os.environ and "WORLD_SIZE" in os.environ + rank = int(os.environ.get("RANK", "0")) + world_size = int(os.environ.get("WORLD_SIZE", "1")) + local_rank = int(os.environ.get("LOCAL_RANK", "0")) + is_main_process = rank == 0 + grad_accum_steps = max(1, 8 // world_size) + caseops_enabled = bool(int(os.environ.get("CASEOPS_ENABLED", "1"))) + if caseops_enabled: + datasets_dir = os.path.join( + data_path_root, "fineweb10B_sp8192_lossless_caps_caseops_v1_reserved" + ) + tokenizer_path = os.path.join( + tokenizer_path_root, + "fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model", + ) + else: + datasets_dir = os.path.join(data_path_root, f"fineweb10B_sp{vocab_size}") + tokenizer_path = os.path.join( + tokenizer_path_root, f"fineweb_{vocab_size}_bpe.model" + ) + train_files = os.path.join(datasets_dir, "fineweb_train_*.bin") + val_files = os.path.join(datasets_dir, "fineweb_val_*.bin") + val_bytes_files = os.path.join(datasets_dir, "fineweb_val_bytes_*.bin") + artifact_dir = os.environ.get("ARTIFACT_DIR", "") + logfile = os.path.join(artifact_dir, f"{run_id}.txt") if artifact_dir else None + quantized_model_path = ( + os.path.join(artifact_dir, "final_model.ptz") + if artifact_dir + else "final_model.ptz" + ) + + +_logger_hparams = None + + +def set_logging_hparams(h): + global _logger_hparams + _logger_hparams = h + + +def log(msg, console=True): + if _logger_hparams is None: + print(msg, flush=True) + return + if _logger_hparams.is_main_process: + if console: + print(msg, flush=True) + if _logger_hparams.logfile is not None: + with open(_logger_hparams.logfile, "a", encoding="utf-8") as f: + print(msg, file=f) + + +class ValidationData: + def __init__(self, h, device): + self.sp = spm.SentencePieceProcessor(model_file=h.tokenizer_path) + if int(self.sp.vocab_size()) != h.vocab_size: + raise ValueError( + f"VOCAB_SIZE={h.vocab_size} != {int(self.sp.vocab_size())}" + ) + self.val_tokens = load_validation_tokens(h.val_files, h.eval_seq_len) + self.caseops_enabled = bool(getattr(h, "caseops_enabled", False)) + if self.caseops_enabled: + self.base_bytes_lut = None + self.has_leading_space_lut = None + self.is_boundary_token_lut = None + else: + ( + self.base_bytes_lut, + self.has_leading_space_lut, + self.is_boundary_token_lut, + ) = build_sentencepiece_luts(self.sp, h.vocab_size, device) + self.val_bytes = None + if self.caseops_enabled: + self.val_bytes = load_validation_byte_sidecar( + h.val_bytes_files, h.eval_seq_len, self.val_tokens.numel() + ) + + +def build_sentencepiece_luts(sp, vocab_size, device): + sp_vocab_size = int(sp.vocab_size()) + table_size = max(sp_vocab_size, vocab_size) + base_bytes_np = np.zeros((table_size,), dtype=np.int16) + has_leading_space_np = np.zeros((table_size,), dtype=np.bool_) + is_boundary_token_np = np.ones((table_size,), dtype=np.bool_) + for token_id in range(sp_vocab_size): + if sp.is_control(token_id) or sp.is_unknown(token_id) or sp.is_unused(token_id): + continue + is_boundary_token_np[token_id] = False + if sp.is_byte(token_id): + base_bytes_np[token_id] = 1 + continue + piece = sp.id_to_piece(token_id) + if piece.startswith("▁"): + has_leading_space_np[token_id] = True + piece = piece[1:] + base_bytes_np[token_id] = len(piece.encode("utf-8")) + return ( + torch.tensor(base_bytes_np, 
dtype=torch.int16, device=device), + torch.tensor(has_leading_space_np, dtype=torch.bool, device=device), + torch.tensor(is_boundary_token_np, dtype=torch.bool, device=device), + ) + + +def load_validation_tokens(pattern, seq_len): + files = [ + Path(p) for p in sorted(glob.glob(pattern)) if "_bytes_" not in Path(p).name + ] + if not files: + raise FileNotFoundError(pattern) + tokens = torch.cat([load_data_shard(file) for file in files]).contiguous() + usable = (tokens.numel() - 1) // seq_len * seq_len + if usable <= 0: + raise ValueError("val too short") + return tokens[: usable + 1] + + +def load_validation_byte_sidecar(pattern, seq_len, expected_len): + files = [Path(p) for p in sorted(glob.glob(pattern))] + if not files: + raise FileNotFoundError(pattern) + shards = [load_data_shard(file) for file in files] + bytes_full = torch.cat(shards).contiguous() + if bytes_full.numel() < expected_len: + raise ValueError("sidecar too short") + return bytes_full[:expected_len].to(torch.int32) + + +def load_data_shard(file): + header_bytes = 256 * np.dtype(" 0: + pos = start + while pos < end: + seg_starts.append(pos) + pos += max_doc_len + else: + seg_starts.append(start) + boundaries = seg_starts + [total_len] + padded_len = get_next_multiple_of_n(len(boundaries), bucket_size) + cu = torch.full((padded_len,), total_len, dtype=torch.int32, device=device) + cu[: len(boundaries)] = torch.tensor(boundaries, dtype=torch.int32, device=device) + seg_ends = seg_starts[1:] + [total_len] + max_seqlen = max(end - start for start, end in zip(seg_starts, seg_ends)) + return cu, max_seqlen + + +class DocumentPackingLoader: + _shard_pool = ThreadPoolExecutor(1) + + def __init__(self, h, device, cu_bucket_size=64): + self.rank = h.rank + self.world_size = h.world_size + self.device = device + self.cu_bucket_size = cu_bucket_size + self.max_seq_len = h.train_seq_len + all_files = [Path(p) for p in sorted(glob.glob(h.train_files))] + if not all_files: + raise FileNotFoundError(h.train_files) + self.files = all_files + self.file_iter = iter(self.files) + self._init_shard(load_data_shard(next(self.file_iter))) + self._next_shard = self._submit_next_shard() + self._batch_pool = ThreadPoolExecutor(1) + self._prefetch_queue = [] + + def _init_shard(self, tokens): + global BOS_ID + self.tokens = tokens + self.shard_size = tokens.numel() + if BOS_ID is None: + BOS_ID = 1 + self.bos_idx = ( + (tokens == BOS_ID).nonzero(as_tuple=True)[0].to(torch.int64).cpu().numpy() + ) + self.cursor = int(self.bos_idx[0]) + + def _submit_next_shard(self): + try: + path = next(self.file_iter) + return self._shard_pool.submit(load_data_shard, path) + except StopIteration: + return None + + def _advance_shard(self): + if self._next_shard is None: + self.file_iter = iter(self.files) + self._next_shard = self._shard_pool.submit( + load_data_shard, next(self.file_iter) + ) + self._init_shard(self._next_shard.result()) + self._next_shard = self._submit_next_shard() + + def _local_doc_starts(self, local_start, total_len): + lo = np.searchsorted(self.bos_idx, local_start, side="left") + hi = np.searchsorted(self.bos_idx, local_start + total_len, side="left") + return (self.bos_idx[lo:hi] - local_start).tolist() + + def _prepare_batch(self, num_tokens_local, max_seq_len): + per_rank_span = num_tokens_local + 1 + global_span = per_rank_span * self.world_size + while self.cursor + global_span > self.shard_size: + self._advance_shard() + local_start = self.cursor + self.rank * per_rank_span + buf = self.tokens[local_start : local_start + 
per_rank_span] + inputs = torch.empty(per_rank_span - 1, dtype=torch.int64, pin_memory=True) + targets = torch.empty(per_rank_span - 1, dtype=torch.int64, pin_memory=True) + inputs.copy_(buf[:-1]) + targets.copy_(buf[1:]) + starts = self._local_doc_starts(local_start, inputs.numel()) + cu_seqlens, max_seqlen = _build_cu_seqlens( + starts, inputs.numel(), inputs.device, max_seq_len, self.cu_bucket_size + ) + cu_seqlens = cu_seqlens.pin_memory() + self.cursor += global_span + return inputs, targets, cu_seqlens, max_seqlen + + def next_batch(self, global_tokens, grad_accum_steps): + num_tokens_local = global_tokens // (self.world_size * grad_accum_steps) + while len(self._prefetch_queue) < 2: + self._prefetch_queue.append( + self._batch_pool.submit( + self._prepare_batch, num_tokens_local, self.max_seq_len + ) + ) + inputs, targets, cu_seqlens, max_seqlen = self._prefetch_queue.pop(0).result() + self._prefetch_queue.append( + self._batch_pool.submit( + self._prepare_batch, num_tokens_local, self.max_seq_len + ) + ) + return ( + inputs[None].to(self.device, non_blocking=True), + targets[None].to(self.device, non_blocking=True), + cu_seqlens.to(self.device, non_blocking=True), + max_seqlen, + ) + + +class ShuffledSequenceLoader: + def __init__(self, h, device): + self.world_size = h.world_size + self.seq_len = h.train_seq_len + self.device = device + all_files = [Path(p) for p in sorted(glob.glob(h.train_files))] + if not all_files: + raise FileNotFoundError(h.train_files) + self.files = all_files[h.rank :: h.world_size] + self.rng = np.random.Generator(np.random.PCG64(h.rank)) + self.num_tokens = [_read_num_tokens(f) for f in self.files] + self.start_inds = [[] for _ in self.files] + for si in range(len(self.files)): + self._reset_shard(si) + + def _reset_shard(self, si): + max_phase = min( + self.seq_len - 1, max(0, self.num_tokens[si] - self.seq_len - 1) + ) + phase = int(self.rng.integers(max_phase + 1)) if max_phase > 0 else 0 + num_sequences = (self.num_tokens[si] - 1 - phase) // self.seq_len + sequence_order = self.rng.permutation(num_sequences) + self.start_inds[si] = (phase + sequence_order * self.seq_len).tolist() + + def next_batch(self, global_tokens, grad_accum_steps): + device_tokens = global_tokens // (self.world_size * grad_accum_steps) + device_batch_size = device_tokens // self.seq_len + remaining = np.array([len(s) for s in self.start_inds], dtype=np.float64) + x = torch.empty((device_batch_size, self.seq_len), dtype=torch.int64) + y = torch.empty((device_batch_size, self.seq_len), dtype=torch.int64) + for bi in range(device_batch_size): + total = remaining.sum() + if total <= 0: + for si in range(len(self.files)): + self._reset_shard(si) + remaining = np.array( + [len(s) for s in self.start_inds], dtype=np.float64 + ) + total = remaining.sum() + probs = remaining / total + si = int(self.rng.choice(len(self.files), p=probs)) + start_ind = self.start_inds[si].pop() + remaining[si] -= 1 + mm = _get_shard_memmap(self.files[si]) + window = torch.as_tensor( + np.array(mm[start_ind : start_ind + self.seq_len + 1], dtype=np.int64) + ) + x[bi] = window[:-1] + y[bi] = window[1:] + return x.to(self.device, non_blocking=True), y.to( + self.device, non_blocking=True + ) + + +class RMSNorm(nn.Module): + def __init__(self, eps=None): + super().__init__() + self.eps = eps + + def forward(self, x): + return F.rms_norm(x, (x.size(-1),), eps=self.eps) + + +class CastedLinear(nn.Linear): + def forward(self, x): + w = self.weight.to(x.dtype) + bias = self.bias.to(x.dtype) if self.bias is not 
None else None + return F.linear(x, w, bias) + + +@triton.jit +def linear_leaky_relu_square_kernel( + a_desc, + b_desc, + c_desc, + aux_desc, + M, + N, + K, + BLOCK_SIZE_M: tl.constexpr, + BLOCK_SIZE_N: tl.constexpr, + BLOCK_SIZE_K: tl.constexpr, + NUM_SMS: tl.constexpr, + FORWARD: tl.constexpr, +): + dtype = tl.bfloat16 + start_pid = tl.program_id(axis=0) + num_pid_m = tl.cdiv(M, BLOCK_SIZE_M) + num_pid_n = tl.cdiv(N, BLOCK_SIZE_N) + k_tiles = tl.cdiv(K, BLOCK_SIZE_K) + num_tiles = num_pid_m * num_pid_n + tile_id_c = start_pid - NUM_SMS + for tile_id in tl.range(start_pid, num_tiles, NUM_SMS, flatten=True): + pid_m = tile_id // num_pid_n + pid_n = tile_id % num_pid_n + offs_am = pid_m * BLOCK_SIZE_M + offs_bn = pid_n * BLOCK_SIZE_N + accumulator = tl.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=tl.float32) + for ki in range(k_tiles): + offs_k = ki * BLOCK_SIZE_K + a = a_desc.load([offs_am, offs_k]) + b = b_desc.load([offs_bn, offs_k]) + accumulator = tl.dot(a, b.T, accumulator) + tile_id_c += NUM_SMS + offs_am_c = offs_am + offs_bn_c = offs_bn + acc = tl.reshape(accumulator, (BLOCK_SIZE_M, 2, BLOCK_SIZE_N // 2)) + acc = tl.permute(acc, (0, 2, 1)) + acc0, acc1 = tl.split(acc) + c0 = acc0.to(dtype) + c1 = acc1.to(dtype) + if not FORWARD: + pre0 = aux_desc.load([offs_am_c, offs_bn_c]) + pre1 = aux_desc.load([offs_am_c, offs_bn_c + BLOCK_SIZE_N // 2]) + c0 = c0 * tl.where(pre0 > 0, 2.0 * pre0, 0.5 * pre0) + c1 = c1 * tl.where(pre1 > 0, 2.0 * pre1, 0.5 * pre1) + c_desc.store([offs_am_c, offs_bn_c], c0) + c_desc.store([offs_am_c, offs_bn_c + BLOCK_SIZE_N // 2], c1) + if FORWARD: + aux0 = tl.where(c0 > 0, c0, 0.5 * c0) + aux1 = tl.where(c1 > 0, c1, 0.5 * c1) + aux_desc.store([offs_am_c, offs_bn_c], aux0 * aux0) + aux_desc.store([offs_am_c, offs_bn_c + BLOCK_SIZE_N // 2], aux1 * aux1) + + +def linear_leaky_relu_square(a, b, aux=None): + M, K = a.shape + N, K2 = b.shape + assert K == K2 + c = torch.empty((M, N), device=a.device, dtype=a.dtype) + forward = aux is None + if aux is None: + aux = torch.empty((M, N), device=a.device, dtype=a.dtype) + num_sms = torch.cuda.get_device_properties(a.device).multi_processor_count + BLOCK_SIZE_M, BLOCK_SIZE_N, BLOCK_SIZE_K = 256, 128, 64 + num_stages = 4 if forward else 3 + a_desc = TensorDescriptor.from_tensor(a, [BLOCK_SIZE_M, BLOCK_SIZE_K]) + b_desc = TensorDescriptor.from_tensor(b, [BLOCK_SIZE_N, BLOCK_SIZE_K]) + c_desc = TensorDescriptor.from_tensor(c, [BLOCK_SIZE_M, BLOCK_SIZE_N // 2]) + aux_desc = TensorDescriptor.from_tensor(aux, [BLOCK_SIZE_M, BLOCK_SIZE_N // 2]) + grid = lambda _meta: ( + min(num_sms, triton.cdiv(M, BLOCK_SIZE_M) * triton.cdiv(N, BLOCK_SIZE_N)), + ) + linear_leaky_relu_square_kernel[grid]( + a_desc, + b_desc, + c_desc, + aux_desc, + M, + N, + K, + BLOCK_SIZE_M=BLOCK_SIZE_M, + BLOCK_SIZE_N=BLOCK_SIZE_N, + BLOCK_SIZE_K=BLOCK_SIZE_K, + NUM_SMS=num_sms, + FORWARD=forward, + num_stages=num_stages, + num_warps=8, + ) + if forward: + return c, aux + return c + + +class FusedLinearLeakyReLUSquareFunction(torch.autograd.Function): + @staticmethod + def forward(ctx, x, w1, w2): + x_flat = x.reshape(-1, x.shape[-1]) + pre, post = linear_leaky_relu_square(x_flat, w1) + out = F.linear(post, w2) + ctx.save_for_backward(x, w1, w2, pre, post) + return out.view(*x.shape[:-1], out.shape[-1]) + + @staticmethod + def backward(ctx, grad_output): + x, w1, w2, pre, post = ctx.saved_tensors + x_flat = x.reshape(-1, x.shape[-1]) + grad_output_flat = grad_output.reshape(-1, grad_output.shape[-1]) + dw2 = grad_output_flat.T @ post + dpre = 
linear_leaky_relu_square(grad_output_flat, w2.T.contiguous(), aux=pre) + dw1 = dpre.T @ x_flat + dx = dpre @ w1 + return dx.view_as(x), dw1, dw2 + + +FusedLeakyReLUSquareMLP = FusedLinearLeakyReLUSquareFunction.apply + + +class Rotary(nn.Module): + def __init__(self, dim, base=1e4, train_seq_len=1024, rope_dims=0, yarn=True): + super().__init__() + self.dim = dim + self.base = base + self.train_seq_len = train_seq_len + self.yarn = yarn + self.rope_dims = rope_dims if rope_dims > 0 else dim + inv_freq = 1.0 / base ** ( + torch.arange(0, self.rope_dims, 2, dtype=torch.float32) / self.rope_dims + ) + self.register_buffer("inv_freq", inv_freq, persistent=False) + self._seq_len_cached = 0 + self._cos_cached = None + self._sin_cached = None + + def forward(self, seq_len, device, dtype): + if ( + self._cos_cached is None + or self._sin_cached is None + or self._seq_len_cached < seq_len + or self._cos_cached.device != device + ): + rd = self.rope_dims + if self.yarn and seq_len > self.train_seq_len: + scale = seq_len / self.train_seq_len + new_base = self.base * scale ** (rd / (rd - 2)) + inv_freq = 1.0 / new_base ** ( + torch.arange(0, rd, 2, dtype=torch.float32, device=device) / rd + ) + else: + inv_freq = self.inv_freq.float().to(device) + t = torch.arange(seq_len, device=device, dtype=torch.float32) + freqs = torch.outer(t, inv_freq) + self._cos_cached = freqs.cos()[None, :, None, :] + self._sin_cached = freqs.sin()[None, :, None, :] + self._seq_len_cached = seq_len + return self._cos_cached[:, :seq_len].to(dtype=dtype), self._sin_cached[ + :, :seq_len + ].to(dtype=dtype) + + +def apply_rotary_emb(x, cos, sin, rope_dims=0): + if rope_dims > 0 and rope_dims < x.size(-1): + x_rope, x_pass = x[..., :rope_dims], x[..., rope_dims:] + half = rope_dims // 2 + x1, x2 = x_rope[..., :half], x_rope[..., half:] + x_rope = torch.cat((x1 * cos + x2 * sin, x1 * -sin + x2 * cos), dim=-1) + return torch.cat((x_rope, x_pass), dim=-1) + half = x.size(-1) // 2 + x1, x2 = x[..., :half], x[..., half:] + return torch.cat((x1 * cos + x2 * sin, x1 * -sin + x2 * cos), dim=-1) + + +class CausalSelfAttention(nn.Module): + def __init__( + self, + dim, + num_heads, + num_kv_heads, + rope_base, + qk_gain_init, + train_seq_len, + yarn=True, + attn_out_gate=False, + attn_out_gate_src="proj", + gate_window=12, + gated_attn=False, + gated_attn_init_std=0.01, + sparse_attn_gate=False, + sparse_attn_gate_init_std=0.0, + sparse_attn_gate_scale=1.0, + ): + super().__init__() + self.num_heads = num_heads + self.num_kv_heads = num_kv_heads + self.head_dim = dim // num_heads + self.q_gain = nn.Parameter( + torch.full((num_heads,), qk_gain_init, dtype=torch.float32) + ) + self.rope_dims = 0 + self.rotary = Rotary( + self.head_dim, base=rope_base, train_seq_len=train_seq_len, yarn=yarn + ) + self.use_xsa = False + self.attn_out_gate = attn_out_gate + self.attn_out_gate_src = attn_out_gate_src + self.gate_window = gate_window + if attn_out_gate: + self.attn_gate_proj = CastedLinear(gate_window, num_heads, bias=False) + self.attn_gate_proj._zero_init = True + self.gated_attn = gated_attn + if gated_attn: + W = torch.empty(num_heads, dim, dtype=torch.float32) + nn.init.normal_(W, mean=0.0, std=gated_attn_init_std) + self.attn_gate_w = nn.Parameter(W) + self.sparse_attn_gate = sparse_attn_gate + self.sparse_attn_gate_scale = sparse_attn_gate_scale + if sparse_attn_gate: + W = torch.empty(num_heads, gate_window, dtype=torch.float32) + if sparse_attn_gate_init_std > 0: + nn.init.normal_(W, mean=0.0, std=sparse_attn_gate_init_std) + else: + 
nn.init.zeros_(W) + self.attn_gate_w = nn.Parameter(W) + + def _xsa_efficient(self, y, v): + B, T, H, D = y.shape + Hkv = v.size(-2) + group = H // Hkv + y_g = y.reshape(B, T, Hkv, group, D) + vn = F.normalize(v, dim=-1).unsqueeze(-2) + proj = (y_g * vn).sum(dim=-1, keepdim=True) * vn + return (y_g - proj).reshape(B, T, H, D) + + def forward(self, x, q_w, k_w, v_w, out_w, cu_seqlens=None, max_seqlen=0): + bsz, seqlen, dim = x.shape + q_raw = F.linear(x, q_w.to(x.dtype)) + q = q_raw.reshape(bsz, seqlen, self.num_heads, self.head_dim) + k = F.linear(x, k_w.to(x.dtype)).reshape( + bsz, seqlen, self.num_kv_heads, self.head_dim + ) + v = F.linear(x, v_w.to(x.dtype)).reshape( + bsz, seqlen, self.num_kv_heads, self.head_dim + ) + q = F.rms_norm(q, (q.size(-1),)) + k = F.rms_norm(k, (k.size(-1),)) + cos, sin = self.rotary(seqlen, x.device, q.dtype) + q = apply_rotary_emb(q, cos, sin, self.rope_dims) + k = apply_rotary_emb(k, cos, sin, self.rope_dims) + q = q * self.q_gain.to(dtype=q.dtype)[None, None, :, None] + if cu_seqlens is not None: + y = flash_attn_varlen_func( + q[0], + k[0], + v[0], + cu_seqlens_q=cu_seqlens, + cu_seqlens_k=cu_seqlens, + max_seqlen_q=max_seqlen, + max_seqlen_k=max_seqlen, + causal=True, + window_size=(-1, -1), + )[None] + else: + y = flash_attn_3_func(q, k, v, causal=True) + if self.use_xsa: + y = self._xsa_efficient(y, v) + if self.attn_out_gate: + gate_src = q_raw if self.attn_out_gate_src == "q" else x + gate_in = gate_src[..., : self.gate_window].contiguous() + g = 2.0 * torch.sigmoid(self.attn_gate_proj(gate_in)) + y = y * g[..., None] + if self.gated_attn: + x_c = x.contiguous() + g = torch.sigmoid(F.linear(x_c, self.attn_gate_w.to(x.dtype))) + y = y * g[..., None] + if self.sparse_attn_gate: + gate_in = x[..., : self.gate_window].contiguous() + g = torch.sigmoid( + self.sparse_attn_gate_scale + * F.linear(gate_in, self.attn_gate_w.to(x.dtype)) + ) + y = y * g[..., None] + y = y.reshape(bsz, seqlen, dim) + self._last_proj_input = y.detach() if getattr(self, "_calib", False) else None + return F.linear(y, out_w.to(x.dtype)) + + +class MLP(nn.Module): + def __init__(self, dim, mlp_mult): + super().__init__() + self.use_fused = True + + def forward(self, x, up_w, down_w): + if self.training and self.use_fused: + return FusedLeakyReLUSquareMLP(x, up_w.to(x.dtype), down_w.to(x.dtype)) + hidden = F.leaky_relu( + F.linear(x, up_w.to(x.dtype)), negative_slope=0.5 + ).square() + self._last_down_input = ( + hidden.detach() if getattr(self, "_calib", False) else None + ) + return F.linear(hidden, down_w.to(x.dtype)) + + +class Block(nn.Module): + def __init__( + self, + dim, + num_heads, + num_kv_heads, + mlp_mult, + rope_base, + qk_gain_init, + train_seq_len, + layer_idx=0, + ln_scale=False, + yarn=True, + attn_out_gate=False, + attn_out_gate_src="proj", + gate_window=12, + gated_attn=False, + gated_attn_init_std=0.01, + sparse_attn_gate=False, + sparse_attn_gate_init_std=0.0, + sparse_attn_gate_scale=1.0, + ): + super().__init__() + self.attn_norm = RMSNorm() + self.mlp_norm = RMSNorm() + self.attn = CausalSelfAttention( + dim, + num_heads, + num_kv_heads, + rope_base, + qk_gain_init, + train_seq_len, + yarn=yarn, + attn_out_gate=attn_out_gate, + attn_out_gate_src=attn_out_gate_src, + gate_window=gate_window, + gated_attn=gated_attn, + gated_attn_init_std=gated_attn_init_std, + sparse_attn_gate=sparse_attn_gate, + sparse_attn_gate_init_std=sparse_attn_gate_init_std, + sparse_attn_gate_scale=sparse_attn_gate_scale, + ) + self.mlp = MLP(dim, mlp_mult) + self.attn_scale = 
nn.Parameter(torch.ones(dim, dtype=torch.float32)) + self.mlp_scale = nn.Parameter(torch.ones(dim, dtype=torch.float32)) + self.resid_mix = nn.Parameter( + torch.stack((torch.ones(dim), torch.zeros(dim))).float() + ) + self.ln_scale_factor = 1.0 / math.sqrt(layer_idx + 1) if ln_scale else 1.0 + + def forward( + self, x, x0, q_w, k_w, v_w, out_w, up_w, down_w, cu_seqlens=None, max_seqlen=0 + ): + mix = self.resid_mix.to(dtype=x.dtype) + x_in = mix[0][None, None, :] * x + mix[1][None, None, :] * x0 + attn_out = self.attn( + self.attn_norm(x_in) * self.ln_scale_factor, + q_w, + k_w, + v_w, + out_w, + cu_seqlens=cu_seqlens, + max_seqlen=max_seqlen, + ) + x_out = x_in + self.attn_scale.to(dtype=x_in.dtype)[None, None, :] * attn_out + x_out = x_out + self.mlp_scale.to(dtype=x_out.dtype)[None, None, :] * self.mlp( + self.mlp_norm(x_out) * self.ln_scale_factor, up_w, down_w + ) + return x_out + + +class GPT(nn.Module): + def __init__(self, h): + super().__init__() + self.tie_embeddings = h.tie_embeddings + self.neftune_alpha = getattr(h, "neftune_alpha", 0.0) + self._in_ttt = False + self.tied_embed_init_std = h.tied_embed_init_std + self.logit_softcap = h.logit_softcap + self.z_loss_weight = h.z_loss_weight + self.fused_ce_enabled = bool(h.fused_ce_enabled) + self.tok_emb = nn.Embedding(h.vocab_size, h.model_dim) + self.num_layers = h.num_layers + head_dim = h.model_dim // h.num_heads + kv_dim = h.num_kv_heads * head_dim + hidden_dim = int(h.mlp_mult * h.model_dim) + self.qo_bank = nn.Parameter( + torch.empty(2 * h.num_layers, h.model_dim, h.model_dim) + ) + self.kv_bank = nn.Parameter(torch.empty(2 * h.num_layers, kv_dim, h.model_dim)) + self.mlp_up_bank = nn.Parameter( + torch.empty(h.num_layers, hidden_dim, h.model_dim) + ) + self.mlp_down_bank = nn.Parameter( + torch.empty(h.num_layers, h.model_dim, hidden_dim) + ) + self.num_encoder_layers = h.num_layers // 2 + self.num_decoder_layers = h.num_layers - self.num_encoder_layers + self.blocks = nn.ModuleList( + [ + Block( + h.model_dim, + h.num_heads, + h.num_kv_heads, + h.mlp_mult, + h.rope_base, + h.qk_gain_init, + h.train_seq_len, + layer_idx=i, + ln_scale=h.ln_scale, + yarn=h.rope_yarn, + attn_out_gate=h.attn_out_gate_enabled, + attn_out_gate_src=h.attn_out_gate_src, + gate_window=h.gate_window, + gated_attn=h.gated_attn_enabled, + gated_attn_init_std=h.gated_attn_init_std, + sparse_attn_gate=h.sparse_attn_gate_enabled, + sparse_attn_gate_init_std=h.sparse_attn_gate_init_std, + sparse_attn_gate_scale=h.sparse_attn_gate_scale, + ) + for i in range(h.num_layers) + ] + ) + if h.rope_dims > 0: + head_dim = h.model_dim // h.num_heads + for block in self.blocks: + block.attn.rope_dims = h.rope_dims + block.attn.rotary = Rotary( + head_dim, + base=h.rope_base, + train_seq_len=h.train_seq_len, + rope_dims=h.rope_dims, + yarn=h.rope_yarn, + ) + self.final_norm = RMSNorm() + self.lm_head = ( + None + if h.tie_embeddings + else CastedLinear(h.model_dim, h.vocab_size, bias=False) + ) + if self.lm_head is not None: + self.lm_head._zero_init = True + if h.xsa_last_n > 0: + for i in range(max(0, h.num_layers - h.xsa_last_n), h.num_layers): + self.blocks[i].attn.use_xsa = True + self.looping_active = False + if h.num_loops > 0: + loop_seg = list(range(h.loop_start, h.loop_end + 1)) + all_indices = list(range(h.loop_start)) + for _ in range(h.num_loops + 1): + all_indices.extend(loop_seg) + all_indices.extend(range(h.loop_end + 1, h.num_layers)) + num_enc = len(all_indices) // 2 + self.encoder_indices = all_indices[:num_enc] + self.decoder_indices = 
all_indices[num_enc:] + else: + self.encoder_indices = list(range(self.num_encoder_layers)) + self.decoder_indices = list(range(self.num_encoder_layers, h.num_layers)) + self.num_skip_weights = min( + len(self.encoder_indices), len(self.decoder_indices) + ) + self.skip_weights = nn.Parameter( + torch.ones(self.num_skip_weights, h.model_dim, dtype=torch.float32) + ) + self.skip_gates = ( + nn.Parameter( + torch.zeros(self.num_skip_weights, h.model_dim, dtype=torch.float32) + ) + if h.skip_gates_enabled + else None + ) + self.parallel_start_layer = h.parallel_start_layer + self.parallel_final_lane = h.parallel_final_lane.lower() + self.parallel_post_lambdas = nn.Parameter( + torch.ones(h.num_layers, 2, 2, dtype=torch.float32) + ) + self.parallel_resid_lambdas = nn.Parameter( + torch.full((h.num_layers, 2), 1.1, dtype=torch.float32) + ) + self.smear_gate_enabled = h.smear_gate_enabled + if self.smear_gate_enabled: + self.smear_window = h.gate_window + self.smear_gate = CastedLinear(self.smear_window, 1, bias=False) + self.smear_gate._zero_init = True + self.smear_lambda = nn.Parameter(torch.zeros(1, dtype=torch.float32)) + self._init_weights() + + def _init_weights(self): + if self.tie_embeddings: + nn.init.normal_(self.tok_emb.weight, mean=0.0, std=self.tied_embed_init_std) + n = self.num_layers + proj_scale = 1.0 / math.sqrt(2 * n) + for i in range(n): + nn.init.orthogonal_(self.qo_bank.data[i], gain=1.0) + nn.init.zeros_(self.qo_bank.data[n + i]) + self.qo_bank.data[n + i].mul_(proj_scale) + nn.init.orthogonal_(self.kv_bank.data[i], gain=1.0) + nn.init.orthogonal_(self.kv_bank.data[n + i], gain=1.0) + for i in range(n): + nn.init.orthogonal_(self.mlp_up_bank.data[i], gain=1.0) + nn.init.zeros_(self.mlp_down_bank.data[i]) + self.mlp_down_bank.data[i].mul_(proj_scale) + for name, module in self.named_modules(): + if isinstance(module, nn.Linear): + if getattr(module, "_zero_init", False): + nn.init.zeros_(module.weight) + elif ( + module.weight.ndim == 2 + and module.weight.shape[0] >= 64 + and module.weight.shape[1] >= 64 + ): + nn.init.orthogonal_(module.weight, gain=1.0) + + def _bank_weights(self, i): + n = self.num_layers + return ( + self.qo_bank[i], + self.kv_bank[i], + self.kv_bank[n + i], + self.qo_bank[n + i], + self.mlp_up_bank[i], + self.mlp_down_bank[i], + ) + + def _parallel_block( + self, + block_idx, + lane0, + lane1, + x0, + q_w, + k_w, + v_w, + out_w, + up_w, + down_w, + cu_seqlens=None, + max_seqlen=0, + ): + block = self.blocks[block_idx] + mix = block.resid_mix.to(dtype=lane0.dtype) + attn_read = mix[0][None, None, :] * lane0 + mix[1][None, None, :] * x0 + attn_out = block.attn( + block.attn_norm(attn_read) * block.ln_scale_factor, + q_w, + k_w, + v_w, + out_w, + cu_seqlens=cu_seqlens, + max_seqlen=max_seqlen, + ) + attn_out = block.attn_scale.to(dtype=attn_out.dtype)[None, None, :] * attn_out + mlp_read = lane1 + mlp_out = block.mlp_scale.to(dtype=lane1.dtype)[None, None, :] * block.mlp( + block.mlp_norm(mlp_read) * block.ln_scale_factor, up_w, down_w + ) + attn_resid = self.parallel_resid_lambdas[block_idx, 0].to(dtype=lane0.dtype) + attn_post = self.parallel_post_lambdas[block_idx, 0].to(dtype=lane0.dtype) + mlp_resid = self.parallel_resid_lambdas[block_idx, 1].to(dtype=lane0.dtype) + mlp_post = self.parallel_post_lambdas[block_idx, 1].to(dtype=lane0.dtype) + lane0 = attn_resid * lane0 + attn_post[0] * attn_out + mlp_post[0] * mlp_out + lane1 = mlp_resid * lane1 + attn_post[1] * attn_out + mlp_post[1] * mlp_out + return lane0, lane1 + + def 
_final_parallel_hidden(self, lane0, lane1): + if self.parallel_final_lane == "mlp": + return lane1 + if self.parallel_final_lane == "attn": + return lane0 + return 0.5 * (lane0 + lane1) + + def _forward_hidden(self, input_ids, cu_seqlens=None, max_seqlen=0): + x = self.tok_emb(input_ids) + if self.training and self.neftune_alpha > 0 and not self._in_ttt: + seq_len = max_seqlen if max_seqlen > 0 else x.size(1) + noise = torch.rand_like(x) * 2.0 - 1.0 + x = x + noise * (self.neftune_alpha / math.sqrt(seq_len * x.size(-1))) + if self.smear_gate_enabled: + sl = self.smear_lambda.to(dtype=x.dtype) + gate_in = x[:, 1:, : self.smear_window].contiguous() + g = sl * torch.sigmoid(self.smear_gate(gate_in)) + not_bos = (input_ids[:, 1:] != BOS_ID).to(x.dtype).unsqueeze(-1) + x = torch.cat([x[:, :1], x[:, 1:] + g * x[:, :-1] * not_bos], dim=1) + x = F.rms_norm(x, (x.size(-1),)) + x0 = x + skips = [] + enc_iter = ( + self.encoder_indices + if self.looping_active + else range(self.num_encoder_layers) + ) + dec_iter = ( + self.decoder_indices + if self.looping_active + else range( + self.num_encoder_layers, + self.num_encoder_layers + self.num_decoder_layers, + ) + ) + for i in enc_iter: + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + x = self.blocks[i]( + x, + x0, + q_w, + k_w, + v_w, + out_w, + up_w, + down_w, + cu_seqlens=cu_seqlens, + max_seqlen=max_seqlen, + ) + skips.append(x) + psl = self.parallel_start_layer + lane0 = None + lane1 = None + for skip_idx, i in enumerate(dec_iter): + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + if i >= psl and psl > 0: + if lane0 is None: + lane0 = x + lane1 = x + if skip_idx < self.num_skip_weights and skips: + skip = skips.pop() + w = self.skip_weights[skip_idx].to(dtype=lane0.dtype)[None, None, :] + if self.skip_gates is not None: + g = torch.sigmoid( + self.skip_gates[skip_idx].to(dtype=lane0.dtype) + )[None, None, :] + lane0 = torch.lerp(w * skip, lane0, g) + else: + lane0 = lane0 + w * skip + lane0, lane1 = self._parallel_block( + i, + lane0, + lane1, + x0, + q_w, + k_w, + v_w, + out_w, + up_w, + down_w, + cu_seqlens=cu_seqlens, + max_seqlen=max_seqlen, + ) + else: + if skip_idx < self.num_skip_weights and skips: + scaled_skip = ( + self.skip_weights[skip_idx].to(dtype=x.dtype)[None, None, :] + * skips.pop() + ) + if self.skip_gates is not None: + g = torch.sigmoid(self.skip_gates[skip_idx].to(dtype=x.dtype))[ + None, None, : + ] + x = torch.lerp(scaled_skip, x, g) + else: + x = x + scaled_skip + x = self.blocks[i]( + x, + x0, + q_w, + k_w, + v_w, + out_w, + up_w, + down_w, + cu_seqlens=cu_seqlens, + max_seqlen=max_seqlen, + ) + if lane0 is not None: + x = self._final_parallel_hidden(lane0, lane1) + x = self.final_norm(x) + return x + + def _project_logits(self, hidden): + if self.tie_embeddings: + return F.linear(hidden, self.tok_emb.weight) + return self.lm_head(hidden) + + def forward_logits(self, input_ids, cu_seqlens=None, max_seqlen=0): + hidden = self._forward_hidden( + input_ids, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen + ) + logits_proj = self._project_logits(hidden) + return self.logit_softcap * torch.tanh(logits_proj / self.logit_softcap) + + def forward(self, input_ids, target_ids, cu_seqlens=None, max_seqlen=0): + hidden = self._forward_hidden( + input_ids, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen + ) + logits_proj = self._project_logits(hidden) + flat_targets = target_ids.reshape(-1) + if self.fused_ce_enabled: + losses, lse = torch.ops.pgsubmission1draft7fusedce.softcapped_ce( + logits_proj.reshape(-1, 
logits_proj.size(-1)), + flat_targets, + float(self.logit_softcap), + ) + return losses.mean() + self.z_loss_weight * (lse**2).mean() + logits = self.logit_softcap * torch.tanh(logits_proj / self.logit_softcap) + logits_flat = logits.reshape(-1, logits.size(-1)).float() + loss = F.cross_entropy(logits_flat, flat_targets, reduction="mean") + lse = torch.logsumexp(logits_flat, dim=-1) + return loss + self.z_loss_weight * (lse**2).mean() + + def forward_ttt(self, input_ids, target_ids, lora): + x = self.tok_emb(input_ids) + if self.smear_gate_enabled: + sl = self.smear_lambda.to(dtype=x.dtype) + gate_in = x[:, 1:, : self.smear_window].contiguous() + g = sl * torch.sigmoid(self.smear_gate(gate_in)) + not_bos = (input_ids[:, 1:] != BOS_ID).to(x.dtype).unsqueeze(-1) + x = torch.cat([x[:, :1], x[:, 1:] + g * x[:, :-1] * not_bos], dim=1) + x = F.rms_norm(x, (x.size(-1),)) + x0 = x + skips = [] + enc_iter = ( + self.encoder_indices + if self.looping_active + else list(range(self.num_encoder_layers)) + ) + dec_iter = ( + self.decoder_indices + if self.looping_active + else list( + range( + self.num_encoder_layers, + self.num_encoder_layers + self.num_decoder_layers, + ) + ) + ) + slot = 0 + for i in enc_iter: + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + x = self._block_with_lora( + self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w + ) + slot += 1 + skips.append(x) + psl = self.parallel_start_layer + lane0 = None + lane1 = None + for skip_idx, i in enumerate(dec_iter): + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + if i >= psl and psl > 0: + if lane0 is None: + lane0 = x + lane1 = x + if skip_idx < self.num_skip_weights and skips: + skip = skips.pop() + w = self.skip_weights[skip_idx].to(dtype=lane0.dtype)[None, None, :] + if self.skip_gates is not None: + g = torch.sigmoid( + self.skip_gates[skip_idx].to(dtype=lane0.dtype) + )[None, None, :] + lane0 = torch.lerp(w * skip, lane0, g) + else: + lane0 = lane0 + w * skip + lane0, lane1 = self._parallel_block_with_lora( + i, lane0, lane1, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w + ) + else: + if skip_idx < self.num_skip_weights and skips: + scaled_skip = ( + self.skip_weights[skip_idx].to(dtype=x.dtype)[None, None, :] + * skips.pop() + ) + if self.skip_gates is not None: + g = torch.sigmoid(self.skip_gates[skip_idx].to(dtype=x.dtype))[ + None, None, : + ] + x = torch.lerp(scaled_skip, x, g) + else: + x = x + scaled_skip + x = self._block_with_lora( + self.blocks[i], + x, + x0, + lora, + slot, + q_w, + k_w, + v_w, + out_w, + up_w, + down_w, + ) + slot += 1 + if lane0 is not None: + x = self._final_parallel_hidden(lane0, lane1) + x = self.final_norm(x) + if self.tie_embeddings: + logits = F.linear(x, self.tok_emb.weight) + else: + logits = self.lm_head(x) + logits = logits + lora.lm_head_lora(x) + logits = self.logit_softcap * torch.tanh(logits / self.logit_softcap) + bsz, sl, V = logits.shape + return F.cross_entropy( + logits.float().reshape(-1, V), target_ids.reshape(-1), reduction="none" + ).reshape(bsz, sl) + + def _block_with_lora( + self, block, x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w + ): + mix = block.resid_mix.to(dtype=x.dtype) + x_in = mix[0][None, None, :] * x + mix[1][None, None, :] * x0 + n = block.attn_norm(x_in) * block.ln_scale_factor + attn = block.attn + bsz, seqlen, dim = n.shape + q_raw = F.linear(n, q_w.to(n.dtype)) + lora.q_loras[slot](n) + q = q_raw.reshape(bsz, seqlen, attn.num_heads, attn.head_dim) + k = F.linear(n, k_w.to(n.dtype)) + if lora.k_loras is 
not None: + k = k + lora.k_loras[slot](n) + k = k.reshape(bsz, seqlen, attn.num_kv_heads, attn.head_dim) + v = (F.linear(n, v_w.to(n.dtype)) + lora.v_loras[slot](n)).reshape( + bsz, seqlen, attn.num_kv_heads, attn.head_dim + ) + q = F.rms_norm(q, (q.size(-1),)) + k = F.rms_norm(k, (k.size(-1),)) + cos, sin = attn.rotary(seqlen, n.device, q.dtype) + q = apply_rotary_emb(q, cos, sin, attn.rope_dims) + k = apply_rotary_emb(k, cos, sin, attn.rope_dims) + q = q * attn.q_gain.to(dtype=q.dtype)[None, None, :, None] + y = flash_attn_3_func(q, k, v, causal=True) + if attn.use_xsa: + y = attn._xsa_efficient(y, v) + if attn.attn_out_gate: + gate_src = q_raw if attn.attn_out_gate_src == "q" else n + gate_in = gate_src[..., : attn.gate_window].contiguous() + g = 2.0 * torch.sigmoid(attn.attn_gate_proj(gate_in)) + y = y * g[..., None] + if attn.gated_attn: + n_c = n.contiguous() + g = torch.sigmoid(F.linear(n_c, attn.attn_gate_w.to(n.dtype))) + y = y * g[..., None] + if attn.sparse_attn_gate: + gate_in = n[..., : attn.gate_window].contiguous() + g = torch.sigmoid( + attn.sparse_attn_gate_scale + * F.linear(gate_in, attn.attn_gate_w.to(n.dtype)) + ) + y = y * g[..., None] + y = y.reshape(bsz, seqlen, dim) + attn_out = F.linear(y, out_w.to(n.dtype)) + if lora.o_loras is not None: + attn_out = attn_out + lora.o_loras[slot](n) + x_out = x_in + block.attn_scale.to(dtype=x_in.dtype)[None, None, :] * attn_out + mlp_n = block.mlp_norm(x_out) * block.ln_scale_factor + mlp_out = block.mlp(mlp_n, up_w, down_w) + if lora.mlp_loras is not None: + mlp_out = mlp_out + lora.mlp_loras[slot](mlp_n) + x_out = x_out + block.mlp_scale.to(dtype=x_out.dtype)[None, None, :] * mlp_out + return x_out + + def _parallel_block_with_lora( + self, + block_idx, + lane0, + lane1, + x0, + lora, + slot, + q_w, + k_w, + v_w, + out_w, + up_w, + down_w, + ): + block = self.blocks[block_idx] + mix = block.resid_mix.to(dtype=lane0.dtype) + attn_read = mix[0][None, None, :] * lane0 + mix[1][None, None, :] * x0 + n = block.attn_norm(attn_read) * block.ln_scale_factor + attn = block.attn + bsz, seqlen, dim = n.shape + q_raw = F.linear(n, q_w.to(n.dtype)) + lora.q_loras[slot](n) + q = q_raw.reshape(bsz, seqlen, attn.num_heads, attn.head_dim) + k = F.linear(n, k_w.to(n.dtype)) + if lora.k_loras is not None: + k = k + lora.k_loras[slot](n) + k = k.reshape(bsz, seqlen, attn.num_kv_heads, attn.head_dim) + v = (F.linear(n, v_w.to(n.dtype)) + lora.v_loras[slot](n)).reshape( + bsz, seqlen, attn.num_kv_heads, attn.head_dim + ) + q = F.rms_norm(q, (q.size(-1),)) + k = F.rms_norm(k, (k.size(-1),)) + cos, sin = attn.rotary(seqlen, n.device, q.dtype) + q = apply_rotary_emb(q, cos, sin, attn.rope_dims) + k = apply_rotary_emb(k, cos, sin, attn.rope_dims) + q = q * attn.q_gain.to(dtype=q.dtype)[None, None, :, None] + y = flash_attn_3_func(q, k, v, causal=True) + if attn.use_xsa: + y = attn._xsa_efficient(y, v) + if attn.attn_out_gate: + gate_src = q_raw if attn.attn_out_gate_src == "q" else n + gate_in = gate_src[..., : attn.gate_window].contiguous() + g = 2.0 * torch.sigmoid(attn.attn_gate_proj(gate_in)) + y = y * g[..., None] + if attn.gated_attn: + n_c = n.contiguous() + g = torch.sigmoid(F.linear(n_c, attn.attn_gate_w.to(n.dtype))) + y = y * g[..., None] + if attn.sparse_attn_gate: + gate_in = n[..., : attn.gate_window].contiguous() + g = torch.sigmoid( + attn.sparse_attn_gate_scale + * F.linear(gate_in, attn.attn_gate_w.to(n.dtype)) + ) + y = y * g[..., None] + y = y.reshape(bsz, seqlen, dim) + attn_out = F.linear(y, out_w.to(n.dtype)) + if lora.o_loras is 
not None: + attn_out = attn_out + lora.o_loras[slot](n) + attn_out = block.attn_scale.to(dtype=attn_out.dtype)[None, None, :] * attn_out + mlp_read = lane1 + mlp_n = block.mlp_norm(mlp_read) * block.ln_scale_factor + mlp_out = block.mlp(mlp_n, up_w, down_w) + if lora.mlp_loras is not None: + mlp_out = mlp_out + lora.mlp_loras[slot](mlp_n) + mlp_out = block.mlp_scale.to(dtype=lane1.dtype)[None, None, :] * mlp_out + attn_resid = self.parallel_resid_lambdas[block_idx, 0].to(dtype=lane0.dtype) + attn_post = self.parallel_post_lambdas[block_idx, 0].to(dtype=lane0.dtype) + mlp_resid = self.parallel_resid_lambdas[block_idx, 1].to(dtype=lane0.dtype) + mlp_post = self.parallel_post_lambdas[block_idx, 1].to(dtype=lane0.dtype) + lane0 = attn_resid * lane0 + attn_post[0] * attn_out + mlp_post[0] * mlp_out + lane1 = mlp_resid * lane1 + attn_post[1] * attn_out + mlp_post[1] * mlp_out + return lane0, lane1 + + +class BatchedLinearLoRA(nn.Module): + _ALPHA = float(os.environ.get("TTT_LORA_ALPHA", "144")) + _WARM_START_A = bool(int(os.environ.get("TTT_WARM_START_A", "1"))) + + def __init__(self, bsz, in_features, out_features, rank): + super().__init__() + self._bound = 1.0 / math.sqrt(in_features) + self._scale = self._ALPHA / rank + self.A = nn.Parameter( + torch.empty(bsz, rank, in_features).uniform_(-self._bound, self._bound) + ) + self.B = nn.Parameter(torch.zeros(bsz, out_features, rank)) + + def reset(self): + with torch.no_grad(): + if not self._WARM_START_A: + self.A.uniform_(-self._bound, self._bound) + self.B.zero_() + + def forward(self, x): + return ((x @ self.A.transpose(1, 2)) @ self.B.transpose(1, 2)) * self._scale + + +class BatchedTTTLoRA(nn.Module): + def __init__(self, bsz, model, rank, k_lora=True, mlp_lora=True, o_lora=True): + super().__init__() + self.bsz = bsz + dim = model.qo_bank.shape[-1] + vocab = model.tok_emb.num_embeddings + if getattr(model, "looping_active", False): + num_slots = len(model.encoder_indices) + len(model.decoder_indices) + else: + num_slots = len(model.blocks) + kv_dim = model.blocks[0].attn.num_kv_heads * ( + dim // model.blocks[0].attn.num_heads + ) + embed_dim = model.tok_emb.embedding_dim + self.lm_head_lora = BatchedLinearLoRA(bsz, embed_dim, vocab, rank) + self.q_loras = nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, dim, rank) for _ in range(num_slots)] + ) + self.v_loras = nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, kv_dim, rank) for _ in range(num_slots)] + ) + self.k_loras = ( + nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, kv_dim, rank) for _ in range(num_slots)] + ) + if k_lora + else None + ) + self.mlp_loras = ( + nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, dim, rank) for _ in range(num_slots)] + ) + if mlp_lora + else None + ) + self.o_loras = ( + nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, dim, rank) for _ in range(num_slots)] + ) + if o_lora + else None + ) + + def reset(self): + with torch.no_grad(): + self.lm_head_lora.reset() + for loras in [ + self.q_loras, + self.v_loras, + self.k_loras, + self.mlp_loras, + self.o_loras, + ]: + if loras is not None: + for lora in loras: + lora.reset() + + +_PE_COEFFS = ( + (8.156554524902461, -22.48329292557795, 15.878769915207462), + (4.042929935166739, -2.808917465908714, 0.5000178451051316), + (3.8916678022926607, -2.772484153217685, 0.5060648178503393), + (3.285753657755655, -2.3681294933425376, 0.46449024233003106), + (2.3465413258596377, -1.7097828382687081, 0.42323551169305323), +) + + +@torch.compile +def zeropower_via_newtonschulz5(G, steps=10, eps=1e-07): + was_2d = G.ndim == 2 + if 
was_2d: + G = G.unsqueeze(0) + X = G.bfloat16() + transposed = X.size(-2) > X.size(-1) + if transposed: + X = X.mT + X = X / (X.norm(dim=(-2, -1), keepdim=True) + eps) + coeffs = _PE_COEFFS[:steps] if steps <= len(_PE_COEFFS) else _PE_COEFFS + for a, b, c in coeffs: + A = X @ X.mT + B = b * A + c * (A @ A) + X = a * X + B @ X + if transposed: + X = X.mT + if was_2d: + X = X.squeeze(0) + return X + + +class Muon(torch.optim.Optimizer): + def __init__( + self, + params, + lr, + momentum, + backend_steps, + nesterov=True, + weight_decay=0.0, + row_normalize=False, + ): + super().__init__( + params, + dict( + lr=lr, + momentum=momentum, + backend_steps=backend_steps, + nesterov=nesterov, + weight_decay=weight_decay, + row_normalize=row_normalize, + ), + ) + self._built = False + + def _build(self): + self._distributed = dist.is_available() and dist.is_initialized() + self._world_size = dist.get_world_size() if self._distributed else 1 + self._rank = dist.get_rank() if self._distributed else 0 + ws = self._world_size + self._bank_meta = [] + for group in self.param_groups: + for p in group["params"]: + B = p.shape[0] + padded_B = ((B + ws - 1) // ws) * ws + shard_B = padded_B // ws + tail = p.shape[1:] + dev = p.device + self._bank_meta.append( + { + "p": p, + "B": B, + "padded_grad": torch.zeros( + padded_B, *tail, device=dev, dtype=torch.bfloat16 + ), + "shard": torch.zeros( + shard_B, *tail, device=dev, dtype=torch.bfloat16 + ), + "shard_mom": torch.zeros( + shard_B, *tail, device=dev, dtype=torch.bfloat16 + ), + "full_update": torch.zeros( + padded_B, *tail, device=dev, dtype=torch.bfloat16 + ), + "scale": max(1, p.shape[-2] / p.shape[-1]) ** 0.5, + } + ) + self._bank_meta.sort(key=lambda m: -m["p"].numel()) + self._built = True + + def launch_reduce_scatters(self): + if not self._built: + self._build() + if not self._distributed: + return + self._rs_futures = [] + for m in self._bank_meta: + p = m["p"] + if p.grad is None: + self._rs_futures.append(None) + continue + pg = m["padded_grad"] + pg[: m["B"]].copy_(p.grad) + fut = dist.reduce_scatter_tensor( + m["shard"], pg, op=dist.ReduceOp.AVG, async_op=True + ) + self._rs_futures.append(fut) + + @torch.no_grad() + def step(self, closure=None): + loss = None + if closure is not None: + with torch.enable_grad(): + loss = closure() + if not self._built: + self._build() + for group in self.param_groups: + lr = group["lr"] + momentum = group["momentum"] + backend_steps = group["backend_steps"] + nesterov = group["nesterov"] + wd = group.get("weight_decay", 0.0) + row_normalize = group.get("row_normalize", False) + prev_ag_handle = None + prev_m = None + sharded = self._distributed and hasattr(self, "_rs_futures") + for idx, m in enumerate(self._bank_meta): + p = m["p"] + if p.grad is None: + continue + if prev_ag_handle is not None: + prev_ag_handle.wait() + pp = prev_m["p"] + upd = prev_m["full_update"][: prev_m["B"]] + if wd > 0.0: + pp.data.mul_(1.0 - lr * wd) + pp.add_(upd, alpha=-lr * prev_m["scale"]) + if sharded and self._rs_futures[idx] is not None: + self._rs_futures[idx].wait() + g = m["shard"] + buf = m["shard_mom"] + else: + g = p.grad.bfloat16() + state = self.state[p] + if "momentum_buffer" not in state: + state["momentum_buffer"] = torch.zeros_like(g) + buf = state["momentum_buffer"] + buf.mul_(momentum).add_(g) + if nesterov: + update = g.add(buf, alpha=momentum) + else: + update = buf + if row_normalize: + rn = update.float().norm(dim=-1, keepdim=True).clamp_min(1e-07) + update = update / rn.to(update.dtype) + update = 
zeropower_via_newtonschulz5(update, steps=backend_steps) + if sharded: + prev_ag_handle = dist.all_gather_into_tensor( + m["full_update"], update, async_op=True + ) + prev_m = m + else: + if wd > 0.0: + p.data.mul_(1.0 - lr * wd) + p.add_(update, alpha=-lr * m["scale"]) + if prev_ag_handle is not None: + prev_ag_handle.wait() + pp = prev_m["p"] + upd = prev_m["full_update"][: prev_m["B"]] + if wd > 0.0: + pp.data.mul_(1.0 - lr * wd) + pp.add_(upd, alpha=-lr * prev_m["scale"]) + if hasattr(self, "_rs_futures"): + del self._rs_futures + return loss + + +CONTROL_TENSOR_NAME_PATTERNS = tuple( + pattern + for pattern in os.environ.get( + "CONTROL_TENSOR_NAME_PATTERNS", + "attn_scale,attn_scales,mlp_scale,mlp_scales,resid_mix,resid_mixes,q_gain,skip_weight,skip_weights,skip_gates,parallel_post_lambdas,parallel_resid_lambdas,attn_gate_proj,attn_gate_w,smear_gate,smear_lambda", + ).split(",") + if pattern +) + +PACKED_REPLICATED_GRAD_MAX_NUMEL = 1 << 15 + + +class Optimizers: + def __init__(self, h, base_model): + matrix_params = [ + base_model.qo_bank, + base_model.kv_bank, + base_model.mlp_up_bank, + base_model.mlp_down_bank, + ] + block_named_params = list(base_model.blocks.named_parameters()) + scalar_params = [ + p + for (name, p) in block_named_params + if p.ndim < 2 + or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS) + ] + if base_model.skip_weights.numel() > 0: + scalar_params.append(base_model.skip_weights) + if base_model.skip_gates is not None and base_model.skip_gates.numel() > 0: + scalar_params.append(base_model.skip_gates) + if base_model.parallel_post_lambdas is not None: + scalar_params.append(base_model.parallel_post_lambdas) + if base_model.parallel_resid_lambdas is not None: + scalar_params.append(base_model.parallel_resid_lambdas) + if getattr(base_model, "smear_gate_enabled", False): + scalar_params.append(base_model.smear_gate.weight) + scalar_params.append(base_model.smear_lambda) + token_lr = h.tied_embed_lr if h.tie_embeddings else h.embed_lr + tok_params = [ + {"params": [base_model.tok_emb.weight], "lr": token_lr, "base_lr": token_lr} + ] + self.optimizer_tok = torch.optim.AdamW( + tok_params, + betas=(h.beta1, h.beta2), + eps=h.adam_eps, + weight_decay=h.embed_wd, + fused=True, + ) + self.optimizer_muon = Muon( + matrix_params, + lr=h.matrix_lr, + momentum=h.muon_momentum, + backend_steps=h.muon_backend_steps, + weight_decay=h.muon_wd, + row_normalize=h.muon_row_normalize, + ) + for group in self.optimizer_muon.param_groups: + group["base_lr"] = h.matrix_lr + self.optimizer_scalar = torch.optim.AdamW( + [{"params": scalar_params, "lr": h.scalar_lr, "base_lr": h.scalar_lr}], + betas=(h.beta1, h.beta2), + eps=h.adam_eps, + weight_decay=h.adam_wd, + fused=True, + ) + self.optimizers = [ + self.optimizer_tok, + self.optimizer_muon, + self.optimizer_scalar, + ] + self.replicated_params = list(tok_params[0]["params"]) + self.replicated_params.extend(scalar_params) + self.replicated_large_params = [] + self.replicated_packed_params = [] + for p in self.replicated_params: + if p.numel() <= PACKED_REPLICATED_GRAD_MAX_NUMEL: + self.replicated_packed_params.append(p) + else: + self.replicated_large_params.append(p) + self._aux_stream = torch.cuda.Stream() + + def __iter__(self): + return iter(self.optimizers) + + def zero_grad_all(self): + for opt in self.optimizers: + opt.zero_grad(set_to_none=True) + + def _all_reduce_packed_grads(self): + grads_by_key = collections.defaultdict(list) + for p in self.replicated_packed_params: + if p.grad is not None: + 
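+                # Grads are bucketed by (device, dtype); each bucket is then
+                # copied into one flat buffer, all-reduced once, and scattered
+                # back, replacing many tiny per-tensor collectives with a
+                # single launch for the small replicated control tensors.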
grads_by_key[(p.grad.device, p.grad.dtype)].append(p.grad) + for grads in grads_by_key.values(): + flat = torch.empty( + sum(g.numel() for g in grads), + device=grads[0].device, + dtype=grads[0].dtype, + ) + offset = 0 + for g in grads: + n = g.numel() + flat[offset : offset + n].copy_(g.contiguous().view(-1)) + offset += n + dist.all_reduce(flat, op=dist.ReduceOp.AVG) + offset = 0 + for g in grads: + n = g.numel() + g.copy_(flat[offset : offset + n].view_as(g)) + offset += n + + def step(self, distributed=False): + self.optimizer_muon.launch_reduce_scatters() + if distributed: + reduce_handles = [ + dist.all_reduce(p.grad, op=dist.ReduceOp.AVG, async_op=True) + for p in self.replicated_large_params + if p.grad is not None + ] + self._all_reduce_packed_grads() + for handle in reduce_handles: + handle.wait() + self._aux_stream.wait_stream(torch.cuda.current_stream()) + with torch.cuda.stream(self._aux_stream): + self.optimizer_tok.step() + self.optimizer_scalar.step() + self.optimizer_muon.step() + torch.cuda.current_stream().wait_stream(self._aux_stream) + self.zero_grad_all() + + +def restore_fp32_params(model): + for module in model.modules(): + if isinstance(module, CastedLinear): + module.float() + for name, param in model.named_parameters(): + if ( + param.ndim < 2 + or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS) + ) and param.dtype != torch.float32: + param.data = param.data.float() + if hasattr(model, "qo_bank") and model.qo_bank is not None: + model.qo_bank.data = model.qo_bank.data.float() + model.kv_bank.data = model.kv_bank.data.float() + model.mlp_up_bank.data = model.mlp_up_bank.data.float() + model.mlp_down_bank.data = model.mlp_down_bank.data.float() + + +def collect_hessians(model, train_loader, h, device, n_calibration_batches=64): + hessians = {} + hooks = [] + for i, block in enumerate(model.blocks): + block.attn._calib = True + block.mlp._calib = True + block.mlp.use_fused = False + + def make_attn_hook(layer_idx): + def hook_fn(module, inp, out): + x = inp[0].detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + for suffix in ["c_q", "c_k", "c_v"]: + name = f"blocks.{layer_idx}.attn.{suffix}.weight" + if name not in hessians: + hessians[name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(x.T, x) + y = module._last_proj_input + if y is not None: + y = y.float() + if y.ndim == 3: + y = y.reshape(-1, y.shape[-1]) + name = f"blocks.{layer_idx}.attn.proj.weight" + if name not in hessians: + hessians[name] = torch.zeros( + y.shape[1], y.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(y.T, y) + + return hook_fn + + def make_mlp_hook(layer_idx): + def hook_fn(module, inp, out): + x = inp[0].detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + name = f"blocks.{layer_idx}.mlp.fc.weight" + if name not in hessians: + hessians[name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(x.T, x) + h_act = module._last_down_input + if h_act is not None: + h_act = h_act.float() + if h_act.ndim == 3: + h_act = h_act.reshape(-1, h_act.shape[-1]) + name = f"blocks.{layer_idx}.mlp.proj.weight" + if name not in hessians: + hessians[name] = torch.zeros( + h_act.shape[1], + h_act.shape[1], + dtype=torch.float32, + device=device, + ) + hessians[name].addmm_(h_act.T, h_act) + + return hook_fn + + for i, block in enumerate(model.blocks): + hooks.append(block.attn.register_forward_hook(make_attn_hook(i))) + 
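+        # Both hooks accumulate the GPTQ proxy Hessian H = sum_x x^T x over
+        # calibration activations feeding each weight (block inputs for
+        # c_q/c_k/c_v and mlp.fc; the pre-projection tensors stashed in
+        # _last_proj_input / _last_down_input for the proj weights).
+        # collect_hessians() divides by n_calibration_batches at the end, and
+        # gptq_quantize_weight() consumes H via its damped Cholesky inverse.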
hooks.append(block.mlp.register_forward_hook(make_mlp_hook(i))) + + if model.tie_embeddings: + hook_module = model.final_norm + + def make_output_hook(name): + def hook_fn(module, inp, out): + x = out.detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + if name not in hessians: + hessians[name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(x.T, x) + + return hook_fn + + hooks.append( + hook_module.register_forward_hook(make_output_hook("tok_emb.weight")) + ) + model.eval() + with torch.no_grad(): + for _ in range(n_calibration_batches): + x, _ = train_loader.next_batch(h.train_batch_tokens, h.grad_accum_steps) + model.forward_logits(x) + for hook in hooks: + hook.remove() + for i, block in enumerate(model.blocks): + block.attn._calib = False + block.mlp._calib = False + block.mlp.use_fused = True + for name in hessians: + hessians[name] = hessians[name].cpu() / n_calibration_batches + return hessians + + +def gptq_quantize_weight(w, H, clip_sigmas=3.0, clip_range=63, block_size=128): + W_orig = w.float().clone() + rows, cols = W_orig.shape + H = H.float().clone() + dead = torch.diag(H) == 0 + H[dead, dead] = 1 + damp = 0.01 * H.diag().mean() + H.diagonal().add_(damp) + perm = torch.argsort(H.diag(), descending=True) + invperm = torch.argsort(perm) + W_perm = W_orig[:, perm].clone() + W_perm[:, dead[perm]] = 0 + H = H[perm][:, perm] + Hinv = torch.cholesky_inverse(torch.linalg.cholesky(H)) + Hinv = torch.linalg.cholesky(Hinv, upper=True) + row_std = W_orig.std(dim=1) + s = (clip_sigmas * row_std / clip_range).clamp_min(1e-10).to(torch.float16) + sf = s.float() + Q = torch.zeros(rows, cols, dtype=torch.int8) + W_work = W_perm.clone() + for i1 in range(0, cols, block_size): + i2 = min(i1 + block_size, cols) + W_block = W_work[:, i1:i2].clone() + Hinv_block = Hinv[i1:i2, i1:i2] + Err = torch.zeros(rows, i2 - i1) + for j in range(i2 - i1): + w_col = W_block[:, j] + d = Hinv_block[j, j] + q_col = torch.clamp(torch.round(w_col / sf), -clip_range, clip_range) + Q[:, i1 + j] = q_col.to(torch.int8) + err = (w_col - q_col.float() * sf) / d + Err[:, j] = err + W_block[:, j:] -= err.unsqueeze(1) * Hinv_block[j, j:].unsqueeze(0) + if i2 < cols: + W_work[:, i2:] -= Err @ Hinv[i1:i2, i2:] + return Q[:, invperm], s + + +def _quantize_gate_int8_row(w): + W = w.float().contiguous() + row_max = W.abs().amax(dim=1).clamp_min(1e-10) + s = (row_max / 127.0).to(torch.float16) + sf = s.float().view(-1, 1) + q = torch.clamp(torch.round(W / sf), -127, 127).to(torch.int8) + return q, s + + +def _lqer_pack(A, B, bits): + rng = 2 ** (bits - 1) - 1 + sA = (A.abs().amax(dim=1).clamp_min(1e-10) / rng).to(torch.float16) + sB = (B.abs().amax(dim=1).clamp_min(1e-10) / rng).to(torch.float16) + qA = torch.clamp(torch.round(A / sA.float().view(-1, 1)), -rng, rng).to(torch.int8) + qB = torch.clamp(torch.round(B / sB.float().view(-1, 1)), -rng, rng).to(torch.int8) + return qA, sA, qB, sB + + +def _lqer_pack_asym(A, B, g=64): + sA = (A.abs().amax().clamp_min(1e-10) / 1.5).to(torch.float16) + qA = torch.clamp(torch.round(A / sA.float()), -2, 1).to(torch.int8) + Bf = B.reshape(-1, g) + Bmax = Bf.abs().amax(dim=-1, keepdim=True).clamp_min(1e-10) + sB = (Bmax / 7.5).to(torch.float16).reshape(-1) + qB = ( + torch.clamp(torch.round(Bf / sB.float().reshape(-1, 1)), -8, 7) + .to(torch.int8) + .reshape(B.shape) + ) + return qA, sA, qB, sB + + +def gptq_mixed_quantize(state_dict, hessians, h): + result = {} + meta = {} + quant_gate = bool(getattr(h, 
"gated_attn_quant_gate", False)) + lqer_on = bool(getattr(h, "lqer_enabled", False)) + lqer_cands = {} + for name, tensor in state_dict.items(): + t = tensor.detach().cpu().contiguous() + if ( + quant_gate + and t.is_floating_point() + and t.ndim == 2 + and name.endswith(".attn_gate_w") + and 32 <= t.numel() <= 8192 + ): + gq, gs = _quantize_gate_int8_row(t) + result[name + ".gq"] = gq + result[name + ".gs"] = gs + meta[name] = "gate_int8_row" + continue + if not t.is_floating_point() or t.numel() <= 65536: + result[name] = t.to(torch.float16) if t.is_floating_point() else t + meta[name] = "passthrough (float16)" + continue + if "tok_emb" in name: + cs = h.embed_clip_sigmas + elif ".mlp." in name: + cs = h.mlp_clip_sigmas + elif ".attn." in name: + cs = h.attn_clip_sigmas + else: + cs = h.matrix_clip_sigmas + bits = h.embed_bits if "tok_emb" in name else h.matrix_bits + clip_range = 2 ** (bits - 1) - 1 + ret = gptq_quantize_weight( + t, hessians[name], clip_sigmas=cs, clip_range=clip_range + ) + q, s = ret + result[name + ".q"] = q + result[name + ".scale"] = s + meta[name] = f"gptq (int{bits})" + if lqer_on: + W_q = q.float() * s.float().view(-1, 1) + E = t.float() - W_q + lqer_cands[name] = (E, float(E.norm())) + if lqer_on and lqer_cands: + top = sorted(lqer_cands.items(), key=lambda kv: -kv[1][1])[: h.lqer_top_k] + asym_on = bool(getattr(h, "lqer_asym_enabled", False)) + asym_g = int(getattr(h, "lqer_asym_group", 64)) + for name, (E, _) in top: + U, S, Vh = torch.linalg.svd(E, full_matrices=False) + r = min(h.lqer_rank, S.numel()) + A = (U[:, :r] * S[:r]).contiguous() + B = Vh[:r, :].contiguous() + if asym_on and B.numel() % asym_g == 0: + qA, sA, qB, sB = _lqer_pack_asym(A, B, asym_g) + result[name + ".lqA_a"] = qA + result[name + ".lqAs_a"] = sA + result[name + ".lqB_a"] = qB + result[name + ".lqBs_a"] = sB + meta[name] = meta[name] + "+lqer_asym" + else: + qA, sA, qB, sB = _lqer_pack(A, B, h.lqer_factor_bits) + result[name + ".lqA"] = qA + result[name + ".lqAs"] = sA + result[name + ".lqB"] = qB + result[name + ".lqBs"] = sB + meta[name] = meta[name] + "+lqer" + categories = collections.defaultdict(set) + for name, cat in meta.items(): + short = re.sub("\\.\\d+$", "", re.sub("blocks\\.\\d+", "blocks", name)) + categories[cat].add(short) + log("Quantized weights:") + for cat in sorted(categories): + log(f" {cat}: {', '.join(sorted(categories[cat]))}") + return result, meta + + +def dequantize_mixed(result, meta, template_sd): + out = {} + for name, orig in template_sd.items(): + info = meta.get(name) + if info is None: + continue + orig_dtype = orig.dtype + if "passthrough" in info: + t = result[name] + if t.dtype == torch.float16 and orig_dtype in ( + torch.float32, + torch.bfloat16, + ): + t = t.to(orig_dtype) + out[name] = t + continue + if info == "gate_int8_row": + gq = result[name + ".gq"] + gs = result[name + ".gs"] + out[name] = (gq.float() * gs.float().view(-1, 1)).to(orig_dtype) + continue + q, s = result[name + ".q"], result[name + ".scale"] + if s.ndim > 0: + W = q.float() * s.float().view(q.shape[0], *[1] * (q.ndim - 1)) + else: + W = q.float() * float(s.item()) + if "lqer_asym" in info: + qA_t = result[name + ".lqA_a"] + sA_t = result[name + ".lqAs_a"] + qB_t = result[name + ".lqB_a"] + sB_t = result[name + ".lqBs_a"] + qA = qA_t.float() * float(sA_t) + g_sz = qB_t.numel() // sB_t.numel() + qB = (qB_t.reshape(-1, g_sz).float() * sB_t.float().view(-1, 1)).reshape( + qB_t.shape + ) + W = W + qA @ qB + elif "lqer" in info: + qA = result[name + ".lqA"].float() * result[name 
+ ".lqAs"].float().view( + -1, 1 + ) + qB = result[name + ".lqB"].float() * result[name + ".lqBs"].float().view( + -1, 1 + ) + W = W + qA @ qB + out[name] = W.to(orig_dtype) + return out + + +_GROUP_ORDER = [ + "_tok_emb.weight.q", + "attn.c_k.weight.q", + "attn.c_q.weight.q", + "attn.c_v.weight.q", + "attn.proj.weight.q", + "mlp.fc.weight.q", + "mlp.proj.weight.q", +] +_SIMSORT_KEYS = {"_tok_emb.weight.q", "attn.c_q.weight.q", "mlp.fc.weight.q"} +_PACK_MAGIC = b"PGRP" + + +def _similarity_sort_l1(matrix): + n = matrix.shape[0] + used = np.zeros(n, dtype=bool) + order = [0] + used[0] = True + cur = matrix[0].astype(np.float32) + for _ in range(n - 1): + dists = np.sum(np.abs(matrix[~used].astype(np.float32) - cur), axis=1) + unused = np.where(~used)[0] + best = unused[np.argmin(dists)] + order.append(best) + used[best] = True + cur = matrix[best].astype(np.float32) + return np.array(order, dtype=np.uint16) + + +def _lrzip_compress(data, tmpdir, label): + inp = os.path.join(tmpdir, f"{label}.bin") + out = f"{inp}.lrz" + with open(inp, "wb") as f: + f.write(data) + subprocess.run( + ["lrzip", "-z", "-L", "9", "-o", out, inp], capture_output=True, check=True + ) + with open(out, "rb") as f: + result = f.read() + os.remove(inp) + os.remove(out) + return result + + +def _lrzip_decompress(data, tmpdir, label): + inp = os.path.join(tmpdir, f"{label}.lrz") + out = os.path.join(tmpdir, f"{label}.bin") + with open(inp, "wb") as f: + f.write(data) + subprocess.run( + ["lrzip", "-d", "-f", "-o", out, inp], capture_output=True, check=True + ) + with open(out, "rb") as f: + result = f.read() + os.remove(inp) + os.remove(out) + return result + + +def _pack_streams(streams): + import struct + + n = len(streams) + hdr = _PACK_MAGIC + struct.pack("= 2 + docs.append((start, end - start)) + return docs + + +def _build_ttt_global_batches(doc_entries, h, ascending=False): + batch_size = h.ttt_batch_size + global_doc_entries = sorted(doc_entries, key=lambda x: x[1][1]) + global_batches = [ + global_doc_entries[i : i + batch_size] + for i in range(0, len(global_doc_entries), batch_size) + ] + indexed = list(enumerate(global_batches)) + if not ascending: + indexed.sort(key=lambda ib: -max(dl for _, (_, dl) in ib[1])) + return indexed + + +def _init_batch_counter(path): + with open(path, "wb") as f: + f.write((0).to_bytes(4, "little")) + + +def _claim_next_batch(counter_path, queue_len): + try: + with open(counter_path, "r+b") as f: + fcntl.flock(f, fcntl.LOCK_EX) + idx = int.from_bytes(f.read(4), "little") + f.seek(0) + f.write((idx + 1).to_bytes(4, "little")) + f.flush() + except FileNotFoundError: + return queue_len + return idx + + +def _compute_chunk_window(ci, pred_len, num_chunks, chunk_size, eval_seq_len): + chunk_end = pred_len if ci == num_chunks - 1 else (ci + 1) * chunk_size + win_start = max(0, chunk_end - eval_seq_len) + win_len = chunk_end - win_start + chunk_start = ci * chunk_size + chunk_offset = chunk_start - win_start + chunk_len = chunk_end - chunk_start + return win_start, win_len, chunk_offset, chunk_len + + +def _accumulate_bpb( + ptl, + x, + y, + chunk_offsets, + chunk_lens, + pos_idx, + base_bytes_lut, + has_leading_space_lut, + is_boundary_token_lut, + loss_sum, + byte_sum, + token_count, + y_bytes=None, +): + pos = pos_idx[: x.size(1)].unsqueeze(0) + mask = ( + (chunk_lens.unsqueeze(1) > 0) + & (pos >= chunk_offsets.unsqueeze(1)) + & (pos < (chunk_offsets + chunk_lens).unsqueeze(1)) + ) + mask_f64 = mask.to(torch.float64) + if y_bytes is not None: + tok_bytes = y_bytes.to(torch.float64) 
+ else: + tok_bytes = base_bytes_lut[y].to(torch.float64) + tok_bytes += (has_leading_space_lut[y] & ~is_boundary_token_lut[x]).to( + torch.float64 + ) + loss_sum += (ptl.to(torch.float64) * mask_f64).sum() + byte_sum += (tok_bytes * mask_f64).sum() + token_count += chunk_lens.to(torch.float64).sum() + + +def _loss_bpb_from_sums(loss_sum, token_count, byte_sum): + val_loss = (loss_sum / token_count).item() + val_bpb = val_loss / math.log(2.0) * (token_count.item() / byte_sum.item()) + return val_loss, val_bpb + + +def _add_to_counter(path, delta): + try: + with open(path, "r+b") as f: + fcntl.flock(f, fcntl.LOCK_EX) + cur = int.from_bytes(f.read(8), "little", signed=True) + cur += int(delta) + f.seek(0) + f.write(int(cur).to_bytes(8, "little", signed=True)) + f.flush() + return cur + except FileNotFoundError: + return int(delta) + + +def _init_int64_counter(path): + with open(path, "wb") as f: + f.write((0).to_bytes(8, "little", signed=True)) + + +def _select_ttt_doc_entries(docs, h): + doc_entries = list(enumerate(docs)) + if h.val_doc_fraction < 1.0: + sample_n = max(1, int(round(len(docs) * h.val_doc_fraction))) + sampled_indices = sorted( + random.Random(h.seed).sample(range(len(docs)), sample_n) + ) + return [(i, docs[i]) for i in sampled_indices] + return doc_entries + + +def train_val_ttt_global_sgd_distributed( + h, device, val_data, base_model, val_tokens, batch_seqs=None +): + global BOS_ID + if BOS_ID is None: + BOS_ID = 1 + base_model.eval() + seq_len = h.eval_seq_len + total_tokens = val_tokens.numel() - 1 + ttt_chunk = h.global_ttt_chunk_tokens + batch_seqs = h.global_ttt_batch_seqs if batch_seqs is None else batch_seqs + num_chunks = (total_tokens + ttt_chunk - 1) // ttt_chunk + ttt_params = [p for p in base_model.parameters()] + for p in ttt_params: + p.requires_grad_(True) + optimizer = torch.optim.SGD( + ttt_params, lr=h.global_ttt_lr, momentum=h.global_ttt_momentum + ) + t_start = time.perf_counter() + base_model._in_ttt = True + for ci in range(num_chunks): + chunk_start = ci * ttt_chunk + chunk_end = min((ci + 1) * ttt_chunk, total_tokens) + is_last_chunk = ci == num_chunks - 1 + if is_last_chunk or h.global_ttt_epochs <= 0: + continue + base_model.train() + chunk_seqs = (chunk_end - chunk_start) // seq_len + if chunk_seqs <= 0: + continue + warmup_chunks = max(0, min(h.global_ttt_warmup_chunks, num_chunks - 1)) + if warmup_chunks > 0 and ci < warmup_chunks: + warmup_denom = max(warmup_chunks - 1, 1) + warmup_t = ci / warmup_denom + lr_now = ( + h.global_ttt_warmup_start_lr + + (h.global_ttt_lr - h.global_ttt_warmup_start_lr) * warmup_t + ) + else: + decay_steps = max(num_chunks - 1 - warmup_chunks, 1) + decay_ci = max(ci - warmup_chunks, 0) + lr_now = ( + h.global_ttt_lr + * 0.5 + * (1.0 + math.cos(math.pi * decay_ci / decay_steps)) + ) + for pg in optimizer.param_groups: + pg["lr"] = lr_now + my_seq_s = chunk_seqs * h.rank // h.world_size + my_seq_e = chunk_seqs * (h.rank + 1) // h.world_size + my_chunk_seqs = my_seq_e - my_seq_s + for _ in range(h.global_ttt_epochs): + for bs in range(0, my_chunk_seqs, batch_seqs): + be = min(bs + batch_seqs, my_chunk_seqs) + actual_bs = my_seq_s + bs + start_tok = chunk_start + actual_bs * seq_len + end_tok = chunk_start + (my_seq_s + be) * seq_len + 1 + if end_tok > val_tokens.numel(): + continue + local = val_tokens[start_tok:end_tok].to( + device=device, dtype=torch.int64 + ) + x_flat = local[:-1] + y_flat = local[1:] + optimizer.zero_grad(set_to_none=True) + with torch.enable_grad(): + with torch.autocast(device_type="cuda", 
dtype=torch.bfloat16): + if h.global_ttt_respect_doc_boundaries: + bos_pos = ( + (x_flat == BOS_ID).nonzero(as_tuple=True)[0].tolist() + ) + cu_seqlens, max_seqlen = _build_cu_seqlens( + bos_pos, + x_flat.numel(), + x_flat.device, + h.eval_seq_len, + 64, + ) + loss = base_model( + x_flat[None], + y_flat[None], + cu_seqlens=cu_seqlens, + max_seqlen=max_seqlen, + ) + else: + x = x_flat.reshape(-1, seq_len) + y = y_flat.reshape(-1, seq_len) + loss = base_model(x, y) + loss.backward() + if dist.is_available() and dist.is_initialized(): + for p in ttt_params: + if p.grad is not None: + dist.all_reduce(p.grad, op=dist.ReduceOp.SUM) + p.grad.mul_(1.0 / h.world_size) + if h.global_ttt_grad_clip > 0: + torch.nn.utils.clip_grad_norm_(ttt_params, h.global_ttt_grad_clip) + optimizer.step() + base_model.eval() + if h.rank == 0: + elapsed = time.perf_counter() - t_start + log(f"tttg: c{ci+1}/{num_chunks} lr:{lr_now:.6f} t:{elapsed:.1f}s") + for p in base_model.parameters(): + p.requires_grad_(True) + base_model._in_ttt = False + base_model.eval() + + +def eval_val_ttt_phased(h, base_model, device, val_data, forward_ttt_train): + global BOS_ID + if BOS_ID is None: + BOS_ID = 1 + base_model.eval() + for p in base_model.parameters(): + p.requires_grad_(False) + all_tokens = val_data.val_tokens + all_tokens_idx = all_tokens.to(torch.int32) + docs = _find_docs(all_tokens) + doc_entries = _select_ttt_doc_entries(docs, h) + prefix_doc_limit = max(0, min(len(doc_entries), int(h.phased_ttt_prefix_docs))) + num_phases = max(1, int(h.phased_ttt_num_phases)) + phase_boundaries = [] + for pi in range(num_phases): + boundary = prefix_doc_limit * (pi + 1) // num_phases + phase_boundaries.append(boundary) + current_phase = 0 + current_phase_boundary = phase_boundaries[0] + log( + f"ttt_phased: total_docs:{len(doc_entries)} prefix_docs:{prefix_doc_limit} suffix_docs:{len(doc_entries) - prefix_doc_limit} num_phases:{num_phases} boundaries:{phase_boundaries}" + ) + chunk_size, eval_seq_len = h.ttt_chunk_size, h.ttt_eval_seq_len + eval_batch_set = None + if h.ttt_eval_batches: + eval_batch_set = set(int(x) for x in h.ttt_eval_batches.split(",") if x.strip()) + use_ascending = eval_batch_set is not None + global_batches_sorted = _build_ttt_global_batches( + doc_entries, h, ascending=use_ascending + ) + queue_len = len(global_batches_sorted) + counter_path = f"/tmp/ttt_counter_{h.run_id}" + prefix_counter_path = f"/tmp/ttt_prefix_counter_{h.run_id}" + pause_flag_path = f"/tmp/ttt_pause_flag_{h.run_id}" + if h.rank == 0: + _init_batch_counter(counter_path) + _init_int64_counter(prefix_counter_path) + try: + os.remove(pause_flag_path) + except FileNotFoundError: + pass + if dist.is_available() and dist.is_initialized(): + path_list = [counter_path, prefix_counter_path, pause_flag_path] + dist.broadcast_object_list(path_list, src=0) + counter_path, prefix_counter_path, pause_flag_path = path_list + dist.barrier() + loss_sum = torch.zeros((), device=device, dtype=torch.float64) + byte_sum = torch.zeros((), device=device, dtype=torch.float64) + token_count = torch.zeros((), device=device, dtype=torch.float64) + t_start = time.perf_counter() + reusable_lora = BatchedTTTLoRA( + h.ttt_batch_size, + base_model, + h.ttt_lora_rank, + k_lora=h.ttt_k_lora, + mlp_lora=h.ttt_mlp_lora, + o_lora=h.ttt_o_lora, + ).to(device) + + def _build_opt(lora): + if h.ttt_optimizer == "sgd": + return torch.optim.SGD( + lora.parameters(), + lr=h.ttt_lora_lr, + momentum=h.ttt_beta1, + weight_decay=h.ttt_weight_decay, + ) + return torch.optim.AdamW( + 
lora.parameters(), + lr=h.ttt_lora_lr, + betas=(h.ttt_beta1, h.ttt_beta2), + eps=1e-10, + weight_decay=h.ttt_weight_decay, + fused=True, + ) + + reusable_opt = _build_opt(reusable_lora) + local_scored_docs = [] + global_ttt_done = prefix_doc_limit == 0 + while True: + queue_idx = _claim_next_batch(counter_path, queue_len) + if queue_idx >= queue_len: + break + orig_batch_idx, batch_entries = global_batches_sorted[queue_idx] + batch = [doc for _, doc in batch_entries] + bsz = len(batch) + prev_loss = loss_sum.item() + prev_bytes = byte_sum.item() + prev_tokens = token_count.item() + if bsz == reusable_lora.bsz: + reusable_lora.reset() + for s in reusable_opt.state.values(): + for k, v in s.items(): + if isinstance(v, torch.Tensor): + v.zero_() + elif k == "step": + s[k] = 0 + cur_lora = reusable_lora + cur_opt = reusable_opt + else: + cur_lora = BatchedTTTLoRA( + bsz, + base_model, + h.ttt_lora_rank, + k_lora=h.ttt_k_lora, + mlp_lora=h.ttt_mlp_lora, + o_lora=h.ttt_o_lora, + ).to(device) + cur_opt = _build_opt(cur_lora) + pred_lens = [doc_len - 1 for _, doc_len in batch] + num_chunks = [(pl + chunk_size - 1) // chunk_size for pl in pred_lens] + max_nc = max(num_chunks) + num_chunks_t = torch.tensor(num_chunks, dtype=torch.int64, device=device) + for ci in range(max_nc): + active = [ci < nc for nc in num_chunks] + needs_train = any(ci < nc - 1 for nc in num_chunks) + tok_starts = torch.zeros(bsz, dtype=torch.int64) + tok_wls = torch.zeros(bsz, dtype=torch.int64) + chunk_offsets_cpu = torch.zeros(bsz, dtype=torch.int64) + chunk_lens_cpu = torch.zeros(bsz, dtype=torch.int64) + for b in range(bsz): + if not active[b]: + continue + doc_start, doc_len = batch[b] + win_start, win_len, chunk_offset, chunk_len = _compute_chunk_window( + ci, pred_lens[b], num_chunks[b], chunk_size, eval_seq_len + ) + tok_starts[b] = doc_start + win_start + tok_wls[b] = win_len + chunk_offsets_cpu[b] = chunk_offset + chunk_lens_cpu[b] = chunk_len + _, context_size, chunk_offset, _ = _compute_chunk_window( + ci, (ci + 1) * chunk_size, ci + 1, chunk_size, eval_seq_len + ) + col_idx = torch.arange(context_size + 1) + idx = tok_starts.unsqueeze(1) + col_idx.unsqueeze(0) + idx.clamp_(max=all_tokens.numel() - 1) + gathered_gpu = all_tokens_idx[idx].to( + device=device, dtype=torch.int64, non_blocking=True + ) + valid = (col_idx[:context_size].unsqueeze(0) < tok_wls.unsqueeze(1)).to( + device, non_blocking=True + ) + chunk_offsets = chunk_offsets_cpu.to(device, non_blocking=True) + chunk_lens = chunk_lens_cpu.to(device, non_blocking=True) + x = torch.where(valid, gathered_gpu[:, :context_size], 0) + y = torch.where(valid, gathered_gpu[:, 1 : context_size + 1], 0) + ctx_pos = torch.arange(context_size, device=device, dtype=torch.int64) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + per_tok_loss = forward_ttt_train(x, y, lora=cur_lora) + y_bytes_arg = None + if val_data.caseops_enabled and val_data.val_bytes is not None: + y_idx = ( + tok_starts.unsqueeze(1) + 1 + col_idx[:context_size].unsqueeze(0) + ) + y_idx = y_idx.clamp_(max=val_data.val_bytes.numel() - 1) + y_bytes_arg = val_data.val_bytes[y_idx].to( + device=device, dtype=torch.int32, non_blocking=True + ) + y_bytes_arg = torch.where( + valid, y_bytes_arg, torch.zeros_like(y_bytes_arg) + ) + with torch.no_grad(): + _accumulate_bpb( + per_tok_loss, + x, + y, + chunk_offsets, + chunk_lens, + ctx_pos, + val_data.base_bytes_lut, + val_data.has_leading_space_lut, + val_data.is_boundary_token_lut, + loss_sum, + byte_sum, + token_count, + 
y_bytes=y_bytes_arg, + ) + if needs_train: + activate_chunk_mask = (num_chunks_t - 1 > ci).float() + for gi in range(h.ttt_grad_steps): + if gi > 0: + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + per_tok_loss = forward_ttt_train(x, y, lora=cur_lora) + per_doc = per_tok_loss[ + :, chunk_offset : chunk_offset + chunk_size + ].mean(dim=-1) + cur_opt.zero_grad(set_to_none=True) + (per_doc * activate_chunk_mask).sum().backward() + cur_opt.step() + else: + del per_tok_loss + batch_num = orig_batch_idx + 1 + doc_lens = [dl for _, dl in batch] + should_report = ( + batch_num in eval_batch_set if eval_batch_set is not None else True + ) + if should_report: + cur_tokens = token_count.item() + cur_loss_val = loss_sum.item() + cur_bytes_val = byte_sum.item() + dt = cur_tokens - prev_tokens + db = cur_bytes_val - prev_bytes + if dt > 0 and db > 0: + b_loss = (cur_loss_val - prev_loss) / dt + b_bpb = b_loss / math.log(2.0) * (dt / db) + else: + b_loss = b_bpb = 0.0 + r_loss = cur_loss_val / max(cur_tokens, 1) + r_bpb = r_loss / math.log(2.0) * (cur_tokens / max(cur_bytes_val, 1)) + elapsed = time.perf_counter() - t_start + log( + f"ttp: b{batch_num}/{queue_len} bl:{b_loss:.4f} bb:{b_bpb:.4f} rl:{r_loss:.4f} rb:{r_bpb:.4f} dl:{min(doc_lens)}-{max(doc_lens)} gd:{int(global_ttt_done)}" + ) + if not global_ttt_done: + local_scored_docs.extend( + (orig_batch_idx, pos, doc_start, doc_len) + for pos, (doc_start, doc_len) in enumerate(batch) + ) + prefix_done = _add_to_counter(prefix_counter_path, len(batch_entries)) + if prefix_done >= current_phase_boundary: + try: + with open(pause_flag_path, "x"): + pass + except FileExistsError: + pass + should_pause = os.path.exists(pause_flag_path) + if should_pause: + if dist.is_available() and dist.is_initialized(): + dist.barrier() + gathered_scored_docs = [None] * h.world_size + if dist.is_available() and dist.is_initialized(): + dist.all_gather_object(gathered_scored_docs, local_scored_docs) + else: + gathered_scored_docs = [local_scored_docs] + scored_docs_for_global = [] + for rank_docs in gathered_scored_docs: + if rank_docs: + scored_docs_for_global.extend(rank_docs) + scored_docs_for_global.sort(key=lambda x: (x[0], x[1])) + scored_docs_for_global = scored_docs_for_global[:current_phase_boundary] + scored_token_chunks = [ + val_data.val_tokens[doc_start : doc_start + doc_len] + for _, _, doc_start, doc_len in scored_docs_for_global + ] + if scored_token_chunks: + global_ttt_tokens = torch.cat(scored_token_chunks) + else: + global_ttt_tokens = val_data.val_tokens[:0] + if h.rank == 0: + prefix_done = 0 + try: + with open(prefix_counter_path, "rb") as f: + prefix_done = int.from_bytes( + f.read(8), "little", signed=True + ) + except FileNotFoundError: + pass + log( + f"ttpp: phase:{current_phase + 1}/{num_phases} pd:{prefix_done} gd:{len(scored_docs_for_global)} t:{time.perf_counter() - t_start:.1f}s" + ) + train_val_ttt_global_sgd_distributed( + h, device, val_data, base_model, global_ttt_tokens + ) + for p in base_model.parameters(): + p.requires_grad_(False) + reusable_lora = BatchedTTTLoRA( + h.ttt_batch_size, + base_model, + h.ttt_lora_rank, + k_lora=h.ttt_k_lora, + mlp_lora=h.ttt_mlp_lora, + o_lora=h.ttt_o_lora, + ).to(device) + reusable_opt = _build_opt(reusable_lora) + current_phase += 1 + if current_phase >= num_phases: + global_ttt_done = True + else: + current_phase_boundary = phase_boundaries[current_phase] + if h.rank == 0: + try: + os.remove(pause_flag_path) + except FileNotFoundError: + pass + if dist.is_available() and 
dist.is_initialized(): + dist.barrier() + if h.rank == 0: + log( + f"ttpr: phase:{current_phase}/{num_phases} t:{time.perf_counter() - t_start:.1f}s" + ) + del cur_lora, cur_opt + if dist.is_available() and dist.is_initialized(): + dist.all_reduce(loss_sum, op=dist.ReduceOp.SUM) + dist.all_reduce(byte_sum, op=dist.ReduceOp.SUM) + dist.all_reduce(token_count, op=dist.ReduceOp.SUM) + for p in base_model.parameters(): + p.requires_grad_(True) + base_model.train() + return _loss_bpb_from_sums(loss_sum, token_count, byte_sum) + + +def timed_eval(label, fn, *args, **kwargs): + torch.cuda.synchronize() + t0 = time.perf_counter() + val_loss, val_bpb = fn(*args, **kwargs) + torch.cuda.synchronize() + elapsed_ms = 1e3 * (time.perf_counter() - t0) + log( + f"{label} val_loss:{val_loss:.8f} val_bpb:{val_bpb:.8f} eval_time:{elapsed_ms:.0f}ms" + ) + return val_loss, val_bpb + + +def train_model(h, device, val_data): + base_model = GPT(h).to(device).bfloat16() + restore_fp32_params(base_model) + compiled_model = torch.compile(base_model, dynamic=False, fullgraph=True) + compiled_forward_logits = torch.compile( + base_model.forward_logits, dynamic=False, fullgraph=True + ) + model = compiled_model + log(f"model_params:{sum(p.numel() for p in base_model.parameters())}") + optimizers = Optimizers(h, base_model) + train_loader = DocumentPackingLoader(h, device) + max_wallclock_ms = ( + 1e3 * h.max_wallclock_seconds if h.max_wallclock_seconds > 0 else None + ) + if max_wallclock_ms is not None: + max_wallclock_ms -= h.gptq_reserve_seconds * 1e3 + log( + f"gptq:reserving {h.gptq_reserve_seconds:.1f}s, effective={max_wallclock_ms:.0f}ms" + ) + + def training_frac(step, elapsed_ms): + if max_wallclock_ms is None: + return step / max(h.iterations, 1) + return elapsed_ms / max(max_wallclock_ms, 1e-09) + + def lr_mul(frac): + if h.warmdown_frac <= 0: + return 1.0 + if frac >= 1.0 - h.warmdown_frac: + return max((1.0 - frac) / h.warmdown_frac, h.min_lr) + return 1.0 + + _clip_params = [p for p in base_model.parameters() if p.requires_grad] + + def step_fn(step, lr_scale): + train_loss = torch.zeros((), device=device) + for micro_step in range(h.grad_accum_steps): + x, y, cu_seqlens, _max_seqlen = train_loader.next_batch( + h.train_batch_tokens, h.grad_accum_steps + ) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=True): + loss = model(x, y, cu_seqlens=cu_seqlens, max_seqlen=h.train_seq_len) + train_loss += loss.detach() + (loss / h.grad_accum_steps).backward() + train_loss /= h.grad_accum_steps + if step <= h.muon_momentum_warmup_steps: + frac = ( + min(step / h.muon_momentum_warmup_steps, 1.0) + if h.muon_momentum_warmup_steps > 0 + else 1.0 + ) + muon_momentum = ( + 1 - frac + ) * h.muon_momentum_warmup_start + frac * h.muon_momentum + for group in optimizers.optimizer_muon.param_groups: + group["momentum"] = muon_momentum + for opt in optimizers: + for group in opt.param_groups: + group["lr"] = group["base_lr"] * lr_scale + if h.grad_clip_norm > 0: + torch.nn.utils.clip_grad_norm_(_clip_params, h.grad_clip_norm) + optimizers.step(distributed=h.distributed) + return train_loss + + if h.warmup_steps > 0: + initial_model_state = { + name: tensor.detach().cpu().clone() + for (name, tensor) in base_model.state_dict().items() + } + initial_optimizer_states = [ + copy.deepcopy(opt.state_dict()) for opt in optimizers + ] + model.train() + num_tokens_local = h.train_batch_tokens // h.world_size + for blk in base_model.blocks: + blk.attn.rotary(num_tokens_local, device, torch.bfloat16) + 
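+    # Illustrative sketch (editor's addition, not part of the submitted
+    # script): the cu-bucket warmup below exists so torch.compile traces one
+    # graph per padded cu_seqlens length before the timed run starts. The
+    # padding pattern it exercises looks roughly like this; the helper name
+    # is hypothetical and nothing ever calls it.
+    def _sketch_pad_cu_seqlens(boundaries, bucket_size, total_len, device):
+        """Pad ragged sequence boundaries to a multiple of bucket_size.
+
+        Unused slots repeat total_len, which varlen attention should treat
+        as zero-length trailing segments, so only a few distinct shapes
+        ever reach the compiler. Sketch only, under the assumption that the
+        loader pads the same way as _run_cu_bucket_warmup below.
+        """
+        n = len(boundaries)
+        padded_len = ((n + bucket_size - 1) // bucket_size) * bucket_size
+        cu = torch.full((padded_len,), total_len, dtype=torch.int32, device=device)
+        cu[:n] = torch.tensor(boundaries, dtype=torch.int32, device=device)
+        return cu
+
+    # e.g. _sketch_pad_cu_seqlens([0, 2048, 4096], 64, 4096, device) returns an
+    # int32 tensor of length 64 whose first three entries are the boundaries.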
cu_bucket_size = train_loader.cu_bucket_size + warmup_cu_buckets = tuple(cu_bucket_size * i for i in range(1, 5)) + warmup_cu_iters = 3 + x, y, cu_seqlens, _ = train_loader.next_batch( + h.train_batch_tokens, h.grad_accum_steps + ) + log( + f"warmup_cu_buckets:{','.join(str(b) for b in warmup_cu_buckets)} iters_each:{warmup_cu_iters}" + ) + + def _run_cu_bucket_warmup(): + for bucket_len in warmup_cu_buckets: + boundaries = list(range(0, x.size(1), max(h.train_seq_len, 1))) + if boundaries[-1] != x.size(1): + boundaries.append(x.size(1)) + cu = torch.full( + (bucket_len,), x.size(1), dtype=torch.int32, device=device + ) + cu[: len(boundaries)] = torch.tensor( + boundaries, dtype=torch.int32, device=device + ) + for _ in range(warmup_cu_iters): + optimizers.zero_grad_all() + with torch.autocast( + device_type="cuda", dtype=torch.bfloat16, enabled=True + ): + wloss = model(x, y, cu_seqlens=cu, max_seqlen=h.train_seq_len) + (wloss / h.grad_accum_steps).backward() + optimizers.zero_grad_all() + + _run_cu_bucket_warmup() + if h.num_loops > 0: + base_model.looping_active = True + _run_cu_bucket_warmup() + base_model.looping_active = False + for warmup_step in range(h.warmup_steps): + step_fn(warmup_step, 1.0) + if ( + warmup_step <= 5 + or (warmup_step + 1) % 10 == 0 + or warmup_step + 1 == h.warmup_steps + ): + log(f"warmup_step: {warmup_step+1}/{h.warmup_steps}") + if h.num_loops > 0: + base_model.looping_active = True + log( + f"loop_warmup:enabled encoder:{base_model.encoder_indices} decoder:{base_model.decoder_indices}" + ) + for warmup_step in range(h.warmup_steps): + step_fn(warmup_step, 1.0) + if ( + warmup_step <= 5 + or (warmup_step + 1) % 10 == 0 + or warmup_step + 1 == h.warmup_steps + ): + log(f"loop_warmup_step: {warmup_step+1}/{h.warmup_steps}") + base_model.looping_active = False + base_model.load_state_dict(initial_model_state, strict=True) + for opt, state in zip(optimizers, initial_optimizer_states, strict=True): + opt.load_state_dict(state) + optimizers.zero_grad_all() + train_loader = DocumentPackingLoader(h, device) + _live_state = base_model.state_dict(keep_vars=True) + ema_state = {name: t.detach().float().clone() for (name, t) in _live_state.items()} + _ema_pairs = [(ema_state[name], t) for (name, t) in _live_state.items()] + ema_decay = h.ema_decay + training_time_ms = 0.0 + stop_after_step = None + torch.cuda.synchronize() + t0 = time.perf_counter() + step = 0 + while True: + last_step = ( + step == h.iterations + or stop_after_step is not None + and step >= stop_after_step + ) + should_validate = ( + last_step or h.val_loss_every > 0 and step % h.val_loss_every == 0 + ) + if should_validate: + torch.cuda.synchronize() + training_time_ms += 1e3 * (time.perf_counter() - t0) + val_loss, val_bpb = eval_val( + h, device, val_data, model, compiled_forward_logits + ) + log( + f"{step}/{h.iterations} val_loss: {val_loss:.4f} val_bpb: {val_bpb:.4f}" + ) + torch.cuda.synchronize() + t0 = time.perf_counter() + if last_step: + if stop_after_step is not None and step < h.iterations: + log( + f"stopping_early: wallclock_cap train_time: {training_time_ms:.0f}ms step: {step}/{h.iterations}" + ) + break + elapsed_ms = training_time_ms + 1e3 * (time.perf_counter() - t0) + frac = training_frac(step, elapsed_ms) + scale = lr_mul(frac) + if ( + h.num_loops > 0 + and not base_model.looping_active + and frac >= h.enable_looping_at + ): + base_model.looping_active = True + log( + f"layer_loop:enabled step:{step} frac:{frac:.3f} encoder:{base_model.encoder_indices} 
decoder:{base_model.decoder_indices}" + ) + train_loss = step_fn(step, scale) + with torch.no_grad(): + for ema_t, t in _ema_pairs: + ema_t.mul_(ema_decay).add_(t.detach(), alpha=1.0 - ema_decay) + step += 1 + approx_training_time_ms = training_time_ms + 1e3 * (time.perf_counter() - t0) + should_log_train = h.train_log_every > 0 and ( + step <= 5 or step % h.train_log_every == 0 or stop_after_step is not None + ) + if should_log_train: + tok_per_sec = step * h.train_batch_tokens / (approx_training_time_ms / 1e3) + log( + f"{step}/{h.iterations} train_loss: {train_loss.item():.4f} train_time: {approx_training_time_ms/60000:.1f}m tok/s: {tok_per_sec:.0f}" + ) + reached_cap = ( + max_wallclock_ms is not None and approx_training_time_ms >= max_wallclock_ms + ) + if h.distributed and max_wallclock_ms is not None: + reached_cap_tensor = torch.tensor(int(reached_cap), device=device) + dist.all_reduce(reached_cap_tensor, op=dist.ReduceOp.MAX) + reached_cap = bool(reached_cap_tensor.item()) + if stop_after_step is None and reached_cap: + stop_after_step = step + log( + f"peak memory allocated: {torch.cuda.max_memory_allocated()//1024//1024} MiB reserved: {torch.cuda.max_memory_reserved()//1024//1024} MiB" + ) + log("ema:applying EMA weights") + current_state = base_model.state_dict() + avg_state = { + name: t.to(dtype=current_state[name].dtype) for (name, t) in ema_state.items() + } + base_model.load_state_dict(avg_state, strict=True) + return base_model, compiled_model, compiled_forward_logits + + +def _ttt_warmup_compile(h, device): + """Pre-warm the torch.compile cache for the eval-only TTT graph using same args as eval.""" + log("ttt_compile_warmup: starting (writes to inductor cache)") + t0 = time.perf_counter() + ttt_model = deserialize(h, device) + if h.num_loops > 0: + ttt_model.looping_active = True + for p in ttt_model.parameters(): + p.requires_grad_(False) + if h.rope_yarn: + _yarn_seqlen = h.train_batch_tokens // h.grad_accum_steps + for block in ttt_model.blocks: + block.attn.rotary(_yarn_seqlen, device, torch.bfloat16) + else: + for block in ttt_model.blocks: + block.attn.rotary._cos_cached = None + block.attn.rotary._sin_cached = None + block.attn.rotary._seq_len_cached = 0 + block.attn.rotary(h.ttt_eval_seq_len, device, torch.bfloat16) + + def _fwd_ttt_inner(input_ids, target_ids, lora): + return ttt_model.forward_ttt(input_ids, target_ids, lora=lora) + + fwd_ttt_compiled = torch.compile(_fwd_ttt_inner, dynamic=True) + global BOS_ID + if BOS_ID is None: + BOS_ID = 1 + bsz = h.ttt_batch_size + wl = BatchedTTTLoRA( + bsz, + ttt_model, + h.ttt_lora_rank, + k_lora=h.ttt_k_lora, + mlp_lora=h.ttt_mlp_lora, + o_lora=h.ttt_o_lora, + ).to(device) + wo = torch.optim.AdamW( + wl.parameters(), + lr=h.ttt_lora_lr, + betas=(h.ttt_beta1, h.ttt_beta2), + eps=1e-10, + weight_decay=h.ttt_weight_decay, + fused=True, + ) + for ctx_len in (h.ttt_chunk_size, h.ttt_eval_seq_len): + xw = torch.randint( + 0, h.vocab_size, (bsz, ctx_len), device=device, dtype=torch.int64 + ) + yw = torch.randint( + 0, h.vocab_size, (bsz, ctx_len), device=device, dtype=torch.int64 + ) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + ptl = fwd_ttt_compiled(xw, yw, lora=wl) + ptl[:, : min(h.ttt_chunk_size, ctx_len)].mean(dim=-1).sum().backward() + wo.step() + wo.zero_grad(set_to_none=True) + del wl, wo, ttt_model, fwd_ttt_compiled + torch.cuda.empty_cache() + log(f"ttt_compile_warmup: done in {time.perf_counter()-t0:.1f}s") + + +def do_train(h, device): + random.seed(h.seed) + np.random.seed(h.seed) + 
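# Seed Python and NumPy (above) and Torch CPU/CUDA (below) so each per-seed run is reproducible. +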
torch.manual_seed(h.seed) + torch.cuda.manual_seed_all(h.seed) + if h.artifact_dir and h.is_main_process: + os.makedirs(h.artifact_dir, exist_ok=True) + val_data = ValidationData(h, device) + log( + f"train_shards: {len(list(Path(h.datasets_dir).resolve().glob('fineweb_train_*.bin')))}" + ) + log(f"val_tokens: {val_data.val_tokens.numel()-1}") + base_model, compiled_model, compiled_forward_logits = train_model( + h, device, val_data + ) + torch._dynamo.reset() + timed_eval( + "diagnostic pre-quantization post-ema", + eval_val, + h, + device, + val_data, + compiled_model, + compiled_forward_logits, + ) + serialize(h, base_model) + if h.distributed: + dist.barrier() + eval_model = deserialize(h, device) + if h.num_loops > 0: + eval_model.looping_active = True + compiled_eval_model = torch.compile(eval_model, dynamic=False, fullgraph=True) + compiled_eval_forward_logits = torch.compile( + eval_model.forward_logits, dynamic=False, fullgraph=True + ) + timed_eval( + "diagnostic quantized", + eval_val, + h, + device, + val_data, + compiled_eval_model, + compiled_eval_forward_logits, + ) + del eval_model, compiled_eval_model, compiled_eval_forward_logits + torch._dynamo.reset() + torch.cuda.empty_cache() + # Pre-warm TTT compile cache for eval mode + if h.ttt_enabled: + _ttt_warmup_compile(h, device) + + +def do_eval(h, device): + random.seed(h.seed) + np.random.seed(h.seed) + torch.manual_seed(h.seed) + torch.cuda.manual_seed_all(h.seed) + val_data = ValidationData(h, device) + log(f"val_tokens: {val_data.val_tokens.numel()-1}") + ttt_model = deserialize(h, device) + if h.num_loops > 0: + ttt_model.looping_active = True + for p in ttt_model.parameters(): + p.requires_grad_(False) + if h.rope_yarn: + _yarn_seqlen = h.train_batch_tokens // h.grad_accum_steps + for block in ttt_model.blocks: + block.attn.rotary(_yarn_seqlen, device, torch.bfloat16) + else: + for block in ttt_model.blocks: + block.attn.rotary._cos_cached = None + block.attn.rotary._sin_cached = None + block.attn.rotary._seq_len_cached = 0 + block.attn.rotary(h.ttt_eval_seq_len, device, torch.bfloat16) + + def _fwd_ttt_inner(input_ids, target_ids, lora): + return ttt_model.forward_ttt(input_ids, target_ids, lora=lora) + + _fwd_ttt_compiled_inner = None + + def _fwd_ttt(input_ids, target_ids, lora): + nonlocal _fwd_ttt_compiled_inner + if _fwd_ttt_compiled_inner is None: + _fwd_ttt_compiled_inner = torch.compile(_fwd_ttt_inner, dynamic=True) + return _fwd_ttt_compiled_inner(input_ids, target_ids, lora=lora) + + log("ttt_lora:warming up compile (random tokens, no val data)") + global BOS_ID + if BOS_ID is None: + BOS_ID = 1 + t_warmup = time.perf_counter() + bsz = h.ttt_batch_size + wl = BatchedTTTLoRA( + bsz, + ttt_model, + h.ttt_lora_rank, + k_lora=h.ttt_k_lora, + mlp_lora=h.ttt_mlp_lora, + o_lora=h.ttt_o_lora, + ).to(device) + wo = torch.optim.AdamW( + wl.parameters(), + lr=h.ttt_lora_lr, + betas=(h.ttt_beta1, h.ttt_beta2), + eps=1e-10, + weight_decay=h.ttt_weight_decay, + fused=True, + ) + for ctx_len in (h.ttt_chunk_size, h.ttt_eval_seq_len): + xw = torch.randint( + 0, h.vocab_size, (bsz, ctx_len), device=device, dtype=torch.int64 + ) + yw = torch.randint( + 0, h.vocab_size, (bsz, ctx_len), device=device, dtype=torch.int64 + ) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + ptl = _fwd_ttt(xw, yw, lora=wl) + ptl[:, : min(h.ttt_chunk_size, ctx_len)].mean(dim=-1).sum().backward() + wo.step() + wo.zero_grad(set_to_none=True) + del wl, wo + torch.cuda.empty_cache() + log(f"ttt_lora:compile warmup done 
({time.perf_counter()-t_warmup:.1f}s)") + log("\nbeginning TTT eval timer") + torch.cuda.synchronize() + t_ttt = time.perf_counter() + ttt_val_loss, ttt_val_bpb = eval_val_ttt_phased( + h, ttt_model, device, val_data, forward_ttt_train=_fwd_ttt + ) + torch.cuda.synchronize() + ttt_eval_elapsed = time.perf_counter() - t_ttt + log( + f"quantized_ttt_phased val_loss:{ttt_val_loss:.8f} val_bpb:{ttt_val_bpb:.8f} eval_time:{1e3*ttt_eval_elapsed:.0f}ms" + ) + if h.is_main_process: + print(f"val_loss:{ttt_val_loss:.8f}", flush=True) + print(f"val_bpb:{ttt_val_bpb:.8f}", flush=True) + print(f"total_eval_time:{ttt_eval_elapsed:.1f}s", flush=True) + + +def main(): + parser = argparse.ArgumentParser() + parser.add_argument("--mode", choices=["train", "eval"], required=True) + args = parser.parse_args() + world_size = int(os.environ.get("WORLD_SIZE", "1")) + local_rank = int(os.environ.get("LOCAL_RANK", "0")) + distributed = "RANK" in os.environ and "WORLD_SIZE" in os.environ + if not torch.cuda.is_available(): + raise RuntimeError("CUDA required") + device = torch.device("cuda", local_rank) + torch.cuda.set_device(device) + if distributed: + dist.init_process_group(backend="nccl", device_id=device) + dist.barrier() + torch.backends.cuda.matmul.allow_tf32 = True + torch.backends.cudnn.allow_tf32 = True + torch.set_float32_matmul_precision("high") + from torch.backends.cuda import ( + enable_cudnn_sdp, + enable_flash_sdp, + enable_math_sdp, + enable_mem_efficient_sdp, + ) + + enable_cudnn_sdp(False) + enable_flash_sdp(True) + enable_mem_efficient_sdp(False) + enable_math_sdp(False) + torch._dynamo.config.optimize_ddp = False + torch._dynamo.config.cache_size_limit = 64 + h = Hyperparameters() + set_logging_hparams(h) + if h.is_main_process and h.logfile: + os.makedirs(os.path.dirname(h.logfile) or ".", exist_ok=True) + if args.mode == "train": + do_train(h, device) + else: + do_eval(h, device) + if distributed: + dist.destroy_process_group() + + +if __name__ == "__main__": + main() diff --git a/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_seed0.log b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_seed0.log new file mode 100644 index 0000000000..fb55df4ba0 --- /dev/null +++ b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_seed0.log @@ -0,0 +1,1092 @@ +train_shards: 80 +val_tokens: 47851520 +model_params:35945671 +gptq:reserving 0.5s, effective=599500ms +warmup_cu_buckets:64,128,192,256 iters_each:3 +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +1/20000 train_loss: 9.1630 train_time: 0.0m tok/s: 17353484 +2/20000 train_loss: 13.0927 train_time: 0.0m tok/s: 5755307 +3/20000 train_loss: 10.4720 train_time: 0.0m tok/s: 6435050 +4/20000 train_loss: 8.9800 train_time: 0.0m tok/s: 6856927 +5/20000 train_loss: 8.2376 train_time: 0.0m tok/s: 7123224 +500/20000 train_loss: 2.8877 train_time: 0.8m tok/s: 8248100 +1000/20000 train_loss: 3.1098 train_time: 1.6m tok/s: 8245550 +1500/20000 train_loss: 2.9223 train_time: 2.4m tok/s: 8240386 +2000/20000 train_loss: 2.9534 train_time: 3.2m tok/s: 8239812 +layer_loop:enabled 
step:2198 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 2.8472 train_time: 4.2m tok/s: 7799283 +3000/20000 train_loss: 2.8604 train_time: 5.4m tok/s: 7242872 +3500/20000 train_loss: 2.8653 train_time: 6.6m tok/s: 6955831 +4000/20000 train_loss: 2.7106 train_time: 7.8m tok/s: 6699103 +4500/20000 train_loss: 2.5882 train_time: 9.0m tok/s: 6545139 +4921/20000 val_loss: 2.3710 val_bpb: 1.0834 +stopping_early: wallclock_cap train_time: 599507ms step: 4921/20000 +peak memory allocated: 41697 MiB reserved: 41722 MiB +ema:applying EMA weights +diagnostic pre-quantization post-ema val_loss:2.34237369 val_bpb:1.07037337 eval_time:108717ms +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 3.7s +Quantized weights: + gate_int8_row: blocks.attn.attn_gate_w + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int6)+lqer_asym: blocks.mlp.fc.weight + gptq (int7)+lqer_asym: tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, parallel_post_lambdas, parallel_resid_lambdas, skip_gates, skip_weights, smear_gate.weight, smear_lambda +Serialize: per-group lrzip compression... +Serialize: per-group compression done in 117.4s +Serialized model quantized+pergroup: 15865881 bytes +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 20.9s +diagnostic quantized val_loss:2.36150981 val_bpb:1.07911783 eval_time:112538ms +ttt_compile_warmup: starting (writes to inductor cache) +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 21.0s +ttt_compile_warmup: done in 134.1s + +val_tokens: 47851520 +Deserialize: per-group lrzip decompression... 
+Deserialize: decompression done in 22.1s +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (108.3s) + +beginning TTT eval timer +ttt_phased: total_docs:50000 prefix_docs:3000 suffix_docs:47000 num_phases:4 boundaries:[750, 1500, 2250, 3000] +ttp: b777/782 bl:2.3130 bb:1.0839 rl:2.3130 rb:1.0839 dl:8452-9229 gd:0 +ttp: b771/782 bl:2.3074 bb:1.0598 rl:2.3108 rb:1.0744 dl:5523-5749 gd:0 +ttp: b768/782 bl:2.2409 bb:1.0436 rl:2.2929 rb:1.0666 dl:4859-5083 gd:0 +ttpp: phase:1/4 pd:1168 gd:750 t:182.3s +tttg: c1/124 lr:0.001000 t:0.7s +tttg: c2/124 lr:0.001000 t:0.8s +tttg: c3/124 lr:0.000999 t:0.9s +tttg: c4/124 lr:0.000999 t:1.0s +tttg: c5/124 lr:0.000997 t:1.1s +tttg: c6/124 lr:0.000996 t:1.2s +tttg: c7/124 lr:0.000994 t:2.4s +tttg: c8/124 lr:0.000992 t:2.5s +tttg: c9/124 lr:0.000990 t:2.6s +tttg: c10/124 lr:0.000987 t:2.7s +tttg: c11/124 lr:0.000984 t:2.8s +tttg: c12/124 lr:0.000980 t:2.9s +tttg: c13/124 lr:0.000977 t:2.9s +tttg: c14/124 lr:0.000973 t:3.0s +tttg: c15/124 lr:0.000968 t:3.1s +tttg: c16/124 lr:0.000964 t:3.2s +tttg: c17/124 lr:0.000959 t:3.3s +tttg: c18/124 lr:0.000954 t:3.4s +tttg: c19/124 lr:0.000948 t:3.5s +tttg: c20/124 lr:0.000942 t:3.5s +tttg: c21/124 lr:0.000936 t:3.6s +tttg: c22/124 lr:0.000930 t:3.7s +tttg: c23/124 lr:0.000923 t:3.8s +tttg: c24/124 lr:0.000916 t:3.9s +tttg: c25/124 lr:0.000909 t:4.0s +tttg: c26/124 lr:0.000901 t:4.1s +tttg: c27/124 lr:0.000894 t:4.1s +tttg: c28/124 lr:0.000886 t:4.2s +tttg: c29/124 lr:0.000877 t:4.3s +tttg: c30/124 lr:0.000869 t:4.4s +tttg: c31/124 lr:0.000860 t:4.5s +tttg: c32/124 lr:0.000851 t:4.6s +tttg: c33/124 lr:0.000842 t:4.7s +tttg: c34/124 lr:0.000833 t:4.7s +tttg: c35/124 lr:0.000823 t:4.8s +tttg: c36/124 lr:0.000813 t:4.9s +tttg: c37/124 lr:0.000803 t:5.0s +tttg: c38/124 lr:0.000793 t:5.1s +tttg: c39/124 lr:0.000782 t:5.2s +tttg: c40/124 lr:0.000772 t:5.3s +tttg: c41/124 lr:0.000761 t:5.4s +tttg: c42/124 lr:0.000750 t:5.4s +tttg: c43/124 lr:0.000739 t:5.5s +tttg: c44/124 lr:0.000728 t:5.6s +tttg: c45/124 lr:0.000716 t:5.7s +tttg: c46/124 lr:0.000705 t:5.8s +tttg: c47/124 lr:0.000693 t:5.9s +tttg: c48/124 lr:0.000681 t:6.0s +tttg: c49/124 lr:0.000669 t:6.0s +tttg: c50/124 lr:0.000657 t:6.1s +tttg: c51/124 lr:0.000645 t:6.2s +tttg: c52/124 lr:0.000632 t:6.3s +tttg: c53/124 lr:0.000620 t:6.4s +tttg: c54/124 lr:0.000608 t:6.5s +tttg: c55/124 lr:0.000595 t:6.6s +tttg: c56/124 lr:0.000583 t:6.6s +tttg: c57/124 lr:0.000570 t:6.7s +tttg: c58/124 lr:0.000557 t:6.8s +tttg: c59/124 lr:0.000545 t:6.9s +tttg: c60/124 lr:0.000532 t:7.0s +tttg: c61/124 lr:0.000519 t:7.1s +tttg: c62/124 lr:0.000506 t:7.2s +tttg: c63/124 lr:0.000494 t:7.3s +tttg: c64/124 lr:0.000481 t:7.3s +tttg: c65/124 lr:0.000468 t:7.4s +tttg: c66/124 lr:0.000455 t:7.5s +tttg: c67/124 lr:0.000443 t:7.6s +tttg: c68/124 lr:0.000430 t:7.7s +tttg: c69/124 lr:0.000417 t:7.8s +tttg: c70/124 lr:0.000405 t:7.8s +tttg: c71/124 lr:0.000392 t:7.9s +tttg: c72/124 lr:0.000380 t:8.0s +tttg: c73/124 lr:0.000368 t:8.1s +tttg: c74/124 lr:0.000355 t:8.2s +tttg: c75/124 lr:0.000343 t:8.3s +tttg: c76/124 lr:0.000331 t:8.4s +tttg: c77/124 lr:0.000319 t:8.4s +tttg: c78/124 lr:0.000307 t:8.5s +tttg: c79/124 lr:0.000295 t:8.6s +tttg: c80/124 lr:0.000284 t:8.7s +tttg: c81/124 lr:0.000272 t:8.8s +tttg: c82/124 lr:0.000261 t:8.9s +tttg: c83/124 lr:0.000250 t:9.0s +tttg: c84/124 lr:0.000239 t:9.0s +tttg: c85/124 lr:0.000228 t:9.1s +tttg: c86/124 lr:0.000218 t:9.2s +tttg: c87/124 lr:0.000207 t:9.3s +tttg: c88/124 lr:0.000197 t:9.4s +tttg: c89/124 lr:0.000187 
t:9.5s +tttg: c90/124 lr:0.000177 t:9.6s +tttg: c91/124 lr:0.000167 t:9.7s +tttg: c92/124 lr:0.000158 t:9.7s +tttg: c93/124 lr:0.000149 t:9.8s +tttg: c94/124 lr:0.000140 t:9.9s +tttg: c95/124 lr:0.000131 t:10.0s +tttg: c96/124 lr:0.000123 t:10.1s +tttg: c97/124 lr:0.000114 t:10.2s +tttg: c98/124 lr:0.000106 t:10.3s +tttg: c99/124 lr:0.000099 t:10.3s +tttg: c100/124 lr:0.000091 t:10.4s +tttg: c101/124 lr:0.000084 t:10.5s +tttg: c102/124 lr:0.000077 t:10.6s +tttg: c103/124 lr:0.000070 t:10.7s +tttg: c104/124 lr:0.000064 t:10.8s +tttg: c105/124 lr:0.000058 t:10.9s +tttg: c106/124 lr:0.000052 t:10.9s +tttg: c107/124 lr:0.000046 t:11.0s +tttg: c108/124 lr:0.000041 t:11.1s +tttg: c109/124 lr:0.000036 t:11.2s +tttg: c110/124 lr:0.000032 t:11.3s +tttg: c111/124 lr:0.000027 t:11.4s +tttg: c112/124 lr:0.000023 t:11.5s +tttg: c113/124 lr:0.000020 t:11.6s +tttg: c114/124 lr:0.000016 t:11.6s +tttg: c115/124 lr:0.000013 t:11.7s +tttg: c116/124 lr:0.000010 t:11.8s +tttg: c117/124 lr:0.000008 t:11.9s +tttg: c118/124 lr:0.000006 t:12.0s +tttg: c119/124 lr:0.000004 t:12.1s +tttg: c120/124 lr:0.000003 t:12.2s +tttg: c121/124 lr:0.000001 t:12.2s +tttg: c122/124 lr:0.000001 t:12.3s +tttg: c123/124 lr:0.000000 t:12.4s +ttpr: phase:1/4 t:197.5s +ttp: b762/782 bl:2.3546 bb:1.0903 rl:2.3036 rb:1.0707 dl:4032-4142 gd:0 +ttpp: phase:2/4 pd:2000 gd:1500 t:270.5s +tttg: c1/199 lr:0.001000 t:0.1s +tttg: c2/199 lr:0.001000 t:0.2s +tttg: c3/199 lr:0.001000 t:0.3s +tttg: c4/199 lr:0.000999 t:0.3s +tttg: c5/199 lr:0.000999 t:0.4s +tttg: c6/199 lr:0.000998 t:0.5s +tttg: c7/199 lr:0.000998 t:0.6s +tttg: c8/199 lr:0.000997 t:0.7s +tttg: c9/199 lr:0.000996 t:0.8s +tttg: c10/199 lr:0.000995 t:0.9s +tttg: c11/199 lr:0.000994 t:0.9s +tttg: c12/199 lr:0.000992 t:1.0s +tttg: c13/199 lr:0.000991 t:1.1s +tttg: c14/199 lr:0.000989 t:1.2s +tttg: c15/199 lr:0.000988 t:1.3s +tttg: c16/199 lr:0.000986 t:1.4s +tttg: c17/199 lr:0.000984 t:1.5s +tttg: c18/199 lr:0.000982 t:1.6s +tttg: c19/199 lr:0.000980 t:1.6s +tttg: c20/199 lr:0.000977 t:1.7s +tttg: c21/199 lr:0.000975 t:1.8s +tttg: c22/199 lr:0.000973 t:1.9s +tttg: c23/199 lr:0.000970 t:2.0s +tttg: c24/199 lr:0.000967 t:2.1s +tttg: c25/199 lr:0.000964 t:2.2s +tttg: c26/199 lr:0.000961 t:2.2s +tttg: c27/199 lr:0.000958 t:2.3s +tttg: c28/199 lr:0.000955 t:2.4s +tttg: c29/199 lr:0.000951 t:2.5s +tttg: c30/199 lr:0.000948 t:2.6s +tttg: c31/199 lr:0.000944 t:2.7s +tttg: c32/199 lr:0.000941 t:2.8s +tttg: c33/199 lr:0.000937 t:2.9s +tttg: c34/199 lr:0.000933 t:2.9s +tttg: c35/199 lr:0.000929 t:3.0s +tttg: c36/199 lr:0.000925 t:3.1s +tttg: c37/199 lr:0.000921 t:3.2s +tttg: c38/199 lr:0.000916 t:3.3s +tttg: c39/199 lr:0.000912 t:3.4s +tttg: c40/199 lr:0.000907 t:3.5s +tttg: c41/199 lr:0.000903 t:3.5s +tttg: c42/199 lr:0.000898 t:3.6s +tttg: c43/199 lr:0.000893 t:3.7s +tttg: c44/199 lr:0.000888 t:3.8s +tttg: c45/199 lr:0.000883 t:3.9s +tttg: c46/199 lr:0.000878 t:4.0s +tttg: c47/199 lr:0.000873 t:4.0s +tttg: c48/199 lr:0.000867 t:4.1s +tttg: c49/199 lr:0.000862 t:4.2s +tttg: c50/199 lr:0.000856 t:4.3s +tttg: c51/199 lr:0.000851 t:4.4s +tttg: c52/199 lr:0.000845 t:4.5s +tttg: c53/199 lr:0.000839 t:4.6s +tttg: c54/199 lr:0.000833 t:4.6s +tttg: c55/199 lr:0.000827 t:4.7s +tttg: c56/199 lr:0.000821 t:4.8s +tttg: c57/199 lr:0.000815 t:4.9s +tttg: c58/199 lr:0.000809 t:5.0s +tttg: c59/199 lr:0.000803 t:5.1s +tttg: c60/199 lr:0.000796 t:5.2s +tttg: c61/199 lr:0.000790 t:5.2s +tttg: c62/199 lr:0.000784 t:5.3s +tttg: c63/199 lr:0.000777 t:5.4s +tttg: c64/199 lr:0.000770 t:5.5s +tttg: c65/199 lr:0.000764 
t:5.6s +tttg: c66/199 lr:0.000757 t:5.7s +tttg: c67/199 lr:0.000750 t:5.8s +tttg: c68/199 lr:0.000743 t:5.9s +tttg: c69/199 lr:0.000736 t:5.9s +tttg: c70/199 lr:0.000729 t:6.0s +tttg: c71/199 lr:0.000722 t:6.1s +tttg: c72/199 lr:0.000715 t:6.2s +tttg: c73/199 lr:0.000708 t:6.3s +tttg: c74/199 lr:0.000700 t:6.4s +tttg: c75/199 lr:0.000693 t:6.5s +tttg: c76/199 lr:0.000686 t:6.6s +tttg: c77/199 lr:0.000678 t:6.6s +tttg: c78/199 lr:0.000671 t:6.7s +tttg: c79/199 lr:0.000664 t:6.8s +tttg: c80/199 lr:0.000656 t:6.9s +tttg: c81/199 lr:0.000648 t:7.0s +tttg: c82/199 lr:0.000641 t:7.1s +tttg: c83/199 lr:0.000633 t:7.2s +tttg: c84/199 lr:0.000626 t:7.2s +tttg: c85/199 lr:0.000618 t:7.3s +tttg: c86/199 lr:0.000610 t:7.4s +tttg: c87/199 lr:0.000602 t:7.5s +tttg: c88/199 lr:0.000595 t:7.6s +tttg: c89/199 lr:0.000587 t:7.7s +tttg: c90/199 lr:0.000579 t:7.8s +tttg: c91/199 lr:0.000571 t:7.9s +tttg: c92/199 lr:0.000563 t:7.9s +tttg: c93/199 lr:0.000555 t:8.0s +tttg: c94/199 lr:0.000548 t:8.1s +tttg: c95/199 lr:0.000540 t:8.2s +tttg: c96/199 lr:0.000532 t:8.3s +tttg: c97/199 lr:0.000524 t:8.4s +tttg: c98/199 lr:0.000516 t:8.5s +tttg: c99/199 lr:0.000508 t:8.5s +tttg: c100/199 lr:0.000500 t:8.6s +tttg: c101/199 lr:0.000492 t:8.7s +tttg: c102/199 lr:0.000484 t:8.8s +tttg: c103/199 lr:0.000476 t:8.9s +tttg: c104/199 lr:0.000468 t:9.0s +tttg: c105/199 lr:0.000460 t:9.1s +tttg: c106/199 lr:0.000452 t:9.1s +tttg: c107/199 lr:0.000445 t:9.2s +tttg: c108/199 lr:0.000437 t:9.3s +tttg: c109/199 lr:0.000429 t:9.5s +tttg: c110/199 lr:0.000421 t:9.6s +tttg: c111/199 lr:0.000413 t:9.6s +tttg: c112/199 lr:0.000405 t:9.7s +tttg: c113/199 lr:0.000398 t:9.8s +tttg: c114/199 lr:0.000390 t:9.9s +tttg: c115/199 lr:0.000382 t:10.0s +tttg: c116/199 lr:0.000374 t:10.1s +tttg: c117/199 lr:0.000367 t:10.2s +tttg: c118/199 lr:0.000359 t:10.3s +tttg: c119/199 lr:0.000352 t:10.3s +tttg: c120/199 lr:0.000344 t:10.4s +tttg: c121/199 lr:0.000336 t:10.5s +tttg: c122/199 lr:0.000329 t:10.6s +tttg: c123/199 lr:0.000322 t:10.7s +tttg: c124/199 lr:0.000314 t:10.8s +tttg: c125/199 lr:0.000307 t:10.9s +tttg: c126/199 lr:0.000300 t:10.9s +tttg: c127/199 lr:0.000292 t:11.0s +tttg: c128/199 lr:0.000285 t:11.1s +tttg: c129/199 lr:0.000278 t:11.2s +tttg: c130/199 lr:0.000271 t:11.3s +tttg: c131/199 lr:0.000264 t:11.4s +tttg: c132/199 lr:0.000257 t:11.5s +tttg: c133/199 lr:0.000250 t:11.6s +tttg: c134/199 lr:0.000243 t:11.6s +tttg: c135/199 lr:0.000236 t:11.7s +tttg: c136/199 lr:0.000230 t:11.8s +tttg: c137/199 lr:0.000223 t:11.9s +tttg: c138/199 lr:0.000216 t:12.0s +tttg: c139/199 lr:0.000210 t:12.1s +tttg: c140/199 lr:0.000204 t:12.2s +tttg: c141/199 lr:0.000197 t:12.2s +tttg: c142/199 lr:0.000191 t:12.3s +tttg: c143/199 lr:0.000185 t:12.4s +tttg: c144/199 lr:0.000179 t:12.5s +tttg: c145/199 lr:0.000173 t:12.6s +tttg: c146/199 lr:0.000167 t:12.7s +tttg: c147/199 lr:0.000161 t:12.8s +tttg: c148/199 lr:0.000155 t:12.8s +tttg: c149/199 lr:0.000149 t:12.9s +tttg: c150/199 lr:0.000144 t:13.0s +tttg: c151/199 lr:0.000138 t:13.1s +tttg: c152/199 lr:0.000133 t:13.2s +tttg: c153/199 lr:0.000127 t:13.3s +tttg: c154/199 lr:0.000122 t:13.4s +tttg: c155/199 lr:0.000117 t:13.5s +tttg: c156/199 lr:0.000112 t:13.5s +tttg: c157/199 lr:0.000107 t:13.6s +tttg: c158/199 lr:0.000102 t:13.7s +tttg: c159/199 lr:0.000097 t:13.8s +tttg: c160/199 lr:0.000093 t:13.9s +tttg: c161/199 lr:0.000088 t:14.0s +tttg: c162/199 lr:0.000084 t:14.1s +tttg: c163/199 lr:0.000079 t:14.2s +tttg: c164/199 lr:0.000075 t:14.2s +tttg: c165/199 lr:0.000071 t:14.3s +tttg: c166/199 lr:0.000067 
t:14.4s +tttg: c167/199 lr:0.000063 t:14.5s +tttg: c168/199 lr:0.000059 t:14.6s +tttg: c169/199 lr:0.000056 t:14.7s +tttg: c170/199 lr:0.000052 t:14.8s +tttg: c171/199 lr:0.000049 t:14.8s +tttg: c172/199 lr:0.000045 t:14.9s +tttg: c173/199 lr:0.000042 t:15.0s +tttg: c174/199 lr:0.000039 t:15.1s +tttg: c175/199 lr:0.000036 t:15.2s +tttg: c176/199 lr:0.000033 t:15.3s +tttg: c177/199 lr:0.000030 t:15.4s +tttg: c178/199 lr:0.000027 t:15.5s +tttg: c179/199 lr:0.000025 t:15.5s +tttg: c180/199 lr:0.000023 t:15.6s +tttg: c181/199 lr:0.000020 t:15.7s +tttg: c182/199 lr:0.000018 t:15.8s +tttg: c183/199 lr:0.000016 t:15.9s +tttg: c184/199 lr:0.000014 t:16.0s +tttg: c185/199 lr:0.000012 t:16.1s +tttg: c186/199 lr:0.000011 t:16.1s +tttg: c187/199 lr:0.000009 t:16.2s +tttg: c188/199 lr:0.000008 t:16.3s +tttg: c189/199 lr:0.000006 t:16.4s +tttg: c190/199 lr:0.000005 t:16.5s +tttg: c191/199 lr:0.000004 t:16.6s +tttg: c192/199 lr:0.000003 t:16.7s +tttg: c193/199 lr:0.000002 t:16.8s +tttg: c194/199 lr:0.000002 t:16.9s +tttg: c195/199 lr:0.000001 t:16.9s +tttg: c196/199 lr:0.000001 t:17.0s +tttg: c197/199 lr:0.000000 t:17.1s +tttg: c198/199 lr:0.000000 t:17.2s +ttpr: phase:2/4 t:290.4s +ttp: b745/782 bl:2.2373 bb:1.0243 rl:2.2964 rb:1.0656 dl:2842-2883 gd:0 +ttp: b740/782 bl:2.2603 bb:1.0376 rl:2.2931 rb:1.0630 dl:2653-2686 gd:0 +ttpp: phase:3/4 pd:2704 gd:2250 t:307.7s +tttg: c1/270 lr:0.001000 t:0.1s +tttg: c2/270 lr:0.001000 t:0.2s +tttg: c3/270 lr:0.001000 t:0.3s +tttg: c4/270 lr:0.001000 t:0.4s +tttg: c5/270 lr:0.000999 t:0.4s +tttg: c6/270 lr:0.000999 t:0.5s +tttg: c7/270 lr:0.000999 t:0.6s +tttg: c8/270 lr:0.000998 t:0.7s +tttg: c9/270 lr:0.000998 t:0.8s +tttg: c10/270 lr:0.000997 t:0.9s +tttg: c11/270 lr:0.000997 t:1.0s +tttg: c12/270 lr:0.000996 t:1.0s +tttg: c13/270 lr:0.000995 t:1.1s +tttg: c14/270 lr:0.000994 t:1.2s +tttg: c15/270 lr:0.000993 t:1.3s +tttg: c16/270 lr:0.000992 t:1.4s +tttg: c17/270 lr:0.000991 t:1.5s +tttg: c18/270 lr:0.000990 t:1.6s +tttg: c19/270 lr:0.000989 t:1.6s +tttg: c20/270 lr:0.000988 t:1.7s +tttg: c21/270 lr:0.000986 t:1.8s +tttg: c22/270 lr:0.000985 t:1.9s +tttg: c23/270 lr:0.000984 t:2.0s +tttg: c24/270 lr:0.000982 t:2.1s +tttg: c25/270 lr:0.000980 t:2.2s +tttg: c26/270 lr:0.000979 t:2.2s +tttg: c27/270 lr:0.000977 t:2.3s +tttg: c28/270 lr:0.000975 t:2.4s +tttg: c29/270 lr:0.000974 t:2.5s +tttg: c30/270 lr:0.000972 t:2.6s +tttg: c31/270 lr:0.000970 t:2.7s +tttg: c32/270 lr:0.000968 t:2.8s +tttg: c33/270 lr:0.000965 t:2.8s +tttg: c34/270 lr:0.000963 t:2.9s +tttg: c35/270 lr:0.000961 t:3.0s +tttg: c36/270 lr:0.000959 t:3.1s +tttg: c37/270 lr:0.000956 t:3.2s +tttg: c38/270 lr:0.000954 t:3.3s +tttg: c39/270 lr:0.000952 t:3.4s +tttg: c40/270 lr:0.000949 t:3.4s +tttg: c41/270 lr:0.000946 t:3.5s +tttg: c42/270 lr:0.000944 t:3.6s +tttg: c43/270 lr:0.000941 t:3.7s +tttg: c44/270 lr:0.000938 t:3.8s +tttg: c45/270 lr:0.000935 t:3.9s +tttg: c46/270 lr:0.000933 t:4.0s +tttg: c47/270 lr:0.000930 t:4.1s +tttg: c48/270 lr:0.000927 t:4.1s +tttg: c49/270 lr:0.000923 t:4.2s +tttg: c50/270 lr:0.000920 t:4.3s +tttg: c51/270 lr:0.000917 t:4.4s +tttg: c52/270 lr:0.000914 t:4.5s +tttg: c53/270 lr:0.000911 t:4.6s +tttg: c54/270 lr:0.000907 t:4.7s +tttg: c55/270 lr:0.000904 t:4.8s +tttg: c56/270 lr:0.000900 t:4.8s +tttg: c57/270 lr:0.000897 t:4.9s +tttg: c58/270 lr:0.000893 t:5.0s +tttg: c59/270 lr:0.000890 t:5.1s +tttg: c60/270 lr:0.000886 t:5.2s +tttg: c61/270 lr:0.000882 t:5.3s +tttg: c62/270 lr:0.000878 t:5.4s +tttg: c63/270 lr:0.000875 t:5.4s +tttg: c64/270 lr:0.000871 t:5.5s +tttg: 
c65/270 lr:0.000867 t:5.6s +tttg: c66/270 lr:0.000863 t:5.7s +tttg: c67/270 lr:0.000859 t:5.8s +tttg: c68/270 lr:0.000855 t:5.9s +tttg: c69/270 lr:0.000850 t:6.0s +tttg: c70/270 lr:0.000846 t:6.0s +tttg: c71/270 lr:0.000842 t:6.1s +tttg: c72/270 lr:0.000838 t:6.2s +tttg: c73/270 lr:0.000833 t:6.3s +tttg: c74/270 lr:0.000829 t:6.4s +tttg: c75/270 lr:0.000825 t:6.5s +tttg: c76/270 lr:0.000820 t:6.6s +tttg: c77/270 lr:0.000816 t:6.6s +tttg: c78/270 lr:0.000811 t:6.7s +tttg: c79/270 lr:0.000806 t:6.8s +tttg: c80/270 lr:0.000802 t:6.9s +tttg: c81/270 lr:0.000797 t:7.0s +tttg: c82/270 lr:0.000792 t:7.1s +tttg: c83/270 lr:0.000788 t:7.2s +tttg: c84/270 lr:0.000783 t:7.2s +tttg: c85/270 lr:0.000778 t:7.3s +tttg: c86/270 lr:0.000773 t:7.4s +tttg: c87/270 lr:0.000768 t:7.5s +tttg: c88/270 lr:0.000763 t:7.6s +tttg: c89/270 lr:0.000758 t:7.7s +tttg: c90/270 lr:0.000753 t:7.8s +tttg: c91/270 lr:0.000748 t:7.8s +tttg: c92/270 lr:0.000743 t:7.9s +tttg: c93/270 lr:0.000738 t:8.0s +tttg: c94/270 lr:0.000733 t:8.1s +tttg: c95/270 lr:0.000728 t:8.2s +tttg: c96/270 lr:0.000723 t:8.3s +tttg: c97/270 lr:0.000717 t:8.4s +tttg: c98/270 lr:0.000712 t:8.4s +tttg: c99/270 lr:0.000707 t:8.5s +tttg: c100/270 lr:0.000701 t:8.6s +tttg: c101/270 lr:0.000696 t:8.7s +tttg: c102/270 lr:0.000691 t:8.8s +tttg: c103/270 lr:0.000685 t:8.9s +tttg: c104/270 lr:0.000680 t:9.0s +tttg: c105/270 lr:0.000674 t:9.0s +tttg: c106/270 lr:0.000669 t:9.1s +tttg: c107/270 lr:0.000663 t:9.2s +tttg: c108/270 lr:0.000658 t:9.3s +tttg: c109/270 lr:0.000652 t:9.4s +tttg: c110/270 lr:0.000647 t:9.5s +tttg: c111/270 lr:0.000641 t:9.6s +tttg: c112/270 lr:0.000636 t:9.6s +tttg: c113/270 lr:0.000630 t:9.7s +tttg: c114/270 lr:0.000624 t:9.8s +tttg: c115/270 lr:0.000619 t:9.9s +tttg: c116/270 lr:0.000613 t:10.0s +tttg: c117/270 lr:0.000607 t:10.1s +tttg: c118/270 lr:0.000601 t:10.2s +tttg: c119/270 lr:0.000596 t:10.3s +tttg: c120/270 lr:0.000590 t:10.3s +tttg: c121/270 lr:0.000584 t:10.4s +tttg: c122/270 lr:0.000579 t:10.5s +tttg: c123/270 lr:0.000573 t:10.6s +tttg: c124/270 lr:0.000567 t:10.7s +tttg: c125/270 lr:0.000561 t:10.8s +tttg: c126/270 lr:0.000555 t:10.9s +tttg: c127/270 lr:0.000550 t:10.9s +tttg: c128/270 lr:0.000544 t:11.0s +tttg: c129/270 lr:0.000538 t:11.1s +tttg: c130/270 lr:0.000532 t:11.2s +tttg: c131/270 lr:0.000526 t:11.3s +tttg: c132/270 lr:0.000520 t:11.4s +tttg: c133/270 lr:0.000515 t:11.5s +tttg: c134/270 lr:0.000509 t:11.5s +tttg: c135/270 lr:0.000503 t:11.6s +tttg: c136/270 lr:0.000497 t:11.7s +tttg: c137/270 lr:0.000491 t:11.8s +tttg: c138/270 lr:0.000485 t:11.9s +tttg: c139/270 lr:0.000480 t:12.0s +tttg: c140/270 lr:0.000474 t:12.1s +tttg: c141/270 lr:0.000468 t:12.1s +tttg: c142/270 lr:0.000462 t:12.2s +tttg: c143/270 lr:0.000456 t:12.3s +tttg: c144/270 lr:0.000450 t:12.4s +tttg: c145/270 lr:0.000445 t:12.5s +tttg: c146/270 lr:0.000439 t:12.6s +tttg: c147/270 lr:0.000433 t:12.7s +tttg: c148/270 lr:0.000427 t:12.7s +tttg: c149/270 lr:0.000421 t:12.8s +tttg: c150/270 lr:0.000416 t:12.9s +tttg: c151/270 lr:0.000410 t:13.0s +tttg: c152/270 lr:0.000404 t:13.1s +tttg: c153/270 lr:0.000399 t:13.2s +tttg: c154/270 lr:0.000393 t:13.3s +tttg: c155/270 lr:0.000387 t:13.3s +tttg: c156/270 lr:0.000381 t:13.4s +tttg: c157/270 lr:0.000376 t:13.5s +tttg: c158/270 lr:0.000370 t:13.6s +tttg: c159/270 lr:0.000364 t:13.7s +tttg: c160/270 lr:0.000359 t:13.8s +tttg: c161/270 lr:0.000353 t:13.9s +tttg: c162/270 lr:0.000348 t:14.0s +tttg: c163/270 lr:0.000342 t:14.0s +tttg: c164/270 lr:0.000337 t:14.1s +tttg: c165/270 lr:0.000331 t:14.2s +tttg: 
c166/270 lr:0.000326 t:14.3s +tttg: c167/270 lr:0.000320 t:14.4s +tttg: c168/270 lr:0.000315 t:14.5s +tttg: c169/270 lr:0.000309 t:14.6s +tttg: c170/270 lr:0.000304 t:14.6s +tttg: c171/270 lr:0.000299 t:14.7s +tttg: c172/270 lr:0.000293 t:14.8s +tttg: c173/270 lr:0.000288 t:14.9s +tttg: c174/270 lr:0.000283 t:15.0s +tttg: c175/270 lr:0.000277 t:15.1s +tttg: c176/270 lr:0.000272 t:15.2s +tttg: c177/270 lr:0.000267 t:15.3s +tttg: c178/270 lr:0.000262 t:15.3s +tttg: c179/270 lr:0.000257 t:15.4s +tttg: c180/270 lr:0.000252 t:15.5s +tttg: c181/270 lr:0.000247 t:15.6s +tttg: c182/270 lr:0.000242 t:15.7s +tttg: c183/270 lr:0.000237 t:15.8s +tttg: c184/270 lr:0.000232 t:15.9s +tttg: c185/270 lr:0.000227 t:15.9s +tttg: c186/270 lr:0.000222 t:16.0s +tttg: c187/270 lr:0.000217 t:16.1s +tttg: c188/270 lr:0.000212 t:16.2s +tttg: c189/270 lr:0.000208 t:16.3s +tttg: c190/270 lr:0.000203 t:16.4s +tttg: c191/270 lr:0.000198 t:16.5s +tttg: c192/270 lr:0.000194 t:16.5s +tttg: c193/270 lr:0.000189 t:16.6s +tttg: c194/270 lr:0.000184 t:16.7s +tttg: c195/270 lr:0.000180 t:16.8s +tttg: c196/270 lr:0.000175 t:16.9s +tttg: c197/270 lr:0.000171 t:17.0s +tttg: c198/270 lr:0.000167 t:17.1s +tttg: c199/270 lr:0.000162 t:17.1s +tttg: c200/270 lr:0.000158 t:17.2s +tttg: c201/270 lr:0.000154 t:17.3s +tttg: c202/270 lr:0.000150 t:17.4s +tttg: c203/270 lr:0.000145 t:17.5s +tttg: c204/270 lr:0.000141 t:17.6s +tttg: c205/270 lr:0.000137 t:17.7s +tttg: c206/270 lr:0.000133 t:17.7s +tttg: c207/270 lr:0.000129 t:17.8s +tttg: c208/270 lr:0.000125 t:17.9s +tttg: c209/270 lr:0.000122 t:18.0s +tttg: c210/270 lr:0.000118 t:18.1s +tttg: c211/270 lr:0.000114 t:18.2s +tttg: c212/270 lr:0.000110 t:18.3s +tttg: c213/270 lr:0.000107 t:18.3s +tttg: c214/270 lr:0.000103 t:18.4s +tttg: c215/270 lr:0.000100 t:18.5s +tttg: c216/270 lr:0.000096 t:18.6s +tttg: c217/270 lr:0.000093 t:18.7s +tttg: c218/270 lr:0.000089 t:18.8s +tttg: c219/270 lr:0.000086 t:18.9s +tttg: c220/270 lr:0.000083 t:18.9s +tttg: c221/270 lr:0.000080 t:19.0s +tttg: c222/270 lr:0.000077 t:19.1s +tttg: c223/270 lr:0.000073 t:19.2s +tttg: c224/270 lr:0.000070 t:19.3s +tttg: c225/270 lr:0.000067 t:19.4s +tttg: c226/270 lr:0.000065 t:19.5s +tttg: c227/270 lr:0.000062 t:19.5s +tttg: c228/270 lr:0.000059 t:19.6s +tttg: c229/270 lr:0.000056 t:19.7s +tttg: c230/270 lr:0.000054 t:19.8s +tttg: c231/270 lr:0.000051 t:19.9s +tttg: c232/270 lr:0.000048 t:20.0s +tttg: c233/270 lr:0.000046 t:20.1s +tttg: c234/270 lr:0.000044 t:20.1s +tttg: c235/270 lr:0.000041 t:20.2s +tttg: c236/270 lr:0.000039 t:20.3s +tttg: c237/270 lr:0.000037 t:20.4s +tttg: c238/270 lr:0.000035 t:20.5s +tttg: c239/270 lr:0.000032 t:20.6s +tttg: c240/270 lr:0.000030 t:20.7s +tttg: c241/270 lr:0.000028 t:20.7s +tttg: c242/270 lr:0.000026 t:20.8s +tttg: c243/270 lr:0.000025 t:20.9s +tttg: c244/270 lr:0.000023 t:21.0s +tttg: c245/270 lr:0.000021 t:21.1s +tttg: c246/270 lr:0.000020 t:21.2s +tttg: c247/270 lr:0.000018 t:21.3s +tttg: c248/270 lr:0.000016 t:21.3s +tttg: c249/270 lr:0.000015 t:21.4s +tttg: c250/270 lr:0.000014 t:21.5s +tttg: c251/270 lr:0.000012 t:21.6s +tttg: c252/270 lr:0.000011 t:21.7s +tttg: c253/270 lr:0.000010 t:21.8s +tttg: c254/270 lr:0.000009 t:21.9s +tttg: c255/270 lr:0.000008 t:21.9s +tttg: c256/270 lr:0.000007 t:22.0s +tttg: c257/270 lr:0.000006 t:22.1s +tttg: c258/270 lr:0.000005 t:22.2s +tttg: c259/270 lr:0.000004 t:22.3s +tttg: c260/270 lr:0.000003 t:22.4s +tttg: c261/270 lr:0.000003 t:22.5s +tttg: c262/270 lr:0.000002 t:22.5s +tttg: c263/270 lr:0.000002 t:22.6s +tttg: c264/270 lr:0.000001 
t:22.7s +tttg: c265/270 lr:0.000001 t:22.8s +tttg: c266/270 lr:0.000001 t:22.9s +tttg: c267/270 lr:0.000000 t:23.0s +tttg: c268/270 lr:0.000000 t:23.1s +tttg: c269/270 lr:0.000000 t:23.1s +ttpr: phase:3/4 t:333.6s +ttp: b739/782 bl:2.2916 bb:1.0224 rl:2.2930 rb:1.0595 dl:2619-2652 gd:0 +ttpp: phase:4/4 pd:3472 gd:3000 t:347.6s +tttg: c1/325 lr:0.001000 t:0.1s +tttg: c2/325 lr:0.001000 t:0.2s +tttg: c3/325 lr:0.001000 t:0.3s +tttg: c4/325 lr:0.001000 t:0.4s +tttg: c5/325 lr:0.001000 t:0.4s +tttg: c6/325 lr:0.000999 t:0.5s +tttg: c7/325 lr:0.000999 t:0.6s +tttg: c8/325 lr:0.000999 t:0.7s +tttg: c9/325 lr:0.000998 t:0.8s +tttg: c10/325 lr:0.000998 t:0.9s +tttg: c11/325 lr:0.000998 t:1.0s +tttg: c12/325 lr:0.000997 t:1.0s +tttg: c13/325 lr:0.000997 t:1.1s +tttg: c14/325 lr:0.000996 t:1.2s +tttg: c15/325 lr:0.000995 t:1.3s +tttg: c16/325 lr:0.000995 t:1.4s +tttg: c17/325 lr:0.000994 t:1.5s +tttg: c18/325 lr:0.000993 t:1.6s +tttg: c19/325 lr:0.000992 t:1.6s +tttg: c20/325 lr:0.000992 t:1.7s +tttg: c21/325 lr:0.000991 t:1.8s +tttg: c22/325 lr:0.000990 t:1.9s +tttg: c23/325 lr:0.000989 t:2.0s +tttg: c24/325 lr:0.000988 t:2.1s +tttg: c25/325 lr:0.000987 t:2.2s +tttg: c26/325 lr:0.000985 t:2.2s +tttg: c27/325 lr:0.000984 t:2.3s +tttg: c28/325 lr:0.000983 t:2.4s +tttg: c29/325 lr:0.000982 t:2.5s +tttg: c30/325 lr:0.000980 t:2.6s +tttg: c31/325 lr:0.000979 t:2.7s +tttg: c32/325 lr:0.000978 t:2.8s +tttg: c33/325 lr:0.000976 t:2.8s +tttg: c34/325 lr:0.000975 t:2.9s +tttg: c35/325 lr:0.000973 t:3.0s +tttg: c36/325 lr:0.000971 t:3.1s +tttg: c37/325 lr:0.000970 t:3.2s +tttg: c38/325 lr:0.000968 t:3.3s +tttg: c39/325 lr:0.000966 t:3.4s +tttg: c40/325 lr:0.000965 t:3.4s +tttg: c41/325 lr:0.000963 t:3.5s +tttg: c42/325 lr:0.000961 t:3.6s +tttg: c43/325 lr:0.000959 t:3.7s +tttg: c44/325 lr:0.000957 t:3.8s +tttg: c45/325 lr:0.000955 t:3.9s +tttg: c46/325 lr:0.000953 t:3.9s +tttg: c47/325 lr:0.000951 t:4.0s +tttg: c48/325 lr:0.000949 t:4.1s +tttg: c49/325 lr:0.000947 t:4.2s +tttg: c50/325 lr:0.000945 t:4.3s +tttg: c51/325 lr:0.000942 t:4.4s +tttg: c52/325 lr:0.000940 t:4.5s +tttg: c53/325 lr:0.000938 t:4.6s +tttg: c54/325 lr:0.000935 t:4.6s +tttg: c55/325 lr:0.000933 t:4.7s +tttg: c56/325 lr:0.000931 t:4.8s +tttg: c57/325 lr:0.000928 t:4.9s +tttg: c58/325 lr:0.000926 t:5.0s +tttg: c59/325 lr:0.000923 t:5.1s +tttg: c60/325 lr:0.000920 t:5.2s +tttg: c61/325 lr:0.000918 t:5.3s +tttg: c62/325 lr:0.000915 t:5.3s +tttg: c63/325 lr:0.000912 t:5.4s +tttg: c64/325 lr:0.000910 t:5.5s +tttg: c65/325 lr:0.000907 t:5.6s +tttg: c66/325 lr:0.000904 t:5.7s +tttg: c67/325 lr:0.000901 t:5.8s +tttg: c68/325 lr:0.000898 t:5.8s +tttg: c69/325 lr:0.000895 t:5.9s +tttg: c70/325 lr:0.000892 t:6.0s +tttg: c71/325 lr:0.000889 t:6.1s +tttg: c72/325 lr:0.000886 t:6.2s +tttg: c73/325 lr:0.000883 t:6.3s +tttg: c74/325 lr:0.000880 t:6.4s +tttg: c75/325 lr:0.000877 t:6.4s +tttg: c76/325 lr:0.000874 t:6.5s +tttg: c77/325 lr:0.000870 t:6.6s +tttg: c78/325 lr:0.000867 t:6.7s +tttg: c79/325 lr:0.000864 t:6.8s +tttg: c80/325 lr:0.000860 t:6.9s +tttg: c81/325 lr:0.000857 t:7.0s +tttg: c82/325 lr:0.000854 t:7.0s +tttg: c83/325 lr:0.000850 t:7.1s +tttg: c84/325 lr:0.000847 t:7.2s +tttg: c85/325 lr:0.000843 t:7.3s +tttg: c86/325 lr:0.000840 t:7.4s +tttg: c87/325 lr:0.000836 t:7.5s +tttg: c88/325 lr:0.000832 t:7.6s +tttg: c89/325 lr:0.000829 t:7.7s +tttg: c90/325 lr:0.000825 t:7.7s +tttg: c91/325 lr:0.000821 t:7.8s +tttg: c92/325 lr:0.000818 t:7.9s +tttg: c93/325 lr:0.000814 t:8.0s +tttg: c94/325 lr:0.000810 t:8.1s +tttg: c95/325 lr:0.000806 t:8.2s 
+tttg: c96/325 lr:0.000802 t:8.3s +tttg: c97/325 lr:0.000799 t:8.3s +tttg: c98/325 lr:0.000795 t:8.4s +tttg: c99/325 lr:0.000791 t:8.5s +tttg: c100/325 lr:0.000787 t:8.6s +tttg: c101/325 lr:0.000783 t:8.7s +tttg: c102/325 lr:0.000779 t:8.8s +tttg: c103/325 lr:0.000775 t:8.9s +tttg: c104/325 lr:0.000771 t:9.0s +tttg: c105/325 lr:0.000767 t:9.0s +tttg: c106/325 lr:0.000762 t:9.1s +tttg: c107/325 lr:0.000758 t:9.2s +tttg: c108/325 lr:0.000754 t:9.3s +tttg: c109/325 lr:0.000750 t:9.4s +tttg: c110/325 lr:0.000746 t:9.5s +tttg: c111/325 lr:0.000742 t:9.6s +tttg: c112/325 lr:0.000737 t:9.6s +tttg: c113/325 lr:0.000733 t:9.7s +tttg: c114/325 lr:0.000729 t:9.8s +tttg: c115/325 lr:0.000724 t:9.9s +tttg: c116/325 lr:0.000720 t:10.0s +tttg: c117/325 lr:0.000716 t:10.1s +tttg: c118/325 lr:0.000711 t:10.2s +tttg: c119/325 lr:0.000707 t:10.3s +tttg: c120/325 lr:0.000702 t:10.3s +tttg: c121/325 lr:0.000698 t:10.4s +tttg: c122/325 lr:0.000694 t:10.5s +tttg: c123/325 lr:0.000689 t:10.6s +tttg: c124/325 lr:0.000685 t:10.7s +tttg: c125/325 lr:0.000680 t:10.8s +tttg: c126/325 lr:0.000676 t:10.9s +tttg: c127/325 lr:0.000671 t:10.9s +tttg: c128/325 lr:0.000666 t:11.0s +tttg: c129/325 lr:0.000662 t:11.1s +tttg: c130/325 lr:0.000657 t:11.2s +tttg: c131/325 lr:0.000653 t:11.3s +tttg: c132/325 lr:0.000648 t:11.4s +tttg: c133/325 lr:0.000643 t:11.5s +tttg: c134/325 lr:0.000639 t:11.5s +tttg: c135/325 lr:0.000634 t:11.6s +tttg: c136/325 lr:0.000629 t:11.7s +tttg: c137/325 lr:0.000625 t:11.8s +tttg: c138/325 lr:0.000620 t:11.9s +tttg: c139/325 lr:0.000615 t:12.0s +tttg: c140/325 lr:0.000611 t:12.1s +tttg: c141/325 lr:0.000606 t:12.1s +tttg: c142/325 lr:0.000601 t:12.2s +tttg: c143/325 lr:0.000596 t:12.3s +tttg: c144/325 lr:0.000592 t:12.4s +tttg: c145/325 lr:0.000587 t:12.5s +tttg: c146/325 lr:0.000582 t:12.6s +tttg: c147/325 lr:0.000577 t:12.7s +tttg: c148/325 lr:0.000572 t:12.8s +tttg: c149/325 lr:0.000568 t:12.8s +tttg: c150/325 lr:0.000563 t:12.9s +tttg: c151/325 lr:0.000558 t:13.0s +tttg: c152/325 lr:0.000553 t:13.1s +tttg: c153/325 lr:0.000548 t:13.2s +tttg: c154/325 lr:0.000544 t:13.3s +tttg: c155/325 lr:0.000539 t:13.4s +tttg: c156/325 lr:0.000534 t:13.4s +tttg: c157/325 lr:0.000529 t:13.5s +tttg: c158/325 lr:0.000524 t:13.6s +tttg: c159/325 lr:0.000519 t:13.7s +tttg: c160/325 lr:0.000515 t:13.8s +tttg: c161/325 lr:0.000510 t:13.9s +tttg: c162/325 lr:0.000505 t:14.0s +tttg: c163/325 lr:0.000500 t:14.0s +tttg: c164/325 lr:0.000495 t:14.1s +tttg: c165/325 lr:0.000490 t:14.2s +tttg: c166/325 lr:0.000485 t:14.3s +tttg: c167/325 lr:0.000481 t:14.4s +tttg: c168/325 lr:0.000476 t:14.5s +tttg: c169/325 lr:0.000471 t:14.6s +tttg: c170/325 lr:0.000466 t:14.7s +tttg: c171/325 lr:0.000461 t:14.7s +tttg: c172/325 lr:0.000456 t:14.8s +tttg: c173/325 lr:0.000452 t:14.9s +tttg: c174/325 lr:0.000447 t:15.0s +tttg: c175/325 lr:0.000442 t:15.1s +tttg: c176/325 lr:0.000437 t:15.2s +tttg: c177/325 lr:0.000432 t:15.3s +tttg: c178/325 lr:0.000428 t:15.4s +tttg: c179/325 lr:0.000423 t:15.4s +tttg: c180/325 lr:0.000418 t:15.5s +tttg: c181/325 lr:0.000413 t:15.6s +tttg: c182/325 lr:0.000408 t:15.7s +tttg: c183/325 lr:0.000404 t:15.8s +tttg: c184/325 lr:0.000399 t:15.9s +tttg: c185/325 lr:0.000394 t:16.0s +tttg: c186/325 lr:0.000389 t:16.1s +tttg: c187/325 lr:0.000385 t:16.1s +tttg: c188/325 lr:0.000380 t:16.2s +tttg: c189/325 lr:0.000375 t:16.3s +tttg: c190/325 lr:0.000371 t:16.4s +tttg: c191/325 lr:0.000366 t:16.5s +tttg: c192/325 lr:0.000361 t:16.6s +tttg: c193/325 lr:0.000357 t:16.7s +tttg: c194/325 lr:0.000352 t:16.7s +tttg: 
c195/325 lr:0.000347 t:16.8s +tttg: c196/325 lr:0.000343 t:16.9s +tttg: c197/325 lr:0.000338 t:17.0s +tttg: c198/325 lr:0.000334 t:17.1s +tttg: c199/325 lr:0.000329 t:17.2s +tttg: c200/325 lr:0.000324 t:17.3s +tttg: c201/325 lr:0.000320 t:17.3s +tttg: c202/325 lr:0.000315 t:17.4s +tttg: c203/325 lr:0.000311 t:17.5s +tttg: c204/325 lr:0.000306 t:17.6s +tttg: c205/325 lr:0.000302 t:17.7s +tttg: c206/325 lr:0.000298 t:17.8s +tttg: c207/325 lr:0.000293 t:17.9s +tttg: c208/325 lr:0.000289 t:18.0s +tttg: c209/325 lr:0.000284 t:18.0s +tttg: c210/325 lr:0.000280 t:18.1s +tttg: c211/325 lr:0.000276 t:18.2s +tttg: c212/325 lr:0.000271 t:18.3s +tttg: c213/325 lr:0.000267 t:18.4s +tttg: c214/325 lr:0.000263 t:18.5s +tttg: c215/325 lr:0.000258 t:18.6s +tttg: c216/325 lr:0.000254 t:18.6s +tttg: c217/325 lr:0.000250 t:18.7s +tttg: c218/325 lr:0.000246 t:18.8s +tttg: c219/325 lr:0.000242 t:18.9s +tttg: c220/325 lr:0.000238 t:19.0s +tttg: c221/325 lr:0.000233 t:19.1s +tttg: c222/325 lr:0.000229 t:19.1s +tttg: c223/325 lr:0.000225 t:19.2s +tttg: c224/325 lr:0.000221 t:19.3s +tttg: c225/325 lr:0.000217 t:19.4s +tttg: c226/325 lr:0.000213 t:19.5s +tttg: c227/325 lr:0.000209 t:19.6s +tttg: c228/325 lr:0.000205 t:19.7s +tttg: c229/325 lr:0.000201 t:19.8s +tttg: c230/325 lr:0.000198 t:19.8s +tttg: c231/325 lr:0.000194 t:19.9s +tttg: c232/325 lr:0.000190 t:20.0s +tttg: c233/325 lr:0.000186 t:20.1s +tttg: c234/325 lr:0.000182 t:20.2s +tttg: c235/325 lr:0.000179 t:20.3s +tttg: c236/325 lr:0.000175 t:20.4s +tttg: c237/325 lr:0.000171 t:20.5s +tttg: c238/325 lr:0.000168 t:20.5s +tttg: c239/325 lr:0.000164 t:20.6s +tttg: c240/325 lr:0.000160 t:20.7s +tttg: c241/325 lr:0.000157 t:20.8s +tttg: c242/325 lr:0.000153 t:20.9s +tttg: c243/325 lr:0.000150 t:21.0s +tttg: c244/325 lr:0.000146 t:21.0s +tttg: c245/325 lr:0.000143 t:21.1s +tttg: c246/325 lr:0.000140 t:21.2s +tttg: c247/325 lr:0.000136 t:21.3s +tttg: c248/325 lr:0.000133 t:21.4s +tttg: c249/325 lr:0.000130 t:21.5s +tttg: c250/325 lr:0.000126 t:21.6s +tttg: c251/325 lr:0.000123 t:21.6s +tttg: c252/325 lr:0.000120 t:21.7s +tttg: c253/325 lr:0.000117 t:21.8s +tttg: c254/325 lr:0.000114 t:21.9s +tttg: c255/325 lr:0.000111 t:22.0s +tttg: c256/325 lr:0.000108 t:22.1s +tttg: c257/325 lr:0.000105 t:22.2s +tttg: c258/325 lr:0.000102 t:22.2s +tttg: c259/325 lr:0.000099 t:22.3s +tttg: c260/325 lr:0.000096 t:22.4s +tttg: c261/325 lr:0.000093 t:22.5s +tttg: c262/325 lr:0.000090 t:22.6s +tttg: c263/325 lr:0.000088 t:22.7s +tttg: c264/325 lr:0.000085 t:22.8s +tttg: c265/325 lr:0.000082 t:22.8s +tttg: c266/325 lr:0.000080 t:22.9s +tttg: c267/325 lr:0.000077 t:23.0s +tttg: c268/325 lr:0.000074 t:23.1s +tttg: c269/325 lr:0.000072 t:23.2s +tttg: c270/325 lr:0.000069 t:23.3s +tttg: c271/325 lr:0.000067 t:23.3s +tttg: c272/325 lr:0.000065 t:23.4s +tttg: c273/325 lr:0.000062 t:23.5s +tttg: c274/325 lr:0.000060 t:23.6s +tttg: c275/325 lr:0.000058 t:23.7s +tttg: c276/325 lr:0.000055 t:23.8s +tttg: c277/325 lr:0.000053 t:23.9s +tttg: c278/325 lr:0.000051 t:23.9s +tttg: c279/325 lr:0.000049 t:24.0s +tttg: c280/325 lr:0.000047 t:24.1s +tttg: c281/325 lr:0.000045 t:24.2s +tttg: c282/325 lr:0.000043 t:24.3s +tttg: c283/325 lr:0.000041 t:24.4s +tttg: c284/325 lr:0.000039 t:24.5s +tttg: c285/325 lr:0.000037 t:24.5s +tttg: c286/325 lr:0.000035 t:24.6s +tttg: c287/325 lr:0.000034 t:24.7s +tttg: c288/325 lr:0.000032 t:24.8s +tttg: c289/325 lr:0.000030 t:24.9s +tttg: c290/325 lr:0.000029 t:25.0s +tttg: c291/325 lr:0.000027 t:25.1s +tttg: c292/325 lr:0.000025 t:25.1s +tttg: c293/325 lr:0.000024 
t:25.2s +tttg: c294/325 lr:0.000022 t:25.3s +tttg: c295/325 lr:0.000021 t:25.4s +tttg: c296/325 lr:0.000020 t:25.5s +tttg: c297/325 lr:0.000018 t:25.6s +tttg: c298/325 lr:0.000017 t:25.7s +tttg: c299/325 lr:0.000016 t:25.7s +tttg: c300/325 lr:0.000015 t:25.8s +tttg: c301/325 lr:0.000013 t:25.9s +tttg: c302/325 lr:0.000012 t:26.0s +tttg: c303/325 lr:0.000011 t:26.1s +tttg: c304/325 lr:0.000010 t:26.2s +tttg: c305/325 lr:0.000009 t:26.2s +tttg: c306/325 lr:0.000008 t:26.3s +tttg: c307/325 lr:0.000008 t:26.4s +tttg: c308/325 lr:0.000007 t:26.5s +tttg: c309/325 lr:0.000006 t:26.6s +tttg: c310/325 lr:0.000005 t:26.7s +tttg: c311/325 lr:0.000005 t:26.8s +tttg: c312/325 lr:0.000004 t:26.8s +tttg: c313/325 lr:0.000003 t:26.9s +tttg: c314/325 lr:0.000003 t:27.0s +tttg: c315/325 lr:0.000002 t:27.1s +tttg: c316/325 lr:0.000002 t:27.2s +tttg: c317/325 lr:0.000002 t:27.3s +tttg: c318/325 lr:0.000001 t:27.3s +tttg: c319/325 lr:0.000001 t:27.4s +tttg: c320/325 lr:0.000001 t:27.5s +tttg: c321/325 lr:0.000000 t:27.6s +tttg: c322/325 lr:0.000000 t:27.7s +tttg: c323/325 lr:0.000000 t:27.8s +tttg: c324/325 lr:0.000000 t:27.9s +ttpr: phase:4/4 t:378.1s +ttp: b724/782 bl:2.3210 bb:1.0598 rl:2.2948 rb:1.0595 dl:2203-2231 gd:1 +ttp: b714/782 bl:2.3040 bb:1.0205 rl:2.2953 rb:1.0572 dl:2018-2035 gd:1 +ttp: b706/782 bl:2.4004 bb:1.0735 rl:2.3006 rb:1.0581 dl:1898-1910 gd:1 +ttp: b702/782 bl:2.4264 bb:1.0812 rl:2.3065 rb:1.0592 dl:1847-1858 gd:1 +ttp: b692/782 bl:2.2922 bb:1.0290 rl:2.3059 rb:1.0579 dl:1737-1746 gd:1 +ttp: b681/782 bl:2.3311 bb:1.0422 rl:2.3068 rb:1.0573 dl:1628-1637 gd:1 +ttp: b673/782 bl:2.3634 bb:1.0609 rl:2.3088 rb:1.0574 dl:1562-1571 gd:1 +ttp: b669/782 bl:2.3311 bb:1.0423 rl:2.3096 rb:1.0569 dl:1530-1537 gd:1 +ttp: b662/782 bl:2.2931 bb:1.0250 rl:2.3090 rb:1.0559 dl:1480-1486 gd:1 +ttp: b654/782 bl:2.2875 bb:1.0346 rl:2.3084 rb:1.0553 dl:1425-1432 gd:1 +ttp: b647/782 bl:2.2739 bb:1.0320 rl:2.3075 rb:1.0546 dl:1382-1387 gd:1 +ttp: b636/782 bl:2.3818 bb:1.0675 rl:2.3094 rb:1.0549 dl:1314-1320 gd:1 +ttp: b629/782 bl:2.3499 bb:1.0113 rl:2.3103 rb:1.0538 dl:1276-1280 gd:1 +ttp: b622/782 bl:2.2637 bb:1.0340 rl:2.3093 rb:1.0534 dl:1237-1243 gd:1 +ttp: b614/782 bl:2.3167 bb:1.0526 rl:2.3094 rb:1.0534 dl:1195-1200 gd:1 +ttp: b606/782 bl:2.3621 bb:1.0673 rl:2.3105 rb:1.0537 dl:1159-1164 gd:1 +ttp: b598/782 bl:2.3591 bb:1.0670 rl:2.3115 rb:1.0539 dl:1124-1129 gd:1 +ttp: b590/782 bl:2.3076 bb:1.0573 rl:2.3114 rb:1.0540 dl:1089-1093 gd:1 +ttp: b582/782 bl:2.3494 bb:1.0320 rl:2.3121 rb:1.0536 dl:1056-1060 gd:1 +ttp: b573/782 bl:2.3630 bb:1.0652 rl:2.3129 rb:1.0538 dl:1021-1025 gd:1 +ttp: b565/782 bl:2.3847 bb:1.0331 rl:2.3141 rb:1.0534 dl:993-997 gd:1 +ttp: b557/782 bl:2.3393 bb:1.0508 rl:2.3145 rb:1.0534 dl:965-968 gd:1 +ttp: b549/782 bl:2.2572 bb:1.0205 rl:2.3136 rb:1.0529 dl:939-943 gd:1 +ttp: b541/782 bl:2.3280 bb:1.0330 rl:2.3138 rb:1.0526 dl:915-918 gd:1 +ttp: b533/782 bl:2.3715 bb:1.0669 rl:2.3146 rb:1.0528 dl:890-892 gd:1 +ttp: b526/782 bl:2.3204 bb:1.0227 rl:2.3147 rb:1.0524 dl:869-872 gd:1 +ttp: b519/782 bl:2.2949 bb:1.0411 rl:2.3144 rb:1.0523 dl:850-852 gd:1 +ttp: b504/782 bl:2.3209 bb:1.0354 rl:2.3145 rb:1.0520 dl:807-809 gd:1 +ttp: b496/782 bl:2.4169 bb:1.0463 rl:2.3157 rb:1.0520 dl:785-788 gd:1 +ttp: b488/782 bl:2.2892 bb:1.0073 rl:2.3154 rb:1.0515 dl:766-769 gd:1 +ttp: b480/782 bl:2.4373 bb:1.0852 rl:2.3167 rb:1.0518 dl:747-749 gd:1 +ttp: b472/782 bl:2.3830 bb:1.0817 rl:2.3174 rb:1.0521 dl:728-730 gd:1 +ttp: b464/782 bl:2.2677 bb:1.0162 rl:2.3169 rb:1.0518 dl:710-712 gd:1 +ttp: b456/782 
bl:2.3477 bb:1.0399 rl:2.3172 rb:1.0517 dl:693-695 gd:1 +ttp: b448/782 bl:2.3138 bb:1.0087 rl:2.3171 rb:1.0513 dl:677-678 gd:1 +ttp: b440/782 bl:2.2384 bb:0.9853 rl:2.3164 rb:1.0507 dl:659-662 gd:1 +ttp: b432/782 bl:2.3357 bb:1.0381 rl:2.3166 rb:1.0505 dl:643-645 gd:1 +ttp: b424/782 bl:2.3400 bb:1.0610 rl:2.3168 rb:1.0506 dl:629-630 gd:1 +ttp: b416/782 bl:2.3750 bb:1.0443 rl:2.3173 rb:1.0506 dl:613-615 gd:1 +ttp: b408/782 bl:2.3010 bb:1.0699 rl:2.3171 rb:1.0507 dl:597-598 gd:1 +ttp: b400/782 bl:2.3069 bb:1.0380 rl:2.3171 rb:1.0506 dl:582-584 gd:1 +ttp: b392/782 bl:2.2523 bb:1.0360 rl:2.3166 rb:1.0505 dl:568-570 gd:1 +ttp: b384/782 bl:2.3382 bb:1.0518 rl:2.3167 rb:1.0505 dl:554-555 gd:1 +ttp: b376/782 bl:2.3236 bb:1.0421 rl:2.3168 rb:1.0505 dl:540-542 gd:1 +ttp: b367/782 bl:2.2978 bb:1.0844 rl:2.3167 rb:1.0507 dl:525-527 gd:1 +ttp: b359/782 bl:2.2491 bb:1.0328 rl:2.3162 rb:1.0506 dl:512-513 gd:1 +ttp: b351/782 bl:2.3553 bb:1.0782 rl:2.3165 rb:1.0507 dl:498-499 gd:1 +ttp: b343/782 bl:2.2186 bb:1.0441 rl:2.3159 rb:1.0507 dl:486-488 gd:1 +ttp: b334/782 bl:2.3765 bb:1.0682 rl:2.3162 rb:1.0508 dl:472-474 gd:1 +ttp: b326/782 bl:2.3150 bb:1.0601 rl:2.3162 rb:1.0509 dl:461-462 gd:1 +ttp: b317/782 bl:2.3003 bb:1.0451 rl:2.3161 rb:1.0508 dl:446-448 gd:1 +ttp: b309/782 bl:2.4092 bb:1.1055 rl:2.3166 rb:1.0511 dl:435-437 gd:1 +ttp: b301/782 bl:2.3532 bb:1.0924 rl:2.3168 rb:1.0513 dl:422-424 gd:1 +ttp: b292/782 bl:2.3378 bb:1.1069 rl:2.3169 rb:1.0516 dl:409-410 gd:1 +ttp: b284/782 bl:2.4431 bb:1.1376 rl:2.3175 rb:1.0520 dl:398-399 gd:1 +ttp: b276/782 bl:2.3895 bb:1.1045 rl:2.3178 rb:1.0522 dl:387-388 gd:1 +ttp: b268/782 bl:2.3485 bb:1.0730 rl:2.3180 rb:1.0523 dl:376-378 gd:1 +ttp: b260/782 bl:2.3736 bb:1.0814 rl:2.3182 rb:1.0524 dl:366-367 gd:1 +ttp: b252/782 bl:2.3819 bb:1.0676 rl:2.3185 rb:1.0525 dl:356-357 gd:1 +ttp: b244/782 bl:2.3278 bb:1.1077 rl:2.3185 rb:1.0527 dl:346-347 gd:1 +ttp: b236/782 bl:2.3346 bb:1.0744 rl:2.3186 rb:1.0528 dl:336-337 gd:1 +ttp: b228/782 bl:2.3376 bb:1.0883 rl:2.3187 rb:1.0529 dl:327-328 gd:1 +ttp: b220/782 bl:2.4099 bb:1.1403 rl:2.3190 rb:1.0532 dl:317-318 gd:1 +ttp: b212/782 bl:2.3641 bb:1.0792 rl:2.3191 rb:1.0533 dl:308-309 gd:1 +ttp: b204/782 bl:2.4592 bb:1.1539 rl:2.3196 rb:1.0537 dl:300-301 gd:1 +ttp: b196/782 bl:2.4502 bb:1.1181 rl:2.3201 rb:1.0539 dl:291-292 gd:1 +ttp: b188/782 bl:2.3403 bb:1.0989 rl:2.3201 rb:1.0540 dl:282-283 gd:1 +ttp: b180/782 bl:2.4318 bb:1.1141 rl:2.3205 rb:1.0542 dl:274-275 gd:1 +ttp: b172/782 bl:2.5214 bb:1.1560 rl:2.3211 rb:1.0545 dl:266-267 gd:1 +ttp: b165/782 bl:2.3523 bb:1.1171 rl:2.3212 rb:1.0547 dl:260-260 gd:1 +ttp: b156/782 bl:2.3031 bb:1.1499 rl:2.3211 rb:1.0549 dl:251-252 gd:1 +ttp: b148/782 bl:2.3261 bb:1.1006 rl:2.3211 rb:1.0550 dl:243-244 gd:1 +ttp: b139/782 bl:2.4296 bb:1.1318 rl:2.3214 rb:1.0552 dl:234-235 gd:1 +ttp: b130/782 bl:2.5736 bb:1.1794 rl:2.3221 rb:1.0555 dl:226-227 gd:1 +ttp: b122/782 bl:2.4080 bb:1.1400 rl:2.3223 rb:1.0557 dl:219-219 gd:1 +ttp: b111/782 bl:2.4033 bb:1.1718 rl:2.3225 rb:1.0560 dl:208-210 gd:1 +ttp: b104/782 bl:2.5014 bb:1.1809 rl:2.3229 rb:1.0563 dl:202-203 gd:1 +ttp: b96/782 bl:2.4703 bb:1.1993 rl:2.3232 rb:1.0566 dl:195-196 gd:1 +ttp: b88/782 bl:2.4738 bb:1.1805 rl:2.3235 rb:1.0568 dl:188-189 gd:1 +ttp: b80/782 bl:2.4545 bb:1.1440 rl:2.3237 rb:1.0570 dl:181-182 gd:1 +ttp: b72/782 bl:2.3716 bb:1.1481 rl:2.3238 rb:1.0571 dl:173-174 gd:1 +ttp: b64/782 bl:2.5189 bb:1.1489 rl:2.3242 rb:1.0573 dl:166-167 gd:1 +ttp: b56/782 bl:2.5432 bb:1.2190 rl:2.3246 rb:1.0576 dl:159-160 gd:1 +ttp: b47/782 bl:2.4404 
bb:1.1393 rl:2.3248 rb:1.0577 dl:150-151 gd:1 +ttp: b38/782 bl:2.5989 bb:1.1920 rl:2.3252 rb:1.0579 dl:141-142 gd:1 +ttp: b30/782 bl:2.5861 bb:1.2609 rl:2.3256 rb:1.0582 dl:133-134 gd:1 +ttp: b21/782 bl:2.6129 bb:1.2326 rl:2.3259 rb:1.0584 dl:123-124 gd:1 +ttp: b13/782 bl:2.6796 bb:1.2140 rl:2.3264 rb:1.0586 dl:112-114 gd:1 +ttp: b4/782 bl:2.7521 bb:1.2331 rl:2.3268 rb:1.0588 dl:93-96 gd:1 +quantized_ttt_phased val_loss:2.32050078 val_bpb:1.06037830 eval_time:472816ms +val_loss:2.32050078 +val_bpb:1.06037830 +total_eval_time:472.8s + diff --git a/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_seed314.log b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_seed314.log new file mode 100644 index 0000000000..37f8145f0b --- /dev/null +++ b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_seed314.log @@ -0,0 +1,1096 @@ +train_shards: 80 +val_tokens: 47851520 +model_params:35945671 +gptq:reserving 0.5s, effective=599500ms +warmup_cu_buckets:64,128,192,256 iters_each:3 +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +1/20000 train_loss: 9.1530 train_time: 0.0m tok/s: 17543164 +2/20000 train_loss: 12.9509 train_time: 0.0m tok/s: 8534115 +3/20000 train_loss: 10.3878 train_time: 0.0m tok/s: 8508181 +4/20000 train_loss: 8.8450 train_time: 0.0m tok/s: 8495247 +5/20000 train_loss: 8.1535 train_time: 0.0m tok/s: 8481225 +500/20000 train_loss: 2.8731 train_time: 0.8m tok/s: 8275503 +1000/20000 train_loss: 3.0935 train_time: 1.6m tok/s: 8264840 +1500/20000 train_loss: 2.9074 train_time: 2.4m tok/s: 8255447 +2000/20000 train_loss: 2.9337 train_time: 3.2m tok/s: 8252221 +layer_loop:enabled step:2200 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 2.8360 train_time: 4.2m tok/s: 7811448 +3000/20000 train_loss: 2.8509 train_time: 5.4m tok/s: 7312200 +3500/20000 train_loss: 2.8594 train_time: 6.5m tok/s: 7014403 +4000/20000 train_loss: 2.7049 train_time: 7.8m tok/s: 6749965 +4500/20000 train_loss: 2.5796 train_time: 9.0m tok/s: 6575350 +4940/20000 val_loss: 2.3679 val_bpb: 1.0820 +stopping_early: wallclock_cap train_time: 599557ms step: 4940/20000 +peak memory allocated: 41697 MiB reserved: 41722 MiB +ema:applying EMA weights +diagnostic pre-quantization post-ema val_loss:2.34131849 val_bpb:1.06989118 eval_time:108805ms +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 3.7s +Quantized weights: + gate_int8_row: blocks.attn.attn_gate_w + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int6)+lqer_asym: blocks.mlp.fc.weight + gptq (int7)+lqer_asym: tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, parallel_post_lambdas, parallel_resid_lambdas, skip_gates, skip_weights, smear_gate.weight, smear_lambda +Serialize: per-group lrzip compression... 
+Serialize: per-group compression done in 125.4s +Serialized model quantized+pergroup: 15865493 bytes +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 20.7s +diagnostic quantized val_loss:2.36058524 val_bpb:1.07869534 eval_time:113067ms +ttt_compile_warmup: starting (writes to inductor cache) +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 20.6s +ttt_compile_warmup: done in 129.2s + +val_tokens: 47851520 +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 20.7s +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (109.4s) + +beginning TTT eval timer +ttt_phased: total_docs:50000 prefix_docs:3000 suffix_docs:47000 num_phases:4 boundaries:[750, 1500, 2250, 3000] +ttp: b779/782 bl:2.2258 bb:1.0530 rl:2.2258 rb:1.0530 dl:10442-13079 gd:0 +ttp: b770/782 bl:2.2883 bb:1.0803 rl:2.2456 rb:1.0616 dl:5311-5522 gd:0 +ttp: b764/782 bl:2.2938 bb:1.0745 rl:2.2553 rb:1.0642 dl:4284-4392 gd:0 +ttpp: phase:1/4 pd:1168 gd:750 t:222.5s +tttg: c1/124 lr:0.001000 t:0.7s +tttg: c2/124 lr:0.001000 t:0.8s +tttg: c3/124 lr:0.000999 t:0.9s +tttg: c4/124 lr:0.000999 t:1.0s +tttg: c5/124 lr:0.000997 t:1.1s +tttg: c6/124 lr:0.000996 t:1.2s +tttg: c7/124 lr:0.000994 t:1.2s +tttg: c8/124 lr:0.000992 t:1.3s +tttg: c9/124 lr:0.000990 t:1.4s +tttg: c10/124 lr:0.000987 t:1.5s +tttg: c11/124 lr:0.000984 t:1.5s +tttg: c12/124 lr:0.000980 t:1.6s +tttg: c13/124 lr:0.000977 t:1.7s +tttg: c14/124 lr:0.000973 t:1.8s +tttg: c15/124 lr:0.000968 t:1.9s +tttg: c16/124 lr:0.000964 t:1.9s +tttg: c17/124 lr:0.000959 t:2.0s +tttg: c18/124 lr:0.000954 t:2.1s +tttg: c19/124 lr:0.000948 t:2.2s +tttg: c20/124 lr:0.000942 t:2.2s +tttg: c21/124 lr:0.000936 t:2.3s +tttg: c22/124 lr:0.000930 t:3.7s +tttg: c23/124 lr:0.000923 t:3.7s +tttg: c24/124 lr:0.000916 t:3.8s +tttg: c25/124 lr:0.000909 t:3.9s +tttg: c26/124 lr:0.000901 t:4.0s +tttg: c27/124 lr:0.000894 t:4.1s +tttg: c28/124 lr:0.000886 t:4.1s +tttg: c29/124 lr:0.000877 t:4.2s +tttg: c30/124 lr:0.000869 t:4.3s +tttg: c31/124 lr:0.000860 t:4.4s +tttg: c32/124 lr:0.000851 t:4.4s +tttg: c33/124 lr:0.000842 t:4.5s +tttg: c34/124 lr:0.000833 t:4.6s +tttg: c35/124 lr:0.000823 t:4.7s +tttg: c36/124 lr:0.000813 t:4.8s +tttg: c37/124 lr:0.000803 t:4.8s +tttg: c38/124 lr:0.000793 t:4.9s +tttg: c39/124 lr:0.000782 t:5.0s +tttg: c40/124 lr:0.000772 t:5.1s +tttg: c41/124 lr:0.000761 t:5.2s +tttg: c42/124 lr:0.000750 t:5.2s +tttg: c43/124 lr:0.000739 t:5.3s +tttg: c44/124 lr:0.000728 t:5.4s +tttg: c45/124 lr:0.000716 t:5.5s +tttg: c46/124 lr:0.000705 t:5.6s +tttg: c47/124 lr:0.000693 t:5.6s +tttg: c48/124 lr:0.000681 t:5.7s +tttg: c49/124 lr:0.000669 t:5.8s +tttg: c50/124 lr:0.000657 t:5.9s +tttg: c51/124 lr:0.000645 t:6.0s +tttg: c52/124 lr:0.000632 t:6.0s +tttg: c53/124 lr:0.000620 t:6.1s +tttg: c54/124 lr:0.000608 t:6.2s +tttg: c55/124 lr:0.000595 t:6.3s +tttg: c56/124 lr:0.000583 t:6.4s +tttg: c57/124 lr:0.000570 t:6.4s +tttg: c58/124 lr:0.000557 t:6.5s +tttg: c59/124 lr:0.000545 t:6.6s +tttg: c60/124 lr:0.000532 t:6.7s +tttg: c61/124 lr:0.000519 t:6.7s +tttg: c62/124 lr:0.000506 t:6.8s +tttg: c63/124 lr:0.000494 t:6.9s +tttg: c64/124 lr:0.000481 t:7.0s +tttg: c65/124 lr:0.000468 t:7.1s +tttg: c66/124 lr:0.000455 t:7.1s +tttg: c67/124 lr:0.000443 t:7.2s +tttg: c68/124 lr:0.000430 t:7.3s +tttg: c69/124 lr:0.000417 t:7.4s +tttg: c70/124 lr:0.000405 t:7.5s +tttg: c71/124 lr:0.000392 t:7.5s +tttg: c72/124 lr:0.000380 t:7.6s +tttg: c73/124 lr:0.000368 t:7.7s +tttg: c74/124 
+ttp: b758/782 bl:2.3082 bb:1.0758 rl:2.2631 rb:1.0659 dl:3634-3740 gd:0
+ttp: b752/782 bl:2.3324 bb:1.0722 rl:2.2710 rb:1.0667 dl:3222-3283 gd:0
+ttpp: phase:2/4 pd:2000 gd:1500 t:316.3s
+tttg: c1/199 lr:0.001000 t:0.1s
[... tttg c2-c197 elided: cosine decay of lr toward 0 ...]
+tttg: c198/199 lr:0.000000 t:16.0s
+ttpr: phase:2/4 t:335.1s
+ttp: b743/782 bl:2.3349 bb:1.0638 rl:2.2767 rb:1.0664 dl:2762-2805 gd:0
+ttp: b742/782 bl:2.3240 bb:1.0463 rl:2.2806 rb:1.0647 dl:2730-2762 gd:0
+ttpp: phase:3/4 pd:2704 gd:2250 t:352.7s
+tttg: c1/270 lr:0.001000 t:0.1s
[... tttg c2-c268 elided: cosine decay of lr toward 0 ...]
+tttg: c269/270 lr:0.000000 t:23.7s
+ttpr: phase:3/4 t:379.2s
+ttp: b732/782 bl:2.3732 bb:1.0929 rl:2.2868 rb:1.0666 dl:2416-2441 gd:0
+ttp: b731/782 bl:2.3400 bb:1.0436 rl:2.2901 rb:1.0651 dl:2377-2414 gd:0
+ttpp: phase:4/4 pd:3472 gd:3000 t:393.2s
+tttg: c1/325 lr:0.001000 t:0.1s
[... tttg c2-c323 elided: cosine decay of lr toward 0 ...]
+tttg: c324/325 lr:0.000000 t:26.4s
+ttpr: phase:4/4 t:422.4s
+ttp: b722/782 bl:2.3520 bb:1.0539 rl:2.2933 rb:1.0645 dl:2163-2185 gd:1
+ttp: b717/782 bl:2.2585 bb:1.0341 rl:2.2917 rb:1.0630 dl:2070-2088 gd:1
[... ttp suffix-eval records b710 through b17 elided ...]
+ttp: b9/782 bl:2.7565 bb:1.2578 rl:2.3199 rb:1.0639 dl:105-107 gd:1
+ttp: b1/782 bl:2.8435 bb:1.1836 rl:2.3203 rb:1.0640 dl:27-83 gd:1
+quantized_ttt_phased val_loss:2.32157554 val_bpb:1.06086943 eval_time:525430ms
+val_loss:2.32157554
+val_bpb:1.06086943
+total_eval_time:525.4s
+
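`val_bpb` renormalizes the token-level loss to bits per byte. The validation byte count is not printed in these logs, so the constant below is back-solved from this run's `val_loss`/`val_bpb` pair and is only an estimate:

```python
import math

VAL_TOKENS = 47_851_520   # printed in the log above
VAL_BYTES = 151_074_000   # not logged; back-solved from this run, estimate only

def loss_to_bpb(nats_per_token: float) -> float:
    """Convert mean cross-entropy in nats/token to bits/byte."""
    return (nats_per_token / math.log(2)) * VAL_TOKENS / VAL_BYTES

print(loss_to_bpb(2.32157554))  # ~1.0609, matching val_bpb above
```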
diff --git a/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_seed42.log b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_seed42.log
new file mode 100644
index 0000000000..b2ea8fb050
--- /dev/null
+++ b/records/track_10min_16mb/2026-05-09_SP8192_NEFTune_TTT128_PhasedTTT4_1.0603/train_seed42.log
@@ -0,0 +1,1093 @@
+train_shards: 80
+val_tokens: 47851520
+model_params:35945671
+gptq:reserving 0.5s, effective=599500ms
+warmup_cu_buckets:64,128,192,256 iters_each:3
+warmup_step: 1/20
+warmup_step: 2/20
+warmup_step: 3/20
+warmup_step: 4/20
+warmup_step: 5/20
+warmup_step: 6/20
+warmup_step: 10/20
+warmup_step: 20/20
+loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10]
+loop_warmup_step: 1/20
+loop_warmup_step: 2/20
+loop_warmup_step: 3/20
+loop_warmup_step: 4/20
+loop_warmup_step: 5/20
+loop_warmup_step: 6/20
+loop_warmup_step: 10/20
+loop_warmup_step: 20/20
+1/20000 train_loss: 9.1614 train_time: 0.0m tok/s: 17408241
+2/20000 train_loss: 12.9518 train_time: 0.0m tok/s: 11303725
+3/20000 train_loss: 10.4166 train_time: 0.0m tok/s: 10167188
+4/20000 train_loss: 8.9011 train_time: 0.0m tok/s: 9655817
+5/20000 train_loss: 8.1794 train_time: 0.0m tok/s: 9367708
+500/20000 train_loss: 2.8627 train_time: 0.8m tok/s: 8258284
+1000/20000 train_loss: 3.0776 train_time: 1.6m tok/s: 8227452
+1500/20000 train_loss: 2.8955 train_time: 2.4m tok/s: 8226692
+2000/20000 train_loss: 2.9307 train_time: 3.2m tok/s: 8224513
+layer_loop:enabled step:2193 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10]
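The `layer_loop` line marks the depth recurrence switching on at 35% of training: from here on, layers 3-5 are revisited so the 11 physical layers unroll into 17 executed blocks per forward pass. A minimal sketch of the schedule, with a stand-in `blocks` list and the skip connections and lane mixing omitted:

```python
# Layer orders copied from the log; once frac >= 0.35 the middle layers
# (3, 4, 5) are each executed three times in total.
ENCODER_ORDER = [0, 1, 2, 3, 4, 5, 3, 4]
DECODER_ORDER = [5, 3, 4, 5, 6, 7, 8, 9, 10]

def run_blocks(x, blocks, frac: float):
    order = (ENCODER_ORDER + DECODER_ORDER if frac >= 0.35
             else list(range(len(blocks))))
    for i in order:
        x = blocks[i](x)  # stand-in: each block is any callable
    return x

blocks = [lambda x: x + 1 for _ in range(11)]  # toy blocks for illustration
assert run_blocks(0, blocks, frac=0.5) == 17   # 17 block applications
```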
+2500/20000 train_loss: 2.8258 train_time: 4.2m tok/s: 7778403
+3000/20000 train_loss: 2.8490 train_time: 5.4m tok/s: 7308526
+3500/20000 train_loss: 2.8522 train_time: 6.5m tok/s: 7006883
+4000/20000 train_loss: 2.7033 train_time: 7.7m tok/s: 6796970
+4500/20000 train_loss: 2.5809 train_time: 8.9m tok/s: 6641713
+4976/20000 val_loss: 2.3632 val_bpb: 1.0799
+stopping_early: wallclock_cap train_time: 599564ms step: 4976/20000
+peak memory allocated: 41697 MiB reserved: 41722 MiB
+ema:applying EMA weights
+diagnostic pre-quantization post-ema val_loss:2.33690555 val_bpb:1.06787464
+GPTQ:collecting Hessians from calibration data...
+GPTQ:collected 67 Hessians in 3.7s
+Quantized weights:
+ gate_int8_row: blocks.attn.attn_gate_w
+ gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight
+ gptq (int6)+lqer_asym: blocks.mlp.fc.weight
+ gptq (int7)+lqer_asym: tok_emb.weight
+ passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, parallel_post_lambdas, parallel_resid_lambdas, skip_gates, skip_weights, smear_gate.weight, smear_lambda
+Serialize: per-group lrzip compression...
+Serialize: per-group compression done in 129.3s
+Serialized model quantized+pergroup: 15868839 bytes
+Deserialize: per-group lrzip decompression...
+Deserialize: decompression done in 18.8s
+diagnostic quantized val_loss:2.35582149 val_bpb:1.07651849 eval_time:112208ms
+ttt_compile_warmup: starting (writes to inductor cache)
+Deserialize: per-group lrzip decompression...
+Deserialize: decompression done in 18.6s
+ttt_compile_warmup: done in 118.1s
+
+val_tokens: 47851520
+Deserialize: per-group lrzip decompression...
+Deserialize: decompression done in 21.7s
+ttt_lora:warming up compile (random tokens, no val data)
+ttt_lora:compile warmup done (97.0s)
+
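Two mechanisms sit behind the quantization pass logged above: GPTQ accumulates a per-layer Hessian proxy from calibration activations (the "67 Hessians"), and LQER stores a low-rank correction of each listed tensor's quantization error. A rough numpy sketch, with rank 4 taken from this record's config and everything else illustrative:

```python
import numpy as np

def accumulate_hessian(H, x):
    """GPTQ Hessian proxy for one linear layer: H += X^T X over calib batches."""
    return H + x.T @ x                      # x: [n_tokens, d_in]

def lqer_factors(w, w_deq, rank=4):
    """Low-rank correction of the quantization error E = W - dequant(Q(W))."""
    err = w - w_deq
    u, s, vt = np.linalg.svd(err, full_matrices=False)
    a = u[:, :rank] * s[:rank]              # [d_out, rank]
    b = vt[:rank]                           # [rank, d_in]
    return a, b                             # inference: y = x @ (w_deq + a @ b).T
```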
+beginning TTT eval timer
+ttt_phased: total_docs:50000 prefix_docs:3000 suffix_docs:47000 num_phases:4 boundaries:[750, 1500, 2250, 3000]
+ttp: b781/782 bl:2.1479 bb:1.0509 rl:2.1479 rb:1.0509 dl:17258-30330 gd:0
+ttpp: phase:1/4 pd:1168 gd:750 t:220.9s
+tttg: c1/124 lr:0.001000 t:0.7s
[... tttg c2-c122 elided: cosine decay of lr toward 0 ...]
+tttg: c123/124 lr:0.000000 t:11.4s
+ttpr: phase:1/4 t:235.3s
+ttp: b758/782 bl:2.3069 bb:1.0752 rl:2.1706 rb:1.0545 dl:3634-3740 gd:0
+ttp: b753/782 bl:2.2146 bb:0.9997 rl:2.1757 rb:1.0479 dl:3284-3344 gd:0
+ttpp: phase:2/4 pd:2000 gd:1500 t:310.2s
+tttg: c1/199 lr:0.001000 t:0.1s
[... tttg c2-c197 elided: cosine decay of lr toward 0 ...]
+tttg: c198/199 lr:0.000000 t:17.2s
+ttpr: phase:2/4 t:330.3s
+ttp: b750/782 bl:2.3880 bb:1.0730 rl:2.1962 rb:1.0504 dl:3090-3149 gd:0
+ttpp: phase:3/4 pd:2704 gd:2250 t:347.7s
+tttg: c1/270 lr:0.001000 t:0.1s
[... tttg c2-c268 elided: cosine decay of lr toward 0 ...]
+tttg: c269/270 lr:0.000000 t:23.5s
+ttpr: phase:3/4 t:374.1s
t:20.5s +tttg: c236/270 lr:0.000039 t:20.6s +tttg: c237/270 lr:0.000037 t:20.7s +tttg: c238/270 lr:0.000035 t:20.8s +tttg: c239/270 lr:0.000032 t:20.9s +tttg: c240/270 lr:0.000030 t:21.0s +tttg: c241/270 lr:0.000028 t:21.1s +tttg: c242/270 lr:0.000026 t:21.2s +tttg: c243/270 lr:0.000025 t:21.2s +tttg: c244/270 lr:0.000023 t:21.3s +tttg: c245/270 lr:0.000021 t:21.4s +tttg: c246/270 lr:0.000020 t:21.5s +tttg: c247/270 lr:0.000018 t:21.6s +tttg: c248/270 lr:0.000016 t:21.7s +tttg: c249/270 lr:0.000015 t:21.8s +tttg: c250/270 lr:0.000014 t:21.8s +tttg: c251/270 lr:0.000012 t:21.9s +tttg: c252/270 lr:0.000011 t:22.0s +tttg: c253/270 lr:0.000010 t:22.1s +tttg: c254/270 lr:0.000009 t:22.2s +tttg: c255/270 lr:0.000008 t:22.3s +tttg: c256/270 lr:0.000007 t:22.4s +tttg: c257/270 lr:0.000006 t:22.4s +tttg: c258/270 lr:0.000005 t:22.5s +tttg: c259/270 lr:0.000004 t:22.6s +tttg: c260/270 lr:0.000003 t:22.7s +tttg: c261/270 lr:0.000003 t:22.8s +tttg: c262/270 lr:0.000002 t:22.9s +tttg: c263/270 lr:0.000002 t:23.0s +tttg: c264/270 lr:0.000001 t:23.0s +tttg: c265/270 lr:0.000001 t:23.1s +tttg: c266/270 lr:0.000001 t:23.2s +tttg: c267/270 lr:0.000000 t:23.3s +tttg: c268/270 lr:0.000000 t:23.4s +tttg: c269/270 lr:0.000000 t:23.5s +ttpr: phase:3/4 t:374.1s +ttp: b738/782 bl:2.3128 bb:1.0473 rl:2.2050 rb:1.0502 dl:2583-2618 gd:0 +ttpp: phase:4/4 pd:3472 gd:3000 t:388.2s +tttg: c1/325 lr:0.001000 t:0.1s +tttg: c2/325 lr:0.001000 t:0.2s +tttg: c3/325 lr:0.001000 t:0.2s +tttg: c4/325 lr:0.001000 t:0.3s +tttg: c5/325 lr:0.001000 t:0.4s +tttg: c6/325 lr:0.000999 t:0.5s +tttg: c7/325 lr:0.000999 t:0.6s +tttg: c8/325 lr:0.000999 t:0.7s +tttg: c9/325 lr:0.000998 t:0.8s +tttg: c10/325 lr:0.000998 t:0.9s +tttg: c11/325 lr:0.000998 t:1.0s +tttg: c12/325 lr:0.000997 t:1.0s +tttg: c13/325 lr:0.000997 t:1.1s +tttg: c14/325 lr:0.000996 t:1.2s +tttg: c15/325 lr:0.000995 t:1.3s +tttg: c16/325 lr:0.000995 t:1.4s +tttg: c17/325 lr:0.000994 t:1.5s +tttg: c18/325 lr:0.000993 t:1.6s +tttg: c19/325 lr:0.000992 t:1.7s +tttg: c20/325 lr:0.000992 t:1.7s +tttg: c21/325 lr:0.000991 t:1.8s +tttg: c22/325 lr:0.000990 t:1.9s +tttg: c23/325 lr:0.000989 t:2.0s +tttg: c24/325 lr:0.000988 t:2.1s +tttg: c25/325 lr:0.000987 t:2.2s +tttg: c26/325 lr:0.000985 t:2.3s +tttg: c27/325 lr:0.000984 t:2.4s +tttg: c28/325 lr:0.000983 t:2.4s +tttg: c29/325 lr:0.000982 t:2.5s +tttg: c30/325 lr:0.000980 t:2.6s +tttg: c31/325 lr:0.000979 t:2.7s +tttg: c32/325 lr:0.000978 t:2.8s +tttg: c33/325 lr:0.000976 t:2.9s +tttg: c34/325 lr:0.000975 t:3.0s +tttg: c35/325 lr:0.000973 t:3.1s +tttg: c36/325 lr:0.000971 t:3.1s +tttg: c37/325 lr:0.000970 t:3.2s +tttg: c38/325 lr:0.000968 t:3.3s +tttg: c39/325 lr:0.000966 t:3.4s +tttg: c40/325 lr:0.000965 t:3.5s +tttg: c41/325 lr:0.000963 t:3.6s +tttg: c42/325 lr:0.000961 t:3.7s +tttg: c43/325 lr:0.000959 t:3.8s +tttg: c44/325 lr:0.000957 t:3.8s +tttg: c45/325 lr:0.000955 t:3.9s +tttg: c46/325 lr:0.000953 t:4.0s +tttg: c47/325 lr:0.000951 t:4.1s +tttg: c48/325 lr:0.000949 t:4.2s +tttg: c49/325 lr:0.000947 t:4.3s +tttg: c50/325 lr:0.000945 t:4.4s +tttg: c51/325 lr:0.000942 t:4.5s +tttg: c52/325 lr:0.000940 t:4.5s +tttg: c53/325 lr:0.000938 t:4.6s +tttg: c54/325 lr:0.000935 t:4.7s +tttg: c55/325 lr:0.000933 t:4.8s +tttg: c56/325 lr:0.000931 t:4.9s +tttg: c57/325 lr:0.000928 t:5.0s +tttg: c58/325 lr:0.000926 t:5.1s +tttg: c59/325 lr:0.000923 t:5.2s +tttg: c60/325 lr:0.000920 t:5.2s +tttg: c61/325 lr:0.000918 t:5.3s +tttg: c62/325 lr:0.000915 t:5.4s +tttg: c63/325 lr:0.000912 t:5.5s +tttg: c64/325 lr:0.000910 t:5.6s +tttg: c65/325 
lr:0.000907 t:5.7s +tttg: c66/325 lr:0.000904 t:5.8s +tttg: c67/325 lr:0.000901 t:5.9s +tttg: c68/325 lr:0.000898 t:5.9s +tttg: c69/325 lr:0.000895 t:6.0s +tttg: c70/325 lr:0.000892 t:6.1s +tttg: c71/325 lr:0.000889 t:6.2s +tttg: c72/325 lr:0.000886 t:6.3s +tttg: c73/325 lr:0.000883 t:6.4s +tttg: c74/325 lr:0.000880 t:6.5s +tttg: c75/325 lr:0.000877 t:6.6s +tttg: c76/325 lr:0.000874 t:6.6s +tttg: c77/325 lr:0.000870 t:6.7s +tttg: c78/325 lr:0.000867 t:6.8s +tttg: c79/325 lr:0.000864 t:6.9s +tttg: c80/325 lr:0.000860 t:7.0s +tttg: c81/325 lr:0.000857 t:7.1s +tttg: c82/325 lr:0.000854 t:7.2s +tttg: c83/325 lr:0.000850 t:7.2s +tttg: c84/325 lr:0.000847 t:7.3s +tttg: c85/325 lr:0.000843 t:7.4s +tttg: c86/325 lr:0.000840 t:7.5s +tttg: c87/325 lr:0.000836 t:7.6s +tttg: c88/325 lr:0.000832 t:7.7s +tttg: c89/325 lr:0.000829 t:7.8s +tttg: c90/325 lr:0.000825 t:7.9s +tttg: c91/325 lr:0.000821 t:7.9s +tttg: c92/325 lr:0.000818 t:8.0s +tttg: c93/325 lr:0.000814 t:8.1s +tttg: c94/325 lr:0.000810 t:8.2s +tttg: c95/325 lr:0.000806 t:8.3s +tttg: c96/325 lr:0.000802 t:8.4s +tttg: c97/325 lr:0.000799 t:8.5s +tttg: c98/325 lr:0.000795 t:8.6s +tttg: c99/325 lr:0.000791 t:8.6s +tttg: c100/325 lr:0.000787 t:8.7s +tttg: c101/325 lr:0.000783 t:8.8s +tttg: c102/325 lr:0.000779 t:8.9s +tttg: c103/325 lr:0.000775 t:9.0s +tttg: c104/325 lr:0.000771 t:9.1s +tttg: c105/325 lr:0.000767 t:9.2s +tttg: c106/325 lr:0.000762 t:9.3s +tttg: c107/325 lr:0.000758 t:9.3s +tttg: c108/325 lr:0.000754 t:9.4s +tttg: c109/325 lr:0.000750 t:9.5s +tttg: c110/325 lr:0.000746 t:9.6s +tttg: c111/325 lr:0.000742 t:9.7s +tttg: c112/325 lr:0.000737 t:9.8s +tttg: c113/325 lr:0.000733 t:9.9s +tttg: c114/325 lr:0.000729 t:9.9s +tttg: c115/325 lr:0.000724 t:10.0s +tttg: c116/325 lr:0.000720 t:10.1s +tttg: c117/325 lr:0.000716 t:10.2s +tttg: c118/325 lr:0.000711 t:10.3s +tttg: c119/325 lr:0.000707 t:10.4s +tttg: c120/325 lr:0.000702 t:10.5s +tttg: c121/325 lr:0.000698 t:10.6s +tttg: c122/325 lr:0.000694 t:10.6s +tttg: c123/325 lr:0.000689 t:10.7s +tttg: c124/325 lr:0.000685 t:10.8s +tttg: c125/325 lr:0.000680 t:10.9s +tttg: c126/325 lr:0.000676 t:11.0s +tttg: c127/325 lr:0.000671 t:11.1s +tttg: c128/325 lr:0.000666 t:11.2s +tttg: c129/325 lr:0.000662 t:11.3s +tttg: c130/325 lr:0.000657 t:11.4s +tttg: c131/325 lr:0.000653 t:11.4s +tttg: c132/325 lr:0.000648 t:11.5s +tttg: c133/325 lr:0.000643 t:11.6s +tttg: c134/325 lr:0.000639 t:11.7s +tttg: c135/325 lr:0.000634 t:11.8s +tttg: c136/325 lr:0.000629 t:11.9s +tttg: c137/325 lr:0.000625 t:12.0s +tttg: c138/325 lr:0.000620 t:12.0s +tttg: c139/325 lr:0.000615 t:12.1s +tttg: c140/325 lr:0.000611 t:12.2s +tttg: c141/325 lr:0.000606 t:12.3s +tttg: c142/325 lr:0.000601 t:12.4s +tttg: c143/325 lr:0.000596 t:12.5s +tttg: c144/325 lr:0.000592 t:12.6s +tttg: c145/325 lr:0.000587 t:12.7s +tttg: c146/325 lr:0.000582 t:12.7s +tttg: c147/325 lr:0.000577 t:12.8s +tttg: c148/325 lr:0.000572 t:12.9s +tttg: c149/325 lr:0.000568 t:13.0s +tttg: c150/325 lr:0.000563 t:13.1s +tttg: c151/325 lr:0.000558 t:13.2s +tttg: c152/325 lr:0.000553 t:13.3s +tttg: c153/325 lr:0.000548 t:13.4s +tttg: c154/325 lr:0.000544 t:13.5s +tttg: c155/325 lr:0.000539 t:13.5s +tttg: c156/325 lr:0.000534 t:13.6s +tttg: c157/325 lr:0.000529 t:13.7s +tttg: c158/325 lr:0.000524 t:13.8s +tttg: c159/325 lr:0.000519 t:13.9s +tttg: c160/325 lr:0.000515 t:14.0s +tttg: c161/325 lr:0.000510 t:14.1s +tttg: c162/325 lr:0.000505 t:14.2s +tttg: c163/325 lr:0.000500 t:14.2s +tttg: c164/325 lr:0.000495 t:14.3s +tttg: c165/325 lr:0.000490 t:14.4s +tttg: c166/325 
lr:0.000485 t:14.5s +tttg: c167/325 lr:0.000481 t:14.6s +tttg: c168/325 lr:0.000476 t:14.7s +tttg: c169/325 lr:0.000471 t:14.7s +tttg: c170/325 lr:0.000466 t:14.8s +tttg: c171/325 lr:0.000461 t:14.9s +tttg: c172/325 lr:0.000456 t:15.0s +tttg: c173/325 lr:0.000452 t:15.1s +tttg: c174/325 lr:0.000447 t:15.2s +tttg: c175/325 lr:0.000442 t:15.3s +tttg: c176/325 lr:0.000437 t:15.3s +tttg: c177/325 lr:0.000432 t:15.4s +tttg: c178/325 lr:0.000428 t:15.5s +tttg: c179/325 lr:0.000423 t:15.6s +tttg: c180/325 lr:0.000418 t:15.7s +tttg: c181/325 lr:0.000413 t:15.8s +tttg: c182/325 lr:0.000408 t:15.9s +tttg: c183/325 lr:0.000404 t:16.0s +tttg: c184/325 lr:0.000399 t:16.1s +tttg: c185/325 lr:0.000394 t:16.1s +tttg: c186/325 lr:0.000389 t:16.2s +tttg: c187/325 lr:0.000385 t:16.3s +tttg: c188/325 lr:0.000380 t:16.4s +tttg: c189/325 lr:0.000375 t:16.5s +tttg: c190/325 lr:0.000371 t:16.6s +tttg: c191/325 lr:0.000366 t:16.7s +tttg: c192/325 lr:0.000361 t:16.8s +tttg: c193/325 lr:0.000357 t:16.8s +tttg: c194/325 lr:0.000352 t:16.9s +tttg: c195/325 lr:0.000347 t:17.0s +tttg: c196/325 lr:0.000343 t:17.1s +tttg: c197/325 lr:0.000338 t:17.2s +tttg: c198/325 lr:0.000334 t:17.3s +tttg: c199/325 lr:0.000329 t:17.4s +tttg: c200/325 lr:0.000324 t:17.5s +tttg: c201/325 lr:0.000320 t:17.5s +tttg: c202/325 lr:0.000315 t:17.6s +tttg: c203/325 lr:0.000311 t:17.7s +tttg: c204/325 lr:0.000306 t:17.8s +tttg: c205/325 lr:0.000302 t:17.9s +tttg: c206/325 lr:0.000298 t:18.0s +tttg: c207/325 lr:0.000293 t:18.1s +tttg: c208/325 lr:0.000289 t:18.2s +tttg: c209/325 lr:0.000284 t:18.2s +tttg: c210/325 lr:0.000280 t:18.3s +tttg: c211/325 lr:0.000276 t:18.4s +tttg: c212/325 lr:0.000271 t:18.5s +tttg: c213/325 lr:0.000267 t:18.6s +tttg: c214/325 lr:0.000263 t:18.7s +tttg: c215/325 lr:0.000258 t:18.8s +tttg: c216/325 lr:0.000254 t:18.9s +tttg: c217/325 lr:0.000250 t:18.9s +tttg: c218/325 lr:0.000246 t:19.0s +tttg: c219/325 lr:0.000242 t:19.1s +tttg: c220/325 lr:0.000238 t:19.2s +tttg: c221/325 lr:0.000233 t:19.3s +tttg: c222/325 lr:0.000229 t:19.4s +tttg: c223/325 lr:0.000225 t:19.5s +tttg: c224/325 lr:0.000221 t:19.6s +tttg: c225/325 lr:0.000217 t:19.6s +tttg: c226/325 lr:0.000213 t:19.7s +tttg: c227/325 lr:0.000209 t:19.8s +tttg: c228/325 lr:0.000205 t:19.9s +tttg: c229/325 lr:0.000201 t:20.0s +tttg: c230/325 lr:0.000198 t:20.1s +tttg: c231/325 lr:0.000194 t:20.2s +tttg: c232/325 lr:0.000190 t:20.3s +tttg: c233/325 lr:0.000186 t:20.3s +tttg: c234/325 lr:0.000182 t:20.4s +tttg: c235/325 lr:0.000179 t:20.5s +tttg: c236/325 lr:0.000175 t:20.6s +tttg: c237/325 lr:0.000171 t:20.7s +tttg: c238/325 lr:0.000168 t:20.8s +tttg: c239/325 lr:0.000164 t:20.9s +tttg: c240/325 lr:0.000160 t:21.0s +tttg: c241/325 lr:0.000157 t:21.1s +tttg: c242/325 lr:0.000153 t:21.2s +tttg: c243/325 lr:0.000150 t:21.2s +tttg: c244/325 lr:0.000146 t:21.3s +tttg: c245/325 lr:0.000143 t:21.4s +tttg: c246/325 lr:0.000140 t:21.5s +tttg: c247/325 lr:0.000136 t:21.6s +tttg: c248/325 lr:0.000133 t:21.7s +tttg: c249/325 lr:0.000130 t:21.8s +tttg: c250/325 lr:0.000126 t:21.9s +tttg: c251/325 lr:0.000123 t:21.9s +tttg: c252/325 lr:0.000120 t:22.0s +tttg: c253/325 lr:0.000117 t:22.1s +tttg: c254/325 lr:0.000114 t:22.2s +tttg: c255/325 lr:0.000111 t:22.3s +tttg: c256/325 lr:0.000108 t:22.4s +tttg: c257/325 lr:0.000105 t:22.5s +tttg: c258/325 lr:0.000102 t:22.6s +tttg: c259/325 lr:0.000099 t:22.6s +tttg: c260/325 lr:0.000096 t:22.7s +tttg: c261/325 lr:0.000093 t:22.8s +tttg: c262/325 lr:0.000090 t:22.9s +tttg: c263/325 lr:0.000088 t:23.0s +tttg: c264/325 lr:0.000085 t:23.1s +tttg: 
c265/325 lr:0.000082 t:23.2s +tttg: c266/325 lr:0.000080 t:23.3s +tttg: c267/325 lr:0.000077 t:23.3s +tttg: c268/325 lr:0.000074 t:23.4s +tttg: c269/325 lr:0.000072 t:23.5s +tttg: c270/325 lr:0.000069 t:23.6s +tttg: c271/325 lr:0.000067 t:23.7s +tttg: c272/325 lr:0.000065 t:23.8s +tttg: c273/325 lr:0.000062 t:23.9s +tttg: c274/325 lr:0.000060 t:24.0s +tttg: c275/325 lr:0.000058 t:24.0s +tttg: c276/325 lr:0.000055 t:24.1s +tttg: c277/325 lr:0.000053 t:24.2s +tttg: c278/325 lr:0.000051 t:24.3s +tttg: c279/325 lr:0.000049 t:24.4s +tttg: c280/325 lr:0.000047 t:24.5s +tttg: c281/325 lr:0.000045 t:24.6s +tttg: c282/325 lr:0.000043 t:24.7s +tttg: c283/325 lr:0.000041 t:24.7s +tttg: c284/325 lr:0.000039 t:24.8s +tttg: c285/325 lr:0.000037 t:24.9s +tttg: c286/325 lr:0.000035 t:25.0s +tttg: c287/325 lr:0.000034 t:25.1s +tttg: c288/325 lr:0.000032 t:25.2s +tttg: c289/325 lr:0.000030 t:25.3s +tttg: c290/325 lr:0.000029 t:25.4s +tttg: c291/325 lr:0.000027 t:25.5s +tttg: c292/325 lr:0.000025 t:25.5s +tttg: c293/325 lr:0.000024 t:25.6s +tttg: c294/325 lr:0.000022 t:25.7s +tttg: c295/325 lr:0.000021 t:25.8s +tttg: c296/325 lr:0.000020 t:25.9s +tttg: c297/325 lr:0.000018 t:26.0s +tttg: c298/325 lr:0.000017 t:26.1s +tttg: c299/325 lr:0.000016 t:26.2s +tttg: c300/325 lr:0.000015 t:26.2s +tttg: c301/325 lr:0.000013 t:26.3s +tttg: c302/325 lr:0.000012 t:26.4s +tttg: c303/325 lr:0.000011 t:26.5s +tttg: c304/325 lr:0.000010 t:26.6s +tttg: c305/325 lr:0.000009 t:26.7s +tttg: c306/325 lr:0.000008 t:26.8s +tttg: c307/325 lr:0.000008 t:26.9s +tttg: c308/325 lr:0.000007 t:26.9s +tttg: c309/325 lr:0.000006 t:27.0s +tttg: c310/325 lr:0.000005 t:27.1s +tttg: c311/325 lr:0.000005 t:27.2s +tttg: c312/325 lr:0.000004 t:27.3s +tttg: c313/325 lr:0.000003 t:27.4s +tttg: c314/325 lr:0.000003 t:27.5s +tttg: c315/325 lr:0.000002 t:27.6s +tttg: c316/325 lr:0.000002 t:27.6s +tttg: c317/325 lr:0.000002 t:27.7s +tttg: c318/325 lr:0.000001 t:27.8s +tttg: c319/325 lr:0.000001 t:27.9s +tttg: c320/325 lr:0.000001 t:28.0s +tttg: c321/325 lr:0.000000 t:28.1s +tttg: c322/325 lr:0.000000 t:28.2s +tttg: c323/325 lr:0.000000 t:28.3s +tttg: c324/325 lr:0.000000 t:28.3s +ttpr: phase:4/4 t:419.5s +ttp: b721/782 bl:2.3097 bb:1.0257 rl:2.2111 rb:1.0487 dl:2144-2163 gd:1 +ttp: b717/782 bl:2.2565 bb:1.0332 rl:2.2135 rb:1.0478 dl:2070-2088 gd:1 +ttp: b708/782 bl:2.3054 bb:1.0312 rl:2.2178 rb:1.0470 dl:1924-1937 gd:1 +ttp: b701/782 bl:2.3059 bb:1.0339 rl:2.2216 rb:1.0464 dl:1835-1847 gd:1 +ttp: b691/782 bl:2.4511 bb:1.0671 rl:2.2305 rb:1.0473 dl:1725-1737 gd:1 +ttp: b683/782 bl:2.2662 bb:1.0549 rl:2.2318 rb:1.0475 dl:1646-1657 gd:1 +ttp: b676/782 bl:2.3323 bb:1.0491 rl:2.2352 rb:1.0476 dl:1586-1595 gd:1 +ttp: b669/782 bl:2.3279 bb:1.0408 rl:2.2380 rb:1.0474 dl:1530-1537 gd:1 +ttp: b661/782 bl:2.3970 bb:1.0836 rl:2.2427 rb:1.0485 dl:1474-1480 gd:1 +ttp: b655/782 bl:2.3774 bb:1.0427 rl:2.2464 rb:1.0483 dl:1432-1439 gd:1 +ttp: b647/782 bl:2.2746 bb:1.0323 rl:2.2471 rb:1.0479 dl:1382-1387 gd:1 +ttp: b639/782 bl:2.3083 bb:1.0308 rl:2.2486 rb:1.0474 dl:1331-1337 gd:1 +ttp: b631/782 bl:2.3105 bb:1.0061 rl:2.2500 rb:1.0464 dl:1285-1290 gd:1 +ttp: b623/782 bl:2.3319 bb:1.0176 rl:2.2518 rb:1.0458 dl:1243-1249 gd:1 +ttp: b615/782 bl:2.3166 bb:1.0460 rl:2.2531 rb:1.0458 dl:1200-1205 gd:1 +ttp: b607/782 bl:2.3496 bb:1.0511 rl:2.2550 rb:1.0459 dl:1164-1168 gd:1 +ttp: b599/782 bl:2.3617 bb:1.0684 rl:2.2570 rb:1.0463 dl:1129-1133 gd:1 +ttp: b591/782 bl:2.3050 bb:1.0315 rl:2.2578 rb:1.0460 dl:1093-1098 gd:1 +ttp: b583/782 bl:2.3247 bb:1.0330 rl:2.2589 rb:1.0458 
dl:1060-1064 gd:1 +ttp: b569/782 bl:2.3007 bb:1.0403 rl:2.2596 rb:1.0457 dl:1007-1010 gd:1 +ttp: b561/782 bl:2.2434 bb:1.0119 rl:2.2594 rb:1.0452 dl:979-983 gd:1 +ttp: b553/782 bl:2.2828 bb:1.0292 rl:2.2597 rb:1.0450 dl:952-955 gd:1 +ttp: b544/782 bl:2.3426 bb:1.0675 rl:2.2608 rb:1.0453 dl:924-927 gd:1 +ttp: b536/782 bl:2.3165 bb:1.0432 rl:2.2616 rb:1.0452 dl:899-902 gd:1 +ttp: b528/782 bl:2.3337 bb:1.0431 rl:2.2625 rb:1.0452 dl:875-878 gd:1 +ttp: b519/782 bl:2.2941 bb:1.0408 rl:2.2629 rb:1.0452 dl:850-852 gd:1 +ttp: b513/782 bl:2.3642 bb:1.0379 rl:2.2641 rb:1.0451 dl:832-835 gd:1 +ttp: b506/782 bl:2.3436 bb:1.0119 rl:2.2650 rb:1.0447 dl:812-814 gd:1 +ttp: b499/782 bl:2.3356 bb:1.0546 rl:2.2658 rb:1.0448 dl:794-796 gd:1 +ttp: b492/782 bl:2.2742 bb:1.0329 rl:2.2659 rb:1.0447 dl:776-778 gd:1 +ttp: b483/782 bl:2.2547 bb:1.0287 rl:2.2657 rb:1.0445 dl:754-756 gd:1 +ttp: b475/782 bl:2.3650 bb:1.0554 rl:2.2667 rb:1.0446 dl:735-737 gd:1 +ttp: b467/782 bl:2.3458 bb:1.0514 rl:2.2675 rb:1.0447 dl:717-719 gd:1 +ttp: b460/782 bl:2.2490 bb:1.0521 rl:2.2673 rb:1.0447 dl:701-703 gd:1 +ttp: b452/782 bl:2.2615 bb:1.0121 rl:2.2673 rb:1.0444 dl:685-687 gd:1 +ttp: b444/782 bl:2.3069 bb:1.0628 rl:2.2676 rb:1.0446 dl:668-670 gd:1 +ttp: b437/782 bl:2.2899 bb:1.0536 rl:2.2678 rb:1.0447 dl:653-655 gd:1 +ttp: b429/782 bl:2.2392 bb:1.0213 rl:2.2676 rb:1.0445 dl:638-640 gd:1 +ttp: b421/782 bl:2.2886 bb:1.0020 rl:2.2677 rb:1.0441 dl:622-624 gd:1 +ttp: b414/782 bl:2.1991 bb:1.0069 rl:2.2672 rb:1.0438 dl:609-611 gd:1 +ttp: b405/782 bl:2.3494 bb:1.0543 rl:2.2678 rb:1.0439 dl:592-593 gd:1 +ttp: b397/782 bl:2.3462 bb:1.0406 rl:2.2684 rb:1.0439 dl:577-579 gd:1 +ttp: b390/782 bl:2.3430 bb:1.0556 rl:2.2689 rb:1.0440 dl:564-566 gd:1 +ttp: b382/782 bl:2.2948 bb:1.0842 rl:2.2691 rb:1.0442 dl:550-552 gd:1 +ttp: b374/782 bl:2.2968 bb:1.0354 rl:2.2692 rb:1.0442 dl:537-538 gd:1 +ttp: b366/782 bl:2.3354 bb:1.0699 rl:2.2697 rb:1.0443 dl:524-525 gd:1 +ttp: b359/782 bl:2.2457 bb:1.0312 rl:2.2695 rb:1.0443 dl:512-513 gd:1 +ttp: b351/782 bl:2.3560 bb:1.0785 rl:2.2700 rb:1.0445 dl:498-499 gd:1 +ttp: b343/782 bl:2.2179 bb:1.0437 rl:2.2697 rb:1.0445 dl:486-488 gd:1 +ttp: b335/782 bl:2.3671 bb:1.0723 rl:2.2703 rb:1.0446 dl:474-476 gd:1 +ttp: b327/782 bl:2.3332 bb:1.0849 rl:2.2706 rb:1.0448 dl:462-463 gd:1 +ttp: b319/782 bl:2.3912 bb:1.0782 rl:2.2712 rb:1.0450 dl:450-451 gd:1 +ttp: b311/782 bl:2.3423 bb:1.0796 rl:2.2716 rb:1.0452 dl:438-439 gd:1 +ttp: b303/782 bl:2.3943 bb:1.0921 rl:2.2722 rb:1.0454 dl:426-427 gd:1 +ttp: b295/782 bl:2.2619 bb:1.0612 rl:2.2721 rb:1.0455 dl:414-415 gd:1 +ttp: b287/782 bl:2.4051 bb:1.0958 rl:2.2728 rb:1.0457 dl:402-403 gd:1 +ttp: b279/782 bl:2.3125 bb:1.0927 rl:2.2729 rb:1.0459 dl:391-392 gd:1 +ttp: b272/782 bl:2.3627 bb:1.0913 rl:2.2733 rb:1.0461 dl:382-383 gd:1 +ttp: b264/782 bl:2.4140 bb:1.1000 rl:2.2739 rb:1.0464 dl:371-372 gd:1 +ttp: b257/782 bl:2.4435 bb:1.1115 rl:2.2746 rb:1.0466 dl:362-364 gd:1 +ttp: b249/782 bl:2.4499 bb:1.1035 rl:2.2753 rb:1.0469 dl:352-354 gd:1 +ttp: b242/782 bl:2.3766 bb:1.1001 rl:2.2757 rb:1.0471 dl:344-345 gd:1 +ttp: b234/782 bl:2.4117 bb:1.1428 rl:2.2762 rb:1.0474 dl:334-335 gd:1 +ttp: b226/782 bl:2.3637 bb:1.0963 rl:2.2765 rb:1.0476 dl:324-325 gd:1 +ttp: b218/782 bl:2.4565 bb:1.1080 rl:2.2771 rb:1.0478 dl:315-316 gd:1 +ttp: b210/782 bl:2.2588 bb:1.0830 rl:2.2771 rb:1.0479 dl:306-307 gd:1 +ttp: b202/782 bl:2.3550 bb:1.1022 rl:2.2773 rb:1.0481 dl:298-299 gd:1 +ttp: b194/782 bl:2.4387 bb:1.1172 rl:2.2778 rb:1.0483 dl:289-290 gd:1 +ttp: b185/782 bl:2.4289 bb:1.1137 rl:2.2783 rb:1.0485 
dl:279-280 gd:1 +ttp: b178/782 bl:2.3419 bb:1.0956 rl:2.2785 rb:1.0486 dl:272-273 gd:1 +ttp: b170/782 bl:2.3755 bb:1.1264 rl:2.2787 rb:1.0488 dl:264-265 gd:1 +ttp: b162/782 bl:2.4069 bb:1.1206 rl:2.2791 rb:1.0490 dl:256-257 gd:1 +ttp: b154/782 bl:2.4643 bb:1.2019 rl:2.2796 rb:1.0494 dl:249-250 gd:1 +ttp: b147/782 bl:2.4604 bb:1.1190 rl:2.2801 rb:1.0496 dl:242-243 gd:1 +ttp: b138/782 bl:2.3875 bb:1.1108 rl:2.2803 rb:1.0498 dl:233-234 gd:1 +ttp: b131/782 bl:2.3911 bb:1.1545 rl:2.2806 rb:1.0500 dl:227-228 gd:1 +ttp: b123/782 bl:2.3898 bb:1.1620 rl:2.2809 rb:1.0502 dl:219-220 gd:1 +ttp: b115/782 bl:2.4656 bb:1.1669 rl:2.2813 rb:1.0505 dl:212-213 gd:1 +ttp: b107/782 bl:2.4249 bb:1.1613 rl:2.2816 rb:1.0507 dl:205-206 gd:1 +ttp: b99/782 bl:2.4965 bb:1.1758 rl:2.2820 rb:1.0510 dl:198-199 gd:1 +ttp: b91/782 bl:2.4528 bb:1.1497 rl:2.2824 rb:1.0512 dl:190-191 gd:1 +ttp: b84/782 bl:2.5232 bb:1.1998 rl:2.2828 rb:1.0515 dl:184-185 gd:1 +ttp: b76/782 bl:2.4922 bb:1.1705 rl:2.2832 rb:1.0517 dl:177-178 gd:1 +ttp: b69/782 bl:2.4626 bb:1.2021 rl:2.2836 rb:1.0519 dl:171-172 gd:1 +ttp: b61/782 bl:2.4583 bb:1.2168 rl:2.2839 rb:1.0522 dl:164-165 gd:1 +ttp: b53/782 bl:2.5088 bb:1.1954 rl:2.2842 rb:1.0524 dl:156-157 gd:1 +ttp: b45/782 bl:2.4561 bb:1.1752 rl:2.2845 rb:1.0526 dl:148-149 gd:1 +ttp: b37/782 bl:2.5656 bb:1.2093 rl:2.2849 rb:1.0528 dl:140-141 gd:1 +ttp: b30/782 bl:2.5892 bb:1.2625 rl:2.2853 rb:1.0531 dl:133-134 gd:1 +ttp: b22/782 bl:2.5537 bb:1.1954 rl:2.2857 rb:1.0533 dl:124-126 gd:1 +ttp: b14/782 bl:2.5841 bb:1.1796 rl:2.2860 rb:1.0534 dl:114-115 gd:1 +ttp: b6/782 bl:2.7095 bb:1.2081 rl:2.2865 rb:1.0536 dl:99-101 gd:1 +quantized_ttt_phased val_loss:2.31922628 val_bpb:1.05979590 eval_time:527836ms +val_loss:2.31922628 +val_bpb:1.05979590 +total_eval_time:527.8s +
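+The `lr` values in the `tttg:` lines above are consistent with a plain cosine decay from 1e-3 to 0 within each TTT phase. A minimal sketch, reconstructed from the logged trace rather than taken from the eval harness (the function name and the 1-indexed step convention are assumptions):
+
+```python
+import math
+
+def ttt_phase_lr(step: int, total_steps: int, peak_lr: float = 1e-3) -> float:
+    """Cosine decay; `step` is 1-indexed as in the `tttg: c<step>/<total>` lines."""
+    frac = (step - 1) / (total_steps - 1)
+    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * frac))
+
+# Reproduces the logged phase-4 values:
+assert f"{ttt_phase_lr(1, 325):.6f}" == "0.001000"    # tttg: c1/325
+assert f"{ttt_phase_lr(82, 325):.6f}" == "0.000854"   # tttg: c82/325
+assert f"{ttt_phase_lr(163, 325):.6f}" == "0.000500"  # tttg: c163/325
+```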
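+
+The final `val_loss` (nats/token) and `val_bpb` pair is consistent with the usual conversion bpb = val_loss / (ln 2 × avg bytes per token); a quick check (the ≈3.157 bytes/token figure is inferred from these two numbers, not logged anywhere):
+
+```python
+import math
+
+val_loss, val_bpb = 2.31922628, 1.05979590
+bytes_per_token = val_loss / (math.log(2) * val_bpb)
+print(f"~{bytes_per_token:.3f} bytes/token")  # ~3.157 under the sp8192 tokenizer
+```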