diff --git a/speed3r/README.md b/speed3r/README.md
new file mode 100755
index 0000000..e84d02d
--- /dev/null
+++ b/speed3r/README.md
@@ -0,0 +1,16 @@
+# Nerfies
+
+This is the repository that contains source code for the [Nerfies website](https://nerfies.github.io).
+
+If you find Nerfies useful for your work please cite:
+```
+@article{park2021nerfies,
+ author = {Park, Keunhong and Sinha, Utkarsh and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Seitz, Steven M. and Martin-Brualla, Ricardo},
+ title = {Nerfies: Deformable Neural Radiance Fields},
+ journal = {ICCV},
+ year = {2021},
+}
+```
+
+# Website License
+
+This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
diff --git a/speed3r/Three.html b/speed3r/Three.html
new file mode 100644
index 0000000..5f42fea
--- /dev/null
+++ b/speed3r/Three.html
@@ -0,0 +1,773 @@
+
+While recent feed-forward 3D reconstruction models accelerate 3D reconstruction by jointly inferring dense geometry and camera poses in a single pass, their reliance on dense attention imposes quadratic complexity, creating a computational bottleneck that severely limits inference speed. To resolve this, we introduce Speed3R, an end-to-end trainable model inspired by the core principle of Structure-from-Motion that a sparse set of keypoints is sufficient for robust estimation. Speed3R features a dual-branch attention mechanism in which a compression branch creates a coarse contextual prior to guide a selection branch, which performs fine-grained attention only on the most informative image tokens. This strategy mimics the efficiency of traditional keypoint matching, achieving a 12.4x inference speedup on 1000-view sequences while introducing a minimal, controlled trade-off in accuracy. Validated on standard benchmarks with both VGGT and \(\Pi^3\) backbones, our method delivers high-quality reconstructions at a fraction of the computational cost, paving the way for efficient large-scale scene modeling.
+
+Our model processes a sequence of input images through a shared feature encoder. The resulting tokens are then processed by a series of transformer blocks that alternate between local Frame Attention (within each view) and our proposed Global Sparse Attention (across all views). The GSA module efficiently integrates global information by decomposing attention into a Compression Branch for coarse context and a Selection Branch for fine-grained details, guided by a top-k selection mechanism. Finally, the updated tokens are passed to task-specific heads to predict camera poses and dense depth maps.
+@inproceedings{ren2026speed3r,
+  title={Speed3R: Sparse Feed-forward 3D Reconstruction Models},
+  author={Ren, Weining and Tan, Xiao and Han, Kai},
+  booktitle={arXiv},
+  year={2026}
+}
+ Inference Time vs. AUC@30° on Tanks & Temples for VGGT and π³ Backbones
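The dual-branch Global Sparse Attention described in the page above can be sketched roughly as follows. This is a minimal illustration, not the Speed3R implementation: all names (`global_sparse_attention`, `block`, `top_k`) are assumptions, the compression branch is approximated here by block mean-pooling, and the two branch outputs are fused by a simple average rather than a learned combination.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_sparse_attention(q, k, v, block=4, top_k=2):
    """Illustrative dual-branch sparse attention.

    Compression branch: mean-pool keys/values into coarse blocks and attend
    to them for a cheap contextual prior. Selection branch: reuse the coarse
    scores to pick the top-k blocks per query and run dense attention only
    over those tokens, avoiding the full quadratic attention.
    """
    n, d = k.shape
    nb = n // block
    k_blocks = k[: nb * block].reshape(nb, block, d).mean(axis=1)  # coarse keys
    v_blocks = v[: nb * block].reshape(nb, block, d).mean(axis=1)  # coarse values

    coarse_scores = q @ k_blocks.T / np.sqrt(d)        # (n_q, nb) contextual prior
    compressed = softmax(coarse_scores) @ v_blocks     # compression-branch output

    # Selection branch: fine-grained attention restricted to top-k blocks.
    out = np.empty_like(q)
    top = np.argsort(coarse_scores, axis=1)[:, -top_k:]  # per-query block picks
    for i, blocks in enumerate(top):
        idx = np.concatenate(
            [np.arange(b * block, (b + 1) * block) for b in blocks]
        )
        w = softmax(q[i] @ k[idx].T / np.sqrt(d))
        out[i] = w @ v[idx]

    return 0.5 * (out + compressed)  # simple stand-in for a learned fusion
```

Per query, the selection branch touches only `top_k * block` tokens instead of all `n`, which is the source of the speedup on long (e.g. 1000-view) sequences.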