Wafer Overlay by Sichain is a desktop app for intelligent wafer stacking and AOI. It renders accurate wafer maps, stacks multi-stage data, and can run TorchScript AOI (segmentation first, optional YOLO detection).
## Development stack

- Tauri + React + TypeScript
- Mantine: GUI component library for React
- Three.js: JavaScript 3D library used for accurate wafer rendering
- Rust backend
## IDE setup

VS Code + Tauri + rust-analyzer

Useful commands:

```shell
pnpm tauri icon --help            # icon generator options
pnpm tauri icon public/logo3.png  # regenerate app icons from the logo
pnpm tauri build --bundles dmg    # build a macOS .dmg bundle
```
| Folder | Regex |
|---|---|
| 衬底 (substrate) | `(?<=[/\\])衬底(?=[/\\])` |
| FAB CP | `(?<=[/\\])FAB\s*CP(?=[/\\])` |
| CP 1 | `(?<=[/\\])CP\s*1(?=[/\\])` |
| WLBI MAP | `(?<=[/\\])WLBI\s*MAP(?=[/\\])` |
| CP 2 | `(?<=[/\\])CP\s*2(?=[/\\])` |
| AOI | `(?<=[/\\])AOI(?=[/\\])` |
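As a sketch of how these patterns could be used (the sample paths and the `detectStage` helper are illustrative, not the app's actual code), the lookarounds match a stage folder name only when it appears as a full path component. JavaScript/TypeScript supports lookbehind since ES2018:

```typescript
// The patterns below are the ones from the table; the helper is hypothetical.
const stagePatterns: Record<string, RegExp> = {
  "衬底": /(?<=[/\\])衬底(?=[/\\])/,
  "FAB CP": /(?<=[/\\])FAB\s*CP(?=[/\\])/,
  "CP 1": /(?<=[/\\])CP\s*1(?=[/\\])/,
  "WLBI MAP": /(?<=[/\\])WLBI\s*MAP(?=[/\\])/,
  "CP 2": /(?<=[/\\])CP\s*2(?=[/\\])/,
  "AOI": /(?<=[/\\])AOI(?=[/\\])/,
};

// Return the first stage whose folder name appears as a path component,
// or undefined if the path belongs to no known stage.
function detectStage(path: string): string | undefined {
  for (const [stage, re] of Object.entries(stagePatterns)) {
    if (re.test(path)) return stage;
  }
  return undefined;
}
```

The lookarounds require a `/` or `\` on both sides, so a file merely named `AOI.txt` at the top level would not match — only a folder component does.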
The database file, `data.db`, is located in the `%APPDATA%` folder.
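For orientation, a minimal sketch of resolving that file location (in the running app the per-platform app-data directory would come from Tauri's path API; here it is passed in as a plain argument so the snippet stands alone):

```typescript
import { join } from "node:path";

// Sketch: the SQLite file lives at <app-data dir>/data.db.
function dbPath(appDataDir: string): string {
  return join(appDataDir, "data.db");
}
```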
- Set a default admin password by creating a `.env` file at the project root:
  - Copy `.env.example` to `.env`
  - Set `VITE_ADMIN_DEFAULT_PASSWORD=your-secret`
- On first run, if the database still has the seed password (`admin`), the app updates it to the env value during initialization.
- The "default password" check in the UI uses this env value as the baseline.
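A hedged sketch of what that baseline comparison might look like (the function name and the `admin` fallback are assumptions drawn from the description above, not the app's actual code; in the real app the env value would come from `import.meta.env`):

```typescript
// Compare the stored admin password against the configured baseline.
// The env value is passed in here so the logic is testable standalone.
function isDefaultPassword(stored: string, envDefault?: string): boolean {
  const baseline = envDefault ?? "admin"; // seed value before initialization
  return stored === baseline;
}
```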
- Sichain Semiconductor: https://www.sichainsemi.com/
- Tauri v2 getting started: https://v2.tauri.app/start/
- React Redux: https://react-redux.js.org/
- Three.js: https://threejs.org/
- ufde-next (reference project): https://github.com/0xtaruhi/ufde-next/
- Tabler Icons: https://github.com/tabler/tabler-icons/
- JUN WEI WANG | jwwang2003
- YI TING | ee731
- Download the official `libtorch-macos-arm64` zip from pytorch.org and unpack it into `src-tauri/libtorch/` so you have `src-tauri/libtorch/lib/libtorch.dylib`, `libtorch_cpu.dylib`, etc. `src-tauri/tauri.conf.json` already copies `libtorch/**` into the bundle; the dylibs land in `AOI Wafer Stacking.app/Contents/Resources/libtorch/lib`.
- If you built libtorch locally at `3rdparty/pytorch/build/install`, either copy or symlink it into place so the bundler sees it:

  ```shell
  ln -snf ../3rdparty/pytorch/build/install src-tauri/libtorch
  ```

- Build the macOS ARM bundle with the vendored libs:

  ```shell
  cd src-tauri
  # LIBTORCH can also point at ../3rdparty/pytorch/build/install
  LIBTORCH=./libtorch \
  RUSTFLAGS="-C link-args=-Wl,-rpath,@executable_path/../Resources/libtorch/lib" \
  cargo tauri build --target aarch64-apple-darwin
  ```

  (You can swap `cargo tauri build` with `pnpm tauri build`; add `LIBTORCH_BYPASS_VERSION_CHECK=1` if needed.)

- Verify the binary can see the libs:

  ```shell
  otool -l target/release/bundle/macos/AOI\ Wafer\ Stacking.app/Contents/MacOS/aoi-wafer-stacking | rg LC_RPATH
  ls target/release/bundle/macos/AOI\ Wafer\ Stacking.app/Contents/Resources/libtorch/lib
  ```

- If codesign complains, run from `src-tauri`:

  ```shell
  codesign --force --deep --sign - target/release/bundle/macos/AOI\ Wafer\ Stacking.app
  ```
Run the following in a shell before `pnpm tauri dev`:

```shell
export LIBTORCH_BYPASS_VERSION_CHECK=1
export LIBTORCH_USE_PYTORCH=1

# locate the lib directory of the Python-installed torch
torch_lib=$(python - <<'PY'
import torch, os
print(os.path.join(os.path.dirname(torch.__file__), 'lib'))
PY
)
echo "Using torch lib dir: $torch_lib"
ls "$torch_lib/libtorch_cpu.dylib"   # should exist

export LIBTORCH="$torch_lib"
export DYLD_LIBRARY_PATH="$torch_lib:${DYLD_LIBRARY_PATH}"
pnpm tauri dev
```
Refer to PyTorch's build-from-source guidelines. Tested on an M1 Pro (2023 MacBook 14").
```shell
git submodule init
git submodule update --recursive
cd 3rdparty/pytorch

# Build configuration
export BUILD_TEST=0
export USE_DISTRIBUTED=0
export USE_CUDA=0
export USE_MPS=0   # set to 1 if you actually need MPS on macOS
export DEBUG=0

# Option A: PyTorch's helper script
python tools/build_libtorch.py

# Option B: plain CMake build, installed into build/install
mkdir -p build && cd build
cmake .. \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX="$PWD/install" \
  -DBUILD_SHARED_LIBS=ON \
  -DBUILD_PYTHON=OFF \
  -DBUILD_TEST=OFF \
  -DUSE_CUDA=OFF \
  -DUSE_MPS=OFF \
  -DUSE_DISTRIBUTED=OFF
cmake --build . --target install -j"$(sysctl -n hw.ncpu)"

# Link the install tree into place so the bundler sees it
cd ../../..   # back to the repo root (adjust if you skipped the cmake build dir)
ln -snf ../3rdparty/pytorch/build/install src-tauri/libtorch

pnpm tauri dev -- --no-default-features
```
