Humans perceive and comprehend their surroundings through information spanning multiple frequencies. In immersive scenes, people naturally scan their environment to grasp its overall structure while examining the fine details of objects that capture their attention. However, current NeRF frameworks primarily model either high-frequency local views or the broad, low-frequency structure of a scene, and struggle to balance both. We introduce FA-NeRF, a novel frequency-aware framework for view synthesis that simultaneously captures the overall scene structure and high-definition details within a single NeRF model.
We propose a 3D frequency quantification method that analyzes the scene's frequency distribution, enabling frequency-aware rendering. Our framework incorporates a frequency grid for fast convergence and querying, and a frequency-aware feature re-weighting strategy to balance features across different frequency content.
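To make the re-weighting idea concrete, below is a minimal, hypothetical sketch of how per-point frequency estimates could modulate multi-level grid features. The level count, Gaussian weighting kernel, and tensor shapes are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch: frequency-aware feature re-weighting.
# Assumptions (not from the paper): L grid levels ordered coarse -> fine,
# a per-point frequency estimate in [0, 1], and a Gaussian weighting kernel.
import torch

def reweight_features(level_feats: torch.Tensor, point_freq: torch.Tensor) -> torch.Tensor:
    """Blend multi-level features according to a per-point frequency estimate.

    level_feats: (N, L, C) features from L grid levels (coarse -> fine).
    point_freq:  (N,) estimated local frequency in [0, 1] (0 = low, 1 = high).
    Returns:     (N, C) re-weighted feature per point.
    """
    num_levels = level_feats.shape[1]
    # Assign each level a nominal frequency in [0, 1], coarse to fine.
    level_freq = torch.linspace(0.0, 1.0, num_levels, device=level_feats.device)  # (L,)
    # Soft weights: levels whose nominal frequency matches the local estimate
    # receive the largest weight (Gaussian kernel; sigma is an assumed constant).
    sigma = 0.25
    w = torch.exp(-((level_freq[None, :] - point_freq[:, None]) ** 2) / (2 * sigma ** 2))
    w = w / w.sum(dim=1, keepdim=True)                 # (N, L), normalized
    return (w[:, :, None] * level_feats).sum(dim=1)    # (N, C)

# Toy usage: 1024 sample points, 8 levels, 4 channels per level.
feats = torch.randn(1024, 8, 4)
freq = torch.rand(1024)
out = reweight_features(feats, freq)
print(out.shape)  # torch.Size([1024, 4])
```

In this sketch, low-frequency regions draw mostly on coarse levels while high-frequency regions emphasize fine levels, which is one plausible way to realize the balancing behavior described above.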
@misc{zhang2025lookcloserfrequencyawareradiancefield,
  title={LookCloser: Frequency-aware Radiance Field for Tiny-Detail Scene},
  author={Xiaoyu Zhang and Weihong Pan and Chong Bao and Xiyu Zhang and Xiaojun Xiang and Hanqing Jiang and Hujun Bao},
  year={2025},
  eprint={2503.18513},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.18513},
}