Minimal memory differentiable FDTD for photonic inverse design
Conference presentation, 28 September 2023
Rui Jie Tang, Soon Wei Daniel Lim, Marcus Ossiander, Xinghui Yin, Federico Capasso
Abstract
Reverse-mode automatic differentiation (RMAD) is widely used in deep-learning training because its runtime is independent of the number of trainable parameters. However, RMAD stores every intermediate value and operation, and this high memory consumption makes it incompatible with commonly used time-stepping finite-difference time-domain (FDTD) electromagnetic simulators. To address this issue, a differentiable FDTD simulator is proposed that exploits the time-reversal property of Maxwell's equations and removes redundant operations at each timestep, resolving the memory bottleneck. This approach enables the efficient calculation of high-dimensional objective-function gradients, expanding the applicability of inverse-design topology optimization.
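The key property the abstract exploits is that a lossless FDTD update is invertible: instead of storing every field snapshot for the reverse pass, earlier fields can be reconstructed by stepping the simulation backward in time. The sketch below is not the authors' implementation; it is a minimal 1D Yee-grid example (normalized units, PEC boundaries, hypothetical function names) showing that running the leapfrog update forward and then undoing it in reverse order recovers the initial fields to floating-point accuracy.

```python
import numpy as np

def forward_step(Ez, Hy, c=0.5):
    # Standard 1D Yee leapfrog update, lossless free space,
    # normalized units (c is the Courant number).
    Hy[:-1] += c * (Ez[1:] - Ez[:-1])
    Ez[1:-1] += c * (Hy[1:-1] - Hy[:-2])

def backward_step(Ez, Hy, c=0.5):
    # Exact inverse of forward_step: undo the two half-updates
    # in reverse order, so each subtraction sees the same field
    # values the corresponding addition used.
    Ez[1:-1] -= c * (Hy[1:-1] - Hy[:-2])
    Hy[:-1] -= c * (Ez[1:] - Ez[:-1])

n, steps = 400, 150
x = np.arange(n)
Ez0 = np.exp(-((x - n // 2) / 10.0) ** 2)  # Gaussian pulse
Ez, Hy = Ez0.copy(), np.zeros(n)

for _ in range(steps):
    forward_step(Ez, Hy)
for _ in range(steps):
    backward_step(Ez, Hy)

# Fields are reconstructed by time reversal, not stored:
print(np.max(np.abs(Ez - Ez0)))
```

In a differentiable simulator, this reconstruction would run alongside the adjoint pass, so the gradient computation needs only O(1) field snapshots instead of one per timestep.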
Conference Presentation
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Rui Jie Tang, Soon Wei Daniel Lim, Marcus Ossiander, Xinghui Yin, and Federico Capasso "Minimal memory differentiable FDTD for photonic inverse design", Proc. SPIE PC12664, Optical Modeling and Performance Predictions XIII, PC1266401 (28 September 2023); https://doi.org/10.1117/12.2677131
KEYWORDS
Design and modelling; Finite-difference time-domain method; Computer simulations; Education and training; Color; Deep learning; Mathematical optimization