A new study explores using deep learning to speed up optical scatterometry, a key quality-control technique in computer chip manufacturing. The deep-learning approach could replace traditional analysis methods that are either slow or require enormous amounts of reference data.
Manufacturing computer chips involves creating tiny, complex patterns on silicon wafers, and the dimensional accuracy of these microscopic features is vital for the chips' performance and reliability. Optical scatterometry, which measures the features from the way they scatter light, is essential for this quality control. However, the traditional ways of analyzing the scattered light have drawbacks: iterative fitting is slow because it repeatedly re-runs a physical simulation, and library search requires precomputing and storing massive amounts of reference data.
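To see why these approaches scale poorly, here is a minimal Python sketch of the two ideas; the forward model, parameter names, and grid values are illustrative assumptions, not from the study. Iterative fitting calls the simulator inside an optimization loop, while library search must precompute spectra for a dense grid of candidate geometries.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model: geometry parameters -> simulated spectrum.
# In practice this would be a rigorous electromagnetic solver and is expensive to evaluate.
def simulate_spectrum(params):
    width, height, swa = params
    wavelengths = np.linspace(400, 800, 200)
    return np.sin(width * wavelengths / 1e4) * height + swa / 90.0  # placeholder physics

measured = simulate_spectrum([45.0, 60.0, 85.0])  # stand-in for a real measurement

# 1) Iterative fitting: slow, because every optimizer step re-runs the simulator.
result = minimize(
    lambda p: np.mean((simulate_spectrum(p) - measured) ** 2),
    x0=[40.0, 50.0, 80.0],
    method="Nelder-Mead",
)
print("Fitted parameters:", result.x)

# 2) Library search: fast lookup, but the library grows combinatorially
#    with the number of parameters and the fineness of the grid.
grid = [(w, h, s) for w in range(30, 61, 5)
                  for h in range(40, 81, 5)
                  for s in range(80, 91, 2)]
library = np.array([simulate_spectrum(p) for p in grid])
best = np.argmin(np.mean((library - measured) ** 2, axis=1))
print("Library match:", grid[best])
```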
To overcome these issues, this project investigated deep learning, a powerful form of artificial intelligence. Specifically, a neural network based on the ResNet architecture was trained on simulated optical measurements that mimic how light scatters off a chip's microscopic features. From this scattered-light signature, the network learned to predict key parameters of the features' shape and size, such as width, height, and sidewall angle.
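As a rough illustration of the idea, and not the authors' exact model, a small ResNet-style regressor that maps a one-dimensional spectrum to three geometry parameters could be written in PyTorch as follows; the input length, channel counts, and number of residual blocks are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """A basic residual block: two 1D convolutions plus a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection

class SpectrumRegressor(nn.Module):
    """Maps a scattering spectrum (assumed 200 samples) to geometry parameters
    such as width, height, and sidewall angle."""
    def __init__(self, n_points=200, n_params=3):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU())
        self.blocks = nn.Sequential(*[ResidualBlock1D(32) for _ in range(4)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_params))

    def forward(self, spectrum):
        return self.head(self.blocks(self.stem(spectrum)))

# Example: a batch of 8 simulated spectra -> predicted (width, height, sidewall angle)
model = SpectrumRegressor()
predictions = model(torch.randn(8, 1, 200))
print(predictions.shape)  # torch.Size([8, 3])
```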
The research compared three network architectures and prediction strategies (a short code sketch follows the list):
UniNet: A single network that predicts all parameters at once.
MonoNet: Separate networks for each parameter, predicting them one by one.
ExpertNet: A combination of networks that predicts groups of related parameters.
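A minimal sketch of how the three strategies could be wired up is shown below; a small stand-in regressor keeps the example self-contained, and the way parameters are grouped for ExpertNet is an assumption rather than the authors' grouping.

```python
import torch
import torch.nn as nn

def make_regressor(n_outputs, n_points=200):
    """Stand-in for a ResNet-style regressor (see the earlier sketch);
    a small MLP keeps this example self-contained."""
    return nn.Sequential(nn.Flatten(), nn.Linear(n_points, 64), nn.ReLU(), nn.Linear(64, n_outputs))

# UniNet: one network predicts all parameters jointly.
uninet = make_regressor(3)

# MonoNet: a dedicated single-output network per parameter.
mononet = {name: make_regressor(1) for name in ("width", "height", "sidewall_angle")}

# ExpertNet: one network per group of related parameters
# (this grouping is illustrative, not from the study).
expertnet = {"lateral": make_regressor(2),    # e.g. width and sidewall angle
             "vertical": make_regressor(1)}   # e.g. height

spectra = torch.randn(8, 1, 200)  # a batch of 8 simulated spectra
joint = uninet(spectra)                                        # shape (8, 3)
per_param = {k: net(spectra) for k, net in mononet.items()}    # three (8, 1) outputs
per_group = {k: net(spectra) for k, net in expertnet.items()}  # (8, 2) and (8, 1)
```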
The results showed that MonoNet performed the best. This approach successfully decoupled the parameters and achieved high accuracy even with smaller datasets.
Since real-world measurements are always affected by noise and uncertainties, the study also examined how noise impacts the deep learning models. By adding simulated noise to the training data, the researchers assessed the robustness of the different network architectures. These findings offer valuable insights for developing more reliable and accurate measurement techniques, even in the presence of real-world imperfections.
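A simple way to mimic this during training, assuming additive Gaussian noise proportional to signal strength (the study's actual noise model may differ), is to perturb the simulated spectra on the fly:

```python
import torch

def add_measurement_noise(spectra, relative_sigma=0.01):
    """Add zero-mean Gaussian noise scaled to each spectrum's magnitude.
    relative_sigma is an assumed noise level, not a value from the study."""
    noise = torch.randn_like(spectra) * relative_sigma * spectra.abs()
    return spectra + noise

# During training, augment each batch before the forward pass, e.g.:
#   noisy_batch = add_measurement_noise(clean_batch)
#   loss = criterion(model(noisy_batch), true_params)
```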