Novel AI Method Sharply Improves 3D X-Ray Vision for Nanoscale Imaging

This 3D image of an integrated circuit was reconstructed using a new AI-based method called the perception fused iterative tomography reconstruction engine. Credit: Brookhaven National Laboratory

X-ray tomography has long been one of the most powerful ways for scientists to look inside objects without cutting them open. From medical CT scans to analyzing advanced battery materials and computer chips, the technique allows researchers to reconstruct a 3D view of internal structures by taking many X-ray images while rotating a sample. But when imaging extremely small features at the nanoscale, the process becomes far more complicated. A new method developed at Brookhaven National Laboratory now shows how artificial intelligence combined with physics can dramatically improve 3D X-ray imaging where traditional approaches struggle.

At the center of this breakthrough is the Hard X-ray Nanoprobe (HXN) beamline at the National Synchrotron Light Source II (NSLS-II), a U.S. Department of Energy Office of Science user facility. This beamline can generate X-rays that are more than a billion times brighter than those used in conventional medical CT scans, enabling resolutions roughly 10,000 times finer. Such power is essential for imaging features inside microchips or advanced materials, where details can be just a few nanometers wide.

However, brightness alone does not solve every problem. For X-ray tomography to work perfectly, scientists need projection images from all angles around a sample. In real-world experiments, this is often impossible. Flat objects like integrated circuits cannot be rotated a full 180 degrees without blocking the X-ray beam. At steep angles, X-rays also struggle to penetrate dense materials, which further limits usable viewing directions.

The result is a well-known issue in imaging called the missing wedge problem. When a range of angles is unavailable, the reconstruction software has no data for that region. This missing information creates a "blind spot" that leads to blurry, stretched, or distorted images, even when using state-of-the-art reconstruction algorithms. For decades, this limitation has restricted what X-ray and electron tomography could realistically achieve.
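The missing wedge has a clean frequency-space picture: by the Fourier slice theorem, each projection measures one line through the origin of the sample's 2D Fourier transform, so absent tilt angles leave an unmeasured wedge of frequencies. The small numpy sketch below (illustrative geometry only, not the beamline's actual code) estimates how much of frequency space a limited tilt range can ever constrain:

```python
import numpy as np

# Fourier-slice view of the missing wedge: each projection at angle theta
# constrains one line through the origin of frequency space at that angle.
# If a flat sample can only be tilted over 120 of the needed 180 degrees,
# a 60-degree wedge of frequencies is simply never measured.
# Illustrative sketch only, not the HXN reconstruction pipeline.

n = 256
fy, fx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
disk = fx**2 + fy**2 <= 1.0                       # band-limited frequency disk
freq_angle = np.mod(np.degrees(np.arctan2(fy, fx)), 180.0)

def coverage(tilt_range_deg):
    """Fraction of the frequency disk touched by projections
    spanning [0, tilt_range_deg) degrees."""
    return float((freq_angle[disk] < tilt_range_deg).mean())

full = coverage(180.0)     # complete angular range: every frequency measured
limited = coverage(120.0)  # limited tilt: a wedge of frequencies is missing

print("coverage, 180 deg range:", full)
print("coverage, 120 deg range:", limited)
```

Losing a third of the angular range loses roughly a third of frequency space, and no amount of averaging over the measured angles can recover it; that is the blind spot the reconstruction must fill in by other means.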

To tackle this challenge, researchers at NSLS-II developed a new approach known as the Perception Fused Iterative Tomography Reconstruction Engine, or PFITRE. Rather than relying purely on mathematics or purely on machine learning, PFITRE intentionally combines AI-based perception with physics-based modeling. The idea is simple but powerful: let AI help fill in missing information, while physics ensures the results remain scientifically accurate.

At the core of PFITRE is a convolutional neural network, a type of AI widely used for image analysis. Specifically, the team used a U-net architecture, which follows an encoder-decoder design. The encoder learns to recognize important features such as edges, textures, and shapes, while the decoder reconstructs the image using those features. To make the network better suited for tomography, the researchers enhanced it with residual dense blocks and dilated convolutions, allowing the model to capture information at multiple scales, from fine textures to larger structural patterns.
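To illustrate the dilated-convolution idea in isolation: a kernel with dilation d spaces its taps d samples apart, so the same small set of weights covers a wider span of the input. The 1-D numpy sketch below is a toy illustration of that mechanism only; the paper's actual network is a 2-D U-net with residual dense blocks, which is not reproduced here.

```python
import numpy as np

# A 3-tap kernel with dilation d sees a span of 2*d + 1 input samples
# while still using only 3 weights -- this is how dilated convolutions
# enlarge the receptive field without adding parameters.

def dilated_conv1d(x, w, dilation):
    """'Valid' 1-D dilated convolution:
    out[i] = sum_k w[k] * x[i + k * dilation]."""
    span = (len(w) - 1) * dilation
    out = np.zeros(len(x) - span)
    for k, wk in enumerate(w):
        out += wk * x[k * dilation : k * dilation + len(out)]
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])

print(dilated_conv1d(x, w, dilation=1))  # each output sums 3 adjacent samples
print(dilated_conv1d(x, w, dilation=2))  # same 3 weights, span of 5 samples
```

Stacking layers with increasing dilation lets a network aggregate context from fine textures up to large structural patterns at modest cost, which is the multi-scale behavior the PFITRE authors were after.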

Unlike consumer photo enhancement tools, scientific imaging cannot prioritize appearance over accuracy. Simply generating a visually pleasing image is not enough. To ensure reliability, the AI component in PFITRE is embedded directly into an iterative reconstruction engine. This engine repeatedly refines the solution, step by step, until it converges on a result that satisfies both the AI's learned expectations and the physical constraints of X-ray measurements.

In this setup, the neural network acts as a smart regularizer. Instead of aggressively "correcting" the image, it gently guides the reconstruction toward plausible structures while the physics-based model checks that every update remains consistent with the actual data. The process repeats multiple times until the AI and physics agree, producing reconstructions that are both visually clear and scientifically trustworthy.
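The interplay described here resembles what the inverse-problems literature calls a plug-and-play prior: alternate a data-consistency step derived from the physics with a denoising step supplied by a learned model. In the hedged numpy sketch below, a random matrix stands in for the physical forward model and a simple moving average stands in for the trained network; neither is PFITRE's actual operator.

```python
import numpy as np

# Plug-and-play-style sketch of "physics step + learned prior step":
# a gradient step on the data-fidelity term ||A x - b||^2 alternates with
# a denoising step that stands in for the trained network. A and the
# "denoiser" are toy stand-ins, not PFITRE's operators.

rng = np.random.default_rng(0)
n = 64
A = rng.normal(size=(40, n)) / np.sqrt(n)   # underdetermined: 40 measurements
x_true = np.zeros(n)
x_true[20:30] = 1.0                         # piecewise-constant ground truth
b = A @ x_true

def smooth(x):
    """Toy prior: 3-point moving average (placeholder for the CNN)."""
    return np.convolve(x, np.ones(3) / 3, mode="same")

x = np.zeros(n)
step = 0.2
for _ in range(200):
    x = x - step * A.T @ (A @ x - b)   # physics: pull toward the measured data
    x = smooth(x)                      # perception: nudge toward plausible images

print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
print("error vs truth:", np.linalg.norm(x - x_true))
```

The fixed point of this loop is exactly the "agreement" the article describes: an image the prior considers plausible that also reproduces the measurements, rather than whatever a denoiser alone would invent.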

Training such an AI model presents its own challenges. High-quality experimental tomography datasets are rare and often too limited to train a specialized neural network. To overcome this, the team relied heavily on synthetic training data. They generated datasets using natural images, simulated patterns, and scanning electron microscope images of circuits, treating them as if they were being imaged by X-rays.

To make the training realistic, the researchers built a digital twin of the experiment. This virtual setup intentionally included noise, misalignment, and other imperfections commonly found in real measurements. By exposing the AI to these conditions during training, the model learned how to handle the kinds of flaws it would encounter in actual experiments.
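As a rough illustration of this kind of digital twin, the snippet below corrupts a clean simulated projection with two of the imperfections mentioned above: photon-counting noise and a small random alignment shift. The noise level and shift range are arbitrary choices for the sketch, not values from the paper.

```python
import numpy as np

# Digital-twin-style data augmentation: take a clean projection and inject
# the imperfections a real beamline measurement would have. The photon
# budget and shift range below are illustrative, not the paper's settings.

rng = np.random.default_rng(1)

def corrupt_projection(clean, photons=1000, max_shift=2):
    """Return a noisy, slightly misaligned copy of a 1-D projection."""
    shift = rng.integers(-max_shift, max_shift + 1)
    shifted = np.roll(clean, shift)                    # alignment error
    noisy = rng.poisson(shifted * photons) / photons   # counting noise
    return noisy

clean = np.clip(np.sin(np.linspace(0.0, np.pi, 128)), 0.0, None)
sim = corrupt_projection(clean)
print("mean |simulated - clean|:", np.abs(sim - clean).mean())
```

Training on pairs of (corrupted, clean) data generated this way teaches the network to undo exactly the flaw classes the twin simulates, which is also why flaws absent from the twin remain a weakness, as discussed later in the article.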

When tested, PFITRE showed clear advantages over today's gold-standard reconstruction method, the fast iterative shrinkage-thresholding algorithm (FISTA). Even when significant angular data was missing, PFITRE produced sharper and more accurate reconstructions, revealing fine details that standard techniques failed to recover. Comparisons using integrated circuit samples demonstrated how closely PFITRE's results matched reconstructions created from full angular datasets.
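For reference, the FISTA baseline is compact enough to sketch: it alternates a gradient step on the data-fidelity term, a soft-thresholding step that promotes sparsity, and a momentum update. The toy problem below uses a random matrix rather than a tomography operator, so it shows the algorithm, not the article's experiment.

```python
import numpy as np

# Minimal FISTA sketch for the l1-regularized least-squares problem
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
# the kind of sparsity-promoting reconstruction used as the baseline.
# A, b, and lam are toy stand-ins, not the tomography forward model.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(y - (A.T @ (A @ y - b)) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]   # sparse ground truth
b = A @ x_true
x_hat = fista(A, b, lam=0.1)
print("largest-magnitude entries:", sorted(np.argsort(np.abs(x_hat))[-3:]))
```

FISTA's fixed, hand-chosen sparsity prior is precisely what PFITRE replaces with a learned one, which is why the comparison in the article centers on how each method copes when angular data is missing.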

Beyond image quality, the method offers several practical benefits. Because PFITRE can extract more information from fewer measurements, it could enable faster experiments, larger fields of view, and reduced radiation doses. This is especially important for sensitive samples or for in situ studies, where researchers want to observe changes in real time while minimizing damage.

The potential applications are wide-ranging. In microelectronics, PFITRE could help engineers diagnose hidden defects deep inside chips without destroying them. In battery research, it may reveal how internal structures evolve during charging and discharging, shedding light on degradation mechanisms. The method also holds promise for broader materials science research and, in the long term, even biomedical imaging, where limited viewing angles are common.

Despite its strengths, PFITRE is not without limitations. At present, the method reconstructs 3D objects slice by slice rather than as a fully unified 3D volume. Extending it to a full 3D reconstruction framework would improve consistency but would also demand significantly more computational power. Another challenge is ensuring the AI can handle a wider variety of artifacts, such as faulty detector pixels or unexpected sample motion. Like all machine-learning systems, PFITRE cannot reliably correct issues it has never encountered during training.

Future work will focus on building richer training datasets, incorporating more types of artifacts, and developing ways for the model to learn effectively from fewer examples. These improvements could make the method even more robust and widely applicable.

More broadly, PFITRE represents a growing trend in scientific imaging: the tight integration of AI and physical models. Rather than replacing traditional methods, AI is being used to enhance them, extending what is possible under real experimental constraints. As synchrotron facilities and machine-learning techniques continue to evolve together, tools like PFITRE are likely to play a major role in sharpening our view of the microscopic world and accelerating discoveries across science and technology.

Research paper:
https://www.nature.com/articles/s41524-025-01724-0
