Accuracy of CD (Critical Dimension) Measurements using TEM
- Practical Electron Microscopy and Database -
- An Online Book -

https://www.globalsino.com/EM/  

=================================================================================

In TEM imaging, the prospects of obtaining a reproducible method for determining film thickness might not seem promising.

Preparing very thin TEM specimens consistently is difficult and often impractical. TEM specimens thinner than 25 nm tend to buckle, undergo stress relaxation, or be damaged during sample preparation. On the other hand, no reliable TEM or STEM images can be obtained if the specimen is too thick. Therefore, each technique has its own specimen thickness requirements as shown in page1217. Interfacial roughness also limits the accuracy of thickness measurements: the rougher the interface is, the thinner the TEM specimen needs to be.

Taylor et al. [1] suggested theoretically (using multislice simulation) that the error in thickness measurements of 1.056 nm ~ 1.629 nm thin silicon dioxide films sandwiched between crystalline silicon, performed using HRTEM, is about 10% due to the combined effects of TEM specimen thickness, defocus, tilt, and the spherical aberration of the objective lens. However, they proposed that an HRTEM with no spherical aberration could measure film thickness exactly if single lattice images are considered.

Dimension measurements in TEM, e.g. evaluations of critical dimension (CD) and d-spacings, cannot be accurate (normally 10% ~ 100% inaccuracy) if the camera length is in error. To ensure that the installation-calibrated camera length can be used in your experimental measurement, the specimen should be positioned in the "eucentric plane". This can be done simply by manually tilting the specimen and finding the specimen-holder height at which the image of the specimen remains stationary.
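The sensitivity of a d-spacing measurement to a camera-length error follows directly from the standard diffraction relation r·d = λ·L: a given fractional error in L produces the same fractional error in d. A minimal sketch, with illustrative numbers (200 kV wavelength ≈ 2.51 pm; the camera length and ring radius are assumed example values, not from this book):

```python
def d_spacing(wavelength_nm, camera_length_mm, radius_mm):
    """d-spacing from the diffraction relation r * d = lambda * L.

    With wavelength in nm and L, r both in mm, the result is in nm.
    """
    return wavelength_nm * camera_length_mm / radius_mm

wavelength_nm = 2.51e-3     # electron wavelength at 200 kV, ~2.51 pm
camera_length_mm = 1000.0   # assumed nominal camera length
ring_radius_mm = 8.0        # assumed measured spot/ring radius

d_nominal = d_spacing(wavelength_nm, camera_length_mm, ring_radius_mm)

# A 5% error in the camera length maps one-to-one into a 5% error in d:
d_with_error = d_spacing(wavelength_nm, camera_length_mm * 1.05, ring_radius_mm)
relative_error = d_with_error / d_nominal - 1.0
```

This is why the measurement is only as good as the camera-length calibration, and why the specimen must sit at the eucentric height where that calibration was performed.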

On the other hand, the magnification of the image-forming lens system normally has an error of 5% - 10%, so the final magnification carries a 5% - 10% error as well. To obtain a more accurate magnification (and thus dimension measurement), the magnification should be calibrated against a lattice image of a known standard specimen by recording both the specimen and the standard in the same field of view. Furthermore, the instrument may introduce a few percent of image distortion, which also needs to be calibrated.
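The internal-standard calibration described above amounts to deriving a pixel size from a known lattice spacing and applying it to the feature of interest in the same image. A minimal sketch, assuming Si (111) fringes (d = 0.3136 nm) as the standard; the pixel values are hypothetical:

```python
# Calibrate the image scale from a lattice standard recorded in the
# same field of view, then convert a CD measured in pixels to nm.
d_known_nm = 0.3136        # Si (111) lattice spacing, a common standard
fringe_spacing_px = 12.8   # assumed measured fringe spacing in the image

nm_per_px = d_known_nm / fringe_spacing_px   # calibrated pixel size

cd_measured_px = 980.0     # assumed critical dimension on the same image
cd_nm = cd_measured_px * nm_per_px           # CD in physical units
```

Because the standard and the specimen share the same image, the nominal instrument magnification (and its 5% - 10% error) cancels out of the final CD value; only residual image distortion remains as an error source.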

In some TEMs, the magnification and aspect ratio of TEM images can be automatically corrected to some degree, even though the correction is not "perfect".


[1] Taylor, S., Mardinly, J., O'Keefe, M.A., & Gronsky, R. (2000). HRTEM Image Simulations for Gate Oxide Metrology. In Characterization and Metrology for ULSI Technology 2000, Seiler, D.G., Diebold, A.C., Shaffner, T.J., McDonald, R., Bullis, W.M., Smith, P.J., & Secula, E.M. (Eds.), pp 130-133. AIP Conference Proceedings 550. Melville, NY: AIP Press.


=================================================================================