The electronics industry is constantly striving to make products faster, smaller, and more power-efficient than previous generations. A task that may have required a dedicated desktop computer fifteen years ago can now be performed on a smartphone weighing several ounces, often while running several other processes at the same time. This increase in computing power is directly correlated to the level of complexity found in modern electronics; the intricacy of the circuits in a modern device, connected by the labyrinthine network of copper and dielectric material that makes up a printed circuit board, is far beyond that found in the systems of yesteryear. Qualifying these incredibly dense assemblies can be a formidable challenge; fortunately, an approach for quickly examining the construction of these parts can be found in the PCB cross-section.
Historically, the role of IC failure analysis labs has been fairly narrowly defined. A failing device is submitted for analysis (either to an internal or external FA lab), where it is torn apart and subjected to countless tests before the root cause of failure is finally determined; analysts are trained to distinguish between failures due to manufacturing defects and unintentional overstress induced in a customer’s application (among other typical causes of failure). As the microelectronics market has expanded and evolved, however, failure analysts find themselves faced with another potential source of problems: devices that claim to be something that they are not.
The intricate web of interconnects that makes up an integrated circuit is a mind-numbing maze of metal, stretching to all corners of a microchip. These metal traces race from one end of the device to the other, traversing the multiple layers of metal used to route signals from point to point on the IC. Modern IC devices are inherently three-dimensional; many require as many as nine or ten metal layers, all stacked atop one another, in order to achieve the necessary signal density. This dense stack-up of layers can often make it difficult to determine key information about a given device from a top-down view; in these cases, an analyst can augment their understanding of a part through an integrated circuit (IC) cross section.
In the blink of an eye, modern electronics systems can sample sensors, perform countless computations, and drive dazzling displays. The sheer amount of data a single system can generate and process is staggering. All this microcomputing muscle would be for naught, however, without a way to store data. Memory is one of the core components of almost any system, from a simple RFID tag to the most powerful processing behemoth. While electronic memories can certainly store and accurately recall more detail than the notoriously malleable human mind, they are by no means immune to failure. Since many modern memories contain upwards of 4 billion "bits" of data, any one of which may be malfunctioning, it is often necessary to enlist outside help in the form of failure analysis services to get to the root of a case of silicon paramnesia.
Scanning Electron Microscopy (SEM) is one of the most fundamental tools of the failure analysis lab. The ability to examine even the most minute of details at high resolution is crucial, especially when a given defect might be only a few nanometers wide. As integrated circuit processes continue to shrink, the use of an SEM becomes a necessity, as features shrink below the smallest size that can be resolved with visible light. SEM is by no means limited to imaging failures on integrated circuits, however; with the proper techniques, the SEM can also be a powerful tool for material characterization.
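The resolution gap can be sketched with a quick back-of-envelope calculation, comparing the Rayleigh diffraction limit of visible light with the de Broglie wavelength of an accelerated electron. The wavelength, numerical aperture, and accelerating voltage below are illustrative assumptions, not values tied to any particular instrument:

```python
import math

# Rayleigh resolution limit of an optical microscope: d = 0.61 * wavelength / NA
wavelength_green = 550e-9      # green light, metres (illustrative)
numerical_aperture = 0.95      # a good dry objective (assumed)
d_optical = 0.61 * wavelength_green / numerical_aperture

# De Broglie wavelength of an electron accelerated through V volts
# (non-relativistic approximation): lambda = h / sqrt(2 * m_e * e * V)
h = 6.626e-34                  # Planck constant, J*s
m_e = 9.109e-31                # electron rest mass, kg
e = 1.602e-19                  # elementary charge, C
V = 10_000                     # 10 kV accelerating voltage (typical SEM range)
d_electron = h / math.sqrt(2 * m_e * e * V)

print(f"Optical limit:                 ~{d_optical * 1e9:.0f} nm")   # roughly 350 nm
print(f"Electron wavelength at 10 kV:  ~{d_electron * 1e12:.1f} pm")  # roughly 12 pm
```

The point of the comparison: even an excellent light microscope bottoms out around a few hundred nanometers, while the electron's wavelength is thousands of times shorter, which is why sub-100 nm features demand an SEM (in practice, lens aberrations, not wavelength, set an SEM's real-world resolution).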
The printed circuit board is one of the cornerstones of modern electronics technology. The sheer number of devices required for even relatively simple consumer electronics, and the interconnects needed to link them, cannot be realized without modern high-density circuit board technology. Naturally, this increased level of complexity poses unique challenges for failure analysis; finding an open trace, for example, may require hours of painstaking work poring over board layouts and performing countless microsurgeries on the board to finally isolate the failing node. Even then, once the failure has been isolated through extensive printed circuit board testing, an analyst's tribulations are not finished, as they must then find a way to unearth the buried failure.
When examining a contemporary integrated circuit, an electronic failure analyst must face a myriad of challenges; metal interconnects can be too dense for traditional isolation techniques to be of any value, critical dimensions may be too small to be thoroughly examined in any but the most cutting-edge of microscopes, and layers are often spaced so finely as to make planar deprocessing a nerve-wracking, pulse-pounding undertaking, in which one slip of the finger can result in irreparable damage to a device undergoing analysis. As if these hurdles weren’t enough to contend with, analysts must also grapple with a rapidly expanding segment of the microelectronics market: semiconductor devices that incorporate moving parts as a part of their operation. These devices, referred to as MEMS (Micro-Electro-Mechanical Systems), offer a unique challenge from the standpoint of semiconductor failure analysis, largely due to their markedly different construction.
The scale of features on a modern semiconductor device is so infinitesimally small as to be hard to even conceptualize. A cutting-edge transistor from a powerful processor may have a channel length as small as 35 nanometers – a size dwarfed even by microorganisms like bacteria or viruses. In order to study these minuscule movers of the electronic world, electron microscopy is a necessity for any failure analysis lab; however, an electron microscope is not a panacea for imaging ailments, and must be applied with proper knowledge of device characteristics and the limitations of the tool.
One of the cornerstones of non-destructive failure analysis of packaged integrated circuits, allowing an analyst a relatively simple way of examining the structural integrity of a device, is Scanning Acoustic Microscopy (SAM). By using tightly focused pulses of ultrasonic waves and analyzing the sound reflected by and transmitted through a sample, it is possible to create a detailed, accurate image of a packaged semiconductor device, showing any pockets of air or delamination that may contribute to early-life failure. SAM has been an invaluable tool in performing analysis on the types of parts that have traditionally been the most prevalent in the industry - plastic encapsulated, wire-bonded ICs. Though the industry may be shifting away from these devices in favor of packaging technologies like flip-chip ball grid arrays (FCBGAs), thanks to their more efficient use of bonding space and improved thermal performance, the SAM is not obsolete; indeed, with a few changes, SAM can provide invaluable data on these cutting-edge technologies.
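The physics behind this is straightforward to sketch: the fraction of an acoustic pulse reflected at a boundary depends on the acoustic impedance mismatch between the two materials, via R = (Z2 - Z1) / (Z2 + Z1). The impedance figures below are rough textbook-range values chosen purely for illustration:

```python
def reflection_coefficient(z1, z2):
    """Pressure reflection coefficient for a pulse crossing from medium 1 into medium 2."""
    return (z2 - z1) / (z2 + z1)

# Approximate acoustic impedances in MRayl (illustrative, not measured values)
Z_MOLD = 6.0      # epoxy mold compound
Z_SI = 19.6       # silicon die
Z_AIR = 0.0004    # air gap / delamination

r_bonded = reflection_coefficient(Z_MOLD, Z_SI)   # mold -> silicon, well bonded
r_delam = reflection_coefficient(Z_MOLD, Z_AIR)   # mold -> air, delaminated

print(f"Mold -> silicon (bonded):  R = {r_bonded:+.2f}")   # partial reflection
print(f"Mold -> air (delaminated): R = {r_delam:+.2f}")    # near -1: total, inverted
```

An air gap reflects essentially the entire pulse with inverted phase, which is why delamination stands out so starkly in acoustic images, and why phase inversion is a classic delamination signature.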
To someone unfamiliar with failure analysis of integrated circuits, it can be extremely difficult to imagine how any sort of meaningful data can be produced from a non-functioning piece of electronics – especially when the problem description is often phrased in nebulous terms, sprinkled heavily with empty words like “broken” and “defective”. Yet, in many cases, a good analyst can turn these imprecise terms into a finely honed insight into a particular defect or device. One may ask how this is possible, given the extreme complexity of modern semiconductor devices. In IAL’s case, the answer lies in a well-planned IC failure analysis lab flow that takes a device from initial observations to final reporting.
One of the most challenging cases of PCB failure analysis is the search for an open circuit. Navigating the maze of metal interconnects with probes and an ohmmeter is time-consuming and, frustratingly, often ends without bearing fruit when an analyst encounters a component like a ball-grid array (BGA), whose concealed connections prevent further probing. At this point, the analyst is stuck; removing the component by desoldering would destroy any evidence of an open circuit, and a blind cross section has low odds of success unless the component has a large number of open solder joints. On such occasions, dye penetrant testing can be used to detect solder defects, revealing broken or non-wetted joints at the expense of further testability.
Generally speaking, most discussion of electronics failure analysis is geared towards finding silicon-based integrated circuit defects. The reason for this is fairly straightforward; silicon is, by far, the most prevalent semiconductor used to create modern electronics, and therefore has the lion’s share of defects associated with it. In some cases, however, silicon circuits are simply insufficient for a given application – especially when extremely high-frequency applications are considered. In these cases, it is much more common to use a III-V semiconductor like gallium arsenide (usually referred to as GaAs). Though the high-frequency performance of III-V devices may be much greater than that of their silicon counterparts, their unique construction poses some difficult challenges for a failure analyst hoping to dig into their inner workings.
Performing a detailed failure analysis on electronic circuits requires a wide variety of tools, many of which are targeted at isolating a defect to a single point in the labyrinthine network of metal and polysilicon that makes up an integrated circuit. The vast majority of these tools require the failing device to be electrically biased in its failing condition, at which point data is gathered about the part’s condition – thermal measurements are taken, light emitted from the circuit is gathered, and so on. Often, these tools are sufficient to find a failure; some defects, however, do not appear as readily under these methods of investigation. In these cases, it is often necessary to use a different class of tool, which applies an outside stimulus to the device and then measures the device’s reaction.
PCB failure analysis can be a daunting task in even the most ideal of cases. Modern printed circuit boards are densely-packed, multilayer rat’s nests of copper interconnects, integrated circuits, and discrete components. Isolating a single defect - which may often be a single splash of solder, misregistered via, or cracked copper trace - is an arduous process, requiring hours of probing and isolation to finally narrow down the point of failure. This process is taxing, to say the least; however, the problem is often compounded when the device to be analyzed is no more than a twisted, blackened hunk of burnt PCB material.
One of the benefits of a thorough failure analysis is the ability to properly classify a given IC defect, identifying its most likely origin and determining what caused it. With this data, a manufacturer can determine the proper course of action necessary to respond to the failure. If the defect arose from improper use, then the manufacturer can provide feedback to their customer, letting them know that they may have an inherent design flaw; on the other hand, if the defect is found to be related to the manufacturing process, it becomes necessary to evaluate the potential impact on other products manufactured during the same time frame.
One of the most pivotal points of any IC failure analysis is the process of electrical characterization. In order to correctly understand a failure and choose the proper course of action to find its root cause, it is vital to understand the failure’s electrical signature; for example, analysis of a short circuit will follow a far different path than an FA targeting an open circuit. Since it is so crucial to properly understand the electrical characteristics of a failure, a good FA lab will have a comprehensive semiconductor test program in place that can handle a wide variety of devices.
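As a toy illustration of why the electrical signature steers the rest of the analysis, the sketch below triages a single pin-to-ground resistance reading against a known-good reference part. The `classify_pin` helper and its thresholds are hypothetical, chosen only to show the branching logic, and are not part of any standard test program:

```python
def classify_pin(resistance_ohms, ref_ohms, tol=0.5):
    """Toy triage of a pin-to-ground resistance reading against a known-good
    reference part. Thresholds are illustrative, not industry standards."""
    if resistance_ohms < 10 and ref_ohms > 1000:
        return "short"            # near-zero ohms where the good part reads high
    if resistance_ohms > 1e8 and ref_ohms < 1e7:
        return "open"             # effectively infinite where the good part conducts
    if abs(resistance_ohms - ref_ohms) / ref_ohms > tol:
        return "leaky/degraded"   # conducts, but well outside the reference band
    return "nominal"

print(classify_pin(2, 50_000))        # short
print(classify_pin(5e9, 50_000))      # open
print(classify_pin(52_000, 50_000))   # nominal
```

Each outcome would steer the lab toward a different toolset: thermal or emission techniques for a short, probing and isolation for an open.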
In many cases, performing a successful failure analysis hinges upon being able to quickly and accurately characterize a contaminant that caused a device to malfunction. Often, elemental analysis techniques like energy dispersive spectroscopy (EDS) or x-ray fluorescence (XRF) provide enough data about a given sample – for example, a contaminant with high levels of chlorine is almost universally bad, due to the highly ionic nature of chlorine. In other cases, however – especially cases involving organic contaminants, which often appear on elemental analyses as high concentrations of carbon and oxygen with little else that might help an analyst identify them – it is necessary to know not only the elements present in a contaminant, but how they are bonded together. In these cases, Fourier transform infrared spectroscopy, or FTIR analysis, can provide the answer.
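The "Fourier transform" in the name can be sketched in a few lines: the instrument records an interferogram (detector signal versus mirror position), and a Fourier transform turns that signal back into a spectrum whose peaks correspond to molecular bonds. The toy signal below uses two arbitrary "spectral lines" to show the round trip; all frequencies and amplitudes are illustrative, not real absorption bands:

```python
import math

N = 1024
lines = {100: 1.0, 240: 0.5}   # {frequency bin: amplitude} of two made-up lines

# Synthesize the interferogram: a sum of cosines, one per spectral line
interferogram = [
    sum(a * math.cos(2 * math.pi * f * n / N) for f, a in lines.items())
    for n in range(N)
]

def spectral_amplitude(signal, k):
    """Magnitude of the k-th DFT bin, normalized so a unit cosine reads 1.0."""
    n_pts = len(signal)
    re = sum(s * math.cos(2 * math.pi * k * n / n_pts) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * n / n_pts) for n, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n_pts

print(spectral_amplitude(interferogram, 100))  # ~1.0: strong line recovered
print(spectral_amplitude(interferogram, 240))  # ~0.5: weak line recovered
print(spectral_amplitude(interferogram, 400))  # ~0.0: nothing at this frequency
```

A real instrument does the same transform (with an FFT, apodization, and calibration) over thousands of wavenumbers at once, which is what makes FTIR so much faster than scanning one wavelength at a time.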
One of the most critical points of any failure analysis is the decapsulation step. Decapsulation is the point where non-destructive analysis ends and more risky operations begin – the die is removed from its protective plastic shell to allow the failure analyst access to the complex circuitry within. Usually, decapsulation is performed using wet-etch procedures, dissolving the plastic encapsulant material of an IC package with any of a variety of different acids or solvents. The downside of this approach, of course, is that working with these potentially hazardous chemicals necessitates serious safety measures like fume hoods and other personal protective equipment. Furthermore, the chemicals most often used for decapsulation, though relatively common, are still not cheap and can amount to a significant expense depending on the number and type of parts that must be decapsulated. Most importantly, the chemical decapsulation process can often disrupt the failure on the part; in some cases, like when working with GaAs or other III-V semiconductors, the decapsulation chemicals can even dissolve the integrated circuit completely! Fortunately, there is an experimental alternative to chemical decapsulation: laser decapsulation is one of the most promising new technologies on the horizon.
Elemental analysis tools, like Auger electron spectroscopy, can often be exceptionally helpful for providing qualitative data about the composition of a material. An unknown material can be quickly analyzed to look for the presence of harmful corrosive elements or organic contaminants that may be relevant to a failure. In some cases, however, knowing whether or not an element is present does not tell the whole story; manufacturers may have guidelines which set limits on the amount of a given substance that may be present on a device, or specifications for the material composition of certain parts of their product. In these cases, it is necessary to perform a more thorough, quantitative analysis.
One of the most powerful tools at a failure analyst’s disposal for non-destructively studying the integrity of a component’s packaging is scanning acoustic microscopy. By using ultrasonic waves, the scanning acoustic microscope can detect cracks, air gaps, or delamination with relative ease. There is one caveat to the results from scanning acoustic microscopy, however; in many cases, seeing is believing, and an acoustic image does not necessarily quench the burning desire to view the defect directly. Many manufacturers requesting acoustic imaging services may call the results of a test into question (especially if the result is not one they find favorable); in these cases, it may be necessary to provide another, more tangible piece of evidence.