
Semiconductor and Electronic Failure Analysis Blog

Welcome to the Semiconductor and Electronics Failure Analysis Blog, a discussion forum for all things related to electrical, integrated circuit (IC), board, and electronics failure analysis. Please subscribe to our feed and feel free to leave a comment or question. Thanks for visiting.

Insight Analytical Labs often uses FTIR (Fourier Transform InfraRed) spectroscopy when performing root cause analysis on the failure of an electronic component.  Learn more.

So you’re having a problem with a printed circuit board assembly (PCBA). You’ve done all you can to narrow down the failure site, but you’re at the limit of the capabilities your equipment has available to you. What do you do now?

Lighthearted look at Scanning Acoustic Microscopy, or SAM, and the microchip manufacturing process.

MIL-STD-883, Method 2018 specifies sampling procedures for die selected from wafers or from die that have already been packaged. In this post, we dive into the purpose of this standard and the processes it requires.

Are you thinking about starting your own electronics failure analysis lab? In this post we discuss basic failure analysis capabilities for an in-house or independent lab, along with the costs, facilities, maintenance, supplies, inspections, and personnel selection that must be taken into account.

In the current era of System-on-Chip (SoC) designs with 10 or 11 metal layers, copper metallization, exotic dielectric materials, and area pads scattered across the entire die, FIB provides an ideal diagnostic aid.

Understanding why things fail is critical to preventing failure in the future. Whether it is a single catastrophic failure whose root cause needs to be understood to prevent future critical failures, or a test run of a prototype that is about to go to production, understanding the root causes of failure is essential. Mechanical failures, in particular, can be complex and difficult to understand. When a material fails mechanically, several tests and images must be taken in order to understand the cause of the failure. Taking your sample to a lab with electron microscopy services can help you dig down further to find out where your failure might have occurred.

Computers used to take up entire rooms to perform what we would today consider rather rudimentary calculations. As computing power increased, the size of the computers decreased. What was once an easily spotted blown vacuum tube became hard-to-see electron leakage through a PNP junction. Enter the world of microelectronics. Every mobile electronic device today is powered by microelectronics. They need to be small, fast, and reliable. They also need to be durable. When things go wrong with them, we want to know what caused the failure and how it can be fixed to make our electronics as reliable as possible.

Modern semiconductors and integrated circuits are built with geometries measured in terms of angstroms and nanometers, and defects on these devices may be completely invisible under an optical microscope. For uncovering even the smallest defects, IAL offers scanning electron microscopy services, providing a crisp, clear image of any anomaly imaginable. Learn more.

In their final, packaged form, many of the secrets of integrated circuits are concealed from an analyst looking to uncover a failure. While techniques like x-ray and acoustic microscopy can penetrate the shroud of mold compound and FR4 that enfolds the semiconductor die at the heart of a device and reveal some information, they rarely tell the whole story; to truly determine the root cause of failure, an analyst almost always needs to be able to examine the device directly. This examination may take many forms - optical or electron microscopy may reveal a defect site, or elemental analysis tools may identify contaminants causing corrosion or other issues - so the techniques used to expose the semiconductor die must take into account the potential failure mechanisms that are most likely for any given device. IC decapsulation is the process - part art, part science - of breaking into these devices to discover what defects might lie within.

The humble capacitor is one of the most fundamental components of any electronic assembly. These ubiquitous passive devices come in a variety of different flavors; whether formed using electrolytic fluids, metal foils, the metals and oxides of an integrated circuit, or any of a multitude of other materials, there is hardly a printed circuit assembly in the world without at least one capacitor mounted somewhere on its surface. Capacitors form the backbone of charge pumps, frequency filters, power conditioners, and many other common applications; since these components are so crucial to these operations, a malfunctioning capacitor can often cause complete failure of a system. At first blush, a capacitor would seem to be a fairly straightforward device to perform analysis on (after all, how complex can two electrodes separated by a thin dielectric be?), but capacitor failure analysis poses unique challenges that must be met with equally unique approaches.
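
As a back-of-the-envelope illustration (the numbers are ours, not from the original post), even the ideal parallel-plate formula hints at why real capacitors are less simple than they look:

$$ C = \frac{\varepsilon_0 \varepsilon_r A}{d} $$

A nominally simple 1 µF ceramic capacitor with a relative permittivity around 3000 and a 10 µm dielectric would need roughly 4 cm² of plate area as a single layer; practical parts achieve the same capacitance by stacking dozens of thin layers, and every one of those layers, electrodes, and terminations is a potential failure site.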

The modern electronics consumer is a demanding, discerning individual. The demands placed on any product are extensive; end users expect a wide range of functionality, with high reliability, at low cost. A device as ubiquitous as a smartphone is capable of facilitating transcontinental data transfer, displaying cutting edge graphics, and performing feats of mathematical might, all in a package small enough to fit into a pocket - and at a price point low enough not to empty said pocket. Modern electronic systems require hundreds, if not thousands, of components, all working together in concert to provide the functionality consumers have come to rely on; from the sheer computing power of a cutting-edge microprocessor to the simplicity of a passive capacitor, each component is vital to a device’s operation, since extraneous or redundant parts are trimmed during design in order to minimize costs. When one of these components fail - even one as minor as a surface mount resistor - a device can go from a modern marvel of technology to an extremely expensive inert hunk of plastic and metal. Determining why a device failed is often an excellent first step towards improving the reliability of future generations of products;  electronic…

Today’s cutting edge microelectronics are twisting, labyrinthine networks of nanotechnology, with layers upon layers of intertwined metallic and crystalline structures. Gone are the days when one could put a device under an optical microscope and, over the course of a few hours, sketch out a relatively accurate functional schematic; the process technology used in creating a modern microprocessor or memory device creates features so small that they are physically impossible to resolve with optical microscopy, since the wavelength of visible light is so much larger than the features being imaged. Higher resolution electron microscopes can easily resolve the nanometer-scale features on these devices, but the ultra-high magnifications needed to do so mean that only very small areas of the die can be viewed at a given time, an equally restrictive roadblock to understanding a circuit as a whole. Performing intellectual property analysis on a device in order to protect patents or reverse engineer obsolete parts which are no longer manufactured is, in many cases, an exercise in competing compromises; one can get a highly focused analysis with electron microscopy that is very limited in scope, or a very broad look at a device that may lack the necessary depth for…
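
To put a rough number on that optical limit (an illustrative estimate, not a figure from the original article), the classical diffraction limit is

$$ d \approx \frac{0.61\,\lambda}{\mathrm{NA}} $$

so even green light at λ ≈ 550 nm through a high-quality NA ≈ 0.95 objective cannot resolve features much smaller than about 350 nm, more than an order of magnitude coarser than the features on a modern process; electron wavelengths, by contrast, are measured in picometers.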

The modern electronics and semiconductor markets are fiercely competitive. Manufacturers are constantly vying for supremacy, attempting to carve out a niche with novel, innovative approaches to fulfill the needs and wants of an increasingly demanding customer base. In such a rapidly changing, fast-paced environment, bringing a new product to market can be challenging, especially without any sort of knowledge of how the competition might measure up. Often, a manufacturer looking to break into the market will employ a third party to perform a technical competitive analysis – an in-depth look at the construction of a product that can provide insight into key details like process node, die size, and functional block size that can be used to perform cost and performance analyses. At first blush, technical competitive analyses appear completely separate from failure analysis services; in reality, the tools and techniques developed for finding defects on cutting-edge products translate seamlessly to the type of teardowns necessary to perform a deep dive into the minutiae of a product’s construction.

While solder, the metallic alloy that is melted and reflowed to create joints between components and printed circuit boards, may not be as exciting and glamorous as the intricate webwork of copper and polysilicon in an integrated circuit, it is still vital to the creation of an electronic device. Without proper solder connections, even the most advanced of integrated circuits is reduced to an ineffectual paperweight, lacking any pathways for power and signals to travel over. Being able to perform a solder quality inspection is, therefore, an integral part of any failure analyst’s repertoire of skills.

Failure analysis of consumer electronics can pose a wide variety of challenges, due to the multitude of different failure mechanisms that might befall a device. Environmental factors, mistreatment, and even the way that the device is packaged can contribute to the untimely demise of a device. While the vast majority of integrated circuits are packaged using a plastic or epoxy based mold compound, some high-reliability devices - especially those used in aerospace applications - are encased in hermetically sealed tombs of ceramic and metal. Performing electronic failure analysis of these hermetic packages poses a new set of challenges, as there are certain failure mechanisms and tests that are applicable only to this type of packaging.

Continued from A Study in Printed Circuit Board Failure Analysis, Part 1. The next step in the failure analysis process, revealing the defect, would almost certainly involve the destruction of the board; as a result, a strong hypothesis was necessary before embarking upon any further analysis. In order to determine the best course of action, our analyst reviewed the facts as they stood before proceeding.

Over the course of a failure analyst’s career, they will be exposed to an extensive and varied array of devices. No matter the technology – whether they be nanoscopic silicon sensors with moving parts so small as to defy belief or massive circuit assemblies composed of thousands of discrete components and integrated circuits – no device is completely immune to failure. Variations in process control, insufficiently robust designs, and extended abuse by an end user can all spell early doom for a device. In our introductory article, we took a high-level overview of the failure analysis process, discussing the steps an analyst takes to turn a failing, rejected product into actionable knowledge for process improvement; in this column, we will see how these steps are applied to a specific failure. Naturally, examining a relatively trivial case would not provide the necessary depth of learning, so instead we have chosen an example of a failure many analysts dread: an intermittent failure on a printed circuit assembly.

Continued from Failure Is The First Step on the Road To Success, Part 1. Non-destructive testing overlaps to a certain degree with the next step in the process, wherein an analyst attempts to isolate the failure to as small an area as possible. This phase of the project may include both destructive and non-destructive aspects as necessary to locate a defect site. Some problems may be fairly simple to isolate, given the correct tools; a low resistance short between nodes of a board may be revealed in a matter of seconds using a thermal imaging camera, and the aforementioned cracked solder joint found during visual inspection can usually be probed for continuity with very little trouble. Other defects may require patience, a steady hand, and a methodical plan of attack; finding a leakage site on a PCB, for example, may require an analyst to cut traces (both on the surface of the PCB and buried within) in order to limit the number of possible locations for a defect.

It is an inexorable fact of life that all electronic assemblies – from the most complex, densely interconnected systems to the cheapest mass-produced consumer devices – will eventually fail. Such devices may be victims of various forms of abuse at the hands of their end users, subject to mechanical, environmental, or electrical stresses far beyond what any design engineer would consider reasonable. Some, especially early prototypes, may be inherently flawed and susceptible to malfunction as a result of a simple mistake made during one too many late night, bleary-eyed design review sessions, conducted over energy drinks and cold takeout. Of course, it is also possible for assemblies to simply die of old age; eventually, normal wear and tear will break down even the most robust of electronic devices. In all these cases, the result is the same (at least at a very high level): a device that no longer performs its intended function.

If one were able to take a modern printed circuit board and examine the vast network of metal traces, completely unobscured by dielectric materials, one would find an intricate, three-dimensional lacework of finely interwoven metal threads. Thin filaments of copper, reminiscent of a spider’s web, snake outward from ring-shaped vias, while in other places metallic tributaries flow into the large bus lines which carry rushing rapids of electrons that provide power to the devices on the board. The many layers of the board taken as a whole bring to mind a futuristic highway system, with thousands upon thousands of individual pathways crossing over one another, routing traffic seamlessly from point to point. Unfortunately, this highway system is not always perfect; thin filaments may break, rushing rapids of electrons may overflow, and improperly built pathways eventually fail, turning these intricate patterns into tangled snarls sure to frustrate any user. In these cases, electronic device failure analysis can help to unravel the tangled web that was woven; one of many approaches that may be taken in these scenarios is printed circuit board delayering.

Modern printed circuit assemblies are vastly complex labyrinths of interconnected devices, comprising many hundreds of components and thousands of individual signals being routed through the networks of metal, silicon, and dielectric material. While the individual integrated circuits on an assembly may steal most of the glory – just look at the buzz surrounding the processors inside the latest and greatest cell phone, video card, or supercomputer – interconnect technology is just as important to the success of a given product. To ensure a robust product, the reliability of the connections between individual components and the PCB that hosts them is paramount; to maximize this reliability, failure analysis of electronic assemblies to investigate solder failures is an excellent springboard to continuous improvement.

The focused ion beam (FIB) is a powerful tool in the hands of a skilled electronics failure analysis engineer. In this post, we use the metaphor of a surgeon wielding a scalpel to help explain the power and versatility of the FIB system.

The culminating moment of triumph for any failure analysis project is when a defect is captured in all its glory - that instant when the noisy tangle of data and observations is crystallized into a coherent analysis due to the addition of one crowning piece of evidence. While it would seem that the final photograph, showcasing the defect that lies at the root of a failure, would draw a failure analysis project to a close, there is often still work left to do; in many cases, analyzing semiconductor failures requires an even deeper examination of the defect, to determine its most likely origin.

At IAL, we constantly strive to provide our customers with accurate, reliable data. We realize that our contribution to a given project may have far-reaching ramifications that continue long after we've sent our reports and finished our analyses. As the microelectronics failure analysis services we provide can be so vital in our customer's process of continuous improvement, it is important to us that we ensure that our tools are up to the task of ferreting out the root cause of failure in even the most complex of devices. Many who are unfamiliar with FA are unaware of the types of tools that might be in an analyst's repertoire; what follows is a brief overview of an analyst's toolbox, all of which can be applied to increase understanding of a failure.

IAL has always been proud to be a one-stop shop for electron microscopy services, consistently offering the high-quality imaging and analysis that is necessary for the failure analysis and intellectual property industries. As the microelectronics industry has evolved, producing smaller, more complex devices, electron microscopy tools have been forced to evolve as well; in keeping with our commitment to providing the highest quality results for our customers, IAL is pleased to announce that we have taken delivery of our new FEI Versa 3D dual beam tool, combining the increased resolution of a field emission electron microscope with the flexibility and added capability of a focused ion beam (FIB) tool. This new acquisition represents an exciting leap forward for our microscopy lab, allowing us to offer several services that previously were out of our reach.

Modern consumer electronics devices must withstand all manner of harsh environments. They may operate in areas where humidity is extremely high, providing ample amounts of ambient moisture that can be detrimental to the operation of sensitive circuits. Many dirty environments are filled with dust, grime, and a whole laundry list of other contaminants ranging from the innocuous to the truly disgusting that can be pulled in by a device’s cooling fans, introducing myriad organic and inorganic contaminants that may collect on the surface of a device. Still other factors may exist that many designers may never even consider as a possible source for contamination; in one case, IAL opened a device that had been returned from the field, only to find the inside thoroughly coated with the remains of unfortunate insects who had attempted a too-thorough inspection of the system’s fan. All of these things may contribute to the malfunction of an electronic device; however, it is up to the analyst to determine whether these contaminants or other environmental factors are truly at the root cause of the failure, or are merely incidental. Could ionic contamination, introduced from the environment, be causing a short circuit? Are the failing solder joints…

Part of the inherent nature of failure analysis is the fact that no two jobs will ever be quite the same. Failure modes, environmental conditions, device applications – all these parameters shape the circumstances of a given failure analysis project. Managing a failure analysis project therefore requires particular care and attention, to ensure that the proper tools and techniques are chosen for a given job. Charting the course of a failure analysis project requires not only a solid grounding in the tests and equipment used in the lab, but also requires on-the-fly synthesis of disparate data points – not just the incoming data generated by the failure analysts, but also information about how and under what conditions a device was used before its failure.

With the release of smaller, more feature-laden devices every year, it is obvious that the electronics industry is in a constant state of flux and evolution. The increase in complexity of a single integrated circuit over the years is undeniable, whether it is due to paradigm shifts in the methods of construction and operation or simply a result of the inexorable march of Moore’s law, which predicts that the number of transistors on integrated circuits will double roughly every two years. Naturally, this constant change in technology has serious ramifications for failure analysis; a technique that was suitable for older products may not be sufficient for submicron technologies, with their densely-packed features and towering metal stacks. The failure analysis industry has therefore needed to respond quickly to changes in technology and develop new techniques capable of handling even the most complex of devices.
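
In the rule-of-thumb form usually quoted (a generic illustration, not a claim about any specific product line), Moore's law can be written as

$$ N(t) \approx N_0 \cdot 2^{t/2} $$

with t in years, so a device family that started a decade ago at N₀ transistors would be expected to carry roughly 2^5 = 32 times as many today - a correspondingly larger haystack for the failure analyst to search.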

Every morning in the failure analysis lab holds the potential for a new challenge. A board from a missile guidance system, an integrated circuit from the latest cell phone or video game console, or pieces of a high tech neural implant may be but a few of the many different devices that analysts may find waiting on their desks in the morning (after, of course, a requisite stop at the coffee pot – like many other engineering fields, xanthic alkaloids are one of the cornerstones of a healthy analyst’s diet). Though there is a vast range of device types that may cross an electronics failure analyst’s desk, there are similarities between every failure analysis (FA) project that can be examined; regardless of the unique circumstances of a given electronic device, there are still a handful of standard steps that come together to make up a typical day in the electronic failure analysis lab.

Every failure analysis project is unique; rarely, if ever, will an analyst come across a defect that is exactly identical to one found on a previous project. The wide range of process types, device applications, and conditions that contribute to failure will change from device to device; since every defect is shaped by the circumstances surrounding its inevitable end of life, no two failures will be alike. Although the specific circumstances of failure may be one-of-a-kind, most IC defects still fall within one of several different categories. These categories are not just convenient pigeonholes for describing a failure - in many cases, they help to indicate the proper course of analysis for the device.

In many cases, it is necessary to isolate a single defect amidst a vast array of circuitry, singling out a single leaky gate or overdriven transistor from among billions, in order to perform a successful failure analysis. Without some visual way to pluck the single defective device out from the lineup of identical looking circuit elements, an analyst cannot properly target the more destructive steps in the analysis, like cross-section or deprocessing. While some tools, like thermal imaging or other heat-sensitive techniques, can be successful in isolating an area for further investigation, in some cases they aren’t enough; the defect may not be generating enough heat to be detected. In these cases, a different approach, in which one takes the time to understand a device more completely by contrasting some sort of characteristic signature of malfunctioning devices against those that are properly functioning, may be able to isolate the failure. Emission microscopy is one such method of characterizing devices, and offers an excellent picture of many different types of failure upon which to build an analysis.

There are many hurdles that must be overcome when attempting to introduce a new electronic gadget to the market. The trials and tribulations of creating a prototype and developing a unique, compelling solution to a consumer problem are only the first step in a long series of trials; with a working prototype in hand, a manufacturer must perform extensive testing on their new product in order to ensure reliability over its lifespan, a process that often leads to several costly design revisions before the product is even released for general consumption. Even after a reliable product has been produced, the qualification process for the new device is not over; unless the manufacturer is making a type of device that is specifically exempted, the new product must undergo RoHS certification or be barred from sale in the vast majority of markets.

In part one of this series of tips for outsourcing or hiring an electronics failure analysis service, we examined the wide variety of information that should be gathered before sending a failing part out for analysis. The construction of a detailed packet of data, including a problem description, a background or history of the failing device, and any auxiliary documents like layouts or schematics that may be necessary in chasing down the root cause of failure of a device, is an involved process - but, once such a dataset has been assembled, the struggles of choosing a lab to entrust it with can begin in earnest. Just as one would not want to drop an expensive supercar off with any random shadetree mechanic, a one-of-a-kind failure should be sent to a lab with the best (and most relevant) capabilities, experience, and a proven track record, in order to help ensure the best results.

Inevitably, in any product’s life cycle, there will arise an obstacle that may seem insurmountable: products may experience unexpected levels of inexplicable malfunctions after hitting store shelves, low production yields may wipe out any hope of profitability, or any of a number of other issues can rear their heads. When faced with such gremlins, manufacturers often struggle to find the best approach for solving their woes - without being able to pin down the problem, finding a solution is impossible. External failure analysis services can often be invaluable in such situations; however, the task of choosing a lab - and providing them with the information needed to ensure their success - can be difficult as well. Fortunately there are some tips that can help in the process of hiring an electronics failure analysis service, to ensure that the necessary results are obtained.

Electronic component distributors are faced with myriad risks when dealing with the vast array of devices available on the contemporary market. The looming specter of counterfeit or fraudulent devices, combined with the expected stresses of dealing with run-of-the-mill complaints and RMAs, can be an overwhelming combination of potential problems that must be overcome. In order to surmount these obstacles, diligent distributors must often enlist outside assistance. Fortunately, electronic component failure analysis labs are perfectly poised to help these suppliers struggle through any quality issues they may face.

As previously discussed, a cross-section of a printed circuit board can be an excellent way to qualify a new process and determine whether a product is being produced to specification. The data about layer spacing, plating thicknesses, and interconnect quality that can be obtained through a well-targeted cross-section is invaluable in determining whether appropriate manufacturing procedures are being followed. The cross-section is not only useful for determining the acceptability of a given product, however; indeed, PCB cross-section analysis is often one of the only ways to identify certain types of PCB defects.

Non-destructive testing provides the foundation for any thorough failure analysis project. Without properly gathering initial data about the part - condition of the package and leads, electrical behavior, and so on - an analyst would be hard pressed to identify and track down a defect. Often, the use of acoustic microscopy for electronics component inspection can provide invaluable data about the condition of a part that directly leads to identifying the root cause of failure - for example, delamination of the package over the lead for an open-circuited signal. Looking for delamination is only one of the acoustic microscope's applications, however; properly applied, it can reveal much more.

The final step in the majority of integrated circuit failure analysis projects involves deprocessing the device, removing layers of metal and oxide to expose the defect on the device. Though the techniques of deprocessing are incredibly involved and require extremely high levels of skill, they are still inherently brute-force techniques, involving volatile chemicals and abrasive polishes. In some cases, such an approach may be too aggressive. Fortunately, there are tools in an analyst’s repertoire that can be wielded with scalpel-like precision; using a focused ion beam (or FIB) for failure analysis allows an analyst to forgo lapping or wet etching in favor of drilling directly to the site of failure.

The culminating point of any semiconductor failure analysis job is the task of deprocessing the integrated circuit: removing the various layers of metal and oxide that make up a device until a defect or damage site is revealed. While, in theory, deprocessing seems straightforward, there are many potential pitfalls and nuances that must be accounted for; as such, IC failure analysis companies that can successfully offer IC deprocessing services on a wide range of parts are few and far between. Successful results hinge upon correctly identifying a process type and matching it with the proper set of techniques from an IC failure analysis engineer’s comprehensive set of tools.

The electronics industry is constantly striving to make products faster, smaller, and more power-efficient than previous generations. A task that may have required a dedicated desktop computer fifteen years ago can now be performed on a smartphone weighing several ounces, often while running several other processes at the same time. This increase in computing power is directly correlated to the level of complexity found in modern electronics; the intricacy of the circuits in a modern device, connected by the labyrinthine network of copper and dielectric material that makes up a printed circuit board, is far beyond that found in the systems of yesteryear. Qualifying these incredibly dense assemblies can be an immense challenge; fortunately, an approach for quickly examining the construction of these parts can be found in the PCB cross-section.

Historically, the role of IC failure analysis labs has been fairly narrowly defined. A failing device is submitted for analysis (either to an internal or external FA lab), where it is torn apart and subjected to countless different tests before the root cause of failure is finally determined, with analysts trained to distinguish between failures due to manufacturing defects and unintentional overstress induced in a customer’s application (among other typical causes of failure). As the microelectronics market has expanded and evolved, however, failure analysts find themselves faced with another potential source of problems: devices that claim to be something that they are not.

The intricate web of interconnects that makes up an integrated circuit is a mind-numbing maze of metal, stretching to all corners of a microchip. These metal traces race from one end of the device to the other, traversing the multiple layers of metal used to route signals from point to point on the IC. Modern IC devices are inherently three-dimensional; most modern devices require as many as nine or ten different metal layers on an integrated circuit, all stacked atop one another, in order to achieve the necessary signal density. The dense stack-up of layers can often make it difficult to determine key information about a given device from top-down; in these cases, an analyst can augment their understanding of a part through an integrated circuit (IC) cross section.

In the blink of an eye, modern electronics systems can sample sensors, perform countless computations, and drive dazzling displays. The sheer amount of data a single system can generate and process is staggering. All this microcomputing muscle would be for naught, however, without a way to store data. Memory is one of the core components of almost any system, from a simple RFID tag to the most powerful processing behemoth. While electronic memories can certainly store and accurately recall more detail than the notoriously malleable human mind, they are by no means immune to failure. Since many modern memories have upwards of 4 billion "bits" of potential data that may be malfunctioning, it is often necessary to enlist outside help in the form of failure analysis services to get to the root of a case of silicon paramnesia.
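
As a sketch of how those billions of bits are typically screened (an illustrative simulation, not IAL's production test flow), a classic March C- pattern walks the address space several times, writing and reading alternating data so that stuck-at and coupling faults betray themselves as specific failing addresses; the read and write callbacks below are hypothetical stand-ins for real tester hardware.

```python
# Illustrative March C- style memory test over a simulated array (not a real
# tester program). Any bit whose readback disagrees with the expected value
# is recorded so the failing addresses can later be mapped to physical cells.

def march_c_minus(read, write, size):
    """Run a March C- pattern; return a list of (address, expected, got)."""
    fails = []

    def check(addr, expected):
        got = read(addr)
        if got != expected:
            fails.append((addr, expected, got))

    up, down = range(size), range(size - 1, -1, -1)
    for addr in up:                      # M0: write 0, ascending
        write(addr, 0)
    for addr in up:                      # M1: read 0, write 1, ascending
        check(addr, 0); write(addr, 1)
    for addr in up:                      # M2: read 1, write 0, ascending
        check(addr, 1); write(addr, 0)
    for addr in down:                    # M3: read 0, write 1, descending
        check(addr, 0); write(addr, 1)
    for addr in down:                    # M4: read 1, write 0, descending
        check(addr, 1); write(addr, 0)
    for addr in up:                      # M5: read 0, ascending
        check(addr, 0)
    return fails

if __name__ == "__main__":
    # Simulated 1 Kbit memory with a single stuck-at-1 cell at address 0x2A.
    mem = [0] * 1024
    read = lambda a: 1 if a == 0x2A else mem[a]
    write = lambda a, v: mem.__setitem__(a, v)
    print(march_c_minus(read, write, len(mem)))   # reports address 0x2A three times
```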

Scanning Electron Microscopy (SEM) is one of the most fundamental tools of the failure analysis lab. The ability to examine even the most minute of details at high resolution is crucial, especially when a given defect might be only a few nanometers wide. As integrated circuit processes continue to shrink, the use of a SEM becomes a necessity, as features shrink below the smallest size that can be resolved with optical wavelengths of light. SEM is by no means limited to imaging failures on integrated circuits, however; with the proper techniques, the SEM can also be a powerful tool for material characterization.

The printed circuit board is one of the cornerstones of modern electronics technology. The sheer number of devices required for even relatively simple consumer electronics, and the interconnects needed to link them, cannot be realized without modern high-density circuit board technology. Naturally, this increased level of complexity poses unique challenges for failure analysis; finding an open trace, for example, may require hours of painstaking work poring over board layouts, performing countless microsurgeries on the board to finally isolate the failing node. Even then, once the failure has been isolated through extensive printed circuit board testing, an analyst's tribulations are not finished, as they must then find a way to unearth the buried failure.

When examining a contemporary integrated circuit, an electronic failure analyst must face myriad challenges; metal interconnects can be too dense for traditional isolation techniques to be of any value, critical dimensions may be too small to be thoroughly examined in any but the most cutting edge of microscopes, and layers are often spaced so finely as to make planar deprocessing a nerve-wracking, pulse-pounding undertaking, in which one slip of the finger can result in irreparable damage to a device undergoing analysis. As if these hurdles weren’t enough to contend with, analysts must also grapple with a rapidly expanding segment of the microelectronics market: semiconductor devices that incorporate moving parts as a part of their operation. These devices, referred to as MEMS (Micro-Electro-Mechanical Systems), offer a unique challenge from the standpoint of semiconductor failure analysis, largely due to their markedly different construction.

Though electron microscopy is an invaluable tool for electronic failure analysis, it has limitations that must be accounted for in the failure analysis lab. Learn more.

One of the cornerstones of non-destructive failure analysis of packaged integrated circuits, allowing an analyst a relatively simple way of examining the structural integrity of a device, is Scanning Acoustic Microscopy (SAM). By using tightly focused pulses of ultrasonic waves and analyzing the sound reflected by and transmitted through a sample, it is possible to create a detailed, accurate image of a packaged semiconductor device, showing any pockets of air or delamination that may contribute to early-life failure. SAM has been an invaluable tool in performing analysis on the types of parts that have traditionally been the most prevalent in the industry - plastic encapsulated, wire-bonded ICs. Though the industry may be shifting away from these types of devices in favor of packaging technologies like flip-chip ball grid arrays (FCBGAs) due to the more efficient use of bonding space and potential for increased thermal compensation, the SAM is not obsolete; indeed, with a few changes, SAM can provide invaluable data on these cutting-edge technologies.

To someone unfamiliar with failure analysis of integrated circuits, it can be extremely difficult to imagine how any sort of meaningful data can be produced from a non-functioning piece of electronics – especially when the problem description is often phrased in nebulous terms, sprinkled heavily with empty words like “broken” and “defective”.  Yet, in many cases, a good analyst can turn these imprecise terms into a finely honed insight into a particular defect or device. One may ask how this is possible, given the extreme complexity of modern semiconductor devices. In IAL’s case, the answer lies in a well-planned IC failure analysis lab flow that takes a device from initial observations to final reporting.

One of the most challenging cases of PCB failure analysis is the search for an open circuit. Navigating the maze of metal interconnects with probes and an ohmmeter is time-consuming and, frustratingly, often ends without bearing fruit when an analyst encounters a component like a ball-grid array (BGA), with concealed connections that prevent further probing. At this point, the analyst is stuck; removing the component by desoldering would remove any evidence of an open circuit, and a blind cross section has low odds of success unless the component has large numbers of open solder joints. On such occasions, dye penetrant testing can be used to detect any solder defects, revealing broken or non-wetted joints at the expense of further testability.

Generally speaking, most discussion of electronics failure analysis is geared towards finding silicon-based integrated circuit defects. The reason for this is fairly straightforward; silicon is, by far, the most prevalent semiconductor used to create modern electronics, and therefore has the lion’s share of defects associated with it. In some cases, however, silicon circuits are simply insufficient for a given application – especially when extremely high frequency applications are considered. In these cases, it is much more common to use a III-V semiconductor like gallium arsenide (usually referred to as GaAs). Though the high frequency performance of III-V devices may be much greater than their silicon counterparts, their unique construction poses some difficult challenges for a failure analyst hoping to dig into their inner workings.

Performing a detailed failure analysis on electronic circuits requires a wide variety of tools, many of which are targeted at isolating a defect to a single point in the labyrinthine network of metal and polysilicon that make up an integrated circuit. The vast majority of these tools require the failing device to be electrically biased in its failing condition, at which point data is gathered about the part’s condition – thermal measurements are taken, light emitted from the circuit is gathered, and so on. Often, these tools are sufficient to find a failure; some defects, however, do not appear as readily under these methods of investigation. In these cases, it is often necessary to use a different class of tool, which uses an outside stimulus to create a change on the device, then measures the device’s reaction.

PCB failure analysis can be a daunting task in even the most ideal of cases. Modern printed circuit boards are densely-packed, multilayer rat’s nests of copper interconnects, integrated circuits, and discrete components. Isolating a single defect - which may often be a single splash of solder, misregistered via, or cracked copper trace - is an arduous process, requiring hours of probing and isolation to finally narrow down the point of failure. This process is taxing, to say the least; however, the problem is often compounded when the device to be analyzed is no more than a twisted, blackened hunk of burnt PCB material.

One of the benefits of a thorough failure analysis is the ability to properly classify a given IC defect, identifying its most likely origin and determining what caused it. With this data, a manufacturer can determine the proper course of action necessary to respond to the failure. If the defect arose from improper use, then the manufacturer can provide feedback to their customer, letting them know that they may have an inherent design flaw; on the other hand, if the defect is found to be related to the manufacturing process, it becomes necessary to evaluate the potential impact on other product manufactured during the same time frame.

One of the most pivotal points of any IC failure analysis is the process of electrical characterization. In order to correctly understand a failure and choose the proper course of action to find its root cause, it is vital to understand the failure’s electrical signature; for example, analysis of a short circuit will follow a far different path than an FA targeting an open circuit. Since it is so crucial to properly understand the electrical characteristics of a failure, a good FA lab will have a comprehensive semiconductor test program in place that can handle a wide variety of devices.
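
A drastically simplified sketch of that first electrical look is shown below; measure_current() is a hypothetical stand-in for whatever curve tracer or source-measure unit actually drives the pin, and the thresholds are arbitrary, but the idea of bucketing a pin's I-V behavior into short, open, or nominal is the same.

```python
# Minimal, illustrative pin characterization: sweep a small voltage across a
# pin, estimate the effective resistance, and bucket the result. The
# measure_current() callback is a hypothetical stand-in for real instrumentation.

def classify_pin(measure_current, volts=(-0.5, 0.5),
                 short_ohms=10.0, open_ohms=1e7):
    """Return ("short" | "open" | "nominal", estimated resistance in ohms)."""
    v_neg, v_pos = volts
    i_neg = measure_current(v_neg)          # amps at negative bias
    i_pos = measure_current(v_pos)          # amps at positive bias
    r_est = abs(v_pos - v_neg) / max(abs(i_pos - i_neg), 1e-15)
    if r_est < short_ohms:
        return "short", r_est
    if r_est > open_ohms:
        return "open", r_est
    return "nominal", r_est

if __name__ == "__main__":
    # Hypothetical failing pin that behaves like a 2-ohm short to ground.
    print(classify_pin(lambda v: v / 2.0))   # ('short', 2.0)
```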

In many cases, performing a successful failure analysis hinges upon being able to quickly and accurately characterize a contaminant that caused a device to malfunction. Often, elemental analysis techniques like energy dispersive spectroscopy (EDS) or x-ray fluorescence (XRF) provide enough data about a given sample – for example, a contaminant with high levels of chlorine is almost universally bad, due to the highly ionic nature of chlorine. In other cases, however – especially cases involving organic contaminants, which often appear on elemental analyses as high concentrations of carbon and oxygen with little else that might help an analyst identify them – it is necessary to know not only the elements present in a contaminant, but how they are bonded together. In these cases, Fourier transform infrared spectroscopy or FTIR analysis can provide the answer.
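
The "Fourier transform" in the name is not just branding; the instrument records an interferogram as a function of mirror travel and recovers the spectrum numerically. The toy example below (synthetic data, not real FTIR output) shows that core step with nothing more than an FFT.

```python
# Toy FTIR illustration: transform a synthetic interferogram, sampled against
# optical path difference, into a spectrum versus wavenumber. Real instruments
# add apodization, phase correction, and calibration on top of this core step.

import numpy as np

n, dx = 5000, 1.0e-4                    # samples and optical path step (cm)
x = np.arange(n) * dx                   # optical path difference, cm

# Synthetic interferogram containing two spectral components (in cm^-1).
interferogram = (np.cos(2 * np.pi * 1700 * x) +       # ~carbonyl region
                 0.5 * np.cos(2 * np.pi * 2900 * x))  # ~C-H stretch region

spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber = np.fft.rfftfreq(n, d=dx)   # cm^-1

strongest = wavenumber[spectrum.argsort()[-2:]]
print(sorted(strongest.tolist()))       # recovers [1700.0, 2900.0]
```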

One of the most critical points of any failure analysis is the decapsulation step. Decapsulation is the point where non-destructive analysis ends and more risky operations begin – the die is removed from its protective plastic shell to allow the failure analyst access to the complex circuitry within. Usually, decapsulation is performed using wet-etch procedures, dissolving the plastic encapsulant material of an IC package with any of a variety of different acids or solvents. The downside of this approach, of course, is that working with these potentially hazardous chemicals necessitates some serious safety measures like fume hoods and other types of personal protective equipment. Furthermore, the chemicals most often used for decapsulation, though relatively common, are still not cheap and can amount to a significant expense depending on the number and type of parts that must be decapsulated. Most importantly, the chemical decapsulation process can often disrupt the failure on the part; in some cases, like when working with GaAs or some other III-V semiconductors, the decapsulation chemicals can even dissolve the integrated circuit completely! Fortunately, there is an experimental alternative to chemical decapsulation: laser decapsulation is one of the most promising new technologies on the horizon.

Elemental analysis tools, like Auger electron spectroscopy, can often be exceptionally helpful for providing qualitative data about the composition of a material. An unknown material can be quickly analyzed to look for the presence of harmful corrosive elements or organic contaminants that may be relevant to a failure. In some cases, however, knowing whether or not an element is present does not tell the whole story; manufacturers may have guidelines which set limits on the amount of a given substance that may be present on a device, or specifications for the material composition of certain parts of their product. In these cases, it is necessary to perform a more thorough, quantitative analysis.

One of the most powerful tools at a failure analyst’s disposal for non-destructively studying the integrity of a component’s packaging is scanning acoustic microscopy. By using ultrasonic waves, the scanning acoustic microscope can detect cracks, air gaps, or delamination with relative ease. There is one caveat to the results from scanning acoustic microscopy, however; in many cases, seeing is believing, and an acoustic image does not necessarily quench the burning desire to view the defect directly. Many manufacturers requesting acoustic imaging services may call the results of a test into question (especially if the result is not one they find favorable); in these cases, it may be necessary to provide another, more tangible piece of evidence.

Many failure analysts say that no two projects are exactly alike. Every defect is subtly shaped by its surrounding circumstances – the type of process used to construct the device, the environment in which the device was used, and the application that the device is used in can all contribute to the nature of the malfunction. Though they may be relatively unique in their specifics, most IC defects can still be classified with fairly broad brushstrokes; indeed, these classifications are vital to the failure analysis customer, as they often determine the type of corrective action that must be taken.

Oftentimes, discussion of failure analysis services for semiconductor devices tends to focus on the most complex of devices – microprocessors with millions of transistors, intricately designed printed circuit boards, or the fantastically precise silicon sensors called MEMS (micro-electro-mechanical systems). The reality of the industry, however, is that the vast majority of electronic components are far simpler, running the gamut from passive discretes like resistors and capacitors to simple active components like light-emitting diodes (LEDs) and power transistors. Almost every television remote control system, for example, uses a combination of LEDs and photodetectors to allow a user to channel surf. Despite their relative ubiquity, these types of components are just as susceptible to failure as any other – and, therefore, failure analysis can be just as useful in their improvement.

Modern consumer electronics are constantly subjected to all types of environmental abuse. They may operate in humid climates, with plenty of ambient moisture that can collect on sensitive circuits. Dust and other particulates can be sucked in by air intakes, introducing any number of organic contaminants onto a device. There is also the omnipresent danger of sugary, carbonated beverages – one of the most diabolical nemeses of electronics in the home, especially a home populated with children (or clumsy adults). All of these things can cause an electronic device to malfunction; fortunately, Auger spectroscopy can help an analyst determine whether these factors are truly the root cause of a given problem.

The failure analysis blog we run at IAL has been up for a little over a year now, during which time we’ve covered a handful of different topics pertaining to the services we offer and how we can be of benefit to our customers. We’ve striven to provide a no-nonsense, plain English description of what failure analysis is, as well as straightforward explanations of our equipment – how it works, what it is best suited for, and what its limitations may be. In doing so, we hope to make the general operation of our lab accessible to the widest audience possible, from the most experienced engineer to someone who may only have a passing interest in electronics FA.

Often, failing systems are so complex that it can be difficult to find a good starting point. A circuit board may be hundreds of square inches of densely packed discrete components, integrated circuits, and wiring; a schematic view may be so intricate as to require several feet of paper to print out. In these cases, electronic component failure analysis gains a whole new aspect of complexity; an analyst must be able to isolate the failing component amongst a plethora of other devices. At first glance, this may seem to be a Herculean task – devising a test program to analyze all the thousands of different components on a board is no easy feat. Fortunately, with the right approach, such an endeavor is not necessary.

Like any problem-solving endeavor, failure analysis can often be stopped cold by a seemingly insurmountable obstacle; electrical isolation techniques fail, deprocessing proves too challenging, or (worst of all) the failure mode simply disappears, vanishing into thin air like a magician’s cheap parlor trick. Any of these situations will give the dreaded result of “No Fault Found”, which essentially means that no valuable information was gleaned from the analysis. Hitting a roadblock like one of these in the course of an analysis can elicit wailing and gnashing of teeth from even the most intrepid engineers; at times like these, it is often necessary to enlist the help of external failure analysis services to provide a fresh look at the problem.

The process of initially undergoing RoHS certification can be a daunting one. Every piece of a product – from the largest circuit board down to the smallest wire – must be accounted for, to ensure that any of the named hazardous substances are kept to an absolute minimum. For many companies, developing the capability to inspect a product to ensure it meets these stringent requirements entails a huge cost due to equipment purchases and additional hiring and training of dedicated personnel. For smaller startups, these costs may be prohibitive; however, the use of a failure analysis lab instead of onsite capabilities for RoHS certification can help to alleviate some of this cost.
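
As a sketch of what that outsourced screening looks like in practice (the limits shown are the commonly cited RoHS maximum concentration values of 0.1% by weight, or 0.01% for cadmium; the readings are hypothetical), the numerical comparison itself is simple - the real cost lies in producing trustworthy measurements to feed it.

```python
# Illustrative screen of measured concentrations against commonly cited RoHS
# maximum concentration values (1000 ppm for most restricted substances,
# 100 ppm for cadmium). Not a substitute for a formal compliance assessment.

ROHS_LIMITS_PPM = {
    "Pb": 1000,    # lead
    "Hg": 1000,    # mercury
    "Cd": 100,     # cadmium
    "Cr6+": 1000,  # hexavalent chromium
    "PBB": 1000,   # polybrominated biphenyls
    "PBDE": 1000,  # polybrominated diphenyl ethers
}

def rohs_screen(measured_ppm):
    """Return the substances whose measured concentration exceeds its limit."""
    return {substance: ppm for substance, ppm in measured_ppm.items()
            if ppm > ROHS_LIMITS_PPM.get(substance, float("inf"))}

# Hypothetical XRF readings from a solder joint, in ppm by weight.
print(rohs_screen({"Pb": 35000, "Cd": 12, "Sn": 950000}))   # flags only lead
```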

One of the fundamental truths of electronics is thus: all devices generate heat to some degree. Some heat emitted by devices is normal – after all, millions of transistors, switching off and on millions of times a second, will consume sizeable amounts of power and therefore produce a considerable amount of heat. However, there are certain types of defects that increase power consumption, thereby increasing the amount of heat given off by a device. While this additional heat is immaterial to the engineering team responsible for the design and production of a device, it can provide a useful avenue for isolating the item that they’re truly interested in – the defect itself. Using thermal emission microscopy, slight differences in temperature can be turned into useful data about a device, enabling a failure analyst to drive to the heart of a defect and determine the root cause of failure.
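
A quick worked example (illustrative numbers only) shows why even a modest defect can be visible thermally. The power dissipated at a leakage site is simply

$$ P = V \cdot I $$

so a defect pulling 10 mA from a 3.3 V rail burns about 33 mW. Spread across an entire die that is negligible, but concentrated in a micrometer-scale filament it produces a local hot spot that stands out against the surrounding, cooler circuitry.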

Modern electronic devices are subjected to all manner of abuse and neglect. Portable MP3 players are dropped, sat upon, and left on car dashboards to bake in the sun; electronics in the home are subjected to spills, dirt, and even the occasional power surge. Industrial electronics are often mistreated as well, locked away in rooms with poor ventilation and exposed to extremes of temperature and humidity. It is imperative, therefore, that the integrated circuits inside these electronic devices operate at the pinnacle of reliability. Semiconductor testing services are one way to ensure that a device has a long, productive life. By analyzing an integrated circuit under a variety of different conditions, it is possible to model the predicted lifespan of a device and make improvements to the device until the desired lifetime goal is met.
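
One of the most common ways of turning accelerated stress testing into a lifetime prediction (shown here as a generic illustration; the activation energy and temperatures are arbitrary examples, not values tied to any particular device) is the Arrhenius acceleration factor.

```python
# Generic Arrhenius acceleration-factor calculation for thermally activated
# wear-out mechanisms (illustrative parameters, not device-specific data).

import math

BOLTZMANN_EV_PER_K = 8.617e-5   # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor between use and stress temperatures (in Celsius)."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV_PER_K) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Example: Ea = 0.7 eV, 55 C use vs. 125 C stress gives an AF of roughly 78,
# so 1,000 hours of oven time stands in for roughly 78,000 hours of field use.
print(round(arrhenius_af(0.7, 55, 125)))
```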

The modern printed circuit board is a veritable labyrinth of components, vias, and conductive traces, routing electrical signals from point to point through convoluted pathways that span large areas of onboard real estate. While electrical signals have no problem navigating this maze of circuitry, the human eye cannot always follow the same route as traces dive into buried layers deep within the board. Naturally, this greatly increases the difficulty for an analyst tasked with performing PCB failure analysis. Though the unassisted eye may not always be able to detect an issue on a PCB, an analyst has access to many different tools and techniques that can allow the analyst to “see” the defect. As shown in the following two case studies in which both samples were reported as failing for shorts, these tools can be invaluable to a successful analysis.

The process of IC failure analysis can be long and arduous. The task of diving into a device, meticulously tracing out a failing signal, poring over layouts, schematics, and test results in order to find the root cause of a defect is daunting to say the least. Occasionally, however, an analyst may find an unexpected gem, a hidden inside joke shared between an integrated circuit designer and anyone who takes the time to tear a device apart to get a good look at the semiconductor die; instead of finding an anomaly, an analyst may end up unearthing a piece of silicon artwork.

In an attempt to minimize the ecological impact of the increasing amounts of “e-waste” generated as electronic devices reach the end of their lifespan and are shipped off to the great scrapyard in the sky, the European Union passed the Restriction of Hazardous Substances (RoHS) directive. This directive is an edict prohibiting the use of a handful of different materials in any electronic devices manufactured after a certain date (with some exceptions allowed for certain applications, such as medical devices). While it is relatively easy for a manufacturer to guarantee that their own process is free of these elements, given that relatively few manufacturers are vertically integrated, it is often necessary to qualify parts that have been received from various subcontractors and suppliers in order to achieve RoHS certification for a device.

The ever-increasing demand for quicker, more powerful, and more compact devices – all at a static or even decreasing price point – has been an immense driving factor in the evolution of the electronics and semiconductor industries. As things like smartphones make the cultural shift from “geeky” to “irreplaceable”, the technology upon which they are based must change to meet the needs of the expanding market. In some cases, these changes are simply reworks of proven technology; in others, attempts to build the proverbial better mousetrap have resulted in creative new products with physical and electrical characteristics far different from their predecessors. This constant drive to innovate is undoubtedly a boon to the consumer; however, the constant introduction of new tech poses unique challenges in the process of electronic device failure analysis.

The task of producing high-quality, reliable printed circuit boards has become increasingly difficult as the demands of modern technology have expanded. Even in the time frame of a few years, the inexorable march of technological advancement has demanded the construction of more and more complex circuits, oftentimes with far more stringent requirements placed on parameters like board size and power consumption. Inevitably, increased complexity leads to an increased potential for failure; it is, therefore, more crucial than ever to be able to continually identify and remedy process weaknesses in order to produce the highest number of functioning devices. Fortunately, there are several printed circuit board tests that can be used to examine the quality of a product.

Although some devices may have their lives cut short by any of a number of factors like processing defects or improper application, the vast majority of devices may function perfectly well for very long periods of time – in some cases, going several years before finally wearing out. Even though failure analysis takes place, by definition, after a device has reached the end of its useful life, it still plays a vital role in improving the manufacturing processes and techniques that allow most devices to operate for long enough to retire gracefully into obsolescence. Indeed, semiconductor reliability studies are focused on causing parts to fail intentionally, in order to identify process weaknesses and create better devices.

A well-equipped, fully staffed failure analysis lab is a veritable wellspring of knowledge about integrated circuits, semiconductors, and microelectronics in general. With the wide breadth of experience gleaned from years of working with microelectronics, such a lab can take on even the most complex defects and find the root cause of the problem. The skills and expertise honed through the failure analysis process can be applied to other aspects of the electronics industry as well. For this reason, a failure analysis lab may also offer a variety of other support services for the semiconductor industry.

Emission microscopy is, as discussed previously, an indispensable tool for failure analysis of integrated circuits, capable of creating a detailed map of the parts of a circuit that draw power in both failing and functional configurations. This allows an analyst to quickly pinpoint a defect, by comparing the emissions from a known good unit to those of a failing unit and looking for any discrepancies. While this alone is reason enough to keep a light emission microscope in the failure analysis toolbox, there are many other ways that the system can be applied to find a defect.
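
As a rough illustration of the compare-and-contrast approach described above, the sketch below differences two pre-aligned grayscale emission maps and flags pixels where the failing unit emits noticeably more than the known good unit. The file names, threshold rule, and the assumption of perfect alignment are all hypothetical; they are not part of any specific emission microscope's software.

```python
# Minimal sketch: compare a known good emission map to a failing unit's map.
# Assumes both images are already aligned and saved as NumPy arrays.
import numpy as np

good = np.load("emission_known_good.npy")    # hypothetical reference image
fail = np.load("emission_failing_unit.npy")  # hypothetical failing-unit image

diff = np.abs(fail.astype(float) - good.astype(float))
threshold = diff.mean() + 3 * diff.std()     # flag unusually large differences

ys, xs = np.nonzero(diff > threshold)
print(f"{len(xs)} pixels differ significantly between the two maps")
if len(xs):
    y, x = np.unravel_index(np.argmax(diff), diff.shape)
    print(f"Strongest discrepancy near pixel ({x}, {y})")
```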

Though failure analysis is an integral part of continually improving a process or product, for many companies it is not feasible to build an internal FA lab. The price of maintaining a wide range of equipment and retaining staff with the diverse sets of skills necessary to perform a successful analysis can be daunting; further, since the FA lab does not directly produce a product, it can often be difficult to justify its existence in terms of number of units sold or amount of revenue generated. Be that as it may, failure analysis is still an invaluable contribution to such a company's efforts to create the absolute best product that it can. In such situations, finding an external lab providing failure analysis services is often the best course of action.

The printed circuit board (PCB) is indeed omnipresent in the modern electronics industry. Devices as complex as wireless communications hubs and as simple as cheap children’s toys can potentially house one or more PCBs. Given the widespread use of PCBs, an aspiring failure analyst must be familiar with the intricacies of PCB technology in order to successfully analyze any of the multitudes of common PCB defects that they may encounter.

As mentioned in a previous article, the failure mode of an integrated circuit can, in the hands of a trained analyst, be pivotal in determining the course of analysis and successfully isolating a defect. Without understanding how a device malfunctions, it is nearly impossible to determine why – unless, of course, an analyst is blessed with an abundance of luck and a dearth of caution. For those who are not so happily gifted, a well-identified failure mode can save hours of time that would otherwise be spent digging through datalogs and schematics, hunched over a workbench illuminated by the faint green light of test equipment, hunting for any possible toehold to begin the analysis. While it is true that failure modes can be as unique and varied as the endless menagerie of integrated circuits they occur on, there are a handful of usual suspects that an analyst will see time and again over the course of their career.

In order to successfully characterize, isolate, and eventually uncover a defect on a semiconductor device, it is necessary to begin with a basic understanding of the problem at hand. A basic description of the failure – for example, “output pin stuck high” or “device draws excessive power” – can go a long way towards helping an analyst formulate a plan for tackling a defective part. Once this basic semiconductor failure mode has been identified, the proper tools and procedures can be chosen to locate even the most minuscule of defects.
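
To make the idea concrete, here is a deliberately simplified sketch of how a basic failure mode description might steer the choice of tools. The mode names and technique pairings are illustrative assumptions only; real analysis plans depend heavily on the device and its history.

```python
# Illustrative lookup pairing common failure-mode descriptions with techniques
# an analyst might consider first. Not a fixed or complete procedure.
CANDIDATE_TECHNIQUES = {
    "output pin stuck high": ["curve tracing", "emission microscopy"],
    "output pin stuck low": ["curve tracing", "emission microscopy"],
    "device draws excessive power": ["liquid crystal", "thermal imaging",
                                     "fluorescent microthermal imaging"],
    "open circuit": ["time domain reflectometry", "x-ray imaging"],
}

def plan_analysis(failure_mode: str) -> list[str]:
    """Return a first-pass list of techniques for a reported failure mode."""
    return CANDIDATE_TECHNIQUES.get(failure_mode.lower(),
                                    ["full electrical characterization"])

print(plan_analysis("Device draws excessive power"))
```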

Considering the relative ubiquity of printed circuit boards in modern electronics, a typical failure analysis engineer will undoubtedly see countless printed circuit board failures over the course of his or her career. At first blush, many of these jobs may seem to have very little in common – a twisted, charred circuit board from the onboard computer of a river ferry and a defective video game console could hardly be more dissimilar. While it is true that no two failure analysis jobs are alike, and that all defects have subtle nuances that make them unique, PCB failures can generally be broken down into two categories: those occurring during the manufacturing process, and those that occur after the unit has been delivered to the end user.

Illuminating Hidden Leakage Sites: As discussed previously, current leakage is one of the most prevalent failures in modern semiconductors and electronic devices. One of the most common techniques for locating current leakage is liquid crystal analysis, which is a quick and effective way of isolating failure sites; however, liquid crystal has some limitations that prevent it from being useful in all cases. Liquid crystal works by using the heat generated by a leakage site to raise the temperature of the crystal to a “transition point”, where an analyst can optically observe a change in the properties of the crystal and thereby identify the leakage site. A more subtle failure may never heat the liquid crystal to its transition point, since smaller defects dissipate less power and therefore generate less heat. At the opposite end of the spectrum, high amounts of leakage can produce enough heat to raise the temperature of the entire device quickly enough that it is impossible to identify the transition point. To combat these shortcomings, fluorescent microthermal imaging can be used to supplement the standard liquid crystal technique.
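
The back-of-the-envelope calculation below illustrates why very small leakage sites can fail to trigger a visible transition: the dissipated power simply may not raise the local temperature past the crystal's transition point. The transition temperature, thermal resistance, supply voltage, and leakage currents are assumed values for illustration only; a simple lumped thermal model is used, and real devices behave far less ideally.

```python
# Rough estimate: does a leakage site heat the liquid crystal past its
# transition point? Uses a simple lumped model, delta_T = P * R_theta.
def local_temp_rise(leakage_current_a: float, supply_v: float,
                    thermal_resistance_c_per_w: float) -> float:
    power_w = leakage_current_a * supply_v
    return power_w * thermal_resistance_c_per_w

ambient_c = 25.0
transition_c = 29.0          # assumed transition point of the crystal
r_theta = 400.0              # assumed local thermal resistance, degC/W

for i_leak in (1e-6, 1e-4, 1e-2):            # 1 uA, 100 uA, 10 mA at 3.3 V
    t_local = ambient_c + local_temp_rise(i_leak, 3.3, r_theta)
    status = "visible transition" if t_local >= transition_c else "too subtle"
    print(f"{i_leak:>8.0e} A leakage -> ~{t_local:.1f} C ({status})")
```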

One of the single most common failures that plague modern electronic devices is current leakage. This leakage can manifest in many ways; some devices may exhibit normal functionality with excessive power consumption, while others may stop working altogether. This is partly due to the multitude of different causes for current leakage – improper processing, packaging, or handling (in the form of electrostatic discharge damage) can result in defects that will draw excessive current, as can electrical overstress of a device in the field. Since current leakage is such a common failure mode, a good failure analyst will have many different tools to assist in the detection and isolation of defects that may cause an anomalous current draw.

Most modern electronic devices are packaged as proverbial “black boxes”; it is nearly impossible to tell what is happening inside a device by looking at the outside packaging. What’s more, many devices are designed to be virtually impossible to open without causing irreversible changes to the product. These types of devices pose a unique problem for failure analysis – without being able to see the functional pieces of a device, it is nearly impossible to find a failing component or signal. While there are a plethora of destructive techniques available, allowing the analyst access to the “guts” of a device, these techniques often carry with them a certain level of risk; destructively opening an integrated circuit or other assembly can, in very rare cases, induce damage. To help prove beyond reasonable doubt that any damage an analyst finds was pre-existing and not created during the course of the analysis, a non-destructive way of looking inside the black box is necessary. X-Ray imaging lends itself perfectly to this application, penetrating the shroud surrounding most devices with ease.

The world of electronics grows and evolves at a breakneck pace. On a seemingly daily basis, new electronic gadgets hit the market – the tech-savvy consumer is inundated with choices for faster home computers, powerful smartphones, and more visually stunning TVs; these examples only scratch the surface of the ever-changing landscape of electronic devices. This process of continual growth and discovery seems clearly beneficial for all; there is, however, an unspoken corollary to the unfettered progress made in electronics: the specter of obsolescence looms large, relegating the old, broken, and tragically untrendy devices to the wastebasket. As electronics have become cheaper and more commonplace, the amount of electronics waste in landfills around the world grows at a seemingly exponential rate. To limit the ecological impact of the growing problem of “e-waste”, the European Union created the Restriction of Hazardous Substances (RoHS) directive, establishing limits on the amounts of the most ecologically dangerous materials commonly used in electronics. Manufacturers who choose to “go green” take this directive to heart; however, given the complex supply chain involved in most modern manufacturing, it is sometimes difficult to ensure that all components of a device meet RoHS requirements. In these situations, RoHS auditing can…

At first glance, the modern integrated circuit may appear to be nothing more than a jumbled mess. Billions of transistors are connected to one another by a vast, labyrinthine network of metal traces, vias, wire bonds, and solder connections; a single electrical pulse may weave its way through countless other signals, moving through a spiraling spider’s web of conductors, before reaching its final destination at an output pin. For an analyst tasked with inspection or failure analysis of such a device, this convoluted system may resemble the proverbial Gordian knot. There is hope, however: just as Alexander was able to cut through the jumble of the fabled knot with his sword, an analyst skilled in deprocessing can slice through the tangles of circuitry, driving to the heart of the device under test.

While many different tools and techniques are used in performing failure analysis and assembling a report, the crux of the analysis is a clear, sharp photograph of the defect that lies at the root of the failure. Indeed, not only in failure analysis but in any of the sciences, it can be said that "seeing is believing": a detailed picture can remove any shadow of a doubt as to the nature of an object. In the case of failure analysis, a good image can help to identify the type of corrective action that must be implemented to resolve a recurring problem. For larger defects, an image taken with an optical microscope is often sufficient; however, given the infinitesimally small geometries used in modern semiconductors, a defect that may be catastrophically huge in terms of circuit performance may still be so small that it is effectively invisible to a traditional microscope - some defects are so minuscule, it is physically impossible to image them accurately with any sort of visible light optics. In these cases, electron microscopy is more than capable of peeling away the cloak of invisibility enshrouding a defect, providing crisp, detailed images at magnifications far beyond the limits…
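
The quick arithmetic below backs up the claim that some defects are physically impossible to resolve with visible light. It uses the standard Abbe diffraction limit, d ≈ λ / (2·NA); the wavelength and numerical aperture are representative assumptions, not values tied to any particular microscope.

```python
# Abbe diffraction limit: best-case resolution of a visible-light microscope.
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    return wavelength_nm / (2.0 * numerical_aperture)

optical = abbe_limit_nm(550, 0.95)   # green light, high-end dry objective
print(f"Best-case optical resolution: ~{optical:.0f} nm")
# A defect tens of nanometers across falls well below this limit, which is
# exactly where the electron microscope takes over.
```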

As discussed in previous blogs, acoustic microscopy is a valuable part of the failure analysis process. The ability to use ultrasonic waves to construct an image of a device and study its construction without damaging or destroying the part is useful for guiding an analyst in choosing the proper approach for finding a failure. There are many different types of defects that can be detected with acoustic microscopy, each providing several avenues for further analysis.

The ability to isolate a defect in a sea of circuitry, pinpointing a problem hiding amongst a plethora of transistors and metal lines, is one of the cornerstones of successful failure analysis. An analyst would be hard-pressed to study an anomaly in depth without first knowing where the anomaly is. The resourceful analyst has many tools and techniques to aid in the detection of defects on an integrated circuit; some, like liquid crystal or thermal imaging, are best used to find short circuits that generate large quantities of heat, while others, like time domain reflectometry, are best suited to finding open circuits. Unfortunately, these techniques are often not sufficient, and an analyst must find a way to characterize a device, creating a baseline against which to contrast a failing unit in order to detect the defect at the root of an electronic component failure. In these cases, emission microscopy provides the perfect platform upon which to build an analysis.

To some skeptics, the value of failure analysis is not readily obvious. Spending time and resources on devices that, by definition, are nonfunctional and will not be released to a customer may seem wasteful. The true value in failure analysis, however, lies in its ability to identify characteristics that may lead to further failures, costing a company untold amounts in production fall-out and damaging their reputation with their customers. Presented here are two case studies of devices, both exhibiting multiple anomalous electrical opens. Even though both devices behaved similarly, the root cause of the failure between the two units was vastly different; determining the root cause of failure for both devices required in-depth IC failure analysis, resulting in major process weaknesses being identified in both cases.

Since the demonstration of the first integrated circuit in the late 1950s, semiconductor technology has developed explosively, growing at an exponential rate. The guidance computers that were used in the Apollo space program, performing the critical calculations necessary to land a manned spacecraft on the moon, have been completely dwarfed in complexity, memory capacity, and processing power by modern video game consoles and handheld MP3 players. Where an early microchip might contain several hundred devices, today's IC is home to billions of transistors. Even though semiconductor technology has come so far from its inception, it is not yet infallible, and failures do occur as a result of improper processing, misuse, or simply due to the inexorable march of time. Finding a defect on such a complex device may bring to mind clichéd sayings about needles and haystacks; however, the process of semiconductor failure analysis brings together a comprehensive toolset, a breadth of industry experience, and a certain degree of intuition, all in order to find that one in a billion defect.

Printed circuit board (PCB) technology serves as one of the fundamental building blocks of modern electronics. One would be hard-pressed to find an electronic device of even moderate complexity made in the past ten to twenty years that does not include at least one PCB in its construction. The ubiquity of PCBs in electronics means, of course, that a failure analyst is likely to see several malfunctioning boards in his or her professional lifetime. The size and complexity of a modern circuit board would seem to make successfully finding a defect an impossibility; however, with experience and the right mindset, PCB failure analysis can be a successful endeavor.

Electronics failure analysis is, at times, a daunting task. An analyst must constantly question his or her assumptions about a given device, circuit, or process, discarding false premises and peeling away the myriad layers of a problem until the root cause of the failure can be determined. Sometimes, one of the steps in this grand inquisition is to question the fundamental composition and purity of a material. Could ionic contamination be causing a short circuit? Was residual material, left behind on an improperly cleaned printed circuit board, the underlying cause for a solder joint failure? Fortunately, the analyst has tools to analyze materials, even down to their elemental makeup. Auger spectroscopy failure analysis is one of several such tools that an analyst might choose in such a case.

In this article, we take a look at common semiconductor defects and faults that can occur inside a package. Each type of fault can be detected with several different techniques, and the electronic failure analysis method chosen depends on the sensitivity required, the type of chip under analysis, and whether or not a destructive process is acceptable.

Scanning Acoustic Microscopy (SAM) is a fast, non-destructive investigative technique frequently used in electronic failure analysis. SAM uses ultrasound waves to image interfaces and detect possible defects within optically opaque structures and components such as chip capacitors, chip resistors, circuit board traces, discrete semiconductor devices, integrated circuits (ICs), and other electronic components. SAM is frequently used in failure analysis to evaluate die attach integrity, heat spreader adhesion, and solder quality.
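
A small worked example helps explain why SAM is so good at finding delaminations and voids: the fraction of ultrasound reflected at an interface depends on the acoustic impedance mismatch, R = (Z2 − Z1) / (Z2 + Z1). The impedance values below are approximate, illustrative figures (in MRayl), not measured properties of any specific package.

```python
# Reflection coefficient at an interface between two materials.
def reflection_coefficient(z1: float, z2: float) -> float:
    return (z2 - z1) / (z2 + z1)

MOLD_COMPOUND, SILICON, AIR = 7.0, 19.8, 0.0004   # rough values, MRayl

bonded = reflection_coefficient(MOLD_COMPOUND, SILICON)
delaminated = reflection_coefficient(MOLD_COMPOUND, AIR)

print(f"Mold compound to silicon (good bond): R ~ {bonded:+.2f}")
print(f"Mold compound to air (delamination):  R ~ {delaminated:+.2f}")
# The near-total, phase-inverted reflection from an air gap is what makes a
# delamination stand out so clearly in an acoustic image.
```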

Integrated circuits can be fragile and will fail if not packaged correctly. Even circuits that are designed to withstand shock are required to operate within very specific parameters. In this article, we take a look at how the failure analysis procedure may need to focus on whether the packaging of the electronic components is compromised and, if so, to what extent. Various methods exist to detect the loss of package integrity; fine and gross leak testing is one example that we use.

This post explains how to use the Fault Tree Analysis (FTA) technique in integrated circuit failure analysis to identify the root causes of a failure in an electronic circuit or component.
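
As a minimal sketch of the FTA idea, the snippet below combines basic-event probabilities through AND/OR gates to estimate the likelihood of a top-level failure. The tree structure, event probabilities, and the assumption that events are independent are all invented for illustration.

```python
# Evaluate a toy fault tree: floats are basic-event probabilities, tuples are
# ("AND"/"OR", [children]) gates. Assumes independent events.
def evaluate(node) -> float:
    if isinstance(node, float):
        return node
    gate, children = node
    probs = [evaluate(c) for c in children]
    if gate == "AND":                       # all children must occur
        p = 1.0
        for x in probs:
            p *= x
        return p
    if gate == "OR":                        # at least one child occurs
        p_none = 1.0
        for x in probs:
            p_none *= (1.0 - x)
        return 1.0 - p_none
    raise ValueError(f"unknown gate {gate!r}")

# Top event: output stuck low, caused by a shorted output driver OR
# (a marginal solder joint AND thermal cycling stress).
fault_tree = ("OR", [0.001, ("AND", [0.02, 0.10])])
print(f"Estimated top-event probability: {evaluate(fault_tree):.4f}")
```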

Much of the work of failure analysis happens before any actual testing is done. It might seem that taking a chip and putting it in a scanning electron microscope or thermal microscope to find points of failure is the most direct way to detect where a problem is occurring, but this is a reactive approach, and the most costly one; it also damages a company's reputation with its customers. Failure analysis is a procedure that should start from the ground up, at the design stage itself. The initial investment of designing something to work around the failures of past systems can be repaid many times over in reduced failure costs. In this article, we look at the Failure Mode and Effects Analysis (FMEA) procedure, a technique for preventing failures from occurring in a chip in the first place.
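
The short example below illustrates the scoring step commonly used in FMEA: each potential failure mode is rated for Severity, Occurrence, and Detection (typically on a 1-10 scale) and ranked by Risk Priority Number, RPN = S × O × D. The failure modes and ratings here are made up purely for the sake of the example.

```python
# Rank hypothetical failure modes by Risk Priority Number (RPN = S * O * D).
failure_modes = [
    # (description,                  severity, occurrence, detection)
    ("wire bond lift at high temp",         8,          3,         6),
    ("solder void under QFN pad",           6,          5,         7),
    ("ESD damage during handling",          9,          2,         4),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {s * o * d:>3}  {desc}")
```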

Detecting and isolating a failure in an integrated circuit is no easy matter. There are many techniques used in failure analysis, and choosing the right one is an art as well as a science. Sometimes we may need to use several techniques, both for better detection and for independent corroboration, so that we can be sure of the results of a particular test. But all tests fall into one of two categories: destructive testing and non-destructive testing. In this article, we look at why non-destructive testing is so important and what methods fall into this category.


Need to Determine the Root Cause of a Failure in an Electronic Component? We'll get back to you with a quote within 24 hours of receiving your information.

Request Failure Analysis Quote