Many failure analysts say that no two projects are exactly alike. Every defect is subtly shaped by its surrounding circumstances – the process used to construct the device, the environment in which it operated, and the application it served can all contribute to the nature of the malfunction. Though each may be unique in its specifics, most IC defects can still be classified with fairly broad brushstrokes; indeed, these classifications are vital to the failure analysis customer, as they often determine the type of corrective action that must be taken.
Discussion of failure analysis services for semiconductor devices tends to focus on the most complex of devices – microprocessors with millions of transistors, intricately designed printed circuit boards, or the fantastically precise silicon sensors called MEMS (micro-electro-mechanical systems). The reality of the industry, however, is that the vast majority of electronic components are far simpler, running the gamut from passive discretes like resistors and capacitors to simple active components like light-emitting diodes (LEDs) and power transistors. Almost every television remote control system, for example, uses a combination of LEDs and photodetectors to allow a user to channel surf. Despite their relative ubiquity, these types of components are just as susceptible to failure as any other – and, therefore, failure analysis can be just as useful in their improvement.
Modern consumer electronics are constantly subjected to all types of environmental abuse. They may operate in humid climates, with plenty of ambient moisture that can collect on sensitive circuits. Dust and other particulates can be sucked in by air intakes, introducing any number of organic contaminants onto a device. There is also the omnipresent danger of sugary, carbonated beverages – one of the most diabolical nemeses of electronics in the home, especially a home populated with children (or clumsy adults). All of these things can cause an electronic device to malfunction; fortunately, Auger spectroscopy can help an analyst determine whether these factors are truly the root cause of a given problem.
The failure analysis blog we run at IAL has been up for a little over a year now, during which time we’ve covered a handful of different topics pertaining to the services we offer and how we can be of benefit to our customers. We’ve striven to provide a no-nonsense, plain English description of what failure analysis is, as well as straightforward explanations of our equipment – how it works, what it is best suited for, and what its limitations may be. In doing so, we hope to make the general operation of our lab accessible to the widest audience possible, from the most experienced engineer to someone who may only have a passing interest in electronics FA.
Often, failing systems are so complex that it can be difficult to find a good starting point. A circuit board may be hundreds of square inches of densely packed discrete components, integrated circuits, and wiring; a schematic view may be so intricate as to require several feet of paper to print out. In these cases, electronic component failure analysis gains a whole new aspect of complexity; an analyst must be able to isolate the failing component amongst a plethora of other devices. At first glance, this may seem to be a Herculean task – devising a test program to analyze all the thousands of different components on a board is no easy feat. Fortunately, with the right approach, such an endeavor is not necessary.
Like any problem-solving endeavor, failure analysis can often be stopped cold by a seemingly insurmountable obstacle; electrical isolation techniques fail, deprocessing proves too challenging, or (worst of all) the failure mode simply disappears, vanishing into thin air like a magician’s cheap parlor trick. Any of these situations will give the dreaded result of “No Fault Found”, which essentially means that no valuable information was gleaned from the analysis. Encountering an impasse like one of these in the course of an analysis can elicit wailing and gnashing of teeth from even the most intrepid engineers; at times like these, it is often necessary to enlist the help of external failure analysis services to provide a fresh look at the problem.
The process of initially undergoing RoHS certification can be a daunting one. Every piece of a product – from the largest circuit board down to the smallest wire – must be accounted for, to ensure that any of the named hazardous substances are kept to an absolute minimum. For many companies, developing the capability to inspect a product to ensure it meets these stringent requirements entails a huge cost due to equipment purchases and additional hiring and training of dedicated personnel. For smaller startups, these costs may be prohibitive; however, the use of a failure analysis lab instead of onsite capabilities for RoHS certification can help to alleviate some of this cost.
One of the fundamental truths of electronics is thus: all devices generate heat to some degree. Some heat emitted by devices is normal – after all, millions of transistors, switching off and on millions of times a second, will consume sizeable amounts of power and therefore produce a considerable amount of heat. However, there are certain types of defects that increase power consumption, thereby increasing the amount of heat given off by a device. While this additional heat is of little consequence in itself, it can provide the engineering team responsible for the design and production of a device a useful avenue for isolating the item that they are truly interested in – the defect itself. Using thermal emission microscopy, slight differences in temperature can be turned into useful data about a device, enabling a failure analyst to drive to the heart of a defect and determine the root cause of failure.
Modern electronic devices are subjected to all manner of abuse and neglect. Portable MP3 players are dropped, sat upon, and left on car dashboards to bake in the sun; electronics in the home are subjected to spills, dirt, and even the occasional power surge. Industrial electronics are often mistreated as well, locked away in rooms with poor ventilation and exposed to extremes of temperature and humidity. It is imperative, therefore, that the integrated circuits inside these electronic devices operate at the pinnacle of reliability. Semiconductor testing services are one way to ensure that a device has a long, productive life. By analyzing an integrated circuit under a variety of different conditions, it is possible to model the predicted lifespan of a device and make improvements to the device until the desired lifetime goal is met.
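One widely used way to turn stress-test data into a lifetime prediction is the Arrhenius model, which relates failure rates at an elevated test temperature to those expected at normal operating temperature. The sketch below is illustrative only: the activation energy, temperatures, and burn-in duration are hypothetical values chosen for the example, not figures from any particular analysis.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Arrhenius acceleration factor: how much faster thermally activated
    failure mechanisms progress at the stress temperature than in use."""
    t_use = t_use_c + 273.15       # convert Celsius to Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Hypothetical example: a 1000-hour burn-in at 125 C, with an assumed
# activation energy of 0.7 eV, projected to a 55 C operating environment.
af = arrhenius_af(ea_ev=0.7, t_use_c=55.0, t_stress_c=125.0)
equivalent_field_hours = 1000 * af
```

Because the acceleration factor grows exponentially with the temperature gap, a modest increase in stress temperature can compress years of simulated field life into weeks of testing – which is precisely what makes accelerated life testing practical.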
The modern printed circuit board is a veritable labyrinth of components, vias, and conductive traces, routing electrical signals from point to point through convoluted pathways that span large areas of onboard real estate. While electrical signals have no problem navigating this maze of circuitry, the human eye cannot always follow the same route as traces dive into buried layers deep within the board. Naturally, this greatly increases the difficulty for an analyst tasked with performing PCB failure analysis. Though the unassisted eye may not always be able to detect an issue on a PCB, an analyst has access to many different tools and techniques that can allow the analyst to “see” the defect. As shown in the following two case studies in which both samples were reported as failing for shorts, these tools can be invaluable to a successful analysis.
The process of IC failure analysis can be long and arduous. The task of diving into a device, meticulously tracing out a failing signal, poring over layouts, schematics, and test results in order to find the root cause of a defect is daunting to say the least. Occasionally, however, an analyst may find an unexpected gem, a hidden inside joke shared between an integrated circuit designer and anyone who takes the time to tear a device apart to get a good look at the semiconductor die; instead of finding an anomaly, an analyst may end up unearthing a piece of silicon artwork.
In an attempt to minimize the ecological impact of the increasing amounts of “e-waste” generated as electronic devices reach the end of their lifespan and are shipped off to the great scrapyard in the sky, the European Union passed the Restriction of Hazardous Substances (RoHS) directive. This directive is an edict prohibiting the use of a handful of different materials in any electronic devices manufactured after a certain date (with exceptions allowed for certain applications like medical devices). While it is relatively easy for a manufacturer to guarantee that their own process is free of these elements, given that relatively few manufacturers are vertically integrated, it is often necessary to qualify parts that have been received from various subcontractors and suppliers in order to achieve RoHS certification for a device.
The ever-increasing demand for quicker, more powerful, and more compact devices – all at a static or even decreasing price point – has been an immense driving factor in the evolution of the electronics and semiconductor industries. As things like smartphones make the cultural shift from “geeky” to “irreplaceable”, the technology upon which they are based must change to meet the needs of the expanding market. In some cases, these changes are simply reworks of proven technology; in others, attempts to build the proverbial better mousetrap have resulted in creative new products with physical and electrical characteristics far different from their predecessors. This constant drive to innovate is undoubtedly a boon to the consumer; however, the constant introduction of new tech poses unique challenges in the process of electronic device failure analysis.
The task of producing high-quality, reliable printed circuit boards has become increasingly difficult as the demands of modern technology have expanded. Even in the time frame of a few years, the inexorable march of technological advancement has demanded the construction of more and more complex circuits, oftentimes with far more stringent requirements placed on parameters like board size and power consumption. Inevitably, increased complexity leads to an increased potential for failure; it is, therefore, more crucial than ever to be able to continually identify and remedy process weaknesses in order to produce the highest number of functioning devices. Fortunately, there are several printed circuit board tests that can be used to examine the quality of a product.
Although some devices may have their lives cut short by any of a number of factors like processing defects or improper application, the vast majority of devices may function perfectly well for very long periods of time – in some cases, going several years before finally wearing out. Even though failure analysis takes place, by definition, after a device has reached the end of its useful life, it still plays a vital role in improving the manufacturing processes and techniques that allow most devices to operate for long enough to retire gracefully into obsolescence. Indeed, semiconductor reliability studies are focused on causing parts to fail intentionally, in order to identify process weaknesses and create better devices.
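When parts are driven to failure intentionally, the resulting times-to-failure are commonly fitted to a Weibull distribution, whose shape parameter distinguishes infant mortality from wear-out. The sketch below uses median-rank regression, one standard textbook method; the failure times are invented for illustration and the function name is our own.

```python
import math

def weibull_rank_regression(failure_times):
    """Estimate Weibull shape (beta) and scale (eta) from a complete set of
    failure times by regressing ln(-ln(1-F)) on ln(t)."""
    t = sorted(failure_times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        f = (i - 0.3) / (n + 0.4)            # Bernard's median-rank approximation
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - f)))
    mx = sum(xs) / n                          # ordinary least-squares fit
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    beta = slope                              # shape: <1 infant mortality, >1 wear-out
    eta = math.exp(-intercept / slope)        # scale: life at which ~63.2% have failed
    return beta, eta

# Hypothetical hours-to-failure from an accelerated life test
times = [410, 760, 1050, 1300, 1580, 1900, 2300]
beta, eta = weibull_rank_regression(times)
```

A shape parameter well above 1, as in this made-up data set, points toward a wear-out mechanism rather than early-life process defects – exactly the kind of distinction a reliability study is designed to draw.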
A well-equipped, fully staffed failure analysis lab is a veritable wellspring of knowledge about integrated circuits, semiconductors, and microelectronics in general. With the wide breadth of experience gleaned from years of working with microelectronics, such a lab can take on even the most complex defects and find the root cause of the problem. The skills and expertise honed through the failure analysis process can be applied to other aspects of the electronics industry as well. For this reason, a failure analysis lab may also offer a variety of other support services for the semiconductor industry.
Emission microscopy is, as discussed previously, an indispensable tool for failure analysis of integrated circuits, capable of creating a detailed map of the parts of a circuit that draw power in both failing and functional configurations. This allows an analyst to quickly pinpoint a defect, by comparing the emissions from a known good unit to those of a failing unit and looking for any discrepancies. While this alone is reason enough to keep a light emission microscope in the failure analysis toolbox, there are many other ways that the system can be applied to find a defect.
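The good-unit/failing-unit comparison described above is, at its core, an image subtraction: take the emission map of the known good reference away from that of the failing unit and flag any site whose excess emission rises above the noise floor. A toy sketch of that idea follows; real systems work on camera images with calibration and registration steps, so the small integer grids and the threshold here are purely illustrative.

```python
def find_anomalies(good, failing, threshold):
    """Return (row, col) sites where the failing unit emits measurably
    more light than the known good reference."""
    hotspots = []
    for r, (g_row, f_row) in enumerate(zip(good, failing)):
        for c, (g, f) in enumerate(zip(g_row, f_row)):
            if f - g > threshold:             # excess emission above noise floor
                hotspots.append((r, c))
    return hotspots

# Invented emission maps: identical except for one strongly emitting site
good_map = [
    [2, 3, 2, 1],
    [1, 2, 2, 2],
    [2, 1, 3, 2],
]
fail_map = [
    [2, 3, 2, 1],
    [1, 2, 41, 2],   # strong extra emission: candidate defect site
    [2, 1, 3, 2],
]
defect_sites = find_anomalies(good_map, fail_map, threshold=10)
```

The single flagged coordinate is where an analyst would then focus higher-resolution inspection, rather than searching the entire die.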
Though failure analysis is an integral part of continually improving a process or product, for many companies it is not feasible to build an internal FA lab. The price of maintaining a wide range of equipment and retaining staff with the diverse sets of skills necessary to perform a successful analysis can be daunting; further, since the FA lab does not directly produce a product, it can often be difficult to justify its existence in terms of number of units sold or amount of revenue generated. Be that as it may, failure analysis is still an invaluable contribution to the efforts of such a company to create the absolute best product that it can. In such situations, finding an external lab providing failure analysis services is often the best course of action.
The printed circuit board (PCB) is indeed omnipresent in the modern electronics industry. Devices as complex as wireless communications hubs and as simple as cheap children’s toys can potentially house one or more PCBs. Given the widespread use of PCBs, an aspiring failure analyst must be familiar with the intricacies of PCB technology in order to successfully analyze any of the multitudes of common PCB defects that they may encounter.
As mentioned in a previous article, the failure mode of an integrated circuit can, in the hands of a trained analyst, be pivotal in determining the course of analysis and successfully isolating a defect. Without understanding how a device malfunctions, it is nearly impossible to determine why – unless, of course, an analyst is blessed with an abundance of luck and a dearth of caution. For those who are not so happily gifted, a well-identified failure mode can save hours of time that would otherwise be spent digging through datalogs and schematics, hunched over a workbench illuminated by the faint green light of test equipment, hunting for any possible toehold to begin the analysis. While it is true that failure modes can be as unique and varied as the endless menagerie of integrated circuits they occur on, there are a handful of usual suspects that an analyst will see time and again over the course of their career.