Dr. Mike Coulson
Swindon Silicon Systems Ltd.
Royal Wootton Bassett, England
Abstract—An Analog to Digital Converter (ADC) bridges the analog world to the digital world, and is crucial to any modern sensor system. We find that our customers increasingly demand ADCs to be absorbed into their mixed-signal Application Specific Integrated Circuits (ASICs), with huge benefits to their systems in terms of cost, power and size. But some expertise is required for the potential gains to be realized, both on the part of the ASIC supplier and on the part of the customer.
Keywords—ADC; ASIC; system-on-chip; mixed-signal; integrated; CMOS; analog; sensor interface
I. Introduction

An Application Specific Integrated Circuit (ASIC) is a custom-designed silicon chip, which replaces numerous discrete components on a PCB to reduce cost, physical size or power consumption. ASICs are often used in sensor systems, where continuously varying voltages or currents are digitized by an Analog to Digital Converter (ADC) before being digitally processed. When the ADC is absorbed into the ASIC, both analog and digital discretes may be replaced, and the benefits of using an ASIC become particularly pronounced. But the task of implementing an ADC within an ASIC is by no means simple, and customers should work closely with their ASIC supplier to ensure that the correct design decisions are made.
II. A recap on essential terminology
Before design begins in earnest, it is crucial to understand what is required of the ADC. This is because the ASIC designer is not restricted to choosing ‘off the shelf’ parts, and can instead optimize a circuit to yield the necessary performance in the most frugal and efficient manner. Whilst ADC performance is most obviously characterized by resolution and conversion rate, this is an over-simplification. In fact, a plethora of other metrics stands ready to confuse those who are unfamiliar with data converters. However, the process of consultation with your chosen ASIC supplier can be accelerated by understanding the few concepts that are explained below.
The operation of any ADC becomes more challenging at high conversion rates and high input frequencies. This is due to dynamic effects such as the incomplete charging of capacitors, and the inadequate settling of amplifier outputs. Designers therefore use different measures to describe an ADC’s static performance (under a DC or very low frequency input) and dynamic performance (under a sinusoidal input of specified frequency).

Static performance measures describe the uniformity of the code transition voltages and code widths. The code transition voltages are the lowest input voltages at which each output code is excited, whereas the code widths are the differences between adjacent code transition voltages. The key static performance measures to be aware of are the Differential Non-Linearity (DNL) and the Integral Non-Linearity (INL).

The DNL describes anomalies in the code widths, compared to their average value. It is measured in fractions of a code, and ADCs are typically specified in terms of the maximum DNL encountered across their code range. DNL is of interest in applications such as control and instrumentation, where codes are likely to be excited in sequence and irregularities between them may have undesirable effects. The INL describes the deviation of each code transition voltage from its ideal linear placement, and is likewise measured in fractions of a code. The INL is the cumulative sum of the DNL, so it is quite possible for an ADC to have excellent DNL – say, for all the code widths to be uniform to within 1/10 of a code – yet have an INL that peaks at tens of codes.
Such behavior might be problematic for an ASIC designed to sense an absolute voltage, yet might be quite acceptable for a chip designed to measure a rate of change. In the absence of certainty, it is tempting to demand an ADC whose maximum INL and DNL never exceed 0.5 codes. But such heavy-handedness incurs unnecessary cost, and it always pays to consider how specifications may be relaxed according to the particular application.
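To make these definitions concrete, the sketch below (an illustrative Python model with hypothetical measured values, not a prescribed test method) estimates DNL and INL from an array of code transition voltages, following the definitions above.

```python
def dnl_inl(transitions):
    """Estimate DNL and INL (in LSB) from measured code transition voltages.

    transitions[k] is the lowest input voltage at which code k+1 is first
    excited; an N-bit ADC has 2^N - 1 such transitions.
    """
    widths = [b - a for a, b in zip(transitions, transitions[1:])]
    lsb = sum(widths) / len(widths)          # average code width defines 1 LSB
    dnl = [w / lsb - 1.0 for w in widths]    # per-code deviation, in LSB
    inl, running = [], 0.0
    for d in dnl:                            # INL is the cumulative sum of DNL
        running += d
        inl.append(running)
    return dnl, inl

# A perfectly uniform 3-bit converter yields zero DNL and INL everywhere.
ideal = [0.125 * k for k in range(1, 8)]     # 7 transitions across a 0-1 V range
dnl, inl = dnl_inl(ideal)
```

Feeding in real bench measurements in place of the ideal transitions would reveal exactly the anomalies the static specifications bound.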
The most common dynamic performance measures describe the spectral purity of the ADC’s output, when converting a sinusoidal input. They include the Signal to Noise Ratio (SNR), the Total Harmonic Distortion (THD), the Signal to Noise And Distortion ratio (SINAD) and the Effective Number Of Bits (ENOB).
The SNR is the power ratio (in dB) of a full scale sinusoid to all the non-harmonic error, such as white noise. Harmonics of the sinusoidal input are deliberately rejected when calculating the SNR, and are instead described by the THD. This is the power ratio of the first few (typically 5) harmonics to that of a full scale sinusoid. The SINAD describes both sources of error: it is the power ratio of the signal to the total noise and harmonic distortion.
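These quantities combine in a predictable way. The hedged Python sketch below (function name illustrative) shows how separately measured SNR and THD figures fold into a single SINAD value, using the sign conventions above: SNR as a positive dB ratio of signal to noise, THD as a negative dB ratio of harmonics to signal.

```python
import math

def sinad_from_snr_thd(snr_db, thd_db):
    """Combine SNR (positive dB) and THD (negative dB) into SINAD (dB).

    Both ratios are converted back to powers relative to the signal, summed,
    and re-expressed as the signal-to-(noise + distortion) ratio.
    """
    noise_power = 10 ** (-snr_db / 10)       # noise power relative to signal
    harmonic_power = 10 ** (thd_db / 10)     # harmonic power relative to signal
    return -10 * math.log10(noise_power + harmonic_power)

# Combining 75 dB SNR with -80 dB THD gives a SINAD a little below 75 dB.
sinad = sinad_from_snr_thd(75.0, -80.0)      # ~73.8 dB
```

As expected, the SINAD always sits below the smaller-magnitude of the two contributing limits.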
Even an ideal ADC generates some error: its output is restricted to discrete levels and so will usually disagree with its input. Termed ‘quantization noise’, this error tends to be spectrally white and diminishes as the resolution of the converter is increased. A well-known formula (1) gives the maximum achievable SINAD for a given resolution N, where N is specified in bits:
SINADmax (dB) = 6.02 N + 1.76    (1)

When a system must achieve a certain SINAD, this equation tells us the minimum resolution that should be chosen for its ADC.
In practice, circuit imperfections may contribute appreciable error on top of the quantization noise. Depending on its origin, this error may appear as additional noise or as additional harmonic power. For example, the slowly varying INL of Fig. 1 might manifest as harmonic power, whilst a randomly varying INL would more likely appear as white noise. Either way, circuit imperfections can reduce the SINAD below its maximum value for the resolution of the ADC. The true SINAD can be expressed as an intuitive quantity known as the Effective Number Of Bits (ENOB).
The ENOB is the resolution of an ideal converter that would yield the same total noise and distortion, through quantization noise alone. The ENOB is found by inverting (1):
ENOB = [SINAD (dB) – 1.76] / 6.02 (2)
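Equations (1) and (2) are easily mechanized. The short Python sketch below (helper names are illustrative) converts a measured SINAD into an ENOB, and a target SINAD into the minimum ideal resolution per equation (1).

```python
import math

def enob(sinad_db):
    """Effective number of bits from a measured SINAD, inverting equation (1)."""
    return (sinad_db - 1.76) / 6.02

def min_resolution(required_sinad_db):
    """Smallest ideal resolution N whose quantization-limited SINAD meets the target."""
    return math.ceil((required_sinad_db - 1.76) / 6.02)

print(enob(74.0))            # a 74 dB SINAD corresponds to ~12 effective bits
print(min_resolution(90.0))  # a 90 dB target demands at least a 15-bit converter
```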
Dynamic performance measures such as these are often more useful than static measures, as they quantify problems at realistic input frequencies and factor in random effects such as thermal noise. A 16-bit ADC with negligible INL and DNL sounds flawless, but is less useful if thermal noise limits its ENOB to 12.
III. ADCs in ASICs: the benefits of integration
When you have the capability to add ADCs to an ASIC, you have the facility to absorb both the analog and digital aspects of your design into a single package. This is preferable to using a discrete ADC for several reasons. Firstly, the separate package of the ADC is eliminated, and so the system gets smaller and more mechanically robust. While it is true that more silicon area is required in the ASIC, this increase may be insignificant and is unlikely to merit increasing the size of the package. In fact, it is plausible that the ASIC may fall in size. Bear in mind that a parallel data interface to a discrete 16-bit ADC might require in excess of 17 pins, and in many ASIC designs the pin-count governs the size of both the package and the silicon within.
Secondly, and perhaps obviously, the cost of the separate ADC is eliminated. Chip design overheads will increase – but not dramatically, providing the supplier has a good breadth of experience and a portfolio of past designs to draw elements from.
Thirdly, uniting both analog and digital functions on a single chip brings scope for the entire signal path to come under the care of the ASIC supplier. This means that a complete system, from preamplifiers to digital signal processing, can be simulated together and remains the responsibility of one company. The ASIC test process will then guarantee operation of a cohesive system, rather than validating a smaller building block. Costs will be reduced here too, given that only one package need be handled by the test facility.
Finally, a discrete ADC must be chosen from a limited set of available devices, so parts may be over-specified in some regards just to achieve the necessary performance in others. This can contribute to cost and power consumption. In contrast, an ASIC designer has control over every aspect of an integrated converter, and can optimize it for the task in hand. Indeed, a major responsibility of the ASIC supplier is to truly understand the customer’s system, and to hone the specifications of each block until the greatest benefits are realized. In many cases the original customer requirements have been arrived at via a PCB prototype, and it is possible for the ASIC supplier to substantially downgrade the ADC’s specification. The supplier can then begin the complex design process with confidence that they are maximizing the customer’s return on investment.
The first and most important design decision is the type of ADC to use. A competent design house will have deployed all the common architectures many times. As a customer, it pays to at least understand the rudiments of each.
IV. Your options: types of ADC, and their applications
There are numerous techniques for performing analog to digital conversion. Some excel at delivering high speed results, yet are not suited to high resolutions. Others are restricted to low bandwidths, yet can offer huge precision. Four of the key circuit architectures to be aware of are flash, pipeline, Successive Approximation Register (SAR), and sigma-delta. It is common to see them spanning a plot of bandwidth vs. resolution, as shown in Fig. 2.
In an N-bit flash ADC, a set of 2^N − 1 comparators is fed with staggered reference voltages that span the input range, as shown in Fig. 3. The comparators all ‘watch’ the input voltage, and their thermometer-coded outputs are combined by decoding logic to produce an N-bit result. This architecture is extremely fast, because the conversion is completed in a single action. However, even an 8-bit ADC requires 255 comparators, so flash converters are both large and power hungry. They should therefore be reserved for use when bandwidth is an absolute priority.
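A behavioral model makes the flash principle clear. The Python sketch below is purely illustrative (ideal comparators against a uniform reference ladder), with the thermometer code ‘decoded’ by simply counting the comparators that fired.

```python
def flash_adc(vin, n_bits=3, vref=1.0):
    """Behavioral sketch of an N-bit flash ADC.

    2**n_bits - 1 ideal comparators watch the input against a ladder of
    staggered references; the output code is the decoded thermometer count.
    """
    levels = 2 ** n_bits
    refs = [vref * k / levels for k in range(1, levels)]  # staggered references
    thermometer = [vin >= r for r in refs]   # every comparator watches vin at once
    return sum(thermometer)                  # decode thermometer code to binary

code = flash_adc(0.5)                        # mid-scale input -> mid-scale code
```

The single-pass structure is exactly why the architecture is fast, and the `2**n_bits - 1` comparator list is exactly why it scales so poorly with resolution.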
Pipeline ADCs achieve higher resolutions than flash converters, but maintain respectable bandwidths by performing each conversion in multiple stages. Each stage quantifies the input in further detail, generating decisions for one or more bits. Like a production line, each stage of the conversion is handled by a different part of the circuit. At any point in time there are multiple conversions underway, and each is at a different stage of completion: the principle is shown in Fig. 4.
But as any given sample must be processed by every stage in turn, an appreciable ‘pipeline delay’ is accumulated. This delay may be unacceptable, especially in closed-loop control applications where it may cause instability. Furthermore, as numerous stages operate simultaneously, pipelined ADCs tend to consume appreciable power and area.
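The stage-by-stage arithmetic can be sketched behaviorally. The Python model below is an idealized 1-bit-per-stage pipeline (real designs add redundancy and digital error correction); it reproduces what each stage computes, though not the concurrency that gives the architecture its throughput.

```python
def pipeline_adc(vin, n_stages=8, vref=1.0):
    """Idealized pipeline ADC, one bit per stage, no redundancy.

    Each stage decides one bit, subtracts that bit's contribution, and passes
    the residue (amplified by 2) to the next stage. In real silicon every
    stage works on a different sample simultaneously, so a given sample takes
    n_stages clock cycles to emerge: the pipeline delay.
    """
    code = 0
    residue = vin
    for _ in range(n_stages):
        bit = 1 if residue >= vref / 2 else 0        # this stage's decision
        code = (code << 1) | bit
        residue = 2 * (residue - bit * vref / 2)     # subtract and amplify by 2
    return code

code = pipeline_adc(0.5)   # mid-scale input -> 128 out of 256
```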
For mid-bandwidth applications where resolutions of 10-14 bits are required, successive approximation ADCs are often best suited. In a SAR ADC, the output code is arrived at through a series of iterations, which can be understood by examining Fig. 5. Upon each iteration, a ‘trial code’ is chosen by the successive approximation logic, and this code is converted to an analog voltage by an internal DAC. The analog voltage is then compared to the ADC’s input voltage: the result dictates whether the next code to be trialed should be larger or smaller. In general, an N-bit conversion calls for a sequence of N comparisons, which limits the achievable conversion rate. Resolution tends to be constrained by the accuracy of the internal DAC (which comes down to component matching) and by thermal noise.
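The iteration loop translates almost directly into code. The Python sketch below is an idealized model (perfect comparator and internal DAC; names illustrative) of an N-bit SAR conversion, requiring exactly N comparisons.

```python
def sar_adc(vin, n_bits=12, vref=1.0):
    """Idealized SAR conversion: N trial comparisons for an N-bit result.

    Each iteration trials a code, the internal DAC converts it to a voltage,
    and the comparator decides whether the final code lies above or below it.
    """
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)                    # set the bit under test
        dac_voltage = vref * trial / (1 << n_bits)   # internal DAC output
        if vin >= dac_voltage:                       # comparator decision
            code = trial                             # keep the bit
    return code

code = sar_adc(0.3)   # 12-bit conversion after exactly 12 comparisons
```

The serial nature of the loop is precisely what limits the achievable conversion rate, while the accuracy of `dac_voltage` (component matching in silicon) limits the resolution.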
When the highest resolutions are required, and when low bandwidths are tolerable, sigma-delta ADCs find use. These employ a technique called oversampling whereby many noisy, low resolution conversions are digitally averaged to produce a single high precision result. The low resolution is commonly only 1 bit, so the oversampled result amounts to the digital pulse train of Fig. 6. The oversampling ratio dictates how many low resolution conversions correspond to a single high resolution output: ratios of 1000 are typical, explaining the restricted bandwidths that can be achieved.
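The oversampling principle can also be demonstrated behaviorally. The Python sketch below implements an idealized first-order, 1-bit modulator with plain averaging as the decimation step (real designs use higher-order loops and proper decimation filters); the input is assumed to lie in [0, 1].

```python
def sigma_delta(vin, osr=1000):
    """First-order, 1-bit sigma-delta modulator with simple decimation.

    The 1-bit quantizer emits a pulse train whose density of ones tracks the
    input; averaging `osr` coarse decisions yields one high-resolution
    result, at the cost of bandwidth.
    """
    integrator = 0.0
    ones = 0
    for _ in range(osr):
        bit = 1 if integrator >= 0.0 else 0   # coarse, 1-bit conversion
        ones += bit
        integrator += vin - bit               # 1-bit DAC feedback into the integrator
    return ones / osr                         # digital averaging (decimation)

result = sigma_delta(0.3)   # density of ones converges on the input value
```

One high-resolution output consumes `osr` modulator cycles, which is exactly why oversampling ratios around 1000 restrict the usable bandwidth.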
V. A job for experts: the challenges of integration
Performing on-chip data conversion is by no means trivial, and care must always be taken when moving sensitive analog circuits into the same package as noisy digital logic. For example, special consideration should be given to power supply arrangements to prevent digital activity from coupling into the analog signal path. It is common to see separate pins afforded for the analog and digital power rails, even if they are connected together on the PCB. In this case, the ASIC designer is protecting against voltage drops associated with the bond wires inside the package.
Interference may also occur through the silicon substrate, where digital activities can cause significant voltage perturbations. Susceptible circuits such as ADCs are often fabricated in diode-isolated wells, connected to clean analog supplies, as shown in Fig. 7.
It is essential for an ASIC designer to spot potential interference mechanisms within a chip, and to assess each circuit’s susceptibility through simulation. Fully differential circuit architectures are particularly robust, and should be adopted wherever possible.
Further complications arise when large digital ASICs call for small-geometry fabrication processes. Consider a system-on-chip, containing a high-end processor core that occupies 80% of the silicon area. In this case, the process must clearly be selected for speed, compactness and power efficiency of the digital element. Such processes are poorly suited to analog design: supply voltages are low, leakage currents are high, and matching between devices is poor. Low supply voltages complicate design by restricting signal swings and hampering the performance of transistors. Leakage currents are harmful to circuits that use switched capacitors: a technique employed wherever signals are sampled and held, and found in the majority of ADCs. Poor device matching leads to DNL and INL in many converter architectures: to mitigate its effects, sophisticated architectural techniques such as redundancy and digital error correction may be required.
The problems associated with designing data converters on small process geometries are by no means new, and various established design techniques can help surmount them. For example, the use of multiple supply domains can allow both 3.3V and 1.8V devices to cohabit within a single chip. In addition, the most experienced ASIC suppliers will have ‘tried and tested’ layouts for critical circuit blocks, which are well characterized and have been optimized over the course of numerous chip designs. But there is still compromise, and a fully-integrated ADC is unlikely to match the performance of a ‘cutting edge’ discrete.
VI. The bottom line: to integrate or not?
In applications that demand cutting edge performance from an ADC, integration is probably not the correct route to follow. High performance converters require specially chosen fabrication processes, and benefit from being isolated in their own packages.
However, in the vast majority of automotive and industrial applications an integrated ADC proves cheaper, smaller and more convenient than using a separate chip. Whilst some additional silicon area and ASIC design effort are required, these costs should not significantly impact the net benefits. Remember, once the ADC is integrated, there is scope for additional elements of the analog signal chain to be absorbed too. In this way, the potential gains multiply: one ASIC supplier takes responsibility for the whole signal path, and the production test process qualifies all elements therein.
References

[1] W. R. Bennett, “Spectra of Quantized Signals”, Bell System Technical Journal, vol. 27, pp. 446–, July 1948.
[2] M. Coulson, “Die Integration von A/D-Wandlern differenziert ASICs” (“Integrating A/D converters differentiates ASICs”), Elektronikpraxis, no. 17, 8 September 2014.