The Frequency Domain Trap – Beware of Your AC Analysis

This man right here arguably changed the course of signal processing and engineering. Sure, let’s also throw names like Euler, Laplace and Cooley-Tukey in there, but the Fourier transform has become the cornerstone of designers’ daily routines. Thanks to its magic, we simulate and measure our designs mostly in the frequency domain.

From AC simulations to FFT measurements, looking at frequency responses has almost become second nature. We pride ourselves on building the flattest filter responses and knowing the cause of each harmonic. Even so, is this really the whole picture? In this post, we will explore some dangers of trusting and relying on the frequency domain too much. Let’s make Fourier proud.

Magnitude isn’t the whole story

Math is hard. We engineers apply what makes intuitive sense to our designs, and hide the complicated, head-scratching stuff behind “approximations”. Our brains understand magnitude very well – large/small, tall/short, cheap/expensive. When it comes to phase and time, we can’t seem to manage (just look at your last project’s schedule).

So naturally, we have developed a preference for the magnitude of a frequency response. That’s why we love sine wave tests: the output is simply a delayed version of the input with different amplitude. It’s easy to measure and makes “intuitive sense”, so what’s the problem?

Sometimes, the phase portion of the frequency response contains as much, if not more, information than the magnitude part. Here is my favorite example to illustrate this point (it makes a good interview question).

Take a look at this funky transfer function above. It has a left half-plane pole and a RIGHT half-plane zero. Its magnitude response looks absolutely boring – a flat line across all frequencies. In other words, this transfer function processes the signal only in the phase domain. If you only looked at the magnitude response, you would pat yourself on the back for creating an ideal amplifier. Shown below is a circuit that could give such a transfer function. Have a little fun and try deriving its transfer function (reference).
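
To see this numerically, here is a minimal sketch of a first-order all-pass response with the pole and zero assumed at the same corner frequency (an arbitrary 1 MHz, chosen only for illustration): the magnitude stays at 0 dB at every frequency while the phase swings from 0° toward -180°.

```python
import numpy as np

# First-order all-pass: H(s) = (1 - s/w0) / (1 + s/w0)
# LHP pole at -w0, RHP zero at +w0 (assumed at the same corner frequency,
# which is what makes the magnitude perfectly flat).
w0 = 2 * np.pi * 1e6                       # assumed 1 MHz corner, illustrative
w = 2 * np.pi * np.logspace(3, 9, 13)      # sweep 1 kHz .. 1 GHz

H = (1 - 1j * w / w0) / (1 + 1j * w / w0)
mag_db = 20 * np.log10(np.abs(H))          # 0 dB everywhere
phase_deg = np.degrees(np.angle(H))        # 0 deg down to nearly -180 deg

for f, m, p in zip(w / (2 * np.pi), mag_db, phase_deg):
    print(f"{f:10.3e} Hz   |H| = {m:6.2f} dB   phase = {p:8.2f} deg")
```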

But is it even real, or just a made-up example? If you have ever used an inverter, you will recognize the following waveform. Ever wondered where those spikes come from? They come precisely from the feedforward path (right half-plane zero) through the inverter’s Miller capacitor. This RHP zero also contributes to the inverter buffer’s delay. There is no way to predict these spikes from the magnitude response alone.
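
To make the spike concrete, here is a rough sketch, not a transistor-level inverter, just an inverting stage modeled as one LHP pole plus one RHP zero with made-up numbers: the step response first kicks in the wrong direction before settling to the inverting gain.

```python
import numpy as np
from scipy import signal

# Inverting stage modeled as one LHP pole plus one RHP zero:
#   H(s) = -A * (1 - s/wz) / (1 + s/wp)
# All numbers are assumed for illustration only.
A, wp, wz = 10.0, 2 * np.pi * 1e6, 2 * np.pi * 20e6

num = [A / wz, -A]      # -A*(1 - s/wz) in descending powers of s
den = [1 / wp, 1]       # 1 + s/wp
t, y = signal.step(signal.TransferFunction(num, den),
                   T=np.linspace(0, 2e-6, 2001))

print("initial kick:", round(y[1], 3))     # positive: the wrong-polarity spike
print("final value :", round(y[-1], 3))    # settles to -A, the inverting gain
```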

Magnitude response can still remain a good “indicator” of obvious issues (after all, it’s one of the fastest simulations). However, phase information becomes crucial with the introduction of parasitics and inductors, especially at high frequencies. Sometimes, it’s not the flattest response you should aim for (for those who are interested, look into raised-cosine filters and their applications in communications).

Probability – the third leg in the engineering stool

As mentioned before, we love our sines and cosines, but do we speak on the phone with a C# note? Most real-life signals look more like noise than sine waves. In fact, the transmitter in a wireline link typically encodes data to be “random”, with equal energy across all frequencies. The signal’s frequency content simply looks like high-energy white noise – flat and not that useful.

What’s interesting, however, are the probabilistic and statistical properties of the signal. Besides time and frequency, the probability domain is often overlooked. Let’s study some examples of why we need to pay extra attention to signal statistics.

1. Signals of different distributions

We will begin by clearing the air on one concept: white noise doesn’t have to have a Gaussian/normal distribution. The only criterion for a (discrete) signal to be “white” is that each sample is drawn independently from the same probability distribution. In the continuous domain, this translates to a constant power spectral density in the frequency domain.

We typically associate white noise with Gaussian distributions because of “AWGN” (additive white Gaussian noise), which is the go-to model for noise. That is certainly not the case when it comes to signals. Here are four special probability distributions:

Again, if independent signal samples are drawn from any one of these distributions, the resulting signal is still considered white. A quick FFT of the constructed signal would look identical to “noise”. The implications for the processing circuits’ requirements, however, are completely different.
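
Here is a quick sketch of that point, assuming the four distributions are Gaussian, uniform, two-level (dual-Dirac) and sinusoidal-like: drawing each sample independently from any of them produces the same flat spectrum.

```python
import numpy as np

# Four "white" signals, one per distribution, each scaled to roughly unit
# power.  The distribution names are assumed to match the figure above.
rng = np.random.default_rng(0)
N = 1 << 14

signals = {
    "gaussian":  rng.standard_normal(N),
    "uniform":   rng.uniform(-np.sqrt(3), np.sqrt(3), N),
    "two-level": rng.choice([-1.0, 1.0], N),                    # dual-Dirac
    "sine-like": np.sqrt(2) * np.sin(2 * np.pi * rng.uniform(0, 1, N)),
}

for name, x in signals.items():
    psd = np.abs(np.fft.rfft(x))**2 / N           # crude periodogram
    lo, hi = psd[1:N//8].mean(), psd[N//4:N//2].mean()
    print(f"{name:10s}  low-band power {lo:5.2f}   high-band power {hi:5.2f}")
```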

Take linearity, for instance. It wouldn’t be wrong to assume the linearity requirement for processing two digital levels is much more relaxed than that for a uniformly distributed input signal. The figure below shows that the nonlinearity error for a dual-Dirac distribution could effectively become “gain error”, while a uniform input yields a different error distribution. A Gaussian-distributed input signal might also require less linearity than a sinusoidal-like distribution because smaller amplitudes are more likely.
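
A small numerical illustration, using an assumed third-order nonlinearity: for a two-level input the “distortion” collapses into a pure gain error, while a uniform input leaves a genuine distortion residual after the best-fit gain is removed.

```python
import numpy as np

def nonlin(x):
    """Assumed mild third-order nonlinearity, for illustration only."""
    return x - 0.1 * x**3

rng = np.random.default_rng(1)

# Two-level (dual-Dirac) input: only +/-1 ever hits the nonlinearity,
# so both levels shrink by the same factor and the error is pure gain error.
x_bin = rng.choice([-1.0, 1.0], 10_000)
print("two-level output values:", np.unique(nonlin(x_bin)))

# Uniform input: the error depends on the instantaneous amplitude, so a real
# distortion residual remains after removing the best-fit linear gain.
x_uni = rng.uniform(-1, 1, 10_000)
y_uni = nonlin(x_uni)
gain = np.dot(y_uni, x_uni) / np.dot(x_uni, x_uni)
print("uniform input residual rms:", (y_uni - gain * x_uni).std())
```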

By understanding the input signal’s statistical nature, we can gather more insight about our circuits’ requirements than from the frequency domain alone. It is frequently a sin to design just for the best figure of merit (FOM) using a sine wave stimulus. Such designs are often sub-optimal or, even worse, non-functional when processing real-life signals.

2. Stationary vs non-stationary signals

Before this distant probability-class jargon scares you away, just imagine yourself speaking on the phone again. Unless you are chatty like me, the microphone should be picking up your voice in intervals. You speak, then pause, then speak again. Congratulations, you are now a non-stationary signal source: the microphone’s input signal statistics (e.g. mean, variance, etc.) CHANGE over time.
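
A toy sketch of this idea, with an assumed 8 kHz “microphone” that alternates one second of noise-like speech with one second of silence: the windowed variance clearly changes over time, which is exactly what non-stationary means.

```python
import numpy as np

# Toy "speak / pause" source: one second of noise-like speech, one second of
# silence, repeated.  Sample rate and timing are assumed, not measured.
rng = np.random.default_rng(3)
fs, dur = 8000, 2.0
t = np.arange(int(fs * dur)) / fs

envelope = (np.floor(t) % 2 == 0).astype(float)   # speak, pause, speak, ...
x = envelope * rng.standard_normal(t.size)

win = fs // 4                                     # 250 ms analysis windows
for k in range(0, x.size, win):
    seg = x[k:k + win]
    print(f"t = {k/fs:4.2f} s   mean = {seg.mean():+.3f}   var = {seg.var():.3f}")
```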

When we deal with this kind of signal, frequency domain analysis forces us into an “either-or” mode. We would perhaps analyze the circuit assuming we are in either the “speak” or the “pause” phase. However, the transition between the two phases might be forgotten.

This becomes especially important for systems where a host and device take turns to send handshake signals on the same line. In these cases, even pseudo-random bit sequences (PRBS) can’t realistically emulate the real signals.

Other scenarios involving baseline wander and switching glitches also fall under this category. Frequency domain analysis works best when signals reach steady state, but offers limited value for such time- and statistical-domain phenomena. The figure below depicts a handshake signal example from the HDMI standard. Try and convince me that frequency domain simulations help here.

The small signal swamp

Though they are not entirely the same, small signal analysis is associated with frequency domain simulations because both belong to the linear analysis family. Designers are eager to dive into the small signal swamp to do s-domain calculations and run AC simulations. There is nothing wrong with that, but far too often we forget about the land that’s just slightly outside the swamp (let’s call it the “medium signal land”).

Overlooking the medium signal land can lead to real design issues. Examples include slewing, nonlinearity, undesired settling dynamics, and sometimes even divergent behavior with bad initial conditions. Small signal thinking often tells a performance story: gain, bandwidth, etc. Medium/large signals, on the other hand, tell a functional story. Ask yourself: can I get to the small signal swamp from here at all? If not, you might have taped out a very high performance brick.

In real-life designs, key aspects like biasing, power-on sequence, and resets can matter more than the small signal behaviors. And the only way to cover these points is through time domain simulation.

Stand the test of time

My favorite example of why frequency domain measurements can be deceiving is found in this article by Chris Mangelsdorf. Chris’ example demonstrates that errors due to very high harmonics (i.e. code glitches) are often not visible in the frequency domain. In this particular case, they are even difficult to spot in the time domain without some tricks. The article also touches upon similar sentiments mentioned above, including phase information.

While many consider getting good FFT plots and ENOB numbers the finish line of most projects, not understanding time domain errors like glitches can be catastrophic. For example, if an ADC has a code glitch every thousand samples (regardless of its perfect ENOB or FOM), it cannot be used in a communication link targeting a bit error rate (BER) of 1e-6 or below.
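
A rough numerical sanity check (assumed ideal 10-bit quantizer, an arbitrary 4-LSB glitch injected on every 1000th sample): the glitch barely moves the ENOB, yet the sample error rate sits at 1e-3, three orders of magnitude above a 1e-6 BER target.

```python
import numpy as np

# Assumed ideal 10-bit quantizer with a 4-LSB code glitch on every 1000th
# sample.  All numbers are illustrative.
N, bits = 1 << 16, 10
fin = 1789 / N                                        # coherent test tone
x = 0.49 * np.sin(2 * np.pi * fin * np.arange(N))     # near full-scale sine

clean = np.round(x * 2**bits)                         # ideal ADC codes
glitched = clean.copy()
glitched[::1000] += 4                                 # rare code glitches

def enob(codes):
    """Crude ENOB estimate: signal taken as 5 FFT bins around the peak."""
    spec = np.abs(np.fft.rfft(codes * np.hanning(codes.size)))**2
    k = np.argmax(spec[1:]) + 1
    sig = spec[k-2:k+3].sum()
    sndr = 10 * np.log10(sig / (spec[1:].sum() - sig))
    return (sndr - 1.76) / 6.02

print("ENOB, clean   :", round(enob(clean), 2))
print("ENOB, glitched:", round(enob(glitched), 2))     # barely changes
print("sample error rate with glitches:", 1 / 1000)    # >> 1e-6 BER target
```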

Unfortunately, time domain analysis is, well, time-consuming. In large systems, running system-level transient simulations inevitably crashes servers and the human spirit. That’s why adopting a top-down methodology with good behavioral models is of increasing importance. To stand the test of time, we need to be smart about what and how to simulate in the time domain. Below is a list of essential time domain simulations:

  1. Power-on reset
    This is at the top of the list for obvious reasons, yet it is often not discussed enough with students working on a tape-out. A good chip is a live chip first.
  2. Power down to power up transition
    Putting a chip into sleep/low-power mode is always desirable, but can it wake up properly? Run this simulation (no input stimulus is necessary) to check the circuit biasing between the power-down and power-up states.
  3. Input stimulus transition from idle to active state
    In some applications, the input signal can go from idle to active repeatedly (e.g. burst-mode communication, audio signals, etc.). Make sure your circuit handles this transition well.
  4. Special input stimulus like step or pulse response
    Instead of sine wave testing, consider using steps or pulses to test your circuit. Step and pulse responses reflect the system’s impulse response, which ultimately contains every frequency’s magnitude/phase information (see the sketch after this list). Techniques like this are helpful in characterizing dynamic and periodic circuits (see Impulse Sensitivity Function).
  5. Other initial condition sweeps
    Power and input signal transitions are just special cases of different initial conditions. Make sure you try several initial conditions that cover some ground. For example, a feedback circuit might not be fully symmetrical; it could have different settling behaviors for high and low initial conditions.
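
As referenced in item 4 above, here is a minimal sketch of why a step response carries the full frequency response, using an assumed first-order RC with τ = 1 µs: differentiating the simulated step recovers the impulse response, and its FFT matches the analytical magnitude and phase.

```python
import numpy as np

# Assumed device under test: first-order RC low-pass with tau = 1 us.
# Differentiating the step response gives the impulse response, whose FFT
# recovers magnitude AND phase at every frequency.
tau, fs = 1e-6, 1e9
t = np.arange(0, 20e-6, 1 / fs)

step = 1 - np.exp(-t / tau)             # simulated step response
h = np.diff(step) * fs                  # impulse response ~ d(step)/dt

H = np.fft.rfft(h) / fs                 # frequency response from the step test
f = np.fft.rfftfreq(h.size, 1 / fs)
H_ref = 1 / (1 + 2j * np.pi * f * tau)  # analytical RC response for comparison

k = np.searchsorted(f, 1 / (2 * np.pi * tau))   # a bin near the pole
print("from step test:", abs(H[k]), np.degrees(np.angle(H[k])))
print("analytical    :", abs(H_ref[k]), np.degrees(np.angle(H_ref[k])))
```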

To state the obvious, this post is not suggesting you ignore Fourier completely, but rather that you treat it as the first (not last) guiding step in your verification process. To build a solid stool on which your design can rest, we need to consider the frequency, time and probability domains together. So the next time you look at another frequency response, think about phase, statistics, time, and hopefully this three-legged stool.
