When we hear “system” in IC design, two images normally pop into our heads – the billion- (or trillion!) transistor chips, or the PCBs that host these SoCs. To be completely honest, I never really liked the term “SoC”. It forces us to think that a system must have a processor, memory, a plethora of I/Os and much more to be worthy of the name. In reality, every component inside an “SoC” is a system by itself, with many interconnected sub-blocks. This is even more true in the advanced CMOS era, where mixed-signal processing and system-level co-optimization are crucial, even for a simple amplifier.
System thinking has never been an emphasis in analog design curricula (granted, there is just too much to cover). This often leaves designers stuck in a weird place: they aren’t always sure how the requirements came about or how their blocks fit into the system. And yet, we have all witnessed the huge benefits when a designer understands signal processing and system concepts.
The modern digital-assisted analog or analog-assisted digital paradigms call for more designers who can think deeper about the incoming signals, block interfaces and architectures. These are what I believe to be the top 3 pillars in system thinking for analog designers, which we shall explore in more detail in this post series.
The 3 pillars of system thinking
You can start practicing designing with a system mindset by asking the following 3 questions (and some sub-questions):
- Do I understand the nature of the signal coming into my block?
  - Which signal characteristic is the most important?
  - What is the worst-case signal that I need to handle?
  - Are there any signal characteristics that my circuit might be able to exploit?
- Do I understand my block’s interface?
  - Should I use an analog or digital output interface?
  - Is my load really a resistor or capacitor?
  - What does my block look like to others?
- Do I have the right number of loops?
  - Should I use sizing or loops?
  - Do I have too few or too many loops?
  - Do any loops interfere with important signal characteristics?
The objective here is to develop a habit of challenging the architecture and circuit requirements, even if we are just “humble block designers”. Let’s dive deeper into the first two questions here (architecture and feedback deserve a post of their own) and learn about some of the key concepts and tools at our disposal.
What am I processing?
One of the first things we are taught is the Nyquist-Shannon sampling theorem. Analog designers have this “2x curse” in the back of our heads – somehow we always need to squeeze out twice the signal bandwidth in the frequency domain. Another trap we tend to fall into is ignoring the lower frequencies (partly due to the same 2x curse). The reality is that more and more applications and architectures simply don’t follow Nyquist sampling anymore.
For example, modern wireline links operate on baud-rate sampling, and sub-Nyquist sampling is paramount in some software-defined radios (SDRs) and other compressive sensing applications. What enables these architectures is understanding the difference between signal bandwidth and information bandwidth. The goal of our analog circuitry has always been to preserve or condition the information contained in the signal. Reconstructing the entire signal waveform (i.e. Nyquist sampling) is just a superset of preserving the information.
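To make the baud-rate idea concrete, here is a toy numerical sketch (all values are made up for illustration, not taken from any real link): an NRZ data stream is smeared by a single-pole low-pass "channel", yet one decision per symbol recovers the information perfectly, even though reconstructing the rounded waveform itself would demand a much higher Nyquist rate.

```python
import random

# Toy sketch: baud-rate (1 sample/symbol) recovery of an NRZ stream.
random.seed(0)
symbols = [random.choice([-1, 1]) for _ in range(1000)]

OSR = 8       # oversampling used only to synthesize the "analog" waveform
alpha = 0.5   # single-pole low-pass channel -> rounded edges and ISI
waveform, y = [], 0.0
for s in symbols:
    for _ in range(OSR):
        y += alpha * (s - y)   # first-order settling toward the symbol level
        waveform.append(y)

# Baud-rate receiver: one decision per symbol, taken late in the symbol
decisions = [1 if waveform[i * OSR + OSR - 1] > 0 else -1
             for i in range(len(symbols))]
errors = sum(d != s for d, s in zip(decisions, symbols))
print("bit errors out of 1000:", errors)
```

The waveform never reaches a clean square wave, but the decision sample has settled far enough that every bit comes back correctly – the information bandwidth, not the signal bandwidth, sets the sampling requirement.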
We should begin viewing all of our signal conditioning blocks as Analog-to-Information Converters (AICs), a concept inspired by compressed sensing theory. I believe most problems can be reframed in the AIC context. In my own field of wired/optical communication, the overall channel’s inter-symbol interference (ISI), which in the conventional sense is bad for the signal, actually contains valuable information. A maximum-likelihood sequence estimator (MLSE) desires the right amount of ISI for the decoding algorithm to work.
Getting to know your signal
I encourage all analog designers to first grasp what information their circuits are trying to process. Below are some questions to ask about the signal characteristics that impact the incoming information:
- Is the information carried in a broadband (e.g. wireline) or narrowband (e.g. wireless) signal?
- Is there a huge discrepancy between the signal bandwidth and the information bandwidth? (e.g. we only care about the long delays between very sharp periodic pulses, like in an ECG signal)
- Is the information in the signal levels, signal transitions, or both? (e.g. level encoded like PAM vs edge encoded like Manchester code)
- Is there any low frequency or even DC information? (e.g. does any encoding on the signal impact the low frequency content?)
- Does the signal/information arrive continuously or sparsely? (e.g. continuous vs. burst mode)
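The second question above is worth a tiny sketch. For a sparse, ECG-like pulse train, the information is the pulse-to-pulse interval, not the pulse shape; the beat times below are hypothetical, and a real front end would of course threshold an amplified, noisy signal rather than an ideal one.

```python
# Toy sketch: extract the *information* (intervals) from a sparse pulse train.
fs = 1000.0                      # assumed sample rate in Hz
beat_times = [0.8, 1.75, 2.65]   # hypothetical event times in seconds

n = int(3.0 * fs)
sig = [0.0] * n
for t in beat_times:
    sig[int(round(t * fs))] = 1.0   # one-sample "sharp" pulse

# threshold crossing -> event times -> intervals (the actual information)
events = [i / fs for i, v in enumerate(sig) if v > 0.5]
intervals = [b - a for a, b in zip(events, events[1:])]
print([round(iv, 3) for iv in intervals])
```

The signal bandwidth is set by the 1 ms pulse width (~kHz), but the intervals we actually care about change at ~Hz rates – a thousand-fold discrepancy a clever AIC can exploit.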
A fun interview question
The discussion above might sound too high-level or even philosophical to some, so let me give an interview-question-style example (derived from a real-world problem). Let’s say we have a transmit signal that looks like a “train of clocks”, as shown below. The signal swing is relatively small and rides on a DC bias on the PCB. A huge DC blocking cap is used on board because the DC bias level is unknown. Your task is to design a receiver circuit for this clock train that preserves the clock duty cycle as much as possible.
The challenge here is a combination of the signal’s burst nature and the board level AC coupling. As a result, the chip’s input signal will have baseline wander, which is always a nuisance.
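The wander is easy to reproduce numerically. The sketch below (all values assumed: 10 MHz burst clock, 100 kHz coupling cutoff) pushes a 20 µs clock burst through a first-order high-pass standing in for the DC blocking cap, and measures how far the clock's midpoint droops between the start and the end of the burst.

```python
import math

# Toy model of board-level AC coupling acting on a burst "train of clocks".
fs, f_clk, f_hp = 1e9, 10e6, 100e3   # sample rate, clock rate, coupling cutoff

n = 40000                            # 40 us of simulation
x = []
for i in range(n):
    t = i / fs
    in_burst = t < 20e-6             # one 20 us burst, then silence
    clk = 0.1 if (in_burst and math.sin(2*math.pi*f_clk*t) > 0) else 0.0
    x.append(clk)

# single-pole high-pass (the DC-blocking cap + termination)
a = 1.0 / (1.0 + 2*math.pi*f_hp/fs)
y, yp, xp = [], 0.0, 0.0
for xi in x:
    yp = a * (yp + xi - xp)
    xp = xi
    y.append(yp)

# midpoint of the clock swing at the start vs. the end of the burst
start_mid = (max(y[:200]) + min(y[:200])) / 2
end_mid = (max(y[19800:20000]) + min(y[19800:20000])) / 2
droop = start_mid - end_mid
print(f"baseline droop within burst: {droop*1000:.1f} mV")
```

Even with the coupling cutoff two full decades below the clock rate, the baseline slides by roughly half the signal swing within a single burst – exactly the nuisance a fixed comparator reference cannot handle.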
Our first attempt might be to use a comparator directly. The issue becomes how to set the reference voltage: there is no single reference voltage that can preserve the clock duty cycle for every pulse. The next natural thought is to google all the baseline wander correction techniques out there to see if we can negate the AC coupling completely (then pull our hair out and cry ourselves to sleep).
Now, if we realize that the information in the clock actually lies in the edges and not the levels, there can be other possibilities. If the edges are extracted and a spike train is created like below, the new receiving circuit might be able to restore the levels from the spikes.
The simplest edge extraction circuit is actually just another AC coupling network, but with a cutoff frequency high enough relative to the clock frequency. A level restorer could conceptually be a pulse-triggered latch (w/ the right dosage of positive feedback). Congratulations, we just practiced Analog-to-Information conversion (high-passing to extract edges) and reconstruction (level restoration), and created a much simpler and more robust solution. In fact, the receiver would work equally well if the burst signal were PRBS-like.
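A behavioral sketch of this receiver chain (all numbers assumed, and the latch reduced to a comparator with hysteresis) shows the idea end to end: a drooping input clock, a high-pass with its cutoff near the clock rate extracting edge spikes, and the hysteresis stage rebuilding clean levels with the duty cycle intact.

```python
import math

# Behavioral sketch: edge extraction + level restoration on a drooping clock.
fs, f_clk = 1e9, 10e6

# drooping input: a 0.1 V clock minus a slow exponential standing in for
# the baseline wander (the exact wander shape doesn't matter here)
n = 20000
x = []
for i in range(n):
    t = i / fs
    clk = 0.1 if math.sin(2*math.pi*f_clk*t) > 0 else 0.0
    x.append(clk - 0.05 * (1 - math.exp(-t / 5e-6)))

# edge extractor: first-order high-pass with cutoff ~ f_clk -> spike train
a = 1.0 / (1.0 + 2*math.pi*f_clk/fs)
spikes, yp, xp = [], 0.0, 0.0
for xi in x:
    yp = a * (yp + xi - xp)
    xp = xi
    spikes.append(yp)

# level restorer: comparator with hysteresis (a crude latch)
out, state = [], 0
for s in spikes:
    if s > 0.03:
        state = 1
    elif s < -0.03:
        state = 0
    out.append(state)

# duty cycle of the restored clock over the last ten periods
tail = out[-1000:]
duty = sum(tail) / len(tail)
print(f"restored duty cycle: {duty:.2f}")
```

The slow wander barely makes it through the edge-extracting high-pass, so the spikes stay centered and the restored duty cycle sits at 50% regardless of how far the baseline has drifted.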
Exploit signal nature
System thinking in analog design often requires “thinking outside the box” and leads to “easier” solutions. The first step is to understand the information that we aim to process and pinpoint what we could exploit. In the example above, we took advantage of the fact that the same information lies in the signal transitions as in the levels. This led to a solution better suited for this particular application. While we should be proud of making complicated circuits work, we should take equal pride in simpler solutions born from a better understanding of the incoming signal.
What am I driving?
After figuring out what’s coming into our blocks, we now shift the focus to where the output signal is going, or more precisely, the block interfaces. One major source of frustration is when you tweak a block to perfection, but trouble arises once it’s plugged into the system: either the load doesn’t behave as expected, or your own block is the problematic load.
Perhaps everyone can relate to the cringe of seeing heavily Figure-of-Merit (FOM) engineered publications. Some new circuits are extremely power efficient, provided that the input source is a $10,000 box with a wall plug. Needless to say, it’s important to fully understand our blocks’ interfaces so that we can design and simulate accordingly.
The impedance lies
There aren’t many lies greater than “my block looks like/drives a pure resistor or capacitor”. While a block’s input or load impedance might look like a pure resistor/capacitor at certain frequencies, every realistic element has a frequency-dependent impedance (Exhibit A). Over-relying on simplified R/C loads is another reason why we sometimes can’t trust frequency domain simulations too much.
My readers already know my love for inverters, so let’s take a look at the picture below. As a start, let’s say our circuit is driving an ideal inverter. There shouldn’t be any objection to say the input impedance looks like a capacitor. Fair enough.
Now let’s add a Miller capacitor in there. Right away, things become more complicated than meets the eye. If the Miller cap is small relative to the input cap, it gets amplified by the inverter gain, and one might still approximate the input impedance as a capacitor with a Miller-multiplied component. However, if the Miller cap is big enough that it starts to act as an AC short sooner, the load impedance develops a resistive component because the inverter becomes diode-connected (this is also the intuition behind pole splitting in Miller compensation).
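We can put rough numbers on this with a single-pole small-signal model (all element values below are made up for illustration): the inverter as a transconductance gm with output resistance ro driving a load cap, plus an input cap and a Miller cap from input to output. The effective parallel capacitance and resistance seen at the input then fall out of the admittance.

```python
import math

# Back-of-envelope input impedance of an inverter with a Miller cap.
gm, ro = 5e-3, 2e3                 # assumed transconductance, output resistance
Cin, Cm, CL = 10e-15, 5e-15, 20e-15  # assumed input, Miller, load caps

def input_model(f):
    s = 2j * math.pi * f
    zout = 1.0 / (1.0/ro + s*CL)       # ro || CL
    av = -gm * zout                    # stage gain at this frequency
    yin = s*Cin + s*Cm*(1.0 - av)      # Miller effect: Cm sees (1 - Av)
    ceff = yin.imag / (2*math.pi*f)    # effective parallel capacitance
    rpar = 1.0/yin.real if yin.real > 1e-12 else float("inf")
    return ceff, rpar

for f in (1e8, 1e11):
    ceff, rpar = input_model(f)
    print(f"{f:.0e} Hz: Ceff = {ceff*1e15:.1f} fF, Rpar = {rpar:.3g} ohm")
```

At low frequency the input looks like the Miller-multiplied capacitance, Cin + (1 + gm·ro)·Cm, with a megaohm-range parallel resistance nobody would notice. Two decades past the output pole, the effective capacitance collapses toward Cin and the parallel resistance drops to the sub-kilohm range – the "capacitor" has quietly become lossy.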
To be the lord of chaos, I will now throw in an LC tank at the inverter’s output, and why not cascade another stage (and another, and another). Have you lost track of what the input impedance should be yet? Don’t believe this is a real circuit? Here is the resonant clock distribution circuit for a 224Gb/s transmitter. I would feel very uneasy using simple load capacitors when designing any of the intermediate stages.
Impedance modeling using n-ports
The habit of resorting to simple RC loads is not unjustified. They certainly provide order-of-magnitude results and speed up simulations. However, as illustrated above, that doesn’t guarantee the block will act the same when plugged into a real system. As designers, we need to recognize this possible culprit and address it early on.
We don’t need to look far to see a better way to model our block interfaces. Signal and power integrity (SI/PI) experts have long figured out that every trace on a PCB is an n-port network.
We often forget the first things we learned. Electronics 101 prepared us for n-port modeling with Thevenin/Norton equivalent networks, and even a MOS transistor’s small-signal model is network based. And yet, we rarely think of our own circuits as networks with S-parameters. For some reason, S-parameters are synonymous with RF design, but in reality there is a mathematical equivalence between S-parameters and Y/Z-parameters, making them applicable at all frequencies. S-parameters are popular simply because they are easier to measure in real life. The point is that S-parameters are a great modeling tool for linear circuits, and we should start utilizing them more.
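A one-port example makes the S/Z equivalence concrete: the reflection coefficient is just S11 = (Z − Z0)/(Z + Z0), here evaluated for a 1 pF capacitor against a 50 ohm reference (values chosen purely for illustration).

```python
import cmath, math

# One-port sanity check of the S <-> Z equivalence: S11 of a capacitor.
Z0, C = 50.0, 1e-12   # assumed reference impedance and capacitance

def s11(f):
    z = 1.0 / (2j * math.pi * f * C)   # capacitor impedance at frequency f
    return (z - Z0) / (z + Z0)

for f in (1e6, 3.183e9, 1e12):
    g = s11(f)
    print(f"{f:.3e} Hz: |S11| = {abs(g):.4f}, "
          f"phase = {math.degrees(cmath.phase(g)):.1f} deg")
```

Being lossless, the capacitor reflects everything (|S11| = 1 at every frequency), while the phase rotates from ~0 degrees (an open) toward -180 degrees (a short), passing -90 degrees where |Z| = Z0 (about 3.18 GHz for 1 pF) – the same information as the impedance curve, just in a different coordinate system.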
Passing around *.snp files
The idea then is to have a new routine testbench that extracts the n-port model of our own circuits or their loads. The simulation is as simple as an AC analysis, but provides the entire frequency-dependent impedance information.
Most simulators have S-parameter analysis (some just as a special case of AC analysis). The interface between designers then becomes “.s2p” files, which could also have best-case/worst-case variants under different PVT conditions. Simulation time remains fast but accuracy improves dramatically. It serves as the perfect balance between using an ideal capacitor and using the extracted netlist of the next block.
In fact, your DUT can also be modeled as an .s3p, .s4p, etc., as long as we are mostly interested in the circuit’s linear behavior. The same S-parameter files are equally usable in specialized system modeling tools like MATLAB. Modeling active circuits with S-parameters is nothing new, but it’s a wheel definitely worth reinventing (check out this 1970 thesis after a simple search).
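To show how lightweight the interchange really is, here is a sketch that writes and re-reads a Touchstone .s2p file for a hypothetical 2-port (a series R followed by a shunt C, converted from its ABCD matrix). A real flow would get the data from the simulator's S-parameter analysis; the element values, file name and frequency grid below are all made up.

```python
import math

# Toy .s2p round trip: series-R / shunt-C 2-port via ABCD -> S conversion.
Z0, R, C = 50.0, 100.0, 1e-12   # assumed reference and element values

def s2p_row(f):
    w = 2 * math.pi * f
    # ABCD of a series R followed by a shunt C
    A, B, Cc, D = 1 + 1j*w*C*R, complex(R), 1j*w*C, complex(1.0)
    den = A + B/Z0 + Cc*Z0 + D
    s11 = (A + B/Z0 - Cc*Z0 - D) / den
    s21 = 2.0 * (A*D - B*Cc) / den
    s12 = 2.0 / den                    # reciprocal network: AD - BC = 1
    s22 = (-A + B/Z0 - Cc*Z0 + D) / den
    return s11, s21, s12, s22

freqs = [1e8 * k for k in range(1, 6)]
lines = ["# Hz S RI R 50"]             # Touchstone header: real/imag, 50 ohm
for f in freqs:
    row = s2p_row(f)
    nums = " ".join(f"{v:.6e}" for s in row for v in (s.real, s.imag))
    lines.append(f"{f:.6e} {nums}")
with open("toy_block.s2p", "w") as fh:
    fh.write("\n".join(lines) + "\n")

# read it back (RI format: frequency, then S11 S21 S12 S22 as re/im pairs)
parsed = []
for line in open("toy_block.s2p"):
    if line.startswith(("#", "!")):
        continue
    vals = [float(v) for v in line.split()]
    parsed.append((vals[0], [complex(vals[i], vals[i+1]) for i in range(1, 9, 2)]))
print(len(parsed), "frequency points round-tripped")
```

A handful of plain-text lines carries the block's full frequency-dependent interface, ready to be dropped into another designer's testbench or a MATLAB model.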
Limitations of S-parameter models
As you might have guessed, the key limitation of this S-parameter modeling approach is the linear circuit assumption. When nonlinear effects become important (e.g. input-amplitude-dependent impedance changes), small-signal S-parameters could yield different results (but still much better than an ideal capacitor). While there exists a so-called Large-Signal S-Parameter (LSSP) analysis, it falls under harmonic balance (HB) or periodic steady-state (PSS) analysis, which means it is geared more toward RF applications. In addition, S-parameters might be limiting when dealing with mixed-signal processing, like sampling circuits.
Nevertheless, I have found that impedance/circuit modeling using S-parameters generally allows fast simulation times, better accuracy, and less system-level frustration down the line. In fact, analog designers can also gain system insights when interfacing blocks through S-parameters. Give it a try!
Let’s take a small break
System thinking in analog design is a skill that is increasingly important. Long gone are the days of building “general purpose” devices; a good system solution requires circuits tailored for each application.
First and foremost, we should understand what our circuits are processing and what their interfaces look like. I hope the examples discussed in this post open the door for aspiring analog designers to adopt a system mentality. In the next post, we will move from the interface to the inside of each block, and talk about perhaps the most important architectural tool for analog designers – feedback. Till next time!