Category: Tips & tricks

System for Analog Designers, Pt. 1 – What Comes In and What Goes Out

When we hear “system” in IC design, two types normally pop into our heads – the billion- (or trillion!) transistor chips or the PCBs that host these SoCs. To be completely honest, I never really liked the term “SoC”. It forces us to think a system must have a processor, memory, a plethora of I/Os and much more to be worthy of the name. In reality, every component inside an “SoC” is a system by itself with many interconnected sub-blocks. This is even more true in the advanced CMOS era, where mixed-signal processing and system-level co-optimization are crucial, even for a simple amplifier.

Tesla Dojo (left), Cerebras Wafer-Scale Engine (middle), 112Gbps receiver (right)

System thinking has never been an emphasis in analog design curricula (granted, there is just too much to cover). This often leaves designers stuck in a weird place: they aren’t sure how the requirements came about or how their blocks fit into the system. And yet, we have all witnessed the huge benefits when a designer understands signal processing and system concepts.

The modern digital-assisted analog or analog-assisted digital paradigms call for more designers who can think more deeply about the incoming signals, block interfaces and architectures. These are what I believe to be the top 3 pillars of system thinking for analog designers, which we shall explore in more detail in this post series.

The 3 pillars of system thinking

You can start practicing designing with a system mindset by asking the following 3 questions (and some sub-questions):

  1. Do I understand the nature of the signal coming into my block?
    1. Which signal characteristic is the most important?
    2. What is the worst case signal that I need to handle?
    3. Any signal characteristics that my circuit might be able to exploit?
  2. Do I understand my block’s interface?
    1. Should I use an analog or digital output interface?
    2. Is my load really a resistor or capacitor?
    3. What does my block look like to others?
  3. Do I have the right number of loops?
    1. Should I use sizing or loops?
    2. Too few or too many loops?
    3. Do any loops interfere with important signal characteristics?

The objective here is to develop a habit of challenging the architecture and circuit requirements, even if we are just “humble block designers”. Let’s dive deeper into the first two questions here (architecture and feedback deserve a post of their own) and learn about some of the key concepts and tools at our disposal.

What am I processing?

One of the first things we are taught is the Nyquist-Shannon sampling theorem. We analog designers have this “2x curse” in the back of our heads – somehow we always need to squeeze out twice the signal bandwidth in the frequency domain. Another trap we tend to fall into is ignoring the lower frequencies (also partly due to the 2x curse). The reality is that increasingly more applications and architectures simply don’t follow Nyquist sampling anymore.

For example, modern wireline links operate on baud-rate sampling. Sub-Nyquist sampling is paramount in some software-defined radios (SDR) and other compressive sensing applications. What enables these architectures is understanding the difference between signal and information bandwidths. The goal of our analog circuitry has always been to preserve or condition the information contained in the signal. Reconstructing the entire signal waveform (i.e. Nyquist sampling) is just a superset of preserving information.

We should begin viewing all of our signal conditioning blocks as Analog-to-Information Converters (AICs), a concept inspired by compressed sensing theory. I believe most problems can be reframed in the AIC context. In my own field of wired/optical communication, the overall channel’s inter-symbol interference (ISI), which in the conventional sense is bad for signal bandwidth, actually contains valuable information. A maximum-likelihood sequence estimator (MLSE) desires the right amount of ISI for the decoding algorithm to work.

Getting to know your signal

I encourage all analog designers to first grasp what information their circuits are trying to process. Below are some things to ask about the signal characteristics that impact the incoming information:

  1. Is the information carried in a broadband (e.g. wireline) or narrowband (e.g. wireless) signal?
  2. Is there a huge discrepancy between the signal bandwidth and the information bandwidth? (e.g. we only care about the long delay times between very sharp periodic pulses, like an ECG signal)
  3. Is the information in the signal levels, signal transitions, or both? (e.g. level encoded like PAM vs edge encoded like Manchester code)
  4. Is there any low frequency or even DC information? (e.g. does any encoding on the signal impact its low-frequency content? See the short sketch after this list.)
  5. Is the signal/information arriving continuously or sparsely? (e.g. continuous vs. burst mode)
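On that fourth question, a quick numerical experiment can make the point concrete. The toy numpy sketch below (every parameter is made up for illustration) compares the near-DC content of plain NRZ data against Manchester-coded data, whose per-bit transitions guarantee DC balance – exactly the kind of property a receiver’s AC-coupling or baseline-wander budget depends on.

```python
import numpy as np

# Toy comparison of low-frequency content: NRZ vs Manchester-coded random data.
# All parameters are illustrative; the point is only to show how an encoding
# choice changes the DC / low-frequency information in the signal.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 4096)

nrz = np.repeat(2 * bits - 1, 2)                                     # two samples per bit
manchester = np.column_stack([2 * bits - 1, 1 - 2 * bits]).ravel()   # each bit -> a transition

for name, x in (("NRZ", nrz), ("Manchester", manchester)):
    spectrum = np.abs(np.fft.rfft(x)) / len(x)
    print(f"{name:10s} near-DC content ≈ {spectrum[:8].sum():.3f}")
```

The NRZ stream keeps finite energy all the way down to DC, while the Manchester stream does not – which is why one survives a DC-blocking cap gracefully and the other needs baseline-wander correction.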

A fun interview question

The discussion above might sound too high-level or even philosophical to some, so let me give an interview-question-style example (derived from a real-world problem). Let’s say we have a transmit signal that looks like a “train of clocks” as shown below. The signal swing is relatively small and rides on a DC bias on the PCB. A huge DC blocking cap is used on board because the DC bias level is unknown. Your task is to design a receiver circuit for this clock train and preserve the clock duty cycle as much as possible.

The challenge here is a combination of the signal’s burst nature and the board level AC coupling. As a result, the chip’s input signal will have baseline wander, which is always a nuisance.

Our first attempt might be to use a comparator directly. The issue becomes how to set the reference voltage. There is no one reference voltage that can preserve the clock duty cycle for every pulse. The next natural thought is to google all baseline wander techniques out there to see if we can negate the AC coupling completely (then pull my hair out and cry myself to sleep).

Now, if we realize that the information in the clock actually lies in the edges and not the levels, there can be other possibilities. If the edges are extracted and a spike train is created like below, the new receiving circuit might be able to restore the levels from the spikes.

The simplest edge extraction circuit is actually just another AC coupling network, but the cutoff frequency needs to be high enough relative to the clock frequency. A level restorer could conceptually be a pulse-triggered latch (with the right dosage of positive feedback). Congratulations, we just practiced Analog-to-Information conversion (high-passing to extract edges) and reconstruction (level restoration) and created a much simpler and more robust solution. In fact, the receiver would work equally well if the burst signal is PRBS-like.
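If you want to convince yourself before drawing a single transistor, here is a rough behavioral sketch of the whole chain in numpy – burst clock, board-level AC coupling (baseline wander), on-chip edge extraction, and a crude latch-style level restorer. Every value (clock rate, swing, cutoffs, thresholds) is invented purely for illustration.

```python
import numpy as np

# Behavioral sketch of the interview problem; all numbers are made up.
fs, f_clk = 100e9, 1e9
t = np.arange(int(2e-6 * fs)) / fs
clk = np.floor(t * 2 * f_clk) % 2                 # 1 GHz square wave
burst = (t % 500e-9) < 100e-9                     # 100 ns bursts every 500 ns
vin = 0.1 * clk * burst + 0.5                     # small swing on a DC bias

def ac_couple(x, fc, fs):
    """First-order high-pass: input minus a running (low-passed) baseline."""
    a = np.exp(-2 * np.pi * fc / fs)
    y, baseline = np.zeros_like(x), x[0]
    for i, xi in enumerate(x):
        baseline = a * baseline + (1 - a) * xi
        y[i] = xi - baseline
    return y

wandered = ac_couple(vin, 1e6, fs)    # huge board cap -> low cutoff -> wander
spikes = ac_couple(wandered, 5e9, fs) # on-chip edge extractor (fc above f_clk)

# Level restorer: a latch that sets/resets on positive/negative edge spikes
out, state = np.zeros_like(spikes), 0.0
for i, s in enumerate(spikes):
    if s > 0.02:
        state = 1.0
    elif s < -0.02:
        state = 0.0
    out[i] = state
# 'out' recovers the bursty clock and its duty cycle despite the wander.
```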

Exploit signal nature

System thinking in analog design often requires “thinking outside the box” and leads to “easier” solutions. The first step is to understand the information that we aim to process and pinpoint what we can exploit. In the example above, we took advantage of the fact that the same information lies in the signal transitions as in the levels. This led to a solution better suited for this particular application. While we should be proud of making complicated circuits work, we should take equal pride in simpler solutions born from a better understanding of the incoming signal.

What am I driving?

After figuring out what’s coming into our blocks, we now shift the focus to where the output signal is going, or more precisely, block interfaces. One major source of frustration is when you tweak a block to perfection but trouble arises once it is plugged into the system. Either the load doesn’t behave as expected or your own block is the problematic load.

Perhaps everyone can relate to the cringe of seeing heavily Figure-of-Merit (FOM) engineered publications. Some new circuits are extremely power efficient provided that the input source is a $10,000 box with a wall plug. Needless to say, it’s important to fully understand our blocks’ interfaces so that we can design and simulate accordingly.

The impedance lies

There aren’t many lies greater than “my block looks like/drives a pure resistor or capacitor”. While a block’s input or load impedance might look like a pure resistor/capacitor at certain frequencies, every realistic element has a frequency-dependent impedance (Exhibit A). Overly relying on simplified R/C loads is another reason why we sometimes can’t trust frequency-domain simulations too much.

My readers already know my love for inverters, so let’s take a look at the picture below. As a start, let’s say our circuit is driving an ideal inverter. There shouldn’t be any objection to saying the input impedance looks like a capacitor. Fair enough.

Now let’s add a Miller capacitor in there. Right away, things become more complicated than meets the eye. If the Miller cap is small relative to the input cap, it gets amplified by the inverter gain, and one might still approximate the input impedance as a capacitor with a Miller-multiplied component. However, if the Miller cap is big enough that it starts acting as an AC short earlier, the load impedance takes on a resistive component because the inverter becomes diode-connected (this is also the intuition behind pole splitting in Miller compensation).
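A quick hand model shows this transition. Below, the inverter is reduced to a transconductance gm with an output resistance and load cap, plus input and Miller (feedback) caps; all values are made up, and the point is only to watch the input impedance phase drift away from a pure capacitor’s -90 degrees.

```python
import numpy as np

# Hand model of the Miller-loaded inverter input impedance (illustrative values).
gm, ro = 5e-3, 2e3                       # inverter transconductance, output res
Cin, Cf, CL = 5e-15, 20e-15, 10e-15      # input, Miller (feedback), load caps

f = np.logspace(7, 11, 401)
s = 2j * np.pi * f
av = -(gm - s * Cf) / (1 / ro + s * (CL + Cf))   # vout/vin with the feedback cap
zin = 1 / (s * Cin + s * Cf * (1 - av))          # vin/iin at the input node

for fq in (1e8, 1e9, 1e10, 1e11):
    z = zin[np.argmin(abs(f - fq))]
    print(f"f = {fq:8.0e} Hz   |Zin| = {abs(z):8.1f} ohm   "
          f"phase = {np.angle(z, deg=True):6.1f} deg")
# Near -90 deg (capacitive) at low frequency, strongly resistive once the
# Miller cap diode-connects the inverter, then drifting back toward capacitive.
```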

To be the lord of chaos, I will now throw in an LC tank at the inverter’s output, and why not cascade another stage (and another and another). Have you lost track of what the input impedance should be yet? Don’t believe this is a real circuit? Here is the resonant clock distribution circuit for a 224Gb/s transmitter. I would feel very uneasy using simple load capacitors when designing any intermediate stages.

Impedance modeling using nport

The habit of resorting to simple RC loads is not unjustified. They could certainly provide order-of-magnitude results and speed up simulations. However, as illustrated above, that doesn’t guarantee the block would act the same when plugged into a real system. As designers, we need to recognize this possible culprit and address it early on.

We don’t need to look far to see a better way to model our block interfaces. Signal and power integrity (SI/PI) experts have long figured out that every trace on a PCB is an n-port network.

We often forget the first thing we learned. Electronics 101 prepared us for n-port modeling with Thevenin/Norton equivalent networks, and even a MOS transistor’s small-signal model is network based. And yet, we rarely think of our own circuits as networks with S-parameters. For some reason, S-parameters are synonymous with RF design, but in reality there is a mathematical equivalence between S-parameters and Y/Z-parameters, making them applicable at all frequencies. S-parameters are popular simply because they are easier to measure in real life. The point is that S-parameters are a great modeling tool for linear circuits, and we should start utilizing them more.
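That equivalence is a two-line formula. Below is the standard textbook conversion between S- and Z-parameters for a uniform real reference impedance, sketched in numpy only to stress that nothing about S-parameters is RF-specific; the 1 pF capacitor check at the end is an arbitrary example.

```python
import numpy as np

# Standard S <-> Z conversions for a uniform real reference impedance z0.
def s_to_z(S, z0=50.0):
    I = np.eye(S.shape[0])
    return z0 * (I + S) @ np.linalg.inv(I - S)

def z_to_s(Z, z0=50.0):
    I = np.eye(Z.shape[0])
    return (Z - z0 * I) @ np.linalg.inv(Z + z0 * I)

# Sanity check on a 1-port: a 1 pF capacitor at 1 GHz (~ -159j ohm)
zc = 1 / (2j * np.pi * 1e9 * 1e-12)
S = z_to_s(np.array([[zc]]))
print(S, s_to_z(S))        # round-trips back to the original impedance
```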

Passing around *.snp files

The idea then is to have a new routine testbench that extracts the n-port model of our own circuits or our load circuits. The simulation is as simple as an AC analysis, but it captures the entire frequency-dependent impedance information.

Most simulators have S-parameter analysis (some just as a special case of AC analysis). The interface between designers then becomes “.s2p” files, which could also have best-case/worst-case variants under different PVT conditions. Simulation time remains fast but accuracy improves dramatically. It serves as the perfect balance between using an ideal capacitor and using the extracted netlist of the next block.

In fact, your DUT can also be modelled as a .s3p, .s4p, etc. as long as we are mostly interested in the circuit’s linear behavior. The same S-parameter files are equally usable in specialized system modelling tools like MATLAB. Modeling active circuits with S-parameters is nothing new, but it is a wheel definitely worth reinventing (check out this 1970 thesis after a simple search).
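To make the hand-off concrete, here is a hypothetical sketch of what the receiving designer might do with such a file in Python using scikit-rf (any Touchstone-capable tool works the same way). The file name and the ideal cap value are placeholders, not from any real design.

```python
import numpy as np
import skrf as rf   # scikit-rf; MATLAB's RF Toolbox offers a similar workflow

# Hypothetical hand-off: a colleague's block exported as "driver_load.s2p".
# Compare its input impedance against the ideal cap we planned to assume.
load = rf.Network("driver_load.s2p")          # placeholder file name
z_in = load.z[:, 0, 0]                        # Z11 vs frequency
z_cap = 1 / (2j * np.pi * load.f * 20e-15)    # assumed 20 fF ideal load

err = np.abs(z_in - z_cap) / np.abs(z_cap)
print(f"worst-case |Zin| deviation from the ideal cap: {100 * err.max():.1f}%")
```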

Limitations of S-parameter models

As you might have guessed, the key limitation of this S-parameter modeling approach is the linear circuit assumption. When nonlinear effects become important (e.g. input-amplitude-dependent impedance changes), small-signal S-parameters could yield inaccurate results (but still much better than an ideal capacitor). While there exists a so-called Large-Signal S-Parameter analysis (LSSP), it falls under harmonic balance (HB) or periodic steady state (PSS) analysis, which means it is really geared toward RF applications. In addition, S-parameters might be limiting when dealing with mixed-signal processing, like sampling circuits.

Nevertheless, I have found that impedance/circuit modeling using S-parameters generally allows fast simulation times, better accuracy and less system-level frustration down the line. In fact, analog designers can also gain system insights when interfacing blocks through S-parameters. Give it a try!

Let’s take a small break

System thinking in analog design is a skill that is increasingly important. Long gone are the days of building “general purpose” devices, and a good system solution requires circuits tailored to each application.

First and foremost, we should understand what our circuits are processing and what their interfaces look like. I hope the examples discussed in this post open the door for some aspiring analog designers to adopt a system mentality. In the next post, we will move from the interface to the inside of each block, and talk about perhaps the most important architectural tool for analog designers – feedback. Till next time!

Lean-Agile Principles in Circuit Design, Part 1 – How to Reduce Design Wastes

Working in a startup has forced me to pick up Eric Ries’ “The Lean Startup” again. If you haven’t read it, it’s a book about applying “scientific principles” in startup or entrepreneurial environments. As a hardware guy, you can imagine my slight “disappointment” the first time I read it. “Well, it’s only for software”, “Big companies probably can’t adopt this”, “Written in 2011? That’s so yesterday”.

I now find some of its ideas more intriguing after my second pass (actually listening on Audible during my commute). I began to make connections between the underlying principles of “lean thinking” and IC design practices. Maybe (and just maybe), IC design is primed to adopt lean principles more formally and systematically. So, you are reading this two-part series as a result of my obsession with such principles over the past couple of months.

Ah management jargons, we meet again

To many, management jargons seem just as foreign (and possibly pompous) as engineering abbreviations. Nevertheless, the end goal of either side remains the same: plan, execute and deliver a satisfactory result in a time and cost efficient manner. I have come to learn that a good “process” is key to sustainable and predictable results. So let’s first put away our engineering hats and look at the three most popular process improvement methodologies compared to the traditional waterfall approach.

Lean

Lean manufacturing was invented by Toyota to achieve a more efficient production system. In the 1950s, Toyota adopted the just-in-time (JIT) manufacturing principle to focus on waste reduction (seven types identified) in the process flow. The system was later rebranded as “lean” and studied by many business schools. In a nutshell, lean systems aim to remove unnecessary efforts that create minimal value for the final output.

Six Sigma

Who doesn’t love normal distributions? Six Sigma’s originator Bill Smith must secretly be a marketing genius because the name fully captures the methodology’s key principle – reducing variations. As one would imagine, Six Sigma processes heavily rely on data and statistical analysis. Decisions are made with data evidence and not assumptions. This notion shouldn’t be that alien to IC designers – after all, we run Monte Carlo simulations precisely for yield reasons. Modern processes combine Lean and Six Sigma and call it Lean Six Sigma (jargons right?).

Agile

You might be the most familiar with this term. After the “Manifesto for Agile Software Development” was first published in 2001, it quickly gained steam and almost achieved “Ten Commandments” status in the software world. The biggest difference in Agile is its embrace of constant change and readiness to launch small revisions frequently. Many became fans of Agile during COVID since it proved to be the most resilient system.

Relevance to IC design

It’s easy to classify such process methodologies as “obvious” or “not applicable to hardware”. Some might even falsely generalize Lean as less, Six Sigma as perfect, and Agile as fast. Ironically, “less, fast and perfect” are actually the desirable outcomes from such processes. Acknowledging and studying these ideas can help improve our own design methodologies.

In this post, I want to zoom in on the “waste reduction” aspect of lean (part 1). Not only do we often see over-specifying or over-designing lead to waste, but valuable human and machine time is also left under-utilized when schematics are drawn inefficiently.

It’s also no coincidence that some commonalities exist, which might be applicable to circuit design as well. Lean, Six Sigma and Agile all rely on a constant feedback loop of “build-measure-learn”. The difference lies only in the “complexity” and “latency” in the loop (part 2).

Now let’s try putting this in IC design’s perspective: if we are the managers of the “circuit design factory”, how would we adopt these principles?

Waste in IC design

Lean was first applied in manufacturing systems and later extended to other fields. Fortunately, lean is equally applicable to the engineering process. The table below, taken from an MIT course on lean six sigma methods, shows a mapping between the original manufacturing wastes and their engineering equivalents.

Engineering wastes aligned to the wastes in lean manufacturing [source: MIT OCW 16.660j Lec 2-4]

So how can we extend this further to IC design? Here is my attempt at mapping these wastes. I wager that you must have experienced at least one of these frustrations in the following table. I bet even more that we have all been waste contributors at a certain point.

Waste reduction toolbox

Now that we have identified the waste categories, let’s discuss the top 5 ways to reduce them during design cycles. You could call these personal learnings from the good and the bad examples. Many ideas here have parallels to some programming principles. So without further ado, let’s begin.

1. Finding O(1) or O(log N) in your routines

Targets waste #4, #8

I apologize for my software persona popping out, but there is beauty in finishing a task in constant or logarithmic time (check big-O notation). Examples in circuit design include using hierarchies, array syntax, and bus notation to reduce schematic drawing/modification to O(1) or O(log N) time.

If you are learning to create floorplans, ask your layout partners about groups, array/synchronous copy (for instances), aligning (for pins), and cuts (for routes). I wish someone had told me these lifesaving shortcuts earlier because I have spent way too much time doing copy→paste→move.

Travelling salesman problem [source: xkcd]

2. Systematic library/cellview management

Targets waste #1, #2, #3

Borrowing from software again, revision control in the library manager is widely used nowadays. While the benefit is huge, it can lead to unintended bad habits. Many designers simply create as many variations of the same cell as possible without actively “managing” them. This can result in mass confusion later on, especially if no final consolidation happens. Worst-case scenario, you could be checking LVS against one version but taping out another.

If investigations and comparative studies require multiple versions, I recommend using a different cellview instead of creating a completely new cell. Combined with config views in simulations, the entire library becomes cleaner and more flexible. When library consolidation or migration happens, only the relevant final cells survive, leaving a clean database. I plan to discuss how to create a good cellview system in a more detailed future post.

Don’t sweat over what the cellview names on the right mean, but do take some educated guesses

3. Symbol/schematic skeleton before optimization

Targets waste #5, #6, #7

Top-down methodology encourages designers to have a bird’s-eye view of the entire system in addition to the fine details of their own cells. One method is to define block and cell level pins early in the design phase. This idea is similar (though not as sophisticated) to abstract classes or interfaces in object-oriented programming languages (e.g. Java, Python). Instead of implementing the specific functions right away, a high-level abstract description first defines the key methods and their interfaces. The IC equivalent would be to appropriately name and assign port directions for each block’s pins. The symbol itself contains all the information for its interface.
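For readers more at home in software, here is the loose analogy spelled out in Python (names are invented for illustration): the abstract class pins down the interface – the “symbol” – while the concrete implementation, the “schematic”, can come much later.

```python
from abc import ABC, abstractmethod

# The "symbol": pin names, directions and required behavior only.
class Amplifier(ABC):
    pins = {"vdd": "inout", "vss": "inout",
            "inp": "input", "inn": "input",
            "outp": "output", "outn": "output",
            "pd": "input", "cfg<3:0>": "input"}

    @abstractmethod
    def gain_db(self, freq_hz: float) -> float:
        """To be filled in by whatever topology eventually implements it."""

# The "schematic" arrives later, already guaranteed to match the interface.
class FiveTransistorOTA(Amplifier):
    def gain_db(self, freq_hz: float) -> float:
        return 40.0      # first-cut placeholder number
```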

“How can I build a symbol without knowing what’s inside?” The truth is you must know the most critical pins – an amplifier should at least have power, inputs and outputs. You must also know the most basic required features on a block – power down, reset, basic configuration bits. Informative symbols and schematic skeletons should be possible with these pins alone. The same concept is applicable to layout and floorplans, with pins + black boxing.

Since we are only dealing with symbols and pins here, it’s much easier to modify them if the specification changes or a new feature is requested. This ties into the “minimum viable product” (MVP) concept that we shall discuss in part 2.

A rough frame w/ non-ideal parts is a better starting point towards building a car than a perfectly round and polished wheel

4. Design w/ uncertainties & effort forecast

Targets waste #5, #6, #7

Now that your schematic skeleton looks solid, the device-level design begins. You have a clear plan of execution thanks to the symbol-creation exercise, but potential pre- and post-layout discrepancies bother you to no end. We all have had that fear: what if this thing completely breaks down after layout?

To address this, designers should 1. estimate parasitics early by floorplanning, 2. use sufficient dummies, and 3. add chicken bits. Depicted below is an example of a tail current source in an amplifier. Before starting layout, designers should have a mental (or real) picture of how the unit current cells are tiled together. There could be an always-on branch (8x), two smaller branches for fine adjustments (4x + 2x), and dummies (6x). A critical parasitic capacitor connects to the output node with a reasonably estimated value.

One could argue the extra programmable branches and dummies are “waste” themselves. Keep in mind that reserving real estate at this stage consumes minimal effort compared to potential changes later in the design process. Swapping dummies and the always-on cells only requires metal+via changes. What if the layout database is frozen during the final stages of the tapeout but some extra juice is required due to a specification change? What if the chip comes back and you realize the PDK models were entirely off? The chicken bits might just save you.
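A back-of-envelope enumeration of that tail current example (the unit current is an assumed number) shows what the reserved real estate buys you: the 4x/2x chicken bits give four metal-selectable settings, and the 6x dummies keep a further upgrade path open.

```python
# Tail current example from above: 8x always-on, 4x + 2x programmable, 6x dummy.
# The unit-cell current is an assumed, illustrative value.
unit_ua, always_on, dummies = 25, 8, 6

for en4 in (0, 1):
    for en2 in (0, 1):
        units = always_on + 4 * en4 + 2 * en2
        print(f"en<4x>={en4} en<2x>={en2} -> {units:2d} units = {units * unit_ua} uA")
# With the 6x dummies swapped in via metal-only changes, up to 20 units
# (500 uA here) remain reachable without touching FEOL.
```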

5. “Ticketing” pipeline between design and layout

Targets waste #3, #5, #8

This last one is my personal system for communicating with my layout partners. I use a poor man’s “ticketing” tool called POWERPOINT. Yes, you read that right – I am suggesting using one more .ppt document to cut IC design waste. My personal experience so far is that this interface document provides better communication and results than any Zoom call, especially if there are time zone differences. Below is what an example slide looks like.

Each slide acts as a ticket for a layout modification request. The slides DO NOT need to be pretty at all. A quick snapshot and description serve the purpose of both conveying and documenting the request. As the design gets more complete, this slide deck will grow in size but all changes are tracked in a visual way. This also allows the designer and layout engineer to prioritize and work at their own pace, almost in a FIFO manner. When periodic checkpoints or project milestones are close, this slide deck becomes extremely helpful for reviewing and further planning.

Till next time

Being lean in our design process never means reducing design complexity or using fewer tools. Rather, it’s the mentality that we should begin with the right design complexity and use the right tools.

I hope some of the techniques mentioned here provide insights on how to be lean when designing. As promised, there is more to this topic. Specifically, the IC design process can also embrace the incremental changes of the Agile methodology. We can achieve better outcomes by breaking large design cycles into smaller ones. So stay tuned for part 2!

Top Down or Bottom Up – Where Should Designs Begin?

I might not be considered a “seasoned veteran”, but I have experienced some personal design paradigm shifts over the years.

Starting in undergrad, circuit design meant discrete components and breadboards. The equivalent of IC hazing was to read through countless datasheets and choose between a bad and an OK op amp. Moving to graduate studies, shrinking my breadboard designs into GDS was definitely dopamine-inducing. Meanwhile, I began to get a taste of the challenges that come with more complex circuits and systems. Various internships taught me the importance of designing for PVT and not just for the Ph.D. Working full-time opened my eyes to the internal structures of a well-oiled IC design machine (system, design, layout, verification, etc.). I picked up the design reuse mentality along with a new set of acronyms (DFT, DFM, DFABCD…). Interestingly enough, I need to draw on ALL of these experiences in a startup environment.

What I just described is how my own methodology went from bottom-up to top-down, and today I live mostly in the middle. To get started, I recommend everyone read through Ken Kundert’s article on top-down methodology first. Building on what he wrote more than 20 years ago (!), I will then add my take on this topic.

Where’s top? Where’s bottom?

This is an obvious question to ask, but how are “top” and “bottom” really defined? In the good old days, “bottom” meant transistors and “top” meant amplifiers. It was easier to draw the line because there weren’t that many layers. However, the increasing number of hierarchies in SoCs has forced us to rethink what top/bottom means.

It’s easier to define what “top” is: whatever block you are responsible for. “Bottom” becomes trickier. This is where models enter the chat. British statistician George Box famously pointed out the true nature of models – all models are wrong, but some are useful – and that is especially true for IC design. My definition of “bottom” is the layer at which the model is still useful but the details become cumbersome for the design of interest.

Digital designers have moved their collective “bottom” to the gate level because transistor details become unnecessary. For a PLL charge pump designer, transistors might be the bottom, but for the overall PLL owner, the bottom stops at the current source/sink model of the charge pump. My top can be your bottom, like the picture below shows. The hierarchical tree depicted here shows a clean boundary between each owner, but sometimes there can even be overlaps. Therefore, every designer has the opportunity to practice “top-down” methodology and think like a system architect, which I will expand upon in a later section.

The simulation problem

My post won’t be complete without a xkcd reference, so here it is:

Compiling. [credit: xkcd]

Change “compiling” to “simulating” and you get a pretty accurate representation of our daily lives. I am kidding of course, but the underlying message is valid. Large IC systems nowadays are simply impossible to simulate in full. The fact that a billion-transistor chip works at all is nothing short of a miracle.

There are mainly two ways simulation speed can be dragged down:

1. Netlist is too big

Do I hear a resounding “duh”? In modern PDKs, the transistor models themselves are becoming more complex already. Multiple flags and parameters are included in the model for layout dependent effects and parasitic estimates. When we add extra transistors in the circuit, we are also adding more resistors and capacitors. Layout extraction (especially RC extraction) makes the netlist size explode further.

2. Time constant gaps

More and more mixed-signal systems run into this issue. Examples include oversampled converters, digital/hybrid PLLs, TIAs with DC offset cancellation, etc. A block may have signal and loop bandwidths that are orders of magnitude apart. A high-speed TIA processes GHz signals, but its DC offset cancellation loop might only have kHz bandwidth. In order to fully simulate functionality, a millisecond-long simulation with a picosecond time step might be needed. This becomes a problem regardless of the netlist size.
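To put rough numbers on that gap (the throughput figures below are purely illustrative):

```python
# A kHz loop needs ~ms of simulated time; GHz signal paths need ~ps resolution.
t_sim, t_step = 1e-3, 1e-12
n_steps = t_sim / t_step                       # ~1e9 time steps
for rate in (1e6, 1e3):                        # steps/s: behavioral vs extracted
    print(f"{n_steps:.0e} steps at {rate:.0e} steps/s "
          f"-> {n_steps / rate / 3600:.1f} hours")
```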

To make matters worse, designers are also often relegated to the role of “SPICE monkeys”. Without a good understanding of the top-level requirements and behaviors, many fall into the trap of “tweak, sweep and press run”. Perhaps this is why many fear a takeover by the AI overlords, because computers are way better at loops than we are.

The simulation bottleneck worsens the already long time-to-market for IC products. To address these issues, top-down methodology introduces behavioral models to allow trade-offs between simulation time, accuracy and insight.

The top-down loop

Behavioral models are the key enablers in a top-down design flow. Top-down design typically requires new modeling languages other than SPICE to describe block behaviors. One can use software programming languages like Python and MATLAB, or hardware description languages (HDL) like Verilog-AMS or SystemVerilog.

When I went through my graduate program, our group had an unwritten rule: no one touched PDKs without at least a year of work in MATLAB. Our daily work revolved around models and algorithms before we could finally put down transistors. Unfortunately, not many circuit design programs require students to pick up a modeling language, and that is reflected in the industry today.

With the benefits of behavioral models, I often find myself in a top-down loop at a design’s early phase. Here is what I mean:

  1. Begin at the top level of the system, and assume relatively ideal blocks. Verify that your proposed system/architecture works with these assumptions using behavioral models.
  2. Question your assumptions and each block’s ideal nature. Start adding non-idealities into your models and re-evaluate. The key here is to pinpoint the non-idealities that matter the most in your system, and keep the ideal approximation for other aspects.
  3. You should have the “preliminary specs” for each block at this point. Now question if these specs are reasonable.
  4. Do order-of-magnitude low-level simulations for feasibility studies. Note that we are already at the “bottom” layer here!
  5. Repeat the process until the specifications converge as more low-level simulation data becomes available.

A simplified illustration of this top-down loop is shown above. If everything goes well, we traverse the green and blue arrows until we reach a final design. Note that the green path signifies a top-down approach and blue is for bottom-up. When people refer to a top-down approach today, they are really talking about this loop, not just the green path. It’s the continuous re-evaluations and requirement updates at the model and circuit levels that ensure optimal designs and smooth execution.
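Here is a deliberately tiny example of steps 1-3 of that loop, written as a plain Python behavioral model rather than any particular tool's syntax: an ADC driver feeding an 8-bit converter, first with an ideal driver, then with one non-ideality (driver output noise) swept to back out a preliminary noise spec. Every number is invented for illustration.

```python
import numpy as np

# Toy top-down loop: ideal blocks first, then one non-ideality at a time.
rng = np.random.default_rng(1)
fs, n, nbits, vfs = 1e9, 2**14, 8, 1.0
t = np.arange(n) / fs
sig = 0.45 * vfs * np.sin(2 * np.pi * 10e6 * t)

def adc(x):                                   # ideal quantizer model
    lsb = vfs / 2**nbits
    return np.clip(np.round(x / lsb) * lsb, -vfs / 2, vfs / 2)

def sndr_db(x):                               # error power vs the ideal signal
    return 10 * np.log10(np.mean(sig**2) / np.mean((x - sig)**2))

print("ideal driver          :", round(sndr_db(adc(sig)), 1), "dB")
for vn_rms in (100e-6, 300e-6, 1e-3):         # candidate driver noise specs
    noisy = sig + rng.normal(0, vn_rms, n)
    print(f"driver noise {vn_rms*1e6:5.0f} uVrms:", round(sndr_db(adc(noisy)), 1), "dB")
# The sweep shows roughly where driver noise starts eating into the converter's
# SNDR -- a "preliminary spec" before any transistor exists.
```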

Sometimes we might run into the red arrow where a fundamental limit pushes us to rethink the overall system (and worse, our career choice). While it sounds disastrous, a brand new architecture or a neat circuit trick typically comes to life to break this limit. About 80% of my current job happens when I am driving. My mind goes around the loop several times, shuffles around some blocks and plays more mental gymnastics when I believe a fundamental limit is reached. It takes some practice and time, but anyone can grow into a “system architect” after living in this loop long enough.

System architect – the man, the myth, the legend

Ken Kundert specifically wrote about system architects in another similar article. A system architect’s job is to own the top-level schematic, help define block interfaces, develop simulation and modeling plans, work with verification and test engineers, etc. A system architect basically acts as the middle man who speaks different languages to coordinate multiple efforts during a design cycle. They are the go-to person when an issue arises or change is necessary.

Sounds like a talent that’s extremely hard to come by. Yet, every team has to task a single engineer with being this person in a top-down design flow. All too often a system architect ends up being a guru with models but with minimal circuit design experience, and thus might not spot a fundamental limitation until it’s too late.

My belief is that every designer can be a system architect to some extent and on different scales. Regardless of how complex your circuit block is, you can adopt the top-down loop methodology as long as you treat it as a system. Here are some ways for you to try and play system architect:

1. Always question specifications

While specifications serve as the first line of interface between designers and the final product, that is really all they are. No requirement is sacred and no sizing is sacred, as my last manager loves to say. One example is the use of effective number of bits (ENOB) for specifying data converters. There has been a shift from using this generic figure of merit to a more application-specific way of defining converter requirements. A noiseless but nonlinear ADC will impact the system differently than a noisy but perfectly linear one. So next time you are handed a specification table, ask WHY.
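That last point is easy to demonstrate. In the sketch below (values chosen only so both cases land at the same number), a converter whose error is white noise and one whose error is a single HD3 tone report identical ENOB, yet a downstream system would treat a deterministic tone very differently from broadband noise.

```python
import numpy as np

# Same ENOB, very different error character (illustrative values only).
rng = np.random.default_rng(2)
n, fin, fs = 2**14, 10e6, 1e9
t = np.arange(n) / fs
sig, err_rms = 0.5 * np.sin(2 * np.pi * fin * t), 1e-3

noisy_linear = sig + rng.normal(0, err_rms, n)                                   # white error
clean_nonlinear = sig + err_rms * np.sqrt(2) * np.sin(2 * np.pi * 3 * fin * t)   # HD3 tone only

def enob(y):
    sndr = 10 * np.log10(np.mean(sig**2) / np.mean((y - sig)**2))
    return (sndr - 1.76) / 6.02

print("noisy but linear     ENOB:", round(enob(noisy_linear), 2))
print("clean but nonlinear  ENOB:", round(enob(clean_nonlinear), 2))
```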

2. Always question the signal nature

Most circuit requirements come from assuming some signal type going into the circuit. Sinusoidal signals have been the go-to choice because we love Fourier and AC responses. They are easier to simulate and measure. Unfortunately, almost no real application processes only single-tone sine waves. With the system architect’s hat on, you should fully understand the signal’s nature. There might be characteristics in the signal that can be exploited to simplify or improve your circuits. Is the signal DC-balanced? What do its statistics look like? How does it respond to weak and hard nonlinearities?

3. Create simple simulatable models in schematics

Building models is difficult and requires picking up a new skill set. However, you can build many useful models with ideal analog components without any knowledge of Verilog or MATLAB. More complex primitives are already available, including delays, multipliers, converters, etc. Start building simulatable models with these components first. You will be surprised at how effective they can be in reducing simulation time and providing insights. There are more sophisticated modeling tools, like Xmodel, once you become more comfortable and proficient later on.

4. Define symbols and pins early before drawing transistors

Lastly, a system architect has a bird’s-eye view of what the finished product looks like. Start with the end product and you will get a better picture of how to get there. Try identifying and naming the most critical pins for each cell first. While you create symbols, your mind is already computing how to connect each block and prioritizing what to do next. Empty schematics with meaningful symbols can still be full of information. Be mindful that these symbols will definitely change later, so nothing needs to be perfect. Treat this exercise just as drawing block diagrams on a scratch pad. Your muscle memory for drawing schematics will put you in design turbo mode and keep you motivated to continue the design effort.

Conclusions

The boundary between “top” and “down” is muddier than most think. Top-down design is really a mindset where designers treat the circuit block as a system rather than a soup of transistors. Education and training programs in IC design still tend to produce good designers, but do little to steer them towards becoming architects.

In my personal view, schematics and basic component libraries provide enough tools for anyone to play the role of a system architect at all levels. I encourage all students and designers to start incorporating behavioral models (with ideal components and/or Verilog-AMS) in their schematics, even if there is a separate army for system modeling. The right models can help reduce simulation efforts, assist in debug, and solidify your own understanding of the circuit.

It is no secret that polyglots have huge advantages in the globalized world, and the analogy is equally true for circuit designers. Adopting a top-down design mentality is like learning multiple new languages, which will definitely prove fruitful in the long run.

The Unsung Heroes – Dummies, Decaps, and More

Like most fields, circuit design requires a great deal of “learning on the job”. My first encounters with dummies and decoupling capacitors (decaps) were through internships. In fact, they could be the difference makers in a successful tape-out (analog and digital alike). In this post, we will take a deep dive and discuss the best ways to manage these unsung heroes in schematics.

Smart use of dummies

As the name suggests, dummies are devices that sit in your designs doing nothing functionally and looking “dumb”. The use of dummies falls under the category of “Design For Manufacturability” or DFM. They ensure that the real operating devices behave as closely to the simulation models as possible. Below are the three main reasons to include dummies:

1. Reduce layout dependent effects (LDE) for best device characteristics

The two biggest LDEs are the well proximity effect and the length of diffusion (LOD) effect, illustrated below. Basically, all FETs like to think they are the center of the universe. The right thing to do is to sacrifice the self-esteem of some dummies to extend the well edge and diffusion length. This is also why multi-finger devices are preferred over single-finger devices despite having the same W/L.

Well proximity and LOD effects (left), and their impact on device threshold voltage (right)
Adding dummies reduce LDEs for active devices in the middle (left); multi-finger devices suffer less LDE than single-finger devices (right)

Every process node’s LDE is different, but a general rule of thumb is to add 1-2um worth of dummies on either side for peace of mind (L0 in the graph above where Vt plateaus). So before starting your design, study the DFM recommendations or, even better, draw some devices and simulate.

2. Identical device environments for matching

Even when diffusions can’t be shared (for example, compact logic gates or self-heating limitations), dummies are still necessary to ensure device matching. This also applies to other elements like resistors and capacitors. Specifically, the devices of interest should have the same environments, even including metallization. Below are some examples of where to use dummies for device matching

(a) dummy inverters for consistent diffusion edge environments; (b) dummies around active resistors; (c) dummies next to matching current sources; (d) dummies next to matched MOM fingers

It’s not easy to share diffusion for single-finger inverters without adding extra parasitic loading, like in (a). Dummy inverters can be added on both sides to ensure at least the diffusion edges consistently see another diffusion edge. Similar principles apply to resistors in a ladder, matched current sources, or MOM fingers in DACs. The idea is to create a regular layout pattern with the active cells in the middle of said pattern.

3. Spare devices for easier late-stage design tweaks

Preparing for last-minute design changes is crucial for any project. The worst kind of change is a device size change, because FEOL space is precious and who knows what new DRC violations the change might trigger. There is a whole industry built around ECOs (Engineering Change Orders) to handle late-stage design changes, especially for large VLSI systems. By placing dummies (or spare cells) strategically, only metal changes might be necessary for late design changes. My favorite example is the dummy buffers for custom digital timing fixes shown below.

Dummy buffers as spares for potential timing fixes

Take a simple timing interface in a high-speed custom digital path, and let’s say it is setup-time critical in this case. The clock path needs some extra delay to give the flip-flop sufficient setup margin. We won’t know whether the margin is enough until we do post-layout simulation. A good practice is to put down some extra buffer/inverter cells, tied off as dummies, for post-layout modifications. Of course, it requires some experience to spot where these spare cells are needed, so start practicing as soon as possible.

Another quick example is putting spare gates in low-speed combinational logic for fixes late in or even after tape-outs. You might have heard of people putting NAND and NOR spare gates everywhere for this reason. One tip is to use 4-input NAND/NOR gates, tying the NAND’s unused inputs high and the NOR’s unused inputs low while they sit as dummies. This way, they can still be used as 2- or 3-input gates functionally (a quick sanity check follows below). Modern synthesis and digital flows already automate this, but analog/mixed-signal designers need to be aware of it as well.
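If the tie-off trick sounds too good, a tiny truth-table check settles it; this is plain Python, no EDA tool involved.

```python
from itertools import product

# A 4-input NAND with two inputs tied high behaves exactly like a 2-input NAND
# (and dually, a NOR4 with unused inputs tied low behaves like a NOR2).
def nand4(a, b, c, d):
    return int(not (a and b and c and d))

for a, b in product((0, 1), repeat=2):
    assert nand4(a, b, 1, 1) == int(not (a and b))
print("NAND4 with unused inputs tied high == NAND2 for all input codes")
```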

This idea also applies to analog circuits. Take the dummies that might exist in a CML circuit: bias current dummies, differential pair dummies and resistor load dummies. They are all available as spares for last-minute tweaks to squeeze out extra gain or bandwidth. The key here is to reserve the real estate so that only metal changes are necessary. Most layout engineers I have worked with are magicians when it comes to quick metal fixes.

The catalog for decaps

There is no such thing as a pure capacitor outside of mathematics land. That is why you have probably run into pictures like the one below at some point (a simple tutorial here). The effective series inductance/resistance (ESL/ESR) of a capacitor suppresses its high-frequency bypass capability. Even worse, a capacitor can genuinely behave inductively at high enough frequencies.

Realistic PCB capacitor model (top) and decoupling network impedance over frequency (bottom)
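A few lines of numpy make the self-resonance obvious. The ESR/ESL values below are typical-looking placeholders, not taken from any datasheet.

```python
import numpy as np

# |Z| of a "real" capacitor: C in series with ESR and ESL (placeholder values).
C, esr, esl = 100e-9, 20e-3, 1e-9          # 100 nF, 20 mOhm, 1 nH
f = np.logspace(5, 9, 5)                    # 100 kHz .. 1 GHz
w = 2 * np.pi * f
z = esr + 1j * w * esl + 1 / (1j * w * C)

print(f"self-resonance ~ {1 / (2 * np.pi * np.sqrt(esl * C)) / 1e6:.1f} MHz")
for fi, zi in zip(f, z):
    kind = "capacitive" if zi.imag < 0 else "inductive"
    print(f"{fi:9.2e} Hz   |Z| = {abs(zi):8.3f} ohm   ({kind})")
```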

This picture continues on chip. The PCB capacitors rely on in-package or on-die decaps to further suppress the supply impedance rise at higher frequencies. However, on-chip decaps face their own unique challenges, like ESD, leakage, lower quality factor, etc. Let’s first detail the possible decap choices.

1. PMOS/NMOS gate decap

This is probably the first thing that comes to our minds. We will connect the gate of a PMOS/NMOS to supply/ground, and connect the source and drain to the other. Typically the supply voltage is much larger than the device Vt, so we will get a linear enough decap. To build a high-Q cap, the gate length is typically quite long for smaller gate resistance. However, the overall ESR is still considerable when taking all layers of VIAs and metals into account. Nevertheless, these decaps have much higher capacitance density.

NMOS/PMOS gate decap schematics and example layout

So are we done? Not quite. The biggest issues for these decaps lie in reliability, especially ESD and leakage performance. For many deep sub-micron nodes, the oxide is thin enough for electrons to tunnel through, leading to gate leakage current. For the same reason, the oxide layer is susceptible to breakdown when high voltage is present or an ESD event happens. As a result, these decaps can lead to catastrophic failures if not taken care of. For example, if a positive ESD event happens on the supply, which directly connects to the NMOS’s gate, the device would likely break down, causing huge leakage current or even collapsing the supply.

Between the two flavors, PMOS tends to be the more reliable (though not necessarily the better-performing) decap choice for most small-geometry processes. Planar PMOS has lower gate leakage than NMOS. The parasitic diodes between the Nwell and substrate provide some extra ESD protection. The extra parasitic capacitance between the Nwell and substrate is another point in PMOS’ favor.

Cross section of planar PMOS and NMOS
2. Cross-coupled decap

To further improve on-chip decaps’ reliability, a cross-coupled decap structure came onto the scene (here is a nice paper on decaps). The structure does look funny – a positive feedback loop leads to a stable biasing point in this decap. Under this operating point, the circuit behaves as two parallel device capacitors, each with a device on-resistance in series. This ESR is much higher than that of the gate decaps, and thus less effective for high-frequency bypassing. However, the increased series resistance provides extra protection during an ESD event by limiting the current through the gate oxide. Most decaps in standard cell libraries today use similar structures to trade off some performance for reliability. After all, nothing matters if your chip has a hole burnt through it.

Cross-coupled decap schematic, model and impedance over frequency
3. Thin vs. thick oxide

Another way to trade off reliability and performance is through the use of thick oxide (TOX) devices. TOX devices have much lower leakage current and are rated for higher voltages, and thus have a better chance of surviving ESD events. The cost, however, is lower capacitance density (less capacitance per area due to the larger distance between gate and channel).

There was an anecdote in my Ph.D. lab where a chip came back with huge off-state currents, and unfortunately nothing worked. The root cause was a large area of thin oxide NMOS decaps, coupled with perhaps improper handling of antenna effects, which made the chips dead on arrival. After that incident, “only TOX decaps allowed” became an enforced rule in the group.

Industry and academic environments are certainly different, and more rigorous rule checks are available today. Nevertheless, I still make my decap choices carefully because of this horror story.

4. MOM, MIM and power grid

Last but not least, we have the good old metal caps. They typically provide better quality factor, linearity and reliability than device caps, but at much lower cap density. Below is an illustration of the physical structures of MOM and MIM caps

Example bird eye view of MOM capacitor (a) and cross section view of MIM capacitor (b)

In most cases, a MOM capacitor can be stacked directly on top of a device decap to effectively increase density and quality factor. Roughly 20% cap density improvement is achievable with an optimized layout. MIM caps might seem efficient because they sit between the top two metal layers with better density than MOM caps, but the thin plates’ high resistance is a bummer. I never use MIM caps for supply decoupling because they disrupt power grids and have mediocre performance at high frequencies. However, don’t let my personal preference deter you from trying them out – maybe they are the right fit for you.

One other “freebie” for decaps is the sidewall parasitic capacitance between power straps. Therefore, try to interleave your supply/ground lines whenever possible.

Decoupling signals

Let’s get this out of the way first: your supply is a signal. Sadly, not many people realize this until supply noise becomes a problem. What it really means is that a supply or ground pin in schematics is not a small-signal ground, so connecting decaps to these nodes requires some thought.

Let’s take a PMOS current bias voltage, for instance. Normally a low-pass filter (either C or RC) exists between the current mirror and the destination current source to lower noise. The question now is which decap type we should use.

First of all, since the decaps see a finite impedance to supply/ground, ESD is less of a concern (i.e. using NMOS gate caps is OK). We probably want the highest cap density to save area, so let’s stack as many MOM capacitors as possible. Ground is typically “quieter”, so let’s bypass to ground. Thus, here is our first attempt:

First attempt at decoupling current bias voltage

At first glance, there is nothing wrong with this considering noise sources from Iref or the diode-connected PMOS. However, as soon as we think about noise from the supply (which we believed to be noisier than ground), it sees a common-gate amplifier on the right side at high frequencies! If this bias current goes to an oscillator, boy would we have some jitter problems. The correct connection is to bypass the bias voltage to the supply, stabilizing Vgs across the PMOS device. At the same time, a PMOS gate cap would be the better choice in terms of layout.

Supply noise injection comparisons between different decoupling schemes
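A short hand calculation captures the difference. The sketch below models the bias node as a resistance back to a supply-tracking reference, with the decap returned either to ground or to the supply (plus a small parasitic to ground); component values are illustrative only.

```python
import numpy as np

# Supply-noise current injected by the mirror PMOS for the two decap choices.
gm_mirror = 2e-3               # mirror device transconductance (assumed)
r_src = 10e3                   # filter R plus 1/gm of the diode-connected ref
c_dec, c_par = 10e-12, 100e-15 # decap and parasitic gate-to-ground cap

f = np.logspace(4, 9, 6)
s = 2j * np.pi * f

# vgs of the mirror device per 1 V of supply noise
vgs_cap_to_gnd = -s * (c_dec + c_par) / (1 / r_src + s * (c_dec + c_par))
vgs_cap_to_vdd = -s * c_par / (1 / r_src + s * (c_dec + c_par))

for fi, g, v in zip(f, vgs_cap_to_gnd, vgs_cap_to_vdd):
    print(f"{fi:9.2e} Hz   i/vdd: cap-to-gnd {gm_mirror*abs(g)*1e3:6.3f} mA/V"
          f"   cap-to-vdd {gm_mirror*abs(v)*1e3:6.3f} mA/V")
# The cap-to-ground version approaches the full gm of the mirror at high
# frequency (the common-gate path); cap-to-supply stays limited by c_par.
```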

Decoupling signals is often not as straightforward as it seems. I have dealt with signals that needed a specific ratio of decoupling to supply and ground for optimal performance. Such exercises become more challenging when area becomes a constraint as well. This might seem obvious to some of you, but I am sure we have all made similar mistakes somewhere along the way. I hope this little snippet can save new designers some trouble.

Managing dummies

Finally, we get to the schematics part after a crash course on dummies and decaps.

You might already know my stance on who should initiate and manage dummies/decaps. I strongly believe designers should own the decisions on the usage and placement of these devices. As evidenced above, dummies and decaps directly impact circuit performance, and sometimes determine whether we have taped out a resistor or a brick. So start thinking about them as soon as a schematic hierarchy is created.

There are mainly two types of transistor dummies: ones that connect to a circuit node and ones connected to supplies. My recommendation is to try your best to draw the first type in schematics as intended in layout. It’s OK to leave the supply-connected dummies in a corner if you want to make schematics look cleaner, but definitely create your own floorplan. To illustrate, take the simple diff pair example below. One version connects the dummies to node isrc explicitly, and the other tucks them away in the corner with net-name connections. Many schematics out there contain dummies like the left example. For bigger and flatter schematics, that style can quickly become difficult to trace.

Different dummy drawing styles for example differential pair

The next tip involves aligning dummies in the same row as the active devices to reflect layout. The diff pair example didn’t follow this because it’s a simple circuit. We will use a conventional StrongARM latch as an example for this point.

Aligning dummies to rows of active devices in a StrongARM latch example

Note that the dummies on the vx nodes remain part of the active schematic, similar to the diff pair example. On the right is a boxed section for supply-connected dummies arranged into rows. This might seem redundant since all the NMOS devices could be combined, but it creates a template for layout engineers and highlights the relative dummy locations. The dummy sizes DON’T need to be accurate when the schematic is first created. They serve as placeholders for either layout engineers or you to fill in later. Again, dummies are for LDEs, so always keep layout in mind.

If you haven’t already noticed, some PMOS dummies on the top row are connected as decaps. In general, don’t waste opportunities to turn dummies into decaps (for supply or bias alike) right next to your circuits. They are the first line of defense against switching currents or capacitive feedthrough, like in a dynamic comparator.

Should we create dedicated dummy wrapper cells? My cop-out answer is that it’s a personal choice. However, if you designed the schematic hierarchy right, no level should have enough dummies to even consider a wrapper cell. So my real answer is that if a wrapper cell is ever needed, it probably means your schematic is too flat. Start wrapping active and dummy devices together.

Managing decaps

Most teams probably already have reusable decap cells. If you don’t have them, make them now!

For my first Ph.D. tapeout, the unit decap cell was the biggest time saver towards the end of the project. By using mosaic instantiation, the empty areas around the core circuits were filled up in no time. My first chip didn’t work for other reasons, but I was very proud of the decaps I taped out (can you hear me choking up?).

Cartoon chip layout, with decap mosaics for different supply domains (orange & yellow)

There are many details that go into making these reusable decaps. Schematic-wise, they are a collection of unit decap cells of different flavors pulled from the catalog. In modern CMOS designs, these decaps’ unit area in layout fits within a power or standard cell grid. The standard cell decaps are an excellent example. We just take that concept and apply it to higher-level custom decaps.

The first piece of advice might sound silly: make reasonably small symbols for unit decap cells. Decaps are important, but they are not the star of the show. Therefore, the real estate they take up on a schematic sheet should still be small. Case in point: a decap cell symbol in a standard library is most likely smaller than an inverter symbol. Along the same line of thinking, your custom decap cell’s symbol could be slightly bigger to include information about the decap type, but not that much bigger.

Below are some example custom decap symbols, compared to the sizes of typical standard cell decap and MOS symbols. By keeping them small but still informative, you can tuck these decaps away in a corner where they are less distracting in schematics.

Example custom unit decap symbols compared to standard cell decap and MOS symbols
Example StrongARM latch schematic with dummies and decaps

Moving up the schematic hierarchy, different decap types are necessary for multiple supplies. For example, use thick oxide for the IO voltage, a combination of thin and thick oxide for the core voltage, etc. The advice here is to ALWAYS make a dedicated wrapper cell for all the higher-level decaps. The example below is not really drawn to scale. One can imagine the decap wrapper cell symbol being significantly smaller than the rest of the core circuits. The key is again to put the cell away in a corner where it’s still easily accessible.

Decap wrapper example at higher level schematics

So what’s the big deal? Aside from a more modular schematic, there are two other main benefits.

  1. This creates a clean interface between design and layout engineers. The layout engineer can update the decap count inside the wrapper cell themselves without interfering with ongoing changes in the core circuits. This will save everyone some effort during crunch time.
  2. The magic of black boxing makes this schematic more simulatable. Accompanying the decaps are millions of parasitic capacitances and resistances when fully extracted. That’s one of the reasons why post-extraction simulations of higher-level schematics are almost impossible. With this schematic, we can mix and match the extraction outputs for all blocks. The decap wrapper can stay as a schematic or use C-only extraction. The opposite case could be to keep the core circuit as a schematic, but with a full RC extraction on the decaps and power grids.

The decap wrapper cell doesn’t have to live only at the top-most level. In fact, I would recommend putting these cells in almost all mid-level blocks and up. Once you are used to it, it just becomes a copy/paste habit.

Conclusions

Dummies and decaps are not the sexiest things to talk about (I have tried very hard here). They are nevertheless the key elements that ensure our circuits operate as intended. Here is a quote about decaps by Kent Lundberg (my circuit and feedback class instructor during undergrad): “Decoupling capacitors are like seat belts. You have to use them every time, whether you think you’re going to need them or not.” The same applies to dummies in today’s process nodes.

Subjects like dummies and decaps are often learned on the job or from expensive mistakes. There are many other "boring" but critical elements that deserve more of our attention in a design process (mostly DFM related). Oftentimes, fresh grads are overwhelmed by new terminologies, methodologies and productization concepts that weren't taught in school. To address this, grading the correct usage of dummies/decaps and overall schematic quality in a class project might be a good starting point.

Mistakes in chip design are expensive. Ironically, the hard truth is that sometimes people learn best from expensive mistakes. The best tradeoff, then, might be to share and openly discuss more “horror stories” in order to save younger designers from these million-dollar downfalls.

Metal Resistors – Your Unexpected Friend In Wire Management

Yes, you read the title right. If you haven't seen or used metal resistors (a.k.a. metres, rm, etc.) in your schematics, I hope this post opens a new door. Most modern PDKs already include metal resistor cells in the library; if not, you could create your own with your CAD team's help (if you have access to one). Normally, we work hard to avoid metres because they show up uninvited and mess up everything after extraction. However, they can be extremely helpful when placed deliberately, especially for simulation, testing and documentation purposes. In this post, I will explain in more detail how to use metres effectively in these areas.

Some wires deserve a name

Metal resistors have been around for a long time. I only began using them more heavily when finFETs came about. As explained in another post, layout and parasitics are now dominant factors in a design’s success. Therefore, many routing wires need to be scrutinized just like devices, and they deserve to have dedicated cells.

The easy example we can all agree on is on-chip inductors. Although many PDKs come equipped with inductor pcells, we probably still end up drawing our own. There are many ways to deal with inductor LVS (black boxing, creating a pcell, etc.), but my current favorite is to use metal resistors. These schematics are boring to say the least (the resistance is negligible, often <<1 Ohm and essentially a short), but they pass LVS as is without any funky setups. To simulate, you swap in another view generated from your favorite EM solver, be it an inductor model or an n-port model. The possibilities are endless: a similar schematic can apply to transmission lines, as another example.

metres for inductor LVS
metres for transmission lines

Perhaps my favorite use case is creating a standalone routing cell for a critical net. This happens most often when a signal needs to branch out and reach multiple destinations. Metal resistors can help define this design intent early on (especially if you have already experimented with the floorplan). This is just another aspect of the "draw it like you see it" mentality. The example shown below is for a simple clock route, but you can easily expand this into a distributed or tree structure. Note that the schematic is "less boring" now that I have added some parasitic capacitors to both supply and ground.

metres for an example clock route

Let's compare the two schematics below. On the top is a straightforward one showing a clock buffer driving the signal to the top and bottom buffers. Although not drawn here, one can imagine an improved version with annotated routing information and a cartoon floorplan in the corner. So how can we improve further upon that? That's where the bottom schematic comes in, with a routing cell created from metal resistors.

Schematics improvement with routing cell

Here are some of the biggest benefits of the bottom schematic:

  1. It forces you to treat routing plans seriously and acknowledge that it’s part of your design. Heck, it makes everyone who looks at this say that routing cell must be very important.
  2. There are two more unique and critical nodes (i.e. clk_top & clk_bot) for easy probing during simulation. There might be some who are fluent in netlists and know exactly where the signal of interest is, but that's not me. With this schematic I can easily probe these two nodes and obtain useful information right away, e.g. delay matching (a small sketch of such a check follows this list).
  3. This schematic intrinsically separates the design problem into two parts: driver fan-out sizing and parasitics. So if the post-layout simulation results aren't as desired, we have a better plan of attack for debugging. Is it limited by routing parasitics or by fan-out? Maybe I should try a C-only extraction of the routing cell to see if it is resistance dominated. Maybe there is a layout issue in the buffers rather than the wire routes, so let's use the extracted view only for the routing cell to rule the routing out. I hope you can see this is a more efficient scheme to help designers isolate layout issues.
  4. Let's talk about the supply/ground pins. The obvious reason is to give the extracted capacitors a better reference than "0". The more important reason is that these pins will remind you to include power grids surrounding the wires in layout. Many designers find out much later that top level integration slapped a dense power grid over their critical signals. This can lead to yelling, hair pulling and sometimes redesign. Putting power pins on routing cells lowers the chance of such "surprises".
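As a minimal sketch of benefit #2 (assuming the probed clk_top/clk_bot waveforms have been exported as time/voltage arrays; the waveforms below are synthetic placeholders, not simulation data), checking delay matching boils down to comparing threshold-crossing times:

```python
import numpy as np

def crossing_time(t, v, threshold):
    """Return the first time v rises through threshold, with linear interpolation."""
    idx = int(np.argmax(v >= threshold))   # index of first sample at/above threshold
    if idx == 0:
        return t[0]
    t0, t1, v0, v1 = t[idx - 1], t[idx], v[idx - 1], v[idx]
    return t0 + (threshold - v0) * (t1 - t0) / (v1 - v0)

# Synthetic stand-ins for the exported clk_top and clk_bot waveforms
t = np.linspace(0, 200e-12, 2001)                      # 0 to 200 ps
clk_top = 0.5 * (1 + np.tanh((t - 100e-12) / 5e-12))   # edge at ~100 ps
clk_bot = 0.5 * (1 + np.tanh((t - 103e-12) / 5e-12))   # edge at ~103 ps

skew = crossing_time(t, clk_bot, 0.5) - crossing_time(t, clk_top, 0.5)
print(f"clk_top vs clk_bot skew: {skew * 1e12:.2f} ps")
```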

Although the examples focus on high speed wires, metal resistors can be equally important for lower speed applications. When resistance matching is critical (e.g. the summing node of a current DAC), segmenting a net with metal resistors can work wonders.

On-chip probe points

Now let's go to the other extreme of the speed spectrum: DC test voltages. For the uninitiated, real world designs often require the ability to measure critical on-chip signals. For digital blocks, an internal mux and a protocol of your choice (I2C, SPI, monitor bus, etc.) are sufficient to select and send signals off chip. The principle is the same for analog signals, except you have to decide the exact locations to probe in the physical layout.

There are mainly two categories of test signals: performance critical and location critical. Performance critical signals are ones that you don’t wish to disturb when you look at them. For example, you don’t wish to add extra capacitive loading on a high speed net or you want to make sure no extra noise can be injected into the VCO control voltage through the test path. The typical solution is to use a large isolation resistor (could be ~100k) locally before sending the voltage to a far-away analog mux. In this case, the resistor is an actual device like a poly resistor.
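To see why a resistor in the ~100k range does the trick, here is a quick back-of-the-envelope sketch with made-up numbers (none of these values come from the post): the isolation resistor and the test-route/mux capacitance form a low-pass filter, which is perfectly fine for a DC readout, and the same resistor working against the probed node's own capacitance attenuates noise coupled back through the test path.

```python
import math

# Made-up values for illustration only
r_iso = 100e3     # 100 kOhm local isolation resistor
c_test = 1e-12    # ~1 pF of test routing + analog mux capacitance
c_node = 10e-12   # probed node's own capacitance (e.g. a loop filter cap)

f_readout = 1 / (2 * math.pi * r_iso * c_test)
f_inject = 1 / (2 * math.pi * r_iso * c_node)

print(f"Test readout bandwidth : {f_readout / 1e6:.2f} MHz")  # ~1.6 MHz, ample for DC
print(f"Noise injection corner : {f_inject / 1e3:.0f} kHz")   # ~160 kHz back into the node
```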

In other cases, extra loading is not problematic, but you care about the exact location and metal layer where the signal is probed. Supply and ground networks are the best example of this use. Our friendly metal resistor can be the perfect fit here. My suggestion is to create a corner in the schematic that summarizes the probe signals and their respective metals, like below. This little corner provides layout engineers with enough initial information (fine tuning is certainly required), and also serves as documentation.

Metres for sense voltage connections

For those who are pcell savvy or wish to improve upon this, you can create a wrapper cell with custom symbols that have the metal information written on them. The size can also be adjusted for more compact drawings (schematic real estate is valuable too). Depending on your appetite and the scale of your design, this might be overkill. However, there is a similar use case in the digital land that might make more sense for some.

Digital mapper

Let's take the diagram above and flip it left to right. Then you have a bus coming in on the left, branching out to new, uniquely named pins on the right. Remember the configuration section of the symbol here? This list can grow quickly for a larger block, and propagating all these pins to higher levels can become troublesome. Somewhere in the schematic hierarchy one needs to connect these meaningfully named pins to the dull digital buses. Perhaps you have seen something like this before

Digital bits distribution by net names

Ah, the good old connect by net name crime. The noConn cells are there to eliminate the warnings due to a few unused bits, but now the whole bus is “not connected”. There is no structure in how the digital bits are connected to their destinations. No amount of “dynamic net highlighting” is gonna save your eyes when you need to debug a wrong connection. Your layout partner is probably also making a voodoo doll and sharpening some needles. Introducing the digital mapper cell, sponsored by metal resistors

Digital bits distribution by mapper cell

The magic happens inside the mapper like below. Luckily, tools today recognize buses that are tapped off from a bigger bus without complaining. This results in a much cleaner look for the schematic and nothing is connected by net name. Right away, it conveys more information about each bit’s usage, expected metal layer and even relative locations in the layout. For example, the noConns signify routing tracks reserved for shielding around critical signals, like power down and reset.

Unit mapper group example

Building upon this unit metres mapper group, a complete mapper cell can contain much more information about the circuit's programmability. You guessed it – this can be the go-to place for all the default values if you annotate them. Better yet, you can see which configurations share the same register map address, and even read off the combined value in hex for the digital and verification folks (a toy sketch of this packing follows below). This is just another example of schematics as documentation, made possible by metal resistors. From layout's point of view, the initial effort might be similar, but any incremental change becomes easier to track with this cell.

Example of complete mapper schematic
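As a toy sketch of that hex read-off (field names, widths and defaults here are invented, not taken from any real register map), packing the annotated per-field defaults into the shared register value is a simple shift-and-OR exercise:

```python
# Hypothetical fields sharing one register address: (name, bit width, default)
fields = [
    ("en_bias",  1, 0b1),      # bit 0
    ("vco_band", 4, 0b0110),   # bits 4:1
    ("cp_trim",  3, 0b010),    # bits 7:5
]

def packed_default(field_list):
    """Combine per-field defaults into one register value, LSB field first."""
    value, offset = 0, 0
    for name, width, default in field_list:
        assert default < (1 << width), f"{name} default does not fit in {width} bits"
        value |= default << offset
        offset += width
    return value

print(f"Combined default = 0x{packed_default(fields):02X}")   # 0x4D with these made-up values
```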

I have one final note about the digital mapper cell. Depending on your team's methodology, the mapper inputs could potentially be lumped into a single wider bus. This can help simplify the symbol of a mid-level block and make higher level schematics easier to read and draw. But again, it's up to your personal taste.

High level schematic symbol style flexibility with mapper cell

Dear Santa

As a build up to my personal wish list, here is my bad attempt at a Christmas poem:

‘Twas the night before Christmas when an unexpected friend showed up in the PDK,

Her name was Metres, who smiled and said “your wires are going to be OK”.

Forgive me Santa for being so greedy,

but I still wish Metres could be a bit more handy.

Don’t you know a special elf named CAD?

Perhaps he can help, I heard he’s a good lad.

I know he is busy all season long,

but here is the list for what I want

  1. As mentioned above, the metres symbols should display metal layer info directly.
  2. Currently, pins can be assigned a "type" (power, signal, analog, etc.), but I personally have never used them or understood their purpose. Is it possible to create a "digital" pin type and give me a field to input a "default value"? It would be nice if the default value could show up on the symbol pin automatically.
  3. Is it possible to read in a digital mapper cell and generate a spreadsheet for the configuration table? This probably requires #2 to happen first.
  4. To expand upon #3, perhaps the process of creating configuration spreadsheets can be fully automated if special metres are recognized when tracing an entire netlist (a rough sketch of such a trace follows this list). Then designers only need to make sure their schematics contain the configuration details, and never have to touch Excel.
  5. A similar methodology might also work for analog test signals, just need another special flavor of metres pcell.
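For wish #4, here is a rough sketch of what such a netlist trace could look like. Everything here is hypothetical: the netlist syntax, the special cfg_metres cell name and its layer/default properties are invented purely to show the idea of turning schematic content into a configuration spreadsheet.

```python
import csv
import re

# Hypothetical netlist snippet: a special "cfg_metres" flavor carries
# the metal layer and default value as instance properties.
NETLIST = """
xI12 cfg_bus<0>    en_bias  cfg_metres layer=M3 default=1
xI13 cfg_bus<4:1>  vco_band cfg_metres layer=M3 default=6
xI14 cfg_bus<7:5>  cp_trim  cfg_metres layer=M4 default=2
"""

PATTERN = re.compile(
    r"^x\S+\s+(?P<bus>\S+)\s+(?P<name>\S+)\s+cfg_metres\s+layer=(?P<layer>\S+)\s+default=(?P<default>\S+)"
)

# Keep only the lines that instantiate the special config metres
rows = [m.groupdict() for line in NETLIST.splitlines() if (m := PATTERN.match(line.strip()))]

with open("config_table.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "bus", "layer", "default"])
    writer.writeheader()
    writer.writerows(rows)

print(f"Wrote {len(rows)} configuration entries to config_table.csv")
```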

These might still be pipe dreams, but dreams do come true if we wish hard enough. The bigger point, however, is that we need to keep thinking about ways to enhance productivity, improve design scalability and reduce the chances of error. Effective use of a tiny element like metres can translate into huge gains in efficiency. You never know what the next gem you find or create in the PDK will be.

Draw It Like You See It – Schematic and Layout Duality

The verdict is final: layout IS the design. The images below, taken from Alvin Loke's CICC slides (watch his recent talk here), summarize the key challenges in modern CMOS designs. Layout effort grows linearly (I think it's more than that) due to more stringent DRC rules and longer design iterations. As a result, it has forced changes in much of our design mentality (although we designers are a stubborn breed). I began to re-evaluate the way I draw schematics because I know that, in the end, a good layout is king. I still hold the personal belief that a design is more likely to work if it looks good. This applies to both schematics and layout.

Design and layout complexity in modern CMOS process [credit: Alvin Loke]

Design in reverse

I was fortunate enough to have early access to finFET processes (16nm to be exact) during my Ph.D. years. Funny story: when I first stared at a finFET layout, I assumed all the gates were shorted because of the continuous poly lines. You can imagine my facial expressions when I learned about poly and MD cuts (😲+ 🤬 + 😭). It took me about 3-4 circuit blocks to fully understand the "evil" that is parasitics in finFETs. The LVS-ignorable parasitic caps that double your circuit's power seem innocent next to the parasitic resistors that smirk at you as they make your netlist size explode. Eventually, I decided to do my designs in reverse: never begin schematics/simulations before layout placement. It might sound counterintuitive, but here are the biggest reasons and benefits for adopting this methodology

  1. Save time for everyone in the design cycle
    Put yourself in the shoes of layout engineers. They need to tell you that your circuit is impossible to draw due to some DRC rule. They probably spent a whole day trying to figure this out, but the bad news was inevitable. Frustration and tension fill the room. These scenarios could be avoided if the designer already understood the floorplan/DRC limitations. So instead of "optimizing" the circuits only in SPICE land, start with layout.
  2. Layout can determine your sizing, not simulations
    Designers tend to "overdesign" a cell when assigned a small puzzle piece in a big block. We run hundreds of simulations to squeeze out that 0.1dB of extra performance, only to find out later that it makes no difference after post-layout extraction. Nature is kind to us: we often deal with shallow optima, and a single transistor's size won't matter much in the grand scheme of things. So instead of running tons of simulations, let your design choice be informed by what works better in layout. One example: increasing a device's finger count by one could reduce the cell's width thanks to abutment.
  3. Begin thinking about creating hierarchies because of layout “tediousness”
    A good schematic hierarchy can also help increase layout efficiency. To fully understand the importance of good hierarchical schematics, you need to experience the painful repetitive tasks in layout firsthand.
  4. Embed your design intent for parasitics into the floorplan
    No matter how good your cartoon floorplans in schematics are, they don't come close to real layout floorplans. You might gain new insights after just laying down your tiny transistors and some wires. You might also want to break the OD diffusion to squeeze in more contacts, so you change an NMOS from a single 10-finger device to 10 single-finger devices. You can then draw schematics with design intent for parasitics, but you need to OWN the layout floorplan for that to happen.

The design/layout Venn diagram

I have this mental Venn diagram for design and layout. I remind our team’s designers that a majority of their design efforts should be in layout, with awareness of floorplan, DRC and parasitics at a minimum. On the other hand, a good layout engineer should be an electrical engineer at heart, knowing when to tradeoff parasitic capacitors and resistors, capable of suggesting better floorplans, and just a wizard with those hot keys.

It is certainly easier said than done, and I believe designers should take the initiative to reach across the aisle. DO NOT think you are doing the layout engineers' job; rather, you are helping yourself down the line. I promise that your design's "time to completion" will shrink significantly. Everyone in the layout team will shower you with appreciation if you are just a little more layout savvy.

Design and layout Venn diagram

Layout-like schematics

Enough of my ranting about how designers should learn layout; let's discuss how we can at least draw schematics with layout in mind.

Different companies have different ways of managing their schematics at various levels. In large SoC companies, there might be a transition somewhere in the hierarchy from a more manual, analog way of managing schematics to more digital-like methodologies (i.e. netlists, Verilog, etc.). In these cases, the higher level schematics are mostly auto-generated and human-unreadable. Sometimes this makes sense because the chip becomes just a combination of macros, and functional verification can be more efficient with the help of digital tools. Nevertheless, drawing good schematics that reflect layout is still a good idea at the mid and low levels. It is really a personal choice at which level or for which cell to draw a layout-like schematic, but it is a practice that can fit any hierarchy level.

It's time to get our hands dirty. The biggest hurdle we need to jump over first is the default rectangular symbol shape handed to us by the EDA tools. Its partner in crime is the selection box, the invisible border that defines the symbol boundary and limits your creativity. The conventional wisdom says inputs on the left and outputs on the right. We have been going with the flow for a while, and to be fair, these defaults certainly get the job done. To break from this convention, here is the corner selection box, which allows you to draw symbols of any shape.

Corner selection box

This allows you to create very layout-like symbols, yet still provides a clear entry point to descend into the hierarchy. To illustrate, below is a top level schematic with a pad ring. The boring rectangular symbols result in a schematic that looks like this (I didn't put all the pin names on there for simplicity)

Boring pad ring symbol at top level

Now if I draw the pad ring symbol as a real ring with the corner selection box, the schematics can turn into something like below

An actual chip drawn in schematics w/ a ring symbol

Let's detail why this is better

  • The pad locations are explicit, so you can get a lot of information from the schematic alone. You can already visualize signal/current flows. You know exactly how many spare pads there are and where they sit, just in case. You know how to floorplan the internal blocks' I/O pins. The list goes on.
  • It makes more sense to have duplicate pins on this pad ring symbol because it reflects the physical layout. Thus, you have an immediate idea of how many pads are available for the same signals, especially important for supply/ground.
  • Although I didn’t draw it here, you can imagine how one can expand this symbol to annotate each pad type (e.g. ESD strength, I/O pad type, etc.), adopting the schematics as documentation principle.
  • The sense of pride that comes with drawing and admiring this finished schematic, which you treasure almost as much as your kids (ok, maybe not that much).

Another dummy example

Now let's move to a lower level dummy block for another example. I want to emphasize that this is not a real design and probably doesn't even work. However, it's a great example to show how to draw layout-like schematics. Take a digital LDO (since we did an analog one before); we will focus on the custom digital controller and the PMOS array. The block diagram is shown below

Dummy digital LDO core block diagram

As you can see, this block diagram serves as a pseudo floorplan for the layout as well. I will show the final schematics first, and then go into each sub-block.

Digitally controlled PMOS array schematics

We will dive into the PMOS array (or more precisely, matrix) first. This cell embodies the notion that the layout is the design. It's quite straightforward schematic-wise, but the nuances are all in the layout. My preferred way to draw this cell for layout is to create row and column hierarchies like below

Row and column schematics for the PMOS matrix

Note that I purposely make the csel bus come from the bottom to match the layout. The vin/vout pin directions are more conventional since there is no easy way to indicate a 3D structure (i.e. VIAs up and down) in schematics.

The eagle-eyed among you may already see that the schematics can be simplified and made more scaling-friendly using bus notation. When the matrix size is large (e.g. >256 elements or 8 bits), the bus notation makes sense. Otherwise, I think 16 rows + 16 columns can still be drawn out explicitly to reflect the layout (that's roughly 2·log2(16) = 8 copy/paste operations, so not that bad). Together with a cartoon floorplan and more notes in the schematics, you can confidently hand this off to your layout partner.

Simplified row and column schematics for PMOS matrix

Now we will move onto the custom digital block. The more interesting subcell here is the shift register, so I will expand on it further. For the digital block itself, you can clearly see the three subcells and their relative positions w.r.t. each other. They can be placed into an L-shape with standard cells, fillers and decaps, just like in the schematic. Of course I didn't forget to draw a note box to indicate the higher level connections to the PMOS matrix. One benefit of this style of schematics (which might not be obvious until you try it yourself) is that you rarely have to connect by net name because the signal directions are preserved like in the layout.

Digital controller schematics

If we descend into the shift register cell, I would draw something like the following. The example design intent here is to run the clock signal against the data path to lower the hold time violation risk. Thus the data input needs to be buffered over to the other side of the register chain, and the extra buffer delay acts as hold time margin for the first DFF.

Note that I also space out the data buffers and annotate the approximate distance between them. The data buffer size is already chosen appropriately because I know the metal usage and wire length in advance. The clock signal has the same annotations, along with a note symbol for shielding. It's all possible because I played around with the layout floorplan before drawing this schematic. Again, we can simplify this schematic before the shift register gets too long. It might lose some layout information, so you can add a cartoon floorplan in the corner as a supplement.

Shift register schematics, explicit (top) vs. simplified (bottom)
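To put rough numbers on that design intent (all delays below are made-up placeholders), a minimal hold check for one launch/capture pair in the chain looks like the sketch below. Routing the clock against the data flow makes the launch clock arrive later than the capture clock, and that skew adds to the hold slack in the same way the deliberate data buffer delay does.

```python
# Made-up timing numbers (in ps) for one launch/capture DFF pair
t_clk_to_q = 25   # clock-to-Q delay of the launching DFF
t_data_buf = 30   # extra data-path buffer delay added on purpose
t_hold     = 20   # hold requirement of the capturing DFF
t_skew     = 15   # launch clock later than capture clock
                  # (positive because the clock runs against the data flow)

# Hold check: the earliest new-data arrival at the capture flop must come
# at least t_hold after the capture clock edge.
hold_slack = t_skew + t_clk_to_q + t_data_buf - t_hold
print(f"Hold slack: {hold_slack} ps")   # positive -> no hold violation
```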

Final points

In the interest of keeping this post at a reasonable length, I won't include any more specific examples. However, here is a list of layout information that can be explicitly shown in schematics

  1. Routing path features including 45/90 degree turns and branching for critical signals, especially if distributed long distance (e.g. clocks).
  2. Directionality between critical signals (e.g. show if data and clock paths are parallel or orthogonal).
  3. Special routing plans like a tree structure for matching or star connection for power.
  4. Inductor coil placements relative to other cells.
  5. Higher level block symmetry (for instance, replicated I/Q mixers in an RF signal path).
  6. Common centroid placements and connections for first order gradient cancellation (differential pairs, binary/segmented DACs, etc.).
  7. The list can go on as you start to draw it like you see it…

As a closing thought, I started this post with a focus on modern CMOS and finFET, but the principles of designing in reverse and drawing layout-like schematics are equally suitable for older process technologies. Designers have to evolve and understand that bottlenecks and constraints often lie in other aspects, especially layout. By the same token, I also encourage designers to learn about new ideas in signal processing and systems.

In an ideal world, the Venn diagram described above would have a third circle for system design. Workflows and available talent nowadays force most teams to operate like the diagram on the left. Each circle will expand over time thanks to new technology and tools, but it's the right overlaps that push innovation forward and ensure execution. We should all aspire to be in the middle of the intersections, and younger generation engineers should be trained as such. So gauge yourself against this picture, and move towards the center 1dB each day, one day at a time.

Schematics as Documentation – How to Use Symbols and Annotations Effectively

You might call me a "reverse convert" – I began as a software undergraduate who later jumped to the hardware side, pretty rare huh? I can save my personal story for another day, but what I learned on the software side remains impactful in my work today. The importance of documentation was chiseled into my brain after a few programming courses. I remember distinctly when a TA showed us how Javadoc could turn my code's inline comments into pre-formatted and intuitive HTML documentation. Your IDE (Integrated Development Environment) becomes a one-stop shop for most of your programming needs.

Example Javadoc documentation page, auto generated from comments in code (and I emphasize “auto”)

Now let’s compare that with what IC designers do. We all have made one too many slides, probably some Word docs as formal documentation, and countless spreadsheets for configuration settings. Microsoft is really the winner here.

I pose the question (and challenge) of whether schematic capture tools can be used to meet our documentation needs as well. Before taking the leap to talk about "AI assisted schematics", my simpler conjecture is that adopting the mindset of "schematics as documentation" and some basic practices will already bring huge benefits. In this post, I will share some key ways of using good symbols and annotations to document our circuits' architecture, analog and digital aspects.

Architecture documentation

Think back on the last time you WANTED to draw a good block diagram for your circuit, probably for a design review or a journal paper. Now take that desire and put it into schematic and symbol drawing, which is the very first place we can document circuit architecture at all scales. Here are my three guidelines to ensure architectural information is conveyed clearly in schematics and symbols.

1. Draw block diagrams on symbols with the right level of abstraction

Below are three different example PLL symbols. A bad one won't have any information about what the block is, so it's virtually the same as a netlist block. An OK one will at least have a description of the block (i.e. a block name like PLL, LNA, ADC, etc.) on the symbol. There are situations where this might be acceptable; it's only marginally better than the bad one, but a big visible name makes the block easier to find in a complex high level schematic. A good symbol will show some level of architectural information on the symbol itself. It can depict the critical sub-blocks and immediately show the signal flow with the right pin placements. Depending on your personal preference, the big block name could be optional.

Examples of a bad symbol (left), OK symbol (mid), and good symbol (right)
2. Use meaningful cell, instance, pin and net names

There isn't a "golden reference" for naming conventions, but definitely have one within your group and enforce it. For basic primitives, there are some rules ("R" for resistors, "C" for capacitors, "L" for inductors, "V" for voltage sources, "M" for devices, and so on), but even those aren't always strictly followed in the crunch of time (I plead guilty). When it comes to pin/net names, the problem can be even worse, like the example below. Obviously I exaggerated the bad symbol to make a point, but I hope the message is delivered.

Always give meaningful names to cells, instances and pins

One suggestion on cell names is to embed the schematic hierarchy in the name itself. This scheme not only helps with hierarchical thinking, but also automatically sorts the cells in the library manager (easier to find later). One example would be to name the bias block in the charge pump pll_core_cp_bias. The name does get long sometimes, so exercise tradeoffs when using this scheme (a very long name might be a good indicator that you have too many levels).

Most schematics I have seen so far tend to have good pin names, but once in a while you run into pin names taken straight from the alphabet soup. Similar to cell names, I also recommend embedding some hierarchy. Also, group the pins according to function or sub-block. For example, the main signal path pins (e.g. clk_ref, clk_out) should be part of the block diagram, the bias current pins can be grouped together, and the configuration pins for the PD/CP should be grouped together.

Lastly, something is wrong when a counter determines the instance names of your circuits, especially for higher level blocks. The issue isn't really schematic readability, but debuggability when you study and modify netlists. I learned this the hard way when I had to trace "I0/I12/I034/I5/net067" in a netlist. So if you care about an internal net, give it a good name.

3. Use note shapes/colors to highlight local or higher level connectivity

Schematic capture tools are essentially drawing tools, so we have shapes and colors at our disposal. Weirdly enough, we rarely use them in schematics. Later in my career, I started experimenting with using shapes and colors more and I loved it. Architecturally, it adds another dimension to explaining the circuits.

One example is when I can't find a good way to avoid the "connect by net name" crime. By using differently colored nets or explicit note arrows, you can emphasize critical connections even though they might be far apart in a big schematic. The goal here is to catch the attention of whoever reads the schematic, especially when feedback loops are involved. I still prefer note arrows because they show directionality, but colors might work better for special nets like biases and clocks.

Use different colored nets and/or arrow shape to show connectivity when connecting by net name

Another example is when such loops need to be broken during simulation (PLL, LDO, common mode feedback, switched-cap circuits, etc.). We can use an arrow to indicate higher level net connectivity. Sometimes we can even draw connectivity to other blocks so that we don’t lose sight of the overall architecture. Below are two examples

Note arrows to indicate connectivity at higher level
Use note shapes to show important higher level blocks (gm pair on left and regulator power MOS on right)

Analog documentation

Let's move on to the more analog-specific documentation that is valuable in schematics. We have already seen a glimpse of this in the examples above, in which current consumption is annotated along with the higher level components in the schematics. Most designers already have a personal style for documenting analog blocks; my Big Three are listed below

1. Highlight circuit parameters and performance

One widely used annotation is putting in parasitic capacitance on critical nets after layout extraction. This practice reduces the gap between pre and post layout simulations, and it can help designers pinpoint where the limitations are. I think we can go a bit further to annotate other circuit parameters and simulated performance numbers in the schematics, including input capacitance, mismatch, noise, etc. It’s almost like keeping a journal of your circuit’s health.

2. Draw a layout floorplan

Layout notes are also essential in serving as the first interface between layout and design engineers. I would recommend drawing actual cartoon placements in addition to writing simplified notes like “minimize resistance” and “match routing”. Sometimes a well drawn floorplan automatically enforces these requirements. My personal preference is to put these notes in the floorplan cartoon so that the schematic itself looks clean. So think about creating a “layout corner” in your schematics.

3. Keep a record of your thinking process

Last but not least, use your schematics as a notepad to record your design decisions or experimental tweaks. It's especially important when you are close to the tape-out finish line. We can keep having deja vu and wasting time "rediscovering" the reasons behind a weird sizing choice, so writing down a note saves us some energy and trouble later. We can also use notes as reminders to revisit design choices in the future. This is very similar to writing comments on small code snippets that catch edge cases when programming.

Combining these suggestions, one can draw and annotate a differential amplifier schematic as below. On the other hand, be wary of the risk of having a schematic look too heavy with too much text. So again, use your artistic senses to exercise tradeoffs when annotating schematics.

Example diff pair schematic that involves analog documentation

Digital documentation

Digital programmability has become inevitable in mixed signal designs today (i.e. digital-assisted analog designs). As a result, documenting configurations could now be a full-time job in itself. It is especially challenging when multiple stakeholders are involved in a big system. Some poor soul will have to chase down individual block owners asking for default settings, power-on sequences, and so on. After some experimentation, I find the following guidelines the most helpful for in-situ digital documentation.

1. Use wide wires for digital buses

This is a personal choice, but I use wide wires in both schematics and symbols to indicate a digital bus. It shows right away which block has a more complex interface and needs a more careful higher level routing plan.

Use wide wire to indicate digital bus
2. Write down critical config tables in schematics

Putting down a digital table becomes more natural as soon as you have the habit of using notes in schematics. We don't need to include all configurations, just the ones that we will refer to the most later. Some example tables might be for an LDO's output voltages or a VCO's coarse tuning frequencies. We can also use such tables to note down functional behaviors, like clock source selection or reset sequences.

Another benefit is that we would already have a reference even if we need to document in Excel again later. It would be much easier because we have already captured the information in schematics, bit by bit and at the earliest chance, instead of taking an all-at-once approach when the design is complete.

Coarse frequency tuning table in a VCO schematic
3. Explain custom decoders or digital controller whenever possible

We use custom encoders/decoders all the time (binary to thermometer, one-hot, gray coding, and so on), but we can't draw much for their symbols. Therefore, the next best thing is to write the decoder's name in an easy-to-read fashion whenever possible. There are some tricks that simplify drawing custom digital decoders as well, like using bus notations and regular expressions.

Example digital controller schematic. Using bus connections can make the schematic look simpler and more intuitive than net name connections
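Since these decoders keep coming up, here is a minimal behavioral sketch of the binary-to-thermometer mapping (plain Python, not tied to any particular cell or schematic in this post), just to pin down the function such a drawing represents:

```python
def bin_to_therm(code: int, bits: int) -> list[int]:
    """Behavioral binary-to-thermometer decoder: 2**bits - 1 outputs, LSB first."""
    length = (1 << bits) - 1
    assert 0 <= code <= length, "code out of range"
    return [1 if i < code else 0 for i in range(length)]

# Example: a 3-bit binary code driving 7 thermometer outputs
for code in range(8):
    print(code, bin_to_therm(code, 3))
```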
4. Create block sections for analog and digital pins

This practice immediately separates the analog and digital portions of a block on the symbol. It also provides a quick view of each sub-cell's programmability. Together with tip #5, we can have a crystal clear symbol that documents the digital interface well.

5. Write default attribute values next to pins in symbol (at mid or high level)

I learned this trick from a co-worker and find it extremely intuitive. The key here is to "hide" the default settings next to the symbol pins using a smaller font. This way, they won't clutter the schematic when viewed from afar, but provide critical information when you zoom in. Perhaps I am late to the party, but this is extremely cool! The final finished PLL symbol might look something like this

Final PLL symbol combining previous tips

Putting it together

Let's finish with another cartoon schematic and symbol for an example regulator. The sub-cell pin/instance/cell names are not explicitly drawn in this example, but we should still give them meaningful names as discussed above. I hope you can look at it and agree that the LDO's critical architecture, analog and digital information is well documented. We can now stay in schematic land a bit longer before diving into Office.

Example LDO schematic w/ good documentation
Example LDO symbol

Lastly, this isn’t and shouldn’t be the only way to draw schematics and symbols as documentation. That’s where your individuality and creativity come in. Nevertheless, I would be thrilled if you find these recommendations useful and want to give them a try.

The day when our schematic environments turn into our IDEs might not be far away. My personal wish is to see more built-in documentation features in EDA tools. Even a simple notes section, similar to the notes pane in PowerPoint, could already provide some value. If you agree, please let your EDA friends know.

As always, my goal is also to strike up discussions, so please share your neat ways of documenting in schematics and thoughts on the topic!

To Wrap or Not To Wrap – How To Use Schematic Hierarchy To Enhance Design Productivity

It's non-trivial to decide when a new schematic hierarchy is necessary, and everyone is self-taught in this regard. It's an important decision and an early investment that's often overlooked. In this post, I will share some ideas on when and how to create schematic hierarchies so that your design flow can be more efficient.

My personal guideline is to always check the top schematic crimes (to mimic the late Prof. Patrick Winston in his “How To Speak” lecture), each of which could be prevented or solved by creating symbols and hierarchy in your schematics. They are not mutually exclusive and could often happen at the same time. Let’s see how wrapping up your circuits can help.

1. The “too flat” crime

The "too flat" crime happens when there are too many primitive circuit elements or circuit blocks and the circuit's function is no longer easy to tell (even to an expert eye). We were taught not to draw an op-amp's internal circuits again when we use it in other schematics, but somehow we forget to do the same for other circuits when their symbols are not pre-defined or readily available. This crime can also be committed unintentionally when you begin with a simple circuit but keep expanding it until it's lethal to the eyes. My suggestion is to ALWAYS use a border sheet and use it as a rule of thumb for when new hierarchies should be considered. I tend to avoid going above a D-size sheet, and try to contain most schematics in a B/C-size sheet. Otherwise, it usually means the schematic is too flat, even for a higher level circuit block.

A schematic cartoon with the “too flat” crime. It’s too flat if some engineering is required to fit everything comfortably into the border sheet of your choice.

2. The “too hard to change” crime

Imagine you are drawing a circuit that copies a reference current and produces bias currents for all the other blocks. You rolled up your sleeves, pulled out the best current mirrors you know, and sent the gate bias to many devices. You patted yourself on the back and said to the other designers, "I got you covered". The next day, you are asked to add more outputs, add power down and trimming options to each branch, and improve these current sources (e.g. increase output impedance, reduce mismatch, lower noise, etc.). All these changes require you to either modify the device sizes or add new devices MULTIPLE times. You may run into crime #1 if you start changing things blindly.

Even wrapping up the simplest circuit can help save time down the line.

What may seem like a redundant step of creating a symbol for your unit current source (even if it's a single device at first) can save you from tedious changes later in the design phase. Now when changes are necessary, you only need to make them ONCE inside that nice wrapper (the same goes for layout; your colleague will buy you coffee for doing this).

It almost seems obvious when it comes to coding: if you run a routine more than once (however simple it is), wrap it up in a function. So let's treat our circuits as modular functions and drag and drop them onto the schematic canvas.

3. The “wait for layout” crime

Designing the circuit (including schematics and layout) is probably 20% of the work; the rest goes to simulation and verification. For modern CMOS technology, the design cycle becomes much longer due to the increased layout complexity. Numerous solutions have been proposed to address this issue. Some examples are look-up table based design methodologies that include pre-simulated layout effects (e.g. ADT by Master Micro) and better EDA tools focused on parasitics (paragonX by Diakopto).

Complexity of modern day design cycle. [credit: Diakopto]

While these tools are fantastic, I believe as designers we should make our lives easier starting at the very first schematic building stage. Let's take a simple CML amplifier, consisting of a tail current source, differential gm devices and resistor loads. You pass the schematic along to your layout partner and begin waiting, because you can't simulate anything until every component is done and put together to pass LVS. The process takes longer than you think because a very weird DRC violation forces a re-floorplan. When the layout eventually comes back and the first SPICE simulation doesn't show promising results, you don't quite know where to start debugging because the layout is already "too complete to take apart". If this sounds familiar, you are not alone. I encountered many similar challenges when I first started IC design, especially when dealing with new technology nodes.

Creating hierarchy even in a well understood circuit can shorten the design cycle in modern technology.

Now let's break the problem into smaller pieces by wrapping up different portions of the amplifier. Right away you can see how this addresses the issues mentioned above. You don't need the whole thing done to simulate something meaningful. If you care a lot about device matching, have the differential pair done first. If bandwidth degrades significantly when everything is put together but not when each part is simulated separately, then you know the problem is in the higher level metal connections. This way of drawing schematics also encourages reuse in other circuit blocks, thus cutting the "wait for LVS" time.

This is not a new concept. In fact, the agile software development paradigm already adopts this incremental method to allow more iterations and a "fail often and early" mentality. Prof. Elad Alon has proposed the idea of agile hardware development before, though focused more on using software generators (e.g. Blue Cheetah). What I believe is that we can still bring in a human touch by using agile thinking when drawing schematics, keep ourselves entertained, and achieve similar results. When it comes to both schematics and layout, we need to draw smarter, not harder.

Example agile development cycle for analog circuits using python code. [credit: Elad Alon, “Agile Hardware Design with a Generator-Based Methodology”]

4. The “fish bone” and “solder dot” crime

This is an easy-to-understand but sometimes hard-to-avoid crime. Systems with complex programmability and feedback loops often make us draw wires that cross each other. Schematics begin to have fish bone patterns, and soon connectivity becomes hard to track. Most people are aware of this problem, but solve it with the next crime ("connect by net name"). Creating new symbols can help because you can redefine the pin locations and hide some crossings in a new hierarchy.

Fish bones in a cartoon schematic. Never a good sign when something looks “dead”

Here I also want to discuss solder dots, because crossing wires can sometimes create unwanted ones. Mainstream schematic capture tools have definitely gotten "smarter" over the years about rerouting your wires when you move circuits around, but they are far from perfect. I can still get a few unwanted connections when things move, and it's usually solved by better symbol pin placements, more direct wire routes, and better solder dot connections. My three tips on solder dots for more readable schematics are

  1. Minimize number of right angles on a net (Use T-shaped connections)
  2. Use either the double or diamond solder dots to emphasize branching
  3. Order your pins to line up solder dots diagonally in Manhattan wiring whenever possible. This can reduce the number of crossings and save some real estate in a bigger mesh.
Better solder dot connection styles

5. The “connect by net name” crime

We have all committed this one, and probably will continue to. Sometimes it is simply impossible to connect all pins with wires, so I might downgrade this one to a misdemeanor. We also connect by net name believing that it's somehow quicker. Raise your hand if you are tired of a schematic that looks like this

There is no information on how the circuits are connected. And yes, the net labels are not visible because they are too small.

When net labels become invisible to the human eye and exist only for machine consumption, I question the value of putting them there in the first place (especially for higher level blocks). The pin names should already be sufficient to tell what each net is. Debugging the connections becomes exhausting when you need to highlight these nets. To quote my Ph.D. advisor Boris Murmann, "I'd actually prefer a netlist that describes the connectivity". When building such schematics, typing in all the unique net names probably takes as much effort, if not more, than grouping the two left blocks and creating a new symbol to align the pins like below. Another benefit of adopting this style is that it forces you to think early about the interface between multiple blocks, making functional checks less error prone. This comes in very handy in large mixed signal systems, which we will discuss in future posts.

Realigned connections using a new symbol

The flip side of the coin is deciding when not to wrap. We certainly don't want to create unnecessary hierarchies, because it takes time to make one and to descend into one. From my limited experience so far, it's hard to come up with good rules because each design is different depending on legacy block reuse, layout change complexity, and frankly your colleagues' habits as well. We should, however, start discussions, share experiences and exchange ideas. I look forward to hearing your thoughts on drawing schematics and creating hierarchies.
