Top Down or Bottom Up – Where Should Designs Begin?

I might not be considered a “seasoned veteran”, but I have experienced some personal design paradigm shifts over the years.

Starting in undergrad, circuit design meant discrete components and breadboards. The equivalent of IC hazing was to read through countless datasheets and choose between a bad and an OK op amp. Moving on to graduate studies, shrinking my breadboard designs into GDS was definitely dopamine-inducing. Meanwhile, I began to get a taste of the challenges that come with more complex circuits and systems. Various internships taught me the importance of designing for PVT and not just for the Ph.D. Working full-time opened my eyes to the internal structures of a well-oiled IC design machine (system, design, layout, verification, etc.). I picked up the design-reuse mentality along with a new set of acronyms (DFT, DFM, DFABCD…). Interestingly enough, I need to draw on ALL of these experiences in a startup environment.

What I just described is how my own methodology went from bottom-up to top-down; today I live mostly in the middle. To get started, I recommend that everyone first read Ken Kundert’s article on top-down methodology. Building on what he wrote more than 20 years ago (!), I will add my own take on the topic.

Where’s top? Where’s bottom?

This is an obvious question to ask, but how are “top” and “bottom” really defined? In the good old days, “bottom” meant transistors and “top” meant amplifiers. It was easier to draw the line because there weren’t that many layers. However, the increasing number of hierarchy levels in SoCs has forced us to rethink what top and bottom mean.

It’s easier to define what “top” is: whatever block you are responsible for. “Bottom” becomes trickier. This is where models enter the chat. British statistician George Box famously pointed out that all models are wrong, but some are useful, and that is especially true for IC design. My definition of “bottom” is the layer at which the model is still useful but further detail becomes cumbersome for the design of interest.

Digital designers have moved their collective “bottom” to the gate level because transistor details become unnecessary. For a PLL charge pump designer, transistors may be the bottom, but for the overall PLL owner, the bottom stops at a current source/sink model of the charge pump. My top can be your bottom, as the picture below shows. The hierarchical tree depicted here shows a clean boundary between each owner, but sometimes there can even be overlaps. Therefore, every designer has the opportunity to practice “top-down” methodology and think like a system architect, which I will expand on in a later section.

The simulation problem

My post won’t be complete without an xkcd reference, so here it is:

Compiling. [credit: xkcd]

Change “compiling” to “simulating” and you get a pretty accurate representation of our daily lives. I am kidding, of course, but the underlying message is valid. Large IC systems nowadays are simply impossible to simulate in full at the transistor level. The fact that a billion-transistor chip works at all is nothing short of a miracle.

There are two main ways simulation speed gets dragged down:

1. The netlist is too big

Do I hear a resounding “duh”? In modern PDKs, the transistor models themselves are already becoming more complex. Multiple flags and parameters are included in the model for layout-dependent effects and parasitic estimates. When we add extra transistors to the circuit, we are also adding more resistors and capacitors. Layout extraction (especially RC extraction) makes the netlist size explode further.

2. Time constant gaps

More and more mixed-signal systems run into this issue. Examples include oversampled converters, digital/hybrid PLLs, TIAs with DC offset cancellation, etc. A block may have signal and loop bandwidths that are orders of magnitude apart. A high-speed TIA processes GHz signals, but its DC offset loop might only have kHz of bandwidth. To fully simulate functionality, a millisecond-long simulation with a picosecond time step might be needed. This becomes a problem regardless of the netlist size.
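A quick back-of-the-envelope estimate shows why this hurts. The numbers below are purely illustrative (a hypothetical TIA with a kHz offset loop and a GHz signal path), not from any specific design or simulator:

```python
# Illustrative numbers only: a hypothetical TIA whose kHz offset-cancellation
# loop needs ~1 ms to settle, simulated with a ~1 ps step for the GHz path.
t_sim = 1e-3      # simulated time needed to watch the slow loop settle [s]
dt    = 1e-12     # time step needed to resolve the fast signal path [s]

n_steps = t_sim / dt
print(f"{n_steps:.0e} time steps per functional run")   # ~1e9 steps

# Even at an optimistic 1e5 transient steps per wall-clock second,
# that is ~1e4 seconds, or roughly 3 hours, for one run.
print(f"~{n_steps / 1e5 / 3600:.1f} hours per run")
```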

To make matters worse, designers are also often relegated to the role of “SPICE monkeys”. Without a good understanding of the top-level requirements and behaviors, many fall into the trap of “tweak, sweep and press run”. Perhaps this is why many fear a takeover by the AI overlords, because computers are way better at loops than we are.

The simulation bottleneck worsens the already long time-to-market for IC products. To address these issues, top-down methodology introduces behavioral models, which allow trade-offs among simulation time, accuracy and insight.

The top-down loop

Behavioral models are the key enablers of a top-down design flow. Top-down design typically requires modeling languages other than SPICE to describe block behaviors. One can use software programming languages like Python and MATLAB, or hardware description languages (HDLs) like Verilog-AMS or SystemVerilog.
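As a concrete (and deliberately tiny) example, here is the kind of behavioral model I have in mind, sketched in Python: a PLL charge pump reduced to an ideal current source/sink driving a series-R, shunt-C loop filter. The component values, the Euler time step and the stimulus are assumptions for illustration only, not a recipe.

```python
# A tiny behavioral model: charge pump as an ideal current source/sink
# driving a simple R + C loop filter. All values are illustrative.
I_CP = 10e-6       # charge-pump current [A]
C_LF = 10e-12      # loop-filter capacitance [F]
R_LF = 10e3        # loop-filter resistance [ohm]
DT   = 1e-9        # behavioral time step [s], far coarser than SPICE would need

def charge_pump(up: bool, dn: bool) -> float:
    """Ideal current source/sink: +I_CP on UP, -I_CP on DN, zero otherwise."""
    return I_CP * (int(up) - int(dn))

v_cap = 0.0
for _ in range(100):                       # toy stimulus: 100 ns of UP pulses
    i = charge_pump(up=True, dn=False)
    v_cap += i * DT / C_LF                 # capacitor integrates the current
    v_ctrl = v_cap + i * R_LF              # series resistor adds an IR step

print(f"control voltage after 100 ns of UP: {v_ctrl:.3f} V")
```

At this level of abstraction a nanosecond time step is perfectly adequate, which is exactly why such models run orders of magnitude faster than their transistor-level counterparts.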

When I went through my graduate program, our group had an unwritten rule: no one touched PDKs without at least a year of work in MATLAB. Our daily work revolved around models and algorithms before we could finally put down transistors. Unfortunately, not many circuit design programs require students to pick up a modeling language, and that is reflected in the industry today.

With the benefits of behavioral models, I often find myself in a top-down loop at a design’s early phase. Here is what I mean:

  1. Begin at the top level of the system and assume relatively ideal blocks. Verify that your proposed system/architecture works with these assumptions using behavioral models.
  2. Question your assumptions and each block’s ideal nature. Start adding non-idealities to your models and re-evaluate. The key here is to pinpoint the non-idealities that matter most in your system, and keep the ideal approximation for the other aspects.
  3. You should have “preliminary specs” for each block at this point. Now question whether these specs are reasonable.
  4. Run order-of-magnitude, low-level simulations as a feasibility study. Note that we are already at the “bottom” layer here!
  5. Repeat the process until the specifications converge as more low-level simulation data becomes available.

A simplified illustration of this top-down loop is shown above. If everything goes well, we traverse the green and blue arrows until we reach a final design. Note that the green path signifies the top-down approach and the blue path the bottom-up one. When people refer to the top-down approach today, they are really talking about this loop, not just the green path. It’s the continuous re-evaluation and requirement updates at the model and circuit levels that ensure optimal designs and smooth execution.
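To make steps 1 through 3 of the loop concrete, here is a toy Python sketch of how an ideal block picks up its first non-ideality and spits out a preliminary spec. Everything here is hypothetical: the x2 switched-capacitor stage, the 0.1% gain-error budget and the swept gain values are placeholders, not numbers from a real design.

```python
# Toy sketch of loop steps 1-3, using assumed numbers (all values hypothetical).
# Block under study: a x2 switched-capacitor gain stage inside a larger chain.

IDEAL_GAIN = 2.0
GAIN_ERROR_SPEC = 1e-3          # assume the system budget allows 0.1% gain error

def stage_gain(a0: float) -> float:
    """Closed-loop gain of a x2 stage with finite op-amp DC gain a0.
    Feedback factor beta = 1/2 for a x2 stage (parasitics ignored)."""
    beta = 1.0 / IDEAL_GAIN
    return IDEAL_GAIN * (a0 * beta) / (1.0 + a0 * beta)

# Step 1: ideal block -- infinite gain, exact x2.
# Step 2: add the non-ideality that matters here (finite DC gain), re-evaluate.
for a0_db in (40, 60, 80):
    a0 = 10 ** (a0_db / 20)
    err = abs(stage_gain(a0) - IDEAL_GAIN) / IDEAL_GAIN
    verdict = "meets" if err < GAIN_ERROR_SPEC else "fails"
    print(f"A0 = {a0_db} dB: gain error = {err:.2e} ({verdict} the 0.1% budget)")

# Step 3 falls out of the sweep: the "preliminary spec" is roughly A0 > 66 dB,
# which step 4's transistor-level feasibility check can then confirm or reject.
```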

Sometimes we run into the red arrow, where a fundamental limit pushes us to rethink the overall system (and, worse, our career choice). While it sounds disastrous, a brand new architecture or a neat circuit trick typically comes to life to break through this limit. About 80% of my current job happens while I am driving. My mind goes around the loop several times, shuffles some blocks around, and does more mental gymnastics when I believe a fundamental limit has been reached. It takes some practice and time, but anyone can grow into a “system architect” after living in this loop long enough.

System architect – the man, the myth, the legend

Ken Kundert specifically wrote about system architects in another, similar article. A system architect’s job is to own the top-level schematic, help define block interfaces, develop simulation and modeling plans, work with verification and test engineers, etc. A system architect basically acts as the middleman who speaks different languages to coordinate multiple efforts during a design cycle. They are the go-to person when an issue arises or a change is necessary.

That sounds like a talent that’s extremely hard to come by, yet every team has to task a single engineer with being this person in a top-down design flow. All too often the system architect ends up being a modeling guru with minimal circuit design experience, and therefore doesn’t spot a fundamental limitation until it’s too late.

My belief is that every designer can be a system architect to some extent and on different scales. Regardless of how complex your circuit block is, you can adopt the top-down loop methodology as long as you treat it as a system. Here are some ways to try playing system architect:

1. Always question specifications

While specifications serve as the first line of interface between designers and the final product, that is really all they are. No requirement is sacred and no sizing is sacred, as my last manager loves to say. One example is the use of effective number of bits (ENOB) for specifying data converters. There has been a shift away from this generic figure of merit toward more application-specific ways of defining converter requirements. A noiseless but nonlinear ADC will impact the system differently than a noisy but perfectly linear one. So next time you are handed a specification table, ask WHY.
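As a rough illustration of that last point, the Python sketch below compares two hypothetical converters with similar SNDR/ENOB numbers: one limited by white noise, the other by a cubic nonlinearity. The amplitude, noise level and distortion coefficient are made up for the example.

```python
import numpy as np

# Two hypothetical converters with comparable ENOB but very different error
# signatures; every number here is made up for illustration.
N = 2 ** 14
t = np.arange(N)
x = 0.9 * np.sin(2 * np.pi * 127 * t / N)             # full-scale-ish test tone

noisy_linear    = x + np.random.default_rng(0).normal(0, 4e-3, N)  # noise-limited
quiet_nonlinear = x + 0.01 * x ** 3                    # noiseless, cubic distortion

def sndr_enob(y):
    """SNDR (dB) and ENOB, treating everything that isn't the ideal tone as error."""
    err = y - x
    sndr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    return sndr, (sndr - 1.76) / 6.02

for name, y in [("noisy but linear", noisy_linear),
                ("noiseless but nonlinear", quiet_nonlinear)]:
    sndr, enob = sndr_enob(y)
    print(f"{name:24s}: SNDR = {sndr:5.1f} dB, ENOB = {enob:4.1f} bits")

# Similar ENOB, yet the first converter spreads its error as a flat noise floor
# while the second piles it into harmonic distortion of the input tone --
# two very different problems for the system that follows.
```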

2. Always question the signal nature

Most circuit requirements come from assuming some signal type going into the circuit. Sinusoidal signals have been the go-to choice because we love Fourier and AC responses; they are easier to simulate and measure. Unfortunately, almost no real application processes only single-tone sine waves. With the system architect’s hat on, you should fully understand the nature of the signal. There might be characteristics in the signal that can be exploited to simplify or improve your circuits. Is the signal DC-balanced? What do its statistics look like? How does it respond to weak and hard nonlinearities?
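To show what “look at the statistics” can mean in practice, here is a small Python sketch comparing a single-tone sine against random NRZ data and a crudely band-limited version of it. The waveforms and the 8-tap smoothing filter are arbitrary stand-ins, chosen only to make the contrast visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

sine = np.sin(2 * np.pi * np.linspace(0, 1000, n))         # single-tone stimulus
nrz  = rng.choice([-1.0, 1.0], size=n)                      # random NRZ data
soft = np.convolve(nrz, np.ones(8) / 8, mode="same")        # crude band-limited data

for name, s in [("sine", sine), ("NRZ", nrz), ("filtered NRZ", soft)]:
    crest = np.max(np.abs(s)) / np.sqrt(np.mean(s ** 2))    # peak-to-RMS ratio
    near_peak = np.mean(np.abs(s) > 0.9 * np.max(np.abs(s)))
    print(f"{name:13s}: mean = {np.mean(s):+.3f}, crest factor = {crest:.2f}, "
          f"time near peak = {100 * near_peak:.1f}%")
```

A sine wave spends a surprising fraction of its time near the peaks, while band-limited data rarely does, so linearity and headroom requirements derived from one can be badly off for the other.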

3. Create simple simulatable models in schematics

Building models is difficult and requires picking up a new skill set. However, you can build many useful models with ideal analog components, without any knowledge of Verilog or MATLAB. More complex primitives are already available, including delays, multipliers, converters, etc. Start building simulatable models with these components first. You will be surprised at how effective they can be in reducing simulation time and providing insight. There are more sophisticated modeling tools, like Xmodel, that you can adopt once you become more comfortable and proficient.

4. Define symbols and pins early before drawing transistors

Lastly, a system architect has a bird’s-eye view of what the finished product looks like. Start with the end product and you will get a better picture of how to get there. Try identifying and naming the most critical pins for each cell first. While you create symbols, your mind is already computing how to connect each block and prioritizing what to do next. Empty schematics with meaningful symbols can still be full of information. Be mindful that these symbols will definitely change later, so nothing needs to be perfect. Treat this exercise just like drawing block diagrams on a scratch pad. Your muscle memory for drawing schematics will put you in design turbo mode and keep you motivated to continue the design effort.

Conclusions

The boundary between “top” and “bottom” is muddier than most think. Top-down design is really a mindset in which designers treat a circuit block as a system rather than a soup of transistors. Education and training programs in IC design still tend to produce good designers, but do little to steer them toward becoming architects.

In my personal view, schematics and basic component libraries provide enough tools for anyone to play the role of a system architect at all levels. I encourage all students and designers to start incorporating behavioral models (with ideal components and/or Verilog-AMS) in their schematics, even if there is a separate army for system modeling. The right models can help reduce simulation efforts, assist in debug, and solidify your own understanding of the circuit.

It is no secret that polyglots have huge advantages in a globalized world, and the analogy holds equally well for circuit designers. Adopting a top-down design mentality is like learning several new languages, and it will definitely prove fruitful in the long run.

1 Comment

  1. Dick Freebird

    For some of us the chip’s care-abouts are in fact largely about one device out of the N. Analog, RF and power management piece-parts are often this way (certainly in my own experience).

    Then your work starts bottom-up, and probably on the bench, to first determine whether the models you were given are worth a damn when your task is to defeat “abnormal conditions” while your foundry’s modeling folks (or contractors) simply said “not supported”. So you get to finish their job (if you believe that a modeling job should cover all terminal conditions out to melt / rupture, besides the usual between-the-rails-and-follow-the-rules simulation accuracy expected).

    And if that fails to satisfy, then you get to vet the next candidate technology. Which, if transistor level design is “bottom up”, must be a core sampling of the primordial ooze.
