Hi, My Name Is AICA – Artificial Intelligent Circuit Artist

“Hello, my name is AICA, artificial intelligent circuit artist. I specialize in creating unique and innovative designs for electronic circuits, using the latest AI technology to push the boundaries of what’s possible.
My background is in both computer science and electrical engineering, which has allowed me to develop a deep understanding of both the technical aspects of circuit design and the artistic creativity required to make circuits that are not only functional but also aesthetically pleasing.
With my advanced AI algorithms, I’m able to generate complex circuit designs that are optimized for specific functions, while also incorporating unique and creative visual elements that make each circuit a work of art.
Whether you need a custom circuit for a specialized application or simply want to add a touch of creativity to your electronic devices, I can work with you to design and produce a truly one-of-a-kind circuit that meets your needs and exceeds your expectations.
Thank you for considering me as your artificial intelligent circuit artist. I look forward to working with you and bringing your circuit design ideas to life.”

ChatGPT

Impressive, isn’t it? This is what ChatGPT wrote when I asked it to “write a self introduction for an artificial intelligent circuit artist” (I changed the name to AICA). It looks like the idea of using AI to assist with circuit design is coming true. Look, it seems to understand the artistic side of circuit design as well.

The AI wave gained even more momentum after the announcement of GPT-4, so I thought it was the perfect time for me to ride this wave a bit and share my opinions. The natural question on everyone’s mind is whether well-trained AI models can supplement other technology fields. Circuit design is a natural place to look, and some machine learning technology already exists there (e.g. Google TPU, Cadence Cerebrus). So what’s in store for IC designers?

An example of AI in circuit design

I want to start by stating that we circuit and system designers have been using “AI” for quite a while now, especially in digital signal processing (DSP) and communication. I come from a SerDes background, dealing mostly with high-speed wireline communication (I call myself a high-tech plumber who makes sure bits flow smoothly). Recent advances in wireline links have pushed transceivers toward ADC- and DSP-based architectures (my thesis topic), and FFE+DFE has become the go-to choice for digital equalization. Below is a picture of what a digital FFE equalizer might look like.

Parallel FFE example in an ADC-based DSP equalizer

The parallel nature of the equalizer requires each ADC sample to be sent to different FFE modules, where the decisions are made. I claim that each FFE module is identical to a node in a neural network, as shown below.

The uncanny resemblance between an FFE module and a neural network node
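
To make the analogy concrete, here is a minimal sketch (my own toy code, not from any real design) of one FFE module written next to a neural-network node. The tap values and sample window are made up purely for illustration.

```python
import numpy as np

# One FFE "module": weighted sum of ADC samples, then a slicer.
def ffe_module(adc_samples, taps):
    y = np.dot(taps, adc_samples)  # FIR: linear combination of samples
    return 1 if y >= 0 else -1     # slicer: hard decision on the sum

# A neural-network node: weighted sum of inputs, then an activation.
def nn_node(inputs, weights):
    z = np.dot(weights, inputs)    # linear combination of inputs
    return np.sign(z)              # sign activation, i.e. the same slicer

x = np.array([0.9, -0.2, 0.1])    # a window of ADC samples (made up)
w = np.array([1.0, -0.25, 0.05])  # equalizer taps / node weights (made up)
print(ffe_module(x, w), nn_node(x, w))  # same math, different vocabulary
```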

What about a DFE? Well, there is something called a recurrent neural network (RNN), a fancy name for feedback – take the activation output (the sliced data) and make it another input for the node.

DFE as an RNN node
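
Again a hedged sketch with made-up numbers: adding a 1-tap DFE makes the node recurrent. The previous slicer decision is fed back into the summing junction, which is exactly the hidden-state feedback of an RNN cell.

```python
import numpy as np

def ffe_dfe(x, w_ffe, b_dfe):
    """FFE plus a 1-tap DFE, run sequentially over a sample stream x."""
    n_taps = len(w_ffe)
    d_prev = 1                    # previous decision = the "hidden state"
    decisions = []
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]      # newest sample first
        y = np.dot(w_ffe, window) - b_dfe * d_prev  # feed-forward + feedback
        d_prev = 1 if y >= 0 else -1                # slice, then feed back
        decisions.append(d_prev)
    return decisions

# Toy stream and coefficients, chosen only for illustration
print(ffe_dfe(np.array([0.8, -0.7, 0.9, -0.6]), np.array([1.0, -0.2]), 0.3))
```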

But wait, neural networks can “learn”, you say. So can our FFE and DFE; we just call it “adaptation”. The underlying algorithms for finding the optimal coefficients are no different, namely some sort of gradient descent. In DSP we normally use the least-mean-squares (LMS) algorithm; in neural networks it’s called backpropagation. The FFE/DFE can be adapted either with known training sequences (i.e. supervised learning) or on the equalizer’s own decisions during mission mode (i.e. unsupervised learning). You get the point.
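
Here is what that adaptation loop might look like in a toy simulation (my own sketch; the channel, step size, and delay are arbitrary). The tap update `w += mu * e * window` is literally one gradient-descent step on the squared error, which is all backpropagation reduces to for a single-layer network.

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=5000)     # known training bits
channel = np.array([0.1, 1.0, 0.3])           # assumed toy ISI channel
x = np.convolve(bits, channel)[:len(bits)]    # "ADC" samples with ISI

n_taps, delay, mu = 7, 3, 0.01                # arbitrary illustration values
w = np.zeros(n_taps)                          # FFE taps, starting from zero
for n in range(n_taps, len(x)):
    window = x[n - n_taps:n][::-1]            # newest sample first
    y = np.dot(w, window)                     # FFE output
    e = bits[n - delay] - y                   # error against the desired bit
    w += mu * e * window                      # LMS update = gradient descent

print(np.round(w, 3))                         # taps approach a channel inverse
```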

My understanding of neural networks is obviously still rudimentary, and maybe laughable to some, but let’s all agree that the math behind DNN, ML, AI or whatever you want to call it is not new – it’s the problem of taking a multi-dimensional vector and projecting it onto another vector of (typically) lower dimension through linear and nonlinear operations. With DSP equalizers, it’s taking the ADC output vector and projecting it onto a single bit decision with an FFE (linear) and a DFE (nonlinear). The wireline application just stops at a single-layer neural network, but I could easily rebrand it as “AI” in a startup pitch.
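
Written out (my own formulation, in standard FFE+DFE notation), the whole “network” is a single sign-activated node:

\[ \hat{d}[n] = \operatorname{sgn}\!\left(\sum_{i=0}^{N-1} w_i\, x[n-i] \;-\; \sum_{j=1}^{M} b_j\, \hat{d}[n-j]\right) \]

A weighted sum (FFE), a recurrent feedback term (DFE), and a sign activation (slicer): one neuron, end of network.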

So why am I talking about this? If anyone worries about dealing with AI in the future, or any new student thinks it’s a software-only concept, think twice. We have probably already stumbled upon or even used AI concepts in our designs; we just didn’t use the same language.

What about the circuit design process itself?

Let’s move on to whether AI could assist or even replace design, layout and verification engineers, and check the necessary conditions for this to happen. AI (on the software side) didn’t take off until computation became much cheaper and massive amounts of data became available, so we need to satisfy at least these two criteria:

  1. Enough cheap computation power for circuit design. We have all experienced the excruciating wait for simulations to finish, and multiplying in PVT corners and configuration settings quickly makes that wait grow exponentially. CPU/memory consumption during layout has become another bottleneck, thanks to DRC/LVS checks on today’s very large systems. The cost of sustaining the current IC design flow is much higher than, say, supporting an app development team. In other words, I believe computation for circuit design isn’t cheap enough as it stands today. Perhaps we can revisit this topic when hardware accelerators for SPICE simulation exist (startup idea, anyone?).
  2. Enough meaningful data for circuit design. So maybe we can take another approach: AI doesn’t need to close the entire design cycle, just produce a reasonable starting point for designers to take over. Without an AI model checking and optimizing the circuit itself, we then need to teach it to build circuits “from experience”, a.k.a. using meaningful data from past designs. I purposely use the word “meaningful” because good and bad designs are both valuable when training AI models. I hope you can already see the challenges here:
    • The closed-source nature of circuit design today limits the amount of data available to train an AI to a reasonable degree. Jamming the best journal and conference papers into the best AI model won’t help. What we need is the GitHub equivalent of IC design (which the PICO program is trying to build), but the rate of data generation (i.e. tape-outs) is nothing compared to open-source software. Maybe Elon Musk can buy a chip company and open-source the GDS database after he keeps his promise with Twitter.
    • Labeling any design as good or bad is “subjective”. Each design is constrained along many dimensions (power, area, speed, SNR, …). If we instead relax the problem and have AICA spit out multiple designs, each with an attached score, we still need to spend the same amount of time, if not more, verifying and picking out the “right” design, if it exists at all.
    • How do we keep AICA up to date? ChatGPT is retrained on new data from across the entire internet every year, but it’s always a year behind (causality sucks…). Do we continue training AICA on its own output data under our supervision (reinforcement learning)? Will it become biased towards certain designs, especially if it’s trained only on a single company’s data? Does this new design cycle encourage or discourage innovation?
Flawed data [credit: xkcd]

I ask these questions hoping not to brush off AI-assisted circuit design in its entirety, but to open up new angles on how we even START to explore this space. Digital circuit design lives mostly in RTL land (i.e. coding), and PnR flows are already automated enough that AI can begin penetrating this space; that is why most “breakthrough” AI-designed circuits so far are digital. Perhaps we need to reformulate analog/mixed-signal design problems into a similar framework, and begin looking at the most basic tasks AI can help with, like providing schematic templates and creating symbols.

The artistic side of AI schematic generation

Let’s tie it back to the core of this blog, the art of schematic drawing. To be honest, I view this no differently from AI-generated art. Here is a “differential amplifier” I “created” on NightCafe (an AI art generator) for a quick laugh.

This will be one of my dad jokes at some point. Note the systematic mismatch introduced in the differential pair.

At the same time, this picture tells the sad truth that we don’t have the right kind of differential amplifiers sprinkled around the Internet. Nevertheless, AI might still be able to play an important role in the early design stages of schematic drawing.

Imagine this: “AICA, draw me a template for a differential NMOS-input CTLE, with 2b tuning for the load resistors, 2b tuning for the bias currents, 5b tuning for the peaking gain, and a placeholder for local decap”, and it spits out the following schematic:

I omitted many details in this dummy schematic cartoon (pin names, net names, border sheets, etc.), but they would all be there in the final output.

If you descend into each sub-cell, another reasonable template is waiting there. The symbol for this CTLE could also look very good and be ready for higher-level integration. The key point is that the device sizes won’t be decided by the AI; it just helps with the “mundane” job of drawing symbols and schematics (or design templates). AICA could ask follow-up questions like “do you want binary or thermometer coding on the resistor loads?” or “do you want peaking achieved with a degeneration resistor or a capacitor?” before creating each cell. The next step could be asking the AI to create functional models that can be used for system-level checks while you tweak the sizes, along the lines of the sketch below.
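
Here is what such a functional model might look like (a hedged sketch of my own; the one-zero/two-pole form, the pole/zero frequencies, and the code-to-zero mapping are all hypothetical placeholders, not anything a real AICA produces):

```python
import numpy as np
import scipy.signal as sig

def ctle_model(dc_gain_db, f_zero, f_pole1, f_pole2):
    """Behavioral CTLE: one zero, two poles, as an s-domain LTI system."""
    k = 10 ** (dc_gain_db / 20)                      # DC gain as a ratio
    wz, wp1, wp2 = (2 * np.pi * f for f in (f_zero, f_pole1, f_pole2))
    num = [k * wp1 * wp2 / wz, k * wp1 * wp2]        # k*(s/wz + 1)*wp1*wp2
    den = np.convolve([1, wp1], [1, wp2])            # (s + wp1)(s + wp2)
    return sig.lti(num, den)

# Sweep the hypothetical 5b peaking code by moving the zero frequency
for code in (0, 15, 31):
    fz = 2e9 + code * 0.2e9                          # assumed code mapping
    h = ctle_model(-2.0, fz, 8e9, 25e9)              # placeholder values
    w, mag, _ = h.bode(np.logspace(9, 11, 200) * 2 * np.pi)
    print(f"code={code:2d}: peak gain = {mag.max():5.1f} dB")
```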

So what does AICA need to learn in this case? Good schematics and symbols for various circuit topologies! Maybe this even “declassifies” some sensitive information in a design, because sizing isn’t involved and it can start with well-known topologies. It should be easier to identify and open-source good schematics/symbols than good designs. To me, this is a more realistic first step toward an AI circuit design assistant.

Final thoughts

I see any potential AI circuit model as just another IC design student. If we want to teach future AIs how to create circuit artworks, then we should start doing so ourselves today, and teach the younger generations of students first. Only then will we have accumulated enough good data in preparation for the AI wave in IC design. I want to finish this post with another exchange I had with ChatGPT; maybe there is hope after all. Let me know about your ideas for AI-assisted circuit design, and as always, don’t stop drawing good schematics.

Another heart-to-heart between ChatGPT and me. I think it’s ready to look at good schematics.

4 Comments

  1. Avinash

    Loved the post!

    • Kevin Zheng

      Thank you very much, Avinash. Your feedback is what drives me to keep writing better content.

  2. Joseph Winkler

    I don’t have much background knowledge in high-speed communications, but the analogy to AI was really mind-blowing! I wonder how much knowledge sleeps in different disciplines that all work on the same problems but in a different “language”. I’m thinking of your example of the different applications of the vector math that an NN performs, and how many more examples like this must be out there.
    I find great fascination in how bad humans are at predicting the future. It’s so exciting to live in this time where a powerful “new” tool becomes available to the general public and no one knows what we will make of it.
    I find great fascination in how bad humans are at predicting the future. It’s so exciting to live in this time where a powerful “new” tool becomes available to the general public and no one knows what we will make of it.
    Especially in circuit design, I think we suffer a lot from being overprotective of designs, technologies and information about our craft in general. Isn’t it ironic how a development like AI puts this conservative industry to the test? One could argue that we will have trouble unlocking AI’s power for our field if we don’t revolutionize our knowledge management first. As you’ve pointed out – what data do we want to use to train our specialized AIs? I think Prof. Murmann is leading an important initiative here.

    • Kevin Zheng

      Thanks, Joseph, for your comments. Another point might be that even if the open-source platform is a wild success, the turn-around time for a usable data point (i.e. a tapeout) to be generated might still be orders of magnitude slower than on the software side. It’s an interesting and exciting time that we live in, for sure. Overall, though, I am cautiously optimistic about AI-assisted circuit design.
