12/05/2024

Why Abstractions are Important in VLSI ? Synthesis : Episode - 2

In this article, we delve into several important aspects of abstraction in VLSI design. We start by discussing various forms of abstraction that we encounter in everyday life, providing relatable examples to illustrate the concept. Then, we explore the benefits of abstraction levels in VLSI, highlighting how they simplify complex designs and enhance efficiency. We examine different levels of abstraction and their roles in the synthesis process, including system-level abstraction, which offers a high-level view of the entire system. We also cover high-level abstraction and behavioral-level abstraction, explaining how they contribute to the design and functionality of VLSI systems. Furthermore, we delve into the register-transfer level (RTL) abstraction and logic gate level abstraction, detailing their significance in the detailed design phase. Finally, we summarize the key points, reinforcing the importance of understanding and utilizing various abstraction levels in VLSI design.

Various Abstractions Around Us :

Programming: In computer programming, abstraction involves creating functions, classes, and objects that encapsulate complex operations. This allows developers to interact with high-level, easy-to-understand interfaces while the underlying complex code remains hidden. 

Art: In art, abstraction involves representing objects or scenes in a simplified or stylized manner. This can evoke emotions and ideas without presenting a detailed and realistic depiction.

Science: In scientific models, abstraction is used to represent complex phenomena with simplified equations or diagrams. These models help scientists understand the underlying principles without getting lost in intricate details.

Engineering: In engineering, abstraction helps break down complex systems into manageable components. For example, a car's engine can be abstracted as a black box with inputs and outputs, ignoring the internal mechanisms.

VLSI : In VLSI (Very Large Scale Integration), abstraction refers to the process of simplifying the representation of a digital circuit or system while retaining its essential functionality. This is crucial in VLSI design, where the complexity of modern integrated circuits can be overwhelming. Abstraction enables engineers to manage this complexity by focusing on higher-level views of the design, which are easier to work with and understand.

Benefits Of Abstraction Levels in VLSI :

Each abstraction level allows engineers to work at a suitable granularity, depending on the design stage and goals. During the design process, engineers often start with higher-level abstractions to capture the functionality and overall structure. As the design progresses, they refine the abstraction, eventually reaching the transistor-level representation for detailed analysis before fabrication. Abstraction in VLSI design helps manage the intricate details of the manufacturing process, optimize performance, and ensure correctness. It allows engineers to approach the design from different angles, enabling efficient collaboration and easing the transition between different stages of the design process.

Levels of Abstraction & Synthesis :



System Level Abstraction :

System-level abstraction focuses on major components like CPUs and cores. The high-level description uses languages like C/C++ or Matlab, and special software libraries, like SystemC, aid in simulating circuits at this level. Automated synthesis to lower-level representations is usually not performed at the system level. System-level design tools facilitate interconnecting building blocks, and the IEEE 1685-2009 standard establishes the IP-XACT file format, which represents system-level designs and their building blocks.

High Level Abstraction :

High-level abstraction (the algorithmic level) employs traditional programming languages, used with a restricted feature set; e.g., C with limited pointer usage so that the code maps onto hardware concepts. In a high-level C representation, pointers can model hardware ideas like memory interfaces, but advanced dynamic memory management isn't allowed because it has no digital-circuit equivalent. Synthesis tools convert high-level code (like C/C++/SystemC with metadata) into behavioral HDL code. Both commercial and FOSS tools for high-level synthesis are available; these tools transform high-level code into Verilog or VHDL code for implementation.
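To make this concrete, below is a minimal, purely hypothetical sketch of the kind of behavioral Verilog a high-level synthesis flow might emit for a simple C accumulate loop; real tool output is tool-specific and far more elaborate, and the module and port names here are invented.

```verilog
// Hypothetical illustration only: roughly what an HLS tool might emit for a
// C loop such as "for (i = 0; i < 4; i++) sum += data[i];", with the array
// turned into a streamed input port consumed one element per clock cycle.
module acc4 (
  input  wire       clk,
  input  wire       start,
  input  wire [7:0] data_in,
  output reg  [9:0] sum,
  output reg        done
);
  reg [2:0] i;
  always @(posedge clk) begin
    if (start) begin
      sum  <= 10'd0;
      i    <= 3'd0;
      done <= 1'b0;
    end else if (!done) begin
      sum  <= sum + data_in;    // one element accumulated per cycle
      i    <= i + 3'd1;
      done <= (i == 3'd3);      // finished after four elements
    end
  end
endmodule
```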


Behavioral Level Abstraction :

Behavioural Abstraction: Utilizes hardware description languages (Verilog/VHDL). Incorporates behavioral modeling in circuit representation. 

Behavioral Modeling: Employs imperative programming for data paths and registers. Utilizes constructs like "always-block" in Verilog, "process-block" in VHDL. 

Code Fragments and Sensitivity: Behavioral modeling includes code segments with a sensitivity list (signals, conditions). In simulation, execution upon sensitivity list changes or conditions triggered.

Synthesis Transformation: Synthesis converts this representation into suitable datapaths and registers. Ensures alignment with hardware description.
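As a small illustration of the behavioral style described above, here is a minimal Verilog sketch (module and signal names are assumed, not taken from any particular design) showing a combinational always-block with a sensitivity list and a clocked always-block:

```verilog
// Minimal behavioral-level sketch (invented names): a 2:1 mux described with
// a combinational always-block, plus a clocked always-block for a register.
module behav_example (
  input  wire clk,
  input  wire sel,
  input  wire a,
  input  wire b,
  output reg  q
);
  reg y;
  // Sensitivity list: re-evaluated in simulation whenever a, b, or sel changes.
  always @(a or b or sel) begin
    if (sel) y = b;
    else     y = a;
  end
  // Clocked block: sensitivity is the rising edge of clk.
  always @(posedge clk)
    q <= y;
endmodule
```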


Register-Transfer Level (RTL) Abstraction :

Register-Transfer Level (RTL) design involves combinatorial data paths and registers (usually D-type flip flops).  Verilog code at RTL represents designs using combinational logic and registers. 

Example: assign tmp = a + b; describes a combinatorial data path, while always @(posedge clk) y <= tmp; describes a register.

RTL representation uses HDLs like Verilog and VHDL, with minimalistic always-blocks or process-blocks, so RTL can be simulated directly by HDL simulators with no additional tools. RTL allows optimizations like FSM detection, memory identification, and resource sharing. RTL represents circuits as graphs of registers, combinatorial cells, and signals; this encoded graph is called a netlist. RTL synthesis replaces netlist elements with gate-level circuits and includes sophisticated optimizations within the RTL representation. A few FOSS tools exist for specific RTL synthesis tasks, but none covers a wide range of them.
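Putting the two fragments above together, a minimal self-contained RTL module (names are illustrative) might look like this:

```verilog
// A combinatorial adder feeding a register: the canonical RTL pattern of
// combinational data path plus D-type flip-flops. Names are illustrative.
module rtl_adder_reg (
  input  wire       clk,
  input  wire [7:0] a,
  input  wire [7:0] b,
  output reg  [8:0] y
);
  wire [8:0] tmp;
  assign tmp = a + b;          // combinatorial data path
  always @(posedge clk)
    y <= tmp;                  // register (D-type flip-flops after synthesis)
endmodule
```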


Logic Gate Level Abstraction:

The logic gate level represents designs using netlists comprising basic logic gates (AND, OR, NOT, XOR) and registers (D-type flip-flops). Netlist formats include EDIF, while HDL netlists (Verilog or VHDL) are commonly used for simulation ease. Logic synthesis involves optimizing gate-level netlists and mapping them to physically available gate types; the two challenges are optimization within the gate-level netlist and optimal mapping to the physical gates. Two-level logic synthesis is the basic form, converting a logic function into a sum-of-products using methods like Karnaugh maps. Modern tools use complex multi-level logic synthesis algorithms based on Binary Decision Diagrams (BDDs) or And-Inverter Graphs (AIGs); BDDs ensure a normalized form, while AIGs offer better worst-case performance for large logic functions. FOSS tools exist for multi-level logic synthesis: Yosys provides basic logic synthesis and can use ABC for logic synthesis (recommended).
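For flavour, here is a hypothetical, hand-written gate-level netlist for a tiny function. A real synthesis result instantiates named cells from the target standard-cell library, so take this only as an illustration using Verilog's built-in gate primitives.

```verilog
// Hypothetical gate-level netlist (illustration only) for
// q <= (a & b) | ~c, registered on clk.
module gate_netlist (
  input  wire a,
  input  wire b,
  input  wire c,
  input  wire clk,
  output reg  q
);
  wire n1, n2, n3;
  and g1 (n1, a, b);      // Verilog gate primitives stand in for library cells
  not g2 (n2, c);
  or  g3 (n3, n1, n2);
  always @(posedge clk)   // stands in for a D-type flip-flop cell instance
    q <= n3;
endmodule
```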


Summary :

Behavioral Abstraction: This is the highest level of abstraction. It focuses on the functional behavior of the circuit without delving into implementation details. Engineers describe the desired functionality using high-level languages like Verilog or VHDL.

Register Transfer Level (RTL) Abstraction: At this level, the design is represented in terms of registers, logic operations, and data transfers. It provides a more detailed view of the circuit's operation, but still abstracts away lower-level implementation specifics. 

Gate-Level Abstraction: Here, the design is represented using basic logic gates (AND, OR, NOT, etc.). The abstraction captures the logic relationships but ignores the physical properties of the components.

Transistor-Level/Layout Abstraction: This level involves modeling the circuit using individual transistors. It's the closest abstraction to the physical implementation, providing insights into the electrical behavior of the design.


Watch the Video lecture here:

Courtesy: Image by www.pngegg.com




What is RTL Synthesis in VLSI? Synthesis : Episode - 1


In this article , we explore a range of essential topics related to the synthesis process in VLSI design. We begin with an introduction to synthesis, providing an overview of its role and importance in the design flow. Next, we discuss the V-curve of VLSI design, explaining its significance and how it illustrates the different phases of the design process. We then delve into what synthesis means in a general context, offering a comprehensive understanding of its purpose and functions. The concept of abstraction and its various levels is thoroughly examined, highlighting how abstraction helps manage design complexity. We also introduce the Y-diagram, which demonstrates the co-existence of different domains in VLSI design, and discuss the mapping of levels and domains to show their interrelationships. The differences between HDL compilers and synthesis compilers are explained, emphasizing their respective roles. Finally, we provide both a brief and detailed overview of the VLSI design flow, giving viewers a complete picture of the entire process from start to finish.

Once you complete the article you will be able to understand:

1. The difference between an HDL compiler and a synthesis compiler.

2. How synthesis attaches the technology node to your design.

3. The various levels of abstraction in VLSI design and their importance.

4. The correlation of the various levels of abstraction with the V-curve of VLSI design.

Introduction to Synthesis :

Electronic Design Automation (EDA) tools play a vital role in VLSI design by automating various stages of the process. EDA tools revolutionize VLSI design, combining human ingenuity with automation to streamline the synthesis cycle. Synthesis is a foundational step where abstract Hardware Description Language (HDL) code is transformed into a physically realizable design; it involves complex algorithms and optimizations that consider factors like power, timing, and area efficiency. The Analysis and Verification (Design Verification, DV) phase ensures the correctness of the design through tests, simulations, and UVM/OVM verification; it identifies and rectifies functional, timing, and logical discrepancies, enhancing the quality of the final product. The Testing (Design for Testability, DFT) phase focuses on identifying and fixing errors that might occur during fabrication and manufacturing, subjecting the design to various tests to ensure its resilience against fabrication anomalies.

V-Curve Of VLSI Design:


Before we dive in, it's important to understand that VLSI design is highly complex and requires breaking it into smaller, manageable steps, forming what is known as the V-curve. The "V" represents the descending and ascending phases of the design process. This curve connects to the Y-chart, which we’ll discuss later. Understanding the V-curve is key to grasping the optimization and synthesis process in VLSI design.
The first step is defining system requirements, a collaborative effort between the design team and stakeholders to document all specifications. Next comes system design, where these requirements are translated into a top-level block diagram, dividing the system into smaller sub-blocks or subchips based on their functions—analog, digital, or mixed.
The process continues with subsystem design, focusing on each sub-block in detail. Designers form specialized teams to handle different sections, ensuring a systematic approach. Following this is the design of components, where individual functional blocks are created. These could be logic gates, unit blocks, or specific circuits tailored to the system's needs. Each component undergoes detailed testing for functional correctness, often implemented using Verilog or similar HDL.
Next, the build and check components phase ensures the designed blocks meet specifications. Standard cells and more complex components are built and validated, completing the checklist for component design. Once components are ready, they are integrated to form subsystems in the build and check subsystems stage, leveraging EDA tools for efficient assembly. These subsystems are tested rigorously to identify and resolve faults.
Finally, in the build and check system phase, all subsystems are integrated to form the complete system, culminating in post-silicon validation. This step involves testing prototypes extensively to ensure the system functions as intended. Once validated, the chip is ready for fabrication.
The V-curve captures this entire process, starting with breaking down the system into smaller components and culminating in stitching them together to form a functional design. This approach applies to both digital and analog designs and lays the foundation for synthesis, the first step in chip fabrication. Understanding these abstractions is essential for grasping VLSI design and its challenges.

What Synthesis Means in General ?



Synthesis in VLSI design occurs at multiple points in the design flow, and it’s essential to clarify its meaning upfront to avoid confusion. This generalization applies to the front-end, back-end, and standard cell design processes. A table with infographics will explain synthesis at three levels: logic, circuit, and layout.

Logic Level: Behavioral descriptions, such as FSM or Karnaugh maps written in HDL (Verilog, VHDL, SystemVerilog), are converted into structural logic using predefined gates from a standard cell library. This step, enabled by EDA tools, connects your code to certified, silicon-ready standard cells.
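A small, hedged illustration of this logic-level step, written by hand rather than produced by a tool: the same 2:1 mux described behaviorally and then structurally, with generic Verilog primitives standing in for certified standard cells.

```verilog
// Behavioral description (what the designer writes):
module mux_behav (input wire a, b, sel, output reg y);
  always @(a or b or sel)
    y = sel ? b : a;
endmodule

// A structural version of the same function, the kind of thing logic
// synthesis produces. Generic primitives are used here for illustration;
// a real tool instantiates named cells from the standard cell library.
module mux_struct (input wire a, b, sel, output wire y);
  wire nsel, t1, t2;
  not g0 (nsel, sel);
  and g1 (t1, a, nsel);
  and g2 (t2, b, sel);
  or  g3 (y, t1, t2);
endmodule
```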

Circuit Level: Logic gates are realized as transistor-level designs through circuit synthesis. This process uses foundry-specific PDKs to map gates to transistor interconnections.

Layout Level: Transistor designs are further synthesized into layouts, forming physical patterns for silicon realization. Layout synthesis, handled by PNR tools, optimizes and arranges standard cell layouts for manufacturability.

In this series, we focus solely on logic synthesis and its role in converting HDL descriptions into structural logic. Other synthesis stages—circuit and layout—will be explored in future discussions. Understanding synthesis at these levels provides clarity on its purpose throughout the VLSI design flow.

What Is Abstraction ?  

Abstraction is a conceptual method used to simplify complex systems or ideas by focusing on the essential aspects while ignoring unnecessary details. It involves creating a higher-level representation that captures the core features and functionality of something, while leaving out the intricacies that aren't relevant to the current context. Abstraction doesn't eliminate the complexity; it just focuses on what's important for a particular purpose. It's like looking at the world through different lenses, each revealing a specific facet. Abstraction is a powerful tool for managing complexity, encouraging understanding, and enabling innovation, because it allows us to work with complex systems without being overwhelmed by their minute details.

Abstraction Levels:


The behavioral domain is all about how a system functions: we imagine a part of the design as a black box and focus on the relationship between its inputs and outputs. In the structural domain we describe a system by its different parts or subsystems. The geometrical or physical domain describes how the sub-parts identified in the structural domain are physically realized. The level of abstraction can be defined by the amount of information that level hides: a higher level of abstraction is less detailed, so more information is hidden, while a lower level of abstraction is more detailed, so less information is hidden. The System/Processor level is the highest level of abstraction; the Switch/Transistor/Circuit level is the innermost (lowest) level of abstraction.

Y-Diagram : Co-existence of Domains



The Y-diagram helps us understand digital hardware design.

Its 3 axes represent the three domains of VLSI :

Behavioral: what a particular system does.

Structural: how entities are connected together.

Physical: how to build a structure on Si that has the required connectivity to implement the behavior.

5 concentric circles represent five levels of detail in the design: System, Algorithm, Register Transfer, Logic, and Circuit level. The outermost circle is the most general, and each circle closer to the center represents a smaller and more specific part of the design. The five main characteristics at each level of abstraction are basic building blocks, signal representation, time representation, behavioral representation, and physical representation.

Mapping of Levels & Domains:


Now, let's focus on the mapping of abstraction levels and domains, as seen in the Y-chart framework. For detailed insights, refer to the relevant episode in the Y-chart series.  

This table maps the abstraction levels (system, algorithm, RTL, gate, and switch) onto the three design domains (behavioral, structural, and physical):

1. System/Processor Level:  

   - Behavioral: Written specifications.  

   - Structural: Modules.  

   - Physical: Physical partitioning.  

2. Algorithm/Architecture Level:  

   - Behavioral: Algorithms, flowcharts.  

   - Structural: Processor, RAM, ROM.  

   - Physical: Clusters.  

3. Functional (RTL):  

   - Behavioral: Data flow, register transfers.  

   - Structural: ALU, MUX, registers.  

   - Physical: Floor planning, standard cells.  

4. Gate/Structural Logic:  

   - Behavioral: Boolean equations.  

   - Structural: AND, OR, XOR gates.  

   - Physical: Standard cells.  

5. Switch/Transistor Level:  

   - Behavioral: Equations.  

   - Structural: Transistors, resistors, capacitors.  

   - Physical: Mask geometry, fabrication details.  

Understanding this mapping helps contextualize VLSI design stages and clarify roles within the workflow. For example, you can identify your position in the design process, understand inputs from preceding teams, and anticipate deliverables for subsequent teams.  This framework is critical for distinguishing between various design engineering roles and navigating the VLSI design flow effectively. 

HDL Compiler Vs Synthesis Compiler:

  

This series focuses on the Synthesis Compiler, while the HDL Compiler was discussed in the Verilog episode. Let’s break down their differences:  

1. HDL Compiler:  
   - Converts handwritten Verilog/SystemVerilog code into a translated design for simulation purposes.  
   - Produces waveforms (e.g., 0s and 1s) for analysis, often visualized using tools like GNU plot.  
   - Examples: Icarus Verilog (Iverilog).  
   - Technology-independent: No association with specific technology nodes (e.g., 10nm or 22nm).  

2. Synthesis Compiler:  
   - Converts the translated design into a technology-specific synthesized netlist using libraries provided by the foundry, such as PDK (Process Design Kit) or DK (Design Kit).  
   - Associates technology nodes (e.g., 10nm, 22nm) to the netlist, marking the transition toward fabrication.  
   - Ensures compatibility with the standard cell library and other design blocks.  
   - Forms the first step in the VLSI fabrication process.  
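To give a feel for the HDL-compiler side of this split, here is a tiny simulation-only testbench (all names are illustrative) of the kind an HDL compiler such as Icarus Verilog would elaborate. It contains delays and waveform-dump system tasks and carries no technology information at all.

```verilog
// Simulation-only testbench sketch (invented names): meaningful to an HDL
// compiler/simulator, but not something you would hand to a synthesis tool.
`timescale 1ns/1ps
module tb_and2;
  reg  a, b;
  wire y;
  assign y = a & b;             // trivial design under simulation, inlined
  initial begin
    $dumpfile("tb_and2.vcd");   // waveform dump for later viewing
    $dumpvars(0, tb_and2);
    a = 0; b = 0; #10;
    a = 1; b = 0; #10;
    a = 1; b = 1; #10;
    $finish;
  end
endmodule
```

The synthesis compiler, by contrast, takes only the synthesizable design (no testbench constructs) together with the technology library from the PDK/DK and emits a cell-level, technology-bound netlist.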

 VLSI Design Flow : Brief 


Simplified VLSI Design Flow  
- Specification: Initial target specifications (pen and paper).  
- High-Level Description: FSM or similar concepts are defined.  
- RTL Coding: Verilog/SystemVerilog implementation of the design.  
- HDL Compilation: Prepares the design for simulation.  
- Logic Synthesis: Uses PDK/DK to produce technology-bound netlists (e.g., 10nm gate-level netlist).  
- Physical Design: Includes steps like floorplanning, placement, and routing, refining the design for fabrication.  

Technology-Dependent vs. Independent:  
- HDL compilers deal with high-level design descriptions and simulations without technology node attachments.  
- Synthesis compilers introduce technology dependencies, paving the way for fabrication by linking the design to ASIC libraries for specific nodes.  

Synthesis is the gateway to fabrication, bridging high-level design and physical implementation. This understanding helps differentiate the roles of HDL and synthesis compilers in the VLSI design flow.

VLSI Design Flow : Detailed



We also have a detailed episode on this topic, which you can check out for more information. This diagram  summarizes the complete VLSI design flow. While we have covered nearly every step, some areas could be broken down further. For now, I’ve condensed it to fit within this slide.

In this series, we’ll focus specifically on synthesis and not delve into the steps that follow. Synthesis is critical because it prepares the design for fabrication. At this stage, the design transitions from being technology-independent to becoming tied to a specific technology node (e.g., 5nm, 10nm, or 22nm). From this point onward, technology constraints, design rules, and node-specific considerations come into play.

Synthesis marks the shift from a flexible design process to one that requires meticulous attention to technology details. It’s like moving from an open road to a busy, rule-bound highway, where careful navigation is essential.

Watch the video lecture here:

Courtesy: Image by www.pngegg.com



12/04/2024

What is Verification IP [VIP] in VLSI ?



In this article, we delve into key aspects of verification, beginning with an overview of general verification strategies that are essential for ensuring reliable design and functionality. We explore the need for robust verification processes, emphasizing their role in enhancing design accuracy and reliability, especially in complex systems. A detailed verification flow chart is presented to guide viewers through the structured sequence typically followed in verification. We also explain the concept of Verification IP (VIP), outlining the general verification blocks included within VIP and comparing these with those in a regular testbench. Additionally, we discuss the unique advantages of using VIP, particularly its ability to streamline verification and enhance testing efficiency, and conclude with an open example that demonstrates VIP in action.


Once you complete the article you will understand:  

1. What is verification IP in VLSI? 

2. Why a robust verification plan is necessary in VLSI with respect to the VIP context? 

3. How general verification test bench is weaker than a verification IP for the same protocol? 

4. Understanding verification IP using a open example


General Strategies of Verification:

Understand the architecture and micro-architecture; partition logic to create efficient RTL descriptions using moderate gate count blocks. Apply a bottom-up approach, and deploy synchronizers at the top-level design if needed. Use synthesizable constructs during RTL design and non-synthesizable constructs for RTL verification. Use blocking assignments for modeling combinational logic and non-blocking assignments for sequential designs. Avoid mixing blocking and non-blocking assignments. Apply optimization constraints at the RTL level to improve performance. Refer to subsequent chapters for better design and optimization understanding. Develop a robust verification architecture and verification planning for the design. Understand coverage requirements and implement verification strategies to meet coverage goals.
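As a quick, hedged illustration of the assignment guideline above (module and signal names are invented):

```verilog
// Blocking assignments (=) for combinational logic, non-blocking (<=) for
// sequential logic, never mixed within the same block.
module assign_styles (
  input  wire       clk,
  input  wire [3:0] a, b,
  output reg  [3:0] q
);
  reg [3:0] sum;
  always @(*) begin             // combinational: blocking assignment
    sum = a + b;
  end
  always @(posedge clk) begin   // sequential: non-blocking assignment
    q <= sum;
  end
endmodule
```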




The picture above shows a desktop computer motherboard. Many types of connections are present, including connections for the hard drives (SATA ports), slots for RAM, the processor socket (with a fan on top of it), PS/2 ports, USB ports, an RJ45 socket, multiple D-shaped ports, and audio ports.

The motherboard has to interact with the keyboard and mouse, with other plug-and-play devices like printers and scanners, with the monitor, with audio devices (microphone or speaker), and with the one or more hard drives that are connected. In real time, when you power up a desktop PC, the CPU sees a rush of information from all around it. On pen and paper you can list these interfaces neatly, but in practice data is jumping from one place to another, and all these in and out operations are performed simultaneously. The OS is booted from the hard disk, and an enormous data flux arrives from every direction at once; it does not arrive sequentially. Once the boot sequence has finished and the OS interface is ready (whether Linux, Windows, or Mac), multiple operations are already running in parallel. This is the real-time scenario: the CPU has to take the load from all of these applications at once.

This is a practical example you encounter every day, and that is why I picked it to explain why we need robust verification. To cover all the complex things that can happen to the CPU, we have to write an exhaustive verification deck for the device under test; here the device is a CPU, while in your case or mine it could be a design under test, i.e. a block. From your early days you know about truth tables, so you might expect the device to operate in a standard, well-defined way. In real life, however, when a chip is plugged into a board with so many components forwarding data and addresses in multiple directions and sequences, the design can crash. The target of verification is to find out at what time and under what conditions this DUT, your block or the full SoC, can crash: we have to try to crash it. That is the purpose of verification, and hence we need a robust verification plan. We are done with this infographic; let's move on to the next slide, the general verification flowchart.

General Verification Flow Chart



Here we will walk you through a general verification flowchart. It contains several blocks: some on the left, from which the flow forks out, and some on the right; these branches may run in parallel, and at some point they merge again.
First, we have the functional specification, also known simply as the spec. Next comes the test plan: as mentioned in the previous slide, your design under test can be bombarded with addresses and data from multiple directions simultaneously, so we need a good test plan. Next, we have the assertions.

Now what are the assertions?
An assertion means you are stating affirmatively that the design will obey certain rules: a port must have a particular sense of operation, or a bus must behave in a particular way. Once you get into the coding details you will see exactly what assertions look like, but overall, assertions are a way of hard-coding how a particular block or section of code corresponding to our design behavior must behave, and if there is a violation, the verification environment should flag an error or warning, whichever is applicable. A small example follows at the end of this walkthrough.

Next, we proceed to hardware description coding: the RTL, including the UPF. We have a Verilog series, a UPF series, and for RTL we have the synthesis series; each of them now has a marathon version so you can get the entire series in a single video. In case you need to learn any of these, all the resources are already on our channel. Following the arrow, the flow moves forward.

Now comes linting. We have already published a linter on Tcl in this channel; you will find it in the Tcl playlist, or you can simply search for it. Linting is nothing but a syntax-checking routine that allows an engineer to verify the code even before running it, so that if there is a syntactical or semantic error, it is flagged by the linter. Linter tools exist for Tcl, Verilog, and SystemVerilog.

Linters also exist for Perl, Python, and virtually any language, whether it is a programming language, a scripting language, a hardware description language (HDL), or a hardware description and verification language (HDVL). Linting is a general concept, and you can find linters readily available to use to your advantage.

Next is simulation with assertions and checking. By this point the assertions and the code are in place and the code has been linted, which means we have mostly eliminated syntactical and semantic errors; we then simulate along with the assertions and checks. That is the stage we have reached. Now comes the bridging part: we take the testbench, written in an HDVL like SystemVerilog, and plug it in here.

The hardware description has been written and linted, and with assertions we have done some checking on the code; it is ready. Now we plug the SystemVerilog testbench into our design, the DUT. Next, we look at functional coverage and code coverage.

These are two detailed subjects in themselves, and we will not go into them here, but they make sure that the exhaustiveness of the robust verification is achieved, as discussed with the infographic in the earlier slide. Finally, if needed, from this step we can go back to the testbench, make modifications or insert additional assertions as required, and then return to this state. This loop may go around until you are satisfied that exhaustive, robust verification has been implemented.

The verification was planned at the earlier stage; however, when we actually go and code, we think in one way, implement it, and then try to check it, and some flaw at the code level may deviate from what we thought in the planning stage. So we may have to put some additional ports or checks there to make the code complete and tally with our test plan. Once this loop is completed, we will have the functional coverage and the code coverage, the verified RTL code, and the verification results.

Here our verification ends, and we have the design verified. In this slide we have talked about the general verification flowchart. In actual practice, when you work in a company or use a particular tool, you will see small modifications around this flowchart; obviously, the same process can be represented by different people in different ways and from different perspectives, and those perspectives rest with the respective tool owner or the verification engineer doing the actual verification.

So this is a general concept. Keep it in mind and stay flexible: something may come in between, or this kind of flowchart may change, because as we proceed further, many things keep getting updated. This is a very simple structure for understanding the verification flowchart.
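Before moving on, here is a tiny, hypothetical flavour of what an assertion from the flowchart above can look like in SystemVerilog; the signals and timing are made up purely for illustration.

```systemverilog
// Hypothetical assertion (invented signals): whenever "req" is asserted,
// "grant" must follow within 1 to 3 clock cycles, otherwise the simulator
// flags an error.
module assertion_demo (input logic clk, req, grant);
  property p_req_gets_grant;
    @(posedge clk) req |-> ##[1:3] grant;
  endproperty
  assert property (p_req_gets_grant)
    else $error("grant did not follow req within 3 cycles");
endmodule
```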

What is Verification IP (VIP):

Verification IP (VIP) in VLSI is an essential tool in the chip design and development process. It provides a standardized, reusable component to verify and validate chip functionality against specific protocols or behaviors. Verification IP is a reusable, modular component used specifically in the verification phase of chip design. It is employed to test and validate the behavior and functionality of the design under test (DUT). Verification IP simplifies the process by providing a pre-built, standardized way to test whether the DUT complies with certain protocols or functions. VIP reduces the time and effort required to ensure that a chip design works correctly before moving to manufacturing.


General Verification Blocks in VIP:


Test Generator: Creates stimuli to drive the DUT.

Monitor: Passively observes and captures DUT signals for analysis.

Checker: Compares DUT output with expected values for correctness.

Scoreboard: Tracks and compares transaction-level data over time for consistency.
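As a highly simplified, hypothetical sketch of how a couple of these blocks can be expressed (plain SystemVerilog classes rather than a full UVM environment; every name below is invented, and the driver and monitor are omitted for brevity):

```systemverilog
// Hypothetical sketch only: a stimulus generator and a scoreboard/checker.
class txn;                              // one bus transaction
  rand bit [7:0] addr;
  rand bit [7:0] data;
endclass

class generator;                        // Test Generator: creates stimuli
  mailbox #(txn) gen2drv = new();       // hand-off channel to a driver
  task run(int n);
    txn t;
    repeat (n) begin
      t = new();
      void'(t.randomize());             // randomized stimulus
      gen2drv.put(t);                   // a driver would pull from here
    end
  endtask
endclass

class scoreboard;                       // Scoreboard/Checker: compares
  int unsigned errors;                  // expected vs observed transactions
  function void check(txn expected, txn observed);
    if (expected.data !== observed.data) begin
      errors++;
      $display("Mismatch at addr %0h", expected.addr);
    end
  endfunction
endclass
```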



Verification Blocks Comparison : Regular TestBench Vs VIP




Advantages of Using VIP:

Each VIP is configured to simulate the behavior of these protocols and verify that the DUT adheres to their specifications. For example, a PCIe Verification IP will emulate data transfers, error scenarios, and ensure compliance with PCIe protocol standards. 

There are many such protocols for which VIPs are created/available, such as:

1. PCIe (Peripheral Component Interconnect Express)

2. AXI (Advanced eXtensible Interface)

3. I2C (Inter-Integrated Circuit)

4. Ethernet

5. USB (Universal Serial Bus) .... & many more.

Beyond protocol coverage, the key advantages of using VIP include:

1. Time-saving: Reusable across multiple projects.

2. Comprehensive Testing: Provides a wide range of test scenarios, including edge cases.

3. Standardization: Ensures the DUT adheres to industry-standard protocols.

4. Automation: Automatically generates stimuli and checks results, reducing human error.

Open Example of VIP from GitHub :

We have covered the VIP concept theoretically and through infographics, so the idea is now with you. Next, we will unbox one particular open example from GitHub.



You can reach this URL directly; this VIP is for the AXI protocol, and the author is Kumar Rishav. You can see it carries an MIT license, which you can go through. Scrolling further down the page, you can see the block diagram of the VIP: the testbench top, and inside it the test module, which contains the sequence. You will find similarity with the block diagram shown earlier in the slides.



However, there is a difference in arrangement. Here it is a master-slave architecture: we have a write sequencer, a read sequencer, a monitor, and a driver that drives the sequenced data onto the interface, and the interface talks to the DUT. The DUT itself is not shown, because a VIP does not contain the DUT; it is the verification capsule around your DUT.

There is also a driver for the slave and a monitor for the slave; since this is a master-slave architecture, we have two different monitors and two different drivers. Finally, there is a scoreboard and a coverage routine in this block diagram. The block diagram should look familiar from what we have already discussed. Going down further, you can see the list of components.


There is a sequence item and a sequencer too, and we have the driver, the monitor, the scoreboard, and the environment, as well as the test and the testbench top, plus the environment config and the test config. All of these are there; you can read through them yourself and understand.

Now you have a real VIP in your hands and can investigate it in detail. How do we investigate? Scroll up again and you can see there is a lot of code uploaded to GitHub. Let's look at the monitor code. Once I click it, it opens a pane similar to an IDE: on the left-hand side you have the files, and on the right-hand side each file opens in the viewing window. Here you can see that the monitor has several ports. The code is in either SV or UVM.

In this case it is in UVM, so you have the UVM component here, and you can investigate this code yourself. For that, you need at least a working knowledge of SystemVerilog and UVM.


Next, we have the master part: the SystemVerilog code for the master is here, and somewhere there should be the slave. Here it is; this is the code for the slave.
You can see it is very much protocol-centric, which is why it is divided like this, e.g. the AXI slave. Here is the code of the slave, and now let me show you the driver.




Here you can see the driver contains various parameters such as the data width, and it has sequence items like this. It has tasks and functions to perform the driving action, and if you go through the code, the comments will help you understand it.



Now let me take you to the interface, which was shown at the bottom of the block diagram of this particular VIP. An interface is essentially a bundled port connection: in SV we have an interface construct where we declare the different ports in one place. Note that the logic data type used here is available only in SV, not in plain Verilog, so you need some knowledge of SV to follow the code.
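For readers who have not seen one, here is a heavily trimmed, hypothetical AXI-style interface sketch; this is not the actual code from the repository, and the signal names and widths are invented.

```systemverilog
// Hypothetical, trimmed-down AXI-style interface: a few write-channel signals
// bundled in one place so they can be connected to the DUT and the testbench
// as a single unit.
interface axi_lite_if (input logic clk, input logic rst_n);
  logic [31:0] awaddr;
  logic        awvalid;
  logic        awready;
  logic [31:0] wdata;
  logic        wvalid;
  logic        wready;
endinterface
```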



Here you can see all the connections are explicitly mentioned. If you instantiate this particular module, axi_intf, you can use it in a macro-like, plug-and-play manner to connect the different ports; you are essentially doing plug and play with this single interface. That is the beauty of interface-style code in SystemVerilog as well as in UVM. So we have all the code, and this is the testbench top from where we pull the control; you can see it is very simple.


We have the interface instantiation and the clock; scrolling down, we have the configuration. Everything is kept very simple at the top level. You can see we have included the AXI package, the header file, and the SV file. Then here is the test, with all the routines for testing, and here is the code of the sequencer for the write channel. You can go through it and get a detailed view of what is inside the sequencer.



Watch the video lecture here: 

Courtesy: Image by www.pngegg.com




12/02/2024

🎙️Insights into VLSI, AI & The Indian Semiconductor Landscape | TSP | Guest : Aloke Kumar Das

 





In this episode of The Semiconductor Podcast (TSP), we engage in an inspiring conversation with our guest, covering topics that resonate deeply with aspiring and experienced professionals in the VLSI and semiconductor industries. Here's what you'll discover:

1. The guest’s journey and key milestones, filled with lessons to inspire young newcomers to the VLSI field.

2. The evolution of VLSI over the past decade in India and where it’s headed.

3. Observations on working with young talents: their strengths and areas for improvement.

4. Insights into their recent IEEE speech and the key takeaways.

5. The value of attending conferences for both freshers and seasoned professionals.

6. A special message or expectations from the government regarding the VLSI and semiconductor sectors.

7. Thoughts on how AI and ML could shape the future of VLSI.

8. Tips on applying for internships at their institution, including timelines and eligibility.

9. What’s missing in the Indian semiconductor landscape and how the gaps could be filled.

10. A final message and advice shared for the community.

Watch the podcast : HERE This episode is packed with insights, practical advice, and a vision for the future of VLSI and semiconductors in India. Whether you’re a student, professional, or enthusiast, this discussion has something for everyone!

🌟 For Internship in Lab and Lectures Semiconductor : HERE

Guest : Aloke Kumar Das (Founder & CEO of Lab and Lectures Semiconductor)
Founder and CEO of Lab and Lectures Semiconductor, a fabless semiconductor design company. An astute professional with 25 years of experience in the VLSI design, EDA, CAD, and embedded systems industry. Earlier, he was Director of Engineering at WaferSpace and BlackPepper Technology, and a project manager at Intel. He holds an M.Tech from IIT Delhi. He is a senior member and chair of IEEE CEDA. He started contributing to IEEE at the VLSI Design Conference Kolkata in 2005, was session chair of VDAT Kolkata in 2007, and sponsored the HTC students project competition in BHTC 2020. He has been a TPC member of many conferences and has reviewed over 50 papers across BHTC 2020, ISCAS 2020 and 2021, MWSCAS 2021, CONECCT 2020 and 2021, R10 HTC 2021, Mysuru Conference 2021, ICAECC 2021, and ICMNWC 2021.







🎙️ Exploring SPICE, QSPICE & The Future of Analog Design | TSP | Guest : Mike Engelhardt



In this exciting episode of The Semiconductor Podcast (TSP), we have an in-depth conversation with a distinguished guest about the evolution of SPICE simulators, the innovations in QSPICE, and the future of analog design and verification. Here's what we covered:

1. The guest’s fascinating career journey and key milestones.

2. The evolution of SPICE, its beginnings, and its transformative impact on the VLSI industry.

3. Memorable experiences with LTSpice and its role in the guest's career.

4. Insights into the ongoing journey with QSPICE, including its capabilities and unique features.

5. The transistor/gate limits of QSPICE and its potential.

6. Target audience and use cases for QSPICE.

7. Challenges with the lack of a unified managing body for SPICE, unlike Verilog/SystemVerilog.

8. Future plans for incorporating Verilog-A or Verilog-AMS into QSPICE.

9. Possibilities of integrating open-source PDKs like SkyWater PDK with QSPICE for enhanced design and simulation.

10. The potential impact of AI/ML on analog design and verification processes.

11. Career opportunities for freshers at Qorvo and beyond.

The episode also dives into the future of analog and digital design, discussing the roadmap for innovation and integration. Tune in for a wealth of knowledge and thought-provoking discussions! 🌟 Guest : Mike Engelhardt (Qorvo.com)
Mike Engelhardt has been writing physical simulators since 1975. His first simulators were written for high-energy physics labs and instrumentation companies doing charged-particle optics and oil exploration. Mike's educational background is physics from the University of Michigan, Ann Arbor; the University of Mainz, Germany; and the University of California, Berkeley. He holds patents in simulation and switch mode power supply design. Mike has delivered seminars on SPICE simulation in 48 countries. He is currently perfecting the QSPICE™ simulator with Qorvo as his contribution to the engineering community.

Listen in Spotify :



Watch the video here:





🎙️ Exploring VLSI Journey | TSP | Guest : Kumar Priyadarshi

 



In this episode of The Semiconductor Podcast (TSP), we bring you an inspiring and insightful conversation with our guest, covering a range of topics that showcase the dynamic world of VLSI and semiconductors:

1. His incredible personal journey into the VLSI industry.

2. Insights into Techo Veda and its vision.

3. A deep dive into their book, the motivation behind it, and its relevance to the industry.

4. Career opportunities for freshers in a semiconductor foundry and how to prepare for them.

5. Firsthand experiences and key takeaways from participating in the Semicon India Conference.

6. Balancing the hustle and bustle of a thriving career in the VLSI domain.

7. Thoughts on the upcoming Global Foundry Center in Kolkata and its potential impact on the industry.



Guest : Kumar Priyadarshi
Kumar Priyadarshi is an experienced semiconductor professional with a passion for innovation, specializing in process integration, CMP, memory design, and fab technologies. He provides consultancy, helps companies with Go-to-Market (GTM) strategies for the Indian market, designs semiconductor curricula for universities and companies, and fosters media partnerships. Kumar has led key projects such as developing the 40nm tech node at GlobalFoundries, Singapore, and spearheading India’s first memory chip at IIT Bombay (2019-2023), where he managed a team of 16 engineers. His achievements include leading India’s first lab-to-fab (IITB to SCL) translation for OTP memory, a hybrid lab-to-fab model for MTP memory, and coordinating the Bharat Semiconductor Research Center (BSRC) DPR. Kumar's Book Link: HERE


Listen in Spotify :


Watch the Podcast :