
Tuesday, 15 June 2021

Top 5 books to refer for a VHDL beginner



VHDL (VHSIC-HDL, Very High-Speed Integrated Circuit Hardware Description Language) is a hardware description language used in electronic design automation to describe digital and mixed-signal systems such as field-programmable gate arrays and application-specific integrated circuits. To help you get started with learning VHDL, here is a list of five books that make good references for getting started with VHDL coding.

1. VHDL: Programming by Example - by Douglas L. Perry

2. Digital Design: With an Introduction to the Verilog HDL, VHDL, and SystemVerilog - by M. Morris Mano

3. A VHDL Primer - by J. Bhasker

4. Fundamentals of Digital Logic with VHDL Design - by Stephen Brown and Zvonko Vranesic

5. Circuit Design and Simulation with VHDL - by Volnei A. Pedroni

Please feel free to send us your suggestions if there is a good book on your list. We will be happy to share it here.
 

Thursday, 21 February 2013

How to read an NGC netlist file

For the occasions when you find yourself with a netlist file and you don't know where it came from or what version it is, this post explains how you can interpret the netlist file (i.e., convert it into something readable).

Today I found myself with two netlists and I needed to know whether they were the same. Of course you can try comparing the two files with a program such as Beyond Compare, but if the netlists were compiled on separate dates, you will have trouble recognizing this from the raw binary data. The best thing to do in this case is to convert the netlists to EDIF files, a readable, text-file version of the netlist. Another option is to convert the netlists into VHDL or Verilog code. Here is how you can do this:

To convert a netlist (.ngc) to an EDIF file (.edf)

  1. Get a command window open by typing “cmd” in the Start->Run menu option in Windows. If you use Linux, open up a terminal window.
  2. Use the “cd” command to move to the folder in which you keep your netlist.
  3. Type “ngc2edif infilename.ngc outfilename.edf” where infilename and outfilename correspond to the input and output filenames respectively.
  4. Open the .edf file with any text editor to view the netlist.
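
For example, to settle the question of whether two netlists match, a session might look like this (the folder and file names here are hypothetical):

    cd C:\projects\netlists
    ngc2edif design_a.ngc design_a.edf
    ngc2edif design_b.ngc design_b.edf

You can then compare design_a.edf and design_b.edf with an ordinary text diff tool; apart from any embedded timestamps, identical netlists should produce essentially identical EDIF.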

To reverse engineer a netlist with ISE versions older than 6.1i

  1. Convert the netlist to an EDIF file using the above instructions.
  2. Type “edif2ngd filename.edf filename.ngd” to convert the EDIF file into an NGD file (Xilinx Native Generic Database file).
  3. To convert the netlist into VHDL type “ngd2vhdl filename.ngd filename.vhd“.
  4. To convert the netlist into Verilog type “ngd2ver filename.ngd filename.v“.
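
Chaining these together for a hypothetical design.ngc, the whole older-ISE flow is:

    ngc2edif design.ngc design.edf
    edif2ngd design.edf design.ngd
    ngd2vhdl design.ngd design.vhd
    ngd2ver design.ngd design.v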

To reverse engineer a netlist with ISE versions 6.1i and up

  1. To convert the netlist into VHDL type “netgen -ofmt vhdl filename.ngc“. Netgen will create a filename.vhd file.
  2. To convert the netlist into Verilog type “netgen -ofmt verilog filename.ngc“. Netgen will create a filename.v file.

Now you should have all the tools you need to read an NGC netlist file.

Get free daily email updates!

Follow us!

Wednesday, 19 September 2012

Programming FPGAs with Python

For everyone who wants to get started with developing for FPGAs and doesn't want to waste time learning a new programming language, or who just wants to use their existing knowledge of Python to program an FPGA, this tool is made for you!

The Python Hardware Processor is written in MyHDL, which makes it possible to run a very small subset of Python on an FPGA: it converts very simple Python code into either VHDL or Verilog, and the resulting hardware description can then be uploaded to the FPGA:

“Due to the Python nature of MyHDL and the Python Hardware Processor written with it, it allows you to write a program for the processor, to simulate the hardware processor, and to convert the processor to a valid hardware description (VHDL or Verilog), all inside a single Python environment.”

MyHDL surely won't replace VHDL or Verilog, but it can be a great tool for simulation and for testing the behaviour of your design, and it's certainly worth looking into if you want to jump into the world of FPGAs.
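
To give a flavour of MyHDL itself, here is a minimal sketch (not taken from the Python Hardware Processor project; the module and signal names are made up) that describes a 2-to-1 multiplexer in Python and converts it to VHDL, assuming MyHDL 1.0+ is installed (pip install myhdl):

    # Minimal MyHDL sketch: describe a 2-to-1 mux in Python, convert to VHDL.
    from myhdl import block, always_comb, Signal

    @block
    def mux2(sel, a, b, y):
        @always_comb
        def logic():
            # y follows a when sel is high, otherwise b
            if sel:
                y.next = a
            else:
                y.next = b
        return logic

    sel, a, b, y = [Signal(bool(0)) for _ in range(4)]
    inst = mux2(sel, a, b, y)
    inst.convert(hdl='VHDL')  # writes mux2.vhd; use hdl='Verilog' for Verilog

The same object can also be simulated in plain Python before conversion, which is where MyHDL shines as a test bench.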

Feel free to discuss in the comments thread.

Get free daily email updates!

Follow us!

Thursday, 6 September 2012

Choosing FPGA or DSP for your Application

 

FPGA or DSP - The Two Solutions

The DSP is a specialised microprocessor, typically programmed in C, perhaps with assembly code for performance. It is well suited to extremely complex maths-intensive tasks with conditional processing. Its performance is limited by the clock rate and the number of useful operations it can do per clock. As an example, a TMS320C6201 has two multipliers and a 200MHz clock, so it can achieve 400M multiplies per second.

In contrast, an FPGA is an uncommitted "sea of gates". The device is programmed by connecting the gates together to form multipliers, registers, adders and so forth. Using the Xilinx Core Generator this can be done at a block-diagram level, and many of the blocks are very high level, ranging from a single gate to an FIR or FFT. Performance is limited by the number of gates available and the clock rate. Recent FPGAs include dedicated multipliers especially for performing DSP tasks more efficiently; for example, a 1M-gate Virtex-II™ device has 40 multipliers that can operate at more than 100MHz. In comparison with the DSP this gives 4000M multiplies per second.

 

Where They Excel


When sample rates grow above a few MHz, a DSP has to work very hard to transfer the data without any loss. This is because the processor must use shared resources like memory buses, or even the processor core, which can be prevented from taking interrupts for some time. An FPGA, on the other hand, dedicates logic to receiving the data, so it can maintain high rates of I/O.

A DSP is optimised for the use of external memory, so a large data set can be used in the processing. FPGAs have a limited amount of internal storage and so need to operate on smaller data sets; however, FPGA modules with external memory can be used to remove this restriction.

A DSP is designed to offer simple re-use of its processing units; for example, a multiplier used for calculating an FIR can be re-used by another routine that calculates FFTs. This is much more difficult to achieve in an FPGA, but in general there will be more multipliers available in the FPGA.

If a major context switch is required, the DSP can implement this by branching to a new part of the program. In contrast, an FPGA needs to build dedicated resources for each configuration. If the configurations are small, then several can exist in the FPGA at the same time. Larger configurations mean the FPGA needs to be reconfigured – a process which can take some time.

The DSP can take a standard C program and run it. This C code can have a high level of branching and decision making – for example, the protocol stacks of communications systems. This is difficult to implement within an FPGA.

Most signal processing systems start life as a block diagram of some sort. Actually translating the block diagram to the FPGA may well be simpler than converting it to C code for the DSP.

Making a Choice

There are a number of elements to the design of most signal processing systems, not least the expertise and background of the engineers working on the project. These all have an impact on the best choice of implementation. In addition, consider the resources available – in many cases, I/O modules have FPGAs on board. Using these with a DSP processor may provide an ideal split.

 

As a rough guideline, try answering these questions:

  1. What is the sampling rate of this part of the system? If it is more than a few MHz, FPGA is the natural choice.
  2. Is your system already coded in C? If so, a DSP may implement it directly. It may not be the highest performance solution, but it will be quick to develop.
  3. What is the data rate of the system? If it is more than perhaps 20-30Mbyte/second, then FPGA will handle it better.
  4. How many conditional operations are there? If there are none, FPGA is perfect. If there are many, a software implementation may be better.
  5. Does your system use floating point? If so, this is a factor in favour of the programmable DSP. None of the Xilinx cores support floating point today, although you can construct your own.
  6. Are libraries available for what you want to do? Both DSP & FPGA offer libraries for basic building blocks like FIRs or FFTs. However, more complex components may not be available, and this could sway your decision to one approach or the other.

In reality, most systems are made up of many blocks. Some of those blocks are best implemented in FPGA, others in DSP. Lower sampling rates and increased complexity suit the DSP approach; higher sampling rates, especially combined with rigid, repetitive tasks, suit the FPGA.

 

Some Examples

Here are a few examples of signal processing blocks, along with how we would implement them:

  1. First decimation filter in a digital wireless receiver. Typically, this is a CIC filter, operating at a sample rate of 50-100MHz. A 5-stage CIC has 10 registers & 10 adds, giving an "add rate" of 500-1000MHz.
    At these rates any DSP processor would find it extremely difficult to do anything. However, the CIC has an extremely simple structure, and implementing it in an FPGA would be easy (see the behavioural sketch after this list). A sample rate of 100MHz should be achievable, and even the smallest FPGA will have plenty of resources left for further processing.
  2. Communications protocol stack – ISDN, IEEE1394 etc.; these are large, complex pieces of C code, completely unsuitable for the FPGA, which the DSP will implement easily. Not only that, but a single code base can be maintained, allowing the code stack to be implemented on a DSP in one product or on a separate control processor in another, and bringing the opportunity to licence the code stack from a specialist supplier.
  3. Digital radio receiver – baseband processing. Some receiver types would require FFTs for signal acquisition, then matched filters once a signal is acquired. Both blocks can be easily implemented by either approach. However, there is a mode change – from signal acquisition to signal reception.
    It may well be that this is better suited to the DSP, as the FPGA would need to implement both blocks simultaneously. Note that the RF processing is better in an FPGA, so this is likely to be a mixed system.
    (Note – with today’s larger FPGAs, both modes of this system could be included in the FPGA at the same time.)
  4. Image processing. Here, most of the operations on an image are simple and very repetitive – best implemented in an FPGA. However, an imaging pipeline is often used to identify "blobs" or "Regions of Interest" in an object being inspected. These blobs can be of varying sizes, and subsequent processing tends to be more complex. The algorithms used are often adaptive, depending on what the blob turns out to be… so a DSP-based approach may be better for the back end of the imaging pipeline.
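
To make example 1 concrete, here is a small behavioural model of the CIC decimator in Python (a sketch only: the stage count and decimation factor are illustrative, and a real FPGA implementation would use fixed-width two's-complement registers rather than NumPy arrays):

    # Behavioural model of an N-stage CIC decimator (illustrative only).
    import numpy as np

    def cic_decimate(x, stages=5, decim=8):
        y = np.asarray(x, dtype=np.int64)
        # Integrator section: one running sum per stage, at the input rate
        for _ in range(stages):
            y = np.cumsum(y)
        # Keep every decim-th sample
        y = y[::decim]
        # Comb section: one first difference per stage, at the output rate
        for _ in range(stages):
            y = np.diff(y, prepend=0)
        return y  # DC gain is decim**stages

Each stage is just one register and one add or subtract, which is why the structure maps so naturally onto an FPGA.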

Summary

FPGA and DSP represent two very different approaches to signal processing – each good at different things. There are many high sampling rate applications that an FPGA does easily, while the DSP could not. Equally, there are many complex software problems that the FPGA cannot address.

Saturday, 20 March 2010

ASIC Implementation Design Cycle

The following diagram shows the basic flow of the complete HDL implementation design cycle.

Wednesday, 17 March 2010

FPGA Implementation Design Cycle

The following diagram shows the basic flow of the complete HDL implementation design cycle.


Wednesday, 21 January 2009

Future of VLSI

 


Where do we actually see VLSI in action? Everywhere: in personal computers, cell phones, digital cameras and almost any electronic gadget. There are certain key issues that serve as active areas of research and are constantly improving as the field continues to mature. The figures easily show how Gordon Moore proved to be a visionary, as the trend predicted by his law still continues to hold with little deviation and shows no signs of stopping in the near future. VLSI has come a long way from the time when chips were truly hand-crafted, but as we near the limit of miniaturization of silicon wafers, design issues have cropped up.

VLSI is dominated by CMOS technology, and much like other logic families, this too has its limitations, which have been battled and improved upon for years. Taking the example of a processor, process technology rapidly shrank from 180nm in 1999 to 65nm in 2008; it now stands at 45nm, with attempts being made to reduce it further (32nm). Meanwhile the die area, which had shrunk initially, is now increasing, owing to the added benefit of greater packing density, which means more transistors on a chip.

As the number of transistors increases, so do power dissipation and noise. In terms of heat generated per unit area, chips have already neared the nozzle of a jet engine. At the same time, scaling threshold voltages beyond a certain point poses serious limitations on achieving low dynamic power dissipation as complexity increases. The number of metal layers and the interconnects, both global and local, also tend to get messy at such nano levels.

Even on the fabrication front, we are fast approaching the optical limit of photolithographic processes, beyond which the feature size cannot be reduced with acceptable accuracy. This has opened up extreme-ultraviolet lithography techniques. The high-speed clocks used today make it hard to control clock skew, imposing tight timing constraints, and this has opened up a new frontier in parallel processing. Above all, we seem to be fast approaching the atom-thin gate-oxide limit, where only a single layer of atoms serves as the oxide layer in CMOS transistors. Owing to this, alternatives such as gallium-arsenide technology are becoming active areas of research.

Where does it all lead us? The future of VLSI seems to change every moment, even as we read this.


VLSI Design

VLSI today chiefly comprises front-end design and back-end design. Front-end design includes digital design using HDLs, design verification through simulation and other techniques, design from gates, and design for testability. Back-end design comprises CMOS library design and its characterization, and also covers physical design and fault simulation.

While simple logic gates might be considered SSI devices, and multiplexers and parity encoders MSI, the world of VLSI is much more diverse. Generally, the entire design procedure follows a step-by-step approach in which each design step is followed by simulation before actually being put into hardware or moving on to the next step. The major design steps are different levels of abstraction of the device as a whole:

1. Problem Specification: This is a high-level representation of the system. The major parameters considered at this level are performance, functionality, physical dimensions, fabrication technology and design techniques. The specification has to be a trade-off between market requirements, the available technology and the economic viability of the design. The end specifications include the size, speed, power and functionality of the VLSI system.

2. Architecture Definition: Basic architectural decisions are made here, such as whether to include floating-point units, whether to use a RISC (Reduced Instruction Set Computer) or CISC (Complex Instruction Set Computer) architecture, the number of ALUs, the cache size, etc.

3. Functional Design: This defines the major functional units of the system and hence facilitates the identification of the interconnect requirements between units and the physical and electrical specifications of each unit. A block diagram is decided upon, with the number of inputs, outputs and timing fixed, but without any details of the internal structure.

4. Logic Design: The actual logic is developed at this level. Boolean expressions, control flow, word widths, register allocation, etc. are worked out, and the outcome is called a Register Transfer Level (RTL) description. This part is implemented with hardware description languages such as VHDL and/or Verilog. Gate minimization techniques are employed to find the simplest, or rather the smallest and most effective, implementation of the logic.
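
For a flavour of what an RTL description is, here is a minimal sketch of an 8-bit accumulator in MyHDL, the Python-based HDL covered elsewhere on this blog (the names are illustrative; in practice such a description is usually written directly in VHDL or Verilog):

    # Minimal RTL sketch: an 8-bit accumulator, i.e. a register whose next
    # value is computed from its current value and an input (MyHDL 1.0+).
    from myhdl import block, always_seq, Signal, ResetSignal, modbv

    @block
    def acc8(clk, reset, din, dout):
        @always_seq(clk.posedge, reset=reset)
        def logic():
            # Register transfer: on each clock edge, dout <- dout + din
            dout.next = dout + din  # modbv wraps modulo 2**8
        return logic

    clk = Signal(bool(0))
    reset = ResetSignal(0, active=1, isasync=False)
    din, dout = [Signal(modbv(0)[8:]) for _ in range(2)]
    acc8(clk, reset, din, dout).convert(hdl='VHDL')  # writes acc8.vhd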

5. Circuit Design: While the logic design gives the simplified implementation of the logic, the realization of the circuit in the form of a netlist is done in this step. Gates, transistors and interconnects are put in place to make the netlist. This again is a software step, and the outcome is checked via simulation.

6. Physical Design: The conversion of the netlist into its geometric representation is done in this step, and the result is called a layout. This step follows predefined fixed rules, such as the lambda rules, which give the exact details of the size, ratio and spacing between components. It is further divided into the following sub-steps:

6.1 Circuit Partitioning: Because of the huge number of transistors involved, it is not possible to handle the entire circuit at once, owing to limits on computational capability and memory. Hence the whole circuit is broken down into interconnected blocks.

6.2 Floor Planning and Placement: This step chooses the best layout for each block from the partitioning step, and for the overall chip, considering the interconnect area between the blocks and the exact positioning on the chip, in order to minimize the area while meeting the performance constraints through an iterative approach.

6.3 Routing: The quality of the placement becomes evident only after this step is completed. Routing involves completing the interconnections between modules, and is done in two stages: first, connections are completed between blocks without taking into consideration the exact geometric details of each wire and pin; then a detailed routing step completes the point-to-point connections between pins on the blocks.

6.4 Layout Compaction: The smaller the chip, the better. In this step the layout is compressed from all directions to minimize the chip area, thereby reducing wire lengths, signal delays and overall cost.

6.5 Extraction and Verification: The circuit is extracted from the layout and compared with the original netlist, performance and reliability are verified, and the correctness of the layout is checked before the final step of packaging.

7. Packaging: The chips are put together on a printed circuit board or in a multi-chip module to obtain the final finished product.

A design can be done with three different methodologies, which provide different levels of customization freedom to the designer. In increasing order of customization support, which also means an increasing amount of work on the designer's part, the design methods are: FPGAs and PLDs, standard cell (semi-custom), and full custom design.

While FPGAs have built-in libraries and a device already built, with interconnections and blocks in place, semi-custom design allows the placement of blocks in a user-defined fashion with some independence, while most libraries are still available for development. Full custom design adopts a start-from-scratch approach, where the designer writes the whole set of libraries and also has full control over block development, placement and routing. This is also the usual progression from entry-level design to professional design.
