
Saturday, 14 May 2011

Toward faster transistors

MIT physicists discover a new physical phenomenon that could eventually lead to the first increases in computers’ clock speed since 2002.


In the 1980s and ’90s, competition in the computer industry was all about “clock speed” — how many megahertz, and ultimately gigahertz, a chip could boast. But clock speeds stalled out almost 10 years ago: Chips that run faster also run hotter, and with existing technology, there seems to be no way to increase clock speed without causing chips to overheat.

In this week’s issue of the journal Science, MIT researchers and their colleagues at the University of Augsburg in Germany report the discovery of a new physical phenomenon that could yield transistors with greatly enhanced capacitance — a measure of how much charge a device stores for a given voltage. And that, in turn, could lead to the revival of clock speed as the measure of a computer’s power.

In today’s computer chips, transistors are made from semiconductors, such as silicon. Each transistor includes an electrode called the gate; applying a voltage to the gate causes electrons to accumulate underneath it. The electrons constitute a channel through which an electrical current can pass, turning the semiconductor into a conductor.

Capacitance measures how much charge accumulates below the gate for a given voltage. The power that a chip consumes, and the heat it gives off, are roughly proportional to the square of the gate’s operating voltage. So lowering the voltage could drastically reduce the heat, creating new room to crank up the clock.
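The article does not give the formula, but the standard first-order relation for CMOS switching power makes the square-law dependence concrete (a textbook expression, not something reported by the researchers):

```latex
% First-order CMOS dynamic (switching) power:
%   alpha = switching activity factor, C = switched capacitance,
%   V = supply voltage, f = clock frequency
P_{dyn} = \alpha \, C \, V^{2} \, f
```

Under this relation, halving the supply voltage at a fixed frequency cuts switching power to a quarter, or equivalently leaves thermal headroom to raise the clock frequency.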

Shoomp!

MIT Professor of Physics Raymond Ashoori and Lu Li, a postdoc and Pappalardo Fellow in his lab — together with Christoph Richter, Stefan Paetel, Thilo Kopp and Jochen Mannhart of the University of Augsburg — investigated the unusual physical system that results when lanthanum aluminate is grown on top of strontium titanate. Lanthanum aluminate consists of alternating layers of lanthanum oxide and aluminum oxide. The lanthanum-based layers have a slight positive charge; the aluminum-based layers, a slight negative charge. The result is a series of electric fields that all add up in the same direction, creating an electric potential between the top and bottom of the material.

Ordinarily, both lanthanum aluminate and strontium titanate are excellent insulators, meaning that they don’t conduct electrical current. But physicists had speculated that if the lanthanum aluminate gets thick enough, its electrical potential would increase to the point that some electrons would have to move from the top of the material to the bottom, to prevent what’s called a “polarization catastrophe.” The result is a conductive channel at the juncture with the strontium titanate — much like the one that forms when a transistor is switched on. So Ashoori and his collaborators decided to measure the capacitance between that channel and a gate electrode on top of the lanthanum aluminate.

They were amazed by what they found: Although their results were somewhat limited by their experimental apparatus, it may be that an infinitesimal change in voltage will cause a large amount of charge to enter the channel between the two materials. “The channel may suck in charge — shoomp! Like a vacuum,” Ashoori says. “And it operates at room temperature, which is the thing that really stunned us.”

Indeed, the material’s capacitance is so high that the researchers don’t believe it can be explained by existing physics. “We’ve seen the same kind of thing in semiconductors,” Ashoori says, “but that was a very pure sample, and the effect was very small. This is a super-dirty sample and a super-big effect.” It’s still not clear, Ashoori says, just why the effect is so big: “It could be a new quantum-mechanical effect or some unknown physics of the material.”

“For capacitance, there is a formula that was assumed to be correct and was used in the computer industry and is in all the textbooks,” says Jean-Marc Triscone, a professor of physics at the University of Geneva whose group has published several papers on the juncture between lanthanum aluminate and strontium titanate. “What the MIT team and Mannhart showed is that to describe their system, this formula has to be modified.”
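Triscone does not name the formula in the article; the textbook relation he is most plausibly referring to is the classical geometric (parallel-plate) capacitance, given here only as background:

```latex
% Classical parallel-plate capacitance:
%   epsilon = permittivity of the gate dielectric,
%   A = plate (gate) area, d = dielectric thickness
C = \frac{\varepsilon A}{d}
```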

There is one drawback to the system that the researchers investigated: While a lot of charge will move into the channel between materials with a slight change in voltage, it moves slowly — much too slowly for the type of high-frequency switching that takes place in computer chips. That could be because the samples of the material are, as Ashoori says, “super dirty”; purer samples might exhibit less electrical resistance. But it’s also possible that, if researchers can understand the physical phenomena underlying the material’s remarkable capacitance, they may be able to reproduce them in more practical materials.

Triscone cautions that wholesale changes to the way computer chips are manufactured will inevitably face resistance. “So much money has been injected into the semiconductor industry for decades that to do something new, you need a really disruptive technology,” he says.

“It’s not going to revolutionize electronics tomorrow,” Ashoori agrees. “But this mechanism exists, and once we know it exists, if we can understand what it is, we can try to engineer it.”



Source: http://web.mit.edu

Sunday, 8 May 2011

VLSI Interview Questions-1


1. What is the difference between Mealy and Moore state machines?
2. How do you solve setup and hold violations in a design?
3. What is an antenna violation, and what are the ways to prevent it?
4. We have multiple instances in RTL (Register Transfer Level); do you do anything special during the synthesis stage?
5. What are tie-high and tie-low cells, and where are they used?
6. What is the difference between latch-based and flip-flop-based designs?
7. What are high-Vt and low-Vt cells?
8. What does LEF mean?
9. What does DEF mean?
10. What are the steps involved in designing an optimal padring?
11. What is metastability, and what are the steps to prevent it?
12. What are local skew, global skew and useful skew?
13. What are the various timing paths I should take care of in my STA runs?
14. What are the various components of leakage power?
15. What are the various yield losses in a design?
16. What is meant by a virtual clock definition, and why do I need it?
17. What are the various variations that impact the timing of a design?
18. What are the various design constraints used while performing synthesis for a design?
19. Specify a few Verilog constructs that are not supported by synthesis tools.
20. What are the various capacitances associated with a MOSFET?
21. Sketch the Ids-Vds curves for a MOSFET with increasing Vgs.
22. Explain the basic operation of a MOSFET.
23. What is channel-length modulation?
24. What is the body effect?
25. What is latch-up in CMOS design, and what are the ways to prevent it?
26. What design changes do you make to meet design power targets?
27. What is meant by library characterization?
28. What is meant by a wireload model?
29. What measures can be taken to design for optimized area?
30. What do you think about while performing floorplanning?
31. What measures are taken in a design to meet signal integrity targets?
32. What measures are taken in a design to achieve better yield?
33. What measures or precautions should be taken in a design when the chip has both analog and digital portions?
34. What are the steps in an Engineering Change Order (ECO)?
35. What steps are performed to achieve a lithography-friendly design?
36. What does synthesis mean?
37. What are the prerequisites for performing synthesis?
38. Can you explain the synthesis flow?
39. What are the various ways to reduce clock insertion delay in a design?
40. What are the various functional verification methodologies?
41. What does formal verification mean?
42. How will you time the output path in STA?
43. How will you time the input path in STA?
44. What does a false path mean in STA, and in what scenarios can false paths arise?
45. What does a multicycle path (MCP) mean in STA, and in what scenarios can an MCP arise?
46. What are source-synchronous paths in STA?
47. Assume there is a specific requirement to preserve certain logic during synthesis; how will you achieve it?
48. When do you call something an event, and when do you call it an assertion?

Thursday, 5 May 2011

ISRO Builds India's Fastest Supercomputer

The Indian Space Research Organisation (ISRO) has built a supercomputer that is India's fastest in terms of theoretical peak performance: 220 TeraFLOPS (220 trillion floating-point operations per second). The supercomputing facility, named the Satish Dhawan Supercomputing Facility, is located at the Vikram Sarabhai Space Centre (VSSC), Thiruvananthapuram. The new Graphics Processing Unit (GPU) based supercomputer, named "SAGA-220" (Supercomputer for Aerospace with GPU Architecture - 220 TeraFLOPS), is being used by space scientists for solving complex aerospace problems. SAGA-220 was inaugurated today at VSSC by Dr K Radhakrishnan, Chairman, ISRO.

"SAGA-220" Supercomputer is fully designed and built by Vikram Sarabhai Space Centre using commercially available hardware, open source software components and in house developments. The system uses 400 NVIDIA Tesla 2070 GPUs and 400 Intel Quad Core Xeon CPUs supplied by WIPRO with a high speed interconnect. With each GPU and CPU providing a performance of 500 GigaFLOPS and 50 GigaFLOPS respectively, the theoretical peak performance of the system amounts to 220 TeraFLOPS. The present GPU system offers significant advantage over the conventional CPU based system in terms of cost, power and space requirements. The total cost of this Supercomputer is about Rs. 14 crores. The system is environmentally green and consumes a power of only 150 kW. This system can also be easily scaled to many PetaFLOPS (1000 TeraFLOPS).

Saturday, 2 April 2011

Dynamic Timing Analysis

Dynamic timing analysis verifies circuit timing by applying test vectors to the circuit. This approach is an extension of simulation and ensures that circuit timing is tested in its functional context. This method reports timing errors that functionally exist in the circuit and avoids reporting errors that occur in unused circuit paths.

The most common dynamic timing analysis is the so-called min-max analysis method. Under min-max timing analysis, both the minimum and maximum delays of circuit components are used, so outputs are reported as ranges (the spread between the earliest and latest arrival times) instead of discrete edges. Since outputs are in turn fed into further inputs, managing and merging these ranges can become very complex, and because both the min and max versions of the delays must be simulated, simulation speed is extremely slow.
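As a minimal sketch of the idea (hypothetical gates and delay values, not the algorithm of any particular tool), min-max analysis can be modeled by propagating an (earliest, latest) arrival interval through each stage:

```python
def propagate(inputs, d_min, d_max):
    """Propagate (earliest, latest) arrival intervals through a gate.

    The output can start changing once the earliest input edge passes
    through the fastest path, and is only guaranteed stable after the
    latest input edge passes through the slowest path.
    """
    earliest = min(lo for lo, _ in inputs) + d_min
    latest = max(hi for _, hi in inputs) + d_max
    return (earliest, latest)

# Two inputs switching exactly at t=0 (zero-width intervals) ...
a = (0.0, 0.0)
b = (0.0, 0.0)
# ... widen into ranges as they pass through gates with min/max delays.
n1 = propagate([a, b], d_min=0.8, d_max=1.4)   # (0.8, 1.4)
n2 = propagate([n1, b], d_min=0.5, d_max=0.9)  # (0.5, 2.3)
print(n1, n2)  # ambiguity regions grow with logic depth
```

Even this toy version shows why the ranges are hard to manage: each level of logic widens the ambiguity region that downstream gates must absorb.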

Another major issue with dynamic timing analysis is incomplete coverage: it only checks circuitry that is exercised by the test stimulus, which may leave critical paths untested and timing problems undiscovered. It is also not path oriented. Since dynamic timing analysis reports errors on a certain pin at a certain time, the user must trace through the schematic to locate the path that caused the problem, which is difficult for large designs.

Finally, this method requires development time for test vectors. Dynamic timing analysis tools track more information than logic simulators, making their performance slower. Also, each component must contain both timing information and a functional model before timing verification can proceed, which can prevent the use of new parts that do not yet have functional models.

It should be noted that min-max simulation is not currently used in the industry. Instead, either functional simulation with timing (timing simulation) or formal verification is typically used to verify complex IC designs. Typically, designers use the max version of the delays to verify that the circuit works under worst-case timing (no setup issues) and the min version of the delays to verify best-case timing (no hold issues).
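A minimal sketch of that convention, with illustrative numbers and the usual simplified inequalities (clock skew and uncertainty ignored): worst-case (max) delays feed the setup check, best-case (min) delays feed the hold check:

```python
# Register-to-register path on a single clock (skew/jitter ignored).
T_CLK = 4.0           # clock period (ns)
T_CLK2Q = (0.3, 0.6)  # clock-to-Q delay of launching flop (min, max)
T_COMB = (0.4, 2.9)   # combinational path delay (min, max)
T_SETUP = 0.4         # setup requirement of capturing flop
T_HOLD = 0.2          # hold requirement of capturing flop

# Setup: the slowest data must arrive before the next clock edge.
setup_slack = T_CLK - (T_CLK2Q[1] + T_COMB[1] + T_SETUP)
# Hold: the fastest data must not race through before the same edge.
hold_slack = (T_CLK2Q[0] + T_COMB[0]) - T_HOLD

print(f"setup slack = {setup_slack:+.2f} ns")  # +0.10 -> setup met
print(f"hold slack  = {hold_slack:+.2f} ns")   # +0.50 -> hold met
```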

Advantages:
1. Extends coverage of circuit simulation (from edges to regions).
2. Evaluates worst-case timing using both min. and max. delay values for components.
3. Uses the same test stimulus as logic simulation.
4. Does not report false errors.

Disadvantages:
1. It is not complete.
2. It is not path oriented.
3. It is slower than logic simulation and may require additional test stimulus.
4. It requires functional behavioral models.

Dynamic timing analysis extends logic simulation by reporting violations in terms of simulation times and states. To test circuit timing under worst-case conditions, dynamic timing analysis evaluates the circuit using the minimum and maximum propagation delays for each component in the design.

Since dynamic timing analysis performs a simulation, it can use the same stimulus as a logic simulation. Because the stimulus functionally exercises the design, unused or uninteresting paths are not tested, so false errors on those paths are not reported. Note that a timing simulation reports results differently than a logic simulation: a logic simulation reports results as edge times, while a timing simulation reports results as regions of ambiguity. The results of a timing simulation do not specify exactly when an event occurs; they specify a range of time in which the event can occur.

Static Timing Analysis

Static timing analysis verifies circuit timing by adding up propagation delays along paths between clocked elements in a circuit. It checks the delays along each path against the specified timing constraints and reports any existing timing violations. Static timing analysis tools can determine and report timing statistics such as the total number of paths, the delay of each path and the circuit’s most critical paths. As design complexity increases, performing timing analysis manually becomes extremely difficult and sometimes even impossible. With the increasing popularity of HDL-based design methodologies, static timing analysis has become increasingly popular among digital logic designers. To summarize, both static and dynamic timing analysis methods offer tradeoffs; one is not a replacement for the other. However, the static timing analysis method offers more complete coverage, little overhead, and the ability to report errors in terms of the design schematic.
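As a toy illustration of that path-summing idea (a hypothetical netlist with made-up delays, far simpler than a real STA engine), every path through a small combinational DAG can be enumerated, its delays summed, and the total checked against a constraint:

```python
# Max delay (ns) contributed by each node of a small combinational DAG.
delay = {"in": 0.0, "g1": 1.2, "g2": 0.8, "g3": 1.5, "out": 0.1}
# Fan-out edges: each node -> the nodes it drives.
fanout = {"in": ["g1", "g2"], "g1": ["g3"],
          "g2": ["g3", "out"], "g3": ["out"], "out": []}

def paths(node, acc, trail, found):
    """Depth-first enumeration of all paths, accumulating delay."""
    acc += delay[node]
    trail = trail + [node]
    if not fanout[node]:          # reached an endpoint
        found.append((acc, trail))
    for nxt in fanout[node]:
        paths(nxt, acc, trail, found)
    return found

REQUIRED = 2.5  # timing constraint (ns)
for total, trail in sorted(paths("in", 0.0, [], []), reverse=True):
    status = "VIOLATED" if total > REQUIRED else "met"
    print(f"{' -> '.join(trail)}: {total:.1f} ns ({status})")
```

Real tools avoid explicit path enumeration (path counts explode combinatorially) by propagating worst-case arrival times node by node, but the check being performed is the same.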

Advantages:
1. It resembles manual analysis methods.
2. It is path oriented and finds all setup and hold violations.
3. It does not require stimulus or functional models.
4. It is faster than simulation (for the same amount of coverage).

Disadvantages:
1. It can report false errors.
2. It cannot detect timing errors related to logical operation.

Static timing analysis is similar to the manual analysis process, except that it is automated, which allows a design to be analyzed much faster. This makes it possible for a designer to experiment with different synthesis options and constraints in a short time. The method is also complete, because it traces and evaluates all paths in a design, not just those exercised by test stimulus.

Because static timing analysis does not perform logic simulation, test stimulus and functional models are not required. This makes static analysis available earlier in the design cycle, since development time for stimulus and models is not needed.

The modeling requirements for a static analysis tool are relatively simple; however, timing information for each component in the design is required, and the designer must specify waveform information about the input data and clock signals the design uses. The component timing information can be found in parts libraries or data books, and typically includes pin-to-pin delays, setup and hold time specifications, signal inversion information, and clock frequency constraints. Clock and data waveforms are a normal requirement of the design process and do not require additional development time.
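A sketch of what such a per-cell timing record might look like (the field names are illustrative, not taken from any real library format):

```python
from dataclasses import dataclass

@dataclass
class CellTiming:
    """Illustrative per-cell timing data a static analyzer consumes."""
    name: str
    # Pin-to-pin propagation delays in ns: (input_pin, output_pin) -> (min, max)
    pin_delays: dict
    setup: float = 0.0       # setup requirement (sequential cells only)
    hold: float = 0.0        # hold requirement (sequential cells only)
    inverting: bool = False  # whether the output polarity is inverted

nand2 = CellTiming("NAND2", {("A", "Y"): (0.05, 0.12),
                             ("B", "Y"): (0.06, 0.14)}, inverting=True)
```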

The major drawback of a static timing analysis tool is that it can report false errors. By checking all possible paths in a design, static timing analysis ensures that all possible setup and hold violations in the circuit are found; however, because circuit behavior is not considered during the analysis, some of the reported errors may be false. Static analysis tools also cannot detect timing errors related to logical operation: because static timing analysis does not perform functional testing, it cannot detect timing errors, such as race conditions, that depend on the logical operation of the circuit.

Difference between Static and Dynamic timing analysis

Dynamic timing analysis uses simulation vectors to verify that the circuit computes accurate results from a given input without any timing violations. The problem is that the simulation vectors cannot guarantee 100% coverage; the goal of dynamic analysis is to approach 100% coverage. Dynamic timing simulation is still preferred for non-synchronous logic styles. As a rule, however, only dynamic timing verification tools support glitch and race-condition detection, since these are inherently dynamic events.

Static timing analysis, on the other hand, checks all paths in the circuit, even the false paths. False paths are paths that are not possible, or not interesting, in the actual operation of the circuit. You could therefore say that static analysis starts above 100% coverage and works down towards 100% by detecting and excluding the false paths. Static tools have made major advancements in recent years; in fact, all synthesis tools use static timing analysis internally. A strength of this approach is that almost all tools using it support multicycle paths, in which a path delay constraint exceeds a single clock period. Not everything is ideal, though: many static timing tools have problems with feedback loops.
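For an N-cycle path, the setup requirement is simply relaxed from one clock period to N periods; in the same simplified single-clock form as the earlier setup check:

```latex
% Multicycle setup check over N clock cycles of period T
% (skew and uncertainty ignored):
t_{clk2q} + t_{comb} + t_{setup} \le N \cdot T
```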

Timing Analysis

Depending on the design methodology, three types of timing analysis are commonly used:

1. Manual analysis
2. Static timing analysis
3. Dynamic timing analysis

Latch-based designs are not common in large-scale integration; latch-based static timing analysis will be covered in a separate section in later posts.

Manual Analysis: Manual analysis consists of taking a schematic or a netlist, determining the times at which signals arrive at or leave the input and output ports of the design, and calculating the delay of each path by adding up the delay times of the components along it. The objective of the process is to ensure that all signals meet the circuit constraints. This method works well for simple circuits, but it is impractical for large designs or an iterative design process.
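For example (made-up numbers), a path through three components with delays of 0.9 ns, 1.6 ns and 0.7 ns, against a 4 ns constraint, would be checked by hand as:

```latex
t_{path} = 0.9 + 1.6 + 0.7 = 3.2\ \text{ns} \le t_{required} = 4.0\ \text{ns}
```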

Static Timing Analysis: Static timing analysis verifies circuit timing by adding up propagation delays along paths between clocked elements in a circuit. It checks the delays along each path against the specified timing constraints for each circuit path and reports any existing timing violations. Static timing analysis tools can determine and report timing statistics such as the total number of paths, delays for each path and the circuit’s most critical paths.

Dynamic Timing Analysis: Dynamic timing analysis verifies circuit timing by applying test vectors to the circuit. This approach is an extension of simulation and ensures that circuit timing is tested in its functional context. This method reports timing errors that functionally exist in the circuit and avoids reporting errors that occur in unused circuit paths.