
Monday, 16 January 2012

Shared Variable in VHDL

How to use a single variable in more than one process…!!

VHDL '87 limited the scope of a variable to the process in which it was declared. Signals were the only means of communication between processes, but signal assignments require an advance in either delta time or simulation time.

VHDL '93 introduced shared variables, which are visible to more than one process. Like ordinary VHDL variables, their assignments take effect immediately. However, caution must be exercised when using shared variables, because multiple processes assigning to the same shared variable can lead to unpredictable behaviour if the assignments are made concurrently. The VHDL '93 standard does not define the value of a shared variable if two or more processes make assignments to it in the same simulation cycle.

The syntax of a shared variable is similar to that of a normal variable; the keyword SHARED is simply placed in front of VARIABLE in the declaration.

Example:

architecture SV_example of example is
      shared variable status_signal : bit;
begin
     p1 : process (Clock, In1)
     begin
          if (status_signal = '1') then
          ...
          end if;
          status_signal := '0';
     end process p1;

     p2 : process (Clock, In2)
     begin
          if (status_signal = '0') then
          ....
          end if;
          status_signal := '1';
     end process p2;

end SV_example;

The above example shows the use of a shared variable in more than one process.

Saturday, 14 January 2012

IBM developing storage device of just 12 atoms..!!!

If you're impressed with how much data can be stored on your portable hard drive, well ... that's nothing. Scientists have now created a functioning magnetic data storage unit that measures just 4 by 16 nanometers, uses 12 atoms per bit, and can store an entire byte (8 bits) on as little as 96 atoms - by contrast, a regular hard drive requires half a billion atoms for each byte. It was created by a team of scientists from IBM and the German Center for Free-Electron Laser Science (CFEL), which is a joint venture of the Deutsches Elektronen-Synchrotron DESY research center in Hamburg, the Max-Planck-Society and the University of Hamburg.

The storage unit was created one atom at a time, using a scanning tunneling microscope located at IBM's Almaden Research Center in San Jose, California. Iron atoms were arranged in rows of six, these rows then grouped into pairs, each pair capable of storing one bit of information - a byte would require eight pairs of rows.

Each pair can be set to one of two possible magnetic configurations, which serve as the equivalent of a 1 or 0. Using the tip of the microscope, the scientists were able to flip between those two configurations on each pair, by administering an electric pulse. They were subsequently able to "read" the configuration of each pair, by applying a weaker pulse using the same microscope.

While conventional hard drives utilize a type of magnetism known as ferromagnetism, the atom-scale device uses its opposite, antiferromagnetism. In antiferromagnetic material, the spins of neighboring atoms are oppositely aligned, which keeps them from magnetically interfering with one another. The upshot is that the paired rows of atoms were able to be packed just one nanometer apart from one another, which wouldn't otherwise have been possible.

Before you start expecting to find antiferromagnetic rows of atoms in your smartphone, however, a little work still needs to be done. Presently, the material must be kept at a temperature of 5 Kelvin, or -268°C (-450°F). The IBM/CFEL researchers are confident, though, that subsequent arrays of 200 atoms could be stable at room temperature.

It was found that 12 atoms was the minimum number that could be used for storing each bit, before quantum effects set in and distorted the information. "We have learned to control quantum effects through form and size of the iron atom rows," said CFEL's Sebastian Loth. "We can now use this ability to investigate how quantum mechanics kicks in. What separates quantum magnets from classical magnets? How does a magnet behave at the frontier between both worlds? These are exciting questions that soon could be answered."

IBM Research - Almaden physicist Andreas Heinrich explains the industry-wide need to examine the future of storage at the atomic scale and how he and his teammates started with 1 atom and a scanning tunneling microscope and eventually succeeded in storing one bit of magnetic information reliably in 12 atoms.

VHDL Procedures

A procedure is a form of subprogram. It contains local declarations and a sequence of statements. A procedure can be called anywhere in an architecture. The procedure definition consists of two parts:

  • The procedure declaration, which contains the procedure name and the parameter list required when the procedure is called;

  • The procedure body, which consists of the local declarations and statements required to execute the procedure.

PROCEDURE DECLARATION

The procedure declaration consists of the procedure name and the formal parameter list.

In the procedure specification, the identifier and optional formal parameter list follow the reserved word procedure.

procedure Procedure_1 (variable X, Y: inout Real);

Objects of the classes constant, variable, signal, and file can be used as formal parameters. The class of each parameter is specified by the appropriate reserved word, unless the default class can be assumed (see below). In the case of constants, variables and signals, the parameter mode determines the direction of the information flow and decides which formal parameters can be read or written inside the procedure. Parameters of the file type have no mode assigned.

There are three modes available: in, out, and inout. When in mode is declared and the object class is not defined, the object is assumed by default to be a constant. In the case of inout and out modes, the default class is variable. When a procedure is called, formal parameters are substituted by actual parameters. If a formal parameter is a constant, then the actual parameter must be an expression. In the case of formal parameters of class signal, variable or file, the actual parameters must be objects of the same class. The example below presents several procedure declarations with parameters of different classes and modes.

procedure Proc_1 (constant In1: in Integer; variable O1: out Integer);
procedure Proc_2 (signal Sig: inout Bit);

A procedure can also be declared without any parameters.
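For illustration, a minimal sketch of a parameterless procedure (the name and the report text are ours, purely for illustration):

procedure Print_Header;                 -- declaration: no parameter list at all

procedure Print_Header is               -- body
begin
  report "--- simulation started ---";  -- simple action that needs no parameters
end procedure Print_Header;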

PROCEDURE BODY

The procedure body defines the procedure's algorithm, composed of sequential statements. When the procedure is called, it starts executing the sequence of statements declared inside the procedure body.

The procedure body consists of the subprogram declarative part, after the reserved word is, and the subprogram statement part, placed between the reserved words begin and end. The keyword procedure and the procedure name may optionally follow the end reserved word.

Declarations within a procedure body are local to it and can include subprogram declarations, subprogram bodies, types, subtypes, constants, variables, files, aliases, attribute declarations, attribute specifications, use clauses, group templates and group declarations.

procedure Proc_3 (X,Y : inout Integer) is
  type Word_16 is range 0 to 65535;
  subtype Byte is Word_16 range 0 to 255;
  variable Vb1,Vb2,Vb3 : Real;
  constant Pi : Real :=3.14;

procedure Compute (variable V1, V2: Real) is
  begin
    -- subprogram_statement_part
  end procedure Compute;

begin
    -- subprogram_statement_part
end procedure Proc_3;

A procedure can contain any sequential statements (including wait statements). A wait statement, however, cannot be used in procedures which are called from a process with a sensitivity list or from within a function.
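As a small sketch of this rule (the procedure, signal and constant names are ours), the following procedure contains wait statements and may therefore only be called from a process that has no sensitivity list:

procedure Toggle_After (signal Clk_Out : out bit; constant T : in time) is
begin
  Clk_Out <= '0';
  wait for T;       -- legal only because the calling process has no sensitivity list
  Clk_Out <= '1';
  wait for T;
end procedure Toggle_After;

-- possible call, e.g. a simple clock generator:
clk_gen : process   -- no sensitivity list, so the wait statements above are allowed
begin
  Toggle_After(Clk, 10 ns);
end process clk_gen;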

procedure Transcoder_1 (variable Value: inout bit_vector (0 to 7)) is
begin
  case Value is
    when "00000000" => Value:="01010101";
    when "01010101" => Value:="00000000";
    when others => Value:="11111111";
  end case;
end procedure Transcoder_1;

The procedure Transcoder_1 transforms the value of a single variable, which is therefore a bi-directional parameter.

procedure Comp_3(In1,R:in real; Step :in integer; W1,W2:out real) is
variable counter : Integer;  -- note: unused; the loop below implicitly declares its own loop parameter "counter"
begin
  W1 := 1.43 * In1;
  W2 := 1.0;
  L1: for counter in 1 to Step loop
    W2 := W2 * W1;
    exit L1 when W2 > R;
  end loop L1;
  assert ( W2 < R )
    report "Out of range"
      severity Error;
end procedure Comp_3;

The Comp_3 procedure calculates two variables of mode out, W1 and W2, both of type real. The parameters of mode in are In1 and R, constants of type real, and Step, of type integer. The W2 variable is calculated inside the loop statement. When the value of W2 becomes greater than R, the execution of the loop is terminated and the error report is issued.

PROCEDURE CALL

A procedure call is a sequential or concurrent statement, depending on where it is used. A sequential procedure call is executed whenever control reaches it, while a concurrent procedure call is activated whenever any of its parameters of in or inout mode changes its value.

All actual parameters in a procedure call must be of the same type as formal parameters they substitute.
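A small sketch of both kinds of call, assuming the Proc_2 declaration shown earlier is visible and that the entity example exists (the architecture and signal names are ours):

architecture Calls of example is
  signal S1, S2 : bit;
begin

  -- concurrent procedure call: written directly in the architecture body;
  -- it is re-activated whenever its inout parameter S1 changes value
  Proc_2(S1);

  -- sequential procedure call: written inside a process;
  -- it is executed whenever control reaches it
  p : process (S2)
  begin
    Proc_2(S2);
  end process p;

end Calls;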

OVERLOADED PROCEDURES

Overloaded procedures are procedures with the same name but with a different number or different types of formal parameters. The actual parameters decide which overloaded procedure will be called.

procedure Calculate (W1,W2: in Real; signal Out1:inout Integer);
procedure Calculate (W1,W2: in Integer; signal Out1: inout Real);
-- calling of overloaded procedures:
Calculate(23.76, 1.632, Sign1);
Calculate(23, 826, Sign2);

The procedure Calculate is overloaded, as the parameters can be of different types. Only when the procedure is called does the simulator determine which version of the procedure should be used, depending on the actual parameters.

Properties of Procedures

  • Synthesis tools usually support procedures as long as they do not contain wait statements.
  • The procedure declaration is optional - a procedure body can exist without it. If, however, a procedure declaration is used, then a procedure body must accompany it.

Friday, 13 January 2012

VHDL Functions and Procedures

Functions and procedures in VHDL, which are collectively known as subprograms, are directly analogous to functions and procedures in a high-level software programming language such as C or Pascal. A procedure is a subprogram that has an argument list consisting of inputs and outputs, and no return value. A function is a subprogram that has only inputs in its argument list, and has a return value.

Subprograms are useful for isolating commonly-used segments of VHDL source code. They can either be defined locally (within an architecture, for example), or they can be placed in a package and used globally throughout the design description or project.
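For example, a minimal sketch of a package that makes one function and one procedure globally visible (the package and subprogram names are ours, purely for illustration):

package My_Utils is
  function  Parity (V : bit_vector) return bit;
  procedure Swap (variable A, B : inout integer);
end package My_Utils;

package body My_Utils is

  function Parity (V : bit_vector) return bit is
    variable P : bit := '0';
  begin
    for I in V'range loop
      P := P xor V(I);      -- fold all bits into a single parity bit
    end loop;
    return P;
  end function Parity;

  procedure Swap (variable A, B : inout integer) is
    variable Tmp : integer;
  begin
    Tmp := A;
    A   := B;
    B   := Tmp;
  end procedure Swap;

end package body My_Utils;

Any design unit compiled into the same library can then make these subprograms visible with use work.My_Utils.all;.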

Statements within a subprogram are sequential (like a process), regardless of where the subprogram is invoked. Subprograms can be invoked from within the concurrent area of an architecture or from within a sequential process or higher-level subprogram. They can also be invoked from within other subprograms.

Subprograms are very much like processes in VHDL. In fact, any statement that you can enter in a VHDL process can also be entered in a function or procedure, with the exception of a wait statement (since a subprogram executes once each time it is invoked and cannot be suspended while it is executing). It is therefore useful to think of subprograms as processes that (1) have been located outside the body of an architecture, and (2) operate only on their input and (in the case of procedures) their output parameters.

Nesting of functions and procedures is allowed to any level of complexity, and recursion is also supported in the language. (Of course, if you expect to generate actual hardware from your VHDL descriptions using synthesis tools, then you will need to avoid writing recursive functions and procedures, as such descriptions are not synthesizable).

Thursday, 12 January 2012

VHDL Functions

I think every designer wants code that is simple and understandable. In simulation, functions can be used to accomplish all kinds of things, but for synthesis we must be more careful. Functions can be useful for modelling a component or a type conversion.

Warning, though!!! You must THINK HARDWARE!!!

Many functions have been defined in the IEEE libraries, e.g. rising_edge(CLK).
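As a quick illustration of such a library function in use, here is a minimal D flip-flop sketch (the entity and port names are ours) built around rising_edge from ieee.std_logic_1164:

library ieee;
use ieee.std_logic_1164.all;

entity dff is
  port (clk, d : in  std_logic;
        q      : out std_logic);
end dff;

architecture rtl of dff is
begin
  process (clk)
  begin
    if rising_edge(clk) then   -- IEEE library function that detects the clock edge
      q <= d;
    end if;
  end process;
end rtl;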

Function Properties :

  • Function parameters can only be inputs.
  • Functions can only return one value, specified by “return”.
  • Statements inside a function are sequential, but signal assignments and wait statements are not allowed.
  • No new signals can be declared, but variables can be.
  • A function may declare local variables. These do not retain their values between successive calls, but are re-initialized each time.
  • A function can be called as an expression, in either a concurrent or a sequential statement.
  • Functions can be included either explicitly or through the use of packages.

Syntax:

function function_name (parameter_list) return type is
    declarations
begin
    sequential statements
end function_name;

Example:
entity full_add is
port(
         a, b, carry_in: in bit;
         sum, carry_out: out bit);
end full_add;

architecture full_add_arch of full_add is

function carry (a, b, c: in bit) return bit is
  begin
        return ((a and b) or (a and c) or (b and c));
end carry;

begin
       sum <= a xor b xor carry_in;
       carry_out <= carry(a, b, carry_in);
end full_add_arch;

Tuesday, 10 January 2012

VLSI Interview Questions - 2

  • What is the frequency of the DDR / voltage
  • What is the memory size ; explain prefetch in memory context
  • What is the Bit length for data
  • Basic protocol level DDR knowledge
  • What is absolute jitter
  • What are the types of jitter you know
  • How do you make power measurements
  • Asynchronous reset flip flop / Synchronous reset flip flop difference
  • What is an asynchronous reset D flip flop
  • How do you double the clock frequency using combinational logic
  • What do you understand by synthesis
  • What is the basic difference between ASIC and FPGA design flow
  • Blocking and non-blocking statements
  • Tools used for front end
  • PCI clock frequency
  • What is metastability
  • Delay parameters which matter for DDR (CAS latency - what do you know about it)
  • RAS / CAS
  • Master – Slave FF
  • Add delay on FF1-FF2 D1Q-D2 path  and analyze a circuit ( a double inverter)
  • Swap the delay onto the clock line and analyze the circuit ( double inverter )
  • Delay nos. given 20 ns (double inverter) on clock skew line, 5 ns on the first FF to second FF line ; 100 ns clock period – analyze the circuit
  • 4:1 mux from 2:1 mux ABCD in order – draw truth table and prove
  • A equality comparator design – make it an inverter
  • XOR gate from NAND gate
  • Explain DDR protocol and timing
  • Ethernet packet format
  • Test setup and explain settings
  • Two critical debug you have done in your career and lessons learnt
  • Decoder design – explain address decoder how it works given x number of rows and columns draw timing and circuit
  • 8085 block diagram ( general uP concepts)
  • DRAM
  • FF can be used in memory? Why / why not ?  FF vs DRAM
  • Five skills obtained from board design / rules – best practices
  • Latch vs FF
  • VHDL code snippet
  • SR FF.
  • DDR banks
  • 100 MHz clock is used to give input – need to send out data at 200 MHz suggest circuits for this
  • DDR explanation – chip level
  • 100 MHz in from 1 PLL clock / 100 MHz out from PLL2 clock – design circuit
  • What problems will come in case (q 20 / 22)
  • FIFO design details and problems
  • some more design problems were asked to be analyzed
  • What is set up time
  • What is hold time
  • ASIC Design flow
  • Challenges in ASIC Design
  • Latch and Flip-Flop
  • Design a simple circuit for motion detector
  • Use of a decoder
  • Types of Flip Flop
  • Which is the most common flip flop used in ASIC designs
  • FF --- Combinational Logic --- FF ( Analysis of standard circuit)
  • Analysis of circuit with delays ( buffers added to clock lines)
  • How to find the maximum clock frequency of a given circuit
  • Synthesis tools and styles
  • Timing constraints to be given for ASIC design
  • What happens when you decrease the clock frequency - do setup / hold time violations at, say, 300 MHz vanish at 3 MHz
  • What all influence the delay of an element ( Flop – capacitance ?)
  • What parameters influence delay ( temperature effect on delay)
  • If input transition is faster what happens to delay of a cell
  • What do you understand by drive strength
  • High drive of a cell – correlates to what ?
  • Importance of hold time (adder can become subtractor – Function change!!)
  • How to solve set-up time violations
  • How to solve hold time violations
  • What is PRBS
  • What is the difference between single ended and differential
  • Why is PRBS needed in a tester
  • USB protocol / packet level understanding? Basics explanation
  • 80 MHz DDR – what do you understand from this
  • SDR and DDR difference and advantages
  • Test setup
  • Triplexer – why passive optical networks what it means
  • WDM – CO – CPE
  • What do you understand by a Loopback why is it needed
  • Challenges in finding maximum clock frequency in ASIC design
  • Power estimation in chips ?
  • Why is place and route important – any understanding of the same
  • What is skew – clock skew
  • What is slew – slew rate
  • Why do you want to do verification and enter ASIC domain
  • What is jitter
  • What is cycle – cycle / period jitter. How is it estimated
  • Common i/fs in a system
  • Pulse width
  • Why is setup and hold time first needed
  • Effect of temperature on delays ( delay increases with temperature)
  • Why clock skew arises
  • What is positive and negative skew
  • Is positive skew an advantage or disadvantage - how does it help
  • What is the worst pattern that can be used to test a set of lines
  • SSN – crosstalk
  • What do you actually look for in SI
  • What do you do in a bring-up
  • What is Custom and Semi-Custom ASIC design
  • ASIC – FPGA difference ( low power is a key)
  • When a Flop is used; when a latch is used and why?
  • Why random patterns?
  • DFM?
  • Clock tree routing problems
  • Models for components
  • Buffer circuit in IOs – pulse width distortion / duty cycle distortion: why it happens; performance before and after pads; causes of degradation
  • Can you explain a general verification methodology flow 
  • Explain your verification architecture 
  • Why do you think we need functional coverage 
  • Can you explain e-manager coverage implementation methods you have used 
  • DDR + problems you faced in bring up 
  • Can you give me an FSM/code/circuit to implement the following waveform
  • 32 bit addr / 32 bit data / size -- map to 64 bit memory - give structure / how will you sample data for byte, word, half word, dword accesses 
  • You have 256 MB of system memory (insufficient, say, for your huge ASIC) - how will you verify
  • Dynamic memory 
  • List and indexed lists 
  • Can you explain some RISC processor architecture you know 
  • RISC vs CISC you know from college 
  • How can Specman handle semaphores
  • Some addressing fundamentals
  • Multiple threads in your env - what did you implement to run three cores simultaneously. 
  • AXI - addressing ; 4k page boundary cross over fetches; wrapping concept ; Multiple slave out of order transaction support - waveforms as to how these transactions will be ; size / length concepts

Monday, 2 January 2012

Silver jubilee of annual VLSI Design Conference in Hyderabad

The 25th International Conference on VLSI Design and the 11th International Conference on Embedded Systems is being held during January 7 – 11, 2012 at Hyderabad.

This year marks the silver jubilee of the VLSI Design Conference and therefore features an overview of the history of the conference by Vishwani D. Agarwal from Auburn University. The theme this year is Embedded solutions for emerging markets – consumer, energy and automotive.

Jaswinder Ahuja, of Cadence India, and President, VLSI Society of India, delivers the opening keynote on 'Semiconductor industry: Best of times, worst of times, and nowhere else would I rather be!'

Two other lectures are 'Emerging Trends in Process Technologies' by Jean Boufarhat of AMD and 'Challenges in Automotive Cyber-physical Systems Design' by Samarjit Chakraborty of the Technical University of Munich, Germany.

The conference, which features tutorials on January 7-8, 2012, really gets going during January 9-11, 2012, and will also see a panel discussion on 'SoC Realization – A Bridge to New Horizons or a Bridge to Nowhere?'

Among other attractions are Intel's Ravi Kuppuswamy on 15 billion by 2015 - Transformation of Embedded Devices to Intelligent Systems; Rajesh Gupta from the University of California San Diego on The Variability Expeditions: Exploring the Software Stack for Underdesigned Computing Machines; Berty Gyselinckx from IMEC, Belgium on A wireless sensor a day keeps the doctor away; Sri Parameswaran from the University of New South Wales on Security and Reliability in Embedded Systems; and Suresh Menon of Xilinx on the FPGA Roadmap: Technology Challenges and Transitioning to Stacked Silicon Interconnect.

The guest lecture is to be delivered by Rajeev Madhavan, chairman and CEO, Magma Design Automation on The future of Semiconductor Design; What the Indian Electronics Industry can learn from Apple.

Meanwhile, the 3rd IEEE International Workshop on Reliability Aware System Design and Test is also being held in Hyderabad during January 7-8, 2012, in conjunction with the VLSI Design Conference.

The glitch that stole the FPGA's energy efficiency

Field-programmable gate arrays (FPGAs) are notorious for high power consumption. They are hard to power down in the same way as custom logic - so they have considerable static power consumption - and they use many more gates to do the same job because of their greater flexibility.

However, a good proportion of an FPGA's power consumption is avoidable. A 2007 study carried out by researchers at the University of British Columbia and published in IEEE Transactions on VLSI Systems found that up to three quarters of the dynamic power consumption could be ascribed to glitches rather than actual functional state transitions for some types of circuit.

The heart of the problem lies with timing: early-arriving signals can drive outputs to the wrong state before the situation is 'corrected' by later signals and before the final state is ready for sampling at the next clock transition. When you consider the large die size of FPGAs relative to custom logic, it is not hard to see why delays between signals can be so large.

UBC's Julien Lamoureux and colleagues recommended the use of delay elements to align signals in time to reduce these glitch events.

At the International Symposium on Low Power Electronic Design earlier this year, Warren Shum and Jason Anderson of the University of Toronto proposed an alternative: making use of the don't care conditions used in logic synthesis to also filter out potential glitches:

"This process is performed after placement and routing, using timing simulation data to guide the algorithm...Since the placement and routing are maintained, this optimization has zero cost in terms of area and delay, and can be executed after timing closure is completed."

The alterations are made in the LUTs iteratively to create new truth tables that will reduce the number of glitch transitions during operation, borrowing some concepts from asynchronous design where glitches are considered actively dangerous rather than inconvenient. On benchmark circuits, the technique reduced glitch power by around 14 per cent on average and up to half in some cases.

Intel 32nm Medfield mobile processor specs and benchmarks leaked

The popular chip maker Intel, which I like to call chipzilla, is preparing its own mobile processor, or SoC (system on chip), called Medfield, which we have all probably heard about a few times. They've shown off a prototype device too, and more details on that are below. Today we have some leaked specs and benchmarks that are actually quite impressive and put this new Intel mobile processor right up there with NVIDIA's Tegra 2 and Qualcomm's dual-core chipsets. The more competition the better, right?

Intel may still be a way off from launching Android smartphones and tablets, but with full support in Android 4.0 Ice Cream Sandwich they are headed in the right direction - and now we have specs and performance figures to help our minds wander with the possibilities. Apparently VR-Zone got all the info on Intel's first true attempt at a full-blown SoC, and we have all the details.

The 1.6 GHz x86 Intel mobile processor was running on a reference-design 10-inch tablet with 1 GB of RAM, 16 GB of storage, WiFi, Bluetooth, cameras and all the other usual stuff, and was pitted against the current big dogs like the Tegra 2. Apparently they ran a few CaffeineMark 3 benchmarks, and the higher-clocked Intel Atom scored around 10,500 while the Tegra 2 hit 7,500 and Qualcomm's 1.5 GHz dual-core racked up around 8,000 points.

Power consumption is currently higher than wanted or anticipated, which could obviously cause a problem with battery life. Intel plans to cut that down a bit and make some strides in efficiency, not to mention launch on Android 4.0 ICS devices later next year. I'm sure we'll be seeing more than a few production units at CES 2012, so stay tuned as our entire team will be there live.

A processor company as huge as Intel running on a wide array of Android devices could be a game-changer if done right, so we'll continue to monitor and update as we hear more.