
Sunday, 13 July 2014

Calculations using command-line

Many of you do most of your work from the command line, using vim to edit files, mutt for e-mail, and cd/ls/mv/find/etc. instead of a file manager. If so, you may get annoyed by having to fire up a GUI calculator to make what may sometimes be a single calculation.

One useful feature of calculating on the command line is that you can see what you've typed. For instance, when I'm entering a long, complex calculation on a calculator (either the GUI or the solid, hold-in-your-hand type), I sometimes forget whether I've actually typed in all those numbers or made the calculations in the right order. Maybe it's just me ...

This article shows how to quickly perform standard calculations on the command line including addition, subtraction, multiplication, division, square root, powers, conversion from decimal to hex, decimal to binary, hex to decimal, and binary to decimal. It also briefly introduces using bc in interactive mode and how to write files for use with bc for frequently repeated operations. There is a mention of using Google for performing calculations. It finishes with a little challenge to test the power of your CPU.

Other advantages of using bc include:

  • bc is included as standard with (almost?) all Linux distributions, as well as (again, almost?) all flavours of Unix.
  • Some proprietary flavours of bc can give results with up to 99 decimal digits before and after the decimal point. GNU bc greatly surpasses that limit. I don't know exactly what its limit is, but it's at least many tens of thousands of digits, certainly more than any GUI-based calculator I've used could accommodate.
  • You may also find yourself working in an environment where you simply don't have access to a GUI.
  • The syntax for basic sums is almost identical to Google's calculator function, so you can learn how to use two utilities in one go!

bc is a pre-processor for dc. The useful thing about bc is that it accepts input from files and from standard input. This allows us to pipe data to it for quick calculations.

  • addition
  • subtraction
  • multiplication
  • scale
  • division
  • square root
  • power
  • parentheses
  • obase and ibase
  • convert from decimal to hexadecimal
  • convert from decimal to binary
  • convert from binary to decimal
  • convert from hexadecimal to decimal
  • a brief introduction to interactive mode
  • using bc with shell scripts
  • a brief introduction to using bc with files
  • a quick challenge for your PC (GNU bc only)

Most of these examples follow a simple formula.

addition

$ echo '57+43' | bc
100

subtraction

$ echo '57-43' | bc
14

multiplication

$ echo '57*43' | bc
2451

scale
The scale variable determines the number of digits which follow the decimal point in your result. By default, the value of scale is zero. (Unless you use the -l option, in which case it defaults to 20 decimal places. More about -l later.) You can set it by declaring scale before your calculation, as in the following division example:

division

$ echo 'scale=25;57/43' | bc
1.3255813953488372093023255

square root

$ echo 'scale=30;sqrt(2)' | bc
1.414213562373095048801688724209
This beats Google's calculator function, which only gives results to 8 decimal places! ;-) On the other hand, despite that limitation, Google's calculator will allow imaginary numbers as answers.

power

$ echo '6^6' | bc
46656

parentheses
If you have read Robert Heinlein's The Number of the Beast, you may recall that the number of parallel universes in the story equals (six to the power of six) to the power of six. If you should try to calculate that like this:

$ echo '6^6^6' | bc
You will get a screen full of numbers (some 37374 digits), not the
10314424798490535546171949056

that you might expect.

If you're running a non-GNU version of bc, you'll most likely get something like:

exp too big
empty stack
save:args

The Google Calculator balks at '6^6^6' as well. Good ol' GNU.

That's because you typed the wrong question. You need to type:

$ echo '(6^6)^6' | bc

Whereas what you did type was interpreted as:

$ echo '6^(6^6)' | bc

which is an entirely different number. So the positioning of parentheses (brackets to you and me!) is very important. I use brackets to separate the different components of my sums whenever possible, just to eliminate any possible doubt that I might get the wrong answer. Consider the following calculations:

$ echo '7+(6*5)' | bc

$ echo '7+6*5' | bc

$ echo '6*5+7' | bc

They all give the same answer, 37, but I would have typed the first calculation, unless of course, I meant:

$ echo '(7+6)*5' | bc

Or to put it another way:

$ echo '13*5' | bc

which is 65.

obase and ibase

obase and ibase are special variables which define output and input base.

Legitimate obase values range from 2 to 999, although anything beyond 16 is wasted on me!

Legitimate ibase values range from 2 to 16.

Some examples will explain all this better.

convert from decimal to hexadecimal

Here we're converting 255 from base 10 to base 16:

$ echo 'obase=16;255' | bc
FF

convert from decimal to binary

And here we're converting the number 12 from base 10 to base 2:

$ echo 'obase=2;12' | bc
1100

Which reminds me of the old joke:

There are only 10 types of people in the world -- those who understand binary, and those who don't.

Which leads us neatly onto the next example:

convert from binary to decimal

Here we're converting the binary number 10 to a base 10 (decimal) number.

$ echo 'ibase=2;obase=A;10' | bc
2

Note that the obase is "A" and not "10". Sorry, but you've got to learn some hex. The reason is that you've already set ibase to 2, so if you then tried to set obase to "10", it would stay at 2, because "10" read in base 2 is 2. So you need to use hex to "break out" of binary mode.

Well, that was just to explain the joke; now something a bit more challenging:

$ echo 'ibase=2;obase=A;10000001' | bc
129

convert from hexadecimal to decimal

$ echo 'ibase=16;obase=A;FF' | bc
255

Again, note the use of "A" to denote base 10. That is because "10" in hex (base 16 - the ibase value) is 16.

a brief introduction to interactive mode
You can also run bc in interactive mode:

$ bc

If you're running GNU bc, you should get the following notice:

bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
Followed by an uninviting blank prompt. Otherwise you'll just get the uninviting blank prompt straight away.
If you wish to get straight to that blank prompt, use the -q option, which runs bc in quiet mode, preventing the normal GNU bc welcome from being printed:
$ bc -q

Using the basics we've been through from the examples above, enter a calculation:

scale=5
57/43
1.32558


Type quit to exit bc interactive mode.


using bc with shell scripts
You can use shell variables with bc, which is very useful in shell scripts:


$ FIVE=5 ; echo "$FIVE^2" | bc
25


Note the use of double-quotes to preserve the value of the variable $FIVE.

a brief introduction to using bc with files

Using bc with files allows complex calculations to be repeated, again and again, a bit like using a spreadsheet to run the same calculations on changing figures ... but faster.


Here is a simple example:

scale=2

/* C-style comments
are allowed, as are spaces */

print "\nConvert Fahrenheit to Celsius\n\n"
print "Temperature in Fahrenheit: " ; fah = read()
print "\n"
print "Equivalent Temperature in Celsius is: "
(fah - 32.0) * 5.0 / 9.0
quit

Create and save the file, then run it like this:

$ bc -q filename
Convert Fahrenheit to Celsius
Temperature in Fahrenheit: 61
Equivalent Temperature in Celsius is: 16.11


Note that this example has only been tested with GNU bc. Other (proprietary) versions of bc may have more stringent syntax requirements. Some versions don't allow the use of print or read, for example, so you would have to edit your file before each calculation. Not very useful.


a quick challenge for your PC (GNU bc only)
If you wish to test the comparative speed of your PC, try this challenge: use bc to calculate Pi to 5000 decimal places. The idea for this challenge came from a great article at Geekronomicon.
If you really want to tie up your machine for an hour (or more), you could try the "Pi to 25000 decimal places" challenge from the aforementioned Geekronomicon.

First, to put things in perspective, here is some information about my CPU:

$ cat /proc/cpuinfo | egrep "model name|MHz"
model name      : AMD Athlon(tm) 64 Processor 3500+
cpu MHz         : 2211.346


Note the use (below) of the command bc -l -q.
The -l option loads the math library, which is required for the a() (arctangent) function used in the calculation of Pi. You can learn more about the math library functions in the
bc command manual.
I'm not sure what effect the -q option (quiet, no welcome message printed) has on our test, but I guess it can't harm.

$ time echo "scale=5000; 4*a(1)" | bc -l -q
3.141592653589793238462643383279502884197169399375105820974944592307\
...
...
...
73774418426312986080998886874132604720


real    0m44.164s
user 0m44.099s
sys 0m0.008s
44.099 seconds! Not bad. :-) I imagine that some Gentoo folks may be interested to see what difference their compile-time optimisations make to the speed of bc. FWIW, my distro of choice is Arch Linux.

Saturday, 14 June 2014

Solar energy may soon compete with fossil fuels

Solar panel prices have plummeted more than 80 percent in recent years, but solar power is still more expensive than fossil-fuel power in most places, and accounts for a tiny fraction of the world’s energy supply. Few people have a better idea of what it will take to make solar compete with fossil fuels than Richard Swanson, who founded SunPower, a major solar company that sells some of the industry’s most efficient solar panels. During SunPower’s almost 30 years in existence, Swanson has seen many types of solar technology come and go. At the IEEE Photovoltaic Specialists Conference in Denver this week, he sat down with MIT Technology Review’s Kevin Bullis to talk about where the solar industry is now, and where it needs to go in order to become a major source of electricity.

How close is solar to competing with fossil fuels?

It’s darn close. In 2000, a big study concluded that solar panels could get below $1 per watt [a level thought to make solar competitive in many markets]. At the time, everyone in the industry thought the authors were in la-la land. Now we’re under that cost target.

It’s not like there’s one cost number and then everything is all good. The value of solar power varies widely, depending on local sources of electricity that it’s competing with, the amount of sunlight, and other factors. There are geographic pockets where it’s becoming quite competitive. But we need to cut costs by at least another factor of two, which can happen in the next 10 years.

What new technology will it take?

Solar panels now account for less than half of the cost of a solar panel system. For example, installers spend a lot of time and money designing each rooftop solar system. They need to have a certain number of panels in a row, all getting the same amount of sunlight. A bunch of companies are automating the process, some with the help of satellites. One of the most exciting things is microinverters [electronics that control solar panel power output] that allow you to stick solar panels anywhere on a roof—it’s almost plug and play.

To almost everyone’s surprise, silicon is still chugging along. The new developments are pretty amazing. Panasonic, Sharp, and SunPower just announced solar cells that break a long-standing efficiency record. We need to keep improving efficiency with new solar cell architectures, like the one Panasonic used. There are three basic new cell structures, and all of them are nearing or are already in production. We need to make thinner silicon wafers and improve ways of growing crystalline silicon. We need to switch to frameless solar panels, because the cost of the aluminum frame hasn’t been going down much. We need to get rid of silver electrical contacts and replace them with cheaper copper. It’s tricky, but it can be done.

Is investment in research on batteries for storing solar power just as important as developing new solar cell designs?

Oh, yes. Not just batteries—other kinds of storage, too, like thermal storage. A wild card is the possibility of using batteries in electric cars to store solar power. That could change everything. People say that the battery industry will never meet the cost targets you need for this. But I come from an industry that just blew away cost targets. It’s hard for me to argue they won’t be able to do it.

Tuesday, 13 May 2014

Motorola Moto E: Price, specifications, features and comparison


After leaving the Indian market in 2011, Motorola made a comeback last year with the Moto G, and how. Its first smartphone after the comeback shook up the Rs 10,000-Rs 15,000 segment in the country. Motorola’s retail model, where it sells phones only via Flipkart, may seem restrictive to many, but that didn’t stop the Moto G from becoming the best-selling smartphone in the company’s history. The Moto G was followed by the flagship Moto X, which had in fact been launched much earlier, in August last year. The device, priced in the Rs 20,000-Rs 25,000 range, took a different path from its competitors — putting user experience ahead of the specification race. While its success may not have touched the dizzying heights of the Moto G, it nonetheless managed to impress many. Now Motorola is back with another device, which also seems like a contender to shake up the entry-level segment. Let’s have a detailed look at the Motorola Moto E.

MOTOROLA MOTO E PRICE IN INDIA
The Moto E is priced at Rs 6,999, which is quite aggressive. Motorola is focusing on changing the perception of entry-level devices and wants to offer first-time smartphone users a premium, well-built device.

MOTOROLA MOTO E DESIGN
At first glance the Moto E looks quite similar to the Moto G, but look closer and you will see that the back panels have a different pattern, and there is no front-facing camera. In terms of dimensions, the Moto E (64.8×124.8×12.3mm) is shorter and narrower than the Moto G (65.9×129.9×11.6mm), though slightly thicker.
The customization options include removable back panels, with up to nine different colored panels to choose from.

MOTOROLA MOTO E HARDWARE
DISPLAY – A 4.3-inch qHD display may not sound like much, but you have to remember that no other tier-one company offers this kind of resolution in this price segment. The display is quite sharp and does a decent job of reproducing colors. Motorola has also added Corning Gorilla Glass 3 protection, which means the display won’t be easily scratched.

PROCESSOR – The device is powered by a Snapdragon 200 dual-core processor clocked at 1.2GHz and paired with an Adreno 302 GPU. Motorola again gets one over its competitors by including 1GB of RAM, where other companies provide only 512MB.

STORAGE – You get 4GB of internal storage of which only about 2.22GB is available for users. But on the plus side, there is a microSD card slot to expand the memory up to 32GB.

CAMERAS – There is a 5-megapixel rear camera, the same as the Moto G’s, but unfortunately it isn’t accompanied by a flash. The camera does an okay job of capturing images, but more often than not you will wish you had a flash. Selfie lovers won’t be happy to hear that there is no front camera.

BATTERY – There is a 1,980mAh battery, which the company claims is good enough to last for a day.

EXTRAS – Like the Moto G and the Moto X, the Moto E has a nano coating on the inside as well as the outside. The coating will ensure your device is protected from an occasional splash of water. The coating also ensures there aren’t any ugly flaps covering the ports.

CONNECTIVITY OPTIONS – Besides the usual connectivity options like 3G, Wi-Fi and Bluetooth, the Moto E also has dual-SIM card slots. The dual-SIM functionality is quite smart in the sense that it keeps monitoring your usage and then automatically selects the SIM based on the contact you are calling.

MOTOROLA MOTO E SOFTWARE
The Motorola Moto E runs on the latest Android 4.4.2 KitKat and Motorola has also promised that the device will get at least one more OS update. Additionally, Motorola has also introduced a host of new features like Moto Alert, Instant SMS and Emergency Mode. Moto Alert will automatically alert preset contacts when you leave a particular location, Instant SMS will send a text with your location to preset contacts in emergency situations, and in Emergency mode, the device will call a preset contact or sound an alarm.

MOTOROLA MOTO E FIRST IMPRESSIONS
We got to spend some time with the device and you can read our first impressions of the device here.

MOTOROLA MOTO E COMPARISONS
At this price point, the Moto E will be up against the likes of the Sony Xperia E1 Dual, the Nokia X and the Samsung Galaxy S Duos 2, among others. All these devices feature dual-core processors and 4GB of expandable memory, and are priced under Rs 9,000. The Moto E has the best display in terms of resolution, the most RAM (1GB) and the biggest battery. It also runs the latest Android KitKat and will get a guaranteed update. In the photography department, only Samsung offers a VGA front-facing camera, while the rest don’t have one.

FEATURES              | MOTOROLA MOTO E                 | SONY XPERIA E1 DUAL             | NOKIA X                        | SAMSUNG GALAXY S DUOS 2
DISPLAY               | 4.3-inch qHD                    | 4-inch WVGA                     | 4-inch WVGA                    | 4-inch WVGA
PROCESSOR             | 1.2GHz Snapdragon 200 dual-core | 1.2GHz Snapdragon 200 dual-core | 1.2GHz Snapdragon S4 dual-core | 1.2GHz dual-core
RAM                   | 1GB                             | 512MB                           | 512MB                          | 768MB
STORAGE               | 4GB expandable                  | 4GB expandable                  | 4GB expandable                 | 4GB expandable
CAMERAS (REAR/FRONT)  | 5-megapixel / none              | 3-megapixel / none              | 3-megapixel / none             | 5-megapixel / VGA
BATTERY               | 1,980mAh                        | 1,700mAh                        | 1,500mAh                       | 1,500mAh
OPERATING SYSTEM      | Android 4.4.2 KitKat            | Android 4.3 Jelly Bean          | Nokia Software Platform        | Android 4.2 Jelly Bean
PRICE                 | Rs 6,999                        | Rs 8,363                        | Rs 6,955                       | Rs 8,305







Wednesday, 16 April 2014

Intel’s e-DRAM

When Intel launched their Haswell series chips last June, they stated that the high-end systems would have embedded DRAM, as a separate chip in the package; and they gave a paper at the VLSI Technology Symposium that month, and another at IEDM.
It took us a while to track down a couple of laptops with the requisite Haswell version, but we did and now we have a few images that show it’s a very different structure from the other e-DRAMs that we’ve seen.
IBM has been using e-DRAM for years, and in all of their products since the 45-nm node. They have progressed their trench DRAM technology to the 22-nm node [3], though we have yet to see that in production.

Embedded DRAM in IBM Power 7+ (32-nm)
TSMC and Renesas have also used e-DRAM in the chips they make for the gaming systems, the Microsoft Xbox and the Nintendo Wii. They use a more conventional form of memory stack with polysilicon wine-glass-shaped capacitors. TSMC uses a cell-under-bit stack where the bitline is above the capacitors, and Renesas a cell-over-bit (COB) structure with the bitline below.
Embedded DRAM in Microsoft Xbox GPU fabbed by TSMC (65-nm)
Embedded DRAM in Nintendo Wii U GPU fabbed by Renesas (45-nm)
Intel also uses a COB stack, but they build a MIM capacitor in the metal-dielectric stack using a cavity formed in the lower metal level dielectrics. The part is fabbed in Intel’s 9-metal, 22-nm process:
General structure of Intel’s 22-nm embedded DRAM part from Haswell package
When we zoom in and look at the edge of the capacitor array, we can see that the M2 – M4 stack has been used to form the mould for the capacitors.

Intel’s 22-nm embedded DRAM stack
Looking a little closer, we can see the wordline transistors on the tri-gate fin, with passing wordlines at the end of each fin. Two capacitors contact each fin, and the bitline contact is in the centre of the fin.

A closer look at the Intel 22-nm embedded DRAM stack
We can see some structure in the capacitors, but at the moment we have not done any materials analysis.  A beveled sample lets us view the plan-view:
Plan-view image of the Intel 22-nm embedded DRAM capacitors
The capacitors are clearly rectangular, but again in the SEM we cannot see any detailed structure. We’ll have to wait for further analysis with the TEM for that!
Intel claims a cell capacitance of more than 13 fF and a cell size of 0.029 sq. microns, so about a third of their 22-nm SRAM cell area of ~0.09 sq. microns, and a little larger than the IBM equivalent of 0.026 sq. microns. The wordline transistors are low-leakage trigate transistors with an enlarged contacted gate pitch of 108 nm (the minimum CGP is 90 nm). In the Haswell usage the die is used as a 128 MB L4 cache, with a die size of ~79 sq. mm, co-packaged with the CPU.
Intel got out of the commodity DRAM business almost thirty years ago; it will be interesting to see where they take their new entry, though not likely into competition with the big three suppliers. Their “Knights Landing” high-performance computing (HPC) platform is reported to use 16 GB of eDRAM, which will take the equivalent of 128 of these chips, so perhaps the future is in HPC and gaming systems such as the one we bought to get the part.



Saturday, 5 April 2014

Latch Up In CMOS

What is latch up in CMOS design and ways to prevent it?

A problem inherent in the p-well and n-well processes is the relatively large number of junctions formed in these structures, and the consequent presence of parasitic diodes and transistors.

Latch-up is a condition in which these parasitic components give rise to the establishment of a low-resistance conducting path between VDD and VSS, with disastrous results.

Latch-up may be induced by glitches on the supply rails or by incident radiation.

Latch-up is a failure mechanism wherein a parasitic thyristor (such as a parasitic silicon-controlled rectifier, or SCR) is inadvertently created within a circuit, causing a high current to flow continuously through it once it is accidentally triggered or turned on. Depending on the circuits involved, the current produced by this mechanism can be large enough to permanently destroy the device through electrical overstress (EOS).

Preventions for Latch-Up

  • adding well taps: for example, in an inverter, add an N+ tap in the n-well and connect it to Vdd, and add a P+ tap in the p-substrate and connect it to Vss.
  • increasing substrate doping levels, with a consequent drop in the value of Rs.
  • reducing Rp by control of fabrication parameters and by ensuring a low contact resistance to Vss.
  • introducing guard rings.

Latchup in Bulk CMOS

A byproduct of the Bulk CMOS structure is a pair of parasitic bipolar transistors. The collector of each BJT is connected to the base of the other transistor in a positive feedback structure. A phenomenon called latchup can occur when (1) both BJTs conduct, creating a low-resistance path between Vdd and GND, and (2) the product of the gains of the two transistors in the feedback loop, b1 x b2, is greater than one. The result of latchup is at minimum a circuit malfunction, and in the worst case, the destruction of the device.

Cross section of parasitic transistors in Bulk CMOS

Equivalent circuit

Latchup may begin when Vout drops below GND due to a noise spike or an improper circuit hookup (Vout is the base of the lateral NPN Q2). If sufficient current flows through Rsub to turn on Q2 (I × Rsub > 0.7 V), it will draw current through Rwell. If the voltage drop across Rwell is high enough, Q1 will also turn on, and a self-sustaining low-resistance path between the power rails is formed. If the gains are such that b1 x b2 > 1, latchup may occur. Once latchup has begun, the only way to stop it is to reduce the current below a critical level, usually by removing power from the circuit.

The most likely place for latchup to occur is in pad drivers, where large voltage transients and large currents are present.

Preventing latchup

Fab/Design Approaches:

  1. Reduce the gain product b1 x b2
  • moving the n-well and the n+ source/drain farther apart increases the width of the base of Q2 and reduces its gain b2, but also reduces circuit density
  • a buried n+ layer in the well reduces the gain of Q1

  2. Reduce the well and substrate resistances, producing lower voltage drops
  • a higher substrate doping level reduces Rsub
  • reduce Rwell by making a low-resistance contact to GND
  • guard rings around the p- and/or n-well, with frequent contacts to the rings, reduce the parasitic resistances

CMOS transistors with guard rings

Systems Approaches:

  1. Make sure power supplies are off before plugging in a board. A "hot plug-in" of an unpowered circuit board or module may cause signal pins to see surge voltages greater than 0.7 V above Vdd, which rises more slowly to its peak value. When the chip comes up to full power, sections of it could be latched.
  2. Carefully protect electrostatic protection devices associated with I/O pads with guard rings. Electrostatic discharge can trigger latchup. ESD enters the circuit through an I/O pad, where it is clamped to one of the rails by the ESD protection circuit. Devices in the protection circuit can inject minority carriers in the substrate or well, potentially triggering latchup.
  3. Radiation, including x-rays, cosmic, or alpha rays, can generate electron-hole pairs as they penetrate the chip. These carriers can contribute to well or substrate currents.
  4. Sudden transients on the power or ground bus, which may occur if large numbers of transistors switch simultaneously, can drive the circuit into latchup. Whether this is possible should be checked through simulation.


Monday, 3 March 2014

Do U Know? Mobile devices said to consume more energy on storage tasks

Do you know that “Flash storage takes power to write - a 20 volt jolt to each cell - but needs almost none to maintain. The real power hog is the inefficient storage software stack that eats 200 times the power required for the hardware.”

Given the always-on mobile infrastructure - background updates, instant messages, email, updates, file sync, logging and more - lots of background storage I/O is happening all the time. And it's eating your device's power budget.

Researchers from Microsoft and the University of California at San Diego benchmarked how Android and Windows RT mobile devices used energy for storing data. They focused on activities that occur with the screen off, since displays are a major power consumer when lit. "Measurements across a set of storage-intensive micro benchmarks show that storage software may consume as much as 200x more energy than storage hardware on an Android phone and a Windows RT tablet," the research team wrote in a paper. "The two biggest energy consumers are encryption and managed language environments."

Results
On Windows RT they found that the OS/CPU/DRAM overhead was between 5 and 200 times the power used by the flash storage itself, depending on how DRAM power use was allocated. File system APIs, the language environment and encryption drove the CPU power consumption during I/O. Full disk encryption - protecting user data - incurred 42 percent of CPU utilization.

On an Android phone, the encryption penalty is even worse: 2.6–5.9x more energy per KB over non-encrypted I/O.

For applications, the team found that on Windows RT the energy overhead in a managed environment is 12.6–18.3 percent, while on Android it is 24.3–102.1 percent. It appears that Android's algorithms are not optimized for application I/O power efficiency.

Thursday, 27 February 2014

Assertion Debugging in Questa – few tips

Playing around debugging some complex assertions in Questa? Here are some tips:

1. Use vsim -assertdebug

2. Add -novopt for trivial code containing just assertions and stimulus, as otherwise many signals get optimized away. On real designs, you are perhaps better off with +acc* (read the documentation for more)

3. Once the GUI comes up, the assertions are not listed in a browser of their own - ideally I would have liked to see a menu item under the "Tools" menu. Instead it is hidden under "View -> Coverage -> Assertions" - GOK why! (GOK - God Only Knows)  :)

4. Before starting simulation, enable ATV

5. After sim one can do “view ATV” for advanced debug!
