
Saturday, 1 June 2013

Keep Environment Variables when Using SUDO

Recently we were developing a Perl script where we needed to set the environment variable UVM_LIBRARY_PATH = ../examples/UVM1.10, but when we ran the code with sudo script/server it failed, because that library path is not in root's environment.

It's very easy to keep your environment variables even while running a script with sudo. All you need to do is modify your /etc/sudoers file.

Search for the following lines:

Defaults    env_reset
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR \
LS_COLORS MAIL PS1 PS2 QTDIR USERNAME \
LANG LC_ADDRESS LC_CTYPE LC_COLLATE LC_IDENTIFICATION \
LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME \
LC_PAPER LC_TELEPHONE LC_TIME LC_ALL LANGUAGE LINGUAS \
_XKB_CHARSET XAUTHORITY"

Add your environment variable, e.g. UVM_LIBRARY_PATH. In our case we have:

Defaults    env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR \
LS_COLORS MAIL PS1 PS2 QTDIR USERNAME \
LANG LC_ADDRESS LC_CTYPE LC_COLLATE LC_IDENTIFICATION \
LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME \
LC_PAPER LC_TELEPHONE LC_TIME LC_ALL LANGUAGE LINGUAS \
_XKB_CHARSET XAUTHORITY UVM_LIBRARY_PATH"

Save your file and run the script.
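
A couple of related tips, assuming a reasonably recent version of sudo. You can append to the existing list with the += operator instead of rewriting it, and the edit should be made through visudo, which syntax-checks the file before saving:

Defaults    env_keep += "UVM_LIBRARY_PATH"

You can then check that the variable survives without running the whole script:

export UVM_LIBRARY_PATH=../examples/UVM1.10
sudo printenv UVM_LIBRARY_PATH

As a one-off alternative, sudo -E preserves the caller's environment (subject to the sudoers policy), so sudo -E script/server may work without touching /etc/sudoers at all.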

Wednesday, 24 April 2013

UI scientists create powerful microbatteries

UI, i.e. the University of Illinois.

Developed by researchers at the University of Illinois at Urbana-Champaign, the new microbatteries out-power even the best supercapacitors and could drive new applications in radio communications and compact electronics.

The most powerful batteries on the planet are only a few millimeters in size, yet they pack such a punch that a driver could use a cellphone powered by these batteries to jump-start a dead car battery – and then recharge the phone in the blink of an eye.

“This is a whole new way to think about batteries. A battery can deliver far more power than anybody ever thought. In recent decades, electronics have gotten small. The thinking parts of computers have gotten small. And the battery has lagged far behind. This is a microtechnology that could change all of that. Now the power source is as high-performance as the rest of it,” said William P. King, Bliss Professor of mechanical science and engineering.


With currently available power sources, users have had to choose between power and energy. For applications that need a lot of power, like broadcasting a radio signal over a long distance, capacitors can release energy very quickly but can only store a small amount. For applications that need a lot of energy, like playing a radio for a long time, fuel cells and batteries can hold a lot of energy but release it or recharge slowly.

The new microbatteries offer both power and energy, and by tweaking the structure a bit, the researchers can tune them over a wide range on the power-versus-energy scale.

The batteries owe their high performance to their internal three-dimensional microstructure. Batteries have two key components: the anode (minus side) and cathode (plus side). Building on a novel fast-charging cathode design by materials science and engineering professor Paul Braun’s group, King and Pikul developed a matching anode and then developed a new way to integrate the two components at the microscale to make a complete battery with superior performance.

The graphic illustrates a high power battery technology from the University of Illinois.  Ions flow between three-dimensional micro-electrodes in a lithium ion battery.

With so much power, the batteries could enable sensors or radio signals that broadcast 30 times farther, or devices 30 times smaller. The batteries are rechargeable and can charge 1000 times faster than competing technologies – imagine juicing up a credit-card-thin phone in less than a second. In addition to consumer electronics, medical devices, lasers, sensors and other applications could see leaps forward in technology with such power sources available.

“Any kind of electronic device is limited by the size of the battery – until now. Consider personal medical devices and implants, where the battery is an enormous brick, and it’s connected to itty-bitty electronics and tiny wires. Now the battery is also tiny,” explained Mr. King.

Now, the researchers are working on integrating their batteries with other electronics components, as well as manufacturability at low cost.

“To dare is to lose one's footing momentarily. To not dare is to lose oneself.”


Tuesday, 16 April 2013

India needs homegrown wafer fabs for its electronics

The government of India is offering up to $2.75 billion in incentives for the construction and equipping of the country's first wafer fabrication facility. India imported $8.2 billion in semiconductors last year, according to Gartner. Getting its own wafer fab is said to present a number of challenges to India, especially in the necessary infrastructure and an ecosystem of suppliers.

The domestic purchasing mandate, known as the “preferential market access” policy, seeks to address a real problem: imports of electronics are growing so fast that by 2020, they are projected to eclipse oil as the developing country’s largest import expense.

India’s import bill for semiconductors alone was $8.2 billion in 2012, according to Gartner, a research firm. And demand is growing at around 20 percent a year, according to the Department of Electronics and Information Technology.

For all electronics, India’s foreign currency bill is projected to grow from around $70 billion in 2012 to $300 billion by 2020, according to a government task force.

Read more…


Sunday, 24 February 2013

6 Ways to Improve Chip Yield Even Before the Project Starts

Early in chip projects, yield is not taken very seriously. The common thinking goes: there isn't much you can do at this early point in time. However, there are actually several things you can do even before the chip design starts, and they translate into clear savings.

1- Know your Yield

Yield has a great deal of impact only if production volume is high. If you plan to manufacture only a few tens of thousands of components, perhaps yield is not the most important topic in your project’s plan.

Yield can be roughly calculated or estimated before the project has even started. And if you have calculated a yield target of 95%, there is no reason to invest money and effort trying to push the yield from (the calculated) 95% to 99%, because that simply is not possible. Therefore, it is important to calculate your yield up front and set it as the goal.
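
If you want a rough number to start from, the commonly used Poisson defect model (one of several yield models; your foundry may well use its own) estimates die yield as Y = e^(-A*D0), where A is the die area and D0 is the process defect density. For example, for a 0.5 cm2 die on a process with D0 = 0.1 defects/cm2:

Y = e^(-0.5*0.1) = e^(-0.05) ≈ 95%

That is exactly the kind of calculated target discussed above; hitting 99% with the same die would require D0 ≈ 0.02 defects/cm2, which the process may simply not offer.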

2- Consider Foundry Applicability

Semiconductor foundries do not take on any yield losses. Whether your yield is high or low is not the fab's responsibility, because fabs sell wafers, not dies. You should therefore select the foundry that best suits your chip's domain.

If your chip requires small node geometries, go to GLOBALFOUNDRIES, TSMC, etc. If your chip needs excellent RF performance, go to IBM, TowerJazz, etc. The foundry can help you calculate the wafer yield based on its own process technology. If you can provide them with the die size, number of layers, process node and options, they should be able to give you very accurate yield figures for your project.

3- Match Design Team Experience to Your Project

If you have decided to outsource the front-end and physical design activities to an external vendor, the main yield-related risk is experience. If the design team does not have relevant experience matching your chip project (for instance, RF or high voltage), you are really wasting your time. Don't hire an analog designer without high-voltage experience if you need to design a 120V chip.

4- Select Silicon Proven IPs

More and more companies are shopping for semiconductor IPs to help reduce time to market and minimize engineering cost. There are many IP vendors, some with high-quality products and some with lower quality. The keyword here is risk minimization. You really want to make sure the IP blocks you are about to purchase and integrate into your chip are bug-free, silicon proven and qualified for your process. Ask for test results and references.

5- Follow Package Design Rules

For simple QFN packages there are no real concerns besides following the assembly house's design rules. However, complex packages can reduce yield dramatically. If your chip uses a package that consists of a multilayer substrate with high-speed signals, that substrate should be considered part of the silicon die. Improper routing of high-speed signals, for example, will make the substrate performance very marginal and thus result in failures during final test.

6 – Say No to Tight Test Limits, Say Yes to Better Hardware

The only place to measure yield is at the testing phase, and this is done by the automated test equipment (ATE).

Great ASIC engineers often try to over-engineer the chip design and, as a by-product, also tighten up the test result criteria. These limits have a direct impact on your profit: every device that fails to meet them during the screening process will be scrapped. Therefore, don't create the perfect test specification; create one that meets your system requirements.

Loadboards, sockets and probecards come in different quality levels and therefore at different costs. But since these are the actual physical interface between your chip and the tester, you want to make sure they have the right quality and durability to maintain solid connectivity to the tester throughout the test period. Otherwise, lower-quality hardware will shave off your yield figures. Sockets, for example, have a limited number of insertions; you should therefore buy a socket that matches your chip's production volume. Bottom line: don't compromise on the quality of the hardware interfacing your chip.

There is so much more to write on this topic, we promise to write more articles in the future. Stay tuned.


Thursday, 21 February 2013

How to read an NGC netlist file

For those occasions when you find yourself with a netlist file and don't know where it came from or what version it is, this post explains how you can interpret the netlist file (i.e. convert it into something readable).

Today I found myself with two netlists and I needed to know if they were the same. Yes, of course you can try comparing the two files with a program such as Beyond Compare, but if the netlists were compiled on separate dates, you will have trouble recognizing this from the raw binary data. The best thing to do in this case is to convert the netlists to EDIF files, a readable, text-based version of the netlist. Another option is to convert the netlists into VHDL or Verilog code. Here is how you can do this:

To convert a netlist (.ngc) to an EDIF file (.edf)

  1. Get a command window open by typing “cmd” in the Start->Run menu option in Windows. If you use Linux, open up a terminal window.
  2. Use the “cd” command to move to the folder in which you keep your netlist.
  3. Type “ngc2edif infilename.ngc outfilename.edf” where infilename and outfilename correspond to the input and output filenames respectively.
  4. Open the .edf file with any text editor to view the netlist.
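
For the comparison task above, then, the whole job (the filenames here are hypothetical) is two conversions followed by an ordinary text diff:

ngc2edif first.ngc first.edf
ngc2edif second.ngc second.edf
diff first.edf second.edf

On Windows, use fc or a tool like Beyond Compare instead of diff. Any remaining differences should be genuine netlist differences rather than binary noise, apart perhaps from a timestamp in the file header.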

To reverse engineer a netlist with ISE versions older than 6.1i

  1. Convert the netlist to an EDIF file using the above instructions.
  2. Type “edif2ngd filename.edf filename.ngd” to convert the EDIF file into an NGD file (Xilinx Native Generic Database file).
  3. To convert the netlist into VHDL type “ngd2vhdl filename.ngd filename.vhd“.
  4. To convert the netlist into Verilog type “ngd2ver filename.ngd filename.v“.

To reverse engineer a netlist with ISE versions 6.1i and up

  1. To convert the netlist into VHDL type “netgen -ofmt vhdl filename.ngc“. Netgen will create a filename.vhd file.
  2. To convert the netlist into Verilog type “netgen -ofmt verilog filename.ngc“. Netgen will create a filename.v file.

Now you should have all the tools you need to read an NGC netlist file.


Sunday, 10 February 2013

FPGAs: An Alternative To Cloud Computing !!

As complexity intensifies within sophisticated computing, so does the demand for more computing power. On top of that, the need to mine data from the burgeoning mountain of Internet search data has led to huge data centers that must be located close to water to feed their massive equipment cooling systems.

Weather modeling, for instance, continues to drill down into smaller geographical elements to fine-tune accuracy. And longer and more sophisticated encryption keys require greater compute power to crack them.

New tasks are also emerging in fields ranging from advertising to gene sequencing. Companies in the bio-sciences area gain competitive advantage based on the speed they’re able to sequence the genes held in DNA samples. Drug companies rely heavily on computer modeling to identify suitable candidate chemical formulas that may be useful in combating diseases.

In security, the focus has turned to deep packet inspection and application-aware monitoring. Companies now routinely deploy firewalls that are able to break into individual communication streams and identify traffic specific to, say, social networking sites, which can be used to help stop malicious attacks on corporate assets.

The Server Farm Approach

Traditionally, increases in processing requirements are handled with a brute force approach: Develop server farms that simply throw more microprocessor units at a problem. The heightened clamor for these server farms creates new problems, though, such as how to bring enough power to a server room and how to remove the generated heat. Space requirements are another problem, as is the complex management of the server farm to ensure factors like optimal load balancing, in terms of guaranteeing return on investment.

At some point, the rationale for these local server farms runs out as the physical and heat problems become too large. Enter the great savior to these problems, otherwise known as “The Cloud.” In this model, big companies will hire out operating time on huge computing clusters.

In a stroke, companies can make physical problems disappear by offloading this IT requirement onto specialised companies. However, it’s not without problems:

  • Depending on the data, there may be a requirement for high-bandwidth communications to and from the data centre.
  • A third party is added into the value chain, and it will try to make money out of the service based on used computing time.
  • Rather than solving the power problem, it’s simply been moved—exactly the same amount of computing needs to be undertaken, just in a different place.

This final issue, which raises fundamental problems that can’t be solved with traditional processor systems, splits into two parts:

  • Software: Despite advances in software programming tools, optimization of algorithms for execution on multiple processors is still far away. It’s often easy to break down a problem into a number of parallel computations. However, it’s much harder for the software programmer to handle the concept of pipelining, where the output of one stage of operation is automatically passed to the next stage and acted upon. Instead, processors perform the same operation on a large array of data, pass it to memory, and then call it back from memory to perform the next operation. This creates a huge overhead on power consumption and execution time.
  • Hardware: Processor systems are designed to be general. A processor’s data path is typically 32 or 64 bits. The data often requires much smaller resolution, leading to large inefficiencies as gates are clocked unnecessarily. Frequently, it becomes possible to pack data to fill more of the available data width, although this is rarely optimal and adds its own overhead. In addition, the execution units of a processor aren’t optimised to the specific mathematical or data-manipulation functions being undertaken, which again leads to huge overheads. 

The FPGA Approach

In the world of embedded products, a common computing-power approach is to develop dedicated hardware in an FPGA. These devices are programmed using silicon design techniques to implement processing functions much like a custom-designed chip. Many papers have been written on the relative improvements between processors, FPGAs, and dedicated hardware. Typical speed/power-consumption improvements range between a factor of 100 and 5000. 

Following in that vein, a recent study performed by Plextek explored how a single FPGA could accelerate a particular form of gene sequencing. Findings revealed an increase of just under a factor of 500. This can be viewed either as a significantly shorter time period or as an equipment reduction from 500 machines to a single PC. In reality, though, the savings will be a balance of the two.

Previously, these benefits were difficult to achieve for two reasons:

  • Interfacing: Dedicated engineering was required to develop FPGA systems that could easily access data sets. Once the data set changes, new interfacing requirements arise, which means a renewed engineering effort.
  • Design cycle time: The time it takes for an algorithm engineer to explain his requirements to a digital design engineer, who must then convert it all into VHDL along with the necessary testbenches to verify the design, simply becomes too long for exploratory algorithm work.

Now both of these problems have largely been solved thanks to modern FPGA devices. The first issue is resolved with embedded processors in the FPGA, which allow for more flexible interfacing. It’s even possible to directly access FPGA devices via Ethernet or even the Internet. For example, Plextek developed FPGA implementations that don’t have to go through interface modifications any time a requirement or data set changes.

To solve the second problem, companies such as Plextek have been working closely with major FPGA manufacturers to exploit new toolsets that can convert algorithmic descriptions defined in high-level mathematical languages (e.g., Matlab) into a form that easily converts into VHDL. As a result, significant time is saved from developing extensive testbenches to verify designs. Although not completely automatic, the design flow becomes much faster and much less prone to errors.

This doesn’t remove the need for a hardware designer, although it’s possible to develop methodologies to enable a hierarchical approach to algorithm exploration. The aim is to shorten the time between initial algorithm development and final solution.

Much of the time spent during algorithm exploration involves running a wide set of parameters through essentially the same algorithm. Plextek came up with a methodology that speeds up the process by providing a parameterised FPGA platform early in the process (see the figure). The approach requires the adoption of new high-level design tools, such as Altera’s DSP Builder or Xilinx’s System Generator.


A major portion of time involved in algorithm exploration revolves around running a wide set of parameters through essentially the same algorithm. Plextek’s methodology provides a parameterized FPGA platform early in the process, which saves a significant amount of time.

A key part of the process is jointly describing the algorithm parameters that are likely to change. After they’re defined, the hardware designer can deliver a platform with super-computing power to the scientist’s local machine, one that’s tailored to the algorithm being studied. At this point, the scientist can very quickly explore parameter changes to the algorithm, often being able to explore previously time-prohibitive ideas. As the algorithm matures, some features may need updating. Though modifications to the FPGA may be required, they can be implemented much faster.

A side benefit of this approach is that the final solution, when achieved, is in a hardware form that can be easily scaled across a business. In the past, algorithm exploration may have used a farm of 100 servers, but when rolled out across a business, the server requirements could increase 10- or 100-fold, or even to thousands of machines.  With FPGAs, equipment requirements will experience an orders-of-magnitude reduction.

Ultimately, companies that adopt these methodologies will achieve significant cost and power-consumption savings, as well as speed up their algorithm development flows.


Monday, 19 November 2012

A Brain-On-A-Chip

Sounds Interesting !!!!!

The idea is to devise a “micro-environment’’ that mimics the human brain. Researchers hope to study neurodegenerative conditions such as Alzheimer’s disease, strokes and concussions. The eventual goal is to study the effects of drugs and vaccines on the brain.

Draper, a spinoff from the Massachusetts Institute of Technology (MIT), and USF are using embryonic cells from rats, but researchers plan to use human cells in the future. The brain-on-a-chip combines several technologies, including an emerging field called microfluidics.

Microfluidics deals with the control of fluids in devices. Tiny chip-like devices using microfluidics are used in many applications, such as cell sorting and detection, gene analysis, inkjet print heads, lab-on-a-chip units and point-of-care diagnostic tools. Meanwhile, lab-on-a-chip, and a related field, organ-on-a-chip (i.e. brain-on-a-chip), are systems that integrate various functions in a chip-like format. Some, but not all, lab-on-a-chip systems use microfluidics.

Read More >>


Wednesday, 10 October 2012

Digital Logic in Analog Block - How To Test It?

Analog IP blocks these days have increasing amounts of digital control logic. With very small amounts of digital logic, it's possible to just draw gates on the schematic and run targeted tests that will hopefully catch any errors. But when you have several thousand digital gates, a new approach is needed, as I discovered in a recent discussion.

"2,000 gates is probably a good transition point where people switch from manually inserting gates in a schematic to synthesis," said Bob Melchiorre, director of field operations for digital implementation at Cadence. "Beyond 2,000 gates you have to start thinking about how you're going to test it, because you cannot guarantee that simulation or targeted testing will catch all the issues. Around 10,000 gates it starts getting completely unbearable, and you cannot write enough targeted tests to find all the faults in your digital logic."

Click here to read more ...


Monday, 3 September 2012

Intel Chips Will Support Wireless Charging by 2014

Earlier this month it was rumored that Intel was developing a new wireless charging solution it could integrate into the Ultrabook platform. In so doing, it would remove the need to plug your Ultrabook into a power socket directly, instead placing it on a charging pad or at least near a power source.

Intel’s interest in wireless charging has today been confirmed through a new partnership with Integrated Device Technology (IDT). IDT will develop a new integrated transmitter and receiver for Intel, allowing for wireless charging using resonance technology from up to several feet away.

“Our extensive experience in developing the innovative and highly integrated IDTP9030 transmitter and multi-mode IDTP9020 receiver has given IDT a proven leadership position in the wireless power market,” said Arman Naghavi, vice president and general manager of the Analog and Power Division at IDT.

As for when we can expect to see an Intel-branded wireless charging system, IDT is working to provide samples in the first half of 2013, suggesting product integration should happen in time for the major holiday season at the end of next year.

IDT’s Gary Huang has also suggested that eventually wireless charging will expand to power everything on your desktop. So your wireless keyboard, mouse, backup storage device, smartphone, and PC/laptop will all be completely wireless, with each including the necessary components and battery to be charged.

The chipmaker is entering a market where there is already a proposed standard called Qi. Qi has received a wide array of support, including Energizer, Texas Instruments, Verizon, and phone manufacturers including Nokia, Research In Motion, LG, and HTC. 

Currently, 88 products are listed by the Wireless Power Consortium as being Qi-compatible, including phones from NTT DoCoMo and HTC.

Intel is not a part of that group, and its wireless charging effort, based on a platform created by IDT, is apparently not Qi-compatible. Since Qi is already getting widespread support and Intel's chips have made it into very few mobile devices so far, Intel has some work ahead if it is to be a success.

A completely wire-free desk at home sounds great to me, and if this IDT/Intel venture is successful it could be a reality within a year or two.

Tuesday, 21 August 2012

Race condition in Verilog

In Verilog, certain types of assignments or expressions are scheduled for execution at the same time, and the order of their execution is not guaranteed. This means they could be executed in any order, and the order can change from run to run. This non-determinism is called a race condition in Verilog.

Verilog execution order:

[Figure: the stratified event queue, with its active, inactive, non-blocking assignment update and monitor event regions]

If you look at the active event queue, it holds multiple types of statements and commands with equal priority, which means they are all scheduled to be executed together in any random order, and this is what leads to many of the races.

Let's look at some of the common race conditions that one may encounter.

1) Read-Write or Write-Read race condition.

Take the following example:

always @(posedge clk)
x = 2;

always @(posedge clk)
y = x;

Both assignments have the same sensitivity (posedge clk), which means that when the clock rises, both will be scheduled to execute at the same time. Either ‘x’ could first be assigned the value 2 and then ‘y’ assigned ‘x’, in which case ‘y’ ends up with the value 2; or it could be the other way around: ‘y’ is assigned the value of ‘x’ first, which could be something other than 2, and then ‘x’ is assigned 2. So depending on the order, the final value of ‘y’ can differ.

How can you avoid this race? It depends on what your intention is. If you want a specific order, put both statements in that order within a ‘begin’…’end’ block inside a single ‘always’ block. Let's say you want ‘x’ to be updated first and then ‘y’; you can do the following. Remember, blocking assignments within a ‘begin’…’end’ block are executed in the order they appear.

always @(posedge clk)
begin
x = 2;
y = x;
end
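
If instead the intent is the usual RTL one, where ‘y’ should capture the value ‘x’ held before the clock edge, nonblocking assignments remove the race without merging the two ‘always’ blocks. A minimal sketch of the same two processes rewritten:

always @(posedge clk)
x <= 2;

always @(posedge clk)
y <= x;

With nonblocking assignments, all right-hand sides are sampled first and the updates are applied together later in the event queue, so ‘y’ deterministically gets the old value of ‘x’ no matter which block runs first.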

2) Write-Write race condition.

always @(posedge clk)
x = 2;

always @(posedge clk)
x = 9;

Here again, both blocking assignments have the same sensitivity, which means they both get scheduled to execute at the same time in the ‘active event’ queue, in any order. Depending on the order, the final value of ‘x’ could be either 2 or 9. If you want a specific order, follow the example in the previous race condition.

3) Race condition arising from a ‘fork’…’join’ block.

always @(posedge clk)
fork
x = 2;
y = x;
join

Unlike a ‘begin’…’end’ block, where expressions are executed in the order they appear, expressions within a ‘fork’…’join’ block are executed in parallel. This parallelism can be the source of a race condition, as shown in the example above.

Both blocking assignments are scheduled to execute in parallel, and depending on the order of their execution, the eventual value of ‘y’ could be either 2 or the previous value of ‘x’; it cannot be determined beforehand.

4) Race condition because of variable initialization.

reg clk = 0;

initial
clk = 1;

In Verilog, a ‘reg’ type variable can be initialized within its declaration. This initialization is executed at time step zero, just like an initial block, and if you happen to have an initial block that also assigns to the same ‘reg’ variable, you have a race condition.

There are a few other situations where race conditions can come up; for example, if a function is invoked from more than one active block at the same time, the execution order becomes non-deterministic.

Sunday, 19 August 2012

Asynchronous Integrated Circuits Design

Many modern integrated circuits are sequenced by globally distributed periodic timing signals called clocks. This method of sequencing, known as synchronous design, is prevalent and has contributed to the remarkable advancements of the semiconductor industry, in the form of chip density and speed, over the last decades. For the trend to continue as proposed by Moore's law (the number of transistors on a chip doubles about every two years), there are increasing requirements for enormous circuit complexity and transistor down-scaling.

As the industry pursues these goals, problems associated with switching delay, complexity management and clock distribution have placed limitations on the performance that synchronous systems can achieve at an acceptable level of reliability. Consequently, synchronous system design is challenged by foreseeable progress in device technology.

These concerns and other factors have caused a resurgence of interest in the design of asynchronous, or self-timed, circuits that achieve sequencing without global clocks. Instead, synchronization among circuit elements is achieved through local handshakes, based on the generation and detection of request and acknowledgement signals.
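
To make the handshake idea concrete, here is a minimal behavioral sketch in Verilog of a four-phase (return-to-zero) request/acknowledge transfer. This is only an RTL-style illustration of the protocol, with invented module and signal names; real asynchronous implementations are built from delay-insensitive gate-level logic, not behavioral code like this.

module handshake_demo;
  reg req = 0, ack = 0;
  reg [7:0] data;

  // Sender: drive data, raise req, wait for ack, then return both to zero.
  initial begin
    data = 8'hA5;   // hypothetical payload
    #5 req = 1;     // request: data is valid
    wait (ack);     // receiver has captured the data
    #5 req = 0;     // return-to-zero phase
    wait (!ack);    // handshake complete
    $display("transfer done, data = %h", data);
    $finish;
  end

  // Receiver: acknowledge when req rises, release when req falls.
  always @(posedge req) #3 ack = 1;   // capture data, then acknowledge
  always @(negedge req) #3 ack = 0;   // ready for the next transfer
endmodule

Each side advances only after seeing the other's transition; that local sequencing is exactly what replaces the global clock.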

Some notable advantages of asynchronous circuits over their synchronous counterparts are presented below:

* Average-case performance. Synchronous circuits have to wait until all possible computations have completed before producing results, thereby yielding worst-case performance. Asynchronous circuits sense when a computation has completed, thereby enabling average-case performance. For circuits like ripple-carry adders, whose worst-case delay is significantly larger than their average-case delay, this can be an enormous saving in time.

* Design flexibility and cost reduction, with higher-level logic design separated from lower-level timing design.

* Separation of timing from functional correctness in certain types of asynchronous design styles, thereby enabling insensitivity to delay variation in layout design, fabrication process, and operating environment.

* Asynchronous circuits consume less power than synchronous ones, since signal transitions occur only in areas involved in the current computation.

* The clock skew evident in synchronous circuits is eliminated in asynchronous circuits, since there is no global clock to distribute. Clock skew, the difference in arrival times of the clock signal at different parts of the circuit, is one of the major problems in synchronous design as transistor feature sizes continue to decrease.

Asynchronous circuit design is not entirely new in theory and practice. It has been studied since the early 1940s, when the focus was mainly on mechanical relays and vacuum tube technologies. These studies resulted in two major theoretical models (the Huffman and Muller models) in the 1950s. Since then, the field of asynchronous circuits has gone through a number of high-interest cycles, accumulating a huge amount of work. However, the switching hazards and operation-ordering problems encountered in early complex asynchronous circuits led to their replacement by synchronous circuits. Synchronous design has since emerged as the prevalent style, with nearly all third (and subsequent) generation computers based on synchronous system timing.

Despite the present unpopularity of asynchronous circuits in mainstream commercial chip production, and the problems noted above, asynchronous design is an important research area. It promises, at least in combination with synchronous circuits, to drive the next generation of chip architectures that will achieve highly dependable, ultrahigh-performance computing in the 21st century.

The design of asynchronous circuits follows the established hardware design flow, which involves, in order: system specification, system design, circuit design, layout, verification, fabrication and testing, though with major differences in concept. A notable one is that designing an asynchronous system in an ad-hoc fashion is impractical. With clocks, as in synchronous systems, less emphasis is placed on the dynamic state of the circuit, whereas the asynchronous designer has to worry about hazards and the ordering of operations. This makes it impossible to apply the design techniques of synchronous design directly to asynchronous design.

The design of an asynchronous circuit begins with some assumptions about gate and wire delay. It is very important that the chip designer examines and validates these assumptions for the device technology, the fabrication process, and the operating environment, all of which may impact the system's delay distribution throughout its lifetime. Based on these delay assumptions, many theoretical models of asynchronous circuits have been identified.

There is the delay-insensitive model, in which the correct operation of a circuit is independent of the delays in gates and in the wires connecting them, assuming the delays are finite and positive. The speed-independent model, developed by D.E. Muller, assumes that gate delays are finite but unbounded, while there is no delay in wires. Another is the Huffman model, which assumes that the gate and wire delays are bounded and the upper bound is known.

For many practical circuit designs, these models are limited. The examples in this discussion use the quasi-delay-insensitive (QDI) model, which combines the delay-insensitive assumption with the isochronic-fork assumption. The latter is the assumption that the relative delay between two wires is less than the delay through a sequence of gates. It assumes that gates have arbitrary delay, and only makes relative timing assumptions on the propagation delay of some signals that fan out to multiple gates.

Over the years, researchers have developed methods for synthesizing asynchronous circuits whose correct functioning does not depend on gate delays and which permit multiple concurrent switching signals. The VLSI computations are modeled using Communicating Hardware Processes (CHP) programs that describe their behavior algorithmically. The QDI circuits are synthesized from these programs using semantics-preserving transformations.

In conclusion, as the trend toward highly dependable, ultrahigh-performance computing continues into the 21st century, asynchronous design promises to play a dominant role.

Tuesday, 24 July 2012

Editing your FPGA source

I noted that in a recent poll of FPGA developers, emacs was far and away the most popular VHDL and Verilog editor. There are a few reasons for this – namely, emacs comes with packages for editing your HDL of choice. For those of us not wanting to install (and learn) the emacs operating system, I got Notepad++ to work with these packages.

Notepad++ already has VHDL and Verilog highlighting along with other advanced text editor features, but I wanted templates, automated declarations and beautification. To do this, I used the FingerText plugin to store code as snippets and call them up at the wave of a finger.

As I write my code, the component declarations constantly need to be updated, and with the help of a Perl script I can update them with the click of a hotkey. Beautification is a harder nut to crack, as Notepad++ doesn't even have a VHDL or Verilog beautifier plugin. I accomplished this by installing emacs and running the beautification process as a batch script. Nobody can have it all, but I think this method of getting away from emacs is pretty neat.

Friday, 11 May 2012

ZeroN project, Computer-controlled magic levitation created by MIT student

What if materials could defy gravity, so that we could leave them suspended in mid-air?

MIT genius Jinha Lee has created an incredible computer-controlled system for levitating objects.

The project, called ZeroN, uses magnets, a Kinect visual system, plus special software that enables either the computer to move a steel ball around in space, or a human to just grab it and move it, essentially telling the computer where it should go.

ZeroN is a new physical/digital interaction element that can be levitated and moved freely by computer in a three dimensional space. Both the computer and people can move the ZeroN simultaneously. In doing so, people and computers can physically interact with one another in 3D space.

It can even remember how it was moved, then repeat the movement automatically.

I really want to experience it live …!!

http://www.leejinha.com/zeron

http://web.media.mit.edu/~jinhalee/zeron_jinha_lee.pdf