
Thursday, 28 February 2013

French researchers print first ADC on plastic

Millions of tons of food are wasted every year because of 'the date'. The date on the package is always a conservative estimate, so a lot of food that is still perfectly good ends up in the bin. Wouldn't it be useful if the packaging itself could 'taste' whether the food is still good? Researchers at CEA-Liten, Eindhoven University of Technology, STMicroelectronics and the University of Catania presented a technical milestone in the U.S. last week that makes this possible: a plastic analogue-to-digital converter. It makes a plastic sensor circuit costing less than one euro cent feasible, which is an acceptable price increase for, say, a bag of potato chips or a piece of meat. Ultra-cheap plastic electronics also has many other potential applications, for example in medicine.

“Organic electronics is still in its infancy, so only simple digital logic and analogue functions have been demonstrated so far using printing techniques,” said CEA-Liten.

The ADC circuits printed by CEA-Liten include more than 100 n- and p-type transistors and a resistive layer on a transparent plastic sheet. The ADC circuit offers a resolution of 4 bits and has a speed of 2Hz.

The carrier mobility of the printed transistors is higher than that observed in amorphous silicon, which is widely used in the display industry (CEA technology: p-type µp = 1.8 cm²/V·s, n-type µn = 0.5 cm²/V·s).


Sunday, 24 February 2013

6 Ways to Improve Chip Yield Even Before the Project Starts

Early in chip projects, yield is not taken very seriously. The common thinking goes: there isn't much to be done at such an early point in time. However, there are actually several things you can do even before the chip design starts, and they translate into clear savings.

1- Know your Yield

Yield has a great deal of impact only if production volume is high. If you plan to manufacture only a few tens of thousands of components, perhaps yield is not the most important topic in your project’s plan.

Yield can be roughly calculated or estimated before the project has even started. If that calculation puts your yield limit at 95%, there is no reason to invest money and effort trying to push it from (the calculated) 95% to 99%, because that is simply not possible. It is therefore important to calculate your yield up front and set it as the goal.
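As a rough illustration of the kind of back-of-the-envelope estimate meant here, the sketch below applies the classic Poisson and Murphy die-yield models. The die area and defect density are made-up placeholder numbers, not figures from any particular foundry.

  import math

  def poisson_yield(die_area_cm2, defect_density_per_cm2):
      # Poisson model: Y = exp(-A * D0)
      return math.exp(-die_area_cm2 * defect_density_per_cm2)

  def murphy_yield(die_area_cm2, defect_density_per_cm2):
      # Murphy model: Y = ((1 - exp(-A*D0)) / (A*D0))^2
      ad = die_area_cm2 * defect_density_per_cm2
      return ((1.0 - math.exp(-ad)) / ad) ** 2

  # Hypothetical numbers: a 0.5 cm2 (50 mm2) die on a process with 0.1 defects/cm2.
  # Real defect densities come from the foundry.
  print(poisson_yield(0.5, 0.1))  # ~0.95
  print(murphy_yield(0.5, 0.1))   # ~0.95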

2- Consider Foundry Applicability

Semiconductor foundries do not absorb any yield losses. Whether your yield is high or low is not the fab's responsibility, because they sell wafers, not dies. You should therefore select the foundry that best suits your chip's domain.

If your chip requires small node geometries, go to GLOBALFOUNDRIES, TSMC, etc. If your chip needs excellent RF performance, go to IBM, TowerJazz, etc. The foundry can help you calculate the wafer yield based on its own process technology. If you can provide the die size, number of layers, process node and options, they should be able to give you very accurate yield figures for your project.
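To get a feel for what such figures mean in practice, here is a minimal sketch combining a standard dies-per-wafer approximation with a die-yield estimate; the wafer diameter, die size and yield are hypothetical, and a real quote from the foundry always takes precedence.

  import math

  def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
      # Common approximation with an edge-loss term:
      # DPW = pi*(d/2)^2 / S  -  pi*d / sqrt(2*S)
      d, s = wafer_diameter_mm, die_area_mm2
      return int(math.pi * (d / 2.0) ** 2 / s - math.pi * d / math.sqrt(2.0 * s))

  # Hypothetical example: 300 mm wafer, 50 mm2 die, 95% die yield.
  dpw = dies_per_wafer(300, 50)
  print(dpw, int(dpw * 0.95))  # total dies per wafer, expected good dies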

3- Match Design Team Experience to Your Project

If you have decided to outsource the front-end and physical design activities to an external vendor, the main yield-related risk is experience. If the design team does not have relevant experience that matches your chip project (for instance, RF or high voltage), you are really wasting your time. Don't hire an analog designer without high-voltage experience if you need to design a 120V chip.

4- Select Silicon Proven IPs

More and more companies are shopping for semiconductor IP to help reduce time to market and minimize engineering cost. There are many IP vendors; some offer high-quality products and some lower quality. The keyword here is risk minimization. You really want to make sure the IP blocks you are about to purchase and integrate into your chip are bug-free, silicon-proven and qualified for your process. Ask for test results and references.

5- Follow Package Design Rules

For simple QFN packages there are no real concerns beyond following the assembly house's design rules. Complex packages, however, can reduce yield dramatically. If your chip uses a package that consists of a multilayer substrate carrying high-speed signals, that substrate should be treated as part of the silicon die. Improper routing of high-speed signals, for example, will make the substrate performance very marginal and thus result in failures during final test.

6- Say No to Tight Test Limits, Say Yes to Better Hardware

Yield is ultimately measured at the testing phase, on the ATE (automatic test equipment).

Great ASIC engineers often try to over-engineer the chip design and, as a by-product, also tighten up the test limits. These limits have a direct impact on your profit: every device that fails to meet them during the screening process will be scrapped. Therefore, don't create the perfect test specification; create one that meets your system requirements.
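As a small sketch of why limits matter, assume a measured parameter is roughly Gaussian across the production population (a simplifying assumption; the voltages below are made up). Tightening the limits beyond what the system actually needs directly scraps otherwise functional parts:

  from math import erf, sqrt

  def fraction_passing(mean, sigma, lo, hi):
      # Fraction of a Gaussian-distributed measurement that falls inside [lo, hi].
      phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
      return phi((hi - mean) / sigma) - phi((lo - mean) / sigma)

  # Hypothetical parameter: mean 1.20 V, sigma 30 mV.
  # The system only needs 1.10..1.30 V, but the spec was tightened to 1.15..1.25 V.
  print(fraction_passing(1.20, 0.03, 1.10, 1.30))  # ~0.999 pass
  print(fraction_passing(1.20, 0.03, 1.15, 1.25))  # ~0.904 pass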

Loadboards, sockets and probecards come in different quality levels and therefore at different costs. Since they are the actual physical interface between your chip and the tester, you want to make sure they have the right quality and durability to provide solid connectivity to the tester throughout the test period; lower-quality hardware will shave off your yield figures. Sockets, for example, have a limited number of insertions, so buy a socket that matches your chip's production volume. Bottom line: don't compromise on the quality of the hardware interfacing with your chip.

There is much more to write on this topic; we promise more articles in the future. Stay tuned.


Thursday, 21 February 2013

How to read an NGC netlist file

For those occasions when you find yourself with a netlist file and don't know where it came from, what version it is, and so on, this post explains how you can interpret the netlist file (i.e. convert it into something readable).

Today I found myself with two netlists and I needed to know if they were the same. Yes of course you can try comparing the two files with a program such as Beyond Compare, but if the netlists were compiled on separate dates, you will have trouble recognizing this from the raw binary data. The best thing to do in this case is to convert the netlists to EDIF files, a readable, text file version of the netlist. Another option is to convert the netlists into VHDL or Verilog code. Here is how you can do this:

To convert a netlist (.ngc) to an EDIF file (.edf)

  1. Get a command window open by typing “cmd” in the Start->Run menu option in Windows. If you use Linux, open up a terminal window.
  2. Use the “cd” command to move to the folder in which you keep your netlist.
  3. Type “ngc2edif infilename.ngc outfilename.edf” where infilename and outfilename correspond to the input and output filenames respectively.
  4. Open the .edf file with any text editor to view the netlist.

To reverse engineer a netlist with ISE versions older than 6.1i

  1. Convert the netlist to an EDIF file using the above instructions.
  2. Type “edif2ngd filename.edf filename.ngd” to convert the EDIF file into an NGD file (Xilinx Native Generic Database file).
  3. To convert the netlist into VHDL type “ngd2vhdl filename.ngd filename.vhd”.
  4. To convert the netlist into Verilog type “ngd2ver filename.ngd filename.v”.

To reverse engineer a netlist with ISE versions 6.1i and up

  1. To convert the netlist into VHDL type “netgen -ofmt vhdl filename.ngc”. Netgen will create a filename.vhd file.
  2. To convert the netlist into Verilog type “netgen -ofmt verilog filename.ngc”. Netgen will create a filename.v file.

Now you should have all the tools you need to read an NGC netlist file.
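For the comparison scenario described above, the convert-then-diff step can also be scripted. This is only a sketch: it assumes a Python interpreter and that the ISE tools (ngc2edif) are on your PATH, and the file names are placeholders. Note that EDIF headers may still contain generation timestamps, so a few lines can differ even when the underlying logic is identical.

  import difflib
  import subprocess

  def ngc_to_edif(ngc_path, edf_path):
      # Runs the ngc2edif converter exactly as described in the steps above.
      subprocess.run(["ngc2edif", ngc_path, edf_path], check=True)

  def diff_netlists(ngc_a, ngc_b):
      ngc_to_edif(ngc_a, "a.edf")
      ngc_to_edif(ngc_b, "b.edf")
      with open("a.edf") as fa, open("b.edf") as fb:
          return list(difflib.unified_diff(fa.readlines(), fb.readlines(),
                                           fromfile=ngc_a, tofile=ngc_b))

  # Placeholder file names:
  for line in diff_netlists("design_rev1.ngc", "design_rev2.ngc"):
      print(line, end="")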


Wednesday, 20 February 2013

Micron shrinks 128Gb NAND flash memory to 146 square mm

Micron Technology on Thursday introduced the industry's smallest 128Gb NAND flash memory device made using 20nm process technology. The new 128Gb device stores three bits of information per cell (3bpc, or triple-level cell [TLC]), which makes it smaller and more cost-efficient.

Measuring 146mm², the new 128Gb TLC device is more than 25% smaller than Micron's same-capacity 20nm multi-level-cell (MLC) NAND device. The 128Gb TLC device is targeted at the cost-competitive removable storage market (flash cards and USB drives), which is projected to consume 35% of total NAND gigabytes in calendar 2013. Micron is now sampling the 128Gb TLC NAND device with select customers; it will be in production in calendar Q2.

"This is the industry's smallest, highest-capacity NAND flash memory device – empowering a new class of consumer storage applications. Every day we learn of new and innovative use cases for flash storage, underpinning the excitement and opportunity for Micron. We are committed to enriching our portfolio of leading Flash storage solutions that serve our broad customer base," said Glen Hawk, vice president of Micron's NAND solutions group.


Thursday, 14 February 2013

How and why DDR4 timing is important

JEDEC's DDR4 DRAM standard is compatible with 3D IC architectures and is capable of data transfer rates up to 3.2 gigatransfers per second, Kristin Lewotsky notes in this article. "We've got a broad population of folks who really haven't had the time or the business need to learn about DDR4," says Perry Keller of Agilent Technologies. "What we hope to do is familiarize them with DDR4: What it is, why it exists, what it can bring to their products, and how to do something practical with it." EE Times

Sunday, 10 February 2013

FPGAs: An Alternative To Cloud Computing !!

As complexity intensifies within sophisticated computing, so does the demand for more computing power. On top of that, the need to mine data from the burgeoning mountain of Internet search data has led to huge data centers that must be located close to water to feed their massive equipment cooling systems.

Weather modeling, for instance, continues to drill down into smaller geographical elements to fine-tune accuracy. And longer and more sophisticated encryption keys require greater compute power to crack them.

New tasks are also emerging in fields ranging from advertising to gene sequencing. Companies in the bio-sciences area gain competitive advantage based on the speed they’re able to sequence the genes held in DNA samples. Drug companies rely heavily on computer modeling to identify suitable candidate chemical formulas that may be useful in combating diseases.

In security, the focus has turned to deep packet inspection and application-aware monitoring. Companies now routinely deploy firewalls that are able to break into individual communication streams and identify traffic specific to, say, social networking sites, which can be used to help stop malicious attacks on corporate assets.

The Server Farm Approach

Traditionally, increases in processing requirements are handled with a brute-force approach: develop server farms that simply throw more microprocessor units at a problem. The heightened clamor for these server farms creates new problems, though, such as how to bring enough power into a server room and how to remove the generated heat. Space requirements are another problem, as is the complex management of the server farm needed to ensure factors like optimal load balancing and to guarantee return on investment.

At some point, the rationale for these local server farms runs out as the physical and heat problems become too large. Enter the great savior to these problems, otherwise known as “The Cloud.” In this model, big companies will hire out operating time on huge computing clusters.

In a stroke, companies can make physical problems disappear by offloading this IT requirement onto specialised companies. However, it’s not without problems:

  • Depending on the data, there may be a requirement for high-bandwidth communications to and from the data centre.
  • A third party is added into the value chain, and it will try to make money out of the service based on used computing time.
  • Rather than solving the power problem, it’s simply been moved—exactly the same amount of computing needs to be undertaken, just in a different place.

This final issue, which raises fundamental problems that can’t be solved with traditional processor systems, splits into two parts:

  • Software: Despite advances in software programming tools, optimization of algorithms for execution on multiple processors is still far away. It’s often easy to break down a problem into a number of parallel computations. However, it’s much harder for the software programmer to handle the concept of pipelining, where the output of one stage of operation is automatically passed to the next stage and acted upon. Instead, processors perform the same operation on a large array of data, pass it to memory, and then call it back from memory to perform the next operation. This creates a huge overhead on power consumption and execution time (see the toy sketch after this list).
  • Hardware: Processor systems are designed to be general. A processor’s data path is typically 32 or 64 bits. The data often requires much smaller resolution, leading to large inefficiencies as gates are clocked unnecessarily. Frequently, it becomes possible to pack data to fill more of the available data width, although this is rarely optimal and adds its own overhead. In addition, the execution units of a processor aren’t optimised to the specific mathematical or data-manipulation functions being undertaken, which again leads to huge overheads. 
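Here is the toy sketch referred to above, using plain Python to stand in for what would really be hardware stages. The "staged" version sweeps the whole array and writes an intermediate result back to memory after every operation; the "pipelined" version passes each sample through all stages before touching the next one. The coefficients are arbitrary.

  samples = list(range(1024))
  coeff, offset = 3, 7

  # Processor-style: each operation touches the whole array and the
  # intermediate results make a round trip through memory.
  stage1 = [x * coeff for x in samples]
  stage2 = [x + offset for x in stage1]
  staged = [x >> 2 for x in stage2]

  # Pipeline-style: each sample flows through all three stages back to back,
  # the way chained logic in an FPGA datapath would process it.
  pipelined = [((x * coeff) + offset) >> 2 for x in samples]

  assert staged == pipelined  # same result, very different data movement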

The FPGA Approach

In the world of embedded products, a common computing-power approach is to develop dedicated hardware in an FPGA. These devices are programmed using silicon design techniques to implement processing functions much like a custom-designed chip. Many papers have been written on the relative improvements between processors, FPGAs, and dedicated hardware. Typical speed/power-consumption improvements range between a factor of 100 and 5000. 

Following in that vein, a recent study performed by Plextek explored how a single FPGA could accelerate a particular form of gene sequencing. Findings revealed an increase of just under a factor of 500. This can be viewed either as a significantly shorter time period or as an equipment reduction from 500 machines to a single PC. In reality, though, the savings will be a balance of the two.

Previously, these benefits were difficult to achieve for two reasons:

  • Interfacing: Dedicated engineering was required to develop FPGA systems that could easily access data sets. Once the data set changes, new interfacing requirements arise, which means a renewed engineering effort.
  • Design cycle time: The time it takes for an algorithm engineer to explain his requirements to a digital design engineer, who must then convert it all into VHDL along with the necessary testbenches to verify the design, simply becomes too long for exploratory algorithm work.

Now both of these problems have largely been solved thanks to modern FPGA devices. The first issue is resolved with embedded processors in the FPGA, which allow for more flexible interfacing. It’s even possible to access FPGA devices directly via Ethernet or the Internet. For example, Plextek developed FPGA implementations that don’t have to go through interface modifications every time a requirement or data set changes.

To solve the second problem, companies such as Plextek have been working closely with major FPGA manufacturers to exploit new toolsets that can convert algorithmic descriptions defined in high-level mathematical languages (e.g., Matlab) into a form that easily converts into VHDL. As a result, significant time is saved from developing extensive testbenches to verify designs. Although not completely automatic, the design flow becomes much faster and much less prone to errors.

This doesn’t remove the need for a hardware designer, although it’s possible to develop methodologies to enable a hierarchical approach to algorithm exploration. The aim is to shorten the time between initial algorithm development and final solution.

Much of the time spent during algorithm exploration involves running a wide set of parameters through essentially the same algorithm. Plextek came up with a methodology that speeds up the process by providing a parameterised FPGA platform early in the process (see the figure). The approach requires the adoption of new high-level design tools, such as Altera’s DSP Builder or Xilinx’s System Generator.

A major portion of time involved in algorithm exploration revolves around running a wide set of parameters through essentially the same algorithm. Plextek’s methodology provides a parameterized FPGA platform early in the process, which saves a significant amount of time.

A key part of the process is jointly describing the algorithm parameters that are likely to change. After they’re defined, the hardware designer can deliver a platform with super-computing power to the scientist’s local machine, one that’s tailored to the algorithm being studied. At this point, the scientist can very quickly explore parameter changes to the algorithm, often being able to explore previously time-prohibitive ideas. As the algorithm matures, some features may need updating. Though modifications to the FPGA may be required, they can be implemented much faster.

A side benefit of this approach is that the final solution, when achieved, is in a hardware form that can be easily scaled across a business. In the past, algorithm exploration may have used a farm of 100 servers, but when rolled out across a business, the server requirements could increase 10- or 100-fold, or even to thousands of machines.  With FPGAs, equipment requirements will experience an orders-of-magnitude reduction.

Ultimately, companies that adopt these methodologies will achieve significant cost and power-consumption savings, as well as speed up their algorithm development flows.
