Wednesday, 30 January 2013

Rambus Introduces R+ LPDDR3 Memory Architecture Solution

Sunnyvale, California, United States - January 28, 2013 – Rambus Inc., the innovative technology solutions company that brings invention to market, today announced its first LPDDR3 offering targeted at the mobile industry. In the Rambus R+ solution set, the R+ LPDDR3 memory architecture is fully compatible with industry standards while providing improved power and performance. This allows customers to differentiate their products in a cost-effective manner with improved time-to-market. Further helping improve design and development cycles, the R+ LPDDR3 is also available with Rambus’ collaborative design and integration services.

The R+ LPDDR3 architecture, which includes both a controller and a DRAM interface, can reduce active memory system power by up to 25% and supports data rates of up to 3200 megabits per second (Mbps), double the performance of existing LPDDR3 technologies. These improvements in power efficiency and performance enable longer battery life and enhanced mobile device functionality for streaming HD video, gaming and data-intensive apps.

“Each generation of mobile devices demands even higher performance with lower power. The R+ LPDDR3 technology enables the mobile market to use our controller and DRAM solutions to provide unprecedented levels of performance with significant power savings,” said Kevin Donnelly, senior vice president and general manager of the Memory and Interface Division at Rambus. “Since this technology is part of our R+ platform, beyond the improvements in power and performance we’re also maintaining compatibility with today’s standards to ensure our customers have all the benefits of Rambus’ superior technology with reduced adoption risk.”

At the heart of the improved power and performance offered by the R+ LPDDR3 architecture is a low-swing implementation of the Rambus Near Ground Signaling technology. Essentially, this single-ended, ground-terminated signaling technology allows devices to achieve higher data rates with significantly reduced IO power. The R+ LPDDR3 architecture is built from the ground up to be backward compatible with LPDDR3, supporting the same protocol and power states as well as existing package definitions and system environments.

Additional key features of the R+ LPDDR3 include:

  • 1600 to 3200Mbps data rates
  • Multi-modal support for LPDDR2, LPDDR3 and R+ LPDDR3
  • DFI 3.1 and JEDEC LPDDR3 standards compliant
  • Supports package-on-package and discrete packaging types
  • Includes LabStation™ software environment for bring-up, characterization, and validation in the end-user application
  • Silicon proven design in GLOBALFOUNDRIES 28nm-SLP process


India fab decision likely this quarter

BANGALORE, India—A decision on the long-pending proposal to set up a domestic wafer fab in India is likely to be made by the end of March, semiconductor industry executives said.
India has been debating building a wafer fab for years. A decision on whether to move forward with a plan to build a fab here had been promised by the end of 2012. But that deadline came and went with no decision made.

Still, semiconductor industry executives here said they are now far more confident that a decision on the plan by India's national government is imminent. Their optimism is based on policies conducive to domestic electronics manufacturing that were adopted last year, with others still in the works.

"Electronics manufacturing is being looked at more progressively," a representative of the India Semiconductor Association (ISA) said here Tuesday (Jan. 22).

A study conducted by the ISA and market researcher Frost & Sullivan on this market projected a compound annual growth rate of nearly 10 percent for India's electronics, system design and manufacturing market from 2011 through 2015. The market is expected to grow to $94.2 billion in 2015 from $66.6 billion in 2011, according to the study.

"This is a fantastic growth rate and should show the way to product development and value-added manufacturing domestically, rather than relying on imports and low value-added screwdriver assembly," the ISA said.

Fears loom over the import bill for electronic products, though, with 65 percent of India's demand currently met by imports.

Another worry is the projected decline in high value-added manufacturing within the country. Of the total electronics market of $44.81 billion in 2012, high value-added domestic manufacturing accounted for just $3.55 billion. The ISA is concerned that this decline in high value-added manufacturing will cause a cumulative opportunity loss of a whopping $200 billion by 2015.

India’s semiconductor market revenue was an estimated $6.54 billion in 2012. The country's semiconductor design industry, which is expected to grow at a 17 percent CAGR, amounted to over $10 billion in 2012. Of this, VLSI design accounted for $1.33 billion, embedded software for $8.58 billion and board/hardware design for $672 million.


Sunday, 27 January 2013

A Realistic Assessment of the PC's Future

All around us, the evidence is overwhelming that the PC world is changing rapidly and in numerous ways -- use, sales, share of the electronics/IT equipment market, application development, and, very importantly, the surrounding supply chain.

Certainly, the PC has a future in our homes and businesses, but don't let anyone convince you they know exactly how that future will look or where things will remain the same over the next five years. Within a few years, the PC market will lose its title as the dominant consumer of semiconductors -- if it hasn't already. In the near future, the leading destination for many components used in traditional PCs will be tablet and smartphone plants.

The supply chain, especially the procurement and production elements, must be focused on accelerating that transition. I don't believe that's the case today, though the trends have been apparent for quite a few quarters. As consumers have migrated toward mobile devices, especially smartphones, the consequences for PC vendors and their component suppliers have become obvious. But apparently, they aren't obvious enough.

Intel Corp., the company with the most to lose as this shift has accelerated, has worked to establish a beachhead in the smartphone market. Nevertheless, many well-meaning analysts and industry observers have continued to spout the misleading view that the PC sector is unshakeable. The general opinion for a while was that tablets and smartphones would serve as complementary products to the traditional PCs, rather than cannibalizing the market. Think again.

Paul Otellini, Intel's president and CEO, had this to say about the changes in his company's market during a fourth-quarter earnings conference call.

From a product perspective, 2012 was a year of significant transitions in our markets and a year of important milestones for Intel...
At CES last week, I was struck by our industry's renewed inventiveness. PC manufacturers are embracing innovation as we are in the midst of a radical transformation of the computing experience, with the blurring of form factors and the adoption of new user interfaces.
It's no longer necessary to choose between a PC and a tablet.

Let's turn to an IDC report released Monday for further explanation. The research firm said it sees PC innovation accelerating over the next few years as OEMs struggle to stem their losses and blunt the impact of smartphones on the market. PC OEMs and chip vendors can no longer afford to be complacent, IDC said; they must compete on all levels with tablet and smartphone manufacturers to demonstrate the continued relevance of their products.

This view implies that PC vendors and their suppliers have been satisfied with the status quo until now. That would be putting it mildly. Until Apple Inc. rolled out the iPhone and positioned it as an alternative platform for accessing the Internet, many OEMs didn't see smartphones as competing devices. IDC said in its report:

Complacency and a lack of innovation among OEM vendors and other parts of the PC ecosystem have occurred over the past five years. As a result, PC market growth flattened in 2012 and may stagnate in 2013 as users continue gravitating to ever more powerful smartphones and tablets.

Ouch. Some in the industry still believe tablets and smartphones aren't an arrow aimed at the PC market. I don't see tablets and smartphones replacing PCs in all situations, but they will encroach enough on that territory to leave a visible mark. That's why PC vendors, semiconductor suppliers, and manufacturers of other components need to develop a strategy that embraces the smaller form factors of tablets and smartphones and leverages their advantages over traditional computing platforms to create market-winning products.

Mario Morales, program vice president for semiconductors and EMS at IDC, said in a press release, "The key challenge will not be what form factor to support or what app to enable, but how will the computing industry come together to truly define the market's transformation around a transparent computing experience."

That conversation is a couple of years late, but it's welcome nonetheless.

Wednesday, 23 January 2013

Reusable VHDL IP in the Real World

Reuse has been an industry buzzword for years now. It is hardly a new idea, and probably goes back as far as the time when man first realized he could use the same fire both for keeping warm and for roasting his sabre‐tooth tiger ribs. When it comes to IP, reuse can be an extremely powerful way of saving resources and shortening project timescales.

At RF Engines, we find that reusing existing IP is very desirable. Not only does it save us development time and help us fulfill challenging delivery requirements, but making use of pre‐proven components also helps give customers confidence in our designs.

Another dimension to this is that multi‐FPGA designs are becoming increasingly widespread, and there are obvious advantages to having a chip‐level infrastructure that is reusable within the project.

Reuse, then, is clearly A Good Thing. However, in practice it has often proven surprisingly difficult to achieve. So it is worth bearing in mind a few principles and techniques that can be applied to make it more straightforward. Though it would be foolish to say that creating reusable VHDL doesn’t require any extra effort, it frequently pays significant dividends in the longer term.

At the beginning of implementation, thinking about what else your component might be useful for in the future may not be the first thing on your mind. Sometimes it’s not obvious that your component has multiple applications, and that reuse should be one of your design goals. But even when you don’t know what the component might be useful for in the future, if you take a step back, it’s often easy to see a few aspects of the design which can be written in a flexible way without too much of an overhead.

One issue that often arises in design for reuse is that many of the language features which make design for reuse easier are software‐style constructs. Hardware engineers may be wary of these constructs as they can obscure the relationship between the VHDL and the resulting implementation, but if used carefully they can be a great help. Here are a few examples.

Using a generic to set parameters such as the width of entity ports is common practice. It not only makes the component easier to reuse in other circumstances, but helps to document the code, since a meaningful generic name can be used instead of an apparently random number.
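
For instance, a minimal sketch (the my_block entity name is illustrative; data_in is the generic-width port referred to below):

library ieee;
use ieee.std_logic_1164.all;

entity my_block is
  generic (
    data_width : natural := 16   -- width of the data ports
  );
  port (
    clk      : in  std_logic;
    data_in  : in  std_logic_vector(data_width - 1 downto 0);
    data_out : out std_logic_vector(data_width - 1 downto 0)
  );
end entity my_block;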

Any code which depends on the width of such ports or signals then also needs to be parameterized on the generic, e.g. to produce a reduction‐and of data_in above:
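
One way of writing this, as a sketch (the all_ones signal is illustrative):

-- Reduction-AND of data_in; the loop bounds follow data_in'range,
-- so nothing here needs to change if data_width is altered.
reduce_and : process(data_in)
  variable result : std_logic;
begin
  result := '1';
  for i in data_in'range loop
    result := result and data_in(i);
  end loop;
  all_ones <= result;
end process reduce_and;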

Generics are also useful to make different variants of a component. You can add a Boolean generic and use an if generate statement to implement optional code, rather than writing a different variant of an entity.
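
As an illustrative sketch (the include_scaling generic and the scaling operation itself are invented for the example):

-- In the entity: generic (include_scaling : boolean := false);
gen_scaling : if include_scaling generate
  -- Optional divide-by-two stage, only elaborated when the generic is true.
  scaled_data <= '0' & data_in(data_in'high downto 1);
end generate gen_scaling;

gen_no_scaling : if not include_scaling generate
  scaled_data <= data_in;
end generate gen_no_scaling;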

Another useful trick is to gather collections of signals together using arrays. Signal processing applications often operate on blocks of data at a time, and the operations often have a strongly regular structure. Using arrays in this context can help extensibility and readability. For example, say you want to instantiate several copies of an entity to perform a signal processing operation, each working with one of a number of parallel data streams. You could write this as:
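
A sketch along these lines, assuming a processing entity called proc_unit in the work library (that entity, and the clk signal, are illustrative):

constant data_width      : natural := 16;
constant collection_size : natural := 4;

subtype data_vec is std_logic_vector(data_width - 1 downto 0);
type collection_type is array (1 to collection_size) of data_vec;

signal data_collection_in  : collection_type;
signal data_collection_out : collection_type;

-- One processing element per parallel data stream.
gen_units : for i in collection_type'range generate
  unit_i : entity work.proc_unit
    port map (
      clk      => clk,
      data_in  => data_collection_in(i),
      data_out => data_collection_out(i)
    );
end generate gen_units;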

The width of the data vectors, or the number of vectors in a collection, can be changed simply by altering the relevant constant. Notice also the use of the 'range attribute, which yields the range of an array type, variable or signal. You could alternatively use the explicit range 1 to collection_size instead of collection_type'range, but that is a little more restrictive as it assumes that collection_type's range starts at 1. Commonly in a signal processing application, once you have operated on each element of your data block, you will want to perform another operation on the results. For example, you could sum the results from all the operations above using a process such as:
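
A sketch (the result_sum signal is illustrative, and ieee.numeric_std is assumed for the unsigned type):

sum_results : process(clk)
  -- 'range gives sum the same dimensions as the data_vec element type.
  -- A real design would widen sum to avoid overflow.
  variable sum : unsigned(data_vec'range);
begin
  if rising_edge(clk) then
    sum := (others => '0');
    for i in data_collection_out'range loop
      sum := sum + unsigned(data_collection_out(i));
    end loop;
    result_sum <= std_logic_vector(sum);
  end if;
end process sum_results;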

This is again very flexible as it doesn’t rely on knowing the data width or number of results. The ‘range attribute has been used again, this time to help us create an unsigned vector of the same dimensions as data_vec (which is a std_logic_vector). Of course, the code above could produce a very large adder chain and so in practice a more sophisticated pipelined architecture would almost certainly be necessary.

There are a couple of pitfalls to watch out for in this example. One is that VHDL requires the element type (data_vec) of the array (collection_type) to be globally static. That unfortunately means you can’t use a generic to parameterize both the width of data_vec and the size of collection_type (hence the use of constants instead). In fact, collection_size could be a generic in the example, but data_width could not. There are various ways to get around this but none are perfect. One is to use a two-dimensional array, which many synthesis tools now support. In this case, if we wrote:
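
A one-line sketch, reusing the constant names from above:

-- A single two-dimensional array replaces the array-of-vectors type.
type collection_type is array (1 to collection_size, data_width - 1 downto 0) of std_logic;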

then both collection_size and data_width could be generics. This can make it a little messier to access the data “rows”, so it is a trade-off of flexibility against simplicity.

One other issue in this area is that, if data_collection_in/out had been entity ports, many synthesis tools would have broken them up into separate vectors, which could cause incompatibilities when doing gate level simulation. If you are unwilling to take this hit, you could create a wrapper which assigns the array elements to individual ports at the top‐level.

A final tip for making design reuse easier is to put a component declaration for your block in a package. Usually, you would have to place a component declaration in every architecture where your block is to be used, but if you create a package which contains a component declaration, then you save yourself and subsequent users the hassle:
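
A sketch of such a package, reusing the illustrative my_block entity from earlier (the package name my_block_pkg is likewise illustrative):

library ieee;
use ieee.std_logic_1164.all;

package my_block_pkg is

  component my_block
    generic (
      data_width : natural := 16
    );
    port (
      clk      : in  std_logic;
      data_in  : in  std_logic_vector(data_width - 1 downto 0);
      data_out : out std_logic_vector(data_width - 1 downto 0)
    );
  end component;

end package my_block_pkg;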

Users of the block then simply need to reference the package, instead of re-declaring the component:
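
A sketch (the top_level architecture and the a and b signals are illustrative):

library ieee;
use ieee.std_logic_1164.all;
use work.my_block_pkg.all;   -- the component declaration comes from the package

architecture rtl of top_level is
  signal a, b : std_logic_vector(15 downto 0);
begin

  u_my_block : my_block
    generic map (data_width => 16)
    port map (
      clk      => clk,       -- clk assumed to be a port of the top_level entity
      data_in  => a,
      data_out => b
    );

end architecture rtl;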

Is it worth it?

So is design – or redesign – for reuse really worth the effort? It may seem like a lot of work at the outset, but it can bring significant benefits in the long term. Or, looking at it the other way round, if you don’t have reusable components, you could end up wishing you had. Remember, a stitch in time saves nine. A bird in the hand is worth two in the bush. Many hands make light work but too many cooks spoil the broth... well, you get the idea.


IHS iSuppli: IC inventories hit record levels in Q3

Chip inventories reached record highs near the end of 2012, and according to IHS iSuppli, semiconductor revenue will decline in Q1, prompting new concerns about the state of the market.

Overall semiconductor revenue is expected to slide three percent between January and March 2013, on top of a 0.7 percent decline in Q4 2012. What's more, inventory reached record levels in Q3 2012, amounting to 49.3 percent of revenue, more than at any point since Q1 2006. IHS iSuppli believes the uncomfortably high level of inventory points to the failure of key demand drivers to materialize.

The PC market remains slow and hopes of a Windows 8 renaissance have turned into a nightmare. Bellwether Intel saw its revenue drop three percent in Q4, with profit tumbling 27 percent, and the trend is set to continue. AMD is expected to announce its earnings Tuesday afternoon and more gloom is expected across the board. The only bright spot in an otherwise weak market is TSMC, which quickly rebounded after posting the lowest revenues in two years a year ago. TSMC now expects to see huge demand for 28nm products in 2013 and many observers point to a possible deal with Apple.

In addition, TSMC plans to invest $9 billion in capital expenditure in 2013, and it will likely spend even more in 2014 as it moves to ramp up 20nm production. However, Intel's plans to increase capital spending to $13 billion, up $2 billion over 2012 levels, have not been welcomed by analysts and investors. Unlike TSMC, Intel is not investing to increase capacity in the short term; instead, it is making a strategic bet on 450mm wafer technology, which promises to deliver significantly cheaper chips compared to existing 300mm wafers. However, 450mm plants are still years away.

TSMC's apparent success has a lot to do with high demand for smartphones and tablets, which are slowly eating into the traditional PC market. Semiconductor shipments for the wireless segment were expected to climb around four percent in 2012, and positive trends were visible in analog, logic and NAND components. However, the mobile boom can't last forever, and we are already hearing talk of "smartphone fatigue" and "peak Apple".

IHS iSuppli estimates that the first quarter of 2013 will see growth in industrial and automotive electronics, and that other semiconductor markets will eventually overcome the seasonal decline, so a rebound is expected in the second and third quarters.

Semiconductor revenue could grow by four percent in the second quarter and nine percent in the third. However, these assumptions depend on a wider economic recovery, which is anything but certain at this point. If demand evaporates, semiconductor suppliers could find themselves hit by oversupply, leading to more inventory write-downs throughout the year.


Tuesday, 22 January 2013

What Is A 'Clocking Block'?

In Verilog, a module is the basic unit for any design entity. SystemVerilog extends this to include other design entities such as an interface, a program block and, last but not least, a clocking block. An interface separates how a design communicates with the rest of the system from the design itself. A program block separates testbench functionality from the silicon-implementable design. And a clocking block specifies clock signals and the timing and synchronization requirements of various blocks. A clocking block is helpful in separating the clocking activities of a design from its data assignment activities and can be used to great effect in testbenches.

A clocking block assembles all the signals that are sampled or synchronized by a common clock and defines their timing behavior with respect to that clock. It is defined by a clocking-endclocking keyword pair. Perhaps an example will describe this best.

clocking clock1 @(posedge clk1);
   default input #2ns output #3ns;
   input a1, a2;
   output b1;
endclocking
In the above example,

  1. The name of the clocking block is clock1. You can have as many clocking blocks in your environment as you want, and a single design may contain multiple clocking blocks that refer to the same clock, inputs or outputs.
  2. The clock associated with this clocking block is clk1. Each clocking block must have at least one clock associated with it.
  3. The default keyword defines the default skew for inputs (2 ns) and outputs (3 ns).
  4. The input and output keywords define the input and output signals associated with the clock and the skew defined earlier.
  5. One thing to note here is that input and output declarations inside a clocking block do not specify the data width.

A clocking block is both a declaration and an instance of that declaration, and it can only occur within a module, interface or program block (in the same scope as an always block). Variables inside a clocking block can be accessed by specifying the full pathname. For instance, if the full pathname for clock1 above is top.test.clock1, the full pathname for variable a1 is top.test.clock1.a1.

A clocking block only describes how the inputs and outputs are sampled and synchronized. It does not assign a value to a variable; that is left to the module, interface or program that the clocking block is part of. While the parent block assigns values to variables, the clocking block defines how inputs are sampled and outputs are synchronized for that parent. This is why an input or output declaration inside a clocking block does not need to specify any data width, since width is only relevant when you assign a value to a variable or read from it.

