Sunday, 30 November 2014

Intel funding to develop printer for the blind

A 13-year-old Indian-origin boy has received a huge investment from Intel for developing a low-cost printer for the blind, making him the youngest tech entrepreneur funded by a venture capital firm.

Shubham Banerjee, CEO of the Braille printer maker Braigo Labs, closed an early round of funding with Intel Capital, the company's venture capital arm, last month to develop a prototype of a low-cost Braille printer.

But to attend the event where the investment was announced, Banerjee had to take the day off from middle school. That’s because he’s just 13 years old — making him, quite possibly, the youngest recipient of venture capital in Silicon Valley history. (He’s definitely the youngest to receive an investment from Intel Capital.)

“I would like all of us to get together and help the visually impaired, because people have been taking advantage of them for a long time,” Banerjee said. “So I would like that to stop.”

By “taking advantage,” Banerjee is referring to the high price of Braille printers today, usually above $2,000. By contrast, Braigo Labs plans to bring its printer to market for less than $500.

Banerjee has invented a new technology that will facilitate this price cut. Patent applications are still pending, so he wouldn’t divulge any of the details. But the technology could also be used to create a dynamic Braille display — something that shows one line of text at a time by pushing small, physical pixels up and down, and which currently costs $6,500, according to Braigo advisor Henry Wedler, who is blind.

Banerjee also figures that volume production will help keep the price low. Braille printers currently cost so much because demand is low, so manufacturers need to set high prices in order to recoup their costs.

“The truth is that demand is low in the U.S.,” Banerjee told me. But, he added, if you brought the price low enough there would be huge demand outside the U.S.

Banerjee built the first version of his Lego Braille printer for a science fair. He didn’t know anything about Braille beforehand. In fact, he’d asked his parents how blind people read, he said onstage, and they were too busy to answer. “Go Google it,” he said they told him, so he did.

After learning about Braille, he came up with the idea to make a Braille printer. He showed it at his school’s science fair, then later entered it into the Synopsys Science & Technology Championship, where he won first prize, which included a big trophy and a $500 check.

After that, he started getting a lot of attention on his Facebook page. People kept asking him if they could buy one, he said, which led to the idea of creating a company.

Lego was just for the first prototype, by the way: Future versions will be made with more traditional materials.

So how did Intel come to invest in such a young inventor? His father, Niloy, works for Intel — but that’s not exactly how it happened, according to Niloy.

After Banerjee worked with the beta version of Intel Edison (the chip company’s tiny embeddable microprocessor) at a summer camp, his project came to the attention of Intel, which invited him to show off his printer at the Intel Developer Forum. After his appearance at IDF, Intel Capital came calling.

Young Banerjee seems composed in front of crowds, which should serve him well. (That’s not surprising, given that Braigo’s website touts coverage on everything from BoingBoing and SlashGear to CNN and NPR.) When asked onstage, in front of 1,000 entrepreneurs, investors, and Intel employees, how he knew that the printer worked even though he doesn’t read Braille, Banerjee answered immediately, “I Googled it.” The crowd laughed.

“I’m happy that I live in Silicon Valley,” Banerjee said. “So many smart people.”

Saturday, 14 June 2014

Solar energy may soon compete with fossil fuels

Solar panel prices have plummeted more than 80 percent in recent years, but solar power is still more expensive than fossil-fuel power in most places, and accounts for a tiny fraction of the world’s energy supply. Few people have a better idea of what it will take to make solar compete with fossil fuels than Richard Swanson, who founded SunPower, a major solar company that sells some of the industry’s most efficient solar panels. During SunPower’s almost 30 years in existence, Swanson has seen many types of solar technology come and go. At the IEEE Photovoltaic Specialists Conference in Denver this week, he sat down with MIT Technology Review’s Kevin Bullis to talk about where the solar industry is now, and where it needs to go in order to become a major source of electricity.

How close is solar to competing with fossil fuels?

It’s darn close. In 2000, a big study concluded that solar panels could get below $1 per watt [a level thought to make solar competitive in many markets]. At the time, everyone in the industry thought the authors were in la-la land. Now we’re under that cost target.

It’s not like there’s one cost number and then everything is all good. The value of solar power varies widely, depending on local sources of electricity that it’s competing with, the amount of sunlight, and other factors. There are geographic pockets where it’s becoming quite competitive. But we need to cut costs by at least another factor of two, which can happen in the next 10 years.

What new technology will it take?

Solar panels now account for less than half of the cost of a solar panel system. For example, installers spend a lot of time and money designing each rooftop solar system. They need to have a certain number of panels in a row, all getting the same amount of sunlight. A bunch of companies are automating the process, some with the help of satellites. One of the most exciting things is microinverters [electronics that control solar panel power output] that allow you to stick solar panels anywhere on a roof—it’s almost plug and play.

To almost everyone’s surprise, silicon is still chugging along. The new developments are pretty amazing. Panasonic, Sharp, and SunPower just announced solar cells that break a long-standing efficiency record. We need to do things like keep improving efficiency with new solar cell architectures, like the one Panasonic used. There are three basic new cell structures, and all of them are nearing or are already in production. We need to make thinner silicon wafers and improve ways of growing crystalline silicon. We need to switch to frameless solar panels because the cost of the aluminum frame hasn’t been going down much. We need to get rid of silver electrical contacts and replace them with cheaper copper. It’s tricky, but it can be done.

Is investment in research on batteries for storing solar power just as important as developing new solar cell designs?

Oh, yes. Not just batteries—other kinds of storage, too, like thermal storage. A wild card is the possibility of using batteries in electric cars to store solar power. That could change everything. People say that the battery industry will never meet the cost targets you need for this. But I come from an industry that just blew away cost targets. It’s hard for me to argue they won’t be able to do it.

Thursday, 23 January 2014

3D graphene-like material promises super electronics

Graphene – the thinnest and strongest known material in the universe and a formidable conductor of electricity and heat – gets many of its amazing properties from the fact that it occupies only two dimensions: it has length and width but no height, because it's made of a single layer of atoms. But this special characteristic sometimes makes it difficult to work with, and a challenge to manufacture.

Researchers around the world have looked for ways to take full advantage of its many desirable properties. Now scientists have discovered a material that has an electronic structure similar to graphene's but exists in three dimensions rather than as a flat sheet; it could lead to faster transistors and more compact hard drives.

 Plot of energy levels of electrons in trisodium bismuthide showing that this bulk material has properties similar to graphene.

The material is called a three-dimensional topological Dirac semi-metal (3DTDS) and is a form of the chemical compound sodium bismuthide, Na3Bi.

The 3DTDS was discovered by researchers led by scientists from Oxford University, Diamond Light Source, Rutherford Appleton Laboratory, Stanford University, and Berkeley Lab's Advanced Light Source.

'The 3DTDS we have found has a lot in common with graphene and is likely to be as good or even better in terms of electron mobility – a measure of both how fast and how efficiently an electron can move through a material,' said Dr Yulin Chen of Oxford University's Department of Physics.

'You can think of the electronic structure of the 3DTDS as being rather like that of the graphene – the so called ''Dirac cone'' where electrons collectively act as if they forget their mass – but instead of flowing masslessly within a single sheet of atoms, the electrons in a 3DTDS flow masslessly along all directions in the bulk.'

Moreover, unlike in graphene, electrons on the surface of the 3DTDS remember their 'spin' – a quantum property akin to the orientation of a tiny magnet that can be used to store and read data – so that the magnet information can be directly transferred by the electric current, which could enable faster and more efficient spintronic devices.

'An important property of this new type of material is its magnetoresistance – how its electrical resistance changes when a magnetic field is applied,' said Dr Chen. 'In typical Giant Magnetoresistance Materials (GMR) the resistance changes by a few tens of percent and then saturate but with 3DTDS it changes 100s or 1000s of percent without showing saturation with the external magnetic field. With this much larger effect we could make a hard drive that is higher intensity, higher speed, and lower energy consumption – for example turning a 1 terabyte hard drive into a drive that can store 10 terabytes within the same volume.'

While this particular compound is too unstable to use in devices, the team is testing more stable compounds and looking for ways to tailor them for applications.

Dr Chen said: 'Now that we have proved that this kind of material exists, and that such compounds can have one of the highest electron mobilities of any material so far discovered, the race is on to find more such materials and their applications, as well as other materials with unusual topology in their electronic structure.'

Thursday, 12 December 2013

Broadcom releases satellite-constellation location IC

Broadcom Corporation has introduced a Global Navigation Satellite System (GNSS) chip, designated BCM47531, that generates positioning data from five satellite constellations simultaneously (GPS, GLONASS, QZSS, SBAS and BeiDou), totaling 88 satellites. The newly added Chinese BeiDou constellation increases the number of satellites available to a smartphone, enhancing navigation accuracy, particularly in urban settings where buildings and obstructions can cause interference.

The company’s new GNSS SoC is based on its widely deployed architecture that reduces the “time to first fix” (TTFF) and allows smartphones to quickly establish location and rapidly deliver mapping data. The SoC also features a tri-band tuner that enables smartphones to receive signals from all major navigation bands (GPS, GLONASS, QZSS, SBAS, and BeiDou) simultaneously.

The BCM47531 platform is available with Broadcom’s Location Based Services (LBS) technology that delivers satellite assistance data to the device and provides an initial fix time within seconds, instead of the minutes that may be required to receive orbit data from the satellites themselves.

The BCM47531 brings a number of powerful features to the table:

  • Simultaneous support of five constellations (GPS, GLONASS, QZSS, SBAS and BeiDou) allows for position calculations based on measurements from any of 88 satellites.
  • Broadcom's tri-band tuner brings the ability to receive all navigation bands, GPS (which includes QZSS and SBAS), GLONASS and BeiDou simultaneously to the commercial GNSS market without having to reconfigure and hop between bands.
  • Utilizes BeiDou signals for up to 2x improved positioning accuracy.
  • Best-in-class Assisted GNSS (AGNSS) data available worldwide from Broadcom's hosted reference network.
  • Allows a device to interchangeably use the best signal from any satellite regardless of the constellation, ensuring better accuracy in urban and mountainous environments.
  • Features advanced digital signal processing for interference rejection that enables satellite signal search and tracking during LTE transmission.
  • Leverages Broadcom's connectivity solutions including Wi-Fi, Bluetooth Smart, Near Field Communications (NFC), the Indoor Messaging System (IMES) and handset inertial sensor data for best indoor/outdoor location.

Nanobubbles with graphene - diamond substrate

Scientists at the National University of Singapore have come up with a way to trap liquids inside nanoscale bubbles made of graphene, topping a diamond substrate. "We discovered a way to bond the two materials together by heating the diamond to its reconstruction temperature where its surface hydrogen is desorbed," said Kian Ping Loh, the research team leader.

The team were able to use these graphene bubbles as high pressure chemical reactors to perform reactions that are normally forbidden, such as fullerene polymerisation.

Anvil cells generate extremes of pressure by applying a force over as small an area as possible. As one of the thinnest elastic membranes in existence, graphene can be strain-engineered to form nanometre-scale bubbles, spaces small enough to reach extremes of pressure when heated. Because the bubbles are impermeable to almost any fluid, graphene could be used to seal and pressurise fluids in nano-sized liquid cells.

Thursday, 27 June 2013

Quantum-tunneling technique promises chips that won't overheat

Researchers at Michigan Technological University have employed room-temperature quantum tunneling to move electrons through boron nitride nanotubes. Semiconductor devices made with this technology would need less power than current transistors require, while also not generating waste heat or leaking electrical current, according to the research team.

Rather than relying on the predictable flow of electrons in conventional circuits, the new approach depends on quantum tunneling, in which electrons pass straight through barriers that should be able to hold them back, seeming to turn up at a new location without ever crossing the space in between. This appears to be under the direction of a cat that is possibly dead and alive at the same time, but we might have gotten that bit wrong.

There is a lot of good that could come out of building such a computer circuit. For a start, the circuits are built by creating pathways for electrons to travel across a bed of nanotubes, and are not limited by any size restriction relevant to current manufacturing methods.

Saturday, 18 May 2013

Flexible heart monitor thinner than a dollar bill

Stanford engineers combine layers of flexible materials into pressure sensors to create a wearable heart monitor thinner than a dollar bill. The skin-like device could one day provide doctors with a safer way to check the condition of a patient's heart.

Most of us don't ponder our pulses outside of the gym. But doctors use the human pulse as a diagnostic tool to monitor heart health.

Zhenan Bao, a professor of chemical engineering at Stanford, has developed a heart monitor thinner than a dollar bill and no wider than a postage stamp. The flexible skin-like monitor, worn under an adhesive bandage on the wrist, is sensitive enough to help doctors detect stiff arteries and cardiovascular problems.

The devices could one day be used to continuously track heart health and provide doctors a safer method of measuring a key vital sign for newborns and other high-risk surgery patients.

Tuesday, 16 April 2013

India needs homegrown wafer fabs for its electronics

The government of India is offering up to $2.75 billion in incentives for the construction and equipping of the country's first wafer fabrication facility. India imported $8.2 billion in semiconductors last year, according to Gartner. Getting its own wafer fab is said to present a number of challenges to India, especially in the necessary infrastructure and an ecosystem of suppliers.

The domestic purchasing mandate, known as the “preferential market access” policy, seeks to address a real problem: imports of electronics are growing so fast that by 2020, they are projected to eclipse oil as the developing country’s largest import expense.

India’s import bill for semiconductors alone was $8.2 billion in 2012, according to Gartner, a research firm. And demand is growing at around 20 percent a year, according to the Department of Electronics and Information Technology.

For all electronics, India’s foreign currency bill is projected to grow from around $70 billion in 2012 to $300 billion by 2020, according to a government task force.

Monday, 25 March 2013

3D IC market to see stable growth through 2016

The global 3D integrated circuit market is forecast to grow by 19.7 percent between 2012 and 2016, with the major growth driver being strong demand for memory products, particularly flash memory and DRAM.

3D integrated circuits help improve the performance and reliability of memory chips, and as an added benefit the resulting chips are smaller and cheaper. However, chips based on 3D circuits face thermal conductivity problems which might pose a challenge to further growth.

According to Infiniti Research, the biggest 3D IC vendors at the moment are Advanced Semiconductor Engineering (ASE), Samsung, STMicroelectronics and Taiwan Semiconductor Manufacturing Co. (TSMC). IBM, Elpida, Intel and Micron are also working on products based on 3D ICs.

Intel was a 3D IC pioneer and it demoed a 3D version of the Pentium 4 back in 2004. The overly complicated chip offered slight performance and efficiency improvements over the 2D version of the chip, which really isn't saying much since Prescott-based Pentium 4s were rubbish.

The focus then shifted to memory chips and some academic implementations of 3D processors, but progress has been relatively slow, so any growth is more than welcome.

Thursday, 14 February 2013

How and why DDR4 timing is important

JEDEC's DDR4 DRAM standard is compatible with 3D IC architectures and is capable of data transfer rates up to 3.2 gigatransfers per second, Kristin Lewotsky notes in this article. "We've got a broad population of folks who really haven't had the time or the business need to learn about DDR4," says Perry Keller of Agilent Technologies. "What we hope to do is familiarize them with DDR4: What it is, why it exists, what it can bring to their products, and how to do something practical with it." (EE Times)

Sunday, 10 February 2013

FPGAs: An Alternative To Cloud Computing

As sophisticated computing tasks grow more complex, so does the demand for computing power. On top of that, the need to mine the burgeoning mountain of Internet search data has led to huge data centers that must be located close to water to feed their massive equipment cooling systems.

Weather modeling, for instance, continues to drill down into smaller geographical elements to fine-tune accuracy. And longer and more sophisticated encryption keys require greater compute power to crack them.

New tasks are also emerging in fields ranging from advertising to gene sequencing. Companies in the bio-sciences area gain competitive advantage based on the speed they’re able to sequence the genes held in DNA samples. Drug companies rely heavily on computer modeling to identify suitable candidate chemical formulas that may be useful in combating diseases.

In security, the focus has turned to deep packet inspection and application-aware monitoring. Companies now routinely deploy firewalls that are able to break into individual communication streams and identify traffic specific to, say, social networking sites, which can be used to help stop malicious attacks on corporate assets.

The Server Farm Approach

Traditionally, increases in processing requirements are handled with a brute-force approach: develop server farms that simply throw more microprocessor units at a problem. The heightened clamor for these server farms creates new problems, though, such as how to bring enough power to a server room and how to remove the generated heat. Space requirements are another problem, as is the complex management of the server farm needed to ensure factors like optimal load balancing and a return on investment.

At some point, the rationale for these local server farms runs out as the physical and heat problems become too large. Enter the great savior to these problems, otherwise known as “The Cloud.” In this model, big companies will hire out operating time on huge computing clusters.

In a stroke, companies can make physical problems disappear by offloading this IT requirement onto specialised companies. However, it’s not without problems:

  • Depending on the data, there may be a requirement for high-bandwidth communications to and from the data centre.
  • A third party is added into the value chain, and it will try to make money out of the service based on used computing time.
  • Rather than solving the power problem, it’s simply been moved—exactly the same amount of computing needs to be undertaken, just in a different place.

This final issue, which raises fundamental problems that can’t be solved with traditional processor systems, splits into two parts:

  • Software: Despite advances in software programming tools, optimization of algorithms for execution on multiple processors is still a long way off. It’s often easy to break down a problem into a number of parallel computations. However, it’s much harder for the software programmer to handle the concept of pipelining, where the output of one stage of operation is automatically passed to the next stage and acted upon (a minimal VHDL sketch of the idea follows this list). Instead, processors perform the same operation on a large array of data, pass it to memory, and then call it back from memory to perform the next operation. This creates a huge overhead in power consumption and execution time.
  • Hardware: Processor systems are designed to be general. A processor’s data path is typically 32 or 64 bits, while the data often requires much smaller resolution, leading to large inefficiencies as gates are clocked unnecessarily. Frequently, it becomes possible to pack data to fill more of the available data width, although this is rarely optimal and adds its own overhead. In addition, the execution units of a processor aren’t optimised for the specific mathematical or data-manipulation functions being undertaken, which again leads to huge overheads.
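
To make that contrast concrete, below is a minimal VHDL sketch of a pipelined multiply-accumulate stage. The entity name, the three-stage split and the operand width are illustrative choices for this post, not taken from any of the studies mentioned here. Each rising clock edge hands one stage's result directly to the next, so a new operand set can enter and a finished result can leave on every cycle, and the generic lets the data path be sized to the data rather than to a fixed 32- or 64-bit width.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Illustrative three-stage pipeline: each clock edge passes a stage's result
-- straight to the next stage, so there are no round trips through memory.
entity mac_pipeline is
  generic (
    DATA_W : positive := 16  -- operand width, sized to the data rather than to a fixed 32/64-bit path
  );
  port (
    clk    : in  std_logic;
    a, b   : in  signed(DATA_W-1 downto 0);
    c      : in  signed(DATA_W-1 downto 0);
    result : out signed(2*DATA_W downto 0)
  );
end entity mac_pipeline;

architecture rtl of mac_pipeline is
  signal prod_s1 : signed(2*DATA_W-1 downto 0);  -- stage 1 result: a*b
  signal c_s1    : signed(DATA_W-1 downto 0);    -- c delayed one cycle to stay aligned with its product
  signal sum_s2  : signed(2*DATA_W downto 0);    -- stage 2 result: a*b + c
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- stage 1: multiply
      prod_s1 <= a * b;
      c_s1    <= c;
      -- stage 2: add, consuming the stage-1 values registered on the previous cycle
      sum_s2  <= resize(prod_s1, sum_s2'length) + resize(c_s1, sum_s2'length);
      -- stage 3: output register
      result  <= sum_s2;
    end if;
  end process;
end architecture rtl;

A processor running the same calculation over a large data set would typically write the intermediate products back to memory and read them again for the addition; in the FPGA fabric the intermediate values never leave the chip's registers.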

The FPGA Approach

In the world of embedded products, a common computing-power approach is to develop dedicated hardware in an FPGA. These devices are programmed using silicon design techniques to implement processing functions much like a custom-designed chip. Many papers have been written on the relative improvements between processors, FPGAs, and dedicated hardware. Typical speed/power-consumption improvements range between a factor of 100 and 5000. 

Following in that vein, a recent study performed by Plextek explored how a single FPGA could accelerate a particular form of gene sequencing. Findings revealed an increase of just under a factor of 500. This can be viewed either as a significantly shorter time period or as an equipment reduction from 500 machines to a single PC. In reality, though, the savings will be a balance of the two.

Previously, these benefits were difficult to achieve for two reasons:

  • Interfacing: Dedicated engineering was required to develop FPGA systems that could easily access data sets. Once the data set changes, new interfacing requirements arise, which means a renewed engineering effort.
  • Design cycle time: The time it takes for an algorithm engineer to explain his requirements to a digital design engineer, who must then convert it all into VHDL along with the necessary testbenches to verify the design, simply becomes too long for exploratory algorithm work.

Now both of these problems have largely been solved thanks to modern FPGA devices. The first issue is resolved with embedded processors in the FPGA, which allow for more flexible interfacing. It’s even possible to directly access FPGA devices via Ethernet or even the Internet. For example, Plextek developed FPGA implementations that don’t have to go through interface modifications any time a requirement or data set changes.

To solve the second problem, companies such as Plextek have been working closely with major FPGA manufacturers to exploit new toolsets that can convert algorithmic descriptions defined in high-level mathematical languages (e.g., Matlab) into a form that easily converts into VHDL. As a result, significant time is saved from developing extensive testbenches to verify designs. Although not completely automatic, the design flow becomes much faster and much less prone to errors.

This doesn’t remove the need for a hardware designer, although it’s possible to develop methodologies to enable a hierarchical approach to algorithm exploration. The aim is to shorten the time between initial algorithm development and final solution.

Much of the time spent during algorithm exploration involves running a wide set of parameters through essentially the same algorithm. Plextek came up with a methodology that speeds up the process by providing a parameterised FPGA platform early in the process (see the figure). The approach requires the adoption of new high-level design tools, such as Altera’s DSP Builder or Xilinx’s System Generator.

A major portion of time involved in algorithm exploration revolves around running a wide set of parameters through essentially the same algorithm. Plextek’s methodology provides a parameterized FPGA platform early in the process, which saves a significant amount of time.

A key part of the process is jointly describing the algorithm parameters that are likely to change. After they’re defined, the hardware designer can deliver a platform with super-computing power to the scientist’s local machine, one that’s tailored to the algorithm being studied. At this point, the scientist can very quickly explore parameter changes to the algorithm, often being able to explore previously time-prohibitive ideas. As the algorithm matures, some features may need updating. Though modifications to the FPGA may be required, they can be implemented much faster.
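
As a rough illustration of what "describing the algorithm parameters that are likely to change" can look like at the VHDL level, the sketch below separates structural parameters, which are fixed when the platform is built, from run-time parameters that the scientist can sweep without a rebuild. The entity, generic and port names are invented for this example; they are not taken from Plextek's platform or from the vendor toolsets mentioned above.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Illustrative parameterised block. The generics are structural parameters,
-- fixed when the FPGA platform is generated; the threshold port is a run-time
-- parameter the host software can change between experiments without a rebuild.
entity windowed_detector is
  generic (
    SAMPLE_W : positive := 12;  -- input sample width (structural)
    WINDOW_N : positive := 8    -- moving-sum window length (structural)
  );
  port (
    clk       : in  std_logic;
    rst       : in  std_logic;
    sample_in : in  unsigned(SAMPLE_W-1 downto 0);
    threshold : in  unsigned(SAMPLE_W+7 downto 0);  -- run-time parameter supplied by the host
    detect    : out std_logic
  );
end entity windowed_detector;

architecture rtl of windowed_detector is
  type window_t is array (0 to WINDOW_N-1) of unsigned(SAMPLE_W-1 downto 0);
  signal window : window_t := (others => (others => '0'));
  -- running sum of the window; eight bits of headroom covers windows of up to 256 samples
  signal acc    : unsigned(SAMPLE_W+7 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        window <= (others => (others => '0'));
        acc    <= (others => '0');
        detect <= '0';
      else
        -- update the running sum: add the newest sample, drop the oldest
        acc <= acc + resize(sample_in, acc'length)
                   - resize(window(WINDOW_N-1), acc'length);
        -- shift the new sample into the window
        window(0)               <= sample_in;
        window(1 to WINDOW_N-1) <= window(0 to WINDOW_N-2);
        -- compare against the host-controlled threshold
        if acc > threshold then
          detect <= '1';
        else
          detect <= '0';
        end if;
      end if;
    end if;
  end process;
end architecture rtl;

With this split, sweeping the threshold is just a register write from the scientist's machine, while changing the window length or sample width means regenerating the bitstream, which is exactly the distinction the designer and the scientist need to agree on up front.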

A side benefit of this approach is that the final solution, when achieved, is in a hardware form that can be easily scaled across a business. In the past, algorithm exploration may have used a farm of 100 servers, but when rolled out across a business, the server requirements could increase 10- or 100-fold, or even to thousands of machines.  With FPGAs, equipment requirements will experience an orders-of-magnitude reduction.

Ultimately, companies that adopt these methodologies will achieve significant cost and power-consumption savings, as well as speed up their algorithm development flows.

Wednesday, 30 January 2013

Rambus Introduces R+ LPDDR3 Memory Architecture Solution

Sunnyvale, California, United States, January 28, 2013 – Rambus Inc., the innovative technology solutions company that brings invention to market, today announced its first LPDDR3 offering targeted at the mobile industry. In the Rambus R+ solution set, the R+ LPDDR3 memory architecture is fully compatible with industry standards while providing improved power and performance. This allows customers to differentiate their products in a cost-effective manner with improved time-to-market. Further helping improve design and development cycles, the R+ LPDDR3 is also available with Rambus’ collaborative design and integration services.

The R+ LPDDR3 architecture includes both a controller and a DRAM interface, can reduce active memory system power by up to 25%, and supports data rates of up to 3200 megabits per second (Mbps), double the performance of existing LPDDR3 technologies. These improvements in power efficiency and performance enable longer battery life and enhanced mobile device functionality for streaming HD video, gaming and data-intensive apps.

“Each generation of mobile devices demands even higher performance with lower power. The R+ LPDDR3 technology enables the mobile market to use our controller and DRAM solutions to provide unprecedented levels of performance, with a significant power savings,” said Kevin Donnelly, senior vice president and general manager of the Memory and Interface Division at Rambus. “Since this technology is a part of our R+ platform, beyond the improvements in power and performance, we’re also maintaining compatibility with today’s standards to ensure our customers have all the benefits of the Rambus’ superior technology with reduced adoption risk.”

At the root of the improved power and performance offered by the R+ LPDDR3 architecture is a low-swing implementation of the Rambus Near Ground Signaling technology. Essentially, this single-ended, ground-terminated signaling technology allows devices to achieve higher data rates with significantly reduced IO power. The R+ LPDDR3 architecture is built from the ground up to be backward compatible with LPDDR3, supporting the same protocol, power states and existing package definitions and system environments.

Additional key features of the R+ LPDDR3 include:

  • 1600 to 3200Mbps data rates
  • Multi-modal support for LPDDR2, LPDDR3 and R+ LPDDR3
  • DFI 3.1 and JEDEC LPDDR3 standards compliant
  • Supports package-on-package and discrete packaging types
  • Includes LabStation™ software environment for bring-up, characterization, and validation in end-user applications
  • Silicon proven design in GLOBALFOUNDRIES 28nm-SLP process

India fab decision likely this quarter

BANGALORE, India—A decision on the long-pending proposal to set up a domestic wafer fab in India is likely to be made by the end of March, semiconductor industry executives said.

India has been debating building a wafer fab for years. A decision on whether to move forward with a plan to build a fab here had been promised by the end of 2012. But that deadline came and went with no decision made.

Still, semiconductor industry executives here said they are now far more confident that a decision on the plan by India's national government is imminent. Their optimism is based on some policies conducive to domestic electronics manufacturing being adopted last year, with still others in the works.

"Electronics manufacturing is being looked at more progressively," a representative of the India Semiconductor Association (ISA) said here Tuesday (Jan. 22).

A study conducted by the ISA and market researcher Frost & Sullivan on this market projected a compound annual growth rate of nearly 10 percent for India's electronics, system design and manufacturing market from 2011 through 2015. The market is expected to grow to $94.2 billion in 2015 from $66.6 billion in 2011, according to the study.

"This is a fantastic growth rate and should show the way to product development and value-added manufacturing domestically, rather than relying on imports and low value-added screwdriver assembly," the ISA said.

Fears loom over the import bill for electronic products though, with 65 percent of India's demand currently being met by imports.

Another worry is the projected decline in high value-added manufacturing within the country. Of the total electronics market of $44.81 billion in 2012, high value-added domestic manufacturing was just $3.55 billion. The ISA is concerned that this decline in high value-manufacturing will cause a cumulative opportunity loss of a whopping $200 billion by 2015.

India’s semiconductor market revenue was an estimated $6.54 billion in 2012. The country's semiconductor design industry, which is expected to grow at a 17 percent CAGR, amounted to over $10 billion in 2012. Of this, VLSI design accounted for $1.33 billion, embedded software for $8.58 billion and board/hardware design for $672 million.

Sunday, 27 January 2013

A Realistic Assessment of the PC's Future

All around us, the evidence is overwhelming that the PC world is changing rapidly and in numerous ways -- use, sales, share of the electronics/IT equipment market, application development, and, very importantly, the surrounding supply chain.

Certainly, the PC has a future in our homes and businesses, but don't let anyone convince you they know exactly how that future will look or where things will remain the same over the next five years. Within a few years, the PC market will lose its title as the dominant consumer of semiconductors -- if it hasn't already. In the near future, the leading destination for many components used in traditional PCs will be tablet and smartphone plants.

The supply chain, especially the procurement and production elements, must be focused on accelerating that transition. I don't believe that's the case today, though the trends have been apparent for quite a few quarters. As consumers have migrated toward mobile devices, especially smartphones, the consequences for PC vendors and their component suppliers have become obvious. But apparently, they aren't obvious enough.

Intel Corp., the company with the most to lose as this shift has accelerated, has worked to establish a beachhead in the smartphone market. Nevertheless, many well-meaning analysts and industry observers have continued to spout the misleading view that the PC sector is unshakeable. The general opinion for a while was that tablets and smartphones would serve as complementary products to the traditional PCs, rather than cannibalizing the market. Think again.

Paul Otellini, Intel's president and CEO, had this to say about the changes in his company's market during a fourth-quarter earnings conference call.

From a product perspective, 2012 was a year of significant transitions in our markets and a year of important milestones for Intel...
At CES last week, I was struck by our industry's renewed inventiveness. PC manufacturers are embracing innovation as we are in the midst of a radical transformation of the computing experience with the blurring of form factors and the adoption of new user interfaces.
It's no longer necessary to choose between a PC and a tablet.

Let's turn to an IDC report released Monday for further explanation. The research firm said it sees PC innovation accelerating over the next few years as OEMs struggle to stem their losses and blunt the impact of smartphones on the market. PC OEMs and chip vendors can no longer afford to be complacent, IDC said; they must compete on all levels with tablet and smartphone manufacturers to demonstrate the continued relevance of their products.

This view implies that PC vendors and their suppliers have been satisfied with the status quo until now. That would be putting it mildly. Until Apple Inc. rolled out the iPhone and positioned it as an alternative platform for accessing the Internet, many OEMs didn't see smartphones as competing devices. IDC said in its report:

Complacency and a lack of innovation among OEM vendors and other parts of the PC ecosystem has occurred over the past five years. As a result, PC market growth flattened in 2012 and may stagnate in 2013 as users continue gravitating to ever more powerful smartphones and tablets.

Ouch. Some in the industry still believe tablets and smartphones aren't an arrow aimed at the PC market. I don't see tablets and smartphones replacing PCs in all situations, but they will encroach enough on that territory to leave a visible mark. That's why PC vendors, semiconductor suppliers, and manufacturers of other components need to develop a strategy that embraces the smaller form factors of tablets and smartphones and leverage their advantages over traditional computing platforms to create market-winning products.

Mario Morales, program vice president for semiconductors and EMS at IDC, said in a press release, "The key challenge will not be what form factor to support or what app to enable, but how will the computing industry come together to truly define the market's transformation around a transparent computing experience."

That conversation is a couple of years late, but it's welcome nonetheless.

Monday, 21 January 2013

Is ReRAM the end of NAND flash?

A primary storage technology: ReRAM.

NAND flash stores data in a little cloud of electrons in a quantum well. The presence or absence of charge - or the strength of the charge - tells us what bits are stored.

ReRAM stores data through changes in the resistance of a cell. There are a variety of ReRAM technologies in development, including phase-change memory (PCM) and HP's memristors, based on at least a half-dozen competing materials.

Expect healthy competition as the industry and buyers sort out the details.

Advantages

While different implementations have different specs, all ReRAM has key advantages over today's common NAND flash.

  • Speed. ReRAM can be written much faster - in nanoseconds rather than milliseconds - making it better for high-performance applications.
  • Endurance. MLC flash - the most common - can only handle about 10,000 writes. ReRAM can handle millions.
  • Power. Researchers have demonstrated micro-Amp write power and expect to get in the nano-Amp range soon, which makes ReRAM much more power efficient than NAND flash, which requires voltage pumps to achieve the 20 volts required for writes.

The Storage Bits take

NAND flash will retain advantages in cost and density for the foreseeable future, meaning that it will be here for decades to come. So where will ReRAM fit in the storage hierarchy?

  • Data integrity. Losing a snapshot is no big deal. Losing your checking account deposit is. Mission critical applications will prefer ReRAM devices - and can afford them.
  • Performance. Today's SSDs go through many contortions to give good performance - and don't succeed all that well. A fast medium removes complexity as well as increasing performance.
  • Mobility. Depending on how the never-ending tug-of-war between network bandwidth and memory capacity develops, consumers may come to prefer large capacity storage on their mobile devices. If so, ReRAM's power-sipping ways will be an asset on high-end products.

Toshiba is well-positioned to enter these high-end markets with SSDs analogous to today's 15k disks. It may not be a huge market, but the margins will make it worthwhile.

Other vendors, including Panasonic, Micron and Samsung, are also working on ReRAM products. Another interesting question: to what extent will fast ReRAM replace DRAM in systems?

Tuesday, 15 January 2013

SEMI: Industry spending $32.4B this year on IC gear

Fab equipment spending saw a drastic dip in 2H12 and 1Q13 is expected to be even lower, says SEMI, which reckons that the projected number of facilities equipping will drop from 212 in 2012 to 182 in 2013.

Spending on fab equipment for System LSI is expected to drop in 2013. Spending for Flash declined rapidly in 2H12 (by over 40%) but is expected to pick up by 2H13. The foundry sector is expected to increase spending in 2013, led by major player TSMC, as well as Samsung and GlobalFoundries.

Fab construction:
While fab construction spending slowed in 2012, at -15%, SEMI projects an increase of 3.7% in 2013 (from $5.6bn in 2012 to $5.8bn in 2013).

The report tracks 34 fab construction projects for 2013 (down from 51 in 2012).  An additional 10 new construction projects with various probabilities may start in 2013. The largest increase for construction spending in 2013 is expected to be for dedicated foundries and Flash related facilities.

Many device manufacturers are hesitating to add capacity due to declining average selling prices and high inventories.

However, SEMI reckons flash capacity will grow nearly 6% by mid-2013, adding over 70,000 wafers per month.

SEMI also foresees a rapid increase of installed capacity for new technology nodes, not only for 28nm but also from 24nm to 18nm and first ramps for 17nm to 13nm in 2013.

SEMI cautiously forecasts that fab equipment spending in 2013 will range from minus 5 percent to plus 3 percent.

Sunday, 13 January 2013

Full Speed Ahead For FPGA

In the world of high-frequency trading, where speed matters most, technology that can gain a crucial split-second advantage over a rival is valued above all else.

And in what could be the next phase of HFT, firms are looking more and more to hardware solutions, such as field-programmable gate arrays (FPGAs), which can offer speed gains over the software currently used by HFT firms.

FPGA technology, which allows for an integrated circuit to be designed or configured after manufacturing, has been around for decades but has only been on the radar of HFT firms for a couple of years. But new solutions are beginning to pop up that may eventually see FPGA become more viable and be the latest must-have tool in the high-speed arms race.

For instance, a risk calculation that can take 30 microseconds to perform by a software-based algorithm takes just three microseconds with FPGA.

Current HFT platforms are typically implemented using software on computers with high-performance network adapters. However, the downside of FPGAs is that they are generally complicated and time-consuming to set up and re-program, as the programmer has to translate an algorithm into the design of an electronic circuit and describe that design in a specialized hardware description language.

The programming space on an FPGA is also limited, so programs can’t be too big currently. Some tasks, though, such as ‘circuit breakers’, are an ideal current use for FPGA technology; a toy sketch of such a check follows.
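
To give a flavour of why a circuit breaker suits the technology so well, here is a toy VHDL sketch of a price-band check. The entity, its ports and the fixed-point price width are invented for illustration and do not correspond to any vendor's product. The whole check is a single registered comparison, so it delivers a decision every clock cycle while occupying a tiny fraction of the device.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Toy 'circuit breaker': flag any order whose price falls outside a
-- host-configured band. The check is one comparator, which is why this
-- kind of task fits comfortably in limited FPGA fabric.
entity price_circuit_breaker is
  generic (
    PRICE_W : positive := 32  -- fixed-point price width (illustrative)
  );
  port (
    clk         : in  std_logic;
    order_valid : in  std_logic;
    order_price : in  unsigned(PRICE_W-1 downto 0);
    band_low    : in  unsigned(PRICE_W-1 downto 0);  -- set by the supervising software
    band_high   : in  unsigned(PRICE_W-1 downto 0);  -- set by the supervising software
    block_order : out std_logic  -- asserted for one cycle when an order breaches the band
  );
end entity price_circuit_breaker;

architecture rtl of price_circuit_breaker is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if order_valid = '1' and
         (order_price < band_low or order_price > band_high) then
        block_order <= '1';
      else
        block_order <= '0';
      end if;
    end if;
  end process;
end architecture rtl;

Re-tuning the band is a register write from the supervising software; only a change to the check itself would require reprogramming the device.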

It is these drawbacks, as well as the costs involved, that are at present holding back trading firms from taking up FPGAs in greater numbers. However, because of the speed gains on offer, considerable resources are being poured into FPGA development in a bid to make the technology more accessible—and some technology firms are now beginning to claim significant speed savings with their products.

Cheetah Solutions, a provider of hardware solutions for financial trading, is one firm that says it can now offer reconfigurable FPGA systems to trading firms. It says its Cheetah Framework provides building blocks which can be configured in real time by a host server and an algorithm can execute entirely in an FPGA-enabled network card with the server software taking only a supervisory role by monitoring the algo’s performance and adapting the hardware algo on the go.

“True low latency will only be achieved through total hardware solutions which guarantee deterministic low latency,” said Peter Fall, chief executive of Cheetah Solutions. “But if the market moves, you want to be able to tweak an algorithm or change it completely to take advantage of current conditions. Traditional FPGA programming may take weeks to make even a simple change whereas Cheetah Framework provides on-the-fly reconfigurability.”

Another technology firm to claim that it can make automated trading strategies even faster and more efficient is U.K.-based Celoxica, which recently debuted its new futures trading platform, based on FPGA technology, which involves a circuit on one small chip that can be programmed by the customer.

Celoxica says the platform is designed to accelerate the flow of market data into trading algorithms to make trading faster. It covers multiple trading strategies and asset classes including fixed income, commodities and foreign exchange.

“For futures trading, processing speed, determinism and throughput continue to play a crucial role in the success of principal trading firms and hedge funds trading on the global futures markets,” said Jean Marc Bouleier, chairman and chief executive at Celoxica. “Our clients and partners can increase focus on their trading strategies for CME, ICE, CFE, Liffe US, Liffe and Eurex.”

Last August, Fixnetix, a U.K. trading technology firm, said that it had signed up a handful of top-tier brokers to use its FPGA hardware chip, which executes deals, compliance and risk checks, suggesting that this niche technology is picking up speed rapidly.

Tuesday, 30 October 2012

IBM's New Chip Tech.

IBM has put the chip industry on notice by inventing a new technology that would replace silicon with a new material, carbon nanotubes.

IBM has found a new way to put what seems like an impossibly large number of transistors into an insanely small area, the width of only a few atoms. That's 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology.

Or as IBM explains:

Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotube forms the core of a transistor device that will work in a fashion similar to the current silicon transistor, but will be better performing. They could be used to replace the transistors in chips that power our data-crunching servers, high performing computers and ultra fast smart phones.

Inventing the tech is one thing; being able to manufacture it at scale is another. And that's the real breakthrough IBM announced. It has put more than 10,000 of these "nano-sized tubes of carbon" onto a single chip using a standard fabricating method.

It will still be years, maybe even a decade, before carbon nanotubes could really replace silicon-based chips in our servers and our smartphones. But this breakthrough is important because the chip industry is reaching a point where it physically can't squeeze much more processing power onto existing forms of chips. Some have predicted that we'll soon reach the end of Moore's Law, which calls for the density of transistors on a chip to double roughly every two years.

Chip transistors are already super tiny—or nanoscale.

This is what a nanotube looks like under a microscope.

Earlier this year Intel dumped $4.1 billion into two new techniques to help the chip industry continue to get more powerful at smaller scales. These two new technologies are not the same as what IBM is working on.

IBM's carbon-based method may represent a whole new beginning for Moore's Law, the industry maxim that chips keep getting cheaper, more powerful, and smaller.

Monday, 10 September 2012

Intel Is Cooling Entire Servers By Submerging Them in Oil

Air-cooled computers are for wimps. But while the idea of keeping temperatures in check using water might be a step in the right direction, Intel is doing something even more radical: it's dunking entire servers—the whole lot—into oil to keep them chill.

Don't panic, though: they're using mineral oil, which doesn't conduct electricity. It's a pretty wacky idea, but it seems to be working. After a year of testing with Green Revolution Cooling, Intel has observed some of the best efficiency ratings it's ever seen. Probably most impressive is that immersion in the oil doesn't seem to affect hardware reliability.

All up, it's extremely promising: completely immersing components in liquid means you can pack components in more tightly as the cooling is so much more efficient. For now, though, it's probably best to avoid filling your computer case with liquid of any kind.


