
If nine-die stacks can be done with wire bonding, why go to TSV?

In his recent blog post on Semiconductor International, Dick James talks about SanDisk’s success stacking nine chips in a microSD memory card. At the end of the post, he poses the following question: “If we can build nine-stacks of 30 µm-thick dice with wire bonding, will TSVs ever become economic in the commodity chip arena?” I found this an interesting question to pose to our panel. Bob – since you’re in the memory stacking business, perhaps you’d like to respond.
By Françoise vo... on Jul 06, 2009
Forum: 3D IC Technology Progress & Limitations

#1

For low I/O counts, if wire bonding can do it, wire bonding will be less expensive for some time to come. TSVs can improve many things, but stacking low-I/O-count, slow-speed devices has little to gain. Where we see the greatest gain for TSVs in memory is when process separation can be done. In our case, we build the memory cells in a memory-centric process that has been stripped of all unnecessary logic process steps. We stack several of these reduced-complexity memory cell layers on a single logic-processed layer. This gives us the advantages of cost, power, density, and speed, all at the same time. Doing this, however, requires not tens of connections but a couple of million, and that can only be done with TSVs. So TSVs will replace wire bonding, and it will be for cost reasons, but at the root is a paradigm shift.
By Robert Patti, July 6, 2009 - 11:53pm
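
A minimal sketch of the connection-count arithmetic behind this argument, assuming hypothetical I/O counts and per-pin data rates (the figures below are illustrative placeholders, not numbers from the panel):

    # Sketch of aggregate-bandwidth arithmetic; all figures are assumed, illustrative values.

    def aggregate_bandwidth_gbps(io_count, per_pin_mbps):
        """Raw aggregate bandwidth of a parallel interface, in Gb/s."""
        return io_count * per_pin_mbps / 1000.0

    # A wire-bondable interface: a few dozen I/Os at a modest rate (assumed).
    wirebond = aggregate_bandwidth_gbps(io_count=16, per_pin_mbps=200)

    # A TSV-class interface: thousands of short vertical links at the same rate (assumed).
    tsv_bus = aggregate_bandwidth_gbps(io_count=4096, per_pin_mbps=200)

    print(f"wire-bonded bus: {wirebond:8,.0f} Gb/s")
    print(f"TSV wide bus:    {tsv_bus:8,.0f} Gb/s")

At cell- or bank-level partitioning the link count climbs toward the millions cited above, far beyond what any peripheral bond-pad ring can supply.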

#2

Depending on the dimensions and the type of via, TSVs make it possible to connect chips either at the bond pad level (via-last) or at the global interconnect level (via-middle). Wire bonds only allow connection at the bond pad level. That means it is possible to build 3D SoC-like configurations with TSVs, whereas it is impossible with wires.

By Yann Guillou, July 7, 2009 - 7:57am

#3

Regarding #2

This is very true. I'm a big advocate of tighter integration with wide busses. But there is a lot of industry focus on mainstream memory incorporating TSVs; the huge volume potential is very attractive. I would point out that while flash devices such as SanDisk's don't offer any quick road to adoption, this isn't true for DRAM. DRAM devices have an immediate need for TSVs. Their very high speed busses are driving a basic desire to improve signal integrity, and TSVs have much lower parasitics and offer a reasonable cost model compared with the alternatives.
By Robert Patti, July 7, 2009 - 1:17pm
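
A minimal sketch of the parasitics point, using assumed ballpark element values (the inductance, capacitance, and signal swing below are illustrative, not measurements):

    # Crude lumped-parasitic comparison; every value here is an assumed ballpark figure.
    V_SWING = 1.2  # volts, assumed signal swing

    interconnects = {
        # name: (series inductance in henries, load capacitance in farads) -- assumed
        "wire bond, ~1 mm": (1.0e-9, 30e-15),
        "TSV, ~50 um":      (40e-12, 20e-15),
    }

    for name, (L, C) in interconnects.items():
        energy_fj = 0.5 * C * V_SWING ** 2 * 1e15  # fJ to charge the load once
        print(f"{name:16s}  L = {L * 1e12:6.0f} pH   C = {C * 1e15:4.0f} fF   "
              f"~{energy_fj:4.1f} fJ per transition")

On numbers like these, lower inductance and capacitance mean cleaner edges and less I/O energy per transition, which is the signal-integrity case for TSVs on fast DRAM busses.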

#4

In the live panel discussion last Monday, the consensus seemed to be that performance must be the driver for TSVs.

From a package integration perspective, use of a silicon interposer (with TSVs) seems to provide sufficient functionality for most mainstream IC applications today, allowing for two stacked layers of DRAM (one flip-chip on the microprocessor, one under the interposer) without needing to integrate TSVs into either the memory or logic chips.

So, what applications today call for more than two layers of memory along with logic, and do such applications call for extremely high bandwidths such that an interposer would add excessive delay?

By Ed Korczynski, July 21, 2009 - 10:09pm
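
One way to frame the delay part of that question is a distributed-RC estimate for a lateral interposer trace versus a short vertical TSV; the per-unit-length resistance and capacitance and the lengths below are assumed, illustrative values:

    # Elmore delay of a distributed RC line is roughly (R_total * C_total) / 2.
    # All per-unit-length values and lengths below are assumed, illustrative figures.

    def distributed_rc_delay_ps(r_per_mm, c_per_mm, length_mm):
        """Approximate Elmore delay (ps) of a distributed RC interconnect."""
        r_total = r_per_mm * length_mm      # ohms
        c_total = c_per_mm * length_mm      # farads
        return 0.5 * r_total * c_total * 1e12

    # Lateral interposer trace, a few millimetres long (assumed values).
    interposer = distributed_rc_delay_ps(r_per_mm=50.0, c_per_mm=200e-15, length_mm=3.0)

    # Vertical TSV, tens of micrometres long (assumed values).
    tsv = distributed_rc_delay_ps(r_per_mm=20.0, c_per_mm=400e-15, length_mm=0.05)

    print(f"interposer trace: ~{interposer:6.2f} ps")
    print(f"TSV:              ~{tsv:6.3f} ps")

On these assumed numbers the interposer adds only tens of picoseconds per hop, which would be consistent with the view above that an interposer suffices for most mainstream applications; only the most aggressive bus timings would notice.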

#5

Discussion Summary

If the required speed, performance, and density can be achieved by interconnecting bond pads with wire bonds, then that is the most cost-effective way. For flash memory with its low I/O count, there is no motivation to move to TSV stacking. With DRAM, however, there is an "immediate need" for TSVs, and CMOS repartitioning requires them. Again, it all comes down to performance improvements and cost.

By Françoise vo..., July 26, 2009 - 11:17pm