HP Forums

Full Version: Dead 67 & Woodstock ACT chips & Prevention

Although 20+ years is a commendable lifespan for a handheld calculator, I keep reading about ACT chips going south, and it gives me the willies.

Does anyone know the cause of this? Some sort of internal corrosion in the chip, or failing PSUs that eventually fry the poor thing?

Seeing that HP will never remanufacture the chips, what can be done in the interests of preserving the remaining working machines? Somehow upgrading the PSUs with more reliable components, or perhaps developing a fancy "socket" for the chips that isolates them from out-of-spec inputs? This should be on the mind of every collector of the afflicted machines.

As for me, I'm in the process of "re-capping" (replacing all capacitors) in my woodstocks. I used to refurbish antique radios, and one of the first things done in a restoration job was to replace all the caps, as there was always a high probability of failure in units 30+ years old. Also, I only use alkaline batts in my woodstocks - NEVER the recharging circuitry. I'm fortunate enough to have external chargers for my classics and topcats, so I charge those batts outside the machines.



I don't believe it's any external event that kills the ACT. I've never had a replacement fail shortly after fitting it in place of the dead one (if, say, the PSU were the cause, you'd expect it to kill the replacement too). I think it's just old age of the chip (chips do _not_ last forever, no matter what some people believe).

I also don't believe in blanket recapping -- neither of vintage valved equipment (yes, I work on that too) nor of HP calculators. Certainly replace capacitors that are causing problems, but I don't find that replacing good ones does anything at all. I've actually had equipment where some caps have needed replacing several times (and yes, I do use good replacements -- I had to replace them every 10 years or so), while others are still the originals.

Alkaline batteries have a higher voltage than the original NiCds. While this shouldn't cause a problem (the power converter is a regulated circuit, and should be happy with 3V rather than 2.5V input), I don't do it. Mind you, I do have the reserve power packs (external chargers) for the classics, woodstocks, spices and topcats :-). If you don't have the woodstock one, just use a junk 21 (say with a dead ACT or ROM chip). All you need is the charger connector, resistor and diode down one side of the PCB, and the battery contacts.

On non-C machines, there is _no_ risk to charging batteries in the machine with the switch turned off. There is then no connection from the +ve side of the PSU or battery to the rest of the machine. I've seen the schematics...

I was afraid you would say something like that -- we're all DOOMED!! (by some arcane law of semiconductor physics, no less) Yikes. Ah well, nothing is forever. I guess we should enjoy them while they last. BTW, I do have a woodstock charger, but it gets hot when charging, so I never used it; I just boxed it up and forgot about it years ago. At the time, I didn't want to dig into it, and alkalines were all too easy. I always liked the look of those 'blank' woodstock cases HP used for the chargers.

PS: Thanks, I appreciate your ongoing advice, and I'm sure everyone else does too.



This is very interesting and frightening.
What age can a normal chip reach?
Are older machines (bigger components, less integration) less prone to failure?

Should machines be kept powered? Hot or cold storage?
I'm also very much worried about LCD displays.

Obviously there must be people here who know.
Pardon the dumbness, this is a real question!!

It's not a dumb question at all. There are in fact several reasons why ICs fail, and most of them begin with heat and/or thermal cycling. What actually ends up failing on a particular IC tends to vary by generation, i.e., by the process tools used at the time of manufacture.

In almost any IC fabrication process there is a point where the semiconducting layer must be "doped" to produce the proper junction type in the material, be it a transistor, a diode, or even a resistance in the proper location of the circuit. Such doping (whether n-type or p-type) may be accomplished in several ways, such as thermal diffusion or ion implantation with an "impurity" such as boron. The doping changes the energy levels of the conduction (or, more to the point, semi-conduction) band of the material. The trick is to put just enough of the dopant, just deep enough, to make the circuit behave as desired; too little or too much causes various problems with speed, power, voltage thresholds, and so on. The point is that the dopant ions are not necessarily permanently fixed in the lattice: they may continue to diffuse through the material over time, under the stress of operation or thermal cycling. Eventually a portion of the circuit has an improper dopant balance in a critical location, and that portion stops working: a bit (voltage level) won't change states reliably when needed.
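For the curious, the temperature dependence of that slow dopant diffusion can be sketched with the standard Arrhenius relation. This is only an illustration: the D0 and Ea values below are rough textbook figures for boron in silicon, not anything specific to HP's process, and the point is simply the exponential sensitivity to temperature.

```python
import math

K_BOLTZMANN = 8.617e-5   # Boltzmann constant in eV/K

def diffusion_coefficient(d0_cm2_s, ea_ev, temp_k):
    """Arrhenius form: D = D0 * exp(-Ea / kT), in cm^2/s."""
    return d0_cm2_s * math.exp(-ea_ev / (K_BOLTZMANN * temp_k))

def diffusion_length_nm(d_cm2_s, seconds):
    """Characteristic diffusion length L = 2 * sqrt(D * t), in nm."""
    return 2.0 * math.sqrt(d_cm2_s * seconds) * 1e7  # cm -> nm

d0, ea = 0.76, 3.46                     # assumed values for boron in Si
t_30_years = 30 * 365.25 * 24 * 3600    # 30 years in seconds

# At room temperature (300 K) the 30-year diffusion length is
# vanishingly small; at a hot-running junction (say 360 K) it is
# many orders of magnitude larger -- though still tiny in absolute
# terms for this particular dopant.
for temp in (300.0, 360.0):
    d = diffusion_coefficient(d0, ea, temp)
    print(f"T={temp:.0f} K  L={diffusion_length_nm(d, t_30_years):.3e} nm")
```

Lower-activation-energy defects (contaminants, mobile ions) move far more readily than substitutional boron, which is why the slow creep described above can matter over decades even near room temperature.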

Another problem is the difference in coefficient of thermal expansion between the various parts of the IC: the substrate, the metal interconnect layers, and, most commonly, the package materials (the die-to-package interface, the wire bond leads, etc.). Over time these stresses cause a mechanical break, either somewhere in the die itself or in its connection to the outside world.
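To get a back-of-the-envelope feel for that mismatch stress, a common approximation for the fully constrained case is sigma ≈ E · Δα · ΔT. The material constants below are generic textbook values (silicon die, epoxy package, aluminium modulus), not measurements of any HP part.

```python
# Rough sketch of the stress a CTE mismatch produces per thermal cycle.
# Fully constrained approximation: sigma ~= E * delta_alpha * delta_T.

def thermal_mismatch_stress_mpa(e_gpa, alpha_a_ppm, alpha_b_ppm, delta_t_c):
    """Approximate stress (MPa) in a layer constrained by a material
    with a different coefficient of thermal expansion."""
    delta_alpha = abs(alpha_a_ppm - alpha_b_ppm) * 1e-6  # 1/K
    return e_gpa * 1e3 * delta_alpha * delta_t_c         # GPa -> MPa

# Silicon die (~2.6 ppm/K) against an epoxy package (~20 ppm/K),
# modulus ~70 GPa (about that of aluminium), 60 C temperature swing:
stress = thermal_mismatch_stress_mpa(70.0, 2.6, 20.0, 60.0)
print(f"~{stress:.0f} MPa per thermal cycle")
```

Tens of MPa per cycle, repeated thousands of times, is how bond wires and die attach eventually fatigue and crack.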

Does this mean that the new 90 nm process used in the Intel Prescott CPUs will fail much sooner, as it already has severe power leakage problems?!

Presumably IBM's Silicon-On-Insulator (SOI) process is safer?


Hmmm... a cold, stable environment. Would the PCBs withstand a gradual reduction in temp to say, that of liquid nitrogen? Just keep those babies in a vat until needed.

Sorry guys, I was half brain-dead on allergy medicine and lack of sleep when I put up my first post. The IC designers have come up with some very clever and impressive ways to improve their yields and long-term reliability even as they push the limits of line width and feature size. And yes, SOI is one of those methods. Another is phase-shifting masks for the photolithography process, which allow feature sizes smaller than the wavelength of the exposure source. The masks look truly bizarre, but when exposed with the correct wavelength of light they provide constructive and destructive interference where necessary to yield perfect lines and features. Manufacturers have even started using the back side of the wafers (thinner now, of course), doping them with "getter" compounds which draw some of the impurities and contaminants in the wafer back away from the active devices.

It is my understanding that many of the motherboard failures blamed on the CPU are in fact caused by poor power-supply bypassing by the cheaper motherboard manufacturers. Modern CPUs (particularly the Intel devices) have such stringent limits on noise at the supply pins (of which there are dozens on each CPU), whose voltages keep dropping (under 3 V now), that they require many dozens of expensive tantalum and ceramic capacitors located all around the CPU to keep them happy. (The power-up sequencing is a nightmare too: you can't have this Vcc powered up within X ns of that Vdd, and so on.) The cheaper board manufacturers fit just enough capacitors, of just sufficient quality, to keep warranty returns at a bearable level; then, six months to a year after the warranty runs out, the capacitors start to give up and you're on borrowed time.

But I digress. There are actually many, many sources of IC failure, but don't forget that there are sometimes hundreds of process steps in fabricating even a pedestrian IC like the simple brain of our beloved calculators. When a fab starts up a new process (whether it's the '70s or 30 years later), once the process has been tweaked to provide adequate yield, the production rate is cranked up as fast as possible, sampling for infant mortality as necessary. The fact is, any step along the way could be using a wet etchant, reaction gas, or other material that is not quite up to the required purity, or could fail to remove quite everything it should in just one step, and... well, you've left the door open for those contaminants (leftover resist, etc.) to start migrating around, buried in your IC. Thermal cycling helps them ooze a few nanometers at a time; eventually one reaches a spot where it causes a problem, and you're done.

By the way, once an IC from a particular lot fails after the infant mortality period is over, almost all the rest that fail will do so from the same defect, assuming they were exposed to similar environments (not too much static discharge, etc.). That's one reason why thermal life testing and burn-in are done on ICs before they leave the factory (on some of the more expensive devices, anyway). If a device doesn't fail in a few hours or days, it's probably good for years and years. The mortality curve for a lot of ICs (the ratio of good to bad devices over time) is called an upside-down bathtub: it shoots up quickly at the beginning, then holds steady for years before the devices start dropping like flies.
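That upside-down bathtub, plotted instead as a failure rate, is the classic bathtub curve of reliability engineering. It is often modeled as a mixture of Weibull hazard rates: a shape parameter below 1 gives the infant-mortality drop, shape 1 gives a constant random-failure floor, and shape above 1 gives the wear-out rise. Here's a toy sketch with made-up parameters, just to show the shape -- nothing here is fitted to real IC data.

```python
# Toy model of the reliability "bathtub" as a sum of Weibull hazards.
# All shape/scale parameters are illustrative, not fitted to any lot.

def weibull_hazard(t, shape, scale):
    """Weibull hazard rate h(t) = (k/lam) * (t/lam)^(k-1), t > 0."""
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def bathtub_hazard(t_years):
    infant  = weibull_hazard(t_years, shape=0.5, scale=2.0)   # early defects
    random_ = weibull_hazard(t_years, shape=1.0, scale=50.0)  # constant floor
    wearout = weibull_hazard(t_years, shape=5.0, scale=30.0)  # old age
    return infant + random_ + wearout

# Hazard is high at first (infant mortality), flattens out, then
# climbs again decades later as wear-out takes over.
for t in (0.1, 1.0, 10.0, 25.0, 40.0):
    print(f"t={t:5.1f} y  h(t)={bathtub_hazard(t):.4f} /year")
```

Burn-in works precisely because of the left-hand wall of this curve: a few hours at elevated stress walks a device through most of its infant-mortality risk before it ships.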

You could possibly improve your odds by keeping things cool (or cold), but that could cause its own problems, like condensation or coefficient-of-thermal-expansion stress. The best thing is probably just to keep them at as constant a temperature (and as low a humidity) as possible, and try not to subject them to static discharge or too much mechanical shock.
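To put a rough number on how much "keeping things cool" buys you, reliability engineers often use the Arrhenius acceleration factor. The 0.7 eV activation energy below is a commonly quoted generic figure for silicon failure mechanisms, not one measured for these particular chips, so treat the result as order-of-magnitude only.

```python
import math

K_BOLTZMANN = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_cool_k, t_hot_k):
    """Arrhenius acceleration: AF = exp((Ea/k) * (1/T_cool - 1/T_hot)).
    How many times faster aging proceeds at T_hot than at T_cool."""
    return math.exp((ea_ev / K_BOLTZMANN) * (1.0 / t_cool_k - 1.0 / t_hot_k))

# Storing at 15 C (288 K) instead of 35 C (308 K), assumed Ea = 0.7 eV:
af = acceleration_factor(0.7, 288.0, 308.0)
print(f"~{af:.1f}x slower chemical aging at the cooler temperature")
```

A modest 20 C reduction slows thermally activated mechanisms severalfold, which supports the "cool, constant, and dry" storage advice without going anywhere near liquid nitrogen.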

Your informative discussion sheds a lot of light on a subject about which some of us have no knowledge.