LECTURE NO. 23

MENTAL INERTIA AND NEW ENERGY TECHNOLOGY

Copyright © Harold Aspden, 1998


INTRODUCTION

These comments were compiled as an aide-mémoire some two to three days prior to my participation in a discussion meeting on new energy topics held in London on May 10th, 1998.

They constitute a brief summary of points which I feel need to be made in a general debate on how to progress in the task of awakening interest in new and revolutionary energy conversion techniques which, at this time, challenge orthodox science and so are not welcomed by those who advise on research funding.

CONCERNING THE TITLE

In the above title I refer to 'Mental Inertia'. By this I mean the difficulty of deflecting the thought processes of the vast majority of physicists concerned with heat and power generation, which move forward relentlessly along a single track, the one that conforms with the Second Law of Thermodynamics.

Heat dispersed into space is somehow lost forever. It has no destiny except infinity. As a result, the minds of those who ponder upon such questions themselves drift off into that wilderness of infinity. Why not pause and surmise that, notwithstanding what we do achieve by building and testing heat engines, which spend energy, Nature has her own secret ways of processing spent energy and forcing it into a temporary holding condition, a 'quantum dance', before it can be packaged as a proton and released back into our real world?

We see matter all around us. It is built from protons and electrons. Somehow it was created, necessarily from energy. Where did that energy come from? If you think the universe is a one-off product that came off God's production line some ten or so billion years ago, then you are not thinking as I do. I think each and every proton is created by an ongoing mass-production process fed by energy tapped from that dispersed throughout space. If those who advise on energy research know otherwise, then they should explain how they know! It is an important issue warranting debate.

Apart from that issue I am also suggesting that scientists have developed a mental block by relying too heavily upon their Second Law of Thermodynamics. They suffer from 'mental inertia' and it is to the detriment of progress on the 'New Energy' front.

THE ENERGY SOURCE

Energy cannot be 'created'. It can only be converted from one form to another. Setting aside the usual energy sources, whether oil, gas, coal, nuclear, hydroelectric, wind or even solar power, there is new energy territory to be explored. There are two sources latent in our environment: one is the ambient heat, the energy stored in our atmosphere, and the other is the energy stored locally in the quantum underworld of the vacuum state.

Concerning the first of these sources, physicists reject the notion that exploitation of ambient heat is possible, because they live under the spell of the Second Law of Thermodynamics. As to the second, they refuse to believe that there is any energy in space other than that which is an extension of matter through what they term the 'fields', the electric and magnetic action of matter upon nearby space. Whether it is Einstein's theory that has brushed the aether aside or quantum theory that provides them with pacifying answers, they choose not to recognize these two important sources of energy.

Now, for the purpose of these notes I can only comment briefly on the major source, that vested in space, so I will do that first.

Simply put, that source of energy hidden in the quantum underworld is the power house which creates the protons and electrons from which all matter in the universe is formed, whether as hydrogen atoms or as transmuted atoms formed from those protons and electrons. I can prove that proposition by showing how an understanding of the process involved can explain the precise value of the mass of the proton in terms of the mass of the electron. By precise I mean in full agreement with the measured value to the limit of its experimental precision, a part or so in ten million. However, that is mere theory.

When we come to view this in the context of the real world, we can look at the phenomenon of 'cold fusion' and ask how it is that two atomic nuclei, that of hydrogen and that of heavy hydrogen, namely the proton and the deuteron, can come close enough together to combine or transmute their forms in such a way as to shed energy. You might say that all those atoms were created long ago in something called the 'Big Bang', an event far removed from what can be happening here and now in the space local to body Earth. However, in that case you too would be indulging in mere theory.

In the obituary of Sir Charles Frank, reported on page 25 of the British newspaper THE TIMES of Monday, April 27, 1998, there is the statement that he was 'an inspirational physicist who worked in a wide range of fields, from earthquakes to cold fusion'. There was a later comment in the obituary which referred again to cold fusion, by saying that:
"Charles Frank was the first scientist to think of the idea of cold nuclear fusion. In 1947 he suggested using an elementary subnuclear particle, a muon, to catalyse the fusion of deuterium and tritium. It would take the electron's place and allow the nuclei to approach some 200 times closer than usual, and so help produce fusion."

The problem with that idea is that the muon, which is otherwise known as the 'heavy' electron, just as the deuteron can be said to be the 'heavy' proton, is a mystery in itself. In fact, it has a 'ghostly' existence, to my way of thinking, because it is the primary 'stuff' which accounts for the energy of space, but that fact is not appreciated by the world of science. I know that to be a fact because my explanation of the creation of the proton involves the combination of those aethereal muons!

So the evolving cold fusion scene will, I expect, one day make us appreciate the existence of that energy source in space. Meanwhile, however, I shall myself pursue another route aimed at gaining access to that sea of energy in space, namely by the process of magnetic induction designed to involve 'magnetocaloric cooling' of the vacuum medium. That introduces my interest in certain forms of magnetic reluctance motor, but to explain that in detail is also just a little too specific for the immediate purpose of these notes. So, apart from referring below to the problem of testing 'over-unity' performance, I shall here restrict my onward comments to the somewhat related subject of extracting energy from ambient heat in defiance of the Second Law of Thermodynamics.

THE SELF-GENERATING HEAT ENGINE

It is not 'perpetual motion' in the sense of a machine that runs on no power but delivers power. No, it is simply a machine which absorbs heat from the atmosphere and generates an electrical power output. The ideal is a kind of air-conditioning system which cools, yet uses the heat extracted to produce useful power as output, power sufficiently in excess of that needed to run the cooling system that the surplus is delivered as 'free energy' output.

That might seem to be an impossible ideal, but it is possible, given two heat engines connected back-to-back, if one is a conventional engine operating as a reverse heat engine, namely as a heat pump, and the other is an engine which feeds on heat but is not governed by Carnot efficiency limitations.
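The energy balance behind that back-to-back arrangement can be made explicit. If the heat pump delivers COP units of heat for each unit of work driving it, and the second engine converts that heat back to work with efficiency eta, the loop yields a surplus only when eta times COP exceeds one. A minimal sketch of that bookkeeping, with purely illustrative numbers (none are taken from the lecture):

```python
def loop_surplus(cop: float, eta: float, pump_work: float = 1.0) -> float:
    """Net work surplus per cycle of the back-to-back pair.

    cop  : heat delivered by the heat pump per unit of work driving it
    eta  : fraction of that heat the second engine converts back to work
    """
    heat_moved = cop * pump_work   # heat lifted from the ambient air
    work_out = eta * heat_moved    # work recovered by the converter
    return work_out - pump_work    # positive means a self-sustaining surplus

# A Carnot-limited converter working across a small temperature
# difference has eta far below 1/COP, so the surplus is negative:
print(loop_surplus(cop=10.0, eta=0.05))   # -0.5: conventional pairing, no gain

# The scheme only closes if the converter evades the Carnot limit:
print(loop_surplus(cop=10.0, eta=0.2))    # 1.0: the surplus the text envisages
```

The condition eta × COP > 1 is why the proposal stands or falls on the second engine escaping the Carnot restriction, exactly as the text argues.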

I may add here that I have even secured the grant of a U.S. patent which includes the description of such a system and has a claim pertaining to that very proposition. It is the last claim of my U.S. Patent No. 5,101,632. The subject of that patent was introduced earlier in these Web pages as Lecture No. 17.

To implement an energy generating process such as this, however, the main requirement is a device which does convert heat into electricity without the Carnot restraint imposed by compliance with that Second Law of Thermodynamics. The structure proposed in that U.S. patent would need extensive development of a fabrication technique for a cellular mirror structure that deploys internal heat radiation.

An alternative which warrants development is a technique based on the fabrication of a laminar structure composed of ferromagnetic metal films or plates. This is also described in the patent literature (U.S. Patent No. 5,288,336 and U.S. Patent No. 5,376,184) but it is felt that scientists generally need to be enlightened concerning the technical reason why the device functions, notwithstanding that Second Law of Thermodynamics.

Now, instead of describing such a device in detail, I will point my finger at something occurring throughout the energy world, namely a phenomenon present in every electrical power transformer: an electrical power loss that is a complete mystery to those who design transformers, including the university professors who teach those designers.

Every power transformer that has a laminated steel core is subject to what is called 'iron loss', the magnetization losses which are ongoing when the transformer is connected in the power transmission circuit. There is a component of that loss which is attributable to magnetic induction of currents which circulate in each core lamination. Those currents involve higher losses than one expects from theoretical calculation based on a knowledge of the resistivity of the steel. For that to be the case there has to be something wrong with the theory underlying the calculation. Yet the electrical theory involved is a fairly exact science and errors of calculation as such can be ruled out, which leaves us having to face up to the fact that some physical phenomenon is present but not taken into account.
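The 'theoretical calculation' referred to is the classical thin-lamination eddy-current loss: for a sinusoidal flux density of peak B and frequency f in a sheet of thickness d and resistivity rho, the loss per unit volume is pi²f²B²d²/(6 rho). A sketch of that calculation, using typical electrical-steel values that are illustrative assumptions, not figures from the lecture:

```python
import math

def classical_eddy_loss(f_hz: float, b_peak_t: float,
                        d_m: float, rho_ohm_m: float) -> float:
    """Classical eddy-current loss per unit volume (W/m^3) in a thin
    lamination carrying a sinusoidal flux density of peak b_peak_t."""
    return (math.pi ** 2 * f_hz ** 2 * b_peak_t ** 2 * d_m ** 2) / (6.0 * rho_ohm_m)

# Assumed values: 0.35 mm silicon-steel lamination, 50 Hz mains,
# 1.5 T peak flux density, resistivity ~45e-8 ohm-m
p = classical_eddy_loss(50.0, 1.5, 0.35e-3, 45e-8)
print(f"{p:.0f} W/m^3")   # on the order of a few kW per cubic metre
```

It is the measured loss sitting well above this calculated figure, despite the exactness of the underlying electrical theory, that constitutes the anomaly discussed below.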

That phenomenon is the fact that heat flowing from the transformer by passage through those laminations is being regenerated as electrical drive power which augments those circulating eddy currents and increases the loss which develops that heat!

Now, the temperature difference between the centre of a transformer core and the surfaces across which it cools is little more than 10 or 20 degrees C. A conventional heat pump can lift heat through such a temperature difference while requiring, as input power, only one tenth as much energy as it transfers between those temperatures. So one might suspect that the opposite applies when it comes to heat regenerating electricity, meaning that, at best, only one tenth of the normal heat loss could be converted back to electricity to augment that eddy current flow. In that case the eddy current anomaly factor, meaning the actual eddy current loss as measured relative to the theoretical eddy current loss, would be a factor of ten or so per cent above unity.

However, if that were the case, the 'eddy current anomaly' would never have been significant enough to be noticed. The overall eddy current anomaly factor came to attention when the technology of fabricating electrical steel laminations developed to the point where attempts to reduce eddy currents were confounded by the measured anomaly factor being 2 or 3 and sometimes higher, though 1.5 was the tolerable norm. In my own experiments as part of my Ph.D. research on this subject in the early 1950s I measured factors as high as 5 and 6. That is an additional loss of 400% or 500% and not a mere 10% extra loss.
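The arithmetic behind those two paragraphs can be set out explicitly. The ideal (Carnot) heating COP for a small temperature lift near room temperature is T_hot/(T_hot − T_cold); real heat pumps fall below that, consistent with the 'one tenth as much energy' figure. On that reasoning the anomaly factor should sit barely above unity, whereas the measured factors of 5 and 6 correspond to several hundred per cent of extra loss. A sketch (the 308 K / 293 K pair is an assumed illustration of a ~15 K lift):

```python
def carnot_cop(t_hot_k: float, t_cold_k: float) -> float:
    """Ideal (Carnot) heating COP of a heat pump between two temperatures."""
    return t_hot_k / (t_hot_k - t_cold_k)

# ~15 K lift near room temperature: ideal COP about 20, real pumps less,
# in line with the "one tenth as much energy" figure in the text
print(carnot_cop(308.0, 293.0))   # about 20.5

# If at best 1/10 of the dissipated heat returned as eddy-current drive,
# the anomaly factor would be about 1.1; the measured factors were far higher:
for factor in (5.0, 6.0):
    extra_pct = (factor - 1.0) * 100.0
    print(f"anomaly factor {factor} -> {extra_pct:.0f}% extra loss")
```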

That can only mean one thing, now that I can see the evidence in retrospect, some 48 years on, and am no longer blighted by science orthodoxy pertaining to that Second Law of Thermodynamics. The heat generated is, in part, indeed for the most part, being regenerated as electricity inside the metal lamination. There can be no doubt that the conversion efficiencies involved exceed by far those set by the Carnot criteria which govern the conventional heat engine. I would suggest also that the so-called 'warm superconductors' are highly conductive because they are, internally, regenerating as a forward electrical action 100% of any heat shed by resistance effects.

Based on such evidence those interested in new methods of generating energy must surely see the scope for exploiting the phenomenon. Its underlying cause is known in physics as the Nernst Effect, the thermoelectric effect by which heat flow through a magnetic field develops an electric EMF at right angles to the heat flow and the field direction.
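The Nernst effect as just described reduces to a simple relation: a longitudinal temperature gradient dT/dx in a transverse magnetic field B produces a transverse electric field E = N·B·(dT/dx), where N is the material's Nernst coefficient. A sketch with purely hypothetical values, since N varies enormously between materials and no figure is quoted in the lecture:

```python
def nernst_field(n_coeff: float, b_tesla: float, dt_dx: float) -> float:
    """Transverse electric field E = N * B * dT/dx (V/m) from the Nernst
    effect: heat flow crossed with a magnetic field yields a sideways EMF."""
    return n_coeff * b_tesla * dt_dx

# Hypothetical numbers: N = 1e-6 V/(K*T), B = 1 T, gradient 1000 K/m
e_y = nernst_field(1e-6, 1.0, 1000.0)
print(f"{e_y * 1000:.1f} mV/m")
```

Even a small field per unit length, summed over many thin laminations in series, is the kind of contribution the argument above relies upon.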

A device which I believe worked on this principle has been demonstrated by John Scott Strachan. It is the subject of that U.S. Patent No. 5,288,336 mentioned above. It was shown to be able to run an electric motor, the only input being the temperature differential of a melting ice block such as one might put in one's whisky glass. Operated in reverse, it froze water efficiently and rapidly when fed by an electrical power input.

TESTING OF 'OVER-UNITY' DEVICES

One of the major problems experienced by inventors of energy devices which exhibit anomalous power generation properties is that of convincing an observer viewing a demonstration.

Investors, and those who advise investors and corporations who might develop the invention, need absolute proof of the viability of the technology. Demonstrations are suspect. Consider, for example, the problems confronting Dr. Paulo Correa and his wife Alexandra, who have invented a glow discharge tube which delivers d.c. power output in pulses in amounts exceeding the electrical d.c. input power. The proof, so far as the inventors are concerned, resides in the fact that a battery of many electric cells supplies the input power and the output power charges an exhausted, but similar, battery of electric cells. This takes time and, if measurements are made over short runs, the energy involved, whether as input or output, depends upon simple d.c. calculations based on measurements of d.c. voltages that change by only a small amount, together with a prior calibration of the cells to determine stored energy as a function of voltage.

Ideally, one needs to measure the instantaneous energy activity to compare input and output in an ongoing manner. However, the pulsatory nature of the output, even when monitored by oscilloscopic means, makes it difficult to have trust in such measurements.

A similar situation arises where one tests an electric motor that needs input energy to keep it operational but develops its output power mechanically by feeding a load, which, however, may be an electrical generator. That compounds the measurement problem because much depends upon the efficiency of the generator, rather than that of the motor. This applies, for example, to the motor developments of Dr. Robert Adams in New Zealand. It applies also to my own motor measurements, where to satisfy myself that 'over-unity' performance was in evidence I had to calculate the heat loss generated by pulsing the drive windings of the motor.

Note that where input or output power involves transients and pulses one cannot rely on normal measuring instruments to give the relevant reading on a dial. One has to be sure that an energy anomaly is in evidence and one really needs a way of demonstrating that fact to others who witness the tests.

For this reason I think it worth suggesting that tests in such circumstances can suffice as a demonstration if the power input to the machine is d.c. drawn from an electric battery source, or normal a.c. with an undistorted source waveform, whilst the output is all converted directly into heat. Then, by encasing the machine and/or the load in a thermally insulated enclosure, albeit one that convects heat from its surface at the same rate without reaching too high a temperature, one can monitor the internal temperature of that enclosure to see how it depends upon machine operation.

Suppose, for example, that the test machine draws 100 watts of input power when delivering electrical output power to a load located outside that enclosure. It will, after a short period, have settled to its operating state, meaning its own temperature will have stabilized. Meanwhile, by controlling the delivery of a calibrating power input to a heating resistor inside that enclosure, one can see what stable temperature is reached by precisely the same amount of power, 100 watts, as is being fed into the test machine.

Then the test consists of switching off the calibrating power input and switching the output from the test machine to supply heat input to the enclosure. If now the temperature being measured increases one has 'over-unity' operation as between test machine output versus input, not allowing for operational heat loss from the machine itself. To allow for the latter, the test machine needs to be located within the enclosure as well.

In summary, therefore, with such a test procedure, given that one can measure electrical power input with little difficulty, the test then depends only upon watching a thermometer to see if its reading goes up, and I can see no reason why demonstrations of 'over-unity' performance cannot be made convincing. Besides that, such tests would not involve the need to disclose know-how, design details or even the principles of how the machine or device under test operates. The test could be adapted to estimate the degree of over-unity once its existence has been proved. For example, if the device delivers twice as much power output as it takes as input, then the calibration could be at 200 watts to see if the test temperature can be held with the 100 watt device input.
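The substitution test just described can be modelled with a simple lumped thermal picture: at steady state the enclosure temperature is T_ambient + P·R_th, where R_th is the enclosure's thermal resistance, so the calibration run fixes R_th and any later temperature reading implies a heat input. A sketch of that bookkeeping, with all values (ambient 20 C, R_th = 0.5 C per watt) assumed for illustration:

```python
def steady_temp(power_w: float, t_ambient_c: float = 20.0,
                r_th_c_per_w: float = 0.5) -> float:
    """Steady enclosure temperature for a lumped model with linear
    convection: T = T_ambient + P * R_th."""
    return t_ambient_c + power_w * r_th_c_per_w

def implied_output(t_measured_c: float, t_ambient_c: float = 20.0,
                   r_th_c_per_w: float = 0.5) -> float:
    """Invert the model: what heat input explains the measured temperature?"""
    return (t_measured_c - t_ambient_c) / r_th_c_per_w

# Calibration: a 100 W resistor settles the enclosure at some temperature
t_cal = steady_temp(100.0)        # 70.0 C under the assumed model
# If the device's heat output instead settles the enclosure at 120 C,
# the implied thermal output of the 100 W-input device is:
p_out = implied_output(120.0)     # 200.0 W, i.e. an over-unity indication
print(t_cal, p_out)
```

Any thermometer reading above the calibration temperature at equal electrical input then points to output exceeding input, which is the whole force of the proposed protocol.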

On the other hand, if, to convince investors, the demonstration is intended to display a commercially viable working prototype ready for market production, then that is another story. I tend to believe that the real problem is one of convincing academic scientists and development engineers that 'over-unity', in the sense of generating excess power, as if from the environment alone, is a real possibility. If the test protocol I propose would not suffice for that purpose then that problem I call 'Mental Inertia' is worse than I could have imagined.