LECTURE NO. 11

THE HUBBLE CONSTANT: ITS 'FREE ENERGY' ROLE

Copyright, Harold Aspden, 1998

INTRODUCTION

In Lecture No. 6 I declared that I would tell the story about deducing the Hubble constant in a later Lecture. This is that Lecture. It is about my discovery that there is something amiss in Maxwell's Equations as applied to free space, something which gives us insight into why light waves emanating from distant stars lose frequency. This avoids the need to interpret that shift of frequency towards the red end of the visible spectrum as an indication of an expanding universe having, as a start point, what cosmologists have named the 'Big Bang'.

The story I tell is not all theoretical, as will be seen when we come to the details of Gieskieng's Canyon Experiments. It is based on the loss of energy by propagating waves, coupled with their loss of frequency. It is based also on the role played by the aether in energy recovery, meaning our ability to extract energy from the aether locally in response to the absorption of those propagating waves by matter in their path.

As proof of my case, apart from the experimental findings of Gieskieng, I will show how to formulate a value for the Hubble constant H. That should wake cosmologists up from their slumbers and help to put a stop to their dreams about the creation of the universe as a Big Bang phenomenon.

To conclude this Introduction I will first present the equation connecting H with the other fundamental constants of physics that we have measured in the confinement of laboratories on body Earth, doing so in a form which can be compared with the six 'Governing Equations' that I listed in Lecture No. 6. Then I will quote a passage from a book by Brian W. Petley of the National Physical Laboratory in U.K. to set the scene for the onward discourse.

The Hubble Equation, according to my theory, is:

H^-1 = (2mμ/me)^9 (N^3) [72π]^3 (e^2/mec^2) [6/πc]

where c is the speed of light and e^2/mec^2 is the 'classical radius of the electron', a recognized fundamental physical constant having no particular physical meaning, but one tabulated with very high precision in the standard physical data tables, it being listed as 2.81794092(38)x10^-13 cm. N is an integer characteristic of the galactic domain region through which the electromagnetic wave travels, its value being normally 1843, as we have seen from what has been said in the Tutorial Notes in these Web pages and particularly in Lecture No. 6. The expression 2mμ/me is the ratio of the mass of a pair of virtual muons to the mass of the electron, it being the mass-energy of such a pair of muons that accounts for virtually all of the energy in one unit cubic cell of the aether, the lattice dimension of that cell being 72π times that classical electron radius or 108π times the Thomson radius of the electron.

We shall be deriving the above equation as we proceed, but, from the data just given and the knowledge from Lecture No. 6 that 2mμ/me is twice 206.3329, you can calculate H^-1, the Hubble time, to find it is 4.5x10^17 seconds or about 14.3 billion years.

What this means is that a light wave needs to travel for 1.43 million years through space devoid of matter in order to suffer a loss of frequency of 1 part in 10,000. Such are the numbers on which modern ideas concerning our universe are based.
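For readers who wish to check this arithmetic for themselves, here is a minimal numerical sketch in Python. The variable names are my own and the constants are those quoted above and in Lecture No. 6.

import math

# Constants as quoted in this Lecture, in SI units (variable names are mine).
r_e = 2.81794092e-15      # classical electron radius e^2/mec^2, in metres
c = 2.99792458e8          # speed of light, m/s
N = 1843                  # cell quantum number from Lecture No. 6
mu_ratio = 206.3329       # virtual muon mass divided by electron mass

# H^-1 = (2mμ/me)^9 (N^3) [72π]^3 (e^2/mec^2) [6/πc]
H_inv = (2 * mu_ratio)**9 * N**3 * (72 * math.pi)**3 * r_e * 6 / (math.pi * c)

year = 3.156e7            # seconds in a year (approximate)
print(H_inv)              # about 4.5e17 seconds
print(H_inv / year / 1e9) # about 14.3 billion years
print(H_inv / 1e4 / year) # about 1.43 million years for a loss of 1 part in 10,000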

Quoting now from Petley's book: The Fundamental Physical Constants and the Frontier of Measurement, as already referenced in the Discussion section of my Keynote Address in these Web pages, one reads on pp. 41-42:

The question of time formed the crux of Dirac's argument. The largest time that we encounter is the age of the universe, ~10^17 s, and the smallest is about the time it takes for light to travel a distance equal to the classical electron radius, the tempon, or chronon, and which for many scientists represents a natural unit of time, ~0.49x10^-23 s. Taking the ratio of these again yields about the same number ~10^40.

The essential point made by Dirac was that the dimensionless ratios all came to this value, not as the result of an accidental coincidence, but because they depended on the age of the universe in chronons. This therefore led to his theory for the change of these ratios with time. Thus the force ratio led to a prediction that the gravitational constant would vary with time.

There have, of course, been a number of theories proposing time variations of certain combinations of the constants since that of Dirac. These constants are as follows:
(1) the fine structure constant,
(2) ...... ,
(3) ...... ,
(4) the quantities δ and ε

δ = Hh/mec^2 ~ 10^42
ε = Gdo/H^2 ~ 2x10^-3
for
do = 7x10^-28 kg/m^3

Both δ and ε involve the Hubble constant H, which characterizes the variation of the red shift of stellar light with distance and is roughly the reciprocal of the age of the universe. The different cosmologies predict different variations of the parameters with time, and the fact that none of them has been entirely successful is an indication that the subject is still an open one.

Item (2) is a constant of Dirac's theory that we shall not address, it relating to Fermi's theory of beta decay. Item (3), the ratio of Coulomb force to gravitational force, will be addressed separately in the theory of record in these webpages.
Note that δ is dimensionless in that H has the dimension of the reciprocal of time, whereas h/mec is a distance, it being the Compton wavelength of the electron (a recognized fundamental constant of 2.42631058(22)x10^-10 cm) and so c, which is a speed, renders the expression dimensionless. The quantity do is a density, it being representative of the order of the average mass density of matter in the universe.

Whatever you, the reader, make of this interest which Nobel Laureate Paul Dirac took in the Large Number Hypothesis, I hope you will come to see it for what it is, merely a play on numbers. It is not true physics to infer that because two ratios, which man can contrive to relate to physical constants, are both very large and of the same order, then there has to be a physical relationship based on those ratios being the same. That is mere wishful thinking and it should not be allowed to substitute for the methodical explanation of each of those two ratios in terms of a genuine physical process. It is so unlikely that the ratios would prove to be equal in the real world as governed by Nature. Our task ahead is to show what determines the Hubble constant and so deduce the true physical value for that quantity (δ). We have already, in Lecture No. 6, given the physically-based formula for G in terms of the charge/mass ratio of the electron and that yields one of Dirac's 'Large Numbers'. We find, of course, that we cannot associate that number with the one which will be derived below from our study of the Hubble situation.

THE STEADY-STATE FREE ELECTRON POPULATION OF FREE SPACE

This was the title of a paper of mine that was published in the English-language periodical Lettere al Nuovo Cimento of the Italian Physical Society, a periodical noted for its rapid publication of scientific papers. It appeared at pp. 252-256 in vol. 41 and was dated October, 1984.

As my object in writing these Web pages is to (a) present my aether theory to the world, (b) arouse interest in exploring the scope for extracting useful energy from the aether and (c) make it very clear that the main body of those who see themselves as true scientists has chosen to ignore the truths evident from the research findings I have reported, I will in this section now be quoting extensively from that 1984 paper, interjecting a few comments in italics.

Note that the argument is developed in two parts. The first part concerns how it is that in what we see as empty space there is a uniform ongoing activity that creates the transient presence of a kind of pseudo-matter. The aether attempts to create protons but does not succeed because there is no energy surplus to the equilibrium requirements of the aether. Yet those attempts intrude into the pathways of propagating light waves and cause the aether to absorb energy from those waves. The second part explains why the resulting attenuation of those waves with distance, attenuation over and above the normal diminution with distance, will involve frequency attenuation as well, the latter having been misinterpreted as a Doppler effect attributable to a cosmic expansion seated in a Big Bang.

In referring to that 1984 paper, I will here be incorporating a few corrections, entering those in italic text in the quoted extracts. My notes, as such, will be put in brackets, as will the reference data amended to refer to the Bibliography section of these Web pages. The only other preliminary point I wish to make concerns the 'Thomson scattering cross-section'.

This formula:

A = [8π/3](e^2/mec^2)^2
gives the area deemed to obstruct an electromagnetic wave owing to the presence of a single electron of mass me and charge e. It is a formula derived by J. J. Thomson and is based on the assumption that an electromagnetic wave will promote oscillatory electron acceleration f so as to dissipate energy at a rate given by the Larmor formula 2e^2f^2/3c^3. The force of the electric field of the wave acting on the electron is equated to mef to get this result and it is assumed that the energy radiated as electric field energy is doubled to account for magnetic field energy accompanying the radiated wave.
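Before commenting on that formula, it may help to see its numerical size. The following short Python evaluation is my own illustration; it uses the classical electron radius quoted in the Introduction and gives the cross-section figure that will reappear later in this Lecture. The small difference from the 0.666 barn value used below is merely a matter of rounding.

import math

# Thomson scattering cross-section A = [8π/3](e^2/mec^2)^2, evaluated from the
# classical electron radius quoted in the Introduction (variable names are mine).
r_e = 2.81794092e-15               # metres
A = (8 * math.pi / 3) * r_e**2     # square metres
print(A)                           # about 6.65e-29 m^2
print(A / 1e-28)                   # about 0.665 barn (1 barn = 1e-28 m^2)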

Now, I wish to make it abundantly clear that this formula for energy scattering by a single electron is incorrect, because the acceleration of an electron does not involve energy radiation. Indeed, the inertial response of an electron, in its efforts to make sure that its intrinsic electrical energy is not dissipated, is just such as will endow it with mass according to the formula E = Mc^2. That is a vital feature of my theory and one which makes my method independent of the Einstein assumptions. What Larmor had overlooked was that, so far as the single electron reaction is concerned, the accelerating electric field that produces the acceleration interacts with the electron's own field and the cross-products of that field interaction cancel any energy radiation from the body of the electron. This does not govern the collective interaction of electrons and so the Larmor radiation formula does have physical meaning and some practical use. It can be used to explain radio propagation from an antenna, where billions of electrons all accelerate in harmony to set up the electromagnetic waves. As to the single electron, you can see that it cannot radiate energy, as otherwise every atom would collapse with its electrons coming to rest in the nucleus. Avoiding that scenario led to Bohr quantizing electron motion in the atom and declaring, quite arbitrarily, that the atom could only radiate if an electron, for some reason, jumped from one of its quantized orbits to another. The point I make about deriving E = Mc^2 was the subject of a paper of mine in the 'International Journal of Theoretical Physics' [1976b].

However, in the free space situation we shall be considering, even that collective action does not result in any energy absorption, because the electrons that are transiently present are so far separated that they do not share the same accelerated motion, meaning one that is in-phase with that of adjacent electrons.

So we need to look again at the action of the electromagnetic wave upon the electron. The wave has two energy components, the electric component and the dynamic component, the latter being regarded as the 'magnetic' feature of the wave. Only the electric component accounts for the force accelerating the electron and all of the energy absorbed goes into kinetic energy. This energy, which is half that we might otherwise have presumed to be captured according to the Thomson scattering cross-section, acts to cause that electron to become a wave-transmitting antenna itself and it produces a wave that is 90° out of phase with the intercepted wave. This sets up a secondary wave. However, as this secondary wave propagates it loses half of its energy as it settles to a condition in which the in-phase electric and magnetic fields adapt to a more natural mode of oscillation in which they are in phase-quadrature. This latter situation corresponds to energy oscillations in what is a standing wave system, as between the electric and dynamic action of the aether charge conveying the wave. The waves do not convey energy through space; they merely ripple the aether energy which exists in a sea of uniformity and equilibrium spread throughout all space. However, this secondary wave process involves a loss of half of that energy absorbed by the electron. It is shed to the quantum underworld as entropy pending its eventual deployment in those ongoing efforts to create matter in the form of protons.

Overall, this means that we can use the formula for the scattering cross-section of the electron, provided we recognize that the relevant absorbing cross-section is one quarter of that cross-section as formulated by Thomson.

Quoting now from the paper:

In an earlier letter [H. Aspden and D. M. Eagles: Phys. Lett. A, v.41, 423 (1972)] it was suggested that space may have properties associated with a characteristic cubic cell of lattice dimension d = 72πe^2/mec^2, a characteristic frequency f = mec^2/h and a characteristic threshold energy quantum which analysis gave as the combined energy of 1843 electrons. This led to a value of α^-1 of:

108π(8/1843)^(1/6) = 137.035915 ........ (1)

Above, e is the electron charge, me the electron mass, c the speed of light in vacuo and h is Planck's constant. The symbol α is the fine structure constant.
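(Equation (1) is easily checked numerically. The following one-line Python evaluation is my own addition, not part of the 1984 paper; it reproduces the figure quoted above.)

import math

# Check of equation (1): alpha^-1 = 108π(8/1843)^(1/6).
alpha_inv = 108 * math.pi * (8 / 1843)**(1 / 6)
print(alpha_inv)    # about 137.0359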

The theory also indicated that space may well be populated by virtual energy quanta, equivalent to having a muon pair in each cell. In 1975 this model was applied to the exact derivation of the proton/electron mass ratio ([1975a] in the Bibliography of these Web pages), the dual muon energy constituting the nucleus on which the proton form was synthesized. Recently, by regarding these muon constituents as point charges migrating at random at the frequency fo (the Compton electron frequency), the model has found further application in explaining and evaluating the muon lifetime (see my book 'Physics Unified', pp. 145-146), the neutron lifetime [1981b] and the pion lifetime [1982d]. Furthermore, the critical energy threshold set by the 1843mec^2 quantum was crucial to the neutron lifetime determination and is of significance, on stability criteria, to the creation of the proton.

Physically, this quantum arises because each cubic cell has a lattice charge element q set in a uniform background continuum of opposite charge density, and the condition under which q can change in form without displacing the continuum is that it absorbs energy to create N electrons and positrons occupying the same volume. Thus:

q = N(e+, e-) .............. (2)

The argument is that q has to be zero or at near zero potential in relation to all other q charge and the continuum charge. This fixes the cubic structure and the position of q in relation to the centre of each cell. The dynamics of the space model are linked to the properties of the electron and the physical size of the electron charge in relation to that of q. The analysis shows [1972a] that a true zero potential condition would correspond to a non-integral value of N lying between 1844 and 1845. Since the potential cannot be negative for a true vacuum state, N has to be lower than this and it must be odd to cater for electron-positron pair creation and q converting to an electron or positron. Thus N is 1843.

(I have quoted this from that 1984 paper to show that enough concerning the detail of my aether theory is of record in university libraries for scientific academia to have taken my research seriously and so realize that this account of the Hubble constant kills the hypothetical notion of an expanding universe!)

The pair of virtual muons in each cell were identified as such because they assure energy equilibrium by giving the cell the same energy density as the q elements. Analysis indicated that their mass was very slightly less than the mass of the real muon. The same analysis ('Physics Unified', pp. 103 and 108), applying the Thomson formula relating charge radius and energy, allowed the volume of q to be determined as (1/N)^(1/2)(me/mmu)d^3, where mmu is the mass of the virtual muon.

The advance now to be presented in this paper is based on the simple realization that in free space the transition indicated by equation (2) will occur naturally but with a very low probability. It takes the energy of nine virtual muons to exceed the energy threshold set by Nmec^2, with N=1843. The virtual muon mass is a little in excess of 206me. Therefore, we look to the event when four muon pairs plus one muon of charge opposite to q all combine within the volume of q in the same cycle of migration. The muon pairs have a random freedom of movement and are not confined to a particular cell. (See comment below) The chance of one muon entering the q volume is (1/N)^(1/3)(me/2mmu). Therefore, the chance of nine muons entering the same cell volume at the same time is this factor raised to the power 9.

(The way I visualize this process is that the entry of a muon, or a muon pair, into the body of the q charge traps the energy for one period of the aether rhythm. This allows the cell occupied by q to be replenished by energy inflow from surrounding aether in readiness for the next muon strike. If this occurs in the following cycle, then the energy of the muons remains trapped, otherwise that first muon (or muon pair) will decay during that cycle and displace any energy that has entered the cell in its absence inside q. This scenario therefore permits a chain of events building up the energy inside q, but with a diminished probability factor for each successive step in the chain. I am now saying that, by 'simultaneous' as used in the paper, I mean the sequence of events that occur within a charge q without rupturing the timed energy chain.)

The logic of this supposes that each muon arrives independently and simultaneously and that the chance of four negative muons appearing is the factor raised to the power 4, whereas the chance of five positive muons appearing is the factor raised to the power 5, the total chance being the product of the two. We find that the overall effect is that at any time the chance of a q element converting according to eq. (2) is (1/N)^3(me/2mμ)^9.

(By this it is meant that at any instant the chance of the q charge in each cell of space being excited to the threshold state is that specified by the above expression. With N=1843 this means that momentarily the q charge has become 921 electron-positron pairs plus a residual electron, assuming that q is the same as the electron charge.)
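(The probability just described is easily evaluated. The following short Python sketch is my own addition, not part of the 1984 paper; the constants are those used in this Lecture, and it reproduces the 'one cell in about 2x10^33' figure given in the next extract.)

# Chance, at any instant, that a given q charge is excited to the N·mec^2 threshold:
# [(1/N)^(1/3) (me/2mμ)]^9 = (1/N)^3 (me/2mμ)^9.
N = 1843
mu_ratio = 206.3329                        # virtual muon mass divided by electron mass
p_one = (1 / N)**(1 / 3) / (2 * mu_ratio)  # chance for a single muon
p_cell = p_one**9                          # chance for nine muons in the same cell
print(p_cell)                              # about 4.6e-34
print(1 / p_cell)                          # i.e. one cell in about 2.17e33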

The electron-positron pairs will not obstruct the passage of electromagnetic waves because they have a mutual inertial balance and are collectively neutral in their response to electric fields. This leaves the electrons as presenting a scattering cross-section to radiation.

(Here I introduced the 'Thomson scattering cross-section' as providing the obstruction to onward propagation of an electromagnetic wave. However, I ought really to have termed this the 'absorbing cross-section', for the reasons already stated above where I indicated I would be using a factor 4 in what follows in order to correct an error in that 1984 paper.)

The formula given in the introductory paragraph can be used to evaluate d as 6.37x10^-11 cm, meaning that there are 3.87x10^36 cells in a cubic metre of space. With N=1843 and mmu/me=207 (or, to be precise, 206.3329, an adjustment now incorporated in the onward text) it is evident that one cell in 2.17x10^33 is subject to the transition just discussed. There are, therefore, approximately 1,780 excited electron cells in each cubic metre of free space.

The Thomson scattering cross-section of the electron is well established as 0.666 barn or 6.66x10^-29 m^2 and, accordingly, our theory tells us that the vacuum should present a cross-section of 1780 times 0.666 barn or 1.185x10^-25 m^2 per cubic metre. Here, however, we must divide by 4 for the reasons already stated. This reduces that cross-section to 2.96x10^-26 m^2 per cubic metre. On average, therefore, a photon would have to travel at the speed of light 3x10^8 m/s for 1.125x10^17 seconds before being wholly absorbed.
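(To make that chain of numbers easy to verify, here is a short Python sketch, my own addition and not part of the 1984 paper, which runs from the lattice dimension through to the absorption time, with the factor of 4 correction applied. It also gives the mass density figure quoted in the extract that follows.)

import math

r_e = 2.81794092e-15                     # classical electron radius, m
c = 2.99792458e8                         # speed of light, m/s
m_e = 9.10939e-31                        # electron mass, kg
N, mu_ratio = 1843, 206.3329

d = 72 * math.pi * r_e                   # lattice dimension, about 6.37e-13 m
cells_per_m3 = 1 / d**3                  # about 3.87e36 cells per cubic metre
p_cell = (1 / N)**3 / (2 * mu_ratio)**9  # excitation probability per cell
n_e = cells_per_m3 * p_cell              # about 1,780 excited electrons per cubic metre

sigma_T = (8 * math.pi / 3) * r_e**2     # Thomson cross-section, m^2
sigma_abs = sigma_T / 4                  # quartered absorbing cross-section
t_energy = 1 / (n_e * sigma_abs * c)     # about 1.1e17 seconds
print(n_e, t_energy)
print(n_e * m_e)                         # about 1.6e-27 kg/m^3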

It is noted that this period is really the time constant of an exponential decay, but it does mean that we are here contemplating a period measured in billions of years. We are now ready to move on to the task of explaining why waves lose frequency in their passage through free space. First, note that the paper contained a reference to 'missing matter'.

The mass density of the electron population causing this obstruction of radiation is as low as 1.6x10^-27 kg/m^3, which is curiously of the order of the mean mass density seen in the galaxies and attributed to the so-called missing mass in cosmological theory.

There is purpose in examining whether the scattering process has some bearing upon the cosmological red shift. Universal expansion by which the red shift becomes a cosmological Doppler effect is the accepted hypothesis. The alternative provided by the 'tired light' hypothesis, which requires the degeneration of frequency in transit, is discounted by Misner, Thorne and Wheeler ('Gravitation' (Freeman, San Francisco), p. 775), who quote Zel'dovich (Sov. Phys. Uspekhi, v.6, p. 475; 1964). He stressed that the statistical picture of photon interception by particles in interstellar space would require some photons to lose more energy than others, resulting in a spectral line broadening that is not observed. Yet space is so tenuous that one may well question how one can be sure that a statistical interception process applies when the particles involved are about 10 cm apart.

(Here I would have liked, when I first wrote the paper, to have expressed my view that photons really do not convey energy at the speed of light. They are events which occur in space when waves are generated or absorbed. The wave is the only feature we need consider. Neither photons nor waves transport energy across interstellar space at the speed of light. It was only the belief that there is no aether that led physicists into the syndrome which drove them down that blind alley where all they could 'see' as explaining a cosmological red shift was the so-called 'Big Bang'.)

There is something very special about the true vacuum that is never mentioned in this context. It has the ability to transmit waves without frequency dispersion, the very property which Zel'dovich sees as missing when matter is present. The author (H. Aspden: 'Wireless World', v. 88, p. 37; 1982) has recently discussed this zero dispersion vacuum property and argues that space itself must adapt to the local wave disturbance so as always to be in tune locally with the signal in transit. Furthermore, the frequency property must somehow be codified at each point in space-time without regard to whatever happens at adjacent points and without involving the propagation speed c.

The author has shown that this state of affairs applies if one accepts that the electric field vector E is a composite of two electric field vector components E1 and E2 having separate physical significance. This allows us to write two equations:

E = E1 + E2 .................. (3)
dE/dt = (E1 - E2)F(E1/E2), .............. (4)
where t is time and F is a function of the ratio of E1 and E2. The rate of change of the amplitude of an electromagnetic wave can be codified in this way in terms of the strengths of two electric field components at the point in question. It need not be determined by the speed at which the wave progresses to adjacent points.

The function F is governed by the condition that there is zero frequency dispersion, at least up to the threshold frequency at which electrons and positrons are created. One can infer that at this limit E2 is zero, whereas at frequencies in the radio and optical spectrum E1 and E2 are approximately equal, though their actual ratio is a crucial indicator of frequency.

With such a feature electromagnetic theory admits the possibility that the presence of matter could attenuate E1 and E2 unequally, that is, not in linear proportion. In this case the ratio E1/E2 can change and the frequency might vary in transit. In the quantum situation, where collective action of intercepting matter co-operates in a photon reaction, the change is substantial and the frequency is reduced in a quantum step, but in a very rarefied interstellar medium, the frequency will reduce progressively as each element of matter is intercepted to scatter energy.

The analysis involves analogy with a simple harmonic oscillator for which the linear restoring force rate is a variable giving a resonant frequency fr, where:

(fr)^2 = k(fo)^2 ............ (5)
The variable k is equal to E/E1, this being 1-E2/E1. It gives E2 = 0 when the frequency reaches fo, the Compton electron frequency.

For a sinusoidal planar wave the amplitude of dE/dt is 2πfrE or:

2πfo k^(1/2)(E1-E2),
which is of the form given by equation (4) because k is a function of E1/E2. Also, E1 and E2, though approximately equal in magnitude over much of the frequency spectrum, are associated with very different charge densities and, inversely, with very different physical displacements. This causes one of them to be the seat of almost all wave energy loss, so that E1 is effectively constant in a planar wave and E2 is the main variable.

The energy density W of such a wave is proportional to E^2 or (kE1)^2, which means that (1/W)dW can be written as (2/k)dk. Also, from (5), with fo constant, we find that (2/fr)dfr is (1/k)dk. Taken together, these relationships allow us to write:

(1/fr)dfr/dx = (1/4W)dW/dx .......... (6)
where x is the distance travelled by the wave.
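(The step from relation (5) and the energy expression to equation (6) can be verified symbolically. The following sketch, using Python's sympy library, is my own addition and not part of the 1984 paper; it assumes, as stated in the text, that E1 and fo are held constant while k varies with distance.)

import sympy as sp

# With W proportional to (k·E1)^2 (E1 constant) and fr^2 = k·fo^2 (fo constant),
# the fractional change of fr is one quarter of the fractional change of W.
x, E1, fo = sp.symbols('x E1 f_o', positive=True)
k = sp.Function('k')(x)        # k = E/E1, varying with the distance x travelled

W = (k * E1)**2                # wave energy density, up to a constant factor
f_r = sp.sqrt(k) * fo          # resonant frequency, from equation (5)

lhs = sp.diff(f_r, x) / f_r    # (1/fr) dfr/dx
rhs = sp.diff(W, x) / (4 * W)  # (1/4W) dW/dx
print(sp.simplify(lhs - rhs))  # prints 0, confirming equation (6)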

We arrive, therefore, at the remarkable proposition that the dual component electric displacement, needed to explain the zero dispersion property of the vacuum, gives it the property of attenuating the frequency of waves in transit at one quarter of the rate at which the wave energy is absorbed, subject to overriding quantum effects associated with matter when present.

In the true vacuum where only the transient electron induction process discussed in this paper causes any attenuation, there should be a progressive reduction of frequency with a time constant of 4.5x10^17 seconds, that is four times the period calculated for energy attenuation. This is 14.3 billion years, a quantity comparable with the 12 billion years estimated as the average age of the galaxies as judged from their spectral character (Narlikar: The Structure of the Universe, Oxford University Press, 1977, p. 228).
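(For comparison with the units in which cosmologists normally quote the Hubble constant, the 4.5x10^17 second time constant converts as in the short Python sketch below. This is my own addition; the conversion factors are standard values supplied by me, not taken from the paper.)

H_inv = 4.5e17                   # frequency-decay time constant, seconds
year = 3.156e7                   # seconds per year (approximate)
Mpc = 3.0857e22                  # metres per megaparsec

print(H_inv / year / 1e9)        # about 14.3 billion years
print(Mpc / H_inv / 1e3)         # about 69 km/s per megaparsec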

The author sees the contribution in this paper as a major advance in a theory of the structured vacuum which has been evolving for many years. It is extremely gratifying that a theory which has proved to be so fruitful in determining fundamental constants with high precision should so easily lead to the theoretical derivation of the most relevant constant in cosmology. The interpretation of Hubble's constant as a phenomenon linked to dual displacement in the field medium should now encourage experimental enquiry into the detection of this property of radiation, one avenue being the study of anomalous antenna properties reported by Gieskieng (D.H. Gieskieng: The Mines Magazine (January, 1981), p. 29).


As already noted, the above account concerning the theoretical evaluation of the Hubble constant was published in 1984 in the Italian Physical Society periodical, Lettere al Nuovo Cimento, v. 41, pp. 252-256. That is an English-language periodical which offered rapid publication of scientific contributions that could survive referee scrutiny. It was shortly after that that the experimental research findings of Dave Gieskieng came to my attention. They confirmed to me that the aether does have the self-tuning, dual displacement property which sustains natural oscillations at the signal frequency of electromagnetic waves. Nature's ongoing attempts to convert aether energy into protons put a kind of ghost-like quasi-state of matter into space and, as explained above, that presence can weaken those waves and progressively reduce their frequency according to distance travelled. That led to efforts to publish what Gieskieng had discovered. The story on that is now told in these Web pages as the Appendix to the previous Lecture No. 10. I have deferred entering this Lecture 11 until I was able to record it immediately following Lecture No. 10.

Harold Aspden February 25, 1998.