Volume 77, 1948-49

Section A—Physical Sciences.

Some Aspects of Experimental Nuclear Physics.

The subject, for our purpose, can be divided into sections:—

I.—Transmutations.

II.—The development of equipment.

III.—The study of particles.

IV.—The study of the nucleus.

These divisions, of course, overlap, but they will serve to direct our thought.

I.—Transmutations.

In natural radioactivity we have transmutations of unstable nuclei into others that may or may not be stable—thus the uranium series, starting with uranium, changes through a series of elements, one of which is radium, and ends with a particular isotope of lead. Thorium gives a similar series; actinium (a branch product from the uranium series) gives a third. The only other unstable nuclei known on earth are certain isotopes of potassium, rubidium, and lutecium, which give off electrons, and samarium, an α-emitter. Their activity is very weak, i.e., they have long lives.


In 1919, however, arising out of observations by Dr. Marsden, Rutherford demonstrated what was termed the disintegration of nitrogen, in which collisions of α-particles (which are the nuclei of He atoms) with N nuclei resulted in the emission of protons (which are nuclei of hydrogen). Later, as Blackett's cloud-chamber photographs show, this was recognised as a true transmutation, i.e., the α-particle is first captured by the N nucleus, a proton is then ejected and an oxygen isotope forms the remnant: N14 + He4 → O17 + p1, i.e., N14 (α, p) O17.


In 1932—the bonanza year of modern experimental physics—Cockcroft and Walton, in Rutherford's laboratory at Cambridge, showed the transmutation of Li, using accelerated protons:— Li7 + p1 → 2He4, i.e., Li7 (p, 2α)

—the first artificial transmutation of matter.


In that same year, 1932, the positron (the positive counterpart of the electron) and the neutron (a neutral particle of mass just greater than the H atom) were discovered and, further, Curie and Joliot found artificial radioactivity—e.g. that aluminium, bombarded by α-rays, became radioactive. From aluminium an unstable nucleus of phosphorus was formed which decayed with the emission of the newly-discovered positron:— 13Al27 + 2He4 → 15P30 + 0n1 (period, i.e. half-life, 195 sec.)


15P30 → 14Si30 + e+


(The neutron at first eluded detection.) The P30 is a new isotope of phosphorus too unstable to exist in nature. This activity is of a new type—the natural radioactive nuclei emit α's or β's, none give off positrons.

A tremendous field was thus opened up. The neutron was shown by Fermi to be particularly prolific in such actions (owing to its ease of penetration into the highly charged nucleus) and it was possible to bombard materials with α-rays or with accelerated particles such as p (or its ally, the deuteron d, i.e. the nucleus of deuterium or heavy hydrogen, the isotope of ordinary hydrogen, mass 2) or with neutrons, say, from the convenient (Rn+Be) source (Beryllium filings in a Rn tube, using the α-rays):—


Be9+He4→C12+n1 or finally with photons from γ-rays: Be9+γ→Be8+n1.

The discharge-tube method, using accelerated nuclei as ions, was of prime importance because it was obviously possible to control both the particle used and the energy applied. Cockcroft and Walton obtained their high tension by a process of voltage multiplication. Van de Graaff in 1931 had developed a belt type of electrostatic generator and this was further improved for use in accelerating tubes of the Cockcroft-Walton type where the ions received a step-wise increase of energy down the tube. These generators now can produce some millions of volts with an output of some milliamps.—a useful power of ∼ 10 kw. Such generators have also been made in pressure tanks (to prevent sparkover) and form a medium-size apparatus (∼20 ft.).

In 1934, Lawrence, who had been experimenting with phased acceleration, conceived the idea of bending the particles by a magnetic field into a circular or spiral path and then imparting the phased acceleration across the gaps of D's in high vacuum. The particles starting from the centre would be speeded up; their bending would then become less, so that a spiral path resulted; calculation showed that the increase of speed corresponded to the increase of path so that each semi-turn of the spiral took the same time. This time is very short, ∼ 10⁻⁷ sec.; it is therefore a radio-frequency voltage of some megacycles that must be applied to the two D's to accelerate bursts of ions in phase with it. The first model built by Lawrence had pole pieces ∼ 1 foot across. A small “laboratory” size has been described by Kruger et al. (Phys. Rev. 51). The largest yet made is the one at Berkeley, California, with pole pieces 15 ft. across. Liverpool and Cambridge possess cyclotrons (of average size) and the one at Birmingham (where two of my former students are working) is just nearing completion. There is, as yet, none in the Southern Hemisphere, though probably one is planned for Australia. The cyclotron has the advantage that the energy given to the various particles can be much greater than in the linear accelerator (up to 100 Mev., cf. with, say, 10 Mev.) and the useful yields (in current) are in general higher, e.g., ∼ 3 m.amp. giving ∼ 30 kw. in the beam pulses. Further, the cyclotron can be extended and its limitations partly removed. It is limited by reason of the increase in the mass of the particles as they are speeded up. This upsets the phasing relative to the accelerating field and results in a static equilibrium orbit being reached beyond which further acceleration cannot be achieved. If, however, either the magnetic field or the accelerating potential be varied to suit, i.e. to fit in with the spiralling burst of particles, then further acceleration is possible until the particles, owing to their high speed, finally radiate energy as fast as they acquire it. The limit then is the input of energy. This phasing development (suggested by Veksler in Russia and by McMillan at Berkeley) is called the synchrotron principle. The effect has been demonstrated but not yet applied to the cyclotron.
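As a rough numerical sketch of the resonance condition just described, the following Python fragment evaluates the orbital frequency of a proton in a magnetic field; the field value of 3,500 gauss (0.35 tesla) is an assumed, illustrative figure, not one quoted in the text, and the constants are modern values.

import math

e = 1.602e-19       # proton charge, coulombs
m_p = 1.673e-27     # proton mass, kg
B = 0.35            # assumed magnetic field, tesla (about 3,500 gauss)

f = e * B / (2 * math.pi * m_p)            # orbital frequency, revolutions per second
print(f"orbital frequency ~ {f / 1e6:.1f} megacycles per second")
print(f"time per semi-turn ~ {1 / (2 * f):.1e} sec.")   # of the order of 10^-7 sec., as stated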

The cyclotron can be used for massive particles (i.e. of atomic mass). It cannot be used for electrons because of their rapid acceleration and the consequent increase of their mass with the speed. For these, Kerst has developed the betatron, in which the electrons are accelerated by the electric field associated with a changing magnetic field. An A.C. magnet is used with a frequency of 600 cycles; the growing electric field in a cycle accelerates electrons from the gun till they reach an equilibrium orbit; by using a central core of iron dust which saturates before the iron outside, the flux inside this orbit is diminished towards the end of the cycle and the electrons spiral in on to a target. If a thin target is used the X-rays generated are in a very narrow, forward beam, e.g., in the 100 Mev., 60 cycle, betatron built at the American General Electric laboratory, the beam has a breadth of only 4 to 6 in. at 11 ft. It thus produces a concentrated beam of X-rays of energy much greater than any so far known except in cosmic rays. Later modifications, to allow of more economic use of the flux variation, indicate the possibility of a 250 Mev. machine.

Another modification of this idea is the race-track synchrotron proposed by Crane in which ½ Mev. electrons will be accelerated as in a betatron, but, instead of then circling in an equilibrium orbit, they will be accelerated further by an r.f. field on the straight legs of the track. This field will be automatically frequency-modulated by the electron beam itself and the whole acceleration related to the more slowly varying magnetic field so as to maintain the equilibrium path in the active part of the cycle.

Looking at these machines from the point of view of New Zealand developments, we realise first that physics on these lines has inevitably taken an engineering turn of a very specialised type. For high-energy ions the cyclotron principle is supreme; for high-energy X-rays the betatron. For medium energy, a Herb pressure E.S. generator should give good service and be within reach of our resources. The second point that emerges is that such machines, to justify the investment, need not only a competent design staff of physicists and engineers, but also a permanent running staff of physicists and technicians. University resources in Physics will have to be on a different scale altogether from the existing miserable provision.

The machines so far described produce, in general, a special type of result, viz., accelerated particles in a more or less convenient or concentrated form so that unwanted results such as X or γ rays may be largely screened out. There are, however, two other sources of high-energy particles, of different type. The first is a natural one—cosmic rays—where, by and large, we must take what nature provides and plan to catch interesting events. The second is an artificial one, but still more or less uncontrolled as regards its radiations—I refer to the fission pile. Here from the complex of actions going on there results a veritable bath of high-intensity and penetrating radiations—neutrons, α-rays, β-rays, positrons—of varying energies. While not theoretically impossible, it is at present impracticable to isolate any of these, say, n's of a desired energy. The pile may still be used, however, for suitable reactions; it produces a very high intensity of neutrons, for example, and forms a major instrument for transmutations. The size of the pile depends mainly upon the material used for slowing down the neutrons—the graphite piles used during the war were of the small-house size, but much smaller ones can be used with, say, heavy water, the manufacture of which requires mainly electrical power.

This seems a feasible project for New Zealand in either form, and many of the new radioactive isotopes will be required here for plant, animal, and human physiology—branches of knowledge basic to our agriculture and to our medicine. Thus the radioactive form of phosphorus, mentioned earlier, behaves chemically like ordinary phosphorus, but its distribution in a plant or an animal can be traced by reason of its radioactivity. Minute ray “counters” have been designed to do this work or, of course, photographic plates can be used where suitable. Similarly radioactive carbon, iron, cobalt, nitrogen, potassium, manganese, sodium, calcium, copper, etc., may be employed—the only requirements are (1) that the life be long enough for the particular process being studied, and (2) that the radiation be energetic and intense enough to be detectable.
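A minimal sketch of the half-life requirement mentioned in (1), using the ordinary decay law and the 195-sec. period quoted earlier for P30; the observation times chosen below are purely illustrative.

T_half = 195.0                       # half-life of P30 in seconds (figure quoted earlier in the text)
for t in (60.0, 600.0, 3600.0):      # illustrative observation times, seconds
    remaining = 0.5 ** (t / T_half)  # fraction of the original activity left
    print(f"after {t:6.0f} sec. about {100 * remaining:5.1f}% of the P30 activity remains")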

Many of the reactions of transmutations leave the nucleus excited, i.e., with extra energy which it may emit as γ-rays, and some of the new isotopes will be of value as γ-ray sources which may either be applied or inserted or, in favourable cases, differentially secreted in a particular organ requiring this form of treatment.

The tracer method also introduces a new technique in all wear problems and in chemistry for quick analysis of gases, liquids or solids—such problems as transport numbers, diffusion in solids, adsorption, gaseous diffusion or absorption, etc., can be elegantly followed using the counter technique provided that the different mass of the isotope plays no predominant part.

II.—Development of Equipment.

I have already indicated the high importance of design in physics. This, of course, is no new thing—it characterises all good experimental work—and a high place must be given to apparatus design in assessing the honours for advances in physics. Wilson's cloud-chamber has given an intimate insight into atomic processes; Lawrence's cyclotron has played and will play an important part in progress. Kapitza is another design genius whose interest, like that of Lawrence, seems to be in design itself. The development of apparatus must go hand in hand with actual research and the competent designer is worthy of his hire at a goodly wage. This is even more the case to-day, since many of the modern machines are large and failures in design more expensive. A complete mass spectrograph will probably cost about £2,000, a Cockcroft-Walton accelerator for 500 kv. about £3,000, a cyclotron of small size about £30,000. Much smaller items of apparatus are equally important—the simple counter tube plays a highly important part in modern physics and photographic plate technique using particle tracks in the emulsion is a still simpler method of increasing importance, because, like the cloud-chamber, it gives a picture in time and space. An improved ion source may increase the efficiency of a large machine by 100 per cent. and raise it to a highly productive level.

In addition to design of apparatus, there is also the testing of it, e.g., it is necessary to know its efficiency. Ion yields are simply measured with a Faraday cylinder—in the ion accelerators this is usually included round the target. Neutron yields are more difficult to measure. An elegant solution (due to Amaldi and Fermi) is to slow all the neutrons down by collisions with protons in hydrogen-bearing molecules such as water, hydrocarbon oils, etc. Such light particles, by the laws of collision, absorb most of the neutron energy, reducing them ultimately to thermal velocity, i.e., in temperature equilibrium with the medium. A detector which reacts with thermal neutrons, such as Rh foil, is activated at varying distances from the source and gives thus a measure of the slow neutron density. Integration through a sphere yields the answer.
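The integration through a sphere can be indicated by a small numerical sketch; the slow-neutron densities below are invented for illustration and stand in for the relative activations of the Rh foil at the stated distances.

import math

radii = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]        # distances from the source, cm
density = [40.0, 22.0, 11.0, 5.0, 2.0, 0.7]     # assumed slow-neutron densities (arbitrary units)

# trapezoidal integration of 4*pi*r^2*rho(r) through the sphere
total = 0.0
for i in range(len(radii) - 1):
    f1 = 4 * math.pi * radii[i] ** 2 * density[i]
    f2 = 4 * math.pi * radii[i + 1] ** 2 * density[i + 1]
    total += 0.5 * (f1 + f2) * (radii[i + 1] - radii[i])

print(f"relative slow-neutron yield ~ {total:.0f} (arbitrary units)")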

As an example of modern design, I should like to mention the powerful new apparatus for neutrons called the velocity spectrometer. This selects neutrons of a certain velocity range from a composite beam. Neutrons emitted from sources are nearly always fast; their energy is of the order of Mev's., and a 25 Mev. neutron has a velocity ⅓ that of light. Many of the interesting reactions of n's with matter occur with slower neutrons—from thermal velocities (10⁵ cms./sec. or ·03 ev. energy) upward to 1,000 ev. No primary sources of these exist—they must be produced by slowing down—usually in paraffin wax. If, now, a neutron source (e.g., a deuteron beam on a Li target) has the ion beam pulsed, say, at 50 cycles, then we get bursts of fast neutrons from the target and of the slowed neutrons from the paraffin wax around the target—these bursts occurring with the 50-cycle periodicity. If the detector (which consists of an ionisation chamber plus amplifier) be now similarly pulsed, the two can be phased so that the detector lags behind the source by the time taken for neutrons of a particular slow velocity to pass from source to detector. Thus velocities can be selected by varying the distance, the frequency, and the phasing. So far, energies up to 1,000 ev. have been dealt with (v ∼ 5 × 10⁷ cms./sec.) and the apparatus developed has 16 ranges, with this upper limit. The idea originated both in England and in the United States of America, and while in both countries the first development was unsatisfactory, it has now been successfully worked out in the States so that experiments, normally taking months, can be done in days and accurate data on absorption of slow n's by different elements is now pouring out in a quick succession of papers.
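The phasing arithmetic is simple time-of-flight; the sketch below, for an assumed source-to-detector path of 3 metres, shows the delays that would select the velocities mentioned above.

path = 300.0                      # assumed flight path, cm
for v in (1e5, 1e6, 5e7):         # cm./sec., from thermal up to about 1,000 ev.
    delay = path / v              # required lag of the detector behind the source, sec.
    print(f"v = {v:9.0f} cm./sec.  ->  delay = {delay * 1e6:10.1f} microsec.")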

This development for n's brings us naturally to my third division of the subject:—


III.—Study of Particles.

The yields, energy ranges, spatial distributions, masses, and the moments of the various particles—α, e−, e+, p, d, n, μ, γ—from the various reactions make a huge branch of nuclear physics. I can only mention it here. In addition, there is the particularly important aspect of the reaction of these particles with other similar ones, usually bound in matter, e.g., with protons in hydrogen atoms, deuterons in deuterium, α's in helium, etc. The classic case of this, of course, was the study of α particles by Rutherford and his co-workers, which both established the nuclear nature of the atom and also discovered transmutation, thus pioneering the whole development of nuclear physics. No less important is the scattering of protons, neutrons and deuterons, especially with the simpler nuclei, because these give vital information on the forces acting between the particles that constitute nuclei.

Such experiments need again either natural sources (where available) or sources from the machines mentioned previously. They employ also the detecting devices of counters, cloud-chambers, photographic plates, along with amplifier technique.

The scattering of neutrons has been of special interest partly because of the newness of the neutron and of its intriguing nature as a massive neutral particle, mainly, of course, because of its fundamental character as a nucleon or nuclear particle. Although neutral, it possesses a magnetic moment and is thus subject to magnetic fields, e.g., the field in saturated iron causes a better transmission of those neutrons with moments parallel to it and thus results in a beam which is partially polarised.

Another aspect of scattering is of interest. In X-rays we know the action of a crystal in scattering a beam of X-rays in preferred directions determined by the crystal lattice and by the wave-length or energy of the X-rays. On wave ideas we think of this in terms of wave-length, in particle terms (photons) we relate it to the energy. Electron diffraction showed that electrons also partook of the nature of waves and could also (apart from their charge) be regarded in either way. Any particles are similar; λ=h/p=h/mv, so that given m and v, λ can be calculated. Thus protons, neutrons, etc., show also the properties which we have hitherto associated with waves and will thus be specially diffracted by crystalline materials. This feature has recently been studied with neutron scattering; it can affect the distribution of n's appreciably, but since the λ is approximately the same as for X-rays and n's are much more difficult to control, it has not opened any new path of investigation comparable with the electron microscope.
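As a worked instance of λ = h/mv, the sketch below evaluates the wavelength of a thermal neutron; the speed of 2.2 × 10⁵ cm./sec. is a typical thermal value and the constants are modern ones, so the figures are illustrative only.

h = 6.626e-34        # Planck's constant, joule-sec.
m_n = 1.675e-27      # neutron mass, kg
v = 2.2e3            # typical thermal neutron speed, metres/sec. (2.2e5 cm./sec.)

wavelength = h / (m_n * v)                                  # de Broglie wavelength, metres
print(f"lambda ~ {wavelength * 1e10:.2f} Angstrom units")   # about the same as X-ray wavelengths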

IV.—Study of the Nucleus Itself.

This final division of the subject is concerned with the internal economy of the nucleus. Its charge was determined by Rutherford and by Chadwick: its approximate size (∼ 10⁻¹³ cm.) resulted from the same experiments. Of course, “size” in relation to a particle in physics must be specially defined because one cannot find any sharp edge on which to put measuring tools or their equivalent! Thus in the α-scattering experiments of Rutherford and of Chadwick, the distance to which the ordinary inverse-square law held gave a maximum measure for the radius, viz. ∼ 10⁻¹³ cm. The elastic scattering of fast neutrons (which have no special reactions with nuclei) gives a closer measurement because the neutrons get further in. Thus Sherr, using Li bombarded with 10 Mev. d's, obtained 25 Mev. n's and isolated the action of the fastest ones by using as a detector the reaction C12 (n, 2n) C11, which has a threshold of 20.4 Mev. From his results he could calculate the nuclear radius.

The magnetic moments of nuclei have been determined by Rabi using beams of atoms and of molecules in magnetic fields. The nuclei are (ideally) oriented by a divergent field (the polariser) and then will be transmitted by a similar one (the analyser). By applying, in between, a uniform field to hold them and, at right angles to the orientation, an A.C. field, the nuclei are made to precess so that they will not pass the analyser. The effect is actually statistical; the moment can thus be evaluated. For nuclei in ordinary materials a new elegant method based on the above has been devised by Bloch and by Purcell and is called nuclear induction. In this the nuclei in ordinary matter, e.g., protons in water, are made to precess by applying a resonance field at right angles to an aligning one. The induction produced by the precessing nuclei is then picked up in a receiving coil perpendicular to the precession coil. In this way the magnetic moment may be studied, its relaxation time, etc. In Purcell's method the resonance absorption in a cavity resonator is measured as the aligning field is altered in value.
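A minimal numerical sketch of the precession involved: the resonance field must be applied at the precession frequency of the protons in the aligning field. The aligning field of 0.5 tesla is an assumed figure, and the gyromagnetic constant used is the modern value for the proton.

gamma_over_2pi = 42.58e6      # proton precession frequency per tesla, cycles/sec. (modern value)
B = 0.5                       # assumed aligning field, tesla

f = gamma_over_2pi * B        # precession (resonance) frequency
print(f"resonance frequency ~ {f / 1e6:.1f} megacycles per second")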

The angular momentum of the nucleus can be derived from refined developments of spectroscopy.

Masses are measured by the mass spectrograph—separation of isotopes is also possible by this means.

The constitution of the nucleus is adequately explained in terms of n's and p's, these particles in some form being held together by very strong, short-range forces between n-n, n-p and p-p against the electrostatic repulsion of the protons. From the emission of α's by the heavy radioactive atoms and from transmutations like those of Li and B, this α-grouping, (2n+2p), would seem to be somewhat differentiated as a sort of closed group. Gamow conceived the potential diagram of a nucleus, and in this we can represent levels of energy for the particles, both occupied and excitation levels. The latter may be found from the so-called “resonance” energies of bombarding particles, i.e., energies at which the particle has a much larger chance of capture by the nucleus. Such level ideas lead to a sort of spectroscopy of the nucleus, in analogy with that of the atom; this governs the energy of transitions (with γ-ray emission, i.e., photons), probability of transition, life in the excited state; and the reverse of these, such as probability of absorption of a photon, p, n, α, d, etc. (large absorption being known as “resonance” absorption).


Of the actual forces between the nucleons very little can be definitely stated. The modern theory uses the meson or mesotron (a particle of mass ∼ 200 times that of the electron and ∼ 1/9 that of the proton) as the energy vehicle between the nucleons, but mathematicians cannot yet formulate a satisfactory theory. This is the urge behind the extension of accelerating apparatus ever to higher and higher energies. The mesotron appears in cosmic rays; its intrinsic energy (mc²) is ∼ 100 Mev. and energy of at least this amount must be available in nuclear particles such as neutrons and protons before we can hope to create mesotrons and thus study the postulated method of interchange between nucleons. Nuclear forces are investigated mainly from scattering experiments, but also from the build-up of nuclei, and this brings us to the subject of stability of nuclei.

All the natural radioactive processes, together with the transmutations produced by capture and the evidence of stability shown by the isotopes existing in nature, enable a theory of stability of nuclei to be formed and predictions to be made (within limits) of the effects of possible changes, i.e., the course of nuclear reactions (a parallel to chemical equilibria). The new phenomenon of fission has added an interesting chapter to this—here the addition of one particle can cause a fundamental splitting of the new nucleus:—

92U238 + 0n1 → 92U239 → fission

92U238 + 1d2 → 93Np240 → fission

92U238 + γ → 92U238* → fission, or with α or p.

The fissionable atoms known are U, Th, Pa, Np, and Pu and the energy needed varies in the different cases; neutrons are the most efficient bombarding particle. Various modes of splitting exist, along with the simultaneous emission of n's. The new nuclei first formed are strongly β-active, emitting high-energy electrons. The fission particles are projected in opposite directions with very great energy ∼ 100 Mev. each. Such high energies (relative to the low chemical-action energies characteristic of combustion, ∼ 10 ev. compared with 10⁸) combined with a relatively high proportion of atoms changed produce the tremendous energy release that characterises the so-called “atomic” bomb. In the pile process, the kinetic and radioactive energy of fission is degraded into low temperature heat, but if it could be used directly (e.g., in a dream turbine) and the high temperature of the primary particles thus made effective, it would give a highly efficient engine. Again, the order of energy in all the nuclear actions is very large—thus radon gives, per gram, energy at the rate of 25 h.p. for 5½ days, so that if, some day, a suitable α-product can be economically made, this might well form the energy source for small-scale engines. Such energy-development work will need a very general institute, embracing, or associated with, most lines of nuclear work and using mathematicians, physicists, chemists, engineers and technicians—its outlook must be broad and general, though its lines of work may be restricted as a matter of financial limitation.
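The radon figure can be checked roughly as follows; the decay constant and the round value of about 20 Mev. released per disintegration (counting the short-lived daughters) are modern approximations, used here only to confirm the order of magnitude.

import math

atoms = 6.02e23 / 222                    # atoms in one gram of radon
half_life = 3.82 * 24 * 3600             # half-life of radon, seconds
decay_constant = math.log(2) / half_life
energy_per_decay = 20e6 * 1.602e-19      # about 20 Mev. per disintegration, joules

power_watts = atoms * decay_constant * energy_per_decay
print(f"initial rate of energy release ~ {power_watts / 746:.0f} h.p. per gram")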


I have, I hope, said sufficient to indicate the rapid advances made in this subject—advances that will help greatly to extend the control by man of the resources of nature, that have already widened the range of atomic material with which he can operate and that will put greater and more convenient power sources at his disposal. New Zealand has a special interest in the subject, in view of the outstanding work of Lord Rutherford in placing it on sure foundations (and some of his disciples are still among us!); New Zealand must also have an interest in it because of its very fundamental nature, such that it marks a real epoch in the history of physics and, indeed, of mankind. Some of the branches of the subject such as cosmic rays, mass spectra, nuclear moments can be attacked with relatively simple apparatus, but the greater part of the work lies with high-energy particles and demands both the use of expensive machines and, as I have mentioned, the employment of a fairly permanent staff to operate them. In the fundamental side of all this work the University must play its part, and for this it will need greatly increased resources; the applications to medicine, agriculture, and engineering must be the realm of Government planning. The two must go hand in hand, for the one thing certain is that this latest advance of Physics is fraught with such possibilities, for good and for evil, that New Zealand cannot stand aside and neglect it.


Some Physical Principles Affecting Housing.

By the commencement of the present century, a traditional type of domestic building-construction had been developed fairly generally throughout the Dominion. This construction had proved itself both economical and at the same time reasonably well suited to the climatic conditions of the country. Nearly 90% of the houses built between 1890 and 1910 conformed in the main to the following general specifications:—

Framework of timber, usually 4 in. × 2 in., lined externally with weatherboards, and internally with rough lining, scrim and wall-paper. Stud heights usually from 10 to 12 ft. Ceilings also of timber, either dressed below and painted, or left rough and covered with scrim and wallpaper. The timber was all thoroughly seasoned before use, and this, together with the good craftsmanship prevalent at that time, resulted in a tight wall-construction which left a comparatively dead-air space between outer and inner linings. The roof was generally of corrugated iron, fitting tightly, and leaving a dead-air space over the ceiling. To this was added wood sarking in a large proportion of houses. Windows were high and hung in sashes, with counterweights, so as to open by easily variable amounts at top and bottom. Most of the rooms were provided with an open fireplace and chimney.

As a result of all this, houses were warm, dry, and usually well ventilated, without excessive draught.

Within the last 20 years, radical departures from this traditional mode of domestic building-construction have developed. Weatherboards are to a very considerable extent superseded by veneers of brick or asbestos cement or concrete and the like; while interiors (both walls and ceilings) are being lined with such wall-board materials as fibrous plaster, pumice-cement board or wood-fibre board, or with lath and plaster. A wooden framework is still preserved in most types of construction; but the timber is green or partly dried and, either through indifferent workmanship or by intention, the wall cavities are usually very heavily ventilated. Roofing iron has been largely replaced by tiles, which are far from airtight and often not even reasonably watertight. Stud heights have been lowered to 9 or even 8 ft., and sash windows have given place to casements, with or without leadlights. With these, draughtless ventilation is more difficult, and, in fact, in a remarkably large proportion of homes, draughts are avoided by the simple but noxious habit of keeping all windows strictly closed in all but the calmest and warmest of weather. Chimneys have disappeared from all rooms except the living room—at least in smaller houses.

These modern houses are found in general to be much colder, much more damp and much less adequately ventilated than were those of the older type. The reason for the deterioration is not far to seek, if we inquire into the physical features associated with the new materials and modes of construction.

In the first place, the thermal insulation provided by the traditional construction was reasonably good. Methods have been developed at the Dominion Physical Laboratory of measuring heat losses through a typical wall section of a house on the site. This is achieved by applying an open heated box to the interior of an external wall, and measuring the heat input to the box and the temperature drop across the wall section when steady conditions have been established.

The box itself is heavily insulated and pressed tightly against the wall interior by braces and jacks. A fan circulates the heated air within the enclosure. In order to ensure that all the heat passes out through the section perpendicularly to the surface, the whole room containing the box is heated to the same temperature as the box.

In this way, a typical traditional wall section was found to have a thermal transmittance value (U) of about 0.27, the units being B.Th.U. per sq. ft. per hour per °F. temperature difference across the section.


In the modern house, the wall linings are poor insulators, are usually comparatively thin, are backed by a draughty cavity instead of a sealed one, and often do not have the outer protection of as poor a thermal conductor as weatherboard timber. In consequence, U values have been found to be much higher. Here are some typical values:—

Weatherboards and lath and plaster 0.37
Brick veneer and pumice-cement board 0.54
Brick veneer and lath and plaster 0.00

Modern standards abroad call for U values of not more than 0.20, and values even as low as 0.15 are strongly recommended in Great Britain. These low values mean much less heat loss in cold weather, and much less fuel consumption to keep the interior warmed up to comfort levels. It will be realised therefore that the high values discovered in modern New Zealand dwellings convey a very pointed condemnation of our present-day building methods.
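The practical meaning of these U values can be shown with a short sketch using the relation Q = U × A × ΔT; the wall area and the temperature difference below are assumed figures, not measurements reported here.

area = 800.0     # assumed external wall area, sq. ft.
dT = 20.0        # assumed inside-outside temperature difference, deg. F.

walls = {
    "traditional weatherboard wall": 0.27,
    "weatherboards, lath and plaster": 0.37,
    "brick veneer, pumice-cement board": 0.54,
    "recommended modern standard": 0.20,
}
for name, U in walls.items():
    loss = U * area * dT      # heat loss, B.Th.U. per hour
    print(f"{name:36s} U = {U:.2f}   loss ~ {loss:6.0f} B.Th.U. per hour")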

Turning next to the problem of ventilation, we have found that the rate of ventilation in modern domestic rooms, in reasonably calm weather and with windows closed, is usually less than one air change per hour.

This has been measured by releasing about 1% of hydrogen gas into the air of the room, and measuring the rate of replacement of the mixed air by fresh outside air, by an electrical method. The method depends essentially on the high thermal conductivity of hydrogen compared with air. We have found it convenient to employ an aeroplane fuel-air-ratio analyser, which normally estimates the proportion of carbon dioxide in the exhaust gases of an engine. The method gives results which agree with more direct but less convenient methods to within 2 or 3%.
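With perfect mixing the tracer concentration falls as C = C0 e^(−nt), where n is the number of air changes per hour; the sketch below recovers n from two readings, both of which are hypothetical.

import math

C0, C1 = 1.00, 0.55    # hypothetical tracer concentrations (relative units)
t = 0.75               # assumed interval between the two readings, hours

n = math.log(C0 / C1) / t
print(f"ventilation rate ~ {n:.2f} air changes per hour")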

As a result of these measurements, we have learned that a sash window open slightly at top and bottom is much more effective than a casement window open by a similar amount; and that in a room with an open chimney, ventilation rates are from two to three changes per hour without a fire, and more than double that amount when a fire is burning.

The most serious consequence of this combination of low ventilation rate and high thermal loss is the development of very high humidities in occupied rooms. The moisture contributed to the air by the occupants builds up into high moisture contents which the ventilation is inadequate to remove, and which the low wall temperatures in the winter convert to high relative humidities. Rooms are damp and unhealthy, and in over 50 per cent. of modern houses mould develops on ceilings and walls. This mould rapidly and seriously disfigures interior finishings, and results in a demand for more frequent redecoration.

The fundamental remedy for this dampness and mould in houses must clearly lie in an increase in ventilation together with a decrease in thermal losses through walls, ceilings and floors. So far has the theory of this problem now advanced, and been confirmed by experimental methods, that it has been found possible to survey a room and calculate the extra insulation and ventilation necessary to remedy the mould and dampness, under given external conditions. Unfortunately the remedy is expensive, both in houses during erection and especially in such as are already erected. It is probable, therefore, that those responsible for the present chaotic state of housing comfort will hesitate as long as possible to apply these remedies in order to put the matter right.

A further important problem of domestic comfort is at present under investigation. The open hearth fire as a means of household heating has always been in favour, in spite of its recognised inefficiency. So long as fuel is cheap and abundant, little attention is paid to the wastage. Now, investigation has been prompted by scarcity and costliness, and it has been shown that little more than about 15 per cent. of the heat available in even the best of domestic coal goes to increase the comfort of the household. More efficient methods of utilising the heat without losing the homeliness of the open fire are under consideration. By using the convective heat in a heat exchanger and by suitable control of the air supply to the fire, a considerable increase in efficiency has been secured in Britain. It is not certain whether these improvements can be applied without modification to our locally available fuels, and until experimental work has been carried out on these lines, no very satisfactory conclusions can be drawn. At the moment, progress is held up for lack of a specially-designed laboratory. In this it is hoped to be able to measure the total heat contributed to the room, by making the room behave as its own calorimeter. When funds are available for this project, it is confidently expected that much saving in fuel and a much more efficient method of domestic heating will be developed, in spite of the low quality of the fuel at present available for household use.

Plate 1.
Diffraction patterns of:
(a) Magnetite with Molybdenum Radiation.
(b) Magnetite with Cobalt Radiation.
(c) Magnetite with Unfiltered Iron Radiation.
(d) Silver with Copper Radiation.


But even if an efficient method of room heating is introduced, our room will not warm up quickly if the walls consist of material of low insulating quality or of high thermal capacity. There is on record an outstanding example of this. A room lined with plaster usually took about an hour and a half to warm up to comfort levels in the winter mornings. When, however, the lining was supplemented with oak panelling, the same gas fire was able to warm the room adequately in half an hour. The panelling improved the insulation of the room, but, more important, being constructed of a material of low thermal capacity, its temperature rose much more rapidly than did the plaster. In consequence, radiant heat was returned to the room more rapidly and the room reached a level of comfort much more quickly. The development of new types of wall-linings having these desirable characteristics is fraught with difficulty, because other factors must be kept to the fore. Of these, cheapness of manufacture and ease of application are of prime importance. There is a big field for a young physicist in this domain. But with the present shortage of qualified men interested in classical aspects of the science, progress must necessarily be slow and results delayed.

X-Ray Diffraction Of Ironsand.

New Zealand's ironsands, particularly those on the west coast of the North Island, have long been regarded as a potentially rich source of iron. Now their relatively high titanium content has aroused considerable interest, and their smaller vanadium content is also considered worth attention.

The sands consist of grains of many different minerals. The proportions of these vary from place to place, even from one part of a beach to another; and the South Island ironsands differ very considerably from those of the North. In the North Island sands, the mineral present in quantity from which iron should most conveniently be extracted is magnetite (Fe3O4), and as this is strongly magnetic it can readily be separated almost completely from the other materials. The grains of magnetite are found to contain one-tenth as much titanium as iron, and a small quantity of vanadium.1 This unusually large proportion of titanium is the main reason for the failure of past attempts to work the sands.2 In a blast furnace titanium forms heavy infusible slags which accumulate at the bottom and soon put the furnace out of action. No method of reducing the titanium content before sending the magnetite to the furnace has yet been devised. Monro and Beavis1 concluded from chemical evidence that the titanium atoms are actually in the magnetite lattice, i.e., some of the lattice points normally occupied by iron atoms are occupied by titanium atoms. This arrangement is known to occur in some minerals in other parts of the world. Chemical evidence alone, however, cannot settle this matter, and the only way of obtaining the necessary further information is to apply the methods of X-ray crystallography.

The apparatus used to do this is that designed by Williamson and constructed at the Dominion Physical Laboratory.3 It consists of an X-ray tube with interchangeable anticathodes and the necessary pumps and power supplies, and a Debye-Hull powder camera. The method requires substantially monochromatic X-rays. These are obtained by strongly exciting the characteristic K radiation of a suitable metal used as the anticathode or target, and removing the unwanted Kβ line with a suitable filter. The material to be studied is ground up to a fine powder, bound together with a gum containing only light atoms, and formed into a very thin rod which is slowly rotated at the centre of the camera (with its axis vertical) while a narrow (horizontal) beam of monochromatic X-rays is shone upon it. The purpose of this treatment is to present as many orientations of crystal planes as possible to the incident beam so that the diffracted radiation will form continuous cones about the incident beam as axis. In order to record the positions of these cones a strip of photographic film is held in the shape of a cylindrical surface coaxial with the specimen so that upon development a set of dark lines marks the intersections of the cones of radiation with the cylinder of film. From the measured positions of these lines the angles of diffraction can immediately be deduced and the corresponding spacings of crystal planes calculated from the well-known Bragg equation: nλ = 2d sin θ

where n = the order of diffraction

λ = the wavelength of the X-rays

d = the spacing of the crystal planes

θ = the angle between the incident or diffracted ray and the crystal plane.
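A worked instance of the equation, assuming first-order diffraction, the accepted cobalt Kα wavelength, and an illustrative glancing angle (not one read from the plates):

import math

wavelength = 1.789              # cobalt K-alpha wavelength, Angstrom units
theta = math.radians(22.0)      # assumed glancing angle measured from the film
n = 1                           # order of diffraction

d = n * wavelength / (2 * math.sin(theta))
print(f"spacing of the reflecting planes d ~ {d:.2f} Angstrom units")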

From the positions and relative densities of the lines it would be possible with much labour to find the position of every atom in the unit cell of the crystal studied. However, this has not been necessary in the present work as the materials so far encountered have their diffraction patterns listed in the A.S.T.M. X-ray Data Cards.4, 5 These cards are so arranged that any listed element, chemical compound or mineral can be identified from its diffraction pattern.

Plate 1 (d), a contact print from the original, shows the diffraction pattern of silver, using copper Kα radiation. This was tried first to check the reliability of the apparatus. The lattice constant found agreed to one part in a thousand with the published value. This pattern is interesting in that it shows plainly the rapid increase in resolution as the ends of the pattern are approached. The doubling of the last line is due to the fact that the Kα line is really a doublet.

The choice of wavelength of radiation to be used for best results in the magnetite problem depends on several factors. Molybdenum radiation is a good general-purpose one because it is sufficiently penetrating for absorption in the specimen usually to give no trouble. However, its short wavelength results in a diffraction pattern that extends only a short distance on each side of the undeviated beam, so that it is of little use when we seek weak lines which will be obscured in this case by the crowding of stronger ones. Copper radiation is the most commonly used when a greater spread is required in the pattern. But copper radiation is a short distance on the wrong side of the K-absorption edge of iron, which means the radiation consists of photons energetic enough to eject electrons from the K shells of iron atoms and is thus strongly absorbed in any specimen containing iron (see Text Fig. 1). This results in a considerable reduction in the intensity of the diffracted beams and an increase in the background intensity due to re-radiation in random directions by the iron. Under such circumstances only the strongest lines in the diffraction pattern are detectable. If a still softer radiation is used to avoid this difficulty with iron we run into the same difficulty with vanadium and titanium which are also present. If, however, we go so far as to use vanadium radiation this will be on the right side of the K-absorption edges of all these elements; but there are two difficulties. The first is that solid or sheet vanadium metal for use as a target is hard to obtain. The second is that vanadium radiation is so soft that it will be fairly strongly absorbed in the specimen, with the result that at least those lines near the undeviated beam will be rather weak.

A chromium target was available and this was tried first, as the radiation is at least on the right side of the K-absorption edges of both iron and vanadium, but the patterns obtained were not as clear as could be desired. However, every line of the magnetite pattern could be detected and one line besides. Extra lines in a pattern almost invariably mean additional chemical compounds present, but this one line was much too far along the pattern to be the strongest line of any known compound, and was not likely to be the second or even third strongest. Thus the most likely explanation seemed to be that there was just one lattice present (magnetite) and that replacement of some of the iron atoms by titanium atoms (as suggested by Monro and Beavis) permitted a line to appear that would normally be cancelled out. (This explains the abstract appearing in the Congress programme, as this had to be written at this stage in the investigation.)

Text Fig. 1.

As the patterns obtained with chromium radiation could not be regarded as conclusive and as no vanadium target was available, an effective absorption curve for the specimen as a whole was drawn assuming the proportions of iron to vanadium to titanium found by Monro and Beavis. This showed plainly that cobalt radiation is likely to be the best for the purpose (see Text Fig. 2). (It is as far away as possible from the titanium- and vanadium-absorption edges without exciting iron.) The patterns obtained with cobalt radiation give one definite piece of evidence. Besides the magnetite lattice there is also present the lattice of ilmenite (FeTiO3).

The reason why the strongest line of this did not show up in the earlier patterns is presumably that it should appear in that part of the pattern fairly near the undeviated beam and would consequently be strongly absorbed in the specimen, chromium radiation being very soft. Another strong line of ilmenite is irresolvable from the strongest line of the magnetite pattern, which completely conceals its presence.


Text Fig. 2.

A rough quantitative estimate (as good as the information in the X-ray Data Cards will allow) of the relative proportions of ilmenite and magnetite present indicates that there is probably enough ilmenite to contain most or all of the titanium. The quantity of vanadium present is so small that even the most refined of X-ray-diffraction techniques would be unlikely to reveal its state of combination. Plate 1 shows the diffraction patterns obtained with molybdenum, cobalt and unfiltered iron radiation. In (c) the doubling of the number of lines due to failure to remove the Kβ radiation can be seen. Most specimens were made up from sand from Patea (west of Wanganui), but one made up from sand from Muriwai (north-west of Auckland) gave precisely the same diffraction pattern.

It should be pointed out that this is an account of only the beginning of X-ray investigation of New Zealand ironsands. There are many more minerals to be investigated and many refinements of technique to be tried, especially those leading to more accurate quantitative results.

The author is pleased to acknowledge the very helpful interest of Dr. E. R. Cooper, Director of the Dominion Physical Laboratory, in this work.

References.

1. Monro and Beavis (1946). N.Z. J. Sci. and Tech., 27B, 237.

2. Wylie (1937). N.Z. J. Sci. and Tech., 19, 227.

3. Williamson, K. I. (1946). N.Z. J. Sci. and Tech., 27B, 393–397.

4. A.S.T.M. Standards (1946). Part 1-B, 865–871.

5. Smith and Barrett (1947). J. Applied Physics, 18, 177.


Dielectric Properties Of Crystal Quartz At High Frequencies

Dielectric measurements can in general be divided into three methods suitable for various frequency ranges—bridge methods for frequencies up to a few mc./s., resonant circuits using lumped elements for frequencies between about 10 and 100 mc./s., and for higher frequencies resonant circuits with distributed constants.


Dielectric properties may be conveniently represented by a complex dielectric constant given by ∊ = ∊′−j∊″. The two properties to be considered here are the normal dielectric constant or permittivity, which is given by ∊′, and the loss factor tan δ, which = ∊″/∊′. For low-loss dielectrics at high frequencies ∊″ << ∊′ and tan δ becomes equal to the power factor cos φ. Experimentally, ∊′ is found from effects due to a change in capacity and tan δ by a change in the losses of the circuit in which the dielectric is placed.

Measurements of these properties of crystal quartz are dominated by the facts that quartz is a piezo-electric material, has low losses of the same order as the best dielectrics known, and that only physically small specimens are conveniently available.

A quartz crystal consists roughly of a hexagonal prism with a hexagonal pyramid at either end. The line joining the vertices of the pyramids is a trigonal axis of symmetry—the principal or optic axis (Z) which is a direction of zero piezo-electric effect. Three digonal axes of symmetry also exist in a plane at right angles to this axis and these are parallel to the faces of the hexagonal prism. These digonal or electric axes (X) are directions of maximum piezo-electric effect and the three directions co-planar with, and perpendicular to, the electric axes are called the mechanical axes (Y). Measurements have been taken with specimens cut so that the electric field is parallel to these various axes.

The low loss of quartz necessitates the use of resonant sections with the lowest possible attenuation and this is best given by some type of resonant cavity where the supports of the central conductor (if any) and of the specimen form part of the cavity itself.


Text Fig. 1.

For the 100–300 mc./s. region a convenient method allowing for the size of resonant sections and of the quartz, is a re-entrant cavity with the quartz in the gap. By using correct dimensions this can be treated as a shorted coaxial transmission line with lumped capacity at one end.

For the micro-wave region suitable methods, again allowing for the size of resonator and specimen and for ease of calculation, are a shorted coaxial line and a cavity resonant in the E010 mode.

For the coaxial line the fields and placing of the specimen are as shown:

Text Fig. 2. Dielectric in form of ring; axial field distribution without dielectric.


For the E010 resonant cavity: The only factor controlling the resonant frequency is the radius: λ = 2.6125a. The length is immaterial, but the greater the length, i.e., the greater the volume to surface ratio, the greater is the Q factor.

At about 10 cm. the E010 resonant cavity is the simpler of the two; for longer wavelengths the coaxial section is more suitable, and for shorter wavelengths (say 3 cm.) another type, a cavity resonant in the H01n mode is preferable.

A paper by Horner and others in the J.I.E.E., Vol. 93, Part 3, January, 1946, gives formulae derived from Maxwell's equations, which are suitable for use with all three methods. Before this paper was available, solutions from transmission line equations were obtained for a variable frequency method with a shorted coaxial line section by using a similar mathematical treatment to that in a paper by Gent of the Standard Telephone Company, which considered an open-ended section used with a variable length method. The two treatments for the coaxial line section give the same solutions for the permittivity, but the evaluation of the loss factor is approached slightly differently.


Text Fig. 3.

E010 resonator for micro-wave region: For the E010 resonant cavity the formulae which have been used in calculations are taken from the paper by Horner and others.

∊′ is evaluated from the resonant frequency and the dimensions of the cavity and specimen. Tan δ is found from a formula which contains the dimensions and ∊′, and the difference of the reciprocals of the Q factors of the cavity when (1) the dielectric is present (say Q) or (2) the dielectric is replaced by a loss-free specimen of the same permittivity and dimensions (say Q′)

i.e. tan δ ∝ [1/Q − 1/Q′]

This value of Q′ cannot be determined experimentally, and a theoretical value cannot be used with the experimental value Q, since Q factors obtained in practice never approach theoretical values. The difference is attributed to a departure of the depth of current penetration from the theoretical value given by d = 1/√(πμfσ)

where μ = permeability

σ = conductivity.

The Q of an air-filled resonator is found experimentally (say Qa) and used in the theoretical formula

QT = a / [d (1 + a/lr)]

where a = radius of resonator

lr = length of resonator

to give an effective current depth d′ according to the equation

Qa = a / [d′ (1 + a/lr)]

This value d′, after allowance for any frequency difference, is now used in the theoretical formula for Q′ to give an equivalent experimental value which can be used in the equation

tan δ ∝ (1/Q − 1/Q′)
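The book-keeping may be illustrated numerically. The theoretical and measured Q values for the air-filled cavity are of the order quoted below in the text; the quartz-filled figures are hypothetical, and the constant of proportionality in the last relation is taken as unity for the purpose of the sketch.

Q_air_measured = 2900.0     # measured Q of the air-filled cavity
Q_air_theory = 6800.0       # theoretical Q of the air-filled cavity
Q_with_quartz = 2500.0      # assumed measured Q with the quartz specimen in place
Q_lossfree_theory = 6200.0  # assumed theoretical Q' for a loss-free specimen

# the theoretical Q' is scaled by the factor by which practice falls short of theory,
# which is equivalent to replacing d by the effective current depth d'
Q_lossfree_effective = Q_lossfree_theory * (Q_air_measured / Q_air_theory)
tan_delta = 1.0 / Q_with_quartz - 1.0 / Q_lossfree_effective
print(f"effective Q' ~ {Q_lossfree_effective:.0f}, tan delta ~ {tan_delta:.2e} (to a constant factor)")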


The method has the disadvantage that each cavity can be used for only one frequency with a particular specimen; but the construction of them is relatively simple. Only the variable-frequency method can be used for the determination of Q factors, i.e., Q = f0/Δf, where f0 is the resonant frequency and Δf is the width between half-power points.

Suitable sizes of specimens and cavities were shown. The diameters here are 3.011 in. and 2.510 in. and the length is 1 in., which is much less than the value at which oscillation in other modes commences. The specimens are half an inch in diameter. Two cavities must be used, since the introduction of the quartz changes the resonant frequency to a value outside the range of the oscillators available, e.g., the smaller changes from approximately 3,600 to 2,875 or 2,845 mc./s., depending on which cut of quartz is used.
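As a check on these figures, the relation λ = 2.6125a quoted earlier for the E010 mode gives, for the smaller cavity, very nearly the 3,600 mc./s. stated; the short sketch below performs the arithmetic.

c = 2.998e10                    # velocity of light, cm./sec.
diameter_inches = 2.510         # diameter of the smaller cavity (from the text)
a = diameter_inches * 2.54 / 2  # radius, cm.

wavelength = 2.6125 * a         # resonant wavelength of the E010 mode, cm.
frequency = c / wavelength
print(f"resonant wavelength ~ {wavelength:.2f} cm., frequency ~ {frequency / 1e6:.0f} mc./s.")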

The Q values are quite low. For the air-filled cavity the theoretical value at 10 cm. is approximately 6800. This, of course, is never approached owing to the difference in current penetration from the theoretical, but it is also lowered appreciably by the coupling loops, the holes in the curved surfaces, and the poor finish of the surface. The best experimental value used in calculations is 2900, but this has recently been raised by decreasing the coupling considerably. Plating and better finishing of the surface would allow these Q values to be raised appreciably when further cavities are constructed.

Experimentally a signal from a klystron oscillator is fed into the cavity by a small loop, resonance being indicated by a crystal detector and mirror galvanometer, which has been calibrated and gives a square-law response. The signal is heterodyned with another from a fixed klystron oscillator using a crystal mixer and the beat frequency determined on a calibrated HRO receiver. The frequency of the fixed klystron is obtained using a coaxial line wave-meter. An alternative method is to measure the beat frequency on a discriminator with a calibrated response characteristic, but this does not have the flexibility of the receiver which covers a wide range of frequencies with varying degrees of band spreading.

Results so far obtained give, for specimens cut parallel to the X and Y axes,

∊′ = 4.42 at 2,875 mc./s., tan δ = 0.0002,

and for quartz cut parallel to the Z axis,

∊′ = 4.60 at 2,845 mc./s., tan δ = 0.0003.

Re-entrant cavity for 100–300 mc./s. region.—From transmission line theory, when all losses are small, the condition for resonance of a shorted coaxial section with capacity at one end is Z0 tan βl = 1/ωC

where Z0 = characteristic impedance of line.

With dielectric in the condenser and a resonant frequency f1: Z0 tan β1l = 1/ω1∊′C

[The section below cannot be correctly rendered as it contains complex formatting. See the image of the page for a more accurate rendering.]

With air as dielectric and resonant frequency f2 Z0 tan β2 1 = 1/ω2 C ∊′ = f2/f1 · tan β2 1/tan β1 1

To obtain the loss factor the equivalent series resistances must be considered.

– 48 –

The equivalent circuit with dielectric present, with RC and RL the equivalent series resistances associated with the condenser and with the line respectively, gives

tan δ = ω∊′C·RC,   Q1 = ωL/(RL + RC) = 1/[(RL + RC)·ω∊′C].

Remove the dielectric and retune so that the capacity is again ∊′C, i.e. the same resonant frequency; then

Q2 = 1/(RL·ω∊′C)   and   tan δ = 1/Q1 − 1/Q2.
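These uncorrected relations can be combined into a short calculation. The Python sketch below assumes a single shorted air line of known length with the gap at one end, takes the phase constant as 2πf divided by the free-space velocity, and uses invented frequencies and Q values rather than measured ones.

```python
# A minimal sketch of the uncorrected re-entrant-cavity formulae quoted above;
# fringing capacity and the change in line length on retuning are ignored.
import math

C_LIGHT_IN_PER_S = 1.1803e10  # free-space velocity, inches per second

def tan_beta_l(freq_cps, length_in):
    """tan(beta*l) for an air-filled coaxial section of length l."""
    beta = 2.0 * math.pi * freq_cps / C_LIGHT_IN_PER_S
    return math.tan(beta * length_in)

def permittivity(f1_cps, f2_cps, length_in):
    """epsilon' = (f2/f1)*tan(beta2*l)/tan(beta1*l); f1 measured with the
    dielectric in the gap, f2 with air."""
    return (f2_cps / f1_cps) * tan_beta_l(f2_cps, length_in) / tan_beta_l(f1_cps, length_in)

def loss_tangent(q1, q2):
    """tan(delta) = 1/Q1 - 1/Q2 (dielectric in, then out, same capacity)."""
    return 1.0 / q1 - 1.0 / q2

# Invented figures: a 10 in. section resonating at 190 mc./s. with the
# specimen and 230 mc./s. without it, with Q1 = 600 and Q2 = 900.
print(round(permittivity(190e6, 230e6, 10.0), 2))
print(round(loss_tangent(600.0, 900.0), 5))
```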


Text Fig. 4.


Text Fig. 5.

Several corrections have to be made to these simple formulae: allowance must be made for the edge or fringing capacity at the gap, which is of the same order as the direct capacity and cannot be eliminated from the equations; a small change in line length occurs on retuning the line after the dielectric is removed; and for constructional purposes it is more convenient not to have the gap at one end of the cavity. Control of the resonant frequency lies in the capacity, and to work at any particular frequency it is in general necessary to have different values for the gap separation and the thickness of the specimen. It is also desirable to be able to use various diameters of specimens.


Text Fig. 6.

Solutions have been obtained covering all the above cases.

In general, where dimensions are as shown:

C0 = edge capacity

C1 = capacity of air condenser of same size as dielectric

C2 = capacity of air condenser, radius b, separation (t−d)

C3 = capacity of ring condenser, radii a and b, and separation t


∊′ = 1/{ (t/d)·[ (f1/f2)·(tan β1lA + tan β1lB)/(tan β2lA + tan β2lB)·A − 1 ] + 1 }

(lA and lB being the lengths of line on either side of the gap)

– 49 –


where

A = [1 − (C3 + C0)·ω2Z0·(tan β2lA + tan β2lB)] / [1 − (C3 + C0)·ω1Z0·(tan β1lA + tan β1lB)]

and

tan δ = [1/Q1 − (1/Q2)·(tan β1lA + tan β1(lB ± ΔlB))/(tan β1lA + tan β1lB)] × (∊′C1 + C2)·[∊′C1C2 + (C3 + C0)·(∊′C1 + C2)]/(∊′C1C2²)

The dimensions of the cavity were chosen so that the length is greater than the diameter, and the radius much less than a quarter wave length in order to satisfy the resonance conditions. The diameter of the outer conductor is 6 in. and the inner 1.5 in., giving a ratio approximately equal to that for an optimum Q value.

The length can be changed by inserting additional sections giving overall lengths of 9 to 12 in., in order to give a greater frequency coverage. The top shorting plate can be removed for insertion of the specimen, and the upper portion of the central conductor, controlled by a micrometer, moves through spring finger contacts to give an adjustable gap of known dimensions. The cavity is fed through a small coupling loop near the base and a similar output loop feeds to a crystal detector as a resonance indicator.


Text Fig. 7.

Frequencies are measured by heterodyning a variable oscillator with the fundamental or harmonics of a second oscillator which is in turn heterodyned with a standard signal generator to give a final beat frequency which can be measured on the HRO receiver.

Curves of resonant frequency are of the form shown. For the 9 in. cavity the values are approximately 300 mc./s. at 0.3 in. and 195 mc./s. at 0.05 in. For the 12 in. cavity, 230 mc./s. at 0.3 in. and 150 mc./s. at 0.05 in. The frequency is, of course, lowered when the dielectric is inserted.

Experimental values of the edge capacity are obtained by determining the total capacity at the gap from the equation and subtracting the calculated geometrical capacity.


Text Fig. 8.

It varies from about 1.20 pf. at a separation of 0.1 in. to 1.05 pf. at a separation of 0.2 in. At these separations the calculated direct capacity, which has a curve of the form shown, has the approximate values 3.9 pf. and 2.0 pf. respectively.
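As a rough cross-check on the direct capacities quoted, the sketch below evaluates the simple parallel-plate formula for a circular gap the size of the 1.5 in. centre-conductor face at the two separations mentioned; treating the whole conductor face as the condenser, and neglecting fringing, are assumptions of the illustration.

```python
# A rough parallel-plate estimate of the geometrical (direct) gap capacity.
import math

EPS0_PF_PER_IN = 0.2249  # permittivity of free space in picofarads per inch

def parallel_plate_pf(radius_in, separation_in):
    """Direct capacity of a circular parallel-plate gap, in pF."""
    return EPS0_PF_PER_IN * math.pi * radius_in ** 2 / separation_in

for sep in (0.1, 0.2):
    print(sep, round(parallel_plate_pf(0.75, sep), 1))
# roughly 4.0 pf. and 2.0 pf., of the same size as the 3.9 pf. and 2.0 pf. quoted
```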

– 50 –

The Q factor of the air filled cavity varies approximately as shown.


Accurate results for the permittivity and loss factor of quartz have not yet been obtained, since at present there is a slight variation from parallelism in the gap faces due to difficulties in construction. A difference of 1/1000 in. introduces an appreciable error. The results do show, however, that plates cut perpendicularly to the X and Y axes give similar values which are lower than those for plates cut perpendicularly to the Z axis (as in the microwave case).

Values obtained for various diameters and thicknesses of plates range from 4.25 to 4.45 for the former and 4.4 to 4.7 for the latter over frequencies between 200 and 250 mc./s., but, in view of the known inaccuracies, these should be taken only as an indication of the true values.

The permittivity of quartz as given by different authors in available literature shows appreciable variations over the lower frequencies up to 30 mc./s. and in tables a most probable value is generally quoted. Piezo-electric effects are responsible for some of the discrepancies as shown by variations for plates of slightly different thicknesses at the same frequencies.


Text Fig. 9.

The results obtained so far by the above two methods are in the same range as at lower frequencies and show a similarly greater value of the permittivity and loss factor for specimens with the optic axis parallel to the direction of the electric field than for those in which the axis is perpendicular to the field.

Synchronized Feedback In Scale-Of-2 Electronic Counters.

[Abstracted from a thesis submitted to the University of New Zealand on October 25, 1946.]

Introduction.—For automatic counting at the highest possible speeds of any events which can be made to produce suitable electrical impulses, hard-valve-electronic counters are widely used. The best circuits working on the well-known scale-of-2 principle have advantages of simplicity, stability, speed, reliability of

– 51 –

action, and tolerance in construction over other types. It must be assumed here that the reader is already familiar with the scale-of-2 principle. Its sole disadvantage is the considerable inconvenience of converting the reading to decimal notation, especially when there are many stages. Successful attempts have been made to produce decimal scales to avoid this difficulty, either by using rings of five valves, each of which cuts off all the others as it conducts, or by putting certain extra couplings between the stages of a 4-stage scale-of-2 counter, which overwhelm the normal action at certain parts of the cycle (e.g. Potter's “forced reset”) and make the cycle 10 instead of 16. In either case, however, there is some loss of the advantages listed above for the simple scale-of-2 counter, and there is reason to believe that this loss is fundamental to the method and cannot be entirely overcome by suitable design. For this reason an attempt was made to design an entirely new decimal counter in which the component stages operate only in their normal manner. It was found to be possible to produce such a counter having any even cycle whatsoever by a general method referred to as synchronized feedback.

Feedback in Counters.—Before describing this method, it is necessary to explain briefly the idea of feedback, as it applies to a counter circuit. If, say, a 3-stage scale-of-2 counter, having a complete cycle of 8, is arranged so that at some point in its cycle it feeds an extra impulse into its own input, then, ignoring for the present questions of resolving time and actual mechanism, it is clear that the counter will return to its initial state after only 7 external impulses. Similarly, the cycle could be reduced to 5 (external impulses) by feeding back 3 extra impulses during the cycle. This method of changing the cycle to a desired value must have been obvious from the first, but owing to certain practical difficulties which will be discussed, it does not seem to have been much used. A circuit used in the Loran navigational equipment was the only application, besides the present one, to which we were able to find reference. Moreover, this uses a different basic type of counter (condenser charging), which cannot be substituted for the present type owing to a number of limitations. The forced-reset principle mentioned above is different from feedback as discussed here.
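The counting argument of this paragraph is easy to check in a few lines of Python. The sketch below treats the stage chain simply as a modulo-2^n register and feeds one extra impulse back to the input whenever a chosen state is passed; the choice of feedback point is arbitrary and, as in the text, questions of resolving time are ignored.

```python
# A minimal sketch: a 3-stage scale-of-2 counter (cycle 8) which feeds one
# extra impulse into its own input once per cycle completes its cycle after
# only 7 external impulses.
def external_cycle(n_stages, feedback_at):
    """Count the external impulses needed to return to the all-off state when
    one extra impulse is fed back each time the state `feedback_at` is reached."""
    modulus = 2 ** n_stages
    state, external = 0, 0
    while True:
        state = (state + 1) % modulus      # an external impulse
        external += 1
        if state == feedback_at:           # feed one impulse back to the input
            state = (state + 1) % modulus
        if state == 0:
            return external

print(external_cycle(3, feedback_at=4))    # 7, i.e. the cycle of 8 reduced by 1
```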

Difficulty of Feedback.—The difficulty in the use of feedback is as follows. The correct number of feedback impulses must be produced, one at a time, at suitable times during the cycle, and this is done by the action of one or more of the later stages. However, any such stage only changes over because of (and simultaneously with, to within 1 μs or less) an action of every stage preceding it. If the feedback impulse is returned very quickly, it is liable to be confused with the pulse which initiated it in the earlier stage, and the latter may only respond once instead of twice. If, on the other hand, the extra impulse were deliberately delayed so that it was always recorded, it might itself inhibit the counting of an ordinary impulse arriving about the same time. In other words, the recovery time would become longer for the particular impulse that initiates the feedback. This irregularity would be very undesirable, apart from the difficulty of producing a suitable delay.

Synchronized Feedback.—In the present method (see Fig. 1), we use feedback into a stage after the first, the (x+1)th stage. The delay is provided by the operation of the previous stage (the xth), which may or may not be the first. When feedback impulses occur, they are synchronized with the xth stage changing to the on* state, and are therefore interpolated exactly halfway, in the count, between the normal impulses into the (x+1)th stage, which occur when the xth stage goes off. No stage is required, at any time, to operate more rapidly than the first stage has operated. Apart, therefore, from any small incidental differences between stages, the first stage is responsible for all missed impulses, and these can be fully accounted for by stating a single resolving time in the normal way.

Holding Circuit.—The synchronization is effected by means of a holding circuit. This is a circuit similar to a normal stage with two stable states, but with separate inputs (to the plate or grid) to each triode, instead of the usual input to a common point. Because of this, a positive impulse at A will change the circuit to on but not to off, and vice versa at B. (Impulses of one sign only are normally used throughout a counter. Positive impulses are selected by the buffer triodes, as they are used in the basic scale-of-2 circuit chosen. These buffers

[Footnote] * It is convenient to refer to the initial and alternative stable states of a stage as “off” and “on” respectively.

– 52 –

Fig 1 General circuit for synchronized feedback.

– 53 –

are used in every connection between stages.) The holding circuit is normally off, and the impulses which reach it from time to time, via the connection shown, as the xth stage goes on, have no effect. At appropriate points in the cycle, an impulse is fed back from one of the later stages, as it goes on, to the holding circuit, which comes on also. So far, there is no effect on the earlier stages. However, the next time the xth stage goes on, the impulse it produces changes the holding circuit to off, and, in going off, the holding circuit sends an impulse into the (x+1)th stage. The latter does not normally receive an impulse, when the xth stage goes on, and therefore there is no coincidence or overlapping, and the extra impulse is duly recorded. Except that the apparent count is now greater than the true count by 2^x, counting continues normally again, until the next feedback impulse is sent to the holding circuit.

General Theory.—Consider now a cycle of any even number C. Let C = 2^x·D, where x and D are integers and D is odd. Let D = 2^y − S, where 2^(y−1) < D < 2^y and y is an integer. Then there must be x+y stages, and it is apparent that 2^(x+y−1) < C < 2^(x+y). The first x−1 stages are fed only in the normal scale-of-2 manner, and feed only to the next stage. The xth stage triggers the holding circuit as well, and the impulses from the latter are fed to the (x+1)th stage. Now 2^(y−1) < D and D = 2^y − S, so S < 2^(y−1), and any number less than 2^(y−1) can be expressed as the sum of selected numbers from the sequence

1 (=2^0), 2 (=2^1), 4 (=2^2), 8 (=2^3), …, 2^(y−2).

The corresponding number of feedback impulses (adding up to S), to cock the holding circuit, can be obtained from the last, second-to-last, …, or (x+2)th (i.e. (y−1)th-last) stage, respectively. Hence the whole circuit must be as shown. Since S is odd, the last stage must always feed back. Other stages, from the (x+2)th to the second-to-last, may or may not feed back, according to the value of S required. Since S impulses are fed back, each equivalent to 2^x external impulses, and since the counter would normally complete its cycle after 2^(x+y) impulses, we have that the modified cycle = 2^(x+y) − S·2^x = 2^x·(2^y − S) = 2^x·D = C, as required. x must be at least one, to provide a stage to trigger the holding circuit. Hence C must be even.
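The arithmetic above lends itself to a small routine. The Python sketch below factors an even cycle C into 2^x·D, finds y and S, and lists the later stages that must feed back to the holding circuit; it is my own illustration of the published scheme, not part of the original design procedure.

```python
# A minimal sketch of the general theory: C = 2^x * D (D odd), D = 2^y - S,
# x + y stages, and S written as a sum of 1, 2, 4, ..., 2^(y-2).
def feedback_plan(cycle):
    assert cycle % 2 == 0, "C must be even"
    x, d = 0, cycle
    while d % 2 == 0:                      # strip factors of 2: C = 2^x * D
        x, d = x + 1, d // 2
    if d == 1:                             # C a power of 2: plain chain, no feedback
        return {"stages": x, "x": x, "y": 0, "S": 0, "feedback_from": []}
    y = d.bit_length()                     # 2^(y-1) < D < 2^y
    s = 2 ** y - d
    stages = x + y
    # per the text, the term 2^j in S is supplied by feedback from stage (x+y-j)
    feeding = sorted(stages - j for j in range(y - 1) if s & (1 << j))
    return {"stages": stages, "x": x, "y": y, "S": s, "feedback_from": feeding}

print(feedback_plan(100))  # 7 stages, x=2, y=5, S=7, feedback from stages 5, 6, 7
print(feedback_plan(10))   # 4 stages, x=1, y=3, S=3, feedback from stages 3, 4
```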


Fig 2. Scale-of-2 circuit used.

– 54 –

Further Results.—With the 3-triode basic scale-of-2 circuit actually used (the third triode being a buffer and rectifier), it has been shown that the number of (twin-triode) valves required can never exceed 9 log₁₀C. It is usually nearer 6 or 7 log₁₀C, cf. the value 5 log₁₀C for a simple scale-of-2 counter of the same type. If every stage, including the holding circuit, contains a small neon lamp to show when it is on, it has been proved that it is always possible to allocate to the lamps, for any cycle at all, values whose sum at any time, for the lamps on, is the number of impulses counted. This is important, if it is desired to indicate the answer with a meter, instead of lamps. Meters can be conveniently used in many cases, e.g. for a cycle of 10, or of 100 (2 meters). The cycle of 10 has the disadvantage of requiring the maximum possible number of valves (9), but 100, 1000, 60, 360, and most similar cycles, are satisfactory in this respect (<7 log₁₀C). The circuit for 1000 has the advantage of requiring a minimum modification of an existing scale-of-2 counter. Apart from meter indication or adding lamp values, readings may be obtained from tables of lamp settings, in the same way as for a scale-of-2 counter. The only difference is that there are blanks for certain settings of the lamps which are skipped when feedback occurs. Whereas the table for an ordinary scale-of-2 counter increases in size indefinitely, with the number of stages used, and may be of many pages, the tables for cycles of 100 or 1000 will go on a single sheet, and complete cycles of 10,000, 1,000,000, etc., may be obtained, simply by connecting two or more counters in series and taking the corresponding number of readings, without requiring calculations or increasing the table.

Experimental.—Counters with cycles of 10, 100, 1,000 and others, have been constructed and tested satisfactorily. The basic scale-of-2 circuit actually used (see Fig. 2) was due to the Defence Development Section, Christchurch, New Zealand. However, this circuit is not essential to the method. Our only requirements are to be able

(a)

to take impulses from both plates of some stages (i.e. “on” as well as “off” impulses), which is always possible with a symmetrical circuit, and

(b)

to parallel the output from two or more buffers. This is usually fairly easy, especially, as in our case, if the buffers are normally biassed to cut-off so as to be sensitive only to positive impulses (see Fig. 3).


Fig 3 Parallel outputs

The holding circuit used in the preliminary try-out was simply an ordinary counter stage with separate condenser inputs to each grid instead of the normal cathode input. It works satisfactorily, but owing to the capacity loading of the grids must have had an unduly long recovery time. This will be overcome, at the expense of one extra valve, by using separate triggering triodes, with plates parallel to those of the holding circuit, after the manner of Stevenson and Getting's counter stage.

Resolving Times.—The reason for putting the uncomplicated scale-of-2 stages, if any, at the input end of the counter, is clear when we consider the effects of small individual differences between stages. If x=1, so that the xth stage is the first, or if the preceding stages have been “speeded up,” it is necessary that the (x+1)th stage and the holding circuit should be faster than the xth stage by a small but definite margin. This can be arranged by a small adjustment of time constants. Later stages are not critical. Stages before the xth may be speeded up by suitable (more critical) adjustment, if desired, provided that the resolving

– 55 –

time of the 2nd stage is < twice that of the first, the resolving time of the 3rd stage < 4 times that of the first, and so on up to the xth. Otherwise prolonged insensitive periods may occur. Similar conditions apply to all types of counters. When they are satisfied, the resolving time is simply that of the first stage. In synchronized feedback, if x > 1 and the first stage has not been speeded up, the other stages are not critical.

Conclusion.—Since this work was commenced in 1945, descriptions of a number of practicable decimal counters have been published, in spite of the limitations mentioned above. However, the method is still of interest as showing the possibility of varying the cycle by means of feedback, using only the normal action of the individual stages, and it may be a useful alternative, for certain purposes, to other methods of avoiding the conversion difficulty.

Table for Cycle of 100.
Dots indicate stage on.

[The body of the table, which shows by dots which of stages 1–7 and the holding circuit are on for each count from 00 to 99, cannot be reproduced in this transcription.]

Values of lamps are 1, 2, 4, 8, 12, 24, 48; and 4 for the holding circuit (indicated by underlining).
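Reading such a counter is then a matter of summing the values of the lamps that are on. The Python sketch below shows that decoding with the lamp values just quoted; the example lamp patterns are invented for illustration and are not taken from the original table.

```python
# A minimal sketch of reading the cycle-of-100 counter from its lamps.
LAMP_VALUES = {1: 1, 2: 2, 3: 4, 4: 8, 5: 12, 6: 24, 7: 48, "holding": 4}

def reading(lamps_on):
    """Sum the values of the lamps that are on to obtain the count."""
    return sum(LAMP_VALUES[lamp] for lamp in lamps_on)

print(reading([1, 3, 6]))             # 1 + 4 + 24 = 29
print(reading([2, 4, 7, "holding"]))  # 2 + 8 + 48 + 4 = 62
```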

A Meter Indicator For Use With Electrical Counters Using Scale-Of-2 Circuits.

Abstract.—A meter method of indicating the count of a series of coupled scale-of-2 circuits is described. The method is particularly suitable when a scale-of-16 unit has been converted to act as a decade counter.

High-speed electrical counters or chronographs are, at present, being used in increasing numbers and in an increasing variety of applications. When an instrument is first devised and is performing a new function, a certain amount of inconvenience in use is tolerated, but when it becomes an everyday tool it is desirable that the operation be as simple as possible. During the war period electrical counters or chronographs underwent considerable development, one of the main uses being the measurement of short time intervals in determining the muzzle velocity of shells. High-speed counters have also been used for many years in radioactivity measurements, each disintegration in a sample producing a count.

The fundamental unit of a high-speed electrical counter is a scale-of-2 circuit (Fig. 1), which consists of two electronic valves coupled together so that the system has two stable states. In one stable state, the first valve, A, is conducting, and the second, B, non-conducting. If a suitable impulse is fed to the input, the conducting states are reversed, B becomes conducting and A non-conducting. Another impulse to the input returns the system to the original state, and by taking an output signal from one valve of the pair

– 56 –

one output pulse can be obtained for two input pulses. Hence the name, Scale-of-2. By using a number of scale-of-2 circuits in series, it is possible to build up scales-of-4, 8, 16, etc.


Fig. 1. Scale-of-Two Circuit.

The normal method of indicating the count is to have one neon lamp connected to each scale-of-2 circuit (across one of the plate load resistors). If the lamp is glowing, the count is one; if the lamp is extinguished, the count is zero. Hence for a scale-of-16, there would be four lamps (Fig. 2). At the start of a counting period all the lamps would be extinguished. When one input pulse is received, the first lamp glows. With a second input pulse, the first lamp goes out, but the second lamp goes on. The counting value for the second lamp is two, the value for the third lamp four, and for the fourth lamp eight. So if, after a counting period, the first and fourth lamps were glowing, the number of pulses received would have been 1 + 8, i.e., 9. In a large scaling unit using 10 scale-of-2 circuits and giving, in effect, a scale-of-1024, there is a considerable possibility of error in recording the count. The trouble is that the final state of the instrument is not simply interpreted in terms of our decimal system of counting.


Fig. 2. Scale-of-Sixteen.

There have been two developments to simplify the operation and use of high-speed electrical counters. The first improvement was the introduction of the decimal or decade counter. One type, described by Potter*, uses a very ingenious method of resetting a normal scale-of-16 circuit to its original state after ten pulses have been received. Potter used four neon lamps as indicators for each decade, thus making a small amount of arithmetic necessary when finding the number of units, tens and hundreds in a count. A second type has been described by Lewis.† This method uses a scale-of-2 circuit followed by a ring-of-5. For indicators, Lewis suggests neon lamps or magic-eye tubes.

[Footnote] * J. T. Potter. A Four-tube Counter Decade. Electronics: 110–113. June, 1944.

[Footnote] † W. B. Lewis. Electrical Counting. P. 91. Cambridge University Press, 1942.

– 57 –

The second improvement has been the introduction of a meter to indicate the count, the meter pointer moving over the scale in steps, and the reading giving the state of the counter directly. Quite recently, a decade counter using the scale-of-2 and ring-of-5 system with a meter indicator has been described by West.* This particular counting system had been described earlier (during the war period) in a report which had a restricted circulation.

The remainder of this paper describes a meter indicator system which can be used with electrical counters consisting of a series of scale-of-2 circuits. The method is particularly convenient when used with a decade counter of the Potter type, i.e., a scale-of-16 converted to act as a scale-of-10.


Fig 3.

Fig. 3 shows the principle. Each pair of valves forms a scale-of-2 circuit. In the initial state before a counting period, valves 1, 3, 5 and 7 are conducting, and valves 2, 4, 6 and 8 non-conducting. The current through the plate load resistor of a conducting valve is about 2.5 ma., and the current through the other plate load resistor about 0.5 ma. (due to the resistors of the coupling circuit). When a suitable pulse is received and the scale-of-2 circuit changes its stable state, the change in current through each plate load resistor is about 2 ma. In the initial state of the counter, the 0.5 ma. currents produce a total voltage drop across R1, R2, R3 and R4 of about 37.5 mV. A similar potential drop is arranged across R5, so, at the start, there is zero potential difference between points X and Y.

When the first pulse is received and valve 2 becomes conducting, the extra 2 ma. through R1 produces a potential change of 0.01 V. at X. With a 2nd pulse, 2 becomes non-conducting, but 4 becomes conducting. The 2 ma. through R1 and R2 produces a potential change of 0.02 V. at X. A 3rd pulse makes valve 2 conducting besides valve 4, so giving 4 ma. through R1 and 2 ma. through R2 to provide the change. This produces 0.03 V between X and Y. With a 4th pulse we get 0.04 V. at X, since valves 2 and 4 become non-conducting but 6 conducting. A 5th pulse gives 0.05 V., etc. Hence each pulse received causes the potential of X to change by equal increments. In the case of the circuit shown, the 15th pulse causes valves 2, 4, 6 and 8 to be conducting, and a potential at X of 0.15 V. The 16th pulse makes these valves non-conducting, causes one output pulse from the unit, returns the potential difference between X and Y to zero, and brings the system to the original state as at the beginning of the counting period. The sole effect of Potter's modification of the circuit is to make valves 2, 4, 6 and 8 non-conducting on the receipt of the 10th pulse instead of needing to wait for

[Footnote] * S. S. West. An Electronic Decimal Counter Chronometer. Electronic Engineering. 19:3–6, Jan., 1947. 19:58–61, Feb., 1947.

– 58 –

the 16th. A direct-reading instrument is made by connecting a suitable meter between points X and Y. A 0–500 microammeter with resistance about 150–200 ohms would be satisfactory. Of course, the introduction of the meter will alter the distribution of currents and potentials slightly from those described above, but this does not interfere with the meter's action of indicating the count.
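The step-by-step potentials described above can be reproduced with a short calculation. The resistor values in the Python sketch below are not stated in the paper; they are chosen to give the quoted 0.01 V per count and the 37.5 mV initial drop, so they should be read as an assumption of the illustration.

```python
# A minimal sketch of the resistor-chain arithmetic for the meter indicator.
R = [5.0, 5.0, 10.0, 20.0]        # ohms: assumed values of R1, R2, R3, R4
I_ON, I_OFF = 2.5e-3, 0.5e-3      # plate currents of valves 2, 4, 6, 8 (amperes)

def potential_at_x(states):
    """states[k] is True when valve 2(k+1) conducts; that valve's plate
    current flows through R1 up to R(k+1) of the common chain."""
    volts = 0.0
    for k, conducting in enumerate(states):
        current = I_ON if conducting else I_OFF
        volts += current * sum(R[: k + 1])
    return volts

baseline = potential_at_x([False] * 4)                   # about 37.5 mV
pulses = [[True, False, False, False],                   # after the 1st pulse
          [False, True, False, False],                   # after the 2nd pulse
          [True, True, False, False]]                    # after the 3rd pulse
for count, states in enumerate(pulses, start=1):
    print(count, round(potential_at_x(states) - baseline, 4))   # 0.01, 0.02, 0.03 V
```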


Fig. 4.

Note: The plate-to-grid coupling condensers marked 10μμF. should be 100μμF.

The undesired couplings between different scale-of-2 circuits due to the common resistors R1, R2 and R3 are very small, and no trouble has been experienced due to them. The instrument can be returned to its initial state, i.e., the meter reading returned to zero, at any time, by opening the reset key for a moment. This makes valves 2, 4, 6 and 8 non-conducting.

High-speed electrical counters using scale-of-2 circuits have been used for a number of years with neon lamps. These indicators need to be interpreted by the operator, and there is a considerable possibility of human error in the case of large scaling circuits. The meter indicator described in this paper simplifies the interpretation.

Discussion.

Mr. R. L. Taylor asked what was the maximum scale of counting that could be covered by the use of a meter indicator.

The speaker replied that this depended on the size and ease of reading of the meter scale. With the ordinary type of panel meter available he did not advocate the use of this method above, say, a scale-of-16 counter. A larger meter would enable higher scales to be distinguished. He advocated the method particularly for decimal counters.

A Proposed Auroral Index Figure.

The search for a simple index figure by means of which the intensities of individual auroral displays may be compared, as well as used for the purposes of statistics and correlation, has been made by many observers. In New Zealand the problem is somewhat accentuated by having to deal with the reports from numerous observers of varied training and experience. Many observers have used an arbitrary scale which is usually defined as “faint,” “moderate,” “bright,” or “brilliant.” Such a scheme, without further definition, does not take into account the auroral forms visible, movement or duration, and depends a good deal on personal experience of witnessing aurorae. In Canada, Currie and Jones developed a method of giving an hourly index which took these factors into account, but it requires fairly continuous observations by trained observers at a single station, and it neglects the fairly rapid changes that may occur even within a few minutes. In New Zealand the whole story of a display is made up from a large number of individual reports made at various times during the night hours, depending on each observer's circumstances.

– 59 –

The scale proposed by la Cour and published in the Supplement to The Photographic Atlas of Auroral Forms, indicates that he considered intensity to be related to auroral form. In his sequence of auroral forms during a typical display, Geddes implies a rise and fall in intensity with auroral form. From these and other considerations the assumption is made that the intensity of any auroral display at any instant, is directly proportional to the auroral form in evidence. Where several auroral forms are visible in the sky at the same time, the intensity is taken as the sum for all the forms.

The following scheme for relating auroral intensity to auroral form is suggested. In the original paper, symbols are given for the different forms, which brings out the scheme much more clearly. In the table given here the usual auroral abbreviations are used.

la Cour's Scale   Proposed Scale   Auroral Forms
0–1               1                G (faint).
                  2                G (moderate).
                  3                G (bright), HA (single, faint), PA (isolated patches).
1–2               4                HA (single, moderate), PA (whole arc), HA (multiple, faint), R, DS.
                  5                HA (single, bright), HA (multiple, moderate).
                  6                HA (multiple, bright), RA (1 or 2 rays).
                  7                HA (folded, moderate), HB (single, moderate), RA (few rays).
                  8                HA (folded, bright), HB (complex, bright), RA (many, moving).
2–3               9                RA (bundles), PS.
                  10               RA (whole arc raying).
                  11               RB, F, RA (whole arc raying, considerable movement).
                  12               D.
3–4               13               D (near zenith).
                  14               C.
                  15               FC.
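Applying the proposed scale is then a matter of summing the values of the forms in view at each instant, and the hourly auroral number follows as the simple sum of the 0.1-hour intensities. The Python sketch below illustrates this for a handful of the tabulated forms; the abbreviated form list and the example inputs are my own illustration.

```python
# A minimal sketch of the proposed auroral index: intensity at an instant is
# the sum of the scale values of all forms visible at that time.
FORM_VALUES = {
    "G (faint)": 1, "G (moderate)": 2, "G (bright)": 3,
    "HA (single, moderate)": 4, "RA (few rays)": 7, "RA (bundles)": 9,
    "RA (whole arc raying)": 10, "D": 12, "C": 14, "FC": 15,
}

def instant_intensity(forms_visible):
    return sum(FORM_VALUES[form] for form in forms_visible)

def hourly_number(intensities_per_tenth_hour):
    """Hourly auroral number: the simple sum of the ten 0.1-hour intensities."""
    return sum(intensities_per_tenth_hour)

print(instant_intensity(["HA (single, moderate)", "RA (few rays)"]))    # 4 + 7 = 11
print(hourly_number([11, 9, 9, 4, 4, 2, 2, 1, 0, 0]))                   # 42
```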

Examples, using this scheme, were studied for observations made at the single stations, Timaru and Campbell Island for the display of 1946, April 23. Two interesting results were obtained after plotting intensity against time.

(1)

The greater general auroral intensity for the station nearer the auroral zone was immediately obvious.

(2)

An examination of the discrepancies between maxima and minima of activity between the two stations led to the better interpretation of the results by showing that the forms were moving in geomagnetic latitude during the course of the display.

The above scheme, as with all others, refers in each case to observations made at a single station, whereas, considering New Zealand observations alone, reports on displays have to be considered from Auckland to Invercargill. A description was given of the method by means of which all these reports can be quickly analysed for grosser features by use of the hieroglyph auroral symbols, and the whole display reduced as if seen from some point halfway between Dunedin and Invercargill. Examples of results obtained for the displays of 1946, February 7–8, 8, March 24–25, 25–26, 28–29, April 23–24 (N.Z.S.T.) were given. The intensities were shown in tabulated form for each 0.1 hour during the display.

From such tables of observational results it was suggested that the following advantages were obtained:—

(1)

The variation of auroral activity during small intervals of time (6 minutes) was available for study. The rate of change of intensity was available within this limit of time if necessary.

(2)

By means of the simple sum of intensities during the hour an hourly auroral number becomes available.

(3)

The simple sum of the hourly numbers gives the complete auroral number for the whole display. This is in effect a rough integration which takes into account all the variations of intensity and the duration of the display.

– 60 –

Most auroral data are concerned with observations from single stations, whereas it would be highly desirable to know by means of some index the auroral condition right from the auroral zone to the lowest geomagnetic latitude at which the display is seen, so as to obtain a complete evaluation of the auroral activity. No suggestion can be made at this stage, and in the meantime New Zealand, Campbell Island, Tasmanian and Australian observations are to be dealt with separately. Brief mention was made of the apparently great expansion and contraction of the auroral zone in the Southern Hemisphere in sympathy with the solar cycle.

Weather Forecasting To-Day.

The preparation of a weather forecast takes place in three distinct phases. First there is the assembling of the observational data, then the second phase of diagnosis when the present state of the atmosphere and its motions are analysed, and the third phase of prognosis when the atmospheric motions are projected forward through finite time intervals and the future characteristics of the air are assessed.

The First Phase—Observation.

The collection of data improved considerably during the war years, both through the extension of the observing networks, especially into the upper air, and through the provision of speedier, more efficient communications. Teletype and radio networks exclusively for meteorological purposes are now standard practice.

The observational requirements may be examined from the theoretical aspect. For a mathematically complete specification of the state of the atmosphere we would require a mass of observational data approximating to continuity in both space and time, to enable the construction of the scalar fields of pressure, temperature, density and humidity, the vector field of motion, and the time differentials of those fields.

In practice the meteorologist must content himself with a series of samplings at strategic intervals of space and time. The system is to take simultaneous observations at points sufficiently close to reveal the significant space variations of the elements, and repeated at sufficiently short time intervals to reveal significant time variations. Closer, more frequent, observations would merely load the communication systems unnecessarily.

The International Meteorological Organisation has recently recommended a suitable compromise on the basis of war-time experience. At low levels variations are rapid, and surface observations not more than 50 to 100 miles apart over land, or 300 miles apart over oceans, seem desirable. They should be available at least every three hours from nearby areas and every six hours from more remote regions.

Complete upper-air observations appear desirable from points not more than 200 miles apart over land, or 600 miles over the oceans, at six-hourly intervals.

The necessary horizontal extent of the observational network is related to the dimensions of the features to be shown, such as anticyclones and depressions, and to their speeds of movement. In temperate regions forecasts for periods of two or three days ahead appear to require data from at least a sixth of the earth's surface. To give an illustration, a depression developed just west of Perth on 11th May, 1947. Only 48 hours later it had travelled 3,000 miles and was passing just south of New Zealand, causing gales in the South Island.

The same example illustrates the natural handicap under which the forecaster works in New Zealand. Although reports are received from points up to 3,000 miles away in directions from west through north to north-east, New Zealand is completely unguarded in directions from west through south to east, with the exception of the reports from two isolated islands 500 miles off the

– 61 –

coast, Campbell Island to the south and Chatham Island to the east. The depression from Western Australia did not again pass within 700 miles of a reporting station until it reached New Zealand; its progress and development meantime could be deduced from only the most indirect evidence.

Of the available land areas, Australia and New Zealand have a reasonable density of surface observations during daylight hours, but there are some conspicuous gaps in the Pacific Islands since the withdrawal of the military stations. At night, unfortunately, the networks are very meagre in New Zealand and the Islands. This deficiency is aggravated by the fact that observations made in darkness are inherently less informative than daylight observations. It is a major difficulty in a sparsely-populated country like New Zealand to find people in suitable locations who are willing and able to make regular reports during the night or early morning, punctually, 365 days per year. The community should be grateful to those light-house keepers, Post and Telegraph employees, radio operators and aerodrome caretakers who so conscientiously provide the basic information.

The war accelerated enormously the development of upper-air networks, but unfortunately peace has seen the closing of many stations. This is particularly true in the South Pacific, where at one time over thirty radiosonde stations were operating; but now only twelve remain. Only two of these are outside Australia.

Upper-wind observations benefited during the war by the use of radar instead of visual means for following pilot balloons. By this means winds could be measured to high levels regardless of cloud or weather. Again, in the South Pacific the war-time networks have not been maintained, and at the moment only two radar wind stations are still in operation.

The Second Phase-Diagnosis.

In the field of diagnosis very little that is really new has emerged from the war. Rather has there been an introduction into everyday forecasting of certain analytical techniques hitherto confined to research.

The methods of surface analysis in temperate regions have not changed appreciably. The forecaster still draws isobars and fronts on the surface weather map, and deduces the characteristics of the air masses mainly from the temperatures, humidities, cloud forms and hydrometeors.

The greater volume of upper-air data now available has increased the part played by upper-air charts of various sorts. Before the war, tephigrams and adiabatic charts were in regular use for studying the vertical variations of pressure, temperature and humidity, but analysis of their horizontal variations was little developed, and mainly confined to pressure charts for certain fixed levels, such as 5,000 and 10,000 feet. In Germany, however, there had been in routine use for some years pressure-contour charts, which, instead of showing the pressures at fixed heights, showed the heights of fixed-pressure surfaces such as 1,000, 800 and 500 millibars.

Early in the war Dr. Petterssen, a prominent Norwegian meteorologist, took charge of the Upper Air Section of the British Central Forecasting Office, and, after a series of comparative tests, introduced the pressure-contour charts there. United States meanwhile had become deeply committed in her vast training programme based on the fixed-level charts, and was not able to make a general change until late in 1945.

Construction of contour charts for at least the 700 and 500 millibar pressure surfaces, corresponding to about 10,000 and 18,000 feet, is now a standard routine in most services. These charts possess all the virtues of the old fixed-level charts, and others besides. They can also be constructed much more speedily from the data normally available.

Tropical meteorology was in a very primitive state at the outbreak of war. The advances made since, however, suggest that this war may have done for the tropics what the 1914–18 war did for the temperate regions. During that war a close network of observing stations on the coast and in small vessels off-shore enabled the Norwegians to study the detailed structure of depressions and evolve their frontal theories. They established the life history of the depression from its birth as a wave on the surface of separation of

– 62 –

two air masses of different density, to its death as an occluded cyclone. Complex structures may now be identified from relatively sparse data. No such synoptic models had been evolved for tropical disturbances. It was not until observation stations were established throughout the tiny islands of the South Pacific, that a comprehensive picture could be obtained, unobscured by continental influences or local diurnal effects.

Meteorologists in the forward areas were too busy to make much immediate contribution to the broad theoretical aspects. However, in 1943, C. E. Palmer (N.Z. Meteorological Service) was invited to take charge of the Institute of Tropical Meteorology being established at San Juan (Puerto Rico) as a joint project of the U.S.A.A.F., U.S.N. and U.S. Weather Bureau for research


Fig. 1.—Example of routine wind flow analysis in tropics.

– 63 –

and the training of forecasters. Under his direction current knowledge was systematised, and particular weather patterns came to be associated with certain wave-like sinuosities in the air streams commonly occurring at pressure troughs. It was after his return to New Zealand that the more spectacular advances in technique were made.

New Zealand meteorologists experimented with a form of stream-flow analysis well known in oceanography but largely overlooked by meteorologists since it had been used early in the century by V. Bjerknes.

A major problem of the forecaster is that of detecting and predicting vertical motion, because it is ultimately ascent of moist air which produces cloud and precipitation. Vertical velocities of the order of one centimetre per second may be highly significant, but are far beyond the range of instrumental measurement by any known means. It was found for the tropics that, with a good network of upper winds and a careful analysis of the fields of motion in the lowest 8,000–10,000 feet of the atmosphere, the horizontal divergence or convergence could be estimated. This gave a measure of the ascent out of the layer or descent of air into the layer. Quick methods of computation were devised. Figs. 1 and 2 show an example of a routine analysis. The estimations are less accurate than those obtained by laborious field geometry, but the approximations appear justified by results.
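The kind of estimate described here can be indicated by a simple finite-difference calculation. The Python sketch below computes the horizontal divergence, du/dx + dv/dy, on a small grid of wind components; the grid spacing and wind values are invented, and the sketch stands for the principle only, not for the routine analysis illustrated in Figs. 1 and 2.

```python
# A minimal sketch: horizontal divergence of a gridded low-level wind field by
# centred finite differences.  Negative values (convergence) in the lowest
# layer imply ascent of air out of it.
def divergence(u, v, dx_m, dy_m):
    """u, v: 2-D lists [row][col] of wind components (m/s); columns are dx_m
    apart and rows dy_m apart.  Returns du/dx + dv/dy on interior points."""
    rows, cols = len(u), len(u[0])
    div = [[None] * cols for _ in range(rows)]
    for j in range(1, rows - 1):
        for i in range(1, cols - 1):
            dudx = (u[j][i + 1] - u[j][i - 1]) / (2.0 * dx_m)
            dvdy = (v[j + 1][i] - v[j - 1][i]) / (2.0 * dy_m)
            div[j][i] = dudx + dvdy
    return div

# Invented 3 x 3 grid at 200 km spacing: u increases eastwards by 2 m/s per
# grid step and v is uniform, giving a divergence of about 1e-5 per second.
u = [[0.0, 2.0, 4.0]] * 3
v = [[1.0, 1.0, 1.0]] * 3
print(divergence(u, v, 2.0e5, 2.0e5)[1][1])
```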

Much research has still to be undertaken, but this promising diagnostic method has already revealed something of the structure of tropical disturbances and the normal synoptic models.

The Third Phase—Prognosis.

The methods of prognosis have not made any spectacular advance. This does not imply that the quality of the forecasting has not improved. The extension of the observational facilities, especially into the upper air, allowing the more complete and confident application of known principles, and the wealth of experience gained during the war when data were at a maximum, have increased the reliability of the forecasts. Perhaps the real advance lies in the better understanding of the processes taking place in the atmosphere. Capable brains have been delving deeper into the hydrodynamics of the baroclinic medium. A critical re-examination of many of the traditional equations has revealed that some terms hitherto dismissed as negligible are on the contrary of some significance.

The major problems are still those of predicting pressure changes, because pressure patterns, in the temperate regions at least, still give the best approximation to the horizontal flow of air, and of predicting vertical motion because cloud and rain are due to ascent of moist air.

The method of predicting the pressure field introduced by Petterssen in 1933 is based on the pressure and isallobaric (pressure change) fields. The movement, acceleration and intensification of pressure systems may be extrapolated without any assumptions being made about the physical processes involved. The method has limited application in New Zealand owing to the impossibility of constructing accurate isallobaric fields over the surrounding sea areas and to the masking effects of the rugged topography on the fields.

Early in the war Professor Rossby at Chicago publicised a method of computing the pressure changes due to the horizontal transport or advection of air of a different density. Estimates were made from a hodograph of the wind vectors for successive levels aloft. While the method has its uses, it treats only one term of the total pressure change. The elusive convergence term often has the same order of magnitude.

In England Dr. Petterssen's team made a more rigorous exploration of the theory of pressure changes and devised another convenient method of computing the advective contribution from upper-air charts. They attempted to estimate the convergence term also from the difference between the observed winds at different levels and those computed from the upper-air charts. Departures from the geostrophic values would be due to convergence. The present accuracy of the observations proved insufficient for any reliable estimates.

– 64 –

Fig. 2.—Divergence-convergence patterns in lowest 10,000 ft. of atmosphere, with relative intensities, estimated from wind flow patterns in Fig. 1. Horizontal convergence implies ascent of air out of layer and divergence, descent into layer.

Professor Rossby at Chicago made another attack based on the assumption of conservation of absolute vorticity in an air column. The absolute vorticity is the sum of the vorticity relative to the earth and that due to the rotation of the earth itself. Air moving to different latitudes must continually readjust its relative vorticity to compensate for variation of the earth's contribution. As a result it describes an oscillation between limiting latitudes. Slide rules or graphical means are available for the rapid computation of the future trajectories of selected particles of air on, for example, the 700 millibar surface.

– 65 –

Thus the future flow patterns can be predicted. Unfortunately convergence appears again to render the estimates unreliable on at least a proportion of occasions.
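The constant-absolute-vorticity argument can be put in numbers. The Python sketch below conserves ζ + f for a displaced air column, with f = 2Ω sin(latitude); the starting latitude, the displacement and the initial relative vorticity are assumptions chosen only to indicate the order of magnitude, and the sketch is not Rossby's slide-rule or graphical procedure.

```python
# A minimal sketch: conservation of absolute vorticity (zeta + f) for a column
# displaced in latitude, with f = 2 * omega * sin(latitude).
import math

OMEGA = 7.292e-5  # earth's angular velocity, radians per second

def coriolis(lat_deg):
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def relative_vorticity_at(lat_new_deg, lat_old_deg, zeta_old):
    """zeta_new = zeta_old + f_old - f_new, since zeta + f is conserved."""
    return zeta_old + coriolis(lat_old_deg) - coriolis(lat_new_deg)

# A column starting at 45 deg S with no relative vorticity and carried to
# 35 deg S acquires about -1.9e-5 per second of relative vorticity (cyclonic
# in the Southern Hemisphere), which turns the trajectory back again.
print(relative_vorticity_at(-35.0, -45.0, 0.0))
```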

Experience in England and United States during the war seems to indicate that the average error in the 24-hour prognosis of frontal positions by present methods is not likely to be reduced below about 100–150 miles, corresponding to errors of four or five hours in timing. These errors would be consistent with a 5 m.p.h. uncertainty in speed of movement. In New Zealand we are less favourably placed. We are fortunate if we know a front's present position with that accuracy. The front which will be arriving over the country in 24 hours' time is probably 500 or even 1,000 miles out over the South Tasman at present, and we are lucky if ship reports chance to show its present position within 100–200 miles.

Even in regions with more favourable networks, incipient frontal waves may slip through undetected and develop into active depressions within 12 or 18 hours.

The longer the period of the forecast the greater the probable error in timing and intensity. Beyond 24–36 hours the weather is usually due to disturbances which have not yet formed. Forecasts of any great precision beyond 24 hours are usually possible only when a large, slow-moving anticyclone seems likely to dominate the situation for some time. Some regions are more favoured than others in the persistence of their anticyclones. New Zealand unfortunately lies in a region of fairly rapidly-moving anticyclones.

The importance to military operations of forecasts for periods longer than 24–48 hours led to investigations of several methods, especially in the U.S.A. Probably the most successful method there is the one originally tested at Massachusetts Institute of Technology and now used by the U.S. Weather Bureau in preparing its rather generalised 5-day forecasts. It is based on the principle that long-term trends, which are masked by the complexity of the day-to-day weather maps, may be revealed by charts of the five-day-mean values of the elements. These patterns tend to be related to the mean strength of flow of the westerly winds around the globe. For example, during periods when the westerlies are strong and the so-called “westerly zonal index” is high, disturbances tend to be widely spaced, fast moving and not very intense. In low-index periods the north-south movements of air are greater, and disturbances often deep and slow-moving. On the basis of these trends the U.S. Weather Bureau issues 5-day forecasts twice a week, indicating the nature of the anomalies in temperature and precipitation relative to the normal values.

In New Zealand use is made of the general principles, but a precise estimate of the zonal index itself is impossible owing to the complete absence of observations east and west of New Zealand in the critical latitudes. The index is necessarily based on indirect deductions.

In 1944 the application of the system to tropical forecasting was tested under the joint auspices of the U.S.A.A.F. and New Zealand Services. The result was negative owing to the obscurity of the relationship between tropical convection and pressure systems.

A method developed by Dr. Krick in California was tried, less successfully, in other regions during the war. He classifies the weather over a broad area into basic weather types, and assumes that a particular type will persist for a period of about six days, during which time the day-to-day patterns follow characteristic sequences. The identification of the dominant type at the outset enables forecasts to be prepared for periods up to six days. The system has not been successful in the New Zealand region.

To summarise, the recent advances have been in the direction of faster and more comprehensive collections of observational data rather than in improved techniques for forecasting. New Zealand, by its geographical isolation, is deprived of the full benefit of existing techniques. The most promising way of minimising its disadvantages appears to be in the improvement of available observation networks, with special emphasis on radar measurements of upper winds, and backed by adequate research into the fullest utilisation of the data.

– 66 –

The Frequency Of Heavy Daily Rainfalls In New Zealand.

Method of Statistical Analysis: It has been established (Seelye, 1947) with particular reference to Wellington rainfalls that a daily rainfall of amount x or more occurring on the average once in N years can be satisfactorily expressed as x = u + k log N, where u and k are constants. Such an equation applies to the heavier rains and is suitable when N is at least unity. The values of u and k can be determined by considering the extreme daily rainfall for each calendar year covered by the available record for a station.

Gumbel, who has developed the theory of extreme values, has proved that the mean and the mean absolute deviation of the series of extreme values lead to a rather smaller probable error in the calculated coefficients of the distribution than the use of the mean and the mean-square deviation. The steps involved in computing u and k are described below, but the statistical theory is not repeated here.

Let a rainfall record cover n years. The rainfalls on the wettest day of each of these n years are arranged in order of increasing magnitude, and let the rth rainfall in such an arrangement be denoted by xr. Corresponding to n, find m′ = 0.36788n − 0.63212 (Gumbel, 1943). [It is convenient to have a table showing the integers n and the associated m′ values.] Let m be the nearest integer less than m′, which is usually fractional. Estimate the rainfall corresponding to this fractional serial number by interpolation between xm and xm+1, that is, take u = xm + (m′ − m)(xm+1 − xm).

u then is the rainfall which occurs with the average frequency of once a year. Let the mean of the n rainfalls be

x̄ = (1/n)·Σ (r = 1 to n) xr

and their mean absolute deviation be

θ = (1/n)·Σ (r = 1 to n) |xr − x̄|.

The latter is best computed as

θ = (2/n)·[p·x̄ − Σ (r = 1 to p) xr]

where p is the serial number of the rainfall immediately smaller than x̄. A more precise estimate of the mean deviation is given by

θ̄ = √(n/(n−1))·θ = [1 + 1/(2n)]·θ, approximately.

From θ̄ we have 1/α = 1.01731 θ̄ (Gumbel, 1942), which is one of the coefficients in the “distribution of floods.” In its present application this expresses the probability W(x) that x is the largest daily rainfall in the year as W(x) = exp[−exp(−α(x − u))]. Our k is related to α (Seelye, 1947), and finally we have k = 2.3425 θ̄.
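The whole computation of u and k can be set out compactly. The Python sketch below follows the steps just described, from the ranked annual maxima to m′, u, θ, θ̄ and k; the twenty-year rainfall series used as input is invented for illustration, and a record of reasonable length (say ten years or more) is assumed.

```python
# A minimal sketch of the method described above: from the n annual-maximum
# daily rainfalls, compute u (the once-a-year fall) and k, so that
# x = u + k*log10(N) is the fall reached on the average once in N years.
import math

def gumbel_u_k(annual_maxima):
    x = sorted(annual_maxima)                        # x[0] smallest ... x[n-1] largest
    n = len(x)
    m_dash = 0.36788 * n - 0.63212                   # Gumbel (1943)
    m = int(m_dash)                                  # nearest integer below m'
    u = x[m - 1] + (m_dash - m) * (x[m] - x[m - 1])  # interpolation (1-based ranks)
    mean = sum(x) / n
    p = sum(1 for xr in x if xr < mean)              # serial number just below the mean
    theta = (2.0 / n) * (p * mean - sum(x[:p]))      # mean absolute deviation
    theta_bar = math.sqrt(n / (n - 1.0)) * theta     # more precise estimate
    return u, 2.3425 * theta_bar

rains = [2.1, 1.8, 2.6, 3.4, 2.2, 1.9, 2.8, 4.1, 2.4, 2.0,
         3.0, 2.3, 1.7, 2.9, 3.8, 2.5, 2.2, 3.2, 2.7, 2.1]
u, k = gumbel_u_k(rains)
print(round(u, 2), round(k, 2), round(u + k, 2))     # u, k, and the 10-year fall
```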

Discussion of Results: The New Zealand Meteorological Office has tabulations of the wettest days by months and years for most of the longer rainfall records. These were completed to the end of 1945 for 91 places with over 40 years of records, for 99 extending over 30 years and for another 99 of shorter duration. For each of these records x̄ and θ were computed and u and k derived as described above. Common logarithms are used in the formula, so u, u + k, u + 2k represent rainfalls which are likely to occur once in 1, 10, and 100 years respectively. Representative results are set out in the accompanying table, while the two maps summarise the position. The rainfalls considered are those measured each morning and are not necessarily the maximum for any 24 hours. However, it was found that the present results needed to be increased some 12 or 13 per cent. in the case of Wellington to give the fall likely to be encountered in any 24-hour period. In the absence of other autographic records of sufficient length to give reliable information for other places one can only say that a similar increase seems reasonable for any locality.

– 67 –


Station, Years of Record, k, u (daily rainfall in inches reached on average once in 1 yr.), u + k (once in 10 yr.), u + 2k (once in 100 yr.), Largest Daily Rainfalls of the Record.
Whangarei 36 3.31 2.93 6.24 9.55 8.52 11.41
Auckland 82 2.04 2.00 4.04 6.08 6.38 6.39
Waihi 46 5.10 4.96 10.06 15.16 12.15 16.50
Hamilton 46 1.44 2.02 3.46 4.90 4.69 5.12
Tauranga 44 2.77 2.92 5.69 8.46 6.45 9.41
Rotorua 60 2.58 2.55 5.13 7.71 7.78 8.80
Taumarunui 31 1.65 2.01 3.66 5.31 4.29 5.34
“Riversdale,” Inglewood 58 3.93 3.97 7.90 11.83 10.90 12.16
Pakihiroa 32 2.30 5.06 7.36 9.66 9.51 9.82
Gisborne 44 2.79 2.28 5.07 7.86 7.50 7.68
Tutira 46 4.63 3.20 8.83 11.46 12.76 13.56
Napier 67 2.58 2.28 4.86 7.44 7.16 8.03
Masterton 57 1.85 1.94 3.13 5.64 5.45 7.06
Martinborough 33 1.35 1.51 2.86 4.21 3.14 3.80
New Plymouth 70 1.80 2.43 4.24 6.03 4.86 7.29
Stratford 40 2.31 3.24 5.55 7.86 8.43 9.56
Wanganui 60 1.14 1.63 2.77 3.91 3.52 3.70
Feilding 59 0.99 1.58 2.57 3.56 3.44 3.45
Palmerston North 53 1.25 1.59 2.84 4.09 2.92 6.34
Wellington 83 1.84 2.29 4.13 5.97 6.00 6.32
Westport 53 1.77 2.59 4.36 6.13 5.80 6.86
Greymouth 46 2.32 3.12 5.44 7.76 6.55 12.50
Otira 40 3.29 7.12 10.41 13.70 11.30 11.81
Hokitika 67 2.32 3.71 6.03 8.35 8.16 9.17
Nelson 49 1.58 2.16 3.74 5.32 4.50 4.83
Spring Creek, Bm. 37 1.25 1.95 3.20 4.45 4.50 4.95
“Emscote,” Stag and Spey 23 5.14 2.94 8.08 13.22 7.30 19.69
Arthurs Pass 27 3.76 6.83 10.59 14.35 11.32 12.70
Christchurch 71 1.34 1.59 2.93 4.27 4.00 4.71
Timaru 53 1.63 1.45 3.08 4.71 4.37 5.79
Benmore Station 39 1.14 1.71 2.85 3.99 3.28 3.98
Clyde 49 0.83 1.06 1.89 2.72 2.14 2.65
Dunedin 93 1.83 1.82 3.65 5.48 5.42 6.81
Roxburgh 49 0.64 1.13 1.77 2.41 2.28 2.28
Gore 37 0.73 1.28 2.01 2.74 2.12 2.53
Invercargill 49 0.90 1.30 2.20 3.10 2.78 3.25
Half Moon Bay (Stewart Island) 31 1.10 1.52 2.62 3.72 2.90 3.18

As regards the rainfall likely to be attained once a year on the average, it is seen that for most of the North Island low country the amount is about 2 in. In the lower Manawatu values are as small as 1.5 in. This is a region where steady frontal rains with orographic reinforcement can rarely occur and the strong westerly winds to which it is at times subject usually bring only showery precipitation. The east coast, though having a low average annual rainfall, shows higher values on the present map than does the west coast. The meteorological situation with an active depression to the north and onshore easterly winds is not uncommon and favours steady rains on the east coast. The high values 4.96 in. for Waihi and over 5 in. at Wairongomai and Pakihiroa Stations towards East Cape are noteworthy. For the South Island, amounts are under 2 in. for the majority of places east of the main ranges. The lowest figure found was 0.93 for Alexandra in Central Otago. The chief exceptions are Banks Peninsula and the Kaikoura coast. In Westland values are more than 3 in., for Otira the figure is 7 in., and for Milford Sound 9 in. It does not appear likely that values appreciably exceed the last amount over any great extent of country.

Map 1.

With reference to the 10-year expectation, the largest amount calculated for the North Island was 10.06 in. at Waihi. Values of 5 in. or more are common in the eastern portions of Auckland Province and in the more exposed parts of Hawkes Bay. Less than 3 in. is shown for the Taihape-Manawatu region. In the Southern Alps and Southern Sounds 10–14 in. appears to be the range, with the latter figure only in the most exposed positions. An appreciable part of Central Otago has under 2 in. and the Southland Plains have under 3 in. For most of Canterbury values of 3 to 4 in. are encountered, but 6–8 in. occur about Banks Peninsula and the places to the north with an open southerly to south-easterly exposure.

Map 2.


It may be remarked that "Emscote," Stag and Spey, received 19.69 in. on the 6th May, 1923, and 10.83 in. the day following. The heaviest measured fall for the South Island, however, was 22 in., recorded at the P.W.D. Camp, Milford Sound, on 17th April, 1939, the Hostel receiving 18.39 in. at the time. For the North Island the record is held by "Riverbank," Rissington, H.B., which experienced 20.14 in. in the space of 10 hours on the 11th March, 1924. The world record for a 24-hour fall is the 46 in. which fell at Baguio, Luzon (Philippines), on the 14th–15th July, 1911.

The Accuracy of the Results: Assuming that daily rainfalls follow the same distribution as that found for floods, there will be an error in the statistical coefficients calculated from the results of a finite number of years. From Gumbel's work (1942) it is possible to assign a standard error to u and k as calculated here from records extending over n years. If the standard errors are denoted by σu and σk respectively, it can be shown that

σu = 0.57 k/√n,  σk = 0.87 k/√n

Examples are given hereunder. These include the results for “Emscote” using 23 years' records and also omitting the year with the phenomenal 19.69 in. fall.


u ± σu k ± σk
Auckland 2.00 ± .13 2.04 ± .18
Waihi 4.96 ± .44 5.10 ± .64
Invercargill 1.30 ± .07 0.90 ± .11
“Emscote” (23 years) 2.94 ± .61 5.14 ± .89
(22 years) 2.93 ± .44 3.58 ± .63
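
As a check on the two formulae, the Invercargill entry of the table can be reproduced directly (n = 49 years, k = 0.90); the sketch below simply evaluates them.

```python
import math

def standard_errors(k, n_years):
    """Standard errors of u and k: 0.57*k/sqrt(n) and 0.87*k/sqrt(n)."""
    root_n = math.sqrt(n_years)
    return 0.57 * k / root_n, 0.87 * k / root_n

# Invercargill: n = 49 years, k = 0.90
sigma_u, sigma_k = standard_errors(0.90, 49)
print(round(sigma_u, 2), round(sigma_k, 2))   # 0.07 and 0.11, as in the table
```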

To give some indication of the accuracy of the method, the following analysis was made. After the constants were calculated for a station the number of years equal to the length of the record was introduced into the formula to give the rainfall that was likely to have been exceeded once during the record. Some records (32) received no rainfall as heavy as this, others received 1, 2, or 3 such falls. The distribution over the 289 records handled is tabulated.


Number of Heavy Rainfalls
0 1 2 3 4 or more Total
Actual No. of Records 32 133 106 18 0 289
Poisson Distribution 106 106 53 18 6 289

The average chance of one such rainfall at a station is, by construction, once in the length of the record, i.e., 1/n per year. Poisson's law of rare events gives the likely number of records having 0, 1, 2, 3, 4 or more such rare falls. From the table it is apparent that there are more cases of 1 and 2 rainfalls occurring in the record than the Poisson theory would expect, with a noticeable reduction in the cases which escape such a rainfall. Whereas we should expect to average one such heavy rainfall per record, the average from the records is 1.4. Thus the tendency is for the calculated frequency of occurrence to be somewhat too low. However, in spite of such a deficiency, the method, which is easy to apply, proves to be at least as accurate as more elaborate methods dealing with a more bulky volume of observational data.
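
The Poisson line of the table can be reproduced in a few lines, taking one such heavy fall per record as the mean; the rounding to whole records is assumed.

```python
import math

def poisson_expected_counts(total_records=289, mean=1.0):
    """Expected number of records containing 0, 1, 2, 3 and 4-or-more falls
    as heavy as the one-per-record value, on a Poisson law with the stated mean."""
    probs = [math.exp(-mean) * mean ** j / math.factorial(j) for j in range(4)]
    counts = [round(total_records * p) for p in probs]
    counts.append(total_records - sum(counts))   # the "4 or more" class
    return counts

print(poisson_expected_counts())   # [106, 106, 53, 18, 6], as in the table
```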

Acknowledgment: I have to thank Dr. M. A. F. Barnett, Director of Meteorological Services, in whose Office this work was commenced, for kindly affording facilities for the completion of this study.

References.

Gumbel, E. J., 1942. Statistical Control-curves for Flood Discharges. Trans. Amer. Geophys. Union, 489–500.

—— 1943. On the Plotting of Flood Discharges. Trans. Amer. Geophys. Union, 699–713.

Seelye, C. J., 1947. Rainfall Intensities at Wellington, N.Z. Proc. N.Z. Inst. Eng., 33, 452.


The Structure Of Symmetrical Depressions.

Introduction and Summary.

Before the advent of Frontal Analysis, prognosis of the movement of pressure systems on the weather chart, insofar as it was based on other than empirical considerations, relied on theories of the mutual interaction of identifiable (radially symmetrical) vortices. This approach yielded little of value for practical forecasting, partly because it ignored the role of asymmetric systems, but also, partly, because unsatisfactory models of the symmetrical depression itself were used as a basis for the theory.

Frontal theory, correctly, stresses the importance of discontinuity in the short-term prognosis, and its very success in this restricted field has drawn attention away from the necessity of coming to grips with symmetrical systems, which still, in large measure, govern the general long-term weather situation. The time seems ripe, therefore, for a reappraisal of the role of the symmetrical vortex in general meteorological theory.

The classical picture1 envisages a depression as a central mass of air in solid rotation, surrounded by a region in which wind speed varies inversely as the distance from the centre. Such a structure can be made to fit actual depressions reasonably well over a limited range, but it must be excluded as a valid overall picture of vortex structure on theoretical grounds. The wind speed falls off so slowly with radial distance that a vortex of infinite extent would possess an infinite kinetic energy.

It is possible to produce a simple model of wind and pressure structure free from these faults.

The model investigated predicts that the ratio h/Vm Rm for a cyclone should only vary within narrow limits. Here Vm is the maximum wind-speed at a distance Rm from the centre, and h is the depth of the depression.

An examination of 18 depressions (Table 1) shows that this ratio does, indeed, vary within narrow limits.

The total kinetic energy for a vortex can be evaluated, and also the mutual kinetic energy of adjacent vortices.

The mutual interaction of vortices presents features of great interest. Two vortices of like sign attract for large separations, but repel for close approach, while vortices of unlike sign repel and then attract. Thus we have a phenomenon analogous to the barrier effect of wave-mechanics. Several features of the interaction of cyclones, anticyclones, and fronts, on the synoptic chart can be interpreted in terms of this barrier effect.

The investigation is mainly confined to geostrophic vortices, i.e., vortices in which the pressure gradient is balanced by the coriolis force due to the earth's rotation. A brief discussion is also given of the cyclostrophic vortex, in which the pressure gradient is balanced by the centrifugal acceleration of rotating winds. This model applies to the tropical cyclone, and accounts satisfactorily for the well-known "eye" phenomenon. A similar structure might also be postulated for smaller-scale vortical systems, such as tornadoes, and the small-scale vortices of turbulence phenomena.

The Specification of a Vortex.—Meteorologists customarily speak of depressions as deep or shallow according to whether the central pressure is very low, or not so low, in relation to the average pressure over a wide area. A depression may also be categorised as extensive, or widespread, or small. These are qualitative categorisations. The meteorologist is not obliged to put figures to depth or extension, and, indeed, may not have a perfectly clear idea of precisely what is meant by these terms. Nevertheless, his intuitions are right. The salient properties of a symmetrical vortex, considered merely as a distribution of pressure about a point, are something corresponding to a depth, and something corresponding to an extension, just as the distribution of any statistical quantity may be characterised in part by specifying a mean value, and some parameter representing the spread.

For refined analysis these two parameters may not be sufficient, but for any consideration at all they are necessary. It is necessary, therefore, to inquire what quantities, obtainable from the synoptic chart, may be taken as measures of depth or spread of a depression. The central pressure might be taken as a measure of depth, but this does not go quite far enough; what we want is depth in relation to some observed datum. We might, perhaps, define the depth as the difference between the central pressure and the average pressure for the area for that particular season. This gets closer to the mark, but it ignores the fact that, even apart from the existence of a particular depression, the pressure over a wide area may be abnormally high or low in relation to the statistical average. The conclusion is that it is not possible to measure depth directly from the synoptic chart. It is possible, however, to measure a parameter which depends on the depth.

[Footnote] 1. See Rayleigh, P.R.S. (A), No. 93, p. 148; Brunt, P.R.S. (A), No. 99, p. 397.

In an atmospheric vortex the pressure-gradient is zero at the centre and at large distance therefrom. Somewhere in between it reaches a maximum value, depending both on the depth and the spread of the vortex. The radius of maximum pressure-gradient, which coincides, under geostrophic balance, with the radius of maximum wind-speed, may itself be taken as a measure of the extent of the depression. There are thus two parameters of a cyclone, the radius of maximum wind-speed, Rm, and the maximum wind-speed, Vm, from which it is possible to infer the strength and spread of a vortex, given the form of its pressure-profile.

The pressure profile of a radially symmetric vortex must be such that the pressure-gradient is zero at the centre, and falls away at large distances sufficiently rapidly for the total kinetic energy of the system to be finite. A pressure profile satisfying these conditions is

p = p₀ + h e^(−r²/a²)     (1)

where p₀ is the pressure at great distance from the centre. At r = 0, p = p₀ + h, so that h (negative) may be defined as the depth of the depression. The parameter h will be positive for an anticyclone. The parameter a is a measure of the spread of the vortex. As e^(−r²/a²) decreases very rapidly with r once r is a multiple of a, the influence of the vortex is completely lost at a relatively short distance from the centre. The wind-speed in this type of vortex drops off rather more rapidly with distance than is usual in depressions, but the model has the merit of mathematical simplicity.

An alternative type of profile is represented by

p = p₀ + h / (1 + rⁿ/aⁿ)     (2)

where, in addition to the two parameters h and a, we have a third parameter n, the index of the distribution. Here the wind-speed falls off less rapidly with r than is the case with a normal vortex.

The Normal Vortex: The tangential wind-speed in a geostrophic vortex at distance r from the centre is

V = (1/2ρω) ∂p/∂r     (3)

where ρ is the density of air and ω the vertical component of the earth's rotation, the sense of the motion being clockwise in the southern hemisphere, anti-clockwise in the northern for a vortex of low pressure. Substituting the value of ∂p/∂r obtained from equation (1) in (3) we have

V = h r e^(−r²/a²) / ρωa²     (4)

The radius of maximum wind-speed is given by ∂V/∂r = 0, i.e., Rm = a/√2. The radius of maximum wind-speed is thus 0.707 times the spread, a. The maximum wind-speed, obtained by making the substitution r = Rm in (4), is

Vm = h e^(−½) / 2ρωRm

We infer from this that the ratio

h/VmRm = 2√e ρω = 3.30 ρω     (5)

is invariant for all “normal” geostrophic vortices in that latitude.
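
A short numerical check of equations (1) to (5) is given below; the particular values of h, a, ρ and ω are assumed for illustration only (they are merely of the order of those in Table I).

```python
import math

# Assumed, illustrative values only.
rho = 1.2                         # air density, kg/m^3
omega = 5.9e-5                    # vertical component of the earth's rotation, s^-1
h = 4300.0                        # depth of the depression, Pa (43 mb)
a = 550e3 * math.sqrt(2.0)        # spread chosen so that Rm is about 550 km

def wind_speed(r):
    """Equation (4): V = h r exp(-r^2/a^2) / (rho * omega * a^2)."""
    return h * r * math.exp(-(r / a) ** 2) / (rho * omega * a ** 2)

# Locate the maximum wind numerically and compare with Rm = a/sqrt(2).
radii = [i * 1e3 for i in range(1, 2001)]               # 1 km steps out to 2,000 km
Rm = max(radii, key=wind_speed)
Vm = wind_speed(Rm)

print(Rm / 1e3, a / math.sqrt(2.0) / 1e3)               # both near 550 km
print(h / (Vm * Rm), 2 * math.sqrt(math.e) * rho * omega)   # both near 3.30 * rho * omega
```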

The kinetic energy of unit volume of air is

½ρV² = h²r²e^(−2r²/a²) / 2ρω²a⁴


The total kinetic energy of a horizontal disc of air, thickness dz, is

(πh²dz/ρω²) ∫₀^∞ x³ e^(−2x²) dx = πh²dz / 8ρω²     (6)

where x = r/a.

Integrating in the vertical, we find for the total kinetic energy of the vortex

(π/8ω²) ∫₀^∞ h²dz/ρ     (7)

If we assume that the depth of the depression varies with height according to the equation h = h₀(1 − z/Z) for z < Z, and h = 0 for z > Z, we find for the total kinetic energy of the vortex system

πh₀²Z / 16ρω²     (7a)

where ρ is some mean density of the air column up to height Z.

We see from (7) that the total kinetic energy of a normal vortex varies as the square of the depth, as the height in the atmosphere to which the vortex extends, and inversely as the square of the vertical component of the earth's rotation.

One remarkable feature of this result is that the kinetic energy of a normal geostrophic vortex does not depend at all on the spread of the disturbance.

We infer from (7) that the kinetic energy of a vortex increases on movement towards the equator, and decreases on movement towards the pole, provided the depth and vertical extent of the depression remain the same.

One might anticipate latitudinal movement of depressions to be associated with compensating changes in h₀, or Z, or both. Thus, if a depression were to move from latitude 30° to latitude 45°, the energy of the system would remain unchanged if the depth in latitude 45° were 1.4 times the depth in lat. 30°, with Z unaltered, or if the vertical extent were doubled with h₀ unchanged.
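
The compensating figures quoted here follow from (7a), the energy varying as h₀²Z/ω² with ω proportional to the sine of the latitude; a small check under that assumption:

```python
import math

def relative_energy(h0, Z, latitude_deg):
    """Kinetic energy of a normal vortex, equation (7a), up to a constant factor."""
    omega = math.sin(math.radians(latitude_deg))   # proportional to the vertical component
    return h0 ** 2 * Z / omega ** 2

e30 = relative_energy(1.0, 1.0, 30.0)
print(relative_energy(1.414, 1.0, 45.0) / e30)   # close to 1: depth increased 1.4 times
print(relative_energy(1.0, 2.0, 45.0) / e30)     # close to 1: vertical extent doubled
```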

The Power Vortex: Similar results may be derived for a vortex with the pressure profile given by equation (2).

Here we have for the tangential wind-speed

V = (nh/2ρωaⁿ) r^(n−1) / (1 + rⁿ/aⁿ)²     (8)

The radius of maximum wind-speed is given by

(Rm/a)ⁿ = (n − 1)/(n + 1)     (9)

For the maximum wind-speed we have

Vm = (n² − 1) h / 8nρωRm     (10)

Thus we have

h/VmRm = (8n/(n² − 1)) ρω     (10a)

For n = 2, the characteristic ratio of the vortex, h/Vm Rm, is equal to 5.3 ρω; for n = 3, 3.0 ρω; and for n = 4, 2.1 ρω. It becomes small for large values of n.

Table I gives the values of Vm, Rm and h₀ for 18 depressions, as estimated by Goldie.² Goldie's values of Vm have been increased by a factor of 1.4 to allow for the effects of surface friction. The fourth column shows the ratio h/Vm Rm ρω, and the last column the value of n inferred from this ratio by means of expression (10a), correct to a half-integer.

The experimental data are subject to large errors, but one can infer from the clustering of values of n around 2 or 3 that a reasonable fit to extra-tropical cyclones can be obtained with a pressure profile of the type (2). The data do not permit us to say whether n is the same for all vortices, or varies from vortex to vortex.

The small spread of the characteristic ratio encourages the belief that some such structure as that envisaged holds for extra-tropical cyclones. The “classical” theory permits h/Vm Rm to have any value at all.
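
The last column of Table I can be recovered by inverting (10a): writing c for h/(Vm Rm ρω), c = 8n/(n² − 1), whence n = (4 + √(16 + c²))/c. A sketch of that inversion follows; the rounding to the nearest half-integer is an assumption about how the table was prepared.

```python
import math

def index_from_ratio(c):
    """Solve 8n/(n^2 - 1) = c for n, from equation (10a)."""
    return (4.0 + math.sqrt(16.0 + c * c)) / c

for c in (4.0, 5.0, 3.2):
    n = index_from_ratio(c)
    print(c, round(2 * n) / 2)    # 2.5, 2.0, 3.0 -- compare Table I
```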


[Footnote] 2. Goldie, Geophys. Mem., No. 79 (1939).


The total kinetic energy of a vortex is given by

2π ∫₀^∞ dz ∫₀^∞ ½ρV² r dr

Taking the value of V² from (8) we find for the total energy

(nπ/24ω²) ∫₀^∞ h²dz/ρ = nπh₀²Z / 48ω²ρ     (11)

using the same assumptions regarding the vertical extent of the depression as in the last section.

The expression (11) for the energy of a power vortex is exactly analogous to the corresponding expression (7) for a normal vortex, except that the new parameter n enters. The energy of a power vortex for n = 3 is, indeed, exactly the same as for a normal vortex, and the characteristic ratios are nearly the same. For many purposes a normal vortex can be substituted for a power vortex of index 3.

The Interaction of Vortices: If we have two adjacent vortices giving kinetic energy fields ½ρV₁² and ½ρV₂²

the resultant kinetic energy field is

½ρ(V₁ + V₂)² = ½ρV₁² + ρV₁·V₂ + ½ρV₂²     (12)

The total kinetic energy of the system is thus the sum of the kinetic self-energies of the individual vortices, plus the cross-term ρ V1.V2 representing the mutual kinetic energy of the interacting vortices.

It is our purpose, now, to consider this mutual term.

We consider only the normal vortex, on the grounds of mathematical expedience, as this type of vortex gives a solution in simple terms. The results obtained will remain valid, in a qualitative sense, for vortices of the power type.

Consider two normal vortices of the same strength, h, and spread, a, at points (0, 0) and (x₀, 0) respectively.

We may write for the cross term

ρV₁·V₂ = (h²/ρω²a⁴)(r² − x x₀) e^(−(r² + r′²)/a²)     (13)

where r² = x² + y², r′² = (x − x₀)² + y².

Integrating this expression over all space, and assuming h = h₀(1 − z/Z) for z < Z, as before, we have for the mutual energy of the vortices

E = (πh₀²Z/8ω²ρ) e^(−x₀²/2a²) (1 − x₀²/2a²)     (14)

We notice that for two vortices of the same sign E is negative for x0 > √2a, and positive for x0 < √2a. On approaching, two vortices of like sign lose from their joint stock of kinetic energy, up to a distance √2a, which energy reappears in the form of isallobaric kinetic energy associated with the approach of the vortices. Vortices of like sign therefore attract for separations larger than √2a, and repel for closer approaches.

Vortices of unlike sign, on the other hand, are mutually repelled for large separations, and attracted for close approaches.
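
The sign change in (14) that produces the barrier can be exhibited by evaluating the separation-dependent factor alone; the positive constant in front is immaterial to the sign.

```python
import math

def mutual_energy_factor(x0, a):
    """The separation-dependent factor of equation (14):
    exp(-x0^2 / 2a^2) * (1 - x0^2 / 2a^2)."""
    s = x0 * x0 / (2.0 * a * a)
    return math.exp(-s) * (1.0 - s)

a = 1.0
for x0 in (0.5, 1.0, math.sqrt(2.0), 2.0, 3.0):
    print(round(x0, 2), round(mutual_energy_factor(x0, a), 3))
# The factor is positive inside x0 = sqrt(2)*a, zero there, and negative outside,
# which is the "barrier" discussed for like-signed vortices.
```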

This interesting “barrier” effect, which inhibits the over-close approach of two vortices, permits new interpretations of a number of synoptic situations. For example, a series of eastward-moving frontal depressions is frequently terminated by an outburst of polar air; the cold front pushes far to the north, with an anticyclone building up in the cold air in its rear, while the previously existing anticyclone in the warm air ahead of it collapses simultaneously.

This situation can be interpreted in terms of the mutual interaction of the cold front, considered as a line vortex, with the warm anticyclone. If the line vortex approaches the circular vortex with sufficient speed, it can crash through the potential barrier, temporarily annihilating the anticyclone by producing a counter-circulation, and pass out of the potential barrier on the northward side of the high, the circulation of which is re-established in the cold air. On this view, there is no real annihilation of the anticyclone with a corresponding creation of a cold high. The one anticyclonic system exists all the time, and is only temporarily eclipsed.


The Tropical Cyclone: So far we have been concerned exclusively with geostrophic vortices. In low latitudes geostrophic control is too weak to balance a strong pressure gradient, and in tropical cyclones the balance is provided, in the main, by the centrifugal force of rotating winds.

The equation for balance between centrifugal force and the pressure gradient is

ρV²/r = −∂p/∂r     (15)

If we assume the pressure-profile (2), the equation (15) becomes

ρV² = nh (rⁿ/aⁿ) / (1 + rⁿ/aⁿ)²     (15a)

The total kinetic energy of such a vortex is

nπa² ∫₀^∞ h dz ∫₀^∞ r^(n+1) dr / a^(n+2)(1 + rⁿ/aⁿ)²     (16)

For n = 4 this expression becomes

(π²a²/2) ∫₀^∞ h dz     (16a)

The kinetic energy of a centrifugal vortex is proportional to the depth, to the vertical extent, and to the square of the spread.

The integral (16) diverges for n < 2, so that the index for a tropical cyclone must be greater than 2.

The maximum velocity, at r = a, is

Vm = ½ √(nh/ρ)     (17)

From (17) we infer that

Vm²/nh = 1/(4ρ)

is an invariant for centrifugal vortices.
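
The last column of Table II follows from this invariant, n = 4ρVm²/h; a minimal sketch, with the air density taken as roughly 1.2 × 10⁻³ gm/cm³ (an assumed value):

```python
def cyclone_index(vm_squared_m2_s2, depth_mb, rho_g_cm3=1.2e-3):
    """n = 4 * rho * Vm^2 / h, with Vm^2 given in m^2/sec^2 and h in millibars."""
    vm2_cgs = vm_squared_m2_s2 * 1.0e4      # convert to cm^2/sec^2
    h_cgs = depth_mb * 1.0e3                # convert to dyn/cm^2
    return 4.0 * rho_g_cm3 * vm2_cgs / h_cgs

# Dry Tortugas, 9/9/19: Vm^2 = 2900 m^2/sec^2, h = 77 mb
print(round(cyclone_index(2900, 77), 1))    # about 1.8, tabulated as 2
```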

In Table II are listed the values of Vm², h, and the values inferred for n, for 18 tropical cyclones. It will be remarked that the dispersion is considerably greater than for the approximately geostrophic vortices of temperate latitudes.

The values of n are also higher on average. This is, perhaps, what might be anticipated from the well-known “eye-of-the-storm” often associated with tropical cyclones. The fact that a calm can exist over a wide area, with adjacent hurricane winds, indicates that the rate of wind increase around the centre is less than would be associated with a solid rotation.

Since wind-speed is not a linear function of the pressure-gradient in centrifugal vortices, it is not possible to develop a theory of the mutual energy of adjacent vortices along the lines of the preceding section. The theory of centrifugal vortices is thus of necessity more limited.

Table I.
Cyclone No. | h (mb) | Vm (m/sec.) | Rm (km) | h/(Vm Rm ρω) | n
1 43 27.5 550 4.0 2.5
2 48 27.5 500 5.0 2
3 43 27.5 450 5.0 2
4 52 40.5 280 6.4 2
5 55 35.2 420 5.25 2
6 61 37.3 450 5.0 2
7 21 25.2 360 3.2 3
8 9 26.6 320 1.5 5
9 21 28.0 380 2.8 3
10 33 25.5 470 3.8 2.5
11 25 18.2 480 4.0 2.5
12 45 36.5 320 5.5 2
13 55 37.0 430 4.8 2
14 41 28.2 460 4.5 2.5
15 63 37.0 500 4.8 2
16 68 33.6 650 4.4 2.5
17 48 32.4 700 2.9 3
18 47 33.6 400 5.0 2
Table II.
No. | Place | Date | h (mb) | Vm² (m²/sec²) | Vm²/h (gm⁻¹ cm³) | n
1 Dry Tortugas 9/9/19 77 2900 380 2
2 Miami 18/9/26 73 7920 1080 5.5
3 Bermuda 22/10/26 47 6720 1420 7.5
4 Key Largo 28/9/29 60 9220 1540 8
5 East Columbia 13/8/32 67 4240 640 3.5
6 Savanna 5/11/32 93 16900 1820 9.5
7 Cape Henry 23/8/33 40 1920 480 2.5
8 Turks 1/9/33 30 3920 1240 6.5
9 Jupiter 3/8/33 60 6480 1080 5.5
10 Cape Hatteras 16/9/35 51 2380 800 2.5
11 Bimini 23/9/35 63 6040 960 5
12 Miami 1/11/35 37 2320 620 3
13 Fort Walton 31/7/36 37 3360 920 5
14 Manila 20/10/1882 40 6040 1000 8
15 Manila 13/9/28 76 10600 1400 7.5
16 Japan 21/9/34 100 7200 720 3.5
17 Indo China 16/7/28 50 3720 740 4
18 Hong Kong 18/8/23 50 2740 550 3

The Determination Of Upper Winds By Electronic Means.

Upper-wind observations are made by following the course of a small hydrogen-filled balloon which rises at a more-or-less constant rate through the atmosphere and which at the same time is carried along in the wind stream. At certain intervals of time readings are made of at least three of the following four measurements: azimuth angle, elevation angle, slant range, and height. From these measurements it is possible to work out the velocity of the balloon over any interval of time.
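
Where azimuth, elevation, and slant range are all available (as with radar), the working can be sketched as below; the flat-earth, small-area treatment and the function names are illustrative assumptions only.

```python
import math

def horizontal_position(azimuth_deg, elevation_deg, slant_range_m):
    """Horizontal (east, north) co-ordinates of the balloon from one reading."""
    horizontal_range = slant_range_m * math.cos(math.radians(elevation_deg))
    az = math.radians(azimuth_deg)
    return horizontal_range * math.sin(az), horizontal_range * math.cos(az)

def mean_wind(reading_1, reading_2, interval_seconds):
    """Mean wind vector (m/sec) between two readings
    (azimuth deg, elevation deg, slant range m) taken interval_seconds apart."""
    x1, y1 = horizontal_position(*reading_1)
    x2, y2 = horizontal_position(*reading_2)
    return (x2 - x1) / interval_seconds, (y2 - y1) / interval_seconds
```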

There are three main methods of following these balloons, which are:—

Visual Methods.
Radio-direction Finding.
Radar Methods.

With visual methods a special theodolite is employed and measurements of azimuth angle and elevation angle are made, while slant range can also be obtained by tacheometry if a tail is hung below the balloon. Height can be assumed from the rate of ascent of the balloon. A further refinement in visual methods is the use of two theodolites situated at the ends of a base line, when slant range can be calculated with considerable accuracy. Most of the upper-air observations taken to-day are made with visual single-theodolite methods.

In Radio-direction Finding a radiosonde transmitter is carried aloft by the balloon and the position of this transmitter is "fixed" either by taking the bearings of the signal from two or more ground stations, or by obtaining azimuth angle and elevation angle from a single station. In both cases use is made of the height, which is calculated from the radiosonde measurements of pressure, temperature and humidity.

In Radar methods electromagnetic pulses are transmitted from the ground radar set and a portion of these waves is reflected by a special target that is carried aloft by the balloon; with radar methods azimuth angle, elevation angle and slant range are all measured.

The accuracy required in upper-air observations largely depends on the use to which such observations are put. Perhaps the greatest accuracy is required when these observations are used for gun correction, and artillery experts would like these winds with a vector error not exceeding 1 m.p.h.


Whether such an accuracy has any real meaning when the effect of wind over a distance comparable with the range of modern guns is taken into account is a debatable point and, in any case, such an accuracy is usually beyond the capability of any known method of upper wind determination, with the possible exception of the double-theodolite visual method.

For purely meteorological purposes, such as the drawing of upper-air charts and forecasting, the maximum permissible vector error should not exceed 5 knots when the wind is taken over 2,000ft. layers up to 30,000ft. For the simple day-to-day use of these observations in, say, aviation forecasting this accuracy is probably too stringent, but for research purposes and the more exact forecasting techniques, particularly in tropical regions, this accuracy is not sufficient and the magnitude of the vector error should not exceed 3 knots. It is of interest to state here that the British Air Ministry have specified the maximum permissible vector error as 2.7 knots at a maximum range of 100 miles, these figures being used as a guide in the design of new electronic wind-finding equipment.

Now a balloon ascension rate of 1,000 feet per minute can usually be obtained with a moderate-sized balloon and careful design of the target, and, assuming an average wind velocity to 30,000ft. of 60 knots, the azimuth angle must be measured to within about 0.3° if the vector error at 30,000ft. over 2,000ft. layers is to be kept below 5 knots. Therefore any electronic method employed for wind finding must be capable of measuring bearing to within 0.3°. The other main requirement in electronic wind-finding equipment is range, and although in our assumption of an average 60-knot wind to 30,000ft. the range at 30,000ft. is only 35 miles, this average may be exceeded on several occasions; also observations to over 50,000ft. are most desirable. Hence range requirements have usually been stated as a minimum of 50 miles with a desirable maximum of 100 miles.
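
The 0.3° figure can be recovered by a rough argument of the following kind, treating the wind over a 2,000 ft. layer as the balloon displacement over two minutes and a bearing error as a cross-range displacement of that measurement; this is a hedged reconstruction of the arithmetic, not necessarily the method actually used.

```python
import math

# Assumed flight: 1,000 ft/min ascent, 60-knot mean wind up to 30,000 ft.
minutes_to_30000ft = 30.0
mean_wind_knots = 60.0
range_nautical_miles = mean_wind_knots * minutes_to_30000ft / 60.0   # about 30 n.m. (~35 statute miles)

layer_minutes = 2.0                 # a 2,000 ft layer at 1,000 ft/min
bearing_error_deg = 0.3
cross_range_error_nm = range_nautical_miles * math.radians(bearing_error_deg)

# Error in the layer wind if the two-minute displacement is in error by the
# cross-range amount:
wind_error_knots = cross_range_error_nm / (layer_minutes / 60.0)
print(round(range_nautical_miles, 1), round(wind_error_knots, 1))   # about 30 n.m. and 4.7 knots
```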

The British M.O. Direction Finding Set employs an Adcock H-formed aerial system which is rotated for minimum signal strength, which occurs when the horizontal arm of the H is at right angles to the incoming signal. Three of these sets, spaced at the corners of an approximately equilateral triangle of side about 20–30 miles, are used to follow the Kew radiosonde, which operates in the frequency band of 27.5 to 28 megacycles per second. The height of the balloon is obtained from the measurements of pressure, temperature and humidity sent out by the radiosonde.

The British M.O. consider these sets give probable vector errors of under 4 knots at 30,000ft. for the better-sited stations and under 6 knots for the poorer-sited stations. The disadvantages of this type of equipment are mainly economic, being the cost of the radiosonde transmitter which must be flown every time a sounding is required, and also the number of operators required to man all three stations.

The American-designed Direction Finding Set S.C.R. 658 was developed to track the Diamond Hinman 375 megacycle radiosonde. Two sets of aerials are provided, one for azimuth and one for elevation and the signals from each set are compared on a separate C.R.T., thus enabling measurements of both azimuth angle and elevation angle to be made. Height is obtained from the radiosonde measurements so it is possible to obtain a complete fix with only one station. The accuracy of this equipment is not known, but it is assumed to give better than 5-knot vector error at 30,000ft.

The advantage of this type of D.F. set over the British M.O. set is that only one station need be operated. The expense of the radiosonde transmitter required for each flight is still a disadvantage, while there is a further disadvantage in that the set cannot give accurate elevation angles below 15° because of ground reflection interference. The U.S. authorities are therefore developing a 1,725 m.c. radiosonde and D.F. set that can be used down to 4°.

The original Radar sets used for following meteorological balloons were not designed specially for that purpose, but anti-aircraft fire-control radar sets were usually used because they gave the necessary accuracy and also measured elevation as well as bearing and range. The longer-wave-length radar sets were originally used, but microwave sets are now employed almost exclusively.

The first of these sets to be described is the American S.C.R. 584. This was originally an A.A. fire-control radar that was used extensively in Britain in the last few years of the war. It is a 10 cm. set of 400 K.W. power which can automatically track the balloon reflector. It can measure azimuth and elevation angles to 0.1° and proved quite a suitable set for wind-finding purposes; but the initial cost of these sets was almost prohibitive.

The New Zealand Micro-wave Meteorological Radar, developed by the Dept. S.I.R., also operates on 10 cm., and has a power output of 200 K.W. and a bearing and elevation accuracy of 0.25°. This accuracy is achieved by the use of a narrow beam (hence the large parabola), and by bisecting an arc scribed (or painted) on a long-persistence P.P.I. tube.

The chief advantage of radar methods over D.F. methods is that slant range is measured with considerable accuracy, and so the vector error in wind determination at extreme ranges is usually less than with D.F. methods. The economic aspect of lower operating costs is of considerable importance. Another advantage is the use of wind-finding radar sets for storm detection, and though this has more immediate use in tropical latitudes it can still prove a very valuable tool to the meteorologist in temperate latitudes if suitable techniques are worked out. Generally, warnings of the onset of extensive rain areas, especially those associated with cold fronts where there are usually strong convectional currents, can be obtained from upwards of 100 miles away.

The chief disadvantage of radar methods, apart from their large initial cost, is their rather limited range especially when using reflectors of moderate size. This defect can to some extent be overcome by the use of pulse-repeater equipment. This pulse-repeater equipment was developed for a rather special purpose, namely, to enable warships to make upper-wind observations readily and conveniently by using their existing radar installations. Three models were developed at 200, 700 and 900 megacycles respectively. The 200 megacycle model was specifically designed for use aboard weather and guard ships which carried only search radars incapable of measuring angles of elevation. The 200 megacycle pulse repeater was always used with a standard radiosonde so that height was obtained from the radiosonde measurements. The 700 and 900 megacycle models were designed for tracking by fire-control radars, and thus elevation angle as well as range and azimuth angle was obtained; hence these pulse repeater equipments could be used by themselves. They could be followed for up to 100 miles, and, as slant range was always measured, the accuracy was considerably better than obtained with standard D.F. methods using the same frequency.

In conclusion I would like to stress that the design of electronic equipment for wind determination has by no means reached a static state, and new developments and designs are being tried out in many of the larger countries, particularly Britain and U.S.A. The latest British development in wind finding uses a microwave radiosonde which is triggered by a U.H.F. ground transmitter. In America, as mentioned previously, the development is towards straight direction finding on a centimetric radiosonde transmitter.

A Preliminary Study Of Some Results From The Radio-Meteorological Investigation Conducted In Canterbury.

Introduction: The propagation of ultra high frequency electro-magnetic radiation near the surface of the earth is dependent to a large degree on the distributions of temperature and humidity in the first few hundred feet of the atmosphere. Some of the theoretical aspects behind this propagation are briefly considered, and it is shown how a particular climatic modification of the lower atmosphere can radically influence the propagation, more particularly in the region beyond the geometrical horizon of the transmitter, a space normally associated with the relatively weak diffraction field.


The particular modification is that which occurs when warm dry air flows out from land over a colder sea. The resulting distributions of refractive index with height in the first few hundred feet have been measured, and are such as to cause abnormally strong signals to be received well beyond the horizon of a transmitter.

Orthodox Propagation: In an atmosphere in which the air is well mixed, the distributions of temperature and humidity with height result in a linear decrease of refractive index with height for all relevant values of height, of such a magnitude that the downward curvature of a ray in this space is approximately one quarter of the earth's curvature, a factor which increases the effective horizon of a source above the earth's surface. Utilising the concept of modified refractive index which permits the treatment of the earth's surface as a flat one, it follows that in a well-mixed atmosphere the modified refractive index increases linearly with height.

Fig. 1—The Dependence of Refractive Index μ and Modified Refractive Index M on Height in a well-mixed Atmosphere.

At centimetre wave-lengths, both the earth and sea behave as almost perfect reflectors, and, therefore, under normal conditions the field distribution some distance from a transmitter will depend on height in the manner shown in Fig. 2.


Fig. 2—The Dependence of Field Strength E on height.

Above the horizon, the normal lobe structure is evident, due to the interference of the direct and reflected rays. The calculation of the lobe positions must take into account the slight bending of rays in the atmosphere. The fact that the minima of field strength are not zero is due to a divergence factor introduced by the sphericity of the earth.

Below the horizon is the diffraction field, decreasing in intensity as the height becomes smaller. The scale of the lobe structure and the diffraction curve will, of course, depend on the height of the transmitter, the wave-length of the radiation, and the separation distance of transmitter and receiver.

It will be seen below how it is possible to achieve, well below the horizon, field strengths comparable with those existing above the horizon.

Anomalous Propagation: It is well known that certain temperature and humidity distributions in the lower atmosphere result in a modified refractive index distribution which, over a limited height called the height of the radio duct, is conducive to the downward bending of rays. The dependence of modified refractive index on these two meteorological quantities is such that a temperature excess and a specific humidity deficit near the surface will provide the necessary variation in modified refractive index.


Fig. 3—The Potential Temperature and Specific Humidity Distributions Resulting in the Formation of a Surface Duct.

Employing the ray theory of propagation, it can be shown* that under these conditions a certain amount of the energy from the transmitter is trapped in the duct, and will result in abnormally strong fields well beyond the horizon, if, of course, the duct extends as far as this.

Mode Theory of Propagation in a Duct: The ray-theory treatment of super-refraction in the atmosphere, which consists in considering rays being refracted down to the surface, reflected, refracted back once more to the surface, reflected, and so on, immediately recalls the propagation of ultra high frequency radiation along a waveguide.

There exists between the two situations quite a useful analogy. The passage of a transverse electric wave of the H₀ₙ type down a waveguide may be regarded as being accomplished by the successive reflection of a plane wave at two opposite walls of a guide. The pattern of electric field or distribution of field across the guide is determined by the guide dimensions, the wave-length, and the angle of reflection. The boundary conditions permit only a finite number of discrete angles of reflection to be assumed; to each there corresponds a certain field pattern, called a mode of propagation. Some of these distributions are shown in Fig. 4.


Fig. 4—Modes of Propagation in Waveguide.

Now consider the propagation of horizontally polarised radiation in an atmosphere, the modified refractive index of which decreases linearly with height. A plane wave starting at some small angle will be refracted and ultimately return to the earth's surface, where it is reflected. Compare this process with the waveguide mechanism. The surface of the earth or sea acts like the bottom side of the waveguide, and the function of the other side is performed by the refractive properties of the medium.

Fig. 5—Modes of Propagation in an infinite Radio Duct.

[Footnote] * See succeeding paper.


Exactly as in the waveguide there exist definite modes of propagation or patterns of electric field which travel immediately above the surface; some of the distributions are shown in Fig. 5.

At the earth's surface, the field is zero for all the modes, but there is no well-defined upper limit to the field as there is in a waveguide. Another essential difference between the two cases is that the energy associated with each mode is not principally confined to the same track-width for all modes. The track-width increases with mode number, but depends also on the wave-length and the lapse-rate of refractive index.

The atmosphere chosen as an example is a very artificial one, since the region of negative dM/dh is usually confined to a few hundred feet. Because of the dependence of track-width on wave-length and mode number, only the low-order modes for the smaller wave-lengths will be propagated without attenuation. Higher modes at these wave-lengths and all modes of greater wave-lengths will not be "trapped," the energy associated with them leaking away from the top of the duct, the leakage increasing with mode number and wave-length. This explains the fairly frequent occurrence of anomalous propagation at centimetre wave-lengths.

Mathematical Treatment: The above solution is not directly applicable to the problem in general because, even if we assume stratification, i.e., the same M profile existing for all distances, the dependence of modified refractive index on height is not linear or bilinear. However, an attack via mode theory is still extremely powerful, and although entailing not inconsiderable numerical work, the first and second mode solutions have been obtained for some simple profiles.

For horizontally polarised radiation, the electric field vector E is contained in the wave equation

∂²E/∂x² + ∂²E/∂h² + k²μ²E = 0     (1)

where x is the horizontal co-ordinate, h is the height, k is the wave number, and μ is the modified refractive index, dependent in some way upon the height h.

A solution of this equation may be obtained in the form

E = Σ aₙ Uₙ(h) exp(−i k μ₀ x cos θₙ)     (2)

where μ₀ is the value of μ when h = 0. Each term of this solution in series represents a mode of propagation, and Uₙ(h) is a solution of the one-dimensional wave equation

d²Uₙ/dh² + k²(μ² − μ₀² cos²θₙ) Uₙ = 0     (3)

The boundary conditions Uₙ(0) = 0 and Uₙ(h) → 0 as h → ∞ permit only discrete values of cos θₙ. These, the eigenvalues of the problem, may be real or complex, corresponding to trapped or leaky modes. Their computation and the evaluation of the associated eigenfunctions Uₙ(h) provide a difficult problem for all distributions of refractive index except the very simplest.
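
As an indication of the numerical task involved, the sketch below discretises equation (3) by finite differences on a truncated height interval, imposing Uₙ = 0 at the surface and at an artificial upper boundary. This turns the problem into a real symmetric eigenvalue problem and is only a crude stand-in for the full treatment; the leaky modes, with complex cos θₙ, and realistic profiles are exactly where the labour referred to above lies. The profile and grid are assumed for illustration only.

```python
import numpy as np

def discrete_modes(mu_squared, dh, k):
    """Finite-difference approximation to equation (3),
        d2U/dh2 + k^2 (mu^2(h) - mu0^2 cos^2 theta) U = 0,
    with U = 0 at the surface and at an artificial upper boundary.
    mu_squared holds mu^2 at the interior grid points (spacing dh).
    Returns the eigenvalues lambda_n = k^2 mu0^2 cos^2 theta_n and the
    corresponding grid eigenfunctions U_n (as columns)."""
    f = (k ** 2) * np.asarray(mu_squared, dtype=float)
    n = f.size
    main = -2.0 / dh ** 2 + f                   # diagonal of D2 + k^2 mu^2
    off = np.full(n - 1, 1.0 / dh ** 2)         # off-diagonals of D2
    operator = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    eigenvalues, eigenfunctions = np.linalg.eigh(operator)
    return eigenvalues, eigenfunctions

# cos theta_n then follows from lambda_n = k^2 mu0^2 cos^2 theta_n, e.g.
# cos_theta = np.sqrt(eigenvalues[eigenvalues > 0]) / (k * mu0).
```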

Undoubtedly, the assumption of stratification provides a convenient first approximation, but it appears from the few observations we have obtained on advection ducts that the distribution of refractive index with height varies not inconsiderably with offshore distance, and that a final solution will have to take into account this dependence of modified refractive index on the two variables x and h.

A Typical Advection Duct: That the types of temperature, humidity, and refractive index distributions which have been discussed do exist under certain meteorological conditions, and that they result in pronounced superrefraction at centimetre wave-lengths, is evident from a set of observations obtained in Canterbury in February of this year.

A typically warm, dry, north-west wind was blowing steadily over the area of observations for the duration of the measurements.

The temperature modification introduced by the sea surface may be studied in Fig. 6, which is a cross-section perpendicular to the coast and approximately in the direction of the wind.

The average potential temperature of the air-mass over the land is 75° F., whereas over sea the potential temperature varies from 59° F. immediately above the surface to 75° F. for the undisturbed upper layers. This modification, sharp at first, occupies a greater height as the offshore distance increases, but is complicated by a variation in sea temperature. For the first 40 kms. the sea temperature is 59° F.; it then drops steadily to a value of 54° F. at 80 kms., beyond which it appears to be maintained.

Fig. 6—Isopleths of Potential Temperature—°F.

The effect of evaporation from the surface into the lower layers of the air-mass is shown in Fig. 7, similar to the previous diagram, but with specific humidity replacing potential temperature. The modification in this case, however, is not so well defined.


Fig. 7—Isopleths of Specific Humidity—Q gms./kgm.


Fig. 8.—Isopleths of Modified Refractive Index—M.

The resulting distribution of refractive index is given in Fig. 8. Above 700 ft. the modification caused by the sea surface is negligible even at great ranges, i.e., 120 km. The duct height, i.e., the height of the M inversion, increases rapidly with offshore distance, reaching a maximum height of 300 ft. at 60 km., which is maintained for another 60 km. at least.

The coast is a point of discontinuity, since the surface temperature and humidity both change radically here. With the variation of duct height with distance, there is also a considerable change in the shape of the M curve. Just offshore, the duct is very intense, but also very low; by intensity, we choose to mean the difference between the surface value of M and the minimum value. As the distance increases, the intensity lessens. This is seen in Fig. 9.

Fig. 9.—Variation of Duct Shape with Distance Offshore.

From calculations made for M profiles of roughly similar dimensions, it is reasonable to suppose that such a radio duct as the one encountered above is capable of considerable trapping of centimetre radiation. Such a prediction is completely borne out by field strength measurements made on this day well beyond the geometrical horizon. These are given in Fig. 10, where for each of the four transmitters the field strength in decibels above noise is plotted against height. In order to have a useful absolute measure of these field strengths, the free-space field strength was estimated by observing the lobe structure obtained from the aircraft flying at constant height away from the transmitter. The level of free-space field in decibels above noise is inserted in each of the height gain curves of Fig. 10. Unfortunately, there is inherent in our method of observation a small range variation in the height gain curves, but this we believe to be outweighed by other considerations.


Fig. 10.—Height Gain Curves obtained on Slant Descent. Range Change—2,000–1,000–50 ft.

Referring to the first height gain curve in Fig. 10, which is for a transmitter 29 ft. above sea-level operating at a wave-length of 9.3 cm., it is evident that at a distance of 80 km. the field above the horizon corresponds roughly to a lobe pattern, but below the horizon, instead of decreasing rapidly, a fairly constant signal level is maintained down to 100 ft., where there exists a small maximum.

The same effect is achieved on the other transmitters, one at the same height operating at 3.2 cm., and a similar pair of transmitters at a height of 88.5 ft. above sea-level. The absence of any radical change in the height gain curves due to the change of transmitter height is not surprising, since both transmitters are well within the radio duct. Only if the transmitters were placed well above the duct would the field strength well beyond the horizon be diminished appreciably.

Although it is not possible at this stage to predict completely the field strength variation with height associated with a given field of modified refractive index, we are permitted one or two general deductions.

Were propagation beyond the horizon performed by the first mode alone, we would expect to have a maximum somewhere in the duct, e.g., 100 ft., but above the duct the signal would be expected to fall off rapidly with height, only increasing again as the horizon is approached. This decrease above the duct is non-existent in our case. This may be due to the presence of a number of trapped modes, or the fact that the energy associated with the large number of leaky modes will escape from the duct into the region above it. Of course, the decrease in intensity of the duct with distance will undoubtedly result in some departure from the classical picture of mode propagation in a radio duct.

Some General Conclusions: It appears that under north-west conditions in Canterbury, the necessary changes of temperature and humidity near the sea surface are sufficient to cause very considerable superrefraction on wave-lengths of 3 and 10 cm.

The excess of upper air potential temperature over the surface temperature is frequently of the order of 15–25°F. and the humidity deficit often 5 gms/kgm.

Although it has been our misfortune to examine only a few advective situations, it would appear that the radio duct extends to at least 150 km. offshore, although its effect, i.e., its ability to cause trapping of radio energy, is much weakened at this stage. The maximum height of the modified refractive index inversion is of the order of 300–500 ft., and is attained within the first 40–60 km.

Obviously such figures will vary with the wind speed and wind profiles, the warmth and the dryness of the nor'-wester, and the sea temperature, but they are, we think, generally illustrative of the conditions which do occur. The situation is sometimes complicated by the presence of an onshore sea breeze, but this is usually confined by the offshore wind to a few miles each side of the coastline, and the effect on propagation is small. A more complex set of conditions is provided when the north-west wind is kept aloft by a north-east wind which may extend in height up to a thousand feet.

It is hoped that it will be possible to obtain an accurate correlation between the modified refractive index distribution and the field strength dependence on height. This should make possible the prediction of propagation curves for different frequencies and for various advective situations, and a correct assessment of the modifications introduced by some of the simpler complicating factors mentioned above.

Acknowledgment: For the initial portion of this article, the author is indebted to H. G. Booker and G. G. Macfarlane, of the Telecommunications Research Establishment, Malvern, England.

Some Problems And Techniques Involved In The Radio-Meteorological Investigation Conducted In Canterbury.

Summary: The phenomenon of superrefraction results in the trapping of ultra-high frequency electromagnetic radiation close to the earth's surface. It is caused by abnormal variations of temperature and humidity in the lower atmosphere, resulting in a sufficiently sharp lapse rate of refractive index to cause rays to have a curvature greater than that of the earth.

These conditions arise when warm dry air passing over a cooler sea surface is modified thereby, and an investigation is being conducted in Canterbury to discover:—

(a) The relationship of the distributions of refractive index in the atmosphere with the distribution of energy from a transmitter situated at various levels.

(b) The laws governing the modification of the air mass.

(c) A forecasting technique for determining the onset and intensity of superrefraction conditions from general synoptic data.

The organization and general techniques adopted to achieve these results are discussed.

Introduction: Superrefraction or anomalous propagation is a phenomenon associated with electromagnetic propagation in the troposphere, which has very marked and important effects at the lower wave-lengths used in radar techniques.

– 86 –

With the comparatively small degree of bending of rays in a normal atmosphere, one might expect to find only diffracted energy at these wave-lengths beyond the horizon of a transmitter, but under superrefraction conditions strong fields are found close to the surface far beyond the horizon, very much greater than could be explained by diffraction alone.

One is led to the conclusion that a strongly reflecting or refracting layer exists at some distance above the earth's surface. The ionosphere can be disregarded, as the frequencies with which we are dealing are high enough for radiation to pass straight through it. A cloud might suggest itself as a reflecting layer, but superrefraction is frequently observed in a perfectly clear atmosphere, so we are forced to assume that a strongly refracting layer at some level in the atmosphere is responsible for the phenomenon.

We must first examine the normal distribution of refractive index in the atmosphere, determine what factors cause it to vary, and discover the variations under superrefraction conditions to see if they can account for the phenomenon.

Refraction in a Standard Atmosphere: If μ is the refractive index of the atmosphere, it is easily shown that, for small angles of elevation, the downward curvature of rays is given quite closely by − dμ/dh or μ′ where μ′ denotes the lapse rate of μ in the atmosphere.1

Near ground level the refractive index of the atmosphere at height h, down to wave-lengths of a centimetre or so, is given by²

μ = 1 + (79/10⁶T) [P − Pq/621 + 4800Pq/621T]     (1)

where P = total pressure in millibars

q = specific humidity in grams per kilogram

and T = absolute temperature.

Differentiating with respect to height, this reduces at N.T.P. roughly to

μ′ = 5 × 10⁻⁶ (0.072P′ + 1.6q′ − 0.11t′)

where t denotes degrees Fahrenheit and P′, t′, and q′ denote the lapse rates of pressure, temperature, and specific humidity. But the curvature of the earth is 5 × 10⁻⁶ radians per hundred feet, so the above is equivalent to

μ′ = Ke (0.24 + 1.6q′ − 0.11t′)     (2)

where Ke is the curvature of the earth and q′ and t′ are expressed in grams per kilogram and degrees Fahrenheit per hundred feet. The term 0.24 arises from the normal lapse rate of pressure in the lower atmosphere.

In a standard (i.e., thoroughly mixed) atmosphere, we have

q′ = .04 gms/kgm/100 feet.

t′ = .5°F/100 feet.

so μ′ = ¼ Ke.


Thus the downward curvature of rays due to refraction in a standard atmosphere amounts to about one quarter of the earth's curvature. The treatment of curved rays over a curved earth is mathematically complicated, and in calculations on the subject the curvature of the earth is adjusted so that rays are straight— i.e., the earth is made to have a radius of 4/3 its natural value. The horizon on this new earth is known as the “radio” or “radar” horizon, and in a standard atmosphere one would expect only the weak diffraction field beyond this horizon.


Fig. 1—Path of Rays in Standard Atmosphere over Earth of Curvature ¾ Ke.


Modified Refractive Index: It is convenient for many purposes to consider the earth as flat, and apply the earth's curvature to rays. With the flat-earth treatment a modified refractive index is introduced, denoted by M, such that M′ = μ′ − Ke (3)

where M′ denotes the lapse rate of M. In a standard atmosphere μ′ equals ¼ Ke so M′ is negative, or the gradient of M is positive with height, and rays have an upward curvature equal to ¾ Ke.

– 87 –

Fig. 2—The Distribution of Refractive Index and Path of Rays in a Standard Atmosphere on the Curved and Flat-earth Concepts.

The path of rays leaving a transmitter at various angles in a standard atmosphere is shown in Fig. 2, on the curved and flat-earth concepts.

Abnormal Variation of Refractive Index in the Atmosphere giving rise to Superrefraction: Under certain conditions, however, lapse rates of humidity and temperature are far from normal, and the downward curvature of rays may be several times greater than that of the earth. From equation (2) above it can be seen that a lapse rate of humidity greater than normal, and a lapse rate of temperature less than normal or a negative lapse rate of temperature (i.e., a temperature inversion), will give rise to a greater value of μ′. An actual case will illustrate this point.

On the 24th February of this year, when a north-west wind was blowing offshore in Mid-Canterbury, average lapse rates of temperature and humidity in the first 135 ft. at a point 8 ½ miles off the coast were

t′ = −4.3°F/100 ft.

q′ = 2.4 gm/kgm/100 ft.

which gives μ′ = 4.6 Ke and M′ = 3.6 Ke in this region.
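These figures follow directly from equation (2); the short sketch below simply re-evaluates that equation for the standard atmosphere and for the observed case, and introduces no new data.

```python
# Re-evaluation of equation (2) for the standard atmosphere and for the
# observed case of 24th February; lapse rates are per 100 ft., as in the text.

def mu_dash_in_Ke(q_lapse, t_lapse):
    """Downward ray curvature mu', in units of the earth's curvature Ke."""
    return 0.24 + 1.6 * q_lapse - 0.11 * t_lapse

print(mu_dash_in_Ke(0.04, 0.5))      # standard atmosphere: about 0.25, i.e. 1/4 Ke
mu_dash = mu_dash_in_Ke(2.4, -4.3)   # 8 1/2 miles offshore, first 135 ft.
print(mu_dash, mu_dash - 1.0)        # about 4.6 Ke, and M' = mu' - Ke = 3.6 Ke
```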

Above this level, lapse rates were normal, so the average distribution of modified refractive index and the behaviour of rays from a transmitter situated at, say, 50 ft. is shown in Fig. 3.


Fig. 3—The Behaviour of Rays Propagated within a Surface Duct.

Rays leaving the transmitter at small angles of elevation are bent towards the earth's surface, where they are reflected, and then by a process of continuous refraction they are once more returned to the surface, where they are again reflected. This process will continue as long as the given distribution of modified refractive index exists, and the energy associated with these rays will be concentrated close to the surface of the earth and penetrate well beyond the radio horizon. Rays propagated at other than small angles of elevation will, however, penetrate into the region where the lapse rate of M is normal, and will not be returned to the earth's surface.

When a negative gradient of modified refractive index occurs we have what is known as a “modified index inversion” or “radio duct.” Rays are said to be “trapped” within the duct, which is sometimes known as a “trapping layer.” An M inversion can exist only over a limited height interval, however—thus in the above case it extended from the surface up to 135 ft., the gradient being normal above this height. This type of duct is known as a “simple surface duct.” “Elevated” ducts exist, as shown in Fig. 4, where the lapse rate of M is normal above and below the inversion.

From equation (3) it is seen that if μ′ = Ke then M′ = 0 (i.e., M is constant with height) and rays have the same curvature as that of the earth. This condition is shown in Fig. 5. If M′ > 0, rays have a curvature greater than that of the earth. From this the usefulness of the modified refractive index concept will be readily appreciated.

– 88 –

Fig. 4—Elevated Duct.


Fig. 5—Behaviour of Rays when M is constant with Height.

The case cited in Fig. 3 is simpler than that obtained in practice. A given distribution of modified index with height is seldom maintained over any great distance. In fact, at the same time as the case above, at a point fifty miles offshore, the average lapse rates of temperature and humidity were such that μ′ = 1.85 Ke and the M inversion extended up to 350 ft. from the surface. The initial properties of an air mass, the distribution of wind velocity and direction in it, and the nature of the surface over which the air mass is moving, all influence the present and future distributions of refractive index over a path. The problem of discovering the behaviour of radiation in practice is obviously a complex one.

Conditions giving rise to Superrefraction and the Importance of the Problem: We have seen that superrefraction occurs to a varying degree when the lapse rate of humidity is greater than normal and the lapse rate of temperature is less than normal. These conditions are brought about when warm dry air overlies cool damp air, and can occur in various ways. One of the most important is the advection of warm dry air over a relatively cool sea; the lower layers are cooled and moisture evaporates into them, bringing about the conditions for a simple surface duct to form.

The phenomenon of superrefraction makes itself apparent on radars by giving greatly extended ranges on targets at low levels, and this is a very important effect, especially in war time. It became vital during the recent war to discover the mechanism of the phenomenon, especially under advection conditions, and although a great deal of quantitative radio information was obtained, it was of little use, owing largely to the number of variables in the atmosphere over a path near the surface. A site was required where radio-ducts could develop in a steady meteorological situation, and where the distribution of modified refractive index and the field strength from transmitters situated near the surface could be studied in detail.

The Fohn Wind in Canterbury and the Launching of the Canterbury Project: When a moving air-mass encounters an obstacle such as a mountain range it is forced upwards, and in so doing much of the moisture condenses and is precipitated, the latent heat of condensation warming the air. On the lee side of the obstacle the air descends, and streams out at lower levels as a hot dry wind. The north-west wind as developed in Canterbury, New Zealand, is of this type, known as a Fohn wind, and blows at all seasons of the year, sometimes for several days at a time.

Fig. 6 is a map of the centre portion of the South Island of New Zealand. In Mid-Canterbury, at Ashburton, the plain is about thirty miles wide from the foot-hills to the sea, with a gentle slope clear of obstructions.

The general direction of wind development is approximately at right angles to the coast-line, which is practically straight and runs in a north-east–south-west direction. The area offshore is completely clear of any obstructions, such as islands, to complicate the picture, and conditions are ideal for a simple surface duct to form in a north-west situation. There is every chance that a detailed study of the problem of superrefraction can be made in a steady and simple meteorological situation, with the minimum number of complicating factors. A long-term investigation, known as the “Canterbury Project,” has been inaugurated; operations were begun towards the end of September, 1946, and will continue until some time towards the end of this year.

– 89 –

Fig. 6—Map of Centre Portion of South Island of New Zealand Showing Area of Operations.

The Problem: Briefly, the problem is threefold:

1. To correlate the distribution of modified refractive index with the distribution of field strength in and above a radio-duct, from transmitters on different frequencies situated at various levels.

2. To study the modification of an air-mass as it moves offshore and formulate the rules governing the changes.

3. To develop if possible a forecasting technique for the onset, intensity and duration of superrefraction at various frequencies under advection conditions, from the general synoptic situation, or from a detailed low-level study of the air structure at some convenient point inland.

The first section of the problem requires ideally an instantaneous picture of the conditions of the atmosphere and the distribution of field strength from transmitters on different wave-lengths over a path from the coast to some distance out to sea, of the order of 100 miles, from the surface up to a few thousand feet. This is impossible to obtain in practice, but in a steady meteorological situation fast-moving sources of measurement should be able to obtain a fairly complete and reliable picture. The second and third sections of the problem require successive detailed measurements of conditions in a moving air-mass, up to some 1,500 to 2,000 ft., as it passes over land and out to sea. Again a steady meteorological situation is highly desirable, since it allows some freedom in the timing and placing of the various observations.

Mobile sources of measurement are essential to obtain full advantage of a given set of conditions without an unwieldy amount of equipment.

General Techniques Adopted in Canterbury: Fig. 6 shows the area of operations. The line runs in a north-west-south-east direction across the Canterbury Plains just north of the path of the Ashburton River, and out to sea for about 100 miles.

– 90 –

Radar transmitters on 3 cm., 10 cm., and 3 m. are situated at the coast about 25 ft. above mean sea-level, while a second station on 3 and 10 cm. is situated a little to the north at a height of about 85 ft. above mean sea-level. Anson aircraft, on loan from R.N.Z.A.F., are fitted with receivers on these three channels which amplify signals from their respective ground stations and provide pulses to trigger associated transmitters on the same frequencies, which reradiate pulses of constant power. The signals are accurately measured on receivers at the ground stations. A trawler is fitted with similar equipment. Under north-west conditions the aircraft and trawler work in the region offshore, and with suitable manipulation, their positions always being known, a picture of the distribution of field strength in two dimensions along a path approximately at right angles to the shore is obtained. A channel on 50 cm. is being installed to obtain further data on the influence of a given set of conditions on propagation at different frequencies. The aircraft is also fitted with a wet-and-dry-bulb aircraft psychrometer, type Mark VI B, which measures temperature and humidity.

Three heavy trucks on land and the trawler at sea are fitted with the American wired-sonde equipment which, by raising and lowering special elements with the aid of kite or balloon, measures the temperature and humidity of the lowest 1,500 ft. or so of the atmosphere.(3) A small cup-type anemometer is similarly used to give wind profiles in this region, and pilot balloon ascents at Ashburton provide further data above the reach of this equipment. Three fixed ground meteorological stations across the plains supply continuous records of temperature, humidity, pressure, and wind velocity and direction. The trawler is also fitted with a marine barometer to supply further pressure data out to sea. Radiosonde data on the properties of the air-mass before it crosses the mountain chain are obtainable from the meteorological station at Hokitika.

By suitably positioning the trucks on the coast and inland, and the ship and aircraft out to sea, the structure and modification of the air-mass as it moves across the land and offshore may be obtained.

All observations are controlled and co-ordinated from a central point at the Headquarters, Ashburton Aerodrome. The meteorological sounding trucks and the trawler are linked to Headquarters by radio-telephone, with the aircraft linked on a second channel and the ground radar stations by land-line. Information concerning sharp gradients in the atmosphere is frequently passed to the controller of operations at Headquarters from all meteorological sounding teams, so that he may have at all times as complete a picture as possible of the whole situation, and may consequently direct activities so that the maximum amount of useful information may be obtained.

The aircraft, being the fastest moving source of measurement, covers as much ground as possible during the period of the flight (around two hours), and usually follows the type of course shown in Fig. 7.


Fig. 7—Cross Sections of Area of Operations Showing Participating Units.

The sounding trucks are situated with one always at the coast and others at variable distances inland according to conditions. The modification of the air-mass close inshore is very rapid, the gradients in this region being sometimes too sharp to be covered successfully by the aircraft, so to date the ship has been employed mostly in this region on kite and balloon soundings. Soundings by the trucks fill in the picture over land. Radar observations are taken on the outer legs of the aircraft flight, readings being made at the ground stations every ten seconds. Aircraft psychrometer observations are made every twenty seconds throughout the flight, and at the trucks and trawler observations are made every 6 to 50 ft. or so according to the sharpness of the gradients.

– 91 –

It can be seen that, with this organization and technique, under steady meteorological conditions, data can be acquired which should lead to the solution of the problems. How far these methods are successful can be shown only by the examination of results, some of which are described in another paper.(4)

References.

1. Booker, H. G. The Theory of Anomalous Propagation in the Troposphere and its Relation to Waveguides and Diffraction. T.R.E. Report, No. T1447, 12th April, 1943.

2. Katz, I., and Austin, J. M. Qualitative Survey of Meteorological Factors affecting Microwave Propagation. Radiation Laboratory, Massachusetts Institute of Technology, Report No. 488, 1st June, 1944.

3. Anderson, P. A., Barker, C. L., Fitzsimmons, K. E., and Stephenson, S. I. The Captive Radiosonde and Wired Sonde Technique for Detailed Low Level Meteorological Sounding. Department of Physics, Washington State College, N.D.R.C. Project No. P.D.R.C.—647, Contract No. OEMsr—728, Report No. 3, 4th October, 1943.

4. Davies, H., A Preliminary Study of some Results obtained in the Radio Meteorological Investigation conducted in Canterbury. This volume, pp. 78–85.

A Radiosonde Method For Potential-Gradient Measurements In The Atmosphere.*

Summary.—A radiosonde method for obtaining records of potential gradient is described. The method is a simple modification of the Bureau of Standards radio-meteorograph. The factors involved in the calibration of the instrument, a typical record obtained, and the further extension of the work are discussed.

Introduction.—The distribution of land, sea and population in New Zealand—in particular in the Auckland district—makes it imperative to rely on radiosonde methods for the recording of potential-gradient distribution in the upper air, since the chances of recovering any records taken within an ascending instrument are extremely remote. Various types of radio-meteorographs or radiosondes have been and are still being developed by various countries(1). In this country, the best-known type of radiosonde transmitting and receiving equipment is that evolved by the Bureau of Standards, and, as it happens, the transmitter appears to be the most readily convertible one for our purpose.

The Bureau of Standards Radio-meteorograph.—In the Bureau of Standards radiosonde(2) the meteorological elements control the audio-frequency modulation of the transmitter's carrier frequency by means of resistance variation in the grid-circuit time constant (R-C combination) of the modulator valve. Disregarding the dashed lines and circles in the diagram, Fig. 1 represents the circuit of the transmitter as used at present.

An aneroid-barometer unit moves a switch arm over a contact assembly, in which every 5th and every 15th contact is arranged to cause the emission of reference signals, i.e., signals with a fixed audio-frequency modulation of 190 and 192 c/s respectively. The remaining contacts, when reached by the arm of the baroswitch, activate a relay, connecting a hygrometer element in parallel to the grid-condenser of the modulator. When the baroswitch arm glides over an insulated spacer a temperature-sensitive resistance takes the place of the humidity element. It should be noted that the contact strips are narrower than the insulated spacers—a feature of which special use is made, as will be seen later—thus giving, with the normal meteorograph, temperature indications of longer duration than those relating to humidity. The contact sequence is calibrated against pressure, and since the temperature at each level is known, the record reveals accurately the height of the sonde at each particular instant.

[Footnote] * Paper submitted to the Sixth Science Congress, Wellington, 20th-23rd May, 1947.

– 92 –

Fig. 1.

The audio-frequency modulation appearing at the receiver output is converted electronically into D.C. currents proportional to frequency and automatically recorded.

Prior Methods of Atmospheric Potential-gradient Measurements.—It is outside the scope of this paper to deal with measurements of the potential gradient on, or close to, the earth's surface. We are rather concerned with the distribution of the gradient at different heights above the earth and particularly with the disturbed field in the presence of thunderclouds.

One of the few methods serving this purpose, though not employing radio-transmissions, was evolved by Simpson and Scrase(3), who obtain a record within the instrument during the ascent and thus rely on recovery. The potential gradient is recorded on polarity paper, which gives the sign, and the instrument depends for its action on point-discharge currents initiated by the field. For if two points, connected by a conductor, are exposed in a strong electric field, with the axis of the conductor parallel to the field, point-discharge will commence, i.e., current will begin to flow from one point to the other, provided the magnitude of the field is above a minimum value (starting potential). The direction of the current is governed by the direction of the field, positive electricity entering the point directed towards the positive potential in an attempt to equalise the potentials at the levels of the respective points. The magnitude of the gradient dV/dX determines the width of the trace obtained on the polarity paper; a gradient of 10 volts per centimetre was the smallest potential gradient which produced a legible record with the arrangement used (a 20 m. length of wire). In general, as will be seen, the sensitivity of point-discharge current measurements depends on the length of the collector wires, the number of the points used at each end, and the overall sensitivity of the recording arrangement.

Mecklenburg and Lautner(4) devised a particularly small and stable electrometer attached to a parachute and suitable for throwing from an aeroplane. But here again recovery of the apparatus was essential.

Wenk(5), on the other hand, designed a sonde in which the discharging time of a condenser is made a function of the applied potential, the time being longer than a reference value for positive and shorter for negative gradients. By interrupting transmission periods to correspond to these time intervals, both

– 93 –

sense and magnitude of the potential gradient are conveyed to the receiving station.

The Modifications to the Bureau of Standards Radiosonde.—The main idea adopted in our modification of the Bureau of Standards radiosonde to make potential-gradient recording possible is just the opposite of Wenk's principle. Using the point-discharge method, a positive gradient causes a current to flow through the grid leak, to which our collector wires with one or several points at their ends are attached, in such a direction as to assist the discharge of the grid condenser in parallel with the grid leak, and vice versa. Hence, by influencing the duration of the grid-condenser discharge, the modulation frequency is increased for positive gradients, whilst a negative gradient, which leads to a reversal of the current direction, decreases the modulation frequency from a pre-adjusted reference value. The modulation appears in the R.F. section of the sonde in the same way as in the original apparatus, namely by interrupting the R.F. oscillation during the period when the modulator is oscillating. Thus the duration of the R.F. interruptions is sensibly constant and only the number of those occurring will be perceived as the modulation frequency by the unaltered receiving equipment. This represents a further difference from Wenk's principle.

Preliminary tests made it seem advisable to separate collector and radiator functions, using polystyrene-impregnated cork spacers between the λ/2-dipole and the collector wires. Strongest signals were emitted for a collector-wire length of (2n − 1)λ/2, n = 5 being chosen in the later ascents (corresponding to approximately 20 m. length). The relay of the original sonde was retained in the early ascents to disconnect the grid of the modulator valve from the collector wires and to insert a fixed time constant at the reference contacts.


Calibration.—For a given strength of field, F, the point-discharge current, after the “threshold-gradient,” M, for a given length of collector wires has been exceeded, depends in the main on the number of points, n, exposed at each end of the collector wires; in fact, it has been found that, with the spacing between the points used, it is directly proportional to n. This is easily understood, since each point may be assumed to act individually for striking, but once discharge has started on each they act collectively. Based on Whipple and Scrase(6), the current follows the law i₀ = n·a (F² − M²) μA, where F and M are expressed in volts/cm.

where a is a constant depending mainly on the exposure of the point, i.e., their relative distance and shape. (For large values of F, M may be neglected.) The constant “a” and “M” is determined in the laboratory in an arrangement as indicated in Fig. 2 and thus the relationship between F and i can be verified. The vertical lines in the diagram represent tautly stretched wire screens, the outer ones being 4 ft. square whilst the inner ones measure 5 ft. square with small insulated lead-throughs in various positions. The horizontal line repre-


Fig. 2

– 94 –

sents the collector wire which is terminated in one or several points. The uniformity of the field is ascertained by repeating the measurement in different positions, and the distance between the outer and the earthed centre screens is altered until, for the same gradient, V, the current is sensibly constant. The earthed centre screens guard against leakage currents and serve to minimise the insulation problem.


The relationship i₀ = n·a (F² − M²), however, only holds near the ground. At greater heights the effect of the decrease in atmospheric pressure must be allowed for, since, with the same gradient, i increases as the pressure diminishes. According to Tamm(7), who measured the discharge current between point and plate under various conditions, the current change may be expressed approximately by i/i₀ = (p₀/p)^1.6


where i = current at pressure p, i₀ = current at pressure p₀ (ground level, say); and if we introduce an exponential law between height and pressure of the usual form h = 8 logₑ (p₀/p), where h is in km and the mean air temperature is taken as 273° K., we obtain i = n·a (F² − M²) e^(0.2h)
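The following sketch puts these calibration relations together. The constants n, a and M used are hypothetical illustrative values, not the laboratory-determined ones of Fig. 2.

```python
# A minimal sketch of the calibration relations above.  N_POINTS matches the
# seven-point collector of Fig. 3; A_CONST and M_THRESHOLD are assumed values.
import math

N_POINTS = 7        # points at each end of the collector wire
A_CONST = 0.01      # assumed constant "a", microamps per (volt/cm)^2
M_THRESHOLD = 3.0   # assumed threshold gradient, volts/cm

def height_km(p_mb, p0_mb=1013.0):
    """h = 8 log_e(p0/p), the exponential pressure-height law quoted above."""
    return 8.0 * math.log(p0_mb / p_mb)

def discharge_current(F, h_km):
    """i = n a (F^2 - M^2) e^(0.2h), in microamps, for a gradient F (volts/cm)."""
    if F <= M_THRESHOLD:
        return 0.0              # below the starting potential: no point discharge
    return N_POINTS * A_CONST * (F**2 - M_THRESHOLD**2) * math.exp(0.2 * h_km)

def gradient_from_current(i_microamp, h_km):
    """Invert the law to recover F from a pressure-corrected current."""
    i0 = i_microamp * math.exp(-0.2 * h_km)
    return math.sqrt(M_THRESHOLD**2 + i0 / (N_POINTS * A_CONST))

h = height_km(700.0)                # roughly 3 km
i = discharge_current(10.0, h)      # gradient of 10 volts/cm
print(round(h, 2), round(i, 2), round(gradient_from_current(i, h), 2))
```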

Finally, we have to determine the relation between current and modulation frequency; then, knowing p or h from the calibration of the baroswitch supplied with each sonde, F can be evaluated as a function of height.

The graph of modulation frequency may be determined in several ways—

(i) By replacing i through the grid leak due to F by an equivalent current supplied by a high-impedance source;

(ii) by replacing the IR drop by an equivalent grid-bias battery;

(iii) by direct calibration of modulation frequency against F.

The preliminary tests were carried out and some experimental results were obtained in 1945 by R. Belin, M.Sc., as part of his Honours thesis under the supervision of the author.


Fig. 3.

– 95 –

Checking the relationship between i and F, the experimental points follow roughly the empirical law stated above, and plotting the modulation frequency against bias voltage gave a linear relation (14 cycles variation per volt change for a particular transmitter) for positive voltages, whilst for negative bias the modulation frequency tends very rapidly and non-linearly to zero. This also becomes apparent in Fig. 3, which shows a direct calibration of L·dV/dX against modulation for a collector wire (length L) with seven points on either end. This curve gives a relation of approximately the second order for positive gradients only, since in this region the frequency depends linearly on the volts applied to the modulator grid.

This graph may be used to evaluate potential gradients from our records: L·dV/dX is read off for the pressure-corrected modulation frequency or current and divided by the length of the collector wire to obtain the gradient in volts/cm. However, the foregoing considerations are merely empirical, e.g., they do not take into consideration different types of ions that may be encountered in the atmosphere, particularly in clouds. They must be accepted with certain reservations until further study has provided a surer basis for our problem.

Depending on the collector-wire length, L, chosen, the sensitivity of the sonde can be readily assessed or adjusted, since the greater the distance between the points the smaller the gradient needs to be in order to produce the same point-discharge current. Whilst, however, a change in length affects the minimum gradient which can be measured, the number of points only controls the current flow after the starting potential has been reached. The minimum gradient measured so far is 3 volts/cm.


Fig. 4.

– 96 –

Preliminary observations seemed to indicate a noticeable if not considerable influence of relative humidity on the striking potential, thus influencing particularly the evaluation for weak fields. This problem is being further investigated.

Results.—Of particular interest is the investigation of charge distribution in thunderclouds and the potential gradient under these disturbed conditions.(8) Only a few records have been obtained so far by Belin and the author.(9) One of these, shown in Fig. 4, has particularly interesting features and gives an idea of the sensitivity of the sonde.

The meteorological conditions as judged from the ground revealed some fracto-stratus formation below a towering cumulo-nimbus cloud.

A smoothed curve drawn through the recorded points is shown in Fig. 4. Starting from about 4 volts/cm. the field increases first to 5 volts/cm. at 600 ft. and then drops back gradually to zero up to 2,000 ft. A weak negative region follows up to 3,500 ft. This latter part corresponds to the fracto-stratus formation. On entering the cumulo-nimbus, the gradient rapidly increases beyond the recorder range, a rough estimate based on the modulation frequency observed with the monitor loudspeaker indicating a gradient of about 20 to 25 volts/cm. At about 7,000 ft. a decrease (still off the record) commenced followed at 10,000 ft. by a rapid reversal to a strongly negative field (zero modulation frequency—the absence of receiver noises indicating the presence of the carrier). At 14,000 ft. the negative gradient commenced to decrease and changed ultimately into another positive gradient of 6 volts/cm. at 14,600 ft. which at 20,000 ft. had decayed again to normal field strength following the inverse-square law relation.

In explaining the record, it appears that the cumulus cloud is negatively charged at its base with a strong positive charge centred at about 10,000 ft., a further strong negative charge appearing at 14,000 ft. The base of the fracto-stratus region appears to be positively charged as well as the upper-most region beyond 20,000 ft.


Fig. 5 gives a reproduction of the top portion of the actual record showing continuous small-scale fluctuations observable throughout the ascent. (Any attempt at explaining these fluctuations will have to wait until more records of this kind have been obtained.)

Further Extension of Work.—It will be apparent that a logarithmic response of the sonde or some kind of range extension would be preferable in order to increase the range and still maintain good accuracy for small gradients. Whilst this can be readily done on the receiving end for positive gradients, either electronically or by a manually controlled attenuator, this method would have no influence on negative gradients as here the modulation

– 97 –

frequency soon drops to zero. Shifting the reference frequency higher up leads either to loss of sensitivity (small grid leak say) or excessive bias-voltage values. The choice of grid-condenser values is likewise limited by the oscillatory conditions of the modulator valve. The solution which is being incorporated into the sondes at present consists in causing negative gradients, which lead to zero modulation frequency, to give increased positive modulation-frequency excursion. This is achieved by combining relay and baroswitch in the arrangement as shown in Fig. 6.


Fig. 6

Positive gradients, with a current flow downwards, give an increase in modulation frequency in the down position of the relay-switch, whilst negative fields give a positive deflection when the cathode is switched to the upper collector wire. Since the contacts over which the baroswitch glides are narrower than the insulating spacers, the switch position is easily recognised from the record. The simple alterations essential to convert the original radiosonde transmitter for potential-gradient operation are indicated by the dashed circles and lines in Fig. 1.

The grid condenser has been changed from a value of about .08 μF to .003 μF, the humidity element is not being inserted, and the temperature sensitive resistance is to be replaced by a 1 megohm carbon resistor. Finally, the collector wires ending in one or several points are connected to the relay contacts as shown.

Combining the potential-gradient record with one obtained on a different radio frequency by a second receiver from a normal sonde tied to the gradient sonde, temperature correlation and humidity correction will ultimately be possible. But whilst the radiosonde methods have the advantage of giving an immediately available record independent of recovery of the transmitter, the apparatus would become more involved if one attempted to sectionalize a cloud by releasing a greater number of sondes in relatively quick succession, say every 10 minutes.(10) This would require a frequency separation between the individual transmitters to cover a wider band, with corresponding additions to the receiving and recording apparatus. On the other hand, the need for this complication would be counteracted by the use of radar both to plot the actual path of the balloon through the cloud and to ascertain the size of the water droplets present. These results would further benefit if supplemented by parallactic photography fixing the full extent of a cloud in space.

By using radio-active material on the points or, better still, stronger ionizing agents, e.g., small flames, the striking potential is lowered and, hence, the minimum gradient which can be recorded for the same length of collectors will be further reduced. Information about gradients at still higher levels, up to ionospheric regions and beyond, might be obtained by the ultimate use of rockets from which the sonde, attached to a parachute, would be ejected.

– 98 –

Acknowledgments.—The author is greatly indebted to Dr. Barnett, Director of Meteorological Services, for his interest and generosity in making the sondes and necessary accessories available and to Professor P. W. Burbidge for his helpful suggestions.

References.

(1) F. J. Scrase; J. Sc. Inst., 18, 119. No. 7, July, 1941.

(2) H. Diamond, W. S. Hinman, F. W. Dunmore, and E. G. Lapham; J. Res. Nat. Bur. Std., 25, 327, 1940.

(3) G. Simpson and F. J. Scrase; Proc. Roy. Soc. A, 161, 309, 1937.

(4) W. Mecklenburg and P. Lautner; Zs. f. Phys., 115, 557, 1940.

(5) P. Wenk: Naturwiss., 30, 225, 1942 (Wir. Eng. 1942, p. 315, 1913).

(6) F. J. W. Whipple and F. J. Scrase; Geo. Mem., No. 68, 1936.

(7) H. Tamm; Ann. d. Phys., 6, 259, 1901.

(8) K. Kreielsheimer; Austral. Journ. Sc., 9, 95, 1946.

(9) K. Kreielsheimer and R. Belin; Nature, 157, 227, 1946.

(10) G. Simpson and G. D. Robinson; Proc. Roy. Soc. A, 177, 281, 1944.

Mathematics In Research In New Zealand.

Introduction.—The reason for this paper is two-fold: to discuss a problem and to obtain information.

The problem is not new, but it is one that is becoming acute with ever-increasing use of mathematics in the sciences and in technology. It is this: how best to provide the research worker with the mathematics that he requires. In this the word research is not to be limited to imply only investigations under laboratory conditions but is to be understood to cover the whole field of problems involved in adapting and improving scientific knowledge for the attainment of desired ends.

With regard to the second point it is hoped to obtain information as to the extent and type of mathematics being used and required in research in New Zealand.

My interest in the matter arose from my endeavours to cultivate the use of statistical methods by workers in New Zealand. This aspect is being adequately cared for now. Formal instruction is available at the four colleges; but more important has been the establishment of the Biometrics Laboratory in the Department of Scientific and Industrial Research under the capable direction of Mr. I. D. Dick.

Place of Mathematics and the Service it can Give.—Mathematics is but one—certainly a most powerful one in suitable circumstances—of the tools of the investigator in the analysis of his problem. This point is stressed since the young research worker who has some acquaintance with mathematics is liable to be carried away with the power of mathematical methods and to lose sight of his problem in a profusion of mathematics. It is necessary here, as in all things, to preserve a balance, and failure to do so may lead to distrust and disillusionment, when the fault is rather that of the user in expecting mathematics to do his thinking for him.

In general one can say that mathematics is a language that simplifies the process of thinking and makes it more reliable. This is its principal service and it was amplified in the delivery of the paper. As it is not immediately necessary to the main purpose of the paper the points can be covered by reference to an article by T. C. Fry in the Bell Telephone System Technical Journal, 20, 3, 1941.

Background to the Problem in New Zealand.—Physics and engineering students reach the equivalent of S III mathematics, occasional ones going as far as honours standard, at least in part. Chemists, on the other hand, have on the whole been satisfied with less, though the demand from them is increasing. When we turn to the biologists we find that most avoid mathematics as far as possible, and the same is the case for other potential users of mathematics such as the economists, psychologists and students of education.

Thus the most we can reckon on is three years' work in mathematics (S I, S II, S III). Because of the nature of the courses for a university

– 99 –

degree in New Zealand, a student has to cover a range of subjects, so we find students nominating for S I mathematics who have no intention of pursuing the subject further (for example, at Victoria University College, S I, 190 and S II 60). It seems to me that serious consideration will have to be given to making the first year course more of an orientation course with less emphasis on the technical aspects than at present. In fact, I consider that S I mathematics should aim principally at presenting mathematics to the student as one of the great developments of the human intellect. Indeed, I am not so sure that in this way a better foundation might not be laid for the later work of those who will pursue the subject further.

If this view is accepted, then we shall have at most two years for the majority of science students in which to make them mathematically literate. This time will be fully occupied with what is usually regarded as “pure” mathematics and would allow of few excursions into the field of applied or quantitative mathematics. I consider that any attempt to increase the extent of such excursions would act detrimentally on the standard of the work attained, and this we can ill afford.

Where then the applicable mathematics? I would suggest that, in the main, it is best treated as a post-graduate study. This belief is fully confirmed by overseas experience. (See Nature, 158, 690, 1946.)

To meet the situation two types of courses are required:—(a) General—surveys of methods over a wide field. It may be possible in some circumstances to give these concurrently with the student's other work. The aim of such courses would not be the attainment of technical facility, but to indicate to the student where mathematics can be of assistance. A further important feature would be the instruction of the student in how to express his problem mathematically. The present mechanics (so-called applied mathematics) course enables us to treat this matter in a small way, but it is too restricted. A lack of knowledge of the resources of mathematics very often leads to too great a simplification of the problem under consideration, with the result that the solution arrived at is often of very little value. (b) Special—courses giving a detailed discussion of a particular field. This work must definitely be postgraduate. Such courses would be designed to meet the needs of restricted groups of workers.

Organisation Required.—The immediate requirements can possibly be satisfied by adequate liaison between the mathematics departments and outside organisations. As for instruction, the university could provide some of it, and I am sure that suitable lecturers could be found for some topics among the members of government departments and industrial organisations. In this connection the possibility of setting up temporary lectureships for such workers is worth consideration. It would give an opportunity for the carrying out of some research in conjunction with the teaching duties.

The specialised courses could well be the training ground for a class of research worker for which there is an increasing, though from the nature of it, necessarily limited demand—the mathematical technologist—the name coined for the mathematical consultant in scientific and industrial research. Such workers would be drawn from all fields and not necessarily only from among those with a highly specialised mathematical background. The main preliminary requirements would be an aptitude for the mathematical approach and the ability to co-operate with other workers.

It may prove to be the case that the approach outlined above will not be adequate for handling the demand. In that event it might prove desirable to set up a distinct department at one or more of the university colleges. But if the demand is sufficiently large I believe the whole question should be viewed in relation to the establishment in New Zealand of a College of Technology.

Conclusion.—At present the university is being asked to perform two functions:

(i) to provide a liberal education;

(ii) to train men and women for specific occupations.

The result is, in my opinion, that it is not doing either adequately. I do not believe that the second function stated above is that of a university at all, and my object has been to endeavour to suggest how, with our present organisation, we can separate the two without prejudice to either.

– 100 –

Abstracts Of Papers

A Wilson Cloud Chamber.

A description was given, with a demonstration, of a diaphragm type of Wilson cloud chamber using alpha particles.

Acoustical Analysis by Variable Density Sound Film.

A description was given of an optical apparatus which the author had devised, which gives the Fourier transform of any ordinary function, provided this function is presented in the form of a light-variation or density-variation such as is the case with sound films. Such Fourier analyses are important in a large number of physical problems, and it was pointed out that in the case of sound waves recorded on films the analysis is of special interest in that it gives the acoustic spectrum of the original sound. The apparatus performs the analysis almost instantaneously (within .05 second), so that a record may be fed continuously through the machine and the changing character of the spectrum observed visually or photographed on a second moving film. A description was given of the essential features of the apparatus, namely, (1) a rotating glass disc of special construction, having upon its surface a grid of lines or fringes with sinusoidal density-variation, (2) a fine slit which limits the light transmitted by the disc to a narrow diametral portion, (3) an optical system for projecting an image of this slit on to the sound-track or vice versa, (4) a photoelectric cell to receive the resulting light signals, (5) an audio amplifier to amplify them and transmit them to an oscillograph, and (6) an arrangement to provide the oscillograph with a sinusoidal time base, synchronous with the rotation of the disc. Theory was given to show that the oscillograph trace then constitutes the analysis which is required.
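A modern digital counterpart of such a running analysis (not the optical method described above) can be sketched as a succession of short-time Fourier transforms; all parameters in the sketch are arbitrary illustrative choices.

```python
# A digital analogue of a running spectral analysis: successive short-time
# Fourier transforms of a recorded signal.  Window length, hop and sample
# rate are assumed values chosen purely for illustration.
import numpy as np

def running_spectrum(signal, sample_rate, window_len=1024, hop=256):
    """Return the frequencies and successive amplitude spectra of overlapping windows."""
    window = np.hanning(window_len)
    frames = []
    for start in range(0, len(signal) - window_len, hop):
        segment = signal[start:start + window_len] * window
        frames.append(np.abs(np.fft.rfft(segment)))
    freqs = np.fft.rfftfreq(window_len, d=1.0 / sample_rate)
    return freqs, np.array(frames)

# e.g. a 440 c/s test tone sampled at 8,000 samples per second
t = np.arange(0, 1.0, 1.0 / 8000.0)
freqs, frames = running_spectrum(np.sin(2 * np.pi * 440 * t), 8000)
print(freqs[np.argmax(frames[0])])      # spectral peak near 440 c/s
```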

The author explained that after the apparatus had been designed and was in the experimental stage, the same method was described by Born, Fürth, and Pringle (Nature, p. 756, 1945), who had evolved the method independently and had applied it successfully to the analysis of a variety of mathematical functions. The above authors used amplitude-modulation of the oscillograph trace, but intensity-modulation was shown to be more appropriate in the present investigation, in order to permit of a continuous or running analysis.

While such running analyses of vibrations would find a useful application in the study of a variety of physical phenomena (mechanical, electrical, acoustical, etc.), the promising field of research offered by speech sounds had been a big incentive in the development of the method. Following on a description of the results obtained with standard wave-forms, which were used as tests of the apparatus, photographs were shown which had been obtained from actual speech sounds recorded on variable-density sound film. The changing nature of the acoustic spectrum, as the vowel quality and the inflection of the voice changed, produced a photographic pattern which enabled certain phenomena of speech production to be studied. A number of harmonic frequencies could be detected, evidently having as their fundamental, at any particular instant, the frequency of vibration of the vocal cords. Changes in vowel quality could be seen to correspond to changes in the relative intensity of the various harmonics, because of variations of the vocal resonances in the cavities formed by the throat, tongue, lips, etc., during articulation. The probable value of these characteristic and readily intelligible patterns in connection with phonetic and related studies was discussed. However, it was not possible to claim complete novelty for the patterns, as during the course of the research a group of collected papers had appeared (Journ. Acoustical Soc. of America, 18, 1, July, 1946) describing a large-scale investigation by a team of workers at the Bell Telephone Laboratories, New York. Here an alternative method, using magnetic tape recording, had been evolved for producing speech patterns, examples of which were shown and discussed. These workers were thus the first to achieve success in what is referred to in one of the papers as the long search for a legible and quantitative display of speech.

In conclusion, the author wished to acknowledge the invaluable assistance of Messrs. C. F. Coleman and J. W. Lyttelton in carrying out the research.

– 101 –

A G-M Counter Used in Prospecting Bores.

A portable Geiger-Muller counter has been designed for remote operation of the G-M tube through 2,000 feet of cable. It has been used to measure gamma-ray intensities at various depths in prospecting bores, and cosmic-ray intensities in deep water.

Some Consequences of a Modified Equation of State.
By D. B. Macleod, Canterbury University College.

A modification of van der Waals' equation was suggested, assuming the volume of the molecule to be a function of pressure. Various consequences of this were examined and compared with experimental results. In particular, a possible explanation of liquid helium II was discussed in the light of this theory.

The paper summarised five articles recently published in the Transactions of the Faraday Society. (1) Van der Waals' Equation of State and the Compressibility of Molecules. XL, (10), 1944, 439–47. (2) A Calculation of the Latent Heat of Vaporisation based on a Revised Equation of State. XLI (3), 1945, 122–6. (3) On the Direct Calculation of the Viscosity of a Liquid, both under Ordinary and High Pressures, on the Basis of a New Equation of State. XLI (11, 12), 1945, 771–7. (4) On Some Theoretical Consequences of a Revised Equation of State and a Possible Explanation of Liquid Helium II. XLII (6, 7), 1946, 465–8. (5) A New Explanation of Liquid Helium II (with H. S. Yubsley). XLII (9, 10), 1946, 601–16.

Principles of Molecular Distillation.

A vacuum still with the distance from the distilling surface to the condenser reduced to a minimum is termed a “molecular still.” Molecular distillation permits the fractionation of mixtures of heavy long-chain molecules (such as liver oils) without decomposition. Apparatus for such work was described and some of the design difficulties discussed.

A full description of an industrial centrifugal still and equipment will shortly be published in the N.Z. Journal of Science and Technology.

Instrument Testing and Development.

A brief survey was given of the activities of the Instrument Testing and Development Section of the Dominion Physical Laboratory, together with a description of methods and apparatus used. Illustrative examples were shown.

A New Optical Polishing Abrasive.

An account was presented of the various types of abrasive materials used on optical glass, with special reference to the use of titanium oxide as a polishing abrasive.

Analytical Balances and Their Faults.

A description was given of common faults in analytical balances, together with methods of correcting them. A brief outline of the requirements of reliable instruments was supplemented by a description of suitable mountings for them.

The Present Stage of the Solar Cycle.
By I. L. Thomsen, Carter Observatory.

The paper reported New Zealand observations showing the present trends of solar activity, and indicating what region of the upper part of the solar cycle may have been reached at the present time. A comparison was made with long-term predictions and the method of indicating the solar activity was described.

– 102 –

Cosmic Rays.
By E. Marsden, Department of Scientific and Industrial Research.

A summary was given of recent advances abroad, especially in regard to latitude variation and altitude variation and to the new photographic technique. Also a programme of observations in New Zealand, and possibly the Antarctic, was submitted for discussion.

Lunar Short-Wave Radiation.

Evidence was presented indicating that radiation from the moon contributes to the ionisation of the upper atmosphere. Analysis of observations of long-distance radio transmission shows periodic maxima corresponding with the period of synodical revolution of the moon.

Notes on the Aurora Australis.
By I. L. Thomsen, Carter Observatory.

Before 1933 very little exact knowledge was available concerning the appearance of the Aurora Australis as seen from New Zealand. Up to that date most of our knowledge of the Aurora Australis had been obtained from the reports of various Antarctic expeditions, commencing with Cook in 1773, which were of necessity spasmodic and based on short periods of one or a few years. Moreover, it is worth noting that, by coincidence, most of these expeditions took place in the period of the solar cycle from one and a-half years after maximum to the minimum period.

The work commenced by the late Mr. M. Geddes in 1933 and at present being continued as far as possible by the Carter Observatory has given us records of auroral displays over one and a-half sunspot cycles, which by virtue of this long period should provide results of considerable interest. The time has now arrived when this recorded data should be reviewed and analysed. It is considered that each New Zealand aurora recorded has some significance for the polar auroral activity as a whole, because in each case it indicates a northward advance of the zone, and thus an increase above what might be termed “quiet” auroral periods.

Perhaps the most fundamental general result appearing from the work to date is that, as far as can be seen, the southern aurorae are similar in form and height to the northern aurorae. Despite this, the many statements by travellers to the effect that although the same forms may be apparent, the northern aurorae have a quality lacking in the southern aurorae, would indicate the desirability of a direct comparison by the same observer. The same sequence of forms appears to take place as in the northern hemisphere, and several of the remarkable auroral forms studied by Stormer have also been clearly recognised.

The great bulk of the New Zealand work, however, has been of the purely visual type without the use of instruments and there are therefore no data available concerning auroral heights remotely comparable with those of the northern hemisphere. Geddes, working under great difficulties in his spare time, has published results of measures from 18 duplicate photographs. While these indicate that the auroral forms are of the same order of height as in the northern hemisphere, they are not sufficient to provide conclusive data. The cameras were lent to New Zealand by Professor Stormer and are still here. No work has been done since the time of Geddes, and the present policy is that unless a good base-line can be established, numerous photographs of good quality obtained and their rapid measurement and reduction undertaken, it would be better not to continue the work until these conditions can in good measure be fulfilled. An attempt is being made to institute a programme of auroral height measurements.

Another feature of the Aurora Australis appears to be the rather large expansion and contraction of the auroral zone in sympathy with the solar cycle, compared to the northern auroral zone. This is shown to explain the discrepancy which appeared to exist between the observation of Mawson at Macquarie Island at sunspot minimum that the aurorae were seen in the southern sky, and the New Zealand observations of Geddes at sunspot maximum that the auroral forms were either directly over or to the north of Auckland, Campbell, and Macquarie

– 103 –

Islands. It has been further confirmed by recent observations at Campbell Island where aurorae were confined to the southern sky during the last sunspot minimum period, but are now beginning to appear in the northern sky at a time close to sunspot maximum. The study of geographic position of southern auroral forms would therefore appear to be of some importance.

No work, as far as is known, has been carried out in the southern hemisphere on auroral spectroscopy or photometry.

At present, auroral studies are based on numerous reports sent in to the Carter Observatory by voluntary observers scattered all over New Zealand, and from Tasmania and Australia. The establishment of a scientific party on Campbell Island which makes auroral observations is a most valuable aid, and its importance cannot be too strongly stressed. Valuable co-operation exists between the Magnetic Observatory, Christchurch, and the Carter Observatory. The outstanding need at the present moment is for additional staff for the purpose of assisting in the review of material at present on hand and the better organisation of work for the future.

Very Soft X-rays.

A brief description was given of the development of the technique of soft X-ray spectroscopy. This region of the spectrum is distinguished by the extreme absorbability of the radiations and by the fact that the wave-lengths are usually too great to be examined by the normal procedure of crystal spectroscopy. It may be considered to embrace all X-ray wave-lengths greater than about 10Å.

The early investigations were mostly performed by a difficult photo-electric critical-potential technique. Whilst this showed the presence of these soft radiations, and whilst it yielded results of some value, many of the effects observed were difficult of interpretation. It was superseded by the plane-grating method, in which the radiations were allowed to fall on the grating at almost grazing incidence. At very small glancing angles the radiation is totally reflected owing to the fact that the refractive index of any material is, for X-rays, less than unity. Hence adequate intensity may be obtained from the grating. Fortunately, also, the grazing-incidence position leads to great dispersion and compensates in some measure for the coarseness of ruled (as opposed to crystal) gratings. Finally, provided that it is only a millimetre or so wide, the grating at grazing incidence is self-focusing. It was shown by Compton and Doan that ordinary X-ray spectra could be obtained from a grating used in this way; and by Osgood and Thibaud it was shown to be the ideal means of investigating the soft X-ray region. (Slides of a vacuum spectrometer designed by L. P. Chalklin and the writer were shown, together with illustrations of the spectra obtained.) It was now possible to measure the emission lines and absorption edges of the soft X-ray region and so to check up the values for those energy levels lying near the “surface” of the atom. In the course of this work it was clearly demonstrated that it was possible to obtain radiations due to transitions between energy levels of the same electronic “shell,” e.g., the Miii—Miv, v radiation of molybdenum.

The plane-grating method suffered from the limitation of resolution and of intensity imposed by the small width and consequent small number of rulings of the grating. By employing a concave grating it was possible to maintain the focusing properties and at the same time to increase the size and the resolution of the instrument. All modern soft X-ray spectroscopy is performed with such gratings. The slit and the photographic plate are usually placed on the Rowland circle of the grating, but owing to the necessity for grazing incidence and the small angles of diffraction, the slit, grating and plate are close together, and the radiation strikes the plate at a very oblique angle. (A slide was shown of a concave grating instrument designed by S. S. Watts and the writer. This instrument was planned to have the merit of being adjustable by optical means without the necessity for tedious trial-and-error adjustments in which each test would mean a separate sealing and evacuation of the apparatus. A slide of a vacuum spark spectrum showed, by the resolution of close sharp lines, that the theoretical resolving power of the instrument was in fact attained.) In soft X-ray, as in ordinary X-ray spectroscopy, Siegbahn and his co-workers have played a predominant part.
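The geometry referred to here follows the ordinary grating equation written in glancing angles, mλ = d(cos α − cos β). The short sketch below, with assumed numerical values, shows how small the angles remain at soft X-ray wave-lengths.

```python
# Sketch of grazing-incidence grating geometry: m*lambda = d*(cos(alpha) - cos(beta)),
# with alpha and beta measured from the grating surface.  The wavelength, ruling
# and incidence angle below are assumed illustrative figures.
import math

def glancing_diffraction_angle(wavelength_A, lines_per_mm, alpha_deg, order=1):
    """Glancing angle of the diffracted beam, in degrees."""
    d_A = 1e7 / lines_per_mm                     # groove spacing in angstroms
    cos_beta = math.cos(math.radians(alpha_deg)) - order * wavelength_A / d_A
    return math.degrees(math.acos(cos_beta))

# e.g. a 44 A line (roughly the carbon K region), 600 lines/mm, 2 degree incidence
print(glancing_diffraction_angle(44.0, 600, 2.0))   # diffracted beam near 4.6 degrees
```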

– 104 –

The soft X-ray technique affords the best method of investigating transitions in which valence electrons move to levels within the atom. Such transitions always give X-ray lines of appreciable width. Measurement of this width allows the calculation of the energy spread of the valence electrons. It has been measured for many metals by O'Bryan and Skinner, and in most cases it has been found to be in remarkably good agreement with the simple Sommerfeld theory of metals. (A slide was shown to illustrate the wide emission bands obtained by O'Bryan and Skinner.)

In the more detailed theory of solids the levels at the “surface” of the atom are replaced in the solid by bands of levels. Neglecting the small effect of thermal agitation, it may be said that the valence electrons of the solid fill the lowest of these close levels, but, by the Pauli principle, only one electron can lie in each. Hence all the lowest levels are full and all the higher ones are empty. This sudden discontinuity between full and empty levels should give a sharp short-wave edge to the emission band, and this is, in fact, always found in the case of metals. In the case of insulators, however, it is believed that the valence electrons lie in completed bands of levels and that the lowest empty band of levels is an appreciable energy step above. Application of an electric field cannot in such a case accelerate the electrons, because this would imply a small increase in their energy and there are no empty energy levels of the required value available. In short, an insulator should be characterised by a full valence band, and there should be no sudden division between filled and unfilled levels and no sharp short-wave edge to the soft X-ray emission band. In some recent work, in which a new method of measuring the intensities in the bands has been developed by the writer, the K-emission band of the insulator diamond has been examined. Although it is not symmetrical in form (and is not actually expected to be), it shows very definitely that there is no sharp edge. (A slide of the diamond curve was shown.)

Until the intensity distribution in the emission bands has been studied thoroughly by the soft X-ray techniques, and until the results have received adequate theoretical interpretation, the behaviour of the valence electrons in solids will not be fully understood. The urgency of such investigations is manifest from the fact that the valence electrons govern the conductivity and the cohesion of metals. In conclusion, it may be pointed out that now, in addition to atomic spectra and band spectra, we have solid spectra, and that work on this subject is in its infancy.

Curve-fitting by Least Squares.

The elementary methods of least squares as applied to curve fitting, including the method of orthogonal polynomials, were discussed.

Powers Punched Card Equipment and Computational Problems.

The application of Powers-Samas punched card equipment to the solution of the problems mentioned in the previous paper was demonstrated on a 21-column Powers equipment.

General Principles of Radar Design.
(Notes from Lecture)
By D. M. Hall, Dominion Physical Laboratory.

At the beginning of 1939 the radar sets used in England were almost exclusively those which operated on approximately 7 metres and were known as C.H. (Chain Home) sets. These sets had fixed aerial systems and were used solely for detecting high-flying aircraft. The design of these sets was vastly different from that of the sets used on shipping these days to pick up marker buoys at 50 yards, or of the type of set used automatically to track flying bombs.

I will now mention briefly the fundamental factors which determine the design of a set once its purpose and specifications are known. One of the most important of these is the R.F. frequency of the transmitted pulse. In the earliest sets the maximum frequency available from transmitting tubes of suitable power was approximately 30 megacycles.

– 105 –

Now it is possible to produce more than 200 k.w. on 10,000 mc/s. The need for higher frequencies was made very evident in the early days of the war, because the higher the frequency for a given size of aerial the narrower the beam becomes, and for radar sets near sea-level it is then possible to illuminate the surface of the sea for a greater distance. This was essential for picking up low-flying aircraft, shipping, submarines, etc. With a narrower beam it is possible, too, to obtain better resolution. These days it is usual for marine radar sets, air-borne radar sets, and some land-based ones to operate on approximately 10,000 mc/s. Although higher frequencies have been used, another factor comes into play at or above this frequency, namely the meteorological effect. It is evident from the above that for a set to be used in all weathers in tropical regions for long ranges, 3,000 mc/s., or 10 cm., appears to be about the limit. However, this feature has its benefits as well as its detrimental effects, since clouds containing a high moisture content, besides attenuating radio waves, also reflect them. This means that with 3 cm. sets it is possible to track thunderstorms for distances of 100 miles or more and to forecast with accuracy when they will be overhead. Lower frequencies such as 100 mc/s. are by no means out of date, as these frequencies can still be used to pick up targets at long range that are not near the horizon. A frequency of 75 mc/s. was used to pick up the moon. It is possible at the lower frequencies to use triode valves and hence to use a long pulse length, of the order of 10μ sec. With this long pulse length the receiver requires only a relatively narrow band pass, and in addition it is possible to use R.F. stages of high gain in the receiver.
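
(The narrowing of the beam with rising frequency for a fixed aerial can be put in figures. The sketch below assumes the usual rule of thumb that the half-power beam-width is roughly 70λ/D degrees and takes a 3-metre aperture as an example; neither figure is from the lecture.)

# Beam-width versus frequency for a fixed aerial aperture, using the
# rule of thumb theta ~ 70 * lambda / D degrees (valid when D >> lambda).
# The 3 m aperture and the factor 70 are assumptions for illustration.
C = 3e8  # velocity of light, m/s

def beamwidth_deg(freq_hz, aperture_m, k=70.0):
    wavelength_m = C / freq_hz
    return k * wavelength_m / aperture_m

for f_mc in (200, 1000, 3000, 10000):        # frequency in Mc/s
    theta = beamwidth_deg(f_mc * 1e6, 3.0)
    print(f"{f_mc:6d} Mc/s : beam-width ~ {theta:6.2f} degrees")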


Pulse Length and Shape.—The pulse length is the main factor in determining the maximum and minimum range of a set. Short pulses are essential if a short minimum range is to be achieved: a 1μ sec. pulse if ranges of 164 yards or more are to be received, a 1/10μ sec. pulse if ranges of less than 50 yards. The pulse length also determines how far apart two targets in line must be before they can be separated; in other words, the pulse must not cover the two targets at once, for if it does the two targets merge into one. When a greater range is required, the pulse length is increased, as this enables a narrow band receiver to be used. For triodes the pulse may be as long as 10μ sec., but for magnetrons the upper limit is approximately 3μ sec., due to moding. Plate modulation is generally used for triodes, and cathode modulation for magnetrons, in the latter case a negative pulse being applied. Generally, a rectangular-shaped pulse is required for magnetrons, with a 5 per cent. rise and fall and no voltage spikes.
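
(The figures quoted for minimum range follow from the two-way travel of the pulse: no echo can be received until the transmitter pulse has ended, so the minimum range, and likewise the range discrimination between two targets in line, is roughly cτ/2 for a pulse of length τ. A sketch of the arithmetic, on that assumption alone:)

# Minimum range / range discrimination set by the pulse length:
# R ~ c * tau / 2 (two-way travel during the pulse).  Practical sets do
# somewhat worse owing to receiver recovery.
C = 3e8                   # m/s
YARDS_PER_METRE = 1.0936

def range_for_pulse_m(tau_s):
    return C * tau_s / 2

for tau_us in (1.0, 0.25, 0.1):
    r = range_for_pulse_m(tau_us * 1e-6)
    print(f"{tau_us:4.2f} microsec pulse -> {r:6.1f} m = {r * YARDS_PER_METRE:5.0f} yd")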

Pulse Recurrence Frequency.—This ranges from 50 c/s. to 5000 c/s., depending on the type of set used. First, the repetition frequency must be kept low enough to enable the echoes from targets to be received on one sweep; i.e., a P.R.F. of 1000 c/s. enables echoes up to 93 miles to be obtained. If echoes up to 186 miles are to be received, then the P.R.F. must not be more than 500 c/s. The P.R.F. is kept as high as the ratings of the modulator system allow, to enable the maximum number of pulses to be received from a target while the beam is sweeping past it. This also helps to eliminate the fading which may occur between successive pulses, and increases the intensity on intensity-modulated display tubes.
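
(The limits quoted above follow from requiring every echo to return before the next pulse is transmitted, i.e. a maximum unambiguous range of c/2f for a recurrence frequency f. A sketch on that assumption only:)

# Maximum unambiguous range set by the pulse recurrence frequency:
# R_max = c / (2 * PRF), i.e. every echo must be back before the next pulse.
C = 3e8                       # m/s
METRES_PER_MILE = 1609.3

def unambiguous_range_miles(prf_hz):
    return C / (2 * prf_hz) / METRES_PER_MILE

for prf in (500, 1000, 5000):
    print(f"P.R.F. {prf:5d} c/s -> maximum unambiguous range ~ "
          f"{unambiguous_range_miles(prf):5.0f} miles")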


Peak Power.—In describing the power output of a radar transmitter two terms are used: (1) peak power, which is the average power during a pulse; (2) average power, which is the average over the repetition period. Peak powers from a kilowatt to 5 megawatts have been used in transmitting equipment, and as much as 250 k.w. is generated by 3 cm. magnetrons. However, although the peak-power output may be very high, the average power output is small because of the ratio of the pulse length τ to the pulse interval T: Peak power/Average power = T/τ.

The maximum range is proportional to the fourth root of the power output, hence to double the range of the set the power must be increased 16 times.
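
(The relation between peak and average power, and the fourth-root law for range, can be put in figures as below. The 250 k.w., 1μ sec., 1000 c/s. example is an assumption for illustration, not a set described in the lecture.)

# Average power = peak power * duty cycle (tau * PRF), and maximum range
# scales as the fourth root of transmitted power.  Example figures assumed.
def average_power_w(peak_w, pulse_length_s, prf_hz):
    return peak_w * pulse_length_s * prf_hz

def range_factor(power_ratio):
    return power_ratio ** 0.25

p_avg = average_power_w(peak_w=250e3, pulse_length_s=1e-6, prf_hz=1000)
print(f"250 kW peak, 1 microsec pulse at 1000 c/s -> average power ~ {p_avg:.0f} W")
print(f"16x power -> range x {range_factor(16):.2f}")   # 2.00
print(f" 2x power -> range x {range_factor(2):.2f}")    # 1.19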

Beam Width of Aerial System.—The gain of the antenna will also affect the range of a radar set. Gain and area are related in terms of wave-length as follows:—

– 106 –

G = KA/λ²

where K is a constant of proportionality, A is the area of the aerial aperture, and λ is the wave-length at which the antenna operates. Therefore, for a given wave-length and transmitter power, the maximum range is proportional to √A. The beam-width is given by θ ∝ λ/D, where D is the aperture diameter.

Increasing the antenna area has the additional effect of decreasing the beam angle of the antenna. For aircraft and shipping the aerial size is, of course, limited, and most shipborne radar sets use an aerial of no more than 4 ft. diameter. On 3 cm. this gives a beam of approximately 2 degrees to half power. For the navy, and also on bombers, gyro-stabilised platforms were used to provide continuous illumination of the target during evasive tactics. Other types of aerials used, besides the parabolic type, include slotted wave-guides and, for accurate direction finding, nutating dipoles, which also enable the radar set automatically to follow the target.
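
(A numerical check of the 4 ft., 3 cm. example, using G = KA/λ² with K taken as 4πη for an assumed aperture efficiency η of about 0.6, and the usual 70λ/D rule for the half-power beam-width. Both figures are rules of thumb assumed here, not values from the lecture.)

# Gain and beam-width of a paraboloid aerial: G = K * A / lambda^2 with
# K = 4 * pi * eta (eta = assumed aperture efficiency), and
# theta ~ 70 * lambda / D degrees.  Illustrative figures only.
import math

def dish_performance(diameter_m, wavelength_m, efficiency=0.6):
    area = math.pi * diameter_m ** 2 / 4
    gain = 4 * math.pi * efficiency * area / wavelength_m ** 2
    beamwidth_deg = 70 * wavelength_m / diameter_m
    return gain, beamwidth_deg

g, bw = dish_performance(diameter_m=4 * 0.3048, wavelength_m=0.03)
print(f"4 ft. aerial on 3 cm.: gain ~ {10 * math.log10(g):.0f} db, "
      f"beam-width ~ {bw:.1f} degrees")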

Rates of Angular Rotation.—Generally, for land-based sets a speed of 6 r.p.m. is used, but where information is required quickly, say, by a ship moving up a channel, higher speeds of rotation in the vicinity of 60 r.p.m. are used. Scan rates are made as slow as other factors permit, to allow an increased number of pulses to illuminate the target. Halving the scan rate produces a system gain of 1.5 db. The decay time of the cathode-ray display screen must be closely related to the scan rate if all the advantages of a proper scan rate are to be realised.
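
(The 1.5 db figure is consistent with the common assumption that screen integration improves roughly as the square root of the number of pulses striking the target per scan; halving the scan rate doubles that number. The sketch below uses an assumed P.R.F. of 1000 c/s. and a 2-degree beam, neither taken from the lecture.)

# Pulses on target per scan = PRF * beam-width / scan rate (deg per sec),
# and the gain from doubling that number, assuming the display integrates
# roughly as the square root of the number of pulses.  Figures assumed.
import math

def hits_per_scan(prf_hz, beamwidth_deg, rpm):
    scan_rate_deg_s = rpm * 360 / 60
    return prf_hz * beamwidth_deg / scan_rate_deg_s

for rpm in (6, 12, 60):
    n = hits_per_scan(prf_hz=1000, beamwidth_deg=2.0, rpm=rpm)
    print(f"{rpm:3d} r.p.m. -> ~{n:5.1f} pulses on target per scan")

print(f"Halving the scan rate: gain ~ {10 * math.log10(math.sqrt(2)):.1f} db")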

Other Factors Affecting Minimum Range.—If echoes are to be obtained immediately following the transmitter pulse, then several other steps must be taken.

(1) A fast recovery in the TR tube; this depends on the type of gas used.

(2) The receiver must not paralyse; hence a pulse is applied, during the transmitter pulse, to the suppressors of two of the I.F. tubes to reduce the gain of the receiver.

(3) Low time constants in the components of the receiver.

Required Accuracy in Bearing.—Aircraft and marine radars do not usually give more accurate bearings than 1°. If greater accuracy than this is required, then either: (a) Lobe switching or (b) a narrow beam must be employed, and this necessitates a large aerial.

Accuracy in Range.—A general-purpose search radar is made accurate to a fraction of a mile. For gunnery work and other uses a range accuracy of a matter of yards can be obtained. In radar this accuracy remains constant as long as the target is visible on the screen.

Weight of Equipment.—In airborne equipment, engine-driven 800 c/s. alternators are used, to save weight and space, as this cuts down the size of transformers and the amount of filtering required. Shipborne sets use 50 c/s. up to 500 c/s. to save space.

When a radar set is required for a specific purpose, the above-mentioned factors are the main ones to be taken into consideration; but time is too short to deal with many others, such as transmission lines, aerial turning mechanisms, etc. The majority of sets built these days use centimetre wave-lengths to enable high definition to be obtained.

Radar and Radio Methods of Position-fixing and Navigation.

Systems providing assistance to the navigation of ships, and employing techniques developed during the war, are briefly reviewed.

Radar.—Marine navigational radar is a simple P.P.I. radar designed for high discrimination and good minimum range. An example developed in Britain provides a beam-width of 2 ½ degrees and minimum range and range discrimination each of 50 yd. The latter figure implies a pulse length of ¼ microsecond, and

– 107 –

a receiver with a band-width of at least 6 mc/s. and incorporating special measures to prevent “blocking” by the transmitter pulse. P.P.I. scales giving edge ranges from 3,000 to 80,000 yd. make displays available which are suitable for navigation of close waters, for coasting, or for making a landfall. Demonstrations in Britain have shown that navigation in busy waters in the worst visibility is feasible. This is much facilitated by a display method which provides continuous visual comparison between chart and P.P.I.

The wave-length employed in the radar described above is 3 cm.; at a wave-length of 10 cm., less interference is experienced from heavy tropical rain, but the same discrimination and detection of low-lying land is not possible.

At a wave-length of 3 cm., an aerial 5 ft. wide is needed to give a 2 ½ degree beam-width. The British practice is to rotate the aerial at over 20 r.p.m. in order to obtain information quickly; American practice prefers a slower rate. The aerial must be mounted high enough and far enough forward in the ship to avoid serious masking on ahead bearings.

The pulse repetition frequency should be high, in order to ensure adequate brightness of the display, but too high a frequency may lead to the appearance of long-range echoes on the trace subsequent to that to which they belong. The limit is about 1500 p.p.s.

Hyperbolic Systems.—The group of position-fixing systems known as the hyperbolic systems depends on the measurement of range differences from three or more known points. The range differences may be measured as the intervals between the arrival times of pulses, as in Gee and Loran, or as the differences in phase between C.W. signals, as in Decca. The locus of points of constant range difference from two fixed points is one of a family of hyperbolae of which the two fixed points are the foci. Thus from a pair of fixed stations a position line can be found; from a second pair a second position line can be found, and the point of intersection of the two is the position sought.
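
(A sketch of how such a fix might be computed: given the measured range differences from two pairs of stations, the intersection of the two hyperbolic position lines can be found by a simple Newton iteration. The station positions and readings below are invented purely for illustration and do not describe any of the systems named above.)

# Hyperbolic fix: solve for the point whose range differences from two
# pairs of fixed stations match the measured values, by Newton iteration
# with a numerically estimated Jacobian.  All figures invented (km).
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def residuals(p, pairs, diffs):
    return [dist(p, a) - dist(p, b) - d for (a, b), d in zip(pairs, diffs)]

def hyperbolic_fix(pairs, diffs, guess, steps=25, h=1e-4):
    x, y = guess
    for _ in range(steps):
        f = residuals((x, y), pairs, diffs)
        if max(abs(v) for v in f) < 1e-9:
            break
        fx = residuals((x + h, y), pairs, diffs)
        fy = residuals((x, y + h), pairs, diffs)
        j = [[(fx[i] - f[i]) / h, (fy[i] - f[i]) / h] for i in range(2)]
        det = j[0][0] * j[1][1] - j[0][1] * j[1][0]
        x += (-f[0] * j[1][1] + f[1] * j[0][1]) / det
        y += (-f[1] * j[0][0] + f[0] * j[1][0]) / det
    return x, y

# Master station A with slaves B and C; true position (60, 85) km.
A, B, C = (0.0, 0.0), (120.0, 0.0), (0.0, 100.0)
true_pos = (60.0, 85.0)
diffs = [dist(true_pos, A) - dist(true_pos, B),
         dist(true_pos, A) - dist(true_pos, C)]
print(hyperbolic_fix([(A, B), (A, C)], diffs, guess=(50.0, 50.0)))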

In the pulse systems, cathode-ray-tube methods resembling radar ranging methods are used for timing. With Decca, continuously rotating phase meters record the changes.

Gee operates in the band 20–80 Mc/s., Loran at about 2 Mc/s., and Decca in the neighbourhood of 100 kc/s. The operating ranges are about 100 miles, 600 miles, and 250 miles respectively, by daylight, using ground-waves. Sky-waves at night do not affect Gee, double the range of Loran, and halve the range of Decca. (Loran can discriminate and use sky-waves; Decca cannot discriminate sky-waves from ground-waves, with consequent errors.)

Low-frequency Loran on 180 kc/s. has had experimental trial and gives ranges of 1,500 and 3,000 miles by day and night respectively.

Decca has very high accuracy (0.05%), but can at present register only change of position. The pulse systems give absolute position, but with an accuracy not better than 0.3%.

Other Systems.—Consol, a system developed by the Germans, gives a bearing from fixed stations with an accuracy of 0.2 to 1 degree, to a range of about 1,500 miles, and uses a frequency of 200–500 kc/s.

Development effort in England and elsewhere is now being directed to giving useful radio aid to small ships at minimum cost. One approach is to attempt to provide by radio methods approximately the same facilities as those provided visually by a lighthouse. Proposed systems include (1) a beam which carries voice-modulation, speaking the bearing as it rotates; (2) two beams rotating, one at twice the speed of the other, and timed by a stopwatch to obtain bearing; and (3) a beam carrying pairs of pulses whose spacing varies with the bearing.

Radar Display Circuits and Techniques.

The principles of design of display circuits for radar search sets were discussed. Apparatus made in the Dominion Physical Laboratory and its use were described.