Volume 34, 1901

V.—Chemistry and Physics.

Art. XLVI.—Studies on the Chemistry of the New Zealand Flora.

[Read before the Wellington Philosophical Society, 5th November, 1901]

Part II.—The Karaka-Nut.
(Preliminary Note.)

The karaka-tree (Corynocarpus lævigata) is endemic to New Zealand and the surrounding islands. It is plentiful in the North Island, but its distribution in the South Island is very limited. It is the largest and commonest of all the trees in the Chatham Islands, where it attains a height of over 50 ft.

The kernel of the karaka-berry is known to be very poisonous in its raw state, but if suitably prepared by cooking and subsequent soaking the kernel forms a staple article of Maori food. A detailed account of the process as carried out by the Maoris is given in a paper by Skey.* Owing to the kindness of Mr. H. B. Kirk, Inspector of Native Schools, we have been informed that the process employed by the Morioris in the Chatham Islands is practically identical with that used by the Maoris.

The karaka-kernels have been investigated by Skey. The results of his examination showed—(1) That the kernels contain oil, sugary matter, gum, and amorphous proteids; (2) that the nuts lose their bitter taste when heated to 100° C. for four hours; (3) that animal charcoal removes from the acidified aqueous extract of the kernel a bitter crystalline substance. This compound was named “karakin,” but was not obtained in sufficient quantity for a satisfactory examination.

The authors have re-examined the karaka-nut. They find—(1.) That the aqueous extract of the nut yields much prussic acid on distillation. (2.) That air-dried kernels contain 14–15 per cent. of non-drying oil, which yields solid acids on saponification. (3.) That the sugars present are mannose and dextrose. (4.) That the aqueous extract, upon evaporation, even at 35°, in shallow pans, loses the greater part of its bitter taste. The concentrated extract contains no karakin

[Footnote] * Trans. N.Z. Inst., vol. iv., p. 318.


(see below), but a nitrogenous glucoside, corynocarpin, together with a highly soluble, non-nitrogenous, crystalline compound. These substances have not been detected in the fresh extract. (5.) That a compound agreeing in nearly all respects with Skey's karakin can be readily obtained from fresh kernels by extracting with cold alcohol, and subsequently distilling off the spirit in a partial vacuum. By repeated crystallization from hot alcohol the karakin is obtained in radiating acicular crystals. (6.) The quantity of karakin diminishes rapidly with the age of the nut. The yield from fresh nuts gathered in February, 1901, was 0.3 per cent.; nuts three months old only yielded 0.1 per cent.; after twelve months the nuts were still bitter, but only a small quantity of karakin was obtained from them.


Since this compound is the only substance in the fresh nut of any considerable interest, the method of preparation and properties of it shall alone be given in detail. The fresh kernels are first put through a sausage-machine, and then through the wooden rollers of a wringing-machine, and the mash well stirred with one and a half times its weight of methylated spirit and allowed to stand for thirty-six hours with repeated stirrings. The spirit is removed by a filter-press and the press-cake again extracted with alcohol. The united filtrates are distilled in a partial vacuum, at a temperature not exceeding 35°, until the greater part of the alcohol is removed. The turbid liquid gradually deposits crystals of karakin, together with a gummy bitter substance which can be removed by recrystallization from boiling alcohol. The pure karakin melts at 122°, dissolves easily in acetone, methyl alcohol, glacial acetic acid, acetic ether, and phenol; with difficulty in cold ethyl alcohol (0.4 gram in 100 cc.) and water. It is very sparingly soluble in ether and benzene. Deposited from hot concentrated solutions in water or alcohol, it separates as an oil, which subsequently becomes crystalline. The compound reduces Fehling's solution readily. After hydrolysis with dilute hydrochloric acid it gives a yellow precipitate when warmed with sodium-acetate and phenyl-hydrazine solution (glucoside reaction). It is highly nitrogenous. Analysis agrees with the formula (C5H8NO5)3.


                      Calculated.   Found.
C                  =     37.0        37.2
H                  =      4.9         4.8
N                  =      8.6         8.6
O (by difference)  =     49.5        49.4
                        100.0       100.0

Molecular weight in phenol solution: Calculated (C5H8NO5)3 = 486; found = 450.
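The quoted molecular weight of 486 corresponds to three C5H8NO5 units, and the analysis figures can be recomputed from that formula. The short sketch below uses the rounded atomic weights of the period (C = 12, H = 1, N = 14, O = 16), which is an assumption on my part; it reproduces the calculated column.

```python
# Recompute the "Calculated" analysis column for three C5H8NO5 units,
# using the rounded atomic weights of the period (an assumption).
ATOMIC_WEIGHT = {"C": 12.0, "H": 1.0, "N": 14.0, "O": 16.0}

def percent_composition(counts):
    """Return (unit mass, {element: mass per cent.}) for a dict of atom counts."""
    mass = sum(ATOMIC_WEIGHT[el] * n for el, n in counts.items())
    return mass, {el: 100.0 * ATOMIC_WEIGHT[el] * n / mass
                  for el, n in counts.items()}

mass, pct = percent_composition({"C": 5, "H": 8, "N": 1, "O": 5})
print("molecular weight of trimer =", round(3 * mass))   # 486
for el in "CHNO":
    print(el, "=", round(pct[el], 1), "per cent.")
```

The percentages come out at 37.0, 4.9, 8.6, and 49.4, agreeing with the table above.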

The characters given by Skey for the karakin prepared by the animal-charcoal method differ in two important respects from those above described. The melting-point according to Skey is 100°, and the substance contains no nitrogen. At first sight it would therefore seem that the two substances are not identical. From Skey's paper, however, it would appear that the karakin was not recrystallized, and this would account for the difference in the melting-points. The failure, on the other hand, to detect nitrogen in organic substances has occurred so often in the history of chemical research, more particularly before the application of the metallic-sodium test had become general, that the authors do not attach much importance to this apparent discrepancy. They would add that they have prepared karakin by Skey's method and found it to contain nitrogen, and to have the same melting-point as the compound already described.

The expenses in connection with this investigation have been defrayed by a grant from the Royal Society of London.

Art. XLVII.—Raoult's Method for Molecular Weight Determination.

[Read before the Wellington Philosophical Society, 5th November, 1901]

The teaching of practical chemistry at the present day differs greatly from the teaching in vogue twenty-five years ago. At that time qualitative analysis only was, as a rule, taught to the elementary student, and experimental proof of chemical theory was either ignored or only practised in the lecture-room. Nowadays, however, the teaching of qualitative analysis is usually prefaced by a series of simple quantitative experiments, performed by the students themselves, and designed to illustrate modern chemical principles. Such an introduction greatly facilitates the understanding of the science.

So far as we are aware, no attempt has been made to teach the practice of molecular-weight determination by Raoult's method to the elementary student, it being generally supposed that expensive apparatus is necessary for such determinations. As a matter of fact, the experiment may be successfully carried out with the simplest of school apparatus, and with a very small expenditure of time and material.


Raoult's law states that the depression in the freezing-point of a given solvent is directly proportional to the concentration of the solution and inversely proportional to the molecular weight of the dissolved substance—i.e.,


D ∝ w/(W × M),

where D = depression in freezing-point, W = weight of the solvent, w = weight of the dissolved substance, and M = molecular weight of the dissolved substance. So that, if K represent the depression which the molecular weight of any substance (in grams) will cause in 100 grams of the solvent,


M = (100 × w × K)/(W × D).

This method of determining molecular weights is in every-day practice amongst research chemists, giving good results even for substances with very high molecular weights. With such substances the observed depression is so small that an exceedingly sensitive, and therefore expensive, thermometer is required. For class purposes, however, we must make the depression large enough to be easily registered on a common thermometer. This is easily done by choosing a solvent whose depression constant (K) is large, and dissolving in it some substance whose molecular weight is small.

Now, of all common substances water has the lowest molecular weight, whilst phenol has the highest depression constant (72); indeed, 1 per cent. of water depresses the melting-point of phenol about 4° C.

The apparatus needed is illustrated in the figure. As it consists only of a test tube, common centigrade thermometer, cork, and brass-wire stirrer, no explanation is necessary.

To perform the experiment about 10 grams of good carbolic acid is weighed into the test tube, thoroughly melted by immersing for a few moments in hot water, and the freezing-point determined by thoroughly stirring until the superfused liquid begins to crystallize and the temperature indicated by the thermometer becomes steady. This operation should, of course, be repeated. About 0.1 gram of water is now added to the carbolic acid in the tube, and the freezing-point again determined. The water is conveniently added from a dropping-pipette, the number of drops being carefully counted, and the number of drops which make up a cubic centimetre being determined in a separate experiment.
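The arithmetic of the experiment is a one-line application of the formula; the sketch below assumes phenol's constant K = 72 (per 100 grams of solvent) as quoted in the text, and the illustrative weights are hypothetical, not the class's data.

```python
# M = 100 * w * K / (W * D), with w grams of solute dissolved in W grams
# of solvent and D the observed depression; K = 72 for phenol (from text).
def molecular_weight(w, W, D, K=72.0):
    return 100.0 * w * K / (W * D)

# Hypothetical reading: 0.1 g of water in 10 g of phenol depressing the
# freezing-point by 4.0 degrees gives the molecular weight of water.
print(molecular_weight(w=0.1, W=10.0, D=4.0))   # 18.0
```

A depression of about 4° for a 1 per cent. solution is just what the remark above about water in phenol would lead one to expect.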


The numbers obtained by the members of a large class of elementary students varied from 17 to 21 for the molecular weight of water. Still better results were obtained for the molecular weight of methyl alcohol. It is instructive to allow the students to perform a series of experiments at different concentrations. In the case of water in phenol the observed molecular weight increases very rapidly with the concentration (molecular association). Scarcely any such effect is noticed with methyl alcohol in phenol.

Art. XLVIII.—The Vapour Densities of the Fatty Acids

[Read before the Wellington Philosophical Society, 11th February, 1902.]

It is well known that a large number of substances have vapour densities at their boiling-points which are a little above those calculated from their molecular weights. This may in many cases be explained by the fact that the gaseous laws which are used in the calculations are not rigorously true at the point of liquefaction. In other cases, however, the abnormality is undoubtedly due to the fact that association of the molecules takes place at temperatures in the neighbourhood of the boiling-point.

The first substance to attract the attention of chemists was acetic acid. That the abnormality in this case is really due to the formation of molecular complexes is shown, first, by the fact that the normal vapour density is not reached till 110° above the boiling-point. Secondly, the value for the expression MW/T′ (where M is the molecular weight, W the latent heat of vaporization, and T′ the boiling-point on the absolute scale) is 15, while for liquids of normal molecular weight a constant value of about 21 is obtained. This low value can only be explained on the assumption that the molecules are associated in the gaseous state.

Similarly, it was found that normal butyric and isovaleric acids were associated, although to a less extent. In general it may be said that this is true of all the lower fatty acids and their derivatives, which do not decompose on heating. This is quite analogous to their behaviour in solution. In benzene and naphthalene most hydroxyl compounds, and especially acids, associate.* This is also true for the solvents bromoform, nitrobenzene, and parabromtoluene. Even in phenol,

[Footnote] * Auwers, Zeit. Phys. Chem., 1893, &c.


which owes its wide application in cryoscopy to its slight tendency to cause bodies to associate, the experiments of the authors have shown that, while the alcohols and other aliphatic compounds remain fairly normal, the fatty acids associate strongly with rising concentration. This, however, will be discussed in a future communication.

1. Acetic acid has already been examined in detail. The following are a few of the numbers:—*

Temperature. (Boiling-point, 119° C.)    Density. (Normal Molecular Weight, 2.08.)
125° 3.20
130° 3.12
140° 2.90
160° 2.48
190° 2.30
219° 2.17
230° 2.09
250° 2.08

On plotting these results as a curve it is found that the rate of dissociation of these molecular complexes is a direct function of the temperature for 50° above the boiling-point. Then the rate is gradually decreased till 230° C., when the vapour density becomes constant.

2. We can find no record of any determinations made with propionic acid, the next member of the fatty acids. Accordingly we performed several experiments by the ordinary method of Dumas.

(a.) At 146° C. and at 760 mm. the vapour density was found to be 51. The normal value is 37; consequently at 5° above the boiling-point the vapour is associated 38 per cent.

(b.) At 192° C. and at 755 mm. the vapour density was found to be 45, indicating 22 per cent. association.
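The percentages quoted can be recovered from the densities by taking the excess of the observed over the normal vapour density as a fraction of the normal value; a minimal sketch:

```python
# Association reckoned as the excess of the observed vapour density over
# the normal value, expressed as a percentage of the normal value.
def association_per_cent(observed, normal):
    return 100.0 * (observed - normal) / normal

print(round(association_per_cent(51, 37)))   # 38, propionic acid at 146 deg.
print(round(association_per_cent(45, 37)))   # 22, propionic acid at 192 deg.
```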

3. Normal butyric acid has already had its vapour density determined at different temperatures.† A normal value is reached at a temperature of about 100° above the boiling-point of the compound. By continuing the curve for butyric acid it is found that the vapour density 5° above its boiling-point would be 3.8—i.e., at that temperature it is associated 25 per cent. Thus for the three acids at 5° above their boiling-points—

Vapour associated.
Acetic acid 54 per cent.
Propionic acid 38 "
Butyric acid 25 "

[Footnote] * Cahours, “Comptes Rendus,” xx., 51.

[Footnote] † Cahours, loc. cit.


Thus it is noticed that the amount of association decreases with rising molecular weight, which fact is of universal occurrence for homologous bodies in the liquid state (Ramsay and Shields; Traube) as well as for substances in solution (Biltz).

In this case, however, it happens that the amount of association is inversely as the square of the molecular weight:—

                  Per Cent. Amount     Numbers Proportional to Inverse
                  of Association.      of Square of Molecular Weight.
Acetic acid             54                        52
Propionic acid          38                        36
Butyric acid            25                        26
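Taking molecular weights of 60, 74, and 88 for the three acids (standard values, assumed here), numbers proportional to 1/M² and scaled so that acetic acid gives 52 come out near, though not exactly at, the printed 36 and 26:

```python
# Numbers proportional to the inverse square of the molecular weight,
# scaled so that acetic acid (M = 60) gives 52. The match to the printed
# table is only rough.
weights = {"acetic": 60, "propionic": 74, "butyric": 88}
scale = 52 * weights["acetic"] ** 2
for acid, m in weights.items():
    print(acid, round(scale / m ** 2, 1))
```

The computed 34.2 and 24.2 suggest the proportionality was intended as an approximate rule rather than an exact law.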

Further experiments are in progress with the object of ascertaining whether the same law holds for the higher members of the series.

Art. XLIX.—The Latent Heats of Fusion of the Elements and Compounds.

Communicated by Professor Easterfield.

[Read before the Wellington Philosophical Society, 11th February, 1902.]


Crompton states that Aw/Tv = K, where A is atomic weight, w latent heat of fusion, T melting-point on the absolute scale, and v the valency of the element. Now, the valency of an element is known to vary, and as the results were not very concordant the author, from theoretical grounds, replaced this by the relation Aw/T∛(A/d), where d is density and ∛(A/d) represents the space between the atoms. Just as the atomic heat of the elements is only constant for elements with atomic weights over about 40, so is this relation only true under the same conditions. In the case of the fourteen elements with atomic weights over 40 the value varies between 1 and 1.3, with the exception of the three, gallium, lead, and bismuth. But applying this rule to the compounds, and changing the atomic weight into molecular weight, still more concordant results are obtained. Out of thirteen inorganic compounds, with the exception of two the results vary from 1.9 to 2.3, being mostly near the mean 2.1. There is also good agreement among the organic bodies examined, the mean being about 2.4. The author intends to calculate more results, and to present fuller tables to the Society.

Various attempts have been made to arrive at a definite law connecting the latent heats of fusion with the atomic weights and other physical constants. Berthelot (1895), after proving that in the case of the latent heats of vaporization MW/T′ = constant (where M is the molecular weight, W the latent heat of vaporization, and T′ the boiling-point on the absolute scale), supposed a similar law to be true in the case of the latent heats of fusion.

Holland Crompton, in a paper entitled “Latent Heat of Fusion,”* endeavoured to show that Aw/Tv is a constant for the elements, A being the atomic weight, w the latent heat of fusion, and v the valency. The difficulty first encountered in this relation is due to the fact that the valency of an element varies with its mode of combination and with different physical conditions.

Shortly afterwards Deerr concluded that the relationship Aw/T is constant only for certain groups of “similar” elements.

In 1897 Crompton published another paper, in which he attempted to disprove the hypothesis of electrolytic dissociation. He arrives at the result dw/T= constant for mono-molecular liquids, where d is the density of the liquid. In the same paper the results are given for the elements, the densities in many cases being taken in the solid state. As shown below, the numbers are exceedingly divergent.

De Forcrand§ showed that M(W+w)/T′ is approximately constant: M is the molecular weight of the substance in the state of a gas at its boiling-point T′, and W and w are the latent heats of vaporization and fusion respectively. But W is generally about ten times as great as w; and, as MW/T′ = a constant is true (Trouton's law), the value of w will make little difference in the result. Further, if the equation M(W+w)/T′ be divided by the constant MW/T′, it follows that W/w = a constant. Using Traube's numbers for the latent heats of vaporization of the following elements, which gave very satisfactory numbers for Trouton's constant, the values of W/w are—Mercury, 26; zinc, 14; cadmium, 15; bromine, 3; iodine, 3.2; bismuth, 17. Since these numbers should be equal if De Forcrand's relationship is a physical law, his generalisation may be dismissed without further consideration.

Now, let it be assumed for the present that Aw/T = 8.8 (this value is only empirical, but its magnitude will not affect the following argument). On dividing the values of A thus obtained by the real atomic weights, the result is a series

[Footnote] * Journ. Chem. Soc., 1895, 67, 315.

[Footnote] † Proc. Chem. Soc., 1895, and Chem. News, 1897.

[Footnote] ‡ Journ. Chem. Soc., 70, 925.

[Footnote] § “Comptes Rendus,” 1901, 132, 878.


of numbers which appear to be periodic functions of the atomic weights:—


Table I.
Cu.    Zn.    Ga.    Ge.    As.    Se.    Br.
3.8    3.1    1.7                         2.4
Ag.    Cd.    In.    Sn.    Sb.    Te.    I.
3.7    3.4           2.6                  2.3
Au.    Hg.    Tl.    Pb.    Bi.
3.8    3.6    3.6    4.4    1.9

It will be noticed that the values tend to increase from top to bottom and from right to left—e.g., zinc to mercury and iodine to silver. It would seem, therefore, that some periodic quantity must take the place of the v in Crompton's formula to make the relation true for all the elements.

It can be proved that TS/w = a constant, where S is the specific heat of the element, by using the relation TC = a constant, C being the coefficient of expansion. But Pictet proves TC∛(A/d) = K, the expression ∛(A/d) representing the mean distance between the atoms if d is the density. Applying this, it follows that—

TS∛(A/d)/w = constant.

But AS = constant (Dulong and Petit);

∴ Aw/T∛(A/d) = constant.

In Table II. are given the values thus calculated for the elements with atomic weights above 40 whose latent heats are known. As in the case of Dulong and Petit's law, the relationship does not hold for the elements with low atomic weights. The values of d are taken at ordinary temperatures for the substances in the solid state, except in the case of bromine, the specific gravity of which in the solid state is unknown. Most of the constants required have been obtained from the papers of Crompton and Deerr, while the values for silver and copper are due to Heycock and Neville.*

[Footnote] * Trans. Royal Soc., 1897, 189, 25.



Table II.—Elements.
Element.       Aw.     T.     ∛(A/d).    Aw/T∛(A/d).
Mercury         565    234     2.41        1.00
Zinc           1839    688     2.12        1.28
Cadmium        1531    593     2.35        1.10
Bismuth        2602    540     2.77        1.75
Gallium        1336    286     2.28        2.05
Palladium      3873   1773     2.09        1.05
Gold           3227   1335     2.15        1.12
Tin            1573    503     2.55        1.22
Lead           1212    600     2.61        0.76
Thallium*      1183    562     2.62        0.82
Bromine        1295    266     2.97 (?)    1.63 (?)
Iodine         1485    387     2.96        1.28
Copper         3140   1355     1.91        1.22
Silver         2920   1230     2.16        1.10
Platinum       5295   2052     2.10        1.23
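The final column of the table is a direct quotient; recomputing a few rows from the tabulated Aw, T, and cube-root values (decimal points restored) reproduces the printed constants to two figures:

```python
# Aw / (T * cube root of A/d) for three entries of Table II, using the
# tabulated Aw, T, and cube-root values as printed.
rows = [("Mercury", 565, 234, 2.41),
        ("Cadmium", 1531, 593, 2.35),
        ("Gold", 3227, 1335, 2.15)]
for element, Aw, T, root in rows:
    print(element, round(Aw / (T * root), 2))
```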

The greatest discrepancies are observed in the cases of bismuth and gallium, the only two metals which are known to expand on freezing.

By using the results of Heycock and Neville for the freezing-points of alloys the value for lead becomes about 1. Their experiments confirm the values of zinc, cadmium, tin, and bismuth, the first three of which give concordant results for the constant. Using this value for lead, and excluding bismuth and gallium, the results vary from 1 to 1.3 for twelve elements with melting-points ranging from −40° to +1,800° C. The mean value is 1.16. In the table below the results are compared with those of Crompton:—


Table III.
Element.       A.      B.     C.      D.      E.
Mercury       1.00    −14    1.21     −8     1.65
Zinc          1.28    +10    1.34     +3     2.65
Cadmium       1.10     −5    1.29     −1     1.84
Palladium     1.05    −10    1.09    −16     2.83
Platinum      1.23     +4    1.29     −1     2.33
Tin           1.22     +5    1.56    +21     1.86
Silver        1.10     −5    2.37    +80     2.08
Gold          1.13     −3    0.80    −40     2.36
Copper        1.22     +5    1.16    −10     3.26
Iodine        1.28    +10    1.27     −2     1.48
Lead          1.00    −14    0.97    −25     1.00
Thallium      1.02    −12    2.62   +100     1.52

[Footnote] *Since the paper was communicated to the Society the latent heat of fusion of thallium has been directly determined by the author. The value thus found (mean of ten observations) gives a value of 1.02 for the final expression. Thallium thus conforms to the general law.


A represents Aw/T∛(A/d).

B represents percentage difference from mean.

C represents Crompton's 1895 relation, Aw/Tv.

D represents percentage difference from mean.

E represents Crompton's 1897 relation, 10 × wd/T.

Thus the relationship Aw/T∛(A/d) is much the most satisfactory, the mean deviation being ±8 per cent., a deviation of the same order as observed in the law of Dulong and Petit. But it must be borne in mind that the latent heat of fusion is one of the most difficult physical constants to determine, and that if the densities were taken at some corresponding temperatures, such as at the melting-points, the results would perhaps be even closer. There is a large number of wide deviations in Crompton's first relation, while in the case of bromine and iodine it was assumed that the valencies were 3, which assumption is decidedly open to criticism.

In the case of compounds, if A is replaced by M (molecular weight) the values for Mw/T∛(M/d) are also found to be constant. The following are the data for those substances whose density in the solid state I have been able to find. The values for lead-bromide and silver-chloride are from the results of Weber, who deduced them from electrical experiments. The latent heats of antimony chloride and bromide and the bromides of tin and arsenic have been calculated from their depression constants. The remaining numbers are taken from Crompton's paper.


Table IV.
A. Inorganic Compounds.
Compound.            Mw.     T.     ∛(M/d).    Mw/T∛(M/d).    10 × dw/T.
Lead-chloride       5810    758      3.64        2.10           1.60
Lead-bromide        5100    763      3.80        1.76           1.23
Lead-iodide         5300    648      4.17        1.98           1.10
Silver-chloride     4400    730      3.07        1.96           2.35
Antimony-chloride   2920    345      4.19        2.01           1.24
Tin-bromide         2910    303      5.11        1.90           0.73
Antimony-bromide    3490    369      4.42        2.15           1.10
Arsenic-bromide     2740    295      4.39        2.11           1.14
Water               1439    273      2.70        1.96           2.93
Iodine-chloride     2297    289      3.70        2.14           1.02
Potassium-nitrate   4949    606      3.65        2.24           1.69
Sodium-nitrate      5520    578      3.40        2.81           2.46

With the exception of lead-bromide and sodium-nitrate, the numbers vary from 1.9 to 2.2, with the value 2.07 as mean.
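The water row is the one entry that can be checked against commonly known constants; taking M = 18, a latent heat of fusion of about 80 calories per gram, T = 273, and a density of ice of 0.92 (assumed round values, not from the paper) gives figures close to the tabulated 1439, 2.70, and 1.96:

```python
# Rebuild the water row of Table IV from first principles: Mw, the cube
# root of M/d, and the quotient Mw / (T * cube root of M/d).
M, w, T, d = 18.0, 80.0, 273.0, 0.92
root = (M / d) ** (1.0 / 3.0)
quotient = M * w / (T * root)
print(round(M * w), round(root, 2), round(quotient, 2))
```

The quotient lands at 1.96, inside the 1.9 to 2.2 range quoted above.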


B. Organic Compounds.
Compound.             Mw.     T.     ∛(M/d).    Mw/T∛(M/d).    10 × dw/T.
Acetic acid          2661    277      3.84        2.50           1.61
Phenylacetic acid    4293    348      4.80        2.56           1.00
Azobenzene           5187    342      5.31        2.85           0.87
Benzoic acid         4607    396      4.56        2.55           1.08
Orthonitrophenol     3725    316      4.60        2.56           1.10
P. dichlorobenzene   4395    325      4.90        2.76           1.15
P. dibromobenzene    4862    358      5.04        2.70           1.06
Diphenyl             4391    343      5.36        2.39           0.83
Naphthalene          4559    353      4.81        2.67           0.99
Resorcinol           4906    383      4.41        2.90           1.37
Paratoluidine        4177    312      4.67        2.87           1.21
Parabromphenol       3961    337      3.42        4.62           4.73
Phenanthrene         4450    369      5.50        2.19           0.71
Thymol               4130    321      5.36        2.41           0.81
Nitronaphthalene     4383    329      5.06        2.64           0.97
Anethole             4070    294      5.30        2.61           0.94
Nitrobenzene         2743    264      4.45        2.35           1.02
Acetophenone         35.0    293      4.86        2.53           1.02
Benzophenone*        4320    321      5.38        2.50           0.90
Chloracetic acid*    3850    334      4.04        2.90           1.71
Acetoxime*           3022    333      4.22        2.15           1.12

With the exceptions of acetoxime and phenanthrene, the numbers vary from 2.35 to 2.9. The mean value, 2.57, is distinctly greater than the value obtained for the inorganic compounds. Whether this difference is due to the large number of atoms in the compounds of carbon or whether it is one of those peculiar properties of this element remains to be seen. The value of the constant is about twice as great as that obtained for the elements themselves.

I have neglected to compare Crompton's 1895 relation Mw/TΣv with the others, because until chemists can agree as to what is really meant by the sum of the valencies (Σv) in a compound the results thus obtained will be of no value. In the last column of Table IV. is placed his second relationship, 10 × wd/T. Regarding this, he says that when the result is about unity the liquids are non-associated, and when greater the liquid is proportionately associated. It may be remarked that about 25 per cent. of the values are considerably “below” unity.

[Footnote] * Specific gravity in the solid state determined by the author.


By combining the equation Mw/T∛(M/d) = K with the well-known law of Van 't Hoff, D = 0.02T²/w, the result is D = KMT/∛(M/d), where D is the molecular depression of the solvent and K a constant. Hence the molecular depression of any body can now be calculated without a knowledge of the latent heat of fusion.
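As a rough check on the Van 't Hoff factor in this combination, the expression 0.02T²/w can be evaluated directly for water, taking the latent heat of fusion as 80 calories per gram (an assumed round value); the result lies near the accepted molecular depression of about 18.5.

```python
# Van 't Hoff's expression for the molecular depression of a solvent,
# evaluated for water with T = 273 and latent heat of fusion w = 80 cal/g.
T, w = 273.0, 80.0
D = 0.02 * T * T / w
print(round(D, 1))   # 18.6
```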

Trouton's law states that—

MW/T′ = K′.

But Mw/T∛(M/d) = K;

∴ w/W = (K/K′) × T∛(M/d)/T′.

That is, the latent heat of fusion is to the latent heat of vaporization as the freezing-point multiplied by the cube root of the specific volume is to the boiling-point. This, of course, is only true when Trouton's law is true, that is, when the molecular condition of the body is unchanged in passing from the liquid to the gaseous state.

Art. L.—Some Observations on the Fourth Dimension.

[Read before the Hawke's Bay Philosophical Institute, 9th September 1901.]

Helmholtz was the earliest writer to attempt to present the conception of transcendental space in a form inviting popular investigation, and his efforts have been ably seconded in recent times by the author of “Flatland,”* in the first place, and by Mr. C. H. Hinton,† in the second place. The former has produced a work which has attractions beyond the mere consideration of the fairyland of mathematics; while the latter, beginning with pamphlets of a distinctly popular nature, has in his latest work laid down, still without abstruse mathematics, a scheme of mental training the avowed object of which is to enable the student to form a perfect mental image of a figure in four dimensions.

[Footnote] * “Flatland, A Romance in Two Dimensions, by a Square.” Seeley and Co.

[Footnote] † Author of “Scientific Romances” and “A New Era of Thought.” Swan Sonnenschein and Co.


The method of treatment does not admit of much originality. A straight line bounded by two points will, if moved in a direction perpendicular to itself, trace out a square, bounded by four lines and four points. By moving this square in an independent direction at right angles to the two original directions we shall obtain a cube, bounded by six squares, twelve lines, and eight points. If the cube be now moved in an independent direction compounded of none of the three original directions, but at right angles to them all, it will trace out a four-dimensional figure (called by Mr. Hinton a “tessaract”), which will be bounded by eight cubes, twenty-four squares, thirty-two lines, and sixteen points.

A very small amount of consideration will show how these latter figures are arrived at. The bounding cubes consist of the cube in its original position, the cube in its final position, and the six cubes traced out by the motion of the six squares which bounded the cube. Of the squares we had six in the initial and six in the final position, while each of the twelve lines of the cube traced a square, making twenty-four in all. So too with the lines: twelve in the initial and twelve in the final position, with eight traced by the eight points, bring up the total to thirty-two. We may tabulate these results as follows:—


            Dimensions.   Points.   Lines.   Squares.   Cubes.
Line             1            2
Square           2            4        4
Cube             3            8       12         6
Tessaract        4           16       32        24         8

We might, of course, carry on the enumeration for figures in five, six, or “n” dimensions.
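The enumeration does indeed carry over to any number of dimensions: the n-dimensional figure has C(n, k) × 2^(n−k) bounding elements of dimension k. A short sketch reproducing the table (the trailing 1 in each row is the figure itself):

```python
# Count the k-dimensional bounding elements of the n-dimensional "cube":
# choose the k free directions, then fix each remaining direction at
# either of its two ends.
from math import comb

def elements(n):
    return [comb(n, k) * 2 ** (n - k) for k in range(n + 1)]

for n, name in [(1, "Line"), (2, "Square"), (3, "Cube"), (4, "Tessaract")]:
    print(name, elements(n))
```

For the tessaract this gives 16 points, 32 lines, 24 squares, and 8 cubes, agreeing with the tabulated figures.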

Mr. Hinton remarks that, if we take two equal cubes and place them with their sides parallel and connect the corresponding corners by lines, we shall form the figure of a tessaract. But it seems to the present writer that this suggestion ignores the limitation of our three-dimensional space. It is just these limitations which prevent our placing the cubes in a satisfactory position. The suggestion contains the assumption that, as we may project a cube on to a plane, so we may project a tessaract on to a three-dimensional system. In point of fact, such a projection might be made by a being in four dimensions, but we three-dimensional beings must be content with projecting our tessaract upon a plane.


Use has accustomed us to the fact that we may represent by projection the figure traced by the square ABCD when moved in a direction (parallel to AE) perpendicular to all the lines contained in itself. And in a projection there is nothing to hinder us from moving the cube AG, thus obtained, in a direction represented by AK, which shall be at right angles to AE, AB, and AD. In fact, we might, had we been so disposed, have moved our square AC in that direction, and traced, instead of AG, the cube AM, without doing any violence to our notions of the propriety of our dealings with the projection. By moving, then, either the cube AG in a direction AK perpendicular to each of its sides, or the cube AM in a direction AE perpendicular to each of its sides, we shall obtain a projection of the tessaract AQ which will be found to have all the defining elements which are contained in the table above. There are the eight cubes, AM, EQ, AR, BQ, KF, NG, AG, KQ: the twenty-four squares, AC, EG, KM, OQ; AL, EP, DM, HQ; AN, ER, BM, FQ; AH, KB, BG, LQ; AO, BP, DR, CQ; AF, KP, DG, NQ: the thirty-two lines, AE, KO, DH, NR, BF, LP, CG, MQ, AB, KL, EF, OP, HG, RQ, DC, NM, AK, EO, DN, HR, BL, FP, CM, GQ, AD, KN, EH, OR, BC, LM, FG, PQ: and the sixteen points, A, B, C, D, E, F, G, H, K, L, M, N, O, P, Q, R.

As has been indicated above, any one of the eight cubes here enumerated might have been considered the generating-cube, which in turn gives us the option of starting from any one of the twenty-four squares or the thirty-two lines.

In two dimensions revolution takes place about a point, while figures are bounded by lines. In three-dimensional space revolution takes place about a line, and figures are bounded by surfaces. In four dimensions revolution takes place about a plane, and figures are bounded by solids.

To a two-dimensional being the figures ABC, DEF are essentially different, no amount of revolution about a point effecting coincidence. To us, however, it is obvious that one can be made to coincide with the other by performing half a revolution about a line. Similarly, to us the cubes figured below are essentially different, and no revolution about a line can make them coincide. To effect this one must be taken into four-dimensional space and revolved about a plane. Another aspect of the same fact is that, as we appreciate the identity of the two triangles by concentrating our attention upon one face or the other of either triangle, so a four-dimensional being appreciates the identity of the two cubes by virtue of the fact that either side is equally accessible to him. As the faces of the cube are no greater hindrance to him than are the edges of a triangle or square to us, he can apprehend the cube with the angle (6) forward or with the angle (3) forward at will; the latter being the cube B, the former being the cube A, above.


The realisation of the possibility of the existence of fourth-dimensional space leads naturally to two questions, which, of course, may suggest others: (1.) Seeing that we may conceive of our space system as being made up of innumerable two-dimensional systems, each possibly inhabited by beings quite without cognisance of the companion systems, may not four-dimensional space be compounded of innumerable three-dimensional systems, similar to our own, but lying completely outside our cognisance? (2.) Seeing that figures in a two-dimensional system might be regarded as sections of solids, might not our so-called solids be in reality sections in three dimensions of four-dimensional figures?

With regard to the first question, it does not appear that any reason can be adduced why it should not be answered in the affirmative. This leads to the curious consideration that, in spite of preconceived notions, two bodies may, apparently, occupy the same space. If one plane be superimposed upon another, a figure moved out of the latter an infinitely small distance passes into the other. Two beings might in this way be separated by the smallest possible distance, and yet for all practical purposes be at an infinite distance from one another. In the same way, if we conceive of a cube, say of 1 ft. side, moved one millionth part of an inch in the direction of the fourth axis, it will pass immediately out of our system, and presumably its place may be occupied by another cube similar to itself. The centres of gravity of the two would be separated by an infinitesimal distance, and yet each in its own space system might be a solid, the two cubes to all intents and purposes occupying the same space. In this connection it may be mentioned that it has been suggested that, as we may imagine a plane to be bent over so as to re-enter itself, with or without a twist in the process, so we may suppose it possible for our space system to have been similarly treated. This would make it possible to arrive at one's starting-point by travelling along an apparently straight line for a considerable distance. But though the notion of limited space thus introduced had attractions for so great a thinker as W. K. Clifford, it seems that such a process would involve an extension of our present three-dimensional limitations.

With regard to the second question, while solids may mathematically be such sections, the answer must, when we come to the case of animate beings, assuredly be a negative one, for it is scarcely conceivable that a section could contain the consciousness of the whole. If what we imagine to be independent figures proper to our own space system are but sections of four-dimensional figures, it would seem to be necessary that the innumerable sections of these solids are also playing their parts in an infinite number of two-dimensional space systems. The author of “Flatland” makes a sphere pass in and out of two-dimensional space, and thereby conveys the suggestion that higher-space beings might similarly visit our space and similarly disappear. In fact, the idea has been seized upon as explaining many of the so-called phenomena of Spiritism. But writers on this point have not reckoned with the difficulty of insuring that the higher-space being should always offer the same three-dimensional section on entering our space system. Even so simple a figure as a cube might appear in a two-dimensional universe as a point, a line, a triangle, a quadrilateral, or a five- or six-sided figure. In fact, under each of the four last headings an infinite variety of forms might be offered. And the possible sections of a four-dimensional figure in space of three axes offer, of course, a far greater variety of forms. It is not conceivable that a being moving freely in space of four dimensions could present itself repeatedly to us in sections even suggesting identity of form.

There is one further objection which must be dealt with in reference to both the above questions. The assumption is usually made by writers on this subject, and has been tacitly accepted in this paper, that a figure might be removed from a plane and afterwards replaced in that plane; and, by analogy, that one of our solids might conceivably be lifted into four-dimensional space and afterwards replaced in our system. Now, either of these processes endues the body dealt with for the time being with an existence in a system higher by one dimension than that in which it was assumed to exist. Two-dimensional beings, if such there be, are by the nature of their limitations placed absolutely without the scope of our cognisance, and we, of course, without the scope of theirs. So too we, as long as we continue to be three-dimensional beings, are absolutely cut off from such four-dimensional beings as there may be. The method of treatment adopted by writers in endeavouring to place the conception of the fourth dimension within reach of their readers consists in developing figures from one space system to another. But it is a fallacy to suppose that the matter occupying a figure can be similarly dealt with. In fact, we have no knowledge or conception of four-dimensional matter, any more than of two-dimensional matter. It is this fallacy which vitiates the application of the fourth dimension to reported spiritist wonders, and the recognition of it restores confidence in the old theory that two bodies cannot occupy the same space. J. B. Stallo remarks* that “the analytical argument in favour of the existence or possibility of transcendental space is another flagrant instance of the reification of concepts.” It would appear, however, that his strictures apply not to the arguments for the possibility of transcendental space, but to the arguments that we can have, under our present limitations, any practical acquaintance with such space.

[Footnote] * “The Concepts and Theories of Modern Physics,” p. 269.

Art. LI.—The Equatorial Component of the Earth's Motion in Space.

[Read before the Wellington Philosophical Society, 11th February, 1902.]

Attempts have been made from time to time to find the velocity of the earth—or, rather, the solar system—in space by observing the proper motions of stars. Methods have also been suggested that depend on the relative motions of the earth and ether. The following method, however, I have not seen described anywhere, although it seems extremely simple. If a rotating body moves along a path in the plane of its equator, it is evident that a point on its surface moves faster relatively to space on one side of its path than on the other; but acceleration is the rate of change of velocity, so that the point should undergo an alternating acceleration.

Let V = tangential velocity of the point P in space, u = velocity of the earth's centre in space, and v = rotational velocity of P. Then, resolving along the tangent, we get V = v − u sin. θ. If f is the acceleration of P along the tangent,

f = dV/dt = − u cos. θ dθ/dt = − uω cos. θ


The motion of the sun, as deduced from the proper motion of the stars, is, according to Proctor (“The Sun”), 150,000,000 miles per year—that is, 25,154.38 ft. per second, the line of motion being inclined to the earth's orbit at about 53° in longitude 285°. This is about 60° to the earth's axis. Resolving along the equatorial plane, this gives—

u = 21,784 ft. per second;

and, as ω = 0.000073 rad. per second,

we get f = − 1.584 = − g/20 (about) when θ = 0.

Similarly, this would be the acceleration along the radius at θ = 90°; so that the weight of a body at the equator should vary by 10 per cent. every twelve hours. The motion of the earth in space, therefore, cannot be as great as deduced from the proper motions of stars.
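The arithmetic here is easily checked. The following sketch uses the figures quoted in the text for u and ω; the value g = 32.2 ft. per second per second is an assumption not stated in the original:

```python
# Numerical check of f = u * omega * cos(theta) at theta = 0, using the
# figures quoted in the text. Units are feet and seconds throughout.
u = 21784.0        # equatorial component of the solar motion, ft/s
omega = 0.000073   # earth's angular velocity of rotation, rad/s
g = 32.2           # acceleration of gravity, ft/s^2 (assumed)

f_max = u * omega  # magnitude of the alternating acceleration at theta = 0
print(f_max)       # about 1.59 ft/s^2
print(g / f_max)   # about 20, i.e. f_max is roughly g/20
```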

If A be the angle through which a plumb-bob in latitude λ is deflected by this spacial acceleration, we have—

tan. A = − uω cos. θ / (uω sin. θ cos. λ + g)

Perhaps this in some part reconciles the seismological tides found by Milne with Lord Kelvin's value of the rigidity of the earth.
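On the same assumed figures, the greatest deflection the formula would imply (at θ = 0, where the second term of the denominator vanishes) can be sketched as follows; the computation is purely illustrative:

```python
import math

# Deflection from tan A = -u*omega*cos(theta) / (u*omega*sin(theta)*cos(lam) + g)
# evaluated at theta = 0; only the magnitude is computed.
u = 21784.0        # ft/s, as quoted in the text
omega = 0.000073   # rad/s
g = 32.2           # ft/s^2, assumed

tan_A = u * omega / g
A_degrees = math.degrees(math.atan(tan_A))
print(A_degrees)   # roughly 2.8 degrees
```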

From an experimental point of view the method is very accommodating. Being a harmonic quantity, it does not matter when we set our instruments, which may, for the same reason, measure variations in pressures. Being an acceleration, it may be magnified to any extent by using large masses. With sufficiently delicate apparatus, and observations extending over a long period, it might be possible to deduce the relative motion and distance of a star for which the earth's orbit failed to show any parallax.

Art. LII.—Mathematical Treatment of the Problem of Production, Rent, Interest, and Wages.

[Read before the Wellington Philosophical Society, 11th February, 1902.]

The following attempt at a mathematical treatment of some of the problems of political economy was not originally intended for publication, but I have been persuaded to submit it as a paper to the Wellington Philosophical Society. I have not dealt with all the interesting points in the subject, but merely a few of the more simple ones. Several attempts have been made to treat political economy mathematically, but they have chiefly resulted in failure, for the reason that the mathematics has taken quite a subordinate part, being used merely to express the result of elaborate reasoning in words. It is like the man who keeps a watchdog and does the barking himself.

The most successful attempt so far seems to have been made by Professor Jevons,* but he states in his preface that, although many of the problems might have been solved more directly, he preferred to limit himself to the simplest possible mathematics; thus the book hides rather than shows the value of applying mathematics to the subject. Another writer on the subject is Professor J. D. Everett.† A long list of other writers is given at the end of Professor Jevons's book, but the two mentioned are the only mathematical ones to which I have been able to refer; and, from a remark on the customary method of treatment in Professor Everett's paper, I believe that the proofs in the following paper are new, though the results have in many cases been previously obtained by a patient application of logic.

The fundamental principle which is assumed in the following is that in the serious affairs of life a person always endeavours to obtain the maximum return on an investment. This one might almost call an axiom, and as such it is used. With regard to the definitions, I have defined the quantities as I intend to use them, and as long as a definition and its use are consistent no more is required of it.

Many people think that the application of mathematics to political economy is an almost impossible proceeding. The science, they say, is too vague and conditional for it to be possible. The same might have been said of other sciences in their beginnings which have since had mathematics successfully applied to them. For instance, what is more capricious than evolution? Yet Professor Pearson is successfully applying mathematics to this subject. The problems of political economy in many cases resemble problems in dynamics, and it is quite a possibility that its elements might be expressed in terms of energy, which would thus bring it more into line with other branches of applied mathematics. In fact, so apparent are the advantages of the mathematical treatment of the subject to many that a well-known professor jokingly said, in a lecture on the representation of facts by curves, that before long we should probably see our legislators, instead of preparing lengthy speeches, framing the laws of the country by means of squared paper and curves.

[Footnote] * “Theory of Political Economy.”

[Footnote] † “On Geometrical Illustrations of the Theory of Rent” (Jour. R.S.S., lxii., 703).

Some may demur to the latter part of the definition of interest as requiring proof, but it is rather a historical point than otherwise, and, in any case, does not affect the results obtained.



“Production” is the changing of the form, constitution, place, or time of a product of nature in order to render it efficient for human needs.


“Land” is the whole of the material universe that has not undergone production.


“Labour” is human force applied to production.


“Production” (P), when used in a quantitative sense, refers to the value (referred to some convenient standard) of the products after they have undergone the process of production.


“Rent” (R) is that portion of production which is given up to landowners in return for benefits derived from land in their possession.


“Marginal production” (p) is the production which would be obtained if all the land were equal in productivity to the most productive land available without the payment of rent.


“Wages” (W) is that portion of production which is given to labour in return for its co-operation in production.


“Capital” (C) is the surplus of production which is used to assist labour in further production, by means of costly appliances, &c.


“Interest” (I) is that portion of production which is delivered to capital as equal in value to the mean increase of raw products due to the vitality of nature.


“Rate of interest” is the fraction obtained by dividing interest by capital.


“Proportional profit” is the profit derived from a certain investment divided by the amount invested.


If u be the proportional profit at one point and u′ that at another where u is less than u′, then motion will take place from u to u′, because every one tries to make the greatest profit he can. Further, the greater the difference between u and u′ the greater the velocity of adjustment. Therefore, if there be n proportional profits at n different points, there will be a tendency to motion which will cease when all the profits are equal. Therefore, if V, V′, V″, V‴, V″″, &c., be the amounts invested at different points, the condition that there should be equilibrium is that—


(1/V) dV/dt = (1/V′) dV′/dt = (1/V″) dV″/dt = &c.

From this we may deduce a relation between property-values and rate of interest (r).

Let V = property-value.

r = (1/C) dC/dt;

but (1/V) dV/dt = (1/C) dC/dt = r.

Integrating, we get—

V = V0e^(rt),

and C = C0e^(rt).

This assumes that r is constant, and that all the rent is devoted to buying more land and the interest to increasing capital. Neither of these assumptions is true, for evidently a man who is both a landowner and a capitalist may be most erratic in his investments; but it seems evident that, since the area of land in use is limited, more rent will find its way to capital than interest to land, so that capital will increase more quickly than given above and land-values more slowly. We may, however, deduce a formula free from both these objections by replacing dV/dt by R; then V, R, and r are simultaneous values at any time, and therefore true for all time.

R/V = r.

Since R is always greater than 0, we see that when r = 0, V = ∞, and vice versâ. There is one case in which R = 0: that is at and below the margin of cultivation; the formula then gives V = 0. True, but of no importance.
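As a numerical sketch of these relations, with a rate of interest and starting values that are purely illustrative (no such figures appear in the text), the integrated formulas and the identity R/V = r can be exhibited thus:

```python
import math

r = 0.05                 # rate of interest, illustrative
V0, C0 = 1000.0, 500.0   # initial property-value and capital, illustrative

def V(t): return V0 * math.exp(r * t)   # property-value, V = V0*e^(rt)
def C(t): return C0 * math.exp(r * t)   # capital,        C = C0*e^(rt)

# Rent is dV/dt = r*V0*e^(rt), so R/V = r at every instant:
t = 7.0
R = r * V(t)
print(R / V(t))   # equals r
```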


Rent is equal to production minus the marginal production.

Let productiveness mean the production from unit-area of ground, and let it be represented by y, and the marginal productiveness by g. Further, let dR/dx = z. Then we may write y = g + z + f(x); so that the profit after the rent has been paid is g + f(x); but at the margin no rent has to be paid, so that the profit there is g. Now, if equal areas of land be taken at the margin and at any other point, we have—

(1/C) dC/dt = (1/C′) dC′/dt

—here I use C and C′ to include not only capital, but also labour—

(g + f(x))/C = g/C′;

or (C − C′)g = f(x)C′.


If the distribution of C be uniform, f(x) = 0, so that—

y = g + z.

Integrating between suitable limits,—

P = p + R,

or R = P − p.

This is the ordinary theory of rent, which seems always to be deduced by placing the distribution of capital under a restriction; but this is more apparent than real, for C and C′ contain both capital and labour, and to put them equal only means that their joint effects are the same at all points, though the distribution of capital may be extremely variable. This agrees with observation. The less capital a man has to work his land the harder he has to work to keep afloat.
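The result R = P − p admits of a direct check with made-up figures: give each of several equal plots a productivity y, charge each plot the rent y − g, and compare totals. All the numbers below are invented for the check:

```python
# Illustrative productivities for four equal plots of land, best to worst.
productivity = [10.0, 8.0, 6.0, 5.0]
g = min(productivity)                 # marginal productivity: no rent paid there

P = sum(productivity)                 # total production
p = g * len(productivity)             # marginal production over the same area
R = sum(y - g for y in productivity)  # rent on each plot is y - g

print(R, P - p)   # the two totals agree
```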


P = R + I + W

= R + p,

∴ W = p − I = p − Cr.

This shows that if the capital increases whilst p remains constant the wages will fall, and that in new countries, where p is large and C small, the wages should be large.
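That wages fall as capital grows while p is constant follows at once from W = p − Cr; with illustrative figures (none of them from the text):

```python
p = 100.0   # marginal production, illustrative
r = 0.05    # rate of interest, illustrative

# As capital C increases with p fixed, W = p - C*r falls.
wages = [p - C * r for C in (200.0, 400.0, 800.0)]
print(wages)   # [90.0, 80.0, 60.0]
```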

We have seen in I. that when r is constant C = C0e^(rt). Now, suppose I to have reached a constant value; then C = C0 + It. The corresponding land-value, obtained by integrating (1/V) dV/dt = r = I/(C0 + It), will be—

V = V0(1 + It/C0).

The Malthusian theory states that population is kept down by its pressure against production—that is, if n is the population and w the demand of each,

P = nw,

∴ I = p − P = − R,

since nw = W.

Similarly, if the law were p = nw we should get I = 0, which is equally untrue and absurd.

Let the coefficient of labour-saving devices (s) be measured by the production which can be done by unit labour when using a labour-saving device. Then P = sN, where N is the number of men required to do this production with this coefficient. If we put R + I = mP, where m is some proper fraction, we have—

P = mP + W = mP + nw;

or P = nw/(1 − m),

where n is the number of men available; so that—

w = (N/n)(1 − m)s = (P/n)(1 − m).
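The equivalence of the two forms of the wage formula, w = (N/n)(1 − m)s and w = (P/n)(1 − m), can be verified numerically; every figure below is invented for the purpose:

```python
s = 5.0    # production per unit labour with the device, illustrative
N = 40.0   # men required at this coefficient, illustrative
n = 50.0   # men available, illustrative
m = 0.3    # fraction of production going to rent and interest, illustrative

P = s * N                     # total production, P = s*N
w1 = (N / n) * (1 - m) * s
w2 = (P / n) * (1 - m)
print(w1, w2)   # both about 2.8
```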


Since R = P − p, the effect of increasing P is to increase R without increasing the wages, as the latter are included in p. Therefore to increase w we must increase (1 − m)—that is, decrease m. Or we may put it that, since P is directly proportional to s, we must increase the ratio N/n, which may be done by either increasing N or decreasing n. Let us examine these ways more in detail.

The diminishing of n has in the past been the most common way of increasing wages, but it has been far from successful, having been brought about by wars, pestilences, &c., which tend to diminish P at the same time; also, the destruction of property and the ruin of the country generally are so great that the increase of wages is negligible.

The increasing of N has been tried—relief-works, for example—but is expensive, wasteful, and not lasting, for as soon as the artificial stimulant is removed the wages must revert to their former state.

The decreasing of m is what the single-taxers aim at doing, and what the rating on unimproved values aims at effecting. We have seen in II. that rent is the natural outcome of variable productivity and cannot be done away with, but it might be collected by the Government and distributed in the form of efficacious and lasting public works.

Art. LIII.—On the Phenomena of Variation and their Symbolic Expression.

[Read before the Wellington Philosophical Society, 11th March 1902.]


“A PERSON who uses an imperfect theory with the confidence due only to a perfect one will naturally fall into abundance of mistakes; his prediction will be crossed by disturbing circumstances of which his theory is not able to take account, and his credit will be lowered by the failure. And inasmuch as more theories are imperfect than are perfect, and of those who attend to anything the number who acquire very sound habits of judging is small compared with that of those who do not get so far, it must have happened, as it has happened, that a great quantity of mistake has been made by those who do not understand the true use of an imperfect theory. Hence much discredit has been brought upon theory in general, and the schism of theoretical and practical men has arisen.”—(De Morgan, “Penny Cyclopædia,” Art. “Theory.”)


The present writer proceeds upon the assumption that the means of comparing those theories which are used to predict the quantities of physical phenomena with experiment upon those phenomena are in some cases not quite so effective as the theory of probability enables them to be made. The latter theory has even had a detrimental effect upon the comparison, by reason of its having been frequently assumed to provide a universally satisfactory method—that of least squares—by which we can determine those constants which arise from the unestablished properties of matter, and at the same time more or less tacitly institute a comparison between the theory and the results of experiment, in the case of a phenomenon of variation where quantity is both measurable and supposed determinable by theory, given the properties of matter. The results of the theory of probability will be accepted with regard to the probable value of a single quantity directly measured and its probable error.

In the present paper the writer proposes to examine the representation of physical phenomena of variation by means of formulæ, whether empirical or founded more or less completely upon reason.

1. The phenomena which will be examined are those where a quantity (Y) varies with a variable (X)—that is to say, takes up magnitudes which, ceteris paribus, depend in some fixed way upon the magnitude of X. If we observe, by experiment, how the variation occurs we obtain knowledge which can be expressed by a graph. We may make the axis of Y the ordinate, that of X the abscissa. We shall consider only such cases where Y has, in fact, although it may not have been observed, one value, and only one, for each value of X, which in general extends from minus to plus infinity.*

[Footnote] * But see Appendix, III.

2. The first fact we notice is that in such case we observe values within a limited range. This we may call the “experimental range of X.” Beyond that range we know nothing, whereas most mathematical expressions will yield values from minus to plus infinity. The definite integral is in form a striking exception to this, and from one's experience of textbook formulæ it is to be wished that some simple means of indicating experimental range could be brought into general use. This idea of dealing with the experimental range only will be found of fundamental importance in later parts of this paper.

3. Now, the graph may be of two distinct kinds—(A) that of a curve or curves, or (B) that of a series of datum points. The first kind, that of a curve, contains the same complete statement of values of Y as does the analogous kind of mathematical formula, which is defined as holding between limits of range of X. The second kind, that of datum points, contains information which may be given also by a table.* This refers to the information which is directly derived from experiment; but, this usually being insufficient for the practical applications, we have to perform interpolation in order to get what we want. This may be done graphically or by application of the calculus, but in either case the result is a guess. It will be here asserted that, à priori, we have nothing to show that the judgment of an engineer or physicist will lead to error more readily than will the corresponding assumptions of a computer. We shall refer to the judgment as the arbiter in this indeterminate question of interpolation.

4. So far we have accepted the results of experiment, but it is evident that such knowledge must (in continuous variation) be inaccurate to some extent; data we get by measuring must be subject to fortuitous error, and may be subject to systematic error due to the system of measurement—instruments may be wrongly calibrated, and so on. Fortuitous error may be made definite by the application of the theory of probability, provided, of course, that the necessary work is done in the experiments; and we may take from this application the information that the true values (but still affected by systematic causes) of the quantities lie within limits of probable error—more probably so than not—the probability of a value being the true one decreasing very rapidly outside these limits, as indicated by the well-known frequency curve. It is much to be regretted that in many researches, even of the classical kind, no attempt is made to assign limits of probable error. In an example which has come under the writer's notice this was not done, although repeated measurements at each datum point were made, with the result that a very laborious research is rendered very much less valuable than it would have otherwise been. The effects of this lack of system are usually not very apparent at the time the research is made; it is only when the matter comes to be looked at from a new standpoint, or examined for residual phenomena, that the absurdity of giving such figures as accurate without a statement of probable error becomes apparent. This, of course, applies to those measurements which form the connecting-link between theory and the things that happen; many practical experiments are made under a well-understood convention as to negligible error.
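For readers unfamiliar with the convention, the probable error of a mean (the deviation which the true value is as likely as not to exceed) is, for normally distributed errors, 0.6745·s/√n, where s is the sample standard deviation of n repeated measures. A sketch with invented measures:

```python
import math

# Probable error of the mean: 0.6745 * s / sqrt(n). The measures are invented.
measures = [10.2, 10.5, 10.3, 10.6, 10.4]
n = len(measures)
mean = sum(measures) / n
s = math.sqrt(sum((x - mean) ** 2 for x in measures) / (n - 1))
probable_error = 0.6745 * s / math.sqrt(n)
print(mean, probable_error)
```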

5. In a graph such information as to probable error could be conveyed by giving a band (twice the probable error in vertical width) instead of a line for a curve, or a row of vertical lines instead of a series of dots for datum values (that is, supposing all the error attributable to the values of Y—i.e., where values of X may be taken as accurate for the purposes of reasoning, as we always suppose).

[Footnote] * See sections 10 to 15.

6. It is obvious, however, that systematic, or what we may call instrumental, error must be eliminated or it will infallibly render any reasoning wrong which is based on the results, provided, of course, that the error be sensible in amount.

7. While the graph forms a very complete representation of the observed facts, and indicates interpolation in the case of datum observations, and in the hands of a person of clear insight may often be the means of reasoning which may not be practicable or even possible by the more formal means of algebraic symbols, yet it is clearly necessary to find, if possible, some formula or function of X which will stand for the graph as well as may be. There are many reasons for this, the chief theoretic one being the enormous developing-power of the algebraic calculus.

8. In the preceding we have considered the graph as the most natural mode of recording phenomena of variation, but we may have occasionally inferential reasons for believing that the phenomenon should follow some particular function of X more or less completely, and it is necessary to examine the rationale of the functions in various cases.

(a.) A function may be logically applicable to a phenomenon. For instance, formulæ which state the results of definition, or those which state such inferences as that the angles of a plane triangle sum to 180°, may be regarded as truly applicable. Even this class may be subject to systematic instrumental error.

(b.) Functions in which there are strong inferential grounds for the belief that they express the substantial truth. For instance, formulæ deduced from Newton's laws of motion may be expected to apply closely to the motion of the major objects of the solar system; but experiments of an accuracy greater than those upon which such laws were founded may always be apt to demonstrate that the functions are not strictly applicable to any given phenomenon, and that there are systematic residual causes which should be taken into account.

(c.) Functions which have some inferential foundation, but the substantial applicability of which it is worth while to question and examine.

(d.) Functions whose foundation is largely hypothetical. This class we may term “empirical.”

(e.) Functions which have no foundation except, perhaps, certain notions of continuity in rates of change, and so on. This class we may term “arbitrary.”


9. It is intended to confine our attention chiefly to the example of the last or last but one class, which is called the “power-expansion” or “Taylor's series formula.”* It is, however, intended that the objections to the use of an arbitrarily systematic mode of computation should apply to all classes with respect to systematic instrumental error, and to all but the first with respect to the effect of systematic residual causes which are not allowed for in the function, or of any mistake or incompleteness in the inferring of the function.

10. Besides the curve and the datum-point graphs, we need to mention an intermediate class—namely, that of experiments which are arranged to give data for many points of X without any attempt to obtain repeated measures at any one point.

11. We might venture to define the characteristic virtues of the two main types of graph by saying that the curve yields a clear idea of the continuity of a phenomenon without allowing any great accuracy to be obtained in the measures of Y, while the datum point allows great accuracy to be attained in the measures of Y, and also permits definiteness to be attained in probable error, but leaves the interpolation to be judged. It may be put also thus: the curve gives a notion of dY/dX, the datum of Y. It is sometimes possible to form a graph of both kinds of measures—to measure accurately datum points and also to get the slope of the curve near these points. This procedure is analogous to that of constructing mathematical tables where datum points are often computed exactly and intermediate points found by Taylor's theorem. By such means very full information would be given of the actual phenomena.

12. A graph of the above-mentioned intermediate class, while it combines the virtues of both main forms, combines also their defects. In contemplating such a graph one would feel more content if a likely value for probable error at a few points of X were provided by the experimenter. The difficulty with this form of measurement is the very large number of measures necessary—theoretically a double infinity.

13. It is perhaps desirable to point out that in datum measurements we usually cannot get either X or Y exactly the same for each measure, accordingly we have to interpolate the values of Y to one common or mean value of

[Footnote] * It is to be observed that, in the case of functions the Taylor's series expansion of which are sufficiently convergent when applied to the experimental range, the result of the application of such a formula is practically identical with the result of the application of an unexpanded function of any class.


X (which we are going to take as absolutely accurate, theoretically). It is common to take the mean of both quantities, a process that often leads to the use of a few more decimal places. A more satisfactory process is either to measure dY/dX or else to estimate it from antecedent knowledge of the likely curve, and then make a graph of the measures of each datum point and analyse it by means of the curve of dY/dX, which will usually be a straight line. From this we can get the probable (fortuitous) error, and also make a note of discordances, which is not always possible when the mean merely is taken.

14. A further advantage lies in the fact that we can avoid taking the mean value for X, and take instead a convenient adjacent value which has few integers, the last few significant figures being made noughts. This affords a vast saving in tabulation and in computation.
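The reduction of paragraphs 13 and 14 can be put in modern terms: each measure of Y is slid along an estimated tangent to a common, conveniently round value of X, and the adjusted values are then inspected for discordances. A minimal sketch (all numbers are invented for illustration):

```python
# Reduce datum measures taken at slightly different X to one common,
# conveniently round value of X, using an estimated slope dY/dX
# (sections 13-14).  All numbers here are invented for illustration.

def reduce_to_common_x(measures, slope, x0):
    """For each (x, y) measure, slide y along the estimated tangent
    to the common abscissa x0: y0 = y + slope * (x0 - x)."""
    return [y + slope * (x0 - x) for x, y in measures]

# Scattered measures near X = 10, with an estimated slope of 2.0.
measures = [(9.98, 19.93), (10.03, 20.11), (10.01, 20.00)]
adjusted = reduce_to_common_x(measures, slope=2.0, x0=10.0)

mean_y = sum(adjusted) / len(adjusted)
# The spread of the adjusted values (not merely their mean) can now be
# examined for discordant measures, as the text recommends.
```

Taking X0 = 10 exactly, rather than the mean of the measured abscissae, is precisely the saving in tabulation noted in paragraph 14.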

15. It may be thought by some that such matters as are being advanced are refinements for which time is too short; but the writer would appeal to those who may have honestly tried to get a reliable value for any physical constant which is not absolutely simple or else fundamental—even a there-or-thereabouts value—to say whether an enormous amount of labour has not been absolutely wasted by the neglect of such principles.

Least Squares.

16. An assertion will now be given which it is believed can be substantiated by reference to some recent text-books—that if a formula be applied to the results of observation so that the sum of the squares of the residual quantities or deviations of observed quantities from those calculated is a minimum with regard to the constants of the formula, then this formula may be referred to as the best, or even the most probable, and, in fine, that such application is a strictly scientific process. It must not be supposed for a moment that it is intended to convey that this view is held by accurate thinkers, but simply that it is observable that others have been led by the beauty of the method, and the very evident desirability of possessing a method of computation which should be free from personal bias, into an unwarrantable and indiscriminate promulgation of the formal procedure of the method.

17. There are two distinct objections to the mode of computation which has been described, and which it is hoped may be described as “least squares” without misunderstanding—namely: (1) That least squares observably tends to eliminate the application of the judgment to the indications of a graph, and, further, that it tends to make systematic deviations


look as much like true errors as possible; and (2) that the computations of least squares are often prohibitively laborious, thus practically preventing the analytical application of all sorts of formulæ which it may be easily possible to apply by other means.

18. The first objection will be illustrated by a couple of examples. Suppose we had a graph which consisted of the curve of a phenomenon following exactly (although the computer is not aware of this) a power-expansion formula of four terms, or cubic; and that for certain reasons—say, the labour of least squares—we are unable to use a formula of more than three terms, or parabolic. Then it can easily be seen, or proved, that least squares (which becomes a problem in integration in the case of a continuous curve) leads to a symmetrical arrangement of the deviations the proportions of which are shown in Graph A. It is pretty clear that for the observed range this arrangement of deviations strikes a good average; but if extrapolation be necessary, or even a terminal value be an important physical constant, would it not be preferable to accept the notions which one gathers from the shape of the curve and to extrapolate by means of some such freehand curve as is drawn dotted? The answer seems obvious enough when put in this way, and yet an almost precisely analogous condition of things has been the cause of considerable error in a certain oft-quoted classical research which the writer is recomputing by the graphic process.

19. A still more conclusive example is contained in the very common case of a few datum points representing the only observed facts. Here a physicist will often feel justified in drawing a curve for interpolation, and will have a very strong conviction of the unlikelihood of certain other curves which are much different from one he might draw. If least squares is followed up it is obvious that it leads to an exact representation of n datum points in a formula of n constants. In the case of the power-expansion formula the solution is identical with that of simultaneous equations. Graph B shows the least-square curve passing through six points—at X = 0, 0.2, 0.4, 0.6, 0.8, and 1, Y being zero at all points except 0.6. The indeterminate question to be here answered is whether there are any particular virtues about the least-square curve as compared, for instance, with the dotted curve (which was made by a flexible spring passing over rollers at the points). Is not the interpolation here very questionable, and the extrapolation doubtful in the utmost degree? It may be here remarked that the extrapolation of such formulæ of high degree is always very doubtful, except when there is a strong convergency.


20. We have here got two clear examples of what least squares leads to. In the first case, that of the curve, as we shall afterwards see, the shape of the curve of deviations is most strongly indicative of the need for the application of a formula of four terms, if not more. We have drawn the deviations according to least squares, which may be proved to arrange the deviations (given simply the direction of the axis of X) from a cubic phenomenon to which a parabola is applied, and where the observations are at nearly equal intervals of X, with a symmetry similar to that of the graph. The deviations, it will be observed, run ± (−, 0, +, 0, −, 0, +). By the graphic process we should arrange the parabola so that they run ± (0, +, 0, −, 0)—so that, in fact, they bear a close resemblance to the standard cubic of Graph II. It is asserted that there is less likelihood of systematic deviations so arranged being mistaken for fortuitous errors than is the case with the least-squares arrangement. It may be again mentioned that this example is not, in its general features, a mere hypothetical case.
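The asserted run of signs can be checked numerically. The following sketch (using NumPy merely as a modern convenience) fits a least-squares parabola to the standard cubic at closely spaced, equal intervals of X and inspects the deviations at the ends and at the two lobes:

```python
import numpy as np

# Fit a parabola (degree 2) by least squares to an exactly cubic
# "phenomenon" over the range 0..1, and inspect the run of deviations
# (section 20).
x = np.linspace(0.0, 1.0, 2001)
y = x - 3 * x**2 + 2 * x**3          # the standard cubic of the text

coeffs = np.polyfit(x, y, 2)          # the least-squares parabola
resid = y - np.polyval(coeffs, x)

# Sample the deviations at the ends and near the interior lobes:
# the run should be (-, 0, +, 0, -, 0, +), as the text asserts.
lobes = resid[np.searchsorted(x, [0.3, 0.7])]
```

The deviations come out negative at X = 0, positive near 0.3, negative near 0.7, and positive at X = 1, with zeros between, exactly the symmetrical pattern described.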

21. In the second case, that of six data, we have got a curve from our least squares which we have asserted to be quite unjustifiable, and not to be compared with the results that one would get from a common-sense judgment of the graph—not to be compared, that is, in avoiding rash assumptions as to the truth of the matter.

22. Following our definition of least squares, we have neglected to take any account of fortuitous probable error in these examples, but its vital necessity in such cases will be sufficiently obvious from what has been said in previous sections. The effect of probable error in the graph is to obscure the true points or line of the true curve of the phenomenon. When this occurs to such an extent as to hide any system there may be in the deviations, then, provided we are quite sure that our formula is substantially accurate compared with the scale of the probable errors, we might reasonably employ least squares to systematize our computations. This is a matter which is dependent upon circumstances, and still more on judgment, and we believe that the employment of the latter will be found to be very largely dependent upon whether the treatment of empirical formulæ is taken as a mere extension of the beautiful applications of the theory of probability to astronomy and surveying or as a most important branch of the graphical calculus. This part of the subject is too complicated to treat of here except by suggestion, but we may refer to the example in Dr. F. Kohlrausch's work (see section 38), where a case of this complicated kind is given as if it were a simple and logical application of least squares; and where, moreover, the data are deliberately subjected to extrapolation to the extent of half the observed range. It will be observed that we do not say that in this example anything better than is done by least squares could be done with such data, but we do say that it is absolutely misleading as an example of experimentation and of computation.

23. It is now necessary to draw attention to some theorems in the graphical calculus in which the combination of such curves as correspond to functions of the algebraic calculus is treated. X is the variable, A, B, &c., the (variable) constants. Suppose we have a curve whose function is—

F(X) = f1(X) + f2(X) + (other similar terms),

then we may build up the curve of F(X) by drawing the curves f(X) all to the same scale, and then adding their ordinates at corresponding points of X. This is the theorem of sliding, for we conceive the ordinates of each of the component curves (of f(X)) to be capable of being slid over one another parallel to themselves, or to the axis of Y, and we so slide them that they are placed end to end, when we have the ordinates of the curve of the additive function F(X); then always, if we have found enough ordinates, we can complete the curve by freehand drawing, or even by eye without drawing.

24. Next we have the theorem of one-way stretch of ordinates, by which we can introduce variation in the constants of additive functions which are linear in the said constants. Thus, considering one term of the additive function F(X), and writing it with its constants displayed, its expression is A.f(X). The theorem is that if we draw the curve of this function, making A take a convenient standard value—say, unity—we can find the ordinates corresponding to any given value of A by the use of some such device as proportional compasses applied to the curve we have drawn. So also with other similar terms. There is a curious point with these constants which had better be pointed out to prevent confusion—namely, that it is immaterial whether any algebraic relationships (independent of X variation) exist between them or not, provided that each is not fixed by any combination of the others, but is capable of taking up independent values. Such relationships should be studied, however, with a view to facilitating the graphical work. Thus, if two terms are f1(X, A) and f2(X, A, B), then we may have reason to prefer to take them as f1(X, A) and f2(X, C), or as f3(X, A) and f4(X, B), in the latter case breaking up the second function. Considerations such as these may be traced in the process for Taylor's series formulæ.
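The theorems of sliding and of one-way stretch can be stated compactly in code; a minimal sketch, assuming nothing beyond the standard curves already introduced:

```python
import numpy as np

# Sections 23-24: the theorem of sliding builds the curve of an
# additive function by adding ordinates at corresponding X; the
# one-way stretch gives any term A.f(X) from a single standard curve
# drawn once with A = 1.
X = np.linspace(0.0, 1.0, 101)

parabolic = X - X**2                  # standard curve, constant = 1
cubic = X - 3 * X**2 + 2 * X**3       # standard curve, constant = 1

A, B = 2.0, -0.5                      # any independent constants
# Stretch each standard curve of ordinates, then slide them end to end:
F = A * parabolic + B * cubic

# The result agrees with direct evaluation of the additive function.
direct = A * (X - X**2) + B * (X - 3 * X**2 + 2 * X**3)
```

Drawing each standard curve once and scaling its ordinates is exactly the economy the proportional compasses afford in the manual process.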

25. This is all we shall need for the Taylor's series analysis, but we may refer to text-books on least squares


for the application of Taylor's theorem to the approximative treatment of non-linear functions, and mention two other theorems of the graphical calculus which are of occasional use. In cases where X (or Y) is invariably associated with a constant by addition or by multiplication we get possible graphical operations, for, if the expression is f(X + A) we may draw a curve to f(X) and then introduce the effect of any value of A by shifting the curve bodily along its X axis; and so also with regard to Y. In the case of multiplication we get a stretch of a drawn standard curve in either one way or in two ways. For, to take the latter case, when f1(A × Y) = f2(B × X), having drawn a standard curve to convenient values of A and B, we get the effect of any values of either constant by uniformly stretching the curve in directions parallel to both axes. This can be effected by means of throwing shadows, and appears of value in our subject, since the frequency curve is of this form (with an immaterial relationship between the constants).
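The shifting and stretching operations of section 25 likewise reduce to re-reading a single drawn standard curve; a small sketch (the curve f is an arbitrary illustrative choice, not one from the text):

```python
import numpy as np

# Section 25: a constant added to X shifts a drawn curve bodily along
# the X axis, and a constant multiplying X stretches it uniformly.
def f(x):
    # an arbitrary illustrative standard curve
    return 1.0 / (1.0 + x**2)

X = np.linspace(-2.0, 2.0, 401)
A = 0.5                           # shift constant
B = 3.0                           # stretch constant

shifted = f(X + A)                # the standard curve slid along X by A
stretched = f(B * X)              # the standard curve stretched one way

# Reading the shifted curve at X is the same as reading the standard
# curve at X + A.
check = 1.0 / (1.0 + (X + A) ** 2)
```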

26. Reverting to the question of appealing to the judgment to detect systematic deviation from a formula, we see that we expect the deviation to become evident as a recognisable additive curve—i.e., as if it were representable by a term f(X). Clearly, this is frequently the case even where the deviations may be logically functions of Y, as, indeed, we supposed all errors to be in section 5; for, in a graph, if a function of X be represented, the corresponding function of Y is also automatically represented by the curve. By such means we can sometimes form an estimate of causes of error or deviation, and sometimes also—as we shall see in the case of the Mississippi Problem—be able to form an idea whether it is any use or not to go on complicating the particular formula which we are employing. When our resources are practically exhausted we shall give our formula, together with a statement of its range and the relation between probable (fortuitous) error and observed deviation, exhibiting the latter quantities in a graph of deviations, and leave it for others to judge what degree of likelihood attaches to our formula. Circumstances may lead us to employ least squares, but the value of our experiments cannot be adequately indicated unless we provide at least the equivalent of the details mentioned.

A Graphic Process for applying Power-expansion or Taylor's Series Formulæ.

27. A process will now be described by which it is very easy to apply graphically to data formulæ of four or even five terms in ascending powers of the variable.

28. It follows from Taylor's theorem that, if we use a


formula of n terms to approximate to a given curve, we obtain exactly the same choice of approximations whatever the scale of the variable may be or wherever its origin may be. We may therefore elect to make the experimental range unity in a new variable, and make the beginning of the range the origin. Thus, if the experimental variable X ranges from p to p + q, we take as a temporary variable x = (X − p)/q. It may be noted that in the case of a continuous function the corresponding Taylor's series becomes—

Y = Yp + [(q/1)(dY/dX)p]·x + [(q²/1·2)(d²Y/dX²)p]·x² + &c.

29. If this expression were very convergent our analysis would lead us to the values of the bracketed quantities. Since, however, curves in general cannot be said to be representable by continuous functions, and particularly convergent ones, we cannot expect to make this conception of curves being built up of the effects of initial rates of change our basis of operations. We may with great convenience utilise the average rates of change for the whole range. Something of the sort is done in using an interpolation-table method, such as that given in “Thomson and Tait” (1890, i., p. 454).
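The change of variable in section 28, and the way the powers of q enter the bracketed quantities, can be checked for a phenomenon whose derivatives are known exactly. The range and the function (Y = e^X) below are invented for illustration:

```python
import math

# Section 28: map the experimental range [p, p + q] on to the unit
# range by the temporary variable x = (X - p)/q; the Taylor
# coefficients then carry powers of q.  The range and the phenomenon
# (Y = e^X) are invented for illustration.
p, q = 2.0, 0.5

def series(x, terms=5):
    # every derivative of e^X at X = p is e^p, so the bracketed
    # quantities of the text are e^p * q^k / k!
    return sum(math.exp(p) * (q * x) ** k / math.factorial(k)
               for k in range(terms))

X = 2.3
x = (X - p) / q                   # temporary variable: 0.6
approx = series(x)
exact = math.exp(X)
```

With five terms the truncated series already reproduces the exact value to a few parts in ten thousand over this range, illustrating the convergence spoken of in section 29.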

30. The graphic process consists in taking for the first term the initial value of Y as given by the graph; for the second the average rate of change for the whole experimental range; for the third the average curvature for the whole scale expressed in terms of a parabola; for the fourth the difference in curvature of the first and second halves of the range expressed as a cubic standard formula; and so on. Up to the fourth term at least there is no difficulty whatever in keeping the effect of each of these three operations in the mind, and in forming one's conclusions whether a certain formula is as good as can be possibly got. The standard formulæ which the writer has used for this purpose are for the parabola (x − x²), and for the cubic (x − 3x² + 2x³). This is as far as we shall go for the present, but a table is given of some standard functions which might be used up to x⁸ (or formulæ of nine terms) if one were clever enough to perform the work with all of them at once, or under special circumstances. These formulæ will be reverted to again.

31. The practical work is now very simple. We draw the graph of the experiments in terms of the temporary variable (which we should have mentioned is better not arranged to have its scale exactly equal to that of the experimental variable, but as nearly as is practicable, keeping p and q


simple numbers for convenience in conversion),* and then, provided with a scale to measure the constant term, a straight-edge to produce linear terms and drawn curves of the standard parabolic and cubic, and with a protractor or proportional compasses, we proceed to build up a curve the ordinates of which are added proportions of each of these four constituents, till we get a curve that is as nearly like the given one as possible. It will be quite obvious when this is done that we have a most clear idea of the prospective advantages of any other cubic formula whatever, and that we can arrange the deviations in any desired way—for instance, to arrange them for the application of a quartic formula, if it appears that such a course is advisable. We shall also develop a decided opinion on the subject of the application of common-sense to the resultant curve of deviations, both for interpolation and for extrapolation, and for residual causes or for error in the theory of the formula. In cases where the accuracy of the figures is great it may be necessary, after a rough analysis, to replot the deviations to a larger scale, so as to get over the limited accuracy practicable in a graph.
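The manual build-up of sections 30 and 31 can be imitated numerically. In this sketch the four proportions are found by least squares merely as a stand-in for the eye-and-compasses adjustment of the text, and the target curve is an invented smooth phenomenon:

```python
import numpy as np

# Sections 30-31: build up an approximating curve from four
# constituents -- a constant, a straight line, the standard parabolic
# (x - x^2), and the standard cubic (x - 3x^2 + 2x^3).  The four
# proportions are here found by least squares purely for convenience;
# in the text they are adjusted by eye against the graph.
x = np.linspace(0.0, 1.0, 201)
target = np.sin(1.5 * x)          # an invented smooth "phenomenon"

basis = np.column_stack([
    np.ones_like(x),              # constant term
    x,                            # linear term
    x - x**2,                     # standard parabolic
    x - 3 * x**2 + 2 * x**3,      # standard cubic
])
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
built_up = basis @ coeffs
max_dev = np.max(np.abs(target - built_up))
```

The residual curve (target minus built_up) is then examined, exactly as the text directs, to judge whether a quartic constituent is worth adding.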

32. Many details will become obvious if a trial is made, and we need not pause over them; but it may be mentioned that the process is obviously applicable to all such formulæ as are made up of sums of terms each of which is linear in and contains only one constant. So the process might be arranged for harmonic analysis, or for the solution of simultaneous equations, and so on. The process is approximative, so that it is of indefinite accuracy, and is limited solely by the power of the judgment to indicate what alterations are desirable. The vast difference between this procedure and that of least squares will be apparent from the fact that we may be led to apply formulæ which have more terms or constants than there are datum points. This is due to the part we are allowing the judgment to play in controlling the interpolation.

33. The process is evidently susceptible of mechanical treatment, and the writer hopes to be enabled to construct a machine for this purpose.

34. With respect to the higher-power formula, there is a point which seems of theoretic interest in simplifying mathematical formulæ which are to be applied for a definite range (the converse of our experimental range), for, as is noted in

[Footnote] * It is to be noticed that in these formulæ the adjustment of the scale by introducing q is a perfectly simple matter of arithmetic; but to alter the zero point (p) is more troublesome, and should be avoided when possible. The Z functions of the Appendix afford an alternative range of −1 to +1, using the same curves.


the “Notes on the Graphs,” the numerical values of these functions (even better formulæ may be obtained for this particular purpose) become very small, within the range, in comparison with the numerical value of the coefficient of xⁿ (that is, the highest power of x in the standard formula). This means that in the expansion as given in section 28, and when it is converging, we may for values of x from 0 to 1, by throwing the series into standard form, eliminate one or more of the higher-power terms, and so obtain an expression which is practically as accurate as the simple Taylor series, and is less in degree. The extent to which this may be expected to go is to be seen in the decreasing numerical or percentage values of the ordinates as the degree becomes large—with the octic it is already 1.3 parts in 10,000. Of course, in doing this we sacrifice all pretence to accuracy outside our defined range.

35. To sum up, we may emphasize the importance of the idea of the experimental range, as we have seen this leads to a great accession of power in the case of what we have ventured (not without precedent, of course, but, still, with some misgiving) to call “Taylor's series formulæ.” An analogous idea is familiar enough in the “period” of the “Fourier series formulæ.” Secondly, we venture to think that too much stress cannot be laid on the necessity for the statement of probable error in the individual data. This matter is strongly stated in the extract from Sir G. B. Airy's works given in section 38. Even the warnings of so great an authority as the late Astronomer Royal seem to have been greatly disregarded.

36. Thirdly, however plausible or apparently authoritative the theory of a physical phenomenon of variation may be, the experimental data upon it should be so prepared that the precise support given to the theory by the observations should be made evident, as can often be done by a graph either of the observations themselves or of the deviations from the aforesaid plausible theory, the graph exhibiting probable error in the way mentioned in section 5.

37. Finally, the writer wishes to disclaim any novelty in the foregoing, with one exception, and to apologize for lack of references, which are, indeed, very incomplete in Wellington. His object has been to collect a number of what he believes to be true although, no doubt, trite remarks, with the object of constructing an argument which he has been unable to find in any of the works to which he has access, and which is necessary for the development of another paper, to which reference has been made. The portion for which it is thought some novelty may be claimed is that of the treatment of the Taylor's series formulæ and similar linear


additive functions.* It is thought that the statement in Thomson and Tait's “Natural Philosophy” (ed. 1890, p. 454) of an interpolation method, and the reference to “a patient application of what is known as the method of least squares” in Professor Perry's “Calculus for Engineers” of 1897 (p. 18), form a sufficient ground for this conclusion.


38. Those who may be inclined to question the necessity of such remarks as have been made upon an admittedly insufficient definition of least squares are recommended to examine, in the light of the considerations that have been advanced, the example of least squares put forth in Dr. F. Kohlrausch's work, English translation (called “Physical Measurements”), of 1894, from the German of 1892 (7th ed., chap. 3), and also Professor Merriman's “Theory of Least Squares” (1900 edition), with reference to Clairault's formula (about page 126), and from page 130 to the end of the Mississippi Problem. If, also, it is desired to observe how even legitimate least squares may lead to error, an examination may be made of the warnings of Sir G. B. Airy in the conclusion of his work on the “Theory of Errors of Observation, &c.” (pp. 112, 113). The 1874 edition of this work is available in the Public Library. A paper by F. Galton, F.R.S., in the “Proceedings of the Royal Society” of 1879, page 365, also contains a significant warning that the fundamental principle of the arithmetic mean is not always reliable. This should be considered in relation to the use of a curve of dY/dX in treating measures where X cannot conveniently be adjusted to the desired datum point for every observation.

39. We may also venture on the suggestion that, while many writers have been quite wrong in calling the constants of an empirical formula the “most probable ones,” those who have called them “the best” merely may have been quite justified in making use of such an expression where it has not been shown that analytical resources of greater power are available, as has been the case with the Fourier series, and it is hoped will be now seen to be the case with the Taylor series and other linear additive formulæ. Further, the habit of referring to empirical formulæ as “laws” may have helped to give such formulæ an importance which, compared with the graph, they assuredly do not possess.

[Footnote] * It should be mentioned, however, that Professor Callendar (Phil. Trans., 1887, p. 161) uses the formula of the standard parabola in connection with the reduction of platinum thermometry.

[Footnote] † In this connection see section 4.



The “Mississippi Problem” is of some celebrity, and may with advantage be discussed. It refers to the velocity of the water at different depths in the Mississippi at Carrollton and Baton Rouge. The experiments were made in 1851, and were reduced by the experimenters by means of a parabolic approximation, which they applied according to common-sense principles similar to those of the present writer, except that they apparently did not perceive the bearing of the facts that are fundamental theorems in the graphical calculus (section 23). Consequently they failed to get such a good approximation to the experiments as is possible, although many engineers may think their approximation quite sufficient. Then in 1877* Professor Merriman, after referring somewhat caustically to “tedious approximative methods,” proceeds to give a reduction by what he calls the “strictly scientific” method of least squares. This application is one to which our definition of least squares is strictly applicable. The calculations are given also in Professor Merriman's “Theory of Least Squares,” 1900.


Again, in 1884, Mr. T. W. Wright, “Adjustment of Observations,” page 413, reverts to the phenomena, applying both a parabolic and a cubic formula by least squares; and he remarks that, since the latter formula yields a smaller “sum of the square of the residual errors”—the italics are the present writer's—“the observations are better represented by the formula last obtained.” From the graph of deviations obtained by the present writer he has no hesitation in saying that the indications are for the application of a discontinuous formula, the first section holding from depth 0 to 0.5 or 0.6, and the other from that to 0.9, the formulæ differing chiefly in the constant term. This reduces the deviations to about 1/2000 at most (judging from the graph), against about 1/700 with the least-square parabolic. The value of the probable (fortuitous) errors is not given or discussed in either reference, so that it is quite a matter of speculation whether this indication of discontinuity is genuine or whether it is a mere matter of luck. At any rate, we should not attempt to improve such a graph by means of a cubic formula; it evidently would require a formula of a large number of terms to reduce the deviations to as small limits as those of the discontinuous parabolic. It is to be noted that all these considerations are obvious upon a mere inspection of a graph of the deviations which are given by Professor Merriman, and also that it is not suggested that

[Footnote] * Journ. Frank. Inst., C. iv., p. 233.


the motion of the water was discontinuous; more likely there are systematic instrumental errors. (See Graph C.)
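The advantage claimed for the two-section parabolic can be imitated in a rough modern sketch. The figures below are entirely synthetic—a parabola whose constant term jumps at depth 0.6, plus a little noise—and are NOT the observed Mississippi velocities; NumPy is used merely for the fitting:

```python
import numpy as np

# Synthetic stand-in for velocity-depth data: a parabola whose constant
# term jumps at depth 0.6, plus a little noise.  These numbers are
# invented for illustration only.
rng = np.random.default_rng(0)
depth = np.linspace(0.0, 0.9, 19)
velocity = (1.0 + 0.3 * depth - 0.5 * depth**2
            + 0.05 * (depth > 0.6)          # the discontinuity
            + rng.normal(0.0, 0.002, depth.size))

def max_deviation(d, v):
    """Greatest residual of a least-square parabola fitted to (d, v)."""
    return np.max(np.abs(v - np.polyval(np.polyfit(d, v, 2), d)))

single = max_deviation(depth, velocity)     # one parabola over the range
split = max(max_deviation(depth[depth <= 0.6], velocity[depth <= 0.6]),
            max_deviation(depth[depth > 0.6], velocity[depth > 0.6]))
# the discontinuous (two-section) parabolic leaves smaller deviations
```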


A few remarks may be made with respect to the arrangement of deviations in least-square form, the graphic process in the case of the power-expansion formula especially giving very convenient first approximations to the least-square values of the constants. If we take the expression “the mean” to signify that the algebraic sum of the deviations concerned is zero, and “the weighted mean” the same with respect to the deviations multiplied by datum values of certain weighting functions, then we may define least squares as the process which makes the weighted mean zero for all the weighting functions which can be obtained by differentiating the formula with regard to each of the constants separately and introducing the datum values of X. By writing down the equations which are needed to bring this about we obtain the normal equations of least squares, and we notice a valuable check on the correctness of a least-square reduction,* for in the power-expansion formula we see that the mean must hold, and also the weighted mean of the deviations, each multiplied by the datum values of x, of x2, and so on to the last degree; or for x2 we may substitute the standard parabolic, and so on. If, considering the formula to be in the standard terms, we examine a graph of deviations we can easily see that to approximate to least-square form we must take out all the amounts of standard components that will diminish the general magnitude of the deviations, but without allowing our judgment to come into play with regard to the run of any systematic deviation.
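The definition just given, and the check it affords, can be verified directly: after a least-square fit of a power-expansion formula, the plain mean of the deviations vanishes, and so does every weighted mean with respect to the datum values of x, x², and so on. A sketch with invented data:

```python
import numpy as np

# The definition of the Appendix: least squares makes the plain mean of
# the deviations zero, and likewise every "weighted mean" whose
# weighting function is the derivative of the formula with respect to
# one of its constants -- for a power-expansion formula, the datum
# values of x, x^2, ...  The data below are merely illustrative.
x = np.linspace(0.0, 1.0, 50)
y = np.exp(x)

degree = 3
V = np.vander(x, degree + 1)              # columns x^3, x^2, x, 1
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
dev = y - V @ coeffs

plain_mean = dev.sum()
weighted_means = [float(np.sum(dev * x**k)) for k in range(1, degree + 1)]
# all of these vanish (to rounding), giving the check of the text
```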

A little practice will often enable us to get such a close approximation to least-square form that the solution of the normal equations becomes much simplified. The normal equations, again, may be found more easily solved if made up in standard terms, for in examples similar to that of section 18 some of the coefficients in the normal equations tend to become zero, with formulæ of larger degree than the second—that is, using the formulæ of the “Notes on the Graphs.”


It is perhaps profitable to remark that, for the proper appreciation of a graph, we must get rid of the confusion that sometimes arises from the algebraical usage of making the symbols + and − stand for the operations of addition and subtraction and also as signs to designate whether a magnitude

[Footnote] * Given in Mr. T. W. Wright's book, page 144.


is positive or negative. The usage being as it is, we often in physical problems need to go back to the old arithmetical notion of negative quantities being impossible or imaginary, and consider the graph accordingly. For instance, take Taylor's theorem. It is usually expressed in one formula for the introduction of positive or negative increments; but Taylor himself (De Morgan, P. Cyc., p. 126) gave two formulæ, one for increments (or additions to x) and another for decrements (or subtractions from x). If now we take Maclaurin's form, we readily see that the second formula is impossible if we cannot reduce the magnitude or quantity to less than nothing.

Thus, to take a typical case, that of the magnetisation (B—H) curve of iron, we cannot properly regard B and H as positive and negative quantities, but as direct and reverse positive magnitudes. A Taylor's series increment curve may then, perhaps, hold for magnitudes in either direction. If, however, we adhere to the algebraic usage, we shall be unable to express both of the symmetrical halves of the curve unless we employ only odd-power terms in our formula. This is obviously a very great disadvantage from a graphical point of view. As an indication of the contrary advantage it may be mentioned that a complete half of the sine curve can be built up of added proportions of the standard parabolic and quartic curves, with an extreme error of about 1 in 1,000 units, π radians forming the unit range of x.
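The closing claim about the sine curve is easy to verify: fitting proportions of the standard parabolic and the symmetrical quartic (B) of the “Notes on the Graphs” to sin(πx) over the unit range gives an extreme error of the order of 1 in 1,000. The proportions here are found by least squares merely for convenience:

```python
import numpy as np

# A half-wave of the sine curve built from added proportions of the
# standard parabolic and the symmetrical quartic (B), x in 0..1
# standing for 0..pi radians.  Least squares stands in for the manual
# adjustment of the proportions.
x = np.linspace(0.0, 1.0, 1001)
parabolic = x - x**2
quartic = 3 * x - 19 * x**2 + 32 * x**3 - 16 * x**4   # quartic (B)

basis = np.column_stack([parabolic, quartic])
coeffs, *_ = np.linalg.lstsq(basis, np.sin(np.pi * x), rcond=None)
approx = basis @ coeffs
extreme_error = np.max(np.abs(np.sin(np.pi * x) - approx))
```

The extreme error comes out at roughly a part in a thousand, in agreement with the figure stated in the text.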

Further, unless we adhere to the arithmetical notions, we are led to alternative values and imaginary quantities when, as may be desirable in the example of the B—H curve, we employ formulæ of fractional powers. Here the only alternative is to drop the fractions which have even denominators, which we can easily foresee may make formulæ of this class impracticable for arbitrary approximation to a curve.

To make clear what is meant, consider the expression √−1. Arithmetically, −1 directs 1 to be subtracted from something which appears in the context. To take the square root of that which directs 1 to be subtracted from something else is evidently meaningless arithmetically. So also with (−1)², and so on. Algebraically we here take the symbol − to indicate that the number to which it is attached is negative in quality or impossible arithmetically. This quality is also indicated by using a different symbol, i²ⁿ, instead of + and −, where i is an imaginary unit powers of which when combined with arithmetical symbols make quantities impossible in arithmetic. It is conventions as to the effect of powers of i upon the directions to add or subtract which enable us to perform calculations upon arithmetical quantities by algebraical methods with only occasional ambiguities.


Enough has now been said to guard against misleading use of the symbols + and − in graphical work.

Notes on the Graphs. (Plates XXXVI., XXXVII.)

Graph A gives the characteristic curve of deviations of a cubic-formula curve to which a parabolic approximation is applied (section 20); Graph B is that of section 21; Graph C that of the “Mississippi Problem,” Appendix I. Graph I. shows the curve of the standard parabolic, and Graph II. that of the cubic. Graph III. shows two symmetrical quartics: (A), zero at x = 0, ⅓, ⅔, and 1, and having the formula

(A) = 2x − 11x² + 18x³ − 9x⁴,

and (B), zero at x = 0, ¼, ¾, and 1, with the formula

(B) = 3x − 19x² + 32x³ − 16x⁴.

The maxima reach 0.11 and 0.25 respectively, or 1.2 and 1.6 per cent. of the numerical value of the coefficient of x⁴. Graph IV. is a symmetrical quintic, zero at x = 0, ¼, ½, ¾, and 1, with the formula

(C) = 3x − 25x² + 70x³ − 80x⁴ + 32x⁵.

Its maxima reach 0.11, which, the coefficient of x⁵ being 32, represents 0.34 per cent. of the value of the latter. Graphs V., VI., and VII. are specimen standards of the hexic, heptic, and octic degree respectively. Their formulæ are given in the accompanying table. They are all made zero at the same points as the previous quintic. The curves all being symmetrical about x = 0.5 are, as has been before noted, both simpler in form and more easy to compute values from if the origin of the abscissa is taken at x = ½ and the scale of the variable halved. The formulæ are accordingly given in terms of z = 2x − 1, as well as in x, which is the most straightforward variable for common cases of a few terms.
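The quoted maxima and percentages can be checked by direct evaluation on a fine grid (the small differences from the quoted figures are rounding in the paper's two-figure values):

```python
# Check the maxima of quartics (A), (B) and quintic (C) on 0 <= x <= 1,
# and express each as a percentage of its highest coefficient.
def quartic_a(x):
    return 2*x - 11*x**2 + 18*x**3 - 9*x**4

def quartic_b(x):
    return 3*x - 19*x**2 + 32*x**3 - 16*x**4

def quintic_c(x):
    return 3*x - 25*x**2 + 70*x**3 - 80*x**4 + 32*x**5

grid = [i / 10000 for i in range(10001)]
for f, coeff in ((quartic_a, 9), (quartic_b, 16), (quintic_c, 32)):
    peak = max(abs(f(x)) for x in grid)
    # peaks come out near 0.111, 0.25, 0.113, i.e. about 1.2, 1.6,
    # and 0.35 per cent. of the coefficients 9, 16, and 32
    print(round(peak, 3), round(100 * peak / coeff, 2))
```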

It will be remarked that, so far as these standard formulæ have been developed, it has been arranged to keep the formula simple. In constructing formulæ for actual practice, however, it may be better to sacrifice the mathematical simplicity altogether in order to obtain curves that are convenient for the visual processes of analysis (see Postscript).

The percentage values of the maximum ordinates in these curves compared with the value of the coefficient of the highest power of x are as follows: Hexic, 0.39 per cent.; heptic, 0.035 per cent.; octic, 0.013 per cent. Thus, if we throw a formula into the standard form we can see what the effect will be if we throw away the highest term. It is evident that we shall often be able thus to reduce a formula of


a high degree to a simpler expression with very little error, but only for the range between the limits of the variable (original or transformed) 0 and 1.
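The degree-lowering argument can be made concrete: subtracting the right multiple of the octic standard (tabulated below) from any degree-8 formula removes the x⁸ term, and the value between x = 0 and 1 changes by at most about 0.013 per cent. of that coefficient. The example coefficient here is arbitrary:

```python
# Drop the x^8 term of a formula by subtracting (a8/64) times the octic
# standard; measure the worst change this introduces on 0 <= x <= 1.
def octic_std(x):
    return (3*x**2 - 34*x**3 + 151*x**4 - 340*x**5
            + 412*x**6 - 256*x**7 + 64*x**8)

a8 = 500.0                                  # x^8 coefficient of some formula
grid = [i / 2000 for i in range(2001)]
worst = max(abs((a8 / 64) * octic_std(x)) for x in grid)
print(worst)             # about 0.066
print(100 * worst / a8)  # about 0.013 per cent., as quoted
```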

Some Symmetrical Standard Functions.

(Abscissa, x, or z = −(1 − 2x) = 2x − 1.)

Term.           Formula in z or x.
Constant.       Unaltered.
Linear (z)      +½(1 + z).
   "    (x)     x.
Parabolic (z)   +¼(1 − z²).
   "      (x)   +x − x².
Cubic (z)       −¼z(1 − z²) = +¼(z³ − z).
   "  (x)       +x − 3x² + 2x³.
Quartic (z)     −¼(1 − z²)(1 − 4z²) = −¼(1 − 5z² + 4z⁴).
   "    (x)     +3x − 19x² + 32x³ − 16x⁴.
Quintic (z)     +¼z(1 − z²)(1 − 4z²) = +¼z(1 − 5z² + 4z⁴).
   "    (x)     +3x − 25x² + 70x³ − 80x⁴ + 32x⁵.
Hexic (z)       −1/16(1 − z²)²(1 − 4z²) = −1/16(1 − 6z² + 9z⁴ − 4z⁶).
   "  (x)       +3x² − 22x³ + 51x⁴ − 48x⁵ + 16x⁶.
Heptic (z)      +z/16(1 − z²)²(1 − 4z²) = +z/16(1 − 6z² + 9z⁴ − 4z⁶).
   "   (x)      +3x² − 28x³ + 95x⁴ − 150x⁵ + 112x⁶ − 32x⁷.
Octic (z)       −z²/16(1 − z²)²(1 − 4z²) = −z²/16(1 − 6z² + 9z⁴ − 4z⁶).
   "  (x)       +3x² − 34x³ + 151x⁴ − 340x⁵ + 412x⁶ − 256x⁷ + 64x⁸.
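Since the printed table suffered badly in transcription, the following check confirms that each z-form, with z = 2x − 1, reproduces the corresponding expanded form in x:

```python
# Consistency check on the table of standard functions: evaluate each
# z-form and its expanded x-form at several abscissæ and compare.
def z_of(x):
    return 2 * x - 1

def pairs(x):
    z = z_of(x)
    return [
        ((1 + z) / 2,                        x),                    # linear
        ((1 - z**2) / 4,                     x - x**2),             # parabolic
        (-z * (1 - z**2) / 4,                x - 3*x**2 + 2*x**3),  # cubic
        (-(1 - z**2) * (1 - 4*z**2) / 4,
         3*x - 19*x**2 + 32*x**3 - 16*x**4),                        # quartic
        (z * (1 - z**2) * (1 - 4*z**2) / 4,
         3*x - 25*x**2 + 70*x**3 - 80*x**4 + 32*x**5),              # quintic
        (-(1 - z**2)**2 * (1 - 4*z**2) / 16,
         3*x**2 - 22*x**3 + 51*x**4 - 48*x**5 + 16*x**6),           # hexic
        (z * (1 - z**2)**2 * (1 - 4*z**2) / 16,
         3*x**2 - 28*x**3 + 95*x**4 - 150*x**5 + 112*x**6 - 32*x**7),
        (-z**2 * (1 - z**2)**2 * (1 - 4*z**2) / 16,
         3*x**2 - 34*x**3 + 151*x**4 - 340*x**5 + 412*x**6
         - 256*x**7 + 64*x**8),                                     # octic
    ]

for x in (0.0, 0.1, 0.25, 0.5, 0.8, 1.0):
    for zform, xform in pairs(x):
        assert abs(zform - xform) < 1e-9, (x, zform, xform)
print("all z-forms agree with their x expansions")
```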

Postscript.—Since writing the above I have had occasion to employ formulæ of more than four terms, and certain practical points have come to light. Suppose that the graph consists of a “smooth curve,” or that, if it is of the datum-point variety, a smooth curve can be satisfactorily drawn through the data; then the analysis may proceed as follows: The constant, linear, and parabolic (standard) terms are obtained as before, and we draw in the base-line corresponding to this formula of three terms. Thus we have reduced the deviations to zero at x = 0, ½, and 1. We then scale off the deviations from this formula at x = ¼ and ¾, and, using the standard cubic and a quartic whose formula may be x(1 − x)(1 − 2x)², and which is obviously zero at the same three points as the cubic, we compute the amounts of these functions required to reduce the deviations to zero at the quarter-points of x.
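The five-point procedure just described can be sketched as follows. The target curve exp(x) is an arbitrary stand-in for the “smooth curve”; the closed forms for the cubic and quartic corrections follow from their values ±3/32 and 3/64 at the quarter-points:

```python
# Sketch of the postscript's procedure: a three-term fit pinned at
# x = 0, 1/2, 1, then cubic and quartic corrections pinned at 1/4, 3/4.
import math

y = math.exp                       # the "graph" being analysed

y0, yh, y1 = y(0), y(0.5), y(1.0)
c0 = y0                            # constant term
c1 = y1 - y0                       # linear term
c2 = 4 * (yh - (y0 + y1) / 2)      # parabolic term, times x(1-x)

def f3(x):
    return c0 + c1 * x + c2 * x * (1 - x)

d1 = y(0.25) - f3(0.25)            # residual deviations at the
d2 = y(0.75) - f3(0.75)            # quarter-points

# x(1-x)(1-2x) takes +3/32, -3/32 at x = 1/4, 3/4;
# x(1-x)(1-2x)^2 takes +3/64 at both; solving the 2x2 system gives:
c3 = 16 * (d1 - d2) / 3
c4 = 32 * (d1 + d2) / 3

def f5(x):
    u = x * (1 - x) * (1 - 2 * x)
    return f3(x) + c3 * u + c4 * u * (1 - 2 * x)

for x in (0, 0.25, 0.5, 0.75, 1.0):
    assert abs(f5(x) - y(x)) < 1e-12   # deviations zeroed at all five points
print(max(abs(f5(i / 100) - y(i / 100)) for i in range(101)))
```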


If further approximation be desired, the next systematic step would be to reduce the deviations at the odd-eighth points of x by means of quintic, hexic (somewhat different from the tabulated sample, obviously), heptic, and octic functions; and so on.

The same operations might possibly be considered easier if performed by simple algebra, but we should then lose the analytical power of judgment which is the vital advantage of the “standard” method.

With reference to the computation of values of Y from formulæ, the values of the standard functions up to the quartic degree may be computed by writing down the datum values of x; (1 − x); and (1 − x) − x, or (1 − 2x), and forming the proper products. Up to the octic degree we should need also (1 − 4x) and (3 − 4x), or 2(1 − 2x) ∓ 1. It is clear that if computation is to proceed by means of an arithmometer the labour of computation will not materially differ from that in which simple powers are used. With logarithms it will be necessary to enter the table once extra for each factor required.
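The factor scheme amounts to the following. The particular factorisations of the tabulated standards are my own working, checked numerically against the expanded x-forms in the table:

```python
# Values of all the standard functions from the five factors x, (1-x),
# (1-2x), (1-4x), (3-4x), using only products.
def standards(x):
    f1, f2, f3 = x, 1 - x, 1 - 2 * x
    f4, f5 = 1 - 4 * x, 3 - 4 * x      # i.e. 2(1-2x) - 1 and 2(1-2x) + 1
    p = f1 * f2                        # parabolic  x(1-x)
    c = p * f3                         # cubic      x(1-x)(1-2x)
    q = p * f4 * f5                    # quartic    x(1-x)(1-4x)(3-4x)
    return {
        "parabolic": p,
        "cubic": c,
        "quartic": q,
        "quintic": q * f3,             # x(1-x)(1-2x)(1-4x)(3-4x)
        "hexic": p * q,                # x^2 (1-x)^2 (1-4x)(3-4x)
        "heptic": p * c * f4 * f5,
        "octic": c * c * f4 * f5,
    }

# Each value agrees with the expanded x-form in the table, e.g.:
x = 0.1
s = standards(x)
assert abs(s["quartic"] - (3*x - 19*x**2 + 32*x**3 - 16*x**4)) < 1e-12
assert abs(s["octic"] - (3*x**2 - 34*x**3 + 151*x**4 - 340*x**5
                         + 412*x**6 - 256*x**7 + 64*x**8)) < 1e-12
print(round(s["quintic"], 5))          # 0.11232
```

Each function costs only a few multiplications once the five factors are written down, which bears out the remark about the arithmometer.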