which are the residuals, by v₁, v₂, . . . vₙ, this equation may be written

φ′(v₁) + φ′(v₂) + . . . + φ′(vₙ) = 0. . . . (4)

Supposing now the value of a which satisfies equation (3) to be the arithmetical mean, we have by Art. 9,

v₁ + v₂ + . . . + vₙ = 0. . . . (5)

We wish therefore to find the form of the function φ′ such that equation (4) is satisfied by every set of values of v₁, v₂, . . . vₙ which satisfy equation (5). For this purpose, suppose all the values of v except v₁ and v₂ to remain unchanged while equation (5) is still satisfied. The new values may then be denoted by v₁ + k and v₂ − k, in which k is arbitrary. Substituting the new values in equation (4), the sum of the first two terms must remain unchanged since all of the other terms are unchanged; therefore, φ′(v₁ + k) +
shows that the half which exceed r do so by a total amount greater than
that by which the other half fall short of it.
§ IV.] MEASURES OF THE RISK OF ERROR. 35
η = r / (ρ√π) = 1.1829 r, . . . (1)

ε = r / (ρ√2) = 1.4826 r. . . . (2)
52. Fig. 5 shows the positions of the ordinates corresponding
to r, η, and ε in the curve of facility of errors

y = (h/√π) e^(−h²x²).

The diagram is constructed for the value h = 1.

[Fig. 5: the curve, with the ordinates at x = r, η, and ε marked.]
From the definitions of the errors it is evident that the ordinate
of r bisects the area between the curve and the axes, that of 7
passes through its centre of gravity, and that of e passes through
its centre of gyration about the axis of y.
The advantage of employing in practice a measure of the
risk of error, instead of the direct measure of precision, results
from the fact that it is of the same nature and expressed in the
same units as the observations themselves. It therefore conveys
a better idea of the degree of accuracy than is given by the value
of the abstract quantity h. When the latter is given, it is of
course necessary also to know the unit used in expressing the
errors.
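The numerical relations (1) and (2) are easy to verify. The sketch below assumes the facility curve y = (h/√π) e^(−h²x²) and the constant ρ ≈ 0.476936 (defined by hr = ρ); the function name is mine, not the book's.

```python
import math

RHO = 0.476936  # hr = rho, so the probable error is r = rho / h

def measures_of_risk(h):
    """Probable error r, mean error eta, and mean square error eps
    for the facility curve y = (h/sqrt(pi)) * exp(-(h*x)**2)."""
    r = RHO / h                           # probable error
    eta = 1.0 / (h * math.sqrt(math.pi))  # mean (average absolute) error
    eps = 1.0 / (h * math.sqrt(2.0))      # mean square error
    return r, eta, eps

r, eta, eps = measures_of_risk(1.0)
print(round(eta / r, 4))   # 1.1829, as in equation (1)
print(round(eps / r, 4))   # 1.4826, as in equation (2)
```

The ratios are independent of h, which is why a single pair of constants serves for all systems of observations.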
Tables of the Probability Integral.
53. The integral ∫₀ᵗ e^(−t²) dt is known as the error function and
is denoted by Erf t.* Table I, which has already been described,
Art. 44, gives the values of (2/√π) Erf t, which is the probability
that an error shall be numerically less than the error x, of
which the reduced value is t. The argument of this table is
the reduced error t.
But it is convenient to have the values of the probability
given also for values of the ratio of the error x to the probable
error. Putting z for this ratio, we have, since hx = t and hr = ρ,

z = x/r = t/ρ.
Table II gives, to the argument z, the same function of t which
is given in Table I; that is to say, the function of z tabulated is
(2/√π) Erf ρz,
* The integral ∫ₜ^∞ e^(−t²) dt is denoted by Erfc t, being the complement of
the error function, so that

Erf t + Erfc t = ∫₀^∞ e^(−t²) dt = ½√π.
These functions occur in several branches of Applied Mathematics.
A table of values of Erfc t to eight places of decimals was computed by
Kramp (" Analyse des Réfractions Astronomiques et Terrestres," Stras-
bourg, 1799), and from this the existing tables of the Probability Integral
have been derived.
which is the probability that an error shall be numerically less
than the error x whose ratio to the probable error is z.
54. By means of the tables of the probability integral, com-
parisons have been made between the actual frequency with
which given errors occur in a system containing a large number
of observations and their probabilities in accordance with the
law of facility.
The following example is given by Bessel in the Fundamenta
Astronomiae. From 470 observations made by Bradley on the
right ascensions of Procyon and Altair, the probable error of
a single observation was found (by the formula given in the next
section) to be
r = 0″.2637.
With this value of r, the probability that an error shall be
numerically less than o".i is found by entering Table II with
the argument
z = 0″.1 / 0″.2637 = 0.379.
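Bessel's table entry can be checked against the modern error function: with z = x/r, the tabulated probability is erf(ρz). This is a sketch using Python's math.erf; the helper name is mine.

```python
import math

RHO = 0.476936  # hr = rho; by definition erf(RHO) = 1/2

def prob_error_less_than(x, r):
    """Probability that an accidental error is numerically less than x,
    for a system whose probable error is r (Table II, argument z = x/r)."""
    z = x / r
    return math.erf(RHO * z)

print(round(prob_error_less_than(0.5, 0.5), 3))     # 0.5, by definition of r
print(round(prob_error_less_than(0.1, 0.2637), 2))  # Bessel's example: 0.2
```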
prob. err. of Z = √(m₁²r₁² + m₂²r₂² + . . . + mₙ²rₙ²), . . . (2)

where r₁, r₂, . . . rₙ are the probable errors of the several
observed quantities.
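Formula (2) is straightforward to apply in code. The following sketch (function name mine) also checks the r√n and r/√n results for the sum and mean of n equally good quantities:

```python
import math

def prob_err_of_linear(ms, rs):
    """Probable error of Z = m1*X1 + ... + mn*Xn, the Xi being independently
    observed quantities with probable errors r1, ..., rn (formula (2))."""
    return math.sqrt(sum((m * r) ** 2 for m, r in zip(ms, rs)))

n, r = 4, 0.2
print(round(prob_err_of_linear([1] * n, [r] * n), 6))      # sum: r*sqrt(n) = 0.4
print(round(prob_err_of_linear([1 / n] * n, [r] * n), 6))  # mean: r/sqrt(n) = 0.1
```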
In particular, if the n quantities have the same probable error
r, the probable error of their sum is r√n. The probable error
of their arithmetical mean, which is 1/n of this sum, is therefore
r/√n. This result agrees with that found in Art. 64, where,
* The fact that the law of facility thus reproduces itself has often been
regarded as confirmatory of its truth. This property of the law e^(−h²x²)
results from its being a limiting form for the facility of error in the linear
function Z, when n is large, whatever be the forms of the facility functions
for X₁, X₂, . . . Xₙ. Compare the foot-note on page 49, and see the
memoir there referred to. It follows that " we shall obtain the same
law e^(−h²x²) (for a single observed quantity) if we regard each actual
error as formed by the linear combination of a large number of errors
due to different independent sources."
however, the n quantities were all observed values of the same
quantity, and the arithmetical mean was under consideration by
virtue of its being the most probable value in accordance with
the law of facility.
90. It is to be noticed that in formula (2) it is essential that
the probable errors r₁, r₂, . . . rₙ should be the results of inde-
pendent determinations. For example, in the illustration given
in Art. 88, we have h = φ + ψ, whence we should expect to find

(prob. err. of h)² = (prob. err. of φ)² + (prob. err. of ψ)²;

but it will be found that this is not true when the probable
errors of
13. What is the probable error of the area of the rectangle
whose sides measured as in the preceding example are s₁ and s₂?
14. A line of levels is run in the following manner : the back
and fore sights are taken at distances of about 200 feet, so that
there are thirteen stations per mile, and at each sight the rod is
read three times. If the probable error of a single reading is
0.01 of a foot, what is the probable error of the difference of level
of two points which are ten miles apart? 0.093.
15. Show that the probable error of the weighted mean of
observed quantities has its least possible value when the weights
are inversely proportional to the squares of the probable errors
of the quantities, and that this value is the same as that given in
Art. 68 for the case of observed values of the same quantity.
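Example 15 can be checked numerically. A minimal sketch (names mine): the probable error of a weighted mean is smallest when the weights are inversely as the squares of the probable errors.

```python
import math

def prob_err_weighted_mean(ws, rs):
    """Probable error of the weighted mean (sum w_i x_i)/(sum w_i) of
    independent values whose probable errors are r_1, ..., r_n."""
    return math.sqrt(sum((w * r) ** 2 for w, r in zip(ws, rs))) / sum(ws)

rs = [0.3, 0.5, 0.9]
best = prob_err_weighted_mean([1 / r ** 2 for r in rs], rs)   # inverse-square weights
equal = prob_err_weighted_mean([1, 1, 1], rs)                 # simple mean, for comparison
print(round(best, 4), round(equal, 4))   # the first is the smaller
```

With inverse-square weights the result reduces to 1/√(Σ 1/rᵢ²), which is the value referred to in Art. 68.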
VII.
The Combination of Independent Determinations of
the Same Quantity.
The Distinction between Precision and Accuracy.
92. We have seen in Arts. 63 and 67 that the final determi-
nation of the observed quantity derived from a set of observations
follows the exponential law of the facility of accidental errors.
The discrepancies of the observations have given us the means
of determining a measure of the risk of error in the single
observations, and we have found that the like measure for the
final determination varies inversely as the square root of its
weight compared with that of the single observation. Since
this weight increases directly with the number of constituent
observations, it is thus possible to diminish the risk of error
indefinitely ; in other words, to increase without limit the pre-
cision of our final result.
93. It is important to notice, however, that this is by no means
the same thing as to say that it is possible by multiplying the
number of observations to increase without limit the accuracy
of the result. The precision of a determination has to do only
with the accidental errors ; so that the diminution of the prob-
able error, while it indicates the reduction of the risk of such
errors, gives no indication of the systematic* errors (see Art. 3)
*The term systematic is sometimes applied to errors produced by a
cause operating in a systematic manner upon the several observations,
thus producing discrepancies obviously not following the law of accidental
errors. Usually a discussion of these errors leads to the discovery of
their cause, and ultimately to the corrections by means of which they may
be removed. All the remaining errors, whose causes are unknown, are
generally spoken of as accidental errors ; but in this book the term acci-
dental is applied only to those errors which are variable in the system of
observations under consideration, as distinguished from those which have
a common value for the entire system.
which are produced by unknown causes affecting all the obser-
vations of the system to exactly the same extent.
The value to which we approach indefinitely as the precision
of the determination is increased has hitherto been spoken of
as the " true value," but it is more properly the precise value
corresponding to the instrument or method of observation
employed. Since the systematic error is common to the whole
system of observations, it is evident that it will enter into the
final result unchanged, no matter what may be the number of
observations ; whereas the object of increasing this number is
to allow the accidental errors to destroy one another. Thus the
systematic error is the difference between the precise value,
from which accidental errors are supposed to be entirely elimi-
nated, and the accurate or true value of the quantity sought.
94. Hence, when in Art. 64 the arithmetical mean of n obser-
vations was compared to an observation made with a more
precise instrument, it is important to notice that this new
instrument must be imagined to lead to the same ultimate
precise value, that is, it must have the same systematic error as
the actual instrument, whereas in practice a new instrument
might have a very different systematic error.
Again, in the illustration employed in Art. 64, where the final
determination of an angle is given as 36° 42′.3 ± 1′.22, the
" true value," which is just as likely as not to lie between the
limits thus assigned, is only the true value so far as the instru-
ment and method employed can give it ; that is, the precise value
to which the determination would approach if its weight were
increased indefinitely.
95. A failure to appreciate the distinction drawn in the
preceding articles may lead to a false estimate of the value
of the method of Least Squares. M. Faye in his "Cours
d'Astronomie " gives the following example of the objections
which have been urged against the method: "From the
discussion of the transits of Venus observed in 1761 and 1769,
M. Encke deduced for the parallax of the sun the value
8″.57116 ± 0″.0370.
78 INDEPENDENT DETERMINATIONS. lArt. 95
In accordance with this small probable error it would be a
wager of one to one that the true parallax is comprised between
8″.53 and 8″.61. Now we know to-day that the true parallax
8″.813 falls far outside of these limits. The error, 0″.24184, is
equal to 6.536 times the probable error 0″.037. We find for
the probability of such an error 0.00001. Hence, adhering to
the probable error assigned by M. Encke to his result, one could
wager a hundred thousand to one that it is not in error by
0″.24184, and nevertheless such is the correction which we are
obliged to make it undergo."
Of course, as M. Faye remarks, astronomers can now point
out many of the errors for which proper corrections were not
made ; but the important thing to notice is that, even in Encke's
time, the wagers cited above were not authorized by the theory.
The value of the parallax assigned by Encke was the most
probable with the evidence then known, and it was an even wager
that the complete elimination of errors of the kind that produced
the discrepancies or contradictions among the observations could
not carry the result beyond the limit assigned ; but the existence
of other unknown causes of error and the probable amount of
inaccuracy resulting from them is quite a different question.
Relative Accidental and Systematic Errors.
96. Let us now suppose that two determinations of a quantity
have been made with the same instrument and by the same
method, so that they have the same systematic error, if any ; in
other words, they correspond to the same precise value. The
difference between the, two results is the algebraic difference
between the accidental errors remaining in the two determi-
nations; this may be called their relative accidental error.
Regarding the two determinations as independent measure-
ments of two quantities, if r₁ and r₂ are their probable errors,
that of their difference is √(r₁² + r₂²); and, since this difference
should be zero, the relative error is an error in a system for
which the probable error is

√(r₁² + r₂²).
For example, if the determination of an angle mentioned in Art.
94 is the mean of ten observations, it is an even wager that the
mean of ten more observations of the same kind shall differ
from 36° 42′.3 by an amount not exceeding 1′.22 × √2, or 1′.73.
Again, r being the probable error of a single observation, the
probable error of the mean of n observations is r/√n, but the
discrepancy from this mean of a new single observation is as
likely as not to exceed*

√(r²/n + r²), that is, r √((n + 1)/n).
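As a quick numerical check of the last statement (a sketch; the function name is mine):

```python
import math

def prob_err_new_vs_mean(r, n):
    """Probable error of the discrepancy between the mean of n equally
    good observations and a new single observation."""
    return math.sqrt(r ** 2 / n + r ** 2)   # = r * sqrt((n + 1) / n)

print(round(prob_err_new_vs_mean(1.0, 10), 4))   # 1.0488
```

For large n the factor √((n + 1)/n) tends to 1, so the discrepancy behaves nearly like a fresh single observation.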
97. If, on the other hand, the two determinations have been
made with different instruments or by a different method,
they may involve different systematic errors ; so that, if each
determination were made perfectly precise, they would still
differ by an amount equal to the algebraic difference of their
systematic errors. Let this difference, which may be called the
relative systematic error, be denoted by δ. Then, d denoting
the actual difference of the two determinations, while δ is the
difference between the corresponding precise values, we may
put

d = δ + x,

in which x is the relative accidental error.
The Relative Weights of Independent Determinations.
98. In combining values to obtain a final mean value, we have
hitherto supposed their relative weights to be known or assumed
beforehand, as in Arts. 75 and 77. Since the squares of the
probable errors are inversely proportional to the weights, (Arts.
66 and 68,) the ratios of the probable errors both of the con-
stituents and of the mean are thus known in advance, and it
*This does not apply to the residuals of the original n observations,
because in taking a residual the mean is not independent of the single
observation with which it is compared.
only remains to determine a single absolute value of a probable
error to fix them all. In this process it is assumed that the
values have all the same systematic error.
But, when the determinations are independently made, their
relative weights are not known, and their probable errors have
to be found independently. If now it can be assumed that the
systematic errors are the same, so that there is no relative
systematic error, the weights may be taken in the inverse ratio
of the squares of the probable errors.
99. To determine whether the above assumption can fairly be
made in the case of two independent determinations whose
probable errors are r₁ and r₂, it is necessary to compare the
difference d with the relative probable error √(r₁² + r₂²), Art. 96.
If d is small enough to be regarded as a relative accidental
error, it is safe to make the assumption and combine the deter-
minations in the manner mentioned above.
As an example, let us suppose that a certain angle has been
determined by a theodolite as

24° 13′ 36″ ± 3″.1,

and that a second determination made with a surveyor's transit is

24° 13′ 24″ ± 13″.8.

In this case r₁ = 3″.1, r₂ = 13″.8, and d = 12″. It is obvious that
a relative accidental error as great as d may reasonably be
expected. (In fact the relative probable error is 14″.1; and, by
Table II, the chance that the accidental error should be at least
as great as 12″ is about 0.57.) We may therefore assume that
there is no relative systematic error, and combine the determi-
nations with weights having the inverse ratio of the squares of
the probable errors. This ratio will be found, in the present
case, to be about 20 : 1, and the corresponding weighted mean,
found by adding 1/21 of the difference to the first value, is

24° 13′ 35″.43.
100. It appears doubtful at first that the value given by the
theodolite can be improved by combining with it the value
given by the inferior instrument. The propriety of the above
process becomes more apparent, however, if we imagine the
first determination to be the mean of twenty observations made
with the theodolite ; a single one of these observations will then
have the same weight and the same probable error as the second
determination. Now the discrepancy of this new determination
from the mean is such as we may expect to find in a new single
observation with the theodolite. We are therefore justified in
treating it as such an observation, and taking the mean of the
twenty-one supposed observations for our final result.
101. The probable error of the result found in Art. 99 of
course corresponds with its weight; thus, denoting it by R, we
have R² = (20/21) r₁², whence R = 3″.03, and the final result is

24° 13′ 35″.43 ± 3″.03.

In general, r₁ and r₂ being the given probable errors, that of
the mean is given by

R² = r₁²r₂² / (r₁² + r₂²).
Determinations which, considering their probable errors, are
in sufficient agreement to be treated as in the foregoing articles
may be called concordant determinations. They correspond to
the same precise value of the observed quantity, and the result
of their combination is to be regarded as a better determination
of the same precise value.
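The process of Arts. 99 and 101 can be collected into a few lines (a sketch, names mine; the seconds of arc are carried as plain numbers). Note that with the exact weights the mean comes out 35″.42; the text, rounding the weight ratio to 20 : 1, gets 35″.43.

```python
import math

def combine_concordant(x1, r1, x2, r2):
    """Weighted mean of two concordant determinations, the weights being
    inversely as the squares of the probable errors (Arts. 99, 101)."""
    w1, w2 = 1 / r1 ** 2, 1 / r2 ** 2
    mean = (w1 * x1 + w2 * x2) / (w1 + w2)
    # probable error of the mean, Art. 101
    R = math.sqrt(r1 ** 2 * r2 ** 2 / (r1 ** 2 + r2 ** 2))
    return mean, R

# the angle example: 24 deg 13' 36" +/- 3".1 and 24 deg 13' 24" +/- 13".8
mean, R = combine_concordant(36.0, 3.1, 24.0, 13.8)
print(round(mean, 2), round(R, 2))   # 35.42 3.02
```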
The Combination of Discordant Determinations.
102. As a second illustration of determinations independently
made, let us suppose that a determination of the zenith distance
of a star made at one culmination is
14° 53′ 12″.1 ± 0″.3,

and that at another culmination we find for the same quantity

14° 53′ 14″.3 ± 0″.5.

In this case we have d = 2″.2. This is about 3.8 times the rela-
tive probable error, whose value is 0″.58.
From Table II we find that the probability that the relative
accidental error should be as great as d is only about 1 in 100.
We are therefore justified in assuming that the difference d is
mainly due to errors peculiar to the culminations. In other
words, we assume that, could we have obtained the precise
values corresponding to the two culminations, (by indefinitely
increasing the number of observations at each,) they would still
be found to differ by about 2″.2. Supposing now that there is
no reason for preferring one of these precise values to the other,
we ought to take their simple arithmetical mean for the final
result ; and, since the two given values are comparatively close
to the precise values in question, we may take their arithmetical
mean, which is
14° 53′ 13″.2,
for the final determination.
103. Determinations like those considered above, whose
difference is so great as to indicate an actual difference between
the precise values to which they tend, may be called discordant
determinations. The discordance of the two determinations
discloses the existence of systematic errors which were not
indicated by the discrepancies of the observations upon which
the given probable errors were based. In combining the deter-
minations, these systematic errors are treated as accidental
errors incident to the two determinations considered as two
observed values of the required quantity. In fact, it is generally
the object in making new and independent determinations to
eliminate as far as possible a new class of errors by bringing
them into the category of accidental errors which tend to
neutralize each other in the final result. The probable error
of the result cannot now be derived from the given probable
errors, but must be inferred from the determinations themselves
considered as observed values, because we now take cognizance
of errors which are not indicated by the given probable errors.
104. When there are but two observed values, formula (4),
Art. 72, becomes

R₀ = 0.6745 √((p₁v₁² + p₂v₂²) / (p₁ + p₂)),

in which p₁, p₂ are the weights assigned to the two values.
Denoting the difference by d, the residuals have opposite signs,
and their absolute values are

v₁ = p₂d / (p₁ + p₂),   v₂ = p₁d / (p₁ + p₂).

Substituting these values, we have for the probable error of the
mean

R₀ = 0.6745 d √(p₁p₂) / (p₁ + p₂). . . . (1)

When p₁ = p₂ this becomes

R₀ = ρd/√2 = 0.3372 d. . . . (2)
In the example given in Art. 102, the value of R₀ thus obtained
is 0″.742, which, owing to the discordance of the two given
determinations, considerably exceeds each of the given probable
errors.

Of course no great confidence can be placed in the results
given by the formulae above on account of the small value of n.*
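Equation (1) and the example can be verified directly (a sketch; the function name is mine):

```python
import math

def discordant_mean(x1, p1, x2, p2):
    """Mean of two discordant determinations with weights p1, p2, and its
    probable error inferred from the determinations themselves
    (Art. 104, equation (1))."""
    mean = (p1 * x1 + p2 * x2) / (p1 + p2)
    d = abs(x1 - x2)
    R0 = 0.6745 * d * math.sqrt(p1 * p2) / (p1 + p2)
    return mean, R0

# the zenith-distance example (seconds only): 12".1 and 14".3, equal weights
mean, R0 = discordant_mean(12.1, 1, 14.3, 1)
print(round(mean, 1), round(R0, 2))   # 13.2 0.74
```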
105. Since the error of each determination is the sum of its
accidental and systematic error, if s₁ and s₂ denote the probable
*The argument by which it is shown that the value of h deduced in
Art. 69 is the most probable value involves the assumption that before
the observations were made all values of h are to be regarded as equally
probable; just as that by which it is shown that the arithmetical mean
is the most probable value of the observed quantity a involves the assump-
tion that before the observations all values of a were equally probable. In
the case of a, the assumption is admissible with respect to all values of a
which can possibly come in question. But, in the case of h, this is not true;
because (supposing n = 2 as above) when d = 0 the value of h is infinite,
and when d is small the corresponding values of h are very large, so that
it is impossible to admit that all values of h which can arise are a priori
equally probable.

In the present application of the formula, however, these inadmissible
values do not arise, because we do not use it when d is small, employing
instead the method of Art. 99 and the formula of Art. 101.
systematic errors, the probable errors of the two determinations
when both classes of errors are considered are

R₁² = r₁² + s₁²,   R₂² = r₂² + s₂².

The proper ratio of weights with which the determinations
should be combined is R₂² : R₁². The method of procedure
followed in Art. 99 assumes that s₁ and s₂ vanish. On the other
hand, in the process employed in Art. 102 we are guided, in an
assumption of the ratio R₁² : R₂², by a consideration of the value
which the ratio s₁² : s₂² ought to have.
For example, in the illustration, Art. 102, the ratio R₁² : R₂² is
taken to be one of equality, whereas the hypothesis we desired
to make was that s₁ = s₂, so that we ought to have

R₁² − R₂² = r₁² − r₂².

On the hypothesis R₁ = R₂ the value of each of these prob-
able errors is, in accordance with equation (2), Art. 104, ρd. In
the example this is 1″.05. If we take (1.05)² as the average
value of R₁² and R₂², and introduce the condition written above,
we shall find as a second approximation to the value of the ratio
R₂² : R₁² about 15 : 13. The final value corresponding to this
ratio of weights is 14° 53′ 13″.1, and its probable error as deter-
mined by equation (1), Art. 104, is slightly less than that before
found, namely, R₀ = 0″.740.
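The second approximation just described can be reproduced numerically (a sketch under the book's hypotheses; variable names mine):

```python
import math

RHO = 0.4769
r1, r2, d = 0.3, 0.5, 2.2          # Art. 102 data, in seconds of arc

# first hypothesis: R1 = R2 = rho*d (about 1".05)
avg_sq = (RHO * d) ** 2
# condition R1^2 - R2^2 = r1^2 - r2^2, holding the average fixed:
diff = r1 ** 2 - r2 ** 2
R1_sq = avg_sq + diff / 2
R2_sq = avg_sq - diff / 2
print(round(R2_sq / R1_sq, 3))     # weight ratio: 1.157, near 15 : 13

w1, w2 = 1 / R1_sq, 1 / R2_sq
mean = (w1 * 12.1 + w2 * 14.3) / (w1 + w2)
print(round(mean, 1))              # 13.1
```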
Indicated and Concealed Portions of the Risk of Error.
106. It will be convenient in the following articles to speak
of the square of the probable error as the measure of the risk
of error.
The foregoing discussion shows that the total risk of error,
R², of any determination consists of two parts, r² and s², of
which the first only is indicated by discrepancies among the
observations of which the given determination is the mean. It
is only this first part that can be diminished by increasing the
number of the constituent observations. The remaining part
remains concealed, and cannot be diminished until some varia-
tion is made in the circumstances under which the observations
are made, giving rise to new determinations. When the indi-
cated portions of the risk of error in the several determinations
are sufficiently diminished, discordance between them must
always be expected, and this discordance brings into evidence
a new portion, but still it may be only a portion, of the hitherto
concealed part of the risk of error.
107. What we have called in Art. 103 discordant determina-
tions are those in which the indication of this new portion of
the risk of error, to which corresponds the relative systematic
error, is unmistakable, because of its magnitude in comparison
with what remains of the portion first indicated in the separate
determinations, that is, r₁² and r₂². On the other hand, the con-
cordant determinations of Art. 101 are those in which the new
portion is so small compared with r₁ and r₂ as to remain con-
cealed.
Thus, to return to the illustration discussed in Art. 99, if
twenty times as many observations had been involved in the
determination by the transit, its probable error would have
been reduced to equality with that of the determination by the
theodolite. But if this had been done we should almost cer-
tainly have found the determinations discordant ; that is to say,
the ratio in which the difference between the determinations is
reduced would be much less than that in which the probable
relative accidental error √(r₁² + r₂²) is diminished. The ratio in
which the remaining difference between the determinations
should be divided in making the final determination now
depends upon our estimate of the comparative freedom of the
instruments from systematic error,* but the important thing to
be noted is that the probable error of the result would now be
found as in Art. 104, and would be greater than those of the
*It may be assumed that, when the instruments are carefully adjusted,
the one which is less liable to accidental errors is correspondingly less
liable to systematic errors. But this comparison is concerned with the
probable errors of a single observation in each case, and not with those of
the determinations themselves.
separate determinations. Thus the apparent risk of error would
be increased by making a new determination, but this is only
because a greater part of the total risk of error has been made
apparent, and the result is so much the more trustworthy as a
greater variety has been introduced into the methods employed.
The Total Probable Error of a Determination.
108. In the illustrations given in Arts. 99 and 102 it was sup-
posed that two determinations only were made, so that we had
but a single discrepancy upon which to base our judgment of the
probable amount of the relative systematic error. But, in general,
what are regarded as determinations at one stage of the process
are at the next stage treated as observations which may be
repeated indefinitely before being combined into a new deter-
mination. Let one of the determinations first made be the
mean of n observations equally good, and let r be the probable
error of a single observation. Then the probable accidental
error of the mean is r₀ = r/√n. Now, if R is the probable error
of the final value as obtained directly from the discrepancies
of the several determinations, (their number being supposed
great enough to allow us to obtain a trustworthy value,) we shall
find that R exceeds r₀, and putting

R² = r²/n + r₁², . . . (1)

r₁² is the new portion of the risk of error brought out by the
comparison of the determinations.

109. The form of this equation shows that when r²/n is already
small compared with r₁², the advantage gained by increasing the
value of n soon becomes inappreciable.
For example, the reticule of a meridian circle is provided
with a number of threads, in order that several observations of
time may be taken at a single transit. If seven equidistant threads
are used, the mean of the times is equivalent to a determination
based upon seven observations of the time of transit. Chauvenet
found that, for moderately skilful observers, the probable acci-
dental error of the transit over a single thread of an equatorial
star is r = 0ˢ.08, whence for the mean of the seven threads we
have r₀ = 0ˢ.03. The probable error of a single determination
of the right ascension of an equatorial star was found to be
R = 0ˢ.06, so that, from R² = r₀² + r₁², we have r₁ = 0ˢ.052. The
conclusion is reached that " an increase of the number of threads
would be attended by no important advantage," and it is stated
that Bessel thought five threads sufficient.*
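The arithmetic of this conclusion is easily reproduced (a sketch using Chauvenet's figures, with seconds of time carried as plain numbers):

```python
import math

r = 0.08                      # single-thread probable error, seconds of time
n = 7
r0 = r / math.sqrt(n)         # mean of the seven threads: about 0s.03
R = 0.06                      # probable error of a whole determination
r1 = math.sqrt(R ** 2 - r0 ** 2)
print(round(r1, 3))           # 0.052

# doubling the number of threads scarcely changes R:
R14 = math.sqrt(r ** 2 / 14 + r1 ** 2)
print(round(R14, 3))          # 0.056
```

Fourteen threads would reduce R only from 0ˢ.060 to about 0ˢ.056, because the concealed part r₁² dominates, which is the point of Art. 109.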
110. Suppose the value of R² in equation (1), Art. 108, to
have been derived from the discrepancies of n′ determinations of
equal weight. A systematic error may exist for these n′
determinations, and, s₁ being its probable value, we shall have

s² = r₁² + s₁²;

that is to say, the concealed portion of the risk of error in one
of the original determinations has been decomposed into two
parts, one of which has been disclosed at the second stage of the
process, while the other remains concealed.

The total risk of error in a single one of the n′ determina-
tions is R² + s₁², and that of the mean of the determinations is

R²/n′ + s₁².
In like manner, if at a further stage of the process we have the
means of finding the value of the probable error R₁ of this new
determination by direct comparison with other coordinate deter-
minations, a portion of the value of s₁² will be disclosed, and we
shall have

R₁² = r²/(nn′) + r₁²/n′ + r₂²,

where again it must be supposed that a portion s₂² of the risk of
error still remains concealed.
* Chauvenet's " Spherical and Practical Astronomy," vol. ii, p. 194
et seq.
111. The comparative amounts of the risk of error which are
disclosed at the various stages of the process depend upon the
amount of variety introduced into the method of observing.
Thus, to resume the illustration given in Art. 109, if the star
be observed at n′ culminations, r² will correspond to errors
peculiar to a thread, and r₁² will correspond to errors peculiar to a
culmination. Again, if different stars whose right ascensions are
known are observed, in order to obtain the local sidereal time
used in a determination of the longitude, r₂² will correspond to
errors peculiar to a star, together with instrumental errors
peculiar to the meridian altitude.
The Ultimate Limit of Accuracy.
112. The considerations adduced in the preceding articles
seem to point to the conclusion that there must always be a
residuum of the risk of error that has not yet been reached, and
thus to explain the apparent existence " of an ultimate limit of
accuracy beyond which no mass of accumulated observations
can ever penetrate."* But it does not appear to be necessary
to suppose, as done by Professor Peirce, that there is an absolute
fixed limit of accuracy, due to " a failure of the law of error
embodied in the method of Least Squares, when it is extended
to minute errors." He says: "In approaching the ultimate
limit of accuracy, the probable error ceases to diminish propor-
tionally to the increase of the number of observations, so that
the accuracy of the mean of several determinations does not
surpass that of the single determinations as much as it should
do, in conformity with the law of least squares ; thus it appears
that the probable error of the mean of the determinations of the
longitude of the Harvard Observatory, deduced from the moon-
culminating observations of 1845, 1846, and 1847, is 1ˢ.28 instead
of 1ˢ.00, to which it should have been reduced conformably to
the accuracy of the separate determinations of those years."
* Prof. Benjamin Peirce, U. S. Coast Survey Report for 1854, Appendix,
p. 109.
To account for the fact cited on the principles laid down
above, it is only necessary to suppose that there are causes of
error which have varied from year to year ; and, recognizing this
fact, we ought to obtain our final determination by comparing
the determinations of a number of years, and not by combining
into one result the whole mass of observations.
Examples.
1. In a system of observations equally good, r being the
probable error of a single observation, if two observations are
selected at random, what quantity is their difference as likely as
not to exceed? r√2.
2. In example 1, what is the probability that the difference
shall be less than r?  0.367.
3. When two determinations are made by the same method,
show that the odds are in favor of a difference less than the sum
of the two probable errors, and against a difference less than the
greater of the two, and find the extreme values of these odds.
66 : 34 and 63 : 37.
4. A and B observe the same angle repeatedly with the same
instrument, with the following results:

        A               B
  47° 23' 40"     47° 23' 30"
  47  23  45      47  23  40
  47  23  30      47  23  50
  47  23  35      47  24  00
  47  23  40      47  23  20

Show that there is no evidence of relative systematic (personal)
error. Find the relative weights of an observation by A and
by B, and the final determination of the angle.
100 : 13;  47° 23' 38".23 ± 1".62.
5. Show that the probable error in example 4 as computed
from the ten observations taken with their proper weights is
1".53, but that derived from the formula of Art. 104 is 0".43,
which is much too small. (See foot-note, p. 83.)
6. Two determinations of the length of a line in feet give
respectively 683.4 ± 0.3 and 684.9 ± 0.3, there being no reason
for preferring one of the corresponding precise values to the
other; show that the probable error of each of the precise values
(that is, the systematic error of each determination) is 0.65; and
that the best final determination is 684.15 ± 0.51.
7. Show generally that when the weights are inversely proportional
to the squares of the probable errors, the formula of
Art. 104 gives a value of R greater or less than that given by
the formula of Art. 101, according as d is greater or less than
the relative mean error.
VIII.
Indirect Observations.
Observation Equations.
113. We have considered the case in which a quantity
whose value is to be determined is directly observed, or is
expressed as a function of quantities directly observed. We
come now to that in which the quantity sought is one of a
number of unknown quantities of which those directly observed
are functions. The equation expressing that a known function
of several unknown quantities has a certain observed value is
called an observation equation. Let μ denote the number of
unknown quantities concerned. Then, in order to determine
them, we must have at least μ independent equations. Thus,
if two of the equations express observed values of the same
function of the unknown quantities, they will either be identical,
so that we have in effect only μ − 1 equations, or else they
will be inconsistent, so that the values of the unknown quantities
will be impossible. So also it must not be possible to
derive any one of the μ equations, or one differing from it only
in the absolute term, from two or more of the other equations.
114. If we have no more than the necessary μ equations, we
shall have no indication of the precision with which the obser-
vations have been made, nor, consequently, any measure of the
precision with which the unknown quantities have been deter-
mined. With respect to them, we are in the same condition as
when a single observed value is given in the case of direct
observations.
Now let other observation equations be given, that is to say,
let the values of other functions* of the unknown quantities be
observed. The results of substituting the values of the unknown
* It is not necessary that these additional equations should be independent
of the original μ equations, for an equation expressing a new
observed value of a function already observed will be useful in determining
the precision of the observations.
quantities will, owing to the errors of observation, be found to
differ from the observed values, and the discrepancies will give
an indication of the precision of the observations, just as the dis-
crepancies between observed values of the same quantity do, in
the case of direct observations.
115. As an example, let us take the following four observation
equations* involving x, y and z:

  x − y + 2z = 3,
  3x + 2y − 5z = 5,
  4x + y + 4z = 21,
  −x + 3y + 3z = 14.

If we solve the first three equations we shall find
  x = 2 4/7,  y = 3 2/7,  z = 1 6/7.
Substituting these values in the fourth equation, the value of
the first member is 12 6/7, whereas the observed value is 14; the
discrepancy is 1 1/7. If the values above were the true values,
the errors of observation committed must have been 0, 0, 0, 1 1/7;
but, since each of the observed quantities is liable to error, this
is not a likely system of errors to have been committed. In
fact, any system of values we may assign to x, y and z implies
a system of errors in the observed quantities, and the most
probable system of values is that to which corresponds the
most probable system of errors.
116. In general, let there be m observation equations,
involving μ unknown quantities, m > μ; then we have first to
consider the mode of deriving from them the most probable
values of the unknown quantities. The system of errors in the
observed quantities which this system of values implies will
then enable us to measure the precision of the observations.
Finally, regarding the μ unknown quantities as functions of the
m observed quantities, we shall obtain for each unknown quantity
a measure of the precision with which it has been determined.
* Gauss, "Theoria Motus Corporum Coelestium," Art. 184.
The Reduction of Observation Equations to the Linear Form.
117. The method of obtaining the values of the unknown
quantities, to which we proceed, requires that the observation
equations should be linear. When this is not the case, it is
necessary to employ approximately equivalent linear equations,
which are obtained in the following manner.
Let X, Y, Z, . . . be the unknown quantities, and M₁,
M₂, . . . Mₘ the observed quantities; the observation equations
are then of the form
  f₁(X, Y, Z, . . .) = M₁,
  f₂(X, Y, Z, . . .) = M₂,
  . . . . . . . .
  fₘ(X, Y, Z, . . .) = Mₘ,
where f₁, f₂, . . . fₘ are known functions. Let X₀, Y₀, Z₀, . . .
be approximate values of X, Y, Z, . . ., which, if not otherwise
known, may be found by solving μ of the equations; and put
  X = X₀ + x,  Y = Y₀ + y,  . . . .
so that x, y, z, . . . are small corrections to be applied to the
approximate values. Then the first observation equation may
be written
  f₁(X₀ + x, Y₀ + y, Z₀ + z, . . .) = M₁,
or, expanding by Taylor's theorem,
  f₁(X₀, Y₀, Z₀, . . .) + (df₁/dX)x + (df₁/dY)y + (df₁/dZ)z + . . . = M₁,
where the coefficients of x, y, z, . . . are the values which the
partial derivatives of f₁(X, Y, Z, . . .) assume when X = X₀,
Y = Y₀, Z = Z₀, . . ., and the powers and products of the
small quantities x, y, z, . . . are neglected as in Art. 91.
Denoting the coefficients of x, y, z, . . . by a₁, b₁, c₁, . . .,
putting n₁ for M₁ − f₁(X₀, Y₀, Z₀, . . .), and treating the other
observation equations in the same way, we may write
  a₁x + b₁y + c₁z + . . . = n₁,
  a₂x + b₂y + c₂z + . . . = n₂,
  . . . . . . . .     (1)
  aₘx + bₘy + cₘz + . . . = nₘ,
for the observation equations in their linear form.
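In modern terms, the reduction of Art. 117 computes one row of linear coefficients per observation by differentiating f about the approximate values. The Python sketch below forms these coefficients numerically, by finite differences rather than literal Taylor expansion; the function f and all numbers in it are illustrative, not from the text.

```python
# Art. 117 in modern form: each observation equation f(X, Y, ...) = M
# is replaced by a linear one a.x + b.y + ... = n, where the
# coefficients are the partial derivatives of f at the approximate
# values and n = M - f(X0, Y0, ...).

def linearize(f, approx, observed, h=1e-6):
    """Return the coefficients (partial derivatives of f at `approx`)
    and the absolute term n = observed - f(approx)."""
    f0 = f(*approx)
    coeffs = []
    for i in range(len(approx)):
        bumped = list(approx)
        bumped[i] += h
        coeffs.append((f(*bumped) - f0) / h)   # df/dX_i, numerically
    return coeffs, observed - f0

# hypothetical observation: f(X, Y) = X*Y is observed to be 6.1,
# with approximate values X0 = 2, Y0 = 3
coeffs, n = linearize(lambda X, Y: X * Y, [2.0, 3.0], 6.1)
# coeffs is close to [3, 2] (the partial derivatives Y0, X0); n is close to 0.1
```

One such call per observation equation yields the rows of system (1).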
118. Even when the original observation equations are in the
linear form, it is generally best to transform them as above, so
that the values of the unknown quantities shall be small.
Another transformation sometimes made consists in replacing
one of the unknown quantities by a fixed multiple of it. For
example, if the values of the coefficients of y are inconveniently
large, they may be reduced in value by substituting ky for y
and giving to k a suitably small value.
119. In the observation equations (1), the second members
may be regarded as the observed quantities, since they have the
same errors. If the true values of x, y, z, . . . are substituted in
these equations they will not be satisfied, because each n differs
from its proper value by the error of observation v; we may
therefore write the equations
  a₁x + b₁y + c₁z + . . . − n₁ = v₁,
  a₂x + b₂y + c₂z + . . . − n₂ = v₂,
  . . . . . . . .     (2)
  aₘx + bₘy + cₘz + . . . − nₘ = vₘ,
in which, if x, y, z, . . . are the true values, v₁, v₂, . . . vₘ are the
true errors of observation, and if any set of values be given to
x, y, z, . . ., the second members are the corresponding residuals.
These corrected observation equations may be called the
residual equations.
Observation Equations of Equal Precision.
120. Let us first suppose that the m observations are equally
good, and let h be their common measure of precision. Then,
since v₁ is the error, not only of the absolute term n₁ in the first
of equations (2), but of the first observed quantity M₁, the probability
before the observations are made that the first observed
value shall be M₁ is
  (h/√π) e^(−h²v₁²) Δv,
where, as in Art. 35, Δv is the least count of the instrument.
Hence we have, for the probability before the observations are
made that the m actual observed values shall occur,
  P = (h/√π)ᵐ e^(−h²(v₁² + v₂² + . . . + vₘ²)) (Δv)ᵐ,
exactly as in Art. 41. The values of v₁, v₂, . . . vₘ being given
by equations (2), this value of P is a function of the several
unknown quantities; hence it follows, as in Art. 41, that for any
one of them that value is, after the observations have been
made, most probable which assigns to P its maximum value;
in other words, that value which makes
  v₁² + v₂² + . . . + vₘ² = a minimum.
Thus the principle of Least Squares applies to indirect as
well as to direct observations.
121. To determine the most probable value of x, we have, by
differentiation with respect to x,
  v₁(dv₁/dx) + v₂(dv₂/dx) + . . . + vₘ(dvₘ/dx) = 0,
or, since, from equations (2), Art. 119,
  dv₁/dx = a₁,  dv₂/dx = a₂,  . . .  dvₘ/dx = aₘ,
  a₁v₁ + a₂v₂ + . . . + aₘvₘ = 0.     (1)
This is called the normal equation for x. Whatever values
are assigned to y, z, . . ., it gives the rule for determining the
value of x which is most probable on the hypothesis that the
values assigned to the other unknown quantities are correct.
Since v₁, v₂, . . . vₘ represent the first members of the observation
equations (1), Art. 117, when so written that the second
member is zero, we see that the normal equation for x may be
formed by multiplying each observation equation by the coefficient
of x in it, and adding the results.
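The rule just stated can be checked on the four observation equations of Art. 115; a minimal Python sketch, multiplying each equation by its coefficient of x and adding:

```python
# The rule of Art. 121 applied to the observation equations of
# Art. 115: multiply each equation by its coefficient of x and add.
A = [[1, -1, 2],     # coefficients a, b, c of each observation equation
     [3, 2, -5],
     [4, 1, 4],
     [-1, 3, 3]]
n = [3, 5, 21, 14]   # the observed second members

col_x = [row[0] for row in A]                       # a1, a2, a3, a4
lhs = [sum(a * row[j] for a, row in zip(col_x, A)) for j in range(3)]
rhs = sum(a * ni for a, ni in zip(col_x, n))
# lhs = [27, 6, 0] and rhs = 88: the normal equation 27x + 6y = 88
# found in Art. 123
```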
122. The rule just given for forming the normal equation
shows it to be a linear combination of the observation equations,
and the reason why the multipliers should be as stated may be
further explained as follows: If we suppose fixed values given
to y, z, . . ., each observation equation may be written in the
form ax = N, where N only differs from the observed value
n by a fixed quantity, and therefore has the same probable
error. Now, writing the observation equations in the form
  x = N₁/a₁ = x₁,  x = N₂/a₂ = x₂,  . . .  x = Nₘ/aₘ = xₘ,
we may regard them as expressing direct observations of x. If
r is the common probable error of N₁, N₂, . . . Nₘ, that of
N₁/a₁ or x₁ is r/a₁; that of x₂ is r/a₂, and so on. Thus the equations
are not of equal precision for determining x, and their weights
when written as above (being inversely as the squares of the
probable errors) are as a₁² : a₂² : . . . : aₘ². It follows that the
equation for finding x is, as in the case of the weighted arithmetical
mean (see Art. 66), the result of adding the above
equations multiplied respectively by a₁², a₂², . . . aₘ²;* that is to
say, it is the result of adding the original observation equations
of the form ax − N = 0 multiplied respectively by a₁, a₂, . . . aₘ.
* It must not be assumed that the weight of the value of x, determined
from the several normal equations, is Σa², that of an observation being
unity. This is its weight only upon the supposition that the absolute
values of the other quantities are known.
The Normal Equations.
123. In like manner, for each of the other unknown quantities
we can form a normal equation, and we thus have a system of
equations whose number is equal to that of the unknown quantities.
The solution of this system of normal equations gives
the most probable values of the unknown quantities. Let us
take for example the four observation equations given in Art.
115. Forming the normal equations by the rule given above,
we have
  27x + 6y = 88,
  6x + 15y + z = 70,
  y + 54z = 107.
The solution of this system of equations gives for the most
probable values,
  x = 49154/19899 = 2.47,
  y = 2617/737 = 3.55,
  z = 12707/6633 = 1.92.
124. Writing the observation equations in their general form,
  a₁x + b₁y + . . . + l₁t = n₁,
  a₂x + b₂y + . . . + l₂t = n₂,
  . . . . . . . .     (1)
  aₘx + bₘy + . . . + lₘt = nₘ,
we obtain for the normal equations in their general form,
  Σa².x + Σab.y + . . . + Σal.t = Σan,
  Σab.x + Σb².y + . . . + Σbl.t = Σbn,
  . . . . . . . .     (2)
  Σal.x + Σbl.y + . . . + Σl².t = Σln.
It will be noticed that the coefficient of the rth unknown
quantity in the sth equation is the same as that of the sth
unknown quantity in the rth equation; in other words, the
determinant of the coefficients of the unknown quantities in
equations (2) is a symmetrical one.
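As a check on Arts. 123 and 124, the symmetric normal system for the example of Art. 115 can be formed and solved exactly in rational arithmetic; the `solve` routine here is a plain Gauss-Jordan elimination written for this sketch, not a procedure from the text.

```python
from fractions import Fraction

# Form Σa².x + Σab.y + Σac.z = Σan, etc., for the example of
# Art. 115, and solve the system exactly.
A = [[1, -1, 2], [3, 2, -5], [4, 1, 4], [-1, 3, 3]]
n = [3, 5, 21, 14]

N = [[sum(r[i] * r[j] for r in A) for j in range(3)] for i in range(3)]
b = [sum(r[i] * ni for r, ni in zip(A, n)) for i in range(3)]
# N = [[27, 6, 0], [6, 15, 1], [0, 1, 54]], b = [88, 70, 107]

def solve(N, b):
    # plain Gauss-Jordan elimination, adequate for this small system
    M = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(N, b)]
    for i in range(len(M)):
        M[i] = [v / M[i][i] for v in M[i]]
        for k in range(len(M)):
            if k != i:
                M[k] = [vk - M[k][i] * vi for vk, vi in zip(M[k], M[i])]
    return [row[-1] for row in M]

x, y, z = solve(N, b)
# x = 49154/19899, y = 2617/737, z = 12707/6633, i.e. 2.47, 3.55, 1.92
```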
Observation Equations of Unequal Precision.
125. When the observations are not equally good, if
  h₁, h₂, . . . hₘ
are the measures of precision of the observed values
  M₁, M₂, . . . Mₘ,
the expression to be made a minimum is
  h₁²v₁² + h₂²v₂² + . . . + hₘ²vₘ²,
as in Art. 65. Thus, as in the case of direct observations, if the
error of each observation be multiplied by its measure of pre-
cision so as to reduce the errors to the same relative value, it is
necessary that the sum of the squares of the reduced errors
should be a minimum.
Since v₁ = 0, v₂ = 0, . . . vₘ = 0 are equivalent to the observation
equations, it follows that, if we multiply each observation
equation by its measure of precision (so that it takes the form
hv = 0), we may regard the results as equations of equal precision.
126. The result may be otherwise expressed by using numbers
p₁, p₂, . . . pₘ proportional, as in Art. 66, to the squares of
the measures of precision; the quantity to be made a minimum
then is
  p₁v₁² + p₂v₂² + . . . + pₘvₘ²,
and the normal equation for x is
  p₁a₁v₁ + p₂a₂v₂ + . . . + pₘaₘvₘ = 0.
The numbers p₁, p₂, . . . pₘ are called the weights of the
observation equations; thus, in the case of weighted equations,
the normal equation for x may be formed by multiplying each
observation equation by the coefficient of x in it, and also by its
weight, and adding the results.
The general form of the normal equations is now
  Σpa².x + Σpab.y + . . . + Σpal.t = Σpan,
  Σpab.x + Σpb².y + . . . + Σpbl.t = Σpbn,
  . . . . . . . .     (3)
  Σpal.x + Σpbl.y + . . . + Σpl².t = Σpln.
The result is evidently the same as if each observation equation
had been first multiplied by the square root of its weight, by
which means it would be reduced to the weight unity, and the
system would take the form (2), Art. 124.
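The equivalence of the two routes just described can be sketched in Python; the weights used below are hypothetical, chosen only to exercise the computation, and are not from the text.

```python
from math import sqrt

# Art. 126: with weights p, the normal coefficients are Σpa², Σpab, ...;
# equivalently each observation equation may first be multiplied by √p,
# reducing it to weight unity.
A = [[1, -1, 2], [3, 2, -5], [4, 1, 4], [-1, 3, 3]]
p = [1, 2, 1, 3]                       # assumed weights, not from the text

# weighted normal coefficients directly
N = [[sum(pi * r[i] * r[j] for pi, r in zip(p, A)) for j in range(3)]
     for i in range(3)]

# via the square-root-of-weight reduction to equal precision
A2 = [[sqrt(pi) * v for v in row] for pi, row in zip(p, A)]
N2 = [[sum(r[i] * r[j] for r in A2) for j in range(3)] for i in range(3)]
# N and N2 agree (up to rounding), as Art. 126 asserts
```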
Formation of the Normal Equations.
127. When the normal equations are calculated by means of
their general form, a table of squares is useful not only in calculating
the coefficients Σpa², Σpb², . . . Σpl², but also in the
case of those of the form Σpab, Σpac, . . . Σpan, . . . For,
since
  ab = ½[(a + b)² − a² − b²],
we have
  Σpab = ½[Σp(a + b)² − Σpa² − Σpb²],
by means of which Σpab is expressed in terms of squares.* Or
for the same purpose we may use
  Σpab = ½[Σpa² + Σpb² − Σp(a − b)²].
In performing the work it is convenient to arrange the coefficients
in a tabular form in the order in which they occur in the
observation equations, and, adding a column containing the sums
of the coefficients in each equation, thus,
  s₁ = a₁ + b₁ + . . . + l₁ + n₁, etc.,
* If Σpab alone were to be found, the formula
  Σpab = ¼[Σp(a + b)² − Σp(a − b)²],
derived from that of quarter-squares, would be preferable; but, since
Σpa², Σpb² have also to be calculated, the use of the formula above,
which was suggested by Bessel, involves less additional labor.
to form the quantities Σpas, Σpbs, . . . Σpns in addition to those
which occur in the normal equations. We ought then to find
  Σpas = Σpa² + Σpab + . . . + Σpan,
  Σpbs = Σpab + Σpb² + . . . + Σpbn,
  . . . . . . . .
  Σpns = Σpan + Σpbn + . . . + Σpn²,
and the fulfilment of these conditions is a verification of the
accuracy of the work.
In many cases, the use of logarithms is to be preferred,
especially when the logarithms of the coefficients in the observation
equations are more readily obtained than the values
themselves.
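The check sums of Art. 127 are easily exhibited on the (unweighted) example of Art. 115; a short sketch:

```python
# Art. 127's check: with s_i = a_i + b_i + c_i + n_i, the sum Σas must
# equal Σa² + Σab + Σac + Σan, and similarly for the other columns.
A = [[1, -1, 2], [3, 2, -5], [4, 1, 4], [-1, 3, 3]]
n = [3, 5, 21, 14]

s = [sum(row) + ni for row, ni in zip(A, n)]        # s = [5, 5, 30, 19]
for i in range(3):
    check = sum(r[i] * si for r, si in zip(A, s))
    target = (sum(r[i] * r[j] for r in A for j in range(3))
              + sum(r[i] * ni for r, ni in zip(A, n)))
    assert check == target      # the verification of Art. 127
```

Any arithmetical slip in forming the normal coefficients breaks one of these equalities, which is exactly the use the text proposes for the sum column.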
The General Expressions for the Unknown Quantities.
128. In writing general expressions for the most probable
values of the unknown quantities, and in deriving their probable
errors, we shall, for simplicity in notation, suppose that the
observation equations have been reduced to the weight unity as
explained in Art. 126, so that they are represented by equations
(1), and the normal equations by equations (2) of Art. 124.
Let D be the symmetrical determinant of the coefficients of
the unknown quantities in the normal equations, thus
  D = | Σa²  Σab  . . .  Σal |
      | Σab  Σb²  . . .  Σbl |
      | . . . . . . . . . .  |
      | Σal  Σbl  . . .  Σl² |
let Dx denote the result of replacing the first column by a
column consisting of the second members, Σan, Σbn, . . . Σln;
and let Dy, Dz, . . . Dt be the like results for the remaining
columns. Then
  x = Dx/D,  y = Dy/D,  . . .  t = Dt/D,     (1)
are the general expressions for the unknown quantities.
129. Let the value of x when expanded in terms of the
second members of the normal equations be
  x = Q₁Σan + Q₂Σbn + . . . + Q_μΣln.     (2)
Now, in the expansion of the determinant Dx in terms of the
elements of its first column, the coefficients of Σan, Σbn, . . . Σln
are the first minors corresponding to Σa², Σab, . . . Σal in the
determinant D.
Denoting the first of these by D₁, so that
  D₁ = | Σb²  Σbc  . . .  Σbl |
       | Σbc  Σc²  . . .  Σcl |
       | . . . . . . . . . .  |
       | Σbl  Σcl  . . .  Σl² |
it follows, on comparing the values of x in equations (1) and
(2), that
  Q₁ = D₁/D.
In like manner, the values of Q₂, Q₃, . . . Q_μ are the results of
dividing the other first minors by D.
The Weights of the Unknown Quantities.
130. Let the value of x, when fully expanded in terms of the
second members n₁, n₂, . . . nₘ of the observation equations, be
  x = α₁n₁ + α₂n₂ + . . . + αₘnₘ.     (3)
Then, if r_x denotes the probable error of x, and r that of a
standard observation, that is, the common probable error of
each of the observed values n₁, n₂, . . . nₘ, we shall have, by
Art. 89,
  r_x² = r².Σα².
The precision with which x has been determined is usually
expressed by means of its weight, that of a standard observation
being taken as unity. The weights being inversely proportional
to the squares of the probable errors, we have, therefore,
for that of x,
  p_x = r²/r_x² = 1/Σα².
131. Since the value of x is obtained from the normal equations,
we do not actually find the values of the α's; we therefore
proceed to express Σα² in terms of the quantities which occur
in the normal equations.
Equating the coefficients of n₁, n₂, . . . nₘ in equations (2)
and (3), we find
  α₁ = a₁Q₁ + b₁Q₂ + . . . + l₁Q_μ,
  α₂ = a₂Q₁ + b₂Q₂ + . . . + l₂Q_μ,
  . . . . . . . .     (1)
  αₘ = aₘQ₁ + bₘQ₂ + . . . + lₘQ_μ.
Multiplying the first of these equations by α₁, the second by
α₂, and so on, and adding the results, we have
  Σα² = Σaα.Q₁ + Σbα.Q₂ + . . . + Σlα.Q_μ.     (2)
The value of Σaα is found by multiplying the first of equations
(1) by a₁, the second by a₂, and so on, and adding. The
result is
  Σaα = Σa².Q₁ + Σab.Q₂ + . . . + Σal.Q_μ.     (3)
Multiplying this equation by D, the second member becomes
the expansion of the determinant D in terms of the elements of
its first column. Hence
  Σaα = 1.     (4)
In like manner we find
  Σbα = Σab.Q₁ + Σb².Q₂ + . . . + Σbl.Q_μ,     (5)
and when this equation is multiplied by D, the second member
is the expansion of a determinant in which the first two columns
are identical. Thus Σbα = 0, and in the same way we can show
that Σcα, . . . Σlα vanish.*
Substituting in equation (2), we have now
  Σα² = Q₁;     (6)
hence from Arts. 130 and 129 we have, for the general expression
for the weight of x,
  p_x = 1/Q₁ = D/D₁.     (7)
132. It follows from equation (2), Art. 129, that if in solving
the normal equations we retain the second members in algebraic
form, putting for them A, B, C, . . ., then the weight of x
will be the reciprocal of the coefficient of A in the value of x.†
In like manner, that of y will be the reciprocal of the coefficient
of B in the value of y, and so on.
For example, if the normal equations given in Art. 123 are
written in the form
  27x + 6y = A,
  6x + 15y + z = B,
  y + 54z = C,
the solution is
  19899x = 809A − 324B + 6C,
  737y = −12A + 54B − C,
  6633z = 2A − 9B + 123C.
* Comparing equation (3) with equation (2), Art. 129, we see that Σaα
is the value which x would assume if in each normal equation the
second member were equal to the coefficient of x. The system of equations
so formed would evidently be satisfied by x = 1, y = 0, z = 0, . . .
t = 0; hence Σaα = 1. In like manner, comparing equation (5) with the
same equation, we see that Σbα is the value which x would assume if
the second member of each normal equation were equal to the coefficient
of y. This value would be zero; thus Σbα = 0.
† If the value of the weight of x alone is required, it may be found as
the reciprocal of what the value of x becomes when A = 1, B = 0,
C = 0, . . . , that is to say, when the second member of the first normal
equation is replaced by unity, and that of each of the others by zero.
The weights of x, y and z are therefore
  p_x = 19899/809 = 24.60,
  p_y = 737/54 = 13.65,
  p_z = 6633/123 = 53.93.
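In matrix terms, retaining A, B, C in algebraic form amounts to inverting the matrix of the normal equations, and the weight of each unknown is the reciprocal of the corresponding diagonal element of the inverse. A sketch in exact arithmetic, with the inversion routine written for this illustration:

```python
from fractions import Fraction

# Art. 132 checked: weights as reciprocals of the diagonal of the
# inverse of the normal-equation matrix.
N = [[27, 6, 0], [6, 15, 1], [0, 1, 54]]

def inverse(N):
    # Gauss-Jordan inversion over the rationals
    size = len(N)
    M = [[Fraction(v) for v in row] +
         [Fraction(int(i == j)) for j in range(size)]
         for i, row in enumerate(N)]
    for i in range(size):
        M[i] = [v / M[i][i] for v in M[i]]
        for k in range(size):
            if k != i:
                M[k] = [vk - M[k][i] * vi for vk, vi in zip(M[k], M[i])]
    return [row[size:] for row in M]

inv = inverse(N)
weights = [1 / inv[i][i] for i in range(3)]
# 19899/809 = 24.60, 737/54 = 13.65, 6633/123 = 53.93, as in the text
```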
133. When the value of x is obtained by the method of substitution,
the process may be so arranged that its weight shall be
found at the same time. Let the other unknown quantities be
eliminated successively by means of the other normal equations,
the value of x being obtained from the first normal equation or
normal equation for x. Then, if this equation has not been
reduced by multiplication or division, the coefficient of A in
the second member will still be unity, and the equation will be
of the form
  Rx = T + A,
where T depends upon the quantities B, C, . . . Now it is
shown in the preceding article that the weight of x is the reciprocal
of the coefficient of A in the value of x; hence in the
present form of the equation the weight is the coefficient of x.*
As an illustration, let us find the value of x and its weight
in the example given above, the normal equations being
  27x + 6y = 88,
  6x + 15y + z = 70,
  y + 54z = 107.
The last equation gives
  z = −(1/54)y + 107/54,
* The effect of the substitution is always to diminish the coefficient of
x; for, as mentioned in the foot-note to Art. 122, if the true values of
y, z, . . . t were known, the weight of x would be Σa², which is the original
coefficient of x, and obviously the weight on this hypothesis would exceed
p_x, which is the weight when y, z, . . . t are also subject to error.
and if this is substituted in the second, we obtain
  (809/54)y = −6x + 3673/54,  or  y = −(324/809)x + 3673/809.
Finally, by the substitution of this value of y in the first normal
equation, we obtain, before any reduction is made,
  (19899/809)x = 49154/809,
whence
  x = 49154/19899  and  p_x = 19899/809,
as before found.
The Determination of the Measure of Precision.
134. The most probable value of h in the case of observations
of equal weight is that which gives the greatest possible value
to P, Art. 120, that is, to the function
  hᵐ e^(−h²(u₁² + u₂² + . . . + uₘ²)),
in which the errors are denoted by u₁, u₂, . . . uₘ, so that we
may retain v₁, v₂, . . . vₘ to denote the residuals which correspond
to the values of the unknown quantities derived from the
normal equations. By differentiation we derive, as in Art. 69,
for the determination of h,
  Σu² = m/(2h²).     (1)
The value of Σu² cannot, of course, be obtained, but it is
known to exceed Σv², which is its minimum value, and the best
value we can adopt is found by adding to Σv² the mean value
of the excess, Σu² − Σv².
135. Let the true values of the unknown quantities be
x + δx, y + δy, . . . t + δt, while x, y, . . . t denote the values
derived from the normal equations. We have then the residual
equations
  a₁x + b₁y + . . . + l₁t − n₁ = v₁,
  a₂x + b₂y + . . . + l₂t − n₂ = v₂,
  . . . . . . . .     (1)
  aₘx + bₘy + . . . + lₘt − nₘ = vₘ,
and, for the true errors, the expressions
  a₁(x + δx) + b₁(y + δy) + . . . + l₁(t + δt) − n₁ = u₁,
  a₂(x + δx) + b₂(y + δy) + . . . + l₂(t + δt) − n₂ = u₂,
  . . . . . . . .     (2)
  aₘ(x + δx) + bₘ(y + δy) + . . . + lₘ(t + δt) − nₘ = uₘ.
Multiplying equations (1) by v₁, v₂, . . . vₘ respectively, and
adding, the coefficient of x in the result is
  a₁v₁ + a₂v₂ + . . . + aₘvₘ,
which vanishes by the first normal equation (1), Art. 121. In
like manner, the coefficient of y vanishes by the second normal
equation, and so on. Hence
  Σv² = −Σnv.     (3)
Treating equations (2) in the same way, we have
  Σuv = −Σnv;
hence
  Σv² = Σuv.     (4)
Again, multiplying equations (1) by u₁, u₂, . . . uₘ, and
adding,
  Σuv = Σau.x + Σbu.y + . . . + Σlu.t − Σnu,
and treating equations (2) in the same way,
  Σu² = Σau.(x + δx) + Σbu.(y + δy) + . . . + Σlu.(t + δt) − Σnu.
Subtracting the preceding equation, we have, by equation (4),
  Σu² − Σv² = Σau.δx + Σbu.δy + . . . + Σlu.δt,     (5)
an expression for the correction whose mean value we are
seeking.
136. Expressions for δx, δy, . . . δt are readily obtained as
follows. Treating equations (2) exactly as the residual equations
(1) are treated to form the normal equations, we find
  Σa².(x + δx) + Σab.(y + δy) + . . . + Σal.(t + δt) = Σan + Σau,
  Σab.(x + δx) + Σb².(y + δy) + . . . + Σbl.(t + δt) = Σbn + Σbu,
  . . . . . . . .
  Σal.(x + δx) + Σbl.(y + δy) + . . . + Σl².(t + δt) = Σln + Σlu.
Subtraction of the corresponding normal equation from each
of these gives the system,
  Σa².δx + Σab.δy + . . . + Σal.δt = Σau,
  Σab.δx + Σb².δy + . . . + Σbl.δt = Σbu,
  . . . . . . . .
  Σal.δx + Σbl.δy + . . . + Σl².δt = Σlu,
a comparison of which with the normal equations shows that
δx, δy, . . . δt are the same functions of u₁, u₂, . . . uₘ that
x, y, . . . t are of n₁, n₂, . . . nₘ. Hence we have
  δx = α₁u₁ + α₂u₂ + . . . + αₘuₘ,
where α₁, α₂, . . . αₘ have the same meaning as in Art. 130.
137. Consider now the first term, Σau.δx, of the value of
Σu² − Σv², equation (5), Art. 135. Multiplying the value of
δx just found by
  Σau = a₁u₁ + a₂u₂ + . . . + aₘuₘ,
the product consists of terms containing squares and products
of the errors. We are concerned only with the mean values of
these terms, in accordance with the law of facility, which is for
each error (h/√π)e^(−h²u²). Since the mean value of each error is
zero, it is obvious that the mean value of each product vanishes;
so that the mean value of Σau.δx is the mean value of
  a₁α₁u₁² + a₂α₂u₂² + . . . + aₘαₘuₘ².
Now by Art. 50 the mean value of each of the squares
u₁², u₂², . . . uₘ² is 1/(2h²); hence the mean value of Σau.δx is Σaα/(2h²),
or, by equation (4), Art. 131, 1/(2h²).
In the same manner it can be shown that the mean value of
each term in the second member of equation (5), Art. 135, is
1/(2h²); hence that of Σu² − Σv² is μ/(2h²), and the best value we can
adopt for Σu² is
  Σu² = Σv² + μ/(2h²).
Substituting this in equation (1), Art. 134, we have
  Σv² = (m − μ)/(2h²),  whence  h = √[(m − μ)/(2Σv²)].
The Probable Errors of the Observations and Unknown
Quantities.
138. The resulting values of the mean and probable error of
a single observation are
  ε = √[Σv²/(m − μ)],     (1)
  r = 0.6745 √[Σv²/(m − μ)],     (2)
and the probable errors of the unknown quantities are
  r_x = r/√p_x,  r_y = r/√p_y,  . . . .
When the observation equations have not equal weights we
may replace Σv², which represents the sum of the squares of
the residuals in the reduced equations, by Σpv², in which the
residuals are derived from the original observation equations.
The formulae (1) and (2) will then give the mean and probable
errors of an observation whose weight is unity.
It will be noticed that when μ = 1 the formulae reduce to
those given in Art. 72 for the case of one unknown quantity.
139. Instead of calculating the values of v₁, v₂, . . . vₘ directly
from the residual equations, and squaring and adding the
results, we may employ the formula for Σv² deduced below.
By equation (3), Art. 135,
  Σv² = −Σnv.
Now multiplying equations (1) of that article by n₁, n₂, . . . nₘ
respectively, and adding the results, we have
  Σnv = Σan.x + Σbn.y + . . . + Σln.t − Σn².
Therefore
  Σv² = Σn² − Σan.x − Σbn.y − . . . − Σln.t.     (1)
The quantity Σn² which occurs in this formula may be calculated
at the same time with the coefficients in the normal equations.
It enters with them into the check equations of Art. 127.
We may also express Σv² exclusively in terms of these quantities,
for if we write
  Dₙ = | Σa²  Σab  . . .  Σal  Σan |
       | Σab  Σb²  . . .  Σbl  Σbn |
       | . . . . . . . . . . . .   |
       | Σal  Σbl  . . .  Σl²  Σln |
       | Σan  Σbn  . . .  Σln  Σn² |
and consider the development of Dₙ in terms of the elements
of its last row, we see that
  Dₙ = −Σan.Dx − Σbn.Dy − . . . − Σln.Dt + Σn².D,
where D, Dx, . . . Dt have the same meanings as in Art. 128;
hence
  Σv² = Dₙ/D.     (2)
140. For example, in the case of the four observation equations
of Art. 115,
  x − y + 2z = 3,
  3x + 2y − 5z = 5,
  4x + y + 4z = 21,
  −x + 3y + 3z = 14,
for which the normal equations are solved in Art. 123, the value
of Σn² is 671; and formula (1) gives
  Σv² = 671 − 88 × (49154/19899) − 70 × (2617/737)
            − 107 × (38121/19899) = 1600/19899,
in which 1600 is the value of Dₙ. Substituting this value of
Σv² in the formulae of Art. 138, we find
  ε = 0.2836,  r = 0.1913,
for the mean and probable errors of an observation; and, using
the weights found in Art. 132, we find for those of the unknown
quantities
  ε_x = 0.057,  ε_y = 0.077,  ε_z = 0.039,
  r_x = 0.038,  r_y = 0.052,  r_z = 0.026.
In this example we have found the exact value of Σv²; if
approximate computations are employed, the formula used has
the disadvantage that a very small quantity is to be found by
means of large positive and negative terms, which considerably
increases the number of significant figures to which the work
must be carried. Thus, because Σn² = 671 in the above example,
the work would have to be carried out with seven-place
logarithms to obtain Σv² to four decimal places. The direct
computation of the v's from the observation equations would
present the same difficulty in a less degree.
141. Of course, no great confidence can be placed in the
absolute values of the probable errors obtained from so small a
number of observation equations as in the example given above.
There being but one more observation than barely sufficient to
determine values of the unknown quantities, the case is comparable
to that in which n = 2 when the observations are direct.
By increasing the number of observations we not only obtain
a more trustworthy determination of the probable error of a
single observation, but, what is more important, we increase the
weight, and hence the precision, of the unknown quantities.
The measure in which this takes place depends greatly upon
the character of the equations with respect to independence.
As already mentioned in Art. 113, if there were only μ equations
it would be necessary that they should be independent;
in other words, the determinant of their coefficients must not
vanish, otherwise the values of the unknown quantities will be
indeterminate.
indeterminate. When this state of things is approached the
values are ill-determined, and this is indicated by the small
value of the determinant in question. The same thing is true
of the normal equations. Accordingly, the weights are small
when the determinant D is small ; thus the value of Z) is in a
general way a measure of the efficiency of the system of obser-
vation equations in determining the unknown quantities.
142. If we write the coefficients in the m observation equations
in a rectangular form, thus,
  a₁  b₁  . . .  l₁