IffliJIS' zti;:;^:. Hem ^ark Hntt CSfnUege of AgticuUuw ^t (S^otmll Ininerattg Date Due MAY 1 < f^W^QPLft JAN ^ o^ybo ^ CORNELL UNIVERSITY LIBRARY 924 065 496 Cornell University Library The original of tliis book is in tine Cornell University Library. There are no known copyright restrictions in the United States on the use of the text. http://www.archive.org/details/cu31924065496014 WORKS OF PROF. W. WOOLSEY JOHNSON PUBLISHED BY JOHN WILEY & SONS, INC. A Treatise on the Integral Calculus. Founded on the Method of Rates. ziv-|-440 pages, Six 8, 71 figures. Cloth, S3.00 net. An Elementary Treatise on the Integral Cal- culus. Founded on the Method of Rates, vii + 230 pages, 5}X8, 37 figures. Cloth, $1.50 net. The Theory of Errors and the Method of Least Squares. X + 172 pages, 5 X 7i. Cloth, J1.50 net. Curve Tracing in Cartesian Coordinates. vi + 86 pages, 5 X 7i, 54 figures. Cloth, Jl.flO net. Differential Equations. A Treatise on Ordinary and Partial Differentia] Equations. zii + 36S pages, 5iX8. Cloth, S3.50 net. Theoretical Mechanics. An Elementary Treatise. xv + 434 pageSt 5 X7i, 115 figures. Cloth, $3.00 net. A Treatise on the Differential Calculus. ' Founded on the* Method of Rates, xiv + 404 pages. Si X 8, 70 figures. Cloth, S3.00 net. An Elementary Treatise on the Differential Calculus. Founded on the Method of Rates, x + 191 pages, Six 8, 52 figures. Cloth, tl.50 net. Differential Equations. Being No. 9 of the Mathematical Monograph Series. vi + 72 pages, 6X9. Cloth, J1.25 net. THE THEORY OF ERRORS AND METHOD OF LEAST SQUARES WILLIAM WOOLSEY JOHNSON FSOFSSSOR OF MATHEMATICS AT THB. UNITED STATES NAVAL ACADEMY ANNAPOLIS MARYLAND NEW YORK JOHN WILEY & SONS, Inc. London: CHAPMAN & HALL, Limited CorVRIGHT, 1892, BY W. WOOLSEY JOHNSON Copyright Renewed, 1920, BY W. WOOLSEY JOHNSON PRESS OF BRAUNWORTH'a CO. BOOK MANUFACTURERa BROOKLVN. N. V. PREFACE. The basis adopted in this book for the theory of accidental errors is that laid down by Gauss in the Theoria Motus Cor- porum Coslestium (republished as vol. vii of the Werke), which may be described for the most part in his own words, as fol- lows : "The hypothesis is in fact wont to be considered as an axiom that, if any quantity has been determined by several direct observations, made under similar circumstances and with equal care, the arithmetical mean between all the observed values presents the most probable value, if not with absolute rigor, at least very nearly so, so that it is always safest to ad- here to it." (Art. 177.) Then introducing the notion of a law of facility of error to give precise meaning to the phrase "most probable value," we cannot do better than to adopt that law of facility in accord- ance with which the arithmetical mean is the most probable value. After deriving this law and showing that it leads to the principle of least squares, he says : " This principle, which in all applications of mathematics to natural philosophy ad- mits of very frequent use, ought everywhere to hold good as an axiom by the same right as that by which the arithmetical mean between several observed values of the same quantity is adopted as the most probable value." (Art. 179.) IV PREFACE. Accordingly no attempt has been made to demonstrate the principle of the arithmetical mean, nor to establish the expo- nential law of facility by any independent method. It has been deemed important, however, to show the self-consistent nature of the law, in the fact that its assumption for the errors of direct observation involves as a consequence a law of the same form for any linear function of observed quantities, and particularly for the final determination which results from our method. This persistence in the form of the law has too frequently been assumed, in order to simplify the demonstra- tions ; but at the expense of soundness. No place has been given to the so-called criteria for the rejection of doubtful observations. Any doubt which attaches to an observation on account of the circumstances under which it is made, is recognized, in the practice of skilled ob- servers, in its rejection, or in assigning it a small weight at the time it is made ; but these criteria profess to justify the sub- sequent rejection of an observation on the ground that its residual is found to exceed a certain limit. ^Yith respect to this Professor Asaph Hall says: "When observations have been honestly made I dislike to enter upon the process of cull- ing them. By rejecting the large residuals the work is made to appear more accurate than it really is, and thus we fail to get the right estimate of its quality." {The Orbit of lapetm, p. 49, Washington Observations for 1882, Appendix I.) The notion that we are entitled to reject an observation, that is, to give it no weight, when its residual exceeds a certain limit, would seem to imply that we ought to give less than the usual weight to those obseryations whose residuals fall just short of this limit, in fact that we ought to revise the obser- vations, assigning weights which diminish as the residuals increase. Such a process might appear at first sight plausible PREFACE. but it would be equivalent to a complete departure from the principle of the arithmetical mean and the adoption of a new law of facility. For this we have no justification, either from theory or from the examination of the errors of extended sets of observations. In the discussion of Gauss's method of solving the normal equations, the notion of the 'reduced observation equations ' (see Arts. 154, 155) which gives a new interpreta- tion to the ' reduced normal equations ' has been introduced with advantage. This conception, although implied in Gauss's elegant discussion of the sum of the squares of the errors (see Art. 160), seems not to have appeared explicitly in any treatise prior to the third edition of W. Jordan's Handbuch ier Vermessungskunde (Stuttgart, 1888). To this very complete work, and to Oppolzer's Lehrbuch zur Bahnbestimmung der Kometen und Flaneten, I am indebted for the forms recom- mended for the computations connected with Gauss's method, and for many of the examples. W. W. J. U. S. Naval Academy, June, 1892. CONTENTS. I. Introductory. PAGE Errors of Observation , i Objects of the Theory 2 II. Independent Observations of a Single Quantity. The Arithmetical Mean 4 Residuals 4 Weights 5 The Probable Value 6 Examples 7 III. Principles of Probability. The Measure of Probability g Compound Events g Repeated Trials 10 The Probability of Values belonging to a Continuous Series ... 11 Curves of Probability 12 Mean Values under a given Law of Probability 14 The Probability of Unknown Hypotheses 16 Examples ig IV. The Law of Probability of Accidental Errors. The Facility of Errors .21 The Probability of an Error between given Limits 23 The Probability of a System of Observed Values 24 Vlll CONTENTS. PAGB The most Probable Value derivable from a given System of Observed Values 24 The Form of the Facility Function corresponding to the Arithmet- ical Mean 25 The Determination of the Value of C 26 The Principle of Least Squares 28 The Probability Integral 29 The Measure of Precision 30 The Probable Error 32 The Mean Absolute Error 32 The Mean Error 33 Measures of the Risk of Error 34 Tables of the Probability Integral and Error Function 36 Comparison of the Theoretical and Actual Frequency of Errors . . 37 The Distribution of Errors on a Plane Area 38 Sir John Herschel's Proof of the Law of Facility (foot-note) ... 39 The Surface of Probability 40 The Probability of Hitting a Rectangle .40 The Probability of Hitting a Circle 42 The Radius of the Probable Circle 42 The most Probable Distance 43 Measures of the Accuracy of Shooting . ." 44 Exainples ....'. ■ 44 V. The Combination of Observations and Probable Accuracy of THE Results. The Probability of the Arithmetical Mean 48 The Combination of Observations of Unequal Precision 50 Weights and Measures of Precision 51 The Probability of the Weighted Mean 52 The most Probable Value of h derivable from a System of Observa- ■ tions 53 Equality of the Theoretical and Observational Values of the Mean Error in the case of Observations of Equal Weight 54 Formulae for the Mean and Probable Errors 55 The most Probable Value of h in Target Practice 57 The Computation of the Probable Error 58 The Values of h and r derived from the Mean Absolute Error ... 63 Examples 66 CONTENTS. IX VI. The Fahility of Error in a Function of one or more Observed Quantiiies. PA-E The Linear Function of a Single Observed Quantity 68 Non-linear Functions of a Single Observed Quantity . . ... 69 The Facility of Error in the Sum or Difference of two Observed Quantities 70 The Linear Function of Several Observed Quantities 72 The Non-linear Function of Several Observed Quantities .... 73 Examples 74 vn. The Combination of Independent Determinations of the SAME Quantity. The Distinction betwee.i Precision and Accuracy 76 Relative Accidental and Systematic Errors 78 The Relative Weights of Independent Determinations .... 79 The Combination of Discordant Determinations 81 Formulae for Probable Error when n = 1 (see foot-note) .... 83 Indicated and Concealed Portions of ihe Risk of Error 84 The Total Probable Error of a Determination 86 The Ultimate Limit of Accuracy 88 Examples 89 VIII. Indirect Observations. Observation Equations 91 The Reduction of Observation Equations to the Linear Form ... 93 The Residual Equations 94 Observation Equations of Equal Precision 94 The Normal Equation for x 95 The System of Normal Equations 97 Observation Equations of Unequal Precision 98 Formation of the Normal Equations 99 The General Expressions for the Unknown Quantities 100 The Weights of the Unknown Quantities 101 The Determination of the Measure of Precision 105 X CONTENTS. PACK The Probable Errors of the Observations and Unknown Quantities io8 Expressions for 'Sv'' log Measure of the Independence of the Observation Equations . . . iii Empirical or Interpolation Formulae II2 Conditioned Observations 113 The Correlative Equations 115 Examples ., , 116 IX. Gauss's Method of Substitution. The Reduced Normal Equations 120 The Elimination Equations 122 The Reduced Observation Equations 123 Weights of the Two Quantities First Determined 126 The Reduced Expression for 2o' 127 The General Expression for the Sum of the Squares of the Errors . 128 The Probability of a Given Value olt 133 The Auxiliaries Expressed in Determinant Form 134 Form of the Calculation of the Auxiliaries 136 Check Equations 138 Numerical Example 141 Values of the Unknown Quantities from the Elimination Equations 141 Independent Values of the Unknown Quantities 142 Computation of CTi, a^, etc 144 The Weights of the Unknown Quantities 145 Computation of the Weights 148 Examples . . 149 Values of Constants 152 Values of the Probability Integral. Table I.— Values of /"< , 153 Table II.— Values oi P, 4 ... 154 Squares, Cubes, Square-roots, and Cube-roots 155 THE THEORY OF ERRORS AND METHOD OF LEAST SQUARES. I. Introductory. Errors of Observation. 1. A quantity of which the magnitude is to be determined is either directly measured, or, as in the more usual case, deduced by calculation from quantities which are directly measured. The result of a direct measurement is called an observation. Observations of the kind here considered are thus of the nature of readings upon some scale, generally attached to an instru- ment of observation. The least count of the instrument is the smallest difference recognized in the readings of the instrument, so that every observation is recorded as an integral multiple of the least count. 2. Repeated observations of the same quantity, even when made with the same instrument and apparently under the same circumstances, will nevertheless differ materially. An increase in the nicety of the observations, and the precision of the instru- ment, may decrease the discrepancies in actual magnitude ; but at the same time, by diminishing the least count, their numerical measures will generally be increased ; so that, with the most refined instruments, the discrepancies may amount to many times the least count. Thus every observation is subject to an error, the error being the difference between the observed value and the true value ; an observed value which exceeds the true value is regarded as having a positive error, and one which falls short of it as having a negative error. INTRODaCTORY. [Art. 3 3. An error may be regarded as the algebraic sum of a num- ber of elemental errors due to various causes. So far as these causes can be ascertained, their results are not errors at all, in the sense in which the term is here used, and are supposed to have been removed by means of proper corrections. Systematic errors are such as result from unknown causes affecting all the observations alike. These again are not the subjects of the " theory of errors," which is concerned solely with the acci- dental errors which produce the discrepancies between the observations. Objects of the Theory. 4. It is obvious that when a set of repeated observations of the same quantity are made, the discrepancies between, thjem enable us to judge of the degree of accuracy we have attained. Speaking in general terms, of two sets of observations, that is the better which exhibits upon the whole the smaller dis- crepancies. It is obvious also that from a set of observa- tions we shall be able to obtain a result in which we can have greater confidence than in any single observation. It is one of the objects of the theory of errors to deduce from a number of discordant observations (supposed to be already individually corrected, so far as possible) the best attainable result, together with a measure of its accuracy ; that is to say, of the degree of confidence we are entitled to place in it. 5. When a number of unknown quantities are to be deter- mined by means of equations involving observed quantities, the quantities sought are said to be indirectly observed. It is neces- sary to have as many such observation equations as there are unknown quantities. The case considered is that in which it is impossible to make repeated observations of the individual observed elements of the equations. These may, for example, be altitudes or other astronomical magnitudes which vary with the time, so that the corresponding times are also among the observed quantities. Nevertheless, there is the same advantage in employing a large number of observation equations that there §1.] OBJECTS OF THE THEORY. 3 is in the repetition of direct observations upon a single required quantity. If there are n unknown quantities, any group con- taining n of the equations would determine a set of values for the unknown quantities; but these values would differ from those given by any other group of n of the equations. We may now state more generally the object of the theory of errors to be, when given more than n observation equations involving n unknown quantities, the equations being somewhat inconsistent, to derive from them the best determination of the values of the several unknown quantities, together with a measure of the degree of accuracy attained. 6. It will be noticed that, putting « = i, this general state Went includes the case of direct observations, in which all the equations are of the form ^i, ^^ Xi J i^ ^^ JC% f « • • I where X is the quantity to be determined, and each equation gives an independent statement of its value. We commence with this case of direct observations of a single quantity, and our first consideration will be that of the best determination which can be obtained from a number of such observations. n. Independent Observations of a Single Quantity. The Arithmetical Mean. 7. Whatever rule we adopt for deducing the value to be accepted as the final result derived from several independent observations, it must obviously be such that when the observa- tions are equal the result shall be the same as their common value. When the observations are discordant, such a rule pro- duces an intermediate or mean value. Thus, if there be n quantities, x^,Xi, . , , Xn, the expressions -^, V(^i-=^2----^«), y-^- etc., give different sorts of mean values. Of these, the one first written, which is the arithmetical mean, is the simplest, and it is also that which has universally been accepted as the final value when Xx, x^, . . . Xn are independently observed values of a single quantity x, the observations being all supposed equally good. Residuals, 8. The differences between the several observed values and the value which we take as our final determination of the true value are called the residuals of the observations. The residuals are then what we take to be the errors of the observations ; but they differ from them, cf course, by the amount of error existing in our final determination. If the observed values were laid down upon a straight line, as measured from any origin, the residuals would be the abscissas of the points thus representing the observations when the point corresponding to the final value adopted is taken as the origin. §11.] RESIDUALS. 5 Q, In the case of the arithmetical mean, the algebraic sum of the residuals is zero. For, if a denote the arithmetical mean of the n quantities ;fi, Xj, .. .;«■„, we have (0 the residuals are Ix a = — , n Xx — a, Xi — a, ... Xn — a, and their sum is Hx — na , which is zero by equation (i). When the observations are represented by points, as in the preceding article, the geometrical mean point or centre of gravity of these points is the point whose abscissa is a, and, when this point is taken as the origin, the sum of the positive abscissas of observation points is equal to the sum of the nega- tive abscissas. Weights. 10. When the observations are not made under the same cir- cumstances, and are therefore not regarded as equally good, a greater relative importance can be given to a better observation by treating it as equivalent to more than one occurrence of the same observed value in a set of equally good observations. For example, if there were two observations giving the observed values JT, and x^ , and the first observation were regarded as the best, we might proceed as if the observed value x-^ occurred twice and x^ once in a set of three observations equally good. The arithmetical mean would then be '2X\ "T Xi,/j, . . .pn. 12. The weight of a result obtained by the rule given above is defined to be the sum of the weights of its constituents ; so that, because alp = 2px, the product of a result by its weight is equal to the sum of the like products for its constituents. It follows that, in obtaining the final result, we may for any group of observations substitute their mean with the proper weight. In the case of observations supposed equally good, the weight of each is taken equal to unity, and then the weight of the mean is the number of observations. The Probable Value. 13. The most probable value of the observed quantity, or simply the probable value, in the ordinary sense of the expres- sion signifies that which, in our actual state of knowledge, we are justified in considering as more likely than any other to be the true value. In this sense, the arithmetical mean is the most §11.] THE PROBABLE VALUE. 7 probable value which can be derived from observations con- sidered equally good. This is, in fact, equivalent to saying that we accept the arithmetical mean as the beat rule for com- bining the observations, having no reason either theoretical or practical for preferring any other.* But, if instead of a rule of combination we adopt a theory with respect to the nature of accidental errors, the probable value will depend upon the adopted theory. To become the subject of mathematical treatment such a theory must take the shape of a law of the probability uf accidental errors, as will be explained in a subsequent section. Since, in the nature of things, this law can never be absolutely known, and since more- over it probably differs with differing circumstances of observa- tion, the most probable value in this technical sense is itself unknown. But when the expression is used without specifying the law of probability, it signifies the value which is the most prubable in accordance with the generally accepted law of proba- bility. Before proceeding to this law, we shall consider, in the following section, the principles of probability so far as we shall need to apply them. Examples. 1. Show that the formula nf{a) = 2f{^x) determines a mean value of w quantities for any form of the function/, and that the geometric mean is included in this rule. 2. Except wheny"(.*-) = ex in Ex. i, the position of the point whose abscissa is a is dependent upon the position of the origin as well as upon the observation points. *That the most probable value, when there are but two observations, is their arithmetical mean follows rigorously from the hypothesis that positive and negative errors are equally probable. Tho propeity of the arithmetical mean pointed out in Art. 12 shows that tho result for three observations is expressible as a function of the result for two of them and the third observation, and so on for four or more observations. It was upon the assumption that the most probable value must possess this property that Encke based his so-called proof that the arithmetical mean is the most probable value for any number of observations [Berliner Astronomisches Jahrbuch for 1834, pp. 260-262). 8 OBSERVATIONS OF A SINGLE QUANTITY. [Art. 13 3. If the values of x are nearly equal in Ex. i, the result of the formula is nearly equivalent to a weighted arithmetical mean in which the weights are proportional X.of'{hxi + \a), /'{ixs+ia), etc. 4. When a mean value is determined by an equation of the form I!/{x — a) = o, the position of the point whose abscissa is a is independent of the origin. Give the cubic determining^ a when 2(x — af = o, and show that one root only is real. 5. Prove that the weighted arithmetical mean of values of X •{■ y\s the sum of the like means of the values of x and of the values of jc respectively. in. Principles of Probability. The Measure of Probability, 14. The probabtHfy of a. future event is the measure of our reasonable expectation of the event in our present state of knowledge of its causes. Thus, not knowing any reason to the contrary, when a die is to be thrown we assign an equal proba- bility to the several events of the turning up of its six different faces. We say, therefore, that the probability or chance that the ace will turn up is i to 5, or better, i out of 6, hence the fraction ■§■ is taken as the measure of the probability. Thus the probability of an event which is one of a set of equally likely events, one of which must happen, is the fraction whose num- erator is unity and whose denominator is the number of these events. Obviously, the probability of an event which can happen in several ways is the sum of the probabilities of the several ways. Thus if the die had two blank faces, the probability that one of them would turn up would be \ or \. The sum of the proba- bilities of all the possible events is unity, which represents the certainty that some one of the events will happen. Compound Events. 15. An event which consists of the joint occurrence of two independent events is called a compound event. By independent events we mean events such that the occurrence or non-occur- rence of the first has no influence upon the occurrence or non- occurrence of the second. For example, the throwing of sixes with a pair of dice is a compound event consisting of the turning up of a special face of each die. The whole number of com- pound events is evidently the product of the numbers of simple events ; and, since the several probabilities are the reciprocals lO PRINCIPLES OF PROBABILITY. [A t. 15 of these numbers, the probability of the compound event is the product of the probabilities of the simple events. Thus, when a pair of dice is thrown we have 6 X 6 = 36 compound events, and the probability of a special one, such as the throwing of sixes, is i X i = ^. In like manner, if more than two simple events are concerned, it is easily seen that, in general, the probability of a compound event is the product of the probabilities of the independent simple events of whose joint occurrence it consists. 16. A compound event may happen in diflferent ways, and then, of course, the probabilities of these independent ways must be added, For example, six and five may be thrown in two ways, that is to say, two of the 36 equally likely events consist of the combination six and five, hence the chance is -^ or ^. A throw whose sum amounts to 10 can occur in three ways, therefore its chance is ^ or ^. Repeated Trials. 17. When repeated opportunities for the occurrence or non- occurrence of the same set of events can be made to take place under exactly the same circumstances, equally probable events will tend to occur with the same frequency. Therefore, in a large number of such opportunities or trials, the relative fre- quency of the occurrence of an event which can happen in m ways and fail in n ways (the m -V n ways of both kinds corres- ponding Xo m -V n equally probable elementary events) will tend to the value , which is the fraction expressing the prob- ability of the event. This is commonly expressed by saying that the ratio of the number of occurrences of an event to the whole number of trials will " in the long run " be the fraction which expresses the probability. The correspondence of this frequency in the long run with the estimated probability forms the only mode, though an uncertain one, of submitting our results to the test of experience. §ni.] PROBABILITY OF CONTINUOUS VALUES. II The Probability of Values belonging to a Continuous Series, l8. In the examples given in the preceding articles, the equally probable elementary events, which are the basis of our estimate of probability, form a limited number of distinct events, such as the turning up of the different faces of a die. But, in many apphcations, these events belong to a consecutive series, inca- pable of numeration. For example, suppose we are concerned with the value of a quantity x, of which it is known that any value between certain limits a and b is possible ; or, what is the same thing, the position of the point P, whose abscissa is x, when P may have any position between certain extreme points A and B. We cannot now assign any finite measure to the prob- ability that X shall have a definite value, or that P shall fall at a definite point, because the number of points upon the lii>e AB is unlimited. We have rather to consider the probability that /'shall fall upon a definite segment of the line, or that the value of j; shall lie between certain limits. ig. It is customary, however, to compare the probabilities that P shall fall at certain points. Suppose in the first place C D c 2?' ix A I ? B? Fig. I. that, when any equal segments of the line AB are taken, the probabilities that P shall fall in these segments are equal. In this case, the probability that P shall fall at a given point is said to be constant for all points of the line. Let Ax be a segment of the line AB ; then, if the probability for all points of AB is constant, it readily follows from the definition just given that the 12 PRINCIPLES OF PROBABILITY. [Art. ig probability that P shall fall in the segment Ax is proportional to Ax. Since we suppose it certain that P shall fall somewhere between A and B, this probability will be represented by Ax Ax AB ^'^ J^^a Let an ordinate y be taken such that yAx is the value of this probability; then and, constructing as in Fig. i the line CD having this constant ordinate, the probabilities for any segments of AB are the cor- responding rectangles contained between the axis and the line CD For different values of the limiting space AB in which P may {2S\.,y varies in inverse ratio. Thus, if AB is changed to AB' , the new ordinate AC or y is such that y'.AB'=y.AB, each of the areas A CDB and A CDS' being equal to unity. The two values oi y are said to determine the relative proba- bilities that P shall fall at a given point in the two cases. Curves of Probability. 20. Taking now the case in which the probability is not con- stant for all points, let AB be divided into segments, and let rectangles be erected upon them, the area of each rectangle representing the probability that /"shall fall in the corresponding segment. The heights of these rectangles will now differ for the different segments. Denoting the height for a given segment Ax hyy, the relative values of j/ for any two segments deter- mine, as explained in the preceding article, the relative proba- bility that /"shall fall at a given point in one or the other of the segments, on the hypothesis that the probability is constant throughout the segment. They may thus be said to measure the mean values of the probabilities forgiven points taken in the various segments. The sum of the areas of the rectangles will, of course be unity; that is, 2yAx=^\. 21. If we now subdivide the segments, the figure composed §ni.] CURVES OF PROBABILITY. 13 of the sum of the rectangles will approach more and more nearly, as we diminish the segments without limit, to a curvilinear area, and the variable ordinate of the limiting curve will measure the continuously varying probability that P shall fall at a given point of the line AB. The value of^ is now a continuous function of x the abscissa of the corresponding point, and, putting j)/=/(;c), the function f{x) is said to express the law of the probability of the value x. Fig. 2. The curve y ■=f{x') is the probability curve corresponding to the given law/(j;). The entire area ACDB, Fig. 2, whose Cb valufe is ydx (which is the limit oiSyAx ; see Int. Calc, Art. 99), a and b being the limiting values between which x certainly falls, is equal to unity. In general, for any limits the value of the integral ydx is the probability that x falls between the values a and ;?. The AemerA ydx of this integral may be called the element of probability for the value x. It is sometimes called the probability that the value shall fall between x and x + dx, it being in that case understood that dx is taken so small that the probability may be regarded as constant in this interval. 22 As an illustration of what precedes, suppose it to be known that the value oix must fall between zero and a, and that the probabilities of values between these limits are proportional to the values themselves. These conditions give and y=.CX , I ydx = I , 14 PRINCIPLES OF PROBABILITY. [Art 22 whence, substituting and integrating, ca^ 2 — = I , or c= —. 2 a" Hence the law of probability in this case is We may now find the probability that x shall fall between any given limits. For example, the probability that x shall exceed Ja is represented by P = I ydx = —^ xdx = — . h" * ha 4 Thus the odds are 3 to i that x exceeds Ja when the law of probability is that proposed. Mean Values under a given Law of Probability. 23. When a quantity x has a given law of probability, we have frequently occasion to consider what would be its mean or average value " in the long run," that is to say, the arithmetical mean of its values, supposing them to occur in a large number of trials with the frequency indicated by the given law of prob- ability. See Art. 17. Let us suppose, in the first place, that only a limited number of distinct values, say are possible. Let P^, P^ . . . Pm be the proper fractions which represent the respective probabilities of these values. Then, in a large number n of trials, the number of times in which the distinct values x^jX^. . . Xm occur will be nP^ , nP^,... nPm respectively. The arithmetical mean mentioned above is, there- fore, nPiXi + nP^x^ + . . . + nPmXm §111.] MEAN VALUES UNDER A GIVEN LAW. 1 5 that is, P^Xx + P^X^ + . . . + PmXm , or IPx. That is to say, the mean value is found by multiplying thcj m distinct values by their probabilities and adding the results.* 24. Next, supposing a continuous series of values possible, let jcJjT be taken, as in Art. 20, to represent the probability that X falls between x and x + Ax. Evidently, in each term of SPx, we must now substitute this expression for P, and for x some intermediate value between x and x + Ax. When we pass to the limit, in which _y becomes a continuous function oi x, this sum becomes 1 xydx , which is thus the mean value of x, when y is the function expressing its law of probability and a and b its extreme possible values. For example, with the law of probability considered in Art. 22, namely, 2X y=w' the mean value oix is 2_ p a' Jo x''dx = — a. 3 25. In the same manner it may be sLown that, \i y:=f{x') expresses the law of probability of x, the mean value of any function F(^x) is ^ F{x)f{x')dx . *The "value of an expectation" is an instance of a mean value. Thus, if x^ is the value to be received in case a certain event whose prob- ability is P^ happens, x, the value to be received if an event whose probability is /", happens, and so on for m distinct events, one of which must happen, then the mean value 'SiPx is called the valu*; of the expec- tation. 1 6 PRINCIPLES OF PROBABILITY. [Art. 25 Thus, again taking the law of probability ^ = — r » the mean value of x^ * is Again, that of — is — j;V;!r = — . «' Jo 2 ^ Lf X = - . « Jo « 26, If all values between a and b are equally probable, the . . dx element of probability is , _ ; thus the mean value of x, in this case, is f* xdx _ ^° — a' _ a + 3 ]ab — a~ 2(b — a)~ 2 ' which is the same as the arithmetical mean between the limiting values. Again, the mean value ol x^, in this case, is f* x'^dx b^ — a' 1 /« , i , jN Jai> — a Z{b — a) * ^ T'-^^ Probability of Unknown Hypotheses. 27. No distinction can be drawn between the probability of an uncertain future event and that of an unknown contingency, in a case where the decisive "event" has indeed happened, but we remain in doubt with regard to it because only probable evidence * It should be noticed that if « = F[^x\ the law of probability for « is not found by simply expressing/(a;) as a function of e. It is necessary to transform the element of probability /■(j;)rf;c, which expresses the proba- bility that X falls between x and x + dx, and therefore represents also the probability that z falls between z and z + dz. Thus, in the present case, putting z = x', f{x)dx = -^dx = -^, which indicates that all values of z between o and a' are equally prob- able when, as supposed in Art. 22, the probability of a value of x is pre portional to the value itself. §111.] PROBABILITY OF UNKNOWN HYPOTHESES. t/ is known to us. In any case, the probability is a mental estimate q{ credibility depending only upon the known data, and there- fore subject to change whenever new evidence becomes known. Let there be two hypotheses A and B, one of which must be true, and which so far as we know are equally probable, and suppose that a trial is to be made which on either hypothesis may eventuate in one or the other of two ways ; in other words, that an event X may or may not happen. Suppose, further, that on the hypothesis A the probability of X is a, and on the hypothesis B the probability of X is b. Now it is clear that after the trial has been made and the event X has happened, we are entitled to make a different estimate of the relative credibilities of the hypotheses A and B. 28. To obtain the new measures of the probabilities ol A and B, we employ the notion of relative frequency in the long run. Let us then consider a great number of cases of the four kinds which before the event ^we regard as possible, the frequencies of the different kinds being proportional to their probabilities as we estimate them before the event. The hypotheses A and B respectively are true in an equal numberof cases, say «, of each. The event X will happen in na of the cases in which A is true, and not happen in «(i — a) cases. Again, Xwill happen in nb cases in which B is the true hypothesis, and not happen in n{j. — b') cases. Now, since X has actually happened, from the whole number, ■zn, of cases we must exclude those in which ^does not happen, and consider only the na + nh cases in which Xdoes happen. Attending only to these cases, the relative frequency of those in which A and B respectively are true is the measure of our present estimate of their relative probability. Hence these probabilities are in the ratio a:b, that is, the probability oi A is — ^ , and that of B is y . a -V b 29. As an illustration, suppose there are two bags, A and B, containing white and black balls, A containing 3 white and 5 18 PRINCIPLES OF PROBABILITY. [Ait. 29 black balls, B containing 5 white and i black ball. One of the bags is chosen at random, and then a ball is drawn at random from the bag chosen. The ball is found to be white ; what is the probability that the bag A was chosen? Here « — f , since three out of eight balls in A are white, and ^ = f; hence the probabilities are in the ratio f : •§■ or 9 : 20. The probability that the bag was A is therefore ^. Again, suppose A is known to contain only white balls, and B an equal number of white and black. If a white ball is drawn « = I, ^ = I", the odds in favor of ^ are 2 : i or the probability of A is f. But if a black ball had been drawn, we should have had a = o,b—\, the probability of A is zero, that is, it is certain that the bag chosen was not A. 30. If there are other hypotheses besides A and B consistent with the event X, the same reasoning as in Art. 28 establishes the genera] theorem that the probabilities of the several hypoth- eses, which before an event X were considered equally probable* are after the event proportional to the numbers which before the event express the probabilities of X on the several hypotheses. The various hypotheses in question may consist in attributing different values to an unknown quantity x, and these values may constitute a continuous series. The probabilities of the various values will then be proportional to the corresponding prob- abilities of the event X. Hence, to find the law of the prob- ability of ;tr, it is only necessary to determine a constant in the same manner that c is determined in Art. 22. In particular it is to be noticed that, of all the values of an unknown quantity which before the occurrence of a certain event were equally probable, that one is after the event the most prob- able which before the event assigned to it the greatest probability. * If this is not the case, the probabilities before the event are called the antecedent or a priori probabilities, and the theorem is that the ratio of the antecedent probabilities is to be multiplied by the probabilities of ^ on the several hypotheses, in order to find the ratio of the probabilities after the event. JUL] EXAMPLES. I9 Examples. 1. From 2« counters marked with consecutive numbers two are drawn at random ; show that the odds against an even sum are nX.on — i. 2. A and B play chess, A wins on an average 2 out of 3 games; what is the chance that A wins exactly 4 games out of the first six? ■^^. 3. A domino is chosen from a set and a pair of dice is thrown ; what is the chance that the numbers agree ? ■^. 4. Show that the chance of throwing 9 with two dice is to the chance of throwing 9 with three dice as 24 to 25. 5. A and B shoot alternately at a mark. A hits once in n times, B once in « — i times ; show that their chances of first hit are equal, and find the odds in favor of ^ after A has missed the first shot. nXon— 2. 6. A and B throw a pair of dice in turn, A wins if he throws numbers whose sum is 6 before B throws numbers whose sum is 7 ; show that his chance is f^. 7. A walks at a rate known to be between 3 and 4 miles an hour. He starts to walk 20 miles, and B starts one hour later, walking at the rate of 4 miles an hour. What is the chance of overtaking him : 1° if all distances per hour between the limits are equally probable; 2° if all times per mile between the limits are equally probable? 1°, i to 2 ; 2°, 2 to 3. 8. If all values of x between o and a are possible and their probabilities are proportional to their squares, show that the probability that x exceeds \a'\& \, and find the mean value of^. !«• 9. If, in the preceding example, we are informed that x exceeds \a, how is the probabihty affected, and what is now the mean value of JT? M**; ID. If two points be taken at random upon a straight line AB, whose length is a, and X denote that which is nearest A, show that the curve of probability for Xis a straight line passing through B, and find the mean value of AX. \a. 20 PRINCIPLES OF PROBABILITY. [Art. y 11. On a line AB, whose length is a, a point Z'vs, taken at random, and then a point X is taken at random upon AZ. Determine the probability curve for AX, or x, and the mean value of jr. i , a a y— —log - ;—. 12. Two points are taken at random on the circumference of a circle whose radius is a. Show that the chord is as likely as not to exceed ai^ 2, but that the average length of the chord is4i. ■K 13. In a semicircle whose radius is a, find the mean ordinate : 1° when all points of the semi-circumference are equally prob- able ; 2° when all points on the diameter are equally probable. o 2a . o '!■« ' TT ' ^ ' 7- 14. A card is missing from a pack; 13 cards are drawn at random and found to be black. Show that it is 2 to i that the missing card is red. 15. A card has been dropped from a pack; 13 cards are then drawn and found to be 2 spades, 3 clubs, 4 hearts, and 4 diamonds. What are the relative probabilities that the missing card belongs to the suits in the order named ? 11:10:9:9. 16. A and B play at chess: when A has the first move the odds are 11 to 6 in favor of A, but when B has the first move the odds are only 9 to 5. A has won a game; what are the odds that he had the first move? 154 to 153. 17. The odds are .2 to i that a man will write 'rigorous' rather than ' rigourous.' The word has been written, and a letter taken at random from it is found to be ' u '; what are now the odds ? 9 to 8. 18. A point /"was taken at random upon aline yi5, and then a point C was taken at random upon AP. If we are informed that C is the middle point of AB, what is now the probability curve of AP ? i X log 2* IV. The Law of Probability of Accidental Errors. The Facility of Errors, 31. If observations made upon the same magnitude could be repeated under the same circumstances indefinitely, only a limited number of observed values, which are exact multiples of the least count of the instrument, would occur, and the rela- tive frequency with which they occurred would indicate the law of the probability of the observed values, that is to say, the law of facility with which the corresponding errors are com- mitted. In the theory of errors, however, it is necessary to regard all observed values between certain limits as possible, so that when they are laid down upon a line as abscissas, the law of facility may be represented by a continuous curve, as explained in Art. 21. This is in fact equivalent to supposing the least count diminished without limit. The curve thus obtained is the probability curve for an observed value ; and, if the point representing the true value be taken as origin, the abscissas become errors, and the curve becomes the probability curve for accidental errors committed under the given circumstances. 32. The probability curves corresponding to different circum- stances of observation would differ somewhat, but in any case would present the following general features. In the first place, since errors in defect and in excess* are equally likely to occur, the curve must be symmetrical to the right and left of the point which represents the true value* of the observed quantity. In the next place, since accidental errors are made up of elemental errors (Art. 3) which, as they may have either direction, tend *Theie is usually no distinction in kind between these : eitiier direc- tion may be taken as positive, and errors of a given magnitude in one direction or the other are equally likely to occur. 22 PROBABILITY OF ACCIDENTAL ERRORS. [Art. 32 to cancel one another, small errors are more frequent than large ones, so that the maximum ordinate occurs at the central point. In the third place, since large errors (which can only result when most of the elemental errors have the same direc- tion and their greatest magnitudes) are rare, and errors beyond some undefined limit do not occur, the curve must rapidly approach the axis of x both to the right and left, so that the ordinate (which can never become negative) practically van- ishes at an indefinite distance from the central point. 33. If J)/ = fix) is the equation of the curve referred to the central point as origin, the general features mentioned above are equivalent to the statements : first, that fix) is an even func- tion, that is, a function of x^ ; secondly, that {x — a) , . . . . (i) we have from equation (2), Art. 36, \ogP=^lx^~a') + 4)(_Xi—a)+ ... +4'(xn—a') + n\og Jx, (2) and a is to be so taken that P, and therefore log P, shall be a maximum. Hence, putting ^' for the derivative of which are the residuals, hy v-^,v^, . . .Vn, this equation may be written ^''(z'l) + 4>\v^) + . . . + 0'(z'„)= o. . . . (4) Supposing now the value of a which satisfies equation (3) to be the arithmetical mean, \ve have by Art. 9, Z'l + Z'2 + . . . + Z're = o (5) We wish therefore to find the form of the function 4>' such that equation (4) is satisfied by every set of values of Wi , fu , . . . w„ which satisfy equation (5). For this purpose, suppose all the values of » except v^ and v^ to remain unchanged while aqua- ton (5) is still satisfied. The new values may then be denoted by Z'l + k and v,. — k, in which k is arbitrary. Substituting the new values in equation (4), the sum of the first two terms must remain unchanged since all of the other terms are unchanged ; therefore, /(», + /^) + f shows that the half which exceed r do so by a total amount greater than that by which the other half fall short of it. §IV.] MEASURES OF THE RISK OF ERROR. 35 '» = ;;7^='-'«29r (I) >V2 = 1.4826 r, (2) 52. Fig. 5 shows the positions of the ordinates corresponding to r, 11 and £ in the curve of facility of errors k — Wxa The diagram is constructed for the value A ss f., r Tj s Fig. 5. From the definitions of the errors it is evident that the ordinate of r bisects the area between the curve and the axes, that of 7 passes through its centre of gravity, and that of e passes through its centre of gyration about the axis of^. 36 PROBABILITY OF ACCIDENTAL ERRORS. [Art 52 The advantage of employing in practice a measure of the risk of error, instead of the direct measure of precision, results from the fact that it is of the same nature and expressed in the same units as the observations themselves. It therefore conveys a better idea of the degree of accuracy than is given by the value of the abstract quantity h. When the latter is given, it is of course necessary also to know the unit used in expressing the errors. Tables of the Probability Integral. rt 53. The integral flot at all upon its direction.* This agrees with • Sir John Herschel's proof of the law of facility of errors {^Edinburgh Review, July, 1850) rests upon the assumption that it must possess the property which is above shown to belong to the exponential law. He compares accidental errors to the deviations of a stone which is let fall with the intention of hitting a certain mark, and assumes that the deviations in the directions of any two rectangular axes are independent. But, since there is no reason why the resultant deviations should'depepd upon their direction, this implies that,/(j;') being the law of facility, we must have where x' a.nAy' denote coordinates referred to a new s«t,of rectangular axes, so that X* +y*=ix'^ +y« Now the solution of the functional equation is fvhere c and k are constants. Tb$re is no a priori reason why the deviations in y should, as assumed 40 PROBABILITY OF ACCIDENTAL ERRORS. [Ait. 55 the usual custom of judging of the accuracy of a shot solely by its distance from the point aimed at. The Surface of Probability. 56. If at every point of the plane of ;ri' we erect a perpendicular z, taking we shall hav? a surface of probability analogous to the curve of probability in the case of linear errors. Since the probability of hitting the elementary area dxdy is zdxdy, the probability of hitting any area is the value of the double integral II zdxdy taken over the given area. That is to say, it is the volume o/ the right cylinder having this area for its base, and having its upper surface in the surface of probability. The probability surface is a surface of revolution. The solid included between it and the plane of xy is in fact similar to that employed in Art. 39, in evaluating the integral -Ws! dx. The Probability of Hitting a Rectangle. 57. The probability of hitting the rectangle included between the horizontal lines _y =^j)/i ,y^yi and the vertical lines x = Xi, .r = Xj is the double integral h - r re-^''''e-'''y'dydx, above, occur with the same relative frequency when x has one value as when it has another ; but it is noteworthy that, having made this assump- tion, no other law of facility of linear deviation would produce a law of distribution in area involving only the distance from the centre. On the other hand, no other law of distribution in area depending only upon r (such for example as « — '■) would make the law of facility for deviations in J/ independent of the value of x. §IV.] PROBABILITY OF HITTING A RECTANGLE. 4I which, because the limits for each variable are independent oi the other, is equivalent to y Jx, n Jii, Hi that is, it is the product of the probabilities that x and j/ respec- tively shall fall between their given limits. This result is, of course, nothing more than the expression of the hypothesis made in Art. 55.* If /i be known, the values of the factors in the expression (2) may be derived from Table I, as explained in Art. 44. In particular, putting Xi= — S, x^ = S, y^— — S' , y^ — <5', we have for the probability of hitting a rectangle whose centre is at the origin and whose sides are 2S and 28', P = PiPy, where Pf, and /•«' are tabular results taken from Table I, if ^ be given, or from Table II if the probable error of the deviations be given. For example, for the square whose centre is the origin and whose half side is r^ , the probable error of the component deviations, the probability of hitting is \. Again, to find the side of the centrally situated square which is as likely as not to be hit, and which therefore may be called the probable square, we must determine the value of i5 for which /"s = V ^ — 0.7071. This will be found to correspond to t— 0.7437, whence the side of the square is 28, where f,_ t _ 0-7437 h~ h ' * The property of the probability surface corresponding to the assump- tion that the relative frequency of the deviations in y is independent of the value of j. is that any section parallel to the plane of yz may be derived from the central section in that plane by reducing all the- values of » in the same ratio. In accordance with the pri ceding foot-note, this is the only surface of revolution possessing this property. 42 PROBABILITY OF ACCIDENTAL ERRORS. [Art. 58 The Probability of Hitting a Circle. 58. Putting o = 2T.rdr in the expression derived in Art. 55, the probabiHty of hitting the elementary annular area between the circumferences whose radii are r and r-\-dr is found to be dp = 2h'e-^"''rdr (i) Hence the probability that the distance of a shot from the point aimed at shall fall between r^ and r, is rdr = e-^''^-e-^K . . (2) Putting the lower limit r, equal to zero, we have, for the prob- ability of planting a shot within the circle whose radius is r, /=i-^-'"" (3) a formula in which h is the measure of the accuracy of the marksman. The Radius of the Probable Circle. 59 If we denote by a the value of r corresponding \.of = i in equation (3) of the preceding article, we shall have whence ^ -*'"*=* (I) = ^^ (2) k Then a is the radius of the probable circle, that is, the circle within which a shot is as likely as not to fall, or within which in the long run the marksman can plant half his shots. Thus a is analogous to the probable error in the case of linear devia- tions, and, being inversely proportional to h, may be taken as an inverse measure of the skill of the marksman. Eliminating h from the formula for/ by means of equation (i), nre obtain ^ = '-(t^ (3) §IV.] THE RADIUS OF THE PROBABLE CIRCLE. 43 Denoting by n the whole number of shots, and by m the number of those which miss a circular target of radius r, we may, if n and m be sufficiently large, put , vt '^ n Supposing/ in equation (3) to be thus determined, we derive the formula log 2 « = ''^j log n — log m ' in which the ordinary tabular logarithms may be employed.* Tke Most Probable Distance. 60. Equation (i), Art. 58, shows that the probability of hitting the elementary annulus of radius r is proportional to The value of r which makes this function a maximum is found to be identical with e, the mean error of the linear deviations, namely, * = ^' which is therefore the most probable distancef at which a shot can fall. This distance might, like a, be taken as the inverse measure of the skill of the marksman. *This is Sir John Herschel's forinula for the inverse measure of the skill of the marksman. See " Familiar Lectures on Scientific Subjects," p. 498. London and New York, 1867. fThe point at which the probability is a maximum (that is, where the density of the shots in the long run is the greatest) is of course the origin, at which the ordinate z in the probability surface is a maximum. The value of r here determined is that for which the right cylindrical surface included between the plane of xy and the probability surface is a maximum, that is, the annulus nrhich contains the greatest number of shot in the long run. 44 PROBABILITY OF ACCIDENTAL ERRORS. [Art. 6i Measures of the Accuracy of Shooting, 6l. Any quantity inversely proportional to h might be taken as the measure of the marksman's risk of error, or inverse measure of precision. We may employ for this purpose either a, the radius of the probable error, e, the most probable dis- tance, 5, the half side of the probable square (Art. 57), or r^, the probable error of a linear deviation. The most probable value of h derivable from n given shots will be shown in the next section, Art. 73, to be Employing this value oih we have « = -5^^ = 0.8326^—, I /jr' /r^ . 0.7437 Z^''' «=_^ = 0.7437 V^. = 0.4769 y — . Examples. 1. Show that the abscissa of the point of inflexion in the probability curve is the mean error. 2. In looo observations of the same quantity how many may be expected to differ from the mean value by less than the probable error, by less than the mean absolute error, and by less than the mean error respectively ? 500, 575, 683. 3. An astronomer measures an angle 100 times ; if, when the unit employed is i", the measure of precision is known to be § IV.] EXAMPLES. 45 h-=i\, how many errors may be expected to have a numerical value between 2" and 4" ? 31. 4. In 1 25 observations whose probable error is 2", how many errors less than i" are to be expected ? 33. 5. If the probable error is ten times the least count of the instrument, show that about 27 observations out of 1000 will be recorded with the true value, and 21 will exceed it by an amount equal to the probable error. 6. If h is changed to mh (jn > i), errors less than a certain error x^ are more probable, and errors greater than jt, are less probable. Find /i the reduced value of x^ . 7. Show that the envelop of the probability curve, when k varies, is the hyperbola the abscissa of the point of contact being the mean error. 8. Show that I e ''^dx = x \ e "'" Viz< ; and thence derive the value of the integral. 9. Deduce the formula of reduction (m positive) [x'^e-'^'^'dx = ^=^ [/"-'e-'^^dx; and thence show that (« being a positive integer) the mean value of the 2wth power of the error is (2m)! 22»w!;i2»' and that the mean absolute value of the (2tt + i)th power of the error is n\ .sn-fi , • 46 PROBABILITY OF ACCIDENTAL ERRORS. [Art. 6l 10. Show that Jo 3 2! 5 3! 7 ^ •• 11. Deduce the formula of reduction {n positive) and thence show that Erfc/-n-'V/-^/'i L+i:3_ 1:3:5+ \ 12. Find the probability that the deviation of a shot shall exceed 2a. ^. 13. Find the probability that a shot shall fall within the circle whose radius is c i — e'^ — 0.3935. 14. A marksman shoots 500 times at a target ; if his skill is such that when errors are measured in feet, A = i, what is the number of bullet marks between two circles described from the centre with radii i and 2 feet? 175. 15. If errors are measured in inches in example 14, what are the values of h and of a ? ^, 9.99. 16. An archer is observed to plant 9 per cent of his arrows within a circle one foot in diameter ; what is the diameter of a target which he might make an even bet to hit at the first shot ? 2 ft. 8J in. 17. A hits a target 3 feet in diameter 51 times out of 79 shots ; B hits one 2 feet in diameter 39 times out of 87 shots. Find the diameters of the targets that each can make an even wager to hit at the first shot. For A, 2.45 feet ; for B, 2.16 feet. 18. In example 17, what are the odds that B will hit A's probable circle at the first shot? About 59 to 41. 19. If the circular target which a marksman has an even chance of hitting be divided by circumferences cutting the radius into four equal parts, how many shots out of loco will fall in the respective areas ? 42, 117, 164, 177. § IV.] EXAMPLES. 4? 20. A circular target 32 inches in diameter is divided into rings by circumferences cutting the radius into four equal parts. The number of shots out of 1000 which fell in the several areas Were 31, 89, 121, 141 ; what are the respective values of a in inches determined from the numbers of shots in the several circles? 18.764, 18.628, 19.025,19.202. 21. Find the probability of hitting a square target circum- scribing the circle whose radius is a. -5790. 22. If several shots be fired at a wafer on a wall and the wafer be subsequently removed, show that the centre of gravity of the shot marks is the most probable position of the wafer. V. The Combination of Observations and Probable Accuracy of the Results. The Probability of the Arithmetical Mean. 62. We have seen that, in accordance with the law of facility which we have adopted, the best result of llie combination of a number of equally good observations is their arithmetical mean. We have next to determine the probable accuracy of this result, and then to consider the best method of combining observations of unequal precision. Let there be n observations, the law of facihty of error for each of which is ^ = -^.-^(— )'. (I) a being the true value of the observed quantity, and x■^,x^...X1i the observed values. Then the value of P, equation (2), Art. 36, becomes i>=£^-^»2"'-'«^'z/^,. (2) and, as shown in Art. 30, the probabilities of the different hypotheses which we can make as to the value of a are propor- tional to the corresponding values oi P. 63. Let us now take a to denote the arithmetical mean, and put a — 5 for the true value, so that S is the error of the arith- metical mean ; then denoting the residual by v, the true error will be .a: — « + 5 = 2* + 5. It was shown in Art. 43 that hence the general value of P must now be written § v.] PROBA n ILITY OF THE ARITHME TICAL ME AM. 49 and the value expressed by equation (2) is now the maximum value, corresponding to 5 = o. Distinguishing this value by the symbol /J,, equation (3) may be written />= /-o^-"'"'' (4) Since the probability of 5, which is the error of our final determination, is proportional to P, and Pa is independent of i, equation (4) shows that the arithmetical mean has a law of prob- ability which is identical with that which we have adopted in equation (i) for the single observations, except that wA^ takes the place of A". Thus, denoting by jKo the facility of error in the arithmetical mean, we have y°=^' (5) The fact that the assumption of the law (i) for a single observation implies a law of the same form for the final value determined from the combined observations is one of the con- firmations of this law alluded to in Art. 40.* 64. Equation (5) of the preceding article shows that the arithmetical mean of n observations may be regarded as an observation made with a more precise instrument, the new measure of precision being found by multiplying that of the single observations by V «• Since hr is constant when r repre- sents any one of the measures of risk, we have for the probable error of the arithmetical mean, _ r *In general, an assumed law, >- = ^(x), of facility of error for the single observations would produce a law of a different form for the result determined from k observations. Laplace has shown that whatever be the form of for the single observations, the law of facility of error in the arithmetical mean approaches indefinitely to — Jflxi y =: ce as a limiting form, when « is increased without limit. See the memoir " On the Law of Facility of Errors of Observation, and on the Method of Least Squares," by J. W. L. Glaisher, Memoirs Royal Ast. Sac, vol. xxxix pp. 104, 105. 50 THE COMBINATION OF OBSERVATIONS. [Art. 64 and the same relation holds in the case of either of the other measures of risk. Thus, for example, it is necessary to take four observations in order to double the precision, or reduce the risk of error to one half its original value. The probable error of a final result is frequently written after it with the sign ±. Thus, if the final determination of an angle is given as 36° 42'. 3 ± i'.22, the meaning is that the true value of the angle is exactly as likely to lie between the limits thus assigned (that is, between 36° 4i'.o8 and 36° 43'.52) as it is to lie outside of these limits. The Combination of Observations of Unequal Precision. 65. When the observations are not equally good, let h^, h^, ... hnh& their respective measures of precision ; so that, a being the true value, the facility of error of Xi is that oix^is and so on. The value of /", Art. 36, which expresses the prob- ability of the given system of observed values on the hypothesis of a given value of «, now becomes ^1^2 ^,-^^n^-o.Y^^j^^^^^^^^. . (,) and, as before, the probabilities of different values of a are pro- portional to the values they give to P. It follows that that value of a is most probable which makes Ilh\x — af ox h\{x^—a)''-\-h\(x^—af^ ... H-A^^irn—a^^ a minimum. (2) In other words, if the error of each observation be multiplied by the corresponding measure of precision, so as to reduce the errors § v.] OBSER VA TIONS OF VNEQ VA L PRE CiStON. J 1 to the same relative value (see Art. 47), it is necessary that the sum 0/ the squares of the reduced errors should be a minimum. This is, in fact, the more general statement of the principle of Least Squares. Differentiating with respect to a, we have h\ {x^ - d) ^ h\{x^- d) ^ . . . -^ h\ {x^ - fl) = o; (3) and the value of a determined from this equation is h\x, + h\x^ + . . . + hlxn IK'x h\ + hl+ ... + h^ IK' (4) which is therefore the most probable value of a which can be derived from the n observations. Weights and Measures of Precision. 66. The value of a found above is in fact the weighted arithmetical mean of the observed values (see Art. 1 1), when the respective values of }/■ are taken as the weights. But, since the weights are numbers with whose ratios only we are concerned, we may use any proportional numbers pi, p^,., . pn,in place of the values of h. Thus putting h\=Pih\ k\=p,h\ ... K=pji\ . . (5) equation (4) may be written A+A+..-+A ~'W ' ' ^^ Hence the most probable value which can be derived from the n observations is the weighted arithmetical mean, the weights of the observations being proportional to the squares of their measures of precision. The quantity h in equations (5) is the measure of precision of an observation whose weight is unity. It is immaterial whether such an observation actually exists among the n observations or not. S 2 THE COMB IN A TtON OF OBSER VA TIONS. [Art. 66 If each of the observations has the weight unity, 2*/ takes the value n, and the value of a becomes the ordinary arithmetical mean. The Probability of the Weighted Mean. . 67. Let us now, employing a to denote the value determined above, put a + 5 in place of a in the value of P, so that 5 repre- sents the error in our final determination of a. Then, writing V for the residual, we have, as in Art. 63, to replace x — a \>y » + 5. The value ai P, equation (i). Art. 65, thus becomes p^ Mh^^ e -^''^'>+'^'jxJx, ...Jxn Now, by equation (3), Ih^v = o, therefore I A' (v + sy = lAV + S'Sl;?- substituting, we obtain P= ^-^— -^ e-^'^'e'''^''jxjx, . . . Jxn. Hence, putting /o for the value assumed by P wh^n 5 = o, we have /> _ /)^ g - «H''! + ftS + . . . + ftj)^ Since the probability of S is proportional to P, it follows, as in Art. 63, that the law of facility of the mean is of the same form as those of the separate observations, the square of the new measure of precision being the sum of the squares of those of the separate observations. Denoting the facility of error in the weighted mean by jo, and employing the notation of Art 66, we have therefore in which h is the measure of precision of an observation whose §V.J PROBABILITY OF THE WEIGHTED MEAN. S3 weight is unity. When the weights are all equal, this formula becomes identical with that of Art. 63. 68. The weight of the mean is defined in Art. 12 to be J^, the sum of the weights of the constituent observations. Hence the value of jfo found above shows that, in comparing the final result with any single observation, as well as in comparing the observations with one another, the measures of precision are proportional to the square roots of the weights. The probable error being inversely proportional to h, it fol- lows that, r representing the probable error of an observation whose weight is unity, and ^o that of the mean whose weight is 2p, we shall have r ro = V2>- This result includes that of Art. 64, and, like it, is applicable to either of the measures of risk. 7X1? Most Probable Value of h derivable from a System of Observations. 69. Substituting the values of ^1 , ^j , . . . ^n in terms of the weights, equations (5), Art. 66, the value of P, equation (i), Art 65, becomes The same principle which we have employed to determine the most probable value of the observed quantity serves to determine the most probable value of h. Thus the most prob- able value of h is that which gives the greatest value to P, or, omitting factors independent of h, to the expression Putting the derivative of this expression equal to zero, we have 54 THE COMBINATION OF OBSERVATIONS. [Art. 6i whence ^=S 2lpCx-ar (2J in which a denotes the true value of the observed quantity. 70. Equation (2) may be written ^p{x-ay _ _i_ n ~ 2h^ ^^' When the observations are all made under the same circum- stances, so that we may put A =A = • . • = A = I » the equation becomes I{x-ay _ I n - 2h' ^^' in which h denotes the measure of precision of each of the observations. The second member of this equation is the value of £*, the square of the " mean error," which was defined in Art. 50 as the mean value of the square of the error, having regard to its probability in a system of observations whose measure of precision is h. In other words, it is the mean squared error in an unlimited number of observations made under the given circumstances of observation. On the other hand, the first member of equation (2) is the actual mean squared error for the n given observations. The square root of this quantity may be called the observational value of the mean error, in distinction from the theoretical vahie, e, which is a fixed function of h. Thus the equation asserts that the most probable value of h is found by assuming the theoretical value of the mean error to be the same as its observational value. In other words, it is a consequence of the accepted law of facility that the measure of precision of a set of observations equally good is proportional to the reciprocal of the mean error as determined from the observations themselves. S v.] MEAN AND PROBABLE ERRORS. 55 Formulcs/or the Mean and Probable Errors. 71. The quantity Ip{_x — af in the value of h, equation (2), Art. 69, is the sum of the weighted squares of the actual errors of the observed values Xi,Xi, . . . Xn. Now, when a denotes the weighted arithmetical mean, x—a must be replaced byv + S, as in Art. 67, and Ipiv+8y=Ipv' + S'Sp. . . . . (i) The value of S, which is the error of the arithmetical mean, is of course unknown ; it may be either positive or negative, but, since S' is essentially positive, the true value of l'p(^x — a)' always exceeds I!p7^. The best correction we can apply to the approximate value I'pv' is found by giving to S' in equation (i) its mean value ; for, by adopting this as a general rule we shall commit the least error in the long run. Now we have seen in Art. 67 that S follows a law of probability of the usual form in which the measure of precision is k^Zp, hence the mean value of 5' is the same as the mean squared error found in Art. 50, except that k is changed to k^ Ip. That is to say, the mean value of ^' is I Putting this in place of S' in equation (i) we have lp(v + dy = ipv' + ^, (2) Equation (2), Art. 69, may be written in the form ^, = 2lpix-ay, and, employing the value just determined, we have J = 22>z;' + -^; whence we derive *=VJ^ ....... (3) 56 THE COMBINATION OF OBSERVATIONS. [Art. 71 for the most probable value of h for an observation of weight unity. 72. The resulting value of the mean error of an observation whose weight is unity is and by Art. 68, the mean error of the arithmetical mean whose weight is ^p is So ~ V(«-i)2> ^^' Again, the value of the probable error of an observation whose weight is unity is ^ = -J-=/'V2^^ = o.6745^^. . (3) and that of the weighted arithmetical mean is ^0=0.6745^^;^^^ (4) The constant 0.6745 is the reciprocal of that which occurs in equation (2), Art. 51. For a set of equally good observations we have, by putting pi =P% = . • . =pn = I, ^-=0.6745 y^-:^ (5) for the probable error of a single observation, and ro = 0.6745 V'^C^ (6) for the probable error of the simple arithmetical mean. §V.] VALUE OF h IN TARGET PRACTICE. $7 The Most Probable Value of h in Target Practice. 73. We have seen in Art. 55 that in target practice the prob- ability of hitting an elementary area a, situated at the distance r from the point aimed at, is — e a. Suppose that n shots have been made, the first falling upon the area a^ , the second upon a^ , and so on ; then, before the shots were made, the probability that the shots should fall upon these areas in the given succession is P=-—e a,a. x^2 • • • ^n* Hence, the shots having been made, the probabilities of different values of A are proportional to the values they give to the expression Making this function of A a maximum, we have whence we have, for the most probable value oik, ^^\JP' the value quoted in Art. 61. 74. The value of e' hence derived is 2« 2M " where e is the mean error for the component deviations, which are the values of x and y respectively. The values of e' as determined from the lateral and vertical deviations respectively, are 58 THE COMBINATION OF OBSERVATIONS. [Art. 74 e. = ^\ e' = ^. n ' n Thus the value of c', which we have derived from the total deviations, or values of r, is the mean of its most probable values as separately derived from the two classes of component deviations. It will be noticed that neither of the quantities Ix?, Ey^ or Sr^ needs to be corrected as in Art. 71, because we are here dealing with actual errors and not with residuals.* Tke Computation of the Probable Error. 75. The annexed table gives an example of the application of formulae (5) and (6), Art. 72. The seventeen values of jt in X V v" 4-524 + .0185 .00034225 4-500 - -0055 3025 4-515 + .0095 9025 4-508 + .0025 625 4-513 + .0075 5625 4-5" + .0055 3025 4-497 - .0085 7225 4-507 + .0015 225 4.501 -.0045 2025 4.502 - -0035 1225 4-485 — .0205 42025 4-519 + -0135 18225 4-517 + .0115 13225 4-504 — .0015 225 4-493 — .0125 15625 4.492 - -0135 18225 4-505 — .0005 25 « = 4.505^ = 4-5055 Iv" = .00173825 • If the position of the point aimed at had been inferred from the shot marks, as in example 22 of the preceding section, it would have been necessary to change « into «-^ i, as in the case of errors of obser- vation. So also this change should be made when the errors employed are measured from the mean point of impact, as in testing pieces of ordnance. SV.J COMPUTATION OF THE PROBABLE ERROR. 59 the first column are independent measurements of the same quantity made by Prof. Rowland for the purpose of determining a certain wave length. At the foot of the column is the arith- metical mean of the seventeen observations. The second column contains the residuals found by subtracting this from the separate observations. The values of v^ in the third column are taken from a table of squares, and their sum is written at the foot of the column. Dividing this by 16, the value of w — i, we find = 0.00010864, K — I ^ and taking the square root, e = 0.01042. Multiplying by the constant 0.6745 we have r = 0.00703 for the probable error of a single observation. Again, dividing by V i7> we have To = O.OOI7I for the probable error of the final determination, which may therefore be written X = 4.5055 ± 0.0017. It will be noticed that nine of the residuals are numerically less and eight are numerically greater than the value we have found for the probable error of a single observation. 76. The equation I(v + sy =Sv' + US', derived in Art. 43, enables us to abridge somewhat the com- putation of Sv^, and to reduce the extent to which a table of squares is needed. Thus, if we use the value of a to three places of decimals, namely a = 4.505, in forming the values of 60 THE COMBINATION OF OBSERVATIONS. [Art. 76 V, each of these quantities will be algebraically greater than it should be by -jAy of a unit in the third decimal place. Putting — -j-y, no — -^^ — 21X7' hence 2'z'', as found on this supposition, will be too great by 34-7- of a unit in the sixth decimal place. The columns headed V and %i^ would then stand as follows : V v' + .019 X)0036i — .005 25 + .010 100 + .003 9 + .008 64 + .006 36 — .008 64 + .002 4 — .004 16 — .003 9 — .020 400 + .014 196 + .012 144 — .001 I — .012 144 — .013 169 .000 o I(v + sy = .GDI 742 and making the correction found above, we have 2v^ = .001738^, which is the exact value. The smallness of the correction is due to the fact that Iv' is a minimum value. The correction might have been neglected, being, in this case, only about ^ of the correction made in the formula on account of the mean value of the unknown error in the arithmetical mean. §V.] COMPUTATION OF THE PROBABLE ERROR. 6 1 77. As an example of the application of the formulae involving weights, let us suppose that instead of the seventeen observations in the preceding article we were given only the means of certain groups into which the seventeen observations may be separated. These means we have seen may be regarded as observations having weights equal to the respective numbers of observations from which they are derived. The annexed table presents the p X V v" 2 4.512 + .0065 .00004225 .00008450 I 4-515 + .0095 9025 9025 4 4-507 + .0015 225 900 3 4-503 — -0025 625 1875 2 4.502 -•0035 1225 2450 2 4-5" + .0055 3025 6050 3 4-497 -.0085 7225 21675 « = 4-5055 lpv' = .00050425 datJi in such a form, the first value of x being the mean of the first two values in the preceding table, the next being the third observation, the next the mean of the following four, and so on. The weighted mean of the present seven values of x of course agrees with the final value before found. The values of v and of v' are formed as before, and the values oipv^ are given in the last column, at the foot of which is the value of Spv' . Dividing this by 6, the present value of re — i, we find --^- — = 0.00008405, » — I and, multiplying the square root of this by 0.6745, the value of the probable error of an observation whose weight is unity is r =0.00618. The probable error of the weighted mean found by dividing this by V 17, the value of ^1 Ip, is To = 0.00150. 62 THE COMBINATION OF OBSERVATIONS. [Art. 78 78. The value of r found above corresponds to a single observation of the stt given in Art. 75. It differs considerably from the value found in that article. The discrepancy is due to the fact that in Art. 77 we did not use all the data given in Art. 75, and it is not to be expected that the most probable value of h which can be deduced from the imperfect data should agree with that deduced from the more complete data. In one case we have seventeen discrepancies from the arithmetical mean, due to accidental errors, upon which to base an estimate of the precision of the observations ; in the other case we have but seven discrepancies. The result in the former case is of course more trustworthy ; and in general, the larger the value of n, the more confidence can we place in our estimate of the Tceasures of precision. 79. It should be noticed particularly that the weighted obser- vations in Art. 77 are not equivalent to a set of seventeen observations of which two are equal to the first value oix, one to the second, four to the third, and so on, except in the sense of giving the same mean value. Compare Art. 10. Such a set would exhibit discrepancies very much smaller on the whole than those of the seventeen observations in Art. 75. Accordingly, the value of e" in the supposed case would be very much smaller than that found above for the weighted observations. The value of 2v^ would in fact be the same as that of Ipv" in Art, 77, but it would be divided by 16 instead of by 6. The approximate equality of the results in Art. 75 and Art. 77 is due to the fact that the z^"s, of which seventeen exist in each sum, are on the average very much diminished* when the mean of a group is substituted for the separate observations, and this * The amount of this diminution is, however, largely a matter of chance. For example, if we had taken the seven groups in such a manner that the successive values of/ were 2, 3, 2, 4, 2, i, 3, we should have found r = 0.00833, differing in excess from that of Art. 75 still more than that obtained ab'we does in defect §V.] COMPUTATION OF THE PROBABLE ERROR. 63 makes up for the change in the denominator by the decrease in the value of n. 80. Different weights are frequently assigned to observations made under different circumstances, according to the judgment of the observer. Thus an astronomer may regard an observa- tion made when the atmosphere is exceptionally clear as worth two of those made under ordinary circumstances. Regarding the latter as standard observations having the weight unity, he will then assign the weight 2 to the former. As explained in the preceding article this is not equivalent to recording two standard observations, each giving the observed value. The latter procedure would lead to an erroneous estimate of the degree of accuracy attained. The Values of h and r derived from the Mean Absolute Error. 81. The mean absolute error 5j is a fixed function of ^, viz: ' = ^1^5 (^) hence, if we were able to determine it independently, we should have a means of finding the value of h, and consequently that of r. In the case of « equally good observations, let [x — a] denote the numerical value of an error taken as positive, then -^C^-^] (2) n *■ is the arithmetical mean of the absolute values of the n actual errors. This may be called the observational value of the mean absolute error in distinction from the theoretic value given in equation (i), which is the value of this mean in accordance with the law of probability, when the measure of precision is h. If we assume these values to be equal, we obtain S[x - a] _ I 64 THE COMBINATION OF OBSERVATIONS. [Art. 8l whence '^ ^ :§'[*- a] f^r' ^3) and ''=J=P^^ n (4) If in this formula we put for a the arithmetical mean, so that '2\x — a\ becomes •2'[w], it gives the apparent probable error, that is, the value r would have if the arithmetical mean were known to be the true value of x. Denoting this by r' , we have then '- =/'Vf-^ = 0-8453 -^- • • • (5) 82. It is obvious from Arts. 71 and 72 that the values of r' and r as derived from the square of the residuals are r = 0.6745 |/—, ^ = °-674S y ^^7^ » so that r:r' = \/n:i^{n- i).* (6) * This relation between the apparent and the real probable error is de- rived directly by C. A. F. Peters {Berliner Astronomisches Nachrichten, 1856, vol. xliv. p. 29) as follows: \l d , e^, . . . en are the true errors, that of the arithmetical mean is then = -(^i + ^a+ • • • +e»). « — I I I -ei et — . . . en, etc. n n n Since r is the probable error of each e, and / that of each v, the formula for the probable error of a linear function of independent quantities (see Art. 89) gives I ^2 = ;m+('^-)^j^^ This result is used by Peters to establish the formula derived above, but it may also be used in place of the method of Art. 71 for the correction of the apparent value of r in terms of 'Sv'. § v.] FORMULA IN VOL VING MEAN ABSOL UTE ERROR. 65 Combining this lesult with equation (5) we have ' ^ °-«453^ J^jj (7) and hence, for the probable error of the arithmetical mean, As an illustration, let us apply these formulae to the observa- tions given in Art. 75, for which we find 2[»] = 0.1405. Sub- stituting this value, and putting « = 17, we find r =■ 0.00720, r„ = 0.00175. These values agree closely with those derived in Art. 75 from the formulae involving ^i?, which indeed give the most probable values of r and r„ , but involve much more numerical work, especially when n is large. 83. In order to adapt the formulae of Art. 82 to the case of weighted observations, it is necessary to reduce the errors to the same scale; in other words, to make them proportional to the reduced errors or values of t, see Art. 47. Since the measures of precision are proportional to the square roots of the weights, this is effected by multiplying each error by the square root of the corresponding weight. The products may be regarded as errors belonging to the same system, namely, that which corresponds to the weight unity. Hence equation (7) gives for the probable error of an obser- vation whose weight is unity r = 0.8453 ,r , \-i . ^^"i/Hn - i)] and for the probable error of the weighted arithmetical mean jye have 66 THE COMBINATION OF OBSERVATIONS. [Art. 83 Examples. 1. A line is measured five times and the probable error oi the mean is .016 of a foot. How many additional measure ■ ments of the same precision are required in order to reduce the probable error of the determination to .004 of a foot ? 75. 2. It is required to determine an angle with a probable error less than o''.25. The mean of twenty measurements gives a probable error of o".38 ; how many additional measurements are necessary ? 27. 3. If the probable error of each of two like measurements of a foot bar is .00477 of an inch, what is the probable error of their mean ? .00337. 4. Ten measurements of the density of a body made with equal precision gave the following results : 9.662, 9.664, 9.677, 9.663, 9.645, 9-673. 9-659> 9-662, 9.680, 9.654. Wli it is the probable value of the density of the body and the probable error of that value ? 9.6639 ± .0022. 5. Forty micrometric measurements of the error of position of a division line upon a standard scale gave the following results : 3-68 5-08 2.81 4-43 548 4.21 3.28 5-21 3-1 1 2-95 4-65 3-43 3-76 5-23 3-78 4-43 4.76 6.35 3-27 3-26 4-59 4-45 3.22 2.28 2-75 3-78 4.08 2.48 2.64 3-95 3-98 4.10 4-15 4-49 4-51 4.84 2.98 2.66 3-91 4.18 Find the probable value of the quantity measured and its prob- able error. 3-93° ± 0.097. 6. In the preceding example what is the probable error of a single observed quantity: 1°, by the formula involving the squares of the errors ; 2°, by that involving the absolute errors? 1°, r = 0.6:6 ; 2°, r = 0.618. § v.] EXAMPLES. 6j 7. An angle in the primary triangulation of the U. S. Coast Survey was measured twenty-four times with the following results : ii6°43'44".45 49.20 51-05 51.75 51.05 49-25 50 -55 48.85 47-85 49.00 51-70 46.75 50 .95 47.40 50.60 52-35 49.05 49-25 48 .90 47-75 48.45 51-30 50.55 53-40 Find the probable error of a single measurement, and the final determination of the angle. i".35 : 1 16° 43' 49".64 ± o".28. 8. In example 7, taking the means of the six groups of four observations each, determine the probable error of the first of these means : 1°, considered as a measurement of four times the weight of those in example 7 ; 2°, directly as one of six obser- vations of equal weight ; 3°, as a determination from its four constituents. 1°, o".67 ; 2°, o".72 ; 3°, i".oo. 9. An interval of 600 units as determined by a micrometer was forty times measured to determine the error in the pitch of the screw, with the following results : 600.0 604.8 600.7 601.4 602.0 602.6 600.0 602.4 599-7 606.1 602.4 603.4 602.7 602.7 600.7 602.4 599-5 604.7 601.6 603.1 603.7 600.9 601.4 602.1 604.6 602.1 601.7 601.8 602.1 601.4 602.9 603.6 603.9 602.2 601.4 600.6 602.3 600.8 602.9 603.6 Find the probable value of the interval and its probable error. 602.22 ±0.157. VI. The Facility of Error in a Function of One or Morb Observed Quantities. The Linear Function of a Single Observed Quantity. 84. If the value of an observed quantity X be subject to an error x, the value of a given function of ^, say Z^=f (^X), will be subject to a corresponding error s. Assuming x to follow the usual law of facility, h being the measure of precision and r the probable error, we have now to determine the law of facility of ^, for any form of the function y. Let us first consider the linear function Z=mX+b, where m and b are constants. The case is obviously the same as that of the simple multiple mX, the relation between the corresponding errors being z = tnx. The probability that the error 2 falls between z and z + dz is the same as the probability that x falls between x and x + dx, namely, Expressing this in terms of z, it becomes -7- ^ m" — , or, putting - = H, —r- e dz. Thus the law of facility for Z is of the same form as that for X, SVI.] FUNCTIONS OF A SINGLE QUANTITY. 69 the measure of precision being found by dividing that of ^by m \ and, denoting the probable error of Z by R, we have (since probable errors are inversely as the measures of precision) R=. mr, and the same relation holds between either of the other measures of the risk of error. The curves of facility for X and Z are related in the same manner as those drawn in Fig. 4, page 30, and the process of passing from one to the other is that described in Art. 46; that is to say, the abscissas which represent the errors are multiplied by m, and then the ordinates are divided by m, so that the areas standing upon the corresponding bases dx and dz shall remain equal. Nov.- Linear Functions of a Single Observed Quantity. 85. A non-linear function of an observed quantity subject to the usual law of facility does not strictly follow a law of facility of the same form. If, however, as is usually the case, the error X is very small, any function of the observed quantity will very nearly follow a law of the usual form. Let a be the true value of the observed quantity, then X=-a-\- X, and Z=f{X)=f{a^x-). Expanding by Taylor's Theorem, and neglecting the higher powers oi X,* we may take Z=/(a) + V'(«), which is of the linear form. Hence we may regard Zas subject to the usual law of facility, its probable error being R = rf{a-), or, putting the observed value in place of a, R = rf'iX). *The ratio of the square of the error to the error itself is the value of the error considered as a number, and it is this numerical value which must be small. ^0 THE FACILITY OF ERROR IN A FUNCTION. [Art. 86 The Facility of Error in the Sum or Difference of Two Observed Quantities. 86. Let X and Y be two observed quantities subject to the usual law of facility of error, their measures of precision being h and k respectively. If Z=X^Y, the relation between the errors of Z, ^and Y is obviously z-=x +y. In order to find the facility ol z, that is, the probability that s shall fall between z and z + dz, let us first suppose that x has a definite fixed value. With this hypothesis, the probability in question is the same as the probability that j/ shall fall between y and j)/ + dy, where y = z — x, and dy = dz. This probability is A^-*Vrfj^, or ^e-'^^'-'^'dz. Multiplying by the elementary probability of the hypothesis made, which is we have ^e-^'^'-'^'-'^'dzdx (I) for the probability that the required event (namely, the occur- rence of the particular value of z') shall happen in this particular way, that is, in connexion with the particular value of x. To find the total probability of the event we therefore sum the above expression for all possible values oix, thus obtaining ^r,-(ft» + »»>»+2*'«>-fc'.V^rf:f. ... (2) §VI.] THE SUM OF TWO OBSERVED QUANTITIES. 7I The exponent of e in this expression may be written whence, putting a = , , „ ■ and ^ = ^'~TFTk'^'WT^* • . . • (3) the expression (2) becomes Since a is independent olx, the value of the integral contained in this expression is, by Art. 39, , (^ t' , ^n ' l^^"^^ ^^ proba- bility that 2 shall fall between z and z -\- dz\^ ^^ e-^^'^dz, or JLe-^'^'dz. 87. The result just obtained shows that the sum of two quantities subject to the usual law of facility of error is subject to a law of the same form, its measure of precision being deter- mined by equation (3). Writing equation (3) in the form — - — + — it is evident that, if ^i, r, and R be the probable errors of ^, Y and -X" + K we shall have Ii' = r\ + r\, the same relation holding in the case of either of the other measures of risk of error. For the difference Z=X-Y, we have the same result; for the errors of— Khave obviously the same law of facility as those of Y. 72 THE FACILITY OF ERROR IN A FUNCTION. [Art. 88 88. As an illustration, suppose the latitude tp and the polar distance / of a circumpolar star to be determined from the altitudes of the star at its upper and lower culminations. Since ^1 = ?+^ and hi = j{m\r\-\-m\r\->r...-Vmlr^;), ... (2) where r^, r^, . , . r, are the probable errors of the several observed quantities. In particular, if the n quantities have the same probable error r, the probable error of their sum is rV«- The probable error of their arithmetical mean, which is — of this sum, is therefore ' n 4—. This result agrees with that found in Art. 64, where, t^n " * The fact that the law of facility thus reproduces itself has often been regarded as confirmatory of its truth. This property of the law «~ results from its being a limiting form for the facility of error in the linear function Z, when n is large, whatever be the forms of the facility functions for Xi, Xi, . . . X,. Compare the foot-note on page 49, and see the memoir there referred to. It follows that " we shall obtain the same law g—^'"' (for a single observed quantity) if we regard each actual eiror as formed by the linear combination of a large number of errors due to different independent sources." § VI.] NON-LJM£AR FUNCTIONS. 73 however, the n quantities were all observed values of the same quantity, and the arithmetical mean was under consideration by virtue of its being the most probable value in accordance with the law of facility. 90. It is to be noticed that in formula (2) it is essential that the probable errors ^i , rj, . . . r„ should be the results of inde- pendent determinations. For example, in the illustration given in Art. 88, we have ki^^ f + p, whence we should expect to find (prob. err. of hif = (prob. err. of (pf + (prob. err. of ^)'' ; but it will be found that this is not true when the probable errors of

^l. 13. What is the probable error of the area of the rectangle whose sides measured as in the preceding example are ^'land s^ ? 14. A line of levels is run in the following manner : the back and fore sights are taken at distances of about 200 feet, so that there are thirteen stations per mile, and at each sight the rod is read three times. If the probable error of a single reading is 0.01 of a foot, what is the probable error of the difference of level of two points which are ten miles apart ? .093. 15. Show that the probable error of the weighted mean of observed quantities has its least possible value when the weights are inversely proportional to the squares of the probable errors of the quantities, and that this value is the same as that given in Art. 68 for the case of observed value of tiie same quantity. VII. The Combination of Independent Determinations of THE Same Quantity. The Distinction between Precision and Accuracy. ijl2. We have seen in Arts. 63 and 67 that the final determi- nation of the observed quantity derived from a set of observations follows the exponential law of the facility of accidental errors. The discrepancies of the observations have given us the means of determining a measure of the risk of error in the single observations, and we have found that the like measure for the final determination varies inversely as the square root of its weight compared with that of the single observation. Since this weight increases directly with the number of constituent observations, it is thus possible to diminish the risk of error indefinitely ; in other words, to increase without limit the pre- cision of our final result. 93. It is important to notice, however, that this is by no means the same thing as to say that it is possible by multiplying the number of observations to increase without limit the accuracy of the result. Ihe precision of a determination has to do only with the accidental errors ; so that the diminution of the prob- able error, while it indicates the reduction of the risk of such errors, gives no indication of the systematic* errors (see Art. 3) *The term systematic is sometimes applied to errors produced by a cause operating in a systematic manner upon the several observations, thus producing discrepancies obviously not following the law of accidental errors. Usually a discussion of these errors leads to the discovery of their cause, and ultimately to the corrections by means of which they may be removed. All the remaining errors, whose causes are unknown, are generally spoken of as accidental errors ; but in this book the term acci- dental is applied only to those errors which are variable in the system of observations under consideration, as distinguished from those which have a common value for the entire system. §VII.] PRECISION AND ACCURACY. JJ which are produced by unknown causes affecting all the obser- vations of the system to exactly the same extent. The value to which we approach indefinitely as the precision of the determination is increased has hitherto been spoken of as the " true value," but it is more properly the precise value corresponding to the instrument or method of observation employed. Since the systematic error is common to the whole system of observations, it is evident that it will enter into the final result unchanged, no matter what may be the number of observations ; whereas the object of increasing this number is to allow the accidental errors to destroy one another. Thus the systematic error is the difference between the precise value, from which accidental errors are supposed to be entirely elimi- nated, and the accurate or true value of the quantity sought. 94. Hence, when in Art. 64 the arithmetical mean of n obser- vations was compared to an observation made with a more precise instrument, it is important to notice that this new instrument must be imagined to lead to the same ultimate precise value, that is, it must have the same systematic error as the actual instrument, whereas in practice a new instrument might have a very different systematic error. Again, in the illustration employed in Art. 64, where the final determination of an angle is given as 36° 42'.3 ± i'.22, the " true value," which is just as likely as not to lie between the limits thus assigned, is only the true value so far as the instru- ment and method employed can give it ; that is, the precise value to which the determination would approach if its weight were increased indefinitely. 95. A failure to appreciate the distinction drawn in the preceding articles may lead to a false estimate of the value of the method of Least Squares. M. Faye in his "Cours d'Astronomie " gives the following example of the objections which have been urged against the method: "From the discussion of the transits of Venus observed in 1761 and 1769, M. Encke deduced for the parallax of the sun the value 8".57ii6±o".o37o. 78 INDEPENDENT DETERMINATIONS. lArt. 95 In accordance with this small probable error it would be a wager of one to one that the true parallax is comprised between 8". 53 and 8". 61. Now we know to-day that the true parallax 8". 813 falls far outside of these limits. The error, o".24i84, is equal to 6.536 times the probable error o".037. We find for the probability of such an error o.ooooi. Hence, adhering to the probable error assigned by M. Encke to his result, one could wager a hundred thousand to one that it is not in error by 0.24184, and nevertheless such is the correction which we are obliged to make it undergo." Of course, as M. Faye remarks, astronomers can now point out many of the errors for which proper corrections were not made ; but the important thing to notice is that, even in Encke's time, the wagers cited above were not authorized by the theory. The value of the parallax assigned by Encke was the most probable with the evidence then known, and it was an even wager that the complete elimination of errors of the kind that produced the discrepancies or contradictions among the observations could not carry the result beyond the limit assigned ; but the existence of other unknown causes of error and the probable amount of inaccuracy resulting from them is quite a different question. Relative Accidental and Systematic Errors. 96. Let us now suppose that two determinations of a quantity have been made with the same instrument and by the same method, so that they have the same systematic error, if any ; in other words, they correspond to the same precise value. The difference between the, two results is the algebraic difference between the accidental errors remaining in the two determi- nations; this may be called their relative accidental error. Regarding the two determinations as independent measure- ments of two quantities, if r, and ra are their probable errors, that of their difference is V iA + ^0 ; and, since this difference should be zero, the relative error is an error in a system for which the probable error is §VII.] ACCIDENTAL AND SYSTEMATIC ERRORS. 79 For example, if the determination of an angle mentioned in Art. 94 is the mean often observations, it is an even wager that the mean of ten more observations of the same kind shall differ from 36° 42'.3 by an amount not exceeding i'.22 X V 2 or i'.73 Again, r being the probable error of a single observation, the probable error of the mean of n observations is — t— , but the discrepancy from this mean of a new single observation is as likely as not to exceed \/(^ +''')• ^^^*'^' V « + I 97. If, on the other hand, the two determinations have been made with different instruments or by a different method, they may involve different systematic errors ; so that, if each determination were made perfectly precise, they would still differ by an amount equal to the algebraic difference of their systematic errors. Let this difference, which may be called the relative systematic error, be denoted by 8. Then, d denoting the actual difference of the two determinations, while 8 is the difference between the corresponding precise values, we may put ^ = 3 + X, in which x is the relative accidental error. The Relative Weights of Independent Determinations. 98. In combining values to obtain a final mean value, we have hitherto supposed their relative weights to be known or assumed beforehand, as in Arts. 75 and 77. Since the squares of the probable errors are inversely proportional to the weights, (Arts. 66 and 68,) the ratios of the probable errors both of the con- stituents and of the mean are thus known in advance, and it *This does not apply to the residuals of the original « observations, because in taking a residual the mean is not independent of the single observation with which it is compared. 80 INDEPENDENT DE TERMINA TIONS, [An. 98 only remains to determine a single absolute value of a probable error to fix them all. In this process it is assumed that the values have all the same systematic error. But, when the determinations are independently made, their relative weights are not known, and their probable errors have to be found independently. If now it can be assumed that the systematic errors are the same, so that there is no relative systematic error, the weights may be taken in the inverse ratio of the squares of the probable errors. 99. To determine whether the above assumption can fairly be made in the case of two independent determinations whose probable errors are r^ and r, , it is necessary to compare the difference d with the relative probable error i^(rl + rf), Art. 96. If d is small enough to be regarded as a relative accidental error, it is safe to make the assumption and combine the deter- minations in the manner mentioned above. As an example, let us suppose that a certain angle has been determined by a theodolite as 24° 13' 36"±3"-i, and that a second determination made with a surveyor s transit 24° I3'24"±i3".8. In this case r, = 3.1, r, = 13.8 and d= 12. It is obvious that a relative accidental error as great as d may reasonably be expected. (In fact the relative probable error is 14. i ; and, by Table II, the chance that the accidental error should be at least as great as 12 is about .57.) We "may therefore assume that there is no relative systematic error, and combine the determi- nations with weights having the inverse ratio of the squares of the probable errors. This ratio will be found, in the present case, to be about 20 : i, and the corresponding weighted mean found by adding ^ of the difference to the first value, is 24° 13' 35"43- 100. It appears doubtful at first that the valae given by the §V1I,] CONCORDANT DETERMINATIONS. 8l theodolite can be improved by combining with it the value given by the inferior instrument. The propriety of the above process becomes more apparent, however, if we imagine the first determination to be the mean of twenty observations made with the theodolite ; a single one of these observations will then have the same weight and the same probable error as the second determination. Now the discrepancy of this new determination from the mean is such as we may expect to find in a new single observation with the theodolite. We are therefore justified in treating it as such an observation, and taking the mean of the twenty-one supposed observations for our final result. lOI* The probable error of the result found in Art. 99 of course corresponds with its weight; thus, denoting it by R, we have ^=i^f\, whence R = 3" .03, and the final result is 24° 13' 35".43 ± 3"-03. In general, ri and ra being the given probable errors, that oi the mean is given by rl + r\' Determinations which, considering their probable errors, are in sufficient agreement to be treated as in the foregoing articles may be called concordant determinations. They correspond to the same precise value of the observed quantity, and the result of their combination is to be regarded as a better determination of the same precise value. The Combination of Discordant Determinations. 102. As a second illustration of determinations independently made, let us suppose that a determination of the zenith distance of a star made at one culmination is 14° 53' i2".i ±o".3, and that at another culmination we find for the same quantity 14° 53' i4"-3 ± o".5. In this case we have d = 2.2. This is about 3.8 times the rela tive probable error whose value is o".58. 82 IMDEPENDENT DETERMINATIONS. [Art. 102 From Table II we find that the probability that the relative accidental error bhould be as great as d is only about i in 100. We are therefore justified in assuming that the difference rf is mainly due to errors peculiar to the culminations. In other words, we assume that, could we have obtained the precise values corresponding to the two culminations, (by indefinitely increasing the number of observations at each,) they would still be found to differ by about 2". 2. Supposing now that there is no reason for preferring one of these precise values to the other, we ought to take their simple arithmetical mean for the final result ; and, since the two given values are comparatively close to the precise values in question, we may take their arithmetical mean, which is 14° 53' i3"-2, for the final determination. 103. Determinations like those considered above, whose difference is so great as to indicate an actual difference between the precise values to which they tend, may be called discordant determinations. The discordance of the two determinations discloses the existence of systematic errors which were not indicated by the discrepancies of the observations upon which the given probable errors were based. In combining the deter- minations, these systematic errors ar^ treated as accidental errors incident to the two determinations considered as two observed values of the required quantity. In fact, it is generally the object in making new and independent determinations to eliminate as far as possible a new class of errors by bringing them into the category of accidental errors which tend to neutralize each other in the final result. The probable error of the result cannot now be derived from the given probable errors, but must be inferred from the determinations themselves considered as observed values, because we now take cognizance of errors which are not indicated by the given probable errors. 104. When there are but two observed values, formula (4), Art. 72, becomes § VII.] DISCORDANT DE TERMINA TIONS. 83 in which p^ , p^ are the weights assigned to the two values. Denoting the difference by d, the residuals have opposite signs, and their absolute values are Pi +Pi pi + pa Substituting these values, we have for the probable error of the mean P1+P2 pi + pi ' When/, = /a. this becomes ^0= -^ = 0.3272 d. (2) In the example given in Art. 102, the value of i?o thus obtained is o".742, which, owing to the discordance of the two given determinations, considerably exceeds each of the given probable errors. Of course no great confidence can be placed in the results given by the formulae above on account of the small value ofn.* 105. Since the error of each determination is the sum of its accidental and systematic error, if .Si and s^ denote the probable *The argument by which it is shown that the value of A deduced in Art. 69 is the most probable value involves the assumption that before the observations were made all values of A are to be regarded as equally probable ; just as that by which it is shown that the arithmetical mean is the most probable value of the observed quantity a involves the assump- tion that before the observations all values of a were equally probable. In the case of a, the assumption is admissible with respect to all values of a which can possibly come in question. But, in the case of A, this is not true ; because (supposing « = 2 as above) when d = o the value of A is infinite, and when d is small the corresponding values of A are very large, so that it is impossible to admit that all values of A which can arise are a priori equally probable. In the present application of the formula, however, these inadmissible values do not arise, because we do not use it when d is small, employing instead the method of Art. 99 and the formula of Art. loi. 84 INDEPENDENT DETERMINATIONS. [Art. 105 systematic errors, the probable errors of the two determinations when both classes of errors are considered are The proper ratio of weights with which the determinations should be combined is F\ : R\. The method of procedure followed in Art. 99 assumes that Sx and s^ vanish. On the other hand, in the process employed in Art. 102 we are guided, in an assumption of the ratio R\ : FX, by a consideration of the value which the ratio s\ : s{ ought to have. For example, in the illustration, Art. 102, the ratio R\ : J?l is taken to be one of equality, whereas the hypothesis we desired to make was that Si=: Si, so that we ought to have Rl-Rl = ^,-rl. On the hypothesis Ri = R^ the value of each of these prob- able errors is, in accordance with equation (2), Art. 104, pd. In the example this is i".05. If we take (1.05)' as the average value of Rl and Rl , and introduce the condition written above, we shall find as a second approximation to the value of the ratio Rl: Rl about 15:13. The final value corresponding to this ratio of weights is 14° 53' 13".!, and its probable error as deter- mined by equation (i). Art. 104, is slightly less than that before found, namely, Ro — o".740. Indicated and Concealed Portions of the Risk of Error, 106. It will be convenient in the following articles to speak of the square of the probable error as the measure of the risk of error. The foregoing discussion shows that the total risk of error, R', of any determination consists of two parts, r" and i", of which the first only is indicated by discrepancies among the observations of which the given determination is the mean. It is only this first part that can be diminished by increasing the number of the constituent observations. The remaining part remains concealed, and cannot be diminished until some varia- § VII.] PORTIONS OF THK TOTAL RISK OF ERROR. 85 tion is made in the circumstances under which the observationsi are made, giving rise to new determinations. When the indi- cated portions of the risk of error in the several determinations are sufficiently diminished, discordance between them must always be expected, and this discordance brings into evidence a new portion, but still it may be only a portion, of the hitherto concealed part of the risk of error. 107. What we have called in Art. 103 discordant determina- tions are those in which the indication of this new portion of the risk of error, to which corresponds the relative systematic error, is unmistakable, because of its magnitude in comparison with what remains of the portion first indicated in the separate determinations, that is, r\ and r\. On the other hand, the con- cordant determinations of Art. loi are those in which the new portion is so small compared with ri and 7% as to remain con- cealed. Thus, to return to the illustration discussed in Art. 99, if vwenty times as many observations had been involved in the determination by the transit, its probable error would have been reduced to equality with that of the determination by the theodolite. But if this had been done we should almost cer- tainly have found the determinations discordant ; that is to say, the ratio in which the difference between the determinations is reduced would be much less than that in which the probable relative accidental error V {A + 'I) is diminished. The ratio in which the remaining difference between the determinations should be divided in making the final determination now depends upon our estimate of the comparative freedom of the instruments from systematic error,* but the important thing to be noted is that the probable error of the result would now be found as in Art. 104, and would be greater than those of the *It may be assumed that, when the instruments are carefully adjusted, the one which is less liable to accidental errors is correspondingly less liable to systematic errors. But this comparison is concerned with the probable errors of a single observation in each case, and not with those of the determinations themselves. 86 /NDEPENDENT DETERMINATIONS. [Art, 107 separate determinations. Thus the apparent risk of error would be increased by making a new determination, but this is only because a greater part of the total risk of error has been made apparent, and the result is so much the more trustworthy as a greater variety has been introduced into the methods employed. The Total Probable Error of a Determination. 108. In the illustrations given in Arts. 99 and 102 it was sup- posed that two determinations only were made, so that we had but a single discrepancy upon which to base our judgment of the probable amount ofthe relative systematic error. But, in general, what are regarded as determinations at one stage ofthe process are at the next stage treated as observations which may be repeated indefinitely before being combined into a new deter- mination. Let one of the determinations first made be the mean of n observations equally good, and let r be the probable error of a single observation. Then the probable accidental error of the mean is r^ = -t— • Now, if R is the probable error of the final value as obtained directly from the discrepancies of the several determinations, (their number being supposed great enough to allow us to obtain a trustworthy value,) we shall find that R exceeds r^, and putting ^' = J + ^ (0 r\ is the new portion of the risk of error brought out by the comparison of the determinations. r^ 109. The form of this equation shows that when — is already small compared with r\, the advantage gained by increasing thfe value of n soon becomes inappreciable. For example, the reticule of a meridian circle is provided with a number of threads, in order that several observations of time may be taken at a single transit. If seven equidistant threads are used, the mean of the times is equivalent to a determination § VII.] THE TOTAL PROBABLE ERROR. S} based upon seven observations of the time of transit. Chauvenel found that, for moderately skilful observers, the probable acci- dental error of the transit over a single thread of an equatorial star isr — o=.o8, whence for the mean of the seven threads we have ro = o'.03. The probable error of a single determination of the right ascension of an equatorial star was found to be J? = o'.o6, so that, from Ji^ = rl+ rf we have rj = o'.052. The conclusion is reached that " an increase of the number of threads would be attended by no important advantage," and it is stated that Bessel thought five threads sufficient.* 110. Suppose the value of Ji' in equation (i), Art. io8, to have been derived from the discrepancies of n' determinations of equal weight. A systematic error may exist for these n' determinations, and j, being its probable value, we shall have that is to say, the concealed portion of the risk of error in one of the original determinations has been decomposed into two parts, one of which has been disclosed at the second stage of the process, while the other remains concealed. The total risk of error in a single one of the n' determina- tions is R^ + sf, and that of the mean of the determinations is w In like manner, if at a further stage of the process we have the means of finding the value of the probable error R^ of this new determination by direct comparison with other coordinate deter- minations, a portion of the value of jj will be disclosed, and we shall have n nn n' where again it must be supposed that a portion s\ of the risk of error still remains concealed. * Chauvenet's " Spherical and Practical Astronomy," vol. ii, p. 194 et seq. 88 INDEPENDENT DETERMINATIONS. [Art. iii 111. The comparative amounts of the risk of error which are disclosed at the various stages of the process depend upon the amount of variety introduced into the method of observing. Thus, to resume the illustration given in Art. 109, if the star be observed at n' culminations, r^ will correspond to errors peculiar to a thread, and r\ will correspond to errors peculiar to a culmination. Again, if different stars whose right ascensions are known are observed, in order to obtain the local sidereal time used in a determination of the longitude, r\ will correspond to errors peculiar to a star, together with instrumental errors peculiar to the meridian altitude. The Ultimate Limit of Accuracy. 112. The considerations adduced in the preceding articles seem to point to the conclusion that there must always be a residuum of the risk of error that has not yet been reached, and thus to explain the apparent existence " of an ultimate limit of accuracy beyond which no mass of accumulated observations can ever penetrate."* But it does not appear to be necessary to suppose, as done by Professor Peirce, that there is an absolute fixed limit of accuracy, due to " a failure of the law of error embodied in the method of Least Squares, when it is extended to minute errors." He says: "In approaching the ultimate limit of accuracy, the probable error ceases to diminish propor- tionally to the increase of the number of observations, so that the accuracy of the mean of several determinations does not surpass that of the single determinations as much as it should do, in conformity with the law of least squares ; thus it appears that the probable error of the mean of the determinations of the longitude of the Harvard Observatory, deduced from the moon- culminating observations of 1845, 1846, and 1847, is i°.28 instead of I'.oo, to which it should have been reduced conformably to the accuracy of the separate determinations of those years." * Prof. Benjamin Peirce, U. S. Coast Survey Report for 1854, Appendix, p. 109. § VII.] EXAMPLES. 80 To account for the fact cited on the principles laid down above, it is only necessary to suppose that there are causes of error which have varied from year to year ; and, recognizing this fact, we ought to obtain our final determination by comparing the determinations of a number of years, and not by combining into one result the whole mass of observations. Examples. 1. In a system of observations equally good, r being the probable error of «* single observation, if two observations are selected at random, what quantity is their difference as likely as not to exceed ? rsj 2. 2. In example i, what is the probability that the difference shall be less than r? O.367. 3. When two determinations are made by the same method, show that the odds are in favor of a difference less than the sum of the two probable errors, and against a difference less than the greater of the two, and find the extreme values of these odds. 66 : 34 and 63 : 37. 4. A and B observe the same angle repeatedly with the same instrument, with the following results : A B 47° 23' 40" 47° 23' 30^' 47 23 45 47 23 40 47 23 30 47 23 50 47 23 35 47 24 00 47 23 40 47 23 20 Show that there is no evidence of relative systematic (personal) error. Find the relative weights of an observation by A and by B, and the final determination of the angle. 100: 13; 47° 23' 38".23± i".62. 5. Show that the probable error in example 4 as computed from the ten observations taken with their proper weights is i".53, but that derived from the formula of Art. 104 is o".43, which is much too small. (See foot-note, p. 83.) go INDEPENDENT DETERMINATIONS. [Art. Iia 6. Two determinations of the length of a line in feet give respectively 683.4 ±0.3 and 684.9 i o-3i there being no reason for preferring one of the corresponding precise values to the other ; show that the probable error of each of the precise values (that is, the systematic error of each determination) is 0.65 ; and that the best final determination is 684.15 ± 0.51. 7. Show generally that when the weights are inversely pro- portional to the squares of the probable errors, the formula of Art. 104 gives a value of R greater or less than that given by the formula of Art. loi, according as d is greater or less than the relative mean error. VIII. Indirect Observations. Observation Equations. 113. We have considered the case in which a quantity whose value is to be determined is directly observed, or is expressed as a function of quantities directly observed. We come now to that in which the quantity sought is one of a number of unknown quantities of which those directly observed are functions. The equation expressing that a known function of several unknown quantities has a certain observed value is called an observation equation. Let /» denote the number of unknown quantities concerned. Then, in order to determine them, we must have at least [i. independent equations. Thus, if two of the equations express observed values of the same function of the unknown quantities, they will either be ident- ical, so that we have in effect only /*— i equations, or else they will be inconsistent, so that the values of the unknown quan- tities will be impossible. So also it must not be possible to derive any one of the /i equations, or one differing from it only in the absolute term, from two or more of the other equations. 114. If we have no more than the necessary /i equations, we shall have no indication of the precision with which the obser- vations have been made, nor, consequently, any measure of the precision with which the unknown quantities have been deter- mined. With respect to them, we are in the same condition as when a single observed value is given in the case of direct observations. Now let other observation equations be given, that is tb say, let the values of other functions* of the unknown quantities be observed. The results of substituting the values of the unknown *Itisnot necessary that these additional equations should be inde- pendent of the original y. equations, for an equation expressing a new observed value of a function already observed will be useful in deter- mining the precision of the observations. 92 INDIRECT OBSERVATIONS. [Art. 114 quantities will, owing to the errors of observation, be found to differ from the observed values, and the discrepancies will give an indication of the precision of the observations, just as the dis- crepancies between observed values of the same quantity do, in the case of direct observations. 115. As an example, let us take the following four observa- tion equations* involving x,y and z: X— y+ 22= 3, 3X + 2y-52= 5, 4;tr + J/ + 4z= 21, — x + 3j/ + 2Z=H. jf we solve the first three equations we shall find x=2^, y = z\, z=\\. Substituting these values in the fourth equation, the value of the first member is i2f, whereas the observed value is 14; the discrepancy is \\. If the values above were the true values, the errors of observation committed must have been o, o, o, i-f ; but, since each of the observed quantities is liable to error, this is not a likely system of errors to have been committed. In fact, any system of values we may assign to x,y and z implies a system of errors in the observed quantities, and the most probable system of values is that to which corresponds the most probable system of errors. 116. In general, let there be m, observation equations, involving /t unknown quantities, trO'ix; then we have first to consider the mode of deriving from them the most probable values of the unknown quantities. The system of errors in the observed quantities which this system of values implies will then enable us to measure the precision of the observations. Finally^ regarding the /i unknown quantities as functions of the m observed quantities, we shall obtain for each unknown quan- tity a measure of the precision with which it has been deter- mined. • Gauss, " Theoria Motus Corporum Coelestium," Art. 184. § VIII.] REDUCTION TO LINEAR FORM. 93 The Reduction of Observation Equations to the Linear Form. 1 17* The method of obtaining the values of the unknown quantities, to which we proceed, requires that the observation equations should be linear. When this is not the case, it is necessary to employ approximately equivalent linear equations, which are obtained in the following manner. Let X, V, Z, . . . be the unknown quantities, and Mi, M2, . . . Mm the observed quantities; the observation equations are thee of the form MX,V,Z,...:) = Mi, /,{X, Y,Z,...) = M„ MX,Y,Z,...) = M^, where /i ,/i, . . ./m are known functions. Let Xa, Vo, Zo, . .. be approximate values of X, V, Z, . . ., which, if not otherwise known, may be found by solving a of the equations ; and put X=Xo + x, Y=Y„+y, .... so that j:,jv, 2', . • . are small corrections to be applied to the approximate values. Then the first observation equation may be written /I (^o + X, Yo+y,Zo + z,...) = Mi, or, expanding by Taylor's theorem, where the coefficients 01 x,y,z,... are the values which the partial derivatives oi fii^X, Y, Z,. ..) assume when X=Xa, Y= Yo, Z= Zo, • ", and the powers and products of the small quantities x,y,z,.,.are neglected as in Art. 91. Denoting the coefficients of x, y, z, . . . hy ai, 61, Ci, . ., putting «i for M -/i (.Xo, Vo, Zo,.. ), and treating the othet observation equations in the same way, we may write 94 INDIRECT OBSERVATIONS. [Art 117 a^x + b^y + c^s + , .. . = m^ (I) am^+ 6my+ CmZ+ ...=»»,■ for the observation equations in their linear form- 118. Even when the original observation equations are in the linear form, it is generally best to transform them as above, so that the values of the unknown quantities shall be small. Another transformation sometimes made consists in replacing one of the unknown quantities by a fixed multiple of it. For example, if the values of the coefficients oiy are inconveniently large they may be reduced in value by substituting ky for_y and giving to >^ a suitably small value. 119. In the observation equations (i), the second members may be regarded as the observed quantities, since they have the same errors. If the true values o{x,y, z , = , . are substituted in these equations they will not be satisfied, because each n differs from its proper value by the error of observation v; we may therefore write the equations OiX + b^y + t iS' + . , . — Ml = z^i (h?c + b^y + ^Tj^' .+ ...— ^2 = f 2 K^) in which, ifx,y, z, . , . are the true values, v, .Vi,.... Vm are the true errors of observation, and if any set of values be given to x,y, z, . ^ ., the second members are the corresponding restd- uals. These corrected observation equations may be called, the residual equations. Observation Equations of Equal Precision. 120. Let us first suppose that the m observations are equally good, and let h be their common measure of precision. Then, since v is the error, not only of the absolute term n-^ in the first of equations (2), but of the first observed quantity Mi,, the prob- §Vm.] EQUATIONS OF EQUAL PRECISION. 95 ability before the observations are made that the first observed value shall be M^ is where, as in Art. 35, Av is the least count of the instrument. Hence we have, for the probability before the observations are made that the m actual observed values shall occur, exactly as in Art. 41. The values oiv\,v\, . . .vl^ being given by equations (2), this value of /* is a function of the several unknown quantities ; hence it follows, as in Art. 41, that for any one of them that value is, after the observations have been made, most probable which assigns to P its maximum value ; in other words, that value which makes ^1 + ^a + • • • + ^m = ^ minimum. Thus the principle of Least Squares applies to indirect as well as to direct observations. 121. To determine the most probable value of ^, we have, by differentiation with respect to x, dvi , «foj , , ^, dvm ^ or, since, from equations (2), Art. i IQ; dvi _ dvt _ <^^n _ - ^-''" ^-*" ••• dx -^' a^Vi + a^Vi + . . + OrnVm = O. . . . . (l) This is called the normal equation/or x. Whatever values are assigned to y, 2, . . .,'\\. gives the rule for determining the value of X which is most probable on the hypothesis that the values assigned to the other unknown quantities are correct. Since v^,v.i, . . .Vm represent the first members of the obser- 96 INDIRECT OBSERVATIONS. [Art. 121 vation equations (i), Art. 117, when so written that the second member is zero, we see that the normal equation for x may be formed by multiplying each observation equation by the coeffi- cient of X in it, and adding the results. 122. The rule just given for forming the normal equation shows it to be a linear combination of the observation equations, and the reason why the multipliers should be as stated may be further explained as follows : If we suppose fixed values given \.o y,z,. . . , each observation equation may be written in the form ax^= N, where N only differs from the observed value ^ by a fixed quantity, and therefore has the same probable error. Now, writing the observation equations in the form x = «1 = jr., X — a. = Xi, x = N^ — -^nij we may regard them as expressing direct observations oix. 11 r is the common probable error of A'l, ^, . . . .A^, that of -A^ r . r — or Xi is — ; that of Xj is — , and so on. Thus the equations are not of equal precision for determining x, and their weights when written as above (being inversely as the squares of the probable errors) are as a? : «2 : . . . : «Ji. It follows that the equation for finding x is, as in the case of the weighted arith- metical mean (see Art. 66), the result of adding the above equations multiplied respectively by af, a|, . . . a^;* that is to say, it is the result of adding the original observation equations of the form ax — N^ o multiplied respectively by «! , «2 , . . . ««,. •It must not be assumed that the weight of the value of x, determined from the several normal equations, is 2a°, that of an observation being unity. This is its weight only upon the supposition that the absolute values of the other quantities are known. §vin.] THE NORMAL EQUATIONS. 97 123. The Normal Equations. In like manner, for each of the other unknown quantities we can form a normal equation, and we thus have a system o< equations whose number is equal to that of the unknown quan- tities. The solution of this system of normal equations gives the most probable values of the unknown quantities. Let us take for example the four observation equations given in Art. 115. Forming the normal equations by the rule given above, we have 2'jx + 6jf = 88, 6x + ly + e = 70, jy + 5^2= 107. The solution of this system of equations gives for the most probable values, 49154 _ 19899 2617 _ 737 6633 124. Writing the observation equations in their general form, UiX + b^y + ... + IJ—ni a^x + 6iy +...+ht—ni X — y = = 2.47. = 3-55. 1.92. (0 amX + 6my + . . . + lmi= nm . we obtain for the normal equations in their general form, Id? .x+ lab.y + . . . + Ial.t= Ian Iab.x+ lb'' .y+ ... + Ibl. t=Ibn I . , (2) lal.x^ Ibl.y-^ ... -^ 11" .t=^Iln It will be noticed that the coefficient of the rth unknown quantity in the Jth equation is the same as that of the jth unknown quantity in the rth equation; in other words, the 98 INDIRECT OBSERVATJONS. [Art. 124 determinant of the coefficients of the unknown quantities in equations (2) is a symmetrical one. Observation Equations of Unequal Precision. 125. When the observations are not equally good, if ^1 ) ^21 • • • ^m are the measures of precision of the observed values M^, M^, . . . M^, the expression to be made a minimum is h\v\ + h\v\ + . . . + KA, as in Art. 65. Thus, as in the case of direct observations, if the error of each observation be multiplied by its measure of pre- cision so as to reduce the errors to the same relative value, it is necessary that the sum of the squares of the reduced errors should be a minimum. Since z/j = o, »a = o, . . . z^m = o are equivalent to the observa- tion equations, it follows that, if we multiply each observation equation by its measure of precision (so that it takes the form hv = o), we may regard the results as equations of equal pre- cision. 126. The result may be otherwise expressed by using num- bers /i,/^ , . . .pm proportional, as in Art. 66, to the squares of the measures of precision ; the quantity to be made a minimum then is p,v\ +piu\ + . . . +/m<, and the normal equation for x is A^i^i + pia^V^ -I- . . . + pmamVm = O. The numbers pi, pi, . . . pm are called the weights of the observation equations; thus, in the case of weighted equations, the normal equation for x may be formed by multiplying each observation equation by the coefficient of x in it, and also by its weight, and adding the results. §Vni.] EQUA7V0NS OF UNEQUAL PRECISION. 99 The general form of the normal equations is now Ipa" . X + Ipab .J/ + . . . + Ipal.t = Ipan Ipab . X + ipb'' .y + ... -\- Ipbl . t - ipbn Ipal .x+ Ipbl .;)/+...+ Ip? . t - Ipin (3) The result is evidently the same as if each observation equation had been first multiplied by the square root of its weight, by which means it would be reduced to the weight unity, and the system would take the form (2), Art. 124. Formation of the Normal Equations. 127. When the normal equations are calculated by means of their general form, a table of squares is useful not only in cal- culating the coefficients Ipa^, Ipb'' ,.. . IpP, but also in the case of those of the form Ipab, Ipac, . . . Epan, . . . For, Since ab= i[(a + by - a' - b'"'], we have Spab = illpCa + by - Ipa' - Ipb';\, by means of which ^pab is expressed in terms of squares,* Or for the same purpose we may use ipab = hilpa' + Ipb" - lp{a - bJI. In performing the work it is convenient to arrange the coeffi- cients in a tabular form in the order in which they 'occur in the observation equations, and, adding a column containing the sums of the coefficients in each equation, thus, ^1 = dtj -t- 3i -F ... + /, + Ml, etc., • If 'Zpab alone were to be found, the formula Lpab = \ [2/(a -1- bf—1p{a—b)^l , derived from that of quarter-squares, would be preferable ; but, since 2/0', 2pb^ have also to be calculated, the use of the formula above, which was suggested by Bessel, involves less additional labor. lOO INDIRECT OBSER VA TIONS. [Art, I2J to form the quantities "Spas, "Spbs, ... . "Spns in addition to those which occur in the normal equations. We ought then to find 2pas = Ipa? + Ipab + . . . + Ipan, Ipbs = lpab+ Ipb" + . . . + Ipbn, Ipns = Ipan-^Ipbn + . . . + Ipn^, and. the fulfilment of these conditions is a verification of the accuracy of the work. In many cases, the use of logarithms is to be preferred, especially when the logarithms of the coefficients in the ob» servation equations are more readily obtained than the values themselves. The General Expressions for the Unknown Quantities. 128. In writing general expressions for the most probable values of the unknown quantities, and in deriving their prob- able errors, we shall, for simplicity in notation, suppose that the observation equations have been reduced to the weight unity as explained in Art. 126, so that they are represented by equations (i), and the normal equations by equations (2) of Art. 124. Let D be the symmetrical determinant of the coefficients of the unknown quantities in the normal equations, thus Z> = la^ lab Sab lb' 2al Ibl lal Ibl ... 11^ let Dx denote the result of replacing the first column by a column consisting of the second members, Ian, Ibn, . . „ Iln\ and let Dy, Di, . . . Dt be the like results for the remaining columns. Then are the general expressions for the unknown quantities. (1} § VIII.] EXPRESSIONS IN DE TERMINANT FORM. lOI 129. Let the value of x when expanded in terms of the second members of the normal equations be X = Qilan + Q^Ibn + . . . + Q^Sln. (2) Now, in the expansion of the determinant Z?j, in terms of the elements of its first column, the coefficients of Ian, l'6n, . . . I'ln are the first minors corresponding to I'a'', lad, . . . la/, in the determinant Z>. Denoting the first of these by Z>i , so that A = Id' Ibc Ibc Ic' Ibl Id Ibl Id II' it follows, on comparing the values of x in equations (i) and (2), that Q.= Pi D • In like manner, the values of Q2, Q^, . . . Q^ are the results of dividing the other first minors by D. The Weights of the Unknown Quantities. 130. Let the value oix, when fully expanded in terms of the second members »i, Kj, . . . «m of the observation equations, be X = OiWi + a,«j + + "m«w (3) Then, if rx denotes the probable error of x, and r that of a standard observation, that is, the common probable error of each of the observed values Wi, « Wm, we shall have, by Art. 89, rl^r'.Ia'. The precision with which x has been determined is usually expressed by means of its weight, that of a standard observation I02 INDIRECT OBSERVATIONS. [Art. 130 being taken as unity. The weights being inversely propor- tional to the squares of the probable errors, we have, therefore, for that of X, 131. Since the value- of ;i; is obtained from the normal equa- tions, we do not actually find the values of the a's ; we therefore proceed to express I,a? in terms of the quantities which occur in the normal equations. Equating the coefficients of «i , «2 , . . . «m in equations (2) and (3), we find «i = «ij6*i + hQ-, + . . . + kQ^ \ «2 = ciiQ^ + b^Q^ + . . . + l^Q^ O-m—amQi + bm,Qi+ . . . + ImQp. - . (I) Multiplying the first of these equations by cti, the second by fla , and so on, and adding the results, we have Sa'^Iaa.Q, + Iba.Q,+ ... + Ila..Q^. . (2) The value of Saa is found by multiplying the first of equa- tions (i) by «!, the second by «j, and so on, and adding. The result is Saa = la' .Q,+ Iab.Q,+ ... + Sal. Q^. . (3) Multiplying this equation by D, the second member becomes the expansion of the determinant D in terms of the elements of its first column. Hence Saa = I (4) In like manner we find Iba ^Iab.Q, + Ib\Q,+ ... + Ibl. Q^, . (5) and when this equation is multiplied by D, the second member is! the expansion of a determinant in which the first two columns §VIII.] WEIGHTS OF THE UNKNOWN QUANTITIES. I03 are identical. Thus Iba = o, and in the same way we can show that lea .... iVa vanish.* Substituting in equation (2), we have now ^«'=5.; (6) hence from Arts. 130 and 129 we have, for the general expres- sion for the weight of x, ^'--Q^D. (7) 132. It follows from equation (2), Art. 129, that if in solving the normal equations we retain the second members in alge- braic form, putting for them A, B,C, .. ., then ihe weight of x will be the reciprocal of the coefficient of A in the value of x.'\ In like manner, that oiy will be the reciprocal of the coefficient of 5 in the value oiy, and so on. For example, if the normal equations given in Art. 123 are written in the form 27^ + 6y = A, 6x + i^y + z = B, j + 542= C, the solution is 19899;!; = 809^ — 3245 + 6C, miy = -i2A + 545 - C, 66335' = 2A — gB + 123 C •Comparing equation (3) with equation (2), Art. 129, we see that Saa is the value which x would assume if in each normal equation the second member were equal to the coefficient of x. The system of equa- tions so formed would evidently be satisfied by ;i; = 1,7 = o, z =; o, , . . / = o ; hence Xaa = 1. In like manner, comparing equation (5) with the same equation, we see that 2*a is the value which x would assume if the second member of each normal equation were equal to the coefficient o£^. This value would be zero ; thus 2i5a^ o. t If the value of the weight of x alone is required, it may be found as the reciprocal of what the value of * becomes when ^ =: i, B^=.o, Cz=o, . . . , that is to say, when the second member of the first norma] equation is replaced by unity, and that of each of the others by zero. I04 INDIRECT OBSERVA'IIOA'S. [Art. 132 The weights oix,y and z are therefore A = 809 — 24.60, A = 737 54 = 13-65, A = 6633 123 = 53-93- 133. When the value oix is obtained by the method of sub- stitution, the process may be so arranged that its weight shall be found at the same time. Let the other unknown quantities be eliminated successively by means of the other normal equations, the value ol x being obtained from the first normal equation or normal equation for x. Then, if this equation has not been reduced by multiplication or division, the coefficient of A in the second member will still be unity, and the equation will be of the form Rx=T+ A, where T depends upon the quantities B, C, . . . Now it is shown in the preceding article that the weight oix is the recip- rocal of the coefficient of A in the value of x ; hence in the present form of the equation the weight is the coefficient of ;t;.* As an illustration, let us find the values of x and its weight in the example given above, the normal equation being 27^1;+ 6y = 88, 6^ + 15J' + ■^ = 70. y + 5^2= 107, The last equation gives I , 107 2= — —y H '-, 54-^ 54 •The effect of the substitution is always to diminish the coefficient o£ x; for, as mentioned in the foot-note to Art. 122, if the true values of y, z, , . . t were known, the weight of x would be Sa^, which is the original coefficient of x, and obviously the weight on this hypothesis would exceed fx 3 which is the weight when j/, z, . . , I are also subject to error. § Vin.] WEIGHTS :.F THE UNKNOWN QUANTITIES. !OS and if this is substituted in the second, we obtain ■y~ 809 -^ ^ 809 • Finally, by the substitution of this value of j/ in the first normal equation, we obtain, before any reduction is made, 19899 „ _ 49154 . 809 809 ' whence _ 19899 _„j ^-49154 as before found. The Determination of the Measure of Precision. 134.. The most probable value of^ in the case of observations of equal weight is that which gives the greatest possible value to P, Arte 120, that is, to the function in which the errors are denoted by Mi , Ma , . . » Wm, so that we may retain Wi , z/j , . . . z'm to denote the residuals which corres- pond to the values of the unknown quantities derived from the normal equations. By differentiation we derive, as in Art. 69, for the determination of h, J«'=^,. = ..... . (i) The value of I'm' cannot, of course, be obtained, but it is known to exceed 2V, which is its minimum value, and the best value we can adopt is found by adding to Iv" the mean value of the excess, J«° — Iv*. 135. Let the true values of the unknown quantities be X ■\- &x,y-\- Sy,.. .t-\- U, while x, y,...t denote the values derived from the normal equations. We have then the residual equations I06 INDIRECT OBSERVATIONS. [Art. IJ5 a^X + b^y + ... + /i/ — «i = !7i flaX + ^jj/ + . . . + 4/ — «2 = Z/a O-wP^ "T ^w,y ~r • • • T" ^m^ ^m — — ^w and, for the true errors, the expressions, a^l^x + tx') + iJiCjj' + ^j)/) + . . . + /i(i? f 3^) -n^ = u^ a^x + te) + 3j(jj/ + 5j)/) + + 4(/ + Si] -■n^ = u^ OmCx+Sx') + im(y+Sy) + . + 4(/+ 30 — 7Zm= M,„ (i3 ..(2) Multiplying equations (i) by ^i, !72, . . . z'm respectively, and adding, the coefficient oix in the result is ChVi + a^V^ + . . . + OmVm, which vanishes by the first normal equation (i), Art. 121. In like manner, the coefficient ofy vanishes by the second normal equation, and so on. Hence 2V = ~ Snv, (3) Treating equations (2) in the same way, we have Suv = — 2nv ; hence iv'' = i;uv. . . . (4) Again, multiplying equations (i) by Mi, u^, ^m, and adding, Suv = Sau.x + 2bu y + o , . + Slu.i- Inu,-, and treating equations {2) in the same way. Sti' = lau (X 4- Sx") + Ibu {y + 3j) + + Slu (f + df) ■ - Inu. Subtracting the preceding equation, we havj, by equation (4), Iti' - Iv' ■= lau , Sx + Ibu . 5ji/ + , „ , + IJu M, . (5) an expression for the correction whose mean value we arn seeking § VIILl THE MEASURE OF PRECISION. IO7 136. Expressions for Sx, Sy, . . . dt are readily obtained as follows. Treating equations (2) exactly as the residual equa- tions (i) are treated to form the normal equations, we find Id" . {x + Sx') + lab . ( J/ + 4r) + ... + 2'a/. {t + ^f) = 2an + 2au lah.ix + Sx') + 2'<5^ (j/ + 5y) + . . . + Ibl. (i + df) = Sbn + Ibu Sal. (x + Sx) + Ibl. (;)/ + 5jj/) 4- . . . + 11^ . {t + hf) = Iln + Ilu J Subtraction of the corresponding normal equation from each of these gives the system, Id" . Sx + lab .Sy + . . . + lal . 8t = lau ' lab . dx+ lb' .Sy+ ... + Ibl.St= Ibu Sal. dx + Sbl . 8y + . . . ■\- Sl\8i = Ilu a comparison of which with the normal equations shows that 8x, Sy, . . . 8t are the same functions of «i, u^, . . . Um that x,y, . . .t are of «i, Wj, . . • «m. Hence we have Sx = O-iUi + a^i + . . . + O-mUm > where Oi, Oa, . . . an have the same meaning as in Art. 130. 137. Consider now the first term, Sau.Sx, of the value of Su' — Sv'', equation (5), Art. 135. Multiplying the value of 5;t just found by SaU ^ fliMi + flSjMj + . . . + aniUm, the product consists of terms containing squares and products of the errors. We are concerned only with the mean values of these terms, in accordance with the law of facility, which is for each error -7— e~ '''"'. Since the mean value of each error is zero, it is obvious that the mean value of each product vanishes; I08 INDIRECT OBSERVATIONS. [Art. 137 SO that the mean value of S,au . te is the mean value of • Now by Art. 50 the mean value of each of the squares «f , k| , . . . «^ is — T3 ; hence the mean value of 2au . 8x is —^ , or, by equation (4), Art. 131, —p. In the same manner it can be shown that the mean value of each term in the second member of equation (5), Art. 135, is -p; hence that of lu' — Iv' is -^, and the best value we can adopt for Su' is Substituting this in equation (i), Art. 134, we have Iv' = — o— > whence h= J — ^i^. The Probable Errors of the Observations and Unknown Quantities. 138. The resulting values of the mean and probable error of a single observation are - JL- - I ^^ I \ and the probabFe errors of the unknown quantities are When the observation equations have not equal weights we § VIIL) PROBABLE ERRORS, 109 may replace lii', which represents the sum of the squares 01 the residuals in the reduced equations, by Ipv', in which the residuals are derived from the original observation equations. The formulae (i) and (2) will then give the mean and probable errors of an observation whose weight is unity. It will be noticed that when /i = i the formulee reduce to those given in Art. 72 for the case of one unknown quantity. 139. Instead of calculating the values oiv^,Vi,. . . v,n directly from the residual equations, and squaring and adding the results, we may employ the formula for Hv' deduced below. By equation (3), Art. 135, Iv' = - Inv. Now multiplying equations (i) of that article by «i , »fj , . . , respectively, and adding the results, we have Mm Snv Therefore Iv' = In' lan.x + Idn.y + . . . + Iln.i— In'. Ian . X • Ibn.y — ... — Iln . i. . (1) The quantity In^ which occurs in this formula may be calcu- lated at the same time with the coefficients in the normal equa- tions. It enters with them into the check equations of Art. 127. We may also express Iv' exclusively in terms of these quan- tities, for if we write Dn = and consider the develooment of Dn in terms of the elements of its last row, we see that Id' lab . . lai Ian lad • 16' . ■ .. I6i • I6n • • I'ai • Idi . .. IP • Iln < Ian Ib7: . .. Iln In' i?„ = — Ian . Dx — Ibn . D« . — Iht.Di^ In*.D, I lO INDIkBCT OBSER V A TIJ./VS ^ Ari. ^35 where JD, Da. • Dt have the same meanings as m Art, xsg- hence Iv^ = ^. . . . (2) 140. For example, in the case of the four observatiois equa- tions of Art 115, X— y -V 2z= 3 3^ + 2j/ - 5^ = 5 4JC + J/ + 4^' = 21 — ■ar + 2U)' + 32=i4J for which the normal equations are solved in Art 123, the value of Zr^ is 671 ; and formula (i) gives l'z^ = 67i -88 X 45154 _7oX 2^ 19899 ' 19899 ^, 38121 1600 — 107 X ^-5 — = — 5 — , ' 19899 19899 in which 1600 is the value of Z>„. Substituting this value oi J'z'" in the formulae of Art 138, we find e = 0.2836, r = 0.1913 for the mean and probable errors of an observation ; and using the weights found in Art. 132, we find for those of the unknowi? quantities e-8 = 0.057, ^» = 0-077. ^' = 0-039* rx = 0.038, Ty = 0.052, n = 0.026. In this example we have found the exact value of I'ti'; if approximate computations are employed, the formula used has the disadvantage that a very small quantity is to be found by means of large positive and negative terms, which considerably increases the number of significant figures to which the work must be carried. Thus, because Un' = 671 in the above exam- ple, the work would have to be carried out with seven-place logarithms to obtain Hz/' to four decimal places. The direct §VIII.] MEASURE OF INDEPENDENCE. Ill computation of the z/^'s from the observation equations would present the same difficulty in a less degree. 141. Of course, no great confidence can be placed in the absolute values of the probable errors obtained from so small a number of observation equations as in the example given above. There being but one more observation than barely sufficient to determine values of the unknown quantities, the case is com- parable to that in which « = 2 when the observations are direct. By increasing the number of observations we not only obtain a more trustworthy determination of the probable error of a single observation, but, what is more important, we increase the weight, and hence the precision, of the unknown quantities. The measure in which this takes place depends greatly upon the character of the equations with respect to independence. As already mentioned in Art. 113, if there were only /t equa- tions it would be necessary that they should be independent ; in other words, the determinant of their coefficients must not vanish, otherwise the values of the unknown quantities will be indeterminate. When this state of things is approached the values are ill-determined, and this is indicated by the small value of the determinant in question. The same thing is true of the normal equations. Accordingly, the weights are small when the determinant D is small ; thus the value of Z) is in a general way a measure of the efficiency of the system of obser- vation equations in determining the unknown quantities. 142. If we write the coefficients in the m observation equa- tions in a rectangular form, thus, Ox 0>t . . • flfi • • . rX, Y=M^-^y, ... T=M^ + i, so that x,y,...t are the required corrections to the observed values. The equations of condition may be reduced as in Art 117 to the linear forms a^x + a^y + . . . + a^t = E^ ' 61X + 6iy + . . . + b^i — Ei fiX -V/^y + . . . +/^i= E^ . ■ (0 The values oix,y, . . ,t must satisfy these equations, which are, however, insufficient in number to determine them, and, by the principle of Least Squares, those values are most probable which, while satisfying equations (i), make piX' +piy^ + . . . +/jx^' = a minimum. In other words, the values must be such that p^xdx +p.^ydy + .. . + p^id/ = o, ... (2) for all possible simultaneous values of dx, dy, . . . di, that is, for all values which satisfy the equations, a^dx + a^dy + . . . + ay,dt = o ' b^dx + b^dy + . . . + b^dt = o (3) /idx + f^dy + . . . + f^dt = o J derived by differentiating equations (1). Hence, denoting the §VIII.] CONDITIONED OBSERVATIONS. "5 first member of equation (2) by P and those of equations (3) by ^1, kSi, . . . kSI, , the conditions are fulfilled by values which satisfy equations (i) and make ■ir ^~ lC\*3\ *~" ^gOg ~" • I • kvS, = o, (4) where ky, k^, . . . kv are any constants. This last equation will be satisfied if we can equate to zero the coefficient of each of the differentials, thus putting piX = kyOy, + AA + . . . + kvfl P^y = kyfli + kj>i f . . . + kvft ' I (5) and this it is possible to do because we have fi unknown quan- tities and V auxiliary quantities Jbi, k^, . . . kr which can be determined so as to satisfy the v + /^ equations comprised in the groups (i) and (5). 148. Substituting the values oix,y, . . . /from equations (5) in equations (i), we have a set of linear equations to determine the i's which are called the correlatives of the equations of con- dition. These equations may be written in the form P P hS'^ +k,I^4 + •'■+ *.^-$ = ^ (6) P ■ "- P ■ ■*' ' """"/ in which the summation refers to the coefficients of the several unknown quantities ; thus, for example, ^ -^ is the sum of the squares of all the coefficients in the first equation of condition each divided by the weight of the corresponding unknown quantity. The correlatives being found firom these equations, Il6 INDIRECT OBSERVATIONS. [Art. 148 the values of the corrections x,y,. . .t are given at once by equations (5). 149. When there is but one equation of condition OxX + a^y + . .. + af,i=£, the second members of equations (5) reduce to their first terms, and the equations require that the corrections of the several unknown quantities shall be proportional to their coefficients in the equation of condition divided by their weights. Equa- tions (6) then reduce to the single equation k£^ = £, and the corrections are •* ^3 ■*^» y ^3 -"i 2"- 2- P P In the very common case in which the numerical value oi each coefficient in the single equation of condition is unity (for example, when the successive angles at a point, or all the angles of a polygon, are measured, or when the sum of two measured angles is independently measured), we have the simple rule that the corrections are inversely proportional to the weights. Examples. I. Denoting the heights above mean sea level of five points by X, Y, Z, U, V, observations of difference of level gave, in feet: -^= 573-08 Z- 3^=167.33 U- F= 425.00 Y-X= 2.60 U-Z= 3.80 F= 319.91 J^= 575-27 u-Y= l^o.2& y= 319-75 Putting X= 573 + X, y= 575 +y, Z= 742 + z, U= 745 + tt^ §VIII.]^ EXAMPLES. 117 F= 320 + V, find the values and probable errors of the cor- rections x,y, z, u, V, supposing the observations to have equai, weight. J? = — 0.19 ± 0.23, J/ = 0.14 ± 0.21, ^^ = 0.05 ± 0.30.. M= 0.43 ±0.25, 2/ = 0.03 ±0.19. 2. Given the observation equations : •*^ = 4-5. y=i.6, x—y=2.T, with weights 10, 5 and 3 respectively, determine the values of X and J/. X = 4.468 ± 0.049, J' = ^-^^3 ± 0.063. 3. Measurements of the ordinates of a straight line corres- ponding to the abscissas 4, 6, 8 and 9, gave the values 5, 8, 10 and 12. What is the most probable equation of the line in the formj/ — mx + i? jy = i-339-^ — 0.029. 4. Given the observation equations of equal weight : x= 10, y— x=T, jj/ = 18, y — z=.g, X — z= 2, determine the most probable values of the unknown quantities, and the probable errors of an observation and of each unknown quantity. ;ir=iol, j/=i7l, 2 = 8i, r = ri= 0.29, rx = ry = 0.23. 5. In order to determine the length x at 0° C. of a meter bar, and its expansion y for each degree of temperature, it was measured at temperatures 20°, 40°, 50°, 60°, the corresponding observed lengths being 1000.22, 1000.65, 1000.90 and 1001.05 mm. respectively. Find the probable values of x and y with their probable errors. x = 999""".8o4 ± 0.033, y = o'°°'.02I2 ± 0.0007. 6. The length of the pendulum which beats seconds is known to vary with the latitude in accordance with Clairant's equation, /=/' + ^-§-y.-M)/'sin'A where I' is the length at the equator, }■ the ratio -j^^ of the cea- .Tl8 INDIRECT OBSERVATIONS. [Ait. 149 trifugal force at the equator to the weight, and ;* the compres- sion of the meridian regarded as unknown. Putting /' = 99i- + ^, (l^-;.^/'=_y, observations in different latitudes gave in millimeters : X + 0.969J)/ = 5.13, x-\- o.ogsjj' = 0.56, X + o.2,vjy = 1.70, ^ + 0.749JJ' = 3-97» X =0.19, .r + 0.685J)/ = 3.62, X + 0.426^ = 2.24, ;«: + o. 1 527 = 0.77, X + 0.793J/ = 4.23. Find the length at the equator with its probable error. /' = 99i""".o69 ± .026. 7. Find the value of fi in the preceding example and its prob- able error. /t = ^ ± 0.00046. 8. The measured height in feet of .(4 above O, B above A and B above O are 12.3, 14. i and 27.0 respectively. Find the most probable value and the probable error of each of these differences of level. 12.5 ± 0.17; 14.3 ± 0.17; 26.8 ± 0.17. 9. A round of angles at a station in the U. S. Coast Survey was observed with weights as follows: 65° 1 1' 52".500 with weight 3, 87° 2' 24".703 with weight 3, 66 24 15 .553 " " 3, 141 21 21 .757 " " i; find the adjusted values whose sum must be 360°. 65° II' 53".4i45. 87° 2' 25".6i7S, 66 24 16 .4675, 141 21 24 .5005. 10. Four observations on the angle A" of a triangle gave a mean of 36° 25' 47", two observations on Y gave a mean oi 90° 36' 28" and three on Zgave 52° 57' 57". Find the adjusted values of the angles and the probable error of a single obser- vation, r = 7".7 ; A" = 36" 25' 44"-23, Y= 90 36 22 .46, -^=52 57 53 'SI' tl. A round of four angles was observed as follows : Z^" 52' 14". 28 weight 2, 44" 35' 56".54 weight 3, 145 23 16 .35 " 4, 131 16 21 .47 " 3, find the adjusted values. 38° 51' 35".94. 44' 35' 30^'-9^ 145 22 57 .18, 131 9 55 .9^ §vni.] EXAMPLES. ^'9 12. Measurements of the angles between surrounding stations were made with weights as follows : Between stations i and 2, 55° 57' 58".68, weight 3^ " 2 " 3. 48 49 13 -64, " 19, " " I " 3- 104 47 12 .66, " 17, " 3 " 4, 54 38 15 .53, " 13. « « 2 " 4, 103 27 28 .99, " & Find the corrections of the angles in the order given. or'.a85, + lbl\y + \cl\z + . . . + \ll\t = [/«] . As mentioned at the end of Art. 126, we may suppose the observation equations (i) to have been reduced to the weight unity, so that \aa\, \ab\ . . . \ln\ stand for ^a", ^ab, . . 2/«. §ix.] THE HEDUCED NORMAL EQUATIONS. 121 151. The value of x in terms of the other unknown quanti- ties derived from the first of equations (2), or normal equation for x^ is \aa\ [aa] .+ \aa\ ' Substituting this in the f^— i other equations, they become (M-Mg]).+ (M-M[^]) .+. ■ [««] - M ([./]-[./]g-5.+...+(m-M[3)-M-Mg] in which it will be noticed that the coefficients of the unknown quantities have the same symmetry as in the normal equa- tions (2). These equations for the /^ — i unknown quantities y, z, . . . f are called the reduced normal equations, and are written in the form \bb, i]jc + lie, i]0 + . , . + \U, i]/ = \hn, i] \bc, i]j»' -f- \cc, i> -h . . . + \cl, i]/ = \cn, i] \bl, i]jc -f U !>+... + [//, i]^ = lln, ij . (3) it. which ■ • • • * {4V 122 GAUSS'S METHOD OF SUBSTITUTION. [Art. 151. Equations (4) show that the rule for the formation of the coefficients and the second members of the reduced normal equations is the same throughout; namely, from the correspond- ing coefficient in the normal equations we are to subtract the result of multiplying together the two expressions in whose symbols one of the letters in the given symbol is associated with a, and dividing the product by \ad\. The Elimination Equations. 152. Eliminating y by means of the first of the reduced normal equations (3) from each of the others, just as x was eliminated from the normal equations, and employing a similar notation, we have the fi — 2 equations \cc, 2]^ + . . . + [cl, 2]/ = [en, 2] \ [. • . (S) which may be called the second reduced normal equation*. The coefficients in these equations are derived from those in equations {3) exactly as the latter were found from those in equations (2). Thus (6) In like manner the third reduced normal equations are formed from these last, the coefficients being distinguished by the postfixed numeral 3, corresponding to the number of § IX.] THE ELIMINATION EQUATIONS. 123 variables which have been eliminated. We finally arrive at the single equation [//, n- x'\t = [In, f^- 1] (7) which determines the unknown quantity standing last in the order of elimination. 153. The quantity which immediately precedes / is next derived from the first of the preceding set of equations (that is, from the equation by means of which it was eliminated) by the substitution of the numerical value found for / ; and so on, until finally x is found from the first of the original normal equations. The equations from which the unknown quantities are actually determined are therefore the following : [aa]x + [a^]j)' + [m]:! + ■ ■ • + [al]t = [an] [6l>, i]y + [be, !]«+... + [bl, i]/ = [bn, i] [cc, 2]^ + . . . + [cl, 2]/ = \cn, 2] (8) [//,;U-l]/= [/«,/*-!]. These are called the finai, or elimination equations. The Reduced Observation Equations. 154. Let us suppose that there exists a relation between the variables which must be exactly satisfied, while the m observation equations are to be satisfied approximately. Let this relation be «* + /ly + . . • + ^' = " (0 Eliminating x from the observation equations (i), Art 150 by the substitution of x-=—-y — -z—-.. / + -I a-' a a a. 124 GAUSS'S METHOD OF SUBSTITUTION: [Art. 154. derived from this equation, we have {b- a,-jy +[c,- a,^z + . . . + [i,- a,-^t^n,- a^ f^m— ^m-jy+Km— «m^j«+ • • • + f 4 — ««- j ^= «m " ««- which may be called the reduced observation equations, and written in the form b:y + c:z + . . . + /.v = «/ ^ h:y + f/^ + . . . + i:t = «/ bmy + c^z + . . . 4- 4V = nj . ■ . (2) a comparison of which with the equations written ' above suffi- ciently indicates the values of b^ , <:/, . . . «„', • • • nj- The /^ — I normal equations derived from these are lb'b'-\y + \b'c'\z + . . . + Ib'l'y = Vb'n'\ Ib'c'^y + [.V']^ + . . . + Vnt = [^'«'l (3) in which .(4) §IX.] THE REDUCED OBSERVATION EQUATIONS. -125 155- Let us now suppose that the equation of condition (i) which is to be exactly satisfied is identical with the first of the normal equations (2) of Art 150, so that a = \ad\, ^=\ab\, ... v=.\an\\ then equations (4) become \b'b'\ = \bb\ - P^° \ad\ [,v] = \pc\ - MM \aa\ \l'n'\ = [/«] - (S) Comparison of these with equations (4), Art. 151, shows that the normal equations (3) of the preceding article now become identical with the first reduced normal equations of Art. 151. Hence the first reduced normal equations are the same as the normal equations corresponding to the reduced observation equations which would result if x were eliminated from the observation equations by means of the normal equation for x. It is evident that, in like manner, the second reduced normal equations are the same as the fx — 2 normal equation which would result from the reduced observation equations, if they were further reduced by the elimination of y by means of the reduced normal equation for j/ ; or, what is the same thing, the normal equations which would result if x and j' were eliminated from the original observation equations by means of the normal equations for x andj. Similar remarks apply to the other sets of reduced normal equations. 156. An important consequence of what has just been proved is that, among the coefficients in the reduced normal equations, or auxiliary quantities, those of quadratic form, {bb, ij, \cc, i], ... {cc, 2I, ... [//, // — i], 126 GAUSS'S METHOD OF SOBSTITUTION. [Art. 156. being, like the corresponding quantities in the normal equa- tions, sums of squares, are all positive. It is further to be noticed that each of these quantities decreases as its postfix increases, for the subtractive quantities in the formation of the successive values are themselves positive. For example, Weights of the Two Quantities First Determined. 157. The unknown quantity / has been determined in equation (7), Art. 152, after the manner described in Art. 133; that is to say, from its own normal equation — no reduction by multiplication or division having taken place in the course of the elimination. Hence, as proved in that article, its weight is the coefficient of the unknown quantity; that is to say, the weight of an observation being unity, that of / is pi = [//, /< - i]. which, as shown in the preceding article, is necessarily a posi- tive quantity.* The weight of any one of the unknown quantities might be determined, in like manner, by making it the last in the order of elimination. 158. Let s be the unknown quantity preceding t, so that [//,;.-.] = [//,;,- 3] -Ig^^', • As shown in Art. 156, the substitutions diminish the successive coef- Bcients of (. Compare the foot-note to Art. 133, p. 104. In fact [//] is the weight that t would have if the true values of all the other quantities were known; [//, i] is the weight which it would have if all the others except X were known— that is, if x and / were the only quantities subject to error ; and so on. § IX.] THE REDUCED EXPRESSION FOR 'Sxi'. 12J or [//, )X - {\lkk, fl-2-]= [II, fl - iMkk, Jil-2]- [kl, M - 2]\ If now the order of s and t be reversed, no other change of order being made, the auxiliaries with the postfix /i — 2 will be unaltered, and we shall have [kk, M - i][//. M-2] = [M, f* - 2][//, /i - 2] - [^l,M- 2]', hence [kk,fx - i][//, /^ - 2] = [//, M - i][kk,M - 2]. But [M, ;u — i] is the weight of s, therefore we have The weights of the other unknown quantities cannot be thus readily expressed in terms of the auxiliaries occurring in the calculation of /. A general method of obtaining all the weights will be given in Arts. 174-176. TAe Reduced Expression for 2v*. 159. We have found in Art. 139 for 2»' or [vv] the expres- sion \vv\ = — [an\x — [bn\y - ... — [/«]/ + [««], which is similar in form to the expressions equated to zero in the normal equations. If in this we substitute the value of X, as in Art. 151, it becomes [w'\ = — \pn, i\y- [en, i]z— ... - [/«, i]/ + [tm, i], 128 GAUSS'S METHOD OF SUBSTITUTION. [Art. 159^ in which \a,a\ \nn, I J = \nn\ — S^— > \aa\ after the analogy of the auxiliary quantities defined in equa- tions (4), Art. 151. In like manner, by the elimination of y, \yv\ is reduced to the form \yv\ = - [en, 2]z- . .. - \ln, 2]i + \nn, 2], and finally, by the substitution of the value of t, to [H = \nn, )x\ the postfix fi indicating that all the unknown quantities have been eliminated. Substituting in the expressions for the mean and probable error of an observation. Art. 138, we have The General Expression for the Sum of the Squares of the Errors. 160. The following articles contain an investigation* of the sum of the squares of the errors considered as a function of the unknown quantities, showing directly that the minimum * Gauss, " Theoria Motus Corponim Coelestium," Art. 182; Werke vol. vii. p. 238. g IX.] SUM OF THE SQUARES OF THE ERRORS. 1 29. value of this quantity corresponds to the values derived from the normal equations, and is equal to \nn, ;u], and also deriving from the general expression the law of facility of error in /, and thence its weight. Let W=[zw] (i) be the sum of the squares of the errors in the observation equations, that is to say, of the linear expressions of the form (Art. 119), • ax -{- dy -\- . . . -\- /t — n = V. The absolute term in tV is obviously [««]. Put Then X = 'Siy-z- = \av\ = [aa]x + [al>]y + . . . + [a/]f — [an]. (3) The equations X = o, Y = o, . . . T = o are the normal equations. Now, since - -^~- = X-,- = [aa\X, or, - -7- f— -f = X, 2 dx dx 2 dx \aa\ 1 d (^.^ X'X 2 dx\ \_aa\l hence, if we put W,= W-^-^, (4) W^ is a function independent of x. Now, in equation (4), Wi has for all values of the variables which make X =0 130 GAUSS'S METHOD OF SUBSTITUTION. [Art. 160. the same value as W ; hence W^ is what W becomes when x is eliminated from it by means of the first normal equation, X=o. 161. It follows from what has just been proved, that W, = \y'v'y, (S) that is to say, W^ is the sum of the squares of expressions of the form ey-\-<^z-\- ... +tf-n'=v', ' corresponding to the reduced observation equations, Arts. 154, 155. The absolute term in W^ is therefore [«'«'] or [««, i]. If, now, we put ^•~2 dy' '" ^'~'2 dt' • • • ^^' Y,=2v'^=[yv']=[yy]y+[6V]z+. . .+[dr]/-[yn']. (7) ay and F, = o, . . . T, = o, are the reduced normal equations. The relation between the expressions y, , . . . y, and X, Y, . . . 7' is derived from equation (4); thus, differentiating with respect to y, K=y-r^^=v-^x,. . . (8) ' [aa] dy [aa] which gives another proof of the identity of the coefficients [i'6'], . . . [p'n''\ with \bb, i], . . . \bn, i], established in Art. 153. We now prove, exactly as in the preceding article, that is a function independent oiy as well as of x, and is identical §IX.] SUM OF THE SQUARES OF THE ERRORS. I3I with \y"v"\ the sum of the squares of expressions of the form /'«+...+ /"/ - n" = v" corresponding to the second reduced observation equations, from which x and y have been eliminated by means of the equations ^ = o, K,= o. The absolute term in W^ is obviously [«"«"] or [««, 2]. 162. Proceeding in this way, we finally arrive at an expres- sion Wy. which is independent of all the variables, and consists simply of the absolute term [««, /<]. We have thus reduced W to the form The denominators \_aa\, \bb, i], . . . [//, /^ — i], being sums of squares, are all positive; hence the minimum value of JFis the value \nn, /*] corresponding to the values oi x, y, . . . t which satisfy the equations -Sf = o, F, = o, . . . 7)i _ , — o. 163. Since W is the sum of the squares of the errors, the probability that the actual observations should occur is proportional to ^ -*''*' as in Art. 62. Therefore, by the principle explained in Art. 30, the observations having been made, the probabilities of different systems of values of the unknown quantities are proportional to the corresponding values of this function. Hence, C being a constant to be determined, the elementary probability, Art. 21, of a given system of values ol x,y, . . . t'\% Ce-'<^^dxdy ...dt, (11) • This result is also derived by Gauss in a purely algebraic manner in the " Disquisitio de Elementis EUipticis Paladis; " Wtrke, vol. vi. p. 22. See also Encke, Birliner Astronomisekes Jahrbuch for 1853, pp. vji-Vll. 132 GAUSS'S METHOD OF SUBSTITUTION. [Art. 163. where h is the measure of precision of an observation, and C is such that the integral of the expression for all . possible - values of the variables is unity. The probability of a given system of values oi y, z, . . . t, while X may have any value, is found by summing this expression for all values of x. It is then Cdy . ..dt f e-'^'^dx =Cdy...dt e-^'f^'ij'^'^Ux, since PF, in equation (4) is independent of x. Since — = [aa], the value of the definite integral in this expression is, by equation (7), Art. 39, ^'"'^dx = ~ e ^""^ dX = \aa\ J-" \aa\ J-" h ^\aa\ Thus the probability of a given system of values of y, z, . . .t\s, -P-P^dydz . . . dt e-'^'^^ . . . : . (12) 164. Iri like manner, the probability of a given system of values oi z . . . i, X and y being indeterminate, is ^ ^'^-''~ . . . di r e-^"^'dy, h ^[aa] which, by equations (9) and (7), reduces to .^^^^^—dz...dte-^"^^. . . (13) §1X.] PROBABILITY OF A GIVEN VALUE OF t. 133 -^■Proceeding in this way, we have, finally, for the probability of a given value of t, . • . .z^"-' \/\\aa\bb, i\. . .\_kk,ti- i'W ■ ■ \ V Again, integrating this for all values of t, we have 1 -v:: • ^^FTIMPTi] ...[//, /^ - i]} - '• • • ^'5^ Substituting the value of C-thus determined, we obtain foi the probability of f, kj\/[n, fl - A^_H^(W^_^-[nn,y.-\)^t. . . . (l6) ^>-- - ■ '=T n .ri',! + [««' l^'\ But ill, /i - 1] and , . . . 7; _, = [//, /^ - i\t - {In, /< - i]; therefore, putting _ T^-r _ _ [ In, /^ - i] ; ; . afed viaittm^ dt, the expression (16) gives for the law of facility of'error in /, ^ l;^_,; . •. : AVL^^' y- — ^1 ^ -A»r//. »• --^It' (17) 134 GAUSS'S METHOD OF SUBSTITUTION. [Art. 164. This is of the same form as the law of facility for an ob- servation, except that the measure of precision is h V[U, M - i]. Thus the most probable Value of t is that which makes T = o, namely, and the weight of this determination, when that of an observed quantity is unity, is /. = U^, it - i]. TAe Auxiliaries Expressed in Determinant Form. 165. If, in the determinant of the coefficients of the normal equations, denoted by D in Art. 128, we subtract from the second row the product of the first row multiplied by % — 4, it becomes o. [^*f i]i \pi:, i], . . • \pl, i]. Treating the other rows in like manner, the determinant D is reduced to a form in which the first row is unchanged, and the rest are replaced by a column of o's and the determinant of the first reduced normal equations. Denoting this last de- terminant by Z>', we have D — \ad\D'. By a similar reduction of 2?', D is further reduced to a form in which the first two rows are as in that described above, and the rest are replaced by two columns of o's and the determi- nant, D", of the second reduced normal equations. Finally, D is thus reduced to the determinant of the elimination equa- tions (?), Art. 153. §IX.] AUXILIARIES IN DETERMINANT FORM. 135 The successive forms of D give the equations D=-\aci\iy-\aa'\\bb, i]Z?"= . . . = \_aa\bb, {\\cc, 2] . . \ll,)X-i\. 166. If, in the form of D involving Z)""', we take the first r rows, and then any other row (which will therefore be a row belonging to Z?**"'), the same reasoning shows that any deter- minant formed by selecting r + i columns of this rectangular block is equal to the minor occupying the same position in D. We can now express any auxiliary, say [a/?, r\ as the quo- tient of two minors, of the (r + i)th and rth degree respec- tively, in D. This auxiliary occurs in the form of D just mentioned. Taking the first r rows and columns together with the row and column in which the given auxiliary occurs, we have a determinant whose value is \aa\bb, i] . . . \yY> r - i][aA r\ because all the elements below the principal diagonal vanish. But this determinant is equal to that similarly situated in Z>, and the coefficient of [afi, r\ is equal to the determinant formed from the first r rows and columns of D. For example, for [de, 2] we have [aa] o o and therefore [abl M \bb, i] [be, i] = [de, 2] [ad\ [„ of Art. 139 is ■ . Z>„ = \aa\b'b, i] ...[//,/< — \\nn, /^] —' iy(nn, }i\; ■ therefore \nn, >"] = 2f . which is the same value that was found for \vv\ on j). 1 10. Form of the Calculation of the Auxiliaries. 168. In calculating the coefficients which occur in the elimination equations and the value of \vv\, it is important to arrange the work in tabular- form,' and to apply frequent checks to the computation to secure accuracy. In the an- nexed table,* which is constructed for four unknown quanti- ties, the first compartment contains the coefficients and second members of the normal equations together with the value of [««], which are derived from the observation equations, as explained in Art. 127. The coefficients are entered bpposit^' and below the letters in their symbols, those below the diag- onal line, whose values are the same as those symmetrically situated above, being omitted; Beneath those In thfe first line are written their logarithms, which are used 'in comfjutlng the sub tractive quantities placed beneath each of the other por, efficients. • * The tabular arrangement is taken from W. Jordan's " Handbuch dett Vermessungskunde." .See also Oppolzer's "Lehrbuch zur Bahnbestim- mung der Kometen und Planeten," vol. ii. p. 340 e( seq., where the table, with a somewhat different arrangeme:nt, is given for six unknown quantities, and an example is fully worked out. ' §IX.] CALCULATION OF THE AUXILIARIES. 137 a b f a' « s a \aa-\ log [oa] \_ab-\ log [fl^] log [o„K 3] ^ / = M = [»», 4] [ns, 4] 138 GAUSS'S METHOD OF SUBSTITUTION. [Art. 168. In expressing the subtractive quantities we have adopted for abridgment the notation ^'~M' °~\aaY ''~[aay "*""[««]• The logarithms of these quantities are placed at the side, and, adding them successively to the logarithms above, the antilogarithms of the sums are entered in their places. After this is done, the results of subtraction are the auxiliaries with postfix I , which are to be placed in corresponding positions in the compartment below. In like manner the third compartment is formed from the second, and in expressing the subtractive quantities we have put So also we have put and finally, _ [cd, 2] _ [en, 2] . [cc, 2] [cc, 2] n _ ['^«. 3 ] which is also the value of /. Thus the first four compartments correspond to the several sets of normal equations, and their first lines to the four elimination equations. Finally, in the fifth compartment we have computed \nn, 4], which is the value of [vv]. Check Equations. 169. The column headed j is added for the sake of the check equations [a«] + lab'l + M + W\ + [««] + [«^] = o \_ab\ + \bb-\ + \bc-\ + \bd-\ + \bn\ + \bs-\ = o [a«] + \bn'\ + [„a,, ... (2) and, for the determination of the a's, ^„+^,af. + «, =oX. . . . (3) Aa + BiPi^ + Qa, + «, = o J In like manner, to find y we multiply the second, third and fourth of equations (i) by i, A,/?,, respectively, and add. The result is y = ^„+C„/J, + ^»/?. (4) where the /?'s are determined by Again, multiplying the last two of equations (i) by i, y^,, and adding s=C„ + Ar.. (6) where Yi is determined by Ci-\rY, = o (7) 173. The form for the computation of «„ a„ a, , fi„ /?„ y„ according to equations (3), (5), and (7), and the numerical work for the example of Art. 170, is as follows: .144 GAC/SS'S METHOD OF SUBSTITUTION. [Art. 173. --^6 -^0 log \ "2 logOfg logOTj .050133 .009065 .002146^ — .000020 — .002948 9„26573 .059198 8.77331 — .000822 M1487 -B_ log/3. 8^69164 -B. d log'/J. .000106 .002448^ .002554 7.40722 log Xs = 8^69720 The values of a^, yS^ and ^3, are not found, as their: loga- rithms only are needed. , ■ We may now recompute the values of the unknown quan^ titles by means of equations (2), (4), (6) by way of verifying the values of ot^, . . . y,a.s well as those of *, ... A The form of computation will be as below u . ...,,. Bn ^» X y z t ■50325 ' .07927 .00106 . 00000 — .42989 — .00088 — .00001 .017847 .0001229 -.004415 .58358 -.43078 .018067 -.004415 " The numerical values agree exactly with those, fpund in; Art. 171. ^IXil JVE/G//TS OF THE UNKNOWN QUANTITIES. 145^ The Weights of the Unknown Quantities. 174- Tlie principle' by which we obtain expressions for the weights is that proved in Art. 132, namely: When the value of any one of the unknown quantities is expressed in terms of the second members of the normal equations, its weight is the reciprocal of the coefficient of the second member of its own normal equation; or'what^ is the same thing: The reciprocal of the weight is what the value of the unknown quantity becomes when the second member of its own normal equation is replaced by unity and that of each of the others by zero. Restoring the values of the quantities An, -Bn, Cn, JDn, the values of x,y, z, t, Art. 172, are [aa] + [bb, i]!"' '^ ice, 2f ^"^ [dd, 3] ' [a«] \aa {bn, y - \bb, i] _ \cn, 2] i] {en, 2 ]^ {dn,i\ ^ + [.., 2f-+[^^.3r {cc, 2] + {dd, zf> {dn, 3 ] {dd, 3] W Equations (3), (5), atid (7), Art. 172, show that the values of OTj,...;/, are independent of the values of {an\, {bn\, {cri\, and {dn\ ; hence the changes indicated abovcj in order to con- vert the second members of equations (i) into ;the expressions^ for the reciprocals of the weights, have only to be made in the numerators. {ali\, {bh, i]]. \cn, 2],, and , [], p. 140) we find for the mean errors e, = .02669, £, = .02750, €, = .02283, ^« = -o»»77; and hence for the probable errors r, = .01800, r, = .01855, '• = •°^ii9> U = -01536. r^& GAUSS':S METHOD OF SUBST:TUTI0.N.. {Art. ifi': .. . 1 log a^^ logo-/./ log A' ■'"■ -J log a,' log ^3' log r" ^°^ M [H . .. '■' «,' • I I .,[^^, i] - [M l] ■ '°g[5^,iT ■' ...'■ <' /S," 1 I [^^, 2] [^^,2] ^-°^ L-,::2l [-^^.2] [rf^, 3] [dd, 3] Vdd, 3] '°' K^3i "'■ ^ - I I I I A A A A--- log-^ A A '!'^ 8.53146, -..;... 7.'S4462 ' ■7.38328^ , 3-.82974 4.81444 7 -3944°. ^ .« -32034 ' ' 9-50561 :. .01201 ■35318 9.54800 :."j.!. ..; r-. . .o'oo8S! ... .-906^9 , : -24315 . 9-38587 .OOOBQ , ,P0QQCS . _■ ;.QSQ$p ; 9.38483. f_- • i\ •SSS'S'o •353y7- ' •243-75-= 1 :3 9.52270 9.54872 ^ 9.38694 9-38483 :■:.'-; ! .- g IX.] EXAMPLES. 149 Examples. 1. Show that the values of p^ when there are four unknown quantities given in Arts. 158 and 176 are identical. 2 . Show that the weight of the determination of [6m] is [M]~^ that of \bn, i] is [W, i]~i, and so on. 3. Show that, if the normal equation for x were known to be exactly true, the values of the unknown quantities and the weights relatively to that of an observation of all except x would be unchanged, and that the weight of an observation would be increased in the ratio m — }x-\- x : m — y.. 4 Solve the following normal equations which resulted from twelve observation equations : S.1143X — 0.2792JV + 3.34602 = — 0.7365, — 0.2792;*: + I4.6i42_j' -|- 0.19582 = 2.1609, 3.3460* + 0.1958;/ + 7.67542 = — 0.8927, M = 0-5379. and find the probable errors of the unknjj^wn quantities. X—- ,0803, y - .1475, z = - .0851; Tx = .034, Tp = .017, fa = .028. 5. Solve the normal equations '■ 5.2485* - 1.74727 - 2-1954^ = — o.-5399> — 1.7472.J' + I-8859J)' + 0.80412 = 1.4493. — 2.1954^ + 0.8041;/+ 4.04402 = .1.8681, [««] = 2.6322; and given m = 10, find the probable errors. [tw] = 0.5504, X = 0.422, y = 0.945, z = 0.503; y = 0.189,. ^a, = 0.108, /-„ = 0.166, ^2= 0.107, ISO GAUSS'S METHOD OF SUBSTITUTION. [Art. 177. 6. Show that the observation equations 0.707* + 2.052)' — 2.3724r — 0.221/ = — 6-S8» 0.471* + 1-347^ — i-7iS« — 0.085/ = -~ i-63» 0.260* + o.77o_j' — 0.356a + 0.483/ = 4.40^ 0.092* + 0.343^ + o.235« -f- 0.469/ = 10.21, 0.414* + 1.204J' — 1.5062 — 0.205/ = — 3-99, 0.040*+ o.i5qy + 0.104a + 0.206/= 4.34» give rise to the normal equations 0.971* + 2.8217— 3.175a — 0.104/= — 4-815, 2.821* + 8.208;' — 9.168a — 0.251/ = — 12.961, — 3.175* — 9.i68_j'+ 11.0282 + 0.938/= 25.697, — 0.104* — 0.2517 + 0.938a + 0.594/ = 10.218, and to [««] = 204.313. Determine the unknown quantities and the probable errors of an observation. «= — 86.41, y = 25.18, « = — 3.12, / = 17.66, ^ = 1.80. 7. Account for the small values of the weights, especially of X and y, in Ex- 6. Show directly from the value of [bb, i] that py < .012 and pi < .0015. 8. Ten C'bservation equations gave the normal equations 2.02530* + 0.638097 — 3.99285a = — 30.466, 0.63809* + 0.216497 — 1.12089a = — 11.959, — 3.99285* — 1.12089^ + lo.oooooa = — 6.000, together with [««] = 24928.; find the values and weights of the unknown quantities and the probable errors. §IX.] EXAMPLES. 151 X = - 202.8, y = 286.3, 2 = — 49-5; A= -oaM. A = -0066, ^= .9119; »" = 37-702. /-„ = 213, r, = 463, r, = 39. 9. Given the following observation equations of equal weight: .986;« + .056)' = .000, •953'*^ + • 182;; = I 060, .973je + .ioa)' = .S30, .943* + .219J = — .380. ^68* + .123^ = .680, .<)\()x -\- .ioiy= .200, •959* + •157.)' = -200. .9i6* + .3i7j('= — .530, .912* 4" 33rf = .000, And the normal equations and the value of [««] by the method of Art. 127. (Notice that when we put a + ^ + « + j = oas in Art. 169 a considerable saving of labor results from the fact that ^{fl + bY - 2{n + s)\ etc.) 8.0884*+ 1.6798;' = 1. 7160, 1.6798J1; + 0.438^)- = 0.1725, [««] = 2.iJ22. 10. Solve the normal equations found in Ex. 9. X = 0.642, y= — 2.07, r^ = 0.25, ry = 1.09. 11. Thirteen observation equations give the normal equa* tions 17.50*— 6.50J'— 6.502;= 2.14, — 6.50*+ 17.50^ — 6.502;= 13.96, — 6.50* — 6.5oy + 20.502; = — 5.40, [ftn] = 100.34; find the values and probable errors of the unknown quantities *= 0.67 ±0.60, j'= 1.17 ± 0.60, 2; = 0.32 ± 0.55- 152 GAUSS'S METHOD OF SUBSTITUTION. [An. 177. 12. Solve the normal equations 459^ — soSjc -3890 + 244^ = 507, — 308a: + 464JI' -|-4o8a — 269/ = — 695, — 389a: + 408,^+676^- 331/= — 653, 244^; — 269JI; —3310 + 469/ = 283, \nn\ = 1 1 29. X = —0.212, J = —1.471, z = —0.195, '^— — 6.4.8S; [»z'] = io;/;,= 207, />y=i86, p^ = 2<,o, pt = 2%i. Constants. p = 0.4769352, log p = 9.6784603 ; Pi/2 - 0.6744897, log P4/2 = 9.8289753 ; Pi/TT - 0.8453475^ log Pi/7r = 8.9270353 ; r = pV2 . e = pyn . r/. Note that p|/2 = « + /? + ;/ + * + ..., where a = fi VALUES OF THE PROBABILITY INTEGRAL, OR PROBABILITY OF AN ERROR NUMERICALLY LESS THAN X, Table I. — Values of Pi. t=hx', Pt = -4- t I 2 3 4 S 6 7 8 9 o.o 0.1 0.2 0.0000 1125 2227 0.0113 1236 233s 0.0226 1348 2443 0.0338 1459 255° 0.0451 1569 2657 0.0564 1680 2763 0.0G76 1790 2869 0.0789 1900 2974 0.0901 2009 3079 0. 1 1 3 2118 3183 °-3 0.4 0.5 0.3286 4284 5205 0.3389 4380 5292 0.3491 4475 5379 0.3593 4569 5465 0.3694 4662 5549 0.3794 4755 5633 0.3893 4847 5716 0.3992 4937 5798 0.4090 5027 5879 0.4187 5117 5959 0.6 0.7 0.8 0.6039 6778 7421 0.6117 6847 7480 0.6194 6914 7538 0.6270 698. 7595 0.6346 7047 7651 0.6420 71 12 7707 0.6494 7175 7761 0.6566 7238 7814 0.6638 7300 7867 0.6708 7361 7918 0.9 1.0 I.I 0.7969 8427 8802 0.8019 8468 883s 0.8068 8508 8868 0.81 16 8548 8900 0.8163 8586 893' 0.8209 8624 8961 0.8254 8661 8991 0.8299 8698 9020 0.8342 8733 9048 0.8385 8768 9076 1.2 '•3 1.4 0.9103 9340 9523 0.9130 9361 9539 0.9155 9381 9554 0.9181 9400 9569 0.9205 9419 9583 0.9229 9438 9597 0.9252 9456 961 1 0.9275 9473 9624 0.9297 9490 9637 0.9319 9507 9649 i.S 1.6 1-7 0.9661 9763 9838 0.9673 9772 9844 0.9684 9780 985° 0.9695 9788 9856 0.9706 9796 9861 0.9716 9804 9867 0.9726 9S11 9872 0.9736 9818 9877 0.9745 9882 0.9755 9886 1.8 1.9 i.O 0.9891 9928 9953 0.9895 9931 99S5 0.9899 9934 9957 0.9903 9937 9959 0.9907 9939 9961 0.991 1 9942 9963 0.9915 9944 9964 0.9918 9947 9966 0.9922 9949 9907 0.9925 995' 9969 2.1 2.2 2-3 0.9970 9981 9989 0.9972 9982 9989 0.9973 9983 9990 0.9974 9984 9990 0.9975 9985 9991 0.9976 9985 999' 0.9977 9986 9992 0.9979 9987 9992 0.9980 9987 9992 0.9980 998S 9993 2.4 0.9993 9996 9998 0.9993 9096 999S 0.9994 9996 9998 0.9994 9997 9998 0.9994 9997 9998 0.9995 9997 9998 0.9995 9997 9998 0.9995 9997 9998 0.9995 9997 9998 0.9996 9998 9999 2.7 2.8 ... ••.• *■* >•■ ■ •■ 0.9999 1 .0000 ... ;: Table II.— Values of Pt. r p' ^ ijn '^ V^Jo o.o O.I 0.2 0-3 0.4 o-S 0.6 0.7 0.8 0.9 I.O I.I 1.2 "•3 1.4 >S 1.6 1-7 1.8 1.9 2.0 2.1 2.2 2-3 2.4 2-5 2.6 2.7 2.8 2.9 3-0 31 3-2 3-3 3-4 3- 4- I:! 0.0000 o 0538 1073 0.1604 2127 2641 o-3'43 3632 4105 0.4562 5000 5419 0.5817 6194 6550 0.6883 7195 7485 0-7753 8000 8227 0.8433 8622 8792 0.8945 9082 9205 0.9314 941 1 9495 0.9570 9635 9691 0.9740 9782 9570 0.9930 9993 9999 ,0054 059 1126 0.1656 2179 2691 3.3192 3680 4x52 3.4606 S°43 5460 3.5856 6231 6584 ).69is 7225 7512 '•7778 8023 8248 1.8453 8639 8808 '•8959 9095 9217 1.9324 9419 9503 '•9577 9641 9696 •9744 9786 9635 0.0108 0645 1 180 0.1709 2230 2742 0.3242 3728 4198 0.4651 5085 5500 0.5894 6267 6618 0.6947 7255 7540 0.7804 8047 8270 0.8473 8657 8824 0.8974 9108 9228 '•9943 9994 .0000 0-9334 9428 95" 0.9583 9647 9701 0.9749 9789 9691 0.9954 9995 0.0161 0699 1233 0.1761 2282 2793 0.3291 3775 4244 0.469s 5128 5540 0-5932 6303 6652 0.6979 7284 7567 0.7829 8070 8291 0.8492 8674 8839 0.0215 0752 1286 0,1814 2334 2843 0.3340 3823 4290 9121 9239 0.9344 9437 95«9 0.9590 9652 9706 0-9753 9793 9740 0.9963 9996 0.4739 5170 5581 0.5971 6339 6686 0,7011 73'3 7594 0.7854 8093 8312 0.8511 8692 8855 0.9002 9133 9250 0-9354 9446 9526 0.9597 9658 9711 0-9757 9797 9782 0.9970 9997 0.0269 0806 1339 0.1866 2385 2893 0.3389 3871 4336 0.4783 5212 5621 0.6008 6375 6719 0.7042 7343 7621 0.7879 8116 8332 0.8530 8709 8870 0.9016 9146 9261 0.9364 9454 9534 0.0323 0859 >39 0.1919 2436 2944 0.3438 39'8 4381 0.4827 5254 5660 0.6046 6410 6753 0.7073 7371 7648 0.7904 8138 8353 0.8549 8726 8886 0.9029 9158 9272 0,9603 9664 9716 0.9762 9800 9818 0.9976 9998 0.9373 9463 9541 0.9610 9669 9721 0.9766 9804 9848 0.9981 9998 0.0377 0913 1445 0.1971 2488 2994 0.3487 3965 4427 0.4871 5295 5700 0.6083 6445 6786 0.7104 7400 767s 0.7928 8161 8373 8 0.0430 0966 1498 0.2023 2539 3044 0-3535 4012 4472 0.4914 5337 5739 0.6121 6480 6818 0.7134 7428 7701 0.7952 8183 8394 0.8567 8743 8901 0.9043 9170 9283 0-9383 947" 9548 0.9616 9675 9726 0.9770 9807 9874 0.9985 9999 0.8585 8759 8916 0.9056 9182 9293 0.9392 9479 9556 0.9622 9680 973' 0.9774 981 1 9896 0.9988 9999 0.0484 1020 «55i 0.2075 2590 3093 0.3584 4059 45'7 0.4957 5378 5778 6.6157 6515 6851 0.7165 7457 7727 0.7976 8205 8414 0.8604 87:6 8930 0.9069 9194 9304 0.9401 9487 9563 0.9629 9686 9735 0.9778 9814 99'S 0.9991 9999 Number Square. CuDe. Square Root. Cube Root. I I I I.OOOO I.OOOO 2 4 8 1.4142 1.2599 3 9 27 I.732I 1.4422 4 16 64 2.0000 1.5874 5 25 125 2.2361 1. 7100 6 36 216 2.449s 1.8171 7 49 343 2.6458 I.9129 8 64 512 2.8284 2.0000 9 81 729 3.0000 2.0801 10 I 00 I 000 3.1623 2.1544 II I 21 » 331 3.3166 2.2240 12 I 44 I 728 3.4641 2.2894 »3 I 69 2 197 3.6056 2.35'3 H I 90 2 744 3-7417 2.4101 IS 2 25 3 37S 3-8730 2.4662 i6 2 56 4 096 4.0000 2.5198 '7 2 89 4 9«3 4.I23I 2.S7«3 i8 3 24 5 832 4.2426 2.6207 '9 361 6859 4.3589 2.6684 20 4 00 8 000 4-4721 2.7144 21 4 4> 9 261 4.5826 2.7589 22 484 10 648 4.6904 2.8020 23 5 29 12 167 4-7958 2.8439 «4 5 76 6 25 13 824 4.8990 2.8845 *S IS 625 5.0000 2.9240 26 6 76 17 576 5.0990 2.9625 27 7 29 19683 5.1962 3.0000 28 784 21 952 5.2915 3.0366 29 841 24 389 5.3852 3.0723 30 9 00 27 000 S-4772 3.1072 3« 9 61 29 791 5.5678 3.14H 3* 10 24 32768 5.6569 3- 1748 33 10 89 35 937 5.7446 3-2075 34 II 56 39 304 5.8310 3.2396 35 12 25 42 87s 5.9161 32711 36 12 96 46 656 6.0000 3.3019 37 13 69 SO 653 6.0828 3-3322 38 14 44 54 87a 6.1644 3-3620 39 IS 21 59 3'9 6.2450 3-3912 40 16 00 64 000 6.3246 3.4200 41 16 81 68 921 6.4031 3-4482 42 17 64 74088 6.4807 3.4760 43 18 49 79 507 6.5574 35034 44 19 36 85 184 6.6332 3-5303 45 20 25 91 125 6.7082 3.5569 46 21 16 97 336 6.7823 3-5830 47 22 09 103 823 6.8557 3.6088 48 23 04 no 592 6.9282 3.6342 49 24 01 117 649 7.0000 3.6593 SO 25 00 125 000 7.0711 3.6840 Number Square. Cube. Square Root. Cube Root. 5' z6 01 132 651 7.14I4 3.7084 52 27 04 140 608 7.ZII1 3-7325 53 28 09 148 877 7.2801 3-7563 54 29 It) 157 464 7-3485 37798 55 30 25 166 375 7.4162 3. 80 JO 56 31 36 175 616 7-4833 3-8259 57 32 49 185 193 ^ 7.5498 3.8485 58 33 64 195 112 7.6158 3.8709 59 34 81 20s 379 7.68 1 1 3893° 60 36 00 216 000 7.7460 3-9149 61 37 2i 226 981 7.8102 39365 62 38 44 238 328 7.8740 39579 63 39 69 . 250 047 7-9373 3-9791 64 40 96 262 144 8.0000 4.0000 65 42 25 274 625 8.0623 4.0207 66 43 56 287 496 8.1240 4.0412 67 44 89 300 763 8.1854 4.0615 68 46 24 314 432 8.2462 4.0817 69 47 61 328 509 8.3066 4.1016 70 49 00 343 000 8.3666 4.12.3 71 50 41 357 9" 8.4261 4.1408 72 51 84 373 248 8.4853 4.1602 73 53 29 389 017 8.5440 4-1793 74 54 76 405 224 8. 602 -J 4-1983 75 56 25 421 875 8.6603 4.2172 76 57 76 438 976 8.7178 4.2358 77 59 29 456 533 8.7750 4-2543 78 60 84 474 552 8.8318 4.2727 79 62 41 493 °39 8.8882 4.2908 80 64 00 512 000 8.9443 4.3089 81 65 61 53' 441 9.0000 4-3267 82 67 24 551 368 9-°554 4-3445 83 68 89 571 787 9.1104 4.3621 84 70 56 592 704 9.1652 4-3795 85 72 25 614 125. 9.2195 4.3968 86 73 96 636 056 9.2736 4.4140 87 75 69 658 503 9-3274 4.4310 88 77 44 681 472 9.3808 4.4480 89 79 21 704 969 94340 4.464.7 go 81 00 729 000 9.4868 4.4814 9' 82 81 753 571 9-5394 4.4979 92 84 64 778 688 9-5917 4-5144 93 86 49 804 357 9-6437 4.5307 94 88 36 830 584 9.6954 4.5468 95 90 25 857 375 9.7468 4-5629 96 92 16 884 736 9.79S0 4.5789 97 94 09 912 673 9.8489 4-5947 98 96 04 941 192 9.8995 4.6104 99 98 01 970 299 9-9499 4.6261 100 I 00 00 I 000 000 10.0000 4.6416 Number Square. Cube. Square Root. Cube Root. lOI I 02 01 I 030 301 10.0499 4-6570 102 I 04 04 1 061 208 10.0995 4.6723 103 I 06 09 1 092 727 10.1489 4.687 s 104 I 08 16 I 124 864 10.1980 4.7027 los I 10 25 1 157 625 10.2470 4.7177 106 I 12 36 I 191 016 10.2956 4-73=6 107 I 14 49 I 225 043 10.3441 4-7475 108 I 16 64 I 2S9 712 10.3923 4.7622 109 I 18 81 I 29s 029 10.4403 47769 110 I 21 00 I 331 000 10.4881 4.7914 III I 23 21 I 367 631 '0-5357 4.8059 112 I 25 44 I 404 928 10.5830 4.8203 "3 I 27 69 I 442 897 10.6301 4.8346 114 I 29 96 I 481 544 10.6771 4.8488 115 1 32 25 I 520 875 10.7238 4.8629 116 I 34 56 I 560 896 10.7703 4.8770 117 I 36 89 1 601 613 10.8167 4.8910 118 I 39 24 I 643 032 10.8628 4.9049 119 I 41 61 I 685 159 10.9087 4.9187 120 I 44 00 I 728 000 10.9545 4-9324 121 I 46 41 I 771 561 1 1.0000 4.9461 122 I 48 84 I 815 848 11.0454 4-9597 123 I SI 29 I 860 867 11.0905 4-9732 124 I S3 76 I 906 624 i'-'355 4.9866 125 ' 56 25 I 953 125 11.1803 5.0000 126 I s8 76 2 000 376 11.2250 5-°i33 127 I 61 29 2 048 383 11.2694 5.026s 128 I 63 84 2 097 152 "■3137 5-°397 129 I 66 41 2 146 689 11-3578 5.0528 130 I 69 00 2 197 000 11.4018 5.0658 131 I 71 61 2 248 091 11.4455 5.0788 132 I 74 24 2 299 968 11.4891 5.0916 133 I 76 89 2 352 637 11.5326 5-1045 134 I 79 56 2 406 104 11-5758 S.1172 13s I 82 2S 2 460 375 11.6190 5.1299 136 1 84 96 2 515 456 11.6619 5.1426 137 I 87 69 2 571 353 11.7047 5-1551 138 I 90 44 2 628 072 11-7473 S.1676 139 I 93 21 2 685 619 11.7898 5.1801 140 I 96 00 2 744 000 11.8322 5.1925 141 I 98 81 2 803 221 11.8743 5.2048 142 2 01 64 2 863 288 1 1. 9164 5.2171 143 2 04 49 2 924 207 11.9583 S-2293 144 2 07 36 2 985 984 12.0000 5-2415 145 2 10 25 3 048 625 12.0416 5- 2536 146 2 13 16 3 112 136 12.0830 5.2656 147 2 16 09 3 176 523 12.1244 5.2776 148 2 19 04 3 241 792 12.165s 5.2896 149 2 22 01 3 307 949 12.2066 5-3015 ISO 2 25 00 3 375 000 12.2474 5-3133 Number Square, Cube. Square Root, Cube Root. IS" 2 28 01 3 442 95 « 12.2882 S-325' 152 2 31 04 3 511 808 12.3288 S-3368 153 2 34 09 3 581 577 12.3693 S-3485 154 2 37 16 3 652 264 12.4097 5-3601 '55 2 40 2S 3 723 875 12.4499 5-3717 .56 2 43 36 3 796 4«6 12.4900 5-3832 «S7 2 46 49 3 869 893 12.5300 S-3947 158 2 49 64 3 944 312 12.5698 5.4061 IS9 2 52 81 4 019 679 12.6095 S-4I7S 160 2 56 00 4 096 000 12.6491 5.4288 161 2 59 21 4 J73 281 12.6886 5.4401 i6z 2 62 44 4 251 528 12.7279 S-4SU 163 2 65 69 4 330 747 12.7671 5.4626 164 2 68 96 4 410 944 12.8062 5-4737 1 65 2 72 25 4 492 125 12.8452 5.4848 166 2 75 56 4 574 296 12.8841 5-4959 167 2 78 89 4 657 463 12.9228 5-5069 168 2 82 24 4 741 632 12.9615 5.5178 169 2 85 61 4 826 809 13.0000 5.5288 170 2 89 00 4 913 000 13.0384 5-5397 171 2 92 41 5 000 211 13.0767 5-5505 172 2 95 84 5 088 448 13.1149 S-5613 173 2 99 29 5 177 717 13.1529 5.5721 174 3 02 76 5 268 024 13.1909 5.582S I7S 3 06 25 5 359 375 13.2288 5-5934 170 3 09 76 5 45' 776 13.2665 5.6041 177 3 >3 29 5 545 233 13-3041 5-6147 178 3 1684 5 639 752 13-3417 5.6252 179 3 20 41 5 735 339 13-3791 5-6357 180 3 24 00 5 832 000 13.4164 5.6462 181 3 27 61 5 929 741 '3-4536 5.6567 182 3 3« 24 6 028 568 13.4907 5.6671 '!3 3 34 89 6 128 487 13-5277 56774 184 3 38 56 6 229 504 13-5647 5-6877 185 3 42 25 6 331 625 13.6015 5.6980 186 3 45 96 6 434 856 13.6382 S-7083 187 3 49 69 6 539 203 136748 5.7185 188 3 53 44 6 644 672 13-7113 5.7287 189 3 57 21 6 751 269 13-7477 5-7388 190 3 61 00 6 859 000 13.7840 5-7489 191 3 64 81 6 967 871 13.8203 5-7590 192 3 68 64 7 077 888 13.8564 5-7690 193 3 72 49 7 189 057 138924 5.7790 194 3 76 36 7 301 384 13.9284 5-7890 19s 3 80 25 7 414 875 13.9642 5-7989 196 3 84 16 7 529 536 14.0000 5.8088 197 3 88 09 7 645 373 14.0357 5.8186 198 3 92 04 7 762 392 14.0712 5.8285 199 3 96 01 7 880 599 14.1067 5-8383 200 4 00 00 8 000 000 14.1421 5.8480 Number Square. Cube. Square Root. Cube Root. 201 4 04 01 8 120 601 14.1774 5.8578 202 4 08 04 8 242 408 14.2127 5.8675 203 4 12 09 8 365 427 14.2478 5.8771 204 4 16 16 8 489 664 14.2829 5.8868 20S 4 20 25 8 615 125 14.3178 5.8964 206 4 24 36 8 741 816 ■4-3527 5.9059 207 4 28 49 8 869 743 '4.3875 59'5S 208 4 32 64 8 998 912 14.4222 5-9250 209 4 36 81 9 "29 329 14.4568 5.9345 210 4 41 00 9 261 000 14.4914 5-94 59 211 4 45 21 9 393 93' 14.5258 5.9533 212 4 49 44 9 528 128 14.5602 5.9627 213 4 S3 69 9 663 597 14.5945 5.9721 214 4 57 96 9 800 344 14.6287 5.9814 215 4 62 25 9 938 37 S 14.6629 5.9907 216 4 66 56 10 077 696 14.6969 6.0000 217 4 70 89 10 218 313 14.7309 6.0092 218 4 75 24 10 360 232 14.7648 6.0185 219 4 79 61 10 503 459 14.7986 6.0277 220 4 84 00 10 648 000 14.8324 6.0368 221 4 88 41 10 793 861 14.8661 6.0459 222 4 92 84 10 941 048 14.8997 6.0550 223 4 97 29 II 089 567 14-9332 6.0641 224 S 01 76 II 239 424 14.9666 6.0732 225 5 c6 25 II 390 625 15.0000 6.0822 226 S 10 76 II 543 176 15-0333 6.0912 227 S »S 29 II 697 083 15.0665 6 1002 228 5 "9 84 II 852 352 15.0997 6.091 229 5 24 41 12 008 989 15.1327 6. 1 180 230 S 29 00 12 167 000 15.1658 6.1269 231 5 33 6« 12 326 391 15.1987 6.1358 232 5 38 24 12 487 168 15-2315 6.1446 233 54289 12 649 337 15.2643 61534 234 S 47 56 12 812 904 15-297' 6.1622 23s 5 52 25 12 977 87s '5- 3297 6 1710 236 5 56 96 13 144 256 15.3623 6.1797 237 5 61 69 13 312 053 15.3948 6.1885 238 5 66 44 13 481 272 15.4272 6.1972 239 5 7' 21 13 651 919 15.4596 6.2058 240 5 76 00 13 824 000 1 54919 6.214S 241 5 80 81 13 997 521 '5-5242 6.2231 242 5 8564 14 172 488 '5-51^3 6.2317 243 5 90 49 14 348 907 15.5885 •6.2403 244 5 95 36 14 526 784 15.6205 6.2488 24S 6 00 25 14 706 125 15.6525 6.2573 246 6 05 16 14 886 936 15.6844 6.2658 247 6 10 09 IS 069 223 15.7162 6.2743 248 6 15 04 IS 252 992 15.7480 6.2828 249 6 20 01 15 438 249 15-7797 6.2912 250 6 25 00 IS 625 000 15.8114 6.2996 Number Square. Cube. Square Root. Cube Root. 251 6 30 01 IS 813 251 15.8430 6.3080 . 252 .6 35 04 16 003 008 15.8745 6.3164 253 6 40 09 16 194 277 1 5.9060 6.3247 254 6 45 16 16 387 064 15-9374 6-3330 255 6 50 25 16 581 375 15.9687 6.3413 256 6 55 36 16 777 216 16.0000 6.3496 257 6 60 49 16 974 593 16.0312 6.3579 258 6 65 64 17 173 512 16.0624 6.3661 259 6 70 81 17 373 979 16.093s 6-3743 260 6 76 00 17 576 000 16.1245 6.3825 2O1 6 81 21 17 779 581 16.1555 6.3907 262 6 86 44 17 984 728 16.1864 6.39S8 f 263 6 91 69 18 191 447 16.2173 6.4070 264 6 96 96 18 399 744 16.2481 6.4151 26s 7 02 25 18 609 625 16.2788 6.4232 266 7 07 56 18 821 096 16.3095 6.4312 267 7 12 89 19 034 163 16.3401 6-4393 268 7 18 24 I 9 248 832 16.3707 6.4473 269 7 23 61 19 465 109 16.4012 6-4553 270 7 29 00 19 683 000 16.4317 6.4633 271 7 34 41 19 902 511 16.4621 6.4713 272 7 39 84 20 123 648 16.4924 6.4792 273 7 45 29 20 346 417 16.5227 6.4872 274 7 50 76 20 570 824 16 5529 6.4951 275 7 56 25 20 796 875 16.5831 6.5030 276 7 61 76 21 024 576 16.6132 6.5108 277 7 67 29 21 253 933 16.6433 6.5187 278 7 72 84 21 484 952 16.6733 6.5265 279 7 78 41 21 717 639 16.7033 6.5343 280 7 84 00 21 952 000 16.7332 6.5421 281 7 89 61 22 188 041 16.7631 6.5499 282 7 95 24 22 425 768 16.7929 6.5577 283 8 00 89 22 665 187 16.8226 6.5654 284 8 06 56 22 906 304 16.8523 6.5731 28s 8 12 25 23 149 125 16.8819 6.5808 286 8 17 96 23 393 656 16.9115 6.5885 287 8 23 69 23 639 903 16.9411 6.5962 288 8 29 44 23 887 872 16.9706 6.6039 289 8 35 21 24 137 569 1 7.0000 6.6115 290 8 41 00 24 389 000 17.0294 6.6191 291 8 46 81 24 642 171 17.0587 6.6267 292 8 52 64 24 897 088 17.0880 6.6343 293 . 8 58 49 25 '53 757 17.1172 6.6419 294 8 64 36 25 412 184 17.1464 6.6494 295 8 70 25 25 672 375 17.1756 6.6569 296 8 76 16 25 934 336 17.2047 6.6644 297 8 82 09 26 198 073 17-2337 6.6719 , 298 8 88 04 26 463 592 17.2627 6.6794 299 8 94 01 26 730 899 17.2916 6.6869 , 300 9 00 00 27 000 000 17.3205 6.6943 Number Square. Cube. Square Root. Cube Root. 301 302 303 304 30s 9 06 01 9 12 04 9 18 09 9 24 16 9 30 25 27 270,901 27 543 608 27 818 127 28 094 464 28 372 625 17-3494 17-3781 17.4069 17-4356 17.4642 6.7018 6.7092 6.7166 6.7240 6.73'3 306 3°7 308 309 310 9 36 36 9 42 49 9 48 64 9 54 81 9 61 00 28 652 616 28 934 443 29 218 112 29 503 629 29 791 000 17.4929 17-5214 17-5499 17.5784 17.6068 6.7387 6.7460 6-7533 6.7606 6.7679 3" 312 313 3«4 315 9 67 21 9 73 44 9 79 69 9 85 96 9 92 25 30 080 231 30 371 328 30 664 297 30 959 144 31 255 875 17.6352 17.6635 17.6918 17.7200 17.7482 (=.7752 0.7824 6.7897 6.7969 6.8041 316 317 318 319 320 9 98 56 10 04 89 10 II 24 10 17 61 10 24 00 31 554 496 31 85s 013 32 157 432 32 461 759 32 768 000 17.7764 17.8045 17.8326 17.8606 17.8885 6.8113 6.8185 6.8256 6.8328 6.8399 321 322 323 324 325 10 30 41 10 36 84 10 43 29 10 49 76 10 56 25 33 076 161 33 386 248 33 698 267 34 012 224 34 328 125 17.9165 17.9444 17.9722 18.0000 18.0278 6.8470 6.8541 6.8612 6.8683 6.8753 326 327 328 329 330 10 62 76 10 69 29 10 75 84 ,10 82 41 '10 89 00 34 645 976 34 965 783 35 287 552 35 611 289 35 937 000 18-0555 18.0831 18.II08 18.1384 18.1659 6.8824 6.8894 6.8964 6.9034 6.9104 33' 332 333 334 33S 10 95 61 11 02 24 II 08 89 II IS 56 II 22 25 36 264 691 36 594 368 36 926 037 37 259 704 37 595 375 18.1934 18.2209 18.2483 18.3757 18.3030 6-9174 6.9244 6.9313 6.9382 6-945' 336 337 338 339 340 II 28 96 II 35 69 II 42 44 II 49 21 II 56 00 37 933 056 38 272 753 38 614 472 38 958 219 39 304 000 i8-33°3 18.3576 18.3848 18.4120 18.439' 6.9521 6.9589 6.9658 6.9727 6-9795 341 342 343 344 345 II 62 81 II 69 64 II 76 49 II 83 36 n 90 25 39 651 821 40 001 688 40 353 607 40 707 584 41 063 625 18.4662 18.4932 18.5203 18.5472 18.5742 6.9864 6.9932 7.0000 7.0068 7.0136 346 347 348 349 35° 11 97 16 12 04 09 12 II 04 12 18 01 12 25 00 41 421 736 41 781 923 42 144 19a 42 508 549 42 87s 000 18.601 1 18.6279 18.6548 18.6815 18.7083 7.0203 7.0271 7-0338 7.0406 7-0473 Number Square, Cube. Square Root. Cube Root. 351 352 353 354 355 12 32 01 12 39 04 12 46 09 12 53 16 12 60 25 43 243 55 « 43 614 208 43 986 977 44 361 864 44 738 87s 18.7350 18.7617 18.7883 18.8149 18.8414 7.0540 7.0607 7.0674 7.0740 7.0807 3S6 357 3S8 359 300 12 67 36 12 74 49 12 81 64 12 88 8i 12 96 00 4S 118 016 45 499 293 45 882 712 46 268 279 46 656 000 18.8680 18.8944 18.9209 18.9473 18.9737 7-0873 7.0940 7.1006 7.1072 7.1138 361 362 363 364 365 13 03 21 13 10 44 13 17 69 13 24 96 13 32 25 47 04S 881 47 437 928 47 832 147 48 228 S44 48 627 I2S 19.0000 19.0263 19.0526 19.0788 19.1050 7.1204 7.1269 7-I33S 7.1400 7.1466 366 367 368 369 370 13 39 56 13 46 89 13 54 24 13 61 61 13 69 00 49 027 896 49 430 863 49 836 032 50 243 409 SO 653 000 19-13" 19.1572 19-1833 19.2094 19.2354 7-1531 7.1596 7.1661 7-1726 7.1791 371 372 373 374 375 13 76 41 13 83 84 13 91 29 13 98 76 14 06 25 SI 064 811 SI 478 848 51 89s 117 52 313 624 52 734 375 19.2614 19.2873 '9-3'32 >9-339i 19.3649 7. "855 7.1920 7.1984 7.2048 7.2112 376 377 378 379 380 H 13 76 14 21 29 14 28 $4 14 36 41 14 44 00 53 >57 376 53 582 633 54 010 IS2 54 439 939 S4 872 000 19.3907 19.4165 19.4422 19.4679 19.4936 7.2177 7.2240 7.2304 7,2368 7.2432 3f' 382 383 384 385 14 51 61 14 59 24 14 66 89 14 74 56 - 14 82 25 55 306 341 ss 742 968 56 181 887 56 623 104 57 066 625 19.5192 19.5448 19.5704 19-5959 19.6214 7-2495 7-2558 7.2622 7.268s 7-2748 386 387 388 389 390 14 89 96 14 97 69 15 05 44 IS 13 21 IS 21 00 57 5'2 456 57 960 603 58 411 072 58 863 869 59 319 000 19.6469 19.6723 19.6977 19.7231 19.7484 7.281 1 7.2874 7.2936 7-2999 7.3061 39' 392 393 394 395 IS 28 81 IS 36 64 IS 44 49 IS 52 36 15 60 25 59 776 471 60 236 288 60 698 457 61 162 984 61 629 §75 19-7737 19.7990 19.8242 19-8494 19.8746 7-3'24 7-3'86 7.3248 7-33>o 7-3372 396 397 398 399 400 IS 68 16 15 76 09 IS 84 04 15 92 01 16 00 00 62 099 136 62 570 773 63 044 792 63 521 199 64 000 000 19.8997 19.9249 19.9499 19-9750 20.0000 7-3434 7-3496 7-3558 7.3619 7.3681 Number Square. Cube. Square Root. Cube Root. | 401 402 H03 404 40s 16 08 01 16 16 04 16 24 09 16 32 16 16 40 25 64 481 201 64 964 808 65 450 827 65 939 264 66 430 125 20.0250 20.0499 20.0749 20.0998 20.1246 7-3742 7-3803 7.3864 7-3925 7.3986 406 409 410 16 48 36 16 56 49 16 64 64 16 72 81 16 81 00 66 923 416 67 419 143 67 917 312 68 417 929 68 921 000 20.1494 20.1742 20.1990 20.2237 20.2485 7.4047 7.4108 7.4169 7.4229 7.4290 411 41 J 4«3 414 41S 16 89 21 16 97 44 17 0569 17 13 96 17 22 25 69 426 531 69 934 528 70 444 997 70 9S7 944 71 473 375 20.2731 20.2978 20.3224 20.3470 20.3715 7-4350 7.4410 7-4470 7-4530 7-4590 416 417 418 419 420 17 30 56 17 38 89 17 47 24 17 SS 61 17 64 00 71 991 296 72 511 713 73 034 632 73 560 059 74 088 000 20.3961 20.4206 20.4450 20.469s 20.4939 7.4650 7.4710 7-4770 7.4829 7.4889 421 422 424 42s 17 72 41 17 80 84 17 89 29 17 97 76 18 06 25 74 618 461 75 IS" 448 75 686 967 76 225 024 76 765 625 20.5183 20.5426 20.5670 20.5913 20.6155 7.4948 7:5007 7.5067 7.5126 7-5'85 426 427 428 429 430 18 14 76 18 23 29 18 31 84 18 40 41 18 49 00 77 308 776 77 854 483 78 402 752 78 953 589 79 507 000 20.6398 20.6640 20.6882 20.7123 20.7364 7-5244 7.5302 7-5361 7-5420 7-5478 43' 432 433 434 43S 18 S7 61 18 66 24 18 74 89 18 83 56 18 92 25 80 062 991 80 621 568 81 182 737 81 746 504 82 312 875 20.7605 20.7846 20.8087 20.8327 20.8567 7-5537 7-SS9S 7-5654 7.5712 7-5770 436 437 438 439 440 19 00 96 19 09 69 19 18 44 19 27 21 19 36 00 82 881 856 83 453 453 84 027 672 84 604 519 85 184 000 20.8806 20.9045 20.9284 20.9523 XO.9762 7.5828 7.5886 7-5944 7.6001 7.6059 441 442 443 444 445 19 44 81 •9 53 64 19 62 49 19 71 36 19 80 25 85 766 121 86 350 888 86 938 307 87 528 384 88 121 125 21,0000 21.0238 21.0476 21.0713 21.0950 7.61 17 7.6174 7.6232 7.6289 7-6346 446 447 448 449 450 19 89 16 199809 20 07 04 20 16 01 20 25 00 88 716 536 89 314 623 89 915 392 90 518 849 91 125 000 21.1187 21.1424 21. 1660 21.1896 21.2132 7.6403 7.6460 7.6517 7.6574 7-6631 Number Square. Cube. Square Root. Cube Root. 4S2 453 454 455 20 34 01 20 43 04 20 52 09 ZO 01 16 20 70 25 9' 733 851 92 345 408 92 959 677 93 576 664 94 196 375 21.2368 21.2603 21.2838 21.3073 21.3307 7.668S 7.6744 7.6801 7.6857 7.6914 456 457 458 459 460 20 79 36 20 88 49 20 97 64 21 06 81 21 16 00 94 818 816 95 443 993 96 071 912 96 702 579 97 336 000 21-3542 21.3776 21.4009 21.4243 21.4476 7.6970 7.7026 7.7082 7-7138 7.7194 : 461 462 463 464 465 21 25 21 21 34 44 21 43 69 21 52 96 21 62 25 97 972 181 98 611 128 99 252 847 99 897 344 100 544 625 21.4709 21.4942 21.5174 21.5407 21.5639 7.7250 7-7306 7.7362 ! 7-7418 i 7-7473 ; 466 467 468 469 470 21 71 56 21 80 89 21 90 24 21 99 61 22 09 00 101 194 696 loi 847 563 102 503 232 103 161 709 103 823 000 ' 21.5870 21.6102 21.6333 21.6564 21.6795 7.7529 ; 7-7584 : 7.7639 7.7695 7.7750 471 472 473 474 47S 22 18 41 22 27 84 22 37 29 22 46 76 22 56 25 104 487 III 105 154 048 105 823 817 106 496 424 107 171 875 21.7025 21.7256 21.7486 21.7715 21-7945 7.7805 7.7860 7-7915 7.7970 7.8025 476 477 478 479 480 22 65 76 22 75 29 22 84 84 22 94 41 23 04 CO 107 850 176 i°8 531 333 109 215 352 109 902 239 no 592 000 21.8174 21.8403 21.8632 21.8861 21.9089 7.8079 7-8134 7.8188 7-8243 7.8297 481 482 483 484 485 23 13 61 23 23 24 23 32 89 23 42 56 23 52 25 III 284 641 111 980 16S 112 678 587 "3 379 904 114 084 125 21.9317 21-9545 21.9773 22.0000 22.0227 7-8352 7.8406 7.8460 7.8514 7.8568 486 487 488 489 490 23 61 9S 23 71 69 23 81 44 23 91 21 24 01 00 114 791 256 115 501 303 116 214 272 116 930 169 117 649 000 22.0454 22.068l 22.0907 22.1133 22.1359 7.8622 7.8676 7.8730 7-8784 7-8837 491 492 493 494 495 24 10 81 24 20 64 M 30 49 24 40 36 24 50 25 118 370 771 119 095 '488 119 823 157 J 20 553 784 121 287 375 22.1585 22.1811 22.2036 22.2261 22.2486 . 7.8891 7-8944 7.8998 7.9051 7.9105 496 497 498 499 500 24 60 16 24 70 09 24 80 04 24 90 01 25 00 00 122 023 936 J22 763 473 123 S°5 992 124 251 499 125 000 000 22.27H 22.2935 22.3159 22.3383 22.3607 7.9158 7.9211 7.9264 7-9317 7.9370 \Nuinbn Square. Cube. Square Root. Cube Root. 501 S02 503 504 505 - 25 10 01 25 20 04 25 30 09 25 40 16 25 50 25 125 751 501 ■126 506 008 127 263 527 128 024 064 128 787 625 22.3830 22.4054 22.4277 22.4499 22.4722 7-9423 7.9476 7.9528 7.9581 7-9634 5o6 507 508 509 510 25 60 36 25 70 49 25 80 63 25 90 81 26 01 CO 129 554 216 130 323 843 131 096 512 131 872 229 132 651 000 22.4944 22.5167 22.5389 22.5610 22.5832 7.9686 7-9739 7.979' 7-9843 7.9896 5" 512 513 514 515 26 II 21 26 21 44 26 31 69 26 41 96 26 52 25 133 432 831 134 217 728 13s 005 697 13s 796 744 136 590 87s 22.6053 22.6274 22.6495 22.6716 22.6936 7.9948 8.0000 8.0052 8.0104 8.0156 5.6 518 519 520 26 62 56 26 72 89 26 83 24 26 93 61 27 04 00 137 388 096 138 188 413 138 991 832 139 798 359 140 608 000 22.7156 22.7376 22.7596 22.7816 22.8035 8.0208 8.0260 8.0311 8.0363 8 0415 521 522 523 524 525 27 14 41 27 24 84 27 35 29 27 45 76 27 56 25 141 420 761 142 236 648 143 055 667 143 877 824 144 703 125 22.8254 22.8473 22.8692 22.8910 22.9129 8.0466 8.0517 8.0569 8.0620 8.0671 526 527 528 529 '53° 27 66 76 27 77 29 27 87 84 27 98 41 28 09 00 M5 531 576 146 363 183 147 197 952 148 035 889 148 877 000 22.9347 22.9565 22.9783 23.0000 23.0217 8.0723 8.0774 8.0825 8.0876 8.0927 531 532 533 534 535 28 19 61 28 30 24 28 40 89 28 51 56 28 62 25 149 721 291 150 568 768 151 419 437 152 273 304 153 130 375 23-0434 23.0651 23.0868 23.1084 23.1301 8.0978 8.1028 8.1079 8.1130 8.1 180 536 537 538 539 540 28 72 96 28 83 69 28 94 44 29 05 21 29 16 00 ■S3 990 656 1 54 854 153 155 720 872 156 590 819 157 464 000 23. 1 5' 7 23-1733 23.1948 23.2164 23-2379 8.1231 8.1281 8.1332 8.1382 8.r433 541 542 543 544 545 29 26 81 29 37 64 29 48 49 29 59 36 29 70 25 158 340 421 159 220 088 160 103 007 160 989 184 i6i 878 625 23-2594 23.2809 23.3024 23.3238 23-3452 8.1483 c-'533 8.1583 8-1633 8.1683 546 548 549 55° 29 81 16 29 92 09 30 03 04 30 14 01 30 25 00 162 771 336 163 667 323 164 566 592 165 469 149 166 375 000 23.3666 23.3880 23.4094 23-4307 23.4521 8.1733 8.1783 8.1833 8.1882 8.1932 Number Square. Cube. Square Root. Cube Root. ' 55' 5S» 553 554 555 -50 36 01 30 47 04 30 58 09 30 69 16 30 80 25 167 284 151 168 196 608 169 112 377 170 031 464 170 953 87s 23-4734 23-4947 23.5160 23-5372 23-5584 8.1982 , 8.2031 8.2081 8.2130 8.2180 556 558 560 30 91 36 31 02 49 31 13 64 31 24 81 31 36 00 171 879 616 172 808 693 173 741 112 174 676 879 175 616 000 23-5797 23.6008 23.6220 23.6432 23.6643 8.2229 8.2278 8.2327 8.2377 8.2426 562 563 56s 31 47 21 3' 58 44 31 69 69 31 80 96 31 92 25 176 558 481 177 504 328 •78 453 547 179 406 144 180 362 125 23.6854 23.7065 23.7276 23-7487 23.7697 8.2475 8.2524 8-2573 8.2621 8.2670 566 567 569 570 32 °3 56 32 14 89 32 26 24 32 37 61 32 49 00 181 321 496 182 284 263 183 250 432 184 220 009 185 193 000 23.7908 23.8118 23.8328 23-8537 23.8747 8.2719 8.2768 8.2816 8.2865 8.29.3 57' 572 573 574 575 32 60 41 32 71 84 32 83 29 32 94 76 33 06 25 186 169 411 187 149 248 188 132 517 189 119 224 190 109 375 23.8956 23.9165 23-9374 23-9583 23-9792 8.2962 8.3010 8.3059 8.3107 8-3>55 576 577 578 580 33 >7 76 33 29 29 33 40 84 33 52 41 33 64 00 191 102 976 192 100 033 193 100 552 194 104 539 195 112 00.0 24.0000 24.0208 24.0416 24.0624 24.0832 8.3203 8.3251 8.3300 8.3348 8-3396 5f' 583 584 585 33 75 61 33 87 24 33 98 89 34 »o 56 34 22 25 196 122 941 197 137 368 198 155 287 199 176 704 200 201 625 24.1039 24.1247 24.1454 24.1661 24.1868 8.3443 8.3491 8-3539 8.3587 8.3634 586 587 588 589 590 34 33 96 34 45 69 34 57 44 34 69 21 34 81 00 201 230 056 202 262 003 203 297 472 204 336 469 205 379 000 24.2074 24.2281 24.2487 24.2693 24.2899 8.3682 8.3730 8.3777 8.3825 8.3872 8.3919 8.3967 8.4014 8.4061 8.4108 591 592 593 594 595 34 92 81 35 04 64 35 16 49 35 28 36 35 40 25 206 425 071 207 474 688 208 527 857 209 584 584 210 644 875 24.3105 24-3311 24.3516 24-3721 24.3926 596 597 598 35 52 16 35 64 09 35 76 04 35 88 01 36 00 00 211 708 736 212 776 173 213 847 192 214 921 799 216 000 000 24.4131 24-4336 24.4540 24-4745 24-4949 8.415s 8.4202 8.4249 8.4296 8-4343 Number Square. Cube. Square Root. Cube Root. 60 1 36 12 01 217 081 801 24.5153 8.4390 602 36 24 04 218 167 208 24-5357 8.4437 603 36 36 09 219 256 227 24.5561 8.4484 604 36 48 16 220 348 864 24.5764 8.4530 60s 606 36 60 25 221 445 125 24.5967 8.4577 36 72 36 222 545 016 24.6171 8.4623 607 36 84 49 223 648 543 24.6374 8.4670 608 36 96 64 224 755 712 24.6577 8.4716 609 37 08 81 225 866 529 24.6779 8.4763 610 37 21 00 226 981 000 24.6982 8.4809 6.1 37 33 21 228 099 131 24.7184 8.4856 612 37 45 44 229 220 928 24.7386 8.4902 613 37 57 69 230 346 397 24.7588 8.4948 614 37 69 96 231 475 544 24.7790 8.4994 6.5 616 37 82 25 232 608 375 24.7992 8.5040 37 94 56 233 744 896 24.8193 8.5086 617 38 06 89 234 885 113 24.8395 8.5132 618 38 19 24 236 029 032 24.8596 8.5178 619 38 31 6t 237 176 659 24 8797 8.5224 620 621 38 44 00 238 328 000 24.8998 8.5270 38 56 41 239 483 061 24.9199 8.5316 622 38 68 84 240 641 848 24-9399 8.5362 623 38 81 29 241 804 367 24.9600 8.5408 624 38 93 76 242 970 624 24.9800 8.5453 625 39 06 25 244 140 625 25.0000 8.5499 626 39 18 76 245 314 376 25.0200 8.5544 627 39 3' 29 246 491 883 25.0400 8.5590 628 39 43 84 247 673 152 25.0599 8.5635 629 39 56 41 248 858 189 25.0799 8.5681 6^0 39 69 00 250 047 000 25.0998 8.5726 631 39 81 61 251 239 591 25.1197 8.5772 632 39 94 24 252 435 968 25.1396 8.5817 6J3 40 06 89 253 636 137 25.1595 8.5862 634 40 19 56 254 840 104 25.1794 8.5907 .63s 40 32 25 256 047 87s 25.1992 8.5952 636 40 44 96 257 259 456 25.2190 8.5997 637 40 57 69 258 474 853 25.2389 8.6043 638 40 70 44 259 694 072 25.2587 8.6088 639 40 83 21 260 917 119 25.2784 8.6132 640 40 g6 00 262 144 000 25.2982 8.6177 641 41 08 81 263 374 721 25.31S0 8.6222 642 41 21 64 264 609 288 25-3377 8.6267 643 41 34 49 265 847 707 25-3574 8.6312 644 41 47 36 267 089 984 25-3772 8.6357 64s 41 60 25 268 336 125 25.3969 8.6401 646 41 73 16 269 586 136 25.4165 86446 647 41 86 09 270 840 023 25.4362 8.6490 648 41 99 04 272 097 792 25.4558 8.6535 649 42 12 01 273 359 549 25-4755 8.6579 650 42 25 00 274 625 000 2549S1 8.6624 Number 651 652 653 654 65s Square. Cube. Square Root. Cube Root. 42 38 01 42 51 04 42 64 09 42 77 16 42 90 25 27s 894 451 277 167 808 278 445 077 279 726 264 281 oil 37S 2S-S'47 25-5343 25-5539 25-5734 25-5930 8.6668 8.6713 8.6757 8.6801 8.6845 656 657 658 43 03 36 43 '6 49 43 29 64 43 42 81 43 56 00 282 30c 416 283 593 393 284 890 312 286 191 179 287 496 000 25.6125 25.6320 25.6515 25.6710 25 690s 8.6890 8.6934 8.6978 8.7022 8.7066 661 662 663 664 665 43 69 21 43 82 44 43 95 69 44 08 96 44 22 25 288 804 781 290 117 528 291 434 247 292 754 944 294 079 625 25.7099 25.7294 25.7488 25.7682 25.7876 8.7110 8.7154 8.7198 8.7241 8.7285 666 667 668 669 670 44 3S .56 44 48 89 44 62 24 44 75 6r 44 89 00 295 408 296 296 740 963 298 077 632 299 418 309 300 763 000 25.8070 25.8263 25-8457 25.8650 25.8844 8.7329 8-7373 8.7416 8.7460 8.7503 671 672 673 674 675 45 02 41 45 15 84 45 29 29 45 42 76 45 56 25 302 III 711 303 464 448 304 821 217 306 182 024 307 546 875 25.9037 25.9230 25.9422 , 25.9615 25.9808 8.7547 8.7590 8.7634 8.7677 8.7721 676 677 678 679 680 45 69 76 45 83 29 45 96 84 46 10 41 46 24 00 308 915 776 310 288 733 311 665 752 313 046 839 314 432 000 26.0000 '26.0192 26.0384 26.0576 26.076S 8.7764 8.7807 8.7850 8.7893 8.7937 681 682 683 684 68s 46 37 6i 46 51 24 46 64 89 46 78 56 46 92 25 315 821 241 317 214 568 318 611 987 320 013 504 321 419 125 26.0960 26. II 5 1 26.1343 26.1534 26.1725 8.7980 8.802 1 8.8066 8.8109 8.8152 686 687 688 689 690 47 OS 96 47 19 69 47 33 44 47 47 21 47 61 00 322 828 856 324 242 703 325 660 672 327 082 769 328 509 000 26.1916 26.2107 26.2298 26.2488 26.2679 8.8194 8.8237 8.8280 8.8323 8.8366 691 692 693 694 695 47 74 81 , 47 88 64 48 02 49 48 16 36 48 30 25 329 939 371 331 373 888 332 812 557 334 25s 384 335 702 375 26.2869 26.3059 26.3249 26.3439 26.3629 8.8408 8.8451 8.8493 8.8536 8.8578 696 697 698 699 700 48 44 16 48 58 09 48 72 04 48 86 01 49 00 00 337 153 536 338 608 873 340 068 392 341 532 099 343 000 000 26.3818 26.4008 26.4197 26.4386 26.4575 8.8621 8.8663 8.8706 8.8748 8.8790 1 Numbei Square, Cube. Square Root. Cube Root. 701 49 14 01 344 472 lOI 26.4764 8.8833 702 49 28 04 345 948 408 26.4953 8.8875 703 49 42 09 347 428 927 26.5141 8.8917 704 49 56 16 348 913 664 26,5330 8.8959 70s 49 70 25 350 402 625 26.5518 8.9001 706 49 84 36 351 895 816 26.5707 8.9043 707 49 98 49 353 393 243 26.5895 8.9085 708 50 12 64 354 894 912 26.6083 8.9127 709 50 26 81 356 400 829 26.6271 8.9169 710 50 41 00 357 911 000 26.6458 8.9211 7" 5° 55 21 359 425 431 26.6646 8.9253 712 50 69 44 360 944 128 26.6833 8.9295 713 50 83 69 362 467 097 26,7021 8-9337 714 5° 97 96 363 994 344 26,7208 8.9378 715 51 12 25 365 525 875 26.7395 8.9420 716 51 26 56 367 061 696 26,7582 8.9462 717 51 40 89 368 601 813 26,7769 8.9503 718 ■ 51 55 24 370 146 232 26,7955 8.9545 719 51 69 61 371 694 959 26,8142 8.9587 720 51 84 00 373 248 000 26,8328 8.9628 721 51 98 41 374 805 361 26.8514 8.9670 722 52 12 84 376 367 048 26.8701 8.971 1 723 52 27 29 377 933 °67 26.8887 8.9752 724 52 41 76 379 5°3 424 26.9072 8-9794 725 52 56 25 381 078 125 26.9258 8.983s 726 52 70 76 382 657 176 26.9444 8.9876 727 52 85 29 384 240 583 26.9629 8.9918 728 52 99 84 385 828 352 26.9815 8.9959 729 S3 14 4' 387 420 489 27,0000 9.0000 730 53 29 00 389 017 000 27,0185 9.0041 731 S3 43 61 390 617 891 27.0370 9.0082 732 S3 58 24 392 223 168 27-0555 9.0123 733 S3 72 89 393 832 837 27.0740 9,0164 734 53 87 56 395 446 904 27.0924 9.0205 735 54 02 25 397 065 375 27.1109 9.0246 736 54 16 96 398 688 256 27,1293 9^287 737 54 3J 69 400 315 553 27,1477 9.0328 738 54 46 44 401 947 272 27.1662 9.0369 739 54 61 21 403 583 419 27.1846 9.0410 740 54 76 00 405 224 000 27.2029 9.0450 741 54 90 81 406 869 021 27.2213 9.0491 742 55 05 64 408 518 488 27.2397 9-0532 743 55 20 49 410 172 407 27.2580 9.0572 744 55 35 36 411 830 784 27.2764 9.0613 745 55 5° 25 55 65 16 413 493 625 27.2947 9.0654 746 415 160 936 27.3130 9.0694 747 55 80 09 416 832 723 27-3313 9-°735 748 55 95 04 418 508 992 27.3496 9-0775 749 56 10 01 420 189 749 27.3679 9.0816 75° 56 25 00 421 875 000 27.3861 9.0856 Number Square. Cube. Square Root. Cube Root. 75' 752 753 754 755 56 40 01 56 55 04 56 70 09 56 85 16 57 00 25 423 564 751 425 259 008 426 957 777 428 661 064 430 368 875 27.4044 27.4226 27.4408 27.4591 27-4773 9.0896 9.0937 9.0977 9.IO17 9.1057 756 "^ 758 759 760 57 T5 36 57 3° 49 57 45 64 57 60 81 57 76 00 432 081 216 433 798 093 435 S>9 512 437 245 479 438 976 000 27.4955 27.5136 27.5318 27.5500 27.5681 9.1098 9.1 138 9.I178 9.I218 9.1258 761 762 763 764 765 57 91 21 58 06 44 58 21 69 58 36 96 58 52 25 440 711 081 442 45° 728 444 194 947 445 943 744 447 697 125 27.5862 27.6043 27.6225 27.6405 27.6586 9.1298 9- '338 9.1378 9.1418 9.1458 766 767 768 769 770 58 67 56 58 82 89 58 98 24 59 '3 61 59 29 00 449 455 096 451 217 663 452 984 832 454 756 609 456 533 o°o 27.6767 27.6948 27.7128 27.7308 27.7489 9.1498 9- "537 9-»577 9.1617 9.1657 771 772 773 774 775 59 44 41 59 59 84 59 75 29 59 9° 76 60 06 25 458 314 on 460 099 648 461 889 917 463 684 824 465 484 375 27.7669 27.7849 27.8029 27.8209 27.8388 9.1696 9-1736 9-1775 9.181S 9.1855 776 777 778 779 780 60 21 76 60 37 29 60 52 84 60 68 41 60 84 00 467 288 576 469 097 433 470 910 952 472 729 139 474 552 000 27.8568 27.8747 27.8927 27.9106 27.9285 9.1894 9-1933 9-1973 9.2012 9.2052 781 782 783 784 785 60 99 61 61 15 24 61 30 89 61 46 56 61 62 25 476 379 541 478 211 768 480 048 687 481 890 304 483 736 625 27.9464 27.9643 27.9821 28.0000 28.0179 9.2091 9.2130 g.2170 9.2209 9.2248 786 787 788 789 790 61 77 96 61 93 69 62 09 44 62 25 21 62 41 00 485 587 656 487 443 403 489 303 872 491 169 069 493 039 00° 28.0357 28.0535 28.0713 28.0891 28.1069 9.2287 9.2326 9.2365 9.2404 9-2443 791 792 793 794 795 62 56 81 62 72 64 62 88 49 63 04 36 63 20 25 494 9'3 671 496 793 088 498 677 257 500 566 184 502 459 875 28.1247 28.1425 28.1603 28.1780 28.1957 9.2482 9.2521 9.2560 9-2599 9.2638 796 797 798 799 800 63 36 16 63 52 09 63 68 04 63 84 01 64 00 00 504 358 336 506 261 573 508 169 592 510 082 399 512 000 000 28.213s 28.2312 28.2489 28.2666 28.2843 9.2677 9.2716 9.2754 9-2793 9.2832 Number Square. Cube. Square Root. Cube Root 80 1 802 803 804 805 64 16 01 64 32 04 64 48 09 64 64 16 64 80 25 513 922 401 515 849 608 517 781 627 519 718 464 521 660 125 28.3019 28.3196 28.3373 28.3549 28.3725 9.2870 9.2909 9.2948 9.2986 9.3025 806 807 808 809 810 64 96 36 65 12 49 65 28 64 65 44 81 65 61 CO 523 606 616 525 SS7 943 527 514 112 529 475 129 53 ( 441 000 28.3901 28.4077 28.4253 28.4429 28.4605 9.3063 9.3102 9.3140 9-3 '79 9.3217 8[i 812 814 815 65 77 21 65 93 44 66 09 69 66 25 96 66 42 25 533 4>i 731 S3S 387 328 537 367 797 539 353 144 541 343 375 28.4781 28.4956 28.5132 28.5307 28.5482 9-3255 9-3294 9-3332 9-3370 9.3408 816 8>7 818 819 820 66 58 56 66 74 89 66 9 [ 24 67 07 61 67 24 GO 543 338 496 545 338 513 547 343 432 549 353 259 551 368 000 28.5657 28.5832 28.6007 28.6182 28.6356 9-3447 9-3485 93523 9-3561 9-3599 82[ 822 823 824 825 67 40 41 67 56 84 67 73 29 67 89 76 68 06 25 553 387 661 555 4>2 248 557 441 767 559 476 224 561 51S 625 28.6531 28.6705 28.6880 28.7054 28.7228 9-3637 9-3675 93713 9-3751 9-3789 826 827 828 829 830 68 22 76 68 39 29 68 55 84 68 72 41 68 89 00 563 SS9 976 565 609 283 567 663 552 569 722 789 571 787 000 28.7402 28.7576 28.7750 28.7924 28.8097 9-3827 9-3865 9.3902 9-3940 9-3978 831 832 834 835 69 05 6i 69 22 24 69 38 89 69 55 56 69 72 25 573 856 191 575 930 368 578 009 537 580 093 704 582 182 875 28.8271 28.8444 28.8617 28.8791 28.8964 9.4016 9-4053 9.4091 9.4129 9.4166 836 837 838 839 840 69 88 96 70 05 69 70 22 44 70 39 21 70 56 00 584 277 056 586 376 253 588 480 472 590 589 719 592 704 000 28.9137 28.9310 28.9482 28.965s 28.9828 9.4204 9.4241 9.4279 9.4316 9-4354 841 842 843 844 84 s 70 72 81 70 89 64 71 06 49 71 23 36 71 40 25 594 823 321 596 947 688 599 077 107 601 211 584 603 351 125 29.0000 29.0172 29.0345 29.0517 29.0689 9-4391 9-4429 9.4466 9-4503 9-4541 846 847 848 849 850 71 57 16 71 74 09 71 91 04 72 08 01 72 25 00 60s 495 736 607 645 423 609 800 192 611 960 049 614 125 000 29.0861 29-1033 29.1204 29.1376 29.1548 9.4578 9.4615 9.4652 9.4690 9-4727 Number Square. Cube. Square Root, Cube Root. 8s. 72 42 OI 616 295 051 29.1719 9.4764 852 72 59 04 618 470 208 29.1890 9.4801 853 72 76 09 620 650 477 29.2062 9.4838 854 72 93 16 622 835 864 29.2233 9.4875 855 73 10 25 625 026 375 29.2404 9.4912 ' 856 73 27 36 627 222 016 29.2575 9.4949 i 857 73 44 49 629 422 793 29.2746 94986 • 858 73 61 64 631 628 712 29.2916 9-5023 i ^P 73 78 81 633 839 779 29.3087 9.5060 i 860 _ 73 96 00 636 056 000 29. ',258 9.5097 i 86[ 74 13 21 638 277 381 29.3428 9-5134 ! 862 74 30 44 640 503 928 29.3598 9.517I ! IP 74 47 69 642 735 647 29.3769 9.5207 i 864 74 64 96 644 972 544 29.3939 9-5244 865 74 82 25 647 214 625 29.4109 9.5281 866 74 99 56 649 461 896 29.4279 9-S3'7 j 867 75 16 89 651 714 363 29.4449 9-5354 1 868 75 34 24 653 972 032 29.4618 9-539> 869 75 5' 61 656 234 909 29.4788 9-5427 870 75 69 00 658 503 000 29.4958 9.5464 871 75 86 41 660 776 311 29.5127 9.5501 872 76 03 84 663 054 848 29.5296 95537 873 76 21 29 665 338 617 29.5466 9-5574 874 76 38 76 667 627 624 29.5635 9.5610 875 76 56 25 669 921 875 29.5804 5.5647 876 76 73 76 672 221 376 29-5973 9.5683 877 76 91 29 674 526 133 29.6142 9-5719 878 77 08 84 676 836 152 29.6311 9-5756 879 77 26 41 679 151 439 29.6479 9.5792 880 77 44 00 68 I 472 000 29.6648 9.5828 881 77 61 61 683 797 841 29.6816 9.5865 882 77 79 24 686 128 968 29.6985 9.5901 883 77 96 89 688 465 387 29-7153 9-5937 884 78 14 56 690 807 104 29.7321 9-5973 885 78 32 25 693 154 125 29.7489 9.6010 886 78 49 96 695 506 456 29.7658 9.6046 887 78 67 69 697 864 103 29.7825 9.6082 888 78 85 44 700 227 072 29.7993 9.61 18 889 79 03 21 702 595 369 29.8161 9.6154 890 79 21 00 704 969 000 29.8329 9.6190 891 79 38 81 707 347 971 J29.8496 . 9.6226 892 79 56 64 709 732 288 29.8664 9.6262 893 79 74 49 712 121 957 29.8831 9.6298 894 79 92 36 714 516 984 29.8998 9-6334 89s 80 10 25 716 917 375 29.9166 9.6370 896 80 28 16 719 323 136 29-9333 9.6406 897 80 46 09 721 734 273 29.9500 9.6442 898 80 64 04 724 150 792 29.9666 9.6477 899 80 82 01 726 572 699 29-9833 9-6513 , 900 81 00 00 729 000 000 30.0000 9.6549 Number Square. Cube. Square Root. Cube Root. 901 90Z 9°3 904 90s 81 18 01 81 36 04 81 S4 09 81 72 16 81 90 25 731 432 701 733 870 808 736 3'4 327 738 763 264 741 217 625 30.0167 30-0333 30.0500 30.0666 30.0832 9.6585 9.6620 9.6656 9.6692 9.6727 906 907 908 909 910 82 08 36 82 26 49 82 44 64 82 62 81 82 81 00 743 677 416 746 142 643 748 613 312 7SI 089 429 753 571 000 30.0998 3aii64 30.1330 30.1496 30.1662 9.6763 9-6799 9.6834 9.6870 9.6905 9" 912 9'3 914 91s 82 99 21 83 17 44 83 35 69 83 S3 96 83 72 25 756 058 031 758 55° 82s 761 048 497 763 551 944 766 060 87s 30.1828 30.1993 30.2159 30.2324 30.2490 9.6941 9.6976 9.7012 9.7047 9.7082 9.6 917 9.8 919 920 83 90 56 84 08 89 84 27 24 84 4S 61 84 64 00 768 S7S 296 771 09s 213 773 620 632 776 iS» 559 778 688 000 30.2655 30.2820 30.2985 30-3150 30-3315 9.7118 9-7153 9.7188 9.7224 9-7259 921 922 923 924 92s 84 82 41 85 00 84 85 "9 29 85 37 76 85 56 25 781 229 9C1 783 777 448 786 330 467 788 889 024 791 453 125 30.3480 30-3645 30.3809 30-3974 30.4138 9.7294 9-7329 9.7364 9.7400 9-7435 926 927 . 928 929 930 8s 74 76 55 93 29 86 II 84 86 30 41 86 49 00 794 022 776 796 597 983 799 178 752 801 765 089 804 357 000 30.4302 30-4467 30.4631 30.4795 30-4959 9.7470 9-7505 9.7540 9-7575 9.7610 93' 932 933 934 935 86 67 61 86 86 24 87 04 89 87 23 56 87 42 25 806 954 491 809 557 568 812 166 237 814 780 504 817 400 375 30.5123 30-5287 30.5450 30.5614 30.5778 9-7645 9.7680 9-7715 9.7750 9.7785 936 937 938 939 940 87 60 96 87 79 69 87 98 44 88 17 21 88 36 00 820 025 856 822 656 953 825 293 672 827 936 019 830 584 000 30.5941 30.6105 30.6268 30.6431 30.6594 9.7819 9.7889 9.7924 9-7959 941 942 943 944 945 88 S4 81 88 73 64 88 92 49 89 n 36 89 30 zs 833 237 621 835 896 888 838 561 807 841 232 384 843 908 625 30.6757 30.6920 30.7083 30.7246 30.7409 9-7993 9.8028 9.8063 9.8097 9-8132 946 947 948 949 950 89 49 16 89 68 09 89 87 04 90 06 01 90 25 00 846 59° 536 849 278 123 851 971 392 854 670 349 857 375 000 30.7571 30.7734 30.7896 30.8058 308221 1 9.8167 9.8201 9.8236 9.8270 9.8^05 'Number Square. Cube. Square Root. Cube Root. ' 95' 952 i 953 9S4 9SS 90 44 01 90 63 04 . 90 82 09 91 01 16 91 20 25 860 085 351 862 801 408 8;6s 523 177 868 250 664 8*70 983 875 . 30.8383 30.8545 30.8707 30.8869 30.9031 9.8339 9-8374 9.8408 9.8443 9.8477 956 957 958 959 960 9t 39 36 91 58 49 9t 77 64 91 96 81 92 16 00 873 722 816 876 467 493 879 217 912 881 974 079 884 736 000 30.9192 30.9354 30.9516 30.9677 30.9839 9.8511 9.8546 9.8580 9.8614 9.8648 961 962 1 9G3 ! 9.64 • 9^ 92 3 21 92 54 44 92 73 69 92 92 96 ' 93 '2 25 887 503 681 8^0 277 128 893 056 347 895 841 344 898 632 125 31.0000 31.0161 31.0322 31.0483 31.0644 9.8683 9.8717 9.8751 9.8785 9.8819 966 967 968 969 970 93 3« 56 93 5° 89 93 70 24 93 89 61 94 09 P° 901 428 696 9=4 231 063 967 039 232 909 853 209 912 673 000 31.0805 31.0966 31.I127 31.1288 31.1448 9.8854 9.8888 9.8922 9.§956 9.8990 971 972 973 ' 974 97 S 94 28 41 94, 47 84 94 <'7 29 94 86 76 95 06 is 9 1 5 498 6 1 1 918 330 048 921 1G7 317 924 01,0 424 926 859 375 31.1609 31.1769 31.1929 31.2090 31.2250 9.9024 9.9058 9.9092 9.9126 9.9160 976 977 978 979 980 95 25 76 95 45 29 95 64 84 95 84 41 96 04 00 929 714 176 932 574 833 9 5 44J 352 9 8 313 739 941 l;)2 000 31.2410 31.2570 31.2730 31.2890 31.3050 9.9194 9.9227 9.9261 9.9295 9.9329 981 982 983 984 9.85 96 23 61 96 43 24 96 62 89 96 82 56 97 02 25 944 076 141 946 966 168 949 862 087 952 763 904 9-5 671 625 31.3209 31-3369 31.3528 31.3688 3<.3847 9.9363 9.9396 9-9430 9.9464 9.9497 986 987 988 989 99° 97 21 96 97 41 69 97 61 44 97 81 21 98 01 00 958 585 256 961 504 803 964 430 272 967 361 669 970 299 000 31.4006 31.4166 31.4325 31.4484 31.4643 9.9531 9-9565 9.9598 9.9666 991 992 993 994 995 98 20 81 98 40 64 98 60 49 98 80 36 99 00 25 973 242 271 976 191 488 979 '46 657 982 107 784 985 074 875 31.4802 31.4960 3i.5'i9 31.5278 31.5436 9.9699 9.9733 9.9766 9.9800 9.9833 996 997 998 999 1000 99 20 16 99 40 09 99 60 04 99 80 01 I 00 00 00 988 047 936 991 026 973 994 on 992 997 002 999 I oco 000 000 31.5595 31.5753 31.59" 31.6070 31.6228 9.9866 9.9900 9-9933 9.9967 10.0000