CHAPTER XI

For that, of course, we need not be content with ordinary matter, with that which falls under our senses and whose motions we observe directly. Either we shall suppose that this common matter is formed of atoms whose internal motions elude us, the displacement of the totality alone remaining accessible to our senses. Or else we shall imagine some one of those subtile fluids which under the name of ether or under other names, have at all times played so great a rôle in physical theories.

Often one goes further and regards the ether as the sole primitive matter or even as the only true matter. The more moderate consider common matter as condensed ether, which is nothing startling; but others reduce still further its importance and see in it nothing more than the geometric locus of the ether's singularities. For instance, what we call matter is for Lord Kelvin only the locus of points where the ether is animated by vortex motions; for Riemann, it was the locus of points where ether is constantly destroyed; for other more recent authors, Wiechert or Larmor, it is the locus of points where the ether undergoes a sort of torsion of a very particular nature. If the attempt is made to occupy one of these points of view, I ask myself by what right shall we extend to the ether, under pretext that this is the true matter, mechanical properties observed in ordinary matter, which is only false matter.

The ancient fluids, caloric, electricity, etc., were abandoned when it was perceived that heat is not indestructible. But they were abandoned for another reason also. In materializing them, their individuality was, so to speak, emphasized, a sort of abyss was opened between them. This had to be filled up on the coming of a more vivid feeling of the unity of nature, and the perception of the intimate relations which bind together all its parts. Not only did the old physicists, in multiplying fluids, create entities unnecessarily, but they broke real ties.

It is not sufficient for a theory to affirm no false relations, it must not hide true relations.

And does our ether really exist? We know the origin of our belief in the ether. If light reaches us from a distant star, during several years it was no longer on the star and not yet on the earth; it must then be somewhere and sustained, so to speak, by some material support.

The same idea may be expressed under a more mathematical and more abstract form. What we ascertain are the changes undergone by material molecules; we see, for instance, that our photographic plate feels the consequences of phenomena of which the incandescent mass of the star was the theater several years before. Now, in ordinary mechanics the state of the system studied depends only on its state at an instant immediately anterior; therefore the system satisfies differential equations. On the contrary, if we should not believe in the ether, the state of the material universe would depend not only on the state immediately preceding, but on states much older; the system would satisfy equations of finite differences. It is to escape this derogation of the general laws of mechanics that we have invented the ether.

That would still only oblige us to fill up, with the ether, the interplanetary void, but not to make it penetrate the bosom of the material media themselves. Fizeau's experiment goes further. By the interference of rays which have traversed air or water in motion, it seems to show us two different media interpenetrating and yet changing place one with regard to the other.

We seem to touch the ether with the finger.

Yet experiments may be conceived which would make us touch it still more nearly. Suppose Newton's principle, of the equality of action and reaction, no longer true if applied to matter alone, and that we have established it. The geometric sum of all the forces applied to all the material molecules would no longer be null. It would be necessary then, if we did not wish to change all mechanics, to introduce the ether, in order that this action which matter appeared to experience should be counterbalanced by the reaction of matter on something.

Or again, suppose we discover that optical and electrical phenomena are influenced by the motion of the earth. We should be led to conclude that these phenomena might reveal to us not only the relative motions of material bodies, but what would seem to be their absolute motions. Again, an ether would be necessary, that these so-called absolute motions should not be their displacements with regard to a void space, but their displacements with regard to something concrete.

Shall we ever arrive at that? I have not this hope, I shall soon say why, and yet it is not so absurd, since others have had it.

For instance, if the theory of Lorentz, of which I shall speak in detail further on in Chapter XIII., were true, Newton's principle would not apply to matter alone, and the difference would not be very far from being accessible to experiment.

On the other hand, many researches have been made on the influence of the earth's motion. The results have always been negative. But these experiments were undertaken because the outcome was not sure in advance, and, indeed, according to the ruling theories, the compensation would be only approximate, and one might expect to see precise methods give positive results.

I believe that such a hope is illusory; it was none the less interesting to show that a success of this sort would open to us, in some sort, a new world.

And now I must be permitted a digression; I must explain, in fact, why I do not believe, despite Lorentz, that more precise observations can ever put in evidence anything else than the relative displacements of material bodies. Experiments have been made which should have disclosed the terms of the first order; the results have been negative; could that be by chance? No one has assumed that; a general explanation has been sought, and Lorentz has found it; he has shown that the terms of the first order must destroy each other, but not those of the second. Then more precise experiments were made; they also were negative; neither could this be the effect of chance; an explanation was necessary; it was found; they always are found; of hypotheses there is never lack.

But this is not enough; who does not feel that this is still to leave to chance too great a rôle? Would not that also be a chance, this singular coincidence which brought it about that a certain circumstance should come just in the nick of time to destroy the terms of the first order, and that another circumstance, wholly different, but just as opportune, should take upon itself to destroy those of the second order? No, it is necessary to find an explanation the same for the one as for the other, and then everything leads us to think that this explanation will hold good equally well for the terms of higher order, and that the mutual destruction of these terms will be rigorous and absolute.

Present State of the Science.—In the history of the development of physics we distinguish two inverse tendencies.

On the one hand, new bonds are continually being discovered between objects which had seemed destined to remain forever unconnected; scattered facts cease to be strangers to one another; they tend to arrange themselves in an imposing synthesis. Science advances toward unity and simplicity.

On the other hand, observation reveals to us every day new phenomena; they must long await their place and sometimes, to make one for them, a corner of the edifice must be demolished. In the known phenomena themselves, where our crude senses showed us uniformity, we perceive details from day to day more varied; what we believed simple becomes complex, and science appears to advance toward variety and complexity.

Of these two inverse tendencies, which seem to triumph turn about, which will win? If it be the first, science is possible; but nothing proves this a priori, and it may well be feared that after having made vain efforts to bend nature in spite of herself to our ideal of unity, submerged by the ever-rising flood of our new riches, we must renounce classifying them, abandon our ideal, and reduce science to the registration of innumerable recipes.

To this question we can not reply. All we can do is to observe the science of to-day and compare it with that of yesterday. From this examination we may doubtless draw some encouragement.

Half a century ago, hope ran high. The discovery of the conservation of energy and of its transformations had revealed to us the unity of force. Thus it showed that the phenomena of heat could be explained by molecular motions. What was the nature of these motions was not exactly known, but no one doubted that it soon would be. For light, the task seemed completely accomplished. In what concerns electricity, things were less advanced. Electricity had just annexed magnetism. This was a considerable step toward unity, and a decisive step.

But how should electricity in its turn enter into the general unity, how should it be reduced to the universal mechanism?

Of that no one had any idea. Yet the possibility of this reduction was doubted by none, there was faith. Finally, in what concerns the molecular properties of material bodies, the reduction seemed still easier, but all the detail remained hazy. In a word, the hopes were vast and animated, but vague. To-day, what do we see? First of all, a prime progress, immense progress. The relations of electricity and light are now known; the three realms, of light, of electricity and of magnetism, previously separated, form now but one; and this annexation seems final.

This conquest, however, has cost us some sacrifices. The optical phenomena subordinate themselves as particular cases under the electrical phenomena; so long as they remained isolated, it was easy to explain them by motions that were supposed to be known in all their details, that was a matter of course; but now an explanation, to be acceptable, must be easily capable of extension to the entire electric domain. Now that is a matter not without difficulties.

The most satisfactory theory we have is that of Lorentz, which, as we shall see in the last chapter, explains electric currents by the motions of little electrified particles; it is unquestionably the one which best explains the known facts, the one which illuminates the greatest number of true relations, the one of which most traces will be found in the final construction. Nevertheless, it still has a serious defect, which I have indicated above; it is contrary to Newton's law of the equality of action and reaction; or rather, this principle, in the eyes of Lorentz, would not be applicable to matter alone; for it to be true, it would be necessary to take account of the action of the ether on matter and of the reaction of matter on the ether.

Now, from what we know at present, it seems probable that things do not happen in this way.

However that may be, thanks to Lorentz, Fizeau's results on the optics of moving bodies, the laws of normal and anomalous dispersion and of absorption find themselves linked to one another and to the other properties of the ether by bonds which beyond any doubt will never more be broken. See the facility with which the new Zeeman effect has found its place already and has even aided in classifying Faraday's magnetic rotation which had defied Maxwell's efforts; this facility abundantly proves that the theory of Lorentz is not an artificial assemblage destined to fall asunder. It will probably have to be modified, but not destroyed.

But Lorentz had no aim beyond that of embracing in one totality all the optics and electrodynamics of moving bodies; he never pretended to give a mechanical explanation of them. Larmor goes further; retaining the theory of Lorentz in essentials, he grafts upon it, so to speak, MacCullagh's ideas on the direction of the motions of the ether.

According to him, the velocity of the ether would have the same direction and the same magnitude as the magnetic force. However ingenious this attempt may be, the defect of the theory of Lorentz remains and is even aggravated. With Lorentz, we do not know what are the motions of the ether; thanks to this ignorance, we may suppose them such that, compensating those of matter, they reestablish the equality of action and reaction. With Larmor, we know the motions of the ether, and we can ascertain that the compensation does not take place.

If Larmor has failed, as it seems to me he has, does that mean that a mechanical explanation is impossible? Far from it: I have said above that when a phenomenon obeys the two principles of energy and of least action, it admits of an infinity of mechanical explanations; so it is, therefore, with the optical and electrical phenomena.

But this is not enough: for a mechanical explanation to be good, it must be simple; for choosing it among all which are possible, there should be other reasons besides the necessity of making a choice. Well, we have not as yet a theory satisfying this condition and consequently good for something. Must we lament this? That would be to forget what is the goal sought; this is not mechanism; the true, the sole aim is unity.

We must therefore set bounds to our ambition; let us not try to formulate a mechanical explanation; let us be content with showing that we could always find one if we wished to. In this regard we have been successful; the principle of the conservation of energy has received only confirmations; a second principle has come to join it, that of least action, put under the form which is suitable for physics. It also has always been verified, at least in so far as concerns reversible phenomena which thus obey the equations of Lagrange, that is to say, the most general laws of mechanics.

Irreversible phenomena are much more rebellious. Yet these also are being coordinated, and tend to come into unity; the light which has illuminated them has come to us from Carnot's principle. Long did thermodynamics confine itself to the study of the dilatation of bodies and their changes of state. For some time past it has been growing bolder and has considerably extended its domain. We owe to it the theory of the galvanic battery and that of the thermoelectric phenomena; there is not in all physics a corner that it has not explored, and it has attacked chemistry itself.

Everywhere the same laws reign; everywhere, under the diversity of appearances, is found again Carnot's principle; everywhere also is found that concept so prodigiously abstract of entropy, which is as universal as that of energy and seems like it to cover a reality. Radiant heat seemed destined to escape it; but recently we have seen it submit to the same laws.

In this way fresh analogies are revealed to us, which may often be followed into detail; ohmic resistance resembles the viscosity of liquids; hysteresis would resemble rather the friction of solids. In all cases, friction would appear to be the type which the most various irreversible phenomena copy, and this kinship is real and profound.

Of these phenomena a mechanical explanation, properly so called, has also been sought. They hardly lent themselves to it. To find it, it was necessary to suppose that the irreversibility is only apparent, that the elementary phenomena are reversible and obey the known laws of dynamics. But the elements are extremely numerous and blend more and more, so that to our crude sight all appears to tend toward uniformity, that is, everything seems to go forward in the same sense without hope of return. The apparent irreversibility is thus only an effect of the law of great numbers. But only a being with infinitely subtile senses, like Maxwell's imaginary demon, could disentangle this inextricable skein and turn back the course of the universe.

This conception, which attaches itself to the kinetic theory of gases, has cost great efforts and has not, on the whole, been fruitful; but it may become so. This is not the place to examine whether it does not lead to contradictions and whether it is in conformity with the true nature of things.

We signalize, however, M. Gouy's original ideas on the Brownian movement. According to this scientist, this singular motion should escape Carnot's principle. The particles which it puts in swing would be smaller than the links of that so compacted skein; they would therefore be fitted to disentangle them and hence to make the world go backward. We should almost see Maxwell's demon at work.

To summarize, the previously known phenomena are better and better classified, but new phenomena come to claim their place; most of these, like the Zeeman effect, have at once found it.

But we have the cathode rays, the X-rays, those of uranium and of radium. Herein is a whole world which no one suspected. How many unexpected guests must be stowed away?

No one can yet foresee the place they will occupy. But I do not believe they will destroy the general unity; I think they will rather complete it. On the one hand, in fact, the new radiations seem connected with the phenomena of luminescence; not only do they excite fluorescence, but they sometimes take birth in the same conditions as it.

Nor are they without kinship with the causes which produce the electric spark under the action of the ultra-violet light.

Finally, and above all, it is believed that in all these phenomena are found true ions, animated, it is true, by velocities incomparably greater than in the electrolytes.

That is all very vague, but it will all become more precise.

Phosphorescence, the action of light on the spark, these were regions rather isolated and consequently somewhat neglected by investigators. One may now hope that a new path will be constructed which will facilitate their communications with the rest of science.

Not only do we discover new phenomena, but in those we thought we knew, unforeseen aspects reveal themselves. In the free ether, the laws retain their majestic simplicity; but matter, properly so called, seems more and more complex; all that is said of it is never more than approximate, and at each instant our formulas require new terms.

Nevertheless the frames are not broken; the relations that we have recognized between objects we thought simple still subsist between these same objects when we know their complexity, and it is that alone which is of importance. Our equations become, it is true, more and more complicated, in order to embrace more closely the complexity of nature; but nothing is changed in the relations which permit the deducing of these equations one from another. In a word, the form of these equations has persisted.

Take, for example, the laws of reflection: Fresnel had established them by a simple and seductive theory which experiment seemed to confirm. Since then more precise researches have proved that this verification was only approximate; they have shown everywhere traces of elliptic polarization. But, thanks to the help that the first approximation gave us, we found forthwith the cause of these anomalies, which is the presence of a transition layer; and Fresnel's theory has subsisted in its essentials.

But there is a reflection we can not help making: All these relations would have remained unperceived if one had at first suspected the complexity of the objects they connect. It has long been said: If Tycho had had instruments ten times more precise neither Kepler, nor Newton, nor astronomy would ever have been. It is a misfortune for a science to be born too late, when the means of observation have become too perfect. This is to-day the case with physical chemistry; its founders are embarrassed in their general grasp by third and fourth decimals; happily they are men of a robust faith.

The better one knows the properties of matter the more one sees continuity reign. Since the labors of Andrews and of van der Waals, we get an idea of how the passage is made from the liquid to the gaseous state and that this passage is not abrupt. Similarly, there is no gap between the liquid and solid states, and in the proceedings of a recent congress is to be seen, alongside of a work on the rigidity of liquids, a memoir on the flow of solids.

By this tendency no doubt simplicity loses; some phenomenon was formerly represented by several straight lines, now these straight lines must be joined by curves more or less complicated. In compensation unity gains notably. Those cut-off categories quieted the mind, but they did not satisfy it.

Finally the methods of physics have invaded a new domain, that of chemistry; physical chemistry is born. It is still very young, but we already see that it will enable us to connect such phenomena as electrolysis, osmosis and the motions of ions.

From this rapid exposition, what shall we conclude?

Everything considered, we have approached unity; we have not been as quick as was hoped fifty years ago, we have not always taken the predicted way; but, finally, we have gained ever so much ground.

Doubtless it will be astonishing to find here thoughts about the calculus of probabilities. What has it to do with the method of the physical sciences? And yet the questions I shall raise without solving present themselves naturally to the philosopher who is thinking about physics. So far is this the case that in the two preceding chapters I have often been led to use the words 'probability' and 'chance.'

'Predicted facts,' as I have said above, 'can only be probable.' "However solidly founded a prediction may seem to us to be, we are never absolutely sure that experiment will not prove it false. But the probability is often so great that practically we may be satisfied with it." And a little further on I have added: "See what a rôle the belief in simplicity plays in our generalizations. We have verified a simple law in a great number of particular cases; we refuse to admit that this coincidence, so often repeated, can be a mere effect of chance...."

Thus in a multitude of circumstances the physicist is in the same position as the gambler who reckons up his chances. As often as he reasons by induction, he requires more or less consciously the calculus of probabilities, and this is why I am obliged to introduce a parenthesis, and interrupt our study of method in the physical sciences in order to examine a little more closely the value of this calculus, and what confidence it merits.

The very name calculus of probabilities is a paradox. Probability opposed to certainty is what we do not know, and how can we calculate what we do not know? Yet many eminent savants have occupied themselves with this calculus, and it can not be denied that science has drawn therefrom no small advantage.

How can we explain this apparent contradiction?

Has probability been defined? Can it even be defined? And if it can not, how dare we reason about it? The definition, it will be said, is very simple: the probability of an event is the ratio of the number of cases favorable to this event to the total number of possible cases.

A simple example will show how incomplete this definition is. I throw two dice. What is the probability that one of the two at least turns up a six? Each die can turn up in six different ways; the number of possible cases is 6 × 6 = 36; the number of favorable cases is 11; the probability is 11/36.

That is the correct solution. But could I not just as well say: The points which turn up on the two dice can form 6 × 7/2 = 21 different combinations? Among these combinations 6 are favorable; the probability is 6/21.

Now why is the first method of enumerating the possible cases more legitimate than the second? In any case it is not our definition that tells us.

We are therefore obliged to complete this definition by saying: '... to the total number of possible cases provided these cases are equally probable.' So, therefore, we are reduced to defining the probable by the probable.
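The two enumerations of the dice problem are easy to compare in modern notation. The sketch below counts both ways and then simulates throws: the observed frequency settles near 11/36, because it is the thirty-six ordered outcomes, not the twenty-one combinations, that are equally probable.

```python
from itertools import product, combinations_with_replacement
import random

# Ordered pairs: each of the 36 outcomes is equally probable.
ordered = list(product(range(1, 7), repeat=2))
fav_ordered = [p for p in ordered if 6 in p]
print(len(fav_ordered), "/", len(ordered))      # 11 / 36

# Unordered combinations: 21 cases, but NOT equally probable,
# since a pair like (3, 5) arises in two ways and (3, 3) in only one.
unordered = list(combinations_with_replacement(range(1, 7), 2))
fav_unordered = [p for p in unordered if 6 in p]
print(len(fav_unordered), "/", len(unordered))  # 6 / 21

# Simulation: the empirical frequency approaches 11/36, about 0.3056.
random.seed(0)
n = 200_000
hits = sum(1 for _ in range(n)
           if 6 in (random.randint(1, 6), random.randint(1, 6)))
print(hits / n)
```

The dice themselves thus decide between the two enumerations, which is exactly the point: the definition alone could not.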

How can we know that two possible cases are equally probable? Will it be by a convention? If we place at the beginning of each problem an explicit convention, well and good. We shall then have nothing to do but apply the rules of arithmetic and of algebra, and we shall complete our calculation without our result leaving room for doubt. But if we wish to make the slightest application of this result, we must prove our convention was legitimate, and we shall find ourselves in the presence of the very difficulty we thought to escape.

Will it be said that good sense suffices to show us what convention should be adopted? Alas! M. Bertrand has amused himself by discussing the following simple problem: "What is the probability that a chord of a circle may be greater than the side of the inscribed equilateral triangle?" The illustrious geometer successively adopted two conventions which good sense seemed equally to dictate and with one he found 1/2, with the other 1/3.

The conclusion which seems to follow from all this is that the calculus of probabilities is a useless science, and that the obscure instinct which we may call good sense, and to which we are wont to appeal to legitimatize our conventions, must be distrusted.

But neither can we subscribe to this conclusion; we can not do without this obscure instinct. Without it science would be impossible, without it we could neither discover a law nor apply it. Have we the right, for instance, to enunciate Newton's law? Without doubt, numerous observations are in accord with it; but is not this a simple effect of chance? Besides how do we know whether this law, true for so many centuries, will still be true next year? To this objection, you will find nothing to reply, except: 'That is very improbable.'

But grant the law. Thanks to it, I believe myself able to calculate the position of Jupiter a year from now. Have I the right to believe this? Who can tell if a gigantic mass of enormous velocity will not between now and that time pass near the solar system, and produce unforeseen perturbations? Here again the only answer is: 'It is very improbable.'

From this point of view, all the sciences would be only unconscious applications of the calculus of probabilities. To condemn this calculus would be to condemn the whole of science.

I shall dwell lightly on the scientific problems in which the intervention of the calculus of probabilities is more evident. In the forefront of these is the problem of interpolation, in which, knowing a certain number of values of a function, we seek to divine the intermediate values.

I shall likewise mention: the celebrated theory of errors of observation, to which I shall return later; the kinetic theory of gases, a well-known hypothesis, wherein each gaseous molecule is supposed to describe an extremely complicated trajectory, but in which, through the effect of great numbers, the mean phenomena, alone observable, obey the simple laws of Mariotte and Gay-Lussac.

All these theories are based on the laws of great numbers, and the calculus of probabilities would evidently involve them in its ruin. It is true that they have only a particular interest and that, save as far as interpolation is concerned, these are sacrifices to which we might readily be resigned.

But, as I have said above, it would not be only these partial sacrifices that would be in question; it would be the legitimacy of the whole of science that would be challenged.

I quite see that it might be said: "We are ignorant, and yet we must act. For action, we have not time to devote ourselves to an inquiry sufficient to dispel our ignorance. Besides, such an inquiry would demand an infinite time. We must therefore decide without knowing; we are obliged to do so, hit or miss, and we must follow rules without quite believing them. What I know is not that such and such a thing is true, but that the best course for me is to act as if it were true." The calculus of probabilities, and consequently science itself, would thenceforth have merely a practical value.

Unfortunately the difficulty does not thus disappear. A gambler wants to try a coup; he asks my advice. If I give it to him, I shall use the calculus of probabilities, but I shall not guarantee success. This is what I shall call subjective probability. In this case, we might be content with the explanation of which I have just given a sketch. But suppose that an observer is present at the game, that he notes all its coups, and that the game goes on a long time. When he makes a summary of his book, he will find that events have taken place in conformity with the laws of the calculus of probabilities. This is what I shall call objective probability, and it is this phenomenon which has to be explained.

There are numerous insurance companies which apply the rules of the calculus of probabilities, and they distribute to their shareholders dividends whose objective reality can not be contested. To invoke our ignorance and the necessity to act does not suffice to explain them.

Thus absolute skepticism is not admissible. We may distrust, but we can not condemn en bloc. Discussion is necessary.

I. Classification of the Problems of Probability.—In order to classify the problems which present themselves à propos of probabilities, we may look at them from many different points of view, and, first, from the point of view of generality. I have said above that probability is the ratio of the number of favorable cases to the number of possible cases. What for want of a better term I call the generality will increase with the number of possible cases. This number may be finite, as, for instance, if we take a throw of the dice in which the number of possible cases is 36. That is the first degree of generality.

But if we ask, for example, what is the probability that a point within a circle is within the inscribed square, there are as many possible cases as there are points in the circle, that is to say, an infinity. This is the second degree of generality. Generality can be pushed further still. We may ask the probability that a function will satisfy a given condition. There are then as many possible cases as one can imagine different functions. This is the third degree of generality, to which we rise, for instance, when we seek to find the most probable law in conformity with a finite number of observations.
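For the circle-and-square example, the natural convention is to take the ratio of areas, which for the square inscribed in the unit circle gives 2/π, about 0.637. A minimal Monte Carlo sketch, assuming a uniform density of points over the disc:

```python
import random

random.seed(2)
half = 2 ** 0.5 / 2   # half-side of the square inscribed in the unit circle
N = 200_000
inside_circle = inside_square = 0
while inside_circle < N:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y <= 1:            # keep only points uniform in the circle
        inside_circle += 1
        if abs(x) <= half and abs(y) <= half:
            inside_square += 1
p = inside_square / inside_circle
print(p)   # near 2/pi, about 0.6366
```

Note that the uniform density is itself a convention, of the same kind as the choice of equally probable cases for the dice.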

We may place ourselves at a point of view wholly different. If we were not ignorant, there would be no probability, there would be room for nothing but certainty. But our ignorance can not be absolute, for then there would no longer be any probability at all, since a little light is necessary to attain even this uncertain science. Thus the problems of probability may be classed according to the greater or less depth of this ignorance.

In mathematics even we may set ourselves problems of probability. What is the probability that the fifth decimal of a logarithm taken at random from a table is a '9'? There is no hesitation in answering that this probability is 1/10; here we possess all the data of the problem. We can calculate our logarithm without recourse to the table, but we do not wish to give ourselves the trouble. This is the first degree of ignorance.
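The claim about the logarithm table can be checked empirically. The sketch below takes common logarithms of numbers drawn at random, as a stand-in for entries of a table, and tallies the fifth decimal; each digit, '9' included, appears with frequency near 1/10.

```python
import math
import random

random.seed(3)
counts = [0] * 10
N = 100_000
for _ in range(N):
    n = random.randint(1000, 999_999)            # an entry "taken at random"
    digit = int(math.log10(n) * 10**5) % 10      # the fifth decimal of log10(n)
    counts[digit] += 1
freqs = [c / N for c in counts]
print(freqs[9])   # near 0.1
```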

In the physical sciences our ignorance becomes greater. The state of a system at a given instant depends on two things: Its initial state, and the law according to which that state varies. If we know both this law and this initial state, we shall have then only a mathematical problem to solve, and we fall back upon the first degree of ignorance.

But it often happens that we know the law, and do not know the initial state. It may be asked, for instance, what is the present distribution of the minor planets? We know that from all time they have obeyed the laws of Kepler, but we do not know what was their initial distribution.

In the kinetic theory of gases, we assume that the gaseous molecules follow rectilinear trajectories, and obey the laws of impact of elastic bodies. But, as we know nothing of their initial velocities, we know nothing of their present velocities.

The calculus of probabilities only enables us to predict the mean phenomena which will result from the combination of these velocities. This is the second degree of ignorance.

Finally it is possible that not only the initial conditions but the laws themselves are unknown. We then reach the third degree of ignorance and in general we can no longer affirm anything at all as to the probability of a phenomenon.

It often happens that instead of trying to guess an event, by means of a more or less imperfect knowledge of the law, the events may be known and we want to find the law; or that instead of deducing effects from causes, we wish to deduce the causes from the effects. These are the problems called probability of causes, the most interesting from the point of view of their scientific applications.

I play écarté with a gentleman I know to be perfectly honest. He is about to deal. What is the probability of his turning up the king? It is 1/8. This is a problem of the probability of effects.
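The 1/8 follows from the composition of the pack: écarté is played with a thirty-two-card piquet deck containing four kings, and the turned-up card is equally likely to be any of the thirty-two. A trivial check:

```python
from fractions import Fraction

# Écarté uses a 32-card piquet pack with 4 kings; the turned-up
# card is equally likely to be any of the 32.
p_king = Fraction(4, 32)
print(p_king)  # 1/8
```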

I play with a gentleman whom I do not know. He has dealt ten times, and he has turned up the king six times. What is the probability that he is a sharper? This is a problem in the probability of causes.

It may be said that this is the essential problem of the experimental method. I have observed n values of x and the corresponding values of y. I have found that the ratio of the latter to the former is practically constant. There is the event, what is the cause?

Is it probable that there is a general law according to which y would be proportional to x, and that the small divergencies are due to errors of observation? This is a type of question that one is ever asking, and which we unconsciously solve whenever we are engaged in scientific work.

I am now going to pass in review these different categories of problems, discussing in succession what I have called above subjective and objective probability.

II. Probability in Mathematics.—The impossibility of squaring the circle has been proved since 1882; but even before that date all geometers considered that impossibility as so 'probable,' that the Academy of Sciences rejected without examination the alas! too numerous memoirs on this subject, that some unhappy madmen sent in every year.

Was the Academy wrong? Evidently not, and it knew well that in acting thus it did not run the least risk of stifling a discovery of moment. The Academy could not have proved that it was right; but it knew quite well that its instinct was not mistaken. If you had asked the Academicians, they would have answered: "We have compared the probability that an unknown savant should have found out what has been vainly sought for so long, with the probability that there is one madman the more on the earth; the second appears to us the greater." These are very good reasons, but there is nothing mathematical about them; they are purely psychological.

And if you had pressed them further they would have added: "Why do you suppose a particular value of a transcendental function to be an algebraic number; and if π were a root of an algebraic equation, why do you suppose this root to be a period of the function sin 2x, and not the same about the other roots of this same equation?" To sum up, they would have invoked the principle of sufficient reason in its vaguest form.

But what could they deduce from it? At most a rule of conduct for the employment of their time, more usefully spent at their ordinary work than in reading a lucubration that inspired in them a legitimate distrust. But what I call above objective probability has nothing in common with this first problem.

It is otherwise with the second problem.

Consider the first 10,000 logarithms that we find in a table. Among these 10,000 logarithms I take one at random. What is the probability that its third decimal is an even number? You will not hesitate to answer 1/2; and in fact if you pick out in a table the third decimals of these 10,000 numbers, you will find nearly as many even digits as odd.

Or if you prefer, let us write 10,000 numbers corresponding to our 10,000 logarithms, each of these numbers being +1 if the third decimal of the corresponding logarithm is even, and −1 if odd. Then take the mean of these 10,000 numbers.

I do not hesitate to say that the mean of these 10,000 numbers is probably 0, and if I were actually to calculate it I should verify that it is extremely small.

But even this verification is needless. I might have rigorously proved that this mean is less than 0.003. To prove this result, I should have had to make a rather long calculation for which there is no room here, and for which I confine myself to citing an article I published in the Revue générale des Sciences, April 15, 1899. The only point to which I wish to call attention is the following: in this calculation, I should have needed only to rest my case on two facts, to wit, that the first and second derivatives of the logarithm remain, in the interval considered, between certain limits.
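The verification can nevertheless be sketched numerically; the version below assumes the table is simply that of the first 10,000 integers (the exact bound 0.003 belongs to the calculation cited above, so only the smallness of the mean is illustrated here):

```python
import math

# +1 when the third decimal of log10(n) is even, -1 when odd,
# over the first 10,000 logarithms; the mean should be very small.
signs = [1 if int(f"{math.log10(n):.3f}"[-1]) % 2 == 0 else -1
         for n in range(1, 10001)]
mean = sum(signs) / len(signs)
print(mean)
```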

Hence this important consequence that the property is true not only of the logarithm, but of any continuous function whatever, since the derivatives of every continuous function are limited.

If I was certain beforehand of the result, it is first, because I had often observed analogous facts for other continuous functions; and next, because I made in my mind, in a more or less unconscious and imperfect manner, the reasoning which led me to the preceding inequalities, just as a skilled calculator before finishing his multiplication takes into account what it should come to approximately.

And besides, since what I call my intuition was only an incomplete summary of a piece of true reasoning, it is clear why observation has confirmed my predictions, and why the objective probability has been in agreement with the subjective probability.

As a third example I shall choose the following problem: A number u is taken at random, and n is a given very large integer. What is the probable value of sin nu? This problem has no meaning by itself. To give it one a convention is needed. We shall agree that the probability for the number u to lie between a and a + da is equal to ϕ(a)da; that it is therefore proportional to the infinitely small interval da, and equal to this multiplied by a function ϕ(a) depending only on a. As for this function, I choose it arbitrarily, but I must assume it to be continuous. The value of sin nu remaining the same when u increases by 2π, I may without loss of generality assume that u lies between 0 and 2π, and I shall thus be led to suppose that ϕ(a) is a periodic function whose period is 2π.

The probable value sought is readily expressed by a simple integral, and it is easy to show that this integral is less than

2πM_k / n^k,

M_k being the maximum value of the kth derivative of ϕ(u). We see then that if the kth derivative is finite, our probable value will tend toward 0 when n increases indefinitely, and that more rapidly than 1/n^(k−1).

The probable value of sin nu when n is very large is therefore naught. To define this value I required a convention; but the result remains the same whatever that convention may be. I have imposed upon myself only slight restrictions in assuming that the function ϕ(a) is continuous and periodic, and these hypotheses are so natural that we may ask ourselves how they can be escaped.
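A numerical sketch of this third example, with an arbitrarily chosen smooth density (ϕ(u) proportional to exp(sin u) is an assumption of the illustration, not forced by the text):

```python
import math

# Estimate the probable value of sin(nu) under the density
# phi(u) proportional to exp(sin u), by a Riemann sum over [0, 2*pi].
def probable_value(n, steps=20000):
    h = 2 * math.pi / steps
    num = sum(math.exp(math.sin(i * h)) * math.sin(n * i * h) for i in range(steps))
    den = sum(math.exp(math.sin(i * h)) for i in range(steps))
    return num / den

# The value falls off rapidly as n grows.
for n in (1, 3, 5, 9):
    print(n, probable_value(n))
```

Since this ϕ is infinitely differentiable, every derivative is bounded, and the decay is faster than any power of 1/n, in agreement with the inequality given above.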

Examination of the three preceding examples, so different in all respects, has already given us a glimpse, on the one hand, of the rôle of what philosophers call the principle of sufficient reason, and, on the other hand, of the importance of the fact that certain properties are common to all continuous functions. The study of probability in the physical sciences will lead us to the same result.

III. Probability in the Physical Sciences.—We come now to the problems connected with what I have called the second degree of ignorance, those, namely, in which we know the law, but do not know the initial state of the system. I could multiply examples, but will take only one. What is the probable present distribution of the minor planets on the zodiac?

We know they obey the laws of Kepler. We may even, without at all changing the nature of the problem, suppose that their orbits are all circular, and situated in the same plane, and that we know this plane. On the other hand, we are in absolute ignorance as to what was their initial distribution. However, we do not hesitate to affirm that their distribution is now nearly uniform. Why?

Let b be the longitude of a minor planet in the initial epoch, that is to say, the epoch zero. Let a be its mean motion. Its longitude at the present epoch, that is to say at the epoch t, will be at + b. To say that the present distribution is uniform is to say that the mean value of the sines and cosines of multiples of at + b is zero. Why do we assert this?

Let us represent each minor planet by a point in a plane, to wit, by a point whose coordinates are precisely a and b. All these representative points will be contained in a certain region of the plane, but as they are very numerous this region will appear dotted with points. We know nothing else about the distribution of these points.

What do we do when we wish to apply the calculus of probabilities to such a question? What is the probability that one or more representative points may be found in a certain portion of the plane? In our ignorance, we are reduced to making an arbitrary hypothesis. To explain the nature of this hypothesis, allow me to use, in lieu of a mathematical formula, a crude but concrete image. Let us suppose that over the surface of our plane has been spread an imaginary substance, whose density is variable, but varies continuously. We shall then agree to say that the probable number of representative points to be found on a portion of the plane is proportional to the quantity of fictitious matter found there. If we have then two regions of the plane of the same extent, the probabilities that a representative point of one of our minor planets is found in one or the other of these regions will be to one another as the mean densities of the fictitious matter in the one and the other region.

Here then are two distributions, one real, in which the representative points are very numerous, very close together, but discrete like the molecules of matter in the atomic hypothesis; the other remote from reality, in which our representative points are replaced by continuous fictitious matter. We know that the latter can not be real, but our ignorance forces us to adopt it.

If again we had some idea of the real distribution of the representative points, we could arrange it so that in a region of some extent the density of this imaginary continuous matter would be nearly proportional to the number of the representative points, or, if you wish, to the number of atoms which are contained in that region. Even that is impossible, and our ignorance is so great that we are forced to choose arbitrarily the function which defines the density of our imaginary matter. Only we shall be forced to a hypothesis from which we can hardly get away, we shall suppose that this function is continuous. That is sufficient, as we shall see, to enable us to reach a conclusion.

What is at the instant t the probable distribution of the minor planets? Or rather what is the probable value of the sine of the longitude at the instant t, that is to say of sin (at + b)? We made at the outset an arbitrary convention, but if we adopt it, this probable value is entirely defined. Divide the plane into elements of surface. Consider the value of sin (at + b) at the center of each of these elements; multiply this value by the surface of the element, and by the corresponding density of the imaginary matter. Take then the sum for all the elements of the plane. This sum, by definition, will be the probable mean value we seek, which will thus be expressed by a double integral. It may be thought at first that this mean value depends on the choice of the function which defines the density of the imaginary matter, and that, as this function ϕ is arbitrary, we can, according to the arbitrary choice which we make, obtain any mean value. This is not so.

A simple calculation shows that our double integral decreases very rapidly when t increases. Thus I could not quite tell what hypothesis to make as to the probability of this or that initial distribution; but whatever the hypothesis made, the result will be the same, and this gets me out of my difficulty.

Whatever be the function ϕ, the mean value tends toward zero as t increases, and as the minor planets have certainly accomplished a very great number of revolutions, I may assert that this mean value is very small.

I may choose ϕ as I wish, save always one restriction: this function must be continuous; and, in fact, from the point of view of subjective probability, the choice of a discontinuous function would have been unreasonable. For instance, what reason could I have for supposing that the initial longitude might be exactly 0°, but that it could not lie between 0° and 1°?

But the difficulty reappears if we take the point of view of objective probability, if we pass from our imaginary distribution in which the fictitious matter was supposed continuous to the real distribution in which our representative points form, as it were, discrete atoms.

The mean value of sin (at+b) will be represented quite simply by

(1/n) Σ sin (at+b),

n being the number of minor planets. In lieu of a double integral referring to a continuous function, we shall have a sum of discrete terms. And yet no one will seriously doubt that this mean value is practically very small.

Our representative points being very close together, our discrete sum will in general differ very little from an integral.

An integral is the limit toward which a sum of terms tends when the number of these terms is indefinitely increased. If the terms are very numerous, the sum will differ very little from its limit, that is to say from the integral, and what I said of this latter will still be true of the sum itself.
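The argument can be imitated numerically; a sketch with distributions chosen arbitrarily for illustration, the initial longitudes being deliberately far from uniform:

```python
import math
import random

random.seed(1)

# Many fictitious minor planets, with mean motions a and initial
# longitudes b drawn from continuous distributions chosen arbitrarily.
n = 100000
planets = [(random.uniform(0.1, 0.2), random.gauss(1.0, 0.3))
           for _ in range(n)]

def mean_sine(t):
    return sum(math.sin(a * t + b) for a, b in planets) / n

# Markedly non-uniform at the epoch zero, nearly uniform after
# many revolutions.
for t in (0, 100, 10000):
    print(t, mean_sine(t))
```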

Nevertheless, there are exceptions. If, for instance, for all the minor planets,

b = π/2 − at,

the longitude for all the planets at the time t would be π/2, and the mean value would evidently be equal to unity. For this to be the case, it would be necessary that at the epoch 0, the minor planets must have all been lying on a spiral of peculiar form, with its spires very close together. Every one will admit that such an initial distribution is extremely improbable (and, even supposing it realized, the distribution would not be uniform at the present time, for example, on January 1, 1913, but it would become so a few years later).

Why then do we think this initial distribution improbable? This must be explained, because if we had no reason for rejecting as improbable this absurd hypothesis everything would break down, and we could no longer make any affirmation about the probability of this or that present distribution.

Once more we shall invoke the principle of sufficient reason to which we must always recur. We might admit that at the beginning the planets were distributed almost in a straight line. We might admit that they were irregularly distributed. But it seems to us that there is no sufficient reason for the unknown cause that gave them birth to have acted along a curve so regular and yet so complicated, which would appear to have been expressly chosen so that the present distribution would not be uniform.

IV. Rouge et Noir.—The questions raised by games of chance, such as roulette, are, fundamentally, entirely analogous to those we have just treated. For example, a wheel is partitioned into a great number of equal subdivisions, alternately red and black. A needle is whirled with force, and after having made a great number of revolutions, it stops before one of these subdivisions. The probability that this division is red is evidently 1/2. The needle describes an angle θ, including several complete revolutions. I do not know what is the probability that the needle may be whirled with a force such that this angle should lie between θ and θ +dθ; but I can make a convention. I can suppose that this probability is ϕ(θ)dθ. As for the function ϕ(θ), I can choose it in an entirely arbitrary manner. There is nothing that can guide me in my choice, but I am naturally led to suppose this function continuous.

Let ε be the length (measured on the circumference of radius 1) of each red and black subdivision. We have to calculate the integral of ϕ(θ)dθ, extending it, on the one hand, to all the red divisions and, on the other hand, to all the black divisions, and to compare the results.

Consider an interval 2ε, comprising a red division and a black division which follows it. Let M and m be the greatest and least values of the function ϕ(θ) in this interval. The integral extended to the red divisions will be smaller than ΣMε; the integral extended to the black divisions will be greater than Σmε; the difference will therefore be less than Σ(M − m)ε. But, if the function ϕ(θ) is supposed continuous; if, besides, the interval ε is very small with respect to the total angle described by the needle, the difference M − m will be very small. The difference of the two integrals will therefore be very small, and the probability will be very nearly 1/2.

We see that without knowing anything of the function ϕ(θ), I must act as if the probability were 1/2. We understand, on the other hand, why, if, placing myself at the objective point of view, I observe a certain number of coups, observation will give me about as many black coups as red.
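A sketch of the needle experiment, the distribution of the total angle being an arbitrary smooth choice (a wide Gaussian covering many turns) and the number of divisions an assumption of the example:

```python
import math
import random

random.seed(2)

# A wheel of 38 equal divisions, alternately red and black (assumed).
divisions = 38
eps = 2 * math.pi / divisions

def is_red(theta):
    return int(theta // eps) % 2 == 0

# Total angle drawn from a wide Gaussian: about 20 turns, with a
# spread covering many divisions.
trials = 200000
reds = sum(is_red(random.gauss(40 * math.pi, 5.0)) for _ in range(trials))
print(reds / trials)  # close to 1/2
```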

All players know this objective law; but it leads them into a remarkable error, which has been often exposed, but into which they always fall again. When the red has won, for instance, six times running, they bet on the black, thinking they are playing a safe game; because, say they, it is very rare that red wins seven times running.

In reality their probability of winning remains 1/2. Observation shows, it is true, that series of seven consecutive reds are very rare, but series of six reds followed by a black are just as rare.

They have noticed the rarity of the series of seven reds; if they have not remarked the rarity of six reds and a black, it is only because such series strike the attention less.
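The two rarities can be compared by direct count; a sketch over a long record of simulated fair coups:

```python
import random

random.seed(3)

# In a long record of fair coups, count windows of seven reds against
# windows of six reds followed by a black: each pattern has
# probability 1/128 per window.
s = "".join(random.choice("RB") for _ in range(1_000_000))
seven_reds = sum(1 for i in range(len(s) - 6) if s[i:i + 7] == "RRRRRRR")
six_then_black = sum(1 for i in range(len(s) - 6) if s[i:i + 7] == "RRRRRRB")
print(seven_reds, six_then_black)  # both near 1,000,000 / 128
```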

V. The Probability of Causes.—We now come to the problems of the probability of causes, the most important from the point of view of scientific applications. Two stars, for instance, are very close together on the celestial sphere. Is this apparent contiguity a mere effect of chance? Are these stars, although on almost the same visual ray, situated at very different distances from the earth, and consequently very far from one another? Or, perhaps, does the apparent correspond to a real contiguity? This is a problem on the probability of causes.

I recall first that at the outset of all problems of the probability of effects that have hitherto occupied us, we have always had to make a convention, more or less justified. And if in most cases the result was, in a certain measure, independent of this convention, this was only because of certain hypotheses which permitted us to reject a priori discontinuous functions, for example, or certain absurd conventions.

We shall find something analogous when we deal with the probability of causes. An effect may be produced by the cause A or by the cause B. The effect has just been observed. We ask the probability that it is due to the cause A. This is an a posteriori probability of cause. But I could not calculate it, if a convention more or less justified did not tell me in advance what is the a priori probability for the cause A to come into play; I mean the probability of this event for some one who had not observed the effect.

The better to explain myself I go back to the example of the game of écarté mentioned above. My adversary deals for the first time and he turns up a king. What is the probability that he is a sharper? The formulas ordinarily taught give 8/9, a result evidently rather surprising. If we look at it closer, we see that the calculation is made as if, before sitting down at the table, I had considered that there was one chance in two that my adversary was not honest. An absurd hypothesis, because in that case I should have certainly not played with him, and this explains the absurdity of the conclusion.

The convention about the a priori probability was unjustified, and that is why the calculation of the a posteriori probability led me to an inadmissible result. We see the importance of this preliminary convention. I shall even add that if none were made, the problem of the a posteriori probability would have no meaning. It must always be made either explicitly or tacitly.
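The calculation that yields the 8/9 can be written out explicitly; it assumes, besides the a priori probability of 1/2, that a sharper turns up the king with certainty:

```python
from fractions import Fraction

# Bayes's rule with the convention the text calls absurd: one chance
# in two, a priori, that the dealer is a sharper; a sharper (assumed)
# always turns the king, an honest dealer does so with probability 1/8.
prior_sharper = Fraction(1, 2)
p_king_if_sharper = Fraction(1)
p_king_if_honest = Fraction(1, 8)

posterior = (prior_sharper * p_king_if_sharper) / (
    prior_sharper * p_king_if_sharper
    + (1 - prior_sharper) * p_king_if_honest)
print(posterior)  # 8/9
```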

Pass to an example of a more scientific character. I wish to determine an experimental law. This law, when I know it, can be represented by a curve. I make a certain number of isolated observations; each of these will be represented by a point. When I have obtained these different points, I draw a curve between them, striving to pass as near to them as possible and yet preserve for my curve a regular form, without angular points, or inflections too accentuated, or brusque variation of the radius of curvature. This curve will represent for me the probable law, and I assume not only that it will tell me the values of the function intermediate between those which have been observed, but also that it will give me the observed values themselves more exactly than direct observation. This is why I make it pass near the points, and not through the points themselves.

Here is a problem in the probability of causes. The effects are the measurements I have recorded; they depend on a combination of two causes: the true law of the phenomenon and the errors of observation. Knowing the effects, we have to seek the probability that the phenomenon obeys this law or that, and that the observations have been affected by this or that error. The most probable law then corresponds to the curve traced, and the most probable error of an observation is represented by the distance of the corresponding point from this curve.
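A minimal sketch of this fitting problem, for the case of a law y proportional to x considered earlier; the coefficient 2.5 and the error sizes are assumptions of the illustration:

```python
import random

random.seed(4)

# Noisy observations of an assumed law y = 2.5 x.
true_k = 2.5
xs = [i / 10 for i in range(1, 21)]
ys = [true_k * x + random.gauss(0, 0.05) for x in xs]

# The most probable coefficient under the usual convention on errors:
# least squares, line through the origin.
k_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(k_hat)  # near 2.5
```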

But the problem would have no meaning if, before any observation, I had not fashioned an a priori idea of the probability of this or that law, and of the chances of error to which I am exposed.

If my instruments are good (and that I knew before making the observations), I shall not permit my curve to depart much from the points which represent the rough measurements. If they are bad, I may go a little further away from them in order to obtain a less sinuous curve; I shall sacrifice more to regularity.

Why then is it that I seek to trace a curve without sinuosities? It is because I consider a priori a law represented by a continuous function (or by a function whose derivatives of high order are small), as more probable than a law not satisfying these conditions. Without this belief, the problem of which we speak would have no meaning; interpolation would be impossible; no law could be deduced from a finite number of observations; science would not exist.

Fifty years ago physicists considered, other things being equal, a simple law as more probable than a complicated law. They even invoked this principle in favor of Mariotte's law as against the experiments of Regnault. To-day they have repudiated this belief; and yet, how many times are they compelled to act as though they still held it! However that may be, what remains of this tendency is the belief in continuity, and we have just seen that if this belief were to disappear in its turn, experimental science would become impossible.

VI. The Theory of Errors.—We are thus led to speak of the theory of errors, which is directly connected with the problem of the probability of causes. Here again we find effects, to wit, a certain number of discordant observations, and we seek to divine the causes, which are, on the one hand, the real value of the quantity to be measured; on the other hand, the error made in each isolated observation. It is necessary to calculate what is a posteriori the probable magnitude of each error, and consequently the probable value of the quantity to be measured.

But as I have just explained, we should not know how to undertake this calculation if we did not admit a priori, that is to say, before all observation, a law of probability of errors. Is there a law of errors?

The law of errors admitted by all calculators is Gauss's law, which is represented by a certain transcendental curve known under the name of 'the bell.'

But first it is proper to recall the classic distinction between systematic and accidental errors. If we measure a length with too long a meter, we shall always find too small a number, and it will be of no use to measure several times; this is a systematic error. If we measure with an accurate meter, we may, however, make a mistake; but we go wrong, now too much, now too little, and when we take the mean of a great number of measurements, the error will tend to grow small. These are accidental errors.

It is evident from the first that systematic errors can not satisfy Gauss's law; but do the accidental errors satisfy it? A great number of demonstrations have been attempted; almost all are crude paralogisms. Nevertheless, we may demonstrate Gauss's law by starting from the following hypotheses: the error committed is the result of a great number of partial and independent errors; each of the partial errors is very little and besides, obeys any law of probability, provided that the probability of a positive error is the same as that of an equal negative error. It is evident that these conditions will be often but not always fulfilled, and we may reserve the name of accidental for errors which satisfy them.
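The hypothesis just stated can be tried numerically; a sketch in which each partial error is uniform on an interval symmetric about zero (an arbitrary choice satisfying the conditions):

```python
import random
import statistics

random.seed(5)

# Each total error is the sum of many small independent partial errors,
# each symmetric about zero.
def total_error(parts=200):
    return sum(random.uniform(-0.001, 0.001) for _ in range(parts))

errors = [total_error() for _ in range(5000)]
m = statistics.mean(errors)
s = statistics.pstdev(errors)

# For Gauss's law, about 68.3 per cent of the values lie within one
# standard deviation of the mean.
inside = sum(1 for e in errors if abs(e - m) < s) / len(errors)
print(round(inside, 3))
```

The totals crowd into the bell: their mean is near zero and the within-one-deviation fraction approaches the Gaussian value.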

We see that the method of least squares is not legitimate in every case; in general the physicists are more distrustful of it than the astronomers. This is, no doubt, because the latter, besides the systematic errors to which they and the physicists are subject alike, have to contend with an extremely important source of error which is wholly accidental; I mean atmospheric undulations. So it is very curious to hear a physicist discuss with an astronomer about a method of observation. The physicist, persuaded that one good measurement is worth more than many bad ones, is before all concerned with eliminating by dint of precautions the least systematic errors, and the astronomer says to him: 'But thus you can observe only a small number of stars; the accidental errors will not disappear.'

What should we conclude? Must we continue to use the method of least squares? We must distinguish. We have eliminated all the systematic errors we could suspect; we know well there are still others, but we can not detect them; yet it is necessary to make up our mind and adopt a definitive value which will be regarded as the probable value; and for that it is evident the best thing to do is to apply Gauss's method. We have only applied a practical rule referring to subjective probability. There is nothing more to be said.

But we wish to go farther and affirm that not only is the probable value so much, but that the probable error in the result is so much. This is absolutely illegitimate; it would be true only if we were sure that all the systematic errors were eliminated, and of that we know absolutely nothing. We have two series of observations; by applying the rule of least squares, we find that the probable error in the first series is twice as small as in the second. The second series may, however, be better than the first, because the first perhaps is affected by a large systematic error. All we can say is that the first series is probably better than the second, since its accidental error is smaller, and we have no reason to affirm that the systematic error is greater for one of the series than for the other, our ignorance on this point being absolute.

VII. Conclusions.—In the lines which precede, I have set many problems without solving any of them. Yet I do not regret having written them, because they will perhaps invite the reader to reflect on these delicate questions.

However that may be, there are certain points which seem well established. To undertake any calculation of probability, and even for that calculation to have any meaning, it is necessary to admit, as point of departure, a hypothesis or convention which has always something arbitrary about it. In the choice of this convention, we can be guided only by the principle of sufficient reason. Unfortunately this principle is very vague and very elastic, and in the cursory examination we have just made, we have seen it take many different forms. The form under which we have met it most often is the belief in continuity, a belief which it would be difficult to justify by apodeictic reasoning, but without which all science would be impossible. Finally the problems to which the calculus of probabilities may be applied with profit are those in which the result is independent of the hypothesis made at the outset, provided only that this hypothesis satisfies the condition of continuity.

Fresnel's Theory.—The best example[5] that can be chosen of physics in the making is the theory of light and its relations to the theory of electricity. Thanks to Fresnel, optics is the best developed part of physics; the so-called wave-theory forms a whole truly satisfying to the mind. We must not, however, ask of it what it can not give us.

The object of mathematical theories is not to reveal to us the true nature of things; this would be an unreasonable pretension. Their sole aim is to coordinate the physical laws which experiment reveals to us, but which, without the help of mathematics, we should not be able even to state.

It matters little whether the ether really exists; that is the affair of metaphysicians. The essential thing for us is that everything happens as if it existed, and that this hypothesis is convenient for the explanation of phenomena. After all, have we any other reason to believe in the existence of material objects? That, too, is only a convenient hypothesis; only this will never cease to be so, whereas, no doubt, some day the ether will be thrown aside as useless. But even at that day, the laws of optics and the equations which translate them analytically will remain true, at least as a first approximation. It will always be useful, then, to study a doctrine that unites all these equations.

The undulatory theory rests on a molecular hypothesis. For those who think they have thus discovered the cause under the law, this is an advantage. For the others it is a reason for distrust. But this distrust seems to me as little justified as the illusion of the former.

These hypotheses play only a secondary part. They might be sacrificed. They usually are not, because then the explanation would lose in clearness; but that is the only reason.

In fact, if we looked closer we should see that only two things are borrowed from the molecular hypotheses: the principle of the conservation of energy and the linear form of the equations, which is the general law of small movements, as of all small variations.

This explains why most of Fresnel's conclusions remain unchanged when we adopt the electromagnetic theory of light.

Maxwell's Theory.—Maxwell, we know, connected by a close bond two parts of physics until then entirely foreign to one another, optics and electricity. By blending thus in a vaster whole, in a higher harmony, the optics of Fresnel has not ceased to be alive. Its various parts subsist, and their mutual relations are still the same. Only the language we used to express them has changed; and, on the other hand, Maxwell has revealed to us other relations, before unsuspected, between the different parts of optics and the domain of electricity.

When a French reader first opens Maxwell's book, a feeling of uneasiness and often even of mistrust mingles at first with his admiration. Only after a prolonged acquaintance and at the cost of many efforts does this feeling disappear. There are even some eminent minds that never lose it.

Why are the English scientist's ideas with such difficulty acclimatized among us? It is, no doubt, because the education received by the majority of enlightened Frenchmen predisposes them to appreciate precision and logic above every other quality.

The old theories of mathematical physics gave us in this respect complete satisfaction. All our masters, from Laplace to Cauchy, have proceeded in the same way. Starting from clearly stated hypotheses, they deduced all their consequences with mathematical rigor, and then compared them with experiment. It seemed their aim to give every branch of physics the same precision as celestial mechanics.

A mind accustomed to admire such models is hard to suit with a theory. Not only will it not tolerate the least appearance of contradiction, but it will demand that the various parts be logically connected with one another, and that the number of distinct hypotheses be reduced to a minimum.

This is not all; it will have still other demands, which seem to me less reasonable. Behind the matter which our senses can reach, and which experiment tells us of, it will desire to see another, and in its eyes the only real, matter, which will have only purely geometric properties, and whose atoms will be nothing but mathematical points, subject to the laws of dynamics alone. And yet these atoms, invisible and without color, it will seek by an unconscious contradiction to represent to itself and consequently to identify as closely as possible with common matter.

Then only will it be fully satisfied and imagine that it has penetrated the secret of the universe. If this satisfaction is deceitful, it is none the less difficult to renounce.

Thus, on opening Maxwell, a Frenchman expects to find a theoretical whole as logical and precise as the physical optics based on the hypothesis of the ether; he thus prepares for himself a disappointment which I should like to spare the reader by informing him immediately of what he must look for in Maxwell, and what he can not find there.

Maxwell does not give a mechanical explanation of electricity and magnetism; he confines himself to demonstrating that such an explanation is possible.

He shows also that optical phenomena are only a special case of electromagnetic phenomena. From every theory of electricity, one can therefore deduce immediately a theory of light.

The converse unfortunately is not true; from a complete explanation of light, it is not always easy to derive a complete explanation of electric phenomena. This is not easy, in particular, if we wish to start from Fresnel's theory. Doubtless it would not be impossible; but nevertheless we must ask whether we are not going to be forced to renounce admirable results that we thought definitely acquired. That seems a step backward; and many good minds are not willing to submit to it.

When the reader shall have consented to limit his hopes, he will still encounter other difficulties. The English scientist does not try to construct a single edifice, final and well ordered; he seems rather to erect a great number of provisional and independent constructions, between which communication is difficult and sometimes impossible.

Take as example the chapter in which he explains electrostatic attractions by pressures and tensions in the dielectric medium. This chapter might be omitted without making thereby the rest of the book less clear or complete; and, on the other hand, it contains a theory complete in itself which one could understand without having read a single line that precedes or follows. But it is not only independent of the rest of the work; it is difficult to reconcile with the fundamental ideas of the book. Maxwell does not even attempt this reconciliation; he merely says: "I have not been able to make the next step, namely, to account by mechanical considerations for these stresses in the dielectric."
