Tuesday, January 6, 2026

AI Is Older Than You Think: The Not-So-Recent History of Artificial Intelligence

An increasing torrent of posts in every form of media during the year 2025 has made the topic of AI inescapable for anyone who looks at a screen or picks up an old-school physical magazine or newspaper. Much of this reporting and opining is redundant, but its sheer volume is both impressive and annoying.

Despite the immense amount of this writing and talking, and despite the repetitive over-reporting on some aspects of AI, other aspects of AI remain relatively obscure. One such aspect is its history.

Some readers may be surprised to learn how old the concept of AI is.

One early example comes from the literary world: in 1920 Czech playwright Karel Capek presented R.U.R.: Rossum’s Universal Robots, with a revised edition of the script appearing in 1921. Capek’s text explicitly considers many of the questions now being asked about AI. The play introduced the word ‘robot’ (a coinage Capek credited to his brother Josef) and refined the concept of what a robot might be. His fictional robots were made of synthetic flesh rather than transistors and steel. Concerning unemployment created by AI, one of the characters in Capek’s play says, “Yes, people will be out of work, but by then there’ll be no work left to be done. Everything will be done by living machines.” Later in the plot, another character observes: “They’ve ceased to be machines. They’re already aware of their superiority.” Capek’s robots have superior memory and superior physical strength. Although not addressed directly in the text, they presumably also have superior calculating ability. In the final line of the play, a robot who’s led the revolution against human beings addresses his fellow robots: “Robots of the world! The power of man has fallen! A new world has arisen: the Rule of the Robots!”

In addition to raising some of the core questions about AI, Capek also exemplifies the murky boundary between science fiction and concrete technological development in the real world: on the one hand, fiction can sometimes be surprisingly accurate in foreshadowing actual technological developments; on the other hand, science fiction can foster preconceptions which make it difficult to accept new developments which diverge from those preconceptions.

A second early example is a famous academic paper titled “Computing Machinery and Intelligence” written by Alan Turing and published in the journal Mind in October 1950 (Vol. LIX, No. 236). Turing was born in London and spent the final years of his life near Manchester. The paper is widely read in university courses in psychology and philosophy. Turing describes how intelligence in a machine might manifest itself: “Intelligent behaviour presumably consists in a departure from the completely disciplined behaviour involved in computation, but a rather slight one, which does not give rise to random behaviour, or to pointless repetitive loops.” He also explores at length the concepts of training, teaching, and learning as they apply to AI. In so doing, Turing identified, decades in advance, three of the concepts which received so much attention as the first quarter of the twenty-first century drew to a close. He wrote: “An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside.” Further, “It will not be possible to apply exactly the same teaching process to the machine as to a normal child.”

Turing was 75 years ahead of his time. Discussions three-quarters of a century after his paper appeared include marking the boundaries between LLM, AI, AGI, and ASI. Among the central AI concepts he examines is the definition of ‘intelligence’ which will decide when those boundaries have been crossed. One option is to embrace a behaviorist psychology, which asserts that if the machine makes intelligent statements and behaves intelligently, then it is intelligent: intelligence is the appearance of intelligence. Turing’s writings produced the idea of the “Turing test,” in which the question is whether or not a computer can fool a human being: can a chatbot convince someone that it is a human?

A third early example is Joseph Weizenbaum’s program named ELIZA. Weizenbaum was born in Germany and spent the first thirteen years of his life there. After moving to the United States, he eventually did undergraduate and graduate work at Wayne State University in Detroit. Weizenbaum began designing the software that would become ELIZA around 1964, and ELIZA seems to have been functioning at a high level by 1966. ELIZA was what would later come to be known as a ‘chatbot’ and engaged with human beings in real-time live conversations. It achieved some of its most significant results when it was programmed with a script which caused it to adopt the persona of a psychologist. (In the computer jargon of the time, ELIZA was the program, and the various “scripts,” which one might now call ‘data sets’, contained vocabulary and directions for using that vocabulary.) To test how the program operated, Weizenbaum invited his colleagues to have a ‘conversation’ with ELIZA using the DOCTOR script. Even though his colleagues knew that ELIZA was nothing more than a modest amount of code, and even though they knew how ELIZA worked, they nonetheless entered into deep conversations, as if they were talking to a therapist, revealing details about their personal relationships and confiding about their life problems. They spent more time with ELIZA than Weizenbaum needed or had requested. One person even asked Weizenbaum to leave the room because the conversation contained private details. He worried about the future implications of conversational interactions between humans and computers.

The surprising results from ELIZA point to three topics in AI. First, even though the people knew that they were talking to an inanimate device, and that the responses were simply the results of plugging vocabulary words into paradigms, they nevertheless trusted the computer, sharing confidential information with it; the people bonded emotionally with ELIZA. Second, Weizenbaum was moved to speak about the eventual dangers of AI, given the unanticipated degree to which the people trusted and confided in ELIZA. Third, the results offered a new perspective on Turing’s test. If a piece of software like ELIZA can convince a user that it is human, that might well not be enough evidence to conclude that ELIZA is intelligent. The code for ELIZA was relatively short and simple, even by 1966 standards.

These three narratives suffice to show that the roots of AI, and AI itself, are older than twenty-first century media reports suggest. The eventual capabilities of AI were already being anticipated decades ago, as were concerns about the potential dangers arising from those capabilities.

Friday, November 28, 2025

The Birth of Quantum Mechanics: The First Steps

What is now known as “quantum mechanics” or “quantum physics” did not appear all at once in a complete or even coherent form. It emerged bit by bit. Its initial application was to a narrowly-defined problem: the emission of radiation from a defined type of solid body, a so-called black body.

A black body is an idealized object, not found in the physical world: it has zero reflectivity. Happily for physicists, actual objects in the real world which approximate the behavior of a black body can be found. These real-world objects approach the ideal specifications of a black body closely enough that they can be used in experiments.

When such objects absorb energy, e.g. in the form of light or heat or radio waves, they radiate this energy outward, and so return to an initial lower-energy state. The question was when, how, and at what rate a black body would emit such radiation.

A simple example is an incandescent lightbulb: its filament absorbs energy, in particular electromagnetic energy created by an electric current; it emits energy in the form of light and heat. The task was to find a mathematical expression of a descriptive law which could predict how, how much, and which type of energy a black body would emit, given the amount of energy it absorbed.

Another example is a piece of iron in a blacksmith’s shop. As the iron is heated, it begins to glow. Depending on the temperature, the iron can emit light of different colors. How much light will it emit at different temperatures? Which color of light will it emit?

Wilhelm Wien proposed one early attempt to answer these questions, publishing his distribution law in 1896. Wien’s formula seemed to work for short wavelengths, but its results for longer wavelengths diverged from observational results.

Following Wien’s work, the next significant attempt to answer this question was the Rayleigh-Jeans law, first published by Lord Rayleigh in 1900 and later refined by Sir James Jeans. This formula predicted energy emissions not well, but well enough, for longer wavelengths, and not at all for shorter wavelengths: the law’s predictions for the short-wavelength end of the electromagnetic spectrum were disastrous, and, looking back years later, physicists refer to this troubling divergence between what the Rayleigh-Jeans law predicted and what was observed as the “ultraviolet catastrophe.”

The Rayleigh-Jeans law seemed to have a problem which was the opposite of the problem which Wien’s law had.

Readers detected the failure of the Rayleigh-Jeans law within a few months of its original publication. The failure of Rayleigh and Jeans, combined with the failure of Wilhelm Wien, hinted at a broader problem. Not only did these two particular attempts fail; they pointed to the possibility that Newtonian physics itself was failing.

Newton’s original physics was known to be incomplete, and had given way to an expanded Newtonian physics, which added other concepts, like Michael Faraday’s field theory, to Newton’s original system. But now even the expanded Newtonian system seemed inadequate.

All of this happened quickly. By the end of the year 1900, physicists were at work to find a law to replace the Rayleigh-Jeans law.

Two questions were now on the table: Which laws predict the rate, amount, and timing of black body radiation? And what system of physics will reveal those laws if Newtonian physics does not apply in this context?

Before Wilhelm Wien published his work on the question of black body radiation, and before the Rayleigh-Jeans law was published, Max Planck had been working on the same problem; he had been publishing papers on radiation theory since 1897. In December 1900, Planck presented his solution to the problem of black body radiation.

Planck’s achievement in solving the black body problem, however, was overshadowed by the implications of the method he used in solving it. Sometime during the second half of the year 1900, Max Planck discovered quantum mechanics, and he did so more-or-less singlehandedly. At the time, he had a premonition that his solution would have much broader implications than a specific question about black body radiation. He was correct.

Of course, Planck’s discovery was not purely independent or unassisted. He used the discoveries of several other physicists, notably Ludwig Boltzmann, and had discussed an early draft of the paper with the DPG (Deutsche Physikalische Gesellschaft: The German Physical Society). So Planck’s breakthrough was almost but not quite singlehanded.

What was this discovery? Planck reexamined the black body problem using a different concept of energy. Newton himself didn’t write much about energy, but the Newtonian concept of energy, as developed by later authors, was such that in an equation, the value for energy could be any real number, rational or irrational.

No discussion of quantum mechanics, however superficial, can be simple, easy, or intuitive. Even in Newtonian physics, there is a distinction between work and energy. For present purposes, that distinction will be largely ignored, because work and energy are measured in the same units: joules, BTUs, or foot-pounds. For example:

Work = Force * Distance

Or, by substitution:

Work = Mass * Acceleration * Distance
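The two formulas above can be checked with a small numeric sketch; the mass, acceleration, and distance here are arbitrary illustrative values:

```python
# Newtonian work: an arbitrary illustrative calculation.
mass = 10.0          # kilograms
acceleration = 2.0   # meters per second squared
distance = 5.0       # meters

force = mass * acceleration   # Force = Mass * Acceleration -> 20 newtons
work = force * distance       # Work = Force * Distance -> 100 joules
print(work)                   # -> 100.0
```

Because every factor can be any real number, the result can also be any real number; that continuity is exactly what Planck's constant would later disrupt.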

Planck introduced a constant into the Newtonian framework. For obvious reasons, this number became known as the Planck Constant, or Planck’s Constant. It is used to calculate the smallest possible quantity of energy in certain situations. No smaller amount of energy can exist in those situations, and any larger amount of energy must be a multiple of this smallest quantity.

In which situations does this quantizing occur? The energy of an electron in an orbit is quantized. But, e.g., a free electron, which is not orbiting a nucleus and is not part of some larger system, is not quantized, and can have energy at any arbitrary level, i.e., its energy can be any non-negative real number.

In the black body problem, even though the energy released from the black body was usually in the form of light or heat, the energy emission was the result of an electron changing from a higher energy state to a lower energy state, and it is this change which is quantized. Therefore the emission of light or heat from a black body is quantized, because the emission was occasioned by a quantized change in an electron’s energy level.

To say that an electron’s change in energy state is the same as an electron changing its orbit is somewhat misleading, because these orbits cannot be envisioned as circular or elliptical, like the orbits of planets and satellites. For this reason, physicists refer to the electrons as having “orbitals” instead of “orbits.” These orbitals have all kinds of strange shapes. A change in an electron’s energy state is a change in an electron’s orbital.

An electron’s energy level is related not only to an electron’s orbital, but also to the electron’s other variables, e.g., its spin, angular momentum, or nodal structure. It is therefore possible for two electrons with the same orbital to have different energy levels if they differ in these other variables.

Planck’s innovation was to hypothesize that energy couldn’t be delivered in any arbitrary quantity, but rather only in multiples of some unit which represents the smallest possible quantity of energy.
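Planck's hypothesis can be sketched numerically: at a given frequency, the only allowed energies are integer multiples of Planck's constant times that frequency. The frequency chosen here, roughly that of green light, is an illustrative assumption:

```python
h = 6.626e-34        # Planck's constant in joule-seconds (rounded)
frequency = 5.45e14  # hertz; roughly green light, chosen for illustration

quantum = h * frequency  # the smallest possible energy at this frequency

# Only integer multiples of the quantum occur -- no values in between.
allowed = [n * quantum for n in range(1, 4)]
```

The quantum here works out to about 3.6e-19 joules: an extremely small number, which is why the granularity of energy escaped notice for so long.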

Yet the Newtonian equations for energy and work are structured algebraically so that between any two values for work or energy, a third value can be found. This is so because the values for force, mass, acceleration, and distance are real numbers, whether rational or irrational. Energy in a Newtonian framework is therefore not quantized: it can take an uncountably infinite number of values, and those values form a smooth curve, not the step-like curve which would result from quantizing.

How did Planck alter the Newtonian equations so that they would yield quantized results? Mathematicians use the phrase “boundary conditions” to describe the placing of limits on an equation. Many algebraic equations and many differential equations have infinitely many solutions. Boundary conditions limit the domain of an equation. Boundary conditions are not arbitrary, but represent real-world conditions in which an equation is to be applied. A more precise description and explanation of boundary conditions is beyond the scope of the present discussion. It will be left as an exercise for the reader to spend a semester or two in advanced mathematics at a nearby university.

Suffice it to say that Planck introduced his constant and formulated the equations in such a way that the resulting values for energy were discrete and quantized. Planck’s method here was not arbitrary, because it, unlike the work of Wilhelm Wien and of Rayleigh and Jeans, was capable of predicting the energy radiated from a black body.
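A numeric sketch shows why Planck's formulation succeeded where the classical one failed. The comparison below, using rounded physical constants and an assumed temperature of 5000 K, pits Planck's radiation law against the Rayleigh-Jeans formula: the two nearly agree at long wavelengths, while the classical formula overshoots enormously in the ultraviolet:

```python
import math

# Physical constants (SI), rounded; an illustrative sketch.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann's constant, J/K

def planck(wavelength, T):
    """Planck's law: spectral radiance of a black body."""
    x = h * c / (wavelength * k * T)
    return (2 * h * c**2 / wavelength**5) / (math.exp(x) - 1)

def rayleigh_jeans(wavelength, T):
    """The Rayleigh-Jeans law: the classical (Newtonian) prediction."""
    return 2 * c * k * T / wavelength**4

T = 5000  # kelvin; an assumed illustrative temperature

# Long wavelength (100 micrometers): the two laws nearly agree.
long_ratio = rayleigh_jeans(100e-6, T) / planck(100e-6, T)

# Short wavelength (200 nanometers, ultraviolet): the classical law
# overshoots by orders of magnitude -- the "ultraviolet catastrophe."
short_ratio = rayleigh_jeans(200e-9, T) / planck(200e-9, T)
```

At the long wavelength the ratio is within a couple of percent of 1; at the ultraviolet wavelength the classical prediction exceeds Planck's by a factor of roughly a hundred thousand.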

The use of boundary conditions might seem like cheating, or like rigging the equation, but such use is justified because this is not a situation of pure mathematics, but rather of applied mathematics. Given confirmed precise data-points, the stipulation of domains is the way to find an equation which describes and predicts the data. It is intrinsic to the nature of the observational sciences, or the natural sciences, that an investigator looks for, or creates, a best-fit line for the known data points, which is a hypothesis which fits with the collected evidence. Such a line of best fit expressed mathematically is an equation.

Planck was reluctant to settle on the idea of quantized energy, knowing that it would end the unchallenged universal application of Newtonian physics. Werner Heisenberg writes that Planck was “a conservative personality in all his views.” Eventually, however, Planck persuaded himself that he had enough evidence, had read the evidence correctly, and was ready to publish his results.

Heisenberg reports that Planck continued to try to find some way to harmonize or integrate his results into the Newtonian system, but finally had to abandon this attempt:

The idea that energy could be emitted and absorbed only in discrete energy quanta was so new that it could not be fitted into the traditional framework of physics. An attempt by Planck to reconcile his new hypothesis with the older ideas of radiation theory failed on the essential points. It took about five years before the next step in the new direction could be taken.

By 1901, Planck had addressed the black body problem, and in the process discovered quantum mechanics. There was more to be done: the concept of quantum mechanics would be applied to other problems. Planck had started the ball rolling, but others would continue to expand the realm to which quantum mechanics would be applied.

Moving on from the black body problem, the next area of investigation was the photoelectric effect. Under certain conditions, when light strikes the surface of an object, usually a metallic object, the object will emit electrons. Philipp Lenard discovered that the energy of the individual emitted electrons was independent of the intensity of the applied light. Lenard writes:

But the magnitudes of the initial velocities are independent of the intensity of the light.

Writing about Lenard’s work, Bruce Wheaton reports:

Philipp Lenard discovered in 1902 that the maximum velocity with which electrons leave a metal plate after it is illuminated with ultra violet light is independent of the intensity of the light.

While Lenard was able to identify and give evidence of the counterintuitive behavior of light and electricity in the photoelectric effect, he was unable to explain it; he suggested some vague hypotheses which in hindsight were not helpful. Werner Heisenberg writes:

This time it was the young Albert Einstein, a revolutionary genius among the physicists, who was not afraid to give up still more of the old concepts. Einstein found two new problems in which he could apply Planck’s ideas with success. One was the so-called photoelectric effect, the emission of electrons from metals under the influence of light. The experiments, which had been carried out with particular care by Lenard, had shown that the energy of the emitted electrons does not depend on the intensity of the light, but only on its color or, more precisely, on the frequency or wavelength of the light. This could not be interpreted on the basis of the earlier theory of radiation. But Einstein was able to explain the observations by interpreting Planck’s hypothesis through the assumption that light consists of so-called light quanta, that is, of quanta of energy which move through space like small corpuscles. The energy of a single light quantum should, in agreement with Planck’s assumptions, be equal to the product of the frequency of the light and Planck’s constant.

So it turned out that not only the energy of orbiting electrons was subject to quantization. Photon energy is also quantized: the energy of a photon can occur only at certain discrete levels. What Planck had done for electrons in the black body problem, Einstein had done for photons in the photoelectric effect.
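Einstein's photoelectric relation can be sketched numerically: the maximum kinetic energy of an emitted electron is the energy of one light quantum minus the metal's work function, and it depends on the light's frequency, not its intensity. The work function below is an assumed illustrative value, roughly that of zinc:

```python
h = 6.626e-34   # Planck's constant, J*s (rounded)
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # one electron-volt in joules

work_function = 4.3 * eV  # assumed illustrative value, roughly zinc

def max_kinetic_energy(wavelength):
    """Einstein's photoelectric relation: E_k = h*nu - phi.
    The result depends on the light's frequency, not its intensity."""
    photon_energy = h * c / wavelength  # energy of one light quantum
    return photon_energy - work_function

ek = max_kinetic_energy(250e-9)  # ultraviolet light at 250 nm
```

For 250 nm ultraviolet light and this assumed work function, each emitted electron carries at most about 0.66 eV, no matter how bright the light source is; brighter light ejects more electrons, not faster ones.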

The third problem concerned specific heat (sometimes called ‘specific heat capacity’), which is the amount of energy required to raise the temperature of a specified mass of a substance by a specified number of degrees: for example, the number of joules needed to raise the temperature of 1 gram of iron by 1 kelvin, or the number of BTUs needed to raise 1 pound of water by 1 degree Fahrenheit. Note that ‘specific heat capacity’ is different from ‘thermal capacity’ or ‘heat capacity’, and only ‘specific heat capacity’ is relevant to the present discussion.

Counterintuitive results, or at least results not consistent with Newtonian systems, appeared when the specific heat of the same substance varied depending on the starting temperature. Newtonian thought called for the specific heat of a given substance to be the same no matter what the starting temperature might be.

For example, the number of joules needed to raise the temperature of 1 gram of iron by 1 kelvin differs depending on the starting temperature. A certain number of joules is required to raise 1 gram of iron from 3 kelvin to 4 kelvin; a different number of joules is required to raise it from 300 kelvin to 301 kelvin.
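Einstein's 1907 quantum model of specific heat captures exactly this temperature dependence. The sketch below uses his formula in dimensionless form, expressed as a fraction of the classical (Dulong-Petit) value; the Einstein temperature of 470 K is an assumed illustrative figure:

```python
import math

def einstein_specific_heat(T, theta=470.0):
    """Einstein's model of the specific heat of a solid, as a fraction
    of the classical Dulong-Petit value. theta is the Einstein
    temperature; 470 K is an assumed illustrative value."""
    x = theta / T
    return x**2 * math.exp(x) / (math.exp(x) - 1)**2

low = einstein_specific_heat(30.0)     # far below theta: capacity collapses
high = einstein_specific_heat(1000.0)  # far above theta: near the classical value
```

Well above the Einstein temperature the model returns nearly the constant classical value, which is why the temperature dependence went unnoticed at ordinary temperatures; well below it, the specific heat falls toward zero, which is what the low-temperature experiments showed.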

Like the black body problem and like the photoelectric effect, the problem of specific heat pointed toward quantization, as Werner Heisenberg explains:

The other problem was the specific heat of solid bodies. The usual theory led to values for the specific heat which agreed well with experiment at high temperatures, but which at very low temperatures were much higher than the observed values. Again Einstein was able to show that this behavior of solid bodies could be understood by applying Planck’s quantum theory to the elastic vibrations of the atoms in the solid body. These two results marked a very important advance, for they revealed the effectiveness of Planck’s constant in various fields of experience which had nothing directly to do with the problem of heat radiation. They also revealed the deeply revolutionary character of the new hypothesis; for the Einsteinian version of quantum theory had led to a description of light completely different from the wave picture customary since Huygens. Light could thus be interpreted either as an electromagnetic wave motion, as had been assumed since Maxwell’s work and Hertz’s experiments, or as consisting of individual “light quanta” or “energy packets” moving through space at high velocity. But could light be both? Einstein knew, of course, that the well-known phenomena of diffraction and interference can be explained only on the basis of the wave picture. Nor could he deny that there was, for the time being, an irresolvable contradiction between the wave picture and his light-quantum hypothesis. Einstein did not even attempt to remove the inner contradiction of this interpretation. He accepted the contradiction as something that might perhaps be understood much later through entirely new lines of thought.

Together, these three problems, and their solutions, launched the project of quantum mechanics. All three dealt with situations of matter absorbing energy and emitting it again; in all three cases, the pattern of these energy emissions could not be described or predicted using Newtonian physics; and in all three cases, the solution pointed to the quantized nature of energy in certain circumstances. It is significant that not only was energy quantized in these three situations, but its quantization proceeded by means of Planck’s constant.

Max Planck had discovered not only a solution for the rather narrow niche of the black body problem; he had discovered a constant which applied to the entire electromagnetic spectrum, which called into question the universal hegemony of Newtonian physics, which required a re-thinking of how energy is conceptualized, and which eventually required a major change in the concept of light.

Many students will be familiar with a formulation like “sometimes it’s useful to think of light as waves, and sometimes it’s useful to think of light as particles.” This statement, and the various rephrasings of this same thought in different words, applies not only to light, but rather to the entire electromagnetic spectrum: radio waves, thermal radiation, and even heat conduction can justifiably be conceptualized sometimes as waves and sometimes as particles.

The fact that heat conduction is brought under the umbrella of quantum mechanics shows that Planck’s constant and the concept of quantization apply to phenomena which are not purely electromagnetic. This understanding of heat, already indirectly indicated in Ludwig Boltzmann’s work — although Boltzmann himself might not have understood its import — , reveals the broad scope of the eventual applications of quantum mechanics.

The approach in the first decade of the twentieth century was a case-by-case heuristic approach, introducing the needed repairs to the Newtonian system as each set of non-conforming results appeared out of the experiments and measurements made by various physicists.

The quantized solutions to these three problems — black body emissions, the photoelectric effect, and the specific heat of solid bodies — do not properly constitute a complete system of quantum mechanics but rather are merely a sort of “patch” — like a software patch — on Newtonian physics, which allowed the Newtonian conceptual framework to survive, and yet which pointed the way to a thoroughly quantized physics which would be built “from the ground up” as its own discipline and not merely as a revised version of Newtonian physics.

Wednesday, August 27, 2025

Advances in Electromagnetism: Phases in Faraday’s Career

The British physicist and chemist Michael Faraday is credited with a long list of discoveries and inventions. His work paved the way for many advances in electromagnetism: radio, television, electrical generators, electrical motors, microphones, speakers, telephones, chemical batteries, rechargeable batteries, fuel cells, electroplating, electrorefining, shielded cables, fluorescent lighting, LED lighting, capacitors, inductors, MRI scanners, and many others.

His name is given to some of his inventions and discoveries, like the Faraday cage, Faraday’s laws of electrolysis, and Faraday’s law of induction, to units of measurement, like the faraday, which measures electrical charge, and the farad, which measures capacitance, and even to objects in outer space, like a crater on the moon and an asteroid. It is difficult to navigate twenty-first century life without encountering the impact of Michael Faraday, his inventions, and his discoveries.

Faraday’s career spanned several decades. As a teenager, he was already interested in electricity, and starting around the age of 20, he attended lectures at the Royal Institution in Westminster and the Royal Society in London. In 1813, at the age of 21, he was appointed as chemical assistant at the Royal Institution. He was also an assistant to Humphry Davy, a chemist, physicist, and inventor.

Toward the end of his life, he was still active in physics. Some of his last work, in the 1860s, involved spectroscopy.

Science was one aspect of Faraday’s life; complementing it was a serious and intense spiritual involvement. Not only did Faraday attend church every Sunday, and not only was he a committed church member, but he was appointed to be an Elder in his church, and preached regularly on Wednesday evenings. He was a member of an obscure denomination known by two names: it can be called the Sandemanian Church or the Glasite Church. The two names are synonymous.

Faraday downplayed the connection between his faith and his science, presenting them as two separate aspects of his life. Yet the two are connected. In both, Faraday sought to discover: whether the realities were electromagnetic or spiritual, he thirsted for knowledge. In both, he also sought to apply: electromagnetic principles led to inventions, and spiritual principles led to peace of mind and a caring Christian community.

The inflection points in Faraday’s spiritual life coincide curiously with the milestones in his electromagnetic work.

The years of 1831, 1832, and 1833 constitute a sort of flourishing for Faraday.

In 1831, he discovered the principle of electromagnetic induction and used this principle to invent the electrical generator; in this year he also built the world’s first electrical transformer. In 1832 he was appointed to the office of Deacon in the church. In 1833 he discovered the laws of electrolysis and was appointed to be a professor in the Royal Institution.

In 1840, Faraday was appointed to the office of Elder, an even higher office than Deacon.

The years 1844 and 1845 contained both struggle and triumph for Faraday.

On 31 March 1844, Faraday was removed from the office of Elder and excluded from the church entirely. In the jargon of his particular denomination, to be “excluded” or “put away” would have been the equivalent of “being placed on administrative leave” or “being suspended.”

This event was a sudden and unexpected glitch in the otherwise upward trajectory of Faraday’s life. Historians have investigated the cause of this event. There are few direct written references to Faraday’s exclusion, and even fewer to the reasons for the exclusion. The church tended to keep such matters private.

Some historians have conjectured that Faraday was excluded because he failed to attend worship on a particular date and instead dined with the queen at the time. A variation on this hypothesis argues that Faraday was excluded, not for dining with the queen, but for being impenitent about having done so. Geoffrey Cantor has researched the matter extensively, publishing both an article in The British Journal for the History of Science and a book-length biography of Faraday.

Cantor concludes that Faraday’s conflict with the church was not about supper with the queen, or about impenitence concerning such suppers. Rather, Cantor has uncovered that Faraday’s exclusion was one of several which happened at the same time. Cantor suggests that the exclusion of multiple members of the church was the result of a disagreement about an arcane theological detail of this obscure denomination. The details are complicated even for an academic church historian, but seem to have centered on the question of which decisions were to be made by the Elders and which by the entire gathered church membership.

In any event, Faraday’s path took yet another quick turn when he was reinstated into membership a mere five weeks later, on 5 May 1844. Faraday was restored, and his career was promptly on the upswing again.

The work that Faraday was doing that year would culminate early in the next year. In 1845 he discovered diamagnetism and made the related discovery of the Faraday Effect.

Although Faraday minimized the interplay between his spiritual life and his scientific career, the correlation in the timing of these events suggests some significant connection. The early 1830s were a period of flourishing in both, and Faraday’s ecclesiastical rebound and rehabilitation in the mid-1840s apparently preceded two major discoveries.

Tuesday, September 24, 2024

What Did Einstein Write? Does God Gamble?

Albert Einstein seems to have been a one-man factory of memorable quotes and aphorisms, producing them in large quantities.

His large output, however, created opportunities for misquotations and false attributions. Posters and bumper stickers bear slogans which never came from Einstein’s pen — or mouth.

For this reason, it is important to verify any phrase ascribed to Einstein.

One of his famous sayings asserted that “God does not play dice with the universe.” This was Einstein’s assessment of quantum mechanics. He argued for a more robust sense of causation, lawlike regularity, and mathematical certainty instead of mere probabilities.

But what exactly did he write?

In a letter dated 4 December 1926, sent to Max Born, Einstein wrote:

Die Quantenmechanik ist sehr Achtung gebietend. Aber eine innere Stimme sagt mir, dass das noch nicht der wahre Jakob ist. Die Theorie liefert viel, aber dem Geheimnis des Alten bringt sie uns kaum näher. Jedenfalls bin ich überzeugt, dass der nicht würfelt.

In English: “Quantum mechanics is very worthy of respect. But an inner voice tells me that this is not yet the real thing. The theory delivers much, but it hardly brings us closer to the secret of the Old One. In any case, I am convinced that He does not play dice.”

Although this is a well-attested text, the phrase kaum näher (“hardly closer”) is often cited as nicht näher (“not closer”). The lexical difference is slight, and the straightforward meaning of the quote doesn’t change.

Sometimes the text is quoted with an individual word italicized for emphasis.

Only by examining the letter itself could one contend for or against italicizing any particular word in the text. Was the original letter handwritten or typed? Does it still exist in some library or archive?

Some historians assert that the same or similar phrasing may have been used in a letter to Niels Bohr. Again, archival research would be required to confirm or deny such statements.

In a letter to Cornelius Lanczos, dated 21 March 1942, Einstein wrote:

Es scheint hart, dem Herrgott in die Karten zu gucken. Aber dass er würfelt und sich telepathischer Mittel bedient (wie es ihm von der gegenwärtigen Quantentheorie zugemutet wird), kann ich keinen Augenblick glauben.

In English: “It seems hard to peek at God’s cards. But that He plays dice and uses telepathic means (as the present quantum theory requires of Him) is something I cannot believe for a single moment.”

Commenting on this letter, Helen Dukas and Banesh Hoffmann write that Einstein was “expressing his dissatisfaction with quantum theory, with its denial of determinism and its limitation to probabilistic, statistical predictions.”

Dukas and Hoffmann offer translations of parts of the letter; in one, Einstein’s view of physics is rendered as “the comprehension of reality through something basically simple and unified.”

Although the transmission of these texts is a bit wobbly, there is certainly enough evidence to justify the assertion that Einstein sought, and asserted the existence of, lawlike regularities and causation in physics. He was not content with merely statistical phenomena, but rather wanted the concrete noumena which lay beneath those generalizations.

Einstein didn’t reject statistical methods, like those of, e.g., Boltzmann. But he seems to have viewed them as mere characterizations of observations, as phenomenal, and not as an explanation or a mathematical model of the underlying noumena.

Tuesday, July 9, 2024

A Debatable Word: Defining ‘Atheism’

Among both philosophers and non-philosophers, the word ‘atheism’ appears in a variety of different types of text. It is uttered in conversations, lectures, and debates. Many readers or listeners have strong and passionate reactions to this word — pleasant or otherwise. Nearly everyone who uses the word believes that he or she knows what it means — or what he or she means by it.

An analysis of this word’s use reveals that its significance is not so obvious.

It is not, e.g., a synonym for ‘irreligious’ or ‘antireligious’ — as is clear from the existence of many theists who are both antireligious and irreligious.

A lucid and intelligible definition of ‘atheism’ is needed. Many words have more than one definition. If ‘atheism’ has more than one definition, then what is most needed is the definition which will serve to clarify philosophical analysis. The word may have other definitions which circulate in the non-philosophical world, but they are of no import here.

It would be reasonable to propose that ‘atheism’ be defined as primarily an ontological word. The word is used when discussing whether or not a certain object exists, and that object is usually labeled ‘God.’ This type of definition raises the question of how ‘God’ is defined, and so merely postpones the question. Yet, despite the fact that this definition delays, rather than directly resolves, the matter, it still has a certain merit, because it directs attention to the question of whether or not something exists.

Like ‘atheism,’ the word ‘God’ tends to be informally associated with various passions and experiences — pleasant or painful. The word ‘God’ may precipitate or prompt — or “trigger” — memories of encounters with organized religion or spiritual texts. It might be helpful to use synonyms: a ‘higher power’ or a ‘deity’ or another word can serve, as can circumlocutions like ‘unmoved mover’ or ‘that which must necessarily be’ or ‘prime mover’ or ‘prime cause.’

The question of existence is central in the word ‘atheism.’ This question is often lost in various side-issues which are tangential to a discussion of atheism.

It must be stated explicitly that a discussion of atheism is not a discussion of religion, and that these two discussions are quite separate. Religion is a socio-cultural artifact: a civilization’s response to whichever possible answer — or answers — the civilization has proposed to the question of atheism.

It is a recurrent mistake to label someone an atheist merely because she or he delivers a profound, impassioned, and accurate critique of religious institutions. A person can find a particular religion, or religion in general, to be wrong, dangerous, and repulsive without being an atheist. Often, the most intense critique of religion is delivered by those whose belief in God is equally intense.

The question of atheism, as a simple ontological question, would still be a reasonable question in a universe in which no human being existed — and in a universe which was also otherwise unpopulated. A question of the form “Does a certain object X exist?” is a simple question and is often — always? — reasonable to ask. The question is independent of any psychological or cultural traditions which a civilization may have formed around the concept of object X — whether or not it exists.

Whether one be “for” or “against” atheism, it is necessary that one isolate the question of atheism from any discussion or exploration of organized religion. Religion is ultimately a human institution. For the theist, religion is a response — and here the theist may label it a correct or an incorrect response — to the existence of God. For the atheist, religion is a construct which makes reference to a non-existent object.

The most insightful critique of religion, or even the rejection of religion altogether, is not equivalent to atheism. Some of the harshest criticisms of religion come from committed theists.

Conversely, there are atheists who are quite fond of religion.

It is possible, and in point of fact has happened, that people who work within religious institutions and who hold titles like priest, rabbi, imam, preacher, pastor, minister, etc., are in fact atheists, despite their affection for the spiritual organization in which they carry out their daily duties.

Refining the definition of ‘atheism’ can help to expose fallacies. To identify atheism with a critique of religion is a common mistake; more than one freshman has been heard to say “I’m an atheist because I hate going to church.” A variation on this fallacy is to label anyone who attacks orthodoxy as an atheist; this has in fact happened among those who work in the academic discipline of the History of Ideas.

Rigorously applied, a clarified definition of ‘atheism’ might yield some surprises: those often considered atheists might be theists, and those regarded as theists might be atheists.

Thursday, June 27, 2024

Can Time Run Backward? Boltzmann’s Statistical Understanding

Time has long fascinated physicists for many reasons, but perhaps chiefly because of the four dimensions — three spatial and one temporal — only time is directional. Objects can move, or be moved, in space from up to down, and from down to up. They can go from right to left, or from left to right. Objects can travel from fore to aft, and from aft to fore.

But in time, objects — or people — can travel only from the past to the present and then on to the future, but never from the future into the past.

This unidirectionality leads to many questions: Is the unidirectionality of time truly a universal rule? Are there any exceptions? If it is a rule, why? Why are the other directions not unidirectional? What would be the difference between an ‘object traveling backwards through time’ and ‘time itself running backwards’?

To borrow a bit of verificationist jargon: if it makes sense to say that certain events can’t happen, then those events — the events that can’t happen — need to be described well enough that an observer could know what they are. If it makes sense to say that an object can’t move backwards through time — i.e., travel from the future to the past — then the observer will need to know what to look for; and when the observer doesn’t find it — when no objects are seen moving backwards through time — the saying will have been verified. But in order to confirm that no objects are moving backward through time, the observer needs a description of what it means for an object to move backward through time, and not simply any type of description, but one which can be compared to any bit of sense data or perception. This comparison will yield one of two answers: either “yes, it’s an object moving backward through time,” or “no, it’s not.” Presumably, the answer will always be “no.”

The description which either does, or does not, match an observer’s experience, is necessary in order for the saying to have meaning. Rudolf Carnap famously wrote that “we conclude that there is no way of understanding any meaning without ultimate reference to ostensive definitions, and this means, in an obvious sense, reference to ‘experience’ or ‘possibility of verification’” and “the meaning of a proposition is the method of its verification.”

Carnap’s phrases became slogans for the verificationist school of thought.

For the notion of an object moving backward through time, there are many such potential descriptions, provided in print by science fiction writers, and on screen by science fiction films. Of course, some of those descriptions may be more useful than others. But there must be some description which answers the question: What does it look like for an object to move backward through time? If there is no description, then the observer has nothing with which to compare his observations.

But it seems more difficult to give meaning to the phrase “time itself moving backwards.” What would this mean? What would it look like? What description of “time moving backwards” would one give to an observer, to tell him what to look for?

If it were taken as axiomatic that time can’t run backwards, then to give that phrase meaning, the observer would examine various events to confirm that in none of them was time running backwards. What would the observer seek, and not find, in those events?

To say that “there are no purple flowers in the garden,” and to ensure that this saying has meaning, one would have to know what a purple flower looked like, so that when one looked in the garden, one could confirm that one did not find a purple flower.

Likewise, to give meaning to the utterance, “time can’t run backwards,” one would need to know what it would look like for time to run backwards, so that one could confirm that one did not see it.

Unless, that is, time actually can run backwards.

In a Scientific American article, Martin Gardner details how some physicists attempt “to give an operational meaning to ‘backward time’” and do so “by imagining a world in which shuffling processes went backward, from disorder to order.”

One might imagine shuffling a deck of cards, starting with 52 cards in a pile in no discernible order. After shuffling them, one might find them arranged in a clear order, e.g., by rank and suit. This is improbable, of course — massively, overwhelmingly improbable.

Improbability can be overcome by iteration. If billions of men were shuffling billions of decks of cards, and did so endlessly for billions of years, then it is not so improbable that one day, a scrambled deck of cards would be neatly ordered after shuffling.
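The arithmetic behind that improbability can be sketched in a few lines of Python. The shuffler counts below are illustrative numbers chosen to echo the thought experiment, not measurements:

```python
import math

# Number of distinct orderings of a standard 52-card deck.
orderings = math.factorial(52)

# Probability that one fair shuffle yields one particular ordering
# (e.g., sorted by rank and suit).
p_ordered = 1 / orderings
print(f"52! has {len(str(orderings))} digits")  # 68 digits
print(f"log10 of the single-shuffle probability: {math.log10(p_ordered):.1f}")

# One illustrative iteration budget: a billion shufflers, each
# performing one shuffle per second, for a billion years.
shuffles = 10**9 * 10**9 * 365 * 24 * 3600
expected_successes = shuffles * p_ordered
print(f"expected ordered decks under that budget: {expected_successes:.3e}")
```

The useful lesson of the exercise is that improbability of this kind is a matter of exponents: iteration chips away at the exponent, and whether a given amount of iteration suffices depends on how the exponents compare.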

Similarly, the reader will be familiar with the story about monkeys and typewriters, which, if left to their task for enough time, would eventually type out an accurate copy of Shakespeare’s works.

Of course, nobody is organizing billions of card dealers or billions of monkeys. But the air in any room is filled with billions and trillions of gaseous molecules, each of which is constantly vibrating and moving about in a random or near-random fashion. Martin Gardner explains how one physicist viewed this:

Ludwig Boltzmann, the 19th-century Austrian physicist who was one of the founders of statistical thermodynamics, realized that after the molecules of a gas in a closed, isolated container have reached a state of thermal equilibrium — that is, are moving in complete disorder with maximum entropy — there will always be little pockets forming here and there where entropy is momentarily decreasing. These would be balanced by other regions where entropy is increasing; the overall entropy remains relatively stable, with only minor up-and-down fluctuations.
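Boltzmann’s picture of equilibrium fluctuations can be made concrete with a toy simulation. What follows is a sketch of the Ehrenfest urn model, a standard pedagogical stand-in for a gas, not anything drawn from Gardner’s article: N “molecules” hop at random between the two halves of a box, and a two-bin entropy is tracked. The entropy hovers near its maximum, dipping slightly whenever chance momentarily crowds one side, which corresponds to the “little pockets” of decreasing entropy:

```python
import math
import random

def entropy(n_left, n_total):
    """Two-bin Shannon entropy (in bits) of the left/right occupancy."""
    p = n_left / n_total
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

random.seed(0)
N = 1000                  # number of "molecules"
n_left = N // 2           # start at equilibrium: half on each side
history = []

for step in range(100_000):
    # Ehrenfest urn dynamics: pick a molecule at random; it switches sides.
    if random.randrange(N) < n_left:
        n_left -= 1
    else:
        n_left += 1
    history.append(entropy(n_left, N))

print("max possible entropy: 1.0 bit")
print(f"mean observed entropy: {sum(history) / len(history):.4f}")
print(f"min observed entropy:  {min(history):.4f}")  # a transient low-entropy "pocket"
```

The mean stays very close to the maximum, while the minimum records the deepest transient fluctuation; the dips are balanced, as the quotation above says, by regions of the run where entropy climbs back up.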

Boltzmann transferred this pattern from a container of gas to the universe at large. If Brownian motion created these situations in a glass jar of nitrogen or carbon dioxide on a table in Boltzmann’s office at the University of Vienna, then the universe as a whole, with its many galaxies, might exhibit the same behavior, simply on a larger scale:

Boltzmann imagined a cosmos of vast size, perhaps infinite in space and time, the overall entropy of which is at a maximum but which contains pockets where for the moment entropy is decreasing. (A “pocket” could include billions of galaxies and the “moment” could be billions of years.) Perhaps our flyspeck portion of the infinite sea of space-time is one in which such a fluctuation has occurred. At some time in the past, perhaps at the time of the “big bang,” entropy happened to decrease; now it is increasing. In the eternal and infinite flux a bit of order happened to put in its appearance; now that order is disappearing again, and so our arrow of time runs in the familiar direction of increasing entropy. Are there other regions of space-time, Boltzmann asked, in which the arrow of entropy points the other way? If so, would it be correct to say that time in such a region was moving backward, or should one simply say that entropy was decreasing as the region continued to move forward in time?

One way to formulate Boltzmann’s question is to compare these two phrases:

Increasing entropy in time
Increasing entropy is time

Does disorder increase over time — in time, within time? Or is time the increase in disorder?

Is it valid to transfer, as Boltzmann does, the pattern from gas confined in a container to the entire universe? There may be some differences between the two cases. In the case of gas in a sealed glass jar, the observer, in this case the physicist in the laboratory, is in a situation which, if not that of an omniscient God, is at least similar: the observer hovers above the jar and, even if the pockets of disequilibrium are not visible, is able to view the entire “universe” of the experiment at once, and to watch the experiment from inception to completion.

When the concept is transferred to the entire physical universe, or to one like it in a thought experiment, the situation of the observer is more problematic. The word ‘entire’ entails that the observer be in the universe. The observer is not in an omniscient role and able to view the universe from the outside. If one were to be in a universe, then how would one know if some pockets of this universe had a time which ran in a different direction than other pockets of this universe?

There might be some empirical problems, as Martin Gardner points out: if the observer is in a pocket of the universe in which time is running forward, and attempted to observe a pocket in which time were running backwards, he would not be able to see such a pocket, even if it were present, because it would be inhaling light rather than exhaling it. A separate problem arises when the observer asks whether he resides in a pocket of the universe in which time runs forward, or in a region in which it runs backward. Here one might invoke the Leibnizian principle of the identity of indiscernibles: time running forward and time running backward would be empirically the same for the observer.

The notion of a pocket of the universe absorbing rather than emitting light, because time was running backwards in that pocket, has caused some, like Raphael Bousso and Netta Engelhardt, to ask whether a “black hole” has a local time which runs backward.

The difficulty of determining, for a given pocket of space, whether its time runs forward or backward tempts one to posit a sort of meta-time, which would act as a framework around the entire universe and would allow the observer to determine whether times in different pockets of space were running in the same direction or in different directions.

When one describes a universe in which some regions have time running forward and other regions have time running backward, it is tempting to add a phrase to this description: “at the same time” or “simultaneously.”

One might say: “In this universe, the time in region A is running forward, and at the same time, the time in region B is running backward.” But in saying this, one has introduced the notion of meta-time.

But the idea of meta-time is problematic. It can lead to an infinite regress of meta-meta-time and meta-meta-meta-time, etc.

It is worth noting that Boltzmann came to this problem from the discipline of physics, or chemistry, or physical chemistry. One of his books is titled Gastheorie. The behavior and properties of gases are treated statistically. A jar of nitrogen or CO2 contains billions and trillions of molecules, randomly vibrating and traveling to and fro. It is not possible to detail the exact movements of each individual molecule, and such information would yield little insight into the physics of gases. But a statistical view of the movements of all the molecules helpfully characterizes the behavior and properties of the gas in question.

When Boltzmann then begins to ponder entropy decreasing and time running backwards in his glass jar of gas, Martin Gardner explains that “No basic laws would be violated, only statistical laws.”

Given that our general, even universal, experience is one of time moving forward, how might one explain the hegemony of forward-moving time in human experience, and the absence of backward-moving time? The emphasis here is on ‘experience,’ given that the hypothetically possible cases of backward-moving time are not part of direct experience; those cases are invisible: the gas in the jar, or the galaxy which absorbs light rather than emitting it.

What explains the ubiquity of forward-moving time, given the possibility of backward-moving time? The answer again relates to statistics, as Gardner writes:

It was here, in the laws of probability, that most 19th-century physicists found an ultimate basis for time’s arrow. Probability explains such irreversible processes as the mixing of coffee and cream, the breaking of a window by a stone and all the other familiar one-way-only events in which large numbers of molecules are involved. It explains the second law of thermodynamics, which says that heat always moves from hotter to cooler regions, increasing the entropy (a measure of a certain kind of disorder) of the system. It explains why shuffling randomizes a deck of ordered cards.

Young students are often given this example: a solid object, e.g., a rock, could suddenly disappear in a cloud of dust and vapor, if the Brownian motion of all its molecules randomly happened to behave in precisely the right way. But the probability of this happening is so close to zero that it is treated as zero. It is a statistical explanation.
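The phrase “so close to zero that it is treated as zero” can be given a numerical flavor. The figures below are purely illustrative assumptions, chosen only to show the scale; the point is that joint probabilities of this kind underflow ordinary floating-point arithmetic and must be handled on a logarithmic scale:

```python
import math

# Purely illustrative assumptions (not measured values): suppose each
# molecule independently has a one-in-a-thousand chance of moving the
# "right" way at the right instant.
log10_p_single = math.log10(1e-3)   # -3.0
n_molecules = 1e23                  # rough order of magnitude for a small rock

# The joint probability is (1e-3) ** 1e23, which underflows any float,
# so the arithmetic has to be done in log space.
log10_p_joint = n_molecules * log10_p_single

print(f"log10 of the joint probability: {log10_p_joint:.2e}")
# A probability of about 10**(-3e23): a decimal point followed by
# roughly 3e23 zeros. For all practical purposes, zero.
```

However the per-molecule numbers are chosen, the exponent scales with the number of molecules, which is why the statistical explanation treats such events as impossible in practice.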

Similarly so with the directionality of time, as Gardner writes:

Physicists and philosophers argued that statistical laws provide the most fundamental way to define the direction of time.

So Boltzmann’s view of time is ultimately a statistical one, and the questions which his view raises are to be understood in that way.

The questions which Boltzmann’s work, directly or indirectly, raises are still open, and still the focus of research and debate.

Wednesday, March 27, 2024

Edward Teller: Physics and Politics

Edward Teller was born in Hungary in 1908, and moved to the United States in 1935. In 1942, he became part of the Manhattan Project, and in 1943 started working in Los Alamos. While there, he worked somewhat on the core mission of the project, the development of a fission bomb, but he worked more eagerly on the development of a fusion bomb. In the postwar years, he continued to work in varying capacities on the fusion bomb. The first successful demonstration of it happened in 1952.

Some newspaper reporters began referring to Teller as the “father of the hydrogen bomb.” Yet this is debatable. Stanislaw Ulam also did significant work on the project, and could conceivably earn this title. More accurately, it was a team effort, and no one individual could claim sole credit. The hydrogen bomb even had a mother: Maria Goeppert Mayer. Born in Germany, she worked with both Teller and Ulam.

By the same token, probably no one individual earns the title of “father of the atomic bomb,” although that phrase has been applied to several individuals: Albert Einstein, whose famous equation pointed to the convertibility of matter into energy, and who wrote of it to President Roosevelt; General Leslie Groves, who directed the Manhattan Project; Robert Oppenheimer, who managed the project; President Truman, who directly ordered the use of the bomb; and perhaps others.

Scientific research and industrial development of processes like fission and fusion are too complicated to be the product of one man. It was all teamwork.

At several points in the postwar decades, various individuals raised the question of whether it had been ethically acceptable to develop and use the atomic bomb; later the same question was raised about the development of the hydrogen bomb.

In a 1999 interview with Teller, author Gary Stix asked about these ethical concerns:

What would have happened, I ask, if we hadn’t developed the hydrogen bomb? “You would now interview me in Russian, but more probably you wouldn’t interview me at all. And I wouldn’t be alive. I would have died in a concentration camp.”

Teller understood the dynamics of deterrence. Ultimately, the Cold War ended without a face-to-face war between the USSR and the United States. World War III was averted. Brinkmanship avoided the many millions of casualties and the devastating nuclear explosions which would probably have been part of that war.

The analytic skills needed in physics are transferable to geo-political history: Teller concluded that the Soviet Socialists were essentially of the same nature as the Fascists, the Nazis, and the Japanese militarists. He opposed them all, as Gary Stix writes:

Teller’s persona — the scientist-cum-hawkish politico — is rooted in the upheavals that rocked Europe during the first half of the century, particularly the Communist takeover of Hungary in 1919. “My father was a lawyer; his office was shut down and occupied by the Reds. But what followed was an anti-Semitic Fascist regime, and I was at least as opposed to the Fascists as I was to the Communists.”

Technological and scientific development would be the way to preserve freedom and liberty, and to eventually dismantle Soviet Socialism, in Teller’s view. He promoted a full effort to develop nearly every aspect of high-tech warfare, from the atomic and hydrogen bombs to missile defense systems to protect America’s civilian population.

Avoid war by continually developing ever more powerful weapons, and by showing the enemy that the United States was ready to use them. In the end, the Soviet Socialists couldn’t keep up the pace of research and development: financially, they couldn’t afford it.

It was a war of economic attrition:

The Soviets could never compete with America’s electronic weaponry — and even less with the northern Californian economic vibrancy that produced Macintosh computers and Pentium processors.

Teller’s vision of technology as the best path to peace was confirmed:

In the end, microchips and recombinant DNA — two foundations of the millennial economy — helped to spur the end of the cold war.

Edward Teller was not only a superlative physicist and a shrewd geo-political strategist; he also explored questions of earth science. He was one of the first to use the phrase “greenhouse effect” — perhaps even the very first. He spoke of it in 1959. He later proposed his own solution to it: having concluded that reducing CO2 emissions was impractical, he advocated releasing fine particles into the upper atmosphere.

When he thought, he thought big. The term “Tellerism” is still occasionally used to describe a grandiose way of thinking and the promotion of grandiose solutions to problems.