Wednesday, September 4, 2019

Extinction as the Default Fate: Concerning the Ubiquity of Selective Pressure

Throughout the history and prehistory of our planet, countless species have become extinct. There are many reasons for extinction, but most of them can be lumped together as changing habitats.

Any change in a habitat will introduce selective pressure.

As a concrete example, consider the activity of beavers in North America. In a savanna or in a forest, beavers will construct a dam across a small creek or stream, and thereby create a small pond.

The populations in this ecosystem will change almost immediately. The numbers and types of fish, waterfowl, and trees will change.

As the beaver’s pond ages, it can gradually turn into a larger lake, or it can fill in with sediment and become a swamp. In either case, the populations of flora and fauna will change again.

The changes brought about by the beavers will mean that numerous individuals belonging to these various species will die.

This is merely one example of the many different ways in which a habitat can change, and thereby exert selective pressure on the populations in that habitat. But not only can numerous individuals die when habitats change. When changes are large enough and numerous enough, entire species can become extinct.

Aside from beaver activities, there are other ways in which habitats change: solar activity, volcanic activity, changes in the earth’s magnetic field, etc.

These factors, and many others, cause extinction, as David Wallace-Wells writes:

The earth has experienced five mass extinctions before the one we are living through now, each so complete a wiping of the fossil record that it functioned as an evolutionary reset, the planet’s phylogenetic tree first expanding, then collapsing, at intervals, like a lung: 86 percent of all species dead, 450 million years ago; 70 million years later, 75 percent; 125 million years later, 96 percent; 50 million years later, 80 percent; 135 million years after that, 75 percent again.
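As a reading aid (not part of the quotation), the approximate dates implied by those intervals can be tallied with simple arithmetic:

```python
# Tally the approximate dates implied by the quoted intervals.
# Each pair is (million-year gap since the previous event, percent of
# species lost); 450 Mya is the starting point given in the quotation.
events = []
date = 450
for gap, pct in [(0, 86), (70, 75), (125, 96), (50, 80), (135, 75)]:
    date -= gap
    events.append((date, pct))

print(events)
# Places the five extinctions at roughly 450, 380, 255, 205, and 70
# million years ago.
```

The point of the tally is only that the five events are scattered across deep time, long before any human influence.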

A small cluster of microbes, which can travel hundreds of miles by clinging to a bird or to a bit of driftwood, can introduce diseases into regions, and quickly wipe out an entire species — or several species.

These forces are non-anthropogenic. The vast majority of extinctions happen without any human influence. The natural forces of selective pressure are much more ruthless than the effects of human activity.

The earth’s atmosphere changes spontaneously, again without any human intervention. Both the climatic temperature and the carbon dioxide levels have demonstrated their ability to vary wildly. Of the mass extinction events which have regularly happened throughout the planet’s history,

all but the one that killed the dinosaurs involved climate change produced by greenhouse gas. The most notorious was 250 million years ago; it began when carbon dioxide warmed the planet by five degrees Celsius, accelerated when that warming triggered the release of methane, another greenhouse gas, and ended with all but a sliver of life on Earth dead.

The direction of causation between extinction events and climate change is not always clear: in some cases, an extinction event may have caused the climate change; in other cases, climate change may have caused the extinction event.

In any event, these events were non-anthropogenic.

The chances of any one species becoming extinct are high; the chances of it surviving over the long run are low. Survival should be seen as an exception; extinction is the norm.

Viewing earth’s past on a geological time scale, it is clear that climate change, CO2 levels, and extinction events happen spontaneously and sometimes independently over the millennia. It is clear that in the past and in the present, they can and often do happen non-anthropogenically.

Saturday, July 27, 2019

Every Species Was Once an Invasive Species: Concerning the Ubiquity of Selective Pressure

The efforts to identify, and slow the progress of, so-called “invasive species” have occupied the time and energy of botanists, zoologists, park rangers, and good-hearted volunteers over the last several decades.

Who would not want to protect the presumed “native species” from these invaders?

Two aspects of such invasions should be made explicit. First, such encroachments are usually not anthropogenic. Second, such incursions are inevitable.

Species were invading each other’s habitats long before humans had the ability, by means of long-range travel, to accidentally or knowingly introduce alien plants and animals into new domains. Insects clinging to driftwood can cross oceans. Fish eggs on the feet of waterfowl can travel from one inland lake to another.

If a species is originally confined to one habitat, then it is certain that it will one day either go extinct, or it will find its way into another habitat — and thereby become an invasive species. A microbe or a plant originally found in Africa will, sooner or later, arrive in Asia, Europe, or the Americas — or it will become extinct.

The species which now seem to be the native species in a given domain, whether it’s a North American grassland or an Asian rainforest, were once invaders. There was a time when those species were not to be found in that place. The arrival of those flora and fauna into the current location had nothing to do with humans.

In a tendency analogous to entropy, all the species eventually swirl in slow-motion around the globe. Every species is an invasive species, and by the same token, every species is a native species.

It is also inevitable that the vast majority of these species will eventually become extinct, and this again will not be an anthropogenic process.

Any effort to stop or slow invasive species may be aesthetically productive, but the judgment about which species has the right to be in a particular habitat is at best unclear, and the efforts to stop such invasions will ultimately fail.

Thursday, July 18, 2019

Archaic Greek Religion: Its Inability to Hold Adherents Fostered the Birth of Philosophy

One important preliminary question, when examining the archaic Greeks, and even more so when examining pre-Socratic Greek philosophy, concerns the relationship between the Greeks and their religion.

It is common among historians to treat archaic Greek religion - and Classical Greek and Hellenistic Greek religion - as a matter of personal belief. This approach to ancient Greek religion is perhaps influenced by the effects of Jesus and his followers, which began around 35 A.D.

To retroject this approach to religion onto 700 B.C., or onto 500 B.C., is anachronistic.

Instead of seeing archaic Greek religion as analogous to religious belief as manifested over the last two millennia, as analogous to medieval and modern conceptualizations of Judaism or Islam, it is perhaps more accurate to see archaic Greek religion as a cultural or societal reference point rather than a personal faith.

By way of comparison, the archaic Greeks may have treated their gods in the way in which twenty-first century people treat figures like Santa Claus or the Easter Bunny, like Darth Vader or Yoda, like Uncle Sam or the Energizer Bunny.

In sum, the archaic Greeks may not have ‘believed’ in their gods in the way in which twenty-first century people believe in Jesus, Moses, Muhammad, or Abraham — in God, Yahweh, Jehovah, or Allah.

The small ‘g’ in ‘gods’ may denote that the ancients did not have a personal belief in their deities in the way in which medievals, moderns, and postmoderns have a personal belief in their God.

Even a scholar of the rank of Eduard Zeller may have fallen prey to the temptation to think of archaic Greek religious belief as analogous to the Christian faith as manifested in European culture.

Zeller seems to credit archaic Greek religion with properties that fostered the birth of philosophy. It might have been more accurate to credit the archaic Greeks with a lack of religious belief that fostered the birth of philosophy.

In this text, Zeller seems to credit Greek religion with qualities that nurtured philosophy, whereas it was perhaps the Greek lack of religion that fostered philosophy:

The religion of the Greeks, like every positive religion, stands in a relationship to the philosophy of this people which is partly one of kinship and partly one of opposition. What distinguishes it from the religions of all other peoples, however, is the freedom which it allowed, from the very beginning, to the development of philosophical thought.

Perhaps, when he alludes to the ‘freedom’ which archaic religion gave to its followers, and to their development of philosophical thought, Zeller is correct. But he is perhaps incorrect when he sees archaic religion as in any way related to philosophy.

As Nietzsche wrote, it is precisely in the break with archaic religion that philosophy is born: in the break with mythology. In this case, ‘mythology’ means narrative as explanation — and therefore includes true myths as well as false ones. But archaic religion was essentially mythic: it used narrative as explanation.

By contrast, the birth of philosophy yielded explanation without myth. The first philosophers — Thales, et al. — were not necessarily more ‘true’ than the archaic religion, but they were more rational.

For these purposes, the ‘archaic’ era in Greek history can be understood as lasting from approximately 800 B.C. to approximately 480 B.C.; being a construct and not a concrete bit of data, such an “era” cannot have clear or precise beginning or ending points.

The loose grip — the lack of personal faith — which the archaic Greeks had on their mythological religions had the same salutary effect as the tight grip — the profound personal engagement — which the medieval Christians had on their faith. The Greek lack of faith and the medieval surplus of faith both nurtured philosophical thought.

The pantheon of archaic deities and demigods served primarily to fuel poetry, painting, storytelling, sculpture, and other arts. Their ubiquity in literature and archeological findings should not mislead the modern student into thinking that the archaic Greeks incorporated these gods intimately into their inner lives.

The archaic Greeks did not relate to their gods in the personal way in which twenty-first-century people relate to Jesus, Allah, Yahweh, Jehovah, or God.

The relationship of the archaic Greeks and their gods was probably like the relationship between twenty-first-century people and figures like Batman, Spider-Man, Charlie Brown, and Snoopy.

Sunday, June 16, 2019

Neuroplasticity and Mind Control: Dissociative Thought on a Cellular Level

Two fields of study have made great progress in the last half-century: the understanding of the physical structure of the brain, and the understanding of the mechanisms of thought control.

The advances in knowledge about neuroplasticity are exemplified in, e.g., a book titled Mindsight by Daniel Siegel.

The techniques of thought reform have been explored by scholars like Louis West, Robert Jay Lifton, Edgar Schein, and Steven Hassan.

The reader is now in a position to make connections between these two different fields of study.

For the present purposes, the terms ‘thought control,’ ‘mind control,’ ‘thought reform,’ ‘unethical influence,’ ‘undue influence,’ and ‘controlling relationships’ are taken as nearly synonymous, if not entirely so. It is perhaps wise to avoid, to the extent possible, the use of the word ‘brainwashing.’

The interaction between neuroplasticity and thought control can be imagined in this way: consider how people ordinarily learn numbers and the skill of counting. Small children, around the age of two or three years old, are taught to count by repeating, along with a teacher or parent, “one, two, three, four, five, six, seven” over and over again.

The process of learning to count is not only a mental process, but a physical one as well. Neural pathways are being created, and are being reinforced so that they become default pathways.

Later in life, in a math class, students will be taught alternative forms of counting, like “two, four, six, eight” or “one, three, five, seven.” But the “one, two, three, four, five” pattern will remain dominant. That pattern will remain the default pattern for reasons which are physical, not mental. The pattern of counting numbers is stronger because it has been built into the brain: pathways have been created, reinforced, and strengthened.

Each time a person says, “one, two, three, four,” that pathway is strengthened. People are continually counting all types of things in all types of situations in daily life. The occasions on which one counts “two, four, six, eight” or “one, three, five, seven,” are relatively rare.

An ordinary adult will, then, have a cellular structure in the brain which corresponds to counting. That’s why people can count automatically, while not thinking about counting, and while thinking about something else.

If some evil genius decided to mislead a group of people about the nature of numbers, he could form a group, in which people spent large amounts of time chanting together: “one, two, three, four, eight.”

In addition, he would isolate his group, as far as possible, from any situation in which they’d hear the correct pattern of counting.

At first, the members of the group would find this odd. They’d ask lots of questions, and need to be persuaded to explore this new way of counting. The evil genius leading the group would need to articulate rationalizations for this new way of counting.

The more experienced members in the group - those who’ve already been counting in the new way for some period of time - might encourage or cajole the newcomers to try this new way. They might explain how their lives are better because they count this way.

The veteran members of the group could reward the newcomers emotionally, applauding and praising them when they count in the new way. Likewise, failure to count in the new way could meet with expressions of disapproval.

Although counting “one, two, three, four, eight” would feel odd to the newcomers, each time they did it, a new neural pathway would be strengthened. It would feel odd for a long time, but each time, it would feel a tiny bit less odd.

Likewise, the old neural pathway of counting the correct way would suffer from disuse, and eventually grow a tiny bit weaker.
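The dynamic described above - repetition strengthening one pathway while disuse weakens another - can be sketched as a toy model. The numbers here are arbitrary illustrative parameters, not neurological data:

```python
# Toy model: each rehearsal of a pattern strengthens its pathway a little,
# while every unused pathway decays a little.
def rehearse(strengths, pattern, gain=0.1, decay=0.02):
    """Return updated pathway strengths after one rehearsal of `pattern`."""
    return {p: s + gain if p == pattern else max(0.0, s - decay)
            for p, s in strengths.items()}

strengths = {"1-2-3-4-5": 1.0, "1-2-3-4-8": 0.0}
for _ in range(100):                      # a hundred chanted repetitions
    strengths = rehearse(strengths, "1-2-3-4-8")

print(strengths)
# The aberrant pattern now dominates, despite starting from nothing,
# and the formerly default pattern has decayed away.
```

The model captures only the essay’s claim in miniature: rehearsal, not conviction, is what builds the default.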

Even if there were some lingering doubts on the cognitive level about the new way of counting, on a behavioral level it would eventually feel less odd, then normal, and finally automatic.

Thus, in an everyday situation, prompted to count, the newcomers would get to the point at which they automatically responded “one, two, three, four, eight” without thinking.

They might eventually grow suspicious of those who count correctly.

This example strives to show how a pattern which seems wrong and counterintuitive can eventually, despite initial doubts on the part of the newcomer, become thoroughly ingrained in the mind.

The next step would be working out the logical implications of the new pattern. Any task of daily life which required counting would have to be reimagined.

In reality, there are no evil geniuses hoping to change the way in which the human race counts. But there are evil geniuses hoping to instill various political and spiritual doctrines into the minds of innocent people. They do this by building and reinforcing neural pathways.

Even newcomers who are skeptical, who doubt the aberrant doctrines which the evil genius wishes to implant into their consciousness, will find that their skepticism does not prevent the new patterns from being built into their synaptic structures.

So it is that a belief - what seems behaviorally to be a belief - can be instilled into someone’s mind without consent or full awareness. This principle lies behind some forms of successful advertising as well as behind the more sinister forms of mind control.

Understanding that the implanted belief is a physical structure also hints at therapeutic options to help the individual who is recovering from thought control.

Wednesday, April 10, 2019

A Philosophical Classic: Determinism, Moral Responsibility, and Truth Claims

Throughout the history of philosophy, certain themes recur regularly. Various philosophers in various eras return again and again to perennial topics.

The relationship between certain types of psychological determinism and ethical responsibility is one example. A typical formulation goes something like this: if an individual is determined, logically and temporally prior to his acting, in a way beyond his control or awareness, then when he acts, he cannot be morally responsible for his action.

This formulation is, of course, one of many, and is often used in introductory philosophy classes to begin a discussion.

A second textbook example looks at the relationship between determinism and truth claims. If an individual is in certain ways determined, what does it mean for that individual to utter, or write, a truth claim? Does his lack of freedom in making assertions affect his belief in his own truth claim? Should it affect our evaluation of his truth claim? Was he able to examine the proposition, and alternative propositions, before expressing it?

A third exemplar is a mixture of the first two. What is the relationship between the ability to be morally responsible and the ability to make truth claims? If both are called into question by determinism, then what is the common element in both?

Such classic investigations can be found in Greek antiquity, in contemporary philosophy, and at many points in between.

Tuesday, April 9, 2019

Overeager Claims about Self-Replication: Thinly Disguised Speculations about the Origins of Life

A perennial question in the philosophy of science addresses the origin of life. The natural sciences themselves investigate this question, but the philosophy of science ponders both the methods of such investigations and any results.

One aspect of this question is exploring the possibility of self-replicating molecules. Is it possible that there could be a chemical compound which somehow reproduces itself?

The history of science admits of a number of “holy grail” quests, some of which have plausible claims to success, like the search for metallic hydrogen, and others of which have failed spectacularly, like attempts to isolate samples of phlogiston or aether. Other “holy grail” quests include the Grand Unified Theory (GUT) or the possibility of life outside of planet Earth.

Historically, such quests often trigger hasty claims, which must then be retracted. Such is the case with self-replicating molecules.

With barely-suppressed fanfare, an article titled “Oligoarginine peptides slow strand annealing and assist non-enzymatic RNA replication” appeared in June 2016. The authors wrote of “self-folding” molecules and the “self-assembly” of compounds.

The core motive of such research is the unstated subtext that self-reproducing non-living chemical structures could eventually lead to life.

Living structures reproduce themselves routinely. Non-living structures have never, so far, been observed to reproduce themselves.

The question for the philosophy of science is whether it is possible, even in principle, for a molecule to self-replicate.

The empirical question searches for instances of self-replication. The a priori question asks if such a thing is at all possible.

In any case, the particular publication mentioned above was retracted in October 2017. The article and its retraction both appear in the journal Nature Chemistry, edited by Stuart Cantrill. It was not the first, and will not be the last, overeager announcement of progress toward the discovery of a self-reproducing compound.

More promising than the observational task is the theoretical question. Without examining any particular chemical structure, the philosophy of science can ask what would be required to demonstrate the plausibility of the idea of a self-replicating molecule in general. Which general principles of covalent bonding, of ionic bonding, or of chemical reactions, etc., would indicate that it is at all possible, in principle, for there to be a self-replicating molecule?

It would be a mistake to confidently predict any outcome to this search - such predictions lead to retractions like the one mentioned above, made by Tony Jia, Albert Fahrenbach, Neha Kamat, Katarzyna Adamala, and Jack Szostak.

The fewer triumphalist claims on behalf of self-replicating compounds, the better.

Wednesday, February 27, 2019

Always and Again: The Centrality of Definition in the Process of Doing Philosophy

A student in the first semester and a professor with decades of professional activity are both obliged to wrestle with the process of definition, and to return again to sharpen or revise definitions.

One notorious word is ‘religion,’ which seems to frustrate more attempts to define it than many other words. Often definitions are implicit, and sometimes they are used with only partial awareness. Consider this example from Richard Lenski:

While I am not a historian or a theologian, I think the case can be made that many religions have historically (and probably prehistorically) been conflicted between two distinct functions. On the one hand, religions have often sought to provide explanations about the natural world — how it came into being, and especially our own place in the world. The stories from Genesis of the creation in six days, and of the tower of Babel leading to different languages, are two familiar examples. On the other hand, religions have also sought to direct actions by explaining which behaviors were morally acceptable and which were not, and often prescribing rewards and punishments (in this life or beyond) to encourage moral behavior. The ten commandments and the parables of Jesus are examples in which religion gives moral direction. Thus, many religions, in an intellectual sense, have served two masters — understanding our place in nature and giving moral guidance.

Lurking behind Lenski’s “two functions” are two definitions, or two parts of one definition, for the word ‘religion.’

The first may be called a ‘mythological’ definition, inasmuch as a myth is often defined as a narrative which explains. As an aside, ‘myth’ is not synonymous with ‘falsehood,’ because there are some true myths, i.e., true narratives which explain. Lenski is attributing a mythological function to religion.

Second, Lenski attributes a moral function to religion, a legislative function.

While religion certainly often connects to mythology and to morality, neither is central or essential to religion. One can have mythology and morality without religion, and one can have religion without mythology or morality.

More central and more essential to a definition of religion would be the feature of relationship, i.e., a relationship between one or more human beings and one or more deities. This feature is necessary and essential to religion.

In sum, when trying to refine a definition of ‘religion,’ mythology and morality are red herrings.

Wednesday, February 6, 2019

Faraday’s Electromagnetism: Discoveries Founded upon a Worldview

The list of inventions, discoveries, and technologies which are possible only because of Michael Faraday’s work is a very long list indeed. Telephones, smartphones, radios, computers, televisions, microwave ovens, and radar would be a mere start to that list.

Of Scottish heritage and born in England in 1791, Faraday explored a relatively new branch of science: electromagnetism. Apparently, his interest in this topic began while working in chemistry.

His explorations in both chemistry and physics were informed by a worldview that saw the universe as structured around rational principles like algebra and geometry. Part of Faraday’s genius was to explore essentially mathematical topics intuitively and by means of images, often using few or even no equations or formulas.

Pearce Williams, Chairman of Cornell University’s Department of Science and Technology Studies, describes Faraday’s work, which has become an indispensable foundation for much of modern science:

Faraday, who became one of the greatest scientists of the 19th century, began his career as a chemist. He wrote a manual of practical chemistry that reveals his mastery of the technical aspects of his art, discovered a number of new organic compounds, among them benzene, and was the first to liquefy a “permanent” gas (i.e., one that was believed to be incapable of liquefaction). His major contribution, however, was in the field of electricity and magnetism. He was the first to produce an electric current from a magnetic field, invented the first electric motor and dynamo, demonstrated the relation between electricity and chemical bonding, discovered the effect of magnetism on light, and discovered and named diamagnetism, the peculiar behaviour of certain substances in strong magnetic fields. He provided the experimental, and a good deal of the theoretical, foundation upon which James Clerk Maxwell erected classical electromagnetic field theory.

European culture, and Western Civilization generally, served as an incubator for modern chemistry and physics by promulgating a worldview which included the notion that the physical universe is structured around algebra and geometry. Already present in medieval scholasticism, and made more explicit in Newtonian thought, this worldview saw lawlike regularity as evidence of underlying mathematical principles in the observable world.

The fact that an equation like F = ma is applicable and verifiable not merely here and there, but everywhere, motivated the thought that there is a universal structure to matter.
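As a trivial sketch of that universality (the masses and accelerations below are rough illustrative values, not precise measurements):

```python
def force(mass_kg: float, accel_m_s2: float) -> float:
    """Newton's second law, F = m * a, with the result in newtons."""
    return mass_kg * accel_m_s2

# The same formula applies to an apple falling near the Earth's surface ...
apple = force(0.1, 9.81)
# ... and to the Moon accelerating toward the Earth.
moon = force(7.35e22, 2.7e-3)

print(apple, moon)
```

The point is not the particular numbers but that one unchanged formula spans both scales, which is what motivated the thought of a universal structure to matter.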

Alan Hirshfeld is Professor of Physics at the University of Massachusetts and a noted astronomer. He writes:

From the start, Faraday’s investigations were more than a joyous commune with nature; they were a sincere attempt to discern God’s invisible qualities through the very design of the world. Through well-constructed observations and experiments, he sought to distill nature’s seemingly diverse phenomena to a common, irreducible basis - and in this fundamental unity of the universe, he would witness the divine signature. The intense spirituality that imbued Faraday’s science derived from his upbringing in the Sandemanian faith, a tightly knit Protestant sect founded in the mid-1700s by Scottish minister John Glas and his son-in-law Robert Sandeman.

As already noted, medieval thinkers paved the way for modern physics and chemistry by hypothesizing that there was a uniformity throughout the empirical world. Whether on the Earth, on the Moon, or on Jupiter, there were certain essential features to matter and to energy. These features are immutable and can often be expressed mathematically.

This view of the physical universe is a distinctive feature of Western Civilization and of the culture into which Faraday was born.

Faraday, and modern observational science, inherited a second notion from the Middle Ages. From the Augustinian tradition comes the notion of experimental error. Francis Bacon systematically catalogued sources of experimental error and created a taxonomy of them.

Experimental error is the scientist’s version of humility: acknowledging the possibility of mistakes.

The synonymous words ‘Sandemanian’ and ‘Glasite’ (or ‘Glassite’) are used to describe Faraday's views.

For Faraday, the concept of experimental error was already contained in a spiritual view of human nature, as Alan Hirshfeld writes:

Inspired by a literalist reading of the New Testament, Sandemanians eschewed pride and wealth in favor of piety, humility, and community with fellow Sandemanians. Much of Faraday’s overt serenity owed itself to the affirmative aspects of his religion. “He drinks from a fount on Sunday which refreshes his soul for a week,” noted friend and biographer John Tyndall. Faraday, the Sandemanian, took human fallibility as a given, so he never staked his ego on the correctness or acceptance of his own ideas. He was a scientific pilgrim, inching his way toward the heart of a complex universe. Whether his chosen path proved mistaken was of little consequence; there was always another path. The joy was in the journey.

Whether implicit or explicit, the desire for a sort of ‘Grand Unified Theory’ is present in the work of many different scientists. They hunt for a single systemic explanation behind a manifold of phenomena.

Ian Hutchinson, Professor of Nuclear Science and Engineering at the Massachusetts Institute of Technology’s Plasma Science and Fusion Center, writes about Faraday’s search for unity behind the range of contrasting empirical data:

One example of the influence of his theological perspective on his science is Faraday’s preoccupation with nature’s laws. ‘God has been pleased to work in his material creation by laws,’ he remarked, and ‘the Creator governs his material works by definite laws resulting from the forces impressed on matter.’ This is part of the designer’s art: ‘How wonderful is to me the simplicity of nature when we rightly interpret her laws.’ But, as Cantor points out, ‘the consistency and simplicity of nature were not only conclusions that Faraday drew from his scientific work but they were also metaphysical presuppositions that directed his research.’ He sought the unifying laws relating the forces of the world, and was highly successful in respect of electricity, magnetism, and light. His program was less successful in attempting to unify gravity and electricity, for which failure he may readily be forgiven, since 150 years later we still don’t know how to do that!

The nearly universal quest, among scientists operating within the framework of Western Civilization or European culture, for unifying principles is a quest fueled by an understanding of the physical world as structured by, and built upon, rational and mathematical principles.

Faraday’s work is a prime example, but also only one of many examples, of scientific work powered by a particular worldview. Some version of medieval Scholasticism, with its Augustinian realism about human error and its Thomistic optimism about the power of human reason, is the energizing force not only behind Faraday’s brilliant work, but also behind much of modern physics and chemistry.

Wednesday, January 30, 2019

Kierkegaard’s Odd Attack Begins: Monday, December 18, 1854

The latter part of Soren Kierkegaard’s career was devoted to explaining that the two words ‘Christianity’ and ‘Christendom’ were opposites. This explanation was startling because many of his readers had taken the two as nearly synonymous.

Kierkegaard had a thorough education in philosophy, and, as Wayne Kraus writes,

His life’s mission was the rediscovery of New Testament Christianity, which he insisted had been abolished by “Christendom.”

The same point can be made with different terminology. It is the distinction between human religious institutions and traditions on the one hand, and on the other hand an encounter with, and a devotion to, God. It is the distinction, and in some cases the opposition, between religion and God.

In everyday speech, twenty-first century speakers of American English blur the line. The statement, “I’m not religious,” is not an answer to the question, “Do you believe in God?”

It is, in fact, the most sincere belief in God which often launches the harshest attacks on religion. Humans construct religion: practices, holidays, ceremonies, rites, rituals, etc.

Humans often construct religions with the noblest of motives, hoping that these religions will in some way connect people to God. Religions often do exactly that: offer enlightenment, education, care for the poor, work for peace and justice, etc.

But religions can break bad.

Kierkegaard encountered a church which claimed to be faithful to the New Testament, and which claimed to be following Jesus. What Kierkegaard experienced, however, was a significant number of clergy who used the church as a way to ensure material comfort in their own lives, and as a way to garner admiration from society at large.

Instead of serving their fellow human and serving Jesus, some of the Danish clergy in Kierkegaard’s day were serving themselves.

He saw a segment of the institutional church pursuing worldly prosperity, and he saw this as a clear contradiction to the text of the New Testament and as a clear contradiction to the factual documented words and actions of Jesus.

Kierkegaard’s incisive analysis of the Danish church was both logical and passionate. His critique both reflected and provoked a crisis, as coauthors Guram Tewsadse and Hans-Martin Gerlach note:

Kierkegaard [is a] Danish theologian and philosopher whose thought must be reckoned a first theological expression of the crisis of the bourgeois philosophical and religious worldview in the middle of the nineteenth century.

Tewsadse and Gerlach articulate a particular interpretation of Kierkegaard, which emerges from a broader interpretation of the history of philosophy: Kierkegaard is part of a cosmic unfolding of history, in this case, the history of philosophy. This grand unfolding follows a definite direction through a series of discernible stages.

One competing interpretation of Kierkegaard might see him as engaging in eternal questions which, rephrased for context, could have just as easily been asked a few centuries earlier or later; and perhaps they were.

Another competing view of Kierkegaard might see him as responding neither to universal questions, nor to his specific era in the unfolding of some material dialectic. Rather, this view might see him as responding primarily to his own inner world: his emotions and experience.

Whichever view one might take of Kierkegaard, Tewsadse and Gerlach point out the directness and frankness of the writings he produced during the final phase of his career:

[Kierkegaard founded] a journal, published and written by himself, which articulated Kierkegaard’s declaration of war against the official Danish church and its bond with the state.

This critique of the Danish church had long simmered in Kierkegaard’s mind, but it was finally occasioned by two events. The first was the death of Bishop Mynster, who had been a friend of Kierkegaard’s father, and for whose sake Kierkegaard had kept silent; with Mynster gone, Kierkegaard could write freely.

The second, and more specific, event which occasioned the beginning of Kierkegaard’s attack was a sermon given by Professor Martensen at Mynster’s funeral. Martensen, in eulogizing Mynster, had called the deceased bishop “a witness to the truth.”

While that phrase might strike many hearers, or readers, as a generally complimentary, if somewhat bland, tribute, to Kierkegaard’s ears it was blasphemy. As a scholar who’d studied ancient Greek, Kierkegaard knew that the phrase meant one who’d made the ultimate sacrifice. The Greek word for ‘witness’ is the source of the English word ‘martyr.’

Kierkegaard knew that, textually, the martyrs were the people who, starting with Stephen and the others killed under the supervision of Saul, had been tortured and murdered simply because they followed Jesus. Under the Roman Empire’s government, during the first three centuries after the start of the Jesus movement, thousands of people were martyred.

By contrast, Bishop Mynster had lived a materially comfortable life, enjoying social admiration, to an old age. In Kierkegaard’s mind, he was in no way a martyr, and to call him ‘witness to the truth’ was an insult to the thousands of martyrs who had died humiliating deaths.

Mynster’s death removed any hesitations Kierkegaard had about publishing his critique of the Danish church, and Martensen’s funeral sermon precipitated the actual writing and publication. Mynster died in January 1854, and Kierkegaard published the first installment of his attack in December of that year.