Thursday, December 1, 2016

Pseudo-Aristotle on Xenophanes

Among the Aristotelian pseudepigrapha - works transmitted under Aristotle’s name but most probably written by someone else - is a famous treatise dealing with Melissus, Xenophanes, and Gorgias. Melissus was a pre-Socratic philosopher who lived on the island of Samos; he did his work in the mid-400s B.C. and was a follower of Parmenides.

Gorgias lived in Sicily, and was likely a contemporary of Socrates; he apparently relocated to Greece, and died there around 375 B.C.

Xenophanes is the best known of the three, a pre-Socratic from Ionia who also travelled around Greece. He is thought to have died around 478 B.C.

Questions about the pseudo-Aristotelian text include: Who wrote it? When was it written? Where? What did the author know about Xenophanes? How accurately did the author explain the views of Xenophanes?

Famously, Xenophanes asserted, and possibly introduced, a strikingly modern concept of God. Pseudo-Aristotle reports that Xenophanes argued that God is eternal and without origin (977a14).

Xenophanes declares that if anything is, it cannot possibly have come into being, and he argues this with reference to God, for that which has come into being must necessarily have done so either from that which is similar or from that which is dissimilar; and neither alternative is possible.

On the principle of “like begets like,” God must have existed for eternity, because anything which could give rise to God would have to be like God. A chicken can produce another chicken, and a tree can produce another tree, but this might not be the case with God, to whom Xenophanes might attribute omnipresence, invisibility, omniscience, omnipotence, etc.

Exactly which properties Xenophanes attributes to God is an interpretive question with a plurality of plausible answers, and some details may be lost to history, but the general tenor of his views will have been along the lines sketched above.

Xenophanes might be characterized as arguing that to have a beginning, i.e. to have been begotten or created or generated or produced, is a limitation which is incompatible with his concept of God as unlimited. To be sure, Xenophanes doesn’t use the word ‘unlimited’ in any surviving texts, and so this is already to some extent an interpretation, but a reasonable one.

Pseudo-Aristotle continues:

For it is no more possible for like to have been begotten by like than for like to have begotten like (for since they are equal, all the same qualities inhere in each and in a similar way in their relations to one another), nor could unlike have come into being from unlike.

It is noteworthy that pseudo-Aristotle does not comment on the explicit or implicit monotheism of Xenophanes. Many among both the pre-Socratic and Classical Greek philosophers were, if not doctrinaire monotheists, probably functional monotheists.

Readers sometimes perceive Aristotle, and the Greek philosophers, through the lens of the oft-repeated maxim that Greeks were polytheists. While there is some reason to doubt that the Greeks, taken as a whole, were as polytheistic as is sometimes assumed, there is much more reason to doubt this about the philosophers, both pre-Socratic and later.

For if the stronger could come into being from the weaker, or the greater from the less, or the better from the worse, or conversely worse things from better, then what is not could come to be from what is, or what is from what is not; which is impossible. Accordingly for these reasons God is eternal.

If we accept pseudo-Aristotle as a reliable historian, then Xenophanes is explicit both in his monotheism and in his declaration that God is eternal - more specifically, that God has no finite starting point in time, and no progenitor.

The pseudepigraphic text also seems to claim that Xenophanes asserts that God is omnipotent. Additionally, and mysteriously, pseudo-Aristotle attributes to Xenophanes the assertion that God is spherical.

Perhaps this sphericality is to be understood as an attempt to conceptualize omnipresence in an infinite cosmos. If not infinite in space, perhaps Xenophanes conceived of the universe as infinite in other ways. It is not clear that Xenophanes is arguing for, or against, a cosmos that is spatially or otherwise infinite. Some other pre-Socratics asserted a type of pantheism, identifying God with the universe and the universe with God, and arguing that the cosmos could see and hear. Perhaps Xenophanes was in some way influenced by these colleagues.

Wednesday, November 30, 2016

The Valuation of Values: Corporate vs. Individual

There are an immense number and range of values which can be incorporated into, and prioritized within, an ethical system. One need merely think of a list of virtues (honesty, charity, loyalty, courage, etc.), or a list of people and things which one can hold dear (family, friends, comrades, nations, God, etc.).

Values can be held either individually or corporately. To be sure, there are public implications which follow from individual values. If an individual places a high value on cannibalism, she or he may find tensions with neighbors. But the value of cannibalism will, despite its communal effects, remain an individual value.

The effects of corporately-held values on individuals are stronger than the effects of individually-held values on the community.

Because of this asymmetry, corporate values constitute limiting factors on the range of values which an individual may pursue. The communal values do not limit the values which an individual may hold, in the sense that private thoughts and valuations, qua private, elude detection and control by the community.

But the communal values can limit concrete actions, and thereby frustrate the values which motivate such actions. The cannibal, e.g., may find his efforts to act on his values frustrated by his neighbors, even though he is free to privately hold such values.

Most, or perhaps all, values are therefore more effective when held communally than when held individually.

There exists, then, a difference in the levels of significance which a value has, depending on whether it is privately or corporately held. This difference, however, may vary among values.

One value may be only slightly more efficacious when held corporately than when held privately, while another value may be much more impactful when communally held than when privately held.

To be sure, there is the question of how one might observe, measure, or quantify the efficaciousness of a value.

Liberty and freedom, in particular, would seem to be values which have a much greater impact when corporately held than when privately held.

In order for an individual to act on her or his values, in any non-trivial sense, there must be, with logical and temporal priority, a corporately-held value of liberty and freedom.

In a society which perceived no value whatsoever in freedom or liberty, the individual could act on her or his values only in the trivial case in which the individual’s values were identical with the community’s.

The community’s ability to police and enforce its values would determine the extent to which it would be possible for the individual to act upon her or his personally-held values. A community, e.g., which saw no value in freedom or liberty, but which was very bad at policing and enforcing its values, might unintentionally allow individuals to pursue their own personal values.

In any non-trivial case, however, communities have some ability to police and enforce. In many cases, communities have significant abilities to police and enforce.

In order to allow individuals to act on a maximal range of personal values, it is necessary for a society to place a high value on freedom and liberty.

A society which values regulation and intervention will therefore limit the range of personal values on which individuals may act.

Sunday, October 9, 2016

Greek Philosophy’s Big Turning Point

Sometime around 590 B.C., give or take a few years, Thales of Miletus did the work which made him known as the first philosopher.

It’s possible that there were others before him, but we have no evidence of them. So most historians are content to say that philosophy began with Thales.

To be sure, there are alternative views. A more modest, and almost universally accepted, claim is that Greek philosophy began with Thales. That allows for Hebrew or Sanskrit thinkers who may have philosophized a few centuries earlier.

Greek philosophy continued for the next 150 years or so, filling the ‘pre-Socratic’ era. Many of this first wave of Greek philosophers did not live in Greece itself, but rather in Greek settlements on islands in the Mediterranean, in southern Italy, or in Ionia. (Ionia is a region on the western coast of modern-day Turkey.)

The pre-Socratic philosophers explored topics often related to time, space, mathematics, and physics. They were interested in cosmology and logic.

The focus and location of Greek philosophy would change.

Geographically, this second wave of Greek philosophers - the ‘classical’ philosophers - would be located in Greece.

In terms of their content, these classical thinkers turned away from abstract questions of cosmology and natural philosophy and toward social, political, moral, and ethical matters.

This trend began with Socrates. He’d been a soldier in the Peloponnesian War (431 to 404 B.C.). The war and the political rhetoric surrounding the war posed problematic questions.

Cognitive dissonance arose from ethically questionable Athenian actions: extorting cash and goods from other Greek city-states.

Following Socrates, Plato and Aristotle would also wrestle with such questions.

In the ‘classical’ era, philosophers addressed questions about justice and about an ideal society. The problems of the time lured them away from the more disciplined and less dramatic questions of the pre-Socratics.

Tuesday, September 27, 2016

Does Empiricism Sire Utilitarianism?

Taking J.S. Mill as an example, there seems to be a rough correlation between empiricism and utilitarianism. To be sure, there are many exceptions and ambiguities in this thesis.

John Locke, for example, is clearly an empiricist and clearly not a utilitarian. Yet in his political thought, there may be discerned, in the notions of majority rule and popular sovereignty, at least room for a type of utilitarian calculus.

David Hume, too, is an empiricist without being a utilitarian. Yet Hume uses the word ‘utility’ to express his emotivist view of ethics.

What is the connection, then, between utilitarianism and empiricism? If empiricism is generally allergic to metaphysics, then it will seek an ethical system which minimizes ontological commitments.

An empiricist would, presumably, flee in horror from a Platonic ethical schema which includes the existence of something called ‘the Good’ and includes the existence of numerous ‘ideal forms.’

Empiricism is also attracted to observation, measurement, and detection. Utilitarianism, despite the notorious difficulty of quantifying utility, looks to somehow observing and comparing the utilities of different possible courses of action.

Significantly, in that part of Copleston’s history titled “British Empiricism,” the first chapter is titled “The Utilitarian Movement.” In that chapter, he writes:

The first phase of nineteenth-century empiricism, which is known as the utilitarian movement, may be said to have originated with Bentham. But though we naturally tend to think of him as a philosopher of the early part of the nineteenth century, inasmuch as it was then that his influence made itself felt, he was born in 1748, twenty-eight years before the death of Hume.

Certainly, antecedents of both empiricism and utilitarianism are found well before the nineteenth and eighteenth centuries. The traditional roots of empiricism are found in Epicurus and Aquinas. Although it is common to classify Aristotle as an empiricist, there are reasonable arguments which place him outside the mainstream of empiricism.

Unsurprisingly, then, Epicurus is also seen as a historical antecedent of utilitarianism, along with Aristippus. Copleston writes about Bentham:

And some of his works were published in the last three decades of the eighteenth century. It is no matter of surprise, therefore, if we find that there is a conspicuous element of continuity between the empiricism of the eighteenth century and that of the nineteenth. For example, the method of reductive analysis, the reduction, that is to say, of the whole to its parts, of the complex to its primitive or simple elements, which had been practised by Hume, was continued by Bentham. This involved, as can be seen in the philosophy of James Mill, a phenomenalistic analysis of the self. And in the reconstruction of mental life out of its supposed simple elements use was made of the associationist psychology which had been developed in the eighteenth century by, for instance, David Hartley, not to speak of Hume’s employment of the principles of association of ideas. Again, in the first chapter of his Fragment on Government Bentham gave explicit expression to his indebtedness to Hume for the light which had fallen on his mind when he saw in the Treatise of Human Nature how Hume had demolished the fiction of a social contract or compact and had shown how all virtue is founded on utility. To be sure, Bentham was also influenced by the thought of the French Enlightenment, particularly by that of Helvetius. But this does not alter the fact that in regard to both method and theory there was a notable element of continuity between the empiricist movements of the eighteenth and nineteenth centuries in Great Britain.

By contrast, those philosophers whose epistemology leans toward rationalism, or at least away from empiricism, tend to develop ethical systems which are not utilitarian.

An appeal to utility is ultimately an appeal to sense-data. A philosopher with ontological commitments to metaphysical entities - things not directly or indirectly detectable by the senses, and not even detectable in principle - tends to conceive ethical systems which do not rely primarily, or exclusively, on a posteriori knowledge.

An empiricist, having to varying extents ruled out metaphysical objects, or at least having ruled out allowing metaphysical objects to play foundational roles in his system, has no alternative but to use sense-data as the primary source of knowledge for his system. Such a system will therefore probably be utilitarian in nature.

Wednesday, May 4, 2016

Theories about Theories: Types of Theories and Competing Theories

Although people often use the word ‘science’ in ordinary, everyday language, its definition is not a simple matter. By ‘science,’ people usually mean ‘observational science’ or ‘empirical science’ or ‘natural science,’ as opposed to other types of science.

Such science possesses, first, a set of reports: what has been measured, recorded, or otherwise perceived. We might term these ‘observation statements.’

A long catalogue of observation statements, however, does not by itself constitute a science. A science possesses, second, a conceptual framework which organizes, sorts, or explains these observations.

We can understand, then, what is not a science. A data-base, a massive amount of recorded measurements, is by itself not a science: it lacks a conceptual structure.

Conversely, an a priori conceptual framework, a structure of ideas without any sense-data or empirical observations connected to it, is also not a natural science (or an empirical science, or an observational science), although it might be some other type of science.

Paul Davidson Reynolds proposes a taxonomy of scientific structures, categorizing the possible types of conceptual frameworks which one might add to data in order to generate a science:

Scientific knowledge is basically a collection of abstract theoretical statements. At present, there seem to be three different conceptions of how sets of statements should be organized so as to constitute a “theory”: (1) set-of-laws, (2) axiomatic, and (3) causal process.

We might acknowledge two aspects to scientific knowledge. One aspect is concrete observations, ultimately the product of sense-data, which are measurable and quantifiable. Another aspect is theoretical: the systematic organization of these empirical perceptions.

A simple example might be seen on a Cartesian plane: individual observations are represented by dots (x,y); theory is represented by a “best-fit line” or “best-fit curve.”

It is significant that more than one theory can be paired with a set of observations: data can underdetermine the choice of theory. How does one choose between two competing theories, both of which correspond to the measurements? Or must one choose?

To extend our example, imagine a set of dots on the Cartesian plane, a set for which there might be more than one “best-fit curve,” where both curves have an equal degree of correspondence to the points.

Multiple best-fit curves can result either from the location of the points, or from competing methods of generating the best-fit curve. It would be necessary to distinguish between trivial and non-trivial cases of sets with more than one best-fit curve.
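The following short Python sketch illustrates the point; the data points and the choice of the two polynomial degrees are purely hypothetical assumptions made for illustration, not the method of any particular author:

# A minimal sketch in Python of how two theories can fit the same data about
# equally well. The points and the polynomial degrees are illustrative
# assumptions, not measurements from any real experiment.
import numpy as np

# Hypothetical observation statements: (x, y) dots on a Cartesian plane.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.1, 1.9, 3.2, 3.9])

# "Theory A": a best-fit straight line; "Theory B": a best-fit cubic curve.
theory_a = np.polyfit(x, y, deg=1)
theory_b = np.polyfit(x, y, deg=3)

# Sum of squared residuals: how far each theory's curve lies from the dots.
error_a = float(np.sum((np.polyval(theory_a, x) - y) ** 2))
error_b = float(np.sum((np.polyval(theory_b, x) - y) ** 2))

# Both errors are small; the dots alone do not decide between the theories.
print(error_a, error_b)

Which of the two curves one then prefers depends on considerations beyond the dots themselves - simplicity, for instance - which is precisely the question of choosing between competing theories that correspond equally well to the measurements.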

Three conceptions of theory have been discussed: the set-of-laws form, or the view that scientific knowledge should be a set of theoretical statements with overwhelming empirical support; the axiomatic form, or set of theoretical statements, divided into axioms and propositions, those statements that can be derived from the axioms; the causal process form, or sets of statements organized in such a fashion that the causal mechanism between two or more concepts is made as explicit as possible. It may be possible to present the statements of some axiomatic theories in causal process form.

Most interesting are the theories which, as Paul Davidson Reynolds notes above, might simultaneously satisfy the conditions for more than one of the three conceptions of theory.

Richer and more complex theories, rather than the simple example of a best-fit curve, offer examples of each of the three conceptions of theory.

Thursday, March 10, 2016

The Truth about Truth: Ayer on Knowledge

One beginning point for philosophizing is to inquire about the nature of knowledge. What does it mean to know something?

One frequent textbook definition of ‘knowledge’ is that it is a justified true belief. Each of the three parts of that definition can be understood in light of its negation: I can’t be said to know something if I don’t believe it, or if I lack any justification for that belief, or if the belief is untrue.
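Schematically, and only as a common textbook rendering rather than a formula taken from any particular source, the definition can be written as follows:

% Knowledge as justified true belief: a subject S knows a proposition p
% just in case S believes p, S is justified in believing p, and p is true.
K(S, p) \iff B(S, p) \land J(S, p) \land T(p)

Each of the three negations mentioned above corresponds to the failure of one of the three conjuncts on the right-hand side.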

Linguistically, the English language refers to distinctly different types of knowledge with one and the same word. This can lead to confusion.

Compare ‘I know my friend Tom well’ with ‘I know that 37 times 7 is 259.’

Both propositions include the word ‘know,’ but it seems that the word does not refer to the same state. One is a familiarity or acquaintance with a person, place, or thing. The other is the possession of some mental content.

If there are different types of knowledge, then we might ask, what do they have in common, and what distinguishes them? Is truth a part of every different form of knowledge?

Truth seems to be explicitly or implicitly a part of most attempts to define ‘knowledge.’ If something is not true, then I usually cannot be said to know it. Propositions like ‘I know that 4 plus 9 is 37’ appear to us to be misuses of the word ‘know.’

It is difficult to conduct an investigation of knowledge without simultaneously conducting an investigation of truth. Just as there are competing understandings of knowledge, so there are competing understandings of truth.

A typical textbook explanation of truth relies on the concept of correspondence. According to this ‘correspondence theory of truth,’ a proposition is true if, and only if, it corresponds to the situation which it claims to represent.

One competing understanding of truth is represented by Martin Heidegger. He used the Greek word ‘aletheia,’ meaning ‘not hidden’ or ‘uncovered.’ Heidegger’s understanding of truth was not so much that it corresponded to reality, but rather that it disclosed reality.

While Heidegger’s writings on truth are significant in the history of philosophy, and indicate a line of thought worth investigating, the mainstream of philosophy in Europe and North America during the twentieth century tended toward other understandings of truth. As a representative of British analytic philosophy, the empiricist A.J. Ayer wrote:

I conclude then that the necessary and sufficient conditions for knowing that something is the case are first that what one is said to know be true, secondly that one be sure of it, and thirdly that one should have the right to be sure. This right may be earned in various ways; but even if one could give a complete description of them it would be a mistake to try to build them into the definition of knowledge, just as it would be a mistake to try to incorporate our actual standards of goodness into a definition of good. And this being so, it turns out that the questions which philosophers raise about the possibility of knowledge are not all to be settled by discovering what knowledge is. For many of them reappear as questions about the legitimacy of the title to be sure. They need to be severally examined; and this is the main concern of what is called the theory of knowledge.

What Ayer calls the ‘theory of knowledge’ is perhaps more commonly known as epistemology. Just as it is difficult to discuss knowledge without also examining the notion of truth, so also questions about language invite themselves into the investigation.

Truth is generally understood to be a property of a proposition. Can a sentence, which is a linguistic artifact which represents a proposition, also be true?

One can know a proposition, and the proposition can be true, but it is a further step to know that a proposition is true. Presumably, one cannot know that a proposition is true without first having understood the proposition.

Someone can teach me to make a series of sounds, or a series of ink marks on paper, and tell me that they are a sentence which in turn represents a proposition, and that the proposition is true. I might take his word for it, and believe all this. Finally, he might be right, which would in turn make me right. But I could not be said to ‘know’ or to ‘understand’ the sentence or the proposition.

Sunday, January 17, 2016

Science or Medicine?

To compare the words ‘biology’ and ‘medicine’ is at once to encounter the question, posed in this case by Siddhartha Mukherjee, of whether medicine is a science.

Those who work in some educational institutions will find this question bizarre, because the assumption that medical activity is scientific activity has long been in place.

The alternative is perhaps to view medicine as a skill, a craft, an art. Lying at the core of the question is, of course, the definition of ‘science.’

Siddhartha Mukherjee begins with a somewhat belabored comparison of ‘science’ and ‘scientific,’ but makes the important point that merely being technological, or applying technology, does not by itself qualify an activity as science.

Is medicine a science? If, by science, we are referring to the spectacular technological innovations of the past decades, then without doubt medicine qualifies. But technological innovations do not define a science; they merely prove that medicine is scientific - i.e., therapeutic interventions are based on the rational precepts of pathophysiology.

One feature of science, he argues, is that it has laws, or identifies lawlike regularities. Mathematics and some branches of physics are here paradigmatic.

Sciences have laws — statements of truth based on repeated experimental observations that describe some universal or generalizable attributes of nature. Physics is replete with such laws. Some are powerful and general, such as the law of gravitation, which describes the force of attraction between two bodies with mass anywhere in the universe. Others apply to specific conditions, such as Ohm’s law, which only holds true for certain kinds of electrical circuits. In every case, however, a law distills a relationship between observable phenomena that remains true across multiple circumstances and multiple conditions. Laws are rules that nature must live by.
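For illustration, the two laws named in that passage can be written out; these are their standard textbook formulations, supplied here rather than quoted from Mukherjee:

% Newton’s law of universal gravitation and Ohm’s law, in standard form.
F = G \frac{m_1 m_2}{r^2} \qquad\qquad V = I R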

Mathematics and physics are abstract and conceptual. As sciences become more concrete - more tied to empirical observation - their ‘scientificness’ decreases.

Imagine, if you will, the differences you might encounter in scientific texts over a fifty-year span of time. If you looked at a chemistry textbook published today and found a precise measurement of the atomic mass of an element - say, osmium - so precise that it ran to ten or twenty or even thirty decimal places, then you would have the most precise empirical data available.

But continue to imagine that fifty years later another chemistry textbook is published. It might also list the atomic mass of the same isotope of osmium, but quite conceivably, at the very last digit of those ten or twenty or thirty decimal places, there might be a difference between the later book and the earlier one.

There might have been some revision based on more careful experiments, based on new instrumentation, based on new understandings of how to get the most accurate value.

Far from being unusual, such revision of values is in fact part of empirical or observational science.

By contrast, imagine two mathematics books published fifty years apart: there will be no difference, no revision, in the quadratic formula or the Pythagorean theorem.
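Written out in their standard forms (supplied here for illustration), the two results mentioned are:

% The quadratic formula and the Pythagorean theorem, identical in any edition.
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \qquad\qquad a^2 + b^2 = c^2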

This difference, in our little imaginary thought experiment, between mathematics and chemistry, gives us an insight into laws, lawlike correlations, and science - although this does not yet fully explain what it is to be a science.

There are fewer laws in chemistry. Biology is the most lawless of the three basic sciences: there are few rules to begin with, and even fewer rules that are universal. Living creatures must, of course, obey the fundamental rules of physics and chemistry, but life often exists on the margins and in the interstices of these laws, bending them to their near-breaking limit. Even the elephant cannot violate the laws of thermodynamics — although its trunk, surely, must rank as one of the most peculiar means to move matter using energy.

Surely chemistry and biology - along with the less abstract and more empirical, applied, and experimental aspects of physics - are sciences, even if they are a bit fuzzy around the edges.

We are tempted to say, “They are sciences because they have laws.” But we must first ask whether everything that is a science has laws, and then ask whether everything that has laws is a science.

Does medicine have laws? Siddhartha Mukherjee asks this question, because it will go a long way toward deciding whether medicine is a science.

Medicine has general rules and practices and guidelines, but does it have laws?

But does the “youngest science” have laws? It seems like an odd preoccupation now, but I spent much of my medical residency seeking the laws of medicine. The criteria were simple: a “law” had to distill some universal guiding principle of medicine into a statement of truth. The law could not be borrowed from biology or chemistry; it had to be specific to the practice of medicine.

If medicine does have laws, would they be in some essential way different from the laws of the various sciences? Medicine is applied: the essence of medicine is to do something, while sciences are essentially about knowing something.

To phrase it another way, do applied sciences have their own laws, distinct from pure sciences? Siddhartha Mukherjee writes:

I was genuinely interested in rules, or principles, that applied to the practice of medicine at large.

Consider the titles of well-known journals like Pure and Applied Mathematics. Consider the branch of physics which studies electromagnetism compared with the profession of being an electrician. Consider biology and medicine.

In each of these pairs of pure and applied science, do we find that the applied version of the science has its own laws, or does it merely help itself to the laws of its pure sibling?

Of course, these would not be laws like those of physics or chemistry. If medicine is a science at all, it is a much softer science. There is gravity in medicine, although it cannot be captured by Newton’s equations. There is a half-life of grief, even if there is no instrument designed to measure it. The laws of medicine would not be described through equations, constants, or numbers. My search for the laws was not an attempt to codify or reduce the discipline into grand universals. Rather, I imagined them as guiding rules that a young doctor might teach himself as he navigates a profession that seems, at first glance, overwhelmingly unnavigable. The project began lightly — but it eventually produced some of the most serious thinking that I have ever done around the basic tenets of my discipline.

Siddhartha Mukherjee introduces the distinction between ‘science’ and ‘soft science,’ and even then wonders “if medicine is a science at all.” Crucially, he notes the lack of quantifiability in anything which might present itself as a ‘law of medicine.’

Although modern medicine is conducted with endless quantifications and measurements, the core of practice, medical decision-making, is non-numerical. There is some skill of judgment, of non-quantified reasoning, which the medical practitioner has.

We will leave unanswered the question of whether medicine is a science, but the consideration of the question yields insights nonetheless.