Thursday, March 10, 2016

The Truth about Truth: Ayer on Knowledge

One beginning point for philosophizing is to inquire about the nature of knowledge. What does it mean to know something?

One frequent textbook definition of ‘knowledge’ is that it is a justified true belief. Each of the three parts of that definition can be understood in light of its negation: I can’t be said to know something if I don’t believe it, or if I lack any justification for that belief, or if the belief is untrue.
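The tripartite definition can be put schematically. The notation below is a standard textbook rendering of the "justified true belief" analysis, not a formula from this text:

```latex
% S knows that p if and only if:
%   (i)   p is true,
%   (ii)  S believes that p,
%   (iii) S is justified in believing that p.
K_S\,p \;\iff\; p \;\wedge\; B_S\,p \;\wedge\; J_S\,p
```

Each conjunct corresponds to one of the three negations mentioned above: drop any one of them, and the claim to knowledge fails.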

Linguistically, the English language uses one and the same word to refer to distinctly different types of knowledge. This leads to confusion.

Compare ‘I know my friend Tom well’ with ‘I know that 37 times 7 is 259.’

Both propositions include the word ‘know,’ but it seems that the word does not refer to the same state. One is a familiarity or acquaintance with a person, place, or thing. The other is the possession of some mental content.

If there are different types of knowledge, then we might ask, what do they have in common, and what distinguishes them? Is truth a part of every different form of knowledge?

Truth seems to be explicitly or implicitly a part of most attempts to define ‘knowledge.’ If something is not true, then I usually cannot be said to know it. Propositions like ‘I know that 4 plus 9 is 37’ appear to us to be misuses of the word ‘know.’

It is difficult to conduct an investigation of knowledge without simultaneously conducting an investigation of truth. Just as there are competing understandings of knowledge, so there are competing understandings of truth.

A typical textbook explanation of truth relies on the concept of correspondence. According to this ‘correspondence theory of truth,’ a proposition is true if, and only if, it corresponds to the situation which it claims to represent.

One competing understanding of truth is represented by Martin Heidegger. He used the Greek word ‘aletheia,’ meaning ‘not hidden’ or ‘uncovered.’ Heidegger’s understanding of truth was not so much that it corresponded to reality, but rather that it disclosed reality.

While Heidegger’s writings on truth are significant in the history of philosophy, and indicate a line of thought worth investigating, the mainstream of philosophy in Europe and North America during the twentieth century tended toward other understandings of truth. As a representative of British and analytical philosophies, the empiricist A.J. Ayer wrote:

I conclude then that the necessary and sufficient conditions for knowing that something is the case are first that what one is said to know be true, secondly that one be sure of it, and thirdly that one should have the right to be sure. This right may be earned in various ways; but even if one could give a complete description of them it would be a mistake to try to build them into the definition of knowledge, just as it would be a mistake to try to incorporate our actual standards of goodness into a definition of good. And this being so, it turns out that the questions which philosophers raise about the possibility of knowledge are not all to be settled by discovering what knowledge is. For many of them reappear as questions about the legitimacy of the title to be sure. They need to be severally examined; and this is the main concern of what is called the theory of knowledge.

What Ayer calls the ‘theory of knowledge’ is perhaps more commonly known as epistemology. Just as it is difficult to discuss knowledge without also examining the notion of truth, so also questions about language invite themselves into the investigation.

Truth is generally understood to be a property of a proposition. Can a sentence, which is a linguistic artifact which represents a proposition, also be true?

One can know a proposition, and the proposition can be true, but it is a further step to know that a proposition is true. Presumably, one cannot know that a proposition is true without first having understood the proposition.

Someone can teach me to make a series of sounds, or a series of ink marks on paper, and tell me that they are a sentence which in turn represents a proposition, and that the proposition is true. I might take his word for it, and believe all this. Finally, he might be right, which would in turn make me right. But I could not be said to ‘know’ or to ‘understand’ the sentence or the proposition.

Sunday, January 17, 2016

Science or Medicine?

To compare the words ‘biology’ and ‘medicine’ is at once to encounter the question, posed in this case by Siddhartha Mukherjee, of whether medicine is a science.

Those who work in some educational institutions will find this question bizarre, because the assumption that medical activity is scientific activity has long been in place.

The alternative is perhaps to view medicine as a skill, a craft, an art. Lying at the core of the question is, of course, the definition of ‘science.’

Siddhartha Mukherjee begins with a somewhat belabored comparison of ‘science’ and ‘scientific,’ but makes the important point that merely being technological, or applying technology, does not by itself qualify an activity as science.

Is medicine a science? If, by science, we are referring to the spectacular technological innovations of the past decades, then without doubt medicine qualifies. But technological innovations do not define a science; they merely prove that medicine is scientific - i.e., therapeutic interventions are based on the rational precepts of pathophysiology.

One feature of science, he argues, is that it has laws, or identifies lawlike regularities. Mathematics and some branches of physics are here paradigmatic.

Sciences have laws — statements of truth based on repeated experimental observations that describe some universal or generalizable attributes of nature. Physics is replete with such laws. Some are powerful and general, such as the law of gravitation, which describes the force of attraction between two bodies with mass anywhere in the universe. Others apply to specific conditions, such as Ohm’s law, which only holds true for certain kinds of electrical circuits. In every case, however, a law distills a relationship between observable phenomena that remains true across multiple circumstances and multiple conditions. Laws are rules that nature must live by.
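The two laws Mukherjee names have familiar equational forms. These are standard physics formulas supplied here for reference; they do not appear in his text:

```latex
% Newton's law of universal gravitation: the attractive force between
% two masses m_1 and m_2 separated by distance r.
F = G\,\frac{m_1 m_2}{r^2}

% Ohm's law: valid only for ohmic (linear) circuit elements,
% as Mukherjee notes.
V = I R
```

The contrast he draws is visible even here: the gravitational law holds for any two bodies with mass anywhere in the universe, while Ohm’s law is conditional on the kind of circuit involved.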

Mathematics and physics are abstract and conceptual. As sciences become more concrete - more tied to empirical observation - their ‘scientificness’ decreases.

Imagine, if you will, the differences you might encounter in scientific texts over a fifty-year span of time. If you looked at a chemistry textbook published today, and you found a precise measurement of the atomic mass of an element, say osmium, and it was so precise that it went to ten or twenty or even thirty decimal places, then you would have the most precise empirical data available.

But continue to imagine that fifty years later, another chemistry textbook was published. It might also list the atomic mass for the same isotope of Osmium, but quite conceivably, at the very last digit of the ten or twenty or thirty decimal places, there might be a difference between the later book and the earlier book.

There might have been some revision based on more careful experiments, based on new instrumentation, based on new understandings of how to get the most accurate value.

Far from being unusual, such revision of values is in fact part of empirical or observational science.

By contrast, imagine two mathematics books published fifty years apart: there will be no difference, no revision, in the quadratic formula or the Pythagorean theorem.
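Those two unrevisable results, written out for reference (standard forms, not taken from either imaginary textbook):

```latex
% The quadratic formula: the roots of ax^2 + bx + c = 0, for a \neq 0.
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}

% The Pythagorean theorem: for a right triangle with legs a, b
% and hypotenuse c.
a^2 + b^2 = c^2
```

No new instrumentation or more careful experiment can shift a single term of either; that is precisely the contrast with the measured atomic mass.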

This difference, in our little imaginary thought experiment, between mathematics and chemistry, gives us an insight into laws, lawlike correlations, and science - although this does not yet fully explain what it is to be a science.

There are fewer laws in chemistry. Biology is the most lawless of the three basic sciences: there are few rules to begin with, and even fewer rules that are universal. Living creatures must, of course, obey the fundamental rules of physics and chemistry, but life often exists on the margins and in the interstices of these laws, bending them to their near-breaking limit. Even the elephant cannot violate the laws of thermodynamics — although its trunk, surely, must rank as one of the most peculiar means to move matter using energy.

Surely chemistry and biology - along with the less abstract and more empirical, applied, and experimental aspects of physics - are sciences, even if they are a bit fuzzy around the edges.

We are tempted to say, “They are sciences because they have laws.” But we must first ask whether everything that is a science has laws, and then we must ask whether everything that has laws is a science.

Does medicine have laws? Siddhartha Mukherjee asks this question, because it will go a long way toward deciding whether medicine is a science.

Medicine has general rules and practices and guidelines, but does it have laws?

But does the “youngest science” have laws? It seems like an odd preoccupation now, but I spent much of my medical residency seeking the laws of medicine. The criteria were simple: a “law” had to distill some universal guiding principle of medicine into a statement of truth. The law could not be borrowed from biology or chemistry; it had to be specific to the practice of medicine.

If medicine does have laws, would they be in some essential way different than the laws of the various sciences? Medicine is applied: the essence of medicine is to do something, while sciences are essentially about knowing something.

To phrase it another way, do applied sciences have their own laws, distinct from pure sciences? Siddhartha Mukherjee writes:

I was genuinely interested in rules, or principles, that applied to the practice of medicine at large.

Consider the titles of well-known journals like Pure and Applied Mathematics. Consider the branch of physics which studies electromagnetism compared with the profession of being an electrician. Consider biology and medicine.

In each of these pairs of pure and applied science, do we find that the applied version of the science has its own laws, or does it merely help itself to the laws of its pure sibling?

Of course, these would not be laws like those of physics or chemistry. If medicine is a science at all, it is a much softer science. There is gravity in medicine, although it cannot be captured by Newton’s equations. There is a half-life of grief, even if there is no instrument designed to measure it. The laws of medicine would not be described through equations, constants, or numbers. My search for the laws was not an attempt to codify or reduce the discipline into grand universals. Rather, I imagined them as guiding rules that a young doctor might teach himself as he navigates a profession that seems, at first glance, overwhelmingly unnavigable. The project began lightly — but it eventually produced some of the most serious thinking that I have ever done around the basic tenets of my discipline.

Siddhartha Mukherjee introduces the distinction between ‘science’ and ‘soft science,’ and even then wonders “if medicine is a science at all.” Crucially, he notes the lack of quantifiability in anything which might present itself as a ‘law of medicine.’

Although modern medicine is conducted with endless quantifications and measurements, the core of practice, medical decision-making, is non-numerical. There is some skill of judgment, of non-quantified reasoning, which the medical practitioner has.

We will leave the question unanswered, whether medicine is a science, but the consideration of the question gives opportunities for insights nonetheless.

Friday, November 27, 2015

Locke, Medicine, and Bacon

Among the unpublished posthumous papers of John Locke is a manuscript titled, variously, either Ars Medica or De Arte Medica, and dated, by Henry Richard Fox Bourne, to approximately 1669.

This text is of interest because it represents a point of contact between Locke’s empirical epistemology and the mundane concerns of utilitarian medicine. It is a point, so to speak, at which the theoretical meets the practical.

Locke points out that medicine, being an applied and concrete craft, was perhaps more susceptible to the natural, cultural, and subconscious influences which can distort reasoning. There might be a parallel here between Locke’s critique of medical thinkers and Francis Bacon’s identification of four sources of error in logic.

Because these earlier thinkers seemed unaware of these error-inducing influences, and took no measures against them, Locke finds most of what has been written about medicine to be, at the least, built on unsteady groundwork:

If, therefore the learned men of former ages employed a great part of their time and thoughts in searching out the hidden causes of distempers, were curious in imagining the secret workmanship of nature and the several imperceptible tools wherewith she wrought, and, putting all these fancies together, fashioned to themselves systems and hypotheses, ‘tis no more to be wondered at or censured that they accommodated themselves to the fashion of their times and countries, and so far complied with their most natural inclinations as to desire to have some basis to rest their thoughts upon, and some grounds to guide them in the practice of their art. Their being busy and subtile in disputing upon allowed principles was but to be employed in the way of fame and reputation and the learning valued in that age; and that their practice extended no farther than the sacred principles they believed in would permit, is no more to be admired than that we find no fair and lasting fabrics left to us by our ancestors upon narrow and unsound foundations.

Locke hastens to add that he respects these earlier writers, despite the flaw in their writings which he is identifying, because they catalogued a significant body of observations, and developed laws based on correlation.

Those writings, however, are saturated with unmerited conjectures which, left uncorrected, will lead readers astray:

I would not be thought here to censure the learned authors of former times, or disown the advantages they have left to posterity. To them we owe a great number of excellent observations and several ingenious discourses, and there is not any one rule of practice founded upon unbiased observation which I do not receive and submit to with veneration and acknowledgment; yet I think I may confidently affirm that the hypothesis which tied the long and elaborate discourses of the ancients, and suffered not their enquiries to extend themselves any farther than how the phenomena of diseases might be explained by those doctrines and the rules of practice accommodated to the received principles, has at last but confined and narrowed men's thoughts, amused their understanding with fine but useless speculations, and diverted their enquiries from the true and advantageous knowledge of things.

Locke seems, then, to be engaged in a Baconian task: freeing an observational empirical science, or the applied form of it, from systematic methodological shortcomings which nudge it toward error, or at least toward unfounded hypotheses.

The reader might further wonder whether Locke was directly inspired by Bacon’s texts to this task, or whether he was indirectly inspired as Bacon’s influence came down through other thinkers like Robert Boyle, or whether Locke happened upon the same concerns independently. Peter Anstey writes:

There is sufficient evidence to claim that Locke owed a significant debt to Bacon's conception of how natural philosophy should be done.

Concerning the connection, if any, between Locke and Bacon, Anstey notes:

So it would appear that Locke’s comments on method in natural philosophy, whatever their peculiarities, stand in a tradition stretching back to Bacon. Yet surprisingly, John Locke and Francis Bacon are not normally associated with each other. There are few references to Bacon in Locke's writings and the weight of scholarship seems to lean to the view that Bacon had little, if any, influence on Locke.

Anstey goes on to cite an article by John R. Milton about the influence of Bacon on Locke:

It should be pointed out that Milton also claims that the quantity of Bacon’s books owned by Locke is “strong prima facie evidence that Locke was interested in the thought” of Bacon and that parallels in the intellectual content and stylistic expression of Bacon and Locke suggest Bacon as a source of influence on Locke.

That there are points of comparison between Locke and Bacon is clear; there are doubtless points of contrast, as well. It seems that at least some of Bacon’s influence on Locke was direct; we know that Locke was aware of Bacon and had read some of Bacon’s texts.

The flaws which Locke notes can be understood to correlate with Bacon's four sources of experimental error.

There should therefore be no surprise that Locke’s writings about medicine have a Baconian flavor.

Thursday, October 8, 2015

Finding the Old Kant in the Young Kant's Texts

Immanuel Kant published his Kritik der reinen Vernunft in 1781, when he was approximately 57 years old. He was born in 1724. The book, known also by the English translation of its title, The Critique of Pure Reason, took Kant from obscurity to fame, and constitutes a major turning-point in the history of philosophy.

Prior to this publication, the few who knew Kant respected him and regarded him as brilliant. Most of his publications until this point, however, had been less remarkable.

In hindsight, scholars have found indications in some of those early writings which point toward the development of what would become Kant’s trademark thinking. The centrality of space and time, and Kant’s distinctive understanding of them, appear, at least in part, in passages like this:

They who hold this disquisition superfluous are confuted by the concepts of space and time, conditions, as it were, given by their very own selves and primitive, by whose aid, that is to say, without any other principle, it is not only possible but necessary for several actual things to be regarded as reciprocally parts constituting a whole.

Qui hanc disquisitionem insuper habent, frustrantur conceptibus spatii ac temporis, quasi condicionibus per se iam datis atque primitivis, quarum ope, scilicet, absque ullo alio principio, non solum possibile sit, sed et necessarium, ut plura actualia se mutuo respiciant uti compartes et constituant totum.

Kant’s peculiar doctrine that time and space are not only somehow products of the rational mind, but also the instruments by which that mind processes sensations into perceptions and ultimately forms concepts, is not only one of the foundational cornerstones of Kantian metaphysics, but also represents a possibility of moving beyond the stalemate which existed between Newton’s view of space and Leibniz’s view of space.

One scholar, J.H.W. Stuckenberg, sees one of Kant’s early publications, titled De Mundi Sensibilis atque Intelligibilis Forma et Principiis, as a turning point both in Kant’s career and in the written expression of Kant’s characteristic thought. Stuckenberg writes:

In order that he might become a professor, it was necessary for him again to present a Latin dissertation. In its subject and treatment the one prepared for this occasion was worthy of the man who was called to teach metaphysics, and it is historically significant from the fact that in it Kant for the first time publicly gave some of the most important principles afterwards developed in the “Kritik.” It was a discussion of the difference between sensation and understanding, with the title, “The Form and Principles of the World of Sense and of the Intellect.”

Kant’s thought took shape over time, as John Henry Wilbrandt Stuckenberg traces it through Kant’s correspondence.

There is some orthographic ambiguity surrounding Stuckenberg. It probably started as Johann Heinrich Wilbrandt, but might also have been Johannes. Wilbrandt also occasionally appears as Wilbrand (without the ‘t’) or even as Wilburn. Sometimes the ‘l’ is doubled to Willbrandt, Willbrand, or Willburn.

Kant understandably saw his Kritik as properly not being metaphysics, but rather as something logically prior to metaphysics, a foundation which would make metaphysics possible and determine its scope, limits, and methods. Stuckenberg writes:

Kant's correspondence also indicates that he frequently changed his plans. When the book was already in press, he wrote to Herz that the “Kritik” “contains the results of all kinds of investigations, which began with the ideas which we discussed under the title of the Mundi Sensibilis and Intelligibilis” referring to his Inaugural Dissertation. At other times he expected to limit the contents much more. It may surprise some that at any time Kant regarded such a “Kritik” as lying outside of the sphere of metaphysics; but this significant passage occurs in a letter written to Herz in the winter of 1774-75: “I shall rejoice when I have finished my transcendental philosophy, which is really a critique of pure reason. I shall then work on metaphysic, which has only two parts, namely, the metaphysic of nature and that of morals, of which I expect to publish the latter first; and I already rejoice over it in anticipation.” At this time, therefore, he held the view which he also held for years after the “Kritik” appeared, that it was only the preparation for metaphysics; nevertheless he regards it as belonging to transcendental philosophy. His letters and books, together with his last manuscript, show that his view of metaphysic was subject to numerous changes.

Kant’s development, then, demonstrates a number of changes in his views, but also a continuity from his 1770 publication, and perhaps from even earlier writings.

It is, of course, a matter of close textual reading to determine to what extent scholars find forerunners of Kant’s critical philosophy in his pre-critical writings, and to what extent scholars read into those early texts ideas which might not actually be there. But an evenhanded reading of the 1774/1775 letter by Kant to Herz clearly manifests, as Stuckenberg notes, the mature Kantian view.

Wednesday, July 1, 2015

Unraveling Morality from Religion

There is ever a great gap between the philosopher’s careful use of words and the sensationalistic verbiage of the popular press. The reader will see this clearly when it comes to questions of religion and morality.

At the outset, it may be stipulated that religion and morality are two different and distinct things. A man’s religion underdetermines his morality, and his morality underdetermines his religion: merely because he tells me that he follows a certain religion, I cannot deduce from that his morality; and because he tells me that he holds to a certain morality, I cannot deduce from that his religion.

Writing about the controversial political questions of our day - abortion, homosexuality, race relations - the popular news media habitually assert an automatic and invariable connection, correlation, and causality between religion and morality. In such narratives, the words “religious” and “Christian” are meant to describe, not spiritual worldviews, but rather specific moral prescriptions or proscriptions.

Put simply: on many, if not all, moral questions, one can find atheists on both sides, Christians on both sides, Jews on both sides, Hindus on both sides, Buddhists on both sides, and Muslims on both sides. The same is true of Sikhs, Jains, Jehovah’s Witnesses, Mormons, etc.

Contrary to the impression given by the contemporary newspapers, there are pro-life atheists and pro-abortion Christians. In elections about normalizing homosexual relationships, in various of America’s fifty states, significant numbers of atheists have voted in favor of the standard definition of marriage as one man and one woman, while significant numbers of Christians have voted against it.

In short, the news media assert a relationship between religion and morality which simply does not exist.

The ubiquity of the popular press’s assertion, however, has clouded the logic of ethical reasoning.

Consider, e.g., the writing of Supreme Court Justice Scalia in Lawrence v. Texas, a 2003 case. Note his studious avoidance of any religious vocabulary:

Many Americans do not want persons who openly engage in homosexual conduct as partners in their business, as scoutmasters for their children, as teachers in their children’s schools, or as boarders in their home. They view this as protecting themselves and their families from a lifestyle that they believe to be immoral and destructive. The Court views it as “discrimination” which it is the function of our judgments to deter. So imbued is the Court with the law profession’s anti-anti-homosexual culture, that it is seemingly unaware that the attitudes of that culture are not obviously “mainstream”

A precise use of vocabulary is necessary to properly distinguish, then, between moral questions and religious questions. Definitions of moral or religious concepts should be carefully formulated.

The confusion of these two categories - religion and morality - is ubiquitous in vernacular usage. The borderline between the two is habitually blurred.

It is true, admittedly, that there are certain points of connection between religion and ethical, or meta-ethical, considerations. But that connection is not as determining as is commonly supposed.

This clarification can proceed by conducting moral analyses, as far as possible, without reference to religion. Likewise, religious analyses should be conducted, to the extent possible, without consideration of morality.

If these investigations refer to each other only when necessary, it will become clear how seldom such necessity occurs.

Tuesday, June 16, 2015

Gender Identity

Two of the many tasks surrounding the topic of gender are the task of definition, and the task of sorting out which aspects of gender are fixed independently of any one person or of any one person’s experience.

These tasks are complicated by the wide, constantly changing, inconsistent, and mutually incompatible usages found in the popular press and in casual conversation in society about this topic.

At the very outset, then, of this task, the philosopher faces a dilemma. He can begin by examining ordinary actual usage of words like ‘gender’ and ‘sex’ and ‘male’ and ‘female’ - and soon find himself in an unwieldy swamp of definition and idiom.

The other option is to largely ignore popular usage, and to begin with a few definitions which are taken as axiomatic.

One example of the contradictions which quickly emerge when studying common usage is that one dictionary defines ‘gender’ as a social and cultural distinction between male and female, while defining ‘sex’ as the biological distinction. Yet a biology textbook may define ‘gender’ as the biological grouping of male or female, and ‘sex’ as activity relating to procreation.

With such rampant inconsistency in usage, sorting out a definition becomes a difficult chore.

Turning to the other task, a philosopher asks, which aspects of such identity are independent of the individual and his experience? This question can be rephrased in slightly different ways: one can ask about the objective versus the subjective distinction between male and female; one can ask about the physiological versus the psychological distinction; or one can ask about whether the various elements of such a distinction are knowable a priori or a posteriori.

Several bits of data form traditional points of departure for such discussion. First, on the level of cellular structure, genetic information codes an individual’s gender, independently of that person’s experiences, prior to that person’s conscious conceptualization of gender, and immutably. No gender reassignment therapy can change an individual’s gender at the level of DNA. (The presence of XX or XY sex chromosomes is a definitive differentiator.)

Note here that ‘gender’ has entered the discourse, despite not yet having obtained a working definition. Note also that “experience” includes emotions, intuitions, physical actions and interactions, as well as social and cultural settings.

Second, certain bone structures are determined by, and reveal, gender. Archeologists and paleoanthropologists can determine the gender of human remains given no more than a few bones from a skeleton. A scientist can discover the gender of a long-deceased individual, given a fibula, a tibia, a humerus, a radius, an ulna, and maybe a rib, an anklebone, or a finger.

Third, male brain anatomy and female brain anatomy differ measurably and significantly. Not only is the anatomy quite distinct, but also the physiology is quantitatively and observably divergent. The study of the differences between the male and female amygdala and hippocampus has become an entire academic discipline unto itself.

Both the differences in bone structure and in brain functionality are impervious to any attempted gender reassignment.

By contrast, other aspects of gender are social and cultural artifacts, formed by convention, and susceptible to change. Traditional associations of color - pink for girls, blue for boys - or fashion - girls have long hair, boys have short hair - are neither a priori nor necessary. They are mutable.

One area for investigation is, then, the distinction between those gender and sex differentiators which are necessary and immutable, and those which are merely cultural conventions.

There are those who would make the claim that most, or even all, of what we call ‘gender’ or ‘sex’ is a social construct: that it has no objective basis in physical reality. In the words of a 2015 report issued in Missouri:

This is an element of what is sometimes referred to in gender studies as the “social constructionism” movement in psychological theory.

The notion that gender is a social construct entails that it is thoroughly mutable, and that statements about gender identity are incorrigible. Statements made by individuals who “identify as” one gender or the other would therefore be allowed to stand, and no argument against such statements would be possible.

“Gender” has become a matter of uncertainty. Rather than male or female, many see gender as a relative matter, or even a continuum. They consider gender or sexual identity to be less a reality given at conception than a matter of personal discovery. Reflective of such a theoretical perspective, increasing attention is also given to individuals who are personally uncertain about their own gender or sexual identity — in particular, individuals who are “transsexual” or “transgendered,” as well as those who identify themselves as “bisexual” or are “questioning” their gender and in the process of determining what they perceive to be their true gender identity.

Yet not only does empirical evidence in the physical realm speak of a gender identity which is temporally and logically prior to an individual’s self-identification, but data from the psychological realm also presents a case for gender which is independent of the individual’s “identifying as” one gender or the other.

Across all demographic variables, males are more likely than females to commit violent crime, and by a significant statistical margin.

Separately, learning styles between the genders are markedly different, so much so that one can deliberately organize a presentation such that one gender will learn more from it than the other.

It seems, then, that the mutable aspects of gender identity which are social constructs are few and insignificant: clothing styles, hair length, fashionable colors, etc. We see, e.g., how the Scots wear kilts, which to the rest of the world seem like women’s dresses, but which to them are masculine.

The significant and essential aspects of gender identity, by contrast, seem immutable, and independent of the individual’s self-perception or self-description. First-person gender statements may, after all, be corrigible.

Monday, May 25, 2015

Phases in Sartre's Career

In 1980, a Roman Catholic priest named Marius Perrin published his memoirs of his time in a POW camp with the philosopher Jean-Paul Sartre. Both Frenchmen, Sartre and Perrin were lodged in Stalag XII-D near the city of Trier. Sartre had been serving as a meteorologist with the French army when he was captured by the Germans.

In the camp, the intellectuals among the prisoners formed a cultural society: artists, priests, and others - including Sartre. They had discussions and even organized lectures. Some of them took careful notes. Others did extensive writing.

Captured in 1940 and released in 1941, Sartre spent a little less than a year in the POW camp. He read extensively during this time. One major point in his intellectual development was his study of Heidegger which he undertook at Stalag XII-D. He also wrote a great deal during these months.

Shortly after Perrin published his book about his time with Sartre, Alfred Desautels wrote a review of it. Desautels begins his analysis of Perrin’s biographical account of Sartre’s war years by examining its reliability.

The veracity of Perrin’s book is relevant, because it is one piece of evidence used to support a particular view of Sartre’s career. One group of Sartre scholars divide his working years into four phases. Briefly, they assert that Sartre’s career began with a phase of despair, followed by a phase of hope, followed by a second phase of despair, and ending with a second phase of hope.

Scholars who embrace this view of Sartre lean on Perrin’s work, because the first ‘hopeful’ phase in Sartre’s career is defined, under this view, as his time in Stalag XII-D. Desautels writes:

At the outset we should assume, I think, that his testimony is accurate for the following reasons: 1) the priest is an unabashed admirer of Sartre, confessing that he is deeply indebted to him for a change of outlook on life; 2) the account is based on extensive notes he took daily during the nine months together; 3) he was encouraged by good friends of Sartre to publish his account of prison life; 4) in his preface, he urges his fellow-prisoners who are still alive to make known their own recollections of their rapport with the philosopher. A Docteur-es-Lettres even at the time of his captivity, Perrin undoubtedly had the academic background to appreciate the value of Sartre’s intellectual stature.

Thus Desautels defends Perrin’s account of Sartre, for with it stands or falls not only Perrin’s book, but also a large school of thought about the progression of Sartre’s career.

Sartre’s months as a POW were highlighted by his intense engagement with Heidegger’s writings, by his authorship of the stage play Bariona, and by his intellectually stimulating conversations with his fellow prisoners, many of whom were Roman Catholic priests from France.