(Originally appeared in Lognet 94/1)
For some years now I have been aware of a disquieting paradox. While Loglan is, as it was designed to be, remarkably easy to learn, it is also—once one passes the “kindergarten stage” (represented by, let us say, the completion of the first input file to MacTeach 1)—devilishly difficult to use. Closer inspection of this phenomenon reveals that the regions of use in which our more advanced logli are now encountering serious difficulties are those that involve precisely the logical transformations that Loglan was designed to facilitate. Irony upon irony! Could it be that Loglan is failing at precisely the point at which it was designed to succeed? Have we somehow made a language that is worse for thinking in than the natural languages are?
Before rushing off to join the Anti-Logical Language Society in the hope of averting the worsening of human thought, soi crano, let’s look at both sides of this paradox a little more closely. First, as to the ease of learning Loglan: what, exactly, do we mean when we say that Loglan is “easy to learn”? We mean that its grammar is so small and regular that its rules can be mastered in a few short weeks; we mean that its morphology is so regular that its words can be resolved from one another in the flow of speech and that, once resolved, their grammatical roles can easily be inferred; and we mean, above all, that its vocabulary can be acquired at rates that are unheard of in learning other second languages, and that a vocabulary of any size can then be kept at whatever level of response strength the learner chooses (simply by using the vocabulary maintenance tools available from The Institute). Finally, the twin arts of forming and understanding Loglan utterances—which means learning to call upon all this easily acquired knowledge while reading, writing, listening, or speaking Loglan ...up to a certain level of grammatical intricacy, anyway—are very rapidly acquired. There is little doubt that if this is what “learning Loglan” means, then Loglan is, by comparison with other second languages, very easily learned.
But what about the other side of this paradox? How and where do these “formidable difficulties in using Loglan” arise? For arise they apparently do, for nearly all serious students of the language. There seems to be a kind of “sound-barrier” lying in wait for us, one that abruptly slows us down if we push our use of Loglan far enough. Let us consider where this barrier arises. In what places in ordinary speech is it likely to be encountered? In what kinds of text is it likely to appear?
Well, the first place one is likely to encounter this barrier-like difficulty is in translating natural language text. One is rollicking along through an English text from the 18th Century, say—for example, a passage from Jonathan Swift—and suddenly the question arises ‘What, exactly, is the author saying here?’ Or ‘What does this claim amount to?’ ‘There’s a quantification somewhere in these words, but what kind of quantification is it?’ ‘Is he using the in its distributive sense here? Or is he designating a set with the?’ And so on. The technical nature of the questions one is forced to ask in rendering natural language into Loglan strongly suggests the kind of trouble that Loglan is stirring up. It appears always to be logical trouble; the sort of trouble a careful philosopher, scientist, or lawyer encounters when he wishes to express some claim precisely...or to lay bare his premises for the inspection of others...or to disclose a previously hidden inference so as to make his argument utterly clear.
All right. So that’s the region where the trouble always seems to arise. It’s the region of clarity, of logic, of saying what one means and meaning what one says; it happens when one wishes to conserve whatever truth there is in one’s premises, or to convince one’s readers or auditors that one’s arguments are valid. But knowing that this is the region where these difficulties arise—the region of the validity-conserving transformations for which every language must provide—does not tell us why these logical problems should be so formidable in Loglan! They seem not to be so formidable in the natural languages...though that may be an illusion. (We will look into that proposition a little later.) Why, then, are they so troublesome when we encounter them in our “logical language”?
The answer I’m going to propose to this disquieting question is still a tentative one, but I warn you that it, too, is fairly disquieting. It occurred to me last summer in Oranienbaum, Russia, while I was looking at the Loglan enterprise from the perspective of my scientific colleagues in the Language Origins Society, that is, from the biological point of view. But I believe it is the answer that is currently most warranted by the neuro-psychological, the paleontological, and the anthropological data ...none of which are silent on this point. It is, very simply, that there is no logic gene in the human genome. There are, of course, language genes aplenty; but it appears that there are no logic ones. In fact, I suspect there are several illogic genes lurking in our genomes. Not only is “good thinking” not provided for in the basic architecture of the human head—not only, that is, are we not hard-wired to think “correctly”—but there are things we humans regularly do with our brains that logicians tell us we should not do with them...but which we all do anyway. At least we all start out doing these foolish things until some influential thinker, a teacher or an exemplar, tempts us to put aside our “bad habits” with the promise that our thinking will bear more useful fruit if we do.
I believe that one of these “illogic genes”—perhaps the most pervasive one—is the one that predisposes us to make the post hoc, ergo propter hoc (“PHEPH”) inference (“after this, therefore because of this”), a trick of thought which logicians and philosophers of science have long known to be fallacious; that is, they warn us that to argue causation from mere precedence is unwarranted by the laws of reasoning and of nature. In fact that’s what it’s called, in academic circles: the “post hoc, ergo propter hoc fallacy”. But it’s a fallacy that seems to be hard-wired in all of us. We all do it...until we learn to control it (if in fact we ever do control it). It is the basis of all human superstition (of rules about not walking under ladders, of not letting black cats cross your path, and so on), of magic (blow on the dice if you want good luck, pick up that eagle feather and stick it in your headdress if you want good hunting), of early religions (of Greek sea-captains making sacrifices to Poseidon before setting sail, of Aztec priests cutting out the living hearts of captives on the altar of Quetzalcoatl), and, indeed, of nearly all modern religious beliefs as well. I suspect that if human beliefs were carefully classified according to their evidential bases as well as the inferences by which we got them, a huge proportion of them would, in most times and places, turn out to be based on the PHEPH fallacy. Yet PHEPH is clearly wrong; it is scientifically indefensible. Equally clearly, it is an “error” we are somehow impelled by our antique nervous systems to make.
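In schematic form—and this is just the standard textbook schema, nothing new of my own—the inference runs:

\[
\mathit{Follows}(b, a) \;\;\therefore\;\; \mathit{Causes}(a, b) \qquad \text{(invalid)}
\]

From the mere fact that b followed a, nothing whatever follows about a having caused b; the rooster’s crow precedes the sunrise every morning.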
I suspect that PHEPH-type reasoning has not always been maladaptive. In fact, how could this fallacy now be instinctive—as it looks suspiciously to be—if it had not been adaptive for our ancestors? So I suspect that for most of the millions of years that hominids have arguably been thinking in PHEPH-like ways, it was very adaptive for them to do so. PHEPH is, after all, a veritable cornucopia of beliefs; you can believe almost anything by courtesy of PHEPH. So PHEPH may, in fact, be one of the neurological mechanisms by which all traditional cultures have been built. And who can doubt that what we now call “traditional cultures”—which, before the pre-Socratic Greeks began to ask embarrassing questions about them (of which the questions put by that most illustrious of the “pre-Socratics”, Socrates himself, were evidently the most embarrassing; for he was the one who was obliged to drink the hemlock!), were all the cultures that we had—were very important adaptive features in hominid life? Perhaps, after hunting, the easy believing that PHEPH and the other “fallacies” enabled was the chief adaptation of the hominid line. So let us not be too hard on “fallacious thinking”; it may have nourished us over the longest part of the long biological journey we’ve made to get here.
Whatever else they may be, human cultures are systems of belief. So PHEPH could have remained one of the most productive generators of human culture until Aristotle, sparked by the impudent questions of the pre-Socratics (‘What do you know?’ ‘How do you know it?’ ‘What can you be sure of?’), began to think about thinking some 2,300 years ago. After Aristotle, humans began—slowly, to be sure—to clean up their thinking act. Whether it was Aristotle who discovered the faultiness of PHEPH or some later logician, subsequent generations of logic-workers have by this time uncovered a veritable suite of intellectual errors that we unregenerate human animals are apparently still prone to make.
In short, that our brains should be continuing to make PHEPH and other sorts of faulty inferences is just one more way in which human biology has failed to keep pace with the extraordinarily rapid rate of human cultural evolution in the last two-and-a-half millennia...especially the extraordinary development, during this period, of that self-correcting part of it we call “science”. (The genes that predispose humans to vengeance and warfare are another set of antiquated biological dispositions that has recently given us considerable trouble, soi kecri; but our illogicality may now be even more threatening to human survival than our taste for vengeance.) As beneficiaries of our “hypermodern” cultures, we of the Western philosophico-scientific persuasion “know”, of course, that PHEPH is a fallacy. But as young, or careless, or uneducated, or preliterate humans—or as any of those billions of adult humans who still live today in traditional cultures—we are nevertheless very happy to indulge our inner natures by sometimes making it.
There are several other logical errors that people make with suspicious regularity. One of them is to mistake a conditional for a biconditional—to mistake if...then for if and only if—an error that logic teachers call “affirming the consequent”. The erroneous inference goes like this. A says ‘p implies q.’ B says ‘Well, q is true. Therefore p must be true also.’ Putting it nakedly this way, the mistake is obvious. But what if it’s not naked? What if the conditional is hidden in an all-sentence? What if it’s more deeply hidden yet, in a sentence that is not even marked by all, as in English it usually isn’t? Suppose A says ‘Communists believe in socialized medicine.’ And B remarks, ‘Well that proves it, then. John believes in socialized medicine. Therefore he must be a communist!’ All of us have heard such arguments. The amazing thing is that they are usually put to us with an air of serene conviction. Indeed, affirming the consequent is one of the most common types of human reasoning. And the surprising thing is that it’s utterly indefensible from the logical point of view. (It takes either p or not-q to get anything sensible out of ‘p implies q’.) Yet we all know people whose entire belief systems seem to have been erected by affirming the consequent. For them, conditional knowledge seems not to exist. Knowledge seems to come to them in absolute pieces, that is, in biconditionals. Again, this mistake—a mistake that nearly all human children begin their inferential careers by making—seems to be so regular, so natural, so nearly inexpungible from the heads of the adults who still practice it, that one is tempted to argue that there “must be a gene for it”...and that that gene, unfortunately, seems to be very widely distributed in the human gene pool.
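Set out in the logician’s notation—these are the standard schemata, nothing peculiar to Loglan—the two legitimate ways of getting something out of a conditional, and B’s illegitimate one, look like this:

\[
\begin{array}{lll}
\text{modus ponens:} & p \rightarrow q,\;\; p & \therefore\; q \\
\text{modus tollens:} & p \rightarrow q,\;\; \lnot q & \therefore\; \lnot p \\
\text{affirming the consequent:} & p \rightarrow q,\;\; q & \therefore\; p \quad \text{(invalid)}
\end{array}
\]

The first two schemata preserve truth; the third does not, as the single counterexample of a false p with a true q shows.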
Whatever the case may turn out to be with these putative “illogic genes”, the positive side of the “there are no logic genes” hypothesis seems unimpeachable. Good logic is almost certainly not inborn, any more than good mathematics or good physics is. Logic must be learned. And in the process of learning or teaching it, one discovers that much good thinking is counterintuitive. Apparently our intuitions lead us in one direction while our patient teachers coax us to take another.
Now what has all this to do with our “Loglan is both hard and easy” paradox? Actually, quite a lot. If there is no logic gene, if the proper handling of the machinery of logic must be learned—if, in fact, logic, like mathematics, is a biologically recent invention of human genius and so is not hard-wired in any of our heads—then a language like Loglan, which lays this extra-genetic machinery out for us on its very surface, will at first have the unpleasant effect of forcing us—or at least it will seem to be “forcing” to adults attempting to learn it as a second language...who knows how it might seem to children?—to deal with an apparatus we do not naturally know how to use...indeed, of the very existence of which our experience with natural languages has failed to apprise us...of which, in fact, they have caused us to be totally unaware. (Who was aware, before the 19th Century logicians found it out for us, that every universal “secretly contains” a conditional...or at least can profitably be so expressed?) So we are likely to regard this new linguistic experience with “spoken logic”—an experience that a “logical language” like Loglan seems to force upon us—as difficult if not downright annoying. ‘What in fact do we do with all these distinctions that Loglan thrusts upon us?’ asks Member Dunn in a letter in this issue. Loglan seems to force us to make distinctions that other languages have apparently kept decently hidden from us! Is there any way, asks Dunn, that we can avoid making them and still speak Loglan? Don’t we need, he asks, a set of special Loglan words that will actually blur the too-sharp edge of our distinctions?
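Before going on, it is worth displaying the 19th-Century discovery just alluded to, using the communist example above. The universal ‘Communists believe in socialized medicine’ unpacks into a quantified conditional:

\[
\forall x\,\bigl(\mathit{Communist}(x) \rightarrow \mathit{Believer}(x)\bigr)
\]

where Believer(x) abbreviates ‘x believes in socialized medicine’. From this premise, together with Believer(john), nothing whatever follows about Communist(john); B’s inference in the earlier dialogue is affirming the consequent at one remove.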
The plain fact is that most of us had never even noticed the difference between the “and, jointly” sense of English and and its “and, independently” sense. Or between the distributive sense of English the (The men arrived) and its set-designating one (He was one of the men). Or the astonishingly diverse ways we announce, in English, that we are dealing with a mass term rather than a denumerable one...not to mention the numerous, and often inconsistent, ways by which all natural languages, including English, express universals and existentials. Most of us weren’t even aware of these distinctions, or of the need to address them routinely, until we began to play with Loglan. Thus Loglan seems to invite us to play the logic game. But it does more than that. Its structure actually obliges us to master these distinctions if we are even to speak the language! Apparently we must learn how to handle le and lo, leu and lea, e and ze, ji and ja, and so on, in ways that accurately reflect our “intentions”. (Actually, we generally have no intentions about these matters until we start the play!) Evidently we must learn to have such intentions, and to express them, before we can even open our mouths as logli...whether we are then inclined to use them in the logic game or not.
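The first of these distinctions, at least, is easy to exhibit in symbols. Write C(x) for ‘x carried the log’, and let a ⊕ b stand, informally, for the pair taken together as a single agent (the ⊕ is an expository device only, not standard notation). Then the two senses of ‘A and B carried the log’ come apart as:

\[
\text{independently (e):}\quad C(a) \land C(b)
\qquad\qquad
\text{jointly (ze):}\quad C(a \oplus b)
\]

The first claims two carryings, one by each; the second claims a single carrying, by the team.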
So the fact that logic is part of the surface structure of Loglan makes at least one level of logic inescapable for the aspiring logli. The language actually forces us to pay attention to the logically fundamental distinctions even if we don’t know how to use them. (For example, if you’re going to speak a relative clause in Loglan, you have to use either ji or ja. As Member Dunn would say, there’s nothing in between, no neutral way of saying it. So that forces you to consider whether you intend the restrictive or the non-restrictive sense of your clause; and nothing—you slowly realize—is going to be logically more consequential than that!) Of course, by doing this, Loglan also tempts us to learn to use such distinctions well and fruitfully...not only in our speech and writing, but also in our thinking. But what Loglan does not do, and could not possibly do however it was designed, is plug all the skills of the logician into our heads while we are learning it. If I am right in this analysis, the logical tools of thought are only made available to us when we learn Loglan. In fact they are inescapably, unblinkably, even frighteningly “made available”. But any skill in using them must be acquired.
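To see just how consequential the ji/ja choice is, compare ‘the men who were armed fled’ (restrictive) with ‘the men, who were armed, fled’ (non-restrictive). Under a simple universal reading of the men—an expository simplification—write M(x), A(x), and F(x) for ‘x is one of the men’, ‘x was armed’, and ‘x fled’. The two claims are then:

\[
\text{restrictive (ji):}\quad \forall x\,\bigl((M(x) \land A(x)) \rightarrow F(x)\bigr)
\qquad
\text{non-restrictive (ja):}\quad \forall x\,\bigl(M(x) \rightarrow (A(x) \land F(x))\bigr)
\]

The first is silent about any unarmed men; the second claims, among other things, that every one of the men was armed.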
Now this, it seems to me, is the challenge that we logli face these days: that of somehow acquiring those “unnatural” logical skills. Some of us are attempting to thrash our way across the logic-barrier hoping there will be plain sailing on the other side. (I’m a sailor; and one of the marvellous things about resistance barriers in the physical world—like the one sailors used to call “hull speed”—is that there actually is some pretty glorious sailing on the other side. Let’s hope that’s the way it is with logic barriers, too!) We are attempting to acquire what for most of us are brand-new skills. And, let’s face it, in some cases these skills are turning out to be fairly difficult to acquire. At the same time, some of us are encouraging each other to go even further than that. We are devising new, easier, and more economical ways of using our rich logical apparatus. We are inventing standard speechways called “usages” that may eventually bring an unprecedented degree of logical precision and awareness into the speech and writing of ordinary logli...and perhaps, as a consequence, an equally unprecedented level of transformational validity into their thinking as well. If it does the latter, and we can show experimentally that it does, then we will have demonstrated—or at least failed to disconfirm—at least one version of the Whorf hypothesis, namely the one that says that local languages facilitate certain kinds of local thought. This is not the strong form of the Sapir-Whorf hypothesis, which speaks of limits; but it is equally important. And it is a form that I, at least, as an experimentalist, would be willing to settle for as a first step.
It may be time to consider briefly why natural languages do not present a similar “logic barrier” to their adult second-language learners. We do not have to look long for a probable cause. If I am right, the fundamental architecture of all natural languages—and, by imitation, of all those artificial languages which, like Esperanto, have been serenely modeled on them—was laid down long ago...well before any but the earliest parts of modern logic were even thought of. So naturally, second languages other than the logical ones do not force the management of logical distinctions on their adult learners. They require no such distinctions; therefore they could hardly mount a barrier composed of them before their learners.
Remarkably enough, some natural languages do not even permit some logical distinctions to be made. Chinese, for example, is said to have no (structural) means whatever for making the distinction between restrictive and non-restrictive relative clauses. Indeed, English doesn’t do a very good job of making this particular distinction. When the restrictive/non-restrictive distinction exists at all in English, it is likely to be confined to the writer’s use of commas, together with some vague and variable intentions concerning the use of which and that in speech. But even these stylistic conventions vary so widely from speaker to speaker, and even from occasion to occasion, that one can hardly regard these idiolectical variations as structural features of the English tongue. Thus, the restrictive/non-restrictive distinction—something that is absolutely crucial for many kinds of logical operations—apparently remains internal and subjective in most natural speech. It remains a matter of the speaker’s internal intentions only, and is hardly ever robustly inferable from the external structure of his or her speech.
In summary, natural languages simply do not put the tools of logical thought on the work-bench for us. What they present to us in the way of logical tools is always ad hoc, imperfect, incomplete, often semantically opaque, and usually quite recent...like the English logician’s quaint 19th Century phrase if and only if. (This is, by the way, the only nod we English-speakers give to the otherwise completely absent biconditional!) So of course the natural languages do not force logical distinctions upon us. In natural speech these distinctions remain implicit; so they are made unconsciously or not at all. Indeed, some logical distinctions are not even makable without elaborate circumlocutions...for example, the logician’s There is an x such that for the existential quantification. And all this is very likely to be true of all the languages that are rooted in the long prehistory of the race...which is all the natural languages that there are.
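The compactness we are missing in English is easy to display. The existential ‘something is both a raven and white’ and the universal ‘all ravens are black’ come out, in the logician’s notation, as:

\[
\exists x\,\bigl(R(x) \land W(x)\bigr)
\qquad\text{and}\qquad
\forall x\,\bigl(R(x) \rightarrow B(x)\bigr)
\]

Note the asymmetry—a conjunction under the existential quantifier, a conditional under the universal—which no natural language wears on its surface, and which trips up nearly every beginning logic student.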
If I am right about all this, then the difficulties we logli are now experiencing, as we battle our way through the thickets of natural illogic that clutter all our heads, are nothing more nor less than the difficulties any student of logic must expect to encounter as the formal tools of reasoning are, for the first time, thrust into his hands. Some of us—perhaps most readers of this journal—have had this experience before. We have taken university courses in symbolic logic, and some of us have even taught them. Do you remember the sentences in the back of the book? In some texts, at the chapter ends? They were often normal-looking English sentences that, as students, we were asked to render in the still-mysterious code of symbolic logic, and, as teachers, obliged to assess and correct. Often they were sentences that contained unsuspected quantifications and implied claims which were to be teased out of the thickets of English and symbolically rearranged. Those exercises, as I recall them—both as a teacher and as a student—were seldom easy. For some students, they proved almost impossibly difficult to do. But they were daunting in a way that was reminiscent of the so-called “story problems” of high-school algebra. These also seemed to be beyond the capacity of some students; but many of these same students could easily solve the equations lurking behind the story-problems once someone else had found them out. Logic may be like that; and if so, there is hope. I seem to recall that, in logic classes, even students who could not translate ordinary language into symbolic forms could often be helped to see the transformational light once a suitable symbolic rendering had been publicly displayed.
In my opinion, that is where we stand now, Logli. Some of us are attempting to rewrite, in what amounts to a new symbolic logic, those tricky sentences at the back of the book. We are calling the results of doing so “nurvia logla”, and we are publishing this visible Loglan in the column by that name. In preparing such text, we are attempting nothing less than the formidable task of casting both our own thoughts and the thoughts of others in transformable forms. Perhaps more importantly, with the help of our lodtua (“logic-worker”), Randall Holmes, we are also attempting to develop a “catalog” of standard ways of performing this rewriting task (see his Sau La Lodtua in this issue). Our hope is that once we’ve done so, others will be able to use these same usages “on the fly”. We are attempting to build usages, in short, that are transparent at first sight, that are not “idiotismos” (as the Spanish say), and so are not essentially unfathomable (as the logical uses of unless and only are unfathomable in English), usages that (we hope) any logli can easily acquire by studying our examples. But they must also be economical from the standpoint of the energy requirements of everyday speech, and they must be defensible from the standpoint of the laws of thought; that is, they must be both brief and beautiful...“elegant”, as the mathematicians like to say. With such elegant usages as Da nu tugle neba for X is one-legged, we have made a fine beginning. But we still have a long, long way to go to cover the human semantic domain.
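The economy of that last usage repays a second look. If ne is read as the exact cardinal ‘exactly one’—the usual Loglan reading, as I understand it—then Da nu tugle neba, ‘X is legged by exactly one something’, compresses the logician’s much longer:

\[
\exists y\,\Bigl(\mathit{Leg}(y, x) \land \forall z\,\bigl(\mathit{Leg}(z, x) \rightarrow z = y\bigr)\Bigr)
\]

Four short words for a formula of that length is precisely the sort of elegance we are after.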
It’s a challenging enterprise. Some of us who are already deeply involved in it are enjoying it hugely, and I can only hope that, in the months to come, many more logli will join us at the logic work-bench. What we’re doing there—working on submitted texts, preparing pieces for Lo Nurvia Logla, or thrashing out usage problems in the meetings of the Keugru—may not be the most efficient way of getting this formidable mapping task accomplished. I confess that in the dream-world I would call all the “Loglan logic teachers” together and get it done in a 3-day conference. All of the conferees would of course be competent logli as well as journeyman logicians. We would sit around and gaze at one another and argue about how best to say a thing and which way of saying it was the most elegant; and then move on to the next thing. And, of course, we would magically accomplish in a hundredth part of the time the mapping job that we, the editors and contributors to Lognet, are now attempting to do in the real world. But in the real world, the resources necessary to hold such a conference are not even in principle available to us yet. Even if the funding for such a conference were available, I fear we don’t yet have enough logli logic teachers to play a singles tennis game...much less attend a roundtable discussion! So we will have to do the best we can by e-mail “conferencing” with the amateur lodtua among our tiftua, and the amateur logli among our lodtua. (It is my hope that this paper will entice a few more of the lurking latter to make their presence known.)
But what about the second question in the title of this essay? Does the “no logic gene” answer to the first question augur well or ill for the Whorf hypothesis? Surprisingly—at least it surprised me when I thought of it last summer in Oranienbaum—I think it augurs very well. If logic actually is the kind of game that we can play on the fly, if we actually can do logic during the course of speech by following linguistic trails hacked out for us by others, and, finally, if Loglan, by making its logic tools available to us on its surface—by putting them nakedly there for all of us to hear and see—actually does facilitate their spontaneous and proper use in the course of speech, then there is a very good chance, it seems to me, that the experience of speaking such a language, and coming to handle its established usages fluently, will indeed lead to an improvement in the validity-handling behavior of at least some of its speakers. And if it does all that, then the facilitative role of Loglan in producing this effect can hardly be doubted; and this version of the Whorf hypothesis, at least, will have been to that extent confirmed.
Copyright © 1994 by The Loglan Institute. All rights reserved.