It's old news, but when Rick Warren had this year's two presidential candidates over to church for a little Q & A introduction to the evangelicals, he asked the Hon. Mr. Obama when he thought human life began. To this Mr. Obama replied that the question was "above his pay grade", suggesting that he did not have the capacity to make a judgment about the answer. The further suggestion was that no human being has the capacity to judge when a human being begins to live, and that people who think otherwise (most everyone in the room at the time, I expect) are being presumptuous for whatever motive. This seemed to involve a serious disconnect between Mr. Obama and a good share of the audience, in that he certainly must have thought that he was displaying an appropriate humility where that audience perceived arrogance. One recalls that the character of mainline liberal theology made it a fashion to posture itself as awash in a sea of moral uncertainty, except when it came to certain romantic policies. ("I don't know about the arms race but I do care about the human race!" says a mainline pastor in an old political ad.) Evangelicals are especially sensitive to this as evidence of a rejection of theistic realism.
Someone in the Democratic commentariat gave his own analysis of this disconnect as a case of the audience expecting to hear a saintly answer but getting a "head of state" type of answer. I think this shows how evangelicals are almost perfectly misunderstood. I don't think evangelicals were expecting a Jesus; what they hoped to find was a half-way decent Socrates. Mr. Obama's answer had a certain "hipness" to it, reflecting the defects of old school university ethics education rather than the recent developments of applied ethics classes. The old approach highlighted the difficulties of moral deliberation unnecessarily, relying on pedantic trolley and lifeboat examples that focused only on desperate situations. These courses were influenced by a prevailing skepticism about moral judgments, fueled by positivism about science. This pedagogy came under increased attack as the demand for moral guidance in the opening fields of business, medicine, and technology pressed new cases that required a determination from the moral point of view. Fresh writing from these fields brought new life to ethics. As someone who teaches courses in applied ethics myself, I still focus on practical reasoning and moral dilemmas, but with the view that genuine dilemmas are actually rare and typically resolvable, which leads to an optimism about applying ethics to real life. If anything, the "above my pay grade" remark is woefully out of fashion and hearkens back to a deleterious point of view.
Is the question of when human life begins above everyone's pay grade? It seems not, because most people don't think so. In answering, let me make three points.
(1) If it is human life we are talking about, clearly the question is when humans begin to exist, and the answer for humans is the same as for any other plant or animal -- at conception. The basis for saying this is as evident to biologists as it is to farmers or anyone else. Our experience with any birth triggers basic intuitions about the beginning of existence of an enduring living substance, intuitions which can be threatened only by very sophisticated forms of skepticism, and those can at least be rebutted. The judgment that human life begins at conception is certainly available to anyone and not above their pay grade.
(2) Of course, having said that, the real question is not so much when a human being begins to exist but rather when it becomes a person. Whatever a person is, it is seen to be the bearer of human rights and duties, a creature of moral standing. However, being a person also has something to do with displaying the attributes of an agent: self-consciousness, rationality, choice, etc. The concept of a person is the concept of a natural kind and not just a set of attributes, and yet it is also a moral concept. It is a concept that applies on both sides of the fact/value distinction, and it's the same concept in each case. If we assume that there is a time when the human becomes a person and that some humans may not be persons, this means that personhood is an accidental feature of humanity. But this seems straightforwardly false, since the moral dignity of personhood seems to be an objectively intrinsic dignity of that which is a person, which could not be the case if personhood were accidental. Our primary evidence for this is in our encounter with another person in the 'second person' -- as a 'you' whom we address in face to face contact with the other. It might be suggested that this is an illusion, but if so, the mechanism of the illusion has to be described so as to include the perception of personhood, which is more complex than simply accepting it as a veridical insight. So the simplest explanation is that being a person is co-essential with being a human. It may be easy for people to believe in illusions, but it is often harder, from a rational point of view, to show that something is best described as an illusion. Certainly, choosing to think that humans are necessarily also persons (with moral dignity) as the most straightforward account is not above anyone's pay grade.
(3) Finally, whatever lingering doubt a person may have about the metaphysics of personhood, one is certainly entitled to think that we would all be better off as people if we chose to promote a culture of life rather than a culture of death. The importance of having an ethical culture is impressed on everyone when we witness the impact of culture on the morals of Southerners and Germans during slavery and the Holocaust. We continue to discover the impact of culture in institutions and governments. The attitude toward persons must be a crucial factor in a leader. Certainly, to adopt the view that personhood is accidental to human beings is to look at human life as a whole as a case like a lifeboat example, in which people are saved or thrown over depending on whether they are a brain surgeon or a hobo. To be certain that we are better off thinking that humans are persons, even though not metaphysically certain that humans are persons, is a rational approach. To be neutral on this is just as good as to deny essential personhood to humans and to choose against incorrigible human dignity. The moral rationality of gambling on human dignity is certainly not above anyone's pay grade.
So the folks at the meeting were well within their rights to be miffed by Mr. Obama's remark. They need not be construed as hoping for a perfect saint, but rather as expecting that someone running for executive office would display some sufficient modicum of moral courage. Further, an option open to Mr. Obama would have been to say that while humans as persons begin to be at conception, there are various factors that may make it legitimate not to let them live, such as, at least, the case where they would threaten the mother's life. That is certainly not above anyone's pay grade either, even though one might expect that many evangelicals would not like to hear it. But it is conspicuous that Mr. Obama did not take that approach.
Welcome to Gnu's blog ! This is an online posting of my musings which concern things related to topics like Christian faith, theology, philosophy, and my hobby, Fantasy Role-playing Games.
'What did you expect to see out of a Torquay hotel bedroom window? Sydney Opera House perhaps? The Hanging Gardens of Babylon? Herds of wildebeest sweeping majestically?!' -Basil Fawlty
Monday, September 29, 2008
Friday, September 26, 2008
Pulp in the Cards
Here is a narrative very-rules-lite mechanism I've been toying with, very much like my "Pulp in the Cup" game, only with cards instead of dice.
SOCIAL CONTRACT: Players design characters and navigate them through a world managed by a Game Master (GM), who conceives of the world and functions as the eyes and ears of the world for the characters. The Game Master also fairly applies the rules of the game, designs the story that the players' characters experience, and tries to ensure that the players have a high quality time.
SETTING: The game setting is any setting familiar to all the players and the Game Master, whether from some popular franchise or designed by the GM.
CHARACTERS: Character design is by straight description. Given the setting and other restrictions set by the GM, the player conceives of a character that fits the setting in a logical and coherent way. The working assumption is that the level of ability of the character is generally adequate to the level of challenge provided by the adventure according to the world of the narrative, being neither over qualified nor under qualified for it. A good way to invite the player to design his character is to ask him to be able to answer questions such as "How did your character come to be qualified to face this adventure?" and "What motivation led your character to accept this adventure?". The "Character Sheet" is simply the character notes made by the player about his character.
THE DECK: From a standard deck of playing cards, construct a deck of fourteen cards like so:
One black ace
One black deuce
One black four
Both red aces
Both red deuces
Seven picture cards (any mix of suits)
The values of the cards are as follows:
All aces are 1 point.
All deuces are 2 points.
The four is 4 points.
All picture cards are 0 points.
The sum of the points of the black cards represents the quality of the action taken by the character (higher is better). The sum of the points of the red cards represents the quality of the resistance to the action by the circumstances (higher is greater). If there are no cards of one color or the other, the sum for that color is 0.
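Since the whole mechanism lives in this little deck, it may help to see it laid out as data. Here is a minimal Python sketch (the tuple representation is my own illustration, not part of the rules; the picture cards' suits don't matter, since they score 0 either way):

```python
# The 14-card deck: counts and point values per the rules above.
# Each card is (color, points).
DECK = (
    [("black", 1)]          # one black ace
    + [("black", 2)]        # one black deuce
    + [("black", 4)]        # the black four
    + [("red", 1)] * 2      # both red aces
    + [("red", 2)] * 2      # both red deuces
    + [("picture", 0)] * 7  # seven picture cards, worth nothing
)

def score(hand):
    """Return (black_sum, red_sum) for any set of dealt cards."""
    black = sum(pts for color, pts in hand if color == "black")
    red = sum(pts for color, pts in hand if color == "red")
    return black, red
```

Note the slight asymmetry the rules build in: the black cards total 7 points against the red cards' 6.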
THE RULES:
Standard actions: In the course of playing, if a player's character finds himself in a situation where he needs to make an action, the situation should be resolved logically from the character's description and the description of the setting. As much as possible, logic and plausibility should decide whether a player's action was successful. If there is a chance of failure, resolve the action by the following means:
Have the character's player (which is the GM in the case of non-player characters) shuffle and cut the deck, then deal seven cards off the top. Fan and examine the hand. If the sum of the black cards is larger than the sum of the red cards, the action was a success. The difference between the black and red sums indicates the degree of success or failure. As with 'Pulp in the Cup', the GM describes what happened in such a way as to justify the figures in the result.
If it's decided that the cards are necessary to resolve an action, the results should as often as possible be unmodified. But the logic of the description may require that the character be considered especially advantaged or disadvantaged in performing an action. In such a case, the GM may make a subjective judgment of the net difficulty and assign a value from +4 (nearly impossible) to -4 (nearly fool-proof) before the cards are dealt, adding the modifier to the red card sum before comparing. This modifier can be discussed with the players for cogency, but the GM's ruling is final.
Critical successes and fumbles: If the degree of success on the unmodified deal is +5 or more, the action was a "critical success" and may have benefits beyond the goal of the action. Similarly, if the degree of failure on an unmodified deal was -5 or less, the action was a fumble and has other detrimental consequences for the character making the action. The GM determines what these benefits or consequences are.
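To get a feel for how often an unmodified deal succeeds or turns critical, one can simulate it. This Monte Carlo sketch assumes a fair shuffle and treats a tie as a failure, as the "larger than" wording implies:

```python
import random

# Point values of the 14-card deck; "b" = black, "r" = red, "x" = picture.
deck = [(1, "b"), (2, "b"), (4, "b"),
        (1, "r"), (1, "r"), (2, "r"), (2, "r")] + [(0, "x")] * 7

def deal_margin(rng):
    """Deal seven cards and return black sum minus red sum."""
    hand = rng.sample(deck, 7)
    black = sum(p for p, c in hand if c == "b")
    red = sum(p for p, c in hand if c == "r")
    return black - red

rng = random.Random(0)
trials = 100_000
margins = [deal_margin(rng) for _ in range(trials)]
success = sum(m > 0 for m in margins) / trials
crit = sum(m >= 5 for m in margins) / trials
fumble = sum(m <= -5 for m in margins) / trials
print(f"success: {success:.1%}, critical: {crit:.1%}, fumble: {fumble:.1%}")
```

A pleasant symmetry falls out of dealing seven cards from fourteen: the seven cards left in the deck always score 1 minus the dealt margin, so the events "margin at least 1" and "margin at most 0" are equally likely, and an unmodified deal succeeds exactly half the time.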
Resistance deals: If the character is not the acting agent but the recipient of the action (being enchanted, being poisoned, being diseased, etc.) the character's player shuffles, cuts and deals the cards as with a standard action, except the black cards represent the success of the action performed on the character and the red sum represents the resistance to it by the character. These deals may also be modified if deemed appropriate and they may also be critical successes and fumbles.
Combat: Combat between characters, if the story calls for it, proceeds in the following steps:
1. Determine surprise
2. Determine initiative order
3. Resolve combat actions according to initiative order
4. Repeat steps 2 and 3 until combat is over
If characters do not simply enter the arena of combat on an equal footing, it may be that one opponent surprises the other. This is settled by a standard action deal using the relevant attribute of the character who is trying to gain surprise against the resistance of the character being surprised (such as stealthy movement versus keen sightedness). Again, the description of the characters is decisive. If surprise is successful, the character gaining surprise gets a free attack without retaliation before combat starts. Once surprise has been determined and resolved, proceed to step 2.
Initiative Order: Initiative is determined by shuffling the deck and dealing two cards to each combatant. The total number of points (red and black) is taken, and whoever has the highest sum goes first, then the next and the next. If there are ties, each tied character receives another card to determine the order between them. Keep doing this until all ties are resolved. If you run out of cards, go by birth dates or first names. The GM may decide to redo the initiative order after each cycle of combat or keep the initiative order until combat is finished, only adjusting it if combat affects it. It is possible that a character may have an advantage in initiative. In this case, the GM subjectively assigns a number (from 1 to 4) and adds that to each deal (initial deal plus tie breakers).
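The initiative procedure can be sketched in code; this is my own illustrative implementation (the rules don't prescribe one), using alphabetical order of names as the last-resort tie-break. Note it assumes at most seven combatants, since the opening deal takes two of the fourteen cards each:

```python
import random

# Point values of the 14-card deck; color is irrelevant for initiative,
# since red and black points are simply summed.
CARD_POINTS = [1, 2, 4, 1, 1, 2, 2] + [0] * 7

def initiative_order(names, rng, bonuses=None):
    """Deal two cards each and order combatants by total points, high
    first. Tied combatants each get one extra card per round; any
    initiative bonus is added to the opening deal and to every
    tie-breaker, as the rules direct."""
    bonuses = bonuses or {}
    deck = CARD_POINTS[:]
    rng.shuffle(deck)
    # Each combatant's running key: opening two-card total, then tie-breakers.
    keys = {n: [deck.pop() + deck.pop() + bonuses.get(n, 0)] for n in names}
    while deck:
        # Group combatants whose running totals are still identical.
        groups = {}
        for n in names:
            groups.setdefault(tuple(keys[n]), []).append(n)
        tied = [g for g in groups.values() if len(g) > 1]
        if not tied:
            break
        for group in tied:
            for n in group:
                if deck:
                    keys[n].append(deck.pop() + bonuses.get(n, 0))
    # Sort by key list (descending), falling back to name if cards ran out.
    return sorted(names, key=lambda n: (keys[n], n), reverse=True)
```

Because each tie-break card extends only the tied combatants' keys, it settles the order within a tied group without disturbing anyone else's place.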
Combat Action: When a player's turn (or the GM's, when the attacker is a non-player character) in the initiative order comes up, the player describes the attack and makes a standard action deal. This may be modified as any other standard action. The effects of a successful attack (a hit) are proportionate to the degree of success indicated by the deal and are determined by the GM (based on the skill of the attacker, the type of weapon used, etc.). A critical success indicates extended effects (crippling, losing an arm, insanity). A fumble indicates damage to the attacker himself (his body or weapon). One of the effects of an attack could be to advance or lose one's place in the initiative order. The GM is encouraged to think creatively about combat and consider how to resolve running away, attacks prepared for in advance of combat, attacks of opportunity, and so on.
End of Combat: Combat continues until one character is incapable of fighting or tries to escape, possibly creating an opportunity for a free attack as in the case of surprise. Not all combat necessarily ends in death. It may end when a character is unconscious or incapacitated. If the character does not need emergency help, he should get better over time with sufficient attention. Otherwise, the character will die without special assistance. Again, this should be resolved in the GM's story.
Powers: It is important to maintain the rule that the level of the character's ability is appropriate to the level of the adventure. Introducing powers must be done carefully. Some powers can be treated as skilled standard actions (psychic powers), some as resistance deals (spells) that cost the power user in some way (money, sanity, mana, physical damage), and some may just be part of the definition of the world and always resolve in some predetermined way (super powers). It's up to the GM to decide what works best for his story.
THE END OF THE GAME: This is also a pick-up game, and there is no provision for experience points. However, it is possible to have the players keep their character sheets and replay their characters in another adventure. The GM is free to augment the character description to correspond with the in-game experience of the character, but the real indication of this is that the character is promoted in rank and is qualified to go on even more challenging adventures than before.
When the game is over, pick up your deck and put it in your shirt pocket.
Thursday, September 18, 2008
Gnucomb's Paradox Revisited
Newcomb's Paradox as a model of the rationality of believing in a limited atonement.
Cut from a post I wrote on another forum.
A perfect atonement points to a limited atonement, but concerning this I said that, though limited, faith in the atonement for your own salvation was rational even if you could not know for whom exactly Christ died. I had said this as if it were a decision based on uncertainty, such that even if you didn't know whether Jesus died for you, it was better to gamble that he did than that he didn't (like Pascal's Wager). But it's more complex than that, and I was reluctant to go into detail about it at the time. Here it is if you are interested.
The problem is this: if Jesus died for you, then you will believe in him in a saving way, but if not, then not. So there is no way to isolate the effect of Christ's death for the believer from the believer's belief, as my first pass on this assumes. To illustrate why faith in the atonement is rational, consider the following fanciful story.
The Fairy Godmother appears to Cinderella and gives her a chance to go to the Ball. She presents Cinderella with two boxes, a really big box and a small box. "Listen carefully", says the Fairy, "You must make a choice. You can choose to just open the big box or you can open both boxes. In the small box there is a lovely new mop that will help with your chores, but if you just pick the big box, you don't get what's in the small box. Now you are very dear to me and I know you even better than you know yourself (and I'm magical and can see the future). If you take the big box alone, you will find that I foresaw that and placed in it a beautiful gown, glass slippers, a carriage and nine, and an invitation to the Ball. But if you take both boxes, I will foresee that and the big box will be empty. (I thought about giving you the trip to the Ball but making you come home at midnight, but what fun is that?)"
"Now, my dear, I have already foreseen your decision and have loaded up the boxes as I said I would. So I will leave you to your choice. Ta-ta." And thus Cinderella was left to choose the boxes. Now, since it was already a settled fact whether or not something was in the big box, it seems that the most rational thing for Cinderella to do is to open both boxes and take everything; after all, a new mop would come in handy whether or not one goes to the Ball. But given that the state of the boxes was conditioned on what Cinderella would do, it seems that the most rational thing for her to do is to open just the big box. After all, no new mop is worth giving up a dance with the Prince. So what should Cinderella do? It seems clear that just choosing the big box is the rational thing for Cinderella to do given the story, in spite of the apparent rationality of choosing both.
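For readers who like the decision theory spelled out, the Fairy's setup can be written as a tiny payoff function. The utility numbers here are my own illustrative assumptions, chosen only so that the Ball dwarfs the mop:

```python
# Assumed utilities: the Ball is worth far more than a new mop.
MOP, BALL = 1, 100

def payoff(choice, foreseen):
    """Cinderella's payoff given her choice ('big' or 'both') and
    what the Fairy foresaw her choosing."""
    # The big box holds the gown only if one-boxing was foreseen.
    gown = BALL if foreseen == "big" else 0
    mop = MOP if choice == "both" else 0
    return gown + mop

# Dominance reasoning: with the boxes held fixed either way,
# taking both boxes is always worth exactly one mop more...
for state in ("big", "both"):
    assert payoff("both", state) == payoff("big", state) + MOP

# ...but perfect foresight ties the state to the choice, so only the
# matching cases can actually occur, and one-boxing wins outright.
assert payoff("big", "big") > payoff("both", "both")
```

The two assertions are the paradox in miniature: two-boxing dominates within each column of the payoff table, yet foresight restricts the live options to the diagonal, where one-boxing is far better.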
In a way, the sinner who is convinced that the atonement is limited is in a similar situation. He knows that it is already settled who is saved and who isn't by the fact of Christ's death. Those who are saved will be saved, and those who are not saved will never be saved. But one only receives the benefits of salvation through faith, so that only those who believe in Christ are saved. So Christ died for all those and only those who believe in him for salvation, and Christ's death foresees faith in those for whom he dies precisely because it is due to Christ's death that they come to believe (as we will discuss later). So when a person is confronted with the claims of the gospel and believes them to be true, he can decide either to believe in Christ or not. To not believe is like choosing both boxes by enjoying a pre-Christian life as much as possible -- after all, if Christ died for you then you will be saved anyway, but if not, what difference does it make? But it is clear that no one is atoned for who does not believe. So believing is like choosing just the big box.
Sorry, I've been really sick.
This summer was a bad adventure. One week into teaching, I contracted E. coli and was also seeing the doctor for gallbladder disease. It was a nightmare, but the good news is that both are behind me and I didn't need surgery. So back to it.
Currently, I am trying to work on a "Pulp in the Cup" scenario set in the Serenity/Firefly universe of Joss Whedon. I also have one set in the "Ghost in the Shell" universe that I have run a couple of times now and which I should just write up somehow. The question is how.
Thursday, May 01, 2008
On the Intellect
A long while ago, I took a stab at defending the irreducibility and existence of the intellect as a defining feature of human beings, which amounted only to a sort of spelling out of what I meant without providing a reason to accept it, as Hal (a fellow forumite at ABC Forum) readily and disappointedly noted at the time. I want to try again, this time focusing more on the defense of the faculty. Still, my defense is going to be circumlocutory (an A.D.D. trait), the basic strategy being to retrace the steps in the dialectic between the three great fathers of western thought: Socrates, Plato, and Aristotle. I won't be directly commenting on their works, though, just briefly giving my own take on their thought.
To understand is to be able to reduce the plurality of facts to their essential principles and thus make sense of them. Assuming that is possible, how could it be so? Do we first know what things count as examples of what principles and then abstract the principles? But sometimes we realize that we have mistakenly identified things with the wrong principles and sometimes we disagree about which things are to be identified with which principles (especially in cases of morality and religion).
Assume for the sake of argument that we must somehow know the principles already and thus use them to identify the facts and things which exemplify them. But this leads to the following dilemma: if I already know the principles, then I don't need to investigate them. If I don't already know the principles, then I cannot investigate them, since they, by assumption, have to be known in order to recognize them in their true examples. Of course, I either know them or I don't. Therefore, either way, investigating principles is unmotivated. But this would mean the end of all reasoning and discovery, and the task of understanding would be finished before it begins. This cannot be right, since in my experience and in the experience of the race, we seem to make genuine progress in understanding through systematic efforts of inquiry, and even in particular cases one who has had no education can discover some universal truths, such as mathematical truths.
How can that be possible, given our dilemma? Well, suppose that knowledge is ambiguous. Suppose that there is some sense in which we know and some other sense in which we do not know what something is or what principle applies to it. Just suppose, for the sake of argument, that we knew the truth of something and had forgotten it, but we recalled it through our efforts to investigate, which led us to something that served as an inadvertent mnemonic device, like a string tied around one's finger. Of course, to make that move avoid the dilemma in every possible case, we would have to say that there was a time when we knew everything, and that must be some time before any actual day in our life. We must have known it before we were born and forgotten it, but life and investigation bring it back to mind.
This would explain the apparent experience of "discovering truth on our own," which suggests that this way of things might be the case. It also avoids the dilemma by showing it to be a fallacy of equivocation: what does the dilemma mean by "I know"? If "I know" means "I knew it before I was born," then my "knowing" does not mean I can't find out about it, since I might have forgotten it and may yet remember it. On the other hand, if "I know" means "I remember it from before I was born," then my not "knowing" it does not mean I cannot find it, because I could have known it from before I was born and not remembered it. So given this theory, which identifies discovery with remembering, and the two senses of "knowledge" it involves, there is no way to read the dilemma that does not make one or another of its premises false. So if the doctrine of remembering is true, then the dilemma fails, and we can go on investigating and seeking understanding.
Is this doctrine true? We can see it has some explanatory value as a possible way to explain cases of individual discovery of universal truth, but that does not prove it to be true, just that it is prima facie rationally possible. Still, even if that is the best we can do, it means it is not necessarily the case that the dilemma is sound. So even if we cannot provide a sufficient theoretical justification for the doctrine, we have good reason on prudential grounds to accept it. Extrinsically, many great things have been accomplished through the pursuit of knowledge which would not have been if we had followed the consequences of the dilemma. But perhaps even more importantly, the rigor and discipline of study has improved the use of our faculties, so that we are better people for it. So it is better to have the fruits of study even if it turns out that study is ultimately hopeless, than to forgo a study that would actually be rewarding because of this dilemma. Since we cannot be certain whether the doctrine of remembering is true or false, it is still rational to risk that it is true rather than otherwise.
So we have prudential grounds, if not theoretical grounds, for accepting the recollection view of knowledge. In accepting the risk that our inquiries are not mooted by the dilemma above, we must accept that we are risking at least that the world is objectively such a place that the dilemma is unsound. A recollection world is just such a world. But once we have decided that such a risk is reasonable and accepted it, we can ask whether a recollection world is the most reasonable account that avoids the dilemma. We have good reasons for seeking a better solution. For example, to redefine what in our experience is discovery and learning as a kind of remembering falsifies our normal intuition that when I learn, I learn something I didn't know before; that is, it falsifies our ordinary judgments about learning and replaces them with something else. In that sense, it seems to be guilty of changing the subject as a way of dealing with the question. Is this really necessary? Further, as a way of explaining inquiry, it seems to make things more complicated rather than simpler. If to understand is to reduce to principles, this theory multiplies the phenomena to be explained, and does so by introducing hypothetical entities (like pre-existing souls). It also seems to be redundant: we explain learning -- an encounter between a mind and a truth -- as a form of remembering some previous encounter between a mind and a truth. Why not use whatever works in the supposed prior encounter for the one you are trying to explain? These principles -- don't multiply facts unnecessarily, preserve the appearances, and avoid redundant or regressive explanations -- seem to be principled reasons for preferring one explanation over another. But notice that alongside these desiderata is the principle that the explanation should avoid the dilemma above, because a view which does so is more prudent than otherwise.
So instead of postulating a mythical prior encounter with the truth, postulate that the resources for understanding are already available in the context of the present encounter you are trying to explain. That means that rather than locating the universal apart from the concrete facts, say that the universal is present with the facts. Rather than postulating a mind prior to the body of the person encountering the facts, say that the mind is present with, and as an essential part of, the person encountering the facts. Finally, rather than saying that the perception of the facts is a mere memento that reminds us of a prior encounter with the truth, postulate that the mind directly encounters the universal in the facts as the person focuses on the facts, thus getting a progressively clearer account of what the universal is. This fits better with the principles above, and it avoids the dilemma by diagnosing the equivocation in it not as a priority in time but as a priority in the natural order of cognitive functions. We discover the principle when we discover the fact, since the principle is there with the fact; but that principle is not clear to us until we think hard about what it could be, yet it may be sufficiently clear to us to discriminate between the things that exemplify it and the things that do not. It is possible we could be mistaken about that, but it is not necessary. And if we have a mind that develops by use, we may come to rely on it reasonably.
Which means we have good reason to replace the original assumption we started with -- that we have to know the principles before we can know the facts -- if "know before" means "know prior in time." We can start with the facts and reason to principles, in the sense that the facts already bear the principles and thus present them directly to the mind while the facts are mediately presented to the senses, not because of an arbitrary memory association but because of the explanatory relationship that exists between the principles and the facts. Of course, this adaptation between the mind and the principles must be no mere accident but must find the same principle of explanation for both in a common cause, most particularly finally and formally, but also necessarily and ultimately efficiently ("and this we call 'God'"). And this mind that is caused and causes knowledge we call the intellect, about which more could be said, such as the immateriality of the intellect and so on, but that's for another time.
So the argument for accepting the doctrine of the intellect is this:
(1) The doctrine of the intellect explains how inquiry is possible. (Inquiry being the task of reducing the diverse facts to their ultimate principles.)
(2) The doctrine of intellect is a better explanation than the doctrine of recollection (or any other rival that tries to show how inquiry is possible, as far as we know).
(3) It is better to accept the doctrine of intellect and continue to pursue inquiry, than it is to reject the doctrine of the intellect and stop inquiry (because we and society will be better for doing so).
--------------------------------------------
(4) Therefore, we should accept the doctrine of the intellect as true.
Science and Whistleblowing
In the Dover trial, one of the arguments Judge Jones gave for dismissing ID as religion was the usual one: that ID fails to live up to this or that set of criteria for what counts as science. The assumption seems to be that what counts as science must conform to an absolute set of criteria, as if such criteria were the necessary and sufficient conditions for science.
Can you imagine if some materialist philosopher in Athens had bumped into Socrates and been confronted by him with the question "What is science?" Finally, Socrates would have found someone who could actually give him the sort of definition he demands, a complete conceptual analysis of "science," without any slave boys to encourage him. But I imagine that Socrates would say that in the end what the philosopher is doing is simply providing an example of reasoning, and that the really relevant target is to know what reason is. And to identify the definition of reason with the example of science would certainly fail Socrates' test. "Science" would then be the form of reasoning the philosopher happens to like the most.
But it occurs to me that identifying what has the right to be considered science is very similar to identifying whether someone has the right to blow the whistle on his superiors. In such a case, instead of providing the would-be whistleblower with hard and fast rules or criteria, ethicists typically provide a set of guidelines to help people determine this, such as: do you have documentation for the wrongdoing you are about to expose, have you tried the instituted channels first, will whistleblowing make a significant difference, etc. The guidelines provide sufficient but not necessary criteria for identifying the right to blow the whistle; it is clear that, in the nature of some cases, one could have the right to do it without satisfying one or another of the guidelines. For example, the whistleblower may not be able to provide documentation for reasons of national or corporate security (e.g., trade secrets), or he may already have evidence that those involved in the proper channels are part of the conspiracy. But exceptions like this would not disqualify him from the right to blow the whistle, since they provide legitimate exceptions to the guidelines.
Now why is this not true of science? There are certainly conditions that we would like any scientific theory to satisfy, such as reproducibility, explanatory simplicity, fecundity, and predictive power. However, there are many things that we can explain but not predict, and that we can predict but not explain. In particular, there are many things that empirical examination informs us about which are not in principle reproducible, such as the Big Bang, the extinction of the dinosaurs, and other original phenomena in nature. Why aren't these cases similar to cases where one or another guideline of whistleblowing is legitimately suspended because of the nature of the case? There may be debates about whether a particular case counts as an original phenomenon or a phenomenon that ordinarily happens, but that would simply turn out to be a debate over whether a scientific approach would legitimately suspend the criterion of reproducibility, not over whether the account would be scientific.
It seems to me that ID draws its conclusions from highly specialized empirical investigations into phenomena occurring in nature. That ID argues for these cases as original phenomena does not seem to make it unscientific, given what I have said. Paul Davies in one place postulates the existence of "informational laws" that are other than the physical laws but which account for certain cases of sudden informational complexity in nature, since physical laws alone cannot. However, he thinks that these informational laws are emergences of natural processes. If this means that there was a time when informational laws didn't exist and then they did, I don't see how this improves upon Dawkins's "aliens did it" claim, now made famous in the new film Expelled. It just pushes the explanation one step back. But if informational laws "immediately emerge" -- that is, if they are the ultimate explanation of what seems to be a fundamental and irreducible part of nature itself -- then they would explain by virtue of being a necessary part of our view of nature. There must be a non-contingent source of informational laws in order to explain cosmological phenomena (and this is what everyone means by "God"). But this would be science corroborating theology rather than theology substituting for science.
Tuesday, April 29, 2008
Darwin and Hitler
The new film Expelled!, which I haven't seen and am not sure if I will see, makes some kind of claim about the relation between Darwinism and Hitler, only I am not sure yet what it is. The reviewers seem to perceive different claims being made.
One claim that is certainly false is that Naturalistic Darwinism (hereafter 'Darwinism') logically entails Social Darwinism. There is nothing inconsistent in being a Darwinist and being against eugenics of any kind. There is still an account to be given of what sort of reasons a Darwinist might give to oppose eugenics, but it is not hard to imagine a social-costs-versus-social-benefits type of account possibly being empirically justified. So hopefully the movie is not embarrassingly making this claim.
However, one thing that can be said is that Darwinism sees all things as merely instrumental or extrinsic causes, and that includes human beings. So there is nothing inconsistent about being a Darwinist and being a eugenicist either. It is inconsistent with Darwinism to think that human beings are actual ends and not means (or mere means). So even if Darwinism does not entail Social Darwinism, it may be that there is no good comprehensive social policy that is consistent with Darwinism.
A possible exception to that claim would be treating human beings as if they were ends because that is the way the majority of people prefer to be treated. This could be secured through a social contract. Of course, if people's preferences changed, so would the contract, and there would be no motivation to resist the change. Whether or not people ever adopted such a contract would be a matter of probability. However, if the eligible preferences are restricted to rational preferences, such that they comport with the widest view of all the facts and all that we accept as true, it seems that we would reject a fortuitous preference for "as-if-endship" in favor of a Weberian bureaucracy.
One connection that people claim to find is that Darwinism "inspires" eugenics or Social Darwinism. This seems to mean that when one comprehends the meaning of Darwinism, one tends to adopt eugenics policies rather than otherwise -- something about the Darwinian vision creates a proclivity to accept eugenics. Perhaps the affirmation that humans are, after all, mere means tends to attract the simpler-minded to the apparently most radical and clear way to affirm it (i.e., adopting eugenics or Social Darwinism) rather than to consider all the possibilities. For the Darwinist this would be a sociological question rather than a theoretical one. The evidence usually cited -- that Hitler repeatedly refers to Darwin in support of his ideology -- at best fails to disconfirm the claim, but it does not provide a significant sample for support.
One claim that is certainly false is that Naturalistic Darwinism (hereafter 'Darwinism') logically entails Social Darwinism, for there is nothing inconsistent about being a Darwinist and being against eugenics of any kind. There is still an account to be given of what sort of reasons a Darwinist might offer against eugenics, but it is not hard to imagine a social-costs-versus-social-benefits account being empirically justified. So hopefully the movie is not embarrassingly making this claim.
Thursday, April 03, 2008
Worldview and Truth
This is sort of a continuation of my thoughts about worldview and the two types of virtue.
Let's look at intellectual virtue as a moral virtue again. I take it that worldview formation would be an intellectual virtue of this sort, a virtue the good of which is intrinsic to itself and not merely extrinsic to it. Assuming that is right, what is the relation between such an intellectual virtue and truth? Not that the virtue maximizes truth relative to error in a reliable way, since that would just be extrinsic. The merit of the virtue is self-authenticating in the sense that if the virtue has a character then it succeeds by fulfilling its own characteristic ends.
This seems to point to some kind of epistemic theory of truth for such a virtue: truth is the final form such thinking takes, and my thinking is true insofar as its present state approximates that final state (C. S. Peirce). This conclusion is unattractive for a realist, and I intuitively want to be a realist, one who thinks of truth as at least a correspondence between thought and reality. What to do? If I want to recommend worldview formation as an intrinsic intellectual virtue but remain a realist at the same time, how is that possible?
One approach to worldviews and truth is to see a worldview in itself as a set of truth claims. A worldview is a set of defining beliefs such that one has to hold such beliefs in order to be a member in good standing of a certain community. As such these beliefs are truth-valued and evaluable by tests such as confirmation by evidence, logical consistency, existential confirmation, and so on. It seems to be a necessary condition of a worldview that it would receive a positive evaluation on such an approach.
However, it clearly would not be sufficient if worldview formation is an intrinsic intellectual virtue. One could have the appropriate set of beliefs that pass the test above based on nothing else but extrinsic intellectual virtues (and moral ones -- with the possible exception of whatever 'existential confirmation' turns out to mean).
(In fact, if existential confirmation means an intrinsic satisfaction, that raises the question of whether it is truth indicative.)
So something could satisfy the account of truth evaluation of worldviews given and still not be a worldview.
What to say then? As I understand the idea of worldview formation as an intrinsic intellectual virtue, worldview formation makes an integrated agent possible. A person with a well-formed worldview is able to approach everything with a common identity. She is not one person in one set of circumstances and another person in another, where what determines which she is are the circumstances rather than her reasons. So worldview formation is essential to soul-making and character building, which is a form of coming into being. This suggests that the truth maker for worldview formation as an intellectual virtue is the soul being made, whether or not and to what extent that soul is actualized.
Wednesday, March 26, 2008
Darwin and Worldview again
Here is something that a Darwinist might say in reply to my linked post. "It seems that the crucial aspect of worldview forming for you is the role it plays in the personal integration of the agent, which means the role it has in cultivating virtues in the agent. An integrated agent is one whose actions and reasons are appropriately tied to virtues. But if that is right, then this poses no inconsistency for Darwinism. What makes a tendency a virtue is that it tends to maximize some good, whether that end be truth, in the case of intellectual virtues, or happiness, in the case of moral virtues. The fruitfulness of virtues with respect to these ends is what makes those virtues virtuous (or simply "right"). And if one has several virtues, whether intellectual or moral, each of which is right, then a person is right through and through. What more can "integration of the agent" ask for? But it is clear that there is no paradox between this picture of personal integration and Darwinism. Methodological naturalism is an example of an intellectual virtue in this sense, and right dispositions can be selected by fitness. In fact, Darwinism expects that picture. So Darwinists do get their worldview without a hitch after all."
This plays on a distinction that Aristotle makes between intellectual and moral virtues, which is a distinction not only between intellect and character but also a distinction between two senses of how something can be a virtue. On standard interpretations, for Aristotle an intellectual disposition is virtuous if it maximizes truth relative to error, but the benefit of a moral virtue is intrinsic to the disposition itself. Further, intellectual virtues are passive and receptive, while moral virtues are active and agent-expressive.
However, one could suggest that we see moral virtues as being like Aristotle's intellectual virtues, as virtuous because they maximize goods relative to bads. And one could even suggest that we see intellectual virtues as being like Aristotle's moral virtues, as privileging the truth that intrinsically results from a certain form of inquiry. To make a long story short, one could identify four kinds of virtue: (a) intellectual passive virtues, (b) intellectual active virtues, (c) moral passive virtues, (d) moral active virtues. A problem with Aristotle's selections, (a) and (d), is that many see them to be in a kind of tension. One solution is to adopt either (a) and (c) or (b) and (d), denying the tension. But another option is to accept all four and embrace the tension more uniformly.
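The four-way classification can be pictured as a simple cross product of the two distinctions. Here is a minimal Python sketch of my own (the labels follow the text; nothing here is from Aristotle):

```python
from itertools import product

# The four kinds of virtue as the cross of two distinctions:
# domain (intellectual vs. moral) and mode (passive/receptive vs.
# active/agent-expressive). Labels (a)-(d) follow the text above.
labels = ["a", "b", "c", "d"]
kinds = dict(zip(labels, product(["intellectual", "moral"],
                                 ["passive", "active"])))
# kinds["a"] == ("intellectual", "passive")
# kinds["d"] == ("moral", "active")

# Aristotle's pairing on the standard reading: (a) and (d).
aristotle = {k for k, v in kinds.items()
             if v in {("intellectual", "passive"), ("moral", "active")}}
# aristotle == {"a", "d"}
```

The "tension" in the text is then just the observation that Aristotle's pair occupies opposite corners of this grid rather than sharing a row or a column.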
In my first post, it seems clear that by worldview formation I had in mind cultivating an intellectual virtue of type (b) to facilitate cultivating a moral virtue of type (d). But I also think that virtues in the sense of (a) and (c) are relevant and necessary for this to be possible. This means I embrace the strategy of seizing the tension. The cultivation of active virtues is necessarily involved in rendering the results of the passive virtues into a coherent system for ourselves as agents. But that means accepting that this system is more tenuous than either the (a)-and-(c) or the (b)-and-(d) option, although the (b)-and-(d)-only option is still open to me.
The Darwinist, however, avoids my objection by embracing only (a) and (c), and he does so successfully.
Tuesday, March 25, 2008
"There's this guy named William Alston . . ."
At the link, Pastor Tim Keller of Redeemer Presbyterian Church gives a summary, at Authors@Google, of some of the arguments for God in his recent book and their relation to faith. Keller is a highly competent pastor, church organizer, and leader. He is also a fairly good reader of philosophy for a member of an educated profession. His presentation gives examples of some of the best stuff out there in current philosophy, science, and social thought. This is pretty much as well as we can expect from him, and he has done a good job. But he immediately gets sandbagged by the first objection, which is clearly given by someone who knows professional philosophy or some other highly academic field. However, this hardly amounts to a reason for giving up, since his own experience in reading lets him know that there could be something someone could say against it in professional philosophy, and his own argument was that it is reasonable to take a chance on God in spite of the evidential uncertainty surrounding the claim of His existence.
If we accept that God is bigger than we are and may have reasons for allowing suffering that we are not in an adequate position to detect, does it follow that anything at all follows from the nature of God and His "goodness", such as rewarding an atheist for his lack of faith or condemning a theist for believing? Does it follow that, since some things God tolerates are not what we expect, we cannot form reasonable expectations at all about what counts as good or evil for us within our ability to judge? I reasonably expect that if I trust Him, He will respond to me, even though it's possible, for all I know, that He could be justified in not doing so. I reasonably expect that if I do not trust Him, He will not respond to me, even though it's possible, for all I know, that He could be justified in doing so. This is because it is reasonable to think that God's character constrains His actions such that not just anything at all is possible, even if I cannot always tell what should or shouldn't happen. It still seems that Keller's conclusion is sustained: it takes less of a risk to believe in God than not to.
Do Darwinists get a worldview?
A worldview is a serious attempt to reflectively integrate one's thinking into a coherent, comprehensive, and practical interpretation of all of life in order to situate oneself as an intelligent, unified agent within the world. A worldview is necessary to unified agency. Without one, we become morally schizophrenic, having personae isolated from one another that are engaged according to circumstances rather than reflective choice. A worldview is a lifetime achievement that is one of the necessary tasks of progressive moral development. Which is puzzling if you are talking about Darwinism.
On the one hand, it seems to be clear that Darwinism is one of the most rigorous worldviews one might have, with a strict methodology and a strict standard of evidence, which is meant to systematically apply to every sphere of life. Darwinism holds to methodological naturalism and scientific evidentialism, with the result that if there is any moral duty at all it must be hedonism.
But on the other hand, if Darwinism is true then there are no unified moral agents whose unity is prior to their properties. If Darwinism is right then it's "moral schizophrenia" all the way down. Human beings are not natural agents. At most they are random artifacts with no intrinsic unity, so that even hedonism collapses into relativism.
This suggests to me that Darwinism is a view that we cannot take seriously, since it is logically totalizing and fragmentizing at the same time.
Tuesday, March 18, 2008
ARCON VII
Thanks again to the Story Teller's Guild for letting me run a game at Arcon VII and for their hospitality. I ran a combined Hackmaster and Oriental Adventures one-shot campaign with pregen characters (which I billed under the name "Katana Sensei", yuk, yuk). It was set in Edo-period Japan and was intended to model the great Japanese stories of pulp film and anime. It was also based on ideas from Ned Block, John Searle, Thomas Nagel, Masamune Shirow, and Philip K. Dick, so it was pretty high concept for your typical Hackmaster game.
It was unfortunately very story driven, not really a good fit for a simulationist RPG system, so a lot of the time it felt like GM plot-hammering. So I apologize to the players for that. I didn't think it would be much of a problem in a game that was also designed to introduce the system. The game would probably work better with Pulp in a Cup (not available in stores).
I especially want to apologize to the young lady who told me that she had really wanted to play a good character, when she found out that her good character was really a sleeper cell for an evil character (like in Total Recall). I really wanted your character to be free to be good too. My vision was that your character would be at a moral crossroads, having been both good and evil, and thus ambivalent about which to choose -- a great opportunity for creative roleplaying. But my head was so full of details and of trying to finish on time that I forgot to mention it. I will incorporate that possibility into the notes to the player next time.
I also want to apologize to the bushi warrior player who scored a crit. Hackmaster has an elaborate crit system, but if it does not translate into a great description of the drama of what's happening, what's the point? Here is a belated description of what happened:
"The bushi warrior begins to swing from a standard attack sword stance, but deftly flips the sword around her wrist to avoid the oni's parry attempt. As a result, she has a clean shot at a vulnerable leg and is able to use her acquired momentum in the slice. After the thrust she resumes her defensive stance. At first it seems that nothing has happened; her blade appears clean. Then a spritz of black ooze spouts out of the demon's red thigh, and then several gallons of black arterial blood, coming out at 150 p.s.i., spray in a 360-degree circle from the demon's leg. Eventually the leg slides off its stump and the demon falls, leaning toward you. You suddenly feel a pulsing grip from the oni's hand as it encircles your neck, as the demon ninja concentrates his ki to make one final gesture. 'What is this? Fool a friend to fool an enemy? How risible! But it is just what I ought to expect from a shinobi.' Then the creature dies, its final laugh echoing through the cemetery and into the woods beyond."
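The general idea of tying a numeric crit result to a narrative hook can be sketched in a few lines of Python. The severity bands and descriptions below are my own invention, not HackMaster's actual crit charts:

```python
import random

# Hypothetical severity table: maps a d20-style crit roll to a canned
# narrative hook the GM can then elaborate on (bands and text invented).
CRIT_NARRATION = [
    (range(1, 6),   "a glancing wound; a thin line of blood wells up"),
    (range(6, 11),  "a deep gash; the limb buckles under the blow"),
    (range(11, 16), "an artery opens, spraying in a wide arc"),
    (range(16, 21), "the limb is severed outright and the foe topples"),
]

def narrate_crit(severity: int) -> str:
    """Return the narrative hook for a given crit severity roll."""
    for band, text in CRIT_NARRATION:
        if severity in band:
            return text
    return "an impossibly devastating blow"

# Example: narrate a randomly rolled crit.
print(narrate_crit(random.randint(1, 20)))
```

The point of the table is exactly the one above: the mechanical result is only scaffolding, and the GM's job is to turn the band into drama on the spot.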
Hope that helps.
I am planning on getting Castles and Crusades, since I am still a little anxious about whether Hackmaster will survive and keep the vibe of old-school roleplaying. Then I will try to improve the episode, maybe for distribution. I would hate to waste it on a one-shot.
The link takes you to another review of ARCON VII by a regular there who appears to be having his own bittersweet "Genshiken" moment. Good luck to you.
Thursday, March 13, 2008
And Gary Gygax makes three.
Darkly, a friend of mine began to speculate about which significant figure would most likely die next to complete a trifecta for me personally, and offered a short list of likely names. Gary Gygax, the co-developer of the original Dungeons and Dragons, was on the list.
I had been exposed to role playing in my first college years, thanks to my younger brother, who was always more in tune with the world than I was. As a new Christian, I didn't quite know what to make of it. My brother and I did a few newbie dungeon crawls together. I DM'd a trip through one of the hells for his character, which in my scenario functioned more like Dante's inferno and less like the sort of place evil creatures look forward to vacationing in. I also explained to my brother a scenario based on the argument for God from the reality of evil, given by C. S. Lewis in "Mere Christianity" under the description of a criticism of (moral) dualism. Basically, the alignment compass of D&D (Law - Chaos, Good - Evil) suggests an objective standard beyond the points of the compass for basing judgments of good versus evil and lawful versus chaotic; otherwise such distinctions could not be made. So I designed a trap room such that if characters were caught in the room, the whole room would progressively teleport into an apocalyptic future, with the characters experiencing each of the outer planes condensing against the outside of the walls according to their relative locations on the alignment compass, the respective sounds and climate conditions being sensed by the occupants of the room. At the end of this tour, the room would open again, and the characters would essentially walk out into the face of a final tribunal of judgment by That Which Sets the Ultimate Standard. My brother actually thought this was interesting and told me he incorporated such a room in a dungeon he DM'd for others. I asked him what the players thought of it. "They didn't much care for it," he said.
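The alignment compass itself is easy to render as a data structure. Here is a minimal Python sketch of my own (the two-axis grid is standard D&D; the code and names are illustration only):

```python
# The D&D alignment compass as a 2-D grid: one axis runs Law <-> Chaos,
# the other Good <-> Evil, giving the familiar nine alignments.
ETHICS = ["lawful", "neutral", "chaotic"]   # Law <-> Chaos axis
MORALS = ["good", "neutral", "evil"]        # Good <-> Evil axis

compass = [(e, m) for e in ETHICS for m in MORALS]  # the nine points

def alignment_name(ethics: str, morals: str) -> str:
    """Human-readable name, with the center point called 'true neutral'."""
    if ethics == morals == "neutral":
        return "true neutral"
    return f"{ethics} {morals}"
```

The Lewis-style point in the paragraph above is precisely that nothing inside this grid can rank its own corners; calling one corner "good" presupposes a standard that is not itself a point on the compass.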
My brother actually did more to collect D&D stuff than play it. I also let it go in college, not because it had demons and magic in it, but because I was concerned mostly about the time it consumed. I did have a friend in my Christian college dorm who was devoted to it and had even developed a magnificent campaign setting. He also experienced much friction with his mother over this. But one day -- he told me -- he was with his date and had an experience of oppressiveness that prevented him from enjoying himself. They went back to his home and he told his mother what was going on and how it felt. His mother immediately blamed his role playing game hobby. He told me that at first he denied this as usual, but this time he felt that she was right. So he burned all his D&D stuff (except his campaign notes, which he felt were a major creative investment on his part) and gave the game up.
Later, in the military, I met a couple of people who told me they played D&D and reported bizarre experiences with the game. One said that he started out playing good characters until he discovered it was more fun for him to play evil characters. Soon playing evil characters was not enough for him, and later he became a Satanist and showed me his copy of LaVey's Satanic Bible. But this seemed to be a clear case of someone finding himself through the game, as if the game were a personality test -- not really a surprise. Another guy told me that he was a DM in his unit and ran a regular game. Once, two guys who played in his campaign were fighting in the barracks and no one seemed able to pull them apart. He ordered them to stop in his DM posture and they suddenly quit. He certainly felt that he was exercising an unusual type of authority that looked very cult-like.
These stories did not impress me, though they were interesting. However, I didn't seriously deal with the game again until after I was forty. I was in grad school for philosophy and miserable because (1) I was a believer in a secular department, (2) my background and interests in philosophy were not along the same lines as the department's, (3) I could not discern whether my department disdained my work for its quality or because it was out of accord with the department's outlook, and (4) the situation with my peers had significantly eroded my faith, so that I was wrestling with doubt and skepticism, though I was not ready to give up. At any rate, I had no connection with my colleagues and professors. (Total Truth: I walked away from my Ph.D. attempt.)
My friend from the department was having an after-hours gathering in his apartment with several other grad students and young profs from our department. While they were there, someone mentioned D&D. It turned out that almost all of them had gone through a D&D phase in their development. My friend still had his books from his D&D days and actually started an on-the-spot campaign. Later, he told me about this and asked me if I wanted to join up with them.
If he had asked me this years earlier I would have hesitated. But at that time I thought I was mature enough to simply enjoy it as a pastime for the reasons it originally attracted me -- the creativity, the social element, the research, etc. -- and so I said "Yes" quite handily. That turned out to provide something I could not get otherwise: level ground on which to connect with the key people in my department, where the perception of regard was mutual -- something I really needed. It was also a connection in which the plausibility structure of the fictional world was acceptable to non-Christians but friendly to Christians -- a world of magic and the supernatural. It created a forum in which to introduce Christian themes at their best without explicit associations -- something which was also important to me.
It also pointed to a space in shared public thought where I could follow a familiar path back to faith, namely the road of C. S. Lewis, who described his own conversion as first a conversion of his imagination. Roleplaying games became a resource for spiritual and intellectual recuperation. C. S. Lewis observes somewhere about romantic movements that the fascination with the road signs eventually gives way to a desire just to get to the destination to which they refer. But if you've lost your sense of your destination, you might find it again by going back and examining the signs until that passion rekindles. RPGs provided the signs to examine. Finally, RPGs do not demand that one be a top-quality author to participate, so they are egalitarian in an important way.
But my friend was also helpful in helping me to see that there was an adolescent approach to gaming and a second order adolescence, one that is not immediately self-gratifying but which is self-conscious, one that allows you to laugh at yourself and criticize yourself. That showed that there was a certain healthy detachment possible that made RPGs available to adults. And finally, that RPGs are something traditionable, especially since they had become 'traditioned' by the 90's. By traditioned, I mean that they had a track record of experiences that was able to demonstrate a tried and true character. And the tap root of that tradition was the work of Gary Gygax and Dave Arneson and friends which in turn was based on research both into medieval history and classical tactical games. This was evident in the then recent release of "HackMaster", a version of AD&D that attempted to be a self-conscious reappropriation of classic D&D. HackMaster ostensible is meant to appeal to munchkins -- people who abuse the rules for the sake of a low view of the rewards of role playing -- but in fact was an admission that we all wanted to be munchikins -- something very Thurberian.
RPGs and RPG theory have become my hobby and to a certain extent my vehicle for pre-evangelism to a watching world. And my preferred systems are those that tie back into the work of Gary Gygax.
This weekend, I will be at Arcon VII, the gaming convention of the Storyteller's Guild at SUNY Oswego. I am attempting to run a one-shot adventure combining the Basic HackMaster rules with Gary's original "Oriental Adventures" book. Since HackMaster already incorporates much of the mechanics and spirit of OA, it should be a happy marriage. I even called the game "Katana Sensei".
Good-bye, Gary. I came to love you late.
I had been exposed to role-playing in my first college years, thanks to my younger brother, who was always more in tune with the world than I was. As a new Christian, I didn't quite know what to make of it. My brother and I did a few newbie dungeon crawls together. I DM'd a trip through one of the hells for his character, which in my scenario functioned more like Dante's Inferno and less like the sort of place evil creatures look forward to vacationing in. I also explained to my brother a scenario based on the argument for God from the reality of evil, given by C. S. Lewis in "Mere Christianity" as a criticism of (moral) dualism. Basically, the alignment compass of D&D (Law-Chaos, Good-Evil) suggested an objective standard beyond the points of the compass for grounding judgments of good versus evil and lawful versus chaotic; otherwise such distinctions could not be made. So I designed a trap room such that if characters were caught in it, the whole room would progressively teleport into an apocalyptic future, with the characters experiencing each of the outer planes condensing against the outside of the walls according to their relative locations on the alignment compass, the respective sounds and climate conditions being sensed by the occupants of the room. At the end of this tour, the room would open again, and the characters would essentially walk out into the face of a final tribunal of judgment by That Which Sets the Ultimate Standard. My brother actually thought this was interesting and told me he incorporated such a room in a dungeon he DM'd for others. I asked him what the players thought of it. "They didn't much care for it," he said.
My brother actually did more to collect D&D stuff than to play it. I also let it go in college, not because it had demons and magic in it, but because I was concerned mostly about the time it consumed. I did have a friend in my Christian college dorm who was devoted to it and had even developed a magnificent campaign setting. He also experienced much friction with his mother over this. But one day, he told me, he was out with his date and had an experience of oppressiveness that prevented him from enjoying himself. They went back to his home and he told his mother what was going on and how it felt. His mother immediately blamed his role-playing hobby. He told me that at first he denied this as usual, but this time he felt that she was right. So he burned all his D&D stuff (except his campaign notes, which he felt were a major creative investment on his part) and gave the game up.
Later, in the military, I met a couple of people who told me they played D&D and reported bizarre experiences with the game. One said that he started out playing good characters until he discovered it was more fun for him to play evil ones. Soon playing evil characters was not enough, and later he became a Satanist and showed me his copy of LaVey's Satanic Bible. But this seemed to be a clear case of someone finding himself through the game, as if the game were a personality test -- not really a surprise. Another guy told me that he was a DM in his unit and ran a regular game. Once, two guys who played in his campaign were fighting in the barracks and no one seemed able to pull them apart. He ordered them to stop in his DM posture and they suddenly quit. He certainly felt that he was exercising an unusual type of authority that looked very cult-like.
These stories did not impress me, though they were interesting. However, I didn't seriously deal with the game again until after I was forty. I was in grad school for philosophy and miserable because (1) I was a believer in a secular department, (2) my background and interests in philosophy were not along the same lines as the department's, (3) I could not tell when my department disdained my work for its quality and when it was because my work was out of accord with the department's outlook, and (4) the situation with my peers had significantly eroded my faith, so that I was wrestling with doubt and skepticism, though I was not ready to give up. At any rate, I had no connection with my colleagues and professors. (Total truth: I walked away from my Ph.D. attempt.)
A friend from the department was hosting an after-hours gathering in his apartment with several other grad students and young profs from our department. While they were there, someone mentioned D&D. It turned out that almost all of them had gone through a D&D phase in their development. My friend still had his books from his D&D days and had actually started an on-the-spot campaign. Later, he told me about this and asked me if I wanted to join up with them.
If he had asked me this years ago, I would have hesitated. But by that time I thought I was mature enough to simply enjoy it as a pastime for the reasons it originally attracted me -- the creativity, the social element, the research, etc. -- and so I said "Yes" quite handily. That turned out to provide something I could not get otherwise: level ground on which to connect with the key people in my department, where the perception of regard was mutual, something I really needed. It was also a connection in which the plausibility structure of the fictional world was acceptable to non-Christians but friendly to Christians -- a world of magic and the supernatural. It created a forum in which to introduce Christian themes at their best without explicit associations -- something that was also important to me.
It also pointed to a space in shared public thought where I could follow a familiar path back to faith, namely the road C. S. Lewis described in recounting his own conversion: a conversion first of the imagination. Role-playing games became a resource for spiritual and intellectual recuperation. Lewis observes in one place, regarding romantic movements, that the fascination with the road signs eventually gives way to a desire to just get to the destination to which they point. But if you've lost your sense of your destination, you might find it again by going back and examining the signs until that passion rekindles. RPGs provided the signs to examine. Finally, RPGs do not demand that one be a top-quality author to participate, so they were egalitarian in an important way.
But my friend was also helpful in getting me to see that alongside the adolescent approach to gaming there is a second-order adolescence, one that is not immediately self-gratifying but self-conscious, one that allows you to laugh at yourself and criticize yourself. That showed that a certain healthy detachment was possible, which made RPGs available to adults. And finally, RPGs are something traditionable, especially since they had become 'traditioned' by the '90s. By traditioned, I mean that they had a track record of experiences able to demonstrate a tried-and-true character. And the tap root of that tradition was the work of Gary Gygax and Dave Arneson and friends, which in turn was based on research into both medieval history and classic tactical games. This was evident in the then-recent release of "HackMaster", a version of AD&D that attempted a self-conscious reappropriation of classic D&D. HackMaster ostensibly is meant to appeal to munchkins -- people who abuse the rules for the sake of a low view of the rewards of role-playing -- but in fact it was an admission that we all wanted to be munchkins -- something very Thurberian.
RPGs and RPG theory have become my hobby and to a certain extent my vehicle for pre-evangelism to a watching world. And my preferred systems are those that tie back into the work of Gary Gygax.
This weekend, I will be at Arcon VII, the gaming convention of the Storyteller's Guild at SUNY Oswego. I am attempting to run a one-shot adventure combining the Basic HackMaster rules with Gary's original "Oriental Adventures" book. Since HackMaster already incorporates much of the mechanics and spirit of OA, it should be a happy marriage. I even called the game "Katana Sensei".
Good-bye, Gary. I came to love you late.
Friday, February 29, 2008
Thank you, Story Teller's Guild, SUNY Oswego
This is a bit overdue, but last weekend I was able to attend a pre-conference at the SUNY campus in Oswego, New York, hosted by the Storyteller's Guild student association there. Every spring this guild hosts a massive game and anime convention, and this was an opportunity to try things out beforehand. I was able to playtest a pickup game of "Pulp in the Cup", using the "Ghost in the Shell" TV franchise as the setting. Much to my delight, we had a great group of role-players, and we had a very satisfying experience that seemed to work well. This is helping me with my "Play-By-Facebook" Pulp in a Cup campaign.
For the con, though, I will be running an adventure that combines HackMaster with Gygax's "Oriental Adventures", set in the Tokugawa period of Japan. It will be a case of Kurosawa meets "Ninja Scroll" (with a little Philip K. Dick, because -- er, because -- because I can't help myself).
I just want to bear witness that the Storyteller's Guild is an extremely well run and responsible organization and to say thanks for providing a great environment in which to game.
What would Achilles do?
Preparing to teach my intro to philosophy course again, I was impressed in a fresh way by what Socrates says in the Apology in defense of his pursuing philosophical inquiry under such life-threatening circumstances. I have been exploring this in several ways, including making it the centerpiece of a role-playing campaign I'm refereeing on my Facebook page.
In order to explain to the Athenian court the rationale for his risky behavior, he holds up the example of Achilles, the great Greek hero of the Trojan War recounted in Homer's Iliad. Achilles' mother, a prophetess, warns Achilles that if he kills Hector he himself will also die. But Achilles decides that not killing Hector would dishonor his friend, whom Hector killed, and that it would be far worse to live on with dishonor than to die with honor. He kills Hector and is killed himself.
The important point Socrates draws from Achilles' example is that Achilles' decision is rational, and that what makes it so is that the outcome of death is inescapably uncertain. Socrates had previously explained that his mission from the Oracle at Delphi was to disclose to all who held a pretentious faith in their own wisdom that in fact they were not wise at all, by testing their "wisdom" through questioning. Now Socrates argues that one of the ways people claim to be wise when they are not is in thinking that they know what death is when they really don't. Most people assume that death is the cessation of human existence, but this is precisely what is not known. Death might mean continued life somewhere and somewhen else, for all we can tell. If it does, then the things we do in this life may have consequences in the next: an act of dishonor in this life may mean moral retribution in the next, while acts of honor in this life might be recognized in the next.
Given that we don't know, it is more reasonable to live honorably even if it means death, since the value of what we miss out on if we act as though death is not the end and are wrong pales in comparison to the value of what we miss out on if we sacrifice honor to live longer, believing that death is the end, and turn out to be wrong. In other words, it is rational to choose death before dishonor in the same way that taking Pascal's Wager is rational. Following Achilles' example, then, Socrates says that it is more rational for him to go on questioning men and improving their souls even if they threaten his life than to passively neglect their well-being in order to escape death, and that he will serve God rather than man.
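For readers who like to see the wager spelled out, here is a minimal sketch of the decision matrix in Python. The payoff numbers are my own illustrative assumptions, not anything in the text of the Apology; all the argument needs is that dishonor punished in an afterlife is far worse than an early but honorable death.

```python
# Toy decision matrix for the "death before dishonor" wager.
# Payoff values are illustrative assumptions, chosen only so that the
# afterlife stakes dwarf the this-life stakes.
payoffs = {
    ("honor", "death_is_end"): -1,     # lose some years of life
    ("honor", "afterlife"): +100,      # honor recognized hereafter
    ("dishonor", "death_is_end"): +1,  # gain some years of life
    ("dishonor", "afterlife"): -100,   # moral retribution hereafter
}

def expected_payoff(choice, p_afterlife):
    """Expected value of a choice, given some credence that death is not the end."""
    return (p_afterlife * payoffs[(choice, "afterlife")]
            + (1 - p_afterlife) * payoffs[(choice, "death_is_end")])

# Even a modest credence in an afterlife makes honor the better bet,
# because the stakes are so lopsided.
for p in (0.1, 0.5, 0.9):
    assert expected_payoff("honor", p) > expected_payoff("dishonor", p)
```

The point of the sketch is just that the conclusion does not depend on knowing the probability of an afterlife; any non-negligible credence suffices, which is what Socrates' appeal to our ignorance about death trades on.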
We can be persuaded that this reasoning is right even while disagreeing that what Achilles did was in fact honorable. That depends on how Homer describes the case: whether Achilles is pursuing his own vendetta or acting properly as a court of necessity and not taking the matter personally. But it is reasonable to see life as a probation of character rather than to hold on to it no matter what, given the uncertainty of death's outcome. And this is a bit of common grace exhibited in the thought of Socrates.
Thursday, February 28, 2008
Why would a good God allow ideology?
Much to my great delight, since prayer meeting was canceled I went to the theatre, and lo, it was showing "Persepolis", a film I thought would never be shown around these parts. "Persepolis" is an animated feature from France based on an autobiographical graphic novel by a young woman, Marjane Satrapi, who grew up in Tehran during the last years of the Shah and the reign of the Khomeinists. I give it five stars out of five.
CAUTION: SPOILERS
I like films like this because they challenge my perspective on things in a way that makes me want to listen. Another example of this, although in a completely different style and situation, was "Tales from the Hood". The film recounts the author's experiences as a young and precocious child growing up in a Communist family that posed itself as an enemy of the state before the fall of the Shah, and as a teenager during the reign of the Islamic fundamentalists, including the war with Iraq. We learn from her parents about the provenance of the Shah's father: how he was foolish and would have instituted a democracy but was told by Britain that a democracy would not work with such a populace and that he should be an emperor instead. This led to an autocracy in which political dissenters were imprisoned and tortured, including Ms. Satrapi's much beloved uncle, who had received training in Marxism-Leninism in Russia. When the Shah fell, the prisoners were released and the family was reunited and delighted -- until the new regime recaptured the uncle and had him executed. At this point Ms. Satrapi, who had started out in life ambitious to be God's next prophet, dismissed God from her life. Satrapi's family and friends begin to lead a cryptic life to avoid the regulators who patrol the streets to make sure Islamic dress and behavior codes are maintained, while Satrapi is being indoctrinated at school.
When Saddam attacks Iran, beginning a war that drags on for years, Satrapi's parents send her to Vienna for school. She experiences life for once without mullahs and without war and violence. However, what she does find in Europe is prejudice, anarchists, and nihilism. Youth in Europe are immersed in ideologies as bleak as the ones she left behind. We see a humorous, Thurber-esque account of her trials with love and pot. But ultimately she suffers terrible guilt at the thought that while she is enjoying life in Europe her family is still suffering the pains of the war. Eventually she gets kicked out of her last house and winds up stranded and starving in Vienna, finally asking to come home and be with her family again.
At this point the war with Iraq is over, but her family tells her that, if anything, things are much worse. As a young woman, she goes through a terrible struggle looking for direction and wondering how to make sense of what has been happening to her. She goes to a psychiatrist who listens to her with Freudian detachment and then prescribes a lot of medicine -- "Depression is curable". The medicine only leaves her doped up, and during one of those experiences she re-connects with God (with Karl Marx cheering her on from the side). As a result she is able to pull herself together and go back to college to study art. (Now think of the ironies attached to studying art in a fundamentalist-controlled university.) After this come several other events -- confronting the authorities at school, a failed marriage, and so on -- but in the end the situation deteriorates enough, both circumstantially and interiorly, that her family sends her again to France, forbidding her to return to Iran.
In spite of the tragic circumstances, the film is infused with the good humor of the author, which shines in several places with a great comic sensibility that fits well with the original medium. We are reminded that these are comics, and it is an interesting medium in which to tell a story such as this. The film reminded me of "Maus". It was also a delight to see such geeky tropes, indicating that the geek sensibility is transnational. The film has references to Godzilla, Terminator, Iron Maiden, and so on. As separated as we are from one another by religion and politics, here was at least one note we could connect on. Of course, this is because it is a coming-of-age story.
But what is being accomplished is far more important. By telling her story, the author makes it clear that not all Iranians are what they are often represented to be in the media, and that there are people who will respond when seriously treated as rational people. Some Iranians are ready for a modern state that recognizes human rights. On the other hand, where the Iranians are modern, they are modernistic, embracing theory as the basis for life and allowing life to operate willy-nilly where theory does not apply. Where our heroine is most lost is in dealing with the frontiers of permissiveness without much to help her in dealing with relationships. This is what leads to her starvation, in a world where she remains isolated and alone.
Of course, it seems understandable given her situation. The tragic aspect of the film is that in every phase of her life she seems surrounded by bleak ideologies: Marxism or anarchism on one side, or Islam reduced to a political ideological machine on the other. A religion of liberation or a Western constitution of liberty are approaches not available to her, given the West's collusion in the rise of the Shah, the support for both sides of the Iraq war, the Western interest in oil, and the training of the torturers by the CIA. Whatever hope one might have had that another country would learn from our history has been eclipsed by the Western policy of supporting authoritarian governments as a hedge against the spread of Communism. The movie illustrates how the conduct of such governments left many with no recourse except to turn to Communism.
The movie also illustrates how the hedonism of the Iranians contrasted with the hedonism of the European youth. Given the repressive regime, engaging in parties and illegal wine consumption was not mere fun but a protest against the regime, a demonstration of liberties denied. But in Europe, this is not necessary. The same applies to the feminism one sees lifted up in the movie, which clearly must be seen first and foremost as a version of equity feminism. By going against the obvious double standard in which clothing prescriptions were applied only to women, while men were allowed to wear normal Western clothing, the characters are challenging a specific distortion of the relation between the sexes.
The sea of abstractions Satrapi is constantly swimming in offers no hope, and she finds herself facing a dilemma: choose the bleak ideology of an immanent, engineered, theoretically perspicuous deity like the state, or face absolute nihilism. But the way she ultimately escapes this false dilemma and becomes post-ideological is by rediscovering and re-appreciating her family, and in particular her grandmother. The movie is to be held up as one of the few pieces of cinema that illustrates a properly functioning family. Her parents have a solid, committed relationship and display their love for their daughter through their infinite patience with her. Her father encourages her to think and to be independent, while her mother guides and encourages her. Her uncle fascinates her with his adventures and deeply impresses her with his suffering for a principle. But her grandmother turns out to be her best counselor, not so much by appealing to this or that prescription (at least not in this movie) but by encouraging her in every way with her humor, her proactive attitude toward life (she puts jasmine leaves in her bra to make a nice smell -- I won't say where she puts her breasts), and her guidance. She encourages Satrapi to have integrity, scolds her for delighting in cleverness when she should be thinking about the other person, applauds her courageous displays, and tells her not to bear a grudge or seek personal vengeance. It is these intangible qualities that prove most effective for her and sustain her. In a way, this story is about her grandmother. It ends when we find out that the day Satrapi left Iran for France was the last day she saw her grandmother alive.
It is also noteworthy that God is always pictured as a large, grandfatherly old man who teaches and guides, even when He is being dismissed by her and even when He returns in her dreams. We cannot but project into our transitional representations of God the images we have of our own parents. So in Satrapi's picture, God is the grandfather who loves and guides, just like her own parents and relatives, even when she is mad at Him over the death of her uncle. God pleads with the free-will defense: "But these are the acts of men". She cannot but see Him as good even when she accuses Him of evil. This, and the fact that God never really quite goes away, is impressive.
It is a real challenge to think of this person living in such empty circumstances at precisely the same time (the '80s) when it really felt like morning in America for guys like me. For Iran, it was pitch black, in what seems to me like a tomb. It was amazing to see how the human spirit can still find touches of wonder and humor in such situations, but that is all the more reason to want to see such people set free from them. The big question at the end of the movie for me is whether she had found, or ever will find, true freedom from the inside out. Even though free from the repressive regime in Iran, was she free in being open to other possibilities than those her world made available to her?
CAUTION: SPOILERS
I like films like this because they challenge my perspective on things in a way that makes me want to listen. Another example of this, although a completely different style and situation, was "Tales from the Hood". The film recounts the author's experiences as a young and precocious child growing up in a Communist family that posed itself as an enemy of the state before the fall of the Shah, and as a teenager during the reign of the Isalmic fundamentalists, including the war with Iraq. We learn from her parents about the provenance of the Shah's father, how he was foolish and would have instituted a democracy but was told by Britain that a democracy would not work with such a populace and that he should be an emperor instead. This led to an autocracy where political dissenters are imprisoned and tortured, including Ms. Satrapi's very beloved uncle, who recieved training in Marxism-Leninism in Russia. When the Shah fell, the prisoners are released and the family is reunited and delighted -- until the new regime recaptures the uncle and has him executed. At this point, Ms. Satrapi who started out in life ambitious to be God's next prophet, dismisses God from her life. Satrapi's family and friends begin to lead a cryptic life to avoid the regulators who patrol the streets to make sure Islamic dress and behavior codes are maintained while Satrapi is being indoctrinated at school/
When Saddam attacks and begins to war with Iran for years, Satrapi's parents send her to Vienna to school. She experiences life for once without mullahs and without war and violence. However, what she does find in Europe is prejudice, anarchists, and nihilism. Youth in Europe are immersed in ideologies that are as bleak as the ones she left behind. We see a humorous and Thurber-esque account of her trials with love and pot. But ultimately she suffers from terrible guilt at the thought that while she is enjoying life in Europe her family is still suffereng the pains of the war. However, she gets kicked out of her last house and winds up stranded and starving in Vienna, finally asking to come home and be with her family again.
At this point the war with Iraq is over, but her family tells her that if anything things are much worse. As a young woman, she goes through a terrible struggle looking for direction and wondering how to make sense of what has been happening to her. She goes to a psychiatrist who listenes to her in Freudian detachment and then prescribes a lot of medicine --"Depression is curable". The medicine only leaves her doped up and during one of those experiences she re-connects with God (with Karl Marx cheering her on from the side). As a result she is able to pull herself together and go back to college to study art. (Now think, of the ironies attached to studying art in a fundamentalist controlled university). After this are several other events, confronting the authorities at school, a failed marriage, and so on, but in the end the situation deteriorates enough, both circumstantially and interiorly, where her family sends her again to France, forbidding her to return to Iran.
In spite of the tragic circumstances the film is infused with the good humor of the author which shines in several different places with great comic sensibility that fits well with the original medium. We are reminded that these are comics and it is an interesting medium to tell a story such as this. The film reminded me of "Maus". It was also a delight to see such geeky tropes, indicating that the geek sensibility is transnational. The film has references to Godzilla, Terminator, Iron Maiden, and so on. As separated as we are from one another by religion and politics, here was at least one note we could connect on. Of course, this is because it is a coming of age story.
But what is being accomplished is far more important. By telling her story, the author makes it clear that not all Iranians are what they are often represented to be in the media, that there are people who will respond when seriously treated as rational people. Some Iranians are ready for a modern state that recognizes human rights. On the other hand, where the Iranians are modern, they are modernistic, embracing theory as the basis for life and allowing life to operate willy nilly where theory does not apply. Where our heroine is most lost is in dealing with the frontiers of permissiveness without much to help her in dealing with relationships. This is what leads to starvation, in a world were she remains isolated and alone.
Of course, it seems understandable given her situation. The tragic aspect of the film is that she in every phase of her life is that she seems surrounded by bleak ideologies either Marxian or anarchistic on one side or Islamism reduced to a politcal ideological machine on the other. A religion of liberation or a Western constitution of liberty are approaches not available to her given the West's collusion in rise of the Shah, the support for both sides of the Iraq war, the western interest in oil, and the training of the torturers by he CIA. Whatever one could have hope another country might learn from our history as been eclipsed by the western policy of supporting authoritarian governments as a hedge against the spread of Communism. The movie illustrates how the conduct of such governments left few without any recourse except to turn to communism.
The movie also illustrates how the hedonism of the Iranians contrasted with the hedonism of the European youth. Given the repressive regime, engaging in parties and illegal wine consumption was not mere fun but a protest against the regime, a demonstration of liberties denied. But in Europe, this is not necessary. This also applies to the feminism one sees lifted up in the movie, which clearly must be seen first and foremost as a version of equity feminism. By going against the obvious double standard, in which clothing prescriptions were applied only to women while men were allowed to wear normal Western clothing, the characters are challenging a specific distortion of sex.
The sea of abstractions that Satrapi is constantly swimming in offers no hope, and she finds herself facing the dilemma of choosing the bleak ideology of an immanent, engineered, theoretically perspicuous deity like the state, or facing absolute nihilism. But the way she ultimately escapes this false dilemma and becomes post-ideological is by rediscovering and re-appreciating her family, and in particular her grandmother. The movie is to be held up as one of the few pieces of cinema that illustrates a properly functioning family. Her parents have a solid, committed relationship and display their love for their daughter through their infinite patience with her. Her father encourages her to think and to be independent, while her mother guides and encourages her. Her uncle fascinates her with his adventures and deeply impresses her with his suffering for a principle. But her grandmother turns out to be her best counselor, not so much by appealing to this or that prescription (at least not in this movie) but by encouraging her in every way with her humor, her proactive attitude toward life (she puts jasmine leaves in her bra to make a nice smell -- I won't say where she puts her breasts), and her guidance. She encourages Satrapi to have integrity, scolds her for delighting in cleverness when she should be thinking about the other person, applauds her for her courageous displays, and tells her not to bear a grudge or seek personal vengeance. In other words, it is these vague features that prove to be more effective for her and sustain her. In a way, this story is about her grandmother. It ends when we learn that the day Satrapi left Iran for France was the last day she saw her grandmother alive.
It is also noteworthy that God is always pictured as a large, grandfatherly old man who teaches and guides, even when He is being dismissed by her and even when He returns in her dreams. We cannot help but project into our transitional representations of God the images we have of our own parents. So in Satrapi's picture, God is the grandfather who loves and guides, just like her own parents and relatives, even when she is mad at Him for the death of her uncle. God pleads with the free-will defense: "But these are the acts of men." She cannot help but see Him as good even when she accuses Him of evil. This, and the fact that God never really quite goes away, is impressive.
It is a real challenge to think of this person being in such empty circumstances at precisely the same time (the '80s) when it really felt like morning in America for guys like me. For Iran, it was pitch black, in what seems to me like a tomb. It was amazing to see how the human spirit can still find touches of wonder and humor in such situations, but that is all the more reason to see such people set free from them. The big question at the end of the movie for me is whether she had found, or ever will find, true freedom from the inside out. Even though free from the repressive regime in Iran, was she free in being open to other possibilities than what her world made available to her?
William F. Buckley R.I.P.
This has been a rough week.
The debt I owe Bill Buckley for my personal development goes so far beyond even the debt I owe Larry that I cannot express it, even though I have nothing like the closeness to Buckley's world enjoyed by many who are now posting at NRO this week. So I will let them express my mourning for me as well as with me.
Tuesday, February 26, 2008
I Hope I'll See You in Heaven
When I was in college the second time, my friends and I considered ourselves members of the "3L decade", mostly because of our experiences the first time we went to college, but especially because we were all Christians in our teens when we were impacted by the still-fresh evangelical movement going on at the time. We read books by Francis Schaeffer, C.S. Lewis, Os Guinness, John Stott, James Packer, and Martyn Lloyd-Jones, and we listened to music by Phil Keaggy, Randy Stonehill, the Darrell Mansfield Band, and the group Daniel Amos. They helped shape our first steps in seeing Christianity not as a peripheral religious activity but instead as a world-and-life view that informed every area of life. Such was our version of young idealism.
"3L" was our alternative to the 3M of the previous decade -- Marx, Mao, and Marcuse. "3L" stood for L'Abri, Lausanne, and Larry Norman. The first was an alternative-style mission run by Francis Schaeffer, the pastor-thinker for post-'60s students, which he started with his family in Switzerland and which now exists in many places throughout the world. The second stood for the first evangelical international missions conference, which renewed proclamation-centered missions in contrast to the shrinking role of proclamation in what had become of the mainline missions movement, and which still continues today. And the third . . .
Well, the third was the father of Christian alternative music, the guy who asked the question, "Why should the Devil have all the good music?" He was our Bob Dylan (until Bob Dylan became our Bob Dylan for a while -- thanks in part to Larry). Larry was the first to try to establish a distinctive Christian rock on a distinctively Christian record label, and he helped to bring together a distinguished array of talented and faithful Christian performers as well as encouraging many others.
Larry Norman had been suffering from severe health conditions. On Feb. 25, they finally caught up with him and he passed away. Even though I didn't wind up agreeing with all of Larry's stuff all the time, I wholeheartedly agreed and agree with his conviction that Christian distinctiveness in the arts (including the popular arts) does not mean exclusiveness to arts within the church. I, in my roleplaying, try to live up to what Larry was in his music.
Please pray for the Normans.
Hasta La Vista, Larry.
Thursday, February 21, 2008
"Good" Art and "Good" Science
Yet another random thought: Tom Wolfe, the conservative counterpart to Hunter S. Thompson (sort of), wrote a bright piece of non-fiction about "modern art" called "The Painted Word". Specifically, he was criticizing the Pop and Op Art movements that developed around the 1960s. One of his crucial points, from which the book derives its title, is that rather than the critics sitting, so to speak, at the feet of the works of the artists and coming to discover and appreciate the value within them, the art was driven by the critics' stipulations as to what counted or did not count as art. The theory drove the art, and so what was "painted" was a set of conventions: the painted word. Wolfe imagines that a future retrospective of this period would display not the actual paintings from the period but rather the original articles that appeared in the various critical reviews, as the true representation of what this period of art was about.
The criteria stipulated by these critics seemed to be completely ad hoc. For example, the painting had to be something that could be seen "fast", that is, the viewer could immediately appreciate what the painting was about. Imagine seeing a painting of three concentric bands, like a target, each of a different color -- you got it in one. Another rule was that the painting had to have precisely the same amount of paint in each area; the painting had to be uniformly thick. It seems clear that on Wolfe's account the critics were trying to give an analysis of art -- a work A is art if and only if it satisfies a set of empirically verifiable features -- a kind of aesthetic positivism, one that dictated what could and could not be a possible work of art. It is also possible to imagine a work of art just falling out of the sky.
In the linked article, Phillip Johnson compares the thought of two outstanding evolutionary scientists, Theodosius Dobzhansky and Pierre Grassé. Grassé was considered a scientific heretic because he accepted evolution only as a term describing the general phenomena of zoological distribution and rejected the idea of natural selection as a plausible account of species variation. Dobzhansky rejected this conclusion in his review of a book by Grassé, but made it clear that this rejection was not meant to imply any demerit or lack of qualification on the part of Grassé, whom Dobzhansky recognized as one of the most distinguished French zoologists and as having an encyclopedic knowledge. In other words, Grassé thought that the appearance of genetic information in natural history could not reasonably be explained by natural selection and common descent given a knowledge of all the facts, and further, that even if we knew all the facts we would still not know the answer to the question.
Dobzhansky, on the other hand, while affirming that we cannot dismiss Grassé as unscientific, rejects his position. We must accept the natural selection account because it allows us to keep doing science. While the facts provide no support for thinking that the selection model provided for us in microevolution within species also applies to the macroevolution of new species, we "must" apply it as such, as the best candidate that allows us to continue doing science as normal. It seems in this case it is a matter of following the rules rather than sitting at the feet of the object. As Johnson points out, the modern media are more apt to say, unlike Dobzhansky himself, that what Dobzhansky is doing is science and that Grassé is being unscientific. It seems that here, as in Wolfe's case about art, what counts as science is deduced from certain stipulations which are prior to the work of research. Will the Smithsonian someday display the columns of commentators as the achievements of modern science? In this context, Johnson brings up Larry Laudan's criticism of the idea that we can demarcate science from non-science. He might also have mentioned O. K. Bouwsma's remark that the only girl for the boys in the alley is our girl Sally.
One other point. Wolfe points out in an afterword that one particular phase of the pop art movement drew an unexpected response from the critics, namely so-called "Photographic Realism", where paintings seem to be paintings of photographs as photographs. The thing about such paintings is that they satisfied all the stipulations set by the critics, and yet the critics hated them. One suspects that what irritated the critics was that such paintings were able to restore what the stipulations were supposed to eliminate, namely representationalism. The photographs that were painted were already photographs of something or other.
On the other hand, the hope in the stipulations of science is that, by only allowing explanations that fit the rules, they will still "capture" representationalism, or at least the phenomena we call representation, along with features that, like representation, are irreducibly complex or which require more than just two mutually interacting terms to explain. Perhaps both science and art in this sense hope to isolate and remove "naive" representation, but the trouble is that the specter of representation will not stay away. And if naive representation exists, it seems certainly spectral.
Anyway, it is just as clear that one cannot answer the charge that Intelligent Design is not science simply by affirming that it is, as it is that one cannot complain that Photographic Realism is art but shouldn't be.
Friday, February 15, 2008
The Greatest Conceivable President
(1) Let Obama =df the president the greater than which cannot be conceived.
(2) A being which exists in the mind and in reality is greater than if it existed in the mind and not reality.
(3) Obama exists in the mind and not in reality. (assumed for indirect proof)
------------------------------------------------------------------
(4) Therefore, a candidate for president is conceivable which is greater than Obama. (from 2 and 3)
(5) Therefore, a candidate for president is conceivable which is greater than the president the greater than which cannot be conceived. (from 1 and 4)
------------------------------------------------------------------
(6) Therefore, either Obama exists neither in the mind nor in reality, or Obama exists in both. (from 1-5 by indirect proof)
Comment: Even if we accept (2), the argument fails because the inference from (2) and (3) to (4), and again to (5), is invalid. The valid entailment is actually:
(5') Therefore, a being is conceivable which is greater than the president the greater than which cannot be conceived.
which is not absurd, and this does not lead to an indirect proof of the existence of Obama, much less of the claim that this or that empirical candidate is the Obama candidate.
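To spell the comment out in predicate-logic dress (the notation here is mine, not the post's), the gap is a quantifier-restriction error: premise (2) quantifies over beings, so nothing in it licenses concluding that the conceivably greater thing is a presidential candidate.

```latex
% Bx = "x is a being", Px = "x is a candidate for president",
% Ex = "x exists in reality", y > x = "y is conceivably greater than x".
%
% Premise (2), read as a quantified claim over beings:
\forall x\,\bigl(Bx \land \lnot Ex \rightarrow \exists y\,(By \land y > x)\bigr)
% From (3), Obama is a being existing only in the mind, so what follows is
\exists y\,(By \land y > \mathrm{Obama}) \quad\text{(the (5$'$) reading)}
% and not
\exists y\,(Py \land y > \mathrm{Obama}) \quad\text{(the (5) reading)}
% since no premise restricts the greater y to presidential candidates.
```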
However, in the case of letting God be the greatest conceivable being, the inference does work, and it at least shows that the concept of God necessarily involves necessary existence, as if existence were a great-making property of God, even if it fails to show that God actually does exist. Further, if there is any recognition of a hierarchy of good things that goes beyond the empirical facts about the world, as seems often to be the case, that would be explained by the existence of God as the greatest conceivable being, so it is at least epistemically possible that God exists. So given the conceivability and epistemic possibility of God, a trusting belief in God is a rationally live option for someone who is brave enough to risk seeking God rather than just living life.
However, it is not clear that it is conceivable or epistemically possible for any candidate to be the Obama. As far as I know, not one of them in the 2008 USA presidential race has claimed to be God yet. So it still seems to me that religion is a more rational choice than Obamism.
Tuesday, February 05, 2008
New York, 2040
NEW YORK CITY, NEW ENGLAND PREFECTURE, 2040 C.E.
Due to the pressures of globalization, the rise of new international powers, the effectiveness of the new bloodless form of cyber-information warfare, and the loss of American influence, New York City has become the capital of a semi-autonomous political entity called the New England Prefecture, which includes much of the former northeastern American states and eastern Canada. The loss of influence led to the dissolution of both the American federal government and the local state governments. The world is now divided between Eurasia, India, China, and Japan, and the NEP is a political franchise of Eurasia, basically allowed to keep its historic traditions alive but under contract with the Eurasian market. Global climate change has made Long Island and some of downtown Manhattan much like Venice, except extremely poor and unstable. The government buildings and offices are based in Yonkers. In spite of the Eurasian franchise, the new business centers are primarily dotted with corporations from India, Japan, and Japanese Australia.
Society has been strongly divided by the progress of both genetic programming and cybernetics. While there is no law against having a child naturally (such children are called "godchildren"), society vastly favors people who receive genetic programming prior to fertilization. This has led to a major and sweeping division between the social classes: those who had the means to receive genetic programming, which has become the certificate of reliability for all economic and social matters, and those who are godchildren. Even in cases where the luck of the draw has led to optimal performance, the fact that the person in question is a godchild still speaks against them. Consequently, only geneticized people hold the top positions in society, while godchildren are chronically disadvantaged.
The other technology that has impacted society is cyberization. Thanks to newly available surgical methods, many human ills can be taken care of through prosthetics, and computer and cellular software can be installed directly into a person's skull case. Such components can access the web by wifi or through an external jack system. Cyberized individuals have enhanced computation and communication links, can display information and images inside their own field of vision and hearing, and can immediately link into most computer systems. Two important consequences of this are brain hacking, in which an outside source can download a virus into someone's cybernetics and make that person a puppet, and a new social mobility, since the only way a godchild can overcome to any substantial extent the social stigma attached to them is by getting cyberized. But this being very expensive, the only way it can be had is by joining the military and serving in special ops, which requires a full-body cybernetic reconstruction for military purposes. As you can imagine, while cyberization bestows many benefits, it is very expensive to maintain, and cyborgs typically wind up working as security agents for corporations in exchange for regular maintenance.
In the future, crime has taken on new meaning, including high-level security breaches, organlegging, data crimes, and political terrorism, all of which can count on criminal genius to exploit the new technological advances. Such crime calls for special police task forces and advanced hired militias. That is where your character comes in.
YOUR CHARACTER
Your character has been assigned to the NYC Police Department Special Ops Unit, which is in charge of Cyber-crime and Cyber-terrorism. Your character is one of three possible archetypes:
Cyber-Detective: Your character is genetically programmed and represents the upper tier of society but has for some reason or another devoted themselves to police work. Your character is at least minimally cyberized with communication and computer access facilities and has the training to make the most of these for security purposes. Your character also has leadership experience and functions as a liaison between the unit and significant stakeholders in the unit's performance, such as higher-ups, executives, and ordinary citizens.
Cyber-Specialist: Your character has benefited from some specialized service and is sufficiently cybernetically and robotically modified to accommodate that service, typically because of special circumstances such as military service and training -- for example, being equipped with built-in tether wires for second-story work, or with a satellite-synchronized camera-eye and robotic steady-arm in order to be a sniper. Your background may be in some way mixed, but your appearance is affected enough by your enhancements to make social life difficult. You are not a complete cyborg, and some parts of you are vulnerable to normal damage.
Cyber-Inventory: Your character came into the ranks from the street by joining the military special ops. Your character is completely cyberized but still designed to look as human as possible. However, some parts of you may not even look human (bazooka arm, scope eye, or tank treads), but that is pretty rare. Generally, your character is super strong, super fast, super destructive, and competent in basic weapons and tactics. Plus, you have had great experience, usually more than most detectives, whom you nevertheless have to serve under. Such characters are also forbidding in appearance and often have to interact with polite society through intermediaries. They do, however, possess a great deal of street cred.
DESIGNING YOUR CHARACTER
There are no stats in this game. The success of your character depends on how well you conceive of him. Pick one of the three archetypes and make it your character's own by tricking it out with specifics. Then tell a good story about the birth of your character and how he eventually wound up working for the NYPD. Then, based logically on your character's description, identify four "virtues" and two "faults" that your character has. They need not be moral virtues; they could also be intellectual or other virtues. For example, Dick the cyber-detective lists GOOD WITH NUMBERS, LADIES MAN, HISTORY BUFF, and DIPLOMACY as virtues, and SHORT-TEMPERED and FEAR OF HEIGHTS as faults, all of which are justified by the story he tells about his character. If the player wants a bonus in making a roll, he can get it by incorporating a mention of the virtue in the description of the action he is taking. This gives him a +1d6 bonus to the action. However, if the GM can invoke one of the character's faults in the description of the setting of the action, the player takes a -1d6 penalty to the action. Further, of the four virtues, pick the one that is the best fit with your character. If you invoke that one, you get a +2d6 bonus.
Finally, add 2 free Mulligan points, and your character is finished.
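For groups who like to automate the table math, the bonus-and-penalty mechanic above can be sketched in a few lines of Python. Two assumptions of mine: the post never states a base dice pool, so a base of 2d6 and a floor of one die are house-rule placeholders, and I read a fault "penalty" as a die subtracted from the pool.

```python
import random

def roll_action(base_dice=2, virtues_invoked=0, signature_virtue=False, faults_invoked=0):
    """Roll a d6 pool for an action under the virtue/fault mechanic.

    base_dice: hypothetical base pool (the post does not specify one).
    virtues_invoked: +1d6 for each ordinary virtue worked into the action's description.
    signature_virtue: +2d6 when the best-fit virtue is the one invoked.
    faults_invoked: -1d6 for each fault the GM invokes in the scene.
    """
    pool = base_dice + virtues_invoked + (2 if signature_virtue else 0) - faults_invoked
    pool = max(pool, 1)  # house-rule assumption: always roll at least one die
    return sum(random.randint(1, 6) for _ in range(pool))
```

So Dick invoking LADIES MAN while the GM invokes his FEAR OF HEIGHTS would roll his base pool plus one die minus one die; invoking his best-fit virtue instead would add two dice.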
Due to the pressures of globalization, the rise of new international powers, the effectiveness of the new bloodless form of cyber-information warfare, and the loss of American influence, New York City has become the capital of a semi-autonomous political entity called the New England Prefecture, which includes much of the former northeastern American states and eastern Canada. The loss of influence lead to the dissolution of both the American federal government and local state governments. The world is divided between Eurasia, India, China, and Japan now and and the NEP is a political franchise of Eurasia, basically allowed to keep its historic traditions alive but under contract with the Eurasian market. Global climate change has made Long Island and some of downtown Manhattan much like Venice except extremely poor and unstable. The government buildings and offices are based in Yonkers. In spite of the Eurasian franchise, the new business centers are primarily dotted with corporations from India, Japan, and Japanese Australia.
Society has been strongly divided with the progress of both genetic programming and cybernetics. While there is no law against having a child naturally (such children are called "godchildren"), society vastly favors people who receive genetic programming prior to fertilization. This has led to a major and sweeping division among social classes between those who had the means to receive genetic programming, which has become the certificate of reliability for all economic and social matters, and those who are godchildren. Even in the cases where the luck of the draw has lead to optimal performance, the fact that the person in question still speaks against them. Consequently, only geneticized people hold the top positions in society, while godchildren are chronically disadvantaged.
The other technology that has impacted society is cyberization. Thanks to new available surgical methods, many human ills can be taken care of through prosthetics and computer and cellular software can be installed directly into a person's skull case. Such components can access the web by wifi or through an external jack system. Cyberized individuals have enhanced computation and communication links, can display information and images inside their own field of vision and audio and can immediately link into most computer systems. Two important consequences from this are brain hacking, in which an outside source can download a virus into someone's cybernetics and make that person a puppet, and the new social mobility, where the only way a godchild can overcome to a substantial extent the social stigma attached to them is by getting cyberized. But this being very expensive, the only way it could be made available is by joining the military and serving in special ops, which require a full body cybernetic reconstruction for military purposes. As you can imagine, while cyberization bestows many benefits, it is very expensive to maintain, and cyborgs typically wind up working as security agents for corporations in exchange for regular maintenance.
In the future, crime has taken on new meanings, ranging from high-level security breaches, organlegging, and data crimes to political terrorism, all of which can count on criminal genius to exploit the new technological advances. Such crime calls for special police task forces and advanced hired militias. That is where your character comes in.
YOUR CHARACTER
Your character has been assigned to the NYC Police Department Special Ops Unit, which is in charge of Cyber-crime and Cyber-terrorism. Your character is one of three possible archetypes:
Cyber-Detective: Your character is genetically programmed and represents the upper tier of society but has for one reason or another devoted themselves to police work. Your character is at least minimally cyberized with communication and computer access facilities and has the training to make the most of these for security purposes. Your character also has leadership experience and functions as a liaison between the unit and significant stakeholders in the unit's performance, such as higher-ups, executives, and ordinary citizens.
Cyber-Specialist: Your character has performed some specialized service and is sufficiently cybernetically and robotically modified to accommodate it, typically because of special circumstances such as military service and training: being equipped with built-in tether wires for second-story work, say, or with a satellite-synchronized camera-eye and robotic steady-arm for sniper duty. Your background may be mixed in some way, but your appearance is affected enough by your enhancements to make social life difficult. You are not a complete cyborg, and some parts of you are vulnerable to normal damage.
Cyber-Infantry: Your character came into the ranks from the street by joining military special ops. Your character is completely cyberized but still designed to look as human as possible; some parts may not even look human (a bazooka arm, a scope eye, tank treads), but that is pretty rare. Generally, your character is super strong, super fast, super destructive, and competent in basic weapons and tactics. Such characters also have a great deal of experience, usually more than the detectives they nevertheless have to serve under. They are forbidding in appearance and often have to interact with polite society through intermediaries, but they do possess a great deal of street cred.
DESIGNING YOUR CHARACTER
There are no stats in this game. The success of your character depends on how well you conceive of them. Pick one of the three archetypes and make it your character's own by tricking it out with specifics. Then tell a good story about the birth of your character and how they eventually wound up working for the NYPD. Then, based logically on your character's description, identify four "virtues" and two "faults" that your character has. They need not be moral virtues; they could also be intellectual or other virtues. For example, Dick the cyber-detective lists GOOD WITH NUMBERS, LADIES' MAN, HISTORY BUFF, and DIPLOMACY as virtues and SHORT-TEMPERED and FEAR OF HEIGHTS as faults, all of which are justified by the story he tells about his character. If a player wants a bonus on a roll, he can get it by incorporating mention of a virtue in the description of the action he is taking. This gives a +1d6 bonus to the action. However, if the GM can invoke one of the character's faults in the description of the setting of the action, the roll takes a +1d6 penalty. Further, of the four virtues, pick the one that is the best fit with your character. If you use that one, you get a +2d6 bonus.
Finally, add 2 free Mulligan points, and your character is finished.
Wednesday, January 30, 2008
"Pulp in a Cup!" A pick-up RPG engine.
"Pulp in a Cup" (a.k.a. PnC, and formerly known as "LAME-O") is an RPG kind of an ultimate universal game mechanic that can be used to jump start a game most anywhere you happen to be. It for players who have a hard time find ways to get people together but still want to play if an occasion presents itself. Players pick a setting based on a well known movie, animation, or comic franchise that everyone in the group is familiar with (e.g. Superman, X-Men, Star Trek, Ghost in the Shell, Hitchhiker's guide to the Galaxy, etc.). One person acts as the Game Master and prepares a story or at least a scenario that is set in the franchise world while the other players pick favorite characters to role play in that setting. Play proceeds by acting in character in ways consistent with the story familiar to everybody. When the situation comes to a point where there needs to be a resolution to some action or encounter use the simple mechanic below.
- Rule 1: All players creatively conceive and communicate their characters and their specific actions in the game narrative, and the Game Master conceives and communicates the world setting and specific situations, in enough detail that the resolution of player actions is clearly indicated by the logic of the story.
- Rule 2: If there is no clear outcome even after detailed exposition, this means that the opposing forces in the story are fairly evenly matched. If so, use the following die mechanic. Whenever it comes down to a dice roll, the player rolls 1d20 and 3d6 (this is called a standard roll). The 3d6 represents the difficulty of the task and the d20 represents the degree of success. The GM then adapts the narrative of the action (with the players' input) to justify the results of the roll. If the d20 is greater than the 3d6 roll, the task is a basic success for the player. A 20 on the d20 is an automatic critical success for the player, and a 1 on the d20 is an automatic critical failure. The GM then determines what additional benefits beyond success occur on a 20 and what additional liabilities occur on a 1.
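For concreteness, the standard roll is easy to simulate. The sketch below is my own illustration, not part of the rules; the function name and the tuple it returns are assumptions.

```python
import random

def standard_roll(rng=random):
    """One standard roll: d20 (degree of success) against 3d6 (difficulty)."""
    d20 = rng.randint(1, 20)
    difficulty = sum(rng.randint(1, 6) for _ in range(3))
    if d20 == 20:
        return d20, difficulty, "critical success"   # automatic, per Rule 2
    if d20 == 1:
        return d20, difficulty, "critical failure"   # automatic, per Rule 2
    return d20, difficulty, ("success" if d20 > difficulty else "failure")
```

Since 3d6 averages 10.5, an unmodified d20 beats it about 47.5% of the time, so the standard roll is a nearly even bet tilted slightly toward failure.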
ADDITIONAL SUGGESTIONS:
And that's all the rules. What follows are simply optional suggestions on how to implement the rules of PnC.
- Degree of Success (DoS): The degree of success (or failure) is the difference between the d20 result and the 3d6 result. It is often worth noting in order to determine exactly how extensive the impact of a success is, especially in assessing damage from combat. It is important to realize that the degree of success actually increases geometrically as the difference increases incrementally. See the example below.
- Resolving Critical d20 results: Make the d20 roll open ended to determine critical results. If the d20 results in "20", roll the d20 again, treating another result of "20" as zero, and add it to 20 as a total result. If the d20 results in "1", roll the d20 again, treating another result of "20" as zero, and subtract it from 1 as a total result. By this means results are possible from -18 to +39. Compare the d20 result with the 3d6 result.
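Read literally, the open-ended procedure above can be sketched as follows (the function name is mine):

```python
import random

def open_ended_d20(rng=random):
    """Open-ended d20: a natural 20 or 1 triggers a second roll, with a
    second 20 counting as zero.  Possible totals run from -18 to +39."""
    first = rng.randint(1, 20)
    if first not in (1, 20):
        return first
    extra = rng.randint(1, 20)
    extra = 0 if extra == 20 else extra
    return 20 + extra if first == 20 else 1 - extra  # extend up or down
```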
- Example: Deactivating a Time Bomb: The difference (d20 - 3d6) determines the degree of success based on the following scale:
- (+11 or more) ----- "Divine intervention", "Terrorists give up and turn themselves in."
- (+8 to +10) ------- Spectacular success (substantial collateral benefits), "Bomb deactivated and source of terrorist technology discovered."
- (+5 to +7) -------- Great success (marginal collateral benefits), "Bomb deactivated and components salvaged."
- (+2 to +4) -------- Sufficient success (no collateral benefits), "Bomb deactivated."
- (-1 to +1) -------- Partial, insufficient success (may roll again), "Bomb still active but timer pauses."
- (-4 to -2) -------- Sufficient failure (no collateral costs), "Bomb still active. Run away."
- (-7 to -5) -------- Great failure (marginal collateral costs), "Bomb active and timer speeds up."
- (-10 to -8) ------- Spectacular failure (substantial collateral costs), "Ka-boom! The player's character is dead."
- (-11 or less) ----- "Divine retribution", "Player's character dead, family loses NSA pension, and her favorite candidate loses re-election."
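The scale is just a lookup on the difference; a direct transcription (the function name is mine):

```python
def outcome_tier(dos):
    """Map a degree of success (d20 minus 3d6) onto the time-bomb scale."""
    tiers = [
        (11, "divine intervention"),
        (8, "spectacular success"),
        (5, "great success"),
        (2, "sufficient success"),
        (-1, "partial, insufficient success"),
        (-4, "sufficient failure"),
        (-7, "great failure"),
        (-10, "spectacular failure"),
    ]
    for floor, name in tiers:           # tiers listed best-first
        if dos >= floor:
            return name
    return "divine retribution"
```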
- Dice rolls in combat and other competitions: In combat situations, an attack roll works just like a standard roll except that the attacking character rolls the d20 and the defending character rolls the 3d6. The roll is made by whoever controls the attacker or defender (the GM rolls for non-player characters). If the roll is a success for the attacker, the degree of success determines the amount of damage, which is handled narratively. If the roll is a great failure or worse, there may be self-inflicted damage. (Other possible penalties could include losing place in the initiative order (the original initiative order in a combat situation can be established by having each character roll 1d6, rolling off ties, and going from highest result to lowest), losing the next attack, or the defender getting a free attack on the attacker.) It is up to the GM whether an attack roll or a standard roll is used. Attack rolls are expected for combat situations, but some contests of skill can be resolved by attack rolls as well (like trying to sneak up on a very perceptive guard). Other competitions between characters can be resolved by comparing the degrees of success of two standard rolls (like a footrace).
- Conducting combat scenes: If characters find themselves in a combat situation, the GM should first determine who can attack at that moment and who cannot. Maybe some characters were caught off guard or cannot immediately react because of their stance. Of those who can attack, have each make a standard roll to determine the order of initiative: greatest degree of success attacks first, next greatest second, and so on. You may want to give players the option of holding their attacks even until the next time through the order, so that they can attack after their turn comes up but not before. On each turn, make an attack roll with the attacker rolling the d20 and the defender rolling the 3d6. Damage is determined by the degree of success along with the player's presentation of how their character is attacking. A degree of success of 1 with a sword may just graze the opponent's armor, while a degree of success of 6 may mean an open wound that needs immediate attention. Combat continues until one side surrenders, falls unconscious, dies, or escapes.
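To see how a fight tends to play out, here is a toy simulation. Hit points, and damage equal to the degree of success, are my own simplifications for illustration; the rules above resolve damage narratively.

```python
import random

def attack_roll(rng=random):
    """Attacker's d20 against defender's 3d6; damage is the degree of
    success, floored at zero (a miss simply does nothing here)."""
    return max(0, rng.randint(1, 20) - sum(rng.randint(1, 6) for _ in range(3)))

def duel(hp_a=12, hp_b=12, rng=random):
    """A and B alternate attacks (A won initiative) until someone drops."""
    while True:
        hp_b -= attack_roll(rng)
        if hp_b <= 0:
            return "A"
        hp_a -= attack_roll(rng)
        if hp_a <= 0:
            return "B"
```

Going first is a real edge: over many simulated duels, A wins noticeably more than half the time.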
- Levels of Difficulty (LoD): There are times when the rules still seem too inflexible to accommodate the story in a credible way, times when you want to make a roll but one adjusted to fit the circumstances. These differences in degree of challenge are called higher or lower levels of difficulty and can be resolved with bonus or penalty d6's. If you think that some things ought to be more challenging than others, rather than everything being averaged together, you can use the following system. Suppose you think a task should be especially hard for a character. In that case, the player makes a standard roll except that she adds 1-3 additional penalty d6's depending on how much more difficult the task is thought to be, and the d20 is compared to the total of the three highest d6's. Similarly, for situations where the player has a distinct advantage, add 1-3 bonus d6's and compare the d20 to the total of the three lowest d6's. Players and GMs should be free to negotiate on their own behalf which factors provide distinct advantages or disadvantages for the character in any situation, appealing to features already given in the story, but the GM has final say. The dice should reflect the final net advantage, whether positive, negative, or inconsequential, of all the relevant factors. Players and GMs can anticipate such judgments by using a ladder of descriptive terms (very easy, easy, average, hard, very hard) when describing a character's features or the circumstances. For example, if the character is a "good jumper" but the ravine is "not just wide but very wide," the bonus of the character's good skill is offset by the especially difficult jump, and the GM may decide he still gets one, but only one, penalty d6. If levels of difficulty are introduced, however, the GM is going to have to be careful that they are not abused and that characters and situations are both fairly balanced in terms of levels of difficulty.
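The keep-three rule can be sketched like this (the names are mine; `level` is positive for penalty dice and negative for bonus dice):

```python
import random

def difficulty_3d6(level=0, rng=random):
    """Roll 3d6 plus abs(level) extra dice; keep the three highest if
    level > 0 (penalty dice), the three lowest if level < 0 (bonus dice)."""
    pool = sorted(rng.randint(1, 6) for _ in range(3 + abs(level)))
    return sum(pool[-3:] if level > 0 else pool[:3])
```

The kept total always stays in the normal 3-18 range; the extra dice only shift where in that range it tends to land.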
- Levels of Difficulty in Combat: You can also use bonus/penalty dice in combat by adding them to the defender's 3d6, but remember that what is to the defender's advantage is to the attacker's disadvantage. So adding a bonus d6 for the defender's plate armor means adding a penalty d6 to the attacker: the attacker must roll a d20 result greater than the total of the three highest d6 rolls to succeed in significantly hitting the defender. Bonus/penalty dice can also be used in cases where the enemy is surprised or attacked from above, and in the standard roll that determines the initiative order.
- Mulligan Points: A GM may want to assign three mulligan points to each player at the beginning of the session. A player may spend a mulligan in order to re-roll any particularly unfavorable standard roll or her part (the d20 or the 3d6) of a combat roll. The point must be spent before the re-roll. This gives the player more input into the way the story goes. The GM may also give a player a mulligan point every time she rolls a 20 on the d20 and take one away every time she rolls a 1. However, a player cannot have negative mulligan points and only loses a mulligan point when she has one to lose.