Okay, I concede the point, not necessarily reluctantly. But while Fred has been clear and has run an excellent campaign by addressing issues in a substantial way and showing that political conservatism is a defensible position with real consequences for policy (and not a reactionary, knee-jerk, belligerent form of ignorance), he is not the candidate with the most relevant experience who holds such views. Fred and Mitt are both comprehensive in their appreciation of conservative policy on economic, social, and national security issues. And Mitt's religion speech proves that he gets the relation between faith and politics right, besides being an articulate statement of the view. So the relevant difference between them is the amount of executive experience. Only Giuliani competes with Mitt here, but he is clearly tone deaf on social issues, and that will bleed over into his radar for picking judges, no matter what he promises.
One thing that bothers me is RomneyCare, which does not seem either to work or to fit with free market solutions (or even minimally sized government ones), and the ink on it is still pretty fresh. I guess it may be too much to hope for a Mitt/Fred ticket.
Actually, it looks like Fred may be on his last legs. I will stick with Fred for loyalty's sake, send him some cash, and wait to see how he holds up through the Iowa caucus. If Fred is out, I am definitely switching to Mitt and not to Huck.
Welcome to Gnu's blog! This is an online posting of my musings on topics like Christian faith, theology, philosophy, and my hobby, Fantasy Role-playing Games.
'What did you expect to see out of a Torquay hotel bedroom window? Sydney Opera House perhaps? The Hanging Gardens of Babylon? Herds of wildebeest sweeping majestically?!' -Basil Fawlty
Monday, December 10, 2007
You can call me "Otaku-sensei"
I finally took the leap and dared to include anime as part of my lecture in my college class. The class is called "Ethics of Technology and Cyberspace". It is basically an introduction to critical thinking applied to ethics. Ethics involving the consumption of technology is a good foil for this because it is not a hot-button issue (and thus most students do not have a pre-formed opinion about it), but it is still one that attracts interest; it is also a hidden issue in the sense that (I believe) people tend to underestimate its importance. I do treat the course as fulfilling an important calling in modern society.
The course is focused on real, live questions regarding the use of technology and the Internet. But near the end we introduce questions about the future of technology and its potential to raise ethical challenges. One such challenge is whether we will have to alter our conception of who or what counts as a stakeholder, particularly in the face of possible progress in artificial intelligence (A.I.) and genetic engineering. At this point in the school's curriculum, most students have been exposed to the metaphysical questions surrounding A.I.
It was at this point that I showed the class Episode 15 of "Ghost in the Shell: Stand Alone Complex" titled "Time of the Machines: MACHINES DÉSIRANTES". You can read a summary of this episode at the link in the post title.
Before showing the episode, we had already discussed the relevant points of the issue, and I had already presented material on questions involving expanding the scope of moral concern. Just before showing the episode, I gave a brief sketch of the world and background of the story setting, focusing especially on the nature, composition, and mission of the police SWAT unit and on the provenance of the Tachikoma units used by the team. For class purposes, I explained that, even though this was a work of fiction, I wanted the students to treat it like the other cases we had looked at so far in the semester.
The assignment was to look at the situation from the point of view of Major Kusanagi as the responsible agent. The Major confronts the decision of whether or not to decommission the Tachikoma and decides to go ahead and do so. The question to the class was simply whether they thought the Major could have made a better decision than she did. While easily stated, a good answer requires making one's own assessment of what the right thing to do in the situation is, and then using that account either to defend the Major's decision, if the student's conclusion matches the Major's in the story, or to argue why the account properly differs from the Major's choice. Such reflection will have to confront other questions like:
Was the Major faced with at least a prima facie moral dilemma?
What is the moral status of the Tachikoma? Should they be included among the various other stakeholders in the situation?
What would you reasonably want to know before deciding? Are there any other options, not yet considered by the Major, that would resolve the situation? Can you imagine any technological solutions?
Some other related questions are: What difference would it have made in this case to view the use of this technology as akin to experimenting on rational moral agents? Do you think that could have mitigated the issue or prevented it from arising? Are there amends that ought to be made? Given that some think we have an obligation to future persons, does that extend to future artificial persons?
The episode depicts the Tachikoma as emergent intelligences with various features in a plausible way. The students tried to make explicit what features of the Tachikoma made them relevant for moral consideration in their own right. They also tried to identify the concerns the Major had in making her decision, the most obvious one being public safety. Students had no trouble finding other considerations that might have stayed the Major's hand, such as saving the Tachikoma for further research into their behavior. But no one was willing to say that they were entitled to certain rights, nor could anyone shake the impression that "after all, they are just robots," even after I pressed them a little to explain how the students themselves were different from robots. It was also an uphill fight to get them to make a serious effort, since after all it was pure speculation and nothing could really be concluded from it (right?).
But one thing that can be concluded from it is that the same approaches to ethical reasoning still play a role in such surprising cases, so that even if the future is uncertain, a future situation can be handled with the skills we are learning now. Given that critical reasoning about ethical cases can only take us so far, and that in the end we must count on the cultivation of judgment as a virtue, the future of agents like us stands in the same relation to our present as our present did to any of our previous experiences. This seems to be true whether we hold to some moral theory or reject theory for judgment.
Some advantages to using this approach: An episode is short, which left sufficient time to prepare and discuss the case. The episode spends a great deal of time observing the behavior of the Tachikoma, giving plenty of grist for the exercise. Also, this being a Ghost in the Shell: SAC episode, the discussion already incorporated in the dialog raises some of the relevant concerns to guide reflection. Further, compared to the typical episode of this type in other sci-fi shows, the status of the Tachikoma was much more ambiguous (more so than, say, Data in Star Trek: The Next Generation), and the students had to decide which contrasts and similarities to humans were relevant and which were not (e.g., appearing human, having similar values to humans, being able to discuss cybertheology, etc.). It is also a story well written to raise important questions, especially in presenting the unintended emergence of individuality in a machine that was programmed with A.I. to be an autonomously functioning urban warfare weapon. The contrast between man and machine is made blurrier by the fact that the other principal characters are cyborgs -- humans enhanced with hi-tech components. The moral drama is also suggested by the tension between the Major and her second-in-command, Mr. Batou, who has a much closer relationship to the Tachikoma; his dismay at the Major's decision suggests the possibility that there is something wrong with it. From the point of view of class time, the use of distracting imagery was very slight; this particular episode was very light on action and heavy on content and conversation. Finally, using anime was a surprise, and the exercise benefited from the shock of the unexpected. It also contributed to attendance and participation.
On the other hand, there were problems besides the speculative nature of the case. One is the shooting. The episode opens with a sniper blowing the head off a hostage-taking terrorist, which turns out to be a dummy and not a real human being, in order to test a new sniper computer system. There is also a "Hogan's Alley" training scene involving guns. Also, the word "crap" is used, but no other strong language. Finally, the Major is not very modestly dressed, although she is mostly covered up with a bomber jacket and does not appear in full through most of the episode.
I apologized to the class for the comic book orientation, with its typical catering to 14-year-old male tastes, and for the objectification in the Major's outfit. If pressed, I would have been able to set the offending details in the context of the story to show that they play an essential role. In the Major's case, it is part of the story that her apparently twenty-something body is a perk made possible by the extent of cyberization the Major underwent. The Major herself is able to occupy any cyber body, and she is significantly older than she appears. How she looks has to be considered the responsible choice of a mature woman and not the shortsighted impulsiveness of a "lolita" type character. Also, this device points back to the vision of the franchise's original author, Masamune Shirow, by introducing something that jars our moral sensibilities as a feature of the overall culture shock brought about by the development of technology. So it is not mere fan service (although it is definitely that also), and as far as fan service goes, it is not very gratuitous. In fact, given the point being made, it is quite restrained. But I did not want to take up class time to say all this.
The class is made up of juniors at a Jesuit college. I did not assign a paper for this (although they have the option of doing a writing exercise discussing this case), but I did say it would be part of what would be covered on the exam. My main objective is to get them to review the basic principles of ethical reasoning in a fresh situation so they can build up the practice of cultivating moral judgment. My secondary objective is to make a connection between reasoning by balancing moral considerations in ethics and the metaphysical issues discussed in their earlier philosophy course work (God, the Mind, Free Will). It seems that there is a moral question that turns on settling on a conception of the status of artificial lifeforms, but that question cannot be settled except on the basis of decision making under conditions of uncertainty. So even if we cannot satisfactorily resolve the relevant metaphysical question, it does not necessarily follow that we cannot satisfactorily answer the moral question on prudential grounds, and this applies to the deep questions studied in their previous course work.