How's this for node your homework?
This essay was written for a pleasantly freaky Finnish Information Technology & Ethics course.
It is an anecdotal, rather than comprehensive, look at the issues raised by killer robots (not that the
latter would fit on Everything) - my knowledge of both fields is as yet rudimentary. Its primary advantage
over the works of e2's more accomplished ethicists and roboticists is that it HAS BEEN WRITTEN.


Some Ethical Issues With The Use Of Robot Combatants

1. Introduction

Science fiction has been outdone yet again. After long being too awkward, limited and unreliable, robots crossed the threshold of having feasible military value in the mid-1990s. (Blitch, 1998) Robots are currently used or about to be used in such tasks as reconnaissance, bomb disposal, handling of hazardous materials, guard duty, naval point defense, medical evacuation – and combat. Remote-controlled aircraft have fired on and killed human beings. (Associated Press, 2008) On the ground, the first few armed patrol drones have been deployed to Iraq. They have not yet opened fire, but have the means and the authorization. (Shachtman, 2007) This role is only set to grow in the next few years. As is characteristic of human history in general but recent times in particular, we have taken this step without knowing what we’re doing. The rapid progress of high technology has largely followed what’s known as the Technological Imperative: doing what can be done because it can be. (De George, 2003)

Study of the ethical impact of these new developments has lagged behind, yet ethics will ultimately be the decisive factor among their effects. This write-up is intended to examine some of the ethical matters – and associated discourse – that arise from the fact that "warrior" is, for the first time in history, being separated from "person."

א. For those of you joining us online

The essay was written for an audience with some knowledge of ethics in philosophy. Those lacking it are urgently recommended to trawl that other site, us, or Google for the sake of enriching their lives, but here's a tiny primer for reference purposes.

The course's main perspectives were utilitarianism, Kant's deontology and virtue ethics.

  1. Utilitarianism is a prominent form of consequentialism (the ends justify the means). Developed by Jeremy Bentham and John Stuart Mill in the late 18th and 19th centuries, it holds that the morality of an act depends entirely on the amount of good, or utility, it causes, usually taken to be pleasure. This emphatically does not mean mindless hedonism - Mill is clear on overindulgence and the higher pleasures - but it has its share of other practical problems, such as the difficulty of quantifying pleasure and pain. The traditional types are act and rule utilitarianism: the former judges every single act on its own merits, the latter holds that evaluating the rules behind the acts gives better overall results.
  2. Immanuel Kant was an 18th-century German and is the poster boy for deontology (let justice be done though the heavens fall!). He determines the morality of an act entirely on the basis of its intention, especially its accordance with the Categorical Imperative of treating humanity as ends rather than means, a moral principle that any rational being would follow. Kant is considered to be one of the greatest thinkers to have ever lived. Unfortunately this need not overlap with the ability to express one's thoughts, and he's worse than Heraclitus the Obscure in places. Do not expect consensus on the specifics.
  3. Virtue ethics is a mid-to-late 20th century revival of Aristotelian thought, and does not know or care what good is. It claims that to live well is to live by cultivating virtues, a lifelong process of introspection and improvement. I don't know its significance due to a surprising lack of fundamental worldview polls.

These three pop up from time to time, but given the write-up's subject matter, what's relevant for much of it is their common factor, namely the claim that less killing is good, and more killing is bad.
Argue with that, and I'll poke my naive intuitionism into your eye.

2. Definitions

The United States Department of Defense, the world leader in robot combatants, defines a robot as “a machine or device that works automatically or operates by remote control.” (Cowan Jr., 2007) Under this definition, robots have been used in war since at least the German experiments of WWII, though armed ones are a thing of the 21st century. (Nelson, Bolia, 2006)

Cowan Jr. cites ones Robert Finkelstein and Steven Shaker, who divide robots into three groups by their mode of operation: autonomous ones that are capable of accomplishing their objectives completely without human intervention, semi-autonomous ones that need assistance at critical moments, and remote-controlled ones, the most numerous group by far, but decreasingly so. (Cowan Jr., 2007)

The term “robot combatant” is cumbersome but semantically necessary. The concept appears to have no established name beyond military acronyms for its subtypes. A combatant is here defined as an entity that actively engages in armed hostilities, which should come very close to formal uses of the term, U.S. PR tricks notwithstanding*. Auxiliary and support forces such as unarmed medics or chaplains, or scout or minesweeper robots, are not combatants and are not within the scope of this paper for reasons of feasibility. Neither are robot combatants as moral agents, for reasons of sanity. Such developments are not imminent and the definitions of personhood and innate rights are difficult enough when the prospective people involved aren’t getting shot at. Robots are considered to be humans’ sophisticated tools.

*: The United States is going to pop up several times during this write-up. I wish to clarify that, despite the fact that a great deal of the Bush administration's military, economic, environmental and human rights decisions might as well have been scientifically designed to get on my tits, I harbor no animosity towards the U.S. of A. It has conducted the most prominent wars since I started paying attention, is likely the only state to have killed with unmanned aircraft, and holds superiority in bizarre new weapons.

The source of this footnote, however, is a complete jab. You had that coming, o beacon of freedom.

2.1. Effects of robot combatants

Ground robots vary from the currently popular design of small tracked vehicles less than half a meter high (Cumming, 2006) to wheeled cars the size of small tanks (Fox News, 2008), but none have the versatility of human locomotion. Their range of actions is much more limited than that of their traditional counterparts, and they cannot match the manipulation capacity of a hand. (Blitch, 1998) Remote-controlled and semi-autonomous robots are vulnerable to signal loss and jamming; autonomous ones run straight into the problems of image identification and threat assessment.

The disadvantages do limit the use of robots, but rather than crippling it, they encourage specialization into areas that downplay them. Identification problems are of less consequence on the Korean border amid barbed wire fences and minefields. Contemporary mobile ground robots are geared for urban combat. They cannot vault fences, but they can climb stairs. (Cumming, 2006) The difference is less marked in the air, where fine manipulation is not in demand and a man who’s not already relying on a machine suffers from limited effectiveness.

Robot combatants’ usefulness is roughly twofold. First, they can replace – or supplement, the more the merrier – humans in roles where their advantages outmatch flesh and blood. Robots have the edge on human beings in such respects as larger fields of vision due to the use of wide-angle or multiple cameras, better integration of advanced technology such as sensors and digital communication, a limitless attention span, literally inhuman accuracy due in part to the lack of breathing and involuntary muscle movements, (Cumming, 2006) and unquestioning obedience. To these can be added simple cost-effectiveness and manpower: considering a soldier’s training, upkeep, and any possible medical and psychological care as well as injury pensions, killing machines may well prove to be cheaper. (Wilson, 2005)

The second is the introduction of entirely new kinds of missions: ways of applying force dependent on exclusively nonhuman properties. (Blitch, 1998) Again, UAVs (Unmanned Aerial Vehicles) in particular are not limited by the protection, mass, G-force tolerances and endurance of a pilot; the U.S. Air Force is already producing backpack-sized recon drones for ground troops’ use. (Rutherford, 2008) The most conspicuous such property is expendability. Robots can not only take humans’ place in high-risk roles, but pursue tactical options that would otherwise be unacceptably dangerous in the current situation or downright suicidal. “In the current situation” because expendability is most emphatically not innovative in warfare, but its use as an asset still has thresholds, ones that are much higher with humans than with machines. The kamikaze was a desperation maneuver. Western countries, in particular, have developed a historically very conspicuous aversion to friendly casualties. (Bissett, 2003)

3. Jus ad Bellum

The concept of a just war is a centerpiece of military ethics. The ability to conduct what’s otherwise one of Man’s greater atrocities in a morally defensible or even commendable manner has implications that can hardly be overstated. Just war theory defines strict sets of criteria, the first of which concerns Jus ad Bellum, the right to go to war. The Internet Encyclopedia of Philosophy gives the commonly accepted ones as “having just cause, being declared by a proper authority, possessing right intention, having a reasonable chance of success, and the end being proportional to the means used.”

Just cause, first among peers, sets the stage for the others by restricting war to last resort defense of human beings against aggression (Moseley, 2006); humanitarian war. This may or may not be achievable, but even if it's not, the concept can be valuable as a baseline or as grounds for further thought. Extending it to intervening in aggression towards others is popular but a hot potato of an issue, while doing so to pre-emptive defense is in comparison a plasma zucchini.

The most straightforward effect (and prime selling point) of robot combatants concerns proportionality. An army that substitutes machines for humans in roles where they take bullets can reduce the expenditure of human lives needed to bring the war to a close. It could also be argued that proliferation of high-tech robot combatants would help to fulfill the criterion of a reasonable chance of success, since the states that stand to gain the most from it, the rich and advanced ones, are also the ones most likely to have the resources for a successful just war. The entirety of just war is highly theoretical, but this approach is still conspicuous by its practical problems.

3.1. Right intention

Robots are not known for their authority or political acumen, but still affect opinions indirectly. In his article Carl von Clausewitz and high technology war, A.K. Bissett examines the psychological effects of high-tech weaponry with ever-increasing dismay. (His primary foci are “smart” missiles and “surgical” strikes, but much of his argument is applicable to soldiers who do not bleed.) Technology that divorces the impression of war from the massive death and destruction that is its reality serves to suggest better, cleaner, more controllable warfare. Such a thing is downright absurd. As Winston Churchill put it: “Never, never, never believe any war will be smooth and easy, or that anyone who embarks on the strange voyage can measure the tides and hurricanes he will encounter. The statesman who yields to war fever must realize that once the signal is given, he is no longer the master of policy but the slave of unforeseeable and uncontrollable events.” (Herbert, 2002)

The significance of easy war's lure was given an unavoidable demonstration in the U.S. invasion of Iraq. There, public opinion and strategic thinking relied overmuch on victory through crushing force, “Shock and Awe,” with not enough thought for the end result and where to go from it. One particularly unfortunate contemporary newspaper reported claims that technological superiority would bring a war against Iraq to completion within a week. (Bissett, 2003)

Right intention is not a straightforward criterion: concentrating solely on the intentions of an act does not rule out the possibility that they are paving the road to Hell, and the mandate of just war can only extend so far when circumstances change or turn out to be different from what was thought. Ultimately right intention raises concerns with both practicalities and consequences if it is to give acceptable results. (Moseley, 2006) In this light it is more than troubling to consider that advanced military technologies can make it significantly easier, and at least once already have, to start a war while intending to start a wholly different one.

Inclinations aside, robots in no way rule out having right intentions for a just war – the criterion remains satisfiable.

4. Jus in Bello

Noble as it may be to go to war for the right reasons, it does not address the issues of Jus in Bello, just conduct in war. These the Encyclopedia gives as “discrimination and proportionality.” (Recent works in ethics suggest a third category, Jus post Bellum, but this is not yet close to being as well established.) (Orend, 2002)

4.1. Discrimination

Specialist Colby Buzzell illustrates action in Operation Iraqi Freedom in his memoir My War (2005): “At one time, I saw a dog try to run across the street and somebody shot it. … On the way to the FOB we passed a watermelon stand, and all the watermelons had bullet holes in them. In fact, everything on that street had bullet holes in it, the cars, the buildings, everything. There were thousands and thousands of brass shell casings littering the streets.”

Mortal danger is no place for careful discrimination between targets. The justifiability of the innocent deaths that result is a matter of self-defense, a minefield of conflicting basic rights that, with war, must begin by establishing how killing civilians differs from killing soldiers in the first place. (Moseley, 2006) Several theories would deem them unjust. Tightening rules of engagement to require positive confirmation of hostility is also ethically problematic (to say nothing of implementation), as the added risk can be an unreasonable strain on the soldiers’ right to self-preservation. Reaching an equitable solution seems to require soldiers who would fire at enemy combatants but not at any other perceived immediate threat, making them either saints or emotionless machines.

Remote-controlled or semi-autonomous emotionless machines still have humans making the decisions, but it is plausible that “lowering the stakes” by taking the operators out of danger would reduce the twitch decisions that characterize kill-or-be-killed situations, as well as berserking, “contagious shooting” – firing at a target because others do (Wilson, 2006)** – outright retaliatory murder, and other reactions to extreme stress that have been demonstrated in such famous locations as Kent State University and My Lai. Another factor is that a robot literally has no moral right to defend itself. Certainly it should fire if that proves necessary for other reasons, but there is no personhood to protect that would justify playing it safe or using unnecessary force.

**: The term is a neologism that has enjoyed an upsurge of popularity following the recent shooting of Sean Bell, a re-enactment of the Amadou Diallo case. It has drawn its share of criticism for being presented as a reflex action that nicely removes actual culpability. I use it here because it explains the phenomenon, not because it excuses it. If you can't stand the heat, get out of the kitchen.

4.2. Culpability

In his article Killer Robots (2007), Robert Sparrow classifies the apportioning of moral and legal responsibility for deaths in war not so much as a criterion of Jus in Bello as a precondition of making such things possible at all. To do otherwise is to remove the constraints on conduct imposed by the most basic expression of the enemy’s worth, to allow murder without consequences, to make discrimination impossible to uphold, in short to ‘treat our enemy like vermin, as though they may be exterminated without moral regard at all.’

Sparrow’s reasoning is both Kantian and consequentialist: the primary argument is that people should be accorded at least minimal status as ends, rather than means, but at the same time the implications of a killing without anyone to blame would certainly be abhorrent. Aside from act utilitarianism, which has no inherent link between offense and punishment, it certainly seems difficult to find an ethical theory that disagrees. Even forms of rule utilitarianism, which Feldman presents in his Introductory Ethics (1978) as a tamer version split from its cousin in search of plausibility, would likely hold that wrongful killing should be actionable.

It follows that war for the sake of human beings’ worth cannot be waged with weapons that spite it. Accidents will always happen, but for some weapons the failure to take responsibility is not incidental but typical. In Sparrow’s terms, this is behind a part of the detestability of WMDs and land mines: they do not discriminate between legitimate and illegitimate targets, therefore their users do not consider the latter important enough to be worth the decision. This becomes relevant with the advent of autonomous robots. Current weapon autonomy is limited to “smart” munitions with some ability to determine their trajectory and does not raise new ethical questions. By contrast, the author estimates that near-future autonomous robots can be expected to act on their own on matters of deadly force and adjust these actions by learning to a degree that they will not be easily predictable (without raising questions of sapience, which he backs away from). Such a weapon would be capable of committing a war crime without human input. Who can rightly be considered responsible for such an inventive atrocity?

The programmer? Only if his negligence caused the crime, and this need not be the case: if the problem was an acknowledged one and the robot’s purchaser informed, then it was knowingly sent to battle nevertheless. More importantly, an autonomous system is by definition able to act unpredictably. If its capability to learn is high enough, there comes a point where its actions cannot be said to be caused by its maker in any meaningful way. Sparrow compares prosecuting him for something that he could neither control nor predict to holding parents culpable for the actions of children who have left home.

The officer? Autonomous weapons can be considered morally equivalent to modern long-range artillery: the person who orders their use knows their limitations and chooses to take a risk. This approach is favored by militaries seeking to deploy such weapons. Sparrow objects, pointing out that autonomy cannot be defined simply by unpredictability. 'Smart' weapons with a say in their own use are usually considered more reliable means to attack an enemy than 'dumb' weapons, and many consider the former to be morally superior. Nor are autonomous robot combatants merely guided weapons: the more autonomous such a machine is, the greater the risk that its orders do not determine its actions. At some point, the commanding officer can no longer be fairly held responsible.

The machine? Here guilt is indisputable. Sparrow is forced to attempt to apply responsibility to inanimate objects, and delves into the exact definition of responsibility. By his reasoning, to be morally responsible is to be an appropriate target of blame or praise, punishment or reward. Is it possible to punish machines? Sparrow's sources suggest that 'intelligent' robot behavior is likely to be executed using internal states that they are programmed to pursue, effectively forming desires. These can be hampered, the robots damaged or destroyed. Yet such acts would be like knocking down an offending wall, spanking a pistol or piling obstacles into the route of a robotic vacuum cleaner, that is, satisfying the psychological need for revenge but not affecting an insensate target.

A punishment must, by definition, cause the right sort of response in its target. The specifics vary from theory to theory; Sparrow assumes what is the most plausible option to him, that the target must suffer. In turn, this means that it must do so in a morally compelling way - refusing to oil a tool is hardly punishment. Punishable robots must be able to suffer in ways that evoke responses similar to those evoked by human suffering, including sympathy, concern, grief, even remorse for wrongful treatment, granting them moral personhood. This not only defeats one of the main points of robot combatants, but is far beyond both the scope of Sparrow's article and, most definitely, the progress of robotics in the foreseeable future.

Sparrow concludes by musing that currently existing autonomous systems are still morally analogous to existing weapons, such as the aforementioned long-range artillery, but technological development (not to mention tech races between countries) can soon be expected to push them into the uneasy region where lives will sometimes be taken indiscriminately and/or the innocent will sometimes be blamed. There is no exit from this region in the foreseeable future. Ever eager to use an analogy, he compares advanced autonomous robots to child soldiers – not because of their value, but because both are outside the control of those who order them into action, sufficiently autonomous to make the attribution of responsibility to their commanders unclear, but not enough to be responsible for their own actions. Sparrow ultimately finds these problems severe enough to forbid the use of autonomous robot combatants that near or pass the threshold.

5. Distancing

Ever since the introduction of the pointy rock, advancements in weaponry have generally served to physically divorce their users from their deeds. This causes a corresponding mental distancing: as the user does not experience the deed as directly, it comes across as less real. Here the march of progress fortuitously helps along what armies have needed countless psychological tricks to achieve: letting soldiers commit the inherently repugnant act of taking human lives without the responsibility and weight. Finnish execution squads in the Winter War were armed with a mix of real bullets and blanks, so that no member would know himself to be the killer. Implementation of the Holocaust became a much smoother process (and required far less alcohol) once gas chambers replaced mass shootings.

The leap from wielding a weapon to guiding it from afar means that the operator no longer needs to experience a physical connection to his target, any more than there is between an actor in a studio and a TV viewer at home. This has the effect of removing a sense of substantiality from the operator’s actions and associating them with virtual violence – a form of violence that a person in a first-world nation has already grown accustomed to, and learned to dismiss. (Bissett, 2003) This already happens with some guided missiles, so the proliferation of robot combatants would spread the effect but would not be fundamentally groundbreaking.

Such trivialization of human beings into pixels on a screen in the mind of another would surely have Kantians up in arms. Virtue ethics fares no better, for as one Der Derian notes in his study: “there is a high risk that one learns to kill but not how to take responsibility for it. One experiences ‘death’ but not the tragic consequences of it.” (Bissett, 2003) This is almost the antithesis of an ethical system where goodness is to be cultivated by virtuous actions made with full understanding of their nature, together with introspection.

In contrast, from a simple utilitarian perspective and all other things being equal, killing with effective avoidance of associated pain is a marvelous invention. If there is to be a just war, if killing can be a lesser evil, then who better for doing it than a jolly giant, merrily waltzing, without a clue of the people he is crushing beneath his feet?

6. Discretion

Robot combatants’ ability to save lives (or, rather, to take fewer lives than their alternatives) in a just war was previously examined, but discretion has another direction entirely: the ability only matters where exercising it is politically, militarily and ideologically preferable. The distancing effect of indirectly wielded force would also reduce resistance to the use of remote-controlled robots in mass murder, to say nothing of semi-autonomous and autonomous ones.

Librarian Matthew White’s meticulous tallies of 20th-century statistics estimate 42 million (10^6) soldier deaths during those hundred years, together with 19 million unintentional civilian deaths in war, 83 million deaths from genocide and tyranny (including intentional civilian deaths in war and intentional famine), and 44 million deaths from other kinds of man-made famine, mostly in China’s Great Leap Forward. (White, 2005) Cited “atrocitologists” give varying figures, but all are similar enough to cast very serious doubt on the idea that more efficient ways of killing with the optional extra of fewer civilian deaths would lead to fewer civilian deaths. In mankind’s defense, projecting future trends on the basis of a century’s worth of explosive development is not an exact science, and the rise and fall of communism may constitute a statistical anomaly.
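
To put the imbalance in plain numbers, here is a back-of-the-envelope tally of White's figures as cited above. This is a sketch only: lumping the categories into "combatant" and "noncombatant" is my own rough simplification, not White's.

    # All values in millions of deaths, per White (2005) as cited above.
    # The combatant/noncombatant grouping is my own rough simplification.
    soldiers = 42
    unintentional_civilian = 19
    genocide_and_tyranny = 83
    man_made_famine = 44

    noncombatant = unintentional_civilian + genocide_and_tyranny + man_made_famine  # 146
    total = soldiers + noncombatant  # 188
    print(f"{total} million dead in total")
    print(f"{noncombatant / soldiers:.1f} noncombatant deaths per soldier death")  # ~3.5

Whatever the exact figures, a century of increasingly efficient weaponry still ended with several noncombatant deaths for every soldier killed.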

7. Brutalization

Cowan Jr., whose research project (2007) is here considered reliable only as a secondary source on technical data, quotes no one when he states that robots “will ultimately remove man from the battlefield.” The same opinion has been voiced in a number of other places and therefore deserves a closer examination.

The idea betrays a deeply and fundamentally flawed concept of “battlefield.” Battlefields are not discrete, designated people-killing arenas where soldiers can be replaced with machines that would shred each other until the loser’s side capitulates: They are a form following a function. Carl von Clausewitz, a 19th-century Prussian general and one of the most influential military strategists, described that function as “an act of violence intended to compel our opponent to fulfill our will.” (Bissett, 2003) Unless robot combatants somehow lead to a way to non-lethally disarm and demoralize an enemy, disable its infrastructure and silence its leadership, a battlefield can lie wherever humans need to be killed.

As “skilled pessimist” Evelyn Waugh put it: “They are saying, ‘The generals learned their lesson in the last war. There are going to be no wholesale slaughters.’ I ask, how is victory possible except by wholesale slaughters?” (Fussell, 1990)

8. Larger context

One possible perspective on robot combatants comes with the introduction of an external factor that cannot go unmentioned in modern warfare: the existence of tens of thousands of nuclear warheads. While the end of the Cold War has made its particular brand of doomsday scenario unfashionable, mankind’s capability of self-annihilation at present and in the foreseeable future creates a genuine risk of its being used, if not intentionally then inadvertently. (Blair, 1993) Lowering tensions can be compared to reducing the number of bullets chambered in a game of Russian roulette, but not to actually stopping pulling the trigger.
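
The analogy can be made quantitative with a toy model. Assuming, purely for illustration, a fixed and independent per-year probability p of inadvertent nuclear war, the chance of at least one catastrophe in n years is 1 - (1 - p)^n, which creeps toward certainty for any p above zero:

    # Toy model of the Russian roulette analogy. The probabilities below are
    # illustrative assumptions, not estimates of any real-world risk.
    def cumulative_risk(p_per_year: float, years: int) -> float:
        """Chance of at least one catastrophe in `years` independent years."""
        return 1 - (1 - p_per_year) ** years

    for p in (0.01, 0.001):            # "removing bullets": lowering the yearly risk
        for years in (10, 100, 1000):
            print(f"p={p}: {years} years -> {cumulative_risk(p, years):.3f}")

Removing bullets lowers p, but as long as the trigger keeps being pulled, the cumulative risk keeps growing toward certainty.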

In ethical systems even remotely based on human desires, worst-case scenarios don’t come much worse than nuclear holocaust. Effects on the world’s strategic situation should therefore be a major factor in the normative status of military decisions. The intuitive conclusion is that robot combatants should be condemned as being a destabilizing factor in our current, non-irradiated state. If the same state is assumed to be unsustainable over time – with the political situation as a variable but nukes as a constant – then all is lost by default. Even reckless and unrestrained technological development can become desirable as a possible way of producing a solution before the otherwise inevitable. What I have no choice but to call the “headless chicken” scenario is simplistic and not without alternatives, but it applies to robotics: the effects of new technology are seldom limited to intended ones. An effective ballistic missile defense system might well require heights of information processing and coordination that are only reachable by sophisticated autonomous AIs. (Sparrow, 2007)

9. Conclusions

While robotics is a field too young and too vast for its developers to grasp what they've gotten themselves into, expert opinion and the size of military investments suggest that robot combatants may be a major asset to those who can afford them and may go as far as to significantly alter the way wars are waged. What they will not do is change the basis of war as a violation of others' rights by the destruction of property and persons; they will not remove slaughter.

In a just war (no doubt taking place in Plato's world of Forms), the temptation of starting one is irrelevant. Robots can reduce just fighters’ casualties while allowing them to apply force more effectively and/or precisely in order to bring the conflict to a desired conclusion. As long as alternatives exist, they need not feature in situations that they would worsen. It is clear enough that robot combatants have the potential to produce ethical results. Proponents of non-consequentialist theories may still object to how they are achieved.

As a side effect, the bar for just war rises. Determining whether this makes pragmatic wars more immoral than before, or whether the former possibility is meaningless sophistry, is beyond the scope of this write-up.

In a reasonably just war, that is, one where it’s possible to afford the luxury of limiting one’s arsenal, the positive qualities of robot combatants may shine through when it is expedient to their owners. Here the effect depends on the oversight of a sufficiently ethical, informed and influential body, whether it’s the public, the state or something else, or alternatively on simple strategy. For instance, a new manual by the U.S. Army incorporates some of its recent discoveries by placing increased emphasis on the hearts of a target country’s populace and less on the rest of their vital organs. (Ghattas, 2008)

When the strong are set against the strong and nations fight for their existence, Clausewitzian military theory (as Bissett quotes it) and the World Wars are clear: Inter arma silent leges. Each side can afford nothing but to do its utmost to subjugate the other, so robots will be used to further remove conscience from the act of pulling the trigger, turn treatment of enemy civilian manpower over to the total amorality of programmed killers, and/or commit whatever beneficial atrocities, conventional or novel, would not overwhelm the sensibilities of a war-fevered state or populace, or those of onlookers, to the point that the reaction would make for an unprofitable trade-off.

When the strong go against the weak and a country descends upon either another one or undesirable elements within itself, robot combatants are one more tool among the many that allow those who can afford them to kill more and to better impose their will over those who cannot. Here robots, incapable of moral arguments, still serve to provide their owners with the means and the opportunity to use force. The effect ultimately depends on whether said owners are willing to forgo their own gain in favor of respecting others’ humanity.


References

Full

De George, Richard (2003): Post-September 11: Computers, ethics and war. Article, Ethics and Information Technology 5, Kluwer Academic Publishers, the Netherlands, 2003.

Bissett, A.K. (2003): Carl von Clausewitz and high technology war. Article, Proceedings of Computer Ethics, Philosophical Enquiry 2003, Boston College, Chestnut Hill, MA, June 25-27, 2003.

Sparrow, Robert (2007): Killer Robots. Article, Journal of Applied Philosophy, Volume 24, Issue 1, February 2007.

Feldman, Fred (1978): Introductory Ethics. Book, Prentice-Hall, Inc.

Nelson, Todd; Bolia, Robert (2006): Supervisory Control of Uninhabited Combat Air Vehicles From an Airborne Battle Management Command Platform: Human Factors Issues. Article, Human Factors of Remotely Operated Vehicles, Advances in Human Performance and Cognitive Engineering Research, Volume 7.

Blair, Bruce (1993): The Logic of Accidental Nuclear War. Book, Brookings Institution Press.

Blitch, John (1998): Semi-Autonomous Tactical Robots for Urban Operations. Article, Proceedings of the 1998 IEEE/ISIC/CIWS/ISAS Joint Conference, Institute of Electrical and Electronics Engineers, Gaithersburg, Maryland, September 14-17, 1998.

Moseley, Alexander (2006): Just War Theory. Web page, The Internet Encyclopedia of Philosophy.

Orend, Brian (2002): Justice after War. Article, Ethics & International Affairs, Volume 16.1, Spring 2002.

Fussell, Paul (1990): Wartime: Understanding and Behavior in the Second World War. Book, Oxford University Press, New York.

White, Matthew (2005): Deaths by Mass Unpleasantness: Estimated Totals for the Entire 20th Century. Web page, Historical Atlas of the Twentieth Century. (In defense of including a personal website on this part of the list, the pages are extensively sourced, and also cited in publications whose authors can be expected to know what they are doing.)

Limited

These sources are not peer-reviewed and not suitable as academic references. They are used here with care, as secondary sources for more reliable information and for verifiable technological - not ethical - information about the state of the field.

Herbert, Bob (2002): Dancing in the Dark. Article, The New York Times.

Cowan Jr., Thomas (2007): A Theoretical, Legal and Ethical Impact of Robots on Warfare. Research project, U.S. Army War College.

Rutherford, Mark (2008): Air Force commits to micro air vehicle. Blog entry, CNET News.com.

Cumming, David (executive producer) (2006): Future Shock - S1E5 - "Smart Weapons." Television show, Discovery Channel.

Wilson, Daniel (2005): How to Survive a Robot Uprising: Tips on Defending Yourself Against the Coming Rebellion. Book, Bloomsbury Publishing.

Buzzell, Colby (2005): My War: Killing Time in Iraq. Book, G. P. Putnam’s Sons, New York.

Associated Press (2008): A Look at the Predator Drone.

Shachtman, Noah (2007): First Armed Robots on Patrol in Iraq. Blog entry, WIRED Danger Room. August 2, 2007.

Ghattas, Kim (2008): New approach for US army manual. Article, BBC News, Washington. 8 February 2008.

Wilson, Michael (2006): 50 Shots Fired, and the Experts Offer a Theory. Article, The New York Times. 27th November 2006.

Fox News (2008): Pentagon’s ‘Crusher’ Robot Vehicle Nearly Ready To Go. Article, Fox News. 26th February 2008.


There are some personal reasons for taking the hours to tweak and soft-link SEIWTUORC, though.
The essay was the last great hurdle in recovering control over my mind after student health care's
ADHD testing. Apparently I don't have ADHD and should under no circumstances be given the medication
used to find that out. Getting a grip on most things was relatively easy, but days would go by
as I "worked" on this essay, getting stuck over every word, meandering in every distraction without
accomplishing anything, putting off classes and people because there was only
a few hours' work left.

This went on for months.
In retrospect I have no idea how this could happen, but noding the beast that has
kept me from noding brings a very smug sense of closure, like coming back from
the Dark Continent with one arm and a lion skin rug.
Ha.