“Campaign to Stop Killer Robots.” That may sound like a clique of conspiracy theorists or the title of a summer B movie, but it's actually an alliance of human rights groups raising legal and ethical concerns about people's willingness to cede life-and-death decisions to computers.

Who is responsible if an armed robot fails to distinguish between civilians and combatants when unleashing lethal force against a target that meets its programmed criteria?

And how, skeptics wonder, can a “fully autonomous weapon” be taught to recognize soldiers attempting to surrender or those already wounded and no longer a threat?

If national military forces can rely on machines to take on the front-line hazards of armed combat, will that reduced risk of human casualties remove an important deterrent to waging war?

The Campaign to Stop Killer Robots was joined Thursday by a diverse array of peace advocates and diplomats at a session of the U.N. Human Rights Council in calling for reflection on the wisdom of creating lethal technology that operates without human oversight, and for agreed rules governing its use.

“Their deployment may be unacceptable because no adequate system of legal accountability can be devised and because robots should not have the power of life and death over human beings,” the United Nations’ watchdog on extrajudicial killings, Christof Heyns, told the council.

In calling for U.N. member nations to freeze development of robotic weapons “while the genie is still in the bottle,” Heyns warned of the risk of rapidly advancing technology outpacing political and moral consideration of unintended consequences.

In a 22-page report submitted to the U.N. rights forum, Heyns detailed the precursors to “fully autonomous weapons” already in operation:

-- Soldier-robots patrol the demilitarized zone between North and South Korea, and though remotely commanded by humans now, the programmed sentinels from Samsung Techwin are equipped with an automatic option.

-- The U.S. Navy launched an unmanned jet this month, the X-47B stealth drone developed by Northrop Grumman. Like generations of aerial drones that came before it, the X-47B is being billed as a surveillance tool. But it also has the capacity to carry more than 4,000 pounds of munitions.

-- Israel’s Harpy combat drone is designed to detect, attack and destroy radar emitters and suppress enemy air defenses.

-- Britain’s BAE Systems has developed its Taranis superdrone, which can autonomously search, locate and identify enemy targets. The device requires human authorization to fire, but it has the technological capability of determining on its own when to attack or respond.

Existing drone technology has stirred plenty of controversy and frustrated relations between the United States, its foremost developer and user, and countries like Pakistan, Afghanistan and Yemen, where airstrikes and targeted killings have inflicted “collateral damage,” the military euphemism for civilian casualties.

Getting the international community united on ground rules for fully autonomous weapons is likely to pose at least as much of a challenge as balancing the pros and cons of using drones, but it is one that legal experts contend isn’t beyond the realm of possibility.

There is already significant recognition among the technologically advanced countries that there should be limits to the degree to which computerized systems can take action without human involvement, said Bonnie Docherty, a Harvard Law School lecturer and senior instructor at its International Human Rights Clinic. The rights clinic co-wrote a report with Human Rights Watch late last year on the hazards of leaving battlefield decisions to machines, “Losing Humanity: The Case Against Killer Robots.”

Docherty pointed to the Pentagon’s November directive that fully autonomous weapons would be banned for the foreseeable future except to apply non-lethal or non-physical force, such as some forms of electronic attack.

Steve Goose, arms division director at Human Rights Watch, told journalists covering the U.N. meeting in Geneva this week that several governments have expressed willingness to take the lead in getting a global moratorium on lethal robotics in place.

The burgeoning alliance against “killer robots” is hopeful that world leaders can be brought together on the need for keeping humans in control.

“There is a good chance of success because we are trying to act preemptively, to prevent states from investing so much in this technology that they don’t want to give it up,” said Docherty.

M. Ryan Calo, a University of Washington law professor with expertise in robotics and data security, notes that there are upsides to robotic warfare, like the speed at which computers can make decisions and their ability to approach problem-solving in ways that are beyond humans.