Good afternoon, comrades, friends and colleagues. Can I start by saying how pleased I am to see this conference taking place, and can I take this opportunity to thank the organisers for inviting me to share the platform today with such distinguished friends. My name is Harry Rogers and I live in West Wales, about 12 miles from Aberporth, where the MOD have been carrying out their tests on the Watchkeeper drone.
I am sixty-five years old and I almost never got born because of a UAV. My mother and my aunt were in the back bedroom over the top of my grandfather's public house in West Croydon in June 1944, just ten seconds before one of Hitler's V1 doodlebug flying bomb drones blew the back of the pub clean off. Ten seconds earlier and I would never have been born. My point here is that UAVs have been around a long time, and the Watchkeeper is nothing really new in concept. A lot has been written about this already almost obsolete piece of Army surveillance equipment, so I won't add to the reams you can find on the internet.
I am a member of a local peace group called Bro Emlyn For Peace and Justice, which formed in 2003 in response to the decision by Bush and Blair to attack Iraq. During that time we have been involved in a number of campaigns, including the campaign against the development of Cardigan Airport as a testing ground for UAVs, the proposed introduction of an unmanned aerial systems technology hub at Parc Aberporth, and the management buyout of the MOD missile testing base at Parcllyn by QinetiQ.
There is much I could tell you about the history of this campaigning, but that is not why I agreed to come here today. The past is something we can learn from, not something we can undo. My speech today is primarily about the future of drone technology and why it is vital that we all start to pay attention to what the research and development bods at MIT, DARPA, QinetiQ and BAE Systems are cooking up for the future. I make no apologies for basing most of this speech on a report on the future of Unmanned Aerial Systems made by the RAF to the Government in October 2012.
I am interested in ensuring that we all go away from today with an understanding that the big issue we face in terms of UAV development is that of autonomy. The next generation of drones may well be able to think for themselves and act autonomously, that is, without any human in the loop, as the military say. That means flying machines that can make their own decisions in search-and-destroy operations, based on a set of algorithmic decisions predetermined by human masters who may or may not be benign in their intentions.
Sceptics amongst you are already muttering balderdash and hokum, science fantasy and other such epithets. Well, virtually all of the rest of this speech will, I hope, convince you otherwise.
* The RAF say that “As UAVs are developed with increasing levels of automation it will reduce the requirement for operator training in the more traditional piloting skills of flying the aircraft, such as landing and takeoff, and focus the training more towards operating the payload.”
* The MOD, and in particular the RAF, are discussing the levels of automation they are comfortable with. They recognise that highly automated weaponry systems are unlikely to be able to apply judgement and pragmatism to “situations”. They are worried about the legal and ethical considerations that arise when loss of life or injury occurs with no human being in the loop.
* The future is one where outdated weaponry will give way to what they term “precision weaponry” and where the battle-spaces increasingly involve unmanned and cyber operations.
* Autonomous weapon systems are capable of understanding higher-level intent and direction, and of perceiving their environment, leading to the ability to take appropriate actions to bring about desired states. They are able to decide on a course of action, from a number of alternatives, without depending on human oversight and control. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.
* Low-intensity tasks for autonomous UAVs requiring minimal human oversight rather than continuous control include the following:-
– pattern-of-life surveillance tasks over fixed locations or in support of littoral manoeuvre
– maintenance of standing anti-submarine warfare or anti-surface warfare radar barriers
– counter-piracy tasks; monitoring of arrays of sonobuoys or other sensors
– electronic warfare tasks
– acting as a communications relay
– air-to-air refueling tankers
* The RAF consider UAVs ideal for use in environments hostile to a manned aircraft or its crew, such as operations involving Chemical, Biological, Radiological and Nuclear (CBRN) threats. Tasks include the following:-
– carrying sensors for local, tactical or global use
– systems easily sacrificed in a safe area after data gathering
– use in situations where fire and smoke make human activity hazardous
* In risky situations, UAV systems can be used instead of aircrew or soldiers where the threat of ground-to-air action is high, and also where it is necessary to suppress an integrated air defence system.
– multiple cheap UAVs used sacrificially to swamp detection and command and control systems
– to encourage enemies to fire large numbers of missiles
– observe engagement tactics and transmit data back to intelligence collators
– convey tactical supplies
– sweep for improvised explosive devices
* Of course, UAVs are used in scenarios which are highly distasteful, and we all know that they are used for surveillance, targeting and carrying out weapon attacks inside the borders of countries such as Pakistan and Palestine. All of this is currently done with “man in the loop” systems, but it will not be too long before drones are flying with what the RAF describes as “the ability to independently locate and attack mobile targets, with appropriate proportionality and discrimination.”
The RAF are worried about issues relating to the Geneva Convention with regard to autonomous UAV development and usage and they say that “compliance will become increasingly challenging as systems become more automated. In particular, if we wish to allow systems to make independent decisions without human intervention, some considerable work will be required to show how such systems will operate legally.”
Already there are automated weapons systems in use in Afghanistan, for example, the Phalanx and Counter-Rocket, Artillery and Mortar (C-RAM) systems used because there is deemed to be insufficient time for human response to counter incoming fire.
Future autonomous UAV systems will have to adhere to legal requirements and civilian airspace regulations, and this will require political involvement to get the necessary changes made. In my view it is absolutely vital that politicians understand the issues clearly, because once those changes are made there is going to be a lot of military and civilian hardware flying about in the airspace without any human interface whatsoever. The RAF say: “As systems become increasingly automated, they will require decreasing human intervention between the issuing of mission-level orders and their execution.” They go further, saying: “It would be only a small technical step to enable an unmanned aircraft to fire a weapon based solely on its own sensors, or shared information, and without recourse to higher, human authority.” They also discuss the timescale for the introduction of increased autonomy via artificial intelligence: “Estimates of when artificial intelligence will be achieved (as opposed to complex and clever automated systems) vary, but the consensus seems to lie between more than 5 years and less than 15 years.” Their words, not mine.
Currently the MOD “has no intention to develop systems that operate without human intervention in the weapon command and control chain, but it is looking to increase levels of automation where this will make systems more effective.”
The RAF are clearly worried about the direction all this is going in, and they say: “As technology matures and new capabilities appear, policy-makers will need to be aware of the potential legal issues and take advice at a very early stage of any new system’s procurement cycle.” I believe this highlights a degree of paranoia on the part of the RAF vis-à-vis its own future role.
It is not only legal dilemmas that exercise the RAF. Ethical and moral questions are also in their thoughts, such as when, where and how automated and autonomous unmanned systems may be used. This applies not just to the use of drones, of course, but also to all other forms of weaponry in any environment. Will all future wars be fought remotely, with little or no loss of friendly military personnel? Will future conflicts be waged between increasingly complex unmanned systems?
In my view, a problem we face today is that the accountants have control of governments, and the most expensive resource used in public services is human beings. Autonomy therefore offers massive savings in manpower, and in support for that manpower both before and after conflict occurs. As artificial intelligence comes on board, we are likely to see more complicated tasks arising that are beyond the capability of humans to deal with, due to speed, complexity and information overload. No doubt some of you are probably suffering from that now, but I only have a bit more to say before I take questions, so bear with me.
The RAF and many others in the field are grappling with issues such as whether it is possible to develop AI that has the capability to focus on the unique (at the moment) ability that a human being has to bring empathy and morality to complex decision-making. The RAF say “To a robotic system, a school bus and a tank are the same – merely algorithms in a programme – and the engagement of a target is a singular action; the robot has no sense of ends, ways and means, no need to know why it is engaging a target. There is no recourse to human judgement in an engagement, no sense of a higher purpose on which to make decisions, and no ability to imagine (and therefore take responsibility for) repercussions of action taken.”
So we need to pose the following questions to our politicians:-
Are they happy to allow Autonomous Robots to take on the responsibility of choosing which of us lives and which of us dies?
Can an autonomous robot be considered capable of waging ethical and legal warfare?
Can software and hardware accidents be distinguished from war crimes when robots go wrong?
Is it possible to establish clear policies on acceptable machine behavior?
How far out of the bottle is the technological genie?
Are we doomed to a Terminator style future?
Is it possible to have a sensible debate about technological development anymore?
Do we really want Artificial Intelligence with a greater capacity to think than a human to be in any way involved in future theatres of War?
The RAF pose very important questions when they ask the following:-
– Do military planners and politicians understand the full implications of the systems they are currently tasking and those they hope to procure?
– In the current economic climate, who will decide the best balance between keeping existing equipment and personnel, or whether to give these up to fund new unmanned systems?
– Do we understand even the basic implications of such decisions for the associated defence lines of development?
– Crucially, do we have a strategic level of understanding as to how we will deliver the considerable number of changes that will need to be made to existing policy, concepts, doctrine, and force structures?
* Finally we must expect to see governments bringing in changes to the law of armed conflict in order to accommodate the use of autonomous UAS and we must shout our opposition to this from the rooftops.
So there we have it, friends: there is a lot for us to consider when we look into the issue of drones. We have to view all of this in a holistic way. That is, we must not just lumber along from one demo to the next, thinking only about the impact of the current use of drone warfare, terrible though that is. It is absolutely vital that we start to consider the future implications for the maintenance and development of basic human rights. We must also consider what the use of this technology means in terms of social control measures, such as we see the beginnings of in Gaza and elsewhere. I believe we are at, if not already past, a dangerous turning point in the way we occupy this planet.
It is incumbent on us all to make a fuss about these issues if we want a planet where human rights are protected. If we don't, then we condemn the world to a dystopian future where all kinds of as-yet-unthought-of technology are used to maintain the position of a global elite above and beyond that of the majority of the people. The choice is ours: make a ruckus, or bury our heads in the sand and wait for the worst possible sci-fi future to engulf us all.
Thanks for listening. I'm happy to take questions and take part in debate.
Harry Rogers 24/05/2013