Sunday, February 26, 2017

The FCC and Congress Must Preserve the Privacy Rights of Broadband Internet Customers!

Posted by Yewande Ogunkoya

Written by Katharina Kopp

After a long and fair rulemaking process during most of 2016, the Federal Communications Commission (FCC) adopted ground-breaking privacy rules last October protecting the personal information of broadband Internet service customers. Both industry—including the powerful phone and cable companies that provide the majority of broadband connections—and consumer, civil rights, and privacy groups had ample opportunity to make their case, which they did. Public-interest and grass-roots organizations used their limited resources to advocate for consumers' basic right to access and use the Internet (via Internet service providers—ISPs) in private, without having their information gathered. Industry and its allies tried to oppose or water down attempts to give their customers meaningful privacy protections. Despite a significant power imbalance between the parties, the process resulted in rules that give consumers and citizens legal rights that many assumed were already theirs to enjoy, but which they had, in fact, been denied until this historic broadband privacy rulemaking.

Prior to the Privacy Order, not since the 1998 Children's Online Privacy Protection Act had U.S. regulators affirmatively granted consumers meaningful online privacy rights on this scale. While the Federal Trade Commission has played an important role as the lead agency in protecting the privacy and data-security rights of U.S. consumers across a large section of the U.S. economy, it has not enacted any policy that would redress the power imbalance between consumers and large corporations in our ever-growing commercial surveillance world. The FCC broadband privacy rule, however, aims to bring some fairness and balance to the lopsided relationship between the average individual and data-insatiable, powerful ISPs.

The FCC rules set limits on what Internet service providers may do with the highly sensitive data that they have already collected in the course of providing Internet service (a service for which consumers already pay dearly out of their own pockets). "Sensitive" information includes precise geo-location, financial information, health information, children's information, Social Security numbers, web browsing history, app usage history, and the content of communications. The most important aspect of the rule requires Internet service providers to obtain opt-in consent for the use or sharing of such information for purposes other than providing broadband service, such as billing. What this means is that unless an ISP asks and I give permission, what I do on the Internet is off-limits for it to monetize.

The final rule emphasized the distinction between sensitive and non-sensitive data because the FCC felt it had to accommodate industry pressure to follow the FTC's framework, which is based on that agency's very limited authority to protect consumer privacy. Advocates and the FCC recognized that the distinction between sensitive and non-sensitive information is less and less meaningful in an age when companies can use data analytics and modeling to infer the most personal traits of an individual without ever collecting "sensitive data." What is particularly noteworthy is that the new FCC rule grappled with which kinds of uses and sharing are permissible and which are not. The rule, in fact, makes clear that it is precisely the unexpected, unrelated, or secondary uses of data for which a company must first obtain permission. In other words, each of us has a right to control the collection, use, and exploitation of data about us.

This important and basic human right was finally made into a legal right with the 2016 FCC Privacy Order. It should stay that way, even with new leadership at the Commission. The benefits that accrue to each of us individually and as members of a group, as well as to society at large, from this new policy safeguard are manifold and invaluable to an equitable, just, and fair democracy and marketplace. Without an opt-in requirement, there would be no limits on how ISPs can use the data about us. As we all know too well, the existing individual privacy self-management model in the U.S., which typically offers only an opt-out, has proven ineffective in putting limits on corporate data uses and sharing, even as the public expresses growing opposition to these corporate surveillance practices.

Not only do the FCC privacy rules affirm the basic individual right to have one's privacy protected and individual autonomy preserved; the requirement to obtain affirmative consent prior to any secondary uses by ISPs is equally critical in guarding against profiling and group discrimination. The profiling of an individual, or the association of an individual with a class of people, requires very little information about the person who is being profiled. So the less data collected about others like me, the less likely it is that I will be profiled. While not perfect, and just a small safeguard in this world of ubiquitous and constant data surveillance, the rule helps guard us against classifying and predictive analytics that often embody bias, discrimination, and entrenched inequality, incorporating past inequities into decisions about the future.

Given that the rule identifies information about children as sensitive data, it is also important in protecting the fundamental rights of children to enjoy privacy and freedom from age-inappropriate commercial exploitation. The use of data about us as consumers and citizens during the 2016 elections, moreover, should serve as an important reminder of how pervasive the technologies of data surveillance and analytics have become. Political campaigns and special interests have unfettered access to commercial data and marketing practices designed to influence how we think, act, and vote, and there are no regulations or corporate practices that aim to curtail these developments—unless, that is, we hold onto the FCC broadband privacy rule now and build on it in the future.

Given Ajit Pai's ominous start to his FCC chairmanship—he has already used his "delegated authority" to undermine important communications rights—we now face the very real likelihood that the new chairman will do away with the rule adopted by the previous Commission. (CDD recently filed an Opposition to Petitions for a Stay of the Federal Communications Commission's Broadband Privacy Order in response to a filing by a coalition of industry associations and interests.) Similarly, there is a real risk that Congress might repeal the rule via the Congressional Review Act, which would prevent the FCC from revisiting broadband privacy rules at all in the future. But Americans want to have control over their lives and data, and want to be able to make decisions unencumbered by powerful corporate interests. Thus the FCC and Congress must preserve the privacy rights of broadband Internet customers!

---

Written by Katharina Kopp, Deputy Director, Director of Policy for Center for Digital Democracy




FCC proposes new privacy rules for ISPs

As widely expected, the FCC has proposed expansive new data privacy rules for broadband providers that, if adopted, could see ISPs required to gain explicit consent from users for using or sharing their data.

Although the Notice of Proposed Rulemaking (NPRM), adopted by the FCC late last week, seeks to tighten privacy rules, it would not require broadband providers to gain explicit opt-in customer consent for all purposes — with leeway left for providing the broadband service, marketing the specific service to users, and for certain other specific purposes "consistent with customer expectations", such as public safety contacts.

ISPs would also still be able to share customer data for marketing other comms-related services and with any affiliates providing these services — provided broadband customers have not opted out of receiving this type of marketing missive.

However, all other uses of customer data by ISPs would require explicit opt-in consent from users under the proposals.

The FCC said the aim is to implement the privacy requirements of Section 222 of the Communications Act for broadband ISPs, with chairman Tom Wheeler arguing that tighter privacy rules are necessary because of the visibility ISPs have into users' online activity.

In a statement, Wheeler said the proposal is aimed at giving consumers "the tools we need to make informed decisions about how our ISPs use and share our data, and confidence that ISPs are keeping their customers' data secure".

Unlike with Internet services, consumers cannot easily swap to another broadband provider or choose not to use an ISP at all, he noted.

"Our ISPs handle all of our network traffic. That means an ISP has a broad view of all of its customers' unencrypted online activity — when we are online, the websites we visit, and the apps we use. If we have mobile devices… our providers can track our physical location throughout the day in real time. Even when data is encrypted, our broadband providers can piece together significant amounts of information about us — including private information such as a chronic medical condition or financial problems — based on our online activity," said Wheeler.


In January a group of U.S. privacy and consumer rights groups called on the FCC to tighten privacy rules for ISPs, arguing that ISPs have increasingly been using the same big data analytics/tracking techniques as Internet ad platform giants like Google — posing massive risks to user privacy.

As you'd expect, broadband providers are opposed to the proposals. Commenting in a blog post after the NPRM was published last week, Comcast's SVP for public policy, David L. Cohen, wrote: "The proposed rules will not provide meaningful consumer Internet privacy protections, and will block ISPs from bringing new competition to the online advertising market that could benefit consumers."

Cohen added that the NPRM is "inexplicably targeted to block ISPs… from entering and competing as disruptors and upstarts in the online advertising marketplace", noting the latter is "dominated by edge providers and other non-ISPs".

But Wheeler couched it as "narrowly focused" on ISPs because ISPs gather personal data as a function of providing broadband connectivity — rather than because a consumer chooses to visit a website or use a particular online service.

"This proposal does not prohibit ISPs from using and sharing customer data — it simply proposes that the ISP first obtain customers' express permission before doing so," Wheeler added.

The FCC proposal was approved by a 3-2 Democratic majority, with Republican commissioners dubbing it corporate favoritism, according to NPR. The next steps in the FCC's process will be a period of public consultation before a final vote to set new rules.




New FCC rule protects users from the prying eyes of ISPs

It's a good day for consumers, and the advertisers are tearing out their hair: The FCC today voted to adopt new privacy rules that severely restrict what data ISPs can collect from you without your consent. The Association of National Advertisers called the rules "unprecedented, misguided and extremely harmful." If that isn't a strong endorsement, I don't know what is!

The rules can be summarized briefly, which is usually a sign they're strong and far-reaching. The nut of the new rule, first proposed earlier this year, is this: "ISPs are required to obtain affirmative 'opt-in' consent from consumers to use and share sensitive information." Note that it doesn't say they can't collect that information — more on this later.

And just what is sensitive information? The rule lists the following, but it's not meant to be all-encompassing:

  • Precise geo-location
  • Children's information
  • Health information
  • Financial information
  • Social Security numbers
  • Web browsing history
  • App usage history
  • The content of communications

It's perhaps easier to define sensitive information by what it isn't. Simple things like your email address, service tier, IP address, bandwidth used, and other information along those lines don't require your permission for use.

This affects terrestrial ISPs like Comcast as well as mobile carriers like T-Mobile (Note: TechCrunch is owned by Aol, which is owned by Verizon, definitely an ISP).

Naturally, that kind of data is extremely valuable to advertisers, and this particular golden goose just stopped laying. Who, after all, will opt into having that information shared by their ISP for the purpose of being advertised to?

FCC Chairman Tom Wheeler at Disrupt in 2015.


Advertisers say that this info is critical for offering consumers ads relevant to them, and that they use it responsibly. And that's largely true. But FCC Chairman Tom Wheeler makes the only response necessary to that particular objection:

"There is a basic truth: It is the consumer's information. It is not the information of the network the consumer hires to deliver that information."

This is your data, and you control who sees it. Do you want to provide it so you can get ads targeted to your browsing habits and demographics? Go for it! That's your choice now.

One concern is that ISPs will simply bury this consent in one of the many documents we tend to agree to without reading them. But you can always opt out, and the rule specifically prohibits the ISP from refusing service if you don't opt into data sharing. Even if they decide to incentivize it — discounts or service improvements — the FCC will examine these situations case by case to see if they are reasonable or predatory.

Now, there is a bit of a loophole. A section of the FCC order fact sheet says that de-identified, or anonymized, data is usable "outside the consent regime." The phrasing was unclear, so I checked with the FCC on this. It turns out that ISPs can collect sensitive information without your consent — provided they properly de-identify it before using it. That sounds a bit like the honor system to me.
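
To see why this can feel like the honor system, consider a minimal, hypothetical sketch of a naive scheme an ISP might argue counts as de-identification — simply hashing the subscriber ID. The records and field names are invented; nothing here comes from the FCC order or any ISP's actual pipeline.

```python
import hashlib

# Hypothetical illustration only: invented records and field names, not the
# FCC order's definition of de-identification or any ISP's actual practice.
def deidentify(record: dict) -> dict:
    """Naive 'de-identification': swap the account ID for a stable hash."""
    pseudonym = hashlib.sha256(record["subscriber_id"].encode()).hexdigest()[:12]
    return {"pseudonym": pseudonym, "site": record["site"], "time": record["time"]}

records = [
    {"subscriber_id": "acct-1001", "site": "clinic.example", "time": "2016-10-27T09:15"},
    {"subscriber_id": "acct-1001", "site": "bank.example",   "time": "2016-10-27T09:20"},
]

deidentified = [deidentify(r) for r in records]

# The hash is stable, so every visit by the same subscriber still carries the
# same pseudonym: the browsing profile survives intact, and anyone able to map
# the pseudonym back to an account re-identifies the customer.
assert deidentified[0]["pseudonym"] == deidentified[1]["pseudonym"]
print(deidentified)
```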

Don't expect an email tomorrow asking what you'd like to share with your internet provider, though. These rules won't go into effect for at least a year — although that doesn't prevent companies from complying earlier than that.

Featured Image: Pablo Martinez Monsivais/AP




Saturday, February 25, 2017


The Pentagon is building a 'self-aware' killer robot army fueled by social media

Official US defence and NATO documents confirm that autonomous weapon systems will kill targets, including civilians, based on tweets, blogs and Instagram


Imagine one of these giant robot dog things being weaponized and chasing you through the jungle because you turned up on a Pentagon kill list after posting angry stuff on social media

by Nafeez Ahmed

This exclusive is published by INSURGE INTELLIGENCE, a crowd-funded investigative journalism project for the global commons

An unclassified 2016 Department of Defense (DoD) document, the Human Systems Roadmap Review, reveals that the US military plans to create artificially intelligent (AI) autonomous weapon systems, which will use predictive social media analytics to make decisions on lethal force with minimal human involvement.

Despite official insistence that humans will retain a "meaningful" degree of control over autonomous weapon systems, this and other Pentagon documents dated from 2015 to 2016 confirm that US military planners are already developing technologies designed to enable swarms of "self-aware" interconnected robots to design and execute kill operations against robot-selected targets.

More alarmingly, the documents show that the DoD believes that within just fifteen years, it will be feasible for mission planning, target selection and the deployment of lethal force to be delegated entirely to autonomous weapon systems in air, land and sea. The Pentagon expects AI threat assessments for these autonomous operations to be derived from massive data sets including blogs, websites, and multimedia posts on social media platforms like Twitter, Facebook and Instagram.

The raft of Pentagon documentation flatly contradicts Deputy Defense Secretary Robert Work's denial that the DoD is planning to develop killer robots.

In a widely reported March conversation with Washington Post columnist David Ignatius, Work said that this may change as rival powers work to create such technologies:

"We might be going up against a competitor that is more willing to delegate authority to machines than we are, and as that competition unfolds we will have to make decisions on how we best can compete."

But, he insisted, "We will not delegate lethal authority to a machine to make a decision," except for "cyber or electronic warfare."

He lied.

Official US defence and NATO documents dissected by INSURGE intelligence reveal that Western governments are already planning to develop autonomous weapons systems with the capacity to make decisions on lethal force — and that such systems, in the future, are even expected to make decisions on acceptable levels of "collateral damage."

Behind public talks, a secret arms race

Efforts to create autonomous robot killers have evolved over the last decade, but have come to a head this year.

A National Defense Industrial Association (NDIA) conference on Ground Robotics Capabilities in March hosted government officials and industry leaders who confirmed that the Pentagon was developing robot teams that would be able to use lethal force without direction from human operators.

In April, government representatives and international NGOs convened at the United Nations in Geneva to discuss the legal and ethical issues surrounding lethal autonomous weapon systems (LAWS).

That month, the UK government launched a parliamentary inquiry into robotics and AI. And in early May, the White House Office of Science and Technology Policy announced a series of public workshops on the wide-ranging social and economic implications of AI.

Prototype Terminator Bots?

Most media outlets have reported the fact that so far, governments have not ruled out the long-term possibility that intelligent robots could be eventually authorized to make decisions to kill human targets autonomously.

But contrary to Robert Work's claim, active research and development efforts to explore this possibility are already underway. The plans can be gleaned from several unclassified Pentagon documents in the public record that have gone unnoticed, until now.

Among them is a document released in February 2016 from the Pentagon's Human Systems Community of Interest (HSCOI).

The document shows not only that the Pentagon is actively creating lethal autonomous weapon systems, but that a crucial component of the decision-making process for such robotic systems will include complex Big Data models, one of whose inputs will be public social media posts.

Robots that kill 'like people'

The HSCOI is a little-known multi-agency research and development network seeded by the Office of the Secretary of Defense (OSD), which acts as a central hub for a vast range of science and technology work across US military and intelligence agencies.

The document is a 53-page presentation prepared by HSCOI chair, Dr. John Tangney, who is Director of the Office of Naval Research's Human and Bioengineered Systems Division. Titled Human Systems Roadmap Review, the slides were presented at the NDIA's Human Systems Conference in February.

The document says that one of the five "building blocks" of the Human Systems program is to "Network-enable, autonomous weapons hardened to operate in a future Cyber/EW [electronic warfare] Environment." This would allow for "cooperative weapon concepts in communications-denied environments."

But then the document goes further, identifying one "focus area" for science and technology development as "Autonomous Weapons: Systems that can take action, when needed", along with "Architectures for Autonomous Agents and Synthetic Teammates."

The final objective is the establishment of "autonomous control of multiple unmanned systems for military operations."

Such autonomous systems must be capable of selecting and engaging targets by themselves — with human "control" drastically minimized to affirming that the operation remains within the parameters of the Commander's "intent."

The document explicitly asserts that these new autonomous weapon systems should be able to respond to threats without human involvement, but in a way that simulates human behavior and cognition.

The DoD's HSCOI program must "bridge the gap between high fidelity simulations of human cognition in laboratory tasks and complex, dynamic environments."

Referring to the "Mechanisms of Cognitive Processing" of autonomous systems, the document highlights the need for:

"More robust, valid, and integrated mechanisms that enable constructive agents that truly think and act like people."

The Pentagon's ultimate goal is to develop "Autonomous control of multiple weapon systems with fewer personnel" as a "force multiplier."

The new systems must display "highly reliable autonomous cooperative behavior" to allow "agile and robust mission effectiveness across a wide range of situations, and with the many ambiguities associated with the 'fog of war.'"

Resurrecting the human terrain

The HSCOI consists of senior officials from the US Army, Navy, Marine Corps, Air Force, and the Defense Advanced Research Projects Agency (DARPA), and is overseen by the Assistant Secretary of Defense for Research & Engineering and the Assistant Secretary of Defense for Health Affairs.

HSCOI's work goes well beyond simply creating autonomous weapons systems. An integral part of this is simultaneously advancing human-machine interfaces and predictive analytics.

The latter includes what an HSCOI brochure for the technology industry, 'Challenges, Opportunities and Future Efforts', describes as creating "models for socially-based threat prediction" as part of "human activity ISR."

This is shorthand for intelligence, surveillance and reconnaissance of a population in an 'area of interest', conducted by collecting and analyzing data on the behaviors, culture, social structure, networks, relationships, motivation, intent, vulnerabilities, and capabilities of a human group.

The idea, according to the brochure, is to bring together open source data from a wide spectrum, including social media sources, in a single analytical interface that can "display knowledge of beliefs, attitudes and norms that motivate in uncertain environments; use that knowledge to construct courses of action to achieve Commander's intent and minimize unintended consequences; [and] construct models to allow accurate forecasts of predicted events."

The Human Systems Roadmap Review document from February 2016 shows that this area of development is a legacy of the Pentagon's controversial "human terrain" program.

The Human Terrain System (HTS) was a US Army Training and Doctrine Command (TRADOC) program established in 2006, which embedded social scientists in the field to augment counterinsurgency operations in theaters like Iraq and Afghanistan.

The idea was to use social scientists and cultural anthropologists to provide the US military actionable insight into local populations to facilitate operations — in other words, to weaponize social science.

The $725 million program was shut down in September 2014 in the wake of growing controversy over its sheer incompetence.

The HSCOI program that replaces it includes social sciences but the greater emphasis is now on combining them with predictive computational models based on Big Data. The brochure puts the projected budget for the new human systems project at $450 million.

The Pentagon's Human Systems Roadmap Review demonstrates that far from being eliminated, the HTS paradigm has been upgraded as part of a wider multi-agency program that involves integrating Big Data analytics with human-machine interfaces, and ultimately autonomous weapon systems.

The new science of social media crystal ball gazing

The 2016 human systems roadmap explains that the Pentagon's "vision" is to use "effective engagement with the dynamic human terrain to make better courses of action and predict human responses to our actions" based on "predictive analytics for multi-source data."

Are those 'soldiers' in the photo human… or are they really humanoid (killer) robots?

In a slide entitled, 'Exploiting Social Data, Dominating Human Terrain, Effective Engagement,' the document provides further detail on the Pentagon's goals:

"Effectively evaluate/engage social influence groups in the op-environment to understand and exploit support, threats, and vulnerabilities throughout the conflict space. Master the new information environment with capability to exploit new data sources rapidly."

The Pentagon wants to draw on massive repositories of open source data that can support "predictive, autonomous analytics to forecast and mitigate human threats and events."

This means not just developing "behavioral models that reveal sociocultural uncertainty and mission risk", but creating "forecast models for novel threats and critical events with 48–72 hour timeframes", and even establishing technology that will use such data to "provide real-time situation awareness."

According to the document, "full spectrum social media analysis" is to play a huge role in this modeling, to support "I/W [irregular warfare], information operations, and strategic communications."

This is broken down further into three core areas:

"Media predictive analytics; Content-based text and video retrieval; Social media exploitation for intel."

The document refers to the use of social media data to forecast future threats and, on this basis, automatically develop recommendations for a "course of action" (CoA).

Under the title 'Weak Signal Analysis & Social Network Analysis for Threat Forecasting', the Pentagon highlights the need to:

"Develop real-time understanding of uncertain context with low-cost tools that are easy to train, reduce analyst workload, and inform COA [course of action] selection/analysis."

In other words, the human input into the development of course of action "selection/analysis" must be increasingly reduced, and replaced with automated predictive analytical models that draw extensively on social media data.

This can even be used to inform soldiers of real-time threats using augmented reality during operations. The document refers to "Social Media Fusion to alert tactical edge Soldiers" and "Person of Interest recognition and associated relations."

The idea is to identify potential targets — 'persons of interest' — and their networks, in real-time, using social media data as 'intelligence.'

Meaningful human control without humans

Both the US and British governments are therefore rapidly attempting to redefine "human control" and "human intent" in the context of autonomous systems.

Among the problems that emerged at the UN meetings in April is the tendency to dilute the parameters by which an autonomous weapon system can be described as remaining under "meaningful" human control.

A separate Pentagon document dated March 2016 — a set of presentation slides for that month's IEEE Conference on Cognitive Methods in Situation Awareness & Decision Support — insists that DoD policy is to ensure that autonomous systems ultimately operate under human supervision:

"[The] main benefits of autonomous capabilities are to extend and complement human performance, not necessarily provide a direct replacement of humans."

Unfortunately, there is a 'but'.

The March document, Autonomous Horizons: System Autonomy in the Air Force, was authored by Dr. Greg Zacharias, Chief Scientist of the US Air Force. The IEEE conference where it was presented was sponsored by two leading government defense contractors, Lockheed Martin and United Technologies Corporation, among other patrons.

Further passages of the document are revealing:

"Autonomous decisions can lead to high-regret actions, especially in uncertain environments."

In particular, the document observes:

"Some DoD activity, such as force application, will occur in complex, unpredictable, and contested environments. Risk is high."

The solution, supposedly, is to design machines that basically think, learn and problem solve like humans. An autonomous AI system should "be congruent with the way humans parse the problem" and driven by "aiding/automation knowledge management processes along lines of the way humans solve problem [sic]."

A section titled 'AFRL [Air Force Research Laboratory] Roadmap for Autonomy' thus demonstrates how, by 2020, the US Air Force envisages "Machine-Assisted Ops compressing the kill chain." The bottom of the slide reads:

"Decisions at the Speed of Computing."

This two-staged "kill chain" is broken down as follows: firstly, "Defensive system mgr [manager] IDs threats & recommends actions"; secondly, "Intelligence analytic system fuses INT [intelligence] data & cues analyst of threats."

In this structure, a lethal autonomous weapon system draws on intelligence data to identify a threat, which an analyst simply "IDs", before recommending "action."

The analyst's role here is simply to authorize the kill, but in reality the essential importance of human control — assessment of the integrity of the kill decision — has been relegated to the end of an entirely automated analytical process, as a mere perfunctory obligation.

By 2030, the document sees human involvement in this process as being reduced even further to an absolute minimum. While a human operator may be kept "in the loop" (in the document's words) the Pentagon looks forward to a fully autonomous system consisting of:

"Optimized platform operations delivering integrated ISR [intelligence, surveillance and reconnaissance] and weapon effects."

The goal, in other words, is a single integrated lethal autonomous weapon system combining full spectrum analysis of all data sources with "weapon effects" — that is, target selection and execution.

The document takes pains to layer this vision with a sense of human oversight being ever-present.

AI "system self-awareness"

Yet an even more blunt assertion of the Pentagon's objective is laid out in a third document, a set of slides titled DoD Autonomy Roadmap presented exactly a year earlier at the NDIA's Defense Tech Expo.

The document, authored by Dr. Jon Bornstein, who leads the DoD's Autonomy Community of Interest (ACOI), begins by framing its contents with the caveat: "Neither Warfighter nor machine is truly autonomous."

Yet it goes on to call for machine agents to develop:

"Perception, reasoning, and intelligence allow[ing] for entities to have existence, intent, relationships, and understanding in the battle space relative to a mission."

This will be the foundation for two types of weapon systems: "Human/ Autonomous System Interaction and Collaboration (HASIC)" and "Scalable Teaming of Autonomous Systems (STAS)."

In the near term, machine agents will be able "to evolve behaviors over time based on a complex and ever-changing knowledge base of the battle space… in the context of mission, background knowledge, intent, and sensor information."

However, it is the Pentagon's "far term" vision for machine agents as "self-aware" systems that is particularly disturbing:

"Far Term:
•Ontologies adjusted through common-sense knowledge via intuition.
•Learning approaches based on self-exploration and social interactions.
•Shared cognition
•Behavioral stability through self-modification.
•System self-awareness"

It is in this context of the "self-awareness" of an autonomous weapon system that the document clarifies the need for the system to autonomously develop forward decisions for action, namely:

"Autonomous systems that appropriately use internal model-based/deliberative planning approaches and sensing/perception driven actions/control."

The Pentagon specifically hopes to create what it calls "trusted autonomous systems", that is, machine agents whose behavior and reasoning can be fully understood, and therefore "trusted" by humans:

"Collaboration means there must be an understanding of and confidence in behaviors and decision making across a range of conditions. Agent transparency enables the human to understand what the agent is doing and why."

Once again, this is to facilitate a process by which humans are increasingly removed from the nitty-gritty of operations.

In the "Mid Term", there will be "Improved methods for sharing of authority" between humans and machines. In the "Far Term", this will have evolved to a machine system functioning autonomously on the basis of "Awareness of 'commanders intent'" and the "use of indirect feedback mechanisms."

This will finally create the capacity to deploy "Scalable Teaming of Autonomous Systems (STAS)", free of overt human direction, in which multiple machine agents display "shared perception, intent and execution."

Teams of autonomous weapon systems will display "Robust self-organization, adaptation, and collaboration"; "Dynamic adaption, ability to self-organize and dynamically restructure"; and "Agent-to-agent collaboration."

Notice the lack of human collaboration.

The "far term" vision for such "self-aware" autonomous weapon systems is not, as Robert Work claimed, limited to cyber or electronic warfare, but will include:

"Ground Convoys/Air-ground operations"; "Ballistic rate multi-agent operation"; "Smart munitions."

These operations might even take place in tight urban environments — "in close proximity to other manned & unmanned systems including crowded military & civilian areas."

The document admits, though, that the Pentagon's major challenge is to mitigate the risks posed by unpredictable environments and emergent behavior.

For autonomous systems, it is "difficult to assure correct behavior in a countless number of environmental conditions" and "difficult to sufficiently capture and understand all intended and unintended consequences."

Terminator teams, led by humans

The Autonomy roadmap document clearly confirms that the Pentagon's final objective is to delegate the bulk of military operations to autonomous machines, capable of inflicting "Collective Defeat of Hard and Deeply Buried Targets."

One type of machine agent is the "Autonomous Squad Member (Army)", which "Integrates machine semantic understanding, reasoning, and perception into a ground robotic system", and displays:

"Early implementation of a goal reasoning model, Goal-Directed Autonomy (GDA) to provide the robot the ability to self-select new goals when it encounters an unanticipated situation."

Human team members in the squad must be able "to understand an intelligent agent's intent, performance, future plans and reasoning processes."

Another type is described under the header, 'Autonomy for Air Combat Missions Team (AF).'

Such an autonomous air team, the document envisages, "Develops goal-directed reasoning, machine learning and operator interaction techniques to enable management of multiple, team UAVs." This will achieve:

"Autonomous decision and team learning enable the TBM [Tactical Battle Manager] to maximize team effectiveness and survivability."

TBM refers to battle-management autonomy software for unmanned aircraft.

The Pentagon still, of course, wants to ensure that there remains a human manual override, which the document describes as enabling a human supervisor "to 'call a play' or manually control the system."

Targeting evil antiwar bloggers

Yet the biggest challenge, nowhere acknowledged in any of the documents, is ensuring that automated AI target selection actually selects real threats, rather than generating or pursuing false positives.

According to the Human Systems roadmap document, the Pentagon has already demonstrated extensive AI analytical capabilities in real-time social media analysis, through a NATO live exercise last year.

During the exercise, Trident Juncture — NATO's largest exercise in a decade — US military personnel "curated over 2M [million] relevant tweets, including information attacks (trolling) and other conflicts in the information space, including 6 months of baseline analysis." They also "curated and analyzed over 20K [i.e. 20,000] tweets and 700 Instagrams during the exercise."

The Pentagon document thus emphasizes that the US Army and Navy can now already "provide real-time situation awareness and automated analytics of social media sources with low manning, at affordable cost", so that military leaders can "rapidly see whole patterns of data flow and critical pieces of data" and therefore "discern actionable information readily."

The primary contributor to the Trident Juncture social media analysis for NATO, which occurred over two weeks from late October to early November 2015, was a team led by information scientist Professor Nitin Agarwal of the University of Arkansas, Little Rock.

Agarwal's project was funded by the US Office of Naval Research, Air Force Research Laboratory and Army Research Office, and conducted in collaboration with NATO's Allied Joint Force Command and NATO Strategic Communications Center of Excellence.

Slides from a conference presentation about the research show that the NATO-backed project attempted to identify a hostile blog network during the exercise containing "anti-NATO and anti-US propaganda."

Among the top seven blogs identified as key nodes for anti-NATO internet traffic were websites run by Andreas Speck, an antiwar activist; War Resisters International (WRI); and Egyptian democracy campaigner Maikel Nabil Sanad — along with some Spanish language anti-militarism sites.

Andreas Speck is a former staffer at WRI, which is an international network of pacifist NGOs with offices and members in the UK, Western Europe and the US. One of its funders is the Joseph Rowntree Charitable Trust.

The WRI is fundamentally committed to nonviolence, and campaigns against war and militarism in all forms.

Most of the blogs identified by Agarwal's NATO project are affiliated to the WRI, including for instance nomilservice.com, WRI's Egyptian affiliate founded by Maikel Nabil, which campaigns against compulsory military service in Egypt. Nabil was nominated for the Nobel Peace Prize and even supported by the White House for his conscientious objection to Egyptian military atrocities.

The NATO project urges:

"These 7 blogs need to be further monitored."

The project was touted by Agarwal as a great success: it managed to extract 635 identity markers through metadata from the blog network, including 65 email addresses, 3 "persons", and 67 phone numbers.

This is the same sort of metadata that is routinely used to help identify human targets for drone strikes — the vast majority of whom are not terrorists, but civilians.

Agarwal's conference slides list three Pentagon-funded tools that his team created for this sort of social media analysis: Blogtracker, Scraawl, and Focal Structures Analysis.
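
For a sense of what extracting "identity markers" from blogs and social posts can involve, here is a minimal, hypothetical sketch using ordinary regular expressions. It is not Blogtracker, Scraawl, or Focal Structures Analysis — their internals are not public — and the sample posts and patterns are invented for illustration.

```python
import re

# Hypothetical illustration only -- not the Pentagon-funded Blogtracker,
# Scraawl, or Focal Structures Analysis tools. Crude patterns like these are
# enough to harvest "identity markers" (emails, phone numbers) from blog text.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\(?\d[\d\s().-]{7,}\d")

def extract_identity_markers(posts):
    markers = {"emails": set(), "phones": set()}
    for text in posts:
        markers["emails"].update(EMAIL_RE.findall(text))
        markers["phones"].update(m.strip() for m in PHONE_RE.findall(text))
    return markers

# Invented sample posts for illustration.
sample_posts = [
    "Contact the campaign at press@example.org or +20 100 000 0000.",
    "Our coordinator (volunteer@example.net) answers on (555) 010-0199.",
]

print(extract_identity_markers(sample_posts))
```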

Flagging up an Egyptian democracy activist like Maikel Nabil as a hostile entity promoting anti-NATO and anti-US propaganda demonstrates that when such automated AI tools are applied to war theatres in complex environments (think Pakistan, Afghanistan and Yemen), the potential to identify individuals or groups critical of US policy as terrorism threats is all too real.

This case demonstrates how deeply flawed the Pentagon's automation ambitions really are. Even with the final input of independent human expert analysts, entirely peaceful pro-democracy campaigners who oppose war are relegated by NATO to the status of potential national security threats requiring further surveillance.

Compressing the kill chain

It's often assumed that DoD Directive 3000.09, 'Autonomy in Weapon Systems', issued in 2012, limits kill decisions to human operators under the following stipulation in clause 4:

"Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

After several paragraphs underscoring the necessity of target selection and execution being undertaken under the oversight of a human operator, the Directive goes on to open up the possibility of developing autonomous weapon systems without any human oversight, albeit with the specific approval of senior Pentagon officials:

"Autonomous weapon systems may be used to apply non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets… Autonomous or semi-autonomous weapon systems intended to be used in a manner that falls outside the policies in subparagraphs 4.c.(1) through 4.c.(3) must be approved by the Under Secretary of Defense for Policy (USD(P)); the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)); and the CJCS before formal development and again before fielding."

Rather than prohibiting the development of lethal autonomous weapon systems, the directive simply consolidates all such developments under the explicit authorization of the Pentagon's top technology chiefs.

Worse, the directive expires on 21st November 2022 — which is around the time such technology is expected to become operational.

Indeed, later that year, Lieutenant Colonel Jeffrey S. Thurnher, a US Army lawyer at the US Naval War College's International Law Department, published a position paper in the National Defense University publication, Joint Force Quarterly.

If these puppies became self-aware, would they be cuter?

He argued that there were no substantive legal or ethical obstacles to developing fully autonomous killer robots — as long as such systems are designed in such a way as to maintain a semblance of human oversight through "appropriate control measures."

In the conclusions to his paper, titled No One At The Controls: Legal Implications of Fully Autonomous Targeting, Thurnher wrote:

"LARs [lethal autonomous robots] have the unique potential to operate at a tempo faster than humans can possibly achieve and to lethally strike even when communications links have been severed. Autonomous targeting technology will likely proliferate to nations and groups around the world. To prevent being surpassed by rivals, the United States should fully commit itself to harnessing the potential of fully autonomous targeting. The feared legal concerns do not appear to be an impediment to the development or deployment of LARs. Thus, operational commanders should take the lead in making this emerging technology a true force multiplier for the joint force."

Lt. Col. Thurnher went on to become a Legal Advisor for NATO Rapid Deployable Corps in Munster, Germany. In this capacity, he was a contributor to a little-known 2014 official policy guidance document for NATO Allied Command Transformation, Autonomy in Defence Systems.

The NATO document, which aims to provide expert legal advice to government policymakers, sets out a position in which the deployment of autonomous weapon systems for lethal combat — in particular the delegation of targeting and kill decisions to machine agents — is viewed as being perfectly legitimate in principle.

It is the responsibility of specific states, the document concludes, to ensure that autonomous systems operate in compliance with international law in practice — a caveat that also applies for the use of autonomous systems for law-enforcement and self-defence.

In the future, though, the NATO document points to the development of autonomous systems that can "reliably determine when foreseen but unintentional harm to civilians is ethically permissible."

Acknowledging that currently only humans are able to make a "judgement about the ethical permissibility of foreseen but unintentional harm to civilians (collateral damage)", the NATO policy document urges states developing autonomous weapon systems to ensure that eventually they "are able to integrate with collateral damage estimation methodologies" so as to delegate targeting and kill decisions accordingly.

The NATO position is particularly extraordinary given that international law — such as the Geneva Conventions — defines foreseen deaths of civilians caused by a military action as intentional, precisely because they were foreseen yet actioned anyway.

The Statute of the International Criminal Court (ICC) identifies such actions as "war crimes", if a justifiable and direct military advantage cannot be demonstrated:

"… making the civilian population or individual civilians, not taking a direct part in hostilities, the object of attack; launching an attack in the knowledge that such attack will cause incidental loss of civilian life, injury to civilians or damage to civilian objects which would be clearly excessive in relation to the concrete and direct military advantage anticipated;… making civilian objects, that is, objects that are not military objectives, the object of attack."

And customary international law recognizes the following acts as war crimes:

"… launching an indiscriminate attack resulting in loss of life or injury to civilians or damage to civilian objects; launching an attack against works or installations containing dangerous forces in the knowledge that such attack will cause excessive incidental loss of civilian life, injury to civilians or damage to civilian objects."

In other words, NATO's official policy guidance on autonomous weapon systems sanitizes the potential for automated war crimes. The document actually encourages states to eventually develop autonomous weapons capable of inflicting "foreseen but unintentional" harm to civilians in the name of securing a 'legitimate' military advantage.

Yet the NATO document does not stop there. It even goes so far as to argue that policymakers considering the development of autonomous weapon systems for lethal combat should reflect on the possibility that delegating target and kill decisions to machine agents would minimize civilian casualties.

Skynet, anyone?

A new report by Paul Scharre, who led the Pentagon working group that drafted DoD Directive 3000.09 and now heads up the future warfare program at the Center for New American Security in Washington DC, does not mince words about the potentially "catastrophic" risks of relying on autonomous weapon systems.

"With an autonomous weapon," he writes, "the damage potential before a human controller is able to intervene could be far greater…

"In the most extreme case, an autonomous weapon could continue engaging inappropriate targets until it exhausts its magazine, potentially over a wide area. If the failure mode is replicated in other autonomous weapons of the same type, a military could face the disturbing prospect of large numbers of autonomous weapons failing simultaneously, with potentially catastrophic consequences."

Scharre points out that "autonomous weapons pose a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces," due to any number of potential reasons, including "hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors."

Noting that in the software industry, for every 1,000 lines of code, there are between 15 and 50 errors, Scharre points out that such marginal, routine errors could easily accumulate to create unexpected results that could be missed even by the most stringent testing and validation methods.

The more complex the system, the more difficult it will be to verify and track the system's behavior under all possible conditions: "… the number of potential interactions within the system and with its environment is simply too large."
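
Scharre's arithmetic is easy to reproduce. The sketch below simply multiplies his quoted range of 15 to 50 errors per 1,000 lines of code by some assumed codebase sizes; the sizes are illustrative guesses, not figures from any Pentagon document.

```python
# Back-of-the-envelope arithmetic using Scharre's quoted industry range of
# roughly 15-50 errors per 1,000 lines of code. The codebase sizes below are
# illustrative assumptions, not figures from any Pentagon document.
DEFECTS_PER_KLOC = (15, 50)

def expected_defects(lines_of_code: int):
    low, high = DEFECTS_PER_KLOC
    return lines_of_code / 1000 * low, lines_of_code / 1000 * high

for label, loc in [
    ("small utility", 10_000),
    ("assumed weapon-control stack", 1_000_000),
    ("assumed large integrated system", 10_000_000),
]:
    low, high = expected_defects(loc)
    print(f"{label}: {loc:,} lines -> roughly {low:,.0f} to {high:,.0f} latent errors")
```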

The documents discussed here show that the Pentagon is taking pains to develop ways to mitigate these risks.

But as Scharre concludes, "these risks cannot be eliminated entirely. Complex tightly coupled systems are inherently vulnerable to 'normal accidents.' The risk of accidents can be reduced, but never can be entirely eliminated."

As the trajectory toward AI autonomy and complexity accelerates, so does the risk that autonomous weapon systems will, eventually, wreak havoc.

Dr Nafeez Ahmed is an investigative journalist, bestselling author and international security scholar. A former Guardian writer, he writes the 'System Shift' column for VICE's Motherboard, and is a weekly columnist for Middle East Eye.

He is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his Guardian work, and was twice selected in the Evening Standard's top 1,000 most globally influential Londoners, in 2014 and 2015.

Nafeez has also written and reported for The Independent, Sydney Morning Herald, The Age, The Scotsman, Foreign Policy, The Atlantic, Quartz, Prospect, New Statesman, Le Monde diplomatique, New Internationalist, The Ecologist, Alternet, Counterpunch, Truthout, among others.

He is a Visiting Research Fellow at the Faculty of Science and Technology at Anglia Ruskin University, where he is researching the link between global systemic crises and civil unrest for Springer Energy Briefs.

Nafeez is the author of A User's Guide to the Crisis of Civilization: And How to Save It (2010), and the scifi thriller novel ZERO POINT, among other books. His work on the root causes and covert operations linked to international terrorism officially contributed to the 9/11 Commission and the 7/7 Coroner's Inquest.


This story is being released for free in the public interest, and was enabled by crowdfunding. I'd like to thank my amazing community of patrons for their support, which gave me the opportunity to work on this story. Please support independent, investigative journalism for the global commons via Patreon.com, where you can donate as much or as little as you like.


