A recurring theme in sci-fi is the danger that new technology presents to mankind.
Perhaps the pinnacle of dystopian scenarios is the Singularity, the moment when artificial intelligence (AI) begins continuously self-improving to the point where we potentially lose control. This was the premise of the popular Terminator movies and of others such as I, Robot and Transcendence, each featuring a race to shut the technology down before it grew out of control.
In this discussion, I will be making the argument that defending us from technology on a per-item basis is an intractable problem, thus the best solution requires focusing on the human beings who would erroneously or maliciously utilize technology to cause harm. I’m going to suggest a far more radical measure than simple psychological profiling or background checks. In order to appreciate its necessity, the intractability problem must be fully understood.
Since this post is so large, it is split into six sections:
- What is Intractability?
- Intractability in Computer Science
- Intractability in Computer Security
- The False Sense of Security
- Detecting Harmful Intent is a Tractable Problem
- “That’s Impossible”
What is Intractability?
For our discussion, a problem is considered to be intractable if it is very difficult to solve completely. This is quite a nuanced statement:
1. Intractable and impossible are not the same thing. Impossible problems are usually prevented from being completely solved by physics as it is currently understood. For example, physically modeling a given particle that exists in our universe with 100% accuracy and the ability to query its exact state at any time is prevented by the Uncertainty Principle. This is despite the fact that we have had ample computing power and knowledge to simulate all of the measurable traits of a particle for decades.
2. “Very difficult” represents a comparison between the amount of resources we put into solving a problem vs. the value we obtain from the solution. It can be a simple ratio such as “It took me five hours to figure out the best route to get to the supermarket two miles away.” I also diverge a bit from the computational complexity definition of intractable by making “very difficult” contextual – we must describe both the problem and attempted method of solution.
For example, determining the total value of 100 random bills of currency that I hand you is generally not considered an intractable problem – you just go through them one by one and add the value of each to your total until you run out of bills. What if, however, I were to scatter the bills randomly across the Earth without telling you where (or even when) I put them? It would seem that I have made the problem ridiculously difficult, yet it’s still not truly impossible. In fact, computational complexity theory would say that I have simply created a new problem (finding the bills), which, if solved, reduces to the trivial task I described before.
The point I will be driving home throughout this post is that the problem is only intractable because of the way we’re trying to solve it. So now you’re trying to solve the problem of finding the bills; indeed, one could think of myriad ways of tasking the combined surveillance power of the NRO to spot them from space. Wouldn’t it be easier, though, to simply look at my bank statement and note the value of my last cash withdrawal?
3. Finally, “solve completely” is the nuance that puts a testable end condition to my statement. Real life does not usually avail itself of a perfect solution like in my previous example. Fortunately, many intractable problems have practical solutions (and uses) that are, as we say, “good enough.” Allow me to wax a bit pedagogical here and take you through a guided tour of a few examples:
Intractability in Computer Science
There are two classic problems in computer science that affect your daily life even if you’ve never heard of them. One applies to virtually every physical task we do; the other, we have bet our collective financial stability on being very difficult. They are the Traveling Salesman Problem (TSP) and Integer Factorization.
Traveling Salesman Problem
The TSP is in the set of problems that attempt to determine the most efficient route to take when visiting a given set of destinations. Note that this is not the same thing as the shortest path problem solved by navigation systems to provide you a route from point A to B – what differentiates them is whether you specify a set of locations to pass through on the way.
So say for instance you were planning a marathon vacation to visit Manhattan, Los Angeles, Miami, Chicago, and Washington D.C. for as cheaply as possible. This seems simple at first, and you might try to solve it by picking your first city and then picking the closest city to that one and so on until you have visited all of them.
Does that work? Not necessarily.
This approach is known as the nearest neighbor algorithm, and a naive implementation can actually fail to find any solution at all, even when many exist – just imagine a diamond-shaped collection of five cities with one in the middle. There are also cases where it does not choose the most efficient solution, especially if you consider additional criteria such as hotel and travel costs.
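To make the heuristic concrete, here is a minimal sketch in Python using straight-line distances – the coordinates below are arbitrary placeholders for illustration, not real geography:

```python
import math

# Nearest-neighbor heuristic: from the current city, always hop to the
# closest unvisited one. Fast (roughly n^2 distance checks) but, as noted
# above, not guaranteed to find the best route.

def nearest_neighbor_route(cities, start):
    route = [start]
    unvisited = set(cities) - {start}
    while unvisited:
        here = cities[route[-1]]
        nearest = min(unvisited, key=lambda c: math.dist(here, cities[c]))
        route.append(nearest)
        unvisited.remove(nearest)
    return route

cities = {
    "Manhattan": (0.0, 0.0),
    "Washington D.C.": (1.0, -2.0),
    "Chicago": (-6.0, 1.0),
    "Miami": (3.0, -10.0),
    "Los Angeles": (-20.0, -3.0),
}
print(nearest_neighbor_route(cities, "Manhattan"))
```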
Even so, I’m sure you could figure out the best route if I gave you all of the needed information and some time to plan. So how is the TSP intractable? The problem is a matter of runaway scaling – the number of routes you have to analyze increases exponentially as you add more destinations. For instance, the best known algorithm for perfectly solving the TSP in all cases would only need to evaluate 800 different routes to tackle the 5 city problem. But 10 cities would require evaluating 102,400 routes, 20 cities would try 419,430,400 routes and what about 50 cities? Around 3 billion billion routes.
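Those route counts can be checked directly: they correspond to an n² × 2ⁿ operation count, the scaling of the Held-Karp dynamic program, which I take to be the best-known exact algorithm being referenced:

```python
# Operation count n^2 * 2^n for an exact n-city TSP solve via the
# Held-Karp dynamic program. Note how quickly it explodes.

def exact_tsp_operations(n):
    return n * n * 2 ** n

for n in (5, 10, 20, 50):
    print(f"{n} cities: {exact_tsp_operations(n):,} routes evaluated")
```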
This may all seem completely academic and trivial until you realize that tackling the TSP is something UPS and FedEx do when delivering packages, something the military does when establishing a logistical supply train and something you do every time you prioritize a set of errands. If the TSP is so intractable, then how do our packages get delivered in a (usually) timely manner?
By not trying to solve the problem in an intractable way! UPS, for instance, uses the nearest neighbor algorithm on small, localized chunks of deliveries and then combines the chunks into larger ones, optimizing and combining again and again until a solution is obtained. More to the point, they broke the vicious cycle of piling ever more resources into the intractable approach and created a near-optimal solution that will scale with the population. I can’t stress this point enough.
Integer Factorization
Factorization is also a deceptively simple task – if I give you a number, you tell me what other numbers, when multiplied together, equal the original number. For instance, a factorization of the number 10 is the numbers 2 and 5, because 2 times 5 equals 10; 2 and 5 are the factors. Integer factorization requires that the factors not have decimals (e.g., 2.5), and a special class of factorization, called prime factorization, requires that the factors be prime numbers. 2 and 5 are the prime factorization of 10.
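The factorization just described can be sketched in a few lines – trial division works instantly on small numbers, which is exactly what stops being true at cryptographic sizes:

```python
# Prime factorization by trial division: divide out each candidate
# factor as many times as possible. Fine for small numbers, hopeless
# for the several-hundred-digit numbers discussed below.

def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(10))   # [2, 5]
print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```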
It turns out that the prime factorization of very large numbers (several hundred or more digits) appears to be so difficult that it’s not even worth trying. We depend upon this difficulty in order to conduct secure transactions over the internet using the RSA algorithm. This method of public-key encryption is cryptographically secure so long as creating numbers with only prime factors is much faster than factoring them.
Note I say “appears to be difficult” because we technically haven’t proven that it’s intractable yet. So keep in mind that the little lock icon next to the URL in your browser doesn’t mean “secure”; it means “probably secure for now” – unless someone proves that P = NP, or until quantum computers become reliable and powerful enough to implement Shor’s algorithm, which would make integer factorization decidedly tractable.
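To see why the difficulty of factoring matters, here is a toy RSA round trip with absurdly small primes – purely illustrative, since real keys use primes hundreds of digits long, and anyone who can factor n back into p and q can recompute the private key:

```python
# Toy RSA key generation, encryption and decryption (Python 3.8+ for
# the three-argument pow with a negative exponent). The primes here
# are laughably small; breaking this key takes microseconds.

p, q = 61, 53                 # the secret primes
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(message, ciphertext, decrypted)
```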
“My Eyes are Glazed Over”
I bring up factorization primarily because it is an interesting example of where we have cleverly applied an intractable problem as a (temporary) solution to another problem. Go team! Now let’s look at how humans interacting with computers can create their own issues of intractability:
Intractability in Computer Security
Most computer science problems are neat and precisely defined; we can usually provide a mathematical equation to show how difficult they are. Computer security, however, is chaotic and oftentimes reactive. It is common to not even know there is a problem until it’s exploited. Take for example some major incidents that occurred in 2015:
- The Office of Personnel Management was hacked. As a result, someone now has detailed background information on over 21 million current and former federal employees. This includes anyone who has completed an SF85 or SF86 in the last 15 years. 1.1 million of those records include fingerprints.
- Unpatched 2014-2015 models of Chrysler vehicles are literally hackable and controllable over the internet if Chrysler’s UConnect service is enabled. Technical details will be available a few weeks after Black Hat USA 2015 concludes. UPDATE: here is the write-up.
- Over 950 million Android phones have been remotely hackable since the day they were released. All an attacker has to do is send a text message. This can be done without the victim’s knowledge or interaction. You do realize that many states make your voter registration information public record?
- Hacking Team, an Italian hacking company with clients including the FBI, was hacked. The breach revealed that they had been selling a cache of Adobe Flash exploits that can compromise computers that simply visit a malicious webpage. How rare do you think Flash vulnerabilities are? HINT: click the link. Those bright red squares with “10.0” in them are bad.
What makes it so difficult? The intractability of securing a computer system is a multi-variable equation:
Proportional to System Capability / Usefulness
Consider, for instance, the difference between a Fisher-Price laptop and a PC on Intelink. The kid’s toy lacks any sort of networking capability or even a proper keyboard, thus has far fewer attack vectors compared to the PC which may participate on multiple networks and run hundreds of disparate software packages.
Speaking of attack vectors, do you realize it is possible to guess passwords by the sound a keyboard makes, read your screen by the electromagnetic radiation it emits, capture your every key press via wireless keyboard loggers and attack wireless networks with off-the-shelf solutions?
Proportional to Size and Complexity of Source Code
This tends to correlate with capability/usefulness. It is undeniable that software will be released with bugs. There are research efforts to automate the analysis of source code safety, but there is an upper bound to this. One limitation is that these tools can only test for a certain class of problems that we already know about.
Another, harder limitation is that we cannot always prove a particular piece of code will even stop running – a famous proof by Alan Turing. The corollary is that if we cannot determine whether a piece of code will finish, we cannot evaluate all of its possible executions ahead of time, especially within the context of a running system. Keep in mind that it’s not enough to simply determine whether a piece of code will crash or rewrite the application’s memory – we must also test whether it will leak sensitive information by its very behavior.
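Turing’s proof can be sketched as a self-reference paradox. The `halts` oracle below is a hypothetical stand-in – the whole point of the proof is that it cannot actually be written:

```python
# Turing's diagonal argument, sketched in Python. Suppose a perfect
# halting oracle existed; then `paradox` would halt exactly when the
# oracle says it does not, a contradiction. So no such oracle exists.

def halts(func, arg):
    """Hypothetical oracle: True iff func(arg) eventually stops."""
    raise NotImplementedError("no general implementation can exist")

def paradox(func):
    if halts(func, func):   # if the oracle says we would halt...
        while True:         # ...loop forever instead,
            pass
    return None             # ...otherwise halt immediately.

# paradox(paradox) can be neither halting nor non-halting, which is
# the contradiction at the heart of the proof.
```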
Proportional to Number of Individuals with Access
Computer security by and large revolves around keeping unauthorized users out of a system. Unfortunately, strong passwords and multi-factor authentication won’t prevent a user with access from being malicious or socially engineered. Compartmentalization of information is not a solution either; it is merely a method of mitigating the damage from a single breach. Speaking of which…
Proportional to Ratio of Attackers to Defenders
The Advanced Persistent Threat (APT) is probably the most intractable aspect of computer security. It’s a simple numbers game – no matter how well you prepare, a single mistake on your side is all an attacker needs. In addition, it is not always (or even usually) possible to know the exact target until the attack is in progress. Thus, you cannot achieve safety simply by having more defenders than attackers, because the allocation of resources is asymmetrical:
For example, say a company of 500 employees hires 10 IT professionals to secure its computer systems. A small hacking team of five people decides to breach it. This is not 10 white hats vs. 5 black hats in a Hollywood style hack-a-thon. It is actually 10 people trying to ensure the other 500 people in the company do not fall victim to bribes, manipulation, blackmail, phishing, incompetence, social engineering and good old fashioned eavesdropping or theft. All of this is in addition to the automated penetration testing the hackers will be doing the entire time, looking for even a single mistake made by anyone involved in the process of creating, securing or maintaining the system.
If this all seems hypothetical to you, perhaps you haven’t heard about the breaches of RSA Security, Lockheed Martin, Boeing, DuPont, Sony, or a few others. For an up-to-date list, check with your friendly neighborhood Google news aggregator. The RSA breach should be the most eye opening – they were targeted because they built the technology used to secure other high-value systems. Yet the breach was not due to a failure of their technology, it was a failure of the humans employed there.
If by now you still don’t believe that computer security is an intractable problem, I’d encourage you to attend a Black Hat or DEF CON event. Just don’t bring any electronic devices.
Yeah. I go into so much detail here because I’m about to move from the realm of academics into that of educated hypothesis. The premise of my argument is that the intractability we face is mathematical in nature, even though we currently lack a precise equation to define it. Thus it’s very important that, as I continue, I am not making a simple argument from incredulity or committing other logical fallacies.
On that note, I think we’re ready to come full circle and start tackling some elephants. Let’s start with gun control:
The False Sense of Security
NOTE: the following is a thought exercise demonstrating why you can’t simply delete a technology to protect us from it. It is not intended to make a political statement.
Near the beginning of this post I posited:
“defending us from technology on a per-item basis is an intractable problem”
Guns are a form of technology. Violence by gun is a threat to our safety. Eliminating guns, however, is an intractable approach to increasing our safety that does not solve the actual problem: murderous and suicidal intent. Indulge with me in a thought experiment to pry that sentence open:
The Magic Gun Removal Button
Let’s say I design and push a one-time button that magically eliminates every gun and gun manufacturing facility on the planet. Disregarding the collective mass confusion I’ve caused, what have I actually achieved?
“You’ve eliminated gun violence.” Ok. Have I though? Or did I simply defer the next incident of gun violence to a future date? I have not removed from our collective knowledge the composition of modern gun powder. Or removed the ability to purchase raw materials and 3D print or CNC them into new firearms using open-sourced designs. What I have certainly done is make the law enforcement effort of preventing future gun violence a more nebulous exercise that now requires them to track every possible method of creating a firearm and intercede before one is constructed or used.
Another problem is that there are more effective ways to commit mass violence that are still readily obtainable. For those who take interest in how to hurt people, the methods are obvious and plentiful, for anyone else, they will be harder to recognize than a loaded gun. Thus we are playing a potentially zero-sum game of Whac-A-Mole whereby whacking the largest and most obvious mole has spawned newer, less obvious and more insidious moles. NOTE: I have intentionally withheld a listing of viable, present-day methods that would have reinforced this point. If you do not accept the point here, know that I will address it again later with future technologies.
“But you’ve still reduced gun violence, and you cannot prove that other methods of violence will increase proportionately.” Absolutely, and I won’t try to. Instead, let’s continue the experiment.
One thing I’d like to point out now is that I have deprived law enforcement and law-abiding citizens of a universal method to defend themselves from deadly harm. If I go one step further and outlaw the production of new firearms, haven’t I now guaranteed that the only individuals who will have one will be criminals? How then do you properly defuse an active shooter situation?
“How can there be active shooters if you have eliminated all guns and banned the production of new ones?” A popular argument against the war on drugs is that it doesn’t work anyway. Whether you subscribe to that or not, what is irrefutable is that the illegal drug trade remains viable and enormous. The advantage a firearm confers over knife- and bat-wielding counterparts is too great for an organized crime outfit or paramilitary to ignore. Just like drugs, the production of guns will begin anew, only now it will be entirely black market with no oversight.
“Fine, give the police and military access to guns.” And now we’ve come full circle with two points: protecting us from technology by eliminating it is intractable, and banning it prevents us from guiding its use and adoption.
“You don’t have to ban guns, just make them harder to get via background checks and psychological screening.” This does not prevent their illegal acquisition.
“But once again, you’ve still reduced overall gun violence. I fail to see how this is a bad thing.” I alluded to the real problem earlier. Right now the 0.888 guns per citizen in the United States is a low-hanging, diversionary fruit that feels good to scapegoat as the reason for all violence. There will come a time, sooner rather than later, when for every gun there exists another common item just as lethal and just as readily available. When we can no longer point to any one technology as more lethal than another, we will still face the same issue of humans having harmful intent.
So now we bring our experiment to a close by pointing out the obvious: there is no magic technology removal button. How then do we protect ourselves? There is a strong statistical correlation between culture, poverty, education and violence, but I want to leave the first three for other (huge) posts I’m going to write. So let’s focus on directly addressing violence:
Detecting Harmful Intent is a Tractable Problem
The crux of the matter is that some humans desire to do others harm. My argument is that detecting individuals with harmful intent is a tractable problem, whereas defending everyone from every possible threat method is not. Here’s a question for you: if a terrorist had a capsule of smallpox suspended in the ink of a pen, would any TSA agent on Earth recognize the danger?
Thus I advocate the research, development and lawful application of what I call an intent detector. This would be a device that is capable of determining whether a human being means to maim or commit murder in the near term future. Installed for instance at gateways to public venues and methods of transport, it would assist in screening out perpetrators of violence before they obtained access to their victims.
Such a device would invert the nature of the public safety problem. Instead of attempting to keep pace with the rate of technological advancement and proliferation, what I am proposing would enable us to focus on that which cannot evolve so rapidly: the human being.
“This was a decent post until you started the crazy talk”
OK. Let me ask you a few questions then:
- Do you believe there is any law enforcement or counter-terrorism agency in the world that would not want access to such a device?
- Do you believe such a device is impossible to design?
- Do you believe that not wanting such a device to exist will prevent its invention?
- Would you prefer to think about how to lawfully utilize such a device before or after it is invented?
These questions are purposely rhetorical to assist you in challenging any cognitive dissonance that may be occurring in your mind right now. I will address the impossibility issue in a moment, but first let’s precisely define what the device should do and how it should be used:
The Intent Detector
Ideally, the intent detector shall be a device that reliably answers the question “does this individual intend to kill or maim?” with a “yes” or “no” answer. The first iterations of the technology may not be capable of a precise “yes” or “no” and instead would provide a probability or percentage. The interpretation of that value would be dependent upon whatever protocol was in place, but the end result, whether a human or the device provides it, would be “yes” or “no.”
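As a sketch, the protocol might collapse the detector’s probability into the yes/no answer like this – the 0.95 threshold and the probability itself are invented placeholders, since no such device exists:

```python
# Collapsing a hypothetical probability reading into the protocol's
# yes/no result. DENY_THRESHOLD is a made-up protocol parameter.

DENY_THRESHOLD = 0.95

def screen(intent_probability):
    """Return the protocol's answer for one scanned individual."""
    return "deny access" if intent_probability >= DENY_THRESHOLD else "allow"

print(screen(0.98))  # deny access
print(screen(0.10))  # allow
```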
A “yes” should result in simple denial of access. For instance, if the individual were entering a security checkpoint at the airport, they should be turned away and their ticket refunded. There are simply too many threat vectors available to prevent a determined individual with harmful intent from succeeding in a crowded airport or airplane, so why try? Just send them away.
That’s it. Certainly, the portability of such a device and its scanning rate would limit its utilization. Also, if such a device required physical contact with the person being scanned, it would not be as useful for preventing “ordinary” violent crime.
A device with the ability to determine a person’s intent obviously has the potential for invasion of privacy at a level that has never existed before, so let’s get in front of that now. When used to scan the public:
- Such a device MUST NOT be capable of storing or transmitting any identifiable details about the individual being scanned. This is to prevent the bulk collection of personal information.
- Any determination the device makes regarding a person’s intent SHOULD NOT be permissible as evidence in a court of law for the purpose of prosecuting a crime. Access to a person’s intent seems like a surefire way to prevent wrongful convictions, but it would also enable the prosecution of thoughtcrimes.
- Detected intent SHOULD be similar to probable cause in terms of what it permits law enforcement to do. Intent detection will be a valuable crime prevention tool, but again, it should not be utilized in the prosecution of an actual crime – that prosecution must be successful without needing to read a person’s thoughts.
Single Point of Failure?
You may be thinking that relying on a single device for our safety would be dangerous. It is. It can’t replace the existing security infrastructure we have in place. What it can do is enable ordinary security personnel to protect against threats they weren’t trained for.
In addition, being a single point of failure enables us to know the target of attack. Instead of focusing on creative ways to do us harm that elude detection, potential harm doers are now forced to focus on creative ways to bypass the intent detector. If you remember back to the discussion on computer security, it was not knowing the target that created an untenable, asymmetrical allocation of resources.
So now, as any good futurist should do, let me conclude with how such a device might be possible:
Is a harmful intent detector even possible? I believe we can answer that by answering another question: “what can we know about what a person’s mind is doing right now?”
The Functional MRI
We can use functional MRIs to localize brain activity in response to stimuli. For instance, if I show you two cat pictures and ask you which one is funny and which one is cute, I can use a functional MRI to determine what parts of your brain are responsible for differentiating the two. I can also determine if a picture I show you is funny or cute by imaging your brain while you look at the picture. The downside to fMRI right now is that it uses enormous, super-cooled electromagnets to make your protons stand up and scream hello. It’s not dangerous, it’s just very big technology.
Seem like pie-in-the-sky? We are already researching the use of fMRIs for lie detection. A perfect lie detector would also satisfy the requirement of being a harmful intent detector. Remember that the harmful intent detector only needs to answer a single yes/no question!
Deception and Body Language Detection
Similar to lie detection, Dr. Paul Ekman’s research, and others like it, attempts to detect deception by reading what are known as micro-expressions – quick, involuntary displays of emotion. Although his work and materials focus on training humans to do this, artificial neural networks are well suited to automating such detection; they are also indefatigable, though they can inherit whatever cultural or personal biases exist in their training data.
You could even take this concept one step further by training neural networks to recognize harmful intent via body language and behavior. Imagine such a system watching inmates in a prison and correlating certain behaviors with subsequent attacks. At first it would be terrible at detection, but over time it would begin to recognize common behaviors preceding violence, such as averted glances, loitering and higher-pitched speech.
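As a toy illustration of that learning loop, here is a single logistic unit trained on fabricated feature vectors (say, averted-glance count, loitering time and speech pitch), labeled 1 when violence followed. Everything here – the data, the features, the labels – is invented; a real system would need vastly richer inputs and careful validation:

```python
import math
import random

# One logistic unit trained by stochastic gradient descent on synthetic
# behavior scores. Examples labeled 1 (violence followed) are generated
# with higher feature values than examples labeled 0.

random.seed(0)

def make_example():
    label = random.randint(0, 1)
    base = 0.7 if label else 0.3  # pre-violence behavior scores higher
    return [random.gauss(base, 0.1) for _ in range(3)], label

def predict(weights, bias, x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

data = [make_example() for _ in range(200)]
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.5

for _ in range(50):                      # training epochs
    for x, y in data:
        g = predict(weights, bias, x) - y  # gradient of the log loss
        weights = [w - lr * g * xi for w, xi in zip(weights, x)]
        bias -= lr * g

correct = sum((predict(weights, bias, x) > 0.5) == bool(y) for x, y in data)
print(f"training accuracy: {correct / len(data):.0%}")
```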
This is no longer science fiction. We’re seeing the beginnings of this now. Take for instance this adaptive face reading ad. It uses an evolutionary algorithm to improve its appearance over time based on the happy, sad and neutral facial expressions of the people who view it. In a few years’ time these ads could be worldwide, part of a collective effort to train machines to understand how we’re feeling without having to ask us.
Speech Emotion Recognition
We can also tell how a person is feeling by the way they talk. This is a steadily advancing field that we’ve actually been doing for a while now. As amazing as it would be, speech emotion recognition alone is probably not enough to reliably detect harmful intent, even though it could still be a useful heuristic within a larger system.
Humans leak information through their pores and by the breath they exhale. An intent detection system could be made even more effective by learning to detect the biometric signatures of, for instance, an adrenal reaction. A crude version of this ability already exists in the trace-detection machines used at airports now to detect explosive residue.
Electroencephalography and Magnetoencephalography
These technologies are important because they detect, in real time, the electrical component of synaptic activity – the signaling between neurons that we call thought. EEG directly measures the electrical potentials, while MEG measures the magnetic fields those currents produce.
Although our current understanding of the brain is far too limited to pluck complete sentences from our head, neuroscience is a burgeoning area of study. Research efforts with MEG have been able to detect certain psychological conditions such as schizophrenia and portable EEGs are already being used to do really cool things like drive cars and operate your mouse and keyboard.
This is also an area I am personally conducting research on – using deep machine learning to make sense of the pattern of brain emissions created by thinking. I’ll be posting my results in my other blog in the next few months.
Prediction via Analysis of Historical Data
In theory, if enough prior information is known about someone, we should be able to determine the likelihood of them committing a violent crime in the near future. This is something statisticians have been attempting to do for hundreds of years and more recently we’re seeing machine learning being applied to the task. I’d like to point out though that this is not the same thing as the intent detection I describe:
- Prediction cannot work without access to historical information about an individual. A predictor trained to analyze the last year of a person’s Facebook posts, for instance, will fail to operate if that feed is not available.
- Prediction cannot be anonymous or universal. It requires information specific to a person’s identity. An intent detector, on the other hand, should be able to identify imminent violent behavior simply from the current or very recent state of the individual. This is the key distinction between prediction and intent detection.
In other words, intent detection provides instantaneous protection of a specific location, whereas prediction monitors an entire population over time.
So does prediction have any place in preventing or reducing crime? Absolutely. In fact, it has the potential to prevent as much crime over time as intent detection, or more. It will allow people to be helped via counseling or other intervention before they commit to carrying out whatever violence they’re contemplating.
We’re There, Get Ready
I’ll argue that the only thing we’re actually missing now is a precise physiological description of harmful intent – and that the necessary technology to detect it already exists. Machine learning will be the key to tying raw sensory inputs to accurate classifications of intent.
Although not the panacea of safety, the ability to detect imminent violence in the field will unquestionably save lives. Functioning as the last line of defense, intent detection will help us tackle the currently intractable problem of protecting us from our biggest threat: each other.
Thank you for reading! I welcome any feedback, this blog is meant to start a conversation.
The featured image of this post is an optimal route for traveling to a set of 13,509 US cities with a population of more than 500. It was computed by a team of scientists in collaboration with the now defunct CRPC.
6 thoughts on “The Intractability Problem”
Dude, that was quite a long post. Came here from D3 forums. Good stuff. Easily understandable and a mix of known and unknown stuff for me. And well written. Hope you can write shorter though haha.
“Harmful Intent Detector” is definitely going to happen. Whether people want it or not. However, like you said it is all about the application and not the technology that is more important. I mean the autocratic states will use such and any other technology to maximize their viewpoints and ‘laws’. However, it has to go through the necessary cycles and fights (if necessary) until it can be used ‘balanced’. For e.g. you have given one solution where it is used to stop people at the border instead of using it to ‘punish’. But most people (Govs) would want to go overboard with the technology that is present rather than go with a more ‘balanced’ solution.
Anyways, very good write up. If I start writing back, it will be another blog post instead of a comment. Lol. Will bookmark your blog and hope I can read more in the future.
I blog at http://anw.in/ (redirects).
I know, this post was huge! Since it was my first on here I wanted to be extra careful to establish my argument. I’m hoping to keep the future posts a bit shorter, but no guarantees…
One thing I can certainly imagine is even democratic governments using a device like this to restrict access to classified information, regardless of whether the public accepts it. And it may be the more oppressive governments that bring to light how not to use it.
Thank you for the feedback anwin 🙂
Excellent, well written, thought-provoking subject that is critically important. We’re on the development ground floor now and in need of this kind of technology now, but we’re still a very long way from implementation. One of the biggest sticking points to actual use is the potential for lawsuits of those that get denied a service because they triggered a threat warning. The Supreme Court will eventually have to decide whether our thoughts are admissible as evidence, or privacy-protected information.
Pat, thank you so much for the feedback. It is going to be interesting seeing how this plays out legally – even more so than watching the technology develop. I expect this issue to come up as we develop neural interfaces to computers – just like how Cortana, Siri and Google Now defer speech recognition to internet based servers, they will probably have to do the same thing with processing EEGs. Where does it go from there?