Tag Archives: intractability

Apple vs. The FBI

A court order can only do so much…

By now I’m sure most of you have heard about the debacle between the FBI and Apple.

Here’s a brief summary before I explain the ramifications:

The FBI has the San Bernardino shooter’s iPhone 5C but they can’t get into it because it’s locked behind a passcode. The passcode is used to key device-wide encryption, so they can’t simply remove the flash memory and read it – all they would recover is indecipherable ciphertext.

They also can’t brute-force the passcode because Apple phones have a feature (which can be enabled by the user) that wipes the phone clean after a certain number of wrong guesses. That feature may be enabled on this phone, so they can’t risk it.

What the FBI has asked Apple to do is create a deliberately compromised version of its operating system that allows unlimited guesses, imposes no delay between guesses, and accepts guesses over the Lightning port and possibly Bluetooth or WiFi. They then want Apple to sign this compromised OS image with Apple’s private software-distribution key (so that the phone accepts it) and boot the phone with it so a law enforcement agent can brute-force the passcode.
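To see why removing the retry limit and delays matters, here is a minimal sketch of such a brute-force search. The key derivation is illustrative only (PBKDF2 with a deliberately low iteration count and a made-up per-device salt), not Apple’s actual scheme, which also entangles the passcode with a key burned into the hardware:

```python
import hashlib
import secrets

# Stand-in for per-device data mixed into key derivation (hypothetical).
DEVICE_SALT = secrets.token_bytes(16)

def derive_key(passcode: str) -> bytes:
    # Illustrative key derivation; iteration count kept low for the demo.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_SALT, 1_000)

def crack(target_key: bytes):
    # With no retry limit and no delay, exhaust every 4-digit code.
    for n in range(10_000):
        guess = f"{n:04d}"
        if derive_key(guess) == target_key:
            return guess
    return None

print(crack(derive_key("7341")))  # the "unknown" passcode falls in seconds
```

A 4-digit passcode has only 10,000 possibilities, which is why the wipe-after-N-failures and per-guess delays are the only things standing between the passcode and a trivial exhaustive search.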

The legal issue: the FBI has a court order permitting them access to this specific iPhone and ordering Apple to provide it as described above. Apple has the technical ability to fulfill this request but is refusing to do so on principle. This is somewhat akin to the FBI having permission to attempt to crack a bank vault but Apple won’t let them in the door.

Why?

Tim Cook’s open message misrepresents a few of the technical issues, but his legal argument is very sound: Apple will probably challenge this all the way to the Supreme Court. If they lose and comply with the order, the legal precedent will be set, and the government will have the (potential) ability to order this procedure for other phones.

Granted, that’s a slippery slope argument, but I’ve got another one for you: this newly acquired ability will quickly become pointless as terrorists begin using their own custom software for end-to-end encryption of communications. Such software can be written by a single developer with a smidgen of crypto competency and decent computer science savvy. The necessary encryption algorithms are freely available in open source libraries.
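To illustrate just how low that bar is, here is a toy authenticated encryption scheme built entirely from Python’s standard library hash primitives – a sketch only, to make the point that the building blocks are freely available; real software would use a vetted open source library such as libsodium rather than a hand-rolled cipher:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing key + nonce + counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    # Authenticate nonce + ciphertext so tampering is detectable.
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

A few dozen lines, no special tooling, and any two parties who share a key can exchange messages that no third party – court order or not – can read or undetectably alter.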

This loops us back to Tim Cook’s original point, though not in the way he was thinking: once Apple designs a backdooring process, it will become increasingly useful only to nefarious people who want access to the phone’s everyday features – the ones you and I use (banking, paying with Apple Pay, email, SMS messages, Facebook, etc.). Any decently sophisticated criminal or terrorist organization would have nothing to fear once word spread and workarounds were developed. If you’re the paranoid type, it would also make it easier for the government to spy on you.

There’s one more complication here: nothing is stopping Apple from building in hardware that makes it intractable even for them to override the passcode behavior in the future. Indeed, the only reason a software solution is somewhat viable now is that the 5C uses the older A6 chip. The A7 and later (used in all subsequent iPhone models) already enforce an escalating delay after each wrong guess.
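The effect of an escalating delay is easy to quantify. Here is a sketch of such a policy – the schedule below is invented for illustration, not Apple’s actual one – showing why hardware-enforced delays alone push an exhaustive 4-digit search from seconds to over a year:

```python
def retry_delay_seconds(failed_attempts: int) -> int:
    # Hypothetical schedule: no delay for the first few attempts,
    # then 1 min, 5 min, 25 min, ... capped at one hour per guess.
    if failed_attempts < 5:
        return 0
    return min(3600, 60 * 5 ** (failed_attempts - 5))

# Worst-case wall-clock time to try all 10,000 four-digit codes:
total = sum(retry_delay_seconds(n) for n in range(10_000))
print(f"worst case: {total / 86400:.0f} days")  # → worst case: 416 days
```

With delays enforced by hardware rather than the OS, a signed replacement operating system can no longer remove them – which is exactly the intractability the text describes.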

Bottom line: this is not something Apple can be reasonably asked to do going forward. Building in a backdoor from the factory is absolutely unacceptable for the obvious reasons. I really don’t know what law enforcement can do about this – good, well implemented encryption is impervious regardless of what resources are applied to cracking it, barring specific mathematical discoveries and dramatic increases in computing power.

Micro Expressions are Getting the Machine Learning Treatment

It Knows How You Feel

A few months ago I started this site with an article on intent detection. In it I made the argument that protecting us from technology by banning it is intractable, so it’s probably more beneficial to focus on detecting humans who are about to carry out malicious acts.

One of the tools I proposed for achieving intent detection is the application of machine learning to detect micro expressions – those quick, involuntary displays of emotion over which we have little to no control.

It is fascinating watching a field develop before your very eyes. Recently, a paper was published to arXiv on the application of machine learning to the detection of micro expressions. MIT Technology Review has also written a summary of the paper.

So, have these researchers solved the problem? Not yet, but as it goes with most computer science, their work sets a new bar for “state of the art.” To paraphrase, the advances they have made are:

1) They developed a new method to detect when a micro expression occurs in a video that does not require prior training. This is important because, in order to recognize something, most computer vision systems need to be told where it is in the first place. The authors use the terms “spotting” and “recognizing” to distinguish the two tasks. Since their method requires no training, it makes acquiring a large sample of micro expressions to work with much easier. This is probably their most important achievement.

2) They utilized a new method to amplify the distinguishing features of a micro expression to make it more recognizable. This is important because micro expressions are generally very subtle, which makes them harder to classify.

3) They investigated multiple methods to mathematically represent the features needed to distinguish micro expressions, and analyzed their relative performance. This is important because the feature representation that works best on, say, a color video does not work as well on near-infrared video. Also (somewhat obviously), using a high-speed camera makes micro expressions easier to recognize.

4) They created an automatic recognition system that can detect and classify micro expressions from long videos with performance comparable to humans.

For those into machine learning, they performed the actual classification task with a linear Support Vector Machine. No fancy deep learning or neural networks – just good old large-margin classification with customized feature descriptions.
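For the curious, the classifier family they used looks like this in scikit-learn – a minimal sketch, not the authors’ pipeline; the feature vectors here are random stand-ins for the spatio-temporal descriptors a real system would extract from video:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# 200 synthetic "videos", each summarized as a 64-dimensional descriptor,
# labeled with one of two emotion classes (linearly separable by design).
X = rng.normal(size=(200, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Large-margin classification over fixed feature descriptors.
clf = LinearSVC(C=1.0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The heavy lifting in such systems is in the feature engineering (spotting, amplification, representation), after which a linear decision boundary often suffices.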

It will be interesting to watch the field evolve over the next few years as researchers start applying the advancements of deep learning to the problem. We’re getting ever closer to viable intent detection.

Hundreds of Automatic License Plate Readers Found Wide Open by the EFF

Oops…

Hundreds of automatic license plate recognition (ALPR) systems were found wide open on the Internet by the Electronic Frontier Foundation (EFF). These systems potentially store years’ worth of archival data indicating which vehicles drove past a particular place at a particular time. Amazingly, this is not even that big a deal anymore – a single competent developer can now build an ALPR system from scratch.

This is a good example of how one’s digital footprint grows over time by factors outside our direct control. It also demonstrates one component of the intractable nature of computer security – incompetence.

The Privacy Equation

Edward Snowden recently did an AMA (“Ask Me Anything”) on Reddit where he said:

Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.

A pithy statement, is it not? Unfortunately, the situation is not so simple – privacy and free speech are at odds with each other both technologically and legally, because the ability to preserve privacy and the ability to exercise free speech are inversely related through the same fundamental processes. Simply stated, that which makes free speech more possible makes privacy less possible.

In this article I will show how the degradation of one’s privacy is inevitable and potentially accelerates over time due to factors outside one’s direct control. This is a recent phenomenon brought about by the digitization of information, always-on connectivity and continuous advances in machine learning. These technologies, and the infrastructures built on them, also facilitate the propagation of uncensored free speech.

Thus one can accept the futility of preserving one’s privacy yet still cherish one’s freedom of expression. One day we will truly have very little to hide, regardless of whether we have something to say. Continue reading The Privacy Equation

The Intractability Problem

A recurring theme in sci-fi is the danger that new technology presents to mankind.

Perhaps the pinnacle of dystopian scenarios is the Singularity: the moment when artificial intelligence (AI) begins continuously self-improving to the point where we potentially lose control. This was the premise of the popular Terminator movies and of others such as I, Robot and Transcendence, each featuring a race to shut the technology down before it grew out of control.

In this discussion, I will be making the argument that defending us from technology on a per-item basis is an intractable problem, thus the best solution requires focusing on the human beings who would erroneously or maliciously utilize technology to cause harm. I’m going to suggest a far more radical measure than simple psychological profiling or background checks. In order to appreciate its necessity, the intractability problem must be fully understood. Continue reading The Intractability Problem