So, in the day or two since I posted the piece about “Big Filter”, I’ve gotten several calls, comments and emails that all seemed to focus on the scary notion of “machines that think like us”. Some folks went all “isn’t that what Skynet, The Matrix, and (if you’re older, like me) The Forbin Project and W.O.P.R. were on about?” If machines start to think like us, doesn’t that mean all kinds of bad things for humanity?
Actually, what I said was, “We have to focus on technologies that can encapsulate how people, people who know what they’re doing on a given topic, can inform those systems… We need to teach the machines to think like us, at least about the specific problem at hand.” Unlike some people, I have neither unrealistic expectations for the grand possibilities of “smart machines”, nor do I fear that they will somehow take over the world and render us all dead or irrelevant. (Anyone who has ever tried to keep a Windows machine from crashing or bogging down or “acting weird” after about age 2 should share my comfort in knowing that machines can’t even keep themselves stable, relevant or serviceable for very long.)
No, what I was talking about, to use a terribly out-of-date phrase, was what used to be known as “Expert Systems”, a term out of favor now, though that doesn’t mean the basic idea is wrong. I was talking about systems that are “taught” how someone who knows a very specific topic or field of knowledge thinks about a very specific problem. If, and this is a big if, you can ring-fence the explicit question you’re trying to answer, then it is, I believe, possible to teach a machine to replicate the basic decision tree that will get you to a clear, and correct, answer most of the time. (I’m a huge believer in the Pareto Principle, or “80-20 rule”, and “most of the time” is more than good enough to save gobs and gobs of time and money on many, many things. More on that in a moment.)
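(To make that concrete, here’s the world’s smallest sketch of what I mean by a ring-fenced decision tree, in Python. The question, the feature names and the rules are all invented for illustration – the point is only that a narrow, explicit question can be answered by a handful of branches an expert already walks through by habit.)

# A hypothetical, deliberately tiny "expert system" for one ring-fenced
# question: does this login attempt deserve a second look? Every feature
# name and rule below is invented for illustration.

def should_challenge_login(attempt: dict) -> bool:
    """Return True if the login attempt deserves extra verification."""
    # An unfamiliar device in an unfamiliar country is the classic red flag.
    if attempt["new_device"] and attempt["new_country"]:
        return True
    # An odd-hour login is forgivable if the typing cadence still matches.
    if attempt["odd_hour"] and not attempt["cadence_matches"]:
        return True
    # Everything else passes; per the 80-20 rule, don't chase the tail.
    return False

print(should_challenge_login(
    {"new_device": True, "new_country": True,
     "odd_hour": False, "cadence_matches": True}
))  # True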
A few years ago now, I read a book called “The Gift of Fear” by Gavin de Becker, an entertaining and easy read for anyone interested in psychology, crime fighting, or the stuff I’m talking about. The very basic premise of that book, among other keen insights, is that our rational minds can get in the way of our limbic or caveman brains telling us things we already “know”, the kind of instantaneous, can’t-explain-it-but-I-know-I’m-right, in-our-gut knowledge that our rational brains sometimes override or interfere with, occasionally to our great harm. (See the opening chapter of The Gift of Fear, in which a woman whose “little voice,” as I call it, told her there was something wrong with that guy, but she didn’t listen, and was assaulted as a result. Spoiler alert: she did, however, also escape that man, who intended to kill her, using the same intuition. Give it a read.)
De Becker, himself a survivor of abuse and violence, went on to study the evil that men do in great detail and, from there, to codify a set of principles and metrics that, encoded into a piece of software, enabled his firm to evaluate risk and “take-it-seriously-or-not-ness” for threats against the battered spouses, movie stars and celebrities his physical security firm often protects. Is this Skynet taking over NORAD and annihilating humanity? Of course not. What it is, however, is the codification of often-hard-won experience and painful learning, the systematizing of smarts.
I was thinking about all this in part because, in addition to the comments on my last post, I’m in the middle of re-reading “Blink” (sorry, I appear to be on a Malcolm Gladwell kick these days). It’s about snap decision making and the part of our brain that decides things in two seconds, without rational input or logical thought. A few years ago, as some of you know, my good friend Nick Selby of (among many other capes and costumes) the Police Led Intelligence Blog decided he was so passionate about applying technology to making the world better and communities safer that he both founded a software company (streetcred software – congrats on winning the Code for America competition this year!) and became a police officer to gain the expertise he and his partner would encode into the software. He told me a story from his days at the Police Academy. I may have the details wrong on this bit of apocrypha, but you’ll get the point.
During training outside of Dallas, there was an experienced veteran who would sometimes spend time helping catch smugglers running north through Texas from the Mexican border. “Magic Mike,” I’ll call him, since I can’t remember his real name, could stand on an overpass and tell the rookies, “Watch this.” He’d watch the traffic flowing by beneath him, pick out one car seemingly at random and say, “That one.” (Note that, viewed at 60 mph and looking at the roof from above, the age, gender, race or other “profiling” concerns about the occupants are essentially a non-issue here.)
Another officer would pull over the car in question a bit down the road, and, with shocking regularity, Magic Mike was exactly right. How does that happen?! And can we capture it? My argument from yesterday is that we can, and should. We’re not teaching intelligent machines in any kind of scary, Turing-Test kind of way. No, it’s much clearer and more focused than that. Whatever went on in Magic Mike’s head – the instantaneous Mulligan Stew of car make, model, year, speed, pattern of motion, state of license plate, condition etc. – if it can be extracted, codified and automated, then we can catch a lot more bad guys.
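For argument’s sake, here’s one hedged guess at what “extracted, codified and automated” might look like in Python: every cue becomes a named, weighted feature, and anything above a tuned threshold gets a second look. The cues and weights below are pure invention on my part, not Mike’s actual tells – the shape of the thing is what matters.

# A hypothetical weighted-cue version of Magic Mike's snap judgment.
# All cues and weights are invented for illustration; the point is that
# each thing he reads in two seconds becomes an explicit, auditable input.

SMUGGLER_CUES = {
    "sagging_rear_suspension": 3.0,   # a heavy, hidden load
    "driving_exactly_at_limit": 2.0,  # over-careful driving pattern
    "fresh_paint_on_old_body": 2.5,   # bodywork that could hide compartments
    "out_of_state_plate": 1.5,
    "very_recent_registration": 1.0,
}

def suspicion_score(observed_cues: set) -> float:
    """Sum the weights of whichever cues an observer (or a camera) flags."""
    return sum(weight for cue, weight in SMUGGLER_CUES.items()
               if cue in observed_cues)

THRESHOLD = 4.0  # tuned so "most of the time" is right, per the 80-20 rule

print(suspicion_score({"sagging_rear_suspension",
                       "driving_exactly_at_limit"}) >= THRESHOLD)  # True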
I personally led a similar effort in cyberspace. Some years ago, AOL decided that member safety was a costly luxury and started laying off lots of people who knew an awful lot about Phishing and spoof sites. Among those in the groups being RIF’ed was a kid named Brian, who had spent untold hours sitting in a cube looking at Web pages that appeared to be banks, or Paypal, or whatever, saying, “That one’s real. That one’s fake. That one’s real, that one’s fake.” He could do it in seconds. So, we hired him, locked him in an office and said, “You can’t go to the bathroom till you write down how you do that.”
He said it was no big deal – over the years he’d developed a 27-step process so he could teach it to new guys on the team. Just one of those steps turned out to be “does it look like any of the thousands of fake sites I’ve gotten to know over the years?” Encapsulating Brian’s 27 steps in a form a machine could understand took 400 algorithms and nearly 5,000 individual steps. But… so what? When the weeks of effort were done, we had the world’s most experienced Phish-spotter built into a machine that thought the way he did, and worked 24×7 with no bathroom breaks. We moved this very bright person on to other useful things, while a machine now did what AOL used to pay a team of people to do, and it did it based not on simple queries or keywords, but by mimicking the complex thought process of the best guy there was.
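I obviously can’t reproduce Brian’s 27 steps (or our 400 algorithms) here, but a toy sketch shows the two kinds of steps involved: explicit checklist rules, plus that “have I seen this fake before?” step, which is really just similarity against a corpus of known spoofs. Every rule, threshold and sample page below is invented.

# A toy sketch of the two kinds of steps described above: explicit
# checklist rules, plus fuzzy similarity to known fakes. Everything
# here is invented for illustration; the real system was far larger.

from difflib import SequenceMatcher

KNOWN_FAKE_PAGES = [
    "<html><title>PayPa1 Secure Login</title><body>Verify now!</body></html>",
    "<html><title>Your Bank Online</title><body>Account suspended</body></html>",
]

def rule_score(url: str, html: str) -> int:
    """Count how many simple checklist rules fire."""
    hits = 0
    hits += "@" in url                 # userinfo trick hiding the real host
    hits += url.startswith("http://")  # no TLS on a page asking for a login
    hits += "suspended" in html.lower() or "verify now" in html.lower()
    return hits

def resembles_known_fake(html: str, threshold: float = 0.8) -> bool:
    """Brian's pattern memory, crudely: fuzzy-match against known fakes."""
    return any(SequenceMatcher(None, html, fake).ratio() >= threshold
               for fake in KNOWN_FAKE_PAGES)

def looks_like_phish(url: str, html: str) -> bool:
    return rule_score(url, html) >= 2 or resembles_known_fake(html)

print(looks_like_phish(
    "http://paypal.com@evil.example/login",
    "<html><title>PayPa1 Secure Login</title><body>Verify now!</body></html>",
))  # True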
If we can sit with Brian, who can spot a Phishing site, or de Becker, who can spot a serious threat among the celebrity-stalker wannabes, or Magic Mike, who can spot a smuggler’s car from an overpass at 70 miles an hour; if we can understand how they know what they know in those instant flashes of insight or experience, then we can teach machines to produce an outcome based not just on simple rules, but by modeling the thoughts of the best in the business. Whatever that business is – catching bad guys, spotting fraudulent Web sites, diagnosing cancer early or tracking terrorist financing through the banking system – that (to me) is not Skynet, or WOPR, or Colossus. That’s a way to better communities, better policing, better healthcare, and a better world.
Corny? Sure. Naive? Probably. Worth doing? Definitely.