Delivering Project & Product Management as a Service

Can AI prevent wars?

Is Israel to blame for the deaths of its citizens at the hands of Hamas?
Can you blame a victim murdered while walking down a dark alley?
Could your AI app prevent such incidents?

“We cannot change the human condition, but we can change the conditions under which humans work” (James Reason).

James Reason was a British psychologist who researched human error and wrote about the systemic approach to it. Humans are fallible, and errors are to be expected! A well-designed man-machine system is supposed to catch those mistakes before they cause harm.

Complex man-machine interfaces are common in heavily regulated technical industries, from airlines to medicine, but also in modern armies. Systems engineering, or in other cases reliability engineering, is used to create safeguards and defenses against those errors.

Using the Hamas surprise attack as an example, the Israelis demonstrated all three of Reason's generic types of error:

1. Skill-based slips – lapses in the execution of routines and procedures. Hamas had been drilling near the border for years, so the familiar routine led the defenders to ignore the real event.

2. Rule-based mistakes – since mass attacks usually come in the early morning, early-morning readiness is the standing rule. It wasn't observed, and Israeli generals who discussed the sensor data indicating Hamas activity "rationally" decided it was a false alarm.

3. Knowledge-based mistakes – if you don't know something, there is a good probability you will get it wrong. Israel's HUMINT in Gaza was lacking: 3,000 terrorists and no one snitched.

So how can AI help?
If human errors can be classified into these categories and reported in post-activity debriefings (provided people actually tell the truth about their mistakes), we can load this tagged data into a vector database. Just as sentiment analysis scores a sentence as negative or positive, we can score that database for positive or negative activities and generate recommendations from it.
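As a minimal, self-contained sketch of that idea: tagged debrief records are stored, a new situation is matched against the most similar past incidents, and the outcomes of those incidents are averaged into a risk score. All names here are hypothetical, and the token-overlap "similarity" is a crude stand-in for a real embedding model and vector database.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    description: str  # free-text debrief report
    category: str     # Reason's types: "skill", "rule", or "knowledge"
    outcome: str      # "negative" or "positive"

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def similarity(a: str, b: str) -> float:
    # Jaccard word overlap as a stand-in for embedding similarity
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def risk_score(situation: str, history: list[Incident], k: int = 3) -> float:
    # Compare the current situation to the k most similar past incidents;
    # weight each incident's outcome (+1 positive, -1 negative) by similarity.
    top = sorted(history, key=lambda i: similarity(situation, i.description),
                 reverse=True)[:k]
    total = sum(similarity(situation, i.description) for i in top)
    if total == 0:
        return 0.0
    signed = sum(similarity(situation, i.description)
                 * (1 if i.outcome == "positive" else -1) for i in top)
    return signed / total  # ranges from -1 (all bad outcomes) to +1

history = [
    Incident("walked alone through dark alley at night", "knowledge", "negative"),
    Incident("ignored repeated sensor alarms near border", "rule", "negative"),
    Incident("followed checklist before early morning shift", "skill", "positive"),
]

score = risk_score("about to walk through a dark alley alone", history)
if score < 0:
    print("warning: similar past situations ended badly")
```

In a production system the token overlap would be replaced by an embedding model and a vector database doing the nearest-neighbor search, but the scoring logic stays the same shape.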

AI does not suffer from human fallibility, so just before you enter the dark alley, it could vibrate your smartphone and make you think again.

Maybe Skynet will be a merciful safeguard after all?