Delivering Project & Product Management as a Service

Blog

Why LinkedIn’s and others’ ATS (Applicant Tracking Systems) suck

👉🏽 At the beginning of my career, I worked for Pilat, a company that supplied HR services and psychological testing. We used bag-of-words statistical techniques to match job descriptions to applicants’ resumes. It was better than nothing.
👉🏽 To AI or not to be? I don’t think much has changed for the better.
👉🏽 Most ATS don’t expose the algorithms behind filtering and matching candidates to jobs, but MIT Technology Review exposes some of the logic behind it: profile information, keyword matching, behavioral data like interaction with job posts, availability signals such as “open to work”, and location.
👉🏽 What I think is missing, and is especially relevant in the LinkedIn environment, is matching between the candidates and the organizations, especially because LinkedIn “knows” a lot about the recruiting organization and the employees in it. A metric that calculates the distance between the candidate’s core personal data and the core data about the job, as it relates to other employees in the organization, might do a better job.
👉🏽 We now have all the tools, with dimensionality reduction, LLMs and more, to increase accuracy, remove much of the load from recruiters, and reduce Type II errors: missing good candidates. MIT’s article.
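A minimal sketch of the proposed candidate-to-organization metric, assuming profiles have already been embedded as vectors (the three-dimensional toy vectors are illustrative only): score a candidate by cosine similarity to the centroid of the hiring team’s embeddings.

```python
import numpy as np

def org_fit_score(candidate_vec, employee_vecs):
    """Cosine similarity between a candidate embedding and the
    centroid of the hiring team's employee embeddings."""
    centroid = np.mean(employee_vecs, axis=0)
    num = float(np.dot(candidate_vec, centroid))
    denom = float(np.linalg.norm(candidate_vec) * np.linalg.norm(centroid))
    return num / denom

# Toy 3-D "profile" vectors; real ones would come from an LLM encoder.
team = np.array([[1.0, 0.2, 0.0], [0.9, 0.3, 0.1]])
close_candidate = np.array([1.0, 0.25, 0.05])
far_candidate = np.array([0.0, 0.1, 1.0])

print(org_fit_score(close_candidate, team) > org_fit_score(far_candidate, team))  # → True
```

Ranking candidates by this score, instead of keyword hits alone, is the kind of change the post argues for.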

Read More »

From Project Manager to COO: The Evolution of Roles in Product Development

At the beginning there was the Project Manager, an oligarch owning every activity within his domain and accountable for all of it. Yet those people are expensive, and one needs a wide spectrum of skills to deliver a big project successfully; read about Admiral Rickover as a good example. It’s also hard to be profitable on a one-time project, since the risks of doing a one-time bespoke activity to a specific deadline are high. It’s better to develop a product you match to different clients, so you have a production line in which you can automate activities and reduce customization as much as possible. This brought in the Product Manager, who took the responsibilities relating to the product’s functional definition through its lifecycle, as well as market access. Then, due to the lack of physics in software development and human bounded rationality, the Scrum methodology was developed to produce value in small incremental steps, so we people with short attention spans and memory can deal with large development efforts. Scrum, like all religions, defined some rituals to hold its believers firmly in G*d’s trust, as well as some clerical duties and priesthoods: a Scrum Master to hold the communal rituals, and a Product Owner to represent the will of G*d (or in this case the will of the customer) as depicted in the backlog scroll. So the old Project Manager was left dealing with the non-functional aspects such as intra-team coordination, budgets and time constraints, and was sometimes replaced by a PMO (Project Management Office) that looked at all the non-functional activities in the organization. The Product Manager, meanwhile, became more like a marketing professional, gathering frequent-flyer miles while promoting the product and monitoring the P&L and growth metrics. And now the COO has become the project manager. A more detailed (and less humoristic) article on those job descriptions.

Read More »

The Indirect Approach: Lessons from Liddell Hart in Modern Conflicts

Sir Basil Liddell Hart wrote a book in the 1970s called “Strategy: The Indirect Approach”.
👉🏾 It was a military doctrine formulated to avoid the standstill of the First World War.
👉🏾 This was done by establishing movement, or an attack vector, positioned at the place of least effort. This principle is not new; Sun Tzu also dealt with it.
👉🏾 Yet it seems that both Ukraine and Israel forgot this principle on their battle fronts (of the same war, BTW), and are being tied up in wars of attrition.
👉🏾 Iran, on the other hand, is practicing it quite nicely with its proxies in Lebanon, Gaza and Yemen.
👉🏾 Like Russia, the Middle East is not open to democratic free-speech indoctrination, as proven by the failed US attempts in Iraq and Afghanistan. However, being a tribal culture, it is wide open to internal manipulation that can reduce the risks to the outside civilized world.
👉🏾 Iran is especially vulnerable, with a regime that is a minority within its own borders. The West can use proxies as well…
👉🏾 In some conflict situations with clients, this approach can be utilized for crisis management as well.

Read More »

CrapGPT: AI Meets Microbiome at the Uehara Symposium

CrapGPT: that’s what my eye focused on at the Uehara International Symposium that took place in June 2023. Apparently, it wasn’t a critique of ChatGPT’s delusions but more about AI’s ability to make some sense of big data in biology and medicine, namely dealing with data complexity and size in new and innovative ways. It seems that the microbiome contains 100 trillion microbes that play a key role in both health and disease. I s#*t you not! So, what they were talking about when mentioning CrapGPT is a dietary application that recommends a diet according to a person’s personalized microbiome profile, as well as using bacteriophages to fight Helicobacter pylori. Talk about bad marketing! I checked the domain crapgpt.com and it’s already taken. Shame, quite literally 😎 https://lnkd.in/dndCZYMt

Read More »

Iran & Israel standoff game

👉🏾 There are a lot of commonalities between the Soviet–American Cold War and what is happening now between Iran and Israel.
👉🏾 Even before bringing the nuclear factor into the game, both sides can inflict damage that is unacceptable to the other: Iran may sustain a direct hit to its nuclear weapons program, which is key to sustaining its ruling theocratic regime, while Israel’s democracy is expected to take multiple civilian casualties from missile attacks.
👉🏾 What we see is intense communication between the hostile sides to remove the ambiguity of what game theory calls a “Schelling point”, a focal decision players converge on in the absence of communication, and thus increase the information in the game.
👉🏾 The second thing we must consider is that in order to assure deterrence, both nations have to be in a situation where neither could gain an advantage by attacking the other: MAD, or Mutually Assured Destruction.
👉🏾 This directly connects us to the Prisoner’s Dilemma, assuming each player is rational and limits risk by adopting a dominant strategy.
👉🏾 However, assuming Western values and rationality in the Middle East has failed miserably. A suicide bomber takes the ultimate risk to inflict some damage on his adversary. So, can this logic be extended?
👉🏾 In the Russia–Ukraine war it works, since Russia still hopes for long-run success and Putin’s kleptocracy is directly threatened by it. Iran is a different game: since Islamic radicalism is spread geographically while Israel is very densely populated, Iran may assume that the MAD strategy is no longer valid.
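The dominant-strategy logic above can be made concrete with the textbook Prisoner’s Dilemma payoffs (the numbers are the standard illustrative ones, not a model of the actual standoff):

```python
# Classic Prisoner's Dilemma payoffs for the row player (higher is better).
# Strategies: C = cooperate (hold fire), D = defect (strike first).
payoff = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def is_dominant(strategy, strategies=("C", "D")):
    """A strategy is dominant if it does at least as well as every
    alternative against every opponent move, and strictly better somewhere."""
    others = [s for s in strategies if s != strategy]
    at_least = all(payoff[(strategy, opp)] >= payoff[(alt, opp)]
                   for alt in others for opp in strategies)
    strictly = any(payoff[(strategy, opp)] > payoff[(alt, opp)]
                   for alt in others for opp in strategies)
    return at_least and strictly

print(is_dominant("D"))  # → True: defecting dominates for a "rational" player
print(is_dominant("C"))  # → False
```

That each side’s dominant strategy leads both to the worst joint outcome is exactly why deterrence depends on the MAD structure rather than on pairwise rationality.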

Read More »

Epistemology (theory of knowledge) and AI safety

👉 AI breakthroughs in the last few years brought the concept of AGI (Artificial General Intelligence) back to life from science fiction.
👉 Some people resigned because of it (Hinton, Ilya), and others fear this AGI taking over.
👉 Even earlier than AGI, the concept of consciousness was dealt with by humans, since HGI started, well, earlier. The problem we are now dealing with in AI is what epistemology calls the “Problem of Other Minds”.
👉 In general, it’s our inability to KNOW what the other is feeling. Is the red that I’m seeing the same color you see? I believe so, but belief is not knowledge.
👉 Moving to clinical terms: when a patient is feeling pain, the surgeon has no way of knowing that pain, even if the patient describes it on a pain scale, and even if the surgeon had the same operation; she is still a different person, so the pain may not be the same. Our feelings and thoughts are a closed box.
👉 Artificial Neural Networks are a bit different, as the physical model of the network can be introspected by mapping the state of the weights and activations: every feature in an AI model is made by combining neurons, and every internal state is made by combining features. All this without the approval of an ethics committee.
👉 But even if you map the physical structure of the network, which Anthropic is doing as part of their AI safety effort on Claude, I wonder if they will be able to measure a “self”, or qualia, in an ANN. This, I think, will have to involve (or evolve) an introspective component in the network. https://lnkd.in/eynYQtEQ
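As a toy illustration of that introspectability, here is a tiny feedforward network in numpy with made-up random weights, whose entire internal state can be read out on every forward pass, something no biological mind allows:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 2))   # hidden -> output weights

def forward(x, record):
    """Run a tiny feedforward net and record the hidden state —
    the kind of introspection a biological mind does not permit."""
    hidden = np.maximum(0, x @ W1)      # ReLU hidden features
    record.append(hidden.copy())        # full read-out of the internal state
    return hidden @ W2

trace = []
out = forward(np.ones(4), trace)
print(len(trace), trace[0].shape)      # every internal state is observable
```

Mechanistic interpretability starts from exactly this property and asks what the recorded activations mean; whether any of it constitutes a “self” is the open question in the post.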

Read More »

“The best way to appreciate your job is to imagine yourself without one.” — Oscar Wilde

👉 The World Economic Forum provides periodic predictions on world trends. Since I’ve been heavily invested in AI since my youth, here are some insights from the 2023 report. Please take them with a grain of salt.
👉 50% of surveyed companies expected AI to create job growth, while 25% anticipated job losses.
👉 Task-based jobs like clerical and secretarial work are expected to decline, while data and analytics jobs will grow.
👉 44% of workers’ skills will undergo disruption; my guess is that the democratization of knowledge due to AI will touch most of them and reduce the cost of analytics jobs.
👉 They expect the essential skills for the future to be analytical and creative thinking, and technology literacy. The report was published in May 2023, while ChatGPT exploded in November 2022, so I’ll reduce the needed future skills to just asking the right questions, plus the critical thinking needed to catch LLM hallucinations… 😎 https://www3.weforum.org/docs/WEF_Future_of_Jobs_2023.pdf

Read More »

AI Regulation: What We See Today, We’ll Understand Tomorrow

“The first time you see something, you don’t really see it. It’s not until later that you realize what you’ve seen.” – Yoko Ono. A first glimpse of AI regulation from the Biden administration; I wish I were that active at his age 😉 1) Watermarking and labeling AI-based creations – the how is still unknown. 2) A call for NIST to provide standards to test models before launch (good luck with that). 3) All companies that develop models above a certain size must share the results of the as-yet-unspecified NIST tests under the Defense Production Act – another relic of the Covid-19 pandemic being put to use. I wonder what regulation will follow once quantum computing renders common encryption protocols useless. We’ll just wait and see what we’ve seen.

Read More »

Automation and Uprising: The Next Revolution

In 1848 the French Revolution swept over Paris.
👉🏾 One of the causes of the revolution was the vast economic gap between the lower classes and the two upper classes, the nobility and the clergy.
👉🏾 When the dissatisfied lower classes are 90% of the population, you get a revolution.
👉🏾 www.figure.ai is providing a fleet of operated humanoid robots to perform low-wage tasks that are now done by a minimum-wage workforce. Think Amazon and Walmart warehouses, and at-home caretakers.
👉🏾 I’m sure Brett Adcock, the founder of Figure, found a valid economic niche; however, those who will lose their work are exactly the crowds that took the Bastille with pitchforks.
✋🏾 This time the heads that will roll are the local heads of state, while the global corporate decision makers stay protected behind Limited Liability walls. https://www.youtube.com/watch?v=Sq1QZB5baNw

Read More »

The Shift from Project Management to Product Management: A Strategic Evolution

How did Project Management convert into Product Management? According to Gartner, 40% of large organizations will manage internal business capabilities as products. We feel that in job postings for Product Managers and in the transition from Project Manager roles to PMOs. The main reasons are:
– Projects have an end date, while products are supported through their lifecycle.
– Projects tend to be one of a kind and thus carry higher risks.
– Products “package” value in a more standard way than a bespoke project, so you can stack them more easily, just like you stack Docker components, but at a higher level.
– Agile methodology has pushed incremental upgrades to the activity and thus promoted a more “productive”, process-oriented way of thinking.

Read More »

Natural Language Programming: The Era of Prompt Engineering

ChatGPT, as well as other LLMs, brought us the era of natural language programming, meaning Prompt Engineering. I guess that’s to differentiate it from the act of REAL programming. In prompt engineering, just like in real programming, you don’t always get the right answers; the logical bugs of yesterday are now called hallucinations. One way to deal with hallucinations is to write the right prompts. These techniques are not just heuristics, and they have nice names to go with them, like Chain-of-Thought prompting, Self-Consistency, etc. Whenever the prompting gets too confusing, too long or too complex, there are frameworks that enable integrating prompts with procedural languages; tools like AutoGen and LangChain bring some lost respect back to the old developer. I’m attaching a link in the comments to a prompt engineering guide that is quite extensive.
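A minimal sketch of two of those techniques, with a stubbed answer list standing in for real LLM samples (no API is called): Chain-of-Thought just prepends a reasoning cue to the prompt, and Self-Consistency takes a majority vote over several sampled answers.

```python
from collections import Counter

COT_PREFIX = "Let's think step by step."

def build_prompt(question):
    # Chain-of-thought: ask the model to show intermediate reasoning.
    return f"{question}\n{COT_PREFIX}"

def self_consistency(answers):
    """Self-consistency: sample several reasoning paths and keep the
    majority answer, which tends to filter out one-off hallucinations."""
    return Counter(answers).most_common(1)[0][0]

# Stubbed model outputs — a real setup would call an LLM several times
# at non-zero temperature and extract the final answer from each sample.
samples = ["42", "42", "41", "42", "7"]
print(build_prompt("What is 6 * 7?"))
print(self_consistency(samples))  # → 42
```

The majority vote is the whole trick: a hallucination has to recur across independent samples to survive it.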

Read More »

𝐀𝐈 𝐠𝐞𝐭𝐭𝐢𝐧𝐠 𝐢𝐧𝐭𝐨 𝐏𝐫𝐨𝐣𝐞𝐜𝐭 𝐌𝐚𝐧𝐚𝐠𝐞𝐦𝐞𝐧𝐭

👉 ProjectCopilot is a small startup that provides a ChatGPT Jira add-on that assists you with writing user stories, and maybe more in the future.
👉 This is a small step in the right direction, as PM is full of small annoying activities that can be enhanced and automated; think about complexity estimation in story points.
👉 But looking at how AI can contribute to PM, it should do more than RAG at the ticket level. It should build on a graph DB containing temporal data from planning through building and monitoring of tasks, so that it can grasp issues that are not visible to the PM.
👉 Remember, the Agile and Scrum methodologies were devised due to human limitations in planning and writing code. AI/ML are far less limited in memory and computational capacity. https://projectcopilot.co/
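To illustrate the kind of signal a temporal task graph could surface, here is a hypothetical sketch (the ticket names, fields, and 14-day threshold are all invented): flag open tasks whose dependencies closed long ago, drift that is hard to spot ticket by ticket.

```python
from datetime import datetime, timedelta

# Hypothetical task graph: each ticket carries temporal data and dependencies.
tasks = {
    "PROJ-1": {"done": True,  "finished": datetime(2024, 1, 10), "deps": []},
    "PROJ-2": {"done": False, "finished": None, "deps": ["PROJ-1"]},
    "PROJ-3": {"done": False, "finished": None, "deps": ["PROJ-2"]},
}

def hidden_blockers(tasks, now, stale_after=timedelta(days=14)):
    """Flag open tasks whose dependencies all closed long ago — the kind
    of invisible drift a PM rarely spots by scanning individual tickets."""
    flagged = []
    for name, t in tasks.items():
        if t["done"] or not t["deps"]:
            continue
        deps = [tasks[d] for d in t["deps"]]
        if all(d["done"] for d in deps):
            last = max(d["finished"] for d in deps)
            if now - last > stale_after:
                flagged.append(name)
    return flagged

print(hidden_blockers(tasks, datetime(2024, 2, 1)))  # → ['PROJ-2']
```

A real implementation would run such queries over a graph DB fed by the ticketing system, rather than an in-memory dict.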

Read More »

Confluent Data in Motion conference

👉 I ate too much at the #confluent Data-In-Motion conference. In 1999 Eric Raymond wrote a book called “The Cathedral and the Bazaar” about open-source development.
👉 From my perspective, the difference between those two OS development methods is whether you enjoy free meals and have a single focal point to reach when stuck, or have to wander through obscure forums, chase a lone developer in Ukraine who last pushed to Git two years ago, or fork it (an F word) and make the changes to the code yourself.
👉 Escaping the developer’s point of view: the decision of which technology stack to choose, and from whom, is much more important today than before, because of the complexity growth in the industry and the transition to cloud and framework-based ecosystems.
👉 If in the past you spent 20% of your project budget on tools and the rest on development and testing effort, today 80% is spent on tool selection, planning and integration, and the rest on coding to the CI/CD.
👉 Think about it as insurance for your development activity: if you have a small operation with good developers and the risk of GTM delays is small, you can go commando. But if you’re an enterprise with a large software factory, and clients and investors with great expectations, pay the premium… and enjoy the “free lunch”. 😉

Read More »

“Never do statistics without having a model in mind”

🕕 Back in the day, I had a professor who used to say: “𝐍𝐞𝐯𝐞𝐫 𝐝𝐨 𝐬𝐭𝐚𝐭𝐢𝐬𝐭𝐢𝐜𝐬 𝐰𝐢𝐭𝐡𝐨𝐮𝐭 𝐡𝐚𝐯𝐢𝐧𝐠 𝐚 𝐦𝐨𝐝𝐞𝐥 𝐢𝐧 𝐦𝐢𝐧𝐝”.
► Let’s translate that into an example: make a hypothesis about a behavior, such as assuming a relation between some features (parameters) like height, weight and age, and, say, heart-rate variability. Then view the data, perform some statistics, and revise, or at least refuse to accept, your hypothesis.
► This is not how things are done using AI and ANNs. The main difference is that AI is able to tackle many features all at once, while humans are limited in the number of dimensions they are able to perceive and analyze. So our theories and hypotheses are built according to our limitations.
► AI, and Artificial Neural Networks in particular, can concurrently process high-dimensional data and compress it into something we can absorb and interface with, and thus build a theory that describes the distilled product of the ANN.
► Instead of building theories based on our direct perception, we now build them on top of an ANN broker that compresses the data so we can make some sense of it.
► Think of it like wearing polarizing sunglasses. They reduce the glare on a hot sunny day, but you may start seeing some annoying stress marks on your car’s windshield that are invisible to the naked eye.
► This is at the heart of the Explainable AI problem.
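The compression step can be illustrated with plain PCA via numpy’s SVD (the data here is generated for the example): 50-dimensional synthetic data that secretly lives near a 2-D plane is distilled down to two axes we can actually perceive.

```python
import numpy as np

rng = np.random.default_rng(1)
# 200 samples in 50 dimensions that secretly live near a 2-D plane.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

def pca(X, k):
    """Project centered data onto its top-k principal directions and
    report how much variance those directions explain."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    return Xc @ Vt[:k].T, explained

Z, explained = pca(X, k=2)
print(Z.shape, round(explained, 3))  # two axes now carry almost all the variance
```

PCA is the simplest possible "broker"; the ANN case does the same compression nonlinearly, which is what makes explaining its distilled product hard.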

Read More »

How to manage R&D activity in AI?

Research and Development (R&D) has two flavors:
Basic research – experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundations of AI, like Meta’s development of the LLaMA model.
Applied research – investigation directed primarily towards a specific practical aim or objective. Examples include developing AI applications for specific industries, such as healthcare or finance, or fine-tuning an existing model to specific needs.

Team building and organization:
Centralized AI unit – establish a centralized AI unit or center of excellence to provide guidance, best practices, and support to AI projects across the organization.
Interdisciplinary teams – best practice is assembling cross-functional teams of AI researchers, data scientists, domain experts from the relevant fields, engineers, and product managers to foster collaboration and fast decision making within the team.
Skill development and training – invest in ongoing skill development and training programs to keep the workforce up to date with the latest AI technologies and best practices.
Partnership ecosystem – foster partnerships with academic institutions, research organizations, and technology vendors to access cutting-edge research, talent, and tools.

To do AI you need tools:
High-performance computing infrastructure – to deal with large datasets and computations involving GPUs and TPUs (Google); this is mostly provided by cloud services like AWS, GCP and Azure.
Data engineering tools – to store, process and analyze data.
AI frameworks and libraries – like TensorFlow, PyTorch, and scikit-learn, to build on the shoulders of others.
Collaboration and version control tools – notebooks, plus platforms like GitHub and GitLab, enable collaboration, code sharing, and version control.

Once you have the tools, you need to plan how to operate them:
Data governance – data is ruled (and regulated), so establish data governance policies and procedures to ensure data quality, security, and compliance with regulations.
Agile development – Agile became the standard in software, and its principles, like iterative and incremental development of AI prototypes and solutions, should be used as much as possible.
Continuous integration and deployment (CI/CD) – when moving to production, the “production line” should be automated as much as possible, so changes are fast and agile.
Model monitoring and maintenance – any deployed AI model should be monitored for performance, and maintained when accuracy degrades due to natural data changes.

Stopping rules, or KPIs: R&D should produce results with value, and those should be traced with Key Performance Indicators like:
Research output – for basic research, count the number of publications and their impact factor.
Innovation pipeline – count the number of AI projects in various stages of development, from ideation to deployment, to ensure a healthy innovation pipeline.
Average time to market – measure the time taken from project initiation to the deployment of AI solutions, to drive efficiency and competitiveness.
Number of patents – patents are the gates to commercialization, so in applied research they’re a good measure.
Return on investment (ROI) – evaluate the financial impact of AI projects, considering factors such as cost savings, revenue generation, and operational
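A minimal sketch of the model-monitoring point above (the window size and threshold are arbitrary illustrative choices): track a rolling accuracy window and raise a retraining flag when it degrades.

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-window accuracy tracker for a deployed model."""

    def __init__(self, window=100, threshold=0.9):
        self.window = deque(maxlen=window)   # keeps only the latest outcomes
        self.threshold = threshold

    def record(self, correct: bool):
        self.window.append(1 if correct else 0)

    def needs_retraining(self):
        # Flag the model once rolling accuracy drops below the threshold.
        if not self.window:
            return False
        return sum(self.window) / len(self.window) < self.threshold

monitor = AccuracyMonitor(window=10, threshold=0.8)
for outcome in [True] * 9 + [False] * 4:     # accuracy drifts downward
    monitor.record(outcome)
print(monitor.needs_retraining())  # → True
```

In production this check would sit in the CI/CD pipeline and trigger the maintenance step when natural data drift erodes accuracy.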

Read More »

Truth or Lie with medical errors

Once a year I attend the memorial for my aunt. The event is organized by my uncle and his daughters, and unlike other memorial ceremonies, the crowd grows every year. This is partly due to the Israeli trend of having three kids per family, as well as the increase in the lifespan of the living. But mostly it’s due to the organization, which includes not just standing at the gravesite but a healthy lunch and some cultural nourishment like a visit to a museum, a hike to a historical place, or a lecture. This time there was a lecture by Prof. Avinoam Reches, an MD who served on Israel’s Board of Medical Ethics. Since I’m interested in healthcare, risk and decision making, I took notes. He talked about the history of medical ethics in Israel, as far as updating the patient on her situation. Apparently, in the distant past the MD was “allowed” to tell the patient about the diagnosis and the disease progression; it wasn’t obligatory as it is right now! That was before Dr. Google gave access to medical information, and before we could chat with a medical generative AI that holds this data and more. Then he brought up the subject of medical errors and how, despite most patients leaving hospitalization in better health, some were damaged during treatment, and some of those damages are not immediate and not known to them. I guess this is a first-world parallel to exiting a hospital in China with a suspicious scar near your kidney. He defined the difference between an error and a mistake:
An error is generally an unintentional inaccuracy while doing the right procedure, like stitching a wound in a less-than-optimal way.
A mistake is choosing the wrong procedure instead of the right one, mainly due to some degree of carelessness, inattention, or poor judgment, like performing a vasectomy on a patient instead of a varicocelectomy.
Terminology in medical errors is sometimes confusing, so an alternative definition is:
Error of execution – the failure of a planned execution to be completed as planned (error).
Error of planning – choosing the wrong plan to achieve a goal (mistake).
Errors can be scaled according to severity:
Slight error – the wrong medication causes a rash that goes away after the side effects are reported.
Medium error – during hospitalization the medical team forgot to give a preventive medication, causing an inflammation that needs further treatment.
Severe error – a CT was wrongly interpreted, and now there is a high-risk cancer with imminent danger to the patient’s life.
Apparently, physicians are more likely to report severe side effects resulting from an error than to report a death due to an error: 90% vs. 30% respectively. After all, medical doctors are human too. However, only 54% reported the error to the resident physician, and only 24% reported the error to the patient’s family. This means there is both a gap in systemic knowledge about the fact that an error occurred, and a gap in reporting that error to the stakeholders, including the family. This

Read More »

LLM and some Monte Carlo simulation in A/B testing

When testing a new feature, it’s common to use focus groups or to do A/B testing, meaning you show or test various implementation alternatives with relevant customers and get feedback. This is costly, slow, and not part of your DevOps pipeline. A step in the right direction is creating synthetic personas, using prompt engineering to describe each persona’s character, presenting them the feature using a multimodal LLM if necessary (like Figma screen prototypes), and then measuring the feedback from this virtual crowd with some Monte Carlo simulation over the LLM parameters (like temperature). So before you prioritize the feature in the backlog, you get some feedback! You can also vary your existing customers’ profiles to assess the impact on new markets, and I’ll bet you can also compute a rough implementation complexity as a setup for the planning meeting about the feature’s future. An embryonic implementation can be found here.
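A back-of-the-envelope sketch of the idea, with a stubbed scoring function standing in for the LLM call (the personas, their base-appeal numbers, and the noise model are all invented): sweep temperatures, sample many responses per persona, and aggregate.

```python
import random

random.seed(0)

# Hypothetical personas mapped to a base appeal of the feature for each.
personas = {"power user": 0.8, "newcomer": 0.5, "skeptic": 0.3}

def stub_llm_score(base_appeal, temperature):
    """Stand-in for an LLM call: base appeal plus temperature-driven noise,
    clipped to the [0, 1] feedback scale."""
    noise = random.gauss(0, temperature * 0.2)
    return min(1.0, max(0.0, base_appeal + noise))

def monte_carlo_feedback(personas, temperatures=(0.2, 0.7, 1.0), runs=200):
    # Monte Carlo sweep: many samples per persona per temperature setting.
    scores = [stub_llm_score(appeal, t)
              for appeal in personas.values()
              for t in temperatures
              for _ in range(runs)]
    return sum(scores) / len(scores)

print(round(monte_carlo_feedback(personas), 2))  # mean appeal across the virtual crowd
```

A real pipeline would replace `stub_llm_score` with an actual (possibly multimodal) model call per persona prompt, and could also report per-persona spread rather than just the mean.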

Read More »

How come we have a two-party system in the US, and how does the conflict between “Left” and “Right” arise naturally in so many cultures?

We need to understand that at base these are two strategies driven by different behavioral patterns, modeled in game theory by a game called Hawk vs. Dove. For our discussion let’s call it Wolf vs. Pigeon.
The pigeon behavioral pattern (Left Wing) is based on well-defined cultural norms and rules that enable cooperation between members of the flock. The wolf pattern (Right Wing) does not take rules and social norms for granted but assumes the famous saying “Homo Homini Lupus Est”, meaning to each his own, under some profit-and-loss economic and survival constraints. In both strategies there is benefit in creating groups, either flocks of pigeons or a pack of wolves. The pigeons create a flock motivated by self-protection, better foraging and energetic efficiency, while the wolves create a hierarchical pack to hunt bigger prey and to reduce the risk of injury from the other canine-teeth-carrying members of the pack. We see that the main difference between the pack and the flock, or Right Wing and Left Wing, is the concept of hierarchy. The Left Wing is bound by implicit social rules, so the flocking is based on an assumption of equality, while for the Right Wing the pack “order” is hierarchical because otherwise there is chaos. Democratic order is a mechanism based on equality of voting used to create a hierarchy of political representatives, so the difference between how the Right and Left Wings see this mechanism is the Right’s emphasis on the hierarchical structure and the Left’s on the equality of the representation process. Left Wing members will always be pulled to processes that stress ongoing control of social power and hierarchy, while Right Wing members will stress order and conservation of the hierarchical system. At the extremes we see the Anarchist and Fascist points of view modeling this theory, but it’s just the individual’s selection of which flock or pack they feel they belong to.
In democratic processes humans make things more complicated, because some wolves have to be elected by doves and vice versa, so one gets more strategies in the game, like wolf-like pigeon and pigeon-like wolf, and in the end a total dislike of political representatives, plus the common saying that democracy is a necessary evil. Actually, this is exactly the case: Arrow’s paradox, defined by the economist Kenneth Arrow in his 1951 book “Social Choice and Individual Values”, states mathematically that there are inherent difficulties in aggregating individual preferences into a socially optimal outcome while satisfying a set of desirable criteria, i.e., the democratic decision process is flawed by nature. That is, of course, until AGI (Artificial General Intelligence) decides for us all.
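The Hawk-Dove game itself is easy to write down. With the standard payoffs, when the cost of an escalated fight exceeds the value of the resource, neither pure strategy is stable, and the evolutionarily stable mix plays hawk with probability V/C (the numbers below are illustrative):

```python
# Hawk-Dove payoffs: V = value of the resource, C = cost of an escalated fight.
V, C = 4, 6

def payoff(me, other):
    if me == "hawk" and other == "hawk":
        return (V - C) / 2          # escalation: split value minus injury cost
    if me == "hawk":
        return V                    # hawk takes everything from a dove
    if other == "hawk":
        return 0                    # dove yields to a hawk
    return V / 2                    # two doves share peacefully

def mixed_equilibrium():
    """With C > V, the evolutionarily stable strategy plays hawk
    with probability V / C."""
    return V / C

print(round(mixed_equilibrium(), 3))  # → 0.667
```

The stable mix is why neither "all wolves" nor "all pigeons" persists: each pattern does best precisely when it is not the only one around.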

Read More »

Can AI prevent wars?

Is Israel to blame for the death of its citizens at the hands of Hamas? Can you blame a victim murdered while walking down a dark alley? Can your AI app prevent those incidents? “We can’t change the human condition, but we can change the conditions under which humans work” (James Reason). Reason was a British psychologist who researched human error and wrote about the systemic approach to errors. Humans are fallible and errors are to be expected! A good man-machine system design is supposed to prevent those mistakes. Complex man-machine interfaces are common in heavily regulated technical industries, ranging from airlines to medicine, but also in modern armies, and systems engineering (or in other cases reliability engineering) is used to create safeguards and defenses against those errors. Using the Hamas surprise attack as an example, the Israelis demonstrated all three generic types of mistakes:
1. Skill-based slips – lapses in the execution of routine and procedures. Hamas drilled routinely for years near the border, so the routine sequence made the defense ignore the real event.
2. Rule-based mistakes – since mass attacks are usually launched in the early morning, there is usually early-morning readiness, as a rule. It wasn’t in place, and Israeli generals also discussed the sensor data indicating Hamas activity and “rationally” decided it was a false alarm.
3. Knowledge-based mistakes – if you don’t know something, there is a good probability you will be wrong. Israel’s HUMINT in Gaza was lacking: 3,000 terrorists and no one snitched.
So how can AI help? If human errors can be classified into those categories and reported in a post-activity debriefing (provided people actually tell the truth about their mistakes), then we can load this tagged data into a vector DB. And just like sentiment analysis can score whether a sentence is negative or positive, we can score this DB for positive or negative activities and generate recommendations based on it.
AI does not suffer from human fallacies, so just before you enter the dark alley, it would vibrate your smartphone and make you think again. Maybe Skynet will be a merciful safeguard after all?
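A toy stand-in for that scoring idea, using keyword overlap instead of a real embedding search against a vector DB (the keyword sets and the sample reports are invented): route free-text debrief notes to Reason's three error types by nearest match.

```python
# Route free-text debrief notes to Reason's three error types by keyword
# overlap — a crude stand-in for similarity search over tagged embeddings.
TYPE_KEYWORDS = {
    "skill-based slip": {"routine", "procedure", "lapse", "drill"},
    "rule-based mistake": {"rule", "alarm", "readiness", "decided"},
    "knowledge-based mistake": {"unknown", "intelligence", "missing", "novel"},
}

def classify(report: str) -> str:
    words = set(report.lower().split())
    # Pick the error type whose keyword set overlaps the report most.
    return max(TYPE_KEYWORDS,
               key=lambda t: len(words & TYPE_KEYWORDS[t]))

print(classify("the morning readiness rule was skipped and the alarm decided false"))
# → rule-based mistake
```

In the pipeline the post describes, the keyword sets would be replaced by embeddings of previously tagged debrief reports, and the classifier's output would feed the recommendation step.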

Read More »

Marketing the war of attention as a digital service

The fake news over the Hamas hospital bombing, blaming Israel for damage self-inflicted by the terrorists themselves, and the several hours it took Israel to counter with the truth and real evidence, made me think about the parallels between marketing a commercial service and “marketing a war”. Reaching people in the modern digital world requires understanding that you’re competing in the time dimension, not only in the feature-and-price space, because complex products tend to grow a feature list that is not easily grasped by the client’s bounded rationality (that’s why we pay Gartner and others who provide comparison services), and pricing schemes are very dependent on usage, which is never easy to predict before actual implementation. So, how do you manage the war for attention? In the software industry, as well as in interdisciplinary industries, three slogans are often used:
1) Be agile, so you get value to the market early.
2) Do concurrent engineering, to reduce intra-team communication times by doing design work in parallel.
3) And finally, automate, so you get a QUICK pipeline to the market.
In war, as in peacetime, one has to apply those principles:
1) Concurrency is already performed by joint forces, where operators with different skills work together in united task forces involving armor, infantry and air support (and even legal counsel at some level).
2) Agility means you have to subdivide the military operations on the timeline and output to the media data collected from the field, naturally when it’s not hindering your operations. In times when each soldier or officer carries a body camera and drones are filming everything, the data is there.
3) RPA, or Robotic Process Automation, is the missing piece right now in the media-war pipeline, and this is the cause of the delays we see in providing factual, evidence-based information to the public.
Decision automation in real time on video data streams is not science fiction; tools like Apache Kafka have been doing just that for years. Yet there is always a man in the middle, because automation is not trusted enough by the authorities, and mostly because the data is controlled by the intelligence organizations using C3I (Command, Control, Communications and Intelligence) systems, which by nature are not designed to distribute data. This pipeline should recognize media events and trends in real time and match them to intelligence events; where there is a match, provide data and evidence that support that event and output it to various channels. Maybe next war?
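The matching stage of such a pipeline can be sketched in a few lines. Here plain Python lists stand in for the streaming topics, and the locations, timestamps, and evidence IDs are invented:

```python
# Pair incoming media claims with field events recorded at the same place
# in the same time window, so counter-evidence can be pushed automatically.
# (A production pipeline would consume both feeds from streaming topics.)
def match_events(media_claims, field_events, window_minutes=60):
    matches = []
    for claim in media_claims:
        for event in field_events:
            same_place = claim["location"] == event["location"]
            close_in_time = abs(claim["t"] - event["t"]) <= window_minutes
            if same_place and close_in_time:
                matches.append((claim["id"], event["evidence"]))
    return matches

media = [{"id": "claim-1", "location": "hospital", "t": 600}]
field = [
    {"location": "hospital", "t": 585, "evidence": "drone-footage-0427"},
    {"location": "border", "t": 590, "evidence": "bodycam-0311"},
]
print(match_events(media, field))  # → [('claim-1', 'drone-footage-0427')]
```

The hard part is not this join but the feeds on either side of it: recognizing claims in media streams, and getting the intelligence systems to release evidence fast enough for the match to matter.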

Read More »