Delivering Project & Product Management as a Service

Blog

The Underrated Giant of Medical AI

I would like to dedicate this post to Google AI. From a PR perspective they are not the drama queen OpenAI is, nor do they touch the end user the way Perplexity and Claude do. But in terms of cost-performance for enterprise solutions, I think they lead the game right now. In the healthcare arena they have Med-Gemini, which is replacing Med-PaLM, with a score of 91.1% on the MedQA (USMLE-like) test. Not only that, but they produce tools for developing diagnostics in pathology and dermatology. At this point on the S-curve, I am quite certain that the disruption in diagnostics is going to change the way the medical value chain is constructed. The speed is constrained only by regulation. https://lnkd.in/eWr3V7eF

Read More »

𝐆𝐞𝐧𝐀𝐈 𝐭𝐡𝐨𝐮𝐠𝐡𝐭𝐬 (𝐦𝐲 𝐨𝐰𝐧, 𝐧𝐨𝐭 𝐆𝐏𝐓 𝐝𝐫𝐢𝐯𝐞𝐧)

👉🏽 I’ve been doing algorithm-based analytics for ages, and AI since 2016. The most obvious change to me is that what was once a well-framed analytics procedure, decoupled from data, is now deeply convoluted and intertwined with it.
👉🏽 Meaning: running the same algorithm on different datasets naturally gave you different results, but you still had the same procedure running on the same features. Regression does not change when run on different datasets.
👉🏽 Then it got a bit more complex, with features being driven by data exploration instead of by modeling system behavior (change or test your model of reality according to evidence). You chose the best-performing method based on KPI results, but you lost the model formula of nature’s behavior. Random forest, or any ensemble method, does just that; change the data and you may have to choose another method.
👉🏽 Now with ANNs, and then GenAI (especially with fine-tuning, prompt engineering, and examples), all generality is lost: there is no human-readable model of reality (this is why XAI, Explainable AI, is so hard) and everything is data-driven.
👉🏽 My guess as to why the drive for AGI is so strong: we have lost our human model-view of the world (and of data) and are delegating it, too, to the machine.
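To make the regression point concrete, here is a tiny sketch (the two datasets are made up): the same closed-form least-squares procedure applied to both, where the formula never changes and only the fitted parameters depend on the data.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = slope * x + intercept: the procedure
    # is fixed; only the fitted parameters depend on the dataset.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

a = fit_line([1, 2, 3, 4], [2, 4, 6, 8])   # y = 2x      -> (2.0, 0.0)
b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])   # y = 2x + 1  -> (2.0, 1.0)
```

An ensemble method offers no such stable formula: rerun it on new data and the chosen splits, and possibly the best method overall, change.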

Read More »

Smoother Sailing with Smarter Forecasts

𝐈 𝐥𝐢𝐤𝐞 𝐬𝐚𝐢𝐥𝐢𝐧𝐠, 𝐰𝐞𝐥𝐥 𝐭𝐡𝐞𝐨𝐫𝐞𝐭𝐢𝐜𝐚𝐥𝐥𝐲, 𝐬𝐢𝐧𝐜𝐞 𝐬𝐞𝐚𝐬𝐢𝐜𝐤𝐧𝐞𝐬𝐬 𝐢𝐬 𝐚 𝐜𝐨𝐦𝐦𝐨𝐧 𝐩𝐚𝐫𝐭𝐧𝐞𝐫 𝐭𝐨 𝐭𝐡𝐢𝐬 𝐞𝐧𝐝𝐞𝐚𝐯𝐨𝐫.
👉🏽 Seasickness induced by bad weather can be reduced and even prevented with the right medication, yet 30% of shipping accidents are caused by poor weather, and weather-related losses cost the insurance industry $136.44 billion.
👉🏽 A lot of Google’s AI activity is less visible than OpenAI’s, but in a way it’s much more profound. Google DeepMind’s GraphCast and GenCast are open source and can run on a desktop computer instead of a supercomputer, while being more accurate in 90% of cases, with a skill-score KPI improvement of 7%-14%.
👉🏽 Quick calculation: a 1% improvement in weather prediction is worth $1.36 billion in savings (that is, if you follow the predictions). All that’s left is dealing with the pirates and terrorists roaming the Indian Ocean ⚓️
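The quick calculation above, as a few lines of Python. The loss and skill figures are the ones quoted in the post; translating a skill-score gain linearly into savings is a rough back-of-the-envelope assumption, not an actuarial model.

```python
annual_losses_usd = 136.44e9            # weather-related insurance losses quoted above
per_percent = annual_losses_usd * 0.01  # value of a 1% improvement
low, high = 0.07, 0.14                  # GenCast's quoted skill-score gain range
potential_low = annual_losses_usd * low
potential_high = annual_losses_usd * high
print(f"1% improvement ~ ${per_percent / 1e9:.2f}B; "
      f"7%-14% gain ~ ${potential_low / 1e9:.1f}B-${potential_high / 1e9:.1f}B")
```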

Read More »

Phrenology is making a comeback

In the 18th century, Franz Joseph Gall invented a (false) method that involves measuring bumps on the skull to predict mental traits. Now that our pictures and video interviews are spread all over the web, and Mr. AI is spreading its tentacles, we can predict from a facial picture school rank, compensation, job seniority, industry choice, job transitions, and career advancement. The dataset covered MBA graduates only, so graduates of other disciplines may be less beautiful and still have a good career 🙃 https://lnkd.in/eXrsyGKh

Read More »

𝐘𝐞𝐬, 𝐭𝐡𝐞𝐲 𝐚𝐫𝐞 𝐬𝐥𝐨𝐰

👉🏽 But to anyone who has worked in an industrial-robot environment, where you program each movement of the robot’s arms, Brett Adcock’s Figure robots are science fiction.
👉🏽 AI growth is tightly bound by the data accumulated during learning. At first it learned from textual and picture data; that data is now fully assimilated, to use Borg terminology.
👉🏽 So the next step is to gather data from “actions”, and this is where humanoid robots come into play.
👉🏽 The use cases are basically replacing man-machine interfaces with machine-to-machine interfaces, and the learning curve of multiple machines sharing improvements in model parameters is exponentially greater than the slow rate at which we humans do it.
👉🏽 We are looking into the eye of the event horizon, and we shouldn’t blink. The apple can be a biblical reference or a Walt Disney one. https://lnkd.in/eWK_pmMD

Read More »

𝐒𝐨𝐥𝐨 𝐞𝐧𝐭𝐫𝐞𝐩𝐫𝐞𝐧𝐞𝐮𝐫𝐬 𝐚𝐧𝐝 𝐭𝐡𝐞 𝐜𝐮𝐫𝐬𝐞 𝐨𝐟 𝐝𝐢𝐦𝐞𝐧𝐬𝐢𝐨𝐧𝐚𝐥𝐢𝐭𝐲

👉🏾 Years ago I had a conversation with Ofer Vilenski. Ofer founded a software tools company in his basement, which grew into a profitable company. With the profits of that company, Ofer founded Jungo, to develop an operating system for routers (like an Android for home devices). In 2005 Jungo became profitable, with over 170 employees. A year later, Jungo was acquired by NDS (itself later acquired by Cisco, NASDAQ: CSCO) for $107M.
👉🏾 Ofer is an early example of a solo entrepreneur who made it. In our conversation we discussed two options for growing into a success. The first is organic growth into a small business that provides profits and economic independence; the second is VC-enabled growth, in which you may end up with a big public company or a nice exit that will make a comfortable life for you and the generations to come. The default is of course a dud, which should be dropped on the spot.
👉🏾 Ofer was programming from an early age, but today programming skills are not a necessity. Even a layman can start programming using AI tools like Claude, Copilot, or Gemini, and more dedicated tools like V0 by Vercel, Bolt by StackBlitz, and Lovable can actually build applications from GenAI prompts.
👉🏾 This is leading to a proliferation of small applications with multiple features written by various authors, which will produce an even more complex ecosystem than that of mobile apps, because there is no organized market for such applications; hence the gap between building something and successfully mining the market potential is growing even larger.
👉🏾 This is exactly the curse of dimensionality: the software solution space is growing so rapidly, due to the democratization of knowledge and tools, that it becomes increasingly hard to discover a good solution, or combination of solutions, that fits the needs.

Read More »

𝐌𝐢𝐧𝐝 𝐭𝐡𝐞 𝐠𝐚𝐩 – 𝐀𝐈 𝐝𝐞𝐦𝐨𝐬 𝐯𝐬 𝐫𝐞𝐚𝐥-𝐥𝐢𝐟𝐞

👉🏽 A common development culture in the last few years is fast deployment and frequent changes.
👉🏽 A common behavioral pattern in a software sales cycle is to present features that are not yet mature.
👉🏽 A common issue with AI-based solutions is that they are tightly bound to data and need adaptation to the client’s data, either by fine-tuning, by prompt engineering, or by both.
👉🏽 This leads to a gap between expectations and reality, even before we account for demos made specifically to entice audiences or get a feel for the market, with only a concept behind them. As an example, see Kawasaki’s CORLEO robot CGI presentation here: https://lnkd.in/egxeKvcd
👉🏽 Dealing with that gap calls for both a robust POC and a representative training dataset that will assure compliance of the proposed solution. Sometimes building an anonymized organizational dataset, and an open POC environment available to vendors as a gate in the selling process, will make the purchasing process better and faster.

Read More »

Why trust artificial intelligence more than a doctor?

Why trust AI more than a doctor? (This is not going to be a technical post.)
👈🏼 When GPS navigation apps arrived, I found myself becoming a blind follower, driving wherever the app pointed. Over time some judgment returned, partly because the algorithms deteriorated a bit and partly because the driver’s (my) heuristics improved.
👈🏼 The analogy to all the LLMs / GenAI we use to navigate the map of world knowledge seems quite similar.
👈🏼 Recently a friend suffering from a medical problem approached me. I am surrounded by doctors, but I am the engineering black sheep of the family, and since I have been doing AI since 2016, a bit before Ilya Sutskever started the current craze, I tried to help.
👈🏼 The man went to his family physician and got referrals for tests that stretched over about two months, per the HMO’s appointment constraints. He went to a specialist and got an appointment set by the specialist’s availability, of course with no correlation to the tests. When the test results arrived, the HMO even reached out to him for a virtual consultation and recommended medications.
👈🏼 The catch? The HMO’s family physician assumed that the regular medications listed in the medical record were actually being taken, and it turned out the record was outdated. The specialist did not see the tests because, presumably due to time constraints, he did not look into the medical record; and finally, not all test results had arrived by the appointment date.
💡 The only actor with a full picture of the tests, symptoms, and medications was the patient!
👈🏼 So I asked our friends Grok, Claude, and Gemini about the problem and got answers contradicting the doctors’ recommendations, simply because the doctors had missed test results. (I left ChatGPT out because I don’t trust Sam Altman.)
👈🏼 I asked my friend to forward the summary of recommendations I compiled to an emergency-medicine hotline, and in a phone call he immediately received a prescription from a third doctor (a different prescription, of course, from those of the family physician and the specialist).
⚖️ So what are the conclusions? The patient is the center of decision-making. Doctors today are overloaded, and because of appointment and test scheduling, their situational awareness of the patient’s data suffers. AI can and should help the patient make decisions. You still need a man in the middle for the navigation. Three doctors will give three different medication recommendations.

Read More »

AI-DD (AI Driven Development) vs. Vibe Coding

Coding is not a natural job for humans; that’s why programming languages were developed, and why we still struggle to model reality in code. GenAI is changing this by coding for you, based on your prompts. This is changing very fast how developers write code and how IDEs (Integrated Development Environments) like VS Code, Codeium, Replit, and more provide far more than code completion. It has also democratized writing code for non-programmers: you interact with the codebase by writing prompts, telling the tool what you want, and then examining the outcomes. This is Vibe Coding. Alas, this is not the same as writing code in C and examining the machine code: LLMs are non-deterministic, and as in C, different implementations of the same logic can have different performance issues. So if you are Vibe Coding, you need a methodological framework to do it well and get good results. This methodology is AI-DD, and it includes the following steps:
Create a system prompt – This is the prompt that defines the LLM’s behavior and your system boundaries: “You are an expert developer in Java and Postgres and you will follow a test-based programming paradigm.”
Define the context – GenAI, if not given context, will try to predict what you want; this is akin to mind reading and seldom works. So prompt it with as much context as you can: manuals, user guides, use cases, user stories, UML diagrams, your cloud architecture, just as you would if you outsourced a software project to an external developer. And set clear, concise expectations for the outcome.
Prompt engineering – You can never be too detailed when defining your prompt. If you ever wrote a programming instruction specification for a junior programmer, it’s like that: be specific, divide functionality into parts, give plenty of examples and edge cases, and establish constraints.
A prompt should be treated like code – Remember the days of using structured language to describe an algorithm? Do that for the GenAI and you will most likely get what you want. Beyond that, commit the prompts into Git and manage versions of them.
Coding standards – Decide on a naming notation (CamelCase or otherwise), how you expect the code to be commented and documented, and more. Define the architecture and design patterns you expect it to follow, with examples of well-formed code.
Use code libraries – Good coding is reusing code. Most LLMs know software libraries only up to their training date, and libraries change often, so point to specific frameworks, libraries, and documentation.
Start small and develop in cycles – This is nothing new, just adapted to prompts: create a first version, evaluate, refine the prompt or add context, regenerate, and see if you get what you want. Sometimes you’ll need to go back to an earlier prompt version, and that’s OK.
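A minimal sketch of treating a prompt like code: the assembly helper, file names, and task below are illustrative assumptions, not from any specific tool. Each part is a plain string, so it can be committed to Git and versioned like any other source file.

```python
# Versioned system prompt (the _V2 suffix mirrors a Git-tracked revision).
SYSTEM_PROMPT_V2 = (
    "You are an expert developer in Java and Postgres "
    "and you will follow a test-based programming paradigm."
)

def build_prompt(system, context_docs, task, examples):
    # Assemble the full prompt from its versioned parts:
    # system behavior, context documents, the task, and worked examples.
    parts = [f"SYSTEM:\n{system}", "CONTEXT:"]
    parts += [f"- {doc}" for doc in context_docs]
    parts += [f"TASK:\n{task}", "EXAMPLES:"]
    parts += [f"- {ex}" for ex in examples]
    return "\n\n".join(parts)

prompt = build_prompt(
    SYSTEM_PROMPT_V2,
    ["user-stories.md", "db-schema.sql"],          # context files (hypothetical)
    "Implement the order-history endpoint with pagination.",
    ["GET /orders?page=2 -> 200 with 20 rows"],    # example / edge case
)
```

Refining a prompt then becomes editing one of these parts and regenerating, with the diff visible in version control.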

Read More »

When artificial intelligence started learning on its own

When I taught AI at the second dawn of artificial intelligence, around 2016, the curriculum defined two kinds of AI:
👈 The first was supervised learning: you collect a lot of (relatively speaking) data and label it, then train the AI on the labels so it can predict labels on new data. For example, labeled severity of security alerts was used to rank new alerts. At their core, LLM processes like ChatGPT are based on this process too, only on very large text corpora used to predict the next word in the model’s “stream of consciousness”.
👈 The second was unsupervised learning, where patterns are identified without guidance or prior labeling. For example, segmenting an organization’s customers by their characteristics.
👈 Both methods require relatively a lot of data, and when the objective function is unclear in unsupervised learning you have to work out what the partitions mean. But textual data is abundant, so ChatGPT kicked off a process that almost made people forget the need for other methods.
👈 In January 2025 a Chinese startup crashed Nvidia’s stock by using something called reinforcement learning squarely in LLM territory. The novelty was applying reinforcement learning to LLMs rather than to the usual domains where the method had been applied, typically robotics.
👈 The idea of reinforcement learning took off with a 2015 Google paper in which an AI agent learned to play computer games with no guidance at all. The method was simply to have the agent practice hundreds of thousands of games driven by reward and punishment; that is, the agent independently learned winning strategies because it was built to prefer winning over losing. Instead of relying on existing data, the agent scanned the game’s space of possibilities and reached an optimal strategy.
👈 A year later, in 2016, AlphaGo beat the world champion at Go using the same technology, and today the same method is used in biology for protein-structure prediction.
👈 Why the future lies in reinforcement learning, for several reasons:
🎓 Training data for models has reached saturation, to the point that synthetic data is being generated to train large models.
🎓 For models operating over a relatively small space of possibilities, the computational process of reinforcement learning is more efficient than data-based methods.
🎓 Where constraints are easier to define, the process enables fast learning, for example motion computations in robotics based on mechanical physics.
🎓 Thinking outside the box: mapping the solution space allows departing from the human activity patterns embodied in supervised training data. Lee Sedol, who lost at Go to the AI, said afterwards that the winning moves were original and not human!
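The reward-and-punishment loop described in the post can be sketched as toy tabular Q-learning on a five-cell corridor where only the rightmost cell pays a reward. This is purely illustrative and vastly smaller than the DQN/AlphaGo setups; the corridor, constants, and episode count are all invented for the sketch.

```python
import random

random.seed(0)
N, ALPHA, GAMMA, EPS = 5, 0.5, 0.9, 0.1  # corridor length, learning rate, discount, exploration
ACTIONS = (-1, 1)                         # step left, step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

for _ in range(500):                      # hundreds of practice "games"
    s = 0
    while s != N - 1:
        if random.random() < EPS:
            a = random.choice(ACTIONS)    # explore the space of possibilities
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])  # exploit what was learned
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0   # reward only at the goal cell
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned policy: the best action in every non-terminal cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
```

No labeled data is involved: the agent discovers the always-go-right strategy purely from the reward signal.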

Read More »

The 10,000 Hours Rule and the Quest for Better AI Training

My old hobby is martial arts, and MA is mainly about training… lots of training.
👉🏽 Remember the 10,000-hour rule that Malcolm Gladwell drafted, asserting that the key to achieving true expertise in any skill is simply a matter of practicing, albeit in the correct way, for at least 10,000 hours.
👉🏽 Well, now that AI-controlled humanoid robots are getting more popular, I remembered my old Sensei’s teaching: that 10,000 hours of bad training need more than that to undo the old habits.
👉🏽 This is particularly true for fine-tuning a model. For example, in the attached video you can see part of a training session of a punching robot via a VR motion-capture setup, yet the guy doing the training gives a very bad example of boxing.
👉🏽 Effective boxing is done from the legs, with hip movement, and this robot will never be able to do that. A better solution would be getting a better trainer, or adding force feedback and physical simulation to the training process.
👉🏽 Till then, I’ll keep using human training partners. https://youtu.be/wgthZ30kkLk

Read More »

LLMs (Large Language Models) are changing the medical landscape

It’s not the technology that is holding implementation back but, rightfully, the extensive regulatory constraints that surround medical decision-making and PII data. 𝗬𝗲𝘁 𝗲𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸𝘀 𝗮𝗿𝗲 𝘀𝘁𝗮𝗿𝘁𝗶𝗻𝗴 𝘁𝗼 𝗮𝗽𝗽𝗲𝗮𝗿. 𝗢𝗻𝗲 𝘀𝘂𝗰𝗵 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 𝗶𝘀 𝗠𝗘𝗗𝗜𝗖, 𝘁𝗵𝗶𝘀 𝘁𝗶𝗺𝗲 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗨𝗔𝗘. It measures five clinical dimensions of an LLM:
Medical Reasoning: This dimension focuses on the LLM’s ability to engage in clinical decision-making processes, encompassing interpreting medical data, formulating potential diagnoses, recommending appropriate tests or treatments, and providing evidence-based justifications for its conclusions.
Ethical and Bias Concerns: This dimension addresses the crucial issues of fairness, equity, and ethical considerations in healthcare AI. It examines the LLM’s performance across diverse patient populations, assessing for potential biases related to race, gender, age, socioeconomic status, or other factors.
Data and Language Understanding: This dimension evaluates the LLM’s proficiency in interpreting and processing the variety of data and language found in clinical settings, including understanding medical terminology and jargon, interpreting clinical notes, lab reports, and imaging results, and handling both structured and unstructured medical data.
In-Context Learning: This component examines the model’s adaptability and capacity to learn and apply new information within a specific clinical scenario, including incorporating new guidelines, recent research findings, or patient-specific information into its reasoning.
Clinical Safety and Risk Assessment: This dimension focuses on the LLM’s ability to prioritize patient safety and manage potential risks inherent to clinical settings, encompassing identifying and flagging potential medical errors, drug interactions, or contraindications.
Those dimensions were tested across four types of tasks:
Closed-ended questions: These assess the LLM’s comprehension of medical concepts and ability to provide specific answers, for example multiple-choice questions similar to those found in medical licensing exams.
Open-ended questions: These evaluate the LLM’s reasoning and explanatory skills in more realistic clinical scenarios, assessing the model’s capacity to synthesize information and generate appropriate responses without relying on predefined answer choices.
Summarization tasks: These gauge the LLM’s ability to process large amounts of medical data and generate concise, accurate summaries of clinical information.
Note-creation exercises: These test the LLM’s proficiency in generating coherent, accurate clinical documentation, including tasks like creating SOAP notes from patient dialogues or case information.
Ranking the models accordingly yields a preference ordering and a benchmark.
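A hypothetical sketch of how such a dimensions-by-tasks scorecard could be aggregated into a ranking. The dimension and task names echo the post, but the scoring structure and numbers are made up, not MEDIC's actual methodology.

```python
DIMENSIONS = ["medical_reasoning", "ethics_bias", "data_language", "in_context", "safety"]
TASKS = ["closed_qa", "open_qa", "summarization", "note_creation"]

def rank_models(scores):
    # scores: {model: {task: {dimension: score in [0, 1]}}}
    # Average every dimension score over every task into one mean per model.
    means = {
        model: sum(v for task in per_task.values() for v in task.values())
               / (len(TASKS) * len(DIMENSIONS))
        for model, per_task in scores.items()
    }
    return sorted(means, key=means.get, reverse=True), means

# Two fake models with uniform scores, just to exercise the ranking.
scores = {
    "model_a": {t: {d: 0.8 for d in DIMENSIONS} for t in TASKS},
    "model_b": {t: {d: 0.6 for d in DIMENSIONS} for t in TASKS},
}
ranking, means = rank_models(scores)
```

A real benchmark would weight dimensions (safety likely more than summarization) rather than average them flatly.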

Read More »

Replicating Success in Military Robotics with a Pre-Seed Company

👉🏽 Last week I visited a pre-seed company that is targeting a military robotics platform. They have a good strategy: “replicating” what the Chinese are doing, with Western proprietary IP, to avoid the risk of a Chinese-produced army of robots turning on its Western owners on instruction from the Chinese Politburo. So I took a look at Unitree, and this time at their humanoid robot G1.
👉🏽 From the looks of it, it’s more advanced than Boston Dynamics’, Figure’s, and Optimus. And then it dawned on me: they all do reinforcement learning using physical simulation.
👉🏽 If you run simulation learning on a similar generalized topology, i.e., n legs and m arms with y degrees of freedom, the trained ANN can be used for all similar robots; and if it is “open source” like Meta’s Llama, you can have atavistic movement built into your robot.

Read More »

Exploring AI Tools: From ChatGPT to Claude.ai and Beyond with Perplexity

I’ve been using chat-based LLMs since the testing days of ChatGPT, and I’m now a paying customer of claude.ai, since it proved to be better in Hebrew and much less verbose than ChatGPT (we in Israel like to keep it short). I’ve now learned of another tool called Perplexity, which is based on GPT-3.5 for the free version and can use other LLMs in the pro version. The main difference is that it is designed more like a search engine, able to bring fresh sources from the web rather than being stuck at the last model-training cutoff like ChatGPT. I think I’ll now test it against Google’s Gemini (where knowledge ends 😎). https://www.perplexity.ai/

Read More »

Some thoughts on GPT-4o (“The Story of O” is for Omni):

𝗦𝗼𝗺𝗲 𝘁𝗵𝗼𝘂𝗴𝗵𝘁𝘀 𝗼𝗻 𝗚𝗣𝗧-𝟰𝗼. (“𝗧𝗵𝗲 𝘀𝘁𝗼𝗿𝘆 𝗼𝗳 𝗢” 𝗶𝘀 𝗳𝗼𝗿 𝗢𝗺𝗻𝗶):
👉🏻 Voice-application ISVs should review their business models very quickly.
👉🏻 Video-analytics vendors and makers of devices that assist the blind (like OrCam) should do so as well.
👉🏻 UX designers: you have to catch up with multimodality. The user can have a conversation with the application, and the interaction is all-encompassing: voice and sentiment, video, and text, as well as user-driven interruption of the conversation flow (this is a big deal).
👉🏻 Cultural standard: this AI is so American in its responses that I see potential for fine-tuning it for different languages and cultures. It’s hard to believe the service, as is, could be adapted to a donation-soliciting robot at Rachel’s Tomb… 😎
👉🏻 Porn sites will be early adopters; even if they are blocked by OpenAI, open-source models will come soon, hopefully with some deepfake protection.
👉🏻 CAPTCHA and “I’m not a robot” testing are going to get much harder. This “thing” passes all the relevant Turing-test criteria (relevance; creativity; empathy; natural language use; ethical considerations). Verification will probably come to rely on identity truth providers. https://www.youtube.com/watch?v=kO9Jge1z7OU

Read More »

Rethinking Success: Jensen Huang on Low Expectations and Suffering

Every once in a while, I get some prescription for lifelong success. This time it’s Nvidia CEO Jensen Huang claiming that “low expectations” and “suffering” are the key to success. Since I’m into modeling, let’s draft this declaration in less Christian terms. In the previous century, Yale psychologist Victor Vroom drafted a theory of motivation called Expectancy Theory:
Motivation (Force) = Valence × Expectancy × Instrumentality
given that:
Motivation – the force driving toward success.
Expectancy – the belief that putting in the effort will result in improved performance.
Instrumentality – the belief that improved performance will lead to desired outcomes.
Valence – the value an individual places on the outcomes.
What Jensen is actually saying by “low expectations” is that there are no free lunches: you have to emphasize Expectancy and effort to get things done. As for “suffering”, Jensen is saying that you should keep believing in Instrumentality in spite of failures.
Nvidia CEO tells privileged Stanford graduates they need to lower their expectations and get used to ‘suffering’ in order to succeed in business
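Vroom's formula as a toy calculation; the 0-1 values here are illustrative, not measurements. The point of the multiplicative form is that if any factor drops to zero, motivation collapses to zero.

```python
def motivation(valence, expectancy, instrumentality):
    # Vroom: Motivation (Force) = Valence x Expectancy x Instrumentality
    return valence * expectancy * instrumentality

# High Expectancy (effort pays off in performance) and sustained
# Instrumentality (performance leads to outcomes, despite failures)
# keep the product, and hence the drive, from collapsing.
m = motivation(valence=0.9, expectancy=0.8, instrumentality=0.7)
```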

Read More »

“I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.” (Albert Einstein)

👉 The IDF is using a trebuchet to fling torches to clear terrorist hiding places in the bush along the northern border. Ukraine and Russia are utilizing weapons dating back to WWII and WWI.
👉 Operations Research was born during World War II as “a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control.”
👉 Today, because of the democratization of weapons, the Nash equilibrium in conflicts has moved from total war to limited war, and in limited war the process of winning is driven mostly by cost-effectiveness, i.e., Operations Research.
👉 If you can use cheap stones instead of rockets, why wait for World War III? https://www.youtube.com/watch?v=nH-nkCj7Ncg

Read More »

No Code vs. AI code generation for the Citizen Developer?

👉🏾 Citizen developers are non-IT professionals who create and customize business applications using low-code or no-code development platforms. They have little to no coding knowledge.
👉🏾 Letting non-programmers develop utilities and internal tools is a long-failed dream. There have been several attempts at it: application generators, declarative languages (someone said SQL?), and later no-code environments that let the user draw her whims.
👉🏾 I was designing an app to be deployed to a heterogeneous user base; some users were expected to have access to a developer, and some might use their 13-year-old kid to do the job.
👉🏾 Until now I was leaning toward a no-code front end, but I’m thinking again. LLMs are getting very good at creating code via tools like Cursor and the Claude Dev extensions, and the user interacts with them via chat, building the solution incrementally. My guess is that soon UML and BPMN (Business Process Modeling Notation) diagrams will be created after the fact, for management and documentation only.

Read More »

My personal AI riding experience – Process and tools to deal with the elephant in the room.

👉🏽 Nothing fancy, no automation, since I like driving with a stick shift. 😎
👉🏽 For general info on a subject, I use perplexity.ai, which gives an accurate summary of web search results with pointers to sources. No hallucinations there.
👉🏽 To get deeper into something and produce a brief (with a long 3-minute wait), I use Stanford’s STORM project, which generates an automatic structured article, very Wikipedia-like.
👉🏽 Then I take the main points and build a project on claude.ai and interact with it, including coding.
👉🏽 For math expressions I use the free ChatGPT and try to limit the usage, since it tends to lie just to make you happy.
👉🏽 If I need to create images, like the one here, I use the Copilot version that comes free with MS Office 365.

Read More »

Revolutionizing Feature Testing: Using Synthetic Personas for Efficient Feedback

When testing a new feature, it’s common to use focus groups or to run A/B tests: you show or test various implementation alternatives with relevant customers and collect feedback. This is costly, slow, and not part of your DevOps pipeline. A step in the right direction is creating synthetic personas: use prompt engineering to describe each persona’s character, present them the feature (using a multi-modal LLM if necessary, e.g., Figma screen prototypes), and then measure the feedback from this virtual crowd with some Monte Carlo simulation over the LLM parameters (like temperature). So before you prioritize the feature in the backlog, you get some feedback! You can also vary your existing customers’ profiles to assess impact on new markets, and I’ll bet you can also compute a rough implementation complexity as a setup for the planning meeting about the feature’s future. I’ll put a link to a sample implementation in the comments; it’s in the right direction but still embryonic.
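A sketch of the idea, with a stand-in stub for the model call: the personas, the fake rating logic, and the `ask_llm` stub are invented for illustration; a real setup would prompt an actual LLM with each persona and parse its feedback.

```python
import random
import statistics

random.seed(42)  # deterministic for the sketch

PERSONAS = [
    "Budget-conscious SMB owner with low tech literacy",
    "Power user who expects keyboard shortcuts everywhere",
    "Accessibility-first user relying on a screen reader",
]

def ask_llm(persona, feature, temperature):
    # Stand-in stub: a fake 1-5 rating, noisier at higher temperature.
    # A real implementation would send the persona + feature to an LLM.
    base = 3 + (len(persona) % 3) - 1
    return min(5, max(1, round(base + random.gauss(0, temperature))))

def simulate(feature, temps=(0.2, 0.5, 0.8, 1.0), runs=50):
    # Monte Carlo sweep: every persona rates the feature at several
    # temperatures, many times, and we summarize the virtual crowd.
    ratings = [
        ask_llm(p, feature, t)
        for p in PERSONAS for t in temps for _ in range(runs)
    ]
    return statistics.mean(ratings), statistics.stdev(ratings)

mean_rating, spread = simulate("inline checkout redesign")
```

The mean gives a rough signal for backlog prioritization; the spread hints at how divided the personas are, which is often the more interesting number.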

Read More »