Delivering Project & Product Management as a Service

Blog

“There are three kinds of lies: lies, damned lies, and statistics.”

Mark Twain, in his autobiography, attributed the saying to Britain’s Jewish Prime Minister Benjamin Disraeli as an example of why he didn’t like dealing with figures. It was the same Mark Twain who visited the Holy Land in 1869, before the Jewish people’s return to the land, and who wrote while riding from north to south through Israel: “There is not a solitary village throughout its whole extent – not for 30 miles in either direction. There are two or three small clusters of Bedouin tents, but not a single permanent habitation. One may ride 10 miles, hereabouts, and not see 10 human beings.” The first modern Jewish immigration back to Israel came just 13 years later, in 1882, and those early 25,000 settlers brought new life to this old barren territory. Many of those we now call Palestinians are work immigrants who came afterwards, as part of the new economic activity initiated by the Jews. Today, 140 years later, Israel has an economy that the Economist ranked 4th among developed countries in 2022, with more than 20% of its population being Israeli Arabs who share this prosperity. I have worked with many of them in the Israeli hi-tech industry, and also with Palestinian contractors from the West Bank – all good, productive, smart people. So how can a small, violent minority perform such atrocities against Jews? This question is not a new one; my grandfather asked himself the same question in 1940, when he and his family were sent by Nazi Germany to work camps. My interpretation is twofold: 1) Moral prejudice – Jews wrongly believe that the moral codes the Old Testament contributed to the world are universal and are followed by the surrounding cultures. This is the famous confirmation bias: the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one’s prior beliefs or values.
2) Underestimating tribal law – This type of economic thinking is common in the US and other democratic countries: why wouldn’t the Iraqis or the Afghans choose democracy, prosperity and freedom over corrupt, misogynist, violent tribal law? Because in a non-homogeneous tribal society, one cares first for family and tribe in order to survive. It took Europe a thousand years to get out of this paradigm, so why expect the Middle East to change so quickly? So, first we have a lie about the origin of the Palestinian population. Then we have the damned lie that the Palestinian problem would be solved by a “fair deal” that distributes the land while ignoring the tribal law of this violent neighborhood. And finally we have the statistics: after each disaster, the Jews become more prosperous – “But the more they afflicted them, the more they multiplied and grew. And they were grieved because of the children of Israel.” (Exodus 1:12). Remember the “buy low, sell high” law? With history as the proof, it’s a good time to invest in Israel’s economy.

Read More »

Timeboxing the war effort, and how to use AI to assist in tactical decision making?

We have war now in Israel, and so it’s hard to think about other things. Yet, to paraphrase Mark Twain’s saying that “God created war so that Americans would learn geography,” let’s see how war management follows the same agile rules as project management, and how modern AI can be used in it.

Read More »

Growing a technical team in the age of GPT / LLM / AI and how Link Analysis is related

Having worked in the software and information industry for quite some time now, I’ve had my share of arranging technical interviews and team building. Besides that, I worked for an HR testing and placement company for a while, so I’m familiar with the back-office aspects of recruitment, including competency testing and computing statistical similarity between applicants’ resumes and job descriptions. Usually the process follows these guidelines:
Define the job description.
Meet with HR to define together the profile of the candidate, and expose them to the softer aspects of the job and the team, so that the HR interview will take those aspects into consideration.
Publish the job internally and/or on job boards, and get ready to be flooded with resumes.
HR does the first screening using an ATS (Applicant Tracking System) that scores the relevance of the resume text against the job description. Out of hundreds of applicants, only the top-scoring candidates are contacted by HR for further screening interviews.
Those that pass that stage are invited to a professional interview and/or some testing out of a bank of psychological tests, and are sometimes given a professional challenge.
Those that survive that stage are given a payment proposal, and if they agree, they join the team.
This sorting process is broken in two places. The first is that ATS resume filtering is prone to low accuracy – regardless of how good the filtering algorithm is! The problem lies in the fact that most ATSs are not connected to the actual performance of the candidate once she’s on the job, so there is no positive feedback loop that corrects wrongly placed candidates. Nor is there a mechanism for identifying type II errors (false negatives), since you never know what you missed!
The second break line is the fact that the job description and HR evaluation, as well as the candidate’s resume, are very sparse on the actual data needed for a successful job placement, and there is always pressure to reduce the cost of recruitment. So the fit is very rough, and that’s the reason companies push a “bring a friend” bonus. Years ago, when developing a psychological application implementing test banks, I learned that the best predictors of job success are not GPA grades or creative logic problems, but the following traits:
Cognitive ability – Logical reasoning always helps, whether you are developing software or driving a truck.
Emotional stability – The Big Five personality test is a good predictor of how well one can cope with setbacks and a hard-nosed boss.
Being creative – The ability to deal with unforeseen challenges, originality, and the ability to solve problems. This covers all the things you don’t know about the job description, which the candidate will have to deal with.
Ability to change – Across all job types, if you’re able to change and adapt to changing conditions, you’ll survive. Again, this works even if the job description is bad, since the candidate will adapt.
Historical performance – If one did good in
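The ATS scoring step described above – statistical similarity between a resume and a job description – can be sketched with a plain bag-of-words cosine similarity. This is a simplified illustration, not any vendor’s actual algorithm; real ATS pipelines add stemming, stop-word removal, and TF-IDF weighting:

```python
import math
import re
from collections import Counter

def tokens(text):
    # Lowercase word tokens; real pipelines also stem and drop stop words.
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two bags of words (0.0 .. 1.0)."""
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Invented sample texts for illustration:
job = "Senior Python developer, SQL, REST APIs, agile team experience"
resume_a = "Python developer with SQL and REST API experience in agile teams"
resume_b = "Truck driver, ten years of long-haul logistics experience"

# The higher-scoring resume passes the screen; note that nothing here
# feeds back from on-the-job performance, which is exactly the flaw noted above.
assert cosine_similarity(job, resume_a) > cosine_similarity(job, resume_b)
```

However clever the scoring, the structural problem remains: the score is never reconciled against how the hired candidate actually performs.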

Read More »

To SQL or NoSQL, that is the question

Some History
In the beginning God created the heaven and the earth. And the earth was without form, and void; and darkness was upon the face of the deep (Genesis 1:2). Ages ago, serialization of data was the responsibility of the application programmer: you built the libraries that saved your data to disk in various file formats and indexes. Those were the days before ISO and ANSI, and there was an organization called CODASYL which tried to standardize those formats. Some relics of those data formats were alive until the mid-2000s, and I was involved in an archaeological software effort to move to a modern standard – only because the data was hardware-dependent, there was no hardware left to run the software, and even emulation of the hardware was being phased out. Then there were vendors who provided their own standard methods to save data – IBM’s ISAM and VSAM – so if you used their computing platform, you just reused their methods. This was circa 1970. It was clear that it would be helpful to find a standard method of mapping data and relationships into a model that could be standardized. Digital Equipment Corporation (DEC) produced a network-model database called DENDB on their VAX/VMS operating system in that period, and IBM provided a hierarchical database called IMS/DB. The key point was that all generic DB entities related to objects (entities) and the links or relations between them, and these products suggested ways to ease their organization. Then came Edgar F. Codd, the prophet of relational databases, who proved mathematically that a relational structure between table entities can be optimized to reduce duplication and enhance efficiency of access and reliability. Relational databases quickly caught on, and multiple implementations were created. Since you now had a standard way of managing data, a language called SQL was quickly adopted and soon became the prevailing data-access language. Fast forward to the new millennium, and Big Data came to be.
Everyone was saving and retrieving data – not just corporate workers, but anyone who could hold a smartphone – and applications had to deal with huge volumes of data. This was not the old mom-and-dad accounting data; it included pictures, sounds and all things related to people’s lives. And it had to be done fast, since users are free to change vendors, not locked in their cubicles getting paid to wait for the screen to render. Relational databases, although mathematically well defined, were breaking.
Time for a change
As with all programming, the problem is mapping reality into binary data. This may become simpler in the future, considering the Simulation Hypothesis, but we are not there yet, and for now we have to deal with analog reality. When modelling reality we must simplify complexity; in an RDBMS (Relational Database Management System) this translates to tables, but in other cases to other logical entities.
Key-value DB
In a KV DB the atomic entity is a record that holds a value and
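The key-value idea where the excerpt breaks off can be shown in a few lines. Here is a toy in-memory KV store (real ones, such as Redis or DynamoDB, add persistence, sharding and replication on top of the same model):

```python
# A toy key-value store: the "schema" is whatever the application puts in the value.
class KVStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

db = KVStore()
# Unlike a relational row, the value needs no predeclared columns:
db.put("user:42", {"name": "Dana", "photos": ["a.jpg"], "last_login": "2024-01-01"})
db.put("user:43", {"name": "Avi"})  # a differently shaped value is perfectly legal

assert db.get("user:42")["name"] == "Dana"
assert db.get("user:99") is None   # no value under that key
```

The trade-off versus SQL is visible even in this sketch: lookups by key are trivial and fast, but there is no declarative way to ask cross-record questions – that work moves back into the application.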

Read More »

Medical device product checklist – The place where OCD rules!

Preface
I’ve sampled the healthcare industry from several angles: from the administrative-services point of view, developing actuarial Risk Adjustment for US health providers; from the production side, designing a face-recognition application to reduce errors in medical treatments; being involved in the implementation and planning of a biometric solution to reduce the tracking effort of shop-floor workers in the pharma industry; and adopting a medical device that used a sensor to track vitals for bedridden patients and, via AI, calculate the patient’s various health parameters and risks. In all those activities, for various companies with different SOPs (Standard Operating Procedures), the common denominator was traceability – not quality, but traceability! W. Edwards Deming, one of the fathers of scientific management, said that quality comes not from inspection but from improvement. If you don’t have measurement and traceability of your processes, you can’t improve them, nor can you inspect or control them. Traceability is the ability to verify the history, location, or application of an item by means of documented, recorded identification. So it’s a log of the process used to develop the item, as well as of its outcomes.
Waterfall and agile, software and hardware – like water and oil
When dealing with interdisciplinary products we have a clash of cultures and different development speeds. In software you can, and probably do, use Agile or Scrum, as it has proven to be the most effective way to develop code. You can automate stuff with CI/CD and are practically unlimited by the laws of physics. In hardware, things are a bit different in terms of development rate, as well as in building a BOM that uses external resources with hard supply-chain and physical packaging constraints. The usual way to deal with those differences is to encapsulate the faster processes inside the slower ones – i.e.,
build a Gantt that shows the relevant cyclic activities, like sprints, within a container task. Since sprints are managed quite well within tools like Jira and MS DevOps, there is no need to track them in detail, unless there are dependencies with the slower process – for example, developing a driver for a hardware component. In that case it’s worthwhile to manage it together with the hardware development.
*Contained Agile within a waterfall Gantt – multiple iterations are shrunk into one activity if they have no outside dependencies.
Building traceability into the process
Traceability is defined as “the ability to discover information about how the product was made.” This means there should be an audit trail that documents the decision process and the activities that took place during the product’s lifecycle. There are several levels of decision making that can, and should, be dealt with differently:
Strategic – Document-driven processes that broadly define product direction, like the MRD (Market Requirements Document) and PRD.
Tactical – Documents that describe what is planned in terms of technical design documents and processes, like the SDD (Software Design Document) and electrical schemas.
Operational – How the
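The audit-trail idea above can be made concrete with a minimal sketch: an append-only log keyed by item, recording who decided what, at which level, and when. The item IDs, actors and field names here are invented for illustration, not taken from any specific SOP:

```python
from datetime import datetime, timezone

audit_trail = []  # append-only: records are added, never edited or deleted

def record(item_id, level, actor, action):
    """Log one traceable event; `level` is strategic / tactical / operational."""
    entry = {
        "item": item_id,
        "level": level,
        "actor": actor,
        "action": action,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    audit_trail.append(entry)
    return entry

record("SENSOR-01", "strategic", "PM", "PRD v2 approved")
record("SENSOR-01", "tactical", "HW lead", "Electrical schema signed off")
record("SENSOR-01", "operational", "QA", "Batch 7 test passed")

# Traceability = the ability to reconstruct the item's history from the log:
history = [e["action"] for e in audit_trail if e["item"] == "SENSOR-01"]
assert len(history) == 3
```

In a regulated medical-device context the real trail lives in a QMS with signatures and immutability guarantees; the point of the sketch is only the shape of the data: every decision, at every level, leaves a dated, attributable record.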

Read More »

Scramming to get Agile results from your team?

Take all rituals with a grain of salt. I’ve been learning and teaching martial arts for many years as a hobby, and there are similarities between doing that and leading products and projects. Martial arts teaching in most disciplines contains drills that enhance your flexibility (agility), as well as routines rehearsed multiple times to enhance your ability to react to different stimuli. In traditional martial arts those are called forms, or katas. Yet in “real life” martial situations, you rarely see those routines played by the book. The reason is that those forms are just learning aids, not tools to use in actual fights. Same with Agile and Scrum – they are not to be practiced as religions with real-life product teams. One has to understand that the Agile Manifesto was published in 2001 as guidelines for good software development; it describes principles, not the how-to. Scrum, on the other hand, traces back to a 1986 Harvard Business Review article, and in 2002 the Scrum Alliance defined the roles and workflows of what is now called Scrum as an implementation of the Agile principles. Scrum, as taught, has multiple rituals that take place per sprint – sprint definition, planning, the daily Scrum or standup, post-sprint lessons learned, and backlog pruning – defined to provide the beating drum of an iterative process led from the bottom up. New roles were defined to reduce hierarchy and break dependency on traditional roles: the Product Owner replaced the product manager role, focusing on representing the customer’s voice while prioritizing tasks in the backlog; Team Members are all the other developers and testers on the team, democratizing the development activities; and the Scrum Master acts as the high priest guiding the team through the intricacies of the methodology, replacing the customary project manager role and further flattening the hierarchy. However, following Scrum rituals blindly is like putting an Aikido master in an MMA ring – you’re bound to be punched in the face.
It’s comforting to have fixed habits and guidelines to follow instead of succumbing to the chaotic uncertainty of software development, but there are some drawbacks. Do you really want to engage the whole team in sprint planning? Time costs money, and not all people contribute to such a discussion – so they listen passively and, hopefully, work on code in parallel. Pick wisely whose time to use, and sync everyone later. Same for the daily Scrum: is it relevant to review all the tasks on the table, daily? Some tasks take longer, and some are not important enough to waste other people’s time listening to. Is the DBA really interested in the tasks of the UX designer? I’m all for setting the beat of the process, but select the right bongo. Lessons learned, the post-sprint review, and pruning are not to be neglected, yet isn’t it better for a leader to have the team point out improvements and problems on their own initiative? I feel that structuring this feedback loop is just not natural. Then

Read More »

PDLC – SDLC != NULL

SDLC – the Software Development Lifecycle – is a bottom-up framework for developing software, i.e. from requirements gathering through design, development, testing, deployment and maintenance. Since most product risks lie in the translation of market demands into requirements, and in the translation of requirements into features, the PDLC (Product Development Life Cycle) was created as a framework that wraps around the SDLC and adds:
1. Idea generation & market research before requirements definition, to reduce the first risk.
2. A/B testing, POCs, customer-success KPIs, and other verification methods to verify the fit of features to customers’ demands, to reduce the second risk.
Both methods are part of an ongoing trend that, years ago in the interdisciplinary industry, was called Concurrent Engineering – but why not use four-letter acronyms instead?

Read More »

OSINT

OSINT plays a major role now in the Ukraine war, both because of social networks and the fact that every smartphone is loaded with sensors. I’ve witnessed OSINT change since my first product experience, using printed newspaper data to enhance financial decision making on a trading station, and later adapting relevant technologies and products for the Israel Police intelligence branch. Now I’ve got this poster of Open Source Intelligence – look how the playing field has grown!

Read More »

PLG or not to be

Product-Led Growth is a business strategy focused on product performance that supports the PM’s tactical decisions. It came to be since software ate the world, and the open-source economy is producing products with zero direct price and positive value to users. In PLG you answer the Who, Where, Why and How questions differently from the classical way: 1. Who is buying your product? In a product-led approach, you target users, not buyers, since not all users will pay. 2. Where will they find out about your product? Because of the information overload on users, PLG relies on virality and word of mouth rather than traditional promotion strategies; specifically, satisfied users will share your product with friends and coworkers. 3. Why are they using your product? Your product should be more trustworthy, deliver more value, and have better UX than your competitors’. So you compare your product to competitors, not to your own organization or team – it’s not the resources that count but the outcome. 4. How are they buying your product? Users should become buyers within the product itself (in-app), or at least after experiencing the product first-hand, with a clear, easy payment solution rather than via sales reps. I strongly suggest supporting this strategy if possible.

Read More »

Reducing external risk in product development by using Chat GPT

Where I rant on corporate PM, while suggesting ways to reduce market risk by using raw GPT, with some ideas on how to go on from there.
The Legends
Steve Jobs used to say, “You’ve got to start with the customer experience and work back toward the technology – not the other way around.” Steve’s biography tells the real story of a tyrant who listened to no one and single-handedly created a market from imaginary needs in his head, not from non-existent customer experience. While Elon Musk says, “Any product that needs a manual to work is broken.” Yeah, sure – I’m confident SpaceX is not using manuals…
Mind the Gap
The point is that there is a huge gap between what is perceived to be the customer and the “real need,” as well as the need to take CEOs’ personal marketing hype with a grain of salt. Creating a product, especially an engineering one, is a creative endeavor, and the hard part is doing the creative process inside an organization. There is another gap between the lonely-artist paradigm, painting in his attic, and pushing technological innovation inside an organization, in an open-space environment. This is why large organizations are continuously buying new product companies and maintain investment bodies in the startup ecosystem.
Searching Under the Lamp
The issue is that corporate thinking is very structured, and HR thinks in terms of experience, roles, responsibilities and skills (sometimes even accounting for personal traits, like leadership?), while the product’s dimensions are measured in the knowledge domain for which it’s being developed (technology, market constraints, etc.) as well as the clients’ expected needs. This dimensionality mapping is why, where once there was a classical product manager – or a project manager, if the task was more bespoke and time-dependent – we now have: a Product Manager that is outbound-oriented (the marketing dimension); a Product Owner that is inbound-oriented, focused more on the how;
a Product Success Manager that is, well, like a product manager, but more oriented toward implementation and reception of the product on the client side; a PMO that is responsible for the non-functional management tools of the product/project within the organization; and a Project Manager that actually does the planning and hands-on monitoring of the activities. Oh, and then you have the Scrum Master, and whatnot, if you’re doing Agile. I’m not promoting maverick operations, but Fred Brooks’ The Mythical Man-Month pointed to the diminishing returns of adding resources to a software operation back in 1975 – so what is this pattern of multiple managers with reduced responsibilities? I think it originates from two causes. The first is that it’s hard to find one person equipped with all that is needed to manage every aspect of the product – and if you find such a rare person, he’s probably doing something better… The second is that products are getting more and more complex, despite the famous KISS principle (Keep It Simple, Stupid). This deviation from the KISS principle is partly due to the tendency of systems to get
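One concrete way to use raw GPT on the market-risk side, as the title promises, is to have it role-play customer personas before committing engineering time. The sketch below only builds the prompt; the persona fields and product idea are invented for illustration, and the actual call would go through whatever chat-model client you use:

```python
def persona_interview_prompt(product_idea, persona):
    """Build a prompt asking an LLM to critique a product idea in character.

    The persona fields here are illustrative; tune them to your own market.
    """
    return (
        f"You are {persona['role']}, {persona['context']}.\n"
        f"A vendor pitches you this product: {product_idea}\n"
        "Answer in character: 1) What problem of yours, if any, does it solve? "
        "2) What would stop you from buying it? 3) What would you pay?"
    )

prompt = persona_interview_prompt(
    "an AI copilot that drafts compliance reports",
    {"role": "a compliance officer at a mid-size bank",
     "context": "overworked and skeptical of new tools"},
)

# Send `prompt` to any chat model via its client library. Cheap synthetic
# interviews won't replace real customers, but they surface objections
# before a single sprint is spent.
assert "compliance officer" in prompt
```

The design choice worth noting: varying the persona (role, context, objections) across many prompts is what turns this from a toy into a crude, cheap stand-in for early customer discovery.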

Read More »

The God of dogs or how General Artificial Intelligence will affect Humankind

Asimov (1920–1992) wrote some of the best hard science fiction of the mid-20th century. His Foundation book series was brought to TV, but it lacks the colors my mind imagined as a kid. He wrote about robots and a multitude of other things, but he is most known for the three laws of robotics that he devised as part of his humans-and-robots sci-fi universe. https://www.youtube.com/watch?v=sitHS6UDMJc
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Geoffrey Hinton, one of the discoverers of the backpropagation algorithm – the backbone of most, if not all, artificial neural networks – has recently joined Elon Musk in voicing his fears of AGI (Artificial General Intelligence) for humankind. I’m attaching his interview with the MIT Technology Review. To make a long story short, he proclaims that LLM models are getting closer to human intelligence but are more efficient than the brain in the number of connections in the ANN (Artificial Neural Network), as well as being identical and replicable, so they can work on problems in parallel and communicate directly, without the effort required for a biological NN to learn a new subject.
He also mentions the ability to replace the physical infrastructure of the network and thus achieve an infinite lifespan. LLMs will be able to program themselves, so protecting those three rules in a Unix-like “kernel” that is not approachable by the AGI would be a fallacy, just like the paradox: “Could God create a stone so heavy that even He could not lift it?” When dealing with deities, one must assume that they will be able to lift any stone that we create. Asimov’s rules are just a work of fiction, like the rest of his sci-fi work. So where is our protection? I’m not that worried. Mankind is already playing with its own genome, and just like dogs, which adapted from wolves to serve their human masters, we will adapt too – as we have done over and over again with religions since the Stone Age.

Read More »

“success has many fathers, failure is an orphan”

In spite of “a success” having so many parents (single-sexed, so it seems), not much attention has been given to the definition of success. The PMI standard definition is the famous triangle: on budget, on time, and at the required quality. But this is simplistic – as a simple example, would you consider a product that matched all those KPIs and still failed to benefit the customer a success? Or a venture that missed all deadlines and costs and still managed to develop an economic way to send cargo to space (SpaceX) a failure? It seems that we have to consider several hidden dimensions (alas, fewer than the 11 hidden ones in modern physics) that control our definition of success. The first dimension is efficiency – did you indeed stand up to the time, budget and quality constraints? This is most notable in Production-type activities, where you’ve done those things before and have low uncertainty regarding technologies, customers and internal processes. Think about adding a new chewing-gum flavor as a new product for Wrigley, or a new feature on a SAP CRM platform: all the machinery or software pipelines are there, and all you have to do is fill a template as best as you can. However, there are activities with less certainty in terms of customer definition and needs. Those are the places where you ask yourself who the customer is and what impact you are striving to achieve. Products and projects that target a new market, or provide a new tool for existing customers, are Exploratory-type activities. For example, a company planning a healthcare B2C product will try Digital Persona Analysis to validate the needs and the value it provides to the user, engage in a POC, and maybe run for some time with an alpha non-paying customer. This is much more relevant than just plugging some KPIs into the value/quality triangle.
Those product/project activities are sometimes done within an organization holding a portfolio of activities, so the next question one has to ask herself is: if this is the case, what is the impact of the product on the company’s activities? A good example is a company that is adding new applications to increase salespeople’s productivity. In this case the value can only be measured on the organization as a whole; this type of activity is an Integral one and creates wide side effects on other activities. The last success dimension is whether the activity aims to prepare for the future. This is the classical R&D activity that targets new technology, future markets, or maybe adding a different line of services to the existing ones. This type of activity can for sure be part of a new venture or a startup, but can also be part of a large organization that is strategically looking toward the future. In those cases the measurement is actually how fast you adapt to feedback from the present and change direction toward coming needs. A good example is a retail

Read More »

My trip to CI/CD

Most DevOps engineers start their journey as application programmers and move into what is now software production engineering – the building of the software supply chain. Having first touched code decades ago, my view of software development is a bit historical. I remember visiting the Google campus a few years ago, touring the computer museum and feeling a bit of an archaeological relic myself. Walking there and seeing the Commodore 64 and the Apple II was just like walking the Museum of Natural History and seeing Jimmy, your pet dinosaur, on display. Historically, software engineering went from a Meister-based industry, in which the master programmer brought his own self-developed libraries to the project and developed the application in a very bespoke way, into an industrial venture in which multiple programmers assemble a software machine using preconfigured artifacts (parts) that follow a multitude of standards. Today, a software development project, or developing a software product, is mostly an industrial venture much like assembling a car, with various degrees of customisation. Being in an industrial environment brought back to the field the old skills of industrial engineering, now dealing with the software plant’s machinery. Today it’s called CI/CD (Continuous Integration / Continuous Delivery). In a way, the development cycle of the product was cut into manageable pieces by working in sprints – production batches, in the old language – and each piece of software is cycled through plan, develop, test, integrate (build). But now those activities are automated via triggers on the source-control system that deliver the artifacts via various pipelines through those stages, up to the end result of a built, working software.
Since, unlike a physical plant, software development is related to the laws of nature via entropy alone, you can actually deploy the software automatically and provide the users with new functionality with no physical delays. A good way to look at CI/CD engineering activities today is to go back to the old ways of industrial production-line planning:
Break down the product tree into components.
Draw the “virtual” pipelines through which each artifact flows.
Create the activity “machines” that should operate along each pipeline – these are mostly defined on the integration server: testing scripts, integration builds, etc.
Deploy the binaries and resources of the product to the binary repository / site / client.
No hands got dirty writing this post.

Read More »

Financial data analysis as part of crime for profit and fraud investigation

As part of the implementation of the FATF guidelines for a local AML (anti-money laundering) law, we were involved in the development of a countrywide financial data analysis system. It was a learning experience that we would like to share. Most investigations that deal with crime for profit require some analysis of the economic data gathered in the case. Such data can include credit card transactions when dealing with CC fraud, bank transactions when dealing with money laundering, and even transactions in property such as vehicles and real estate when trying to account for hidden assets as part of a layering scheme. We tested the forensic accounting software packages that were available at the time, and unfortunately they were not suitable for the job, for several reasons: We had to deploy the software organization-wide, and those packages were standalone and not scalable. Forensic accounting software is designed for accountants, and thus was too complicated for most investigative needs – not to mention that most users didn’t have the forensic accounting background to use it properly. So we had to manage a custom development effort. Software development risks increase greatly when you develop software for a business process and data that are new to the organisation, as was the case, so part of the risk mitigation was finding the right data model and integrating it into the proper existing business processes. We knew that financial and economic data is temporal and that transactions play a very dominant role in it, while we still wanted to be able to review and balance different types of assets for changing groups of suspects. We tested several DW (data warehouse) procedures, but since we didn’t have a known and fixed number of dimensions, we couldn’t use them.
We based our data model loosely on the REA accounting model (McCarthy 1982), so that all financial and asset data is mapped into a graph-based ontology where everything is transformed into the following entities: Resource – has value, or can own; Edge – ownership and other types of relational links; Economic Event – a temporal change of value or of an ownership link. Using these entity types enabled us to perform both temporal value-flow analysis and balance-sheet calculations for groups of entities. For example, users were able to view a monetary transaction that evolved into the purchase of an asset that was securing a loan for other entities. By calculating the asset balance for all entities, including those who got the loan, the users could get support for the assumption that the group of entities was operating a crime organisation. Financial data that is lawfully gathered during the investigative process is transformed via ETL into the REA-like schema, and we exposed that data for further analysis and exploration by the investigators. Because the model was very abstract, we had to realize the user interface and UX with proper names that are more common knowledge. We also implemented various algorithms like Benford analysis and more, in a very kiosk-like
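A minimal sketch of such an REA-like schema, with much-simplified entity fields (the production model was of course richer): economic events record value flows between owners, and a group balance is just the net flow into and out of the group.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative REA-like entities; field names are assumptions for the example.

@dataclass
class Resource:              # has value, or can be owned (Edge = ownership link)
    name: str
    value: float

@dataclass
class EconomicEvent:         # temporal change of value or ownership
    when: date
    source: str              # entity giving up value
    target: str              # entity receiving value
    amount: float

def balances(events, entities):
    """Net value flow for a chosen group of entities (a mini balance sheet)."""
    total = 0.0
    for e in events:
        if e.target in entities:
            total += e.amount
        if e.source in entities:
            total -= e.amount
    return total

events = [
    EconomicEvent(date(2020, 1, 5), "A", "B", 1000.0),  # wire transfer A -> B
    EconomicEvent(date(2020, 2, 1), "B", "C", 400.0),   # loan from B to C
]

print(balances(events, {"B"}))  # 1000 in, 400 out -> 600.0
```

Because the same event list drives both the temporal view (sort by `when`) and the balance view (sum over a group), the two analyses the post describes fall out of one schema.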

Read More »
Money

Building a Banking Governance system API

Banks are extremely ordered organizations. One gets to be like that when dealing with other people's money and charging a retainer for those services, all while employing skilled workers on average wages. The banking industry has evolved since the Middle Ages in a very Darwinian way, so some of its risk mitigation procedures are worth having a look at. One of the leading banks in Israel has a very rigorous authorization scheme in which employees are given strict internal permits for various actions. For example, if a teller had finished credit course stage IV, she was entitled to extend credit up to a certain sum to certain clients. The bank was interested in formalizing this authorization procedure and asked us to design a governance system to enforce those rules. Since banking is a mature and conservative industry, the IT is extremely heterogeneous and ranges from IBM mainframes to Unix-based trading systems and up to Microsoft-based clients and branch servers. So the rationale was to implement a central engine that would expose an API to the core banking applications, allowing all verification to be done in one place and administered centrally. As part of the approval process, the risk controlling office required core banking apps to call the authorization API during each controlled business process. At first the design looked like a classical implementation of a rule-based engine, a BRMS (Business Rules Management System), yet the organisation wasn't keen on inserting yet another technology into its already large bag of technologies, so we were constrained to use existing legacy infrastructure. Fortunately, a closer look at the administered rules and authorizations revealed that they had a very similar syntactic structure, which implied composite IF-THEN-ELSE sentences with numerical or enumerated constraints.
This structure enabled us to implement the data structure on a relational DB (Sybase) and implement the API as simple DAL calls to the database. The only user interface was an administrative one for entering the structure of the rules and constraints. A notable mention is the hardware architecture, which was designed to be fail-safe, since most of the front-office data-entry systems depended on the system's performance. We also let calls that failed on timeout continue with pending authorizations.
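A toy illustration of the idea: the rules become flat rows, as they might sit in a relational table, and authorization is just evaluating the constraints for an action. Column names, actions, and thresholds here are hypothetical; the bank's actual rule syntax was of course different.

```python
# Hypothetical rule rows: (action, attribute, operator, constraint).
# In the real system these lived in Sybase tables; here they are tuples.
RULES = [
    ("grant_credit", "credit_course_stage", ">=", 4),
    ("grant_credit", "amount", "<=", 50_000),
]

OPS = {
    ">=": lambda a, b: a >= b,
    "<=": lambda a, b: a <= b,
    "==": lambda a, b: a == b,
}

def authorize(action, context):
    """IF every constraint for the action holds THEN approve ELSE deny."""
    for rule_action, attr, op, constraint in RULES:
        if rule_action == action and not OPS[op](context.get(attr, 0), constraint):
            return False
    return True

teller = {"credit_course_stage": 4, "amount": 30_000}
print(authorize("grant_credit", teller))  # True
```

The appeal of this shape is that adding or tightening a rule is a data change, administered centrally, not a code change in every core banking application.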

Read More »
OSINT salad

WEBINT / OSINT and the semantic salad

It's very noticeable that we are undergoing some new changes in enterprise computing. Some would say it's the cloud computing buzz, while others would say it's the fact that consumers now have more computing power than before, while it's mobile and close to their fingertips. Basically these are different views of the same world. Fast communications, cheap chips and universal standards (and habits) have made data creation and data consumption extremely easy and cost effective. As a result, we now have more data created outside the organizational walls that is relevant for decision making. In commercial enterprises this data is usually applicable to customers and potential markets, or to R&D of new technologies, but in GRC (Governance, Risk management & Compliance) based organisations, this new and extra data can be extremely useful in adding new insights. A customer's background can be easily verified by exploring social networks and public personal space, and thus know-your-customer policies can be better enforced. This can be achieved with customer consent or even without it. It seems that "Tell me who your friends are and I'll tell you who you are" was never more true. Sentiment analysis, which is the process of aggregating trends in public social space, can be used not just for measuring brand recognition and campaign effectiveness, but also for recognizing inside-information leaks from publicly traded companies or finding hot-spots of public unrest, whether geographically or semantically oriented. Mashing up this outside data with organisational data can be achieved more easily using mature semantic technologies. The usual way of doing a mash-up is implementing an ETL (Extract, Transform, Load) process from one data source to another. In the case of multiple external data sources which change frequently, this process is extremely work intensive and requires mapping from one physical source to another.
When making this mapping via semantic association, one can reduce the mapping workload, since ontologies can provide rules that associate family-name fields with surname fields. Web Intelligence (WEBINT) and Open Source Intelligence (OSINT) have come of age, and although semantic technologies still look exotic to the standard world of Information Technology, they are here to stay. Providing solutions based on those technologies is easily adapted to a SaaS (Software as a Service) model, since those systems deal with external data anyhow, and internal data can be selectively anonymized. Software engineering is a young discipline. When I learned it, it was more of an art form where guru artisans were building their own object libraries and carrying them from project to project. There were no QA teams, and information security standards meant that you had to have a magnetic card to get access to the computer room. Now, of course, things are different, and when planning a software or data project you have to choose between various ecosystems and navigate between industry standards; but still, while not an infant, SW engineering is rapidly evolving. Real estate, on the other hand, has been dealt with since the Sumerians and Egyptians discovered geometry to measure plots and orient buildings. So one major difference is historical depth. The other issue is physics. When dealing with software and data, you're not dealing with physics, your
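The family-name/surname association mentioned above can be illustrated with a tiny synonym "ontology". Field names and synonym sets are invented for the example; a real system would draw these rules from a proper ontology rather than a hand-written dictionary.

```python
# Illustrative sketch: synonym sets standing in for ontology rules that
# associate source fields with a canonical target schema.
SYNONYMS = {
    "surname": {"surname", "family_name", "last_name"},
    "given_name": {"given_name", "first_name", "forename"},
}

def map_field(source_field):
    """Return the canonical target field for a source field, or None."""
    key = source_field.strip().lower().replace(" ", "_")
    for canonical, variants in SYNONYMS.items():
        if key in variants:
            return canonical
    return None

record = {"Family Name": "Twain", "First Name": "Mark"}
mapped = {map_field(k): v for k, v in record.items()}
print(mapped)  # {'surname': 'Twain', 'given_name': 'Mark'}
```

The workload saving comes from writing the association rule once and reusing it across every external data source, instead of mapping each physical source by hand.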

Read More »
error 2000 message

Cyber fad vs. y2k Bug

I'm old enough to remember fondly the 2000 Bug fad, when the whole IT industry was busy trying to keep legacy code from crashing due to the millennial change. I know retired Assembly programmers who returned from the nursing homes to collect very generous consulting fees, but January 1st 2000 was a quiet morning. It had me thinking about IT market memes and the need for marketing guys to hang their branding on them. I'm not sure Y2K was all fad, but it had one vital flaw: it was limited in time, well, until Y3K at least. The Cyber fad is not. Like the Y2K bug, cyber crime is all about people. In Y2K you were fighting crappy code that didn't consider a change in the most significant date digits; in essence it was a crime of negligence. In cybercrime you fight malicious code written with criminal intent. People will keep on being lazy, and some proportion of the population will always be more likely to commit a crime. But in terms of IT resource allocation, cyber crime is not limited by date or geography, and because of that it will be a long-term avenue for the crime economy. In 2012 there was a report on a malware toolbox called Blackshades, which for $40 enabled my grandma to hack into computers. It took two years for law enforcement agencies to take the commercial site offline, but you can still get the tools and use them. A great booming economy is born. In this case it was a shame that the FBI and Europol take-down of the Blackshades software factory was shortsighted. Police forces should have continued the operation, using the code-base to honey-trap malicious users.

Read More »
a graph

When to outsource Project Management?

A project is a temporary endeavor undertaken to create a unique product, service or result. A project is temporary in that it has a defined beginning and end in time, and therefore defined scope and resources (PMI).

The law of small numbers

If a project must have a beginning and an end with resource allocation between those points, then there is an effort distribution along its life-cycle. This effort distribution is usually an aggregate of separate phases of the project, from initiation to planning, execution, monitoring and closure. But the main problem is that it's highly stochastic in nature, since by definition, if the project is unique and temporary, then you don't have a lot of statistics to draw from.

The PP dance – Product or Project?

One way to mitigate this issue is to try to productise your projects – mainly, minimize the one-off parts that are needed for customization and turn the bulk of the effort into a product, since a product can be made automatically and repetitively with a smaller margin of error. A good example is Amdocs, which deploys large projects over a large product infrastructure, or for that matter any CRM vendor. The other way is to employ resources with extensive experience in the project domain, namely people who have sampled reality for quite a while and got a feel for the type of effort needed for this rare activity. Those are seldom cheap, and you have to keep a flow of projects to keep them. Even if your organization has a constant inflow of projects, it's hard to smooth the effort in the long term. It's the usual obese-or-starve situation – either sales is pushing gold-plated projects into the backlog, or there are periods of transition with redundant resources, low activity, and layoffs (Figure 1: costs in red, revenues in blue). One solution is to outsource activities, thus quickly matching supply with demand and cutting costs.

How far can you outsource?
It's quite common to outsource standard activities to external entities. Those can be companies that specialize in certain verticals, from UX on one side to cleaning services on the other. However, there are several hidden costs to outsourcing. The first is the setup cost – one has to account for the time spent choosing the vendor and contracting, as well as knowledge transfer. Then there are communication costs, as outsourced groups usually talk a different "English", have a time difference, and come from outside the cultural background of the business. Lastly, there is the possibility of loss of control, either through vendor incompetence or just because there are different interests, other priorities, or other tasks competing for the vendor's resources. Outsourcing large parts of the project is mandatory in very large projects, like those in the security market or in civil engineering, since the scope is usually larger than what one vendor can fulfill by itself. Those projects are either led by the prime contractor

Read More »
Dining plates in the dining area and open laptop. There is a zoom meeting

How Covid19 did to IT what Bug2000 did 20 years earlier

…as seen from the point of view of a knowledge worker who experiences the world through the monitor and has been working with remote resources for years.

Bug 2000 and the Year 2K problem

In the late '90s there was growing discomfort with the centennial change. This was based on the realistic assumption that old archaeological code, in existence since the '60s, stored only the two least significant digits of the year due to structural limitations in data, as well as on the all-too-human assumption that it would be an NMP (not my problem). Between 1998 and 2000 the "Year 2000 problem" grew from a discomfort to a major itch, and then to a real problem, as IT managers and CIOs scrambled to get resources to solve the issues – old COBOL and mainframe Assembly programmers were recruited from retirement, and huge IT budgets were allocated to solve the problem. I remember the hype was that planes might fall out of the sky and elevators drop due to unattended old code. To make a long story short, on January 1st 2000 there was no major issue, except for some Norwegian train delays. I'm not saying that the Year 2K problem was invented or imagined – only that it was used by the computer industry to get budgets, and those budgets, besides dealing with date issues, were utilized for modernization of IT applications and other positive side effects.

The Covid19 / Corona 2020 problem

What do a modern-day epidemic like Corona (the 2019 Covid virus) and the historical Y2K problem have in common? Nassim Taleb in his book "The Black Swan" describes it as: "What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme 'impact'.
Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable". However, not all black ugly ducklings turn into swans. Some are just ugly ducklings, and those are distinguished from swan chicks as follows. First, they are perceived as outliers, yet they are simply not frequently experienced, or are merely complex to explain, and are presented as outliers by those who are extremely risk averse or just have an interest in doing so. Second, their expected impact (probability times risk) is high, since decision makers are not risk takers, especially in the health and IT industries, where they are mostly cost centers and have an incentive to blow up risks in order to get bigger budgets. Third, 20/20 hindsight is nothing new, but that's because we don't fear the past. In short, no dinosaur-extinction meteorite here… yet. So, Covid19, just like Y2K, is a FUD (Fear, Uncertainty & Doubt) based sociological process in which a rare event is interpreted, non-maliciously, by decision makers so that in hindsight they will not be

Read More »
Fold paper on a light green background

Network driven innovation

Preface

I've had the opportunity to work for organizations ranging from start-ups to multinationals and government organizations; some of them with a history exceeding the human life span, and some that died in their infancy. In all of them the question of innovation was relevant to some degree. In startups it was a matter of survival: you either honed your MVP against the market until success, or you had to change paradigm. In larger organizations the challenges are different, because of the ratio between the circumference of the organization that is in touch with the market and the volume of the organization – just as in chemistry, the SA/V ratio controls the speed of the reaction.

How to speed up innovation

Besides resizing and restructuring to multiples of Dunbar's Number, the real issue is increasing the reaction speed. In organizations, just like in organisms, there are mechanisms for sensing the environment. The dominant ones are financial and sales based – sales is responsible for the foraging strategy while finance is the metabolic digestion of the organizational animal. However, in large organizations they work at a yearly/quarterly beat, which is all too slow. In smaller organizations this is less of a problem, because those formal mechanisms are side effects of not-so-formal activity, and you naturally have a quicker organizational innovation metabolism.

What is innovation

David J. Hughes et al. describe innovation as follows: "Workplace creativity concerns the cognitive and behavioral processes applied when attempting to generate novel ideas. Workplace innovation concerns the processes applied when attempting to implement new ideas. Specifically, innovation involves some combination of problem/opportunity identification, the introduction, adoption or modification of new ideas germane to organizational needs, the promotion of these ideas, and the practical implementation of these ideas".
This means that innovation has three components: problem/opportunity identification; introduction of new ideas to solve that problem; and promotion and implementation of the right idea. So how to speed up innovation? Problem/opportunity identification is automatic when it's personal. The ideation process is generally more productive when done in groups, so there has to be some process of gathering people with the same interests, and allocation of time for collecting ideas and prioritizing them. Promotion and implementation require recruiting stakeholders and allocating budget, which are usually outside of the domain space (i.e. you have to get the money from outside the group).

Grouping

If we look at organizations as networked entities where the nodes are employees, we can see that in network terms we have a community or a cluster. The nodes in the cluster are interconnected by communication links that can be mapped via APIs like the Microsoft Graph for business interactions, the Facebook Graph for leisure connections, or the Google Knowledge Graph for more general entities.

Problems / Needs

For knowledge workers and digital organizations, one can deduce from the network topology and content both the communication connectivity between nodes as well as the content subjects (knowledge entities). Those do not necessarily coincide within a well-defined organizational business
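The grouping idea can be sketched minimally: treat employees as nodes, communication links as edges, and let connected components stand in for the communities described above. Names and links are made up for the example; real data would come from an API such as the Microsoft Graph.

```python
from collections import defaultdict

# Hypothetical communication links between employees (undirected edges).
links = [("alice", "bob"), ("bob", "carol"), ("dan", "erin")]

graph = defaultdict(set)
for a, b in links:
    graph[a].add(b)
    graph[b].add(a)

def communities(graph):
    """Return connected components, a crude stand-in for community detection."""
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            n = stack.pop()
            if n in group:
                continue
            group.add(n)
            stack.extend(graph[n] - group)
        seen |= group
        groups.append(group)
    return groups

print(sorted(sorted(g) for g in communities(graph)))
# [['alice', 'bob', 'carol'], ['dan', 'erin']]
```

Real community detection would use modularity-based algorithms rather than plain connectivity, but the principle – deducing groups from the link topology rather than from the org chart – is the same.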

Read More »