Artificial intelligence, machine learning, and digital twins — why are we hearing so much about them and why do they suddenly seem critical? The simplest explanation is this: When something is too complex for a human to easily process or there is too little time for a human to make a critical decision, the only choice is to remove the human. That requires the ability to replicate the thought process a human might go through, which requires a lot of data and a deep understanding of the decision environment.
So why now? For decades, we saw huge advancements come primarily from the integration and shrinking of electronics. Smaller products, consuming less power, and offering dramatic increases in functionality per square inch were the hallmarks of technology progress.
Software applications have also evolved over the decades, most notably through the dramatic acceleration of the application adoption cycle. In the past two decades alone, users have shifted at remarkable speed from treating applications as novelties, to using them as a convenience, to expecting them to work flawlessly all the time. At each adoption stage, user expectations rise, meaning the product must evolve and mature at very fast, scalable rates.
The combination of these hardware and software trends has produced a convergence of product development requirements. New "critical need" applications must suddenly deliver greater real-time processing capacity, time-sensitive decision-making, high to very high availability, and platform-generated decisions that are correct every time.
While most people think of AI primarily as an end-user resource, AI has become necessary for faster product design and development. From the earliest stage of a chipset design or circuit layout through end-product validation, emulators have become necessary for building complex interfaces and environments. These emulators, known as digital twins, are virtual manifestations of a process, environmental condition, or protocol, capable of serving as a "known good signal". In test terms, a digital twin can be a simple signal generator, a full protocol generator, or a complete environment emulator. Digital twins allow developers to rapidly create a significantly wider range of test conditions to validate their product before shipping. High-performance digital twins typically contain their own AI engines for troubleshooting and regression-testing new product designs.
AI-Driven Development and Digital Twins
The shift to AI-driven development and digital twins has become necessary due to the amount of functionality and autonomous decision-making expected in new products. Basic design practice specifies a product's features and functionality, then sets up tests to validate them. The sheer number and complexity of interface standards make that virtually impossible to construct by hand. By using digital twins, a much wider set of functional tests can be programmed in much less time. AI functionality then automates test processes based on what it discovers and predicts actions that might be needed. To understand this better, it's useful to understand the core of what makes any AI possible.
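As a concrete illustration, the simplest form of digital twin described above, a known-good signal generator, might look like the sketch below. All names, the sample rate, and the tolerance are illustrative assumptions, not part of any vendor's product.

```python
import math

def known_good_signal(freq_hz=50.0, sample_rate=1000, duration_s=0.1):
    """Generate a reference sine wave: the twin's 'known good signal'."""
    n = int(sample_rate * duration_s)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate) for t in range(n)]

def validate(device_output, reference, tolerance=0.05):
    """Compare a device's output against the twin's reference, sample by sample."""
    return all(abs(d - r) <= tolerance for d, r in zip(device_output, reference))

reference = known_good_signal()
# A device under test whose output tracks the reference within tolerance passes.
print(validate(reference, reference))  # True
```

A full protocol generator or environment emulator follows the same pattern, with the reference being a message sequence or a set of environmental conditions rather than a single waveform.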
In its simplest form, software decision-making starts with algorithms. Basic algorithms run a set of calculations, and if you know what constitutes acceptable results, you can create a finite state machine using decision-tree outcomes. This would hardly be considered intelligent. By adding a notion of state, however, and inserting a feedback loop, your basic algorithm can make outcome decisions a function of the current conditions compared to the current state. Combine this with evolving the decision tree into a behavior tree, and you have formed the genesis of AI.
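The progression from a stateless decision tree to a stateful feedback loop can be sketched in a few lines. This is a minimal illustration under assumed names and thresholds, not a specific product's logic:

```python
# A stateless algorithm: a pure function of its inputs, a fixed decision tree.
def stateless_decision(reading, threshold=100):
    return "alarm" if reading > threshold else "ok"

# Adding a notion of state plus a feedback loop: each decision now depends on
# current conditions compared to the current state, and feeds the next decision.
class StatefulDecider:
    def __init__(self, threshold=100):
        self.threshold = threshold
        self.consecutive_high = 0  # internal state carried between decisions

    def decide(self, reading):
        if reading > self.threshold:
            self.consecutive_high += 1
        else:
            self.consecutive_high = 0
        # Escalate only on a sustained condition, not on a single spike.
        if self.consecutive_high >= 3:
            return "alarm"
        if self.consecutive_high > 0:
            return "watch"
        return "ok"

d = StatefulDecider()
print([d.decide(r) for r in [50, 120, 130, 140, 60]])
# ['ok', 'watch', 'watch', 'alarm', 'ok']
```

The stateless version would have alarmed on the very first spike; the stateful version distinguishes a transient from a trend. A behavior tree generalizes this further by composing many such stateful nodes into prioritized, reusable branches.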
The need for AI and digital twins is real, and when you question the veracity of one (yours or someone else's), go back to its genesis, otherwise known as the data. Data sources are the foundation of any digital assessment tool, and those sources determine the ceiling of an algorithm's modeling accuracy. If multiple data-rich sources are available, the accuracy potential is high. If only basic data is available, the resulting algorithm or digital twin will not be accurate. This is something you can assess yourself.
Here are steps to assess the potential of any AI algorithm or digital twin:
- Make a crude drawing of the closed-loop decision process (inputs, condition considerations, outputs) that the AI is supposed to replicate, or the environment the digital twin is supposed to emulate. Write out as many variables as you can brainstorm. Don't spend more than 30 minutes on this step.
- In the case of an AI algorithm, look at the data sources the vendor is claiming to use. In the case of a digital twin, look at the system performance specs and background knowledge of the vendor. Their collective depth is proportional to the algorithm’s potential. This might take an hour or two of research.
- Based on what you learned in Steps 1 and 2, ask the vendor lots of questions. The clarity, or lack of it, in their answers will rapidly shape your understanding.
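The three steps above can also be captured as a simple structured checklist to work through during a vendor review. The field names and time budgets here are illustrative assumptions drawn from the steps, not a standard:

```python
# Illustrative record of the three assessment steps for an AI algorithm
# or digital twin; fill in findings as you complete each step.
assessment = [
    {"step": 1,
     "task": "Sketch the closed-loop decision process or emulated environment",
     "artifacts": ["inputs", "condition considerations", "outputs", "variables"],
     "time_budget_min": 30},
    {"step": 2,
     "task": "Review the vendor's claimed data sources or system specs",
     "artifacts": ["data sources", "performance specs", "domain background"],
     "time_budget_min": 120},
    {"step": 3,
     "task": "Question the vendor on gaps found in Steps 1 and 2",
     "artifacts": ["clarity of answers"],
     "time_budget_min": None},  # open-ended
]

for item in assessment:
    print(f"Step {item['step']}: {item['task']}")
```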
We are at the early stage of AI, which means lots of products will be making lots of claims. Understanding what a product is supposed to deliver will allow you to assess it. Understanding which data sources it processes will tell you how accurately it can deliver the results the vendor promises. Digital twins are much further along in maturity, especially those that emulate specific elements rather than entire ecosystems. Remember, though, that the more bounded the environment, the more likely the digital twin is to replicate it accurately.
We all want to understand how something works and how it produces its outcomes. With a grasp of the basic elements inside every AI system and digital twin, you can ask informed questions about their fundamentals. If you get stuck, use the steps above as a guide for questions to ask the vendor. Most will share all or some of the key background or parameters to help you understand. If they don't, their competitors will.