In exploring the expertise gap, we find a word that represents both a reason the gap exists in the first place and a way to start building the much-needed bridges.
That word is model.
What is a model?
We can find the word model used in physics, statistics, mathematics, engineering, architecture, sociology, artificial intelligence, machine learning, and simulation. While different in nuance and detail, the commonality between all the uses is that a model is a smaller, more manageable representation of a larger system.
All models represent some chosen aspects of the larger system and leave other aspects out. As George Box supposedly said: "All models are wrong, but some are useful."
Consider the abstract example of a model of a circle. In a Cartesian x-y coordinate system with the center of the circle at (a, b) and a radius of r, you can model a circle as:
(x - a)^2 + (y - b)^2 = r^2
But that does not look like a circle to most people. If we look at how a computer draws a circle, we get a wide variety of useful wrongness.
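To make that "useful wrongness" concrete, here is a minimal sketch of one way a computer might draw a circle: sampling points along the parametric form and snapping each to the nearest integer pixel. The function name and details are illustrative, not a description of any particular graphics library.

```python
import math

def rasterize_circle(a, b, r, n_points=16):
    """Approximate a circle centered at (a, b) with radius r by
    sampling n_points along the parametric form and snapping each
    point to the nearest integer pixel coordinate."""
    pixels = []
    for k in range(n_points):
        theta = 2 * math.pi * k / n_points
        x = round(a + r * math.cos(theta))
        y = round(b + r * math.sin(theta))
        pixels.append((x, y))
    return pixels

# Almost every pixel is "wrong" (it does not lie exactly on the
# true circle), yet together they form a perfectly usable circle.
print(rasterize_circle(0, 0, 5, n_points=8))
```

Each snapped pixel violates the equation (x - a)^2 + (y - b)^2 = r^2, but the overall picture is exactly what we wanted: a model that is wrong in detail and useful in practice.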
You may be familiar with the idea that the process of Supervised Machine Learning outputs a type of model. In this case the model is a piece of software derived from the original data set provided during AI training. This software is then tested against previously unanalyzed data.
In Deep Reinforcement Learning (DRL), the kind of learning at the heart of Microsoft Project Bonsai, a model (sometimes called a policy or agent) is trained to control a system by taking actions and learning from the results. Some choose to train these agents in the real world, which can be very slow and, in industrial scenarios, dangerous.
At Bonsai we train agents against simulation models. In this case the model is a simulation of a system, which means the agent trains much faster and more safely while exploring many more scenarios than are practical in the real world.
Training against more scenarios means the resulting agent is more robust.
Why is speed so important for training AI?
As we saw in the previous section, simulation models are central to training agents in this next evolution of AI.
Training AI takes a lot of data. More specifically, it takes a lot of the right data, delivered in the right way and at the right time. With
supervised learning, you provide your data upfront and fully curated (tagged and sanitized) for the learning algorithm to process into a model.
With DRL, the data used in training the AI is the response from the simulation model, in the form of state data plus a reward metric that indicates how well the AI is performing. To get the scale of data needed for effective training, we need the simulation to be fast and accurate, on the order of hundreds of iterations per second for a single run.
A fast simulation means we can get a large amount of data in a short amount of time. This is especially true when using the scale-out parallelism enabled by public clouds.
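The state-and-reward loop described above can be sketched in a few lines. The simulation below is entirely hypothetical (a toy tank whose level should stay near a setpoint), and the random "agent" stands in for a real DRL policy; the point is only to show the two signals the training loop consumes and why raw simulation speed matters.

```python
import random
import time

class ToyTankSim:
    """A hypothetical, minimal simulation model: keep a tank's level
    near a setpoint by choosing an inflow rate. The names and
    dynamics are illustrative only, not from any real product."""
    def __init__(self, setpoint=50.0):
        self.setpoint = setpoint
        self.level = random.uniform(0.0, 100.0)

    def step(self, inflow):
        # Simple dynamics: constant outflow, agent-chosen inflow.
        self.level += inflow - 2.0
        # The two signals DRL training consumes each step:
        state = self.level
        reward = -abs(self.level - self.setpoint)  # closer is better
        return state, reward

sim = ToyTankSim()
start = time.time()
for _ in range(100_000):
    action = random.uniform(0.0, 4.0)  # a real agent would choose this
    state, reward = sim.step(action)
elapsed = time.time() - start
print(f"about {100_000 / elapsed:,.0f} simulation steps per second")
```

Even this trivial model runs far faster than real time, and many copies can run in parallel in the cloud, which is what makes it practical to expose an agent to vastly more scenarios than the real world ever could.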
There are many routes to simulation
You may already have a simulation model that you use within your business, for example for planning new infrastructure or for training human operators.
But what if you do not have a simulation model?
Don’t worry! There are many routes to building one for your purpose. Two common approaches are:
Use a dedicated software package. You can use existing software to build a simulation model from first principles or discrete events. The simulation can be derived from physical equations that describe the dynamics of your system or modeled based on events that define your process. Example applications are Matlab Simulink, Anylogic, Siemens Amesim, Ansys Fluent, VPLink, and many more.
Evaluate historic data. You can curate existing data to define the distributions or structure of the simulation using dedicated software or custom code written in an AI-friendly language like Python.
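As a minimal sketch of the second route, the snippet below resamples the empirical distribution of some hypothetical historical data (illustrative inter-fault gaps from imagined maintenance logs) to generate synthetic scenarios. A real project might instead fit a parametric distribution, or use dedicated simulation software; every name and number here is an assumption for the example.

```python
import random

# Hypothetical historical data: observed minutes between machine
# faults, as might be curated from maintenance logs (made-up values).
observed_gaps = [12, 15, 9, 30, 22, 15, 11, 45, 18, 14]

def simulate_faults(horizon_minutes, history):
    """Generate synthetic fault times by resampling the empirical
    distribution of historical inter-fault gaps (a bootstrap-style
    approach), up to the given time horizon."""
    t = 0
    fault_times = []
    while True:
        t += random.choice(history)  # draw a gap from the history
        if t > horizon_minutes:
            return fault_times
        fault_times.append(t)

random.seed(0)  # deterministic runs aid debugging and testing
faults = simulate_faults(8 * 60, observed_gaps)
print(f"{len(faults)} simulated faults in an 8-hour shift")
```

Each run of the simulation produces a plausible, distinct scenario grounded in the data you already have, which is exactly the variety an agent needs to train against.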
Useful AI is an ensemble of techniques
As you track the evolution of AI, you can see the emergence of the next wave of useful AI: one that combines subject matter expertise and human collaboration through Machine Teaching. In the article linked at the start, we read that: "Machine teaching attempts to bridge the gap between the algorithm, the data, and the experts by providing experts in different fields with ways to break down problems into steps, which they could then translate into a system that machine learning algorithms can understand and learn from."
Riding this oncoming wave successfully will require a variety of models and an ensemble of training techniques.
At Bonsai we think quality AI training should combine best-in-class solutions in supervised learning, mathematical modelling, programming, and deep reinforcement learning. Bonsai "brains" learn from skilled subject matter experts and best-in-class simulated environments so they can take their place beside human operators to drive the next level of business value.