Spotting opportunities to build AI systems that complement, not outright replace, people on the job.
When people talk about AI-based automation, they often speak of outright replacing people who perform a task. Given the current capabilities of AI, I think it's more complicated than that. This is why the single "all-encompassing AI solution" strikes me as too ambitious at the moment. We'd do better to build AI-based systems that work with a person, to make them more effective at performing a given task.
Still, finding solutions in this vast space of human/AI interaction is easier said than done. I've started to develop a set of lenses through which to see business operations, to help me identify and evaluate opportunities.
I'll share three of those today. I call them "exoskeletons," "sidekicks," and "blinking lights."
Software-based automation is suitable for tasks that are dull, repetitive, and predictable all at once. People are unlikely to enjoy this kind of mind-numbing work. In some cases the monotony makes the job more error-prone, similar to highway hypnosis when driving. From that standpoint, we see a clear dividing line: use software for rote tasks and involve a human for anything else.
Well, that was an easy decision until AI came along. Compared to software's hard-coded rules, AI-based automation is built on uncovering patterns in data to make decisions. That's why I sometimes refer to this as probabilistic automation. AI automation disrupts that traditional software/human work split because its capabilities fall somewhere in-between.
But where, specifically, does it fall along that range? AI is overkill for hard-coded rules, but it's also too limited to completely replace a person.
What we deem a straightforward task for a human being -- "drive a car," "tell me if you see something weird," "decide whether this post constitutes harmful content" -- is a complicated process that involves a lot of different skills and experiences. Getting a machine to do that reasonably well will require not just a single model that exhibits strong performance, but strong engineering skills to stitch several models together such that they can act in concert. You'll also need policies and human intervention for those times the model is wrong.
That presents a formidable challenge. Choosing model performance metrics for real-world projects can be more of an art than a science, as you must account for business, ethical, and regulatory matters above and beyond the purely mathematical idea of "better." And getting models to act as a team can lead to issues of complication and complexity. Complication, because this requires sorting out which model takes priority in a given situation. Complexity, because mixing multiple models can lead to weird, unintended connections and side-effects.
This is why we need to develop AI systems that work alongside people, to lighten their cognitive load and free them up for tasks that require more detail, more nuance, and wider real-world experience. Such systems are force multipliers: tools that permit fewer people to accomplish a greater amount of work.
You may recognize exoskeletons as a staple of sci-fi: mechanized frames that wrap around a person to grant them extra strength and protection. It may take several people working in concert to lift a heavy object, but thanks to an exoskeleton's motorized arms, one person can juggle several such objects with hardly any effort. Like their frame-on-human counterparts, AI-based exoskeletons extend a person's capabilities.
Consider the process of discovery in a legal case. This usually requires a team of people to sift through hundreds or even thousands of documents to find those which may be relevant to the matter at hand. Seen through an AI lens, discovery is a machine learning classification problem masquerading as a search problem. A suitably trained e-discovery system can do the heavy lifting to parse thousands, even millions of documents in short order and flag the potential matches. You still need people to review the results, but you'll need far fewer people to do so.
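To make that classification framing concrete, here's a minimal sketch using scikit-learn. The documents, labels, and 0.5 review cutoff below are illustrative placeholders, not a production e-discovery pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Train on documents a legal team has already labeled, then flag
# likely-relevant documents from the larger corpus for human review.
# (Hypothetical data; a real system needs far more labeled examples.)
labeled_docs = ["...contract amendment terms...", "...cafeteria lunch menu..."]
labels = [1, 0]  # 1 = relevant to the matter, 0 = not relevant

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(labeled_docs, labels)

unreviewed_docs = ["...indemnification clause...", "...parking lot notice..."]
scores = model.predict_proba(unreviewed_docs)[:, 1]

# Humans still review the flagged set -- just a far smaller one.
for doc, score in zip(unreviewed_docs, scores):
    if score >= 0.5:  # illustrative cutoff; tune against review capacity
        print(f"Flag for attorney review (p={score:.2f}): {doc[:40]}")
```

The exoskeleton shape is visible in the last loop: the model narrows thousands of documents down to a reviewable pile, and people apply judgement to what remains.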
Compared to blinking lights and sidekicks, exoskeletons are the most obvious force multipliers because they directly reduce human labor. They also take the most human direction.
A sidekick is a subtle twist on the exoskeleton. In this scenario, the human and AI split the work by tackling different yet complementary parts of a single task.
In warehouse automation, "co-bots" (mentioned in this Vogue Business piece) move around the warehouse and bring items to a human. The human is responsible for packing those items into boxes.
This spares a human from having to run around the warehouse, grabbing items to be packed. It also spares the AI from having to sort out the physics and spatial understanding required to properly box the items. Each participant can optimize for what they do well, instead of trying to stretch into a role for which they aren't suitable.
Blinking lights, in the sense of AI-based automation, are a set of extra eyes. They are discerning-yet-dispassionate observers that track activity and tell a person, "hey, I've found something that requires your experience and judgement."
(You've already experienced some blinking lights in the form of your car's "check engine" light and its highway lane-drift detection.)
Blinking lights are force multipliers in that they spare a human from having to constantly confirm that everything is in order. They flip that script, and only get the human involved when something seems odd.
Instead of one person staying glued to screens watching security feeds, for example, an AI-based system could detect motion and ask a human for additional review. The human is free to tackle other, less-monotonous work until they get an alert. They also decide what (if any) action to take -- maybe the wind moved some tree branches -- so you can afford some level of error in such a system.
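As a rough sketch of how such a system might hand work to a human, here's a frame-differencing loop built on OpenCV. The video file name, pixel threshold, and alert_human() function are all hypothetical placeholders to adapt to your own setup:

```python
import cv2

MOTION_PIXELS = 5_000  # how many changed pixels count as "something happened"

def alert_human(frame):
    # Placeholder: a real system might page an operator or queue the
    # frame in a review dashboard.
    print("Possible activity detected -- requesting human review")

cap = cv2.VideoCapture("security_feed.mp4")  # hypothetical recorded feed
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)  # what changed since the last frame?
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > MOTION_PIXELS:
        alert_human(frame)  # the human decides: wind in the trees, or an intruder?
    prev_gray = gray
```

Note that the system never acts on its own; it only raises its hand. That's what makes the tolerance for false alarms so forgiving.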
These AI-based blinking lights lend themselves well to adaptive systems that can, over time, tune the definition of what counts as anomalous, alert-worthy activity. Trained on enough historical data, such a system could detect seasonality to adjust alert thresholds based on time of day or year.
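Here's one way that tuning might look, sketched with pandas: learn a per-hour baseline from historical event counts, then alert only when current activity stands out for that hour. The file name and column names are hypothetical:

```python
import pandas as pd

# Hypothetical history of activity counts, one row per timestamp.
history = pd.read_csv("event_counts.csv", parse_dates=["timestamp"])
history["hour"] = history["timestamp"].dt.hour

# Learn what "normal" looks like for each hour of the day.
profile = history.groupby("hour")["count"].agg(["mean", "std"])

def is_anomalous(timestamp, count, sigmas=3.0):
    """Alert only when activity is unusual *for this time of day*."""
    baseline = profile.loc[timestamp.hour]
    return count > baseline["mean"] + sigmas * baseline["std"]

# 120 events at 3 AM may be alert-worthy even if it's routine at noon.
print(is_anomalous(pd.Timestamp("2024-07-01 03:00"), 120))
```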
Not all blinking lights involve immediate action. If you stretch the time scale, you'll see opportunities for systems that look for "interesting" connections.
Consider a system that pulls together a lot of data, checking for correlations and anomalies. (Granted, not all correlations imply causation. Then again, sometimes the correlation is all that matters.) In the world of algorithmic trading, systems scan massive amounts of historical stock market data for heretofore-unnoticed price oddities that indicate possible arbitrage opportunities. A human can then decide whether to incorporate that in a trading strategy.
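As a sketch of that idea, the following pandas snippet scans a set of price series for unusually strong pairwise correlations and surfaces them for human review. The file name and the 0.9 cutoff are arbitrary assumptions:

```python
import pandas as pd

# Hypothetical table of daily prices, one column per instrument.
prices = pd.read_csv("prices.csv", index_col="date", parse_dates=True)
returns = prices.pct_change().dropna()

# Flag pairs whose returns move together suspiciously closely.
corr = returns.corr()
flagged = []
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if abs(corr.loc[a, b]) > 0.9:  # arbitrary "interesting" cutoff
            flagged.append((a, b, corr.loc[a, b]))

# The system only suggests; a human decides whether any pair
# belongs in a trading strategy.
for a, b, c in sorted(flagged, key=lambda t: -abs(t[2])):
    print(f"{a} ~ {b}: correlation {c:+.2f}")
```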
In some ways, an exoskeleton is the counterpart to a blinking light: the former waits for a human to send it off on a mission, whereas the latter scouts ahead and informs the human of possible work to do.
As you look around your company, you can use the lenses of exoskeletons, sidekicks, and blinking lights to spot opportunities for human/AI interaction.
You may notice that these lenses will identify opportunities that are a far cry from fully-autonomous systems. That is true! The systems you uncover will be just a few AI-based steps beyond the traditional, rules-encoded-in-software automation. That is precisely why they are all well within reach, why they will be easier to test, and why using them can pose less risk than a fully-autonomous system.