Continuous AI Adapts to a Changing World

Over the past several years, AI has gone from obscurity to headline news, but not always for the right reasons. While AI systems have matured from science experiments to vital business-as-usual tools, not every AI project lives up to the hype. “Despite the promise of AI, many organizations’ efforts fall short…only 8% of firms engage in core practices that support widespread adoption,” says McKinsey.

While many stories of embarrassing AI failures involve unfair bias, there’s a more common but less recognized barrier to AI-driven success: many AI systems were not designed for resilience in a world that keeps changing.

The current generation of AI, called narrow AI, is based upon pattern recognition. It isn’t truly intelligent in the way humans are intelligent. It has no common sense, no general knowledge, and is incapable of critical thinking or logical reasoning. It is not as “cognitive” as marketers would have you believe.

A Changing World

The strength of modern AI is detecting patterns within historical data and using those learned patterns to make informed decisions on new data from the present. The unspoken assumption is that the world never changes—that what you learned from the past remains relevant today. It’s a bit like steering a car by looking out the rear window.

But the world is continually changing. People change over time. Fashions change with the seasons. Fads and crazes come and go. Economies cycle through boom and bust. New technologies, products, and services are developed. Society is in a state of constant flux, shaped by these forces.

For example, in March 2020, AI-driven supply chain management systems failed to predict the panic buying of toilet paper and antiseptic wipes. The AI systems had not been trained on data that included pandemics. Similarly, load balancing systems used by telecommunications companies to route data through their networks didn’t foresee changes in data usage triggered by work-from-home and videoconferencing trends. And AI-powered human resources systems were not prepared for the Great Resignation of 2021.

It is unrealistic to expect AI systems to predict the unpredictable. None of us expected COVID-19. But it is reasonable to expect AI systems to be resilient, to identify and adapt to changing circumstances. After all, if an AI system never learns, it is not “intelligent.”

The Vital Need for AI Governance

AI is not truly autonomous. It is a type of computer system created by humans, for humans. Humans design AIs. Humans build and train AIs. Humans deploy and run AIs. Humans are ultimately responsible for AI system behaviors.

One of the most popular AI governance architectures is human-over-the-loop. One or more people in the organization are responsible for policies, system authorities, and model validation. They create the rules and procedures to ensure that an AI system behaves consistently with your organization’s values, business rules, regulations, and goals.

But humans can fail too. We have cognitive biases and limits on our cognitive capacity. For example, the IKEA effect is a cognitive bias that causes data scientists to overvalue AI systems that they have personally built. Status quo bias and the sunk cost fallacy cause us to fear the risk of updating an AI system more than the risk of leaving an underperforming AI system in production. And sensory gating causes our brains to filter out information that isn’t novel, resulting in a failure to notice gradual data drift or slow deterioration in system accuracy.

Best practices in AI governance acknowledge the comparative strengths and weaknesses of humans and computers. They take advantage of the rule-based consistency of computer systems and their ability to do mathematics and data manipulation at scale, 24 hours a day without tiring. And they take advantage of our human strengths: common sense, general knowledge, domain knowledge, critical thinking, ethics, and creative problem-solving.

Continuous Learning

Continuous learning is an exciting new product feature, available with version 8.0 of DataRobot. 

With DataRobot MLOps, you already have automated monitoring with a notification system. You can configure proactive notifications to alert you when the service health, data drift status, model accuracy, or fairness falls outside your defined acceptable levels.

  • Service Health tracks metrics about a deployment’s ability to respond to prediction requests quickly and reliably.
  • Data Drift assesses how the distribution of data changes across all features.
  • For Accuracy, the notification conditions relate to a performance optimization metric for the underlying model in the deployment.
  • Fairness notifications alert you when a production model is at risk of not meeting predefined fairness criteria.
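
To make the data drift idea concrete, here is a minimal sketch of one widely used drift statistic, the Population Stability Index (PSI), in plain NumPy. The function name, bin count, and the 0.2 alert threshold are illustrative assumptions, not DataRobot’s implementation; DataRobot computes its drift metrics under the hood.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the production distribution of one feature (actual)
    against its training-time distribution (expected) by binning
    both on the training-time bin edges."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Clip production values into the training range so nothing
    # falls outside the bins.
    actual = np.clip(actual, edges[0], edges[-1])
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor tiny proportions to avoid log(0) in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training = rng.normal(0.0, 1.0, 10_000)    # feature values at training time
production = rng.normal(0.5, 1.0, 10_000)  # production values with a mean shift
psi = population_stability_index(training, production)
drifted = psi > 0.2  # a common rule-of-thumb alert threshold
```

A PSI near zero means the production distribution still matches training; values above roughly 0.2 are often treated as a signal that the model is seeing data it wasn’t trained on.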

Proactive monitoring alerts use the strengths of computers (data processing, mathematics, and always-on availability) to overcome the sensory gating shortcomings of humans. Rather than relying upon a person to regularly check system metrics and discover issues, proactive monitoring lowers the cognitive load by reducing the amount of information a person must process, alerting the operator only when a problem exists.
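
As a sketch of the idea, this alerting pattern reduces to a few threshold checks that stay silent while everything is in range. The metric names and threshold values below are hypothetical, for illustration only; in DataRobot you configure acceptable levels through the UI rather than in code.

```python
# Hypothetical metric names and thresholds, assumed for illustration.
THRESHOLDS = {"data_drift_psi": 0.2, "accuracy": 0.85, "p95_latency_ms": 500}

def check_deployment(metrics):
    """Return alert messages only for metrics outside their acceptable
    levels, so a healthy deployment generates no notifications at all."""
    alerts = []
    if metrics["data_drift_psi"] > THRESHOLDS["data_drift_psi"]:
        alerts.append(f"Data drift: PSI {metrics['data_drift_psi']:.2f}")
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        alerts.append(f"Accuracy dropped to {metrics['accuracy']:.2f}")
    if metrics["p95_latency_ms"] > THRESHOLDS["p95_latency_ms"]:
        alerts.append(f"Service health: p95 latency {metrics['p95_latency_ms']} ms")
    return alerts

healthy = {"data_drift_psi": 0.05, "accuracy": 0.91, "p95_latency_ms": 120}
drifting = {"data_drift_psi": 0.31, "accuracy": 0.78, "p95_latency_ms": 120}
assert check_deployment(healthy) == []  # silence when everything is in range
```

The operator sees nothing until a metric crosses its threshold, which is exactly the cognitive-load reduction described above.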

The Japanese word kaizen means “change for the better” or “continuous improvement.” As a business philosophy, kaizen treats improvement in productivity as a gradual, methodical process of continuously refining operations.

Continuous learning takes MLOps to the next level, introducing kaizen to your AI system. When a notification is triggered, DataRobot not only contacts the operator but also proactively trains a challenger model on the latest data and compares the challenger model’s performance to that of the currently deployed (champion) model.

Champion/challenger comparison reports take advantage of the contrast effect, a cognitive bias that alters our perception of two things when we compare them side by side, heightening their differences. Contrasting overcomes sensory gating limitations, empowering AI system administrators to use their human strengths (common sense, general knowledge, and critical thinking) to validate the challenger model and decide whether to authorize it to become the new champion.
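
A minimal sketch of such a side-by-side report, assuming both models are scored on a shared holdout set and using plain accuracy as the comparison metric (DataRobot’s actual reports use the deployment’s configured optimization metric):

```python
import numpy as np

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def challenger_report(y_true, champion_pred, challenger_pred):
    """Summarize both models on the same holdout so a human reviewer
    can contrast them side by side before promoting the challenger."""
    champ_acc = accuracy(y_true, champion_pred)
    chall_acc = accuracy(y_true, challenger_pred)
    return {
        "champion_accuracy": champ_acc,
        "challenger_accuracy": chall_acc,
        "delta": chall_acc - champ_acc,
        "recommend_promotion": chall_acc > champ_acc,
    }

# Toy holdout: the challenger fixes some of the champion's mistakes.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
champion = np.array([1, 0, 0, 1, 0, 0, 0, 1, 1, 1])
challenger = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1])
report = challenger_report(y_true, champion, challenger)
```

The recommendation is just an input to the human reviewer; whether promotion is automatic or gated depends on the governance process you choose.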

You get the ability to automatically build, operate, and improve the quality of all your production models, all of the predictions they generate, and, ultimately, all of the AI-powered decisions you make. And you can do this continuously and on autopilot—an evolving system, continually learning and constantly improving itself. Only DataRobot can do this and scale it across your entire organization.

As always, choose the AI governance processes that match your circumstances. The champion/challenger approach supports existing human-in-the-loop model validation processes for high-risk use cases (e.g., healthcare treatment protocols) and regulated industries (e.g., banking). For low-risk use cases in unregulated environments (e.g., content recommendations in streaming media), you have the option to authorize DataRobot to update models automatically, without human intervention.

Get Started with Continuous AI Today

If you already have DataRobot, get started with Continuous AI. This new capability is part of our AI Cloud 8.0 release and is available for all editions and all deployment options, both on-premises and in the cloud.

Continuous AI works with your existing models and deployments and is accessible from the top-level navigation in the DataRobot UI. No additional licenses are required to get started; just enable the feature in your settings.

In the DataRobot Community, find videos and articles explaining Continuous AI in more detail to help you get started. Contact us to request a personal demo.

About the author

Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.


