CLSA Community Meetup: A CLSA guide to Process AI (June 2022) Recording + handout + Q&A
Harnessing big data is key to optimizing processes. Without real-time insights, processes can become slow, stale, and inefficient. Many organizations have invested in AI – yet have left it disconnected from the processes it needs to impact – creating delays in acting on insights.
To unleash your data, you need a process automation engine with enterprise scale machine learning predictions, event processing, natural language processing, and decision arbitration – all built in. Combining case management, back-end integrations, automation, and AI seamlessly in a single, low-code platform accelerates time-to-market and unlocks next-level workflow optimisation.
Process AI is an exciting new feature of Pega Platform that supports this and lowers the threshold for using AI and machine learning in a wide range of cases, such as predicting and avoiding SLA breaches, preemptively addressing delays and service issues, and routing work more effectively. This webinar, conducted on June 28 and 30 by Ivar Siccama (Senior Product Manager, Machine Learning & AI, Product R&D), shows how to add AI to a process, discusses common use cases, and sheds light on how it works under the hood.
In this post you will find the recording of the webinar (including the live Q&A of both sessions), the slides as a downloadable PDF, and the written Q&A of the session (below the recordings).
The first video is the full recording of the session. To make navigation easier, the table below lists time points for the different agenda items so you can jump to the topic of your interest. Note that there is a live Q&A after each use case as well.
| Time | Topic |
| --- | --- |
| 00:00 | Introduction to Process AI |
| 10:27 | Use case 1: Event stream triage |
| 29:25 | Use case 2: Reduce fraud and straight-through processing |
| 1:03:02 | Use case 3: Predict missing SLA |
| 1:27:41 | Licensing & Pega Cloud services |
| 1:31:21 | Final Q&A |
The video below is used to explain use case 2, around fraud prediction:
The video below is used to explain use case 3, around case completion:
The Q&A of the session
From which Pega version are the Process AI capabilities available?
They are available from Pega Platform 8.6.
For custom case management predictions, can we only use data classes and not work classes?
Yes, this is intended behaviour; the best practice is to create a clean data class for decisioning that includes all the predictors and to embed it in your case type.
Are there any differences between the models created for Case Mgmt and CDH?
In terms of mechanics and how they work, there are no real differences. The main difference is the outcome being predicted (e.g. fraud versus churn). When predicting case completion, in case management we automatically create an adaptive model for all stages, whereas in CDH you typically create a model for every action.
Do LSAs need to learn H2O.ai modeling basics?
No need to learn them. Models are typically developed by data scientists. The only thing you need to know is what a model is and how to use it as part of a prediction. Details can be found in this academy challenge and corresponding module: https://academy.pega.com/challenge/importing-predictive-models/v3/in/34746
Ideally, who should create the H2O.ai models: an LSA, an LDA, or a data scientist?
The actual model creation and prediction creation would typically be done by a data scientist. The usage of it is either by an LSA (in the process itself) or by an LDA, say, in a decision strategy.
Can we use custom models?
Yes. You can leverage custom models from H2O.ai, import models in PMML format, or even leverage model services like Amazon SageMaker and Google ML. More about this in the mission on Pega Academy: https://academy.pega.com/module/creating-predictive-models/v1/in/34746
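As a rough illustration of the "bring your own model" route (a sketch, not the exact hand-off a project would use), the snippet below trains a model with H2O.ai's Python API and exports it as a MOJO artifact that a data scientist could hand over for import into a prediction; the file name, predictor columns, and target are hypothetical, and PMML from another toolkit would follow the same idea.

```python
# Illustration only: training a model with H2O.ai and exporting it as a MOJO artifact.
# The file name, column names, and target below are hypothetical.
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()
claims = h2o.import_file("claims_training_data.csv")        # hypothetical training data
claims["is_fraud"] = claims["is_fraud"].asfactor()          # treat the target as a class label
predictors = ["amount", "channel", "customer_segment"]      # hypothetical predictor columns

model = H2OGradientBoostingEstimator(ntrees=50)
model.train(x=predictors, y="is_fraud", training_frame=claims)

# The exported MOJO (a .zip file) is the artifact that would be imported into a prediction.
mojo_path = model.download_mojo(path=".")
print("MOJO written to:", mojo_path)
```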
What are typical process AI use cases for CDH applications?
The Process AI use cases are distinct from CDH use cases. This will become clearer as we cover the three use cases today. You also create separate predictions for case management (Process AI) and for 1:1 Customer Engagement (CDH).
Sometimes there is overlap between a Lead Decision Consultant and a Lead System Architect. Will the role in future be a blend of the two?
You are absolutely right. We are creating new enablement content that we encourage both an LDA and an LSA to take, so that they better understand the bigger picture, even if the actual work would be done by one of the roles. See: https://academy.pega.com/mission/preparation-11-customer-engagement-implementation/v1 (note this is for 1:1 Customer Engagement and NOT Process AI though)
Is there any way we can use Process AI in a Pega chatbot to determine the dynamic response of users and produce suggestions/results accordingly? If yes, can you please suggest one sample use case?
Yes. We do cover this in the Pega NLP mission. For details please check https://academy.pega.com/mission/pega-nlp-essentials/v1, specifically https://academy.pega.com/module/using-entity-extraction-chatbot-channel/v1/in/39911
Does the data model of the data type that acts as input need to be exactly the same as the one expected by the prediction? Is there any way to do mapping to keep the prediction's input data model independent of the application data model?
The data model is where the input of a prediction is derived from. Depending on where the prediction is created, the input values are taken from that context.
How are the input parameters mapped for the predictive model execution from the Case?
On the predictive model rule form, there is a tab where you can map the predictors to the properties from the claim class (or subject class). We support nested pages too.
Sometimes, for cost reasons, it might not be possible to have both an LDA and an LSA on a project. Is that why the LSA might also need to know about AI, i.e. intelligent automation with AI?
It really depends on your use case and project, but an LDA may not be required at all. As discussed, the actual models are in general developed by a data scientist, but nothing stops an LSA from learning how to develop a predictive model. Leveraging the models in a case is a task for an application developer or LDA.
Do we have the capability to modify the adaptive pattern or algorithm, or is it standard and based only on predictors?
Currently the adaptive models use a Naive Bayes algorithm, and we are working on a new technique, Adaptive Gradient Boosting, which will be released soon. These models learn online, from every incoming response.
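For intuition on what "learning online from every response" means, here is a minimal sketch of an online Naive Bayes learner. It is a toy illustration, not Pega's adaptive model implementation, and the predictor names and outcomes are made up.

```python
# Toy online Naive Bayes: each captured response updates per-predictor counts,
# so the model adapts continuously without retraining from scratch.
from collections import defaultdict

class OnlineNaiveBayes:
    def __init__(self):
        self.outcome_counts = defaultdict(int)                     # outcome -> count
        self.value_counts = defaultdict(lambda: defaultdict(int))  # (predictor, value) -> outcome -> count

    def learn(self, predictors: dict, outcome: str) -> None:
        self.outcome_counts[outcome] += 1
        for name, value in predictors.items():
            self.value_counts[(name, value)][outcome] += 1

    def propensity(self, predictors: dict, positive: str = "Completed") -> float:
        total = sum(self.outcome_counts.values()) or 1
        scores = {}
        for outcome, count in self.outcome_counts.items():
            score = count / total
            for name, value in predictors.items():
                seen = self.value_counts[(name, value)]
                # Laplace smoothing so unseen predictor values do not zero out the score
                score *= (seen[outcome] + 1) / (count + 2)
            scores[outcome] = score
        norm = sum(scores.values()) or 1
        return scores.get(positive, 0.0) / norm

model = OnlineNaiveBayes()
model.learn({"channel": "web", "amount_band": "high"}, "Completed")
model.learn({"channel": "web", "amount_band": "low"}, "Missed")
print(model.propensity({"channel": "web", "amount_band": "high"}))
```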
If there are multiple case types in an application, do we need specific models built for every case type? Also, can the outcome of one case type be used as an input to the model in a different case type?
For the case completion use case it is required to have a prediction per case type, as the class of the case would be different. However, for other templates where the predictions are created in an embedded data type, they can be case-type agnostic.
Bring your own model means Predictive models, right?
Yes, it refers to the different avenues through which we can bring in predictive models built by the customer's data scientists.
Doesn't having too many case info predictors as input reduce the accuracy of the adaptive model?
No. The adaptive model decides which predictors are relevant (active) and which will be classified as irrelevant (inactive).
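As a rough illustration of how weak predictors can be sidelined rather than harming the model, the sketch below scores each predictor's individual performance (AUC) against the outcome and flags low performers as inactive; it is a toy example with made-up data and a hypothetical threshold, not the actual adaptive model logic.

```python
# Toy example: mark predictors active/inactive based on their individual AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
outcome = rng.integers(0, 2, size=1000)
predictors = {
    "amount": outcome * 2.0 + rng.normal(size=1000),  # informative predictor
    "customer_age": rng.normal(size=1000),            # pure noise
}

ACTIVE_THRESHOLD = 0.52  # hypothetical cut-off, just above random performance (0.5)
for name, values in predictors.items():
    auc = roc_auc_score(outcome, values)
    auc = max(auc, 1 - auc)  # direction-agnostic performance
    status = "active" if auc >= ACTIVE_THRESHOLD else "inactive"
    print(f"{name}: AUC={auc:.2f} -> {status}")
```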
This particular use case relies on processing a set of data (claims). Would identifying a missing SLA, for example, also rely on processing a data set? I could simply write a report to get the list of assignments missing an SLA.
The predictive model is built on a data set. However, at runtime, we use the claims data to predict the outcome for that particular claim.
Can we build predictive models in Pega using the Predictive Studio, or does it have to be done with third-party tools?
Yes. We have our own tool to build predictive models, Pega ML, and we can leverage external models (like H2O.ai and PMML) and external services like Amazon SageMaker and Google ML.
Will it be possible, instead of predicting whether the case will complete before SLA escalation, to calculate the SLA values for each case individually using AI? E.g. this case will probably require less time than that one.
You could build a spectrum predictive model, which indeed predicts a value, instead of a scoring model (which is a yes/no).
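To illustrate the difference between a scoring model and a spectrum model outside of Pega, the sketch below uses scikit-learn on made-up case data: one model predicts the probability of meeting the SLA (yes/no), the other predicts the resolution time itself, which could then inform an individual SLA per case.

```python
# Illustration only: scoring (classification) vs spectrum (regression) on made-up case data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))                                  # hypothetical case predictors
hours_to_resolve = 10 + 5 * X[:, 0] + rng.normal(size=500)     # numeric target
met_sla = (hours_to_resolve < 12).astype(int)                  # binary target

# Scoring model: probability that the case completes within the SLA.
scorer = GradientBoostingClassifier().fit(X, met_sla)
print("P(meets SLA):", scorer.predict_proba(X[:1])[0, 1])

# Spectrum model: expected resolution time for the individual case.
spectrum = GradientBoostingRegressor().fit(X, hours_to_resolve)
print("Expected hours:", spectrum.predict(X[:1])[0])
```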
What if I have to connect to a model that is running as a service and cannot be imported into Pega? Do data flows / strategies / event strategies provide integration rules to connect to an externally deployed model?
Yes. We can use Amazon SageMaker or Google ML. For more details you can check out: https://academy.pega.com/topic/using-machine-learning-services/v1/in/34746/14731
Are Process AI constructs supported by Pega DevOps (Deployment Manager)? E.g. can we deploy predictive models to a higher environment in an already approved state?
We use MLOps for deploying a new predictive model into higher environments, using model shadowing. For details please check out: https://academy.pega.com/module/mlops/v1/in/34746. A prediction and a predictive model are regular rules, so you can package them like any other Pega rules and deploy them using Deployment Manager (application pipeline). However, once you have the model working, the recommended way to replace it is the MLOps process.
Can we connect to external models (eg Google or Amazon) via APIs rather than importing them? Are there performance considerations if so?
We are NOT importing a Google ML or Amazon SageMaker model into Pega. We just create the predictive model rule, which 'connects' to these externally executed models. The actual execution happens on the external platforms.
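For a sense of what "connecting" to an externally executed model looks like, here is a hedged sketch that calls a model deployed as an Amazon SageMaker endpoint via its runtime API; the endpoint name, region, and payload format are hypothetical, and in practice Pega's connector handles this call for you.

```python
# Illustration only: invoking a model hosted as an Amazon SageMaker endpoint.
# The endpoint name and payload format below are hypothetical.
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")
payload = "250.0,web,gold"  # one row of predictor values in the format the endpoint expects

response = runtime.invoke_endpoint(
    EndpointName="fraud-scoring-endpoint",  # hypothetical endpoint name
    ContentType="text/csv",
    Body=payload,
)
score = response["Body"].read().decode("utf-8")
print("Score returned by the external service:", score)
```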
Is it possible to output confidence/model performance scores along with the segment and the probability and use them in the logic? Say, if the confidence is low, switch to rule-based logic.
Yes. If you check out the Pega NLP mission, we showcase exactly that: we take the confidence factor of a topic detection model and route emails based on this value.
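As a simple illustration of that pattern (not the actual Pega routing configuration), the sketch below falls back to rule-based logic whenever the model's confidence is below a hypothetical threshold.

```python
# Illustration only: confidence-threshold fallback to rule-based routing.
CONFIDENCE_THRESHOLD = 0.7  # hypothetical cut-off

def route_email(topic: str, confidence: float, sender_domain: str) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        # Trust the topic detection model.
        return f"queue-{topic}"
    # Low confidence: fall back to simple rule-based logic.
    if sender_domain.endswith("partner.example.com"):
        return "queue-partners"
    return "queue-manual-triage"

print(route_email("complaint", 0.92, "mail.example.com"))  # -> queue-complaint
print(route_email("complaint", 0.41, "mail.example.com"))  # -> queue-manual-triage
```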
Is it possible to add one or more models in a prediction?
Yes. A prediction can be as complex as you want it to be, using one or more models and arbitrating between them based on their performance, learning window, etc.
After an adaptive model becomes mature, does manually removing or adding new predictors restart the learning process from scratch?
No, unless you manually clear the model. However, if you delete a predictor and re-add it, or if you change its type, the information about that specific predictor is removed.
Adaptive models are based on delayed learning. How long are the predictor values stored (assume in Cassandra) while waiting on case completion?
There is a response time-out configuration on the prediction itself, so different predictions can have different 'waiting' times (until the response is captured).
How can we move learning/training data to another instance, for example from a production instance to a QA instance?
For offline predictive models, the model is part of the rule itself and has learned on the data set you created it with, so by just moving the rule you move the model. For adaptive models, there is a way to move the models from one environment to another. We can provide a link to the article on how to do it after the session.
In the discussion it was mentioned that we can safely purge the resolved cases' data. Is that true only after the model has learned from them? Because it is possible the model hasn't yet learned from the resolved cases.
Yes, once the case is resolved you can safely purge and archive it without affecting the model.