
Pega Community Events: Deep Dive: X-ray Vision Unveiled
Thank you for attending our recent webinar for robotics professionals on Thursday, August 19 at 8:00 AM ET.
Description: X-ray Vision is the next generation of RPA for creating resilient bots in a world of continuous change. This 60-minute webinar will provide a close look at X-ray Vision, giving you an overview of its current capabilities and its future direction.
So, what's next?
Let's keep the conversation going! If you have any additional questions for Becky and Francis, reply here to let them know.
We hope you understand we couldn't get to every question during our Q&A session, but we'd like to provide answers to as many as we can.
Check them out below:
Now that X-ray has eliminated the need for match rules, will Deep Robotics address the concern developers have with extremely dynamic sites such as those built in Angular?
Without a doubt, X-ray Vision will improve matching for extremely dynamic websites. However, it will not help with perfecting some of the automation challenges that are inherent to Angular. We are working on other innovations to address that.
I have a question about Deep Robotics running while humans are also working alongside it. If, for example, a robot is running through a list of messages and it comes across one that says it's in use by a human, will Deep Robotics be able to address that on its own, or must a developer program it?
Deep Robotics is the technique we use to gain access to the controls used in automation. It allows us to automate those controls without stealing the keyboard or the mouse, which makes it possible to automate applications without negatively impacting a user who may be working on the machine at the same time. If desired, you can always build your automations so that they detect when a human user is working in the same application as the robot.
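To make the "no stealing the keyboard or mouse" idea concrete, here is a minimal, hypothetical Python sketch of the general Windows technique of writing directly to a control's window handle instead of simulating keystrokes. It is only an illustration of the concept, not Pega's implementation; the window title and control class used here (classic Notepad and its "Edit" control) are assumptions for the example.

```python
import ctypes

user32 = ctypes.windll.user32
WM_SETTEXT = 0x000C  # Windows message that sets a control's text directly

# Find the top-level Notepad window, then the edit control inside it.
# (Assumes classic Notepad is open with an untitled document.)
hwnd = user32.FindWindowW(None, "Untitled - Notepad")
edit = user32.FindWindowExW(hwnd, None, "Edit", None)

# Send the text straight to the control's handle. No focus change and no
# simulated keystrokes, so a person using the keyboard is not interrupted.
user32.SendMessageW(edit, WM_SETTEXT, 0, "Written without touching the keyboard")
```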
As mentioned, Pega injects some code into applications, especially web-based apps, to control objects with the same access as the original developers. This is great, but it sometimes causes application issues, such as causing some apps to crash, especially in RDA projects. How does Pega plan to improve that with X-ray? I understand it is also expected to add more memory and CPU consumption.
If you are encountering an application where Deep Robotics is causing application issues, we urge you to open a ticket with Pega’s GCS team so that we can address it. This is extremely rare and often due to the enterprise application itself (which we can help diagnose).
Is all this new in v21?
We added X-ray Vision for .NET, WinForms, and WPF applications in v19.1.73. It will be available for Universal Web Applications (UWA) in Chrome and Edge in v21 later this year.
Is X-ray Vision similar to different spying modes available in Blue Prism? For instance, Windows Spying mode / UIA mode / Region modes?
X-ray Vision is built on top of Deep Robotics. Deep Robotics uses injection and hooking to identify controls and to gain the same access to application controls as the original application developer. Deep Robotics and X-ray Vision are similar to our competitors’ spying modes in that they are ways of identifying the controls used in automation. However, the underlying technology behind Deep Robotics and X-ray Vision is much more technically advanced than techniques like UI Automation or Region used by our competitors.
Does X-ray Vision look for changes and adjust while running, or does it not apply the changes until the next time the application is running?
X-ray Vision learns while the application is running, and the learning is applied immediately so the controls can continue to be automated. When in interrogation mode, the learning persists for future sessions. At this time, when learning occurs in Runtime, it is not kept for future sessions.
Do all Pega adapters (Web, Windows, and text) support X-ray Vision and the new matching rule?
Not yet. Windows adapters that automate .NET, WPF, and WinForms applications are enabled. UWA adapters will be added next and will support Chrome/Edge. Support for Java and native applications is in development.
Will the matching speed be affected now that Pega Robotics is learning? Also, will it always select the most efficient/fastest match rule?
The X-ray Vision engine will always choose the most efficient and fastest attributes and techniques. We have noticed a slight degradation of performance when an application initially loads, but on a properly powered machine, the average user will not notice the impact. Keep in mind that Deep Robotics automates applications at 10 to 20 times the speed of other automation techniques. Optimizing the engine is of the utmost importance to our R&D teams.
Isn’t allowing that “learning” dangerous? If our control’s text size changed and a new control had the same properties, it could mismatch. We need to understand exactly how the “learning” works behind the scenes to be able to teach, support, and troubleshoot issues/bugs.
The X-ray Vision engine monitors the attributes of controls and how they change. If an attribute that is being used to identify the control changes, it will select a new attribute that is static. Learning can be turned off in production if you are concerned about the risk.
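As a purely conceptual sketch of the behavior described above (an assumption about the idea, not the X-ray Vision engine itself), the logic can be thought of as keeping a history of each control's attribute values and relying only on attributes whose values have stayed static:

```python
from typing import Dict, List

def pick_stable_attribute(history: List[Dict[str, str]]) -> str:
    """Return the name of an attribute whose value never changed across snapshots."""
    first = history[0]
    for name, value in first.items():
        if all(snapshot.get(name) == value for snapshot in history):
            return name
    raise LookupError("no static attribute found; a legacy match rule would be needed")

# Two observations of the same control: the font size changed between them,
# so the engine would stop relying on it and key off the static automation id.
# Attribute names here are illustrative, not actual X-ray Vision attributes.
snapshots = [
    {"automation_id": "btnSubmit", "font_size": "12", "text": "Submit"},
    {"automation_id": "btnSubmit", "font_size": "14", "text": "Submit"},
]
print(pick_stable_attribute(snapshots))  # -> "automation_id"
```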
Where are the values of these attributes? Can they be changed?
The values of the attributes are inherent to the control itself and cannot be changed. In essence, for a Pega Robot Studio user, they are read-only. If the Pega Robot Studio user knows that these values should not be used, a legacy match rule can always be used instead.
How does this all work when we move from dev to integration to production environments? We typically use match rules to ensure that matching will work across all environments, because the environments never match perfectly. How will X-ray be able to handle those regional differences?
For controls that require unique matching per environment, we recommend using legacy match rules so you can fine tune the matching and add configurations as needed.
There could be a situation where two or more controls have the same font size, alignment, column span, etc. How does Deep Robotics deal with such a situation? In the older Pega Robot Studio, a red error used to show up, labeled "Unable to uniquely identify control."
X-ray Vision would simply move on to another attribute or another technique. If X-ray Vision failed to identify the control, a legacy match rule could still be used. If you find that this occurs in the field, please contact Pega GCS so that we can investigate it.
We sometimes set generic match rules to match multiple controls and use clones to loop through them. Will we still have support for this? If yes, how will that work?
Initially, cloning will continue to use legacy match rules.
Where does X-ray Vision store the learning on the Runtime machine?
X-ray Vision data is stored in the project directory on the Pega Robot Studio machine and added to the automation package. When delivered to a Runtime machine, it will be stored wherever your automation packages are stored. At this time, learning at Runtime is not persisted, meaning that new information will not be added to the extracted package.
Is X-ray Vision available for web applications? If not, when can we expect it?
We will add X-ray Vision for Universal Web Applications (Chrome and Edge) in v21 later this year.
Some developers like the Visual Studio plugin. Will Pega continue supporting the Pega 19.1 plugin? Is there a plan to discontinue it? Will Pega publish a new v21 Runtime as well? Will Pega Robot Studio v21 integrate with source control tools?
We will not be adding support for a Visual Studio Plugin with version 21 of Pega Robot Studio. Version 21 will have a new Robot Runtime that is distributed with it. Source control will be available at the file/folder level but not integrated into the Pega Robot Studio UI in the initial release.
If you have multiple tabs of the same website open in an MS Edge window, how do you uniquely identify these different tabs in X-ray Vision? Would the PegaID be the same or different?
When there are multiple tabs open to the same application in Chrome/Edge, we suggest cloning (UseKeys=True) the top-level webpage and assigning known keys to each instance when it is created. Top-level webpages and cloned objects both use legacy match rules; the PegaID match rule would not be used.
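For readers unfamiliar with keyed cloning, the following hypothetical Python sketch illustrates the general pattern only, not Pega Robot Studio configuration: each newly created instance of an otherwise identical page is registered under a key chosen by the automation, so later steps can address one specific tab directly. The keys and URLs are invented for the example.

```python
class TabInstance:
    """Represents one open tab of the same web application."""
    def __init__(self, url: str) -> None:
        self.url = url

tabs: dict[str, TabInstance] = {}

def register_tab(key: str, url: str) -> None:
    """Assign a known key to a tab instance as soon as it is created."""
    tabs[key] = TabInstance(url)

# Two tabs of the same site, distinguished by keys the automation assigned.
register_tab("claims", "https://app.example.com/claims")
register_tab("billing", "https://app.example.com/billing")

print(tabs["billing"].url)  # later steps address exactly the tab they need
```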
If the application vendor made a mistake and replaced a button at same location with the same ID, but supplies a totally different functionality, would that have a big impact on bot behavior?
Yes, it would. And this would affect any/every automation vendor in the same way.
Is it possible to add a specific matching rule manually, if we want?
Yes, you can still use legacy match rules when X-ray Vision is enabled. Just go to the control in the application designer, delete the PegaID match rule, and select a new one.
Will the solutions built in version 19.1 still work in the new version or will interrogation be required again due to the X-ray Vision technology?
All existing solutions will continue to work. But in order to take advantage of X-ray Vision, you must enable it for an adapter and change the match rule for a control to the PegaID match rule (or re-interrogate using Replace Control).
Do HTML tables need to be interrogated in the same way as in earlier Pega Robot Studio versions?
There are no changes to HTML tables at this time. Also, tables commonly use cloning for row data (UseKeys). Cloning will continue to use legacy match rules when X-ray Vision is enabled.
We have bots currently in production, developed on v19.1.66. Do we need to rebuild these bots, or will the technology automatically learn the new parameters on the go?
The applications will need to be re-interrogated when you are ready to take advantage of X-ray Vision.
Is it possible to manipulate the container that we are referencing?
No, the containers are part of the object hierarchy of the application and therefore inherent to the application. They cannot be manipulated. Also of note is that the container hierarchy used by X-ray Vision will most likely include layers that are not visible within the Object Explorer hierarchy.
Will X-ray Vision be able to identify the parameters of an element? For instance, some elements have properties that are not unique enough to identify them. Usually in such cases, we use image-to-image comparison (Surface Automation).
Since X-ray Vision uses Deep Robotics, there will always be an attribute that will allow for a unique identification. Image comparison will never be needed unless we are automating through OCR.
Thank you again for attending, and we hope to see you at our upcoming events.
You can find the full schedule on our Pega Community Events page.
If you'd like to download this presentation in PDF format and are logged in to your Pega Collaboration Center account, you can access the attachment below.