Discussion
Pegasystems Inc.
US
Last activity: 20 Aug 2018 10:45 EDT
LSA Readiness Program
This Discussion Group is intended for those progressing through the 7.3 LSA Readiness Mentoring Program. It is a place to post questions related to the weekly topics. Please do not post here if you are not part of the program.
Groupama
FR
Hi,
About the Node Types list, one of the articles from Steve Mekkelsen Madden gives the following answer:
Node Types provided by Pega 7.3
Starting with Pega 7.3, there are a total of 11 different Node Types that you can use to define the nodes in the Pega cluster of your deployment.
There is no effect on the Dynamic System Settings or prconfig.xml settings when you use a Node Type.
- Universal
- WebUser
- BackgroundProcessing
- BIX
- Search
- RunOnAllNodes
  You cannot start a node with this Node Type. Agents and Listeners can be mapped to this Node Type so that they run on all nodes in a cluster.
- Custom1
- Custom2
- Custom3
- Custom4
- Custom5
IT
Hi all,
Following the last session about agents, I'd like to share a scenario about managing a high workload.
Say we have a queue for a standard agent where processing each item takes about 30 seconds.
The queue items are unrelated to each other, meaning we don't have to process them in FIFO order.
This queue gets populated with thousands of records each day, so the challenge is keeping up with the workload.
Our production environment consists of 3 nodes, so the obvious solution is to have the agent run on each node. At 30 seconds per item that is 86,400 / 30 = 2,880 items per node per day, or 8,640 queue items per day across the cluster.
What if our load is 20K queue items per day?
Solution proposed in the session:
- Have multiple agents running on the same queue: say your queue is System-Queue-MyQueue. When invoking the Queue-For-Agent API you can specify different agents in a round-robin fashion (Agent1, Agent2, Agent3). Each of those agents fetches items from System-Queue-MyQueue, so you get 3 times the processing power (8,640 * 3). A sketch of this round-robin dispatch follows this post.
  The con here is that we can't scale with the load without changing the queue logic: if we have to add a new agent working on the same queue (say Agent4), we need to update our round-robin queue logic and create the new agent. This requires a release cycle and can't be done by the production team managing the environment.
- Leveraging the data flow component, which has multithreading capabilities: this solution was proposed in the session, but I'm not familiar with it and can't elaborate much on it for now. Maybe the person who shared it can pop in and detail it a bit.
So, what's your take on this one?
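Here is a minimal sketch of the round-robin dispatch from the first option, written as plain Java rather than a Pega activity; the dispatcher class and the agent names are hypothetical, and in a real implementation the selected name would be passed to the Queue-For-Agent step.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the round-robin dispatch described above (plain Java, not a Pega
// activity). nextAgent() returns the agent name that the queueing activity
// would pass to Queue-For-Agent. The agent names are hypothetical.
public class RoundRobinDispatcher {

    // The agents that all read from System-Queue-MyQueue.
    private final List<String> agents = List.of("Agent1", "Agent2", "Agent3");
    private final AtomicInteger counter = new AtomicInteger();

    // Pick the next agent in round-robin order.
    public String nextAgent() {
        int index = Math.floorMod(counter.getAndIncrement(), agents.size());
        return agents.get(index);
    }
}

The limitation called out above is visible here: adding Agent4 means changing the hard-coded list, which is why the reply below suggests driving the selection from a setting instead.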
Pegasystems Inc.
US
Further to the solution proposed in the session, create enough agents to handle the absolute maximum number of tasks. Create an advanced agent to monitor queue sizes, enable or disable agents, and determine which agent receives the next task. Store this value in a Dynamic System Setting that is read by the activity which performs the queueing.
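A minimal sketch of that idea, again in plain Java rather than Pega rules; the setting name, the in-memory settings map, and the "AgentN" naming scheme are all hypothetical stand-ins. The monitoring agent updates the setting, and the queueing activity reads it on every call, so agents can be enabled or disabled without a code change.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of setting-driven agent selection (plain Java). The ConcurrentHashMap
// stands in for the Dynamic System Setting store; the setting name and the
// "AgentN" naming scheme are hypothetical.
public class DynamicAgentSelector {

    private final Map<String, String> settings =
            new ConcurrentHashMap<>(Map.of("myapp/activeAgentCount", "3"));
    private final AtomicInteger counter = new AtomicInteger();

    // Called by the monitoring advanced agent when it enables or disables agents.
    public void setActiveAgentCount(int count) {
        settings.put("myapp/activeAgentCount", Integer.toString(count));
    }

    // Called by the queueing activity: round-robin over the currently active agents.
    public String nextAgent() {
        int active = Integer.parseInt(settings.get("myapp/activeAgentCount"));
        int index = Math.floorMod(counter.getAndIncrement(), active);
        return "Agent" + (index + 1);  // e.g. Agent1 .. Agent3
    }
}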
Greenfield
DE
I have a question regarding the UI chapter, specifically the auto-generated CSS:
Is there a way to get PRPC to package all the CSS files into a single CSS file?
The background to this question is that we use a network-based access system that authenticates every single URL request. That means accessing pybootstrap.css and SAEDefaultSkin.css results in two separate requests to the authentication servers, which add up. Additionally, search engines like Google penalize your ranking if you reference too many JS and CSS files instead of bundling them into a single file.
Is there an option or setting to make Pega behave this way?
Pegasystems Inc.
US
You might try investigating use of a Rule-File-Bundle.
Designer Studio -> Records -> Technical -> Static Content Bundle -> style
Note how the pyPegaSocial control's JSP text contains:
Pegasystems Inc.
AU
A few questions on this use case:
Who writes the data into the queue items? Is an external system calling a Pega API to capture the data so it can be processed later?
Can you capture the data into an external table? That is, a fully exposed table: all columns exposed and no BLOB/CLOB columns.
Looking at the use case at a high level, a Dataflow batch run can easily support this.
Dataflow batch requirements:
- All the data should be in a fully exposed table; BLOB/CLOB columns are not supported.
- To support parallel processing we need a column in the input table that acts as a partition key, e.g. a column named "Key" holding values between 1 and 10, with every record in the input table assigned to one of those values.
- The parallelism of the Dataflow batch is determined by the number of unique partitions the input table has. With the 10 partitions from the previous point, 10 threads can process the batch across the Pega cluster (see the sketch after this list).
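A plain-Java illustration of the partition-key idea, not Pega's Dataflow engine: records carry a partition key, one worker is dedicated to each distinct key, and unrelated records are therefore processed in parallel. The record shape and the handler are hypothetical.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

// Illustration of partition-key parallelism (plain Java, not the Dataflow engine).
public class PartitionedBatch {

    // Hypothetical record read from the fully exposed table.
    record Item(long id, int partitionKey, String payload) {}

    public static void process(List<Item> batch) throws InterruptedException {
        // Group records by their partition key (values 1..10 in the example above).
        Map<Integer, List<Item>> partitions = batch.stream()
                .collect(Collectors.groupingBy(Item::partitionKey));

        // One worker per distinct partition, mirroring "one thread per partition".
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, partitions.size()));
        partitions.forEach((key, items) -> pool.submit(() -> items.forEach(PartitionedBatch::handle)));

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    // Stand-in for the activity that the Dataflow destination would call.
    private static void handle(Item item) {
        System.out.println("Processing item " + item.id());
    }
}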
Dataflow design:
1) Create a new Pega data class, Data-XXX, pointing to the external table; this is where the data currently captured as queue items will be stored.
2) Have the Pega API (or any other process) capture the data and write it into Data-XXX.
3) Create a new Dataset of type Database Table under Data-XXX.
4) Create a Dataflow under Data-XXX with the Dataset created at step 3 as its source.
5) In the Dataflow, add filters to process the data based on the different conditions.
6) The Dataflow destination can be configured to call an activity that performs the action the business logic needs.
Configuration:
1) A single advanced Pega agent runs the Dataflow batch periodically; you need to ensure that the batch run is started from only one node.
2) The Decision --> Services landing page allows you to configure the number of threads the batch run uses to process the load in parallel.
Steve Heydendahl
Pegasystems Inc.
US
This is a question concerning float from the Customizing the User Experience SSA Advanced Course. How did the Float layout option evolve in Pega 7.3.1 and 7.4? It appears that the only options are "Auto" and "Right (Flex-End)" in 7.3.1, whereas in 7.3 the options were "None", "Right", and "Left".
Accenture
DE
I just stumbled across this in 7.3.1: on the topmost dynamic layout, None, Left, and Right are still available for float. In each embedded layout, however, the options are Auto and Right (Flex-End). To change this you need to switch the topmost dynamic layout to legacy code; then you can float the inner layout left or right.
Pegasystems Inc.
US
This is a question concerning enabling accessibility from the Customizing the User Experience SSA Advanced Course. When the "Enable accessibility add-on (Please ensure PegaWAI has first been installed and added to Production Rulesets list below)" check box is enabled in an access group, the refresh of the layout group on the Search and Select screen stops working in Pega 7.3. It appears that the selection of a Metro area is not being posted to the clipboard when accessibility is enabled. Why? And is there a workaround?
IT
Hi,
I have a question about security, more specifically about customizing an authentication activity.
I have a PRPC 7.1.7 environment where I have to provide mashup functionality to an external web app. Said app will use IACAuthenticationService. Since I need to figure a few things out, I decided to specialize the authentication activity to add some logging. Now when I try to log in, an error shows up telling me that my activity can't be found in the ruleset list. In hindsight that's absolutely understandable, because at this point the user hasn't been authenticated yet, so his ruleset list can't be built; the only rulesets available for rule resolution are those in the Pega-Rules application.
I managed to get the scenario working by changing the application in the PRPC:Unauthenticated access group. With my application set there, all my rulesets are visible and my custom authentication activity can be used. I'm not sure that's the right approach, though. What do you think?
Pegasystems Inc.
US
The correct approach is to define the minimal app stack that supports the rules you need when unauthenticated. Then define an Access Group that points to that app, the Access Role being a clone of the Unauthenticated Role. Lastly modify the "Pega" Browser Requestor Type to point to that Access Group. Be sure to test in a different browser session before logging out. If you get locked out you will have to modify prconfig.xml to temporarily change the system name to "prpc".
Capgemini
IN
Hi,
I have a question on LDAP authentication. In the authentication service rule, we have an option to map LDAP properties. But if we look at the authentication activity, it is the activity that actually opens the auth service rule and then maps the properties.
Looking at the auth service rule alone, it seems as if the property mapping is done at the engine level. If someone tries to create an LDAP auth service from scratch, I think they might miss the mapping.
Your thoughts?
Thanks
Saikat
Pegasystems Inc.
US
If you are creating a new LDAP Auth Service you would start with prweb/WEB-INF/web.xml to declare the AuthenticationType parameter as "PRCustom" and the AuthService parameter as your Data-Admin-AuthService instance name.
For the Auth Service's Authentication Activity you would specify "LDAPauthentication", which calls LDAPVerifyCredentials.
Java step 1 in LDAPVerifyCredentials opens a page named "AuthService" based on the AuthService parameter.
Java step 2 in LDAPVerifyCredentials puts name-value pairs from the LDAP authentication response into an "attributesObj" HashMap defined as a local Object parameter.
The Mapping tab of the AuthService page is consulted to convert LDAP attribute names to Data-Admin-Operator-ID property names.
Only if a match is found is an entry put into the HashMap, with the mapped-to Data-Admin-Operator-ID property name as the key.
Later, in Java step 8, after an Operator page has been created (perhaps by opening an OrgUnit Model Operator), the HashMap is iterated to set values for the Data-Admin-Operator-ID properties specified on the Mapping tab.
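A plain-Java sketch of the mapping behaviour described above; this is not the actual LDAPVerifyCredentials source, and the LDAP attribute names and operator property names used here are hypothetical examples. It only shows the two ideas that matter: an attribute is kept only when the Mapping tab knows it, and the kept entries are later iterated onto the operator page.

import java.util.HashMap;
import java.util.Map;

// Sketch of the LDAP attribute mapping logic (not the LDAPVerifyCredentials source).
public class LdapAttributeMapper {

    // Stand-in for the Mapping tab: LDAP attribute name -> operator property name.
    private final Map<String, String> mappingTab = Map.of(
            "mail", "pyUserIdentifier",   // hypothetical example mapping
            "cn",   "pyUserName");        // hypothetical example mapping

    // Equivalent of building "attributesObj": keep only mapped attributes,
    // keyed by the mapped-to operator property name.
    public Map<String, String> buildAttributesObj(Map<String, String> ldapResponse) {
        Map<String, String> attributesObj = new HashMap<>();
        ldapResponse.forEach((ldapName, value) -> {
            String operatorProperty = mappingTab.get(ldapName);
            if (operatorProperty != null) {        // put only when a mapping exists
                attributesObj.put(operatorProperty, value);
            }
        });
        return attributesObj;
    }

    // Equivalent of the later step: iterate the map and set each value on the
    // operator page (here just another Map standing in for the clipboard page).
    public void applyTo(Map<String, String> operatorPage, Map<String, String> attributesObj) {
        attributesObj.forEach(operatorPage::put);
    }
}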
Capgemini
IN
Hello,
Looking at the auth service rule, it looks as though the rule does the mapping. But in reality it is just an instance that stores the configuration; everything is done by the authentication activity. Even the LDAP connection is made by the service activity.
My initial thought from looking at the auth service rule was that it does the connection and the mapping and then calls the activity to set the operator context.
Thanks
Saikat