CLSA Community Meetup: C2E DevOps Excellence (September 2021) Q&A
In September we hosted the CLSA DevOps Excellence webinar as part of the CLSA Continuous Excellence Enablement (C2E) program, which is focused on targeted content for CLSAs on platform topics from a Pega 8 perspective. This webinar focused on DevOps and discussed development best practices, Deployment Manager, and how to integrate with other (non-Pega) DevOps tools.
The recording and handout of this event can be found here: https://collaborate.pega.com/discussion/clsa-community-meetup-c2e-devops-excellence-september-2021-recording-handout
This discussion contains a number of the questions asked during the webinar, along with the answers provided.
Branching
Can the branch review process be customized? OOTB there aren't really many ways it is intended to be customized. The branch review component on the marketplace might be a good option; it adds a lot of capability, and the team that built it does a good job supporting it.
Custom task
How does the deployment process leverage security/scan capabilities, like Blackduck, Checkmarx, Fortify, etc., for checking security flaws or coding practices? First of all, the Pega security team does not recommend using SAST tools, because we already run those tools against our own code base and the results aren't easily actionable for generated code. DAST tools can be used against a Pega application, but we don't advocate for any particular tool. There aren't any out-of-the-box integrations in Deployment Manager for these tools, but an integration can be created using a Jenkins job or a custom task.
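As an illustration of the Jenkins route, here is a minimal sketch (in Python, using the standard Jenkins remote-build API) of the call a custom task or external script could make to kick off a scan job. The Jenkins host, job name, credentials, and ARTIFACT_URL parameter are placeholders, not part of any Pega product.

```python
import requests

# Hypothetical values -- replace with your Jenkins host, job, and credentials.
JENKINS_URL = "https://jenkins.example.com"
JOB_NAME = "security-scan"        # Jenkins job wrapping the DAST/scan tool
AUTH = ("ci-user", "api-token")   # Jenkins user and its API token

# Trigger the scan job, passing the artifact location as a build parameter.
resp = requests.post(
    f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
    params={"ARTIFACT_URL": "https://repo.example.com/myapp/1.2.3.zip"},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

# Jenkins replies 201 with a Location header pointing at the queued build;
# poll that URL (or use a callback) to report the result back to the pipeline.
print("Queued:", resp.headers.get("Location"))
```

A Deployment Manager custom task would make the equivalent HTTP call through a REST connector rather than a Python script; the sketch only shows the shape of the integration.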
Cloud
We have a PegaCloud environment using Deployment Manager to promote packages, and we now need to deploy the same packages to a customer cloud environment as well. What is the best way to accomplish this? The biggest challenge you will have is that the customer's cloud environment will not have access to the PegaCloud S3 environment. You would have to use a client-managed artifact repository, like JFrog Artifactory. At that point, you would need to expose access to the client cloud environment from the DevOps environment. In DM v4 there was a feature called "Deploy existing artifact" which could be used in cases where this access could not be provided. It will be reintroduced in 5.5.
We are using a Pega Cloud environment. Do we get an S3 repository by default? Do we get a view to access it on cloud? Yes. You can browse artifacts in Pega Deployment Manager, and you can also access the repository from any candidate system by selecting it as a target for attachments and navigating through S3.
On a PegaCloud environment, is an orchestrator environment (that hosts Deployment Manager) available by default? Yes, a Deployment Manager orchestrator instance is provided for every new PegaCloud customer without additional charge.
Testing
Can we run scenario-based testing automatically after we deploy a branch using Deployment Manager? Scenario tests are not enabled on merge, since you would end up creating test data instances in the development system (the system of record). You can run PegaUnits prior to merge.
As for regression testing, is there support for the Selenium ChromeDriver? This could be implemented as a custom task, either calling another pipeline on a third-party orchestrator to run the Selenium tests, or triggering the tests directly by running your Selenium framework of choice and parsing the results on callback to Pega Deployment Manager.
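For the "trigger tests directly" option, the custom task would ultimately run something like the Selenium sketch below and relay the outcome back to Deployment Manager on callback. This is a minimal, hypothetical smoke test: the application URL and the assertion are placeholders, and a real suite would log in and exercise actual case flows.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Placeholder URL for a freshly deployed candidate environment.
APP_URL = "https://qa.example.com/prweb"

options = Options()
options.add_argument("--headless=new")      # run without a display on a CI agent
driver = webdriver.Chrome(options=options)  # Selenium Manager resolves the driver

try:
    driver.get(APP_URL)
    # Minimal check that the page rendered; a real suite would assert on
    # specific elements and walk through case flows.
    assert driver.title, "page did not load"
    print("SMOKE TEST PASSED:", driver.title)
finally:
    driver.quit()
```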
Open DevOps
Can the Pega DevOps capability orchestrate enterprise-wide DevOps such that it can accommodate both Pega and .NET applications? Theoretically yes, but I would strongly recommend not doing that; Deployment Manager's domain is Pega applications. Instead you should consider the "hybrid" approach. Look at the recording of the open DevOps part of the C2E DevOps Excellence webinar.
Database
Does the deploy process support database changes that could potentially be part of a release, meaning database table updates? Yes, as long as the changes are additive and the environments are configured to allow automatic updates. It works just like the import wizard.
Is rollback a DB restore to a snapshot point, which would essentially mean it impacts and rolls back data of other applications also hosted on the same instance? If yes, is there any plan to roll back only the deployed rules? Rollback is different from a DB restore: it just stores a timestamp and then rolls back all events to that point. Rollback can be limited in scope to an application, and that is exactly what Deployment Manager does, so it should not impact other applications.
How are schema changes handled in the deployment cycle? The Pega Platform automatically handles additive schema changes, and Deployment Manager can apply them automatically unless the environment is configured to reject them. If the environment does not allow auto-apply of schema changes, Deployment Manager will share the schema definitions with the user.
Does the rollback option also support data instances that are part of the package? By design, rollback does not support data or case instances. We don't want to introduce data loss when performing an application rollback.
When we deploy using DevOps, the deployment goes to all the environments configured in the pipeline. If we want to roll back this deployment in all the environments, is that possible in an automated way? Rollback is only supported on failures, and it is limited to the environment where the deployment fails.
Deployment
Using a single pipeline for an application, how can we deploy a subsequent product through the pipeline while another product is waiting in the pre-prod stage and has yet to be deployed to the production environment, given that Deployment Manager doesn't allow us to update the product details on the pipeline while a deployment is in progress? We have some clients who change the product rule between deployments, but this is really considered an anti-pattern. It increases the likelihood of failed production deployments, which we explicitly want to avoid. It should be very rare to change which product rule is referenced in a pipeline; the developer can edit the product rule itself to add new rules.
Does Deployment Manager also manage tasks like hotfixes, or is that totally separate? Hotfixes are not likely to receive enhanced support in Deployment Manager.
Is there any known limitation on the size of the artifact being generated and deployed from Deployment Manager? No known limitations other than the time it may take to generate and deploy large artifacts. We have tested with very large applications.
During a multispeed deployment, if there is a failure in one of the deployments in any environment, will all the other deployments stop or abort automatically, or does it not impact the other deployments in the same process? If a deployment fails, it would not end up queuing alongside the other deployments. Each deployment is intended to be additive, so it should not depend on earlier deployments.
How do we manage data meant for production and lower environments? Manually? Either manually or via ad hoc tasks. Systems are made up of three types of assets: data, configuration, and application logic. Application logic is the primary domain of Deployment Manager deployments at this time.
If there are two deployments queued, and the first one has rule A and the second one has changes to rule A, what happens if I promote only the second deployment? This is exactly the reason we recommend that deployments be additive and that you package either the whole app or the whole release. If you are following that guidance, this will not be an issue.
Is downtime required while deploying to production? If downtime is not required, wouldn't that lead to data loss in case of a rollback? Downtime is not required. Rollback does not roll back data instances; it is not a full database rollback. It applies only to rule updates in the current application.
How is "wait for user action" different from manual approval in a multispeed deployment? Multispeed does not block other deployments on the same stage; for example, we can deploy ten times to QA before we decide which artifact to promote to STG and PROD. Once one is promoted, the earlier artifacts are superseded.
Do we always need to have this structure: Base, Base + Test, Base + Dev? Yes, it is the recommended best practice to ensure you are not accidentally promoting unintended changes into a target application. The idea is to keep the target application locked and clean.
Regarding multispeed deployment: when a deployment is in progress, Deployment Manager does not allow another deployment to start. In that case, we don't come across the scenario mentioned in the slide. Please explain how to trigger a new deployment while a previous deployment is in progress and waiting for manual approval. Multispeed deployment allows you to have multiple active deployments in a phase (a collection of stages separated by the manual transition configuration).
Architecture
Is there any restriction on the number of environments that can be involved in a pipeline with Deployment Manager? No. v4 of Deployment Manager was restricted to Dev, QA, Staging, and Production, but in v5 there is no restriction.
Are these two DevOps applications provided out of the box? The framework and the Deployment Manager orchestrator are provided to any new PegaCloud customer without additional charge. For on-premises or custom cloud situations, they can be downloaded from the Pega Marketplace.
Do we need to deploy the latest version of the PegaDevOpsFoundation app into our Pega application every time a new version is released, or is this taken care of in a Pega Platform upgrade? For our PegaCloud users, both the Pega DevOps framework and the Deployment Manager components on the orchestrator and candidate environments will automatically update to the latest compatible version during routine maintenance windows.
Can the authentication between candidate systems be changed from basic auth to something else? Candidate environments connect to the orchestrator using OAuth. The orchestrator connecting to candidates is only officially supported with basic auth. We intend to eliminate the need for the orchestrator to connect to candidates in an upcoming release, which would make this ask irrelevant.
Can we change the generated artifact name, and how? The ability to change the artifact name is not exposed. We generally want all interactions with the repository to be hidden, with users interacting only with Deployment Manager and its resources. That being said, one customer is using custom tasks to customize the file path.
Is deploying using the prpcUtils command-line tool recommended? As the name "ad hoc" tasks implies, it is really not recommended, but it is a workaround available for exceptional use cases.
Is there an option for a timer in the pipeline model that can help schedule production deployments during off hours? It can be automated using the REST API. We have customers who have done this in a variety of ways, including Pega Job Schedulers. Adding this to the Deployment Manager UI is on our roadmap for the near future; it is particularly useful for data migration pipelines.
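As a sketch of that automation, the script below starts a pipeline run from an external scheduler (cron, a Pega Job Scheduler calling out, etc.). The endpoint path, payload fields, and credentials are assumptions for illustration only; consult the Deployment Manager REST API documentation for the actual contract.

```python
import requests

# All endpoint details below are illustrative placeholders, not the
# documented Deployment Manager contract.
ORCHESTRATOR = "https://orchestrator.example.com/prweb/api"
PIPELINE_ID = "MyApp-Pipeline"
AUTH = ("dm-service-user", "password")  # or an OAuth bearer token

def trigger_deployment():
    # Start a new deployment on the pipeline via a hypothetical endpoint.
    resp = requests.post(
        f"{ORCHESTRATOR}/DeploymentManager/v1/pipelines/{PIPELINE_ID}/deployments",
        json={"description": "Scheduled off-hours production deployment"},
        auth=AUTH,
        timeout=60,
    )
    resp.raise_for_status()
    print("Deployment started:", resp.json())

if __name__ == "__main__":
    # Invoke from any scheduler, e.g. a cron entry such as `0 2 * * 6`
    # for 02:00 on Saturdays.
    trigger_deployment()
```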
Is keystore file creation mandatory when we use Pega DM 5.4? We recently started using Deployment Manager 5.4, but after all the setup we are getting the error shown in this slide. It is never required on PegaCloud. For on-premises installs it is not explicitly required, but it should be done to ensure a secure connection. DM ships with a default JKS key.
DM pipelines are application-centric. What is the best way to create pipelines for components? There is nothing blocking users from creating a pipeline for a component. The component won't show up in the dropdown, but if you copy and paste the name, it will work. One other downside is that component names carry a timestamp, which is kind of ugly in the DM UI. You can consider creating a wrapper application for your component to get around these two downsides.
We are using the prpcServiceUtils APIs today with Jenkins to orchestrate our Pega 7 and 8 deployments, with Artifactory as our binary repository. Can I use the Deployment Manager APIs to extend our DevOps pipeline, and do I need to install Deployment Manager on all target servers? I read online that the suggested approach is to have one separate server/node with Deployment Manager. Yes, you can use the Deployment Manager API and leverage the quality gates offered by default. We suggest a separate DevOps instance to minimize the impact on your product development. However, there are two components, and only PegaDevOpsFoundation is required on the candidate systems.
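In that hybrid setup, a Jenkins stage typically starts a Deployment Manager run and then gates the build on its outcome. The sketch below shows the polling half of that pattern; the endpoint path and status values are assumptions for illustration, so check the Deployment Manager API documentation for the real contract.

```python
import time

import requests

# Illustrative placeholders -- not the documented Deployment Manager contract.
ORCHESTRATOR = "https://dm-orchestrator.example.com/prweb/api"
AUTH = ("jenkins-svc", "secret")

def wait_for_deployment(deployment_id: str, poll_seconds: int = 30) -> str:
    """Poll a hypothetical deployment-status endpoint until it resolves."""
    while True:
        resp = requests.get(
            f"{ORCHESTRATOR}/DeploymentManager/v1/deployments/{deployment_id}",
            auth=AUTH,
            timeout=30,
        )
        resp.raise_for_status()
        status = resp.json().get("status", "")
        if status.startswith("Resolved"):  # e.g. completed or failed
            return status
        time.sleep(poll_seconds)

# A Jenkins stage would fail the build if the returned status is not a success.
```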
Is it possible to restrict the view of pipelines per user? For example, if there are pipelines for 10 applications and a user is associated with only 2 apps, can their view be restricted to the pipelines for their apps? Yes. A user must be associated with the allowed applications, and the user will be restricted to viewing only the pipelines for those applications. What the user can do with an allowed pipeline is defined by the role the user is associated with.
Pipeline
The manual approval process at the moment only allows assignment to a single person; is there any plan to allow assigning it to a work queue/workbasket? Improving approvals is on our roadmap. In the meantime, there is a custom task you can download to add "role based approvals" to a pipeline.
How does the pipeline modeling work if testing is carried out externally in TFS rather than using Pega unit tests/Pega scenario tests? Would that mean we introduce manual approval steps in the pipeline model? A manual approval step is the easiest way to account for external or manual testing, but ideally it would be implemented as a custom task that integrates with TFS.
For the platform update pipeline, is this executed post-upgrade? The pipeline runs through the production upgrade and waits for it to complete; however, it is not responsible for the upgrade itself.
Why is the production environment upgrade done first, even before non-prod? See: https://community.pega.com/knowledgebase/articles/keeping-current-pega/86/premises-upgrade-process-pega-infinity-release-842-and-later-tomcat-and-postgresql