Question


Trident Consulting Inc
US
Last activity: 6 Apr 2018 19:26 EDT
While generating DDL for Pega 7.2, I am unable to see the rules schema in the system-generated DDL file.
Hello,
I am upgrading my 6.1 SP2 application to Pega 7.2. As per our company policy, I can't use the UI tool to update the DB. The DDL has to be generated, and our DBA reviews it against our standards before executing it in our DB. When I do this, I see only my data schema name in the DDL; nowhere in the .sql file do I see the rules schema. Can someone help me with this?
A sample migrateSystem.properties is attached below.
#The system where the tables/rules will be migrated from
pega.source.jdbc.driver.jar=C:\DB2\db2jcc.jar;C:\DB2\db2jcc_license_cisuz.jar
pega.source.jdbc.driver.class=com.ibm.db2.jcc.DB2Driver
pega.source.database.type=udb
pega.target.jdbc.url=jdbc:db2://XXXX:60005/PRPC:fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;progressiveStreaming=2;useJDBC4ColumnNameAndLabelSemantics=2;
pega.source.jdbc.username=username
pega.source.jdbc.password=password
#Custom connection properties
#pega.source.jdbc.custom.connection.properties=
pega.source.rules.schema=GQDBA
#Set the following property if the source system already contains a split schema.
pega.source.data.schema=
#The system where the tables/rules will be migrated to
pega.target.jdbc.driver.jar=C:\DB2\db2jcc.jar;C:\DB2\db2jcc_license_cisuz.jar
pega.target.jdbc.driver.class=com.ibm.db2.jcc.DB2Driver
pega.target.database.type=udb
pega.target.jdbc.url=jdbc:db2://XXXX/PRPC:fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;progressiveStreaming=2;useJDBC4ColumnNameAndLabelSemantics=2;
pega.target.jdbc.username=username
pega.target.jdbc.password=password
#Custom connection properties
#pega.target.jdbc.custom.connection.properties=
pega.target.rules.schema=GQRULES
#Used to correctly schema qualify tables in stored procedures, views and triggers.
#This property is not required if migrating before performing an upgrade.
pega.target.data.schema=
#Set this property to bypass udf generation on the target system.
#pega.target.bypass.udf
#The location of the db2zos site specific properties file. Only used if the target system is a db2zos database.
pega.target.zos.properties=config/db2zos/DB2SiteDependent.properties
#The commit count to use when loading database tables
db.load.commit.rate=100
################### Migrate System Properties ###########################################
#The directory where output from the bulk mover will be stored. This directory will be cleared when pega.bulkmover.unload.db is run.
#This property must be set if either pega.bulkmover.unload.db or pega.bulkmover.load.db is set to true.
pega.bulkmover.directory=
#The location where a temporary directory will be created for use by the migrate system utilities.
pega.migrate.temp.directory=C:\AIG\PRPC7.2\temp
**Moderation Team has archived post**
This post has been archived for educational purposes. Contents and links will no longer be updated. If you have the same/similar question, please write a new post.


Citigroup technology inc
US
Hello,
We are updating from Pega 7.1.8 to 7.2.0. We wanted to know which Java version we should be on when we run the update script from our DB server; currently we are on Java 1.5.0.


Pegasystems Inc.
US
During the pre-upgrade migration step from a single schema (6.1 SP2) to a split schema (7.2), only the rule tables will be generated in the resulting DDL for the target system. With the configuration you have posted, the resulting generated DDL will only apply to the GQRULES schema.
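For illustration, a hedged sketch of what the correctly generated clone DDL is expected to contain (schema and table names taken from this thread; the real statements carry the full column definitions copied from the source tables):
-- every cloned rule table should be qualified with the target rules schema, for example:
CREATE TABLE GQRULES.PR4_BASE (PZINSKEY VARCHAR(255) NOT NULL);  -- illustrative single column only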


Trident Consulting Inc
US
Unfortunately, that's not the case. I don't see a schema named GQRULES there; it all shows GQDBA.
Is there anything I missed in the migrate properties file?


Pegasystems Inc.
US
Perhaps. Can you post the full log?


Trident Consulting Inc
US
# Properties File for use with migrateSystem.xml Update this file
# before using migrate.bat/sh script.
# Set the DB connection
################### COMMON PROPERTIES - DB CONNECTION (REQUIRED) ##################
###################################################################################
# For database that uses multiple JDBC driver files (such as DB2). you may specify
# semi-colon separated values for 'pega.jdbc.driver.jar'
#
# pega.jdbc.driver.jar -- path to jdbc jar
#
# pega.jdbc.driver.class -- jdbc class. valid values are:
#
# Oracle oracle.jdbc.OracleDriver
# IBM DB/2 com.ibm.db2.jcc.DB2Driver
# SQL Server com.microsoft.sqlserver.jdbc.SQLServerDriver
# PostgreSQL org.postgresql.Driver
#
# pega.database.type valid values are: mssql, oracledate, udb, db2zos, postgres
#
# pega.jdbc.url valid values are:
#
# Oracle jdbc:oracle:thin:@//localhost:1521/dbName
# IBM DB/2 z / OS jdbc:db2://localhost:50000/dbName
# IBM DB/2 jdbc:db2://localhost:50000/dbName:fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;progressiveStreaming=2;useJDBC4ColumnNameAndLabelSemantics=2;
# SQL Server jdbc:sqlserver://localhost:1433;selectMethod=cursor;sendStringParametersAsUnicode=false
# PostgreSQL jdbc:postgresql://localhost:5432/dbName
#
# pega.jdbc.username db username
# pega.jdbc.password db password
#The system where the tables/rules will be migrated from
pega.source.jdbc.driver.jar=C:\DB2\db2jcc.jar;C:\DB2\db2jcc_license_cisuz.jar
pega.source.jdbc.driver.class=com.ibm.db2.jcc.DB2Driver
pega.source.database.type=udb
pega.target.jdbc.url=jdbc:db2://GCQUPNI1.AIG.NET:60005/GQV7:fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;progressiveStreaming=2;useJDBC4ColumnNameAndLabelSemantics=2;
pega.source.jdbc.username=gqusrq
pega.source.jdbc.password=xxxxxxx
#Custom connection properties
#pega.source.jdbc.custom.connection.properties=
pega.source.rules.schema=GQDBA
#Set the following property if the source system already contains a split schema.
pega.source.data.schema=
#The system where the tables/rules will be migrated to
pega.target.jdbc.driver.jar=C:\DB2\db2jcc.jar;C:\DB2\db2jcc_license_cisuz.jar
pega.target.jdbc.driver.class=com.ibm.db2.jcc.DB2Driver
pega.target.database.type=udb
pega.target.jdbc.url=jdbc:db2://GCQUPNI1.AIG.NET:60005/GQV7:fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;progressiveStreaming=2;useJDBC4ColumnNameAndLabelSemantics=2;
pega.target.jdbc.username=gqusrq
pega.target.jdbc.password=xxxxxx
#Custom connection properties
#pega.target.jdbc.custom.connection.properties=
pega.target.rules.schema=GQRULES
#Used to correctly schema qualify tables in stored procedures, views and triggers.
#This property is not required if migrating before performing an upgrade.
pega.target.data.schema=
#Set this property to bypass udf generation on the target system.
#pega.target.bypass.udf
#The location of the db2zos site specific properties file. Only used if the target system is a db2zos database.
pega.target.zos.properties=config/db2zos/DB2SiteDependent.properties
#The commit count to use when loading database tables
db.load.commit.rate=100
################### Migrate System Properties ###########################################
#The directory where output from the bulk mover will be stored. This directory will be cleared when pega.bulkmover.unload.db is run.
#This property must be set if either pega.bulkmover.unload.db or pega.bulkmover.load.db is set to true.
pega.bulkmover.directory=
#The location where a temporary directory will be created for use by the migrate system utilities.
pega.migrate.temp.directory=C:\AIG\PRPC7.2\temp
######## The operations to be run by the utility, they will only be run if the property is set to true.
#Set to true if migrating before an upgrade. If true admin table(s) required
#for an upgrade will be migrated with the rules tables.
pega.move.admin.table=true
#Generate an xml document containing the definitions of tables in the source system. It will be found in the schema directory of the
#distribution image.
pega.clone.generate.xml=true
#Create ddl from the generated xml document. This ddl can be used to create copies of rule tables found on the source system.
pega.clone.create.ddl=true
#Apply the generated clone ddl to the target system.
pega.clone.apply.ddl=true
#Unload the rows from the rules tables on the source system into the pega.bulkmover.directory.
pega.bulkmover.unload.db=true
#Load the rows onto the target system from the pega.bulkmover.directory.
pega.bulkmover.load.db=true
### The following operations should only be run when migrating upgraded rules
#Generate the rules schema objects (views, triggers, procedures, functions). The objects will be created in the pega.target.rules.schema
#but will contain references to the pega.target.data.schema where appropriate.
pega.rules.objects.generate=false
#Apply the rules schema objects (views, triggers, procedures, functions) to pega.target.rules.schema.
pega.rules.objects.apply=false


Paypal Inc
IN
Hi RaffieAIG,
There is no pega.source.jdbc.url in your sample configuration. I would like to review the file. Can you please share the complete properties file and configuration?
Thank you
Sudharsan


Trident Consulting Inc
US
Sudharsan,
Uploaded the complete file.


Paypal Inc
IN
RaffieAIG,
You have set all the migration properties to true, which means the tool will create and apply the DDL as well as unload and load the DB records. Also, in the "system where tables will be migrated from" section, pega.source.jdbc.url is not present.
Please do the following:
- Change pega.target.jdbc.url to pega.source.jdbc.url in the source section (a corrected example follows the options block below).
- Since you only want to create the DDL, but not apply it or migrate records, use the options below for the operations run by the utility.
-----------------------------options for ONLY create DDL for new schema---------------------------------------
######## The operations to be run by the utility, they will only be run if the property is set to true.
#Set to true if migrating before an upgrade. If true admin table(s) required
#for an upgrade will be migrated with the rules tables.
pega.move.admin.table=true
#Generate an xml document containing the definitions of tables in the source system. It will be found in the schema directory of the
#distribution image.
pega.clone.generate.xml=true
#Create ddl from the generated xml document. This ddl can be used to create copies of rule tables found on the source system.
pega.clone.create.ddl=true
#Apply the generated clone ddl to the target system.
pega.clone.apply.ddl=false
#Unload the rows from the rules tables on the source system into the pega.bulkmover.directory.
pega.bulkmover.unload.db=false
#Load the rows onto the target system from the pega.bulkmover.directory.
pega.bulkmover.load.db=false
### The following operations should only be run when migrating upgraded rules
#Generate the rules schema objects (views, triggers, procedures, functions). The objects will be created in the pega.target.rules.schema
#but will contain references to the pega.target.data.schema where appropriate.
pega.rules.objects.generate=false
#Apply the rules schema objects (views, triggers, procedures, functions) to pega.target.rules.schema.
pega.rules.objects.apply=false
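For clarity, a hedged sketch of the corrected source connection section, with the host, database, user, and schema values copied from the properties file posted earlier in this thread (substitute your own):
#The system where the tables/rules will be migrated from
pega.source.jdbc.driver.jar=C:\DB2\db2jcc.jar;C:\DB2\db2jcc_license_cisuz.jar
pega.source.jdbc.driver.class=com.ibm.db2.jcc.DB2Driver
pega.source.database.type=udb
pega.source.jdbc.url=jdbc:db2://GCQUPNI1.AIG.NET:60005/GQV7:fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;progressiveStreaming=2;useJDBC4ColumnNameAndLabelSemantics=2;
pega.source.jdbc.username=gqusrq
pega.source.jdbc.password=xxxxxxx
pega.source.rules.schema=GQDBA
pega.source.data.schema=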


Trident Consulting Inc
US
I still don't see GQRULES in the DDL that was generated. Do I need to modify anything in the setupDatabase.properties file?
If I change rules.schema.name=GQRULES, then the pr4_base table won't be picked up; when I use rules.schema.name=GQDBA, all the DDL shows GQDBA.
For example, if I leave this value empty, then the user name appears in all the generated DDL.
DDL attached for reference.
pega.jdbc.driver.jar=C:\DB2\db2jcc.jar;C:\DB2\db2jcc_license_cisuz.jar
pega.jdbc.driver.class=com.ibm.db2.jcc.DB2Driver
pega.database.type=udb
pega.jdbc.url=jdbc:db2://GCQUPNI1.AIG.NET:60005/GQV7:fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;progressiveStreaming=2;useJDBC4ColumnNameAndLabelSemantics=2;
pega.jdbc.username=username
pega.jdbc.password=password
#Uncomment this property and add a list of ; delimited connections properties
#For example jdbc.custom.connection.properties=user=usr;password=pwd
#jdbc.custom.connection.properties=
#Rules schema name : Used for all databases.
#The user name is used for default schema name
rules.schema.name=GQDBA
# Data schema name : Used for systems running on a Split Schema
# The value of rules.schema.name is the default value for data.schema.name
data.schema.name=
#User Temp Directory. Will use default if not set to valid directory
user.temp.dir=C:\AIG\PRPC7.2\temp
#z/OS Site Specific Properties file
pega.zos.properties=
# Generate schema will be skipped if this property is set to true
# Note: Leave this property blank if you need to generate the schema
bypass.pega.schema=
# Generate UDF will be skipped if this property is set to true
# Note: Leave this property blank if you need to generate the UDF
bypass.udf.generation=
# Note: The utility will skip any DDL Generation if all three bypass properties above
# are set to true.
# Truncate UpdatesCache will be skipped if this property is set to true
# Note: This property should be left blank so that the UpdatesCache table
# may be truncated automatically by the installer. If your site
# chooses to bypass this reset of the UpdateCache table, then the
# site must do the truncate of the 'PR_SYS_UPDATESCACHE' in the Data
# schema, manually after the install, upgrade or update before the
# system is started up.
#bypass.truncate.updatescache
# Rebuild Database Rules Indexes after Rules Load to improve Database Access Performance
# It can be executed manually by running the stored procedure SPPR_REBUILD_INDEXES
# Default is false except for z/OS, where the default is true
rebuild.indexes
# The system name uniquely identifies a single Process Commander System.
# Since multiple PRPC Systems may reference the same database, it is important that each
# system has a unique name in order to distinguish them from each other.
# During installs, the following system name will be created.
system.name=pega
# During installs, the above system name is generated with the following production level.
# The system production level can be set to one of the below integer values (1-5):
# 5 = production;
# 4 = preproduction;
# 3 = test;
# 2 = development;
# 1 = experimental
production.level=2
# Is this a multitenant system?
# A multitenant system allows organizations to act as distinct PRPC installations
multitenant.system=false
# Run Update Existing Applications activity after upgrade?
# Default setting is false.
# For upgrades or updates: Upgrade Existing Applications is run if this setting is set to true.
# Update Existing Applications can be run from upgrade scripts, prpcUtils, or by directly launching in PRPC after upgrade.
update.existing.applications=false
# Workload manager to load UDFs into db2zos
db2zos.udf.wlm
# Run RuleSet Cleanup
# Generate and execute a SQL script to clean old rulesets and their rules from the system
# If you would like to only generate the script, not execute it, see cleanup.bat or cleanup.sh script
run.ruleset.cleanup
Updated: 7 Sep 2016 10:34 EDT


Paypal Inc
IN
Hi,
Which step are you on? Migrating the rulebase to a new schema, or upgrading the rulebase?
The following steps are part of a split-schema upgrade:
- Migrating rules to the new schema - uses migrateSystem.properties & migratesystem.bat/sh
- Upgrading the rulebase - uses setupDatabase.properties & upgrade.bat/sh
- Generating rules schema objects - uses migrateSystem.properties & migratesystem.bat/sh
- Data upgrade - uses setupDatabase.properties & upgrade.bat/sh
Assuming you are on the 2nd step, please set
rules.schema.name=GQRULES and leave data.schema.name blank.
Use generateddl.bat/sh (with --action upgrade) to generate the DDL.
If you are on the 4th step, i.e. doing the data upgrade, set
rules.schema.name=GQRULES and data.schema.name=GQDBA.
Use generateddl.bat/sh (with --action upgradedataonly) to generate the DDL.
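For reference, a hedged sketch of the two setupDatabase.properties / generateddl combinations described above (paths and schema names taken from this thread; adjust to your own distribution directory):
REM Step 2 - rulebase upgrade DDL
REM setupDatabase.properties: rules.schema.name=GQRULES, data.schema.name left blank
C:\AIG\PRPC7.2\scripts>generateddl.bat --action upgrade --outputDirectory C:\AIG\PRPC7.2
REM Step 4 - data upgrade DDL
REM setupDatabase.properties: rules.schema.name=GQRULES, data.schema.name=GQDBA
C:\AIG\PRPC7.2\scripts>generateddl.bat --action upgradedataonly --outputDirectory C:\AIG\PRPC7.2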


Trident Consulting Inc
US
I am at step 1. I updated the migrateSystem.properties file and am executing GenerateDDL instead of migrate.sh via the command prompt.
Command prompt: GenerateDDL.bat --action upgrade --dbtype UDB --outputDirectory C:\AIG\PRPC7.2
Accepted Solution


Paypal Inc
IN
Then migratesystem.sh has to be used to generate the DDL; the JDBC connection information in your migrateSystem.properties file is correct. Please set the properties listed in my second post and run migratesystem.sh. You will find the generated DDL in the <PRPC_BUILD_HOME>/schema/generated folder.
Note: after the DDL has been applied by the DBA, you have to run migratesystem.sh again to unload the records from the old schema and load them into the new schema.
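A hedged sketch of that two-pass sequence, assuming the Windows layout used elsewhere in this thread (C:\AIG\PRPC7.2) and the migrate script named in the migrateSystem.properties header (migrate.bat/sh):
REM Pass 1 - generate the clone DDL only (pega.clone.generate.xml=true, pega.clone.create.ddl=true, apply/bulkmover flags false)
C:\AIG\PRPC7.2\scripts>migrate.bat
REM review the output under <PRPC_BUILD_HOME>\schema\generated with your DBA and have it applied
REM Pass 2 - move the records after the DDL is applied (pega.bulkmover.unload.db=true, pega.bulkmover.load.db=true)
C:\AIG\PRPC7.2\scripts>migrate.bat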


Trident Consulting Inc
US
When I execute migrate.bat after step 3, I get the issue below. I have a rules schema with all the tables, and it has 0 records in it.
[java] com.ibm.db2.jcc.a.nn: DB2 SQL Error: SQLCODE=-551, SQLSTATE=42501, SQLERRMC=GQUSRQ;DROP
BUILD FAILED
C:\AIG\PRPC7.2\scripts\migrateSystem.xml:697: Java returned: 1
Total time: 1 minute 49 seconds
Exiting with Error
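For context, SQLCODE -551 with SQLERRMC=GQUSRQ;DROP indicates that the connecting user (GQUSRQ) lacks the DROP privilege on the object being dropped. A hedged example of the kind of DB2 schema grants a DBA might issue (schema and user names taken from this thread; confirm the exact privileges needed with your DBA):
GRANT CREATEIN, DROPIN, ALTERIN ON SCHEMA GQRULES TO USER GQUSRQ;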


Trident Consulting Inc
US
Perfect. Thanks!!


Trident Consulting Inc
US
Sudharsan,
Quick clarification: I am at step 3, and so far I have handled only the GQRULES schema, which now has approximately 200 tables. Is this expected?
Also, when will the data from the rules schema be migrated from the single schema to the split schema?


Paypal Inc
IN
Your first step is incomplete. After applying the DDL, all the data in those tables must also be moved, either manually or by using the bulk mover.


Trident Consulting Inc
US
After step 1 I have 19 tables in the RULES schema. Then I executed migrate (for the bulk mover) with the changes below.
pega.bulkmover.directory=C:\AIG\PRPC7.2\temp
#The location where a temporary directory will be created for use by the migrate system utilities.
pega.migrate.temp.directory=C:\AIG\PRPC7.2\temp
######## The operations to be run by the utility, they will only be run if the property is set to true.
#Set to true if migrating before an upgrade. If true admin table(s) required
#for an upgrade will be migrated with the rules tables.
pega.move.admin.table=false
#Generate an xml document containing the definitions of tables in the source system. It will be found in the schema directory of the
#distribution image.
pega.clone.generate.xml=false
#Create ddl from the generated xml document. This ddl can be used to create copies of rule tables found on the source system.
pega.clone.create.ddl=false
#Apply the generated clone ddl to the target system.
pega.clone.apply.ddl=false
#Unload the rows from the rules tables on the source system into the pega.bulkmover.directory.
pega.bulkmover.unload.db=true
#Load the rows onto the target system from the pega.bulkmover.directory.
pega.bulkmover.load.db=true
### The following operations should only be run when migrating upgraded rules
#Generate the rules schema objects (views, triggers, procedures, functions). The objects will be created in the pega.target.rules.schema
#but will contain references to the pega.target.data.schema where appropriate.
pega.rules.objects.generate=false
#Apply the rules schema objects (views, triggers, procedures, functions) to pega.target.rules.schema.
pega.rules.objects.apply=false
It failed with the exception below:
BUILD FAILED
C:\AIG\PRPC7.2\scripts\migrateSystem.xml:485: com.ibm.db2.jcc.a.nn: DB2 SQL Error: SQLCODE=-104, SQLSTATE=42601, SQLERRMC==;ll. where pxobjclass;LIMIT, DRIVER=3.52.95


Paypal Inc
IN
It looks like the table list XML is not available. Can you try running again after setting the properties below to true?
pega.move.admin.table=true
pega.clone.generate.xml=true


Trident Consulting Inc
US
After step 1, I have 19 tables created in the RULES schema with all the values loaded properly.
I'm on the 2nd step, with the following set in setupDatabase.properties:
rules.schema.name=GQRULES and data.schema.name left blank.
I used: C:\AIG\PRPC7.2\scripts>GenerateDDL.bat --action upgrade --outputDirectory C:\AIG\PRPC7.2
This is successful but has 184 CREATE TABLE statements in it. I uploaded the DDL that was generated.


Pegasystems Inc.
IN
184 tables created is correct, because this step generates the delta of rule tables introduced after PRPC 6.1 SP2 and also creates all the data tables. These newly created data tables in the rules schema need to be cleaned up using the Optimize Schema wizard once the data upgrade is completed and the Pega 7 system is up.


Trident Consulting Inc
US
On step 3, while performing upgrade.sh/bat, I see the issue below.
[java] com.ibm.db2.jcc.a.nn: DB2 SQL Error: SQLCODE=-552, SQLSTATE=42502, SQLERRMC=GQUSRQ;CREATE FUNCTION, DRIVER=3.52.95


Pegasystems Inc.
IN
Did your DBA apply the generated DDL files after step 2? Also, please confirm whether you have set the following properties:
bypass.pega.schema=true
bypass.udf.generation=true


Trident Consulting Inc
US
Dhevendra,
Do the properties you mentioned have to be set before generating the DDL, or after the DDL is generated and applied and before executing upgrade.sh/.bat?
I didn't set these properties to true, and I have 184 CREATE TABLE statements in the DDL.


Pegasystems Inc.
IN
You have to set these properties after the generate DDL step is completed and the generated DDL has been applied, so that the upgrade process won't repeat the metadata analysis to find the schema changes.
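A hedged sketch of that order of operations, using the property and script names already mentioned in this thread:
REM 1. Generate the DDL with the bypass properties left blank (rules.schema.name=GQRULES, data.schema.name blank)
REM 2. Have the DBA review and apply the generated DDL
REM 3. Then edit setupDatabase.properties:
REM      bypass.pega.schema=true
REM      bypass.udf.generation=true
REM 4. Run the rulebase upgrade
C:\AIG\PRPC7.2\scripts>upgrade.bat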


Trident Consulting Inc
US
Thanks. I am doing the same and see no issues.
My concern is about the number of tables remaining the same. Why does the generated DDL have more than 180 tables?


Pegasystems Inc.
IN
This is expected behavior. The generated statements create the delta between your migrated tables and all the OOTB Pega 7 tables. Even though you have 180+ tables now, once the upgrade is completed please run the Optimize Schema wizard to clean up the unwanted tables from the rules and data schemas. At that point you will see only the unique tables in each of the rules and data schemas.


Trident Consulting Inc
US
The upgrade was successful, but I have another issue now.
After the upgrade, the server came up. I disabled all the agents, including the OOTB agents.
When I log in with [email protected]/install, I see a lot of issues in the log. It looks like an application issue. Any suggestions here?
Log file attached!
2016-09-26 14:08:03,846 [ch Thread t=009b2748] [ STANDARD] [ ] [ PegaRULES:07.10] (ngineinterface.service.HttpAPI) ERROR vmzldlcimw311.r1-core.r1.aig.net| Proprietary information hidden [email protected] - Proprietary information hidden: com.pega.pegarules.pub.PRRuntimeError
com.pega.pegarules.pub.PRRuntimeError: PRRuntimeError


Pegasystems Inc.
IN
Looks like some issue with rule assembly.
Could you please try truncating the following cache tables, clearing the temp directory, and then restarting the server?
truncate table pr_assembledclasses;
truncate table pr4_rule_sysgen;
truncate table pr_sys_appcache_entry;
truncate table pr_sys_appcache_dep;
truncate table pr_sys_appcache_shortcut;
truncate table pr_sys_rule_impl;


Trident Consulting Inc
US
Dhevendra - I happened to do that; it didn't work either.
What I noticed is that all the cache tables, assembledclasses and engineclasses, exist in both schemas. Is this something that has to be fixed in the first place, or will the optimizer tool take care of it?