Question

ATOS
IN
Last activity: 13 Feb 2025 7:17 EST
Security between Pega Knowledge Buddy and the LLM it connects to
When we query the Knowledge Buddy with a question about the ingested content, it retrieves the relevant data chunks from the vector store, prepares a prompt, sends that prompt to the LLM (OpenAI), formats the answer, and returns the response to us. I am aware of the security of the content maintained within Pega; however, I would like to know what security mechanism is maintained between Pega and the external LLM.
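For context, here is a minimal sketch of the flow described above. All names and the keyword-match retrieval are hypothetical stand-ins, not Pega Knowledge Buddy internals; the final step, not shown, would POST the prompt to the LLM provider's HTTPS endpoint, which is where transport-level security (TLS) and API-key authentication come into play.

```python
# Hypothetical illustration of the retrieve-then-prompt flow; not Pega internals.

def retrieve_chunks(question: str, store: list[str]) -> list[str]:
    # Naive keyword match standing in for a vector-store similarity search.
    words = question.lower().split()
    return [c for c in store if any(w in c.lower() for w in words)]

def build_prompt(question: str, chunks: list[str]) -> str:
    # Combine retrieved chunks into the context portion of the prompt.
    context = "\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

store = [
    "Knowledge Buddy ingests documents into a vector store.",
    "Responses are formatted before being returned to the user.",
]
question = "How are responses formatted?"
prompt = build_prompt(question, retrieve_chunks(question, store))
# This prompt (question plus retrieved content) is what leaves Pega and
# travels to the external LLM, hence the security question above.
```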