Guidelines or Tips for Crawling Internet Web Sites
Hi. Do we have any guidelines or tips for crawling Internet web sites with Pega Robotics?
For example, the Yahoo web site's search function seems to restrict repeated robot access. In my testing, it returned a warning page after approximately 50 consecutive requests, but it worked fine once I put a 10-second think time between iterations, roughly as in the sketch below.
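For reference, this is a minimal, language-agnostic sketch of the throttling pattern I tried, written in Python rather than as a Pega automation; the URL, query parameter, loop count, and the "warning" check are placeholders for illustration only:

```python
# Minimal sketch (not Pega-specific): throttle consecutive requests with a fixed
# "think time" and back off when the server returns a rate-limit/warning response.
import time
import requests

SEARCH_URL = "https://search.yahoo.com/search"   # placeholder target
THINK_TIME_SECONDS = 10                          # delay that avoided the warning page in my test

for i in range(50):
    response = requests.get(SEARCH_URL, params={"p": "example query"}, timeout=30)

    if response.status_code == 429 or "warning" in response.text.lower():
        # The server is pushing back: wait longer before continuing instead of retrying immediately.
        time.sleep(THINK_TIME_SECONDS * 6)
        continue

    # ... process the page here ...

    time.sleep(THINK_TIME_SECONDS)               # polite pause between iterations
```

In the actual automation the same effect can be had by inserting a pause/think-time step between loop iterations.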
One of our customers is concerned about the robustness of our automation executions in this respect. Can we say something on this subject?