Currently, even if you make changes to the schedule or to the SDU configuration of a collection (the normal trigger to re-read a set of documents or a web site that has already been crawled), documents that have not been altered since the last crawl are not re-ingested. This makes it almost impossible to get the crawlers to reload the complete set of documents, which may be required after changing the SDU configuration or making similar changes to the Discovery ingest process. The only real workaround seems to be to delete all the documents in the collection, or to delete the collection itself and start again.
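To illustrate the current workaround, here is a minimal sketch that deletes every document in a collection via the Discovery v1 REST API, so that the next scheduled crawl treats every source document as new and re-runs the updated SDU/ingest configuration. The service URL, environment ID, collection ID, API key, and version date below are placeholders, not values from this idea, and the v1 API is an assumption about which Discovery plan is in use.

```python
# Sketch of the current workaround: delete all documents in a Discovery
# collection so the next crawl re-ingests everything from scratch.
# All IDs, the URL, and the API key are placeholders you must supply.
import requests

SERVICE_URL = "https://api.us-south.discovery.watson.cloud.ibm.com/instances/<instance-id>"
API_KEY = "<your-api-key>"           # IAM API key, sent via HTTP basic auth
ENV_ID = "<environment-id>"
COLL_ID = "<collection-id>"
VERSION = {"version": "2019-04-30"}  # any supported API version date
AUTH = ("apikey", API_KEY)

base = f"{SERVICE_URL}/v1/environments/{ENV_ID}/collections/{COLL_ID}"

# Collect all document IDs first by paging an empty query.
# (Note: the query API limits how deep you can page, so very large
# collections may need a different enumeration strategy.)
doc_ids = []
offset = 0
while True:
    resp = requests.get(
        f"{base}/query",
        params={**VERSION, "count": 100, "offset": offset, "return": "id"},
        auth=AUTH,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    if not results:
        break
    doc_ids.extend(doc["id"] for doc in results)
    offset += len(results)

# Delete each document; the crawler then sees every source document
# as new on its next run and re-ingests it.
for doc_id in doc_ids:
    resp = requests.delete(f"{base}/documents/{doc_id}", params=VERSION, auth=AUTH)
    resp.raise_for_status()
```

Deleting the collection itself (a DELETE on the collection URL) is the blunter alternative mentioned above, at the cost of having to recreate the collection's configuration from scratch.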
Why is it useful?
It would allow users to apply SDU or ingest configuration changes to an existing collection without having to delete all of its documents or rebuild the collection from scratch.
Who would benefit from this IDEA?
All users of the crawlers.
How should it work?
Provide an option, on the crawl schedule or on the collection, to force a full re-crawl that re-ingests every document, regardless of whether it has changed since the last crawl.