Maintaining unneeded indexes can have a detrimental effect on performance, but a missing needed index can be just as harmful.

The best way to identify expensive searches processed in the server is to examine the access logs for search operations with a high etime (elapsed processing time) value. After you identify these search operations, you can filter out any of these operations that do not need to be fast. For example, you might have applications that generate reports by performing inefficient searches, such as searches to retrieve all entries, and it is usually more acceptable for those searches to be slow.
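The log scan described above can be sketched as a small shell helper. This is a minimal sketch: the "SEARCH RESULT" and "etime=" tokens reflect a typical access-log format, and the log path in the usage comment is a placeholder, so verify both against your own deployment.

```shell
# Print search operations from an access log whose elapsed time (etime)
# exceeds a threshold in milliseconds. Assumes each result line contains
# a whitespace-delimited "etime=<value>" field, which may differ in your
# log format.
slow_searches() {
  # $1 = path to the access log, $2 = etime threshold in milliseconds
  awk -v limit="$2" '
    /SEARCH RESULT/ {
      for (i = 1; i <= NF; i++)
        if ($i ~ /^etime=/ && substr($i, 7) + 0 > limit) {
          print
          next
        }
    }' "$1"
}

# Example (hypothetical log path): list searches slower than 100 ms.
# slow_searches /path/to/logs/access 100
```

From the matching lines you can collect the base DN, scope, and filter of each expensive search for further analysis.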

For any remaining searches that are slow or that should be faster, the most effective way to determine why a search is expensive is to repeat it with the same base DN, scope, and filter, but request only the debugsearchindex attribute.

Note:

debugsearchindex is a special attribute that causes the server to return debug information about the index processing performed while evaluating the search, including how long each step of the evaluation took.
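Such a request might look like the following. This is a sketch using placeholder host, credentials, base DN, and filter, and it assumes the long-form option names of the ldapsearch tool shipped with the server; adjust all of these for your environment.

```shell
# Repeat the expensive search, but request only the debugsearchindex
# attribute so the server returns index-processing details instead of
# the matching entries. All connection details and the filter below are
# placeholders.
ldapsearch --hostname ds1.example.com --port 389 \
  --bindDN "cn=Directory Manager" --bindPasswordFile pw.txt \
  --baseDN "dc=example,dc=com" --searchScope sub \
  "(&(objectClass=person)(st=California))" debugsearchindex
```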

From this output, you can see which indexes were used and which could not be used because there was either no applicable index or the index entry limit had been exceeded for the target key. You can see expensive accesses to exploded indexes and identify indexes you want to add, indexes that can benefit from being converted to composite indexes, or indexes where you might need to increase the index entry limit. Alternatively, you can determine a different way to perform the search so that it does not depend on components that are unindexed or that match a large number of entries.