We have several thousand tests and suffer from very long analysis times. We mitigated that by configuring the plugin to only read the last 25 runs, which helps a lot but is also very limiting: we can't get a good idea of which are the top 10 flaky tests over a given period of time, just over the last few hours in our case.
I understand that this is caused by Jenkins storing everything in XML, but I think the plugin could cache the data it has already read from previous runs, so that it would be much more efficient when the same report is requested twice or when most of the runs have already been analyzed.
A workaround we are considering is a post-processing pipeline that collects the test results and republishes them with only the failing tests, as a trick to produce very small test reports to analyze. This post job would be the one used to get a more useful report. If we can do this trickery externally, certainly the plugin could do it internally.