Hello Pellumb, thanks for the question and for sharing the results!
It is expected that different algorithms will produce slightly different results because of (1) sampling variability and (2) the different assumptions each algorithm makes. In this case, because you were able to run a large enough number of simulations, the results from the Dakota engine are likely more accurate and therefore preferred.
The method in the Dakota engine (efficient Monte Carlo) is asymptotically unbiased, meaning it is guaranteed to converge to the 'exact' values as the number of samples grows. If you specify 1500 samples, the results should be fairly accurate in most applications.
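To illustrate what "asymptotically unbiased" means in practice, here is a minimal sketch (not the quoFEM/Dakota implementation) of a Saltelli-style Monte Carlo estimator of first-order Sobol indices on the Ishigami test function. The only error is sampling noise, which shrinks roughly as 1/sqrt(N), so the estimates settle near the analytical values as the sample size increases:

```python
# Minimal sketch: Monte Carlo (Saltelli 2010) first-order Sobol indices
# on the Ishigami test function. Illustrative only.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

def first_order_sobol(model, dim, n, rng):
    # Two independent sample matrices drawn from the input distribution
    A = rng.uniform(-np.pi, np.pi, size=(n, dim))
    B = rng.uniform(-np.pi, np.pi, size=(n, dim))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                              # replace column i of A with that of B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var     # Saltelli (2010) estimator
    return S

rng = np.random.default_rng(1)
for n in (100, 1500, 20000):
    print(n, np.round(first_order_sobol(ishigami, 3, n, rng), 3))
# As n grows, the estimates approach the analytical values
# S1 ~ 0.314, S2 ~ 0.442, S3 = 0.
```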
The approach in the SimCenterUQ engine (PM-GSA), on the other hand, introduces additional assumptions to achieve faster convergence. Because of these assumptions, the results may still be biased even when enough simulations are run. However, there are situations where this method is preferred to the one above, for example:
(1) when the simulation model is very expensive, so only a limited number of samples (maybe a few hundred) is affordable;
(2) when the random variables are correlated;
(3) when Monte Carlo samples are already available, so you want to import the dataset directly instead of running the simulations again (a simple illustration of this idea is sketched below);
(4) when you would like to calculate 'joint sensitivity indices' or 'higher-order sensitivity indices'.
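Regarding point (3), the sketch below shows why sensitivity indices can be estimated from an existing (X, Y) dataset without any extra model runs: bin one input, compute the conditional mean of the output in each bin, and compare the variance of those means to the total variance. This is a simple data-driven estimator for illustration only, not the PM-GSA algorithm itself, which instead fits a probability model to the data to converge with fewer samples:

```python
# Rough sketch: first-order sensitivity index from a precomputed dataset,
# estimated as Var(E[Y | X_i]) / Var(Y) via quantile binning. Illustrative only.
import numpy as np

def first_order_from_data(x_i, y, n_bins=30):
    edges = np.quantile(x_i, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x_i, side="right") - 1, 0, n_bins - 1)
    bins = [b for b in range(n_bins) if np.any(idx == b)]
    cond_means = np.array([y[idx == b].mean() for b in bins])   # E[Y | X_i in bin]
    weights = np.array([np.mean(idx == b) for b in bins])       # fraction of samples per bin
    var_cond_mean = np.sum(weights * (cond_means - y.mean())**2)
    return var_cond_mean / y.var()

# Example with a dataset that could have come from earlier simulations:
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(2000, 3))
Y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1])**2 + 0.1 * X[:, 2]**4 * np.sin(X[:, 0])
print([round(first_order_from_data(X[:, i], Y), 3) for i in range(3)])
```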
Hope this helps!
Sang-ri