Recent Posts

Pages: 1 2 [3] 4 5 ... 10
Uncertainty Quantification (quoFEM) / Re: Parallel execution on a Windows HPC
« Last post by Sang-ri on October 31, 2023, 11:01:13 PM »

Thanks for sharing the file. We are still struggling to identify the issue. From my understanding, the automated process of overwriting the evaluation_concurrency value is exactly the same as the process we tried manually, as shown below.


Thank you for the info. We think this number should be 128 instead of 64. While we figure out a solution, can you try the following workaround and let us know whether it brings the CPU utilization to 100%?

1. Change the number after "asynchronous evaluation_concurrency" in from 64 to 128.
2. Remove all files and folders in the local working directory except for "" and "templatedir".
3. Find the path to the Dakota executable in the quoFEM preferences window. Let us denote it {dakota path}.
4. Open a command prompt, cd into the folder where is located, and type "{dakota path}" (without the quotation marks).

This will run the forward propagation analysis, and the results will be written to dakotaTab.out.
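In case it helps, here is a rough Python sketch of step 1. The file name "dakota.in" and the exact keyword spacing are assumptions; use whichever Dakota input file sits in your local working directory.

```python
import re
from pathlib import Path

def bump_concurrency(path, old=64, new=128):
    """Replace the number after "asynchronous evaluation_concurrency".

    The default file layout assumed here (keyword followed by "= 64")
    is a guess; check your actual Dakota input file.
    """
    text = Path(path).read_text()
    updated = re.sub(
        r"(asynchronous\s+evaluation_concurrency\s*=?\s*)%d" % old,
        lambda m: m.group(1) + str(new),
        text,
    )
    Path(path).write_text(updated)
    return updated

# Hypothetical usage (file name is a placeholder):
# bump_concurrency("dakota.in")
```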

Thank you,

Regarding this, is it possible that, when we tried it manually, the dakotaTab.out file was not properly created even though the CPU was at 100%? Sorry, I should have asked this earlier.

If unsure, please just let me know. We will continue investigating the issue on our side.


Uncertainty Quantification (quoFEM) / Re: Parallel execution on a Windows HPC
« Last post by rsam1993 on October 31, 2023, 01:40:34 AM »
Thank you for your prompt response,

Yes, dakotaTab.out is created, but the results are not there. It seems to me that the dakota.out file is not completely generated by quoFEM. I am attaching all the files you mentioned so you can check them and see where this issue comes from.

Many thanks, Sang-ri.

1. I will try to perform the sensitivity analysis on quoFEM as per your instructions. Fingers crossed. Let's see how the sensitivity index values are.
2. For the GP-based surrogate model, could you please suggest which tools will be easier for my case, provided I have no machine learning experience? I am comfortable with Python and MATLAB.
Uncertainty Quantification (quoFEM) / Re: Parallel execution on a Windows HPC
« Last post by Sang-ri on October 30, 2023, 11:45:13 PM »

Thank you so much for following up, and I'm sorry for the inconvenience! Your feedback is extremely appreciated, because we could not test this feature without access to a machine with more than 64 cores.

Can you please check whether a "dakotaTab.out" file is created in the local working directory (C:\Users\rsamtaslimi\Documents\quoFEM\LocalWorkDir\tmp.SimCenter) and whether it contains the desired sample evaluation results?

If it does, it would be great if you could share the "dakota.err" file from the same folder with us. Currently, quoFEM raises an error whenever dakota.err is non-empty, so we can simply add an exception condition to fix that.

If "dakotaTab.out" has not been created properly, please share files "dakota.err", "", "dakota.out", and "log.txt", if those exist in the local working directory, to help us figure out the source of error.

Thanks again!

Hi Atish,

Thanks so much for clarifying! I hope you recover fully soon.

Dealing with high-dimensional input is typically trickier than high-dimensional output, partly because of algorithmic challenges (e.g., the number of parameters to be optimized increases), but more importantly because the sensitivity index values are then likely to be very low. In an extreme case, imagine 50 variables contributing equally to the response: the sensitivity of each variable can be less than 0.02, and reaching that level of estimation accuracy would require an enormous number of samples. With 512 samples, the estimates can be significantly perturbed by sampling variability. On the other hand, if only a few variables actually dominate the response of your model, some algorithms can work. The best way to find out is to test it  :)
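To give a rough numerical feel for that point (this is only a toy sketch with a simple pick-freeze estimator, not the algorithm SimCenterUQ actually uses): for an additive model where 50 standard-normal inputs contribute equally, each first-order Sobol index is exactly 1/50 = 0.02, yet estimates from 512 samples scatter by more than that.

```python
import numpy as np

# Toy additive model: y = x_1 + ... + x_50, with x_i ~ N(0, 1) independent.
# True first-order Sobol index of each input: Var(x_i) / Var(y) = 1/50 = 0.02.
rng = np.random.default_rng(0)
n, d = 512, 50
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))
y_A = A.sum(axis=1)

estimates = np.empty(d)
for i in range(d):
    AB = B.copy()
    AB[:, i] = A[:, i]                  # B with column i "frozen" from A
    y_AB = AB.sum(axis=1)
    # Pick-freeze estimate of the first-order index: Cov(y_A, y_AB) / Var(y)
    cov = np.mean(y_A * y_AB) - np.mean(y_A) * np.mean(y_AB)
    estimates[i] = cov / np.var(y_A)

print("true index:", 1 / d)
print("estimate range:", estimates.min(), estimates.max())
# With only 512 samples, the sampling noise is comparable to (or larger
# than) the 0.02 signal, so individual indices are hard to resolve.
```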

To run the quoFEM analysis using your existing Monte Carlo results, please follow the steps below:
  • In the UQ tab, select "Sensitivity Analysis" - "SimCenterUQ" - "Import Data Files". Set the # samples to 512 and import the data files prepared following the instructions.
  • In the FEM tab, select "none".
  • Then, if you click the RV tab, quoFEM should already have auto-populated the 50 variables (nothing to change). Finally, in the QoI tab, you can set any name for the output variable and set its length to 4.
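If it helps with preparing the files, here is a minimal Python sketch. It assumes (please check against the quoFEM documentation) that the import expects plain whitespace-delimited text with one sample per row: a 512 × 50 matrix for the inputs and a 512 × 4 matrix for the outputs. The file names are placeholders.

```python
import numpy as np

def write_quofem_data(inputs, outputs, in_path="input_data.txt",
                      out_path="output_data.txt"):
    """Write whitespace-delimited sample files, one sample per row.

    inputs:  array of shape (512, 50) -- the perturbed reaction rates
    outputs: array of shape (512, 4)  -- the four model responses
    The format and file names are assumptions; check the quoFEM manual.
    """
    inputs = np.asarray(inputs)
    outputs = np.asarray(outputs)
    assert inputs.shape[0] == outputs.shape[0], "sample counts must match"
    np.savetxt(in_path, inputs)
    np.savetxt(out_path, outputs)
```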
Some caveats:
  • Please note that the total sensitivity index from the algorithm in SimCenterUQ is likely not credible for such high-dimensional inputs (the challenge is fitting a Gaussian mixture distribution in 50-dimensional space; see here for the reference). So only the main index should be useful.
  • If you want a reliable total index, you may want to run the algorithm in the Dakota engine, but this typically requires a much larger number of simulations and cannot be estimated from pre-simulated samples (you need to import the model in the FEM tab). This algorithm, however, is guaranteed to converge to the exact solution as the number of samples becomes very large.
  • One more caveat applies when the input variables are correlated: in that case the contribution can be "double counted" for the correlated variables, so be careful when interpreting the Sobol indices.

For the surrogate model, I expect the GP in quoFEM would not work well: with 512 samples in 50 input dimensions, it will very likely overfit.
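To make that concrete, here is a self-contained numpy sketch (a toy noise-free GP interpolator with an arbitrarily chosen length scale, not quoFEM's actual surrogate implementation). Fitted to 512 samples of a smooth stand-in function in 50 dimensions, it reproduces the training data almost exactly yet generalizes poorly, because in 50 dimensions the training points are all far from any new point.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 512, 200, 50

def model(X):
    # Stand-in for an expensive simulator: a smooth nonlinear response.
    return np.sin(X).sum(axis=1)

X_tr = rng.standard_normal((n_train, d))
X_te = rng.standard_normal((n_test, d))
y_tr, y_te = model(X_tr), model(X_te)

def rbf(Xa, Xb, ell=3.0):
    # Squared-exponential (RBF) kernel; ell is an arbitrary length scale.
    d2 = (np.sum(Xa**2, axis=1)[:, None]
          + np.sum(Xb**2, axis=1)[None, :]
          - 2.0 * Xa @ Xb.T)
    return np.exp(-0.5 * np.maximum(d2, 0.0) / ell**2)

K = rbf(X_tr, X_tr) + 1e-6 * np.eye(n_train)   # jitter for stability
alpha = np.linalg.solve(K, y_tr)
pred_tr = rbf(X_tr, X_tr) @ alpha               # GP mean at training points
pred_te = rbf(X_te, X_tr) @ alpha               # GP mean at held-out points

train_rmse = np.sqrt(np.mean((pred_tr - y_tr) ** 2))
test_rmse = np.sqrt(np.mean((pred_te - y_te) ** 2))
print("train RMSE:", train_rmse)   # near zero: the GP interpolates
print("test RMSE:", test_rmse)     # large: it fails to generalize
```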

Please let me know if anything is unclear or if you have difficulty running the analysis.


Thanks, Sang-ri.

My apologies for the late reply. I have been feeling unwell for the past few days.

1. The dimension of my model input is 50. I have chosen 50 chemical reaction rates whose contribution I would like to inspect. The atmospheric model I am working on is the Global Ionosphere Thermosphere Model (GITM), which outputs many parameters, but I will try to focus on baseline parameters such as neutral temperature, nitric oxide density, neutral density, and electron density. Let's say, for now, my output dimension is 4.

2. No, I have not used quoFEM before. Previously, I tried to use the SALib - Sensitivity Analysis Library in Python to perform Sobol Analysis. The result was not good.

I have 512 simulation results, which I obtained by running my model with 512 different sets of perturbed reaction rates. Each set contains 50 reaction rates. I generated these 512 sets of perturbed reaction rates using Monte Carlo sampling.
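For reference, a sample set like that can be drawn in a few lines of numpy. This is only a hypothetical sketch: the lognormal multiplicative perturbation, the 20% spread, and the unit nominal rates below are placeholder assumptions, not the distributions actually used.

```python
import numpy as np

# Hypothetical sketch: 512 Monte Carlo sets of 50 perturbed reaction rates,
# built by applying lognormal multiplicative factors to nominal rates.
rng = np.random.default_rng(42)
n_samples, n_rates = 512, 50
nominal = np.ones(n_rates)          # placeholder nominal reaction rates
factors = rng.lognormal(mean=0.0, sigma=0.2, size=(n_samples, n_rates))
perturbed = nominal * factors       # shape (512, 50), one sample per row
```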

Uncertainty Quantification (quoFEM) / Re: Parallel execution on a Windows HPC
« Last post by rsam1993 on October 29, 2023, 08:04:05 PM »
Dear Sang-ri,

I have updated the quoFEM application on our HPC and placed the config.json file in the same directory as the quoFEM executable. Now, when I run it with 128 samples, all 128 cores are engaged and the OpenSees analyses seem to run successfully, but I get this error at the end and quoFEM does not give me any results!

Error Running Dakota: Too many processes (128) in wait_setup
Current limit on processes = 64

And here is the error message that I get from dakota.err file:

Too many processes (128) in wait_setup
Current limit on processes = 64

I am not sure where the problem is, because it should work. Please let me know what you need me to share with you to find the cause of this error.
Uncertainty Quantification (quoFEM) / Re: Parallel execution on a Windows HPC
« Last post by rsam1993 on October 29, 2023, 03:30:43 PM »

I really appreciate it. I will start using this new update soon and let you know in case there is any problem, which I doubt.
Regional Hazard Simulation (R2D, rWhale) / Re: Error in running examples
« Last post by jyzhao on October 27, 2023, 04:45:20 PM »

Collecting inventory data is difficult. Although R2D can be used to assess the performance of building inventory stocks under natural hazards, it is not designed to collect building inventory data. You may refer to the testbeds in R2D for examples of collecting building inventory data. If you would like to study highway transportation infrastructure, the BRAILS-transportation tool (found in the "Tools" drop-down menu) in R2D can generate the transportation infrastructure data for a HAZUS-style earthquake performance assessment. Please let us know if you need further assistance.


Regional Hazard Simulation (R2D, rWhale) / Re: R2D on Linux HPC
« Last post by fmk on October 27, 2023, 04:41:16 PM »

It is possible to run on such a system if you can build both the backend and frontend. I am in the process of building a Dockerfile which you could use to see which packages are needed on your system and what the build process is. The Dockerfile uses Ubuntu 18.04 as its base. The biggest issue I foresee is getting Qt version 5.15 installed. Failing to install the frontend would mean you could not use the GUI. The backend has a simpler build process, and you could run the computational simulations there and possibly view the results in Python.

The timeline for the Dockerfile is probably next week (today being 10/27/2023, for others looking at this message). The Dockerfile will be in the R2DTool repo on GitHub.

The purpose of the docker images is to run the frontend and backend on one of the TACC Frontera HPC compute nodes.
