
Messages - AakashBS

Example files for Bayesian model class selection of a material model

Uncertainty Quantification (quoFEM) / Re: Problem with Dakota
« on: December 11, 2023, 05:39:36 PM »

We need some more information to diagnose the cause of this issue and advise how to resolve it. Could you please zip up the 'tmp.SimCenter' directory located in your Local Jobs Directory and share it with us here?

(You will find the path to your 'Local Jobs Directory' in the 'Preferences' panel of quoFEM. To bring up the Preferences panel, open the quoFEM tool, then click on File -> Preferences in the main menu if on Windows or quoFEM -> Preferences if on a Mac.)

Best regards,

Uncertainty Quantification (quoFEM) / Re: Unable to run quoFEM
« on: August 15, 2023, 04:00:56 AM »
Hi Allen,

Could you check whether you included the data file in the folder with your model files? It was not in the list of files you last shared, and the analysis will not run without it. I was able to start the analysis both locally and remotely after including the calibration data file from your first post in the directory with the model script. Please let me know if that resolves the issue.


Uncertainty Quantification (quoFEM) / Re: Unable to run quoFEM
« on: August 12, 2023, 04:03:35 AM »
Hello Allen,

Short answer:
From the attached JSON file, I see that you are using the TMCMC algorithm for Bayesian calibration. Please wait until the analysis is finished. If it is successful, you will see the results of the Bayesian calibration displayed in the RES panel of quoFEM. If it fails, you will receive an error message in the message area of quoFEM. Until either of these two events occurs, the analysis is still in progress, even though it may appear that nothing is happening once the preset number of work directories (one per sample) has been created. The same directories are reused while the analysis runs.

Longer answer:
It is typically not easy to sample the posterior probability distribution of the parameters of a complex model such as the finite element model you are using. To accomplish this, the Transitional Markov chain Monte Carlo (TMCMC) algorithm constructs and samples a sequence of intermediate distributions, starting from the prior probability distribution. Sampling each intermediate density requires propagating a Markov chain for a few steps, and each step requires a model run. So, to generate the requested number (say 'Ns') of sample values from the posterior, the analysis requires (#intermediate_stages * #Markov_chain_steps_per_stage) * Ns model evaluations. In other words, once Ns model evaluations are complete (i.e., the preset sample size number of workdirs appears in the local results directory), this process repeats (#intermediate_stages * #Markov_chain_steps_per_stage) times. You will need to wait until this process completes to see the results of the analysis in quoFEM.

How long do you need to wait?
The number of intermediate distributions required to transition from the prior to the posterior depends on how different the two distributions are. Since this is not known when the sampling starts, it is not possible to predict how many intermediate stages a given problem will need. Typically, this number is in the range of 5-30 (it may be much larger for your problem; there is no way to know beforehand). The number of Markov chain steps per stage is about 10. So, you may have to wait roughly 100 times as long as it takes to complete Ns model evaluations once. If this is too long to run the analysis locally on your machine, and once you are sure that your model is set up correctly in quoFEM, you can use the 'RUN at DesignSafe' option. DesignSafe gives you access to a large number of processors, which allows you to run many more model evaluations concurrently. This reduces the time needed to run Ns model evaluations (the number of intermediate stages and Markov chain steps does not change, however), and makes it feasible to perform Bayesian calibration of complex models that would not be possible without large computational resources.
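As a back-of-the-envelope illustration of the waiting time, the total evaluation count follows directly from the counts above. All three numbers below are placeholders for illustration, not values reported by quoFEM:

```python
# Rough estimate of the total number of TMCMC model evaluations.
# These inputs are illustrative assumptions, not values from your analysis.
num_stages = 20          # intermediate distributions (typically ~5-30)
steps_per_stage = 10     # Markov chain steps per stage
ns = 500                 # requested number of posterior samples

total_evaluations = num_stages * steps_per_stage * ns
print(total_evaluations)  # 100000 model evaluations for this example
```

Multiply this by the wall-clock time of a single model evaluation (divided by the number of concurrent evaluations) to get a rough runtime estimate.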

How can you monitor the progress of the analysis while it is running?
A file called 'logFileTMCMC.txt' records the progress of the analysis in the 'tmp.SimCenter' directory within the local results directory. You can find the current stage number towards the end of the file. You will also find the value of a variable called 'beta' recorded in each stage. Intermediate stages are required until beta reaches a value of 1. Beta starts at 0 and initially grows very slowly, but the increase in beta typically accelerates, so do not be worried if you see that the value of beta is very small in the initial stages.
At the end of every stage, a new CSV file named 'resultsStage_.csv', containing the set of sample values from the intermediate density at that stage, is written. This is another indication that the analysis is progressing.
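If you want to check the progress programmatically rather than scrolling through the log, the following sketch pulls the most recent beta value out of the log text. The exact line format in 'logFileTMCMC.txt' is an assumption here, so you may need to adjust the pattern:

```python
import re

def latest_beta(log_text):
    """Return the last beta value found in the TMCMC log text, or None.

    Assumes beta is logged on lines containing 'beta' followed by a number;
    adjust the pattern to match the actual format of logFileTMCMC.txt.
    """
    matches = re.findall(r"beta\s*[:=]?\s*([0-9]*\.?[0-9]+)",
                         log_text, re.IGNORECASE)
    return float(matches[-1]) if matches else None

# Example with made-up log lines:
sample = "stage 3\nbeta = 0.02\nstage 4\nbeta = 0.11\n"
print(latest_beta(sample))  # 0.11
```

Reading the real file would be `latest_beta(open(path).read())` with `path` pointing into your 'tmp.SimCenter' directory; remember the analysis is done once beta reaches 1.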

Hope this helps, and that you are able to effectively calibrate your model using quoFEM.

Best regards,

Hello Dano,

Glad to know that the analysis is running. For a more detailed discussion, could you please attend the UQ Office Hours (



Thank you for providing the corrected files.

When I open the shared JSON file in quoFEM and try to replicate your analysis, the analysis does run. However, I get an error, shown in the attached screenshot: the strain data has one fewer point than the stress data. Please check whether fixing this discrepancy resolves the issue.

To troubleshoot your problem setup more effectively, I suggest reducing the number of stress-strain data points; for instance, keep every 100th point. The analysis will then finish much more quickly, allowing you to identify any errors more efficiently. Once you are sure the problem is set up correctly, you can use the entire dataset to get the final results.
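Thinning the data this way is a one-liner with Python slicing; the list below is a stand-in for your actual stress or strain record:

```python
# Keep every 100th data point to speed up trial runs.
# 'full_data' is a placeholder for the real stress or strain measurements.
full_data = list(range(1000))   # stand-in for 1000 measured values
thinned = full_data[::100]      # every 100th point, starting with the first

print(len(thinned))             # 10 points instead of 1000
```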


Here are a few recommendations:

1) Please remove the space after "Pinching" in the name of the file "Pinching Material.tcl".

2) Please use a different initial point for the variables. When the initial point is set to 0 for the parameters of the Pinching4 model, the backbone curve is not uniquely defined and the material model cannot be evaluated.

3) Are you using the strain or the stress values as data for calibration? Please double-check the name of the calibration data file in the UQ panel of quoFEM to make sure you are providing the right file. If you would like to use the stress instead of the strain, and the data come from a single test, please make sure that all the stress values are in one row.
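Regarding point 3, here is a sketch of writing a calibration data file with all values from one test on a single row. The file name and the values are placeholders; use the file name you entered in the UQ panel:

```python
# Write all stress values from one test on a single whitespace-separated row.
# 'calibrationData.txt' and the values below are placeholders.
stress_values = [0.0, 12.5, 24.1, 33.7]

with open("calibrationData.txt", "w") as f:
    f.write(" ".join(str(v) for v in stress_values) + "\n")
```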



Could you please share the dakota.err file from the 'tmp.SimCenter' directory?


Uncertainty Quantification (quoFEM) / Re: Results in file
« on: November 08, 2022, 02:55:03 AM »
Hello Ahmet,

Thank you for the question and for sharing the file!

There is a way to read the results from a file after the analysis has completed. For most UQ analyses, the results are written to a file called 'dakotaTab.out' in the tmp.SimCenter directory. (There are exceptions: for example, the results of reliability analysis using Dakota are at the end of the 'dakota.out' file, and the results of Bayesian calibration using Dakota are in a file called 'dakota_mcmc_tabular.dat'.)
The 'dakotaTab.out' file is a tab-separated plain text file. A header row indicates what each column of data contains. You can typically ignore the first two columns; from the third column onwards, the file contains the sample values of the input RVs, followed by columns with the values of the EDPs corresponding to those inputs.
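A minimal sketch of reading such a file with Python's standard library, dropping the first two columns as described above (the column layout beyond that is as documented here; any names in your file will come from your own RVs and EDPs):

```python
import csv

def read_dakota_tab(path):
    """Read a tab-separated dakotaTab.out-style file.

    Returns (header, rows), with the first two columns
    (which can typically be ignored) dropped and values
    converted to float.
    """
    with open(path, newline="") as f:
        reader = csv.reader(f, delimiter="\t")
        header = next(reader)[2:]   # skip the first two columns
        rows = [[float(v) for v in row[2:]] for row in reader if row]
    return header, rows
```

Calling `read_dakota_tab("tmp.SimCenter/dakotaTab.out")` then gives you the RV columns first, followed by the EDP columns.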

The BIM.json file (called AIM.json from quoFEM version 3.2.0 onwards) is created when the analysis starts. It is not updated at the end of the analysis, so it does not contain the results of the current run.

Uncertainty Quantification (quoFEM) / Re: Custom Analysis Engine
« on: October 05, 2022, 06:18:03 AM »
This is in addition to creating the driver script and any post-processing script that is necessary.

Uncertainty Quantification (quoFEM) / Re: Custom Analysis Engine
« on: October 05, 2022, 05:45:39 AM »
One should provide read and execute permissions to all (user, group, and other) for all items in the path from the user's home directory to the custom analysis application on DesignSafe.
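The sketch below does this from Python (equivalent to running `chmod a+rx` on each directory along the path). The path components are hypothetical placeholders, not your actual DesignSafe layout:

```python
import os
import stat

# Hypothetical application path; substitute your own directories on DesignSafe.
home = os.path.expanduser("~")
app_dir = os.path.join(home, "my-apps", "my-analysis-app")
os.makedirs(app_dir, exist_ok=True)  # only so this sketch is self-contained

# Read + execute bits for user, group, and other (i.e., 'a+rx').
rx_all = (stat.S_IRUSR | stat.S_IXUSR |
          stat.S_IRGRP | stat.S_IXGRP |
          stat.S_IROTH | stat.S_IXOTH)

# Add the bits to every directory on the path from home down to the app.
for p in [home,
          os.path.join(home, "my-apps"),
          app_dir]:
    os.chmod(p, os.stat(p).st_mode | rx_all)
```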

Uncertainty Quantification (quoFEM) / Re: Example page error
« on: October 03, 2022, 05:33:18 PM »
Thank you for bringing this to our attention. These links have been fixed to load the referenced images correctly, and the documentation webpage will be updated soon.

The files shared in this post pertain to using quoFEM to conduct global sensitivity analysis and Bayesian calibration of a liquefaction-capable material model available in OpenSees, and to make probabilistic predictions of lateral spreading due to seismic soil liquefaction using the calibrated material model.

Unzipping the attached archive will produce three directories, each of which contains all the files necessary to run an analysis in quoFEM. The shared files have been tested with quoFEM version 3.1.0.

Refer to the attached preprint for a detailed description of the problem and the models used. Citation information will be provided once available.

Each model evaluation during the sensitivity analysis and calibration phase takes about 1 minute. Each model evaluation for predicting the lateral spreading takes about 20 minutes to complete. (Runtimes on a 2020 MacBook Pro with a 2 GHz Intel Core i5 processor and 32 GB of memory.)

You can also try increasing the standard deviation to obtain a wider prior, as a 10% coefficient of variation results in a narrow, informative prior. If the model predictions at all the samples drawn from the prior are too far from the data, the algorithm can run into numerical difficulties.

If the parameters must always take positive values, using a log-normal prior instead of a normal prior will ensure that the sampled input parameter values are admissible.
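If you want a log-normal prior that matches a given mean and coefficient of variation, the standard moment-matching relations are shown below. This is a generic sketch; quoFEM's own input convention for log-normal parameters may differ, so check the documentation of the distribution panel:

```python
import math

def lognormal_params(mean, cov):
    """Moment matching: parameters (mean and std of ln X) for a
    log-normal X with the given mean and coefficient of variation
    (std/mean). Note: quoFEM's parameterization may differ."""
    sigma_ln = math.sqrt(math.log(1.0 + cov ** 2))
    mu_ln = math.log(mean) - 0.5 * sigma_ln ** 2
    return mu_ln, sigma_ln

# A wider prior than a 10% CoV: e.g. mean 100.0 with a 50% CoV.
mu, sigma = lognormal_params(mean=100.0, cov=0.5)
```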
