Recent Posts

Pages: 1 2 [3] 4 5 ... 10
Hydrodynamic Engineering (Hydro-UQ) / Re: Login to Design Safe with TACC account
« Last post by fmk on January 25, 2023, 05:26:32 PM »
You need to be able to log in to DesignSafe before any of our tools will work. Did you go through the DesignSafe portal to reach the TACC account-creation page? You need TACC to sort this out. Since you cannot log in to DesignSafe to submit a ticket, you will have to log in to the TACC portal again and submit a ticket there (it is under the Consulting menu). TACC and DesignSafe are staffed by the same people, so they should be able to help. Get back to this thread if you are still having problems and I will try submitting a ticket on your behalf to DesignSafe proper.
Hydrodynamic Engineering (Hydro-UQ) / Login to Design Safe with TACC account
« Last post by waqas on January 25, 2023, 04:53:42 PM »

I have registered an account on TACC and can log in to it, but I am not able to log in to DesignSafe with the same account details. Likewise, when I try to log in to DesignSafe from the HydroUQ tool using my TACC login details, the login fails. Please note that I am new to this and only registered the TACC account yesterday, so could it take some time for approval, or is there some other issue?

Damage & Loss (PELICUN) / Re: Consequence functions
« Last post by navid on January 20, 2023, 12:43:48 AM »
Hi Adam,

Thank you for the information. It will be useful to have it available in Pelicun 3, especially for those interested in studying functional recovery using ATC 138.

Damage & Loss (PELICUN) / Re: Consequence functions
« Last post by adamzs on January 19, 2023, 07:48:53 PM »
Hi Navid,

The consequence models for unsafe placards and injuries are not yet implemented in Pelicun 3.

As for unsafe placards, in our experience there is consensus in both the research and practitioner communities that the current methodology in FEMA P58 provides unrealistic estimates. A new, more complex methodology is being developed by the ATC 138 project and will be released in the near future. At that point, we plan to implement a methodology in Pelicun that supports both the old and the new approach to calculating unsafe placards, to support benchmarking and evaluation of the impact of the changes.
The current methodology is available in Pelicun 2.6, but let us know if you would find having it available in Pelicun 3 useful for your work. If there is sufficient interest in the current method, we can increase its priority and implement it before the ATC 138 project concludes.

As for injuries, we are developing an enhanced version of the methodology in Pelicun 2. The supporting datasets are already available in Pelicun 3, but the implementation of the methodology is in progress. We plan to have injury calculations available by July 2023.

Let me know if you have further questions.


Damage & Loss (PELICUN) / Consequence functions
« Last post by navid on January 18, 2023, 04:03:36 AM »

Thank you so much for this helpful software.

I was wondering how to get the outputs for unsafe placarding (red tag), injuries, and fatalities. I could not obtain them with code similar to what I used for quantifying repair cost and time. I would appreciate it if you could advise how to write the appropriate Python code.

P58_data_Red_tag = PAL.get_default_data('bldg_redtag_DB_FEMA_P58_2nd')
P58_data_for_this_assessment_Red_tag = P58_data_Red_tag.loc[cmp_list, :]

 "PelicunDefault/bldg_redtag_DB_FEMA_P58_2nd.csv"], loss_map)

ValueError: multiple levels only valid with MultiIndex
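For context, the ValueError above is a generic pandas message rather than a Pelicun-specific one. A minimal reproduction, assuming (my guess, not a statement about Pelicun internals) that the red-tag table ends up with a single-level index while the loss calculation groups by several index levels:

```python
import pandas as pd

# Flat (single-level) index, like a table read straight from a CSV:
flat = pd.DataFrame({"cost": [1.0, 2.0]},
                    index=["B1011.001", "B1031.001"])
try:
    # Grouping by two index levels on a one-level index fails:
    flat.groupby(level=[0, 1]).sum()
except ValueError as err:
    print(err)  # multiple levels only valid with MultiIndex

# The same operation succeeds once the index is a two-level MultiIndex:
multi = flat.set_index(
    pd.MultiIndex.from_tuples([("B1011.001", "1"), ("B1031.001", "1")],
                              names=["cmp", "loc"]))
print(multi.groupby(level=["cmp", "loc"]).sum())
```

The component IDs and level names here are illustrative placeholders, not the actual Pelicun schema.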


Can you do the following:
1. Start the tool, open the Preferences window, and hit the Reset and Save buttons.
2. Now start an example with the RUN button.
3. If it fails, can you attach the debug.log file in ~/Documents/R2D and the .json file in ~/Documents/R2D/LocalWorkDir/tmp.SimCenter?


I have been trying to get R2D to work on my computer on and off for months now, but I keep running into the same combination of errors. I have attached a .txt file of what it returns. I even deleted everything from R2D and redownloaded everything.

What seems to keep happening is that the app keeps adding an extra "applications" folder. The path changes to: C:\SimCenter\R2D_Windows_Download\applications\applications\python\python.exe: can't open file 'C:\SimCenter\R2D_Windows_Download\applications\applications\applications\createBIM\CSV_to_BIM\': [Errno 2] No such file or directory

What am I missing?
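For illustration only, one common way such segment duplication arises (an assumption about the failure mode, not a statement about R2D's actual code) is joining a relative tool path that already contains the "applications" folder onto the applications directory itself:

```python
import ntpath  # Windows path semantics, so this runs on any OS

# Hypothetical launcher logic; both paths are illustrative:
app_dir = r"C:\SimCenter\R2D_Windows_Download\applications"
tool = r"applications\python\python.exe"  # relative path repeats the segment

full = ntpath.join(app_dir, tool)
print(full)
# C:\SimCenter\R2D_Windows_Download\applications\applications\python\python.exe
```

Each additional join of this kind appends one more "applications", which would match the triple segment in the error message.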

Uncertainty Quantification (quoFEM) / Re: QuoFEM Sensitivity Analysis
« Last post by pellumbz on December 19, 2022, 08:51:18 AM »
Dear Dr. Sang-ri,
I would like to thank you; I really appreciate your effort.
Many thanks also for the explanation, it is clear to me now :)

Best regards,
Uncertainty Quantification (quoFEM) / Re: QuoFEM Sensitivity Analysis
« Last post by Sang-ri on December 17, 2022, 01:07:16 AM »
Hello Pellumb, thanks for the question and for sharing the results!

It is likely that different algorithms will produce slightly different results because of (1) sampling variability and (2) the different assumptions that each algorithm makes. In this case, because you were able to run a sufficient number of simulations, the results from the Dakota engine are likely more accurate and thus preferred.

The method in the Dakota engine (efficient Monte Carlo) is asymptotically unbiased, meaning it is guaranteed to converge to the 'exact' values when a large number of samples is available. If you specify 1500 samples, the results should be fairly accurate in most applications.

The approach in the SimCenterUQ engine (PM-GSA), on the other hand, introduces more assumptions to achieve faster convergence. Because of these assumptions, the results may still be biased even when we run a large number of simulations. However, there are situations where this method is preferred to the one above, for example: (1) when the simulation model is very expensive, so only a limited number of samples (maybe a few hundred) is available; (2) when the random variables are correlated; (3) when Monte Carlo samples are already available, so you want to import the dataset directly instead of running simulations again; (4) when you would like to calculate 'joint sensitivity indices' or 'higher-order sensitivity indices'.
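The sampling-variability point can be made concrete with a toy example. The sketch below is plain NumPy, not quoFEM, Dakota, or SimCenterUQ code; the model and sample size are my own illustrative choices. It estimates main (first-order) and total Sobol indices with a standard pick-freeze Monte Carlo scheme; rerunning with a different seed shifts the estimates slightly, just as two engines' results differ:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Toy model where the first input clearly dominates
    return x[:, 0] + 0.5 * x[:, 1] ** 2

n, d = 1500, 2
A = rng.uniform(-1, 1, (n, d))  # two independent sample matrices
B = rng.uniform(-1, 1, (n, d))

yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]  # "pick-freeze": swap column i only
    yABi = model(ABi)
    S_i = np.mean(yB * (yABi - yA)) / var_y          # main effect estimator
    ST_i = 0.5 * np.mean((yA - yABi) ** 2) / var_y   # total effect estimator
    print(f"x{i}: main = {S_i:.2f}, total = {ST_i:.2f}")
```

With this additive-plus-quadratic model the main and total indices nearly coincide, and the x0 index comes out much larger than the x1 index; repeated runs scatter around those values, which is the sampling variability mentioned above.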

Hope this will help!

Uncertainty Quantification (quoFEM) / QuoFEM Sensitivity Analysis
« Last post by pellumbz on December 16, 2022, 10:38:39 AM »
Dear all,

I'm running sensitivity analysis in QuoFEM, and I have a question related to the "UQ Engine".
In QuoFEM, you can perform sensitivity analysis using either the Dakota or the SimCenterUQ engine.
I ran the same case (same RVs and number of samples) with both UQ engines, but I do not get the same results for the sensitivity of each parameter (main and total).
My question is: which is more accurate or reliable for sensitivity analysis?
Should both UQ engines provide the same results, or is it normal that the results differ?
I attached the results from both analyses for your consideration.
