Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - adamzs

Pages: [1] 2 3 ... 6
1
Damage & Loss (PELICUN) / Re: Consequence functions
« on: January 19, 2023, 07:48:53 PM »
Hi Navid,

The consequence models for unsafe placards and injuries are not yet implemented in Pelicun 3.

As for unsafe placards, in our experience there is consensus among both researchers and practitioners that the current methodology in FEMA P58 provides unrealistic estimates. A new, more complex methodology is being developed by the ATC 138 project and will be released in the near future. At that point, we plan to implement a methodology in Pelicun that supports both the old and the new approaches to calculating unsafe placards, to support benchmarking and evaluation of the impact of the changes.
The current methodology is available in Pelicun 2.6, but let us know if you would find having it available in Pelicun 3 useful for your work. If there is sufficient interest in the current method, we can increase its priority and implement it before the ATC 138 project concludes.

As for injuries, we are developing an enhanced version of the methodology in Pelicun 2. We have the supporting datasets already available in Pelicun 3, but the implementation of the methodology is in progress. We plan to have injury calculations available by July 2023.

Let me know if you have further questions.

Adam


2
Performance Based Engineering (PBE) / Re: Dakota Error Debugging
« on: October 25, 2022, 08:57:41 PM »
Thanks for sharing that info, Hannah!

3
Performance Based Engineering (PBE) / Re: Fragility Database export
« on: October 20, 2022, 02:23:51 AM »
Hi Nikoch,

If you're interested in the database under PBE 3.0, you can find it as part of pelicun, here: https://github.com/NHERI-SimCenter/pelicun/tree/master/pelicun/resources
The fragility_DB_FEMA_P58_2nd CSV and JSON files contain the fragility function data, and the bldg_repair, bldg_injury, and bldg_redtag files contain the consequence function data.

Let me know if you are interested in the old version of the data we used under PBE 2.x and I am happy to send that to you.

Adam

4
Performance Based Engineering (PBE) / Re: Fragility Database export
« on: October 18, 2022, 08:45:03 PM »
Hi Nikoch,

I've just tested the exporting feature with the P-58 database on versions 2.2.2 and 3.0.0 - both work fine for me on Windows 10.

So, let's explore what could go wrong on your machine. When clicking the export button, you need to select a destination folder. I usually create a folder for this purpose, but you could simply select one of the existing folders on your hard drive. It does not need to be a special path.

Are you running in a Windows or macOS environment?

Do the examples run fine within PBE? If yes, that confirms that the tool has access to the databases.

Depending on your answers to the above, I'll have more questions.

Adam

5
Hi Anne,

Sorry for joining this conversation a little late.

When I suggested the uq module, I meant the uq module of Pelicun. That supports truncated distributions out of the box.

The rejection sampling that Sang-ri suggested works for a single variable. It is somewhat inconvenient that you need a larger sample to make sure you have enough data left after rejecting the points outside your domain of interest, but that is not a big issue. However, when you want a correlated multivariate truncated distribution, rejections can lead to a very different correlation structure than what you get if you define them using a Gaussian copula between the truncated marginals. Neither solution is 'correct', but the two solutions are different.
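To illustrate the difference, here is a minimal Python sketch using numpy/scipy directly (this is not the pelicun uq API; the correlation value and truncation limits are made up for demonstration) comparing rejection sampling from the joint distribution against a Gaussian copula over truncated marginals:

```python
# Illustrative comparison: rejection sampling vs. Gaussian copula between
# truncated marginals. All parameter values are made up for this sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.8                        # target correlation
cov = [[1.0, rho], [rho, 1.0]]
lo, hi = -0.5, 0.5               # truncation limits on both marginals

# Approach 1: rejection sampling -- draw from the joint normal and discard
# points outside the box. Note the oversized initial sample.
z = rng.multivariate_normal([0, 0], cov, size=200_000)
rej = z[np.all((z > lo) & (z < hi), axis=1)]

# Approach 2: Gaussian copula between *truncated* marginals -- correlate
# uniforms through the copula, then map through the truncated-normal ppf.
u = stats.norm.cdf(rng.multivariate_normal([0, 0], cov, size=len(rej)))
tn = stats.truncnorm(lo, hi)     # standard normal truncated to [lo, hi]
cop = np.column_stack([tn.ppf(u[:, 0]), tn.ppf(u[:, 1])])

print("rejection sampling corr:", np.corrcoef(rej.T)[0, 1])
print("copula corr:            ", np.corrcoef(cop.T)[0, 1])
```

Both samples respect the limits, but the sample correlations differ noticeably, which is the point made above: neither is "correct", they are simply different models.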

Re Frank's question, I think lower and upper limits on any random variable could be useful to have in quoFEM.

Adam

6
Regional Hazard Simulation (R2D, rWhale) / Re: Questions for R2D program
« on: September 20, 2022, 05:31:03 PM »
Hi Sejin,

Can you provide some information on the tools you typically use for these calculations?

How do you evaluate the external and internal pressure on the building? For example, is it the result of an advanced CFD analysis, or do you use some kind of approximate solution that considers the wind environment and a building archetype?

How do you evaluate the resistance of the building components? Is it a deterministic or a probabilistic analysis?

What happens to the components that become windborne? Do you want to keep track of them afterward?

I need this information to be able to answer your question on how to implement these features in R2D.

Adam

7
Regional Hazard Simulation (R2D, rWhale) / Re: Questions for R2D program
« on: September 12, 2022, 06:57:38 PM »
Hi Sejin,

The currently available modules consider wind-borne debris using the approach from HAZUS: the debris environment is characterized for every building (one of four possibilities), and the (building level) damage state at any given wind speed is influenced by the selected environment. This is a crude but efficient approach that does not take advantage of any local simulation.

You can certainly:
- Introduce more sophisticated archetypes that better represent the aerodynamic characteristics of various buildings and debris environments.
- Introduce a more sophisticated mapping of available information on the built environment to debris environment classes.
- Add a debris trajectory module to R2D.

The first two options are relatively straightforward. The complexity of the last one depends on the methodology you plan to use to model the creation (i.e. after buildings or the environment are damaged) and path of debris in the area. If you can tell us a bit more about that methodology, I can give you more information about the complexity of implementing it in our backend and in R2D.

Adam

8
Hi Karen,

Thanks for reaching out to us. I've checked the JSON file you attached, and I did not find any errors in it.

Can you share your auto-population file as well? I suspect that there might be an issue with linking the JSON fragilities with specific buildings. (If the file includes sensitive information that you would not like to share publicly at this point, please feel free to share it in a private message.)

Thanks,
Adam

9
Damage & Loss (PELICUN) / Re: Uncertainty in consequence models
« on: June 01, 2022, 05:23:48 PM »
Hi Andrés,

Sounds good! Don't hesitate to reach out if you have any questions or run into issues.

Adam

10
Damage & Loss (PELICUN) / Re: EDP keys
« on: June 01, 2022, 02:24:16 AM »
I forgot to mention in my previous message that I've made a new release (v3.1.b7) to make these changes immediately available for you to use.

11
Damage & Loss (PELICUN) / Re: EDP keys
« on: June 01, 2022, 02:19:51 AM »
Hi Pooya,

Thanks for drawing my attention to this issue. I've added a few demand types so that everything listed in the fragility database is now supported by Pelicun. Specifically, I've added these new types:

* Peak Link Rotation Angle - LR
* Peak Link Beam Chord Rotation - LBR
* Peak Floor Displacement - PFD
* Peak Effective Drift Ratio - EDR

Let me know if there is any other demand type you would find useful to have in Pelicun.

Adam

12
I am posting a discussion I had in email with Vaafoulay K., with their permission, to preserve it for others to see.

Vaafoulay: Can I obtain the +1 standard deviation costs and downtime directly from the DL summary files?

Adam: If you are interested in the mean+1 standard deviation, I suggest looking at another file, the DL_summary_stats.csv. That file provides statistics on the main outputs, including repair cost and repair time. If you take the value in the mean row and add the value in the std row, you'll get the mean + 1 standard deviation result.

Now, you might want to look at some statistics that are not included in that file. For example, it is quite common to assume that the distribution of the results is lognormal and we are interested in the median + 1 log standard deviation. Such results are not provided in the stats file, so you'd have to calculate them using the sample in the DL_summary.csv.  So, you'd open the DL summary file, take the column of interest (e.g., reconstruction/cost) and use an app of your choice (e.g., Excel, Matlab, Python scripts) to calculate the desired statistic based on the available realizations.

13
Hi Asim,

Thank you for your interest in Pelicun and for asking this question on the Forum.

Let me give you a high-level overview first and I am happy to answer any questions on the details later.

As you probably know, Hazus uses multilinear functions for wind damage and loss assessment. Both the damage and the loss functions are defined by a series of values that correspond to damage or loss conditioned on wind speeds increasing in 5 mph increments from 50 mph to 250 mph.

We extracted the above data from Hazus and parsed it for 5076 building archetypes x 5 different terrain roughnesses.

For each archetype-terrain combination:
    - For each damage state:
        - We ran two optimizations to find the best-fitting normal and lognormal CDF to the data. The objective function for the fitting was the sum of squared errors at the discrete points (every 5 mph) from 50 mph to 200 mph. Note that we did not consider wind speeds above 200 mph because we were concerned about the robustness of the data in that domain.
        - The distribution family with the smaller error was chosen from these two results.
        - The error magnitude was saved and later reviewed for the entire database. For the vast majority of the data, the fit is almost perfect. I can provide quantitative details if you are interested.
    - Now that we have the fragility functions as CDFs, we calculated the probability of being in each damage state at each of the 5 mph increments from 50 to 200 mph.
    - We ran an optimization where the unknowns were the loss ratios assigned to the first three damage states. The fourth damage state was always assigned a loss ratio of 1.0 (i.e., total loss). The loss ratio assigned to each wind speed is the expected loss, that is, the sum of the product of each damage state's likelihood and the corresponding loss ratio.
    - This optimization was a bit trickier because we had to add constraints to ensure the loss ratios are monotonically increasing. The objective function was the sum of squared errors between the Hazus losses and our model's losses at each 5 mph increment from 50 mph to 200 mph.
    - The fit was great in most cases, but we found some archetypes where the fragility curves and the loss curves were in such disagreement that their coupling with the above method was only possible with considerable error. We believe the curves we produced for these cases represent a more realistic behavior and consequences than the ones in the Hazus database. Again, I am more than happy to elaborate if you are interested.
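The CDF-fitting step above can be sketched with scipy (illustrative only: the "Hazus-like" exceedance data below is synthetic, generated from a lognormal so the recovered parameters are known, and the parameterization is a plain median/log-std convention, not necessarily the one used in the actual database):

```python
# Sketch of fitting normal and lognormal CDFs to multilinear fragility data
# by least squares, then picking the family with the smaller error.
# The data here is synthetic, not from the Hazus database.
import numpy as np
from scipy import optimize, stats

speeds = np.arange(50, 205, 5)                  # 5 mph increments, 50-200 mph
# Made-up exceedance probabilities for one damage state:
p_data = stats.lognorm.cdf(speeds, s=0.15, scale=120.0)

def sse_normal(theta):
    mu, sig = theta
    return np.sum((stats.norm.cdf(speeds, mu, sig) - p_data) ** 2)

def sse_lognormal(theta):
    med, beta = theta                           # median and log standard deviation
    return np.sum((stats.lognorm.cdf(speeds, beta, scale=med) - p_data) ** 2)

fit_n = optimize.minimize(sse_normal, x0=[120.0, 20.0], method="Nelder-Mead")
fit_ln = optimize.minimize(sse_lognormal, x0=[120.0, 0.2], method="Nelder-Mead")

# Keep the family with the smaller sum of squared errors
family = "normal" if fit_n.fun < fit_ln.fun else "lognormal"
print(family, fit_n.fun, fit_ln.fun)
```

Since the synthetic data comes from a lognormal, the lognormal fit wins here and recovers the generating parameters; on the real Hazus data either family can win, which is why both fits are run and the error is saved for review.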

The fragility and loss data is available in the developer branch of Pelicun:
    - Fragilities: https://github.com/NHERI-SimCenter/pelicun/blob/develop/pelicun/resources/fragility_DB_SimCenter_Hazus_HU.csv
    - Losses: https://github.com/NHERI-SimCenter/pelicun/blob/develop/pelicun/resources/bldg_repair_DB_SimCenter_Hazus_HU.csv

I plan to compile a similar database with the raw Hazus data to facilitate benchmark studies that compare the two approaches.

Let me know if you have further questions.

Adam

14
Damage & Loss (PELICUN) / Re: Generation of Simulated Demands
« on: May 25, 2022, 01:30:20 AM »
Hi Jiajun,

I am writing to let you know that the calibration notebook is on my list of todos and I'll get to it shortly.

Thank you for your patience.

Adam

15
Hi Pooya,

I just wanted to let you know that I've released a new version of Pelicun 3; we are at 3.1.b6 now. It might be a good idea to run the comparison with the new version.

I also updated the FEMA P58 example notebook on DesignSafe with a lot of additional details and explanation: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-3411v5

Adam
