Recent Posts

31
Site Response Analysis (s3hark) / Progress bar
« Last post by amber on May 30, 2022, 12:42:58 AM »
Hello! :)

I tried to test the application, but the progress bar stays at 0% after clicking the analysis button. I'm not sure where I went wrong. A screenshot is attached below. Thank you!

Sincerely,
Amber
32
Damage & Loss (PELICUN) / Re: Coupled damage - Loss models in PELICUN
« Last post by adamzs on May 28, 2022, 02:15:55 AM »
Hi Asim,

Thank you for your interest in Pelicun and for asking this question on the Forum.

Let me give you a high-level overview first and I am happy to answer any questions on the details later.

As you probably know, Hazus uses multilinear functions for wind damage and loss assessment. Both function types are defined by a series of values that correspond to damage or loss conditioned on wind speeds that increase in 5 mph increments from 50 mph to 250 mph.

We extracted the above data from Hazus and parsed it for 5076 building archetypes x 5 different terrain roughnesses.

For each archetype-terrain combination:
    - For each damage state:
        - We ran two optimizations to find the best fitting normal and lognormal CDF to the data. The objective function for the fitting was the sum of squared errors at the discrete points (every 5 mph) from 50 mph to 200 mph. Note that we did not consider wind speeds above 200 mph because we were concerned about the robustness of the data in that domain.
        - The distribution family with the smaller error was chosen from these two results.
        - The error magnitude was saved and later reviewed for the entire database. For the vast majority of the data, the fit is almost perfect. I can provide quantitative details if you are interested.
    - Once we had the fragility functions as CDFs, we calculated the probability of being in each damage state at each of the 5 mph increments from 50 to 200 mph.
    - We ran an optimization where the unknowns were the loss ratios assigned to the first three damage states. The fourth damage state was always assigned a loss ratio of 1.0 (i.e., total loss). The loss ratio assigned to each wind speed is the expected loss, that is, the sum of the product of each damage state's likelihood and the corresponding loss ratio.
    - This optimization was a bit trickier because we had to add constraints to make sure the loss ratios are monotonically increasing. The objective function used the sum of squared errors between the Hazus losses and our model's losses at each 5 mph increment from 50 mph to 200 mph. (A minimal sketch of both fitting steps follows this list.)
    - The fit was great in most cases, but we found some archetypes where the fragility curves and the loss curves were in such disagreement that coupling them with the above method was only possible with considerable error. We believe the curves we produced for these cases represent more realistic behavior and consequences than the ones in the Hazus database. Again, I am more than happy to elaborate if you are interested.
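
To make the two fitting steps above concrete, here is a minimal, self-contained sketch of the same logic. All input values are hypothetical stand-ins for the Hazus data, and the names (p_exceed, P_ds, etc.) are mine, not Pelicun's:

import numpy as np
from scipy import optimize, stats

speeds = np.arange(50, 205, 5)  # 5 mph increments from 50 to 200 mph

# --- Step 1: fit normal and lognormal CDFs to one damage state's data ---

p_exceed = stats.norm.cdf(speeds, 120, 25)  # placeholder exceedance data

def sse_normal(theta):
    mu, sigma = theta
    return np.sum((stats.norm.cdf(speeds, mu, sigma) - p_exceed) ** 2)

def sse_lognormal(theta):
    median, beta = theta  # median and log standard deviation
    return np.sum((stats.norm.cdf(np.log(speeds / median) / beta) - p_exceed) ** 2)

fit_n = optimize.minimize(sse_normal, x0=[120.0, 30.0], method='Nelder-Mead')
fit_ln = optimize.minimize(sse_lognormal, x0=[120.0, 0.3], method='Nelder-Mead')
family = 'normal' if fit_n.fun <= fit_ln.fun else 'lognormal'

# --- Step 2: fit loss ratios for DS1-DS3; DS4 is fixed at 1.0 ---

# in-state probabilities P(DS = i | V) from four fitted fragility curves
medians = [100.0, 130.0, 160.0, 190.0]  # placeholder fragility medians
p_exc = np.array([stats.norm.cdf(np.log(speeds / m) / 0.3) for m in medians])
P_ds = np.vstack([p_exc[:-1] - p_exc[1:], p_exc[-1:]])

target = P_ds.T @ np.array([0.05, 0.20, 0.50, 1.00])  # synthetic Hazus losses

def sse_loss(r):
    # expected loss at each speed: sum of P(DS_i | V) * loss_ratio_i
    model = P_ds.T @ np.append(r, 1.0)
    return np.sum((model - target) ** 2)

cons = [{'type': 'ineq', 'fun': lambda r: r[1] - r[0]},  # monotonically
        {'type': 'ineq', 'fun': lambda r: r[2] - r[1]},  # increasing
        {'type': 'ineq', 'fun': lambda r: 1.0 - r[2]}]   # loss ratios
fit_r = optimize.minimize(sse_loss, x0=[0.1, 0.3, 0.6], method='SLSQP',
                          bounds=[(0.0, 1.0)] * 3, constraints=cons)
print(family, fit_r.x)  # chosen family and fitted loss ratios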

The fragility and loss data is available in the developer branch of Pelicun:
    - Fragilities: https://github.com/NHERI-SimCenter/pelicun/blob/develop/pelicun/resources/fragility_DB_SimCenter_Hazus_HU.csv
    - Losses: https://github.com/NHERI-SimCenter/pelicun/blob/develop/pelicun/resources/bldg_repair_DB_SimCenter_Hazus_HU.csv

I plan to compile a similar database with the raw Hazus data to facilitate benchmark studies that compare the two approaches.

Let me know if you have further questions.

Adam
33
Damage & Loss (PELICUN) / Coupled damage - Loss models in PELICUN
« Last post by asimbash9201 on May 27, 2022, 06:28:03 AM »
I need some clarification about how the damage and loss models are coupled in PELICUN to get a coupled damage and loss model. I assume some ratios are worked out which, when multiplied by the fragility curves and summed, give the expected loss curve. As such, these ratios should be the same for similar building attributes. Can you please provide some explanation or clarification about this?
I have been reading the report on the Lake Charles testbed on DesignSafe, where the same approach has been used.
34
Damage & Loss (PELICUN) / Re: Generation of Simulated Demands
« Last post by adamzs on May 25, 2022, 01:30:20 AM »
Hi Jiajun,

I am writing to let you know that the calibration notebook is on my list of todos and I'll get to it shortly.

Thank you for your patience.

Adam
35
Damage & Loss (PELICUN) / Re: Comparison between the Results of Pelicun and PACT
« Last post by adamzs on May 25, 2022, 01:29:16 AM »
Hi Pooya,

I just wanted to let you know that I've released a new version of Pelicun 3 - we are at 3.1.b6 now. It might be a good idea to run the comparison with the new version.

I also updated the FEMA P58 example notebook on DesignSafe with a lot of additional details and explanation: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-3411v5

Adam
36
Damage & Loss (PELICUN) / Re: Uncertainty in consequence models
« Last post by adamzs on May 25, 2022, 01:26:01 AM »
Hi Andres,

Apologies for the late response.

I decided to make a few enhancements in Pelicun to streamline the process of adding a new distribution type. I've just released v3.1.b6 and also updated the Example notebook on DesignSafe with a lot of additional details and explanation on how Pelicun 3 works. I encourage you to take a look here: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-3411v5

Read on if you are still interested in adding the skew normal distribution.

First, you'll need to have the source code available locally. I also recommend linking this version of pelicun to the active Python installation on your local system to make testing easier. I am not sure how familiar you are with the steps to do this, so let me give you a few tips - please don't hesitate to ask me if you need more information:
- You should get pelicun from the main NHERI-SimCenter account on GitHub. The master branch always provides the latest stable release. Since you are interested in extending the code, you'll need the develop branch; you can find that here: https://github.com/NHERI-SimCenter/pelicun/tree/develop
- I suggest forking the NHERI-SimCenter/pelicun repo and then cloning the develop branch of your own fork to your local hard drive. This will give you the latest version of the code base.
- On your local system, you can store pelicun in any location; make sure Python finds it by adding that location to the PYTHONPATH environment variable or by using an externals.pth file. I am happy to provide more information on either of these if you are not familiar with them.
- Once you set things up properly, you should see 3.1.b6 when you import pelicun and check pelicun.__version__ in a Python interpreter (see the snippet after this list).
- Now you can make your edits, test the resulting code, and, when you are happy with it, I would appreciate it if you committed your contributions to the main repo so that everyone can use this new distribution. To do that, first commit your changes to the develop branch of your fork of pelicun and then open a pull request from your develop branch to the develop branch of the main repo. When I see your pull request, I'll test your version and - if everything looks okay - accept it.
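
As a quick sanity check after the setup, a snippet like this should confirm that Python picks up your local develop copy rather than a previously installed release:

import pelicun

print(pelicun.__version__)  # should print 3.1.b6
print(pelicun.__file__)     # should point to your local clone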

That's it, that's how you can extend pelicun and share your work with the community.

Now, let's see how you would go about adding a new distribution. You'll only need to make changes in the uq module to do this. See the latest version of the script here for reference: https://github.com/NHERI-SimCenter/pelicun/blob/develop/pelicun/uq.py . Note that if the uq module gets updated by the time you are reading this, you can always click on history in the top right and go back to today's version so that the line numbers I give below will point to the right location.

Sampling an N dimensional multivariate distribution in Pelicun follows the logic below (see the generate_sample method starting at line 1465; a generic sketch of this pipeline follows the list):
- Sample an N dimensional uniform distribution using Monte Carlo or Latin Hypercube Sampling
- If there are variables across the N dimensions with non-zero correlation, apply the prescribed correlations assuming a Gaussian copula function. First, we try to use a fast Cholesky transformation; if this fails because the correlation matrix is not positive semidefinite, we use another method based on Singular Value Decomposition to apply correlations that preserve as much as possible from the prescribed correlation matrix.
- Perfect correlation can be handled very efficiently with a special 'anchor' feature, but that is almost surely outside of scope for your edits; let me know if you want to know more about it.
- Finally, we take each marginal and use inverse probability integral transformation to transform the sample from uniform distribution to the desired distribution.
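
Here is a generic sketch of that three-step pipeline using numpy and scipy - this is illustrative code following the logic above, not Pelicun's actual implementation, and the two marginals at the end are arbitrary examples:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample_size, N = 1000, 2
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])  # prescribed correlation matrix

u = rng.uniform(size=(sample_size, N))  # step 1: uniform sample (plain MCS)
z = stats.norm.ppf(u)                   # map to standard normal space
L = np.linalg.cholesky(corr)            # step 2: Gaussian copula via Cholesky
u_corr = stats.norm.cdf(z @ L.T)        # correlated uniform marginals
# step 3: inverse probability integral transform to the target marginals
x0 = stats.lognorm.ppf(u_corr[:, 0], s=0.4, scale=1.0)  # lognormal marginal
x1 = stats.norm.ppf(u_corr[:, 1], loc=10.0, scale=2.0)  # normal marginal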

Adding a new distribution only affects the last step of the above (as long as you don't also want some special, non-Gaussian copula function included - let me know if you do). Below is a list of locations where you'd need to make edits; a hypothetical sketch of the skew normal CDF and inverse transform logic follows the list:
- Add the name of the new distribution and a description of its parameters in the documentation at the top of the RandomVariable class - lines 791-827; add more info at least under the 'distribution' and 'theta' parameters, but potentially also to others, if needed. Currently, Pelicun supports up to three parameters for distributions. That should be sufficient for the skew normal, but let me know if you need more than three.
- If the new distribution requires some special checks, you can add those in the init method (line 829). See, for example, how the multinomial distribution is checked to ensure its probabilities sum to at most 1.0.
- The cdf() method (line 994) returns the Cumulative Distribution Function ordinates for a given set of x values. You'll need to add your distribution here in an elif clause. Take a look at how the existing distributions are handled and try to mimic the same robustness:
   - You'll see that we get the parameters of the distribution in the theta vector and the truncation limits in another vector. Truncation limits can be undefined, which should lead to a distribution that is unbounded at one or both ends rather than an error.
   - As for the parameters of the distribution, they are typically mandatory, but in some cases we can have rules set up to replace missing values - e.g., see how the limits of the uniform distribution are optional and replaced with infinite values if missing.
   - Make sure the input values are valid inputs to the CDF. For example, 0 and negative numbers are not part of the input domain for the lognormal CDF, so we need to make sure zeros are replaced by the smallest positive number the computer can handle (line 1050). This also shows that, in general, I try to make Pelicun work and handle issues gracefully rather than throw error messages whenever something out of the ordinary happens. So, someone feeding a zero to a lognormal CDF will get a probability of 0. I believe in most cases this behavior is preferred over an error message that would terminate execution.
   - I suggest starting with the implementation of the non-truncated version of the distribution. After testing and making sure that it works, you can expand it by adding the truncation option. Let me know if you need help with this.
- The next method to edit is the inverse_transform() starting at line 1070. Here, you'll need to add an elif clause; I'd add it after line 1144 because the distribution you plan to add belongs to the normal family. The task is similar to the cdf method, but you are implementing an inverse cdf function here:
   - Note that the sample_size argument of the method is only used in special cases. If you implement a skew normal distribution, you should expect to have an array of values (which come from the [0,1] uniform distribution following the sampling logic I explained earlier) that your script will transform to the target distribution.
   - Pay attention to handling missing inputs and truncation limits - the advice I gave for the cdf() method applies here too.
- Finally, you'll need to edit the scale_distribution() method at the top of the uq module (line 70). This method is used by various models in Pelicun to scale input parameters defined in one unit to the SI unit that is used internally. Scaling distributions so far was straightforward as you'll see in the implementation - I hope the skew normal will not be an exception. Note that I define the normal distribution with the mean and coefficient of variation so that the second parameter is unitless and does not need to be scaled. It might be worth pulling similar tricks with the skew normal.
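
For illustration, here is a hypothetical sketch of the skew normal logic written as standalone functions rather than the actual elif branches - the function names, the theta = [loc, scale, shape] convention, and the NaN-coded truncation limits are my assumptions following the patterns described above:

import numpy as np
from scipy import stats

def skewnorm_cdf(values, theta, truncation_limits=(np.nan, np.nan)):
    # theta = [loc, scale, shape]; NaN truncation limits mean unbounded
    loc, scale, shape = theta
    a, b = truncation_limits
    a = -np.inf if np.isnan(a) else a
    b = np.inf if np.isnan(b) else b
    p_a = stats.skewnorm.cdf(a, shape, loc=loc, scale=scale)
    p_b = stats.skewnorm.cdf(b, shape, loc=loc, scale=scale)
    p = stats.skewnorm.cdf(values, shape, loc=loc, scale=scale)
    # renormalize for truncation and keep the result in [0, 1]
    return np.clip((p - p_a) / (p_b - p_a), 0.0, 1.0)

def skewnorm_inverse_transform(uniform_sample, theta,
                               truncation_limits=(np.nan, np.nan)):
    # maps a [0, 1] uniform sample to the (truncated) skew normal
    loc, scale, shape = theta
    a, b = truncation_limits
    a = -np.inf if np.isnan(a) else a
    b = np.inf if np.isnan(b) else b
    p_a = stats.skewnorm.cdf(a, shape, loc=loc, scale=scale)
    p_b = stats.skewnorm.cdf(b, shape, loc=loc, scale=scale)
    q = p_a + uniform_sample * (p_b - p_a)
    return stats.skewnorm.ppf(q, shape, loc=loc, scale=scale)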

That's it. At this point you have the skew normal distribution implemented and you can sample it by creating a RandomVariable object, adding it to a RandomVariableRegistry, and then calling the generate_sample() method of that registry. This is done all over the model module: https://github.com/NHERI-SimCenter/pelicun/blob/develop/pelicun/model.py . For example, take a look at the generate_cmp_sample() method starting at line 1165. The _create_cmp_RVs() method creates the RandomVariable objects and the registry (line 1139), and then at line 1180 we sample the registry.

Notice that RandomVariable objects are created automatically by feeding the family and parameters of the distribution to the uq module. Using the previous cmp example, take a look at line 1150 in the model module. As long as you feed in the correct name for the family and valid parameters, your new distribution should work immediately without any changes to model.py.

The only exception to this is the LossModel - currently, only normal and lognormal random variables are supported for probabilistic loss calculation. Let me know if you want to include the skew normal there and I can help you set that up.

I hope you'll find the above helpful. Please let me know if you're working on this and don't hesitate to ask questions here if you run into any issues.
37
Uncertainty Quantification (quoFEM) / Re: QUOFEM offline?
« Last post by fmk on May 23, 2022, 06:33:46 PM »
Are you trying to launch the DCV application at DesignSafe, or are you running the application locally?

1) If the former, you need to be part of the DCV allocation at DesignSafe. To gain access, submit a ticket requesting it: https://www.designsafe-ci.org/help/new-ticket/

Sorry about this - it is something we just became aware of, and we are updating the DCV tool page with this info.

2) If this message is coming from your desktop application, does the job run locally, and is it only failing when you hit the Run at DesignSafe button? If it is just DesignSafe, can you open the application's preferences and tell me what the remote app ID is?

thanks
frank
38
Uncertainty Quantification (quoFEM) / QUOFEM offline?
« Last post by STOKLJOS on May 23, 2022, 05:38:58 PM »
Hello,

I was trying to run the quoFEM desktop application this weekend, but every time I submitted a job, it said the status was blocked. Is the application currently offline? If so, when is it expected to come back online?

Thanks,

Josh S.
39
Damage & Loss (PELICUN) / Re: Comparison between the Results of Pelicun and PACT
« Last post by adamzs on May 20, 2022, 06:40:39 AM »
Hi Pooya,

Thank you for sharing those results. Such comparisons are always very helpful in verifying newly developed code.

In this particular case, I can offer two ideas on where the difference might come from.

First, make sure the demand sample is identical. If PACT assumes an increase in the variance of the lognormally distributed EDPs, make sure that assumption is made in Pelicun as well. I expect that you've checked this.

The second option is a bit more complicated. There is an important step in the loss estimation of FEMA P-58 that does not receive a lot of attention in the official volumes: the calculation of the quantity of damage for modeling the economies of scale.

There are multiple ways of performing this task, and the different approaches can lead to substantially different results. I have been collaborating with researchers at McMaster University and IUSS Pavia to investigate these issues and their impact on FEMA P-58 assessments. We are going to present a conference paper about it at the upcoming 12NCEE.

I'll give you a brief overview of the issue and suggest a solution below.

When it comes to the quantity of damaged components, you need to decide whether you want to aggregate damages across all floors of the building and whether you want to aggregate damages across all damage states. These choices lead to four options:
- all floors, all damage states
- all floors, but only the given damage state
- only the given floor, all damage states
- only the given floor and only the given damage state

The first option can lead to a much larger quantity of damage than the last option, especially for medium and high rise buildings. This often yields a substantial reduction in repair consequences and their variance - because the variance of consequence functions in FEMA P-58 depends on their median value.
PACT uses the first option: the all floors, all damage states approach.
Pelicun 3 allows you to choose the approach you'd like to use, but the default setting is the second option: all floors, individual damage state. When you initialize an Assessment, you can provide a dictionary of settings. One of those settings is "EconomiesOfScale". I have already written up a description of it in an updated version of the example from the first Live Expert Tips that I'll release shortly. Here is what I write there:

EconomiesOfScale: Controls how the damages are aggregated when the economies of scale are calculated. Expects the following dictionary: {'AcrossFloors': bool, 'AcrossDamageStates': bool} where bool is either True or False. Default: {'AcrossFloors': True, 'AcrossDamageStates': False}
- 'AcrossFloors': if True, aggregates damages across floors to get the quantity of damage. If False, it uses the damaged quantities on each floor and evaluates economies of scale independently for each floor.
- 'AcrossDamageStates': if True, aggregates damages across damage states to get the quantity of damage. If False, it uses the damaged quantities in each damage state and evaluates economies of scale independently for each damage state.

For example, initializing the assessment like this should reproduce PACT's behavior:

from pelicun.assessment import Assessment

PAL = Assessment({
    "PrintLog": True,
    "Seed": 415,
    # aggregate damage across both floors and damage states, as PACT does
    "EconomiesOfScale": {"AcrossFloors": True, "AcrossDamageStates": True}
})

I suggest running the calculation with the above settings and comparing the results to PACT again. If the difference persists, you might have found a bug.

Please let me know how it goes.

Thanks,
Adam
40
Damage & Loss (PELICUN) / Comparison between the Results of Pelicun and PACT
« Last post by rezvan on May 19, 2022, 12:39:07 PM »
Hello,

I just wanted to share this post to show a comparison between damage and loss assessment results obtained with Pelicun and with PACT for a 6-story building.
The results seem to be close; however, the uncertainty in the results from the PACT analysis is considerably larger than in the Pelicun results.

Thank you,
Pooya