Author Topic: rWhale - simulation using pre-selected time history records  (Read 102 times)

ikavvada

  • Newbie
  • Posts: 2
Is it possible to run the regional earthquake simulation workflow using pre-selected, scaled time history records as inputs (one record for each site)? Does the workflow use both horizontal components of the time history records, or just a single horizontal component? What is the exact format of the time history record inputs that the workflow can use? Thank you!

adamzs

  • Newbie
  • Posts: 14
Re: rWhale - simulation using pre-selected time history records
« Reply #1 on: June 12, 2020, 05:18:26 PM »
Yes. You can use one or even multiple pre-selected records at each location in the workflow.

The ground motions can be described by acceleration time-histories in 1, 2, or even 3 orthogonal directions.

You can either provide a set of ground motions for each asset's location, or you can set up a pre-defined grid and provide ground motions only at the grid nodes. In this second case, the ground motions at each asset's location are sampled from the nearby grid points. This is typically useful and can save a lot of space when you have a large number of assets in close proximity, so the hazard for several assets is characterized by very similar ground motions.
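The grid-based approach described above assigns each asset the ground motions of a nearby grid node. A minimal sketch of that nearest-neighbor idea (this is an illustration, not rWHALE's actual implementation; the function name and coordinate convention are assumptions):

```python
import math

def nearest_grid_node(asset_xy, grid_nodes):
    """Return the index of the grid node closest to the asset location.

    asset_xy: (x, y) coordinates of the asset
    grid_nodes: list of (x, y) coordinates of the ground motion grid nodes
    """
    return min(
        range(len(grid_nodes)),
        key=lambda i: math.dist(asset_xy, grid_nodes[i]),
    )

# Example: three grid nodes, one asset near the origin
grid = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
asset = (2.0, 1.0)
node = nearest_grid_node(asset, grid)  # -> 0, the node at (0.0, 0.0)
```

Every asset that falls near the same node reuses that node's set of records, which is where the space savings come from.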

When using ground motion records, it is important to pay attention to the capabilities of the simulation model that the time histories will be fed into. If you plan to use a 2D structural model, for example, then you will not be able to take advantage of more than a single horizontal and perhaps a vertical component for the ground motion.
What kind of response estimation method do you plan to use? A related question: how detailed is the information you have about the buildings?

ikavvada

  • Newbie
  • Posts: 2
Re: rWhale - simulation using pre-selected time history records
« Reply #2 on: June 16, 2020, 07:52:20 PM »
Thank you for your response.

I have a pre-defined grid and a pre-defined set of acceleration time history records for each of the grid nodes (in the PEER database format). I plan to use the MDOF non-linear shear building model described in the rWHALE documentation under the assumption that the buildings have the same properties in the X and Y directions. The information I have for the buildings is: structural type, year of construction, height, number of stories, occupancy, floor area, and quantities of structural and non-structural components. Could I use both horizontal components of the ground motion, or should I only use a single horizontal component?
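For reference, a PEER-database acceleration record can be read with a short helper like the sketch below. It assumes the common `.AT2` layout: four header lines, with NPTS and DT on the fourth line, followed by whitespace-separated accelerations in g (the function name is hypothetical, and real files may deviate from this layout):

```python
import re

def read_peer_record(lines):
    """Parse a PEER NGA-style .AT2 acceleration record (hedged sketch).

    Assumes four header lines, with NPTS and DT on the fourth line,
    followed by whitespace-separated acceleration values in g.
    """
    header = lines[3]
    npts = int(re.search(r"NPTS\s*=\s*(\d+)", header).group(1))
    dt = float(re.search(r"DT\s*=\s*([0-9.Ee+-]+)", header).group(1))
    accel = [float(v) for line in lines[4:] for v in line.split()]
    assert len(accel) == npts, "record length does not match NPTS"
    return dt, accel

# Tiny synthetic record for illustration
sample = [
    "PEER NGA STRONG MOTION DATABASE RECORD",
    "Some Event, some station, some component",
    "ACCELERATION TIME SERIES IN UNITS OF G",
    "NPTS=    4, DT=   .0050 SEC",
    " .001  -.002   .003  -.004",
]
dt, acc = read_peer_record(sample)  # dt == 0.005, four samples in g
```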

Also, I am trying to create the FEMA P-58 structural and non-structural quantity input files to be used in the workflow. For each building, I have the type, location, and quantity of each building component. Where can I find the exact format in which that information should be provided so it can be read by the workflow? Could you please share an example input file with me? Thank you!

adamzs

  • Newbie
  • Posts: 14
Re: rWhale - simulation using pre-selected time history records
« Reply #3 on: June 24, 2020, 10:12:47 PM »
Thank you for joining the PBE office hours today and providing more information about the calculations you would like to run.

Let me confirm a few things and ask a few questions:

- The information you have about the buildings is sufficient to generate the structural model with the MDOF-LU tool.

- The MDOF-LU tool generates a 3D structural model with identical parameters in the two horizontal directions. Hence, you can use two horizontal ground motion components with that tool, and you will get structural response in two directions as a result.

- Please tell me more about the ground motion time histories you plan to use. You mentioned that you have about 150K buildings and you wrote in your previous response that you have a pre-defined grid.
    = Does your grid have a separate node for every building? I assume it does not.
    = Do you have your own logic to assign ground motions from grid nodes to building locations, or are you planning to use our nearest-neighbor approach?
    = Do you plan to run multiple simulations for each building? If yes, how many?
    = How many unique ground motion records do you have in total considering all grid nodes?

- I am also interested in a bit more information about the loss assessment:
    = Are you planning to use the default FEMA P58 fragility groups (i.e., component types) and damage and loss data, or are you using an edited/extended version?
    = My understanding from our discussion is that you have a Python script that creates the performance model (i.e., type, location, and quantity of each component) given some metadata about the building. If that is the case, you will be able to feed that script to the workflow, and we will create the performance model on the fly for each building as we run the calculation for that building.
    = The input files for pelicun are the same in rWHALE and in PBE. I am going to upload the examples from the PBE webinars to the online documentation in the coming days. I'll add a message here with a link to the input file examples when those pages are available.
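Such a user-supplied script might look like the minimal sketch below. Everything here is hypothetical: the function name, the metadata fields, the fragility group ID, and the area-based quantity rule are placeholders, not actual FEMA P-58 data or the interface the workflow expects:

```python
def create_performance_model(building):
    """Map building metadata to FEMA P-58 style component quantities.

    Hypothetical sketch: the fragility group ID and the quantity rule
    below are placeholders, not actual FEMA P-58 data.
    """
    stories = building["stories"]
    area = building["area_per_story"]  # e.g., in ft2
    components = []
    for story in range(1, stories + 1):
        components.append({
            "fragility_group": "B1041.001",       # placeholder component type
            "location": story,                    # story where the units are located
            "quantity": round(area / 1000.0, 2),  # simple area-based rule
        })
    return components

# Example: a 3-story building with 5000 ft2 per story
model = create_performance_model({"stories": 3, "area_per_story": 5000.0})
# -> one performance group per story, quantity 5.0 each
```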

- Finally, please explain the output information that you would like to have available at the end of the regional assessment.
Since you are interested in using the more detailed FEMA P-58 approach, I expect you to have several fragility groups and possibly dozens of performance groups in each building. I suggest considering the amount of data generated if you keep all samples, and thinking about what is necessary to keep from each individual building's calculation. By default, rWHALE stores the data from DL_SUMMARY.csv. We can change that if you are interested in keeping more information.
One way to estimate the data size is to take the product of the number of buildings you have, say 150K, and the number of loss samples you would like to run, say 10,000. That is 1.5 billion pieces of data even if you only keep a single value for each realization. We have some measures already implemented to keep the file size at bay. I suggest storing the data as 16-bit floating point numbers (instead of the default 64-bit numbers) in a compressed HDF5 format. Even then, keeping a single data point per realization will generate about 1-2 GB of data.
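The storage arithmetic above can be reproduced in a few lines (plain arithmetic only; the 1-2 GB figure additionally assumes the HDF5 compression mentioned above):

```python
# Back-of-the-envelope storage estimate for the regional assessment outputs.
buildings = 150_000
samples = 10_000
values = buildings * samples   # data points if one value is kept per realization

bytes_f64 = values * 8         # 64-bit floats (the default)
bytes_f16 = values * 2         # 16-bit floats

print(values)            # 1500000000 data points
print(bytes_f64 / 1e9)   # 12.0 GB uncompressed at 64-bit
print(bytes_f16 / 1e9)   # 3.0 GB uncompressed at 16-bit; compression reduces this further
```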
The above example shows that storing hundreds of data points for each realization for each building is probably not feasible. Considering the above, please let us know what outputs you are interested in.