Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - rezvan

Pages: 1 [2]
Damage & Loss (PELICUN) / Re: EDP keys
« on: March 18, 2022, 01:14:22 PM »
I was able to get PFV to work without any problem in Pelicun ver.3.
I just do not know which units the Pelicun module supports for PFV. I tried 'mps', which I assume stands for meters per second, and it worked well.

However, in FEMA P-58 only a few components (certain cabinets and bookcases) are sensitive to velocity, and their repair costs are not even defined in the FEMA database.
So I personally believe that unless the user has some special velocity-sensitive component, this EDP does not have a significant effect on the results.
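For reference, here is a minimal pandas sketch of a demand sample that includes PFV with the 'mps' unit I mentioned. The (type, location, direction) column layout and the unit strings are my assumptions based on the Pelicun 3 examples, not a documented spec:

```python
import pandas as pd

# Sketch of a demand sample with a 'Units' row, as Pelicun 3 examples use.
# Column tuples are (demand type, location, direction); 'mps' for PFV is
# the unit string that worked in my test -- treat both as assumptions.
units = pd.Series({('PFA', '1', '1'): 'g',
                   ('PFV', '1', '1'): 'mps',   # meters per second
                   ('PID', '1', '1'): 'rad'},
                  name='Units')
sample = pd.DataFrame({('PFA', '1', '1'): [0.25, 0.31],
                       ('PFV', '1', '1'): [0.60, 0.85],
                       ('PID', '1', '1'): [0.004, 0.006]})
demand = pd.concat([units.to_frame().T, sample])
print(demand)
```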

Pooya Rezvan

Damage & Loss (PELICUN) / bldg_repair_DB_FEMA_P58_2nd: DS-LongLeadTime
« on: March 18, 2022, 01:36:50 AM »
Dear Adam,

I was wondering if you could clarify the usage of "DS-LongLeadTime" in the csv file containing the component repair information.

Thank you so much,

Pooya Rezvan

Damage & Loss (PELICUN) / Pelicun 3.1.b2 - bug
« on: March 17, 2022, 10:40:34 PM »
Hello Adam,
Thank you so much for the very helpful version of Pelicun.
I just found a problem related to the stripe number.
In the example that you provided in the Jupyter notebook, I was able to run the analysis for all stripe numbers except '1'.
If you change the stripe number to '1', the following error appears while calculating the damage:

Thank you,
Pooya Rezvan

18:34:15 Calculating damages...
         Applying task from prescribed damage process...
Traceback (most recent call last):

  File "C:\...\anaconda3\lib\site-packages\pandas\core\indexes\", line 3621, in get_loc
    return self._engine.get_loc(casted_key)

  File "pandas\_libs\index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc

  File "pandas\_libs\index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc

  File "pandas\_libs\hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item

  File "pandas\_libs\hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item

KeyError: '1'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):

  File "C:\Users\pooya\Desktop\Temp\Python Code\temp2\", line 451, in <module>
    PAL.damage.calculate(sample_size, dmg_process=dmg_process)

  File "C:\Users\pooya\anaconda3\lib\site-packages\pelicun\", line 1991, in calculate
    qnt_sample = self._perform_dmg_task(task, qnt_sample)

  File "C:\Users\pooya\anaconda3\lib\site-packages\pelicun\", line 1775, in _perform_dmg_task
    source_ds_vals = source_cmp_df.groupby(

  File "C:\Users\pooya\anaconda3\lib\site-packages\pandas\core\", line 3505, in __getitem__
    indexer = self.columns.get_loc(key)

  File "C:\Users\pooya\anaconda3\lib\site-packages\pandas\core\indexes\", line 3623, in get_loc
    raise KeyError(key) from err

KeyError: '1'
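For what it is worth, the KeyError above is the generic pandas failure for a label lookup of the wrong type. A minimal sketch (my guess at the error class, not pelicun's actual code) showing how the string key '1' fails against integer column labels:

```python
import pandas as pd

# Minimal reproduction of this class of KeyError: pandas raises
# KeyError('1') when a column is looked up with the string '1' but the
# labels are stored as integers (or vice versa).
df = pd.DataFrame({1: [0.1, 0.2], 2: [0.3, 0.4]})  # integer column labels

try:
    df['1']  # string key against integer labels -> KeyError: '1'
except KeyError as err:
    caught = err

print(repr(caught))
```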

Dear Adam,

I appreciate your detailed answer and proposed solutions.
I will try all of the points you mentioned, and I am sure I will be able to handle it with your guidance.
By the way, I am extremely grateful for the invitation; without any doubt I will participate.


Damage & Loss (PELICUN) / Dakota File - Direct Call
« on: February 09, 2022, 05:31:38 AM »
First, thank you so much for the extremely helpful Pelicun module.
I have a few questions, and I would be very thankful if you could help with them.

1- As mentioned in workshop #3, when the Pelicun module is used directly, it is possible to delete unnecessary parts of the dakota.json file. The 6_story_BRBF_pelicun.json file prepared for the workshop was an example of such a shortened file. I tried to run Pelicun using this file and the response.csv file prepared for the workshop, but it seems some information is missing from the dakota file. I attached the log file and the message from the command prompt; I was wondering if you could share a shortened but correct json file.

2- For performing the analysis on a large number of buildings, how is it possible to generate the dakota.json file automatically in Python? For example, reading the required information from a csv file and generating the dakota file for each building automatically. As I understand it, the information can either be changed manually in the file or the dakota file can be obtained using the PBE software, and neither is practical for a large number of cases.
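In case it clarifies question 2, here is a sketch of what I have in mind: generating one input file per building from a CSV. The CSV column names ('building_id', 'plan_area', 'stories') and the 'GeneralInformation' field path are hypothetical placeholders, not the actual dakota.json schema:

```python
import csv
import json
from pathlib import Path

# Sketch: batch-generate one DL input file per building from a CSV.
# The CSV columns and JSON field names are hypothetical placeholders.
def generate_configs(template_path, csv_path, out_dir):
    template = json.loads(Path(template_path).read_text())
    out_dir = Path(out_dir)
    out_dir.mkdir(exist_ok=True)
    with open(csv_path, newline='') as f:
        for row in csv.DictReader(f):
            cfg = json.loads(json.dumps(template))  # deep copy of template
            cfg['GeneralInformation']['PlanArea'] = float(row['plan_area'])
            cfg['GeneralInformation']['NumberOfStories'] = int(row['stories'])
            out_file = out_dir / f"{row['building_id']}_dakota.json"
            out_file.write_text(json.dumps(cfg, indent=2))
```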

Thank you so much, and again I appreciate your valuable work on this helpful and informative module.

P.S. Here is the message after running the script from the command prompt:

C:\Users\pooya\Desktop\Temp\pelicun>python C:\PBE\applications\performDL\pelicun\ --filenameDL 6_story_BRBF_pelicun.json  --filenameEDP response.csv
2022-02-09T05:16:02Z First line of DL_calculation
2022-02-09T05:16:03Z Initializing pelicun calculation...
Traceback (most recent call last):
  File "C:\PBE\applications\performDL\pelicun\", line 316, in <module>
  File "C:\PBE\applications\performDL\pelicun\", line 300, in main
  File "C:\PBE\applications\performDL\pelicun\", line 257, in run_pelicun
    A.read_inputs(DL_input_path, EDP_files[s_i], verbose=False) # make DL inputs into array of all BIM files
  File "C:\PBE\applications\performDL\pelicun\pelicunPBE\", line 767, in read_inputs
    super(FEMA_P58_Assessment, self).read_inputs(path_DL_input,
  File "C:\PBE\applications\performDL\pelicun\pelicunPBE\", line 165, in read_inputs
    self._AIM_in = read_SimCenter_DL_input(
  File "C:\PBE\applications\performDL\pelicun\pelicunPBE\", line 341, in read_SimCenter_DL_input
    raise ValueError(
ValueError: PlanArea has to be specified in the DL input file to estimate injuries decision variable(s).
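A patch along these lines might work around the error by adding the missing PlanArea to the DL input before rerunning; the 'GeneralInformation' parent key is my assumption about the input file layout, so please correct it if the schema differs:

```python
import json
from pathlib import Path

# Add a PlanArea entry to a DL input file (in the area units the rest
# of the file uses, e.g. ft2 or m2). 'GeneralInformation' as the parent
# key is an assumption about the file layout.
def add_plan_area(path, plan_area):
    cfg = json.loads(Path(path).read_text())
    cfg.setdefault('GeneralInformation', {})['PlanArea'] = plan_area
    Path(path).write_text(json.dumps(cfg, indent=2))
```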
