Damage & Loss (PELICUN) / Re: Dakota File - Direct Call from Python and Automatic Generating
« on: February 09, 2022, 06:36:17 AM »
Hi Rezvan,
Thank you for reaching out to us; it is good to hear that you find Pelicun helpful for your work.
Based on the error message, I assume the problem with the shortened file in your first question is that the names of certain attributes under the GeneralInformation part of the file do not follow the standard naming convention we introduced across our tools last summer. I assume the workshop you refer to was held before that date. Since those standard names were introduced, pelicun 2.6 and later versions look for NumberOfStories and PlanArea under GeneralInformation and no longer accept other versions of these attributes. Please take a look at the shortened file and edit it if needed so that it follows these conventions. That should help you get past the error.
Several approaches are available to analyze a large number of buildings:
- If this is a regional analysis, i.e., the buildings are in a geographical context with a location assigned to each, then I suggest using our R2D Tool or the rWHALE backend to run the analysis.
- If this is more of a parametric study on a large set of archetypes, then you can do one of the following:
= As you mentioned, you could prepare a dakota.json file for each building. I know grad students who do this through MATLAB (without using PBE at all) by printing out a text file and then running Pelicun as an application directly from MATLAB. Nevertheless, I agree with you that this approach is far from efficient.
= A better way to handle this would be to import pelicun in a Python script and use it as a library rather than as an application. PBE uses the DL_calculation.py script to run pelicun as an application. If you take a look at that script (it's under tools in the pelicun package), you'll see how pelicun is imported and how the various methods in the library are called. Calling these directly will make your code more efficient, but you'd still need to prepare input files and read output files if you are using pelicun 2.6. Also note that the dakota.json input file is just a dictionary under Python. So, you can prepare a dictionary and save it to a json file using the json package in Python.
= One of the major changes in pelicun 3 is a redesign of how researchers can interact with the library, precisely to support the use case that you have. You do not need to prepare an input file, and you can get the outputs directly as Python objects, so you stay within Python for the entire analysis and make large calculations much more efficient. I will present these features next Friday (Feb 18) during our Live Expert Tips session. If you are interested, I encourage you to register and participate in the event. Here is a link: https://designsafe-ci.zoom.us/meeting/register/tJYpdOGuqTgrHt3FR0yM7dxmCYf6kiEx5Btm
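To illustrate the dictionary-to-JSON idea from the second option above, here is a minimal sketch using only the standard json package. The NumberOfStories and PlanArea attribute names under GeneralInformation are the ones mentioned earlier in this reply; all the numeric values are placeholders, and any other fields your analysis needs would be added to the same dictionary.

```python
import json

# Build the input as a plain Python dictionary instead of hand-writing JSON.
# NumberOfStories and PlanArea under GeneralInformation follow the naming
# convention described above; the values here are illustrative placeholders.
config = {
    "GeneralInformation": {
        "NumberOfStories": 4,   # must use this exact attribute name
        "PlanArea": 10000.0,    # must use this exact attribute name
    }
}

# Save the dictionary as a dakota.json file that can be passed to pelicun.
with open("dakota.json", "w") as f:
    json.dump(config, f, indent=2)
```

In a parametric study you would build this dictionary inside a loop, varying the attributes per archetype, and write one file per building before invoking the calculation.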
Let me know if you have further questions.
Adam
PS. I hope you don't mind if we delete the copy of your question that was posted on the PBE board.