State of the art of seismic risk and loss assessment in structures

Earthquakes are among the natural disasters with the most fatal consequences for human safety. However, the earthquake itself is rarely the direct cause of the losses suffered by humans, whether material losses or loss of life. The dominant factor in human safety is the performance of infrastructure such as buildings, bridges, and houses. An in-depth analysis of the risks that infrastructure may experience in a natural disaster is therefore needed. Seismic hazard in the Southeast Asia region is variable, ranging from the high hazard associated with the Indonesian and Philippine archipelagos to the moderate and low seismicity of the large stable region of the Sunda Shelf. This paper describes the development of seismic risk and loss assessment of infrastructure, based on a literature review of the most recent research on seismic risk and loss assessment around the world. More than fifteen studies are reviewed and discussed to build a thorough picture of seismic risk and the assessment of losses due to seismic disasters.


Introduction
Earthquakes are known as one of the most dangerous natural disasters in history. Many casualties and losses have been caused by earthquakes, and for that reason research and studies continue to be conducted; the topic remains an active one. Along with technological advances, research on seismicity has also shifted from conventional to current methods. Much progress has been made to date, reflected in reduced casualties and losses.
Engineers often say that earthquakes themselves are not dangerous; the problem lies in structures that are not well designed and prepared. Outdated provisions also cause structural failures, since the codes in many countries have not been updated to reflect the most recent research findings and technological developments.
Fajfar [1], in his paper, discussed the development of seismic provisions and possible further developments. He argues that the essential part of seismic design and evaluation is the analysis of structures. This began about a century ago, when early seismic regulations required 10% of the weight of the structure to be applied as a static lateral force. That rule was practiced for a long time in seismic codes and provisions. Eventually, with improvements in knowledge and technology, more advanced analysis strategies and methods were developed, and dynamic and nonlinear analyses were adopted. From this point forward, probabilistic approaches have increasingly been adopted as a preference [1]. More comprehensive evaluations could also be conducted, for example of economic efficiency and construction effectiveness. The behaviour of structures during an earthquake can be illustrated by the case of the Popayán Earthquake in Colombia in 1983. The earthquake had a magnitude of 5.5 and an approximate depth of 9 km; it lasted about 16 seconds, and the hypocentre was located about 5 km from Popayán. Earthquake research has become increasingly sophisticated, and as a result so have the codes and regulations. One cause of this is the ambition to capture everything in formulas, even where nature cannot be so captured, and regardless of whether the investigations rest on inaccurate assumptions. These complexities can lead designers astray, because they introduce hesitation and doubt; the result can be the failure of the structure [2].

Causes of Seismic Failures
Earthquakes have occurred for as long as the world has existed. An earthquake is the release of energy along faults in the earth's crust. The shaking itself is not what is deadly; the deadly consequences are the debris and the events that follow the earthquake. Humans can shelter from thunderstorms, but they cannot shield themselves from earthquakes, and so everything closely related to earthquakes, including seismology, has a long history [3][4]. Natural disasters like earthquakes cannot be prevented or predicted. Fortunately, by developing analytical and evaluation methods, structural damage can be reduced, for example through microzonation studies, appropriate construction methods, well-designed infrastructure, and earthquake-resistant design. Examples of earthquake effects are ground shaking, structural damage, failure of retaining structures, and secondary hazards such as fire and liquefaction. The magnitude of the effects felt by humans depends on several factors, such as the earthquake source, the travel path, the type of structure and its surroundings, and the readiness of the community at the scene. Factors that affect the intensity of ground motion include topography, groundwater, and surface hydrology [3].
In seismic hazard microzonation analysis, the methodologies most often used are probabilistic seismic hazard assessment (PSHA) and deterministic seismic hazard assessment (DSHA) [4]. Probabilistic seismic hazard analysis has been widely used throughout the world, although it has not been objectively validated. For more than 50 years this method has been used by many parties, such as government and industry, to save lives and reduce material losses. Typical applications of the method include setting safety criteria for nuclear power plants, making official decisions on national hazard maps, developing building code requirements, and determining earthquake insurance rates. However, it should be noted that no method can calculate with certainty the seismic hazard that will occur. Even where the hazard has been properly analysed by experienced experts, many earthquakes still cause huge losses of life and property [5].
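The core PSHA calculation described above can be condensed into a short numerical sketch: the annual rate of exceeding a ground-motion level is obtained by integrating a magnitude recurrence model against the exceedance probability given by a ground-motion model. All numerical values below (activity rate, Gutenberg-Richter parameters, the toy ground-motion equation and its dispersion) are illustrative assumptions, not values from any cited study.

```python
import numpy as np
from scipy.stats import norm

# Single-source PSHA sketch. Annual rate of exceeding a PGA level a*:
#   lambda(a*) = nu * integral over M of f(M) * P[PGA > a* | M] dM
nu = 0.05                         # annual rate of events with M >= m_min (assumed)
m_min, m_max, b = 5.0, 8.0, 1.0   # truncated Gutenberg-Richter parameters (assumed)
beta = b * np.log(10.0)

def f_mag(m):
    """Truncated Gutenberg-Richter magnitude PDF."""
    c = beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))
    return np.where((m >= m_min) & (m <= m_max), c, 0.0)

def p_exceed(a_star, m, sigma=0.6):
    """Toy ground-motion model: ln PGA[g] = -8.0 + 1.0*M, lognormal scatter (assumed)."""
    mean_ln_a = -8.0 + 1.0 * m
    return 1.0 - norm.cdf(np.log(a_star), loc=mean_ln_a, scale=sigma)

def annual_exceedance_rate(a_star, n=2000):
    m = np.linspace(m_min, m_max, n)
    y = f_mag(m) * p_exceed(a_star, m)
    return nu * float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(m)))  # trapezoid rule

for a in (0.1, 0.2, 0.4):
    lam = annual_exceedance_rate(a)
    print(f"PGA > {a:.1f} g: rate = {lam:.2e}/yr, return period = {1.0/lam:,.0f} yr")
```

A real analysis would additionally integrate over source-to-site distance and combine several sources and ground-motion models, but the structure of the calculation is the same.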
Various studies have been carried out to find the causes of seismic collapse in structures. In practice, structures can be designed and analyzed under the concept of uniform hazard, but in reality they may not experience the same collapse. Different structures designed and analyzed for the same peak ground acceleration (PGA) may face different risks of damage, particularly because of differences in the shape of the hazard curve and the uncertainty in structural capacity. The risk also depends, of course, on the condition of the structure and of the soil around the building [6].
Yon et al. [7] stated in their paper that damage caused by moderate and large earthquakes continues to occur. They found that the structural failures that led to loss of life and property were caused by poor structural design, for example non-compliance with the strong-column-weak-beam rule, short columns, imperfect supports, and wall movements. In addition, poor material quality, poor workmanship, a lack of technical services, and construction with insufficient detailing of the structural elements were other reasons for the damage. For masonry buildings, design errors such as heavy earthen roofs, incorrect detailing of wall-to-wall and wall-to-roof joints, the absence of tie beams, and large openings also caused damage. Construction using poor-quality local materials following traditional rules was another reason for the failure of these buildings [7].
Many researchers investigate other causes of seismic failures of buildings, for example the research conducted by Yon et al. in 2017. The researchers considered that Turkey has a high risk of earthquake disasters and evaluated building failures in Istanbul. The city of Istanbul is well known as an earthquake-prone location and has suffered a lot of damage from the large earthquakes that hit it. Some places, such as the Fatih and Eminonu districts which are part of Old Istanbul, are commercial centers and historical places with a large population. The researchers analyzed damage data from past earthquakes together with soil amplification in the region, as well as the history of liquefaction and the risk of slope instability. For this purpose, a seismic microzonation map was prepared using a geographic information system. There is a relationship between soil behavior and the damage caused by previous earthquakes: structures in the area were damaged in accordance with the microzonation map. Slope stability, by contrast, had no impact on the level of damage, since it is not a high risk in the area. It was determined, however, that soil amplification and partial liquefaction contributed to the destruction of historical artifacts and structures [8].
The earthquake itself, of course, plays a role in structural failure. Many studies have also been carried out for other structures such as bridges. Masrilayanti et al. [9] investigated the behavior of a cable-stayed bridge superstructure under dynamic earthquake loads in the form of time histories with uniform conditions at each support. A long-span bridge was tested to obtain the different reactions to earthquake loads under uniform soil conditions. Three types of soil were used to create the response spectra, namely hard, medium, and soft soil. The results show that the reaction of the bridge superstructure varies depending on the type of soil used [9].
To prioritize seismic risk mitigation at a large scale, methodologies with coarse input requirements are needed so that the seismic risk associated with the portfolio of interest can be assessed in a timely and conventional manner. Given the sheer volume of information and computational effort required, it is generally only feasible to evaluate seismic vulnerability at the regional level using mechanics-based methods for a portion of the portfolio, selected according to prioritization criteria. In the literature and in some guidelines, simple indices have been proposed as a convenient way to assess seismic risk, primarily by comparing the code requirements at the time of design with current seismic demands. In essence, these indices attempt to define a relative seismic risk measure that can be used for a rapid ranking to identify the part of the portfolio that merits further study. The advantage of this risk metric is that it does not require complex data; it only requires data that are easy to obtain, for example the year in which the structure was designed and its location. This makes the method easy to apply in the risk analysis industry. In addition, although hazard and vulnerability are traditionally the quantities taken into account, the indices can even be adjusted to follow what is targeted by the community. Petruzzelli et al., in their research, contributed to this subject by showing how to develop a large-scale seismic assessment strategy for disaster management purposes [10].
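The idea of a data-light ranking index can be sketched in a few lines. The building data and the ratio-based index below are hypothetical illustrations of the concept, not the actual formulation of Petruzzelli et al.

```python
# Hypothetical portfolio: each entry holds the design-era PGA requirement and
# today's code-level PGA demand at the site. An index > 1 flags under-design.
buildings = [
    {"id": "A", "year": 1975, "pga_design": 0.10, "pga_current": 0.40},
    {"id": "B", "year": 1998, "pga_design": 0.25, "pga_current": 0.40},
    {"id": "C", "year": 2012, "pga_design": 0.35, "pga_current": 0.40},
]

for bld in buildings:
    # Relative risk index: ratio of current code demand to the demand the
    # building was originally designed for (a deliberately coarse metric).
    bld["risk_index"] = bld["pga_current"] / bld["pga_design"]

ranked = sorted(buildings, key=lambda bld: bld["risk_index"], reverse=True)
for bld in ranked:
    print(f'{bld["id"]} (designed {bld["year"]}): index = {bld["risk_index"]:.2f}')
```

Only the design year (which determines the applicable code) and the location (which determines current demand) are needed, which is precisely why such indices scale to large portfolios.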
By analysing the vulnerability function, the losses that may be generated by an earthquake of a certain intensity can be described. This is part of seismic risk assessment. Research conducted by Martins et al. provides an overview of the methodology of fragility and vulnerability analysis. Several issues, such as the propagation of uncertainty, the validation and verification of results, and the efficiency of the intensity measure used, were also presented by the researchers. Martins et al. created seven modules that aim to make it easier for users to conduct a vulnerability analysis, from choosing the earthquake intensity measure to verifying the model [11].

Method for Risk and Loss Assessment
Risk is the probability of a harmful outcome (deaths, injuries, damage to property, livelihoods, economic disruption, or environmental damage) caused by natural or human-induced hazards acting in concert with vulnerable conditions.
In analysing risk, it is very important to combine all the aspects related to risk so that it can be estimated accurately. Seismic analysis alone is not enough; it needs to be supported by social, economic, and cultural considerations as well. The risk assessment is also not the end of the analysis: it can be followed by risk management and the allocation of appropriate, efficient resources to reduce risk to a level acceptable to all parties, especially the community [12].
A study conducted by Bradley et al. [13] suggested that, to estimate seismic losses, the aspects that must be analysed are the seismic hazard, the structural response, damage fragility, and the consequences of damage, so that a quantification of seismic risk is finally obtained. Bradley et al. [13] developed a methodology for estimating losses of various sizes for a particular facility. Inputs to this estimate include the direct costs of structural and non-structural repairs, business interruption, and injury to building occupants, although the latter two are often not directly considered. The seismic loss estimation process also makes it possible to quantify the seismic risk of engineered structures, allowing analysts and decision-makers to communicate consistently and rationally about seismic risk acceptance and mitigation. From this vulnerability analysis, seismic performance and economic loss can even be related as a function of ground motion intensity [13].
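The loss-estimation chain summarized above (hazard, response, fragility, consequences) can be condensed into a small numerical sketch that convolves a site hazard curve with a vulnerability function to obtain an average annual loss ratio. Both curves and all parameter values below are assumptions for illustration, not data from [13].

```python
import numpy as np
from math import erf

# Average annual loss (AAL) as the convolution of a hazard curve with a
# vulnerability function. Both curves are assumed for illustration.
pga = np.linspace(0.05, 1.0, 400)         # intensity grid [g]
hazard = 1e-3 * (pga / 0.05) ** -2.0      # annual exceedance rate (assumed power law)

def mean_damage_ratio(im, theta=0.45, beta=0.6):
    """Vulnerability: expected loss fraction vs PGA (assumed lognormal-CDF shape)."""
    return 0.5 * (1.0 + erf(np.log(im / theta) / (beta * np.sqrt(2.0))))

mdr = np.array([mean_damage_ratio(im) for im in pga])

# AAL = integral of E[loss | im] * |d lambda / d im| over im (trapezoid rule)
occurrence = -np.gradient(hazard, pga)    # rate density of each intensity level
y = mdr * occurrence
aal = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(pga)))
print(f"AAL = {aal:.2e} of replacement cost per year")
```

The same structure underlies portfolio loss models: only the hazard curve and the vulnerability function change from site to site and building class to building class.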
Risk can be expressed conceptually by the basic relation shown in Fig. 1.
Risk can be presented in more than one way. It can be expressed in absolute or relative terms. Population risk can likewise be expressed as individual risk (the annual probability for a single exposed individual) and societal risk (the relation between the annual probability and the number of people that could be affected, or the economic exposure). It may also be expressed in terms of Average Annual Loss, Maximum Probable Loss, or other metrics derived from a series of loss scenarios, each relating a frequency to an expected outcome.
Some hazards may occur in chains: one hazard triggers the next. These are also called domino effects or concatenated hazards, and they are the most difficult type to analyse in a multi-hazard risk evaluation. The best approach for analysing such hazard chains is to use so-called event trees. An event tree is a framework applied to analyse all the combinations, and the associated probabilities, of occurrence of the parameters that affect the system under analysis. Each analysed event is connected to the others through nodes; all potential states of the system are considered at every node, and each state (branch of the event tree) is described by a defined value of the probability of occurrence, as can be seen in Fig. 2 [14]. Probabilistic Seismic Hazard Analysis (PSHA) integrates over all possible earthquake ground motions at a site to develop a composite description of the spectral amplitudes and hazards (annual frequencies of exceedance) at that site. The analysis has a strong basis in earthquake engineering studies and allows decisions on seismic design levels for a facility to be made with regard to the earthquake magnitudes, locations, and ground motions (including the effects of local site conditions on the amplitudes of strong shaking) that may occur. The use of PSHA for determining seismic design levels is common throughout the world. The collaboration of two researchers during the 1960s produced the key concepts of PSHA [15].
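A minimal numerical illustration of an event tree for a hazard chain (an earthquake followed by fire, one of the secondary hazards mentioned earlier) might look as follows; all branch probabilities are assumptions chosen for the example.

```python
# Event tree for a hazard chain: earthquake -> damage state -> fire ignition.
# Each branch carries a conditional probability; all values are assumptions.
p_quake = 0.02   # annual probability of the triggering earthquake (assumed)

# damage state: (P[state | quake], P[fire | state])
branches = {
    "none":     (0.70, 0.00),
    "moderate": (0.25, 0.02),
    "severe":   (0.05, 0.10),
}

# The states at a node must exhaust all possibilities
assert abs(sum(p for p, _ in branches.values()) - 1.0) < 1e-12

# Total probability accumulated along all branch combinations of the tree
p_fire = p_quake * sum(p_state * p_ign for p_state, p_ign in branches.values())
print(f"Annual probability of earthquake-triggered fire: {p_fire:.1e}")
```

Larger trees simply multiply conditional probabilities along each branch and sum over branches, which is why the method scales naturally to longer hazard chains.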
A logical use of PSHA is to set seismic design levels in building codes that reflect consistent annual probabilities of exceedance (or, equivalently, the return period of a specified level of ground motion). The first published seismic zoning map that included levels of ground motion and associated return periods was Esteva's. The publication was based on estimates of the recurrence periods of magnitudes in seismic zones, attenuating the ground motion to estimate intensity in the various zones, and equating the ground motion return periods to the earthquake recurrence intervals. Esteva's emphasis, however, was on the overall engineering decision process for earthquakes, not only on the extreme ground motion distribution. These concepts were published by Esteva and in his Ph.D. thesis. They incorporated Bayesian updating of seismic activity rates, ground motion equations for PGA, PGV, and PGD, the derivation of design spectra from these quantities, and seismic hazard curves for spectral ordinates that were scaled from PGA, PGV, and PGD. This was the first publication of hazard curves for spectral ordinates. Two aspects of Esteva's work were particularly striking. First, ground motions were recognized to have significant uncertainty associated with their occurrence; this uncertainty was quantified, and the introduction of the term uncertainty was 20 years ahead of its widely accepted use. Second, decision rules for seismic design were developed using structural failure as the decision variable. These decision rules accounted for discounting future costs and benefits by a monetary discount rate and considered whether structures were to be rebuilt or replaced after earthquake-induced failure.
The decision rules also accounted for various potential degrees of damage during earthquake motions and noted that design decisions depend only on the expected rate of structural failure, not on its uncertainty [15].
Fajfar [2] states in his publication that after computers became available, i.e., in the late 1960s and the 1970s, rapid development of methods for seismic analysis, and of supporting software, took place. Nowadays, thanks to tremendous developments in computing power, numerical methods, and software, there are no limitations related to computation. Unfortunately, knowledge about ground motion and structural behaviour has not kept the same pace, especially in the inelastic range. Moreover, it cannot be expected that, in general, the basic skills of engineers will be better than before. Today, the lack of reliable data and the limited capabilities of designers represent the weak point in the chain of the design process, rather than the computational tools, as was the case before [2].
The seismic performance can be shown by analyzing the structure's vulnerability in resisting an earthquake motion and then developing fragility curves. Fragility curves are valuable tools for predicting the extent of probable damage. They show the probability of highway structure damage as a function of strong-motion parameters, and they allow the estimation of a damage probability for a known ground motion index. Karim and Yamazaki [16] used an analytical approach to develop fragility curves for highway bridges based on numerical simulation. Four typical reinforced concrete bridge piers and two reinforced concrete bridge structures were considered, of which one was a non-isolated system and the other an isolated system, and they were designed according to the seismic design code of Japan. Using a total of 250 strong-motion records selected from Japan, the United States, and Taiwan, nonlinear time history analyses were performed and the damage indices for the bridge structures were obtained. Damage indices and ground motion parameters were then used to construct fragility curves for the four bridge piers and the two bridge structures by assuming a log-normal distribution. A significant effect of the variation of structural parameters on the fragility curves was observed. The relationship between the fragility curve parameters and the overstrength ratio of the structures was also obtained by performing a linear regression analysis. Based on this observed relationship, a simplified procedure was developed to construct the fragility curves for highway bridges using 30 non-isolated bridge models. The simplified procedure may be a useful tool for developing fragility curves for non-isolated highway bridges in Japan that fall within the same group and have similar characteristics [16].
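The general step of fitting a lognormal fragility curve to damage outcomes from time-history analyses, as in the approach of Karim and Yamazaki, can be sketched with synthetic data. The maximum-likelihood formulation below is a standard technique; the records, outcomes, and parameter values are invented for illustration, not taken from [16].

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Maximum-likelihood fit of a lognormal fragility curve
#   P(damage | IM) = Phi( ln(im / theta) / beta )
# to binary damage outcomes from nonlinear time-history runs.
# The "runs" here are synthetic: outcomes are generated from known parameters.
rng = np.random.default_rng(0)
im = rng.uniform(0.05, 1.2, 200)          # PGA of each record [g] (synthetic)
true_theta, true_beta = 0.50, 0.45        # median and dispersion used to simulate
damaged = rng.random(200) < norm.cdf(np.log(im / true_theta) / true_beta)

def neg_log_lik(params):
    theta, beta = params
    p = np.clip(norm.cdf(np.log(im / theta) / beta), 1e-9, 1.0 - 1e-9)
    return -np.sum(np.where(damaged, np.log(p), np.log(1.0 - p)))

res = minimize(neg_log_lik, x0=[0.4, 0.5], bounds=[(0.05, 3.0), (0.05, 2.0)])
theta_hat, beta_hat = res.x
print(f"fitted median = {theta_hat:.2f} g, dispersion = {beta_hat:.2f}")
```

In a real study, the binary outcomes would come from comparing computed damage indices against damage-state thresholds rather than from simulation against known parameters.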
Masrilayanti et al. [17] in their paper present a convenient method to establish fragility curves for a cable-stayed bridge. For this purpose, a three-span cable-stayed bridge was assessed under a series of seismic loads at various intensities, to ensure that the structure experienced damage under several conditions. The fragility curve was obtained by analyzing the structure using Nonlinear Time History Analysis (NTHA) and Pushover Analysis. The earthquake ground motions applied to the bridge were scaled to different intensities from the initial ground motion. The structure's ductility demands were then formed into fragility curves as the responses of the bridge. The HAZUS standard was used to classify the damage to the bridge into slight, moderate, extensive, and complete damage states due to the seismic load. The values of the damage states were developed into fragility curves using their probabilistic values. The result revealed that the fragility curves were well described by a lognormal distribution [17]. Fig. 3 describes the probabilities of a cable-stayed bridge reaching different levels of failure. There are four curves obtained by the fragility analysis, and each curve describes a different prediction of the bridge performance. The blue line is for slight damage, meaning the bridge will suffer only minor disturbance when an earthquake of a given PGA occurs. The orange line is for moderate damage, the red line is for extensive damage, and the purple line describes complete damage of the structure. From the graphs, the damage level and its probability can be predicted. For example, if the PGA is about 0.6, it can be seen from Fig. 3 that the probability of slight damage is about 98 percent, moderate damage about 90 percent, extensive damage about 78 percent, and complete damage about 25 percent.
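Reading damage-state probabilities off a family of lognormal fragility curves, as done above for Fig. 3, takes only a few lines. The medians and dispersions below are illustrative assumptions, not the values fitted in [17], so the resulting numbers only roughly echo those quoted from the figure.

```python
import numpy as np
from scipy.stats import norm

# Damage-state probabilities from lognormal fragility curves at a given PGA.
# Medians and dispersions are illustrative assumptions, not the values from [17].
states = {  # damage state: (median PGA theta [g], dispersion beta)
    "slight":    (0.20, 0.6),
    "moderate":  (0.35, 0.6),
    "extensive": (0.50, 0.6),
    "complete":  (0.90, 0.6),
}

pga = 0.6
probs = {}
for name, (theta, beta) in states.items():
    probs[name] = norm.cdf(np.log(pga / theta) / beta)
    print(f"P[damage >= {name:9s} | PGA = {pga} g] = {probs[name]:.2f}")
```

Because the damage states are ordered (each curve gives the probability of reaching *at least* that state), the probabilities necessarily decrease from slight to complete at any given PGA.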
With this analysis, the structure owner can prepare a better structure and avoid losses and casualties. In the end, everything is regulated in the building and mitigation standards issued by governments. Almost all countries worldwide are very concerned about this natural disaster and have made regulations covering earthquake maps, earthquake-resistant structures, and disaster mitigation. Like other countries, the United States has frequently revised its earthquake standards and regulations in line with developments in science and technology, along with the advancement of the International Building Code (IBC) and other new standards. Previously, the design of earthquake-resistant structures assumed a building life of 50 years and was considered capable of withstanding earthquakes with a return period of 475 years. Searer et al. [18] address the switch to a 2475-year earthquake from engineering and economic perspectives. Fenwick and MacRae [19] describe the changes in the required design strengths, stiffness levels, and capacity design provisions of the New Zealand earthquake provisions, with particular reference to buildings in which reinforced concrete moment-resisting frames provide the lateral force resistance. It is shown that simple comparisons of response spectra and of soft-storey restrictions can give misleading conclusions regarding the relative strength and stiffness requirements unless allowance is made for several other factors.
Indonesia is known as one of the most seismically hazardous nations on earth. Its enormous and vulnerable population centres demand sound seismic hazard assessment and evaluation. The Indonesian Ministry of Public Works and Housing set up a team of earthquake scientists and engineers tasked with improving the data available to revise the national seismic hazard map in 2016. They compiled the results of recent active-fault studies using geological, geophysical, and geodetic observations, and a new comprehensive earthquake catalogue with hypocentres relocated in a three-dimensional velocity model. The seismic hazard analysis was undertaken using recently developed ground motion prediction equations (GMPEs) and logic trees to incorporate the epistemic uncertainty associated with the different choices of GMPEs and earthquake recurrence models. The new seismic hazard maps establish the importance of active crustal faults, intraslab seismicity, and the subduction megathrust in determining the level of seismic hazard, especially in coastal, populated regions. The new Indonesian hazard maps will update the general standards for designing earthquake-resistant buildings and infrastructure [20].

Conclusion
Many researchers have conducted excellent studies to find the causes of seismic failure in structures. Experimental, simulation, and field research has produced many conclusions on this subject, and this development is in line with technological advances in all fields. A lot of research has also been done to prepare structures for the disasters caused by earthquakes: many theories, simulations, experiments, and practices have been carried out and formulated for disaster preparedness. However, further research is still needed, because the danger of natural disasters will continue to threaten humankind.
Earthquakes have occurred for as long as the world has existed. To date, there is still no technology that can determine when and how large an earthquake will occur in a specific area. However, the earthquake itself is not dangerous; rather, the events that follow the earthquake are what must be considered harmful to human beings.
For this reason, it is crucial to carry out comprehensive and thorough preparations for this earthquake disaster. The latest technology is able to predict the level of collapse of a structure, for example through vulnerability and fragility analysis. With fragility analysis, the probability of structural collapse can be estimated, so that the structure's capacity can be increased according to the acceptable level of failure at a given intensity level.