In a rapidly changing world grappling with more frequent and extreme weather events, an understanding of climate built on past patterns is obsolete, and could be dangerously misleading. Climate change, as the IPCC’s sixth assessment report concluded in 2021, is reshaping our reality. What was once considered “rare” is becoming commonplace, and our tools for prediction need a serious overhaul.

It’s more than hotter summers and heatwaves. Our entire weather system is shifting, changing the way rain, plants, and other elements interact. This shift has been described as “global weirding,” and it’s leading to a world where weather gets more unpredictable and intense, and the complex systems that make up our planetary environment are changing.

How is climate change linked to extreme weather?

In an era of global weirding, nothing is as it seems, or intuitively should be. For example, historically the types of vegetation in an area are closely shaped by prevailing climate conditions, with dense vegetation in wet rainforests and sparse plant life and arid conditions in deserts. But with “global weirding,” these norms are disrupted: deserts may experience more rainfall, while rainforests might dry out. This creates environments with dense vegetation but insufficient rain, meaning a much higher risk of severe wildfires in areas where they were previously rare.

This global weirding and uncertainty is also apparent in new patterns of tropical cyclones, known as hurricanes or typhoons in different parts of the world. Warming air and seas, shifting wind patterns, and changing ocean currents may all play a role in their increasing intensity. But even the experts at the IPCC tread lightly when discussing this link: many models point to little change in the frequency of tropical cyclones, but a sharp increase in their severity compared with previous patterns.

And climate change doesn’t just bring isolated events. It’s like a domino effect. Think about Hurricane Ian’s destruction made worse by constantly rising sea levels and intense rainfall. Or the strong winds that led to devastation across Maui. Or the massive drought in California, fueled by extreme heat that dries out plants, setting the stage for devastating wildfires.

Just to further confuse matters, even when we move to mitigate our impact on the climate, it can have weird consequences. For example, regulations limiting high-sulphur fuels used by ships in key shipping lanes were aimed at reducing emissions. In the long term this will have benefits, but in the short term it has led to increased temperatures across many of these areas, because the sulphur-based aerosols that were removed had been reflecting sunlight and shielding parts of the planet from additional warming.

How do we adapt?

In all of this chaos, predicting future climate risk is becoming a tougher game. In the past, teams would build regression-based models that rely on historical data to predict future events. But that approach is becoming less reliable as climate-induced disasters change in frequency and intensity. Places that once saw occasional grass fires now experience intense blazes. Tropical cyclones venture into unusual northern territories like Canada and Alaska, and floods deemed “one-in-1,000-year” events happen with startling regularity.

Even more eye-opening, recent wild weather shifts – such as ferocious wildfires and temperature surges – are outpacing even the top predictions, including those from the IPCC. The newly introduced CMIP6 models, even when evaluated against past events, have consistently underestimated the severity of these weather extremes.

So, where does this leave us? I see three key points that can help us understand our planet better in an era of global weirding.

  1. We need to acknowledge that using solely historic data for future predictions no longer works. Our climate’s fundamentals are shifting, and our models need to reflect that. 
  2. We need to harness new data sources. The richness of data now available to us – from real-time satellite imagery to advanced marine heat wave metrics – provides a promising foundation. 
  3. We need to harness AI-based technologies. Companies using AI to identify and analyse climate risks are getting better at integrating an array of new data streams into modelling outcomes using deep learning. AI can help us incorporate more recent variation in temperatures, and help us understand changes over the next five years, rather than only predicting what the world could look like in 2100.

This is not just about adapting our models but also our mindset. We must recognise that adaptation and mitigation are no longer choices but necessities. It’s high time for policymakers, businesses, and individuals to embed climate considerations into long-term strategies, ensuring we’re investing in the risk models that are equipped to face the multifaceted challenges of our changing climate.

Josh Gilbert is co-founder and CEO of Sust Global, a geospatial AI company fusing satellite data, climate models and deep learning techniques to help investors and businesses to integrate climate-aware risk management strategies.

Many parts of the world are already experiencing weather patterns that have never been seen before, and this trend is expected to continue. As a result, the tools and models that have been used to predict wildfire risk in the past may not be sufficient in the future. Purely empirical approaches based on historical data will fail to predict fire risk under future weather conditions. This is why Sust Global has created a novel modeling approach that is empirical enough to learn useful information from past data using state-of-the-art AI modeling, but at the same time theoretical enough to make reasonable predictions outside the range of training data provided by the historical record. Our novel approach gives our model capabilities that most AI models lack.  For example, it can extrapolate and it is readily interpretable.

This approach is especially useful in predicting wildfires.  Wildfires have become a common occurrence in many parts of the world, and their frequency and intensity have been on the rise in recent years. This increase in wildfire risk can be attributed to a variety of factors, but climate change is undoubtedly one of the most significant. As temperatures continue to rise, droughts become more severe, and extreme weather events become more frequent, the challenge of forecasting wildfire risk becomes even more difficult.  For example, in many US States, future fire weather indices are forecast to be higher than anything observed in the past.

Comparing the distribution of monthly maximum fire weather values (using the KBDI fire weather index) from the recent past to mid-21st century across four states. Many of the future expected KBDI values are outside of the range of historic observations, making modeling the impact of these values challenging.

Our model begins by taking millions of satellite observations of historic fires with high-resolution data on dozens of variables that contribute to fire risk – including daily precipitation and temperature, local topography and land cover, as well as fire suppressability and ignition sources. This is the typical AI approach – take tons of big data, fire up some GPUs, and let the model learn complex relationships that would be hard for a human to decipher. 

However, our key innovation is that we force our model into two simple, interpretable layers that disaggregate overall fire risk at a location into baseline fire risk and weather-dependent fire risk. This lets us interpret these layers, and lets our model use weather-dependent fire risk to extrapolate fire risk trends into the future and make reasonable, scientifically-informed predictions in unseen contexts outside of the range of data it has been trained on.
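To make the two-layer idea concrete, here is a minimal sketch in Python. It is not Sust Global’s actual model: the multiplicative form, the `weather_sensitivity` parameter, and the example numbers are all illustrative assumptions.

```python
# Toy sketch (not Sust Global's actual model): decompose fire risk at a
# location into a baseline term and a weather-dependent term.
def fire_risk(baseline, weather_sensitivity, fire_weather_index):
    """Annual burn probability as baseline risk scaled by fire weather.

    baseline            -- burn probability in an average weather year (0-1)
    weather_sensitivity -- how strongly risk responds to fire weather (>= 0)
    fire_weather_index  -- standardized anomaly of e.g. KBDI (0 = average year)
    """
    risk = baseline * (1.0 + weather_sensitivity * fire_weather_index)
    return max(0.0, min(1.0, risk))  # clamp to a valid probability

# Flint Hills-like regime: high baseline, low weather sensitivity
flint = fire_risk(baseline=0.60, weather_sensitivity=0.1, fire_weather_index=2.0)

# Bob Marshall-like regime: low baseline, high weather sensitivity
bob = fire_risk(baseline=0.01, weather_sensitivity=5.0, fire_weather_index=2.0)
```

Under this toy decomposition, an extreme fire-weather year barely moves the Flint Hills number but multiplies the Bob Marshall number many times over, which is the extrapolation behavior the two-layer structure is designed to capture.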

As an example, here is a comparison of how our model learns to separate baseline and weather-dependent fire risk in two locations in the United States: the Flint Hills of Kansas and the Bob Marshall Wilderness of Montana.

From left to right, this graphic shows: (1) the location of the Flint Hills in Kansas, (2) the historic relationship between the yearly average KBDI fire weather index and the yearly burned area, (3) a map of the baseline fire risk in the area, with light (dark) values indicating higher (lower) baseline fire risk, and (4) a map of weather-dependent fire risk, with dark (light) values indicating lower (higher) weather-dependent fire risk.

The Flint Hills are a rocky area in eastern Kansas with the densest coverage of intact tallgrass prairie in North America. Annual prairie fires are common, and are often deliberately set as controlled burns. As shown in the second graphic from left, there is little relationship between annual fire weather and annual burned area: there have been years with lots of burning and a low fire weather index, and other years with high fire weather but relatively low rates of burning. In such a context, where prairie fires are very common but seem to have little to do with weather conditions, the model has learned that baseline fire risk is very high, but weather-dependent fire risk is low.

From left to right, this graphic shows: (1) the location of the Bob Marshall Wilderness in Montana, (2) the historic relationship between the yearly average KBDI fire weather index and the yearly burned area, (3) a map of the baseline fire risk in the area, with dark (light) values indicating lower (higher) baseline fire risk, and (4) a map of weather-dependent fire risk, with light (dark) values indicating higher (lower) weather-dependent fire risk.

The Bob Marshall Wilderness is a vast wilderness area in the northern Rockies of Montana, and is home to grizzly bears, moose, elk, black bears, mountain goats, and many other species of wildlife. Historically, winters were long and summers brief; due to the high latitude and altitude, snow typically persisted through to late spring. However, this is changing with climate change, and recent years have seen unprecedented fire weather and unprecedented fire activity.

This connection between weather and risk is very clear in the second plot, where years with low fire weather have nearly no burned area, while years with high fire weather have very high levels of burning.  In such a context, our model will learn that baseline fire risk is low, but weather-dependent fire risk is high.  Consequently, our model will extrapolate that even worse fire weather will see even greater levels of burning, in line with what the fire science predicts.

Examining global maps of baseline and weather-dependent wildfire risk shows strong variations between world regions. Tropical savannas show very high levels of baseline wildfire risk, since those regions already burn nearly every year. High-altitude and high-latitude forests and meadows, on the other hand, have very high weather-dependent risk. This means that these areas typically don’t see fires, but hotter weather is associated with increased wildfire risk. This suggests that these areas will be the most affected by climate change, and that wildfire will be a major issue in a warming world. This is especially true in boreal forests, which are warming much faster than other regions.

Global Baseline Wildfire Risk. This graphic shows baseline wildfire risk globally, which is highest in tropical savannas, for example in Africa and Northern Australia. It is lowest in humid forests, from tropical rainforests like the Amazon and Congo, to temperate forests in Europe, Eastern North America, and East Asia.
Global Weather-Dependent Wildfire Risk. This graphic shows weather-dependent wildfire risk globally, which is highest in high-alpine areas as well as in boreal forests.  It is lowest in the arid tropics, where fire weather typically already reaches extremely high values annually.

The scientific consensus is that under a business-as-usual scenario of carbon emissions, the entire world will be 5 degrees Celsius hotter in the year 2100. We can’t know exactly what fire risk would be in such a hot and dry world, because we have never observed anything like it. We only know that purely empirical models trained solely on historic data will severely underestimate fire risk in such an alien world. This may be why models at some climate risk data providers are predicting a shockingly low 7% increase in fire risk across the US and Western Europe in such a scenario, while other groups suggest only a 50% increase in the number of wildfires globally.

Given that we have already observed a 236% increase in wildfire burned area in the US from the 20th to the 21st century, over a period associated with only a 1°C increase in temperature, the true fire risk in a world that is 5°C hotter is likely much higher than many empirical models are predicting. Using a more theoretical model designed to separate baseline and weather-dependent fire risk in order to extrapolate intelligently into unseen temperature ranges, Sust Global’s forecast of a 450% increase in burned area in the US under a 5°C hotter world is likely much closer to the true value than other projections. Customers concerned with accurately estimating their fire risk, even decades into the future, should bet on Sust Global’s approach.

About the author: Matt Cooper is the Senior Data Science Engineer at Sust Global, where his work runs the gamut from engineering data pipelines to thinking critically about risk models to improving Sust Global’s platform. Before joining our science and product team, Matt was a Data Science Postdoc at the Harvard School of Public Health, where he researched the effects of heatwaves and droughts on food security and health. Matt is passionate about making falsifiable predictions about the future, and using new evidence to update his mental and computational models.

At Sust Global, we are on a mission to transform the latest in machine learning, remote sensing and frontier climate science into the intelligent climate data infrastructure that organizations need to thrive. If you would like to learn more about our financial loss modeling capabilities, fill out the form below and a member of the team will be in touch shortly.

The organization

Sust Global enables one of the world’s largest providers of financial markets data and infrastructure to integrate AI-powered climate data seamlessly into their platform, facilitating investment, trading, and risk decision making.

The Challenge

Our customer’s mission is to deliver the cleanest, richest and most accessible data, with a strategic focus on facilitating the shift to sustainable finance, making the provision of best-in-class physical climate risk data a key priority.

Their own clients include asset managers across the mining and energy sectors, where exposure to physical climate risk is extensive due to their large number of operational assets. They are also increasingly required to provide TCFD-compliant reporting on climate risks in several jurisdictions, requiring physical climate risk data at asset level to satisfy their disclosure obligations.

Previous solutions offered low spatial resolution and did not provide asset-level data. They also surfaced insights via a separate standalone proprietary dashboard, limiting the potential for the seamless platform integration needed.

Sust Global’s solution

Sust Global’s global, granular data, with its flexible API-first integration into their existing workflows, was the obvious choice in terms of its accessibility and precision. Our product also provides data on multiple climate perils including water stress, which is of particular interest to the clients of this particular data platform.

Scalable across multiple portfolios and companies, our solution is an ideal fit to enable asset managers to get started on material physical climate risk reporting while adhering to the recommendations of the TCFD and other relevant disclosure frameworks.

Our solution also offers future commercial possibilities. This includes sell-side thought leadership on the theme of risk management to engage investors in the alpha opportunity, as well as new products and services connected to climate-related risks and sustainability. Furthermore, investor expectations regarding ESG performance are rising and commodity funds lend themselves well to climate risk optimization with the right data. Future opportunities may include the weighting of equities according to climate risk, and also products positioned around sector neutrality.

Contact us via the form below to learn more about how Sust Global can provide your organization with access to reliable high frequency data insights that support climate-informed investment decisions via our flexible API integration.

The organization

Sust Global’s AI-powered analytics and APIs are serving one of the leading fixed income analytics platforms. Our data and APIs enable institutional investors to integrate climate analytics across portfolios, benchmarks, trading decisions and risk, with a strong focus on mortgage backed securities (MBS). 

The Challenge

As physical assets and portfolios face mounting risks from the effects of climate change, financial data platforms require robust methodologies that can predict the probability and severity of events accurately throughout the life of mortgages. This must be done at both the granular property level and the aggregate portfolio level. Additionally, our customer required a scalable and dynamic solution that could be used to scope risk across thousands of underlying properties and their mortgages. The data needed to integrate seamlessly into traditional financial workflows.

Accessibility, precision, and methodological rigor were therefore key priorities when choosing a climate analytics provider, enabling their own clients to plan their portfolio distribution effectively, particularly in high-risk areas. 

Sust Global’s solution

Sust Global’s industry-leading AI-powered climate data is able to provide transparency on climate risk across global portfolios, at the portfolio and individual mortgage level, starting with only identifiers (such as CUSIP or ISIN). Through frictionless integration with Sust Global’s API, real-time RMBS climate risk intelligence enables portfolio managers to conduct prospective analyses of investment opportunities, as well as to complete thorough portfolio risk assessments. Climate risks are quantified in terms of probability and severity across six climate hazards: wildfire, floods, hurricanes, heatwaves, water stress and sea level rise.

To ensure that this data is fully integrated with the existing investment workflow, these risks are converted into financial loss projections by combining our AI-enhanced climate simulations with hundreds of sector-specific damage functions used throughout the insurance sector. This provides annual financial loss projections, hazard by hazard, at the portfolio and tangible asset level.

As a result, analysts and portfolio managers are able to proceed with confidence and clarity in their decision making in a changing climate. In doing so they are improving their risk-adjusted returns and fulfilling their fiduciary responsibilities. It also means that they are now better prepared for full compliance with any prospective climate reporting legislation likely to be introduced by the SEC. 


In the blog below, Matt Cooper – Senior Data Science Engineer at Sust Global – explains the methodological complexity involved in transforming raw climate data into meaningful financial loss models that organizations can actually use. He describes how Sust Global uses novel machine learning techniques to resolve this challenge and enable our clients to accurately identify how and where they are exposed to financial risk from the impacts of climate change.

How bad will climate change get?

Last year, the Intergovernmental Panel on Climate Change (IPCC) concluded that human influence had unequivocally ‘warmed the atmosphere, ocean, and land.’ This is already affecting many climate and weather extremes on a global scale, with deadly disasters such as wildfires, droughts, floods and cyclones becoming more intense and frequent. 

What is less certain is how much humanity will collectively work together to reduce future warming of the planet. The most common way to model the future of the climate is to use simulations, which use a whole suite of assumptions about the world to simulate the future – everything from the laws of fluid dynamics and atmospheric chemistry to variables in human behavior. These are not predictions but rather scenarios that give an idea of the range of possible futures.

With that in mind, how can businesses translate abstract climate science into the actionable financial metrics required to support informed strategic decision making?

At Sust Global we combine AI-enhanced climate simulations with industry-standard loss and damage models used by the insurance sector to tackle this problem. This blog post outlines each of the steps in that process.


The science of simulation 

The first step in predicting what climate change will be like is to run simulations of the future. These simulations are called Global Climate Models, or GCMs, and they divide the world into grid cells. Then, based on the physical laws of how temperature, sunlight, water, and air interact in the presence of increasing CO2, these models can predict a small step into the future – typically one hour. Run these models long enough, and you have a simulation of a century. Aggregating lots of different models under many different assumptions can give us our best picture of what the future of the world will look like. 
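The grid-and-timestep loop described above can be caricatured in a few lines of Python. This toy replaces the real physics (fluid dynamics, radiation, atmospheric chemistry) with simple diffusion between neighboring cells plus a uniform forcing term standing in for rising CO2; every number here is illustrative.

```python
# Drastically simplified sketch of a GCM-style simulation loop:
# a grid of temperature cells, each timestep mixing heat with its
# neighbors (diffusion) plus a small forcing term standing in for CO2.
def step(grid, forcing=0.001, mix=0.2):
    """Advance the temperature grid by one timestep."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            # average of the four neighbors (edges clamped to the grid)
            nbrs = [grid[max(i - 1, 0)][j], grid[min(i + 1, n - 1)][j],
                    grid[i][max(j - 1, 0)], grid[i][min(j + 1, n - 1)]]
            local_mean = sum(nbrs) / 4.0
            new[i][j] = grid[i][j] + mix * (local_mean - grid[i][j]) + forcing
    return new

grid = [[15.0] * 4 for _ in range(4)]  # a 4x4 "world", uniform 15 C
for _ in range(1000):                  # "run the model long enough"
    grid = step(grid)
```

Each call to `step` is the "small step into the future"; chaining thousands of them is what turns a physical update rule into a century-scale simulation.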

While these models can answer important questions about how emitting more CO2 translates into global temperature increases, they aren’t quite ready to predict the occurrences of real property-destroying hazards – that’s where Sust Global comes in.

The raw data from GCMs needs to be refined to get estimates of hazards. This is in part because GCMs are very coarse: they can simulate weather over a wide area and make accurate forecasts of temperature and rainfall for a region, but they can’t make accurate forecasts for things that happen on very local spatial scales.

For example, a GCM can tell us if fire weather will be more likely in Southern California, but it can’t forecast accurately if a particular LA neighborhood is at risk, since this depends on things like what kind of forests are nearby. This level of local granularity is too detailed to fit into a global model of the entire earth’s climate system. 

GCMs can also broadly predict changes in rainfall trends, but whether or not your house is at risk of flooding depends on precise estimates of local topography. GCMs are also inadequate for complex, fine-scale weather phenomena like tropical cyclones (aka hurricanes and typhoons depending on where they form).

Figure 1: How a GCM Works. A GCM is a simulation of the future, based on a grid of the world and our knowledge of how physical processes affect the weather (source: Wikimedia Commons)

To give our customers an accurate picture of the climate hazards, a lot of our work involves taking the fuzzy, imprecise predictions from GCMs and using them to create a crisp, high-resolution picture of the future.

We take future predictions of fire weather, combine them with on-the-ground, satellite-derived data on land cover and historic fire, and then use AI to make actual, credible forecasts of fire risk. We use similar methods to turn coarse estimates of future flood risk, in combination with thousands of high-resolution geospatial flood maps from FEMA, into highly precise flood probability estimates at multiple depths. All of this work adds enormous value to the global forecasts made by the scientific community and gives our customers accurate, validated data on the hazards they care about.
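As a rough illustration of the downscaling idea (an assumed statistical form, not Sust Global’s actual AI method), one can spread a coarse GCM-scale risk value across high-resolution cells in proportion to a satellite-derived flammability weight, so that forested cells inherit more of the regional risk than bare or water-covered ones:

```python
# Illustrative downscaling sketch: distribute one coarse regional risk
# value across sub-cells, weighted by a land-cover covariate, while
# preserving the regional mean.
def downscale(coarse_risk, flammability):
    """Distribute a GCM-scale risk value across high-resolution cells.

    coarse_risk  -- regional fire risk from the GCM
    flammability -- per-cell land-cover weights (e.g. satellite-derived)
    """
    total = sum(flammability)
    return [coarse_risk * f / total * len(flammability) for f in flammability]

# One coarse cell covering four high-res cells: forest, forest, grass, water
cells = downscale(coarse_risk=0.08, flammability=[0.9, 0.8, 0.3, 0.0])
```

The mean of the high-resolution cells equals the coarse value, but the spatial pattern now reflects what is actually on the ground.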

Figure 2: Varying Hazard Resolutions.  Here is an example of the scale at which fires are modeled and observed at the level of GCMs, RCMs (regional climate models), and at the resolution of high-res satellite observations.  At Sust Global, we take GCM output and downscale it to the resolution of satellite observations, for spatially precise estimates of fire risk. Source: Sust Global.

Quantifying the cost of climate hazards

Predicting the risk of climate hazards in the future is a complex task – one that relies on supercomputer simulations, cutting-edge AI models, and decades of satellite imagery. But this still does not tell us enough about how bad climate change will be – it does not tell us whether a 2-foot flood or category 4 hurricane winds will do more damage to a house, or whether a week-long heatwave or a season of wildfire smoke will do more damage to public health. For that we need to translate hazards, like hurricanes and floods, into a cross-comparable metric like dollars lost, percentage of homes destroyed, hours of work interrupted, or even lives lost. To do this we need what are called damage functions.

A damage function is a mathematical function that takes in some climate hazard and outputs some measure of how bad the damage is. Let’s take fires as an example. Let’s say that we already have predictions of the risk of fires occurring in the year 2030 in the state of California, but we want to know how many houses they will destroy. So our function would look like:

fire probability * percent of houses usually destroyed * total number of houses = expected number of houses destroyed

This provides us with a metric for how many houses could be expected to be lost in a given year – a very important number for everyone from insurance companies and state emergency preparedness agencies to individual homeowners. Typically, the function is derived using historical data. That means that while the probability of hazards will change in the future and is inferred by complex climate modeling, the damage that those hazards do is assumed to be constant.

So we could look at historic fires in California and say that, when a fire is in a residential area, typically 80% of houses are destroyed. Rough modeling could then tell us that, if there are 12 million homes in California, each with a 1-in-1,000 chance of being in an area hit by wildfires, and historically 80% of homes in wildfire-hit areas are destroyed, then California can expect to lose about 9,600 homes in a given year. If, however, climate change increases that 1-in-1,000 chance of fire to a 1-in-100 chance, that would mean 96,000 homes destroyed.
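The back-of-the-envelope calculation above can be written directly as a function, using the illustrative numbers from the text:

```python
# Expected annual home losses: exposure x hazard probability x damage rate.
def expected_homes_lost(n_homes, p_fire, destruction_rate):
    """Expected number of homes destroyed in a given year."""
    return n_homes * p_fire * destruction_rate

today = expected_homes_lost(12_000_000, 1 / 1000, 0.80)   # ~9,600 homes
future = expected_homes_lost(12_000_000, 1 / 100, 0.80)   # ~96,000 homes
```

A tenfold change in hazard probability flows straight through to a tenfold change in expected losses, which is why getting the future probabilities right matters so much.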

In reality, the damage estimation process is much more complicated. For one, most hazards are modeled in terms of both probability and intensity. For example, at a given location, a hundred-year flood (i.e. a flood event that has 1 in 100 chance of being equalled or exceeded in a given year) could be 1 foot, but a thousand-year flood could be 10 feet. For hurricanes, there might be a 10% chance of a category 1 (74 mph) hurricane, but a 1% chance of a category 5 (157+ mph) hurricane. So damage functions must be able to predict different levels of damage at different intensities, whether intensity is measured in terms of depths, wind speeds, temperatures, hazard duration, etc.
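Here is a minimal sketch of such an intensity-dependent damage function: a flood depth-damage curve, linearly interpolated between points. The curve values themselves are hypothetical, not taken from any real model.

```python
# Hypothetical depth-damage curve: (flood depth in feet,
# fraction of structure value destroyed).
DEPTH_DAMAGE = [(0.0, 0.00), (1.0, 0.15), (4.0, 0.45), (10.0, 0.90)]

def damage_fraction(depth, curve=DEPTH_DAMAGE):
    """Linearly interpolate fractional damage for a given flood depth."""
    if depth <= curve[0][0]:
        return curve[0][1]
    for (d0, f0), (d1, f1) in zip(curve, curve[1:]):
        if depth <= d1:
            return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)
    return curve[-1][1]  # deeper than the curve: cap at the last value

loss_100yr = damage_fraction(1.0)    # a 1-ft "hundred-year" flood
loss_1000yr = damage_fraction(10.0)  # a 10-ft "thousand-year" flood
```

The same pattern generalizes to other hazards by swapping the intensity axis: wind speed for hurricanes, temperature and duration for heatwaves.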

Figure 3: Flooding Damage Function.  Percent of a structure destroyed, based on the depth of a flood and the economic function of the structure.  Source: Sust Global.

Accounting for uncertainty

Another major complexity of damage estimation is that a good model will use different damage functions in different contexts. Building standards often vary widely across geographies, usually accounting for prevalent hazards, so houses in Miami will likely be more resistant to hurricane-force winds than houses in Seattle. A proper damage estimation methodology should account for that and use separate damage functions for houses in Miami vs Seattle. 

Another way damage functions can vary is by type of structure, with apartments, residential homes, factories, schools, stadiums, etc., all being damaged differently by a flood of the same depth. A good model of future damages will take all these variables into account, which is why we account for both geography as well as asset type in our Value-at-Risk estimations, with 36 different damage functions for various asset types.

While these methods can give us state-of-the-art estimates of future climate change impacts, it is important to recognize that these estimates come with uncertainty, and accounting for this uncertainty is necessary to create reliable models. Indeed, there is uncertainty in nearly every step of the process. In the first step, climate simulation (how hot will the oceans be in 2050?), different GCMs give different predictions, creating one source of uncertainty.

Following that, there is uncertainty in converting future climate conditions into hazard probabilities: how likely will a category 3 hurricane be in 2050, given oceans that are 3°C warmer? Finally, there is uncertainty in the damage estimation: what kind of damage does a category 3 hurricane do? This is why most approaches to Value-at-Risk estimation use simulations of thousands of potential futures, under varying assumptions of climate change, hazard frequency, and hazard damages.
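A stripped-down sketch of that simulation approach: sample each uncertain quantity many times and examine the resulting loss distribution. The distributions and parameters here are made up for illustration, not Sust Global’s actual assumptions.

```python
import random

# Monte Carlo sketch: sample uncertain hazard frequency and damage
# severity thousands of times, then read off the expected loss and a
# tail percentile (a Value-at-Risk-style metric). All numbers illustrative.
def simulate_annual_loss(n_sims=10_000, asset_value=1_000_000, seed=42):
    rng = random.Random(seed)
    losses = []
    for _ in range(n_sims):
        p_hazard = rng.uniform(0.01, 0.05)   # uncertain hazard frequency
        severity = rng.uniform(0.1, 0.6)     # uncertain damage fraction
        hit = rng.random() < p_hazard        # does the hazard occur this year?
        losses.append(asset_value * severity if hit else 0.0)
    losses.sort()
    expected = sum(losses) / n_sims
    var_99 = losses[int(0.99 * n_sims)]      # 99th-percentile annual loss
    return expected, var_99

expected_loss, value_at_risk = simulate_annual_loss()
```

The gap between the expected loss and the 99th-percentile loss is exactly the kind of tail information that single-point estimates hide, and why uncertainty-aware simulation matters for risk pricing.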


Organizations across every sector struggle with transforming the complexity and uncertainty of climate science into meaningful business metrics and applied industrial practices for things like risk management, asset valuation, and risk pricing. Bridging this gap between science and business is one of the greatest challenges we face as we seek to adapt to a rapidly changing climate.

Using these methodologies and properly accounting for uncertainty, we at Sust Global can give precise and scientifically-validated estimates of the kind of risks your business assets will face under climate change. 



Climate change is no longer a future threat – it is happening now and it is affecting communities, businesses, and economies. 

In 2021, the world was ravaged by a staggering 47 separate billion-dollar weather disasters, with overall damage to the world economy totalling $329bn. This trend continued into 2022, with ten disasters each causing more than $3bn in insured losses, including Hurricane Ian ($100bn in insured losses), the European drought ($20bn), and flooding in China ($12.3bn).

It is in the context of this grim new reality that governments and businesses across the world are beginning to wake up to the fundamental importance of understanding physical climate risk.  This refers to the potential harm caused by the physical effects of climate change. These risks can be either acute or chronic in nature.

Acute physical climate risks are event-driven, including the immediate and severe impacts of extreme weather events, such as heatwaves, wildfires, hurricanes, droughts and floods. They can cause widespread destruction and disruption, and are often costly to respond to and recover from. These effects can have a wide range of impacts on human and natural systems, including damage to infrastructure and property, loss of life, and disruption of economic activities. 

Aerial view of the destruction caused by Hurricane Ian in Florida.

Chronic physical climate risk refers to the gradual and long-term impacts of climate change. Chronic hazards may be less immediately visible than acute events, but they can be just as costly, if not more so. They include slow-onset events such as sea level rise, desertification, changes in weather patterns, and sustained increases to average temperature which lead to long-term changes in the natural environment and affect the livelihoods of people who depend on it.

Though the larger share of global attention and climate finance has been focused on finding ways to reduce emissions, momentum is gathering to bring climate adaptation to parity as increasingly frequent catastrophes highlight the extent to which the planet has already changed. 

Emblematic of this shifting climate action landscape is the historic agreement at this year’s COP27 to compensate nations for loss and damage, along with the summit’s focus on climate finance. However, there are also broader trends and initiatives across the public and private sectors indicating that longer-term priorities are shifting to accommodate a greater focus on physical climate risk and climate adaptation alongside net zero prerogatives.

In the article below, we explore why 2023 will be a watershed year for physical climate risk and why organizations of every kind will need to account for it in their strategy and operations.

Rebalancing focus between climate mitigation and climate adaptation 

Even casual observers of developments in the climate crisis are at least vaguely familiar with the Paris Agreement. Adopted by 196 countries in December 2015, the Paris Agreement was a landmark moment in multilateral climate negotiations and was the first legally binding agreement committing nations to common targets to combat climate change. 

The headline outcome of negotiations was the goal, agreed by all parties, to limit global warming to well below 2, and preferably to 1.5, degrees Celsius compared to pre-industrial levels. In order to achieve this long-term temperature goal, countries aim for global greenhouse gas emissions to peak as soon as possible, with global carbon neutrality to follow by the middle of this century.

The 1.5 degree target has gained remarkable purchase in the public imagination. Since the Paris Agreement came into force in 2016, governments and organizations across the public and private sector have published net zero strategies, with many well-known brands taking steps to make production of their products and services carbon neutral. 

Given the focus on global warming and net zero targets, when many people think of action to combat climate change their first thoughts are of efforts to avoid and reduce greenhouse gas emissions. This is called climate mitigation – essentially steps we take to stop the climate crisis from getting worse. 

While it is obviously important to drastically reduce global emissions to avert dangerous climate tipping points, an equally important but lesser reported component of climate action is climate adaptation. This refers to measures taken to adapt to the existing consequences of climate change, such as increasing incidences of fires or floods, drought, extreme temperatures, and rising sea levels.  

Though adaptation targets were agreed as part of the Paris accords and at subsequent COP summits, they have received relatively little attention by comparison with climate mitigation, and funding for adaptation lags far behind that provided for emissions reduction. In its 2022 Adaptation Gap report, the UN reported that funds for climate adaptation in developing countries were five to ten times below what is required.

More generally, adaptation aims to manage climate risk to an acceptable level, but understanding and usage of the physical climate risk data that is necessary to inform risk management strategies remains limited. Protecting assets, reducing exposure, and building resilience in local communities requires accurate modeling of the relevant physical climate hazards, a complex process involving frontier climate science, remote sensing, and multiple datasets. 

However, following a year of catastrophic natural disasters that included a summer of record-breaking wildfires in Europe, extreme drought across Africa, and devastating floods in Pakistan, physical climate risk and climate adaptation are finally beginning to receive the attention that is so urgently required. 

The themes explored and initiatives announced at COP27 offer a clear example of the way in which physical climate risk and adaptation are beginning to occupy greater space in the thoughts of the various stakeholders gathered at the summit. 

Most indicative of the growing momentum behind climate adaptation is the launch of the Sharm El-Sheikh Adaptation Agenda. Launched in partnership with the High-Level Champions and the Marrakech Partnership, it is ‘a comprehensive, shared agenda to rally global action around 30 adaptation outcomes that are needed to address the adaptation gap and achieve a resilient world by 2030’. It is also supported by more than 2000 organizations undertaking related work in more than 131 countries. 

The set of targeted outcomes represents the first comprehensive global plan to rally and align both state and non-state actors behind a set of shared adaptation goals. There are 30 ‘Adaptation Outcomes’ in total, set out as urgent global targets for 2030. They are divided thematically into actions designed to improve resilience across five crucial impact systems:

There are also cross-cutting goals that include enabling solutions for planning and finance. Targets include: 

These intersecting targets have consequences for every sector of society and the economy. A keen understanding of physical climate risk and access to clear and reliable climate data is essential across the public and private sector for organizations looking to mitigate their exposure, support communities, and take advantage of the opportunities presented by investing in climate adaptation measures. 

A move towards proactive physical risk management 

In another sign that world leaders and policymakers are getting serious about physical climate risk, major investments have been announced in early warning systems and various other initiatives designed to mitigate the worst impacts of acute climate hazards.

The most ambitious and far-reaching of these initiatives is the action plan announced by the UN to ensure that everyone on the planet is covered by early warning systems. Countries with limited access to early warning systems have a mortality rate eight times higher than countries with strong coverage, but only 50% of all nations currently have access to such data and technologies. Meanwhile, the need for early warning systems is growing more urgent as the number of recorded disasters has increased five-fold, driven largely by human-induced climate change and related extreme weather events. 

In a press release, the UN estimated that its ‘Early Warnings for All’ plan will cost roughly $3.1bn, or the equivalent of 50 cents per person per year between 2023 and 2027. According to the Global Commission on Adaptation, the proposed costs would be ‘dwarfed’ by the benefits provided by wider access to early warning systems. They found that spending $800m on these systems in developing countries would avoid losses of $3bn–$16bn each year. As these systems are a relatively cheap and effective means of protecting lives and assets, investment in early warning systems is touted as the ‘low-hanging fruit’ of adapting to our changing climate.
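As a back-of-the-envelope check, the benefit-cost ratio implied by the Global Commission on Adaptation’s figures can be computed directly. This is an illustrative sketch using only the numbers cited above:

```python
# Implied benefit-cost ratio for early warning systems, using the figures
# cited by the Global Commission on Adaptation (illustrative only).

spend = 800e6                          # $800m spent on early warning systems
avoided_low, avoided_high = 3e9, 16e9  # $3bn-$16bn of losses avoided each year

bcr_low = avoided_low / spend          # lower-bound ratio
bcr_high = avoided_high / spend        # upper-bound ratio

# Each dollar spent avoids roughly $3.75 to $20 of losses per year
print(f"Benefit-cost ratio: {bcr_low:.2f}x to {bcr_high:.0f}x annually")
```

Even at the low end, the avoided losses repay the one-off spend several times over in a single year, which is why these systems are described as low-hanging fruit.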

This preparedness-based approach to climate action is shared by other high-profile initiatives launched at COP27. The Global Shield Against Climate Risks, a bundle of activities spanning climate risk insurance, climate finance, and disaster prevention, is spearheaded by Germany’s BMZ (the Federal Ministry for Economic Cooperation and Development), developed in collaboration with the V20, and unanimously supported by the G7. Key objectives of the Shield include accelerating the disbursement of funds to developing nations following climate-related disasters, devising complementary climate risk financing instruments, and mobilizing additional climate finance for vulnerable countries.

One particularly interesting aspect of the Shield is the way its chief architects at the BMZ are proactive in campaigning for what they call a global ‘integrated risk management strategy’. In laying out its vision for the Shield, they argue the following: 

‘Germany’s development cooperation not only promotes strong mitigation policies, but also strives to meet the challenge by means of integrated climate risk management…

…Climate risk management is an approach which looks specifically at climate risks as part of a comprehensive risk management strategy; the risks range from extreme weather events such as storms and floods to slow-onset environmental changes such as increasing sea levels and desertification…

…Climate risk management is based on comprehensive and continuous risk assessments. Identifying risks (risk analysis) and then assessing the scale of their impact lays the foundation for prioritizing actions that need to be taken and identifying options that cover as many of the risks as possible.’

The Global Shield and Early Warnings for All initiatives are indicative of how this shift in thinking regarding physical climate risk is being translated into concrete actions in transnational policy. This is also mirrored by activities taking place among national governments, who are becoming increasingly cognizant of the need to coordinate their own bespoke adaptation strategies due to growing annual losses resulting from extreme weather. 

For example, the Canadian government recently published its first ever national adaptation strategy, which commits C$1.6bn in federal funding to protect communities and assets against the impacts of climate change. The strategy has five major priorities: improving health, building and maintaining resilient public infrastructure, protecting nature and biodiversity, reducing the impact of climate-related disasters, and supporting the economy. 

The government’s forecasts predict that annual economic losses due to climate-related extreme weather could reach C$15.4bn by 2030. The goal is to reduce those losses with federal investment, with research showing that every dollar spent on adaptation measures should save up to C$15 in costs, including direct and indirect benefits across the entire national economy.

In a statement following the announcement of the new strategy, federal Environment Minister Steven Guilbeault said: ‘The fight against climate change has reached our doorstep. We must not only reduce the emissions that cause climate change, we must also adapt to the changes that are upon us.’

With nations far from the frontline of the worst ravages of the climate crisis now sufficiently impacted that they are taking a proactive approach to adaptation, demand for physical climate risk intelligence will continue to grow as organizations seek to support their adaptation strategies. 

Expanding mandatory risk reporting obligations for the private sector

It’s not just governments and other public institutions that are beginning to think more seriously about physical climate risk. The private sector, and particularly the financial services industry, is also increasingly required to reckon with its exposure to physical climate risk.

Throughout 2022, regulators in multiple jurisdictions including the UK, Japan, Canada, and the European Union took measures to enhance and harmonize climate risk reporting standards, usually in alignment with the TCFD’s recommendations. Many of the recommended disclosures under the TCFD framework require organizations to describe their exposure to physical climate risk, explain their processes for identifying, assessing and managing these risks, and outline the metrics and targets they are using to measure their climate risk management activities. 

As these recommendations have become increasingly embedded in various regulatory frameworks, more organizations are compelled to think systematically about physical climate risk. 

Standards-setting bodies such as the International Sustainability Standards Board (ISSB) have made substantial progress in their own work developing a comprehensive global baseline of sustainability reporting standards.

Established at COP26 under the aegis of the IFRS Foundation, the ISSB’s objective is to provide better information for better economic investment decisions. During the previous 12 months, the IFRS has delivered on its commitments to establish the board, consolidate the voluntary disclosure landscape, consult on proposed global standards, and embed a global footprint by establishing functions through its multi-location model and partnership with various international stakeholder organizations.

The ISSB is on track to finalize its new set of global standards in 2023, with several developments and further steps towards implementation announced in November in and around COP27. For example, the board confirmed that they had voted to require companies to use climate-related scenario analysis to report on their climate resilience and identify climate-related risks and opportunities to support their disclosures. 

They also shared details of their new Partnership Framework, developed in collaboration with more than 20 organizations, and designed to support capital market participants as they prepare to implement ISSB standards once they are announced in the new year. The framework is supported by a global coalition of public and private organizations, while its primary focus is on facilitating implementation across all economic settings to establish a truly global baseline. 

The ISSB is working across jurisdictions to maximize the interoperability of its standards and develop international alignment on key climate disclosures. The board’s work has received substantial endorsement at the policy level, with many jurisdictions signaling a commitment to incorporating the standards into their domestic reporting systems. 

For example, a joint statement from the G7’s finance ministers and central bank governors affirmed that they will ‘take steps to operationalise’ the ISSB recommendations and principles in their own jurisdictions. It also reiterated their ‘commitment to move towards mandatory climate-related financial disclosures’ and welcomed the development of a global baseline standard. 

Given the momentum behind mandatory disclosures, driven by both the TCFD framework and forthcoming ISSB standards, companies should expect more countries and regions to adopt new reporting standards throughout 2023. It is important for organizations to prepare for what is ahead by following regulatory developments and familiarizing themselves with the data and methodology required to accurately measure their physical climate risk exposure. 

We need an intelligent climate data infrastructure at global scale

As we look to the year ahead, it seems that governments, businesses, and communities are increasingly aligned on the urgency with which society needs to adapt to a climate that continues to change at an accelerating pace. 

In order to make informed decisions on strategies to identify and mitigate risk, develop resilience, and protect lives and assets, organizations of every kind will need to integrate physical climate risk intelligence into their workflows. 

Physical climate risk data is crucial to adaptation efforts, whether you are a business required by law to quantify and disclose your climate-related risk exposure, a carbon project developer looking to identify the most resilient site for a long-term sequestration project, or a government agency seeking to implement measures to protect coastal communities from sea-level rise, tropical cyclones, and flooding.

At Sust Global, we are on a mission to transform the latest in machine learning, remote sensing and frontier climate science into the intelligent climate data infrastructure that organizations need to thrive on a changing planet.

We use cutting-edge climate scenarios, models, and satellite-derived data to serve asset-level insights using patented geospatial machine learning techniques, offering cloud-native tools for climate scenario analysis, risk assessment, and sustainability reporting.

With our product, users can:

If you would like to speak with us about integrating physical risk analytics into your strategy and operations, fill out the form below and we will be happy to speak with you about your specific climate risk needs. 

The organization 

Agendi is a boutique sustainability strategy consultancy working with Fortune 500 organizations across a variety of sectors. Their work focuses mainly on environmental sustainability, along with some social and governance projects. Their sustainability work includes environmental consulting on material issues such as quantifying climate impacts and greenhouse gas accounting, as well as strategy planning: setting science-based targets, running feasibility assessments, and defining strategies to work towards meaningful reductions in climate impacts. Agendi also supports clients with reporting, e.g. TCFD-aligned climate-related information disclosures, plus work on ESG rating schemes.

The challenge

Prior to working with Sust Global, Agendi completed high-level qualitative studies using publicly available tools coupled with in-house modeling. They were keen to find a physical climate risk data provider using the latest climate and data science in order to reduce uncertainty and provide rigorously validated projections efficiently. They also needed additional granularity in order to supply clients with site-specific data. Easily accessible and comprehensive datasets, incorporating multiple spatial resolutions, climate scenarios, and flexible time horizons, were also an important requirement, given the cross-functional ways in which client companies use the climate data to fulfill multiple climate objectives. For example:

Finance, facilities, business resiliency and legal teams are additional end users of this data.

Sust Global’s solution

Agendi has been able to provide enhanced physical climate risk reporting with the support of Sust Global’s data. The multi-hazard, multi-scenario dataset, with projections out to the year 2100, is expansive and easy to navigate, meaning it can be deployed to meet the varying requirements of clients across different teams and industries.

One example of how they supported a client project with Sust Global was by using our climate data to identify short- and long-term site risks across a portfolio. The client had a variety of owned assets in locations with long-term leases and little flexibility to relocate. Agendi were able to select all the locations and obtain insights on their respective risk exposure through our platform. They then ran an interactive business impact workshop with the client, running through the risk data, examining what resilience measures are in place, and comparing this with the vulnerabilities highlighted in the risk assessment. They were also able to look at impact bands further down the line and obtain granular reports and heatmaps which fed into existing risk management systems and risk team analyses, flagging sites at high exposure as well as any owned assets which had to be added to the risk register. 

The increasing demand for TCFD reporting among clients has led to a consolidation of ESG platforms and desire for a more streamlined approach, which is supported by Sust Global’s flexible data integrations. Heatmaps and data visualizations have also formed an important component of cross-functional use of climate risk analyses. 

In the future, deeper methodological insights around site exposure will add further value to client reports. Loss modeling assessments across multiple sectors and with more granular detail about thresholds of uncertainty will also strengthen their analyses.

If you would like to learn more about how Sust Global can help your organization assess physical climate risk, get in touch by completing the form below.

As governments make a concerted effort to adapt to the impact of climate change and limit further increases in average global temperatures to below 2°C above pre-industrial levels, economic decision-makers are under increasing pressure to produce climate-related information disclosures. 

Though they began as voluntary guidance, the TCFD’s disclosure recommendations have become increasingly embedded in the regulatory frameworks of multiple jurisdictions, impacting thousands of organizations participating in global financial markets.

In the article below, we provide comprehensive guidance on the TCFD’s recommended climate-related disclosures and what they mean for you.

What is TCFD? 

In the aftermath of the Paris Agreement, the Financial Stability Board created the Task Force on Climate-related Financial Disclosures (TCFD) to develop guidance on the types of information that companies should include in financial disclosures to support participants in financial markets (e.g. investors, lenders, insurance underwriters, banks) in appropriately evaluating and pricing risks related to climate change. The Taskforce includes 31 members from across the G20 and represents both preparers and reviewers of financial disclosures.

In 2017, with the aim of improving and increasing reporting of climate-related information, the Taskforce released a series of recommendations designed to help companies create more meaningful disclosures that support informed capital allocation. The recommendations are grouped into four thematic areas that represent the core elements of how companies operate: governance, strategy, risk management, and metrics and targets. 

The four themes are interrelated and are supported by a total of eleven disclosures that build out the framework with information that should enable investors to understand how the reporting organization considers and assesses climate-related risks and opportunities. Each recommendation is intended to be widely applicable and easily adoptable across various sectors and jurisdictions, and is designed to solicit forward-looking, actionable business intelligence that can be incorporated into existing mainstream financial filings. 

Following the release of the recommendations, there has been a five-fold increase in adoption of the TCFD framework on both a voluntary and mandatory basis, and the recommendations now have more than 3000 supporters globally. 

Why are the recommendations important?

Following the Paris Agreement in December 2015, nearly 200 governments agreed to strengthen the global response to the dangers posed by climate change by holding the increase in average global temperature to below 2°C above pre-industrial levels, and to strive as far as possible to limit the increase to 1.5°C above pre-industrial levels.

These commitments, and the large scale and long-term nature of the crisis we face, have profound implications for economic decision makers in both the public and private sectors. Furthermore, there is currently only a nascent understanding within the financial system of the financial risks posed by climate change. 

Within this context, there is growing demand from investors and creditors for clear, actionable, consistent, and reliable climate-related risk information from companies. Investors and creditors in particular are becoming increasingly vocal in their demands for credible risk disclosures from their investees and debtors as they seek to understand their exposure to the financial risks and opportunities associated with the physical impact of climate change and the transition to a low carbon economy.

Because so little is currently understood about the nature of climate-related risk in the financial system, this paucity of information might well lead to the mispricing of assets and misallocation of capital, with disastrous consequences for the stability of markets.

However, as more entities adopt TCFD-aligned reporting practices, the quality and quantity of climate-related risk information will increase across the board, providing more useful insights for decision makers. 

Where is TCFD reporting mandatory?

A number of jurisdictions already require organizations with certain characteristics to complete mandatory TCFD-aligned climate disclosures:

TCFD-aligned reporting is also common practice among organizations in jurisdictions where there is not yet any regulatory requirement to do so. More than 13,000 companies in the US submit annual climate disclosures on a voluntary basis through the CDP system, which is aligned with the TCFD framework. Organizations often undertake voluntary disclosures to gain a competitive advantage with investors, tackle their exposure to climate risks, and identify new climate-related opportunities. 

It is also beneficial for organizations to adopt robust reporting practices now ahead of a regulatory landscape that is shifting towards mandatory reporting requirements. In the US, the SEC released proposals in March 2022 to enhance and standardize climate risk reporting for investors. If adopted, these regulations will impose requirements more expansive in scope than those of the TCFD framework. 

Similarly, the EU is adopting the Sustainable Finance Disclosure Regulation (SFDR), which imposes new mandatory disclosure requirements on financial market participants and financial advisors in the EU, FMPs with EU shareholders, and those marketing themselves in the EU. The first deadline for mandatory disclosures passes in January 2023 and, while the two frameworks share many similarities, the SFDR has a broader remit than TCFD recommendations and covers all ESG risks in addition to climate change. 

Finally, there have been moves across the G7 to mandate disclosures in alignment with TCFD recommendations, with ‘historic steps’ having been agreed in June 2021. 

What are the benefits of adopting TCFD recommendations?

Organizations that adopt TCFD-aligned reporting practices receive several benefits. Firstly, producing higher quality disclosures in line with the 11 recommendations yields meaningful business intelligence that can support decision making across a range of corporate and financial activities: 

Risk management

Measure and evaluate climate-related risks more effectively across your company’s physical locations, supply chain, customers, and competitors. With a clear view of where vulnerabilities exist, the relevant stakeholders can undertake meaningful contingency planning. This might involve accepting higher CapEx in the short term to reduce future increases in OpEx related to physical climate risks, e.g. investing in decentralized renewable energy to ensure production lines can continue through chronic hazards such as drought and after extreme weather such as cyclones.

Capital allocation

Make well-informed decisions about where and when to allocate your capital. For example, investors might wish to tilt an equity or fixed income fund away from investments exposed to significant physical climate risk (such as a supply chain highly exposed to the impact of natural disasters) or transition risk (i.e. carbon-intensive industries). 

Strategic planning

With a comprehensive analysis of risks and exposures over the short, medium and long term, investors, directors and other stakeholders can place sustainability and risk management at the heart of their corporate and investment strategies. 

To learn more about the above, read our case study explaining how a leading plastics manufacturer uses Sust Global’s data to produce detailed and accurate climate risk TCFD disclosures that inform their corporate sustainability strategy and risk mitigation activities. 

In addition to the benefits that companies stand to gain from access to meaningful business intelligence and strategic insights, those who adopt robust reporting practices can gain a competitive advantage with investors. In general, investors are incredibly supportive of the Taskforce’s recommendations, and are increasingly inclined to take an activist approach to investing when it comes to climate issues. 

A recent survey of institutional investors based in the US revealed that though there is a large consensus that companies that perform well in ESG matters merit a premium, 86% of investors believe that companies overstate or exaggerate their progress when disclosing results. By producing clear and accurate climate-related disclosures, companies can make it easier to raise capital by building trust with investors (who are themselves facing an enhanced regulatory landscape). 

What are the recommended disclosures? 

As explained above, the framework is organized into four related thematic groups with eleven supporting disclosures. Broadly speaking, the recommendations prompt the reporting entity to either describe an activity or disclose a material risk or mitigation strategy. Outlined below are the four themes and the recommendations grouped within each category.


Governance

Reports should disclose the organization’s governance around climate-related risks and opportunities. This includes:


Strategy

Organizations should disclose the actual and potential impacts of climate-related risks and opportunities on their businesses, strategy, and financial planning. Reports should:

Risk management

Organizations should disclose how they identify, evaluate, and manage climate-related risks. Reports should:

Metrics and targets

Organizations should disclose the metrics and targets used to assess and manage relevant climate-related risks and opportunities across their portfolio and/or operations. This includes: 

There are various acceptable metrics under the framework, and the metrics and targets an organization selects to measure its exposure to risk and climate impact will depend on the nature of its operations. Metrics typically fall into two categories: process metrics, which reflect governance processes for managing exposure to climate risk, and outcome metrics, which measure the climate risks and impacts linked to operations and/or investments.

For example: 

More information on metrics and targets is available in the TCFD’s sector-specific guidance.

How are climate risks defined under the framework? 

The TCFD divides climate-related risks into two categories:

Physical risks

Physical risks caused by the changing climate can be either acute or chronic in nature. Acute risks are largely event-driven: exposure to the damage caused by increasingly extreme weather events such as floods and wildfires. Chronic risks are posed to businesses by long-term shifts in climate patterns, such as sustained higher temperatures that lead to rising sea levels and frequent, intense heat waves.

For example, there has been a substantial increase in weather-related insurance claims over the past decade, driving up the amount insurers have to pay out to their customers to cover damages. Without an adequate understanding of physical climate risk and the exposure of insurable assets in a given geography, it is difficult for insurers to assess their financial liabilities and accurately price the premiums on their products. 

Transition risks

These are financial risks that emerge as the world shifts to a lower-carbon economy. This transition entails extensive policy, legal, technological, and market changes to address the need to adapt to and mitigate climate change. For example, carbon-pricing initiatives introduced by governments to reduce GHG emissions use market mechanisms to pass the cost of emissions on to the emitter. This represents a substantial financial risk for organizations engaged or investing in carbon-intensive activities.

Companies are also at increasing risk of litigation as the value of losses and damages caused by climate change grows. In recent years, a range of actors, including property owners, public interest groups, shareholders, and governments, have brought claims to court to hold organizations accountable for their failure to mitigate the impact of climate change. One such example is the case of Luciano Lliuya v. RWE AG.

An indigenous Peruvian farmer, Saúl Luciano Lliuya, is suing the German energy company RWE AG for the costs of preventing the glacial Lake Palcacocha from flooding his hometown of Huaraz. Lliuya is seeking $18,239 in damages, which would be equivalent to 0.47% of the cost of flood prevention, on the basis that RWE AG is responsible for 0.47% of the total sum of global post-industrial carbon emissions. If the courts decide to grant Lliuya this settlement, this would set a groundbreaking precedent that corporations can be held legally liable for their contributions to climate change, laying the foundation for much larger suits against heavy industrial polluters. 
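
The proportional-liability arithmetic behind the claim is simple to reproduce. A minimal sketch, using the figures from the case above (the helper name is ours, not from the filing):

```python
# Proportional climate liability: damages sought scale with the
# defendant's share of global post-industrial carbon emissions.
def proportional_claim(total_cost: float, emissions_share: float) -> float:
    """Damages = defendant's emissions share x total prevention cost."""
    return total_cost * emissions_share

# RWE AG's alleged share of global emissions in the case: 0.47%.
# Working backwards from the $18,239 sought in damages, the implied
# total cost of flood protection for Huaraz is roughly $3.88 million.
implied_total_cost = 18_239 / 0.0047
claim = proportional_claim(implied_total_cost, 0.0047)
```

If the precedent holds, the same formula scales directly with a defendant's emissions share, which is why much larger suits against heavier emitters would follow.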

Further financial risks arise for certain organizations due to technological advances that support the transition to a lower-carbon, energy-efficient global economy. The development of emerging technologies such as renewable energy, carbon capture and storage, and battery storage will impact the competitiveness of organizations in certain industries, their production and distribution costs, and consumer demand for their products and services. 

How are climate opportunities defined under the framework? 

Many opportunities emerge through the process of ‘creative destruction’ that takes place during the disruption of economic systems and the emergence of new technologies intended to help us mitigate and adapt to the impact of climate change.

While climate-related opportunities vary according to the region, market, and industry in which an organization operates, the TCFD has identified several areas of opportunity that cut across domains. These include cost savings through resource efficiency; the opportunity to develop or invest in products and services that bring new resilience or climate adaptation solutions to the marketplace, and with them access to new markets; and improved resilience across an organization's own supply and value chains.

For example, companies and investors that engage proactively with new markets or asset types will find opportunities to diversify their portfolios and position themselves well for the transition to a lower-carbon economy. Harvest Fund Management, an asset manager based in China, generated a 100x return for clients, while also contributing to a sustainable future, by investing in a solar photovoltaic company in 2013 that subsequently grew to become the largest global producer of photovoltaic wafers and modules.

How do I implement the recommendations into my existing reporting and risk management practices?

TCFD recommendations are not a one-size-fits-all model and best practice will depend on the specific nature of an organization. In practice, the first step in completing a TCFD-aligned disclosure is normally to complete an initial climate risk assessment via your existing governance and risk management arrangements. You can build out this initial assessment with further, more specific insights regarding exposure to physical risks (including potential damage to assets and disruption to operations) and transition risks posed by policy and legal changes throughout your value and supply chains. 

To measure and assess physical and transition risks, organizations need access to reliable, robustly validated climate risk and impact data analytics. This includes data on your carbon emissions, and the climate risk datasets and analytics required to run effective scenario analysis that pinpoint the exposure of your physical assets to climate hazards including wildfires, water stress, sea level rise, and more. It will also be necessary to map exposure to risk under different climate scenarios, including low, moderate, and high emissions models. 
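
A minimal sketch of what such a scenario mapping might look like in practice. Everything here is illustrative: the asset names, scores, and threshold are invented for the example, not real Sust Global outputs:

```python
# Hypothetical sketch: tabulating asset-level hazard exposure under
# low, moderate, and high emissions scenarios. All values invented.
EXPOSURE = {
    # asset: {scenario: wildfire risk score on a 0-1 scale}
    "warehouse_ca": {"low": 0.20, "moderate": 0.35, "high": 0.55},
    "office_ny": {"low": 0.05, "moderate": 0.08, "high": 0.12},
}

def worst_case(asset: str) -> float:
    """Exposure under the highest-emissions scenario considered."""
    return EXPOSURE[asset]["high"]

# Flag assets whose worst-case exposure crosses an (arbitrary) threshold.
high_risk_assets = [a for a in EXPOSURE if worst_case(a) > 0.5]
```

The point of running multiple scenarios is visible even in this toy: an asset that looks acceptable under a low-emissions pathway can cross a risk threshold under a high-emissions one.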

TCFD disclosures are typically included in an organization’s annual mandatory mainstream financial disclosures, usually in a specific climate risk section in a strategy report. This is a straightforward way to integrate climate-related risk disclosures into your annual report, and you should make explicit reference to the eleven recommendations as part of the write up. 

At Sust Global, our sophisticated climate analytics simplify the process of producing TCFD-aligned climate disclosures. Through either our intuitive geospatial intelligence platform or our developer-friendly API, we can provide: 

If you would like to learn more about how we can help your organization implement scalable and compliant physical risk assessments, and produce credible and transparent disclosures, fill out the form below to speak with a member of our team.

The organization

Blue Forest’s mission is to use innovative finance to achieve sustainable environmental solutions, with a goal of increasing climate and ecological resilience by communicating the environmental and economic value of healthy ecosystems. Their work focuses on identifying the multiple benefits provided by resilient ecosystems such as protected air and water quality, reduced flood and fire risk, and enhanced biodiversity. The financing model relies on quantifying the benefits of ecosystem restoration and habitat management with local beneficiaries to enable traditional and new long-term funding commitments for ecosystem restoration projects. Private investors then provide the upfront capital for projects, accelerating the pace and scale of implementing restoration activities.

The challenge

In order to quantify risks from acute hazards, such as wildfire, Blue Forest relies on earth observation, field data, and local expert knowledge. While satellite observations and historic weather records enable organizations to characterize risk trends, data uncertainty around climate change poses a significant challenge in understanding future hazard risk. Future hazards are unlikely to resemble the recent past, both in location and intensity, and understanding the range of possible futures improves our understanding of current risk to infrastructure, communities, companies, and the environment.  

Blue Forest is actively building approaches and tools for hazard mapping in new geographies. Much of their work has focused on forest management projects in California and the Pacific Northwest, where substantial local knowledge enables them to develop high-precision models of wildfire risk. To expand access to their suite of tools for conservation finance, Blue Forest seeks validated, high-resolution datasets on future hazard impacts and climate change vulnerability. New pilots are also exploring financing opportunities in coastal and marine ecosystems, which introduces a broader range of hazard risks and locations required for assessments. 

Sust Global’s solution

Sust Global evaluates future risk from water stress, drought, and wildfires. High-resolution data has helped Blue Forest to expand its geographical focus, by allowing for resilience modeling in areas that previously would have been too data intensive to undertake. By incorporating future risk into the economic value of ecosystem restoration projects, Sust Global can help Blue Forest more comprehensively assess ideal locations and projects for leveraging innovative finance.

In addition to providing event probabilities and other numerical estimates of hazard severity, the risk products from Sust Global employ a straightforward stoplight approach, indicating a high, medium, or low level of confidence. These outputs can assist Blue Forest in the communication of climate risk with stakeholders who don’t have a science background. It may also provide a valuable means to further project goals where existing data are limited. The spatiotemporal resolution and coverage of Sust Global’s data has also provided additional detail to project assessments: for example, water risk is available from the watershed level to regional and global coverage. 

If you would like to learn more about how Sust Global can help your organization assess physical climate risk, get in touch by completing the form below.

One of the most impactful changes brought on by climate change is the increased frequency and intensity of wildfires. As temperatures warm and moisture is sucked out of the air and soil, a spark is more likely to set off a conflagration (i.e. a vast and extensive wildfire). 

The devastating consequences of this are felt on every continent. Analysis from the UN Environment Programme's Frontiers 2022 report shows that, between 2002 and 2016, 423 million hectares of the earth's land surface burned annually. A related analysis estimated that over 13 million individual fires occurred globally from 2003 to 2016, each lasting 4-5 days on average.

However, higher temperatures driven by climate change are not the only cause of increased fire risk: conditions on the ground also play a critical role. Fires don't occur in a sandy desert, the middle of a lake, or a concrete urban area. The right combination of burnable vegetation on the ground and warm, dry weather needs to be present to spark a truly devastating blaze.

Estimating true fire risk requires the analysis of multiple complex data points that together explain how weather, biomass, and other factors interact. As climate change generates increasingly novel weather conditions, historical data is no longer a reliable indicator of fire risk in the future. Across the world, we’ve never seen rainforests this dry or savannahs this wet. This is why “global warming” might also be characterized as “global weirding” – everything will be stranger in the climate future we are creating. 

This “global weirding” creates unique modeling challenges: this blog will set out how the data scientists at Sust Global are focused on solving them.

Table of contents

A brief history of the past

Historically, fire risk has largely been determined by a location’s biome. A biome is a categorization of land cover and species that are present in a certain area. For example, deserts, rainforests, and arctic tundra are all examples of terrestrial biomes. Rainfall level is one of the most important factors in determining a location’s biome, and thus its fire risk.

To show how biomes relate to fire risk, the picture below illustrates different biomes along an increasing gradient from extremely wet rainforests, through drier woodlands and savannahs, to chaparral and desert.

Types of vegetation communities along a rainfall gradient from very high (left), to very low (right).  This rainfall gradient corresponds very closely to biomass levels, with most fire occurring in the middle.

Under a stable climate, you can tell a lot about the weather to expect in a location based on the vegetation communities. High-biomass rainforests (as pictured on the left) are located in areas with high amounts of year-round rainfall. Far above the soggy forest floor, the sky is usually filled with thick clouds. On the right, low-biomass deserts have very little rainfall, with bright, dry skies above, and rare, infrequent rain events.

Given that vegetation and rainfall are linked, you can also tell a lot about fire risk by looking at the biome in a location. Most fires occur in a habitable zone of precipitation. Go too far to either side of the precipitation/biomass gradient, and there will be either too much moisture for fires to be possible, or too little biomass for anything to burn. But biomes in the middle, such as open woodlands, savannahs, and chaparral, possess enough vegetation to fuel fire while also being dry enough to create potential fire conditions.

This assumes a strong connection between precipitation and vegetation, which is the case under a stable climate: the types of plant communities found in an area are closely shaped by prevailing climate conditions. The dryness of an area limits the amount of biomass that can accumulate, with wildfire playing an important role in occasionally clearing out the excess biomass.

But under “global weirding,” that connection will be increasingly decoupled: deserts will likely become wetter and rainier, and parched rainforests will not get the levels of precipitation that previously sustained thick vegetation. In other areas, the timing of rainfall will shift: monsoons will come late, and wet seasons will come early.

While the figure above shows conditions under a stable climate, the figure below shows what happens when rainfall and biomass decouple.  Suddenly it is possible for areas to have very low levels of rainfall with high biomass levels.  You can see this in the record-breaking Amazon rainforest wildfires, where dense rainforest biomass is intersecting with unnaturally dry conditions.  Similarly, the woody California chaparral has been seeing the worst drought in 1,200 years, with precipitation levels more typical of bare desert. These are areas where 20th century biomass is getting exposed to 21st century levels of drought, presenting a large risk of extreme future fires, as well as a significant modeling challenge.

Decoupling of rainfall and biomass levels under climate change.  In areas along the diagonal, biomass is proportional to rainfall.  However, as precipitation levels shift, terrestrial landscapes will increasingly be located off the diagonal, and areas with excess biomass will see extreme wildfires.

Past risk exposure is no indication of future risk exposure

Quantitative approaches to risk management often assume that the past is a guide to the future. In most cases this is reasonable: insurance companies estimate the risk of catastrophe by assuming that historic frequencies tell us something about future risks. For example, if there were two 5.0-magnitude earthquakes in the past hundred years, earthquake insurance may be priced assuming a 2% annual chance of a 5.0-magnitude earthquake. Whole subfields of actuarial science, and large statistical and catastrophe modeling teams at reinsurance companies, operate under the assumption that statistics about the past are a meaningful guide to the future.
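
The frequency-based logic above can be sketched in a couple of lines, using the hypothetical earthquake counts from the example in the text:

```python
def annual_probability(event_count: int, record_years: int) -> float:
    """Classical actuarial estimate: treat historic frequency
    as the future annual probability of the event."""
    return event_count / record_years

# Two 5.0-magnitude earthquakes in a 100-year record -> 2% annual chance.
p_quake = annual_probability(event_count=2, record_years=100)
```

The estimate is only meaningful while the underlying process is stationary, which is exactly the assumption that climate change breaks.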

This approach no longer works for climate-induced weather disasters: floods, hurricanes, and wildfires will change in frequency and severity. Areas that previously saw occasional grass fires are starting to see catastrophic, tree-torching events. Hurricanes and typhoons are making landfall far north of where they normally stray, in places like Canada and Alaska. And what used to be 1,000-year floods are occurring with improbable frequency. This makes it very challenging to accurately predict the future risk of these disasters.

In the case of wildfires, much of the change in severity will come from the disrupted connection between precipitation and vegetation. This explains the devastating wildfires in the Amazon rainforest, which have been largely driven by extremely low levels of rainfall. Similarly, the American West has been experiencing its worst megadrought in 1,200 years, which has been associated with record-breaking wildfires. Similar patterns of drying and increasing fire intensity around the globe are why the UNEP is predicting increases in "extraordinary" and "uncontrollable" wildfires.

Much of the framing around these unprecedented fires focuses on drought – anomalously low rainfall. Another framing is that these very low levels of precipitation are not “anomalous” in the sense that they are lower than normal – rather, they are the new normal. As typical rainfall levels shift, vegetation communities will change. Parts of the Amazon rainforest will become savanna, while parts of California’s scrubland will become bare desert. In the process of these changes, a lot of excess biomass will be burned as wildfire.

Dry, hot, fire weather will hit areas that have never previously seen values so extreme, which poses a significant modeling challenge. Using historic data within a given range to predict future data in the same range is straightforward. But future fire weather will be well outside what has been historically observed, beyond what data scientists call a model’s “feature space”.

The KBDI fire weather index is a reference scale that combines rainfall, temperature, and recent soil wetness to estimate fire risk, ranging from 0 (no fire risk) to 200 (perfect fire weather). Historic observations of KBDI and fire occurrence in a given biome can be used to estimate fire risk.

For example, we can say that in Colorado's grasslands, when the KBDI fire weather index is 0, there is no risk of fire, because we have never seen a fire when the KBDI is 0. But when that value reaches 75, there has historically been a moderate risk of fire. For example, say the KBDI value has reached 75 on 20 separate occasions in a state park, and on one of those occasions there was a wildfire in the park.

We could then say that a KBDI of 75 is associated with a 1-in-20, or 5% chance of fire. But what about 125, or 150 – levels of fire weather we’ve never before seen in Colorado? We know that the risk will almost certainly be higher – but how much higher, and how much further will the fires spread when the fire weather index reaches new levels?

This is a challenge when trying to estimate future fire risk, as climate models say we will soon see unprecedented levels of fire weather around the world. For example, the figure below shows the distribution of KBDI values within the predominant biomes of four states. In all four states, fire weather will become much more common – to a degree never before seen in the historic record. 

It is clear that the risk will increase, but specific predictions are harder to interpret: for example, how much will the probability increase? Which states will be more affected than others by wildfires resulting from the changing climate? Which types of properties are most at risk?

Comparing the distribution of monthly maximum fire weather values (using the KBDI fire weather index) from the recent past to mid-21st century across four states. Many of the future expected KBDI values are outside of the range of historic observations, making modeling the impact of these values challenging.


Modeling the future

The expected fire future – with unprecedented heat and drought in areas with large amounts of biomass – surely means that fire will get worse in many places. But good climate intelligence requires quantitative and spatially specific risk estimation. We know fire risk will increase – but how much higher are future fire probabilities than current fire probabilities? Similarly, we know that the American West will see more fires – but what will the 2040s look like for a specific address in the California suburbs? These are the questions that Sust Global modeling can answer.

Here, we give an overview of two of our modeling approaches to estimate fire risk in an increasingly weird and uncertain future, where traditional modeling approaches don’t work.


The most common way to model the future of the climate is to use simulations. This modeling approach is theoretical rather than empirical. A simulation doesn’t use the past as a guide: it instead uses baseline assumptions to form (usually chaotic) pictures of the future. Global climate models, or GCMs, take a whole suite of assumptions about the world – everything from trajectories of human population growth and economic development to laws of fluid dynamics and atmospheric chemistry – and simulate the world’s climate future: hour by hour, grid cell by grid cell.

Many of these GCMs include assumptions about wildfire and vegetation. They simulate how fire depletes vegetation, how cities expanding with population growth reduce the amount of burnable vegetation, and how economic conditions affect the availability of fire ignition sources. These simulations can't give precise predictions, but running them under varying conditions provides an idea of the range of possible futures.

One challenge with GCMs is that they don't offer precise scenarios: for example, they may only estimate how frequently fires will happen over large, 100-kilometer areas. Sust Global improves these fuzzy simulations by using patented AI to "zoom in" on the simulations and give a sharper, high-resolution projection of future fire risk, an approach we refer to as fireSRnet. This approach uses a cutting-edge synthesis of climate simulations and artificial intelligence to make the best possible predictions of future fire risk.

Using strong scientific priors to inform model structure

Another way that Sust Global deals with the challenges of estimating future fire risk in a weird climatological future is to use models uniquely designed for making predictions outside of the available feature space.

It is very common in AI and machine learning to use models that perform well within the range of values used to train them. Many of the most advanced neural networks can outperform humans on tasks ranging from medical diagnosis to games like chess and Go. However, these models typically fail when applied to new contexts or new ranges of data. For example, a medical prediction model trained on a dataset of young white men will perform poorly when used to diagnose older Black women.

Similarly, a model trained on the historic relationship between weather, vegetation, and fire risk would perform well on predictions over that same time period, or in a future under a stationary climate. However, such a model will perform poorly on the actual future under climate change, because that future will present novel combinations of weather and vegetation conditions outside of what has arisen historically, and many of those novel contexts are precisely where future fire risk will be highest.

There are a number of strategies that work in these scenarios, but they may involve re-thinking the basics taught in an introductory machine learning course. Firstly, there are model forms, such as linear regression and oblique decision trees, that extrapolate from observed trends to make predictions outside of the feature space. These will outperform the many machine learning models that simply repeat their prediction at the bounds of the observed feature space.

As an example of models with different approaches to predictions outside of the feature space, see the graph below. A non-linear phenomenon with noisy outcomes is shown in gray. Points in black are "observations": known values used to fit the model. Here, we train both a random forest and a piecewise linear regression to see how they perform beyond the range of observations, on a new observation at the red point. The random forest does better than the linear regression within the range of the black points, i.e., within the feature space.

However, beyond the range of the observed data, the true phenomenon shifts upwards, a trend that the linear regression catches but the random forest misses. This shows how simpler models like linear regression can outperform more sophisticated models when making predictions outside of the feature space.


An illustration of models that perform differently on data outside of the feature space. The random forest model performs better than the linear regression over the range of the black points, but then does worse when predicting the new observation at the red point. Note that this is just a representation of modeling approaches, so the x and y axes could represent any two variables: x could be KBDI and y fire occurrence, x could be time and y stock prices, x could be temperature and y hospitalizations, and so on.
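
The behavior illustrated in the figure can be reproduced with a minimal, self-contained sketch. The data is synthetic, and a nearest-neighbor predictor stands in for the tree-style model, since, like a random forest, it cannot predict values beyond those seen in training:

```python
# Synthetic training data: roughly y = 2x + 1, with noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.2, 4.9, 7.1, 9.0]

def linear_fit(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def tree_predict(x, xs, ys):
    """Tree-style prediction: the value of the nearest training point.
    Beyond the training range this clamps to the boundary value."""
    i = min(range(len(xs)), key=lambda i: abs(xs[i] - x))
    return ys[i]

a, b = linear_fit(xs, ys)
x_new = 8.0                              # far outside the training range
linear_pred = a * x_new + b              # continues the upward trend
tree_pred = tree_predict(x_new, xs, ys)  # stuck at the last observed value
```

The linear model keeps following the fitted slope at `x_new`, while the tree-style predictor can only return a value it has already seen, exactly the clamping behavior that makes such models unreliable under out-of-range fire weather.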


Beyond model form, there are other considerations that help us make predictions outside of the available feature space. Firstly, models in this context need to be interpretable. Scientists need to understand why a model makes its predictions in order to judge whether those predictions are plausible. Even if we have never seen a rainforest with a KBDI of 150, are our model's predictions of fire risk in that context reasonable? Are they what a scientist would expect to see?

Secondly, it is very important that such models are simple. When making predictions about an uncertain future, overfitting is a huge issue, and even using regularization methods on the training data may still leave the model overfit with respect to data outside the feature space.

Finally, it is important to test the model on unseen future data. Textbook cross-validation involves holding out random portions of the data for validation. But in the case of a weird future, it is important to test the modeling approach by specifically leaving out the most recent observations: train a model on all available historical data except the last few years, then test it on those years.
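
A minimal sketch of this temporal holdout, with illustrative year labels rather than real data:

```python
# Temporal holdout: instead of random cross-validation folds, hold out
# the most recent years so the model is tested on the newest, and
# likely "weirdest", part of the record.
records = [(year, f"obs_{year}") for year in range(1990, 2023)]

def temporal_split(records, holdout_years=3):
    """Train on all but the last `holdout_years` years; test on those."""
    cutoff = max(year for year, _ in records) - holdout_years
    train = [r for r in records if r[0] <= cutoff]
    test = [r for r in records if r[0] > cutoff]
    return train, test

train, test = temporal_split(records)
# `test` now contains only the most recent years, the part of the
# record most likely to show emerging climate signals.
```

A random split would leak recent, climate-shifted observations into training; the temporal split forces the model to demonstrate skill on conditions it has never seen.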

In the case of wildfires, we are already starting to see signs of "global weirding": wildfires in the winter, stronger wildfires, and wildfires in new areas. If a model fails to predict these emerging signals of climate change with any accuracy, it will surely perform even more poorly on a more distant and more unusual future. All of these considerations are key to making successful forecasts about the future climate and its impacts, and are integral to how we think about physical risk exposure modeling at Sust Global.

To learn more about our physical climate risk capabilities and our wildfire projection product, reach out to the team today by filling in the form below: