Rio de Janeiro flood risk assessment#

This notebook presents a workflow for assessing pluvial flood risk for the Acari River Basin under different climate scenarios and adaptation strategies using HydroFlows. The workflow integrates local data, incorporating real adaptation measures outlined in the city’s new master plan. These measures were identified through discussions with local partners, ensuring that the analysis reflects both existing conditions and planned interventions.

To develop a comprehensive flood risk assessment, we created hazard (SFINCS) and impact (FIAT) models using local datasets, either in their raw form or after preprocessing. Whenever preprocessing was required, those steps were explicitly included in the workflow to maintain full reproducibility. In cases where local data were insufficient, global datasets were used to fill gaps and meet model requirements.

In addition, local precipitation data was used to generate design events for different return periods. These events were then scaled using climate change (CC) scaling techniques to simulate future climate conditions. The study considers three climate scenarios:

  • The Present (Historical) scenario with no temperature change.

  • A Moderate Emissions (RCP4.5, 2050) scenario with a projected temperature increase of +1.2°C.

  • A High Emissions (RCP8.5, 2050) scenario with a projected temperature increase of +2.5°C.

For each of these climate scenarios, we evaluated three adaptation strategies to assess their effectiveness in reducing flood risk:

  • A default (no strategy), representing the current state of the system.

  • A reservoir-based strategy, accounting for newly planned reservoirs designed to buffer flood volumes.

  • A dredging strategy, which simulates the impact of sediment removal on flood mitigation.

By systematically analyzing these climate and adaptation scenarios, this study provides a structured and reproducible approach to understanding flood risk dynamics in the Acari basin. The insights gained can support decision-making and inform future urban resilience planning efforts.

[1]:
# Import packages
from pathlib import Path

from hydroflows import Workflow, WorkflowConfig
from hydroflows.log import setuplog
from hydroflows.methods import catalog, fiat, rainfall, script, sfincs
from hydroflows.workflow.wildcards import resolve_wildcards

Folder Structure#

The folder is organized to facilitate reproducibility and clarity in handling data, scripts, and configurations for flood risk assessment. Below is an overview of the key components:

  1. Model Executables (bin/)
    The bin/ directory stores the executable files required to run the models, namely SFINCS and FIAT.
  2. Data Directory (data/)
    All input and processed data are stored within this directory. It contains the following subfolders:
  • global-data/: Stores global datasets, with a corresponding data_catalog.yml file documenting the required sources and paths.

  • local-data/: Contains local datasets, also accompanied by a data_catalog.yml file for reference.

  • preprocessed-data/: Stores datasets generated through preprocessing of local or global data. The preprocessing script automatically generates a data_catalog.yml file (shown later in the workflow).

  3. Scripts Directory (scripts/)
    This folder contains Python scripts used for data preprocessing and analysis.
  4. Model Setups (setups/)
    The setups/ directory is divided into two subfolders, global/ and local/, representing different modelling configurations for the Acari basin (in this notebook only local is used). The built models and their outputs are saved per case in these setup directories. Each setup contains a hydromt_config/ folder where the configuration files for HydroMT are stored. The local/ setup includes specific configuration files for the SFINCS model, one per adaptation strategy, as well as the configuration file for the FIAT model:
  • sfincs_config_default.yml (current system state)

  • sfincs_config_reservoirs.yml (planned reservoirs)

  • sfincs_config_dredging.yml (dredging scenario)

  • fiat_config.yml

Overview:

Rio
├── bin
│   ├── fiat_v0.2.1
│   └── sfincs_v2.1.1
├── data
│   ├── global-data
│   │   └── data_catalog.yml
│   ├── local-data
│   │   └── data_catalog.yml
│   ├── preprocessed-data
│   │   └── data_catalog.yml
│   └── region.geojson
├── scripts
├── setups
│   ├── global
│   │   └── hydromt_config
│   └── local
│       └── hydromt_config
│           ├── sfincs_config_default.yml
│           ├── sfincs_config_reservoirs.yml
│           ├── sfincs_config_dredging.yml
│           └── fiat_config.yml
└── rio_risk_climate_strategies.ipynb
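Before running the workflow it can be useful to verify that this layout is in place. A minimal standard-library sketch (the file names are those listed above; the helper function is illustrative, not part of HydroFlows):

```python
from pathlib import Path


def missing_inputs(root: Path) -> list[str]:
    """Return the expected input files that are missing under ``root``."""
    expected = [
        "data/global-data/data_catalog.yml",
        "data/local-data/data_catalog.yml",
        "setups/local/hydromt_config/sfincs_config_default.yml",
        "setups/local/hydromt_config/sfincs_config_reservoirs.yml",
        "setups/local/hydromt_config/sfincs_config_dredging.yml",
        "setups/local/hydromt_config/fiat_config.yml",
    ]
    return [p for p in expected if not (root / p).exists()]
```

Running `missing_inputs(Path("Rio"))` before building the workflow surfaces any missing catalog or configuration file early.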
[2]:
# Define case name, root directory and logger

pwd = Path().resolve() # current directory
name = "local-risk-climate-strategies"
setup_root = Path(pwd, "setups", "local")
pwd_rel = "../../"  # relative path from the case directory to the current file

# Setup the logger
logger = setuplog(level="INFO")
INFO - log - hydroflows version: 0.1.0

Create the workflow#

In this block the workflow configuration is specified and a HydroFlows workflow is created. The workflow takes as input the following:

  • the region polygon of the Acari river basin

  • the data catalog files describing all the input datasets

  • the HydroMT configuration files

  • the model executables

In addition, some general settings are specified that are used by the methods below.

[3]:
# Setup the config file and data libs


strategies = ["default", "reservoirs", "dredging"]

# Config
config = WorkflowConfig(
    # general settings
    region=Path(pwd_rel, "data/region.geojson"),
    plot_fig=True,
    catalog_path_global=Path(pwd_rel, "data/global-data/data_catalog.yml"),
    catalog_path_local=Path(pwd_rel, "data/local-data/data_catalog.yml"),
    # sfincs settings
    sfincs_exe=Path(pwd_rel, "bin/sfincs_v2.1.1/sfincs.exe"),
    depth_min=0.05,
    subgrid_output=True,  # sfincs subgrid output should exist since it is used in the fiat model
    # fiat settings
    hydromt_fiat_config=Path("hydromt_config/fiat_config.yml"),
    fiat_exe=Path(pwd_rel, "bin/fiat_v0.2.1/fiat.exe"),
    risk=True,
    # design events settings
    rps=[5, 10, 100],
    start_date="1990-01-01",
    end_date="2023-12-31",
    # Climate rainfall scenarios settings (to be applied on the derived design events)
    # Dictionary where:
    # - Key: scenario name (e.g., "present", "rcp45_2050", "rcp85_2050")
    # - Value: corresponding temperature delta (dT) for each scenario
    scenarios_dict={
        "present": 0,  # No temperature change for the present (or historical) scenario
        "rcp45_2050": 1.2,  # Moderate emissions scenario with +1.2°C
        "rcp85_2050": 2.5,  # High emissions scenario with +2.5°C
    },
    # Strategies settings
    strategies_dict={"strategies": strategies},
)


w = Workflow(config=config, name=name, root=setup_root, wildcards=config.strategies_dict)

Preprocess local data#

Preprocess local exposure data#

In this step, local exposure data (stored in data/local-data) are preprocessed using Python scripts executed via the HydroFlows ScriptMethod. First, the clip_exposure.py script clips the exposure data to the regional boundary. The outputs of this step, namely census2010.gpkg, building_footprints.gpkg and entrances.gpkg, are saved in data/preprocessed-data. These clipped datasets are then further processed with the preprocess_exposure.py script, again executed via ScriptMethod (rule fiat_preprocess_exposure). The final output is a new data catalog stored in data/preprocessed-data that references the newly generated datasets needed to build the FIAT model.
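For reference, a HydroMT data catalog is a YAML mapping from dataset names to entries describing where and how to read each dataset. A minimal sketch of the kind of entries the preprocessing script writes (field values are illustrative):

```yaml
census2010:
  data_type: GeoDataFrame
  driver: vector
  path: census2010.gpkg
building_footprints:
  data_type: GeoDataFrame
  driver: vector
  path: building_footprints.gpkg
```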

[4]:
# Clip exposure datasets to the region of interest.
fiat_clip_exp = script.ScriptMethod(
    script=Path(pwd_rel, "scripts", "clip_exposure.py"),
    # Note that the output paths/names are hardcoded in the script
    # These names are used in the hydromt_fiat config
    input={
        "region": Path(pwd_rel, "data/region.geojson"),
    },
    output={
        "census": Path(pwd_rel, "data/preprocessed-data/census2010.gpkg"),
        "building_footprints": Path(
            pwd_rel, "data/preprocessed-data/building_footprints.gpkg"
        ),
        "entrances": Path(pwd_rel, "data/preprocessed-data/entrances.gpkg"),
    },
)
w.create_rule(fiat_clip_exp, rule_id="fiat_clip_exposure")

# Preprocess clipped exposure
fiat_preprocess_clip_exp = script.ScriptMethod(
    script=Path(pwd_rel, "scripts", "preprocess_exposure.py"),
    input={
        "census": fiat_clip_exp.output.census,
        "building_footprints": fiat_clip_exp.output.building_footprints,
        "entrances": fiat_clip_exp.output.entrances,
    },
    output={
        "preprocessed_data_catalog": Path(
            pwd_rel, "data/preprocessed-data/data_catalog.yml"
        ),
    },
)
w.create_rule(fiat_preprocess_clip_exp, rule_id="fiat_preprocess_exposure")
[4]:
Rule(id=fiat_preprocess_exposure, method=script_method, runs=1)

Merging the global and local data catalogs#

Both local and global information is needed to build the SFINCS model. For this reason, the following method merges the global and local data catalogs, together with the catalog of preprocessed data, into a single catalog.

[5]:
# Merge global and local data catalogs
merge_all_catalogs = catalog.MergeCatalogs(
    catalog_path1=w.get_ref("$config.catalog_path_global"),
    catalog_path2=w.get_ref("$config.catalog_path_local"),
    catalog_path3=fiat_preprocess_clip_exp.output.preprocessed_data_catalog,
    merged_catalog_path=Path(pwd_rel, "data/merged_data_catalog_all.yml"),
)
w.create_rule(merge_all_catalogs, rule_id="merge_all_catalogs")
[5]:
Rule(id=merge_all_catalogs, method=merge_catalogs, runs=1)
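Conceptually, merging catalogs amounts to combining their entry mappings into one. A minimal pure-Python sketch of that idea (not the HydroFlows implementation; that later catalogs override duplicate keys is an assumption here):

```python
def merge_entry_maps(*catalogs: dict) -> dict:
    """Combine catalog entry mappings; later catalogs win on duplicate keys."""
    merged: dict = {}
    for cat in catalogs:
        merged.update(cat)
    return merged


# Hypothetical entries: the local DEM overrides the global one
global_cat = {"dem": {"path": "global-data/dem.tif"}}
local_cat = {"dem": {"path": "local-data/dem.tif"},
             "rivers": {"path": "local-data/rivers.gpkg"}}
merged = merge_entry_maps(global_cat, local_cat)
```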

Build Hazard and Impact models#

Build SFINCS#

The following method builds the SFINCS model for the Acari River Basin per adaptation strategy. It takes as input the region geometry, the HydroMT configuration file per strategy and the merged global-local data catalog, and it generates one SFINCS model per strategy, saved in the models directory. The strategy is indicated by the name of the directory, e.g., models/sfincs_reservoirs.

[6]:
# Build SFINCS model for the Acari river basin
# - settings from the hydromt_config per strategy
# - data from the merged global-local catalog

# Note that subgrid_output is set to True, since the subgrid output is used in the fiat model
sfincs_build = sfincs.SfincsBuild(
    region=w.get_ref("$config.region"),
    sfincs_root="models/sfincs_{strategies}",
    config=Path("hydromt_config/sfincs_config_{strategies}.yml"),
    catalog_path=merge_all_catalogs.output.merged_catalog_path,
    plot_fig=w.get_ref("$config.plot_fig"),
    subgrid_output=w.get_ref("$config.subgrid_output"),
)
w.create_rule(sfincs_build, rule_id="sfincs_build")
[6]:
Rule(id=sfincs_build, method=sfincs_build, runs=3, repeat=['strategies'])

Build FIAT#

The following method builds the FIAT model for the Acari River Basin. It takes as inputs the sfincs_build output (for the model region and ground elevation), the HydroMT configuration file and the merged preprocessed-global-local data catalog, and it generates a FIAT model saved in the models directory. Note that since only one FIAT model is required, inputs are taken from a single SFINCS model using resolve_wildcards from hydroflows.workflow.wildcards.

[7]:
# Fiat build
# - the sfincs_build output for the model region and ground elevation; Note we use only the SFINCS model of the first strategy
# - settings from the hydromt_config
# - data from the merged catalog

fiat_build = fiat.FIATBuild(
    region=resolve_wildcards(
        sfincs_build.output.sfincs_region, {"strategies": strategies[0]}
    ),
    ground_elevation=resolve_wildcards(
        sfincs_build.output.sfincs_subgrid_dep, {"strategies": strategies[0]}
    ),
    fiat_root="models/fiat_default",
    catalog_path=merge_all_catalogs.output.merged_catalog_path,
    config=w.get_ref("$config.hydromt_fiat_config"),
)
w.create_rule(fiat_build, rule_id="fiat_build")
[7]:
Rule(id=fiat_build, method=fiat_build, runs=1)

Precipitation design events#

Here, we preprocess the local precipitation file using the preprocess_local_precip.py script. This step generates a NetCDF file compatible with the HydroFlows PluvialDesignEvents method, which is used to derive pluvial design events for various return periods. The design events are then saved in the events/default directory.
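Design events for a return period T are typically derived by fitting an extreme-value distribution to annual rainfall maxima and evaluating its quantile at the non-exceedance probability 1 − 1/T. A Gumbel-based sketch of that step (illustrative location/scale parameters; not the PluvialDesignEvents implementation):

```python
import math


def gumbel_quantile(rp: float, loc: float, scale: float) -> float:
    """Rainfall depth for return period ``rp`` (years) from a Gumbel fit."""
    # Non-exceedance probability for return period rp is 1 - 1/rp
    return loc - scale * math.log(-math.log(1.0 - 1.0 / rp))


# Illustrative Gumbel parameters (mm) for daily rainfall maxima
depths = {rp: gumbel_quantile(rp, loc=80.0, scale=25.0) for rp in [5, 10, 100]}
# roughly 117.5, 136.3 and 195.0 mm for the 5, 10 and 100-year events
```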

Present climate events#

[8]:
# Preprocess precipitation
precipitation = script.ScriptMethod(
    script=Path(pwd_rel, "scripts", "preprocess_local_precip.py"),
    # Note that the output path/filename is hardcoded in the script
    output={
        "precip_nc": Path(
            pwd_rel, "data/preprocessed-data/output_scalar_resampled_precip_station11.nc"
        )
    },
)
w.create_rule(precipitation, rule_id="preprocess_local_rainfall")

# Derive design pluvial events from the preprocessed local precipitation
pluvial_design_events = rainfall.PluvialDesignEvents(
    precip_nc=precipitation.output.precip_nc,
    rps=w.get_ref("$config.rps"),
    wildcard="events",
    event_root="events/default",
)
w.create_rule(pluvial_design_events, rule_id="get_pluvial_design_events")
INFO - wildcards - Added wildcard 'events' with values: ['p_event_rp005', 'p_event_rp010', 'p_event_rp100']
[8]:
Rule(id=get_pluvial_design_events, method=pluvial_design_events, runs=1, expand=['events'])

Future climate scenarios#

The pluvial design events generated in the previous step are scaled for the different climate scenarios (temperature changes; see scenarios_dict) using the FutureClimateRainfall method. This step produces scaled events, which are saved in the events/{scenario_name} directory.
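The scaling itself commonly follows the Clausius-Clapeyron relation, increasing rainfall intensity by roughly 7% per degree of warming. A sketch under that assumption (the exact rate applied by FutureClimateRainfall may differ):

```python
def cc_scale(precip: float, dtemp: float, alpha: float = 0.07) -> float:
    """Scale a precipitation amount by (1 + alpha) per degree of warming."""
    return precip * (1.0 + alpha) ** dtemp


# Temperature deltas from scenarios_dict applied to a 100 mm design event
scenarios = {"present": 0.0, "rcp45_2050": 1.2, "rcp85_2050": 2.5}
scaled = {name: cc_scale(100.0, dt) for name, dt in scenarios.items()}
# e.g. +2.5 °C scales a 100 mm event to about 118 mm
```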

[9]:
# Climate rainfall scenarios events
scenarios_design_events = rainfall.FutureClimateRainfall(
    scenarios=w.get_ref("$config.scenarios_dict"),
    event_names=pluvial_design_events.params.event_names,
    event_set_yaml=pluvial_design_events.output.event_set_yaml,
    event_wildcard="events",  # we overwrite the wildcard
    scenario_wildcard="scenarios",
    event_root="events",
)
w.create_rule(scenarios_design_events, rule_id="scenarios_pluvial_design_events")

# The produced event set is saved as follows:
print("Output event sets", scenarios_design_events.output.future_event_set_yaml)
INFO - wildcards - Added wildcard 'scenarios' with values: ['present', 'rcp45_2050', 'rcp85_2050']
Output event sets events/{scenarios}/pluvial_design_events_{scenarios}.yml

Derive flood hazard#

In the following block, we derive the flood hazard for the different events per scenario and strategy. The hazard derivation is performed using the SfincsUpdateForcing, SfincsRun, SfincsPostprocess, and SfincsDownscale methods. First, the SFINCS forcing per event is updated using SfincsUpdateForcing. Then, the model execution is carried out using the SfincsRun method. The total number of model simulations equals return periods × scenarios × strategies. The outputs of these model runs are then postprocessed: the SfincsPostprocess method converts SFINCS results into a regular grid of maximum water levels, which is required to update the FIAT model, while the SfincsDownscale method refines the SFINCS output to generate high-resolution flood hazard maps.
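This wildcard expansion can be checked with a quick product over the wildcard values, here 3 events × 3 scenarios × 3 strategies = 27 SFINCS runs, and 3 scenarios × 3 strategies = 9 FIAT runs per event set:

```python
from itertools import product

events = ["p_event_rp005", "p_event_rp010", "p_event_rp100"]
scenarios = ["present", "rcp45_2050", "rcp85_2050"]
strategies = ["default", "reservoirs", "dredging"]

# Each combination corresponds to one SFINCS simulation
sfincs_runs = list(product(events, scenarios, strategies))
# FIAT runs once per scenario-strategy pair on the full event set
fiat_runs = list(product(scenarios, strategies))
```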

[10]:
# Update the SFINCS model with pluvial events
# This will create new SFINCS instances for each pluvial event in the simulations subfolder
sfincs_update = sfincs.SfincsUpdateForcing(
    sfincs_inp=sfincs_build.output.sfincs_inp,
    event_yaml=scenarios_design_events.output.future_event_yaml,
    output_dir=sfincs_build.output.sfincs_inp.parent / "simulations_{scenarios}" / "{events}",
)
w.create_rule(sfincs_update, rule_id="sfincs_update")

# Run the SFINCS model for each pluvial event
# This will create simulated water levels for each pluvial event
sfincs_run = sfincs.SfincsRun(
    sfincs_inp=sfincs_update.output.sfincs_out_inp,
    sfincs_exe=w.get_ref("$config.sfincs_exe"),
)
w.create_rule(sfincs_run, rule_id="sfincs_run")

# Postprocesses SFINCS results to a regular grid of maximum water levels
sfincs_post = sfincs.SfincsPostprocess(
    sfincs_map=sfincs_run.output.sfincs_map,
)
w.create_rule(sfincs_post, rule_id="sfincs_post")

# Downscale the SFINCS output to derive high-res flood hazard maps
sfincs_down = sfincs.SfincsDownscale(
    sfincs_map=sfincs_run.output.sfincs_map,
    sfincs_subgrid_dep=sfincs_build.output.sfincs_subgrid_dep,
    depth_min=w.get_ref("$config.depth_min"),
    output_root="output/hazard_{scenarios}_{strategies}",
)
w.create_rule(sfincs_down, rule_id="sfincs_downscale")

# the simulations are stored in:
print("simulation folder:", sfincs_update.output.sfincs_out_inp.parent)

# the hazard maps are stored in:
print("high-res hazard map folder:", sfincs_down.output.hazard_tif.parent)
simulation folder: models/sfincs_{strategies}/simulations_{scenarios}/{events}
high-res hazard map folder: output/hazard_{scenarios}_{strategies}

Derive flood risk#

In the following block, we derive the flood risk for the different hazards per scenario and strategy produced above. The risk derivation is performed using FIATUpdateHazard and FIATRun, while the final outcome is visualized with FIATVisualize.
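With risk=True, the per-event damages are aggregated into an expected annual damage (EAD) by integrating damage over exceedance probability. A minimal trapezoidal sketch of that calculation (illustrative damage figures; not the Delft-FIAT implementation):

```python
def expected_annual_damage(damages_by_rp: dict[int, float]) -> float:
    """Approximate EAD by trapezoidal integration over exceedance probability."""
    rps = sorted(damages_by_rp)                 # most frequent event first
    probs = [1.0 / rp for rp in rps]            # exceedance probabilities
    dams = [damages_by_rp[rp] for rp in rps]
    ead = 0.0
    for i in range(len(rps) - 1):
        ead += 0.5 * (dams[i] + dams[i + 1]) * (probs[i] - probs[i + 1])
    return ead


# Hypothetical damages (currency units) for the configured return periods
ead = expected_annual_damage({5: 1e6, 10: 2e6, 100: 8e6})
```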

[11]:
# Update hazard forcing with the pluvial event set to compute risk
fiat_update = fiat.FIATUpdateHazard(
    fiat_cfg=fiat_build.output.fiat_cfg,
    event_set_yaml=scenarios_design_events.output.future_event_set_yaml,
    map_type="water_level",
    hazard_maps=sfincs_post.output.sfincs_zsmax,
    risk=w.get_ref("$config.risk"),
    output_dir=fiat_build.output.fiat_cfg.parent
    / "simulations_{scenarios}_{strategies}",
)
w.create_rule(fiat_update, rule_id="fiat_update")

# Run FIAT to compute flood risk
fiat_run = fiat.FIATRun(
    fiat_cfg=fiat_update.output.fiat_out_cfg,
    fiat_exe=w.get_ref("$config.fiat_exe"),
)
w.create_rule(fiat_run, rule_id="fiat_run")

# Visualize FIAT results
fiat_visualize_risk = fiat.FIATVisualize(
    fiat_output_csv=fiat_run.output.fiat_out_csv,
    fiat_cfg=fiat_update.output.fiat_out_cfg,
    spatial_joins_cfg=fiat_build.output.spatial_joins_cfg,
    output_dir="output/risk_{scenarios}_{strategies}",
)
w.create_rule(fiat_visualize_risk, rule_id="fiat_visualize_risk")

# fiat simulation folders
print("fiat simulation folder:", fiat_update.output.fiat_out_cfg.parent)

# risk informetrics/infographics are stored in:
print("risk informetrics/infographics output folder:", fiat_visualize_risk.output.fiat_infographics.parent)
fiat simulation folder: models/fiat_default/simulations_{scenarios}_{strategies}
risk informetrics/infographics output folder: output/risk_{scenarios}_{strategies}

Setup FloodAdapt#

A FloodAdapt database can be created from the SFINCS / Delft-FIAT models and the HydroFlows EventSet definition with the SetupFloodAdapt method.

[12]:
# floodadapt_build = flood_adapt.SetupFloodAdapt(
#     sfincs_inp=sfincs_build.output.sfincs_inp,
#     fiat_cfg=fiat_build.output.fiat_cfg,
#     event_set_yaml=pluvial_design_events.output.event_set_yaml,
#     output_dir="models/flood_adapt_builder_{strategies}",
# )
# w.create_rule(floodadapt_build, rule_id="floodadapt_build")

Visualize and execute the workflow#

The workflow can be executed using HydroFlows or a workflow engine. Below we first plot and dryrun the workflow to check that it is correctly defined. Then, we parse the workflow to Snakemake to execute it.

[13]:
w.plot_rulegraph()
[13]:
../_images/_examples_rio_risk_climate_strategies_29_0.svg
[14]:
w.dryrun()
INFO - workflow - Dryrun rule 1/15: fiat_clip_exposure (1 runs)
INFO - workflow - Dryrun rule 2/15: preprocess_local_rainfall (1 runs)
INFO - workflow - Dryrun rule 3/15: get_pluvial_design_events (1 runs)
INFO - workflow - Dryrun rule 4/15: scenarios_pluvial_design_events (1 runs)
INFO - workflow - Dryrun rule 5/15: fiat_preprocess_exposure (1 runs)
INFO - workflow - Dryrun rule 6/15: merge_all_catalogs (1 runs)
INFO - workflow - Dryrun rule 7/15: sfincs_build (3 runs)
INFO - workflow - Dryrun rule 8/15: fiat_build (1 runs)
INFO - workflow - Dryrun rule 9/15: sfincs_update (27 runs)
INFO - workflow - Dryrun rule 10/15: sfincs_run (27 runs)
INFO - workflow - Dryrun rule 11/15: sfincs_downscale (27 runs)
INFO - workflow - Dryrun rule 12/15: sfincs_post (27 runs)
INFO - workflow - Dryrun rule 13/15: fiat_update (9 runs)
INFO - workflow - Dryrun rule 14/15: fiat_run (9 runs)
INFO - workflow - Dryrun rule 15/15: fiat_visualize_risk (9 runs)
[15]:
# to snakemake
w.to_snakemake(f"{name}.smk")