{
"cells": [
{
"cell_type": "markdown",
"id": "05912d98",
"metadata": {
"papermill": {
"duration": 0.003524,
"end_time": "2025-06-18T09:44:31.897880",
"exception": false,
"start_time": "2025-06-18T09:44:31.894356",
"status": "completed"
},
"tags": []
},
"source": [
"# Fluvial flood risk\n",
"\n",
"This example shows a workflow to derive fluvial flood risk using the **Wflow**, **SFINCS** and **Delft-FIAT** models. The starting point is a user defined region and data catalog. Wflow simulated discharge is translated into hydrographs for different return periods and used to simulate the flood hazard maps. The hazard maps are combined with exposure and impact data to derive risk.\n",
"\n",
"This example also show how to work with parsing to CWL."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "01bc5efd",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:31.905279Z",
"iopub.status.busy": "2025-06-18T09:44:31.904767Z",
"iopub.status.idle": "2025-06-18T09:44:35.132020Z",
"shell.execute_reply": "2025-06-18T09:44:35.131534Z"
},
"papermill": {
"duration": 3.231658,
"end_time": "2025-06-18T09:44:35.132812",
"exception": false,
"start_time": "2025-06-18T09:44:31.901154",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - log - hydroflows version: 0.1.0\n"
]
}
],
"source": [
"# Import packages\n",
"from pathlib import Path\n",
"\n",
"from hydroflows import Workflow, WorkflowConfig\n",
"from hydroflows.log import setuplog\n",
"from hydroflows.methods import discharge, fiat, sfincs, wflow\n",
"from hydroflows.methods.utils.example_data import fetch_data\n",
"\n",
"logger = setuplog(level=\"INFO\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f2369cfb",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:35.140961Z",
"iopub.status.busy": "2025-06-18T09:44:35.140248Z",
"iopub.status.idle": "2025-06-18T09:44:35.143501Z",
"shell.execute_reply": "2025-06-18T09:44:35.143092Z"
},
"papermill": {
"duration": 0.007766,
"end_time": "2025-06-18T09:44:35.144224",
"exception": false,
"start_time": "2025-06-18T09:44:35.136458",
"status": "completed"
},
"tags": []
},
"outputs": [],
"source": [
"# Define case name and root directory\n",
"name = \"fluvial_risk\"\n",
"pwd = Path().resolve() # Get the current file location\n",
"case_root = Path(pwd, \"cases\", name) # output directory\n",
"pwd_rel = \"../../\" # relative path from the case directory to the current file"
]
},
{
"cell_type": "markdown",
"id": "ecd5a9bf",
"metadata": {
"papermill": {
"duration": 0.003149,
"end_time": "2025-06-18T09:44:35.150523",
"exception": false,
"start_time": "2025-06-18T09:44:35.147374",
"status": "completed"
},
"tags": []
},
"source": [
"## Workflow inputs\n",
"\n",
"The example requires the following inputs which are provided via a configuration file:\n",
"- a user defined region that can be used to delineate the SFINCS model domain\n",
"- a data catalog file describing all input datasets. Here we fetch some test datasets for a region in Northern Italy. \n",
"- HydroMT configuration files for all three models\n",
"- how to execute models. Since the CWL runner we use is not supported on Windows, we opt for docker for WFLOW, SFINCS and python for Delft-FIAT."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6c9af434",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:35.157520Z",
"iopub.status.busy": "2025-06-18T09:44:35.157153Z",
"iopub.status.idle": "2025-06-18T09:44:35.939200Z",
"shell.execute_reply": "2025-06-18T09:44:35.938681Z"
},
"papermill": {
"duration": 0.786575,
"end_time": "2025-06-18T09:44:35.940193",
"exception": false,
"start_time": "2025-06-18T09:44:35.153618",
"status": "completed"
},
"tags": []
},
"outputs": [],
"source": [
"# Fetch the global build data\n",
"cache_dir = fetch_data(data=\"global-data\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "44d60060",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:35.948061Z",
"iopub.status.busy": "2025-06-18T09:44:35.947667Z",
"iopub.status.idle": "2025-06-18T09:44:35.951273Z",
"shell.execute_reply": "2025-06-18T09:44:35.950873Z"
},
"papermill": {
"duration": 0.008083,
"end_time": "2025-06-18T09:44:35.951986",
"exception": false,
"start_time": "2025-06-18T09:44:35.943903",
"status": "completed"
},
"tags": []
},
"outputs": [],
"source": [
"# Setup the configuration\n",
"config = WorkflowConfig(\n",
" # general settings\n",
" region=Path(pwd_rel, \"data/build/region.geojson\"),\n",
" catalog_path=Path(cache_dir, \"data_catalog.yml\"),\n",
" plot_fig=True,\n",
" start_date=\"2014-01-01\",\n",
" end_date=\"2021-12-31\",\n",
" # sfincs settings\n",
" hydromt_sfincs_config=Path(pwd_rel, \"hydromt_config/sfincs_config.yml\"),\n",
" sfincs_run_method=\"docker\",\n",
" depth_min=0.05, # minimum depth for inundation map\n",
" # fiat settings\n",
" hydromt_fiat_config=Path(pwd_rel, \"hydromt_config/fiat_config.yml\"),\n",
" fiat_run_method=\"python\",\n",
" # wflow settings\n",
" hydromt_wflow_config=Path(pwd_rel, \"hydromt_config/wflow_config.yml\"),\n",
" wflow_run_method=\"docker\",\n",
" # design event settings\n",
" rps=[2, 5, 10, 50, 100],\n",
")\n"
]
},
{
"cell_type": "markdown",
"id": "5d2d7f5c",
"metadata": {
"papermill": {
"duration": 0.003339,
"end_time": "2025-06-18T09:44:35.958583",
"exception": false,
"start_time": "2025-06-18T09:44:35.955244",
"status": "completed"
},
"tags": []
},
"source": [
"## Create the workflow"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "2d9d951d",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:35.965808Z",
"iopub.status.busy": "2025-06-18T09:44:35.965470Z",
"iopub.status.idle": "2025-06-18T09:44:35.968278Z",
"shell.execute_reply": "2025-06-18T09:44:35.967791Z"
},
"papermill": {
"duration": 0.007224,
"end_time": "2025-06-18T09:44:35.969038",
"exception": false,
"start_time": "2025-06-18T09:44:35.961814",
"status": "completed"
},
"tags": []
},
"outputs": [],
"source": [
"# create and empty workflow\n",
"wf = Workflow(name=name, config=config, root=case_root)"
]
},
{
"cell_type": "markdown",
"id": "b93b065a",
"metadata": {
"papermill": {
"duration": 0.00319,
"end_time": "2025-06-18T09:44:35.975986",
"exception": false,
"start_time": "2025-06-18T09:44:35.972796",
"status": "completed"
},
"tags": []
},
"source": [
"### Build models\n",
"\n",
"In this section we build a model cascade and make sure these are configured correctly for offline coupling, i.e. Wflow exports discharge at the right locations and Delft-FIAT uses the same ground elevation as SFINCS. Note that you can also skip these steps and use your own models instead."
]
},
{
"cell_type": "markdown",
"id": "0da62292",
"metadata": {
"papermill": {
"duration": 0.003213,
"end_time": "2025-06-18T09:44:35.982420",
"exception": false,
"start_time": "2025-06-18T09:44:35.979207",
"status": "completed"
},
"tags": []
},
"source": [
"First, we build a **SFINCS** model for the user defined region using. \n",
" - setting from the hydromt_sfincs_config, see the [HydroMT-SFINCS docs](https://deltares.github.io/hydromt_sfincs/latest/) for more info.\n",
" - data from the catalog_path, see the [HydroMT docs](https://deltares.github.io/hydromt/v0.10.0/user_guide/data_prepare_cat.html) for more info.\n",
" - Note that we need src points at the boundary of the SFINCS model (``src_points_output=True``) to get Wflow output at the right locations. Make sure the hydromt_sfincs configuration defines these source locations.\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9d864d3b",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:35.989611Z",
"iopub.status.busy": "2025-06-18T09:44:35.989286Z",
"iopub.status.idle": "2025-06-18T09:44:35.995895Z",
"shell.execute_reply": "2025-06-18T09:44:35.995467Z"
},
"papermill": {
"duration": 0.010963,
"end_time": "2025-06-18T09:44:35.996588",
"exception": false,
"start_time": "2025-06-18T09:44:35.985625",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=sfincs_build, method=sfincs_build, runs=1)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Build a SFINCS model\n",
"sfincs_build = sfincs.SfincsBuild(\n",
" region=wf.get_ref(\"$config.region\"),\n",
" config=wf.get_ref(\"$config.hydromt_sfincs_config\"),\n",
" sfincs_root=\"models/sfincs\",\n",
" catalog_path=wf.get_ref(\"$config.catalog_path\"),\n",
" plot_fig=wf.get_ref(\"$config.plot_fig\"),\n",
" subgrid_output=True,\n",
" src_points_output=True,\n",
")\n",
"wf.create_rule(sfincs_build, rule_id=\"sfincs_build\")"
]
},
{
"cell_type": "markdown",
"id": "5ec79d5b",
"metadata": {
"papermill": {
"duration": 0.003256,
"end_time": "2025-06-18T09:44:36.003383",
"exception": false,
"start_time": "2025-06-18T09:44:36.000127",
"status": "completed"
},
"tags": []
},
"source": [
"Next we build a **Wflow** model using:\n",
"- the sfincs_build output for the model region\n",
"- \"gauges\" based on SFINCS source points\n",
"- settings from the hydromt_wflow_config, see [HydroMT-Wflow docs](https://deltares.github.io/hydromt_wflow/latest/) \n",
"- data from the data catalog"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "94af5e6b",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.010933Z",
"iopub.status.busy": "2025-06-18T09:44:36.010582Z",
"iopub.status.idle": "2025-06-18T09:44:36.015330Z",
"shell.execute_reply": "2025-06-18T09:44:36.014843Z"
},
"papermill": {
"duration": 0.00931,
"end_time": "2025-06-18T09:44:36.016033",
"exception": false,
"start_time": "2025-06-18T09:44:36.006723",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=wflow_build, method=wflow_build, runs=1)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Build a Wflow model\n",
"wflow_build = wflow.WflowBuild(\n",
" region=sfincs_build.output.sfincs_region,\n",
" config=wf.get_ref(\"$config.hydromt_wflow_config\"),\n",
" wflow_root=\"models/wflow\",\n",
" catalog_path=wf.get_ref(\"$config.catalog_path\"),\n",
" gauges=sfincs_build.output.sfincs_src_points,\n",
" plot_fig=wf.get_ref(\"$config.plot_fig\"),\n",
")\n",
"wf.create_rule(wflow_build, rule_id=\"wflow_build\")\n"
]
},
{
"cell_type": "markdown",
"id": "6f17e045",
"metadata": {
"papermill": {
"duration": 0.003362,
"end_time": "2025-06-18T09:44:36.022799",
"exception": false,
"start_time": "2025-06-18T09:44:36.019437",
"status": "completed"
},
"tags": []
},
"source": [
"Next, we build a **FIAT** model using:\n",
"- the sfincs_build output for the model region and ground elevation\n",
"- settings from the hydromt_fiat_config, see the [hydromt_fiat docs](https://deltares.github.io/hydromt_fiat/latest/)\n",
"- data from the data catalog"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "fcdd284e",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.030495Z",
"iopub.status.busy": "2025-06-18T09:44:36.030182Z",
"iopub.status.idle": "2025-06-18T09:44:36.034757Z",
"shell.execute_reply": "2025-06-18T09:44:36.034353Z"
},
"papermill": {
"duration": 0.009177,
"end_time": "2025-06-18T09:44:36.035502",
"exception": false,
"start_time": "2025-06-18T09:44:36.026325",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=fiat_build, method=fiat_build, runs=1)"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Build a FIAT model\n",
"fiat_build = fiat.FIATBuild(\n",
" region=sfincs_build.output.sfincs_region,\n",
" ground_elevation=sfincs_build.output.sfincs_subgrid_dep,\n",
" fiat_root=\"models/fiat\",\n",
" catalog_path=wf.get_ref(\"$config.catalog_path\"),\n",
" config=wf.get_ref(\"$config.hydromt_fiat_config\"),\n",
")\n",
"wf.create_rule(fiat_build, rule_id=\"fiat_build\")"
]
},
{
"cell_type": "markdown",
"id": "3ae077b3",
"metadata": {
"papermill": {
"duration": 0.003549,
"end_time": "2025-06-18T09:44:36.042571",
"exception": false,
"start_time": "2025-06-18T09:44:36.039022",
"status": "completed"
},
"tags": []
},
"source": [
"### Derive fluvial design events from Wflow\n",
"\n",
"In this section we derive fluvial design events for each Wflow output gauge location.\n",
"First, we update and run Wflow to simulate discharge for the present climate. \n",
"Then, we derive design hydrographs for a number of return periods from discharge time series. By default, the magnitude and shape of these events are based on the annual maxima peaks from the timeseries. Note that we have set `copy_model` to `True`. This is required to be able to run with CWL, as the moving around of inputs done by CWL does not play nice with the relative paths in model config files otherwise used. This also holds for the `update_sfincs` and `update_fiat` steps we will encounter later in this workflow.\n",
"\n",
"Note that in case of multiple discharge boundary locations this approach assumes full dependence which might be an oversimplification of the reality."
]
},
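{
"cell_type": "markdown",
"id": "a1f0c2d3",
"metadata": {},
"source": [
"The core idea behind the return-period magnitudes can be sketched as follows. This is a minimal, hypothetical illustration (not the HydroFlows implementation): it fits a Gumbel distribution with `scipy.stats.gumbel_r` to annual maxima of a discharge series and evaluates the quantiles for the configured return periods.\n",
"\n",
"```python\n",
"import numpy as np\n",
"from scipy.stats import gumbel_r\n",
"\n",
"# Hypothetical annual maximum discharge [m3/s]\n",
"annual_maxima = np.array([310.0, 280.0, 450.0, 390.0, 520.0, 340.0, 610.0, 298.0])\n",
"loc, scale = gumbel_r.fit(annual_maxima)\n",
"# Return period T corresponds to the non-exceedance quantile 1 - 1/T\n",
"q_rp = {T: gumbel_r.ppf(1 - 1 / T, loc, scale) for T in [2, 5, 10, 50, 100]}\n",
"```\n",
"\n",
"The actual `FluvialDesignEvents` method additionally derives the hydrograph shape around each peak; check its parameters for the available options."
]
},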
{
"cell_type": "code",
"execution_count": 9,
"id": "7b9af415",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.050484Z",
"iopub.status.busy": "2025-06-18T09:44:36.050311Z",
"iopub.status.idle": "2025-06-18T09:44:36.055185Z",
"shell.execute_reply": "2025-06-18T09:44:36.054769Z"
},
"papermill": {
"duration": 0.009781,
"end_time": "2025-06-18T09:44:36.055947",
"exception": false,
"start_time": "2025-06-18T09:44:36.046166",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=wflow_update, method=wflow_update_forcing, runs=1)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Update Wflow meteorological forcing for the simulation period\n",
"wflow_update = wflow.WflowUpdateForcing(\n",
" wflow_toml=wflow_build.output.wflow_toml,\n",
" catalog_path=wf.get_ref(\"$config.catalog_path\"),\n",
" start_time=wf.get_ref(\"$config.start_date\"),\n",
" end_time=wf.get_ref(\"$config.end_date\"),\n",
" output_dir=wflow_build.output.wflow_toml.parent/\"simulations\"/\"default\",\n",
" copy_model=True, # Necessary for CWL\n",
")\n",
"wf.create_rule(wflow_update, rule_id=\"wflow_update\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "abd4532d",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.064192Z",
"iopub.status.busy": "2025-06-18T09:44:36.063801Z",
"iopub.status.idle": "2025-06-18T09:44:36.068101Z",
"shell.execute_reply": "2025-06-18T09:44:36.067592Z"
},
"papermill": {
"duration": 0.009152,
"end_time": "2025-06-18T09:44:36.068910",
"exception": false,
"start_time": "2025-06-18T09:44:36.059758",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=wflow_run, method=wflow_run, runs=1)"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Run the wflow model for a continuous simulation setup in the wflow_update rule\n",
"wflow_run = wflow.WflowRun(\n",
" wflow_toml=wflow_update.output.wflow_out_toml,\n",
" run_method=wf.get_ref(\"$config.wflow_run_method\"),\n",
")\n",
"wf.create_rule(wflow_run, rule_id=\"wflow_run\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "a03d9a1a",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.076900Z",
"iopub.status.busy": "2025-06-18T09:44:36.076714Z",
"iopub.status.idle": "2025-06-18T09:44:36.081869Z",
"shell.execute_reply": "2025-06-18T09:44:36.081356Z"
},
"papermill": {
"duration": 0.010012,
"end_time": "2025-06-18T09:44:36.082607",
"exception": false,
"start_time": "2025-06-18T09:44:36.072595",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - wildcards - Added wildcard 'events' with values: ['q_event_rp002', 'q_event_rp005', 'q_event_rp010', 'q_event_rp050', 'q_event_rp100']\n"
]
},
{
"data": {
"text/plain": [
"Rule(id=fluvial_events, method=fluvial_design_events, runs=1, expand=['events'])"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Derive fluvial design events\n",
"# Checkout the FluvialDesignEvents parameters for many options\n",
"fluvial_events = discharge.FluvialDesignEvents(\n",
" discharge_nc=wflow_run.output.wflow_output_timeseries,\n",
" rps=wf.get_ref(\"$config.rps\"),\n",
" event_root=\"input/events\",\n",
" index_dim=\"Q_gauges_bounds\",\n",
" wildcard=\"events\",\n",
")\n",
"\n",
"# Note that a new wildcard is created for the fluvial events\n",
"wf.create_rule(fluvial_events, rule_id=\"fluvial_events\")"
]
},
{
"cell_type": "markdown",
"id": "98f8d547",
"metadata": {
"papermill": {
"duration": 0.003752,
"end_time": "2025-06-18T09:44:36.090193",
"exception": false,
"start_time": "2025-06-18T09:44:36.086441",
"status": "completed"
},
"tags": []
},
"source": [
"### Derive flood hazard"
]
},
{
"cell_type": "markdown",
"id": "462e521a",
"metadata": {
"papermill": {
"duration": 0.003732,
"end_time": "2025-06-18T09:44:36.097657",
"exception": false,
"start_time": "2025-06-18T09:44:36.093925",
"status": "completed"
},
"tags": []
},
"source": [
"To derive flood hazard maps for each event, we \n",
"1. Update the SFINCS model using the discharge event timeseries. This will create new SFINCS instances for each event.\n",
"2. Run the SFINCS model. This will create simulated water levels for each event.\n",
"3. Postprocess the SFINCS output. This will postprocess the SFINCS results to a regular grid of maximum water levels.\n",
"4. Optionally, downscale the SFINCS output. This will downscale the max simulated SFINCS water levels to a high-res flood depth map."
]
},
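{
"cell_type": "markdown",
"id": "b2e4f6a8",
"metadata": {},
"source": [
"Conceptually, the downscaling in step 4 subtracts the high-resolution subgrid ground elevation from the maximum water levels and masks depths below `depth_min`. A rough sketch with hypothetical arrays (the actual method also handles resampling and file I/O):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"zsmax = np.array([[2.1, 2.0], [1.9, 1.8]])  # max water level [m+ref]\n",
"dep = np.array([[1.0, 2.5], [1.88, 0.5]])  # subgrid ground elevation [m+ref]\n",
"depth_min = 0.05\n",
"hmax = zsmax - dep  # flood depth [m]\n",
"hmax = np.where(hmax >= depth_min, hmax, np.nan)  # mask dry/shallow cells\n",
"```"
]
},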
{
"cell_type": "code",
"execution_count": 12,
"id": "96b0c523",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.105974Z",
"iopub.status.busy": "2025-06-18T09:44:36.105542Z",
"iopub.status.idle": "2025-06-18T09:44:36.110729Z",
"shell.execute_reply": "2025-06-18T09:44:36.110356Z"
},
"papermill": {
"duration": 0.010058,
"end_time": "2025-06-18T09:44:36.111447",
"exception": false,
"start_time": "2025-06-18T09:44:36.101389",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=sfincs_update, method=sfincs_update_forcing, runs=5, repeat=['events'])"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Update the SFINCS model with fluvial events\n",
"sfincs_update = sfincs.SfincsUpdateForcing(\n",
" sfincs_inp=sfincs_build.output.sfincs_inp,\n",
" event_yaml=fluvial_events.output.event_yaml,\n",
" output_dir=sfincs_build.output.sfincs_inp.parent/\"simulations\"/\"{events}\",\n",
" copy_model=True, # Necessary for CWL\n",
")\n",
"wf.create_rule(sfincs_update, rule_id=\"sfincs_update\")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "1e567edb",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.119782Z",
"iopub.status.busy": "2025-06-18T09:44:36.119634Z",
"iopub.status.idle": "2025-06-18T09:44:36.124217Z",
"shell.execute_reply": "2025-06-18T09:44:36.123715Z"
},
"papermill": {
"duration": 0.009715,
"end_time": "2025-06-18T09:44:36.125027",
"exception": false,
"start_time": "2025-06-18T09:44:36.115312",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=sfincs_run, method=sfincs_run, runs=5, repeat=['events'])"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Run the SFINCS model for each fluvial event\n",
"sfincs_run = sfincs.SfincsRun(\n",
" sfincs_inp=sfincs_update.output.sfincs_out_inp,\n",
" run_method=wf.get_ref(\"$config.sfincs_run_method\")\n",
")\n",
"wf.create_rule(sfincs_run, rule_id=\"sfincs_run\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "62c3869a",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.133655Z",
"iopub.status.busy": "2025-06-18T09:44:36.133304Z",
"iopub.status.idle": "2025-06-18T09:44:36.137975Z",
"shell.execute_reply": "2025-06-18T09:44:36.137488Z"
},
"papermill": {
"duration": 0.009722,
"end_time": "2025-06-18T09:44:36.138691",
"exception": false,
"start_time": "2025-06-18T09:44:36.128969",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=sfincs_post, method=sfincs_postprocess, runs=5, repeat=['events'])"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Postprocesses SFINCS results to a regular grid of maximum water levels\n",
"sfincs_post = sfincs.SfincsPostprocess(\n",
" sfincs_map=sfincs_run.output.sfincs_map,\n",
")\n",
"wf.create_rule(sfincs_post, rule_id=\"sfincs_post\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "b2d58768",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.147490Z",
"iopub.status.busy": "2025-06-18T09:44:36.147171Z",
"iopub.status.idle": "2025-06-18T09:44:36.152147Z",
"shell.execute_reply": "2025-06-18T09:44:36.151636Z"
},
"papermill": {
"duration": 0.01022,
"end_time": "2025-06-18T09:44:36.152931",
"exception": false,
"start_time": "2025-06-18T09:44:36.142711",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=sfincs_downscale, method=sfincs_downscale, runs=5, repeat=['events'])"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Optionally, downscale the SFINCS output to derive high-res flood hazard maps\n",
"sfincs_downscale = sfincs.SfincsDownscale(\n",
" sfincs_map=sfincs_run.output.sfincs_map,\n",
" sfincs_subgrid_dep=sfincs_build.output.sfincs_subgrid_dep,\n",
" depth_min=wf.get_ref(\"$config.depth_min\"),\n",
" output_root=\"output/hazard\",\n",
")\n",
"wf.create_rule(sfincs_downscale, rule_id=\"sfincs_downscale\")"
]
},
{
"cell_type": "markdown",
"id": "2a30dee2",
"metadata": {
"papermill": {
"duration": 0.004087,
"end_time": "2025-06-18T09:44:36.161185",
"exception": false,
"start_time": "2025-06-18T09:44:36.157098",
"status": "completed"
},
"tags": []
},
"source": [
"## Derive flood risk"
]
},
{
"cell_type": "markdown",
"id": "b28584e7",
"metadata": {
"papermill": {
"duration": 0.004018,
"end_time": "2025-06-18T09:44:36.169244",
"exception": false,
"start_time": "2025-06-18T09:44:36.165226",
"status": "completed"
},
"tags": []
},
"source": [
"To calculate flood risk, we \n",
"- Update Delft-FIAT with *all fluvial events* which are combined in an event set. This will create a new Delft-FIAT instance for the event set.\n",
"- Run Delft-FIAT to calculate flood impact and risk. This will create impact and risk data at the individual and aggregated asset level.\n",
"- Visualize the risk results at the aggregated asset level.\n"
]
},
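{
"cell_type": "markdown",
"id": "c3d5e7f9",
"metadata": {},
"source": [
"In risk mode, the event damages are integrated over their annual exceedance probabilities to obtain an expected annual damage (EAD). A minimal sketch of that idea, with hypothetical damage numbers and simple trapezoidal integration (not the Delft-FIAT implementation):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rps = np.array([2, 5, 10, 50, 100])\n",
"damages = np.array([1.0e5, 4.0e5, 8.0e5, 2.5e6, 4.0e6])  # damage per event\n",
"p_exc = 1.0 / rps  # annual exceedance probability\n",
"order = np.argsort(p_exc)  # integrate over increasing probability\n",
"ead = np.trapz(damages[order], p_exc[order])\n",
"```"
]
},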
{
"cell_type": "code",
"execution_count": 16,
"id": "c9b16a12",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.177878Z",
"iopub.status.busy": "2025-06-18T09:44:36.177693Z",
"iopub.status.idle": "2025-06-18T09:44:36.182418Z",
"shell.execute_reply": "2025-06-18T09:44:36.181940Z"
},
"papermill": {
"duration": 0.009933,
"end_time": "2025-06-18T09:44:36.183185",
"exception": false,
"start_time": "2025-06-18T09:44:36.173252",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=fiat_update, method=fiat_update_hazard, runs=1, reduce=['events'])"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Update FIAT hazard forcing with the fluvial eventset to compute fluvial flood risk\n",
"fiat_update = fiat.FIATUpdateHazard(\n",
" fiat_cfg=fiat_build.output.fiat_cfg,\n",
" event_set_yaml=fluvial_events.output.event_set_yaml,\n",
" map_type=\"water_level\",\n",
" hazard_maps=sfincs_post.output.sfincs_zsmax,\n",
" risk=True,\n",
" output_dir=fiat_build.output.fiat_cfg.parent/\"simulations\",\n",
" copy_model=True, # Necessary for CWL\n",
")\n",
"wf.create_rule(fiat_update, rule_id=\"fiat_update\")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "a908d78a",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.192387Z",
"iopub.status.busy": "2025-06-18T09:44:36.192061Z",
"iopub.status.idle": "2025-06-18T09:44:36.196041Z",
"shell.execute_reply": "2025-06-18T09:44:36.195641Z"
},
"papermill": {
"duration": 0.009416,
"end_time": "2025-06-18T09:44:36.196787",
"exception": false,
"start_time": "2025-06-18T09:44:36.187371",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=fiat_run, method=fiat_run, runs=1)"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Run FIAT to compute pluvial flood risk\n",
"fiat_run = fiat.FIATRun(\n",
" fiat_cfg=fiat_update.output.fiat_out_cfg,\n",
" run_method=wf.get_ref(\"$config.fiat_run_method\")\n",
")\n",
"wf.create_rule(fiat_run, rule_id=\"fiat_run\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "791cf7fd",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.206026Z",
"iopub.status.busy": "2025-06-18T09:44:36.205681Z",
"iopub.status.idle": "2025-06-18T09:44:36.210035Z",
"shell.execute_reply": "2025-06-18T09:44:36.209533Z"
},
"papermill": {
"duration": 0.009672,
"end_time": "2025-06-18T09:44:36.210696",
"exception": false,
"start_time": "2025-06-18T09:44:36.201024",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Rule(id=fiat_visualize, method=fiat_visualize, runs=1)"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Visualize Fiat \n",
"fiat_visualize = fiat.FIATVisualize(\n",
" fiat_cfg=fiat_update.output.fiat_out_cfg,\n",
" fiat_output_csv=fiat_run.output.fiat_out_csv,\n",
" spatial_joins_cfg=fiat_build.output.spatial_joins_cfg,\n",
" output_dir=\"output/risk\"\n",
")\n",
"wf.create_rule(fiat_visualize, rule_id=\"fiat_visualize\")"
]
},
{
"cell_type": "markdown",
"id": "d57a5fb9",
"metadata": {
"papermill": {
"duration": 0.004295,
"end_time": "2025-06-18T09:44:36.219340",
"exception": false,
"start_time": "2025-06-18T09:44:36.215045",
"status": "completed"
},
"tags": []
},
"source": [
"## Visualize and execute the workflow\n",
"\n",
"To inspect the workflow we can plot the rulegraph which shows all rules their dependencies.\n",
"The nodes are colored based on the type, for instance the red nodes show the result rules."
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "ee37fcc0",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.228812Z",
"iopub.status.busy": "2025-06-18T09:44:36.228401Z",
"iopub.status.idle": "2025-06-18T09:44:36.262956Z",
"shell.execute_reply": "2025-06-18T09:44:36.262444Z"
},
"papermill": {
"duration": 0.04002,
"end_time": "2025-06-18T09:44:36.263654",
"exception": false,
"start_time": "2025-06-18T09:44:36.223634",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"data": {
"image/svg+xml": [
"\n",
"\n",
"\n",
"\n",
"\n"
],
"text/plain": [
""
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# plot the rulegraph using graphviz\n",
"wf.plot_rulegraph(filename=\"rulegraph.svg\", plot_rule_attrs=True)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "53164744",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.273922Z",
"iopub.status.busy": "2025-06-18T09:44:36.273515Z",
"iopub.status.idle": "2025-06-18T09:44:36.284158Z",
"shell.execute_reply": "2025-06-18T09:44:36.283672Z"
},
"papermill": {
"duration": 0.016462,
"end_time": "2025-06-18T09:44:36.284871",
"exception": false,
"start_time": "2025-06-18T09:44:36.268409",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 1/13: sfincs_build (1 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 2/13: fiat_build (1 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 3/13: wflow_build (1 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 4/13: wflow_update (1 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 5/13: wflow_run (1 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 6/13: fluvial_events (1 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 7/13: sfincs_update (5 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 8/13: sfincs_run (5 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 9/13: sfincs_downscale (5 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 10/13: sfincs_post (5 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 11/13: fiat_update (1 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 12/13: fiat_run (1 runs)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO - workflow - Dryrun rule 13/13: fiat_visualize (1 runs)\n"
]
}
],
"source": [
"# dryrun workflow. Make sure no warnings are raised\n",
"wf.dryrun()"
]
},
{
"cell_type": "markdown",
"id": "e69bfcbf",
"metadata": {
"papermill": {
"duration": 0.004981,
"end_time": "2025-06-18T09:44:36.295051",
"exception": false,
"start_time": "2025-06-18T09:44:36.290070",
"status": "completed"
},
"tags": []
},
"source": [
"The workflow can be executed using HydroFlows or a workflow engine. \n",
"To run the workflow in HydroFlows use ``wf.run()``. \n",
"To run the workflow with SnakeMake (preferred) use ``wf.to_snakemake()`` to create a snakemake file, see below.\n",
"You can then use the Snakemake CLI to execute the workflow, see the [snakemake documentation](https://snakemake.readthedocs.io/en/stable/executing/cli.html)"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "30987400",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.306094Z",
"iopub.status.busy": "2025-06-18T09:44:36.305733Z",
"iopub.status.idle": "2025-06-18T09:44:36.322028Z",
"shell.execute_reply": "2025-06-18T09:44:36.321573Z"
},
"papermill": {
"duration": 0.022638,
"end_time": "2025-06-18T09:44:36.322737",
"exception": false,
"start_time": "2025-06-18T09:44:36.300099",
"status": "completed"
},
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"cases/fluvial_risk:\n",
"- Snakefile\n",
"- rulegraph.svg\n",
"- Snakefile.config.yml\n"
]
}
],
"source": [
"# Write the workflow to a Snakefile and snakefile.config.yml\n",
"wf.to_snakemake()\n",
"\n",
"# show the files in the case directory\n",
"print(f\"{wf.root.relative_to(pwd)}:\")\n",
"for f in wf.root.iterdir():\n",
" print(f\"- {f.name}\")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "54d88fc1",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.333563Z",
"iopub.status.busy": "2025-06-18T09:44:36.333404Z",
"iopub.status.idle": "2025-06-18T09:44:36.335756Z",
"shell.execute_reply": "2025-06-18T09:44:36.335359Z"
},
"papermill": {
"duration": 0.008503,
"end_time": "2025-06-18T09:44:36.336473",
"exception": false,
"start_time": "2025-06-18T09:44:36.327970",
"status": "completed"
},
"tags": []
},
"outputs": [],
"source": [
"# uncomment to run the workflow\n",
"# import subprocess\n",
"# subprocess.run([\"snakemake\", \"-c\", \"1\"], cwd=wf.root)"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "368c19f1",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.347294Z",
"iopub.status.busy": "2025-06-18T09:44:36.346979Z",
"iopub.status.idle": "2025-06-18T09:44:36.388973Z",
"shell.execute_reply": "2025-06-18T09:44:36.388455Z"
},
"papermill": {
"duration": 0.048088,
"end_time": "2025-06-18T09:44:36.389657",
"exception": false,
"start_time": "2025-06-18T09:44:36.341569",
"status": "completed"
},
"tags": []
},
"outputs": [],
"source": [
"# Write the workflow to a cwl file and cwl config file\n",
"wf.to_cwl()"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "2e34b93d",
"metadata": {
"execution": {
"iopub.execute_input": "2025-06-18T09:44:36.400522Z",
"iopub.status.busy": "2025-06-18T09:44:36.400263Z",
"iopub.status.idle": "2025-06-18T09:44:36.402679Z",
"shell.execute_reply": "2025-06-18T09:44:36.402201Z"
},
"papermill": {
"duration": 0.008554,
"end_time": "2025-06-18T09:44:36.403347",
"exception": false,
"start_time": "2025-06-18T09:44:36.394793",
"status": "completed"
},
"tags": []
},
"outputs": [],
"source": [
"# uncomment to run the workflow with cwll\n",
"# cwltool does not by default preserve environment variables. This causes issues when running Delft-FIAT.\n",
"# Hence the extra flag to explicitly tell cwltool to preserve the PROJ_DATA environment variable\n",
"# import subprocess\n",
"# subprocess.run([\"cwltool\", \"--preserve-environment\", \"PROJ_DATA\", f\"{wf.name}.cwl\", f\"{wf.name}.config.yml\"], cwd=wf.root)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "full",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.12"
},
"papermill": {
"default_parameters": {},
"duration": 5.949837,
"end_time": "2025-06-18T09:44:37.234992",
"environment_variables": {},
"exception": null,
"input_path": "/home/runner/work/HydroFlows/HydroFlows/docs/../examples/fluvial_risk.ipynb",
"output_path": "/home/runner/work/HydroFlows/HydroFlows/docs/_examples/fluvial_risk.ipynb",
"parameters": {},
"start_time": "2025-06-18T09:44:31.285155",
"version": "2.6.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}