bluemath_tk.wrappers.swash package
Submodules
bluemath_tk.wrappers.swash.swash_example module
bluemath_tk.wrappers.swash.swash_wrapper module
- class bluemath_tk.wrappers.swash.swash_wrapper.ChySwashModelWrapper(*args, **kwargs)[source]
Bases:
SwashModelWrapper
Wrapper for the SWASH model with friction.
- build_case(case_context: dict, case_dir: str) None [source]
Build the input files for a case.
- Parameters:
case_context (dict) – The case context.
case_dir (str) – The case directory.
- default_Cf = 0.0002
- class bluemath_tk.wrappers.swash.swash_wrapper.HySwashVeggyModelWrapper(*args, **kwargs)[source]
Bases:
SwashModelWrapper
Wrapper for the SWASH model with vegetation.
- class bluemath_tk.wrappers.swash.swash_wrapper.SwashModelWrapper(*args, **kwargs)[source]
Bases:
BaseModelWrapper
Wrapper for the SWASH model. https://swash.sourceforge.io/online_doc/swashuse/swashuse.html#input-and-output-files
- default_parameters
The default parameters type for the wrapper.
- Type:
dict
- available_launchers
The available launchers for the wrapper.
- Type:
dict
- postprocess_functions
The postprocess functions for the wrapper.
- Type:
dict
- build_cases -> None
Create the case folders and render the input files.
- list_available_postprocess_vars -> List[str]
List available postprocess variables.
- _read_tabfile -> pd.DataFrame
Read a tab file and return a pandas DataFrame.
- _convert_case_output_files_to_nc -> xr.Dataset
Convert tab files to netCDF file.
- get_case_percentage_from_file -> str
Get the case percentage from the output log file.
- monitor_cases -> pd.DataFrame | dict
Monitor the cases and log relevant information.
- postprocess_case -> xr.Dataset
Convert tab output files to netCDF file.
- join_postprocessed_files -> xr.Dataset
Join postprocessed files into a single Dataset.
- find_maximas -> Tuple[np.ndarray, np.ndarray]
Find the individual maxima of an array.
- get_waterlevel -> xr.Dataset
Get water level from the output netCDF file.
- calculate_runup2 -> xr.Dataset
Calculates runup 2% (Ru2) from the output netCDF file.
- calculate_runup -> xr.Dataset
Extracts and stores the runup from the output netCDF file.
- calculate_setup -> xr.Dataset
Calculates mean setup (Msetup) from the output netCDF file.
- calculate_statistical_analysis -> xr.Dataset
Performs a zero-upcrossing analysis to obtain individual wave heights (Hi) and wave periods (Ti).
- calculate_spectral_analysis -> xr.Dataset
Performs a water level spectral analysis (scipy.signal.welch) and separates incident, infragravity, and very-low-frequency wave components.
- available_launchers = {'docker_mpi': 'docker run --rm -v .:/case_dir -w /case_dir geoocean/rocky8 mpirun -np 2 swash_mpi.exe', 'docker_serial': 'docker run --rm -v .:/case_dir -w /case_dir geoocean/rocky8 swash_serial.exe', 'geoocean-cluster': 'launchSwash.sh', 'mpi': 'mpirun -np 2 swash_mpi.exe', 'serial': 'swash_serial.exe'}
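A launcher is selected by its key in this mapping; the value is the shell command used to run the case. A minimal sketch of the lookup, reproducing the dict above (the `resolve_launcher` helper is hypothetical, for illustration only — the wrapper resolves launchers internally):

```python
# The available_launchers mapping from the docs above, reproduced for illustration.
available_launchers = {
    "serial": "swash_serial.exe",
    "mpi": "mpirun -np 2 swash_mpi.exe",
    "docker_serial": "docker run --rm -v .:/case_dir -w /case_dir geoocean/rocky8 swash_serial.exe",
    "docker_mpi": "docker run --rm -v .:/case_dir -w /case_dir geoocean/rocky8 mpirun -np 2 swash_mpi.exe",
    "geoocean-cluster": "launchSwash.sh",
}

def resolve_launcher(name: str) -> str:
    """Return the shell command for a launcher key, raising on unknown keys."""
    try:
        return available_launchers[name]
    except KeyError:
        raise ValueError(
            f"Unknown launcher {name!r}; choose from {sorted(available_launchers)}"
        ) from None

print(resolve_launcher("mpi"))  # mpirun -np 2 swash_mpi.exe
```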
- build_case(case_context: dict, case_dir: str) None [source]
Build the input files for a case.
- Parameters:
case_context (dict) – The case context.
case_dir (str) – The case directory.
- calculate_runup(case_num: int, case_dir: str, output_nc: Dataset) Dataset [source]
Extracts and stores the runup from the output netCDF file.
- Parameters:
case_num (int) – The case number.
case_dir (str) – The case directory.
output_nc (xr.Dataset) – The output netCDF file.
- Returns:
The runup.
- Return type:
xr.Dataset
- calculate_runup2(case_num: int, case_dir: str, output_nc: Dataset) Dataset [source]
Calculates runup 2% (Ru2) from the output netCDF file.
- Parameters:
case_num (int) – The case number.
case_dir (str) – The case directory.
output_nc (xr.Dataset) – The output netCDF file.
- Returns:
The runup 2% (Ru2).
- Return type:
xr.Dataset
- calculate_setup(case_num: int, case_dir: str, output_nc: Dataset) Dataset [source]
Calculates mean setup (Msetup) from the output netCDF file.
- Parameters:
case_num (int) – The case number.
case_dir (str) – The case directory.
output_nc (xr.Dataset) – The output netCDF file.
- Returns:
The mean setup (Msetup).
- Return type:
xr.Dataset
- calculate_spectral_analysis(case_num: int, case_dir: str, output_nc: Dataset) Dataset [source]
Performs a water level spectral analysis (scipy.signal.welch) and separates incident, infragravity, and very-low-frequency wave components.
- Parameters:
case_num (int) – The case number.
case_dir (str) – The case directory.
output_nc (xr.Dataset) – The output netCDF file.
- Returns:
The spectral analysis.
- Return type:
xr.Dataset
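The band-separation idea behind this method can be sketched with `scipy.signal.welch` on a synthetic record. The band limits below are typical nearshore conventions, not necessarily the wrapper's own cutoffs:

```python
import numpy as np
from scipy.signal import welch

# Synthetic 30-minute water-level record sampled at 4 Hz: a 10 s incident wave
# plus a weaker 100 s infragravity oscillation.
fs = 4.0
t = np.arange(0, 1800, 1 / fs)
eta = 0.5 * np.sin(2 * np.pi * t / 10) + 0.1 * np.sin(2 * np.pi * t / 100)

f, S = welch(eta, fs=fs, nperseg=2048)
df = f[1] - f[0]

# Band limits in Hz (assumed for illustration).
bands = {"incident": (0.04, 1.0), "infragravity": (0.004, 0.04), "vlf": (0.0, 0.004)}
Hm0 = {}
for name, (lo, hi) in bands.items():
    mask = (f >= lo) & (f < hi)
    m0 = np.sum(S[mask]) * df      # zeroth spectral moment of the band
    Hm0[name] = 4 * np.sqrt(m0)    # significant wave height of the band

print(Hm0)
```

For the synthetic record above, the incident band recovers Hm0 ≈ 4·sqrt(0.5²/2) ≈ 1.41 m and the infragravity band a much smaller value.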
- calculate_statistical_analysis(case_num: int, case_dir: str, output_nc: Dataset) Dataset [source]
Performs a zero-upcrossing analysis to obtain individual wave heights (Hi) and wave periods (Ti).
- Parameters:
case_num (int) – The case number.
case_dir (str) – The case directory.
output_nc (xr.Dataset) – The output netCDF file.
- Returns:
The statistical analysis.
- Return type:
xr.Dataset
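A minimal numpy sketch of the zero-upcrossing idea: split the demeaned surface-elevation series at upward zero crossings, then take a height and period per segment. The wrapper's own implementation may differ in detail:

```python
import numpy as np

def zero_upcrossing(t: np.ndarray, eta: np.ndarray):
    """Split a demeaned elevation series at zero up-crossings and
    return individual wave heights Hi and periods Ti."""
    eta = eta - eta.mean()
    # Indices where the signal crosses zero going upward.
    up = np.where((eta[:-1] < 0) & (eta[1:] >= 0))[0]
    Hi, Ti = [], []
    for i0, i1 in zip(up[:-1], up[1:]):
        segment = eta[i0:i1 + 1]
        Hi.append(segment.max() - segment.min())
        Ti.append(t[i1] - t[i0])
    return np.array(Hi), np.array(Ti)

# Monochromatic check: a 1 m-amplitude, 8 s wave gives Hi ~ 2 m, Ti ~ 8 s.
t = np.arange(0, 120, 0.1)
eta = np.sin(2 * np.pi * t / 8)
Hi, Ti = zero_upcrossing(t, eta)
print(Hi.mean(), Ti.mean())
```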
- default_parameters = {'Hs': {'description': 'Significant wave height.', 'type': <class 'float'>, 'value': None}, 'Hs_L0': {'description': 'Wave height at deep water.', 'type': <class 'float'>, 'value': None}, 'WL': {'description': 'Water level.', 'type': <class 'float'>, 'value': None}, 'comptime': {'description': 'The computational time.', 'type': <class 'int'>, 'value': 180}, 'deltat': {'description': 'The time step.', 'type': <class 'int'>, 'value': 1}, 'dxinp': {'description': 'The input spacing.', 'type': <class 'float'>, 'value': 1.0}, 'gamma': {'description': 'The gamma parameter.', 'type': <class 'int'>, 'value': 2}, 'n_nodes_per_wavelength': {'description': 'The number of nodes per wavelength.', 'type': <class 'int'>, 'value': 60}, 'vegetation_height': {'description': 'The vegetation height.', 'type': <class 'float'>, 'value': None}, 'warmup': {'description': 'The warmup time.', 'type': <class 'int'>, 'value': 0}}
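Each entry pairs a type, a default value (`None` means the user must supply it), and a description. A sketch of how user overrides might be merged onto these defaults — the `build_context` helper is hypothetical; the wrapper handles this internally:

```python
# The default parameter table from the docs above, trimmed to a few entries.
default_parameters = {
    "Hs": {"type": float, "value": None, "description": "Significant wave height."},
    "WL": {"type": float, "value": None, "description": "Water level."},
    "comptime": {"type": int, "value": 180, "description": "The computational time."},
    "deltat": {"type": int, "value": 1, "description": "The time step."},
}

def build_context(overrides: dict) -> dict:
    """Merge user overrides onto the defaults, type-checking each value."""
    context = {}
    for name, spec in default_parameters.items():
        value = overrides.get(name, spec["value"])
        if value is not None and not isinstance(value, spec["type"]):
            raise TypeError(f"{name} must be {spec['type'].__name__}")
        context[name] = value
    return context

print(build_context({"Hs": 1.5, "WL": 0.3}))
```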
- find_maximas(x: ndarray) Tuple[ndarray, ndarray] [source]
Find the individual maxima of an array.
- Parameters:
x (np.ndarray) – The array (should be the water level time series).
- Returns:
The peaks and the values of the peaks.
- Return type:
Tuple[np.ndarray, np.ndarray]
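The peaks-and-values return shape matches what a standard peak finder produces; `scipy.signal.find_peaks` is one common way to do this (the wrapper's implementation may differ):

```python
import numpy as np
from scipy.signal import find_peaks

# A water-level series with two crests (10 s period over 20 s).
t = np.linspace(0, 20, 2001)
eta = np.sin(2 * np.pi * t / 10)

# find_peaks returns the peak indices and a properties dict; indexing eta
# with the peaks gives the values, mirroring find_maximas' return pair.
peaks, _ = find_peaks(eta)
print(peaks, eta[peaks])
```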
- get_case_percentage_from_file(output_log_file: str) str [source]
Get the case percentage from the output log file.
- Parameters:
output_log_file (str) – The output log file.
- Returns:
The case percentage.
- Return type:
str
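Progress is scraped from the case's output log. A sketch of the parsing idea — the `NN% completed`-style line format here is an assumption for illustration, not the actual SWASH log layout:

```python
import re

def parse_percentage(log_text: str) -> str:
    """Return the last progress percentage found in log text as a string.
    The '% completed' line format is assumed for illustration."""
    matches = re.findall(r"(\d+(?:\.\d+)?)\s*%", log_text)
    return matches[-1] + " %" if matches else "0 %"

sample = "Time step 100\n+time 00:10:00, 25% completed\n+time 00:20:00, 50% completed\n"
print(parse_percentage(sample))  # 50 %
```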
- join_postprocessed_files(postprocessed_files: List[Dataset]) Dataset [source]
Join postprocessed files in a single Dataset.
- Parameters:
postprocessed_files (list) – The postprocessed files.
- Returns:
The joined Dataset.
- Return type:
xr.Dataset
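Joining per-case datasets along a new case dimension can be sketched with `xarray.concat`; the dimension name `case_num` is assumed here for illustration:

```python
import numpy as np
import xarray as xr

# Three per-case datasets sharing the same variable and coordinates.
cases = [
    xr.Dataset({"Ru2": ("t", np.random.rand(5))}, coords={"t": np.arange(5)})
    for _ in range(3)
]

# Concatenate along a new 'case_num' dimension.
joined = xr.concat(cases, dim="case_num")
print(joined.sizes)
```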
- list_available_postprocess_vars() List[str] [source]
List available postprocess variables.
- Returns:
The available postprocess variables.
- Return type:
List[str]
- monitor_cases(value_counts: str = None) DataFrame | dict [source]
Monitor the cases based on the model log files and return their status.
- postprocess_case(case_num: int, case_dir: str, output_vars: List[str] = None, overwrite_output: bool = True, overwrite_output_postprocessed: bool = True, remove_tab: bool = False, remove_nc: bool = False) Dataset [source]
Convert tab output files to netCDF file.
- Parameters:
case_num (int) – The case number.
case_dir (str) – The case directory.
output_vars (list, optional) – The output variables to postprocess. Default is None.
overwrite_output (bool, optional) – Overwrite the output.nc file. Default is True.
overwrite_output_postprocessed (bool, optional) – Overwrite the output_postprocessed.nc file. Default is True.
remove_tab (bool, optional) – Remove the tab files. Default is False.
remove_nc (bool, optional) – Remove the netCDF file. Default is False.
- Returns:
The postprocessed Dataset.
- Return type:
xr.Dataset
- postprocess_functions = {'Hfreqs': 'calculate_spectral_analysis', 'Hrms': 'calculate_statistical_analysis', 'Msetup': 'calculate_setup', 'Ru2': 'calculate_runup2', 'Runlev': 'calculate_runup'}
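Each output variable maps to the name of the wrapper method that computes it, so postprocessing can dispatch by string. A sketch of that dispatch with a stand-in object (`FakeWrapper` and `dispatch` are illustrative, not part of the API):

```python
# The postprocess_functions mapping from the docs above.
postprocess_functions = {
    "Ru2": "calculate_runup2",
    "Runlev": "calculate_runup",
    "Msetup": "calculate_setup",
    "Hrms": "calculate_statistical_analysis",
    "Hfreqs": "calculate_spectral_analysis",
}

class FakeWrapper:
    """Stand-in showing how a variable name resolves to a method."""
    def calculate_runup2(self):
        return "Ru2 computed"

def dispatch(wrapper, var: str):
    # Look up the method name for the variable, then call it on the wrapper.
    return getattr(wrapper, postprocess_functions[var])()

print(dispatch(FakeWrapper(), "Ru2"))  # Ru2 computed
```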