Pyclaw Input/Output Package

Pyclaw supports the following input and output formats:

  • ASCII - ASCII file I/O, supports traditional clawpack format files

  • HDF5 - HDF5 file I/O

  • NetCDF - NetCDF file I/O, support for NetCDF3 and NetCDF4 files

Each module contains two main routines, read_<format> and write_<format>, which Solution can call with the appropriate <format>. In order to create a new file I/O extension, the calling signatures must match the ones below; a skeleton sketch follows the parameter lists.

read_<format>(solution,frame,path,file_prefix,read_aux,options)
where the inputs are
Input:
  • solution - (Solution) Pyclaw object to read the data into

  • frame - (int) Frame number

  • path - (string) Root path

  • file_prefix - (string) Prefix for the file name.

  • read_aux - (bool) Boolean controlling whether the associated auxiliary array should be read in.

  • options - (dict) Optional argument dictionary

and

write_<format>(solution,frame,path,file_prefix,write_aux,options)
where the inputs are
Input:
  • solution - (Solution) Pyclaw object to be output

  • frame - (int) Frame number

  • path - (string) Root path

  • file_prefix - (string) Prefix for the file name.

  • write_aux - (bool) Boolean controlling whether the associated auxiliary array should be written out.

  • options - (dict) Optional argument dictionary.

Note that both allow for an options dictionary that is format specific and should be documented thoroughly. For examples of this usage, see the HDF5 and NetCDF modules.
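As a rough sketch (the format name myformat and the function bodies are hypothetical; only the calling signatures above are prescribed), a new extension module might look like:

    # Hypothetical skeleton for a new PyClaw file I/O extension.  The format
    # name "myformat" and the function bodies are illustrative; only the
    # calling signatures are prescribed by the interface described above.

    def read_myformat(solution, frame, path='./', file_prefix='claw',
                      read_aux=False, options={}):
        r"""Read frame number `frame` from `path` into `solution`."""
        # Open the file(s) for this frame and populate solution's patches,
        # states, and q arrays (and aux arrays if read_aux is True).
        raise NotImplementedError("myformat reader not written yet")


    def write_myformat(solution, frame, path, file_prefix='claw',
                       write_aux=False, options={}):
        r"""Write `solution` out as frame number `frame` in myformat."""
        # Serialize solution's states and patches under path, including aux
        # arrays if write_aux is True; format-specific behavior is
        # controlled through the options dictionary.
        raise NotImplementedError("myformat writer not written yet")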

HDF5 and NetCDF support require the corresponding libraries to be installed; please see the respective modules for details on how to obtain and install them.

Note

Pyclaw automatically detects the availability of HDF5 and NetCDF file support and will warn you if you try to use them without the proper libraries.

pyclaw.fileio.ascii

Routines for reading and writing an ascii output file

clawpack.pyclaw.fileio.ascii.read(solution, frame, path='./', file_prefix='fort', read_aux=False, options={})

Read in a frame of ascii formatted files, and enter the data into the solution object.

This routine reads the ascii formatted files corresponding to the classic clawpack format ‘fort.txxxx’, ‘fort.qxxxx’, and ‘fort.axxxx’ or ‘fort.aux’. Note that the ‘fort’ prefix can be changed.

Input:
  • solution - (Solution) Solution object to read the data into.

  • frame - (int) Frame number to be read in

  • path - (string) Path to the current directory of the file

  • file_prefix - (string) Prefix of the files to be read in. default = 'fort'

  • read_aux - (bool) Whether to attempt to read in an associated auxiliary file. default = False

  • options - (dict) Dictionary of optional arguments dependent on the format being read in. default = {}
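A brief usage sketch (the frame number and the ./_output directory are illustrative and assume a classic-format run has already produced the fort.* files there):

    from clawpack import pyclaw
    from clawpack.pyclaw.fileio import ascii

    # Read frame 10 of classic Clawpack ASCII output into an empty Solution;
    # the path and frame number are illustrative.
    solution = pyclaw.Solution()
    ascii.read(solution, 10, path='./_output', file_prefix='fort')

    print(solution.t, solution.state.q.shape)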

clawpack.pyclaw.fileio.ascii.read_array(f, state, num_var)

Read in an array from an ASCII output file f.

The variable q here may in fact refer to q or to aux.

This routine supports the possibility that the values q[:,i,j,k] (for a fixed i,j,k) have been split over multiple lines, because some packages write just 4 values per line. For Clawpack 6.0, we plan to make all packages write q[:,i,j,k] on a single line. This routine can then be simplified.

clawpack.pyclaw.fileio.ascii.read_patch_header(f, num_dim)

Read header describing the next patch

Input:
  • f - (file) Handle to open file

  • num_dim - (int) Number of dimensions

Output:
  • patch - (clawpack.pyclaw.geometry.Patch) Initialized patch represented by the header data.

clawpack.pyclaw.fileio.ascii.read_t(frame, path='./', file_prefix='fort')

Read only the fort.t file and return the data.

Note this file is always ascii and now contains a line that tells the file_format, so we can read this file before importing the appropriate read function for the solution data.

For backward compatibility, if the file_format line is missing then None is returned for it, and the caller is expected to handle this case.

This version also reads in num_ghost so that if the data is binary, we can extract only the data that’s relevant (since ghost cells are included).

Input:
  • frame - (int) Frame number to be read in

  • path - (string) Path to the current directory of the file

  • file_prefix - (string) Prefix of the files to be read in. default = 'fort'

Output: (returned in the order listed below)
  • t - (float) Time of frame

  • num_eqn - (int) Number of equations in the frame

  • nstates - (int) Number of states

  • num_aux - (int) Number of auxiliary variables in the frame

  • num_dim - (int) Number of dimensions in q and aux

  • num_ghost - (int) Number of ghost cells on each side

  • file_format - (str) ‘ascii’, ‘binary32’, ‘binary64’
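A short sketch of peeking at a frame header before reading the full data (the path is illustrative; the unpacking order follows the output list above):

    from clawpack.pyclaw.fileio.ascii import read_t

    # Inspect the header of frame 0; the unpacking order follows the
    # documented outputs above.
    t, num_eqn, nstates, num_aux, num_dim, num_ghost, file_format = \
        read_t(0, path='./_output', file_prefix='fort')

    if file_format is None:
        # Older output without a file_format line; assume classic ASCII.
        file_format = 'ascii'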

clawpack.pyclaw.fileio.ascii.write(solution, frame, path, file_prefix='fort', write_aux=False, options={}, write_p=False)

Write out ascii data file

Write out an ascii file formatted identically to the Fortran Clawpack files, including writing out fort.t, fort.q, and fort.aux if necessary. Note that some parameters are assumed to be the same for every patch in this format, which is not necessarily true for the actual data objects. Make sure that if you use this output format, all of your patches share the appropriate values of num_dim, num_eqn, num_aux, and t. Only supports up to 3 dimensions.

Input:
  • solution - (Solution) Pyclaw object to be output.

  • frame - (int) Frame number

  • path - (string) Root path

  • file_prefix - (string) Prefix for the file name. default = 'fort'

  • write_aux - (bool) Boolean controlling whether the associated auxiliary array should be written out. default = False

  • options - (dict) Dictionary of optional arguments dependent on the format being written. default = {}
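A minimal sketch that builds a small 1D solution and writes it out in this format (the domain, data, and output path are all illustrative):

    import os
    import numpy as np
    from clawpack import pyclaw
    from clawpack.pyclaw.fileio import ascii

    # Build a small 1D solution purely for illustration.
    x = pyclaw.Dimension(0.0, 1.0, 100, name='x')
    domain = pyclaw.Domain([x])
    state = pyclaw.State(domain, num_eqn=1)
    state.q[0, :] = np.sin(2 * np.pi * state.grid.x.centers)
    solution = pyclaw.Solution(state, domain)

    # Write frame 0 as fort.t0000 / fort.q0000 under ./_output.
    os.makedirs('./_output', exist_ok=True)
    ascii.write(solution, 0, './_output', file_prefix='fort')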

clawpack.pyclaw.fileio.ascii.write_array(f, patch, q)

Write a single array to output file f as ASCII text.

The variable q here may in fact refer to q or to aux.

pyclaw.fileio.hdf5

Routines for reading and writing an HDF5 output file

This module reads and writes hdf5 files via either of the following modules:

  • h5py - http://code.google.com/p/h5py/

  • PyTables - http://www.pytables.org/moin

It will first try h5py and then PyTables and use the correct calls according to whichever is present on the system. We recommend that you use h5py as it is a minimal wrapper to the HDF5 library.

To install either, you must also install the hdf5 library from the website:

http://www.hdfgroup.org/HDF5/release/obtain5.html
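The fallback order described above can be checked from user code; a rough sketch (this is not the module's actual implementation) is:

    # Rough sketch of the "h5py first, then PyTables" fallback described
    # above; not the actual module code.
    try:
        import h5py
        hdf5_backend = 'h5py'
    except ImportError:
        try:
            import tables  # PyTables
            hdf5_backend = 'pytables'
        except ImportError:
            raise ImportError("Neither h5py nor PyTables is available; "
                              "install one of them (plus the HDF5 library) "
                              "to use pyclaw.fileio.hdf5")

    print("Using HDF5 backend:", hdf5_backend)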

clawpack.pyclaw.fileio.hdf5.read(solution, frame, path='./', file_prefix='claw', read_aux=True, options={})

Read in an HDF5 file into a Solution

Input:
  • solution - (Solution) Pyclaw object to read the data into

  • frame - (int) Frame number

  • path - (string) Root path

  • file_prefix - (string) Prefix for the file name. default = 'claw'

  • read_aux - (bool) Boolean controlling whether the associated auxiliary array should be read in. default = True

  • options - (dict) Optional argument dictionary, not used for reading.
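A short usage sketch, assuming frame 0 was previously written by the HDF5 writer into ./_output (path, prefix, and frame number are illustrative):

    from clawpack import pyclaw
    from clawpack.pyclaw.fileio import hdf5

    # Load a previously written HDF5 frame into an empty Solution;
    # path and frame number are illustrative.
    solution = pyclaw.Solution()
    hdf5.read(solution, 0, path='./_output', file_prefix='claw',
              read_aux=False)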

clawpack.pyclaw.fileio.hdf5.write(solution, frame, path, file_prefix='claw', write_aux=False, options={}, write_p=False)

Write out a Solution to an HDF5 file.

Input:
  • solution - (Solution) Pyclaw solution object to be output

  • frame - (int) Frame number

  • path - (string) Root path

  • file_prefix - (string) Prefix for the file name. default = 'claw'

  • write_aux - (bool) Boolean controlling whether the associated auxiliary array should be written out. default = False

  • options - (dict) Optional argument dictionary, see HDF5 Option Table

HDF5 Option Table

  • compression - (None, string [“gzip” | “lzf” | “szip”] or int 0-9) Enable dataset compression. DEFLATE, LZF and (where available) SZIP are supported. An integer is interpreted as a GZIP level for backwards compatibility.

  • compression_opts - (None, or special value) Setting for the compression filter; legal values for each filter type are: gzip - (int) 0-9; lzf - None allowed; szip - (tuple) 2-tuple (‘ec’|’nn’, even integer 0-32). See the filters module for a detailed description of each of these filters.

  • chunks - (None, True or shape tuple) Store the dataset in chunked format. Automatically selected if any of the other keyword options are given. If you don’t provide a shape tuple, the library will guess one for you.

  • shuffle - (True/False) Enable/disable data shuffling, which can improve compression performance. Automatically enabled when compression is used.

  • fletcher32 - (True/False) Enable Fletcher32 error detection; may be used with or without compression.
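For example, a hedged sketch of passing compression settings through the options dictionary when writing (the solution construction and all option values are illustrative):

    import os
    import numpy as np
    from clawpack import pyclaw
    from clawpack.pyclaw.fileio import hdf5

    # Small 1D solution built only for illustration.
    domain = pyclaw.Domain([pyclaw.Dimension(0.0, 1.0, 100, name='x')])
    state = pyclaw.State(domain, num_eqn=1)
    state.q[0, :] = np.cos(2 * np.pi * state.grid.x.centers)
    solution = pyclaw.Solution(state, domain)

    # Write frame 0 with gzip compression; the option values are
    # illustrative and correspond to the table above.
    os.makedirs('./_output', exist_ok=True)
    hdf5.write(solution, 0, './_output', file_prefix='claw',
               options={'compression': 'gzip',
                        'compression_opts': 5,
                        'shuffle': True})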

pyclaw.fileio.netcdf

Routines for reading and writing a NetCDF output file

Routines for reading and writing a NetCDF output file via either the netcdf4-python module or the pure Python pupynere module.

These interfaces are very similar, so if a different module needs to be used, it can most likely be inserted with minimal effort.

This module will first try to import the netcdf4-python module, which is based on the compiled netCDF libraries, and failing that will attempt to import the pure Python interface pupynere, which requires no compiled libraries.

To install the netCDF 4 library, please see:

http://www.unidata.ucar.edu/software/netcdf/

Authors:

Kyle T. Mandli (2009-02-17) Initial version

clawpack.pyclaw.fileio.netcdf.read(solution, frame, path='./', file_prefix='claw', read_aux=True, options={})

Read in a NetCDF data file into the solution

Input:
  • solution - (Solution) Pyclaw object to read the data into

  • frame - (int) Frame number

  • path - (string) Root path

  • file_prefix - (string) Prefix for the file name. default = 'claw'

  • read_aux - (bool) Boolean controlling whether the associated auxiliary array should be read in. default = True

  • options - (dict) Optional argument dictionary, unused for reading.
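A short usage sketch; the direct module call mirrors the ASCII and HDF5 readers, and the second route assumes the Solution constructor dispatches on a file_format keyword (path and frame number are illustrative):

    from clawpack import pyclaw
    from clawpack.pyclaw.fileio import netcdf

    # Direct call to the module-level reader.
    solution = pyclaw.Solution()
    netcdf.read(solution, 0, path='./_output', file_prefix='claw')

    # Higher-level route through the Solution constructor (assumed to
    # dispatch on the file_format keyword).
    solution = pyclaw.Solution(0, path='./_output', file_format='netcdf')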

clawpack.pyclaw.fileio.netcdf.write(solution, frame, path, file_prefix='claw', write_aux=False, options={}, write_p=False)

Write out a NetCDF data file representation of solution

Input:
  • solution - (Solution) Pyclaw object to be output

  • frame - (int) Frame number

  • path - (string) Root path

  • file_prefix - (string) Prefix for the file name. default = 'claw'

  • write_aux - (bool) Boolean controlling whether the associated auxiliary array should be written out. default = False

  • options - (dict) Optional argument dictionary, see NetCDF Option Table

NetCDF Option Table

  • description - Dictionary of key/value pairs that will be attached to the root group as attributes, e.g. {‘time’: 3}

  • format - Can be one of the following netCDF flavors: NETCDF3_CLASSIC, NETCDF3_64BIT, NETCDF4_CLASSIC, and NETCDF4. default = NETCDF4

  • clobber - If True (default), the file will be overwritten; if False, an exception will be raised.

  • zlib - If True, data assigned to the Variable instance is compressed on disk. default = False

  • complevel - The level of zlib compression to use (1 is the fastest but gives the poorest compression, 9 is the slowest but gives the best compression). Ignored if zlib=False. default = 6

  • shuffle - If True, the HDF5 shuffle filter is applied to improve compression. Ignored if zlib=False. default = True

  • fletcher32 - If True (default False), the Fletcher32 checksum algorithm is used for error detection.

  • contiguous - If True (default False), the variable data is stored contiguously on disk. Setting to True for a variable with an unlimited dimension will trigger an error.

  • chunksizes - Can be used to specify the HDF5 chunk sizes for each dimension of the variable. A detailed discussion of HDF chunking and I/O performance is available in the netCDF4 and HDF5 documentation. Basically, you want the chunk size for each dimension to match as closely as possible the size of the data block that users will read from the file. chunksizes cannot be set if contiguous=True.

  • least_significant_digit - If specified, variable data will be truncated (quantized). In conjunction with zlib=True this produces ‘lossy’, but significantly more efficient, compression. For example, if least_significant_digit=1, data will be quantized using around(scale*data)/scale, where scale = 2**bits, and bits is determined so that a precision of 0.1 is retained (in this case bits=4). default = None (no quantization).

  • endian - Can be used to control whether the data is stored in little or big endian format on disk. Possible values are little, big or native (default). The library will automatically handle endian conversions when the data is read, but if the data is always going to be read on a computer with the opposite endianness from the one used to create the file, there may be some performance advantage to be gained by setting the endianness.

  • fill_value - If specified, the default netCDF _FillValue (the value that the variable gets filled with before any data is written to it) is replaced with this value. If fill_value is set to False, then the variable is not pre-filled.

Note

The zlib, complevel, shuffle, fletcher32, contiguous, chunksizes and endian keywords are silently ignored for netCDF 3 files that do not use HDF5.
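As an example, a hedged sketch of writing a frame as a NETCDF4 file with zlib compression enabled (the solution construction and all option values are illustrative):

    import os
    import numpy as np
    from clawpack import pyclaw
    from clawpack.pyclaw.fileio import netcdf

    # Small 1D solution built only for illustration.
    domain = pyclaw.Domain([pyclaw.Dimension(0.0, 1.0, 100, name='x')])
    state = pyclaw.State(domain, num_eqn=1)
    state.q[0, :] = np.exp(-50.0 * (state.grid.x.centers - 0.5)**2)
    solution = pyclaw.Solution(state, domain)

    # Write frame 0 as NETCDF4 with zlib compression; the option values
    # are illustrative and map to the table above.
    os.makedirs('./_output', exist_ok=True)
    netcdf.write(solution, 0, './_output', file_prefix='claw',
                 options={'format': 'NETCDF4',
                          'zlib': True,
                          'complevel': 4,
                          'description': {'run_name': 'demo'}})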