Changeset 11316 for NEMO/trunk/doc/latex
- Timestamp: 2019-07-19T19:20:02+02:00
- Location: NEMO/trunk/doc/latex/NEMO/subfiles
- Files: 2 edited
NEMO/trunk/doc/latex/NEMO/subfiles/chap_ASM.tex
(r11151 -> r11316)

 \label{chap:ASM}
 
-Authors: D. Lea, M. Martin, K. Mogensen, A. Weaver, ... % do we keep
+\minitoc
 
-\minitoc
+\vfill
+\begin{figure}[b]
+  \subsubsection*{Changes record}
+  \begin{tabular}{l||l|m{0.65\linewidth}}
+    Release & Author & Modifications \\
+    {\em 4.0} & {\em D. J. Lea} & {\em \NEMO 4.0 updates} \\
+    {\em 3.4} & {\em D. J. Lea, M. Martin, K. Mogensen, A. Weaver} & {\em Initial version} \\
+  \end{tabular}
+\end{figure}
 
 \newpage
 
 The ASM code adds the functionality to apply increments to the model variables: temperature, salinity,
 sea surface height, velocity and sea ice concentration.
 These are read into the model from a NetCDF file which may be produced by separate data assimilation code.
 The code can also output model background fields which are used as an input to data assimilation code.

[...]

 Typically the increments are spread evenly over the full window.
 In addition, two different weighting functions have been implemented.
-The first function employs constant weights,
+The first function (namelist option \np{niaufn} = 0) employs constant weights,
 \begin{align}
   \label{eq:F1_i}
   [...]
       0 & {\mathrm if} \; \; \; t_{i} > t_{n}
     \end{array}
   \right.
 \end{align}
 where $M = m-n$.
-The second function employs peaked hat-like weights in order to give maximum weight in the centre of the sub-window,
+The second function (namelist option \np{niaufn} = 1) employs peaked hat-like weights in order to give maximum weight in the centre of the sub-window,
 with the weighting reduced linearly to a small value at the window end-points:
 \begin{align}
   [...]
 \end{align}
 where $\alpha^{-1} = \sum_{i=1}^{M/2} 2i$ and $M$ is assumed to be even.
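The two weighting options referred to above (\np{niaufn} = 0 and 1) can be sketched in a few lines. The body of the hat-shaped function is elided in this diff, so the triangular form below is an assumption built only from the visible normalisation $\alpha^{-1} = \sum_{i=1}^{M/2} 2i$; the function name `iau_weights` is invented for this sketch and the real implementation is the Fortran in NEMO's ASM module.

```python
def iau_weights(M, niaufn=0):
    """Incremental Analysis Update weights over an M-step window.

    niaufn = 0 : constant weights 1/M at every step.
    niaufn = 1 : hat-shaped weights, largest at the window centre and
                 decreasing linearly towards the end-points (M even);
                 normalised with alpha^{-1} = sum_{i=1}^{M/2} 2i so
                 the weights sum to 1 (assumed triangular form).
    """
    if niaufn == 0:
        return [1.0 / M] * M
    alpha = 1.0 / sum(2 * i for i in range(1, M // 2 + 1))
    return [alpha * min(i, M - i + 1) for i in range(1, M + 1)]
```

In both cases the weights integrate to one over the window, so the total increment applied is the same; only its distribution in time differs.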
 The weights described by \autoref{eq:F2_i} provide a smoother transition of the analysis trajectory from
 one assimilation cycle to the next than that described by \autoref{eq:F1_i}.

[...]

 \label{sec:ASM_div_dmp}
 
-The velocity increments may be initialized by the iterative application of a divergence damping operator.
-In iteration step $n$ new estimates of velocity increments $u^{n}_I$ and $v^{n}_I$ are updated by:
+It is quite challenging for data assimilation systems to provide non-divergent velocity increments.
+Applying divergent velocity increments will likely cause spurious vertical velocities in the model.
+This section describes a method to take velocity increments provided to \NEMO ($u^0_I$ and $v^0_I$) and
+adjust them by the iterative application of a divergence damping operator.
+The method is also described in \citet{dobricic.pinardi.ea_OS07}.
+
+In iteration step $n$ (starting at $n=1$) new estimates of velocity increments $u^{n}_I$ and $v^{n}_I$ are updated by:
+
 \begin{equation}
   \label{eq:asm_dmp}
   [...]
 \end{equation}
-where
+
+where the divergence is defined as
+
 \[
   % \label{eq:asm_div}
   \chi^{n-1}_I = \frac{1}{e_{1t}\,e_{2t}\,e_{3t}}
                  \left( {\delta_i \left[ {e_{2u}\,e_{3u}\,u^{n-1}_I} \right]
                         +\delta_j \left[ {e_{1v}\,e_{3v}\,v^{n-1}_I} \right]} \right).
 \]
-By the application of \autoref{eq:asm_dmp} and \autoref{eq:asm_dmp} the divergence is filtered in each iteration,
+
+By the application of \autoref{eq:asm_dmp} the divergence is filtered in each iteration,
 and the vorticity is left unchanged.
 In the presence of coastal boundaries with zero velocity increments perpendicular to the coast

[...]

 The divergence damping is activated by assigning to \np{nn\_divdmp} in the \textit{nam\_asminc} namelist
 a value greater than zero.
-By choosing this value to be of the order of 100 the increments in
-the vertical velocity will be significantly reduced.
+This specifies the number of iterations of the divergence damping.
+Setting a value of the order of 100 will result in a significant reduction in the vertical velocity induced by the increments.
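The update equation itself (\autoref{eq:asm_dmp}) is elided in this diff, so the sketch below assumes the common form in which each iteration adds the gradient of the divergence to the increments; grid metrics ($e_{1u}$, $e_{2v}$, ...) and land masking are omitted, and the grid is taken to be uniform and doubly periodic. The function name `damp_divergence` is invented for the sketch.

```python
import numpy as np

def damp_divergence(u, v, n_iter=100, coeff=0.2):
    """Iteratively damp the divergence of 2D velocity increments (u, v).

    Minimal sketch on a uniform, doubly periodic C-like grid with unit
    spacing.  Each iteration adds the gradient of the divergence to
    (u, v), so the divergence obeys a discrete diffusion equation and
    decays, while the discrete vorticity is left exactly unchanged.
    """
    u, v = u.copy(), v.copy()
    for _ in range(n_iter):
        # divergence chi at cell centres (backward differences)
        chi = (u - np.roll(u, 1, axis=1)) + (v - np.roll(v, 1, axis=0))
        # add the gradient of chi at the velocity points (forward differences)
        u += coeff * (np.roll(chi, -1, axis=1) - chi)
        v += coeff * (np.roll(chi, -1, axis=0) - chi)
    return u, v
```

With a coefficient below the 2D diffusion stability limit of 0.25, of the order of 100 iterations removes most of the divergent part of a random increment field, consistent with the recommended \np{nn\_divdmp} value above.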
[...]

 \label{sec:ASM_details}
 
-Here we show an example \ngn{namasm} namelist and the header of an example assimilation increments file on
+Here we show an example \ngn{nam\_asminc} namelist and the header of an example assimilation increments file on
 the ORCA2 grid.
 
-%------------------------------------------namasm-----------------------------------------------------
+%------------------------------------------nam_asminc-----------------------------------------------------
 %
 \nlst{nam_asminc}
NEMO/trunk/doc/latex/NEMO/subfiles/chap_OBS.tex
(r11151 -> r11316)

 \label{chap:OBS}
 
-Authors: D. Lea, M. Martin, K. Mogensen, A. Vidard, A. Weaver, A. Ryan, ... % do we keep that ?
-
 \minitoc
 
+\vfill
+\begin{figure}[b]
+  \subsubsection*{Changes record}
+  \begin{tabular}{l||l|m{0.65\linewidth}}
+    Release & Author & Modifications \\
+    {\em 4.0} & {\em D. J. Lea} & {\em \NEMO 4.0 updates} \\
+    {\em 3.6} & {\em M. Martin, A. Ryan} & {\em Add averaging operator, standalone obs oper} \\
+    {\em 3.4} & {\em D. J. Lea, M. Martin, ...} & {\em Initial version} \\
+    {\em --\texttt{"}--} & {\em ... K. Mogensen, A. Vidard, A. Weaver} & {\em ---\texttt{"}---} \\
+  \end{tabular}
+\end{figure}
+
 \newpage
 
-The observation and model comparison code (OBS) reads in observation files
+The observation and model comparison code, the observation operator (OBS), reads in observation files
 (profile temperature and salinity, sea surface temperature, sea level anomaly, sea ice concentration, and velocity) and calculates an interpolated model equivalent value at the observation location and nearest model time step.
 The resulting data are saved in a ``feedback'' file (or files).
 The code was originally developed for use with the NEMOVAR data assimilation code,
 but can be used for validation or verification of the model or with any other data assimilation system.
 The OBS code is called from \mdl{nemogcm} for model initialisation and to calculate the model equivalent values for observations on the 0th time step.
 The code is then called again after each time step from \mdl{step}.
-The code is only activated if the namelist logical \np{ln\_diaobs} is set to true.
+The code is only activated if the \ngn{namobs} namelist logical \np{ln\_diaobs} is set to true.
 
 For all data types a 2D horizontal interpolator or averager is needed to
[...]
 For {\em in situ} profiles, a 1D vertical interpolator is needed in addition to
 provide model fields at the observation depths.
 This now works in a generalised vertical coordinate system.
 
 Some profile observation types (\eg tropical moored buoys) are made available as daily averaged quantities.
[...]
 the observation operator code can calculate equivalent night-time average model SST fields by
 setting the namelist value \np{ln\_sstnight} to true.
-Otherwise the model value from the nearest timestep to the observation time is used.
-
-The code is controlled by the namelist \textit{namobs}.
+Otherwise (by default) the model value from the nearest time step to the observation time is used.
+
+The code is controlled by the namelist \ngn{namobs}.
 See the following sections for more details on setting up the namelist.
 
-\autoref{sec:OBS_example} introduces a test example of the observation operator code including
+In \autoref{sec:OBS_example} a test example of the observation operator code is introduced, including
 where to obtain data and how to setup the namelist.
-\autoref{sec:OBS_details} introduces some more technical details of the different observation types used and
-also shows a more complete namelist.
-\autoref{sec:OBS_theory} introduces some of the theoretical aspects of the observation operator including
-interpolation methods and running on multiple processors.
-\autoref{sec:OBS_ooo} describes the offline observation operator code.
-\autoref{sec:OBS_obsutils} introduces some utilities to help working with the files produced by the OBS code.
+In \autoref{sec:OBS_details} some more technical details of the different observation types used are introduced, and we
+also show a more complete namelist.
+In \autoref{sec:OBS_theory} some of the theoretical aspects of the observation operator are described including
+interpolation methods and running on multiple processors.
+In \autoref{sec:OBS_sao} the standalone observation operator code is described.
+In \autoref{sec:OBS_obsutils} we describe some utilities to help work with the files produced by the OBS code.

[...]

 \label{sec:OBS_example}
 
-This section describes an example of running the observation operator code using
-profile data which can be freely downloaded.
-It shows how to adapt an existing run and build of NEMO to run the observation operator.
+In this section an example of running the observation operator code is described using
+profile observation data which can be freely downloaded.
+It shows how to adapt an existing run and build of \NEMO to run the observation operator.
+Note also the observation operator and the assimilation increments code are run in the \np{ORCA2\_ICE\_OBS} SETTE test.
 
 \begin{enumerate}
[...]
 \item Download some EN4 data from \href{http://www.metoffice.gov.uk/hadobs}{www.metoffice.gov.uk/hadobs}.
 Choose observations which are valid for the period of your test run because
 the observation operator compares the model and observations for a matching date and time.
 
-\item Compile the OBSTOOLS code using:
+\item Compile the OBSTOOLS code in the \np{tools} directory using:
 \begin{cmds}
-./maketools -n OBSTOOLS -m [ARCH] .
+./maketools -n OBSTOOLS -m [ARCH]
 \end{cmds}
 
+replacing \np{[ARCH]} with the build architecture file for your machine.
+Note the tools are checked out from a separate repository under \np{utils/tools}.
+
 \item Convert the EN4 data into feedback format:
 \begin{cmds}
 enact2fb.exe profiles_01.nc EN.4.1.1.f.profiles.g10.YYYYMM.nc
 \end{cmds}
 
-\item Include the following in the NEMO namelist to run the observation operator on this data:
+\item Include the following in the \NEMO namelist to run the observation operator on this data:
 \end{enumerate}

[...]

 This can be expensive, particularly for large numbers of observations,
 setting \np{ln\_grid\_search\_lookup} allows the use of a lookup table which
-is saved into an ``xypos`` file (or files).
+is saved into an \np{cn\_gridsearch} file (or files).
 This will need to be generated the first time if it does not exist in the run directory.
 However, once produced it will significantly speed up future grid searches.
 Setting \np{ln\_grid\_global} means that the code distributes the observations evenly between processors.
 Alternatively each processor will work with observations located within the model subdomain
-(see section~\autoref{subsec:OBS_parallel}).
+(see \autoref{subsec:OBS_parallel}).
 
 A number of utilities are now provided to plot the feedback files, convert and recombine the files.
-These are explained in more detail in section~\autoref{sec:OBS_obsutils}.
-Utilities to convert other input data formats into the feedback format are also described in
-section~\autoref{sec:OBS_obsutils}.
+These are explained in more detail in \autoref{sec:OBS_obsutils}.
+Utilities to convert other input data formats into the feedback format are also described in
+\autoref{sec:OBS_obsutils}.
 
 \section{Technical details (feedback type observation file headers)}

[...]

-The observation operator code uses the "feedback" observation file format for all data types.
+The observation operator code uses the feedback observation file format for all data types.
 All the observation files must be in NetCDF format.
 Some example headers (produced using \mbox{\textit{ncdump~-h}}) for profile data, sea level anomaly and
 sea surface temperature are in the following subsections.
 
-\subsection{Profile feedback}
+\subsection{Profile feedback file}
 
 \begin{clines}
[...]
 \end{clines}
 
-\subsection{Sea level anomaly feedback}
+\subsection{Sea level anomaly feedback file}
 
 \begin{clines}
[...]
 \end{clines}
 
-The mean dynamic topography (MDT) must be provided in a separate file defined on
+To use Sea Level Anomaly (SLA) data the mean dynamic topography (MDT) must be provided in a separate file defined on
 the model grid called \ifile{slaReferenceLevel}.
 The MDT is required in order to produce the model equivalent sea level anomaly from the model sea surface height.
[...]
 \end{clines}
 
-\subsection{Sea surface temperature feedback}
+\subsection{Sea surface temperature feedback file}
 
 \begin{clines}
[...]
 \end{clines}

[...]

 In those cases the model counterpart should be calculated by averaging the model grid points over
 the same size as the footprint.
-NEMO therefore has the capability to specify either an interpolation or an averaging
+\NEMO therefore has the capability to specify either an interpolation or an averaging
 (for surface observation types only).
 
 The main namelist option associated with the interpolation/averaging is \np{nn\_2dint}.
[...]
 \item \np{nn\_2dint}\forcode{ = 4}: Polynomial interpolation
-\item \np{nn\_2dint}\forcode{ = 5}: Radial footprint averaging with diameter specified in the namelist as
-      \np{rn\_???\_avglamscl} in degrees or metres (set using \np{ln\_???\_fp\_indegs})
-\item \np{nn\_2dint}\forcode{ = 6}: Rectangular footprint averaging with E/W and N/S size specified in
-      the namelist as \np{rn\_???\_avglamscl} and \np{rn\_???\_avgphiscl} in degrees or metres
-      (set using \np{ln\_???\_fp\_indegs})
+\item \np{nn\_2dint}\forcode{ = 5}: Radial footprint averaging with diameter specified in the namelist as
+      \np{rn\_[var]\_avglamscl} in degrees or metres (set using \np{ln\_[var]\_fp\_indegs})
+\item \np{nn\_2dint}\forcode{ = 6}: Rectangular footprint averaging with E/W and N/S size specified in
+      the namelist as \np{rn\_[var]\_avglamscl} and \np{rn\_[var]\_avgphiscl} in degrees or metres
+      (set using \np{ln\_[var]\_fp\_indegs})
 \end{itemize}
-The ??? in the last two options indicate these options should be specified for each observation type for
+Replace \np{[var]} in the last two options with the observation type (sla, sst, sss or sic) for
 which the averaging is to be performed (see namelist example above).
 The \np{nn\_2dint} default option can be overridden for surface observation types using
-namelist values \np{nn\_2dint\_???} where ??? is one of sla,sst,sss,sic.
+namelist values \np{nn\_2dint\_[var]} where \np{[var]} is the observation type.
 Below is some more detail on the various options for interpolation and averaging available in NEMO.

[...]

 \subsubsection{Horizontal interpolation}
 
-Consider an observation point ${\mathrm P}$ with with longitude and latitude $({\lambda_{}}_{\mathrm P}, \phi_{\mathrm P})$ and
+Consider an observation point ${\mathrm P}$ with longitude and latitude (${\lambda_{}}_{\mathrm P}$, $\phi_{\mathrm P}$) and
 the four nearest neighbouring model grid points ${\mathrm A}$, ${\mathrm B}$, ${\mathrm C}$ and ${\mathrm D}$ with
 longitude and latitude ($\lambda_{\mathrm A}$, $\phi_{\mathrm A}$), ($\lambda_{\mathrm B}$, $\phi_{\mathrm B}$) etc.
-All horizontal interpolation methods implemented in NEMO estimate the value of a model variable $x$ at point $P$ as
+All horizontal interpolation methods implemented in \NEMO estimate the value of a model variable $x$ at point $P$ as
 a weighted linear combination of the values of the model variables at the grid points ${\mathrm A}$, ${\mathrm B}$ etc.:
 
 \begin{align*}
   {x_{}}_{\mathrm P} =
   \frac{1}{w} \left( {w_{}}_{\mathrm A} {x_{}}_{\mathrm A} +
                      {w_{}}_{\mathrm B} {x_{}}_{\mathrm B} +
                      {w_{}}_{\mathrm C} {x_{}}_{\mathrm C} +
                      {w_{}}_{\mathrm D} {x_{}}_{\mathrm D} \right)
 \end{align*}
 
 where ${w_{}}_{\mathrm A}$, ${w_{}}_{\mathrm B}$ etc. are the respective weights for the model field at
 points ${\mathrm A}$, ${\mathrm B}$ etc., and $w = {w_{}}_{\mathrm A} + {w_{}}_{\mathrm B} + {w_{}}_{\mathrm C} + {w_{}}_{\mathrm D}$.
[...]

 For example, the weight given to the field ${x_{}}_{\mathrm A}$ is specified as the product of the distances
 from ${\mathrm P}$ to the other points:
 
 \begin{alignat*}{2}
   {w_{}}_{\mathrm A} = s({\mathrm P}, {\mathrm B}) \, s({\mathrm P}, {\mathrm C}) \, s({\mathrm P}, {\mathrm D})
 \end{alignat*}
 
 where
 
 \begin{alignat*}{2}
   s\left({\mathrm P}, {\mathrm M} \right) & = & \hspace{0.25em} \cos^{-1} \! \left\{
     \sin {\phi_{}}_{\mathrm P} \sin {\phi_{}}_{\mathrm M}
   + \cos {\phi_{}}_{\mathrm P} \cos {\phi_{}}_{\mathrm M}
     \cos ({\lambda_{}}_{\mathrm M} - {\lambda_{}}_{\mathrm P})
   \right\}
 \end{alignat*}
 
 and $M$ corresponds to $B$, $C$ or $D$.
 A more stable form of the great-circle distance formula for small distances ($x$ near 1)
 involves the arcsine function (\eg see p.~101 of \citet{daley.barker_bk01}):
 
 \begin{alignat*}{2}
   s\left( {\mathrm P}, {\mathrm M} \right) = \sin^{-1} \! \left\{ \sqrt{ 1 - x^2 } \right\}
 \end{alignat*}
 
 where
 
 \begin{alignat*}{2}
   x = {a_{}}_{\mathrm M} {a_{}}_{\mathrm P} + {b_{}}_{\mathrm M} {b_{}}_{\mathrm P} + {c_{}}_{\mathrm M} {c_{}}_{\mathrm P}
 \end{alignat*}
 
 and
 
 \begin{alignat*}{3}
   & {a_{}}_{\mathrm M} & = && \quad \sin {\phi_{}}_{\mathrm M}, \\
   & {a_{}}_{\mathrm P} & = && \quad \sin {\phi_{}}_{\mathrm P}, \\
   & {b_{}}_{\mathrm M} & = && \quad \cos {\phi_{}}_{\mathrm M} \cos {\lambda_{}}_{\mathrm M}, \\
   & {b_{}}_{\mathrm P} & = && \quad \cos {\phi_{}}_{\mathrm P} \cos {\lambda_{}}_{\mathrm P}, \\
   & {c_{}}_{\mathrm M} & = && \quad \cos {\phi_{}}_{\mathrm M} \sin {\lambda_{}}_{\mathrm M}, \\
   & {c_{}}_{\mathrm P} & = && \quad \cos {\phi_{}}_{\mathrm P} \sin {\lambda_{}}_{\mathrm P}.
 \end{alignat*}
 
 \item[2.]
 {\bfseries Great-Circle distance-weighted interpolation with small angle approximation.}
 Similar to the previous interpolation but with the distance $s$ computed as
 
 \begin{alignat*}{2}
   s\left( {\mathrm P}, {\mathrm M} \right)
   & = & \sqrt{ \left( {\phi_{}}_{\mathrm M} - {\phi_{}}_{\mathrm P} \right)^{2}
              + \left( {\lambda_{}}_{\mathrm M} - {\lambda_{}}_{\mathrm P} \right)^{2}
                \cos^{2} {\phi_{}}_{\mathrm M} }
 \end{alignat*}
 
 where $M$ corresponds to $A$, $B$, $C$ or $D$.

[...]

 a cell with coordinates (0,0), (1,0), (0,1) and (1,1).
 This method is based on the \href{https://github.com/SCRIP-Project/SCRIP}{SCRIP interpolation package}.
 
 \end{enumerate}

[...]

 \item The standard grid-searching code is used to find the nearest model grid point to the observation location
       (see next subsection).
-\item The maximum number of grid points is calculated in the local grid domain for which
-      the averaging is likely need to cover.
-\item The lats/longs of the grid points surrounding the nearest model grid box are extracted using
-      existing mpi routines.
+\item The maximum number of grid points required for that observation in each local grid domain is calculated.
+      Some of these points may later turn out to have zero weight depending on the shape of the footprint.
+\item The longitudes and latitudes of the grid points surrounding the nearest model grid box are extracted using
+      existing MPI routines.
 \item The weights for each grid point associated with each observation are calculated,
       either for radial or rectangular footprints.
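The great-circle distance-weighted interpolation described earlier in this section can be sketched directly from the formulas: the stable arcsine distance, then a weight for each corner equal to the product of the distances from the observation to the other corners. This is a Python illustration of the formulas, not NEMO's Fortran; the names `gc_distance`, `gc_weights` and `interp` are invented for the sketch.

```python
import math

def gc_distance(lon_p, lat_p, lon_m, lat_m):
    """Great-circle angular distance (radians) between P and M using the
    arcsine form, which is numerically stable for nearby points."""
    a_p, a_m = math.sin(lat_p), math.sin(lat_m)
    b_p, b_m = math.cos(lat_p) * math.cos(lon_p), math.cos(lat_m) * math.cos(lon_m)
    c_p, c_m = math.cos(lat_p) * math.sin(lon_p), math.cos(lat_m) * math.sin(lon_m)
    x = a_m * a_p + b_m * b_p + c_m * c_p          # cosine of the separation
    return math.asin(math.sqrt(max(0.0, 1.0 - x * x)))

def gc_weights(p, corners):
    """Weight of each corner = product of the distances from P to the
    OTHER corners, so the nearest corner receives the largest weight."""
    d = [gc_distance(p[0], p[1], m[0], m[1]) for m in corners]
    w = [math.prod(dj for j, dj in enumerate(d) if j != i)
         for i in range(len(corners))]
    s = sum(w)
    return [wi / s for wi in w]

def interp(p, corners, values):
    """Distance-weighted estimate of the model field at observation P."""
    return sum(w * x for w, x in zip(gc_weights(p, corners), values))
```

Longitudes and latitudes are in radians; for an observation equidistant from all four corners the weights reduce to 1/4 each.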
[...]

 Examples of the weights calculated for an observation with rectangular and radial footprints are shown in
-Figs.~\autoref{fig:obsavgrec} and~\autoref{fig:obsavgrad}.
+\autoref{fig:obsavgrec} and~\autoref{fig:obsavgrad}.

[...]

   Weights associated with each model grid box (blue lines and numbers)
   for an observation at -170.5\deg{E}, 56.0\deg{N} with a radial footprint with diameter 1\deg.
   }
 \end{center}
 \end{figure}

[...]

 \subsection{Grid search}
 
-For many grids used by the NEMO model, such as the ORCA family, the horizontal grid coordinates $i$ and $j$ are not simple functions of latitude and longitude.
+For many grids used by the \NEMO model, such as the ORCA family, the horizontal grid coordinates $i$ and $j$ are not simple functions of latitude and longitude.
 Therefore, it is not always straightforward to determine the grid points surrounding any given observational position.
 Before the interpolation can be performed, a search algorithm is then required to determine the corner points of
 the quadrilateral cell in which the observation is located.
 This is the most difficult and time consuming part of the 2D interpolation procedure.
 A robust test for determining if an observation falls within a given quadrilateral cell is as follows.
 Let ${\mathrm P}({\lambda_{}}_{\mathrm P}, {\phi_{}}_{\mathrm P})$ denote the observation point,
 and let ${\mathrm A}({\lambda_{}}_{\mathrm A}, {\phi_{}}_{\mathrm A})$, ${\mathrm B}({\lambda_{}}_{\mathrm B}, {\phi_{}}_{\mathrm B})$,
 ${\mathrm C}({\lambda_{}}_{\mathrm C}, {\phi_{}}_{\mathrm C})$ and ${\mathrm D}({\lambda_{}}_{\mathrm D}, {\phi_{}}_{\mathrm D})$
 denote the bottom left, bottom right, top left and top right corner points of the cell, respectively.
 To determine if P is inside the cell, we verify that the cross-products
 \begin{align*}
   \begin{array}{lllll}
   [...]
   \end{array}
 \end{align*}

[...]

 be searched for on a regular grid.
 For each observation position, the closest point on the regular grid of this position is computed and
 the $i$ and $j$ ranges of this point searched to determine the precise four points surrounding the observation.
 
 \subsection{Parallel aspects of horizontal interpolation}

[...]

 For horizontal interpolation, there is the basic problem that
 the observations are unevenly distributed on the globe.
-In numerical models, it is common to divide the model grid into subgrids (or domains) where
-each subgrid is executed on a single processing element with explicit message passing for
-exchange of information along the domain boundaries when running on a massively parallel processor (MPP) system.
-This approach is used by \NEMO.
+In \NEMO the model grid is divided into subgrids (or domains) where
+each subgrid is executed on a single processing element with explicit message passing for
+exchange of information along the domain boundaries when running on a massively parallel processor (MPP) system.
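Returning to the grid-search test above: the cross-products themselves are elided in this changeset, so the sketch below assumes the standard same-sign formulation of the quadrilateral test (the function names are invented, and longitudes/latitudes are treated as plane coordinates, which the real code has to handle more carefully near the dateline and poles).

```python
def cross_z(o, p, q):
    """z-component of the cross product (p - o) x (q - o)."""
    return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

def point_in_cell(p, a, b, d, c):
    """True if observation P lies inside the (convex) grid cell with
    corners A (bottom left), B (bottom right), D (top right) and
    C (top left): walking round the cell, the cross product of each
    edge with the edge-start-to-P vector must not change sign."""
    ring = [a, b, d, c]          # corners in anticlockwise order
    s = [cross_z(ring[i], ring[(i + 1) % 4], p) for i in range(4)]
    return all(x >= 0 for x in s) or all(x <= 0 for x in s)
```

Points exactly on a cell edge give a zero cross product and are accepted, so a point on the boundary between two cells is matched by both; the real grid search resolves such ties deterministically.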
 For observations there is no natural distribution since the observations are not equally distributed on the globe.
 Two options have been made available:
 1) geographical distribution;
[...]
 the domain of the grid-point parallelization.
 \autoref{fig:obslocal} shows an example of the distribution of the {\em in situ} data on processors with
 a different colour for each observation on a given processor for a 4 $\times$ 2 decomposition with ORCA2.
 The grid-point domain decomposition is clearly visible on the plot.
 
 The advantage of this approach is that all information needed for horizontal interpolation is available without
 any MPP communication.
-Of course, this is under the assumption that we are only using a $2 \times 2$ grid-point stencil for
+This is under the assumption that we are dealing with point observations and only using a $2 \times 2$ grid-point stencil for
 the interpolation (\eg bilinear interpolation).
 For higher order interpolation schemes this is no longer valid.

[...]

 At the bottom boundary, this is done using the land-ocean mask.
 
+For profile observation types we do both vertical and horizontal interpolation.
+\NEMO has a generalised vertical coordinate system, which means the vertical level depths can vary with location.
+Therefore, it is necessary first to perform vertical interpolation of the model value to the observation depths for
+each of the four surrounding grid points.
+After this the model values, at these points, at the observation depth, are horizontally interpolated to the observation location.
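The vertical-then-horizontal order for profile observations can be sketched as follows; the helper names are invented for the sketch and simple linear vertical interpolation is assumed.

```python
def vert_interp(depths, values, z):
    """Linearly interpolate one model column to observation depth z."""
    for k in range(len(depths) - 1):
        if depths[k] <= z <= depths[k + 1]:
            f = (z - depths[k]) / (depths[k + 1] - depths[k])
            return (1.0 - f) * values[k] + f * values[k + 1]
    raise ValueError("observation depth outside the model column")

def profile_equivalent(z_obs, columns, hweights):
    """Model equivalent of a profile observation at depth z_obs.

    columns  : four (depths, values) pairs for the surrounding grid
               points; the depths may differ between points in a
               generalised vertical coordinate system
    hweights : four horizontal interpolation weights summing to 1

    Vertical interpolation is done first, at each surrounding point,
    and only then are the four values combined horizontally.
    """
    vals = [vert_interp(d, v, z_obs) for d, v in columns]
    return sum(w * x for w, x in zip(hweights, vals))
```

Doing the vertical step first is what makes generalised vertical coordinates work: each surrounding column is reduced to a value at the common observation depth before any horizontal combination.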
 \newpage
 
 % ================================================================
-% Offline observation operator documentation
+% Standalone observation operator documentation
 % ================================================================
 
 %\usepackage{framed}
 
-\section{Offline observation operator}
-\label{sec:OBS_ooo}
+\section{Standalone observation operator}
+\label{sec:OBS_sao}
 
 \subsection{Concept}
 
-The obs oper maps model variables to observation space.
-It is possible to apply this mapping without running the model.
-The software which performs this functionality is known as the \textbf{offline obs oper}.
-The obs oper is divided into three stages.
-An initialisation phase, an interpolation phase and an output phase.
-The implementation of which is outlined in the previous sections.
-During the interpolation phase the offline obs oper populates the model arrays by
-reading saved model fields from disk.
-
-There are two ways of exploiting this offline capacity.
+The observation operator maps model variables to observation space.
+This is normally done while the model is running (i.e. online), but it is also possible to apply this mapping
+offline, without running the model, using the \textbf{standalone observation operator} (SAO).
+The process is divided into an initialisation phase, an interpolation phase and an output phase.
+During the interpolation phase the SAO populates the model arrays by reading saved model fields from disk.
+The interpolation and the output phases use the same OBS code described in the preceding sections.
+
+There are two ways of exploiting the standalone capacity.
 The first is to mimic the behaviour of the online system by supplying model fields at
 regular intervals between the start and the end of the run.
 This approach results in a single model counterpart per observation.
-This kind of usage produces feedback files the same file format as the online obs oper.
-The second is to take advantage of the offline setting in which
-multiple model counterparts can be calculated per observation.
+This kind of usage produces feedback files in the same file format as the online observation operator.
+The second is to take advantage of the ability to run offline by calculating
+multiple model counterparts for each observation.
 In this case it is possible to consider all forecasts verifying at the same time.
-By forecast, I mean any method which produces an estimate of physical reality which is not an observed value.
-In the case of class 4 files this means forecasts, analyses, persisted analyses and
-climatological values verifying at the same time.
-Although the class 4 file format doesn't account for multiple ensemble members or
-multiple experiments per observation, it is possible to include these components in the same or multiple files.
+By forecast, we mean any method which produces an estimate of physical reality which is not an observed value.
 
 %--------------------------------------------------------------------------------------------------------
-% offline_oper.exe
+% sao.exe
 %--------------------------------------------------------------------------------------------------------
 
-\subsection{Using the offline observation operator}
+\subsection{Using the standalone observation operator}
 
 \subsubsection{Building}
 
-In addition to \emph{OPA\_SRC} the offline obs oper requires the inclusion of the \emph{OOO\_SRC} directory.
-\emph{OOO\_SRC} contains a replacement \mdl{nemo} and \mdl{nemogcm} which
+In addition to \emph{OPA\_SRC} the SAO requires the inclusion of the \emph{SAO\_SRC} directory.
+\emph{SAO\_SRC} contains a replacement \mdl{nemo} and \mdl{nemogcm} which
 overwrites the resultant \textbf{nemo.exe}.
Note this is a similar approach to that taken by the standalone surface scheme \emph{SAS\_SRC} and the offline TOP model \emph{OFF\_SRC}.

%--------------------------------------------------------------------------------------------------------
% Running
%--------------------------------------------------------------------------------------------------------
\subsubsection{Running}

The simplest way to use the executable is to edit and append the \textbf{sao.nml} namelist to
a full \NEMO namelist and then to run the executable as if it were nemo.exe.

%--------------------------------------------------------------------------------------------------------
% Configuration section
%--------------------------------------------------------------------------------------------------------
\subsection{Configuring the standalone observation operator}
The observation files and settings understood by \ngn{namobs} have been outlined in the online observation operator section.
In addition there is a further namelist, \ngn{namsao}, which is used to set the input model fields for the SAO.

\subsubsection{Single field}

In the SAO the model arrays are populated at appropriate time steps via input files.
At present, \textbf{tsn} and \textbf{sshn} are populated by the default read routines.
These routines will be expanded upon in future versions to allow the specification of any model variable.
As such, input files must be global versions of the model domain with
\textbf{votemper}, \textbf{vosaline} and optionally \textbf{sshn} present.

For each field read there must be an entry in the \ngn{namsao} namelist specifying
the name of the file to read and the index along the \emph{time\_counter}.
For example, to read the second time counter from a single file the namelist would be:

\begin{forlines}
!----------------------------------------------------------------------
!       namsao Standalone obs_oper namelist
!----------------------------------------------------------------------
!   sao_files    specifies the files containing the model counterpart
!   nn_sao_idx   specifies the time_counter index within the model file
&namsao
   sao_files = "foo.nc"
   nn_sao_idx = 2
/
\end{forlines}

\subsubsection{Multiple fields per run}

Model field iteration is controlled via \textbf{nn\_sao\_freq} which
specifies the number of model steps at which the next field gets read.
For example, if 12 hourly fields are to be interpolated in a setup where 288 steps equals 24 hours, then \textbf{nn\_sao\_freq} should be set to 144:

\begin{forlines}
!----------------------------------------------------------------------
!       namsao Standalone obs_oper namelist
!----------------------------------------------------------------------
!   sao_files    specifies the files containing the model counterpart
!   nn_sao_idx   specifies the time_counter index within the model file
!   nn_sao_freq  specifies number of time steps between read operations
&namsao
   sao_files = "foo.nc" "foo.nc"
   nn_sao_idx = 1 2
   nn_sao_freq = 144
/
\end{forlines}

A collection of fields taken from a number of files at different indices can be combined at
a particular frequency in time to generate a pseudo model evolution.
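The arithmetic behind \textbf{nn\_sao\_freq} is simple but worth making explicit. The following is an illustrative Python sketch, not part of \NEMO; the helper name \texttt{sao\_freq} is ours:

```python
def sao_freq(steps_per_day, field_interval_hours):
    """Model time steps between successive reads of input fields."""
    # With 288 steps per 24 h model day, 12-hourly fields give 288 * 12 / 24 = 144.
    steps = steps_per_day * field_interval_hours / 24
    if steps != int(steps):
        raise ValueError("field interval is not a whole number of model time steps")
    return int(steps)

print(sao_freq(288, 12))  # 144, as in the namelist example above
```

The check for a non-integer step count guards against field intervals that do not divide evenly into the model day.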
If all that is needed is a single model counterpart at a regular interval then
the standard SAO is all that is required.
However, it is possible to extend this approach by comparing multiple forecasts, analyses, persisted analyses and
climatologies with the same set of observations.
This approach is referred to as \emph{Class 4} since it is the fourth metric defined by the GODAE intercomparison project.
This requires multiple runs of the SAO, followed by an additional utility (not currently in the \NEMO repository) to combine the feedback files into one class 4 file.

\newpage

\section{Observation utilities}
\label{sec:OBS_obsutils}

For convenience some tools for viewing and processing of observation and feedback files are provided in
the \NEMO repository.
These tools include OBSTOOLS, a collection of \fortran programs which are helpful in dealing with feedback files.
They perform such tasks as observation file conversion, printing of file contents, and
some basic statistical analysis of feedback files.
The other main tool is an IDL program called dataplot which uses a graphical interface to
visualise observations and feedback files.
OBSTOOLS and dataplot are described in more detail below.

\subsection{Obstools}

A series of \fortran utilities is provided with \NEMO called OBSTOOLS.
These are helpful in handling observation files and the feedback file output from the observation operator.
A brief description of some of the utilities follows.

\subsubsection{corio2fb}

The program corio2fb converts profile observation files from the Coriolis format to the standard feedback format.
It is called in the following way:

\begin{cmds}
corio2fb.exe outputfile inputfile1 inputfile2 ...
\end{cmds}

\subsubsection{enact2fb}

The program enact2fb converts profile observation files from the ENACT format to the standard feedback format.
It is called in the following way:

\begin{cmds}
enact2fb.exe outputfile inputfile1 inputfile2 ...
\end{cmds}

\subsubsection{fbcomb}

The program fbcomb combines multiple feedback files produced by individual processors in
an MPI run of \NEMO into a single feedback file.
It is called in the following way:

\begin{cmds}
fbcomb.exe outputfile inputfile1 inputfile2 ...
\end{cmds}

\subsubsection{fbmatchup}

The program fbmatchup will match observations from two feedback files.
It is called in the following way:

\begin{cmds}
fbmatchup.exe outputfile inputfile1 varname1 inputfile2 varname2 ...
\end{cmds}

\subsubsection{fbprint}

The program fbprint will print the contents of a feedback file or files to standard output.
Selected information can be output using optional arguments.
It is called in the following way:

\begin{cmds}
fbprint.exe [options] inputfile
\end{cmds}

\noindent with options including:

\begin{cmds}
  -B            Select observations based on QC flags
  -u            unsorted
  -s ID         select station ID
  -t TYPE       select observation type
  -v NUM1-NUM2  select variable range to print by number
                (default all)
  -a NUM1-NUM2  select additional variable range to print by number
                (default all)
  -e NUM1-NUM2  select extra variable range to print by number
                (default all)
  -d            output date range
\end{cmds}

\subsubsection{fbsel}

The program fbsel will select or subsample observations.
It is called in the following way:

\begin{cmds}
fbsel.exe <input filename> <output filename>
\end{cmds}

\subsubsection{fbstat}

The program fbstat will output summary statistics in different global areas into a number of files.
It is called in the following way:

\begin{cmds}
fbstat.exe [-nmlev] <filenames>
\end{cmds}

\subsubsection{fbthin}

The program fbthin will thin the data to 1 degree resolution.
The code could easily be modified to thin to a different resolution.
It is called in the following way:

\begin{cmds}
fbthin.exe inputfile outputfile
\end{cmds}

\subsubsection{sla2fb}

The program sla2fb will convert an AVISO SLA format file to feedback format.
It is called in the following way:

\begin{cmds}
sla2fb.exe [-s type] outputfile inputfile1 inputfile2 ...
\end{cmds}

\subsubsection{vel2fb}

The program vel2fb will convert TAO/PIRATA/RAMA currents files to feedback format.
It is called in the following way:

\begin{cmds}
vel2fb.exe outputfile inputfile1 inputfile2 ...
\end{cmds}

\subsection{Dataplot}

An IDL program called dataplot is included which uses a graphical interface to
visualise observations and feedback files.
Note a similar package has recently been developed in Python (also called dataplot) which does some of the same things that the IDL dataplot does.
Please contact the authors of this chapter if you are interested in this.

It is possible to zoom in, plot individual profiles and calculate some basic statistics.
To plot some data run IDL and then:

\begin{minted}{idl}
IDL> dataplot, "filename"
\end{minted}

To plot multiple files at once,
for example multiple feedback files from different processors or from different days,
the easiest method is to use the spawn command to generate a list of files which can then be passed to dataplot:

\begin{minted}{idl}
IDL> spawn, 'ls profb*.nc', files
IDL> dataplot, files
\end{minted}

The plotting colour range can be changed by clicking on the colour bar.
The title of the plot gives some basic information about the date range and depth range shown,
the extreme values, and the mean and RMS values.
It is possible to zoom in using a drag-box.
You may also zoom in or out using the mouse wheel.

... observation minus background value.
The next group of radio buttons selects the map projection.
This can either be a regular longitude-latitude grid, or north or south polar stereographic.
The next group of radio buttons will plot bad observations, switch to salinity and
plot density for profile observations.