Training Module 22: Using Pipes and Tokens

The slides used in the following video are shown below:

Slide 1

Welcome to the training course on IBM Scalable Architecture for Financial Reporting, or SAFR. This is Module 22, Using Pipes and Tokens.

Slide 2

Upon completion of this module, you should be able to:

  • Describe uses for the SAFR piping feature
  • Read a Logic Table and Trace with piping
  • Describe uses for the SAFR token feature
  • Read a Logic Table and Trace with tokens
  • Debug piping and token views

Slide 3

This module covers three special logic table functions: the WRTK function, which writes a token; the RETK function, which reads a token; and the ET function, which signifies the end of a set of token views. These functions are analogous to the WR functions, the RENX function, and the ES function used for non-token views.

Using SAFR Pipes does not involve special logic table functions.

Slide 4

This module gives an overview of the Pipe and Token features of SAFR.

SAFR views often read data from, or pass data to, other views, creating chains of processes, with each view performing one portion of the overall transformation.

Pipes and Tokens are ways to improve the efficiency with which data is passed from one view to another.

Pipes and Tokens are only useful for improving the efficiency of SAFR processes; they do not perform any data functions that are not available through other SAFR features.

Slide 5

A pipe is a virtual file passed in memory between two or more views. A pipe is defined as a Physical File of type Pipe in the SAFR Workbench.

Pipes save unnecessary input and output. If View 1 writes a disk file and View 2 reads that file, then time is wasted writing output from View 1 and reading it back as input to View 2. This configuration would typically require two passes of SAFR, one pass processing View 1 and a second pass processing View 2.

If a pipe is placed between View 1 and View 2, the records stay in memory, no I/O time is wasted, and both views are executed in a single pass.

The pipe consists of blocks of records in memory.

Slide 6

Similarly, a token is a named memory area. The name is used for communication between two or more views.

Like a pipe, a token allows data to be passed in memory between two or more views.

But unlike pipes, which are virtual files and allow asynchronous processing, tokens pass one record at a time synchronously between the views.

Slide 7

Because pipes simply substitute a virtual file for a physical file, the views writing to a pipe run in a thread that is independent of the views reading the pipe.

Thus for pipes the two threads run asynchronously.

Tokens, though, pass data one record at a time between views. The views writing the token and the views reading the token are in the same thread.

Thus for tokens there is only one thread, and both views run synchronously.
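The difference between the two models can be sketched in Python. This is purely an illustration of the threading behavior described above, not SAFR code; all names are hypothetical.

    import queue
    import threading

    records = ["rec1", "rec2", "rec3", "rec4"]

    # Pipe analogue: writer and reader run in separate threads,
    # communicating through an in-memory buffer of records.
    pipe = queue.Queue(maxsize=2)          # blocks of records held in memory

    def pipe_writer():
        for rec in records:
            pipe.put(rec)                  # may block while the reader catches up
        pipe.put(None)                     # signal end of input

    def pipe_reader():
        while (rec := pipe.get()) is not None:
            print("pipe reader got", rec)  # runs concurrently with the writer

    threading.Thread(target=pipe_writer).start()
    threading.Thread(target=pipe_reader).start()

    # Token analogue: a single named, one-record memory area. The reading
    # view is called synchronously, in the same thread, after every write.
    token = {}                             # named memory area, one record at a time

    def token_reader(name):
        print("token reader got", token[name])

    for rec in records:
        token["ORDER_TOKEN"] = rec         # write the token
        token_reader("ORDER_TOKEN")        # reader runs before the next write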

Slide 8

An advantage of pipes is that they enable parallelism, which can decrease the elapsed time needed to produce an output. In this example, View 2 runs in parallel with View 1. After View 1 has filled block 1 with records, View 2 can begin reading from block 1. While View 2 is reading from block 1, View 1 is writing new records to block 2. This is one form of parallelism in SAFR that improves performance. Without it, View 2 would have to wait until all of View 1's processing is complete.

Slide 9

Pipes can be used to multiply input records, creating multiple output records in the piped file, all of which will be read by the views reading the pipe. This can be used to perform an allocation-type process, where a value on one record is divided across multiple other records.

In this example, a single record in the input file is written by multiple views into the pipe.  Each of these records is then read by the second thread.

Slide 10

Multiple views can write tokens, but because a token can only contain one record at a time, the token reader views must be executed after each token write. Otherwise the token would be overwritten by a subsequent token-writing view.

In this example, on the left a single view, View 1, writes a token, and views 2 and 3 read the token. View 4 has nothing to do with the token, reading the original input record like View 1.

On the right, both views 1 and 4 write to the token. After each view writes to the token, all views which read the token are called.

In this way, tokens can be used to “explode” input event files, like pipes.

Slide 11

Both pipes and tokens can be used to conserve or concentrate computing capacity. For example, CPU-intensive operations such as look-ups or lookup exits can be performed once, in the views that write to pipes or tokens, and the results added to the record that is passed on to the dependent views. The dependent, reading views then do not have to perform these same functions.

In this example, the diagram on the left shows the same lookup being performed in three different views. Using piping or tokens, as shown in the diagram on the right, the lookup may be performed once, the result stored on the event file record, and then used by the subsequent views, reducing the total number of lookups performed.
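A minimal Python sketch of this idea, assuming a hypothetical customer-type lookup (the field and table names are invented for illustration):

    # Hypothetical lookup table and event records.
    customer_lookup = {"C01": "Retail", "C02": "Commercial"}
    event_records = [{"customer_id": "C01", "amount": 100},
                     {"customer_id": "C02", "amount": 250}]

    def writing_view(record):
        # The expensive lookup is performed once, in the writing view,
        # and the result is stored on the record passed down the pipe or token.
        record["customer_type"] = customer_lookup.get(record["customer_id"], "Unknown")
        return record

    def reading_view_1(record):
        # The reading views use the stored result instead of repeating the lookup.
        return record["customer_type"] == "Retail"

    def reading_view_2(record):
        return record["customer_type"] == "Commercial"

    for rec in map(writing_view, event_records):
        reading_view_1(rec)
        reading_view_2(rec)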

Slide 12

A limitation of pipes is the loss of visibility of the original input record read from disk; the asynchronous execution of thread 1 and thread 2 means that which record is being processed in thread 1 at any moment is unpredictable.

There are instances where a view using the output of another view needs visibility of the original input record. As will be discussed in a later module, the Common Key Buffering feature can at times allow the use of records related to the input record, and exits can also be employed to preserve this type of visibility. Token processing, because it is synchronous within the same thread, can provide this capability.

In this example, on the top the token View 2 can still access the original input record from the source file if needed, whereas the Pipe View 2 on the bottom cannot.

Slide 13

Piping can be used in conjunction with Extract Time Summarization, because the pipe-writing views process multiple records before passing the buffer to the pipe-reading views. Because tokens process one record at a time, they cannot use Extract Time Summarization; a token can never contain more than one record.

In the example on the left, View 1, using Extract Time Summarization, collapses four records with like sort keys from the input file before writing the single summarized record to the pipe buffer, whereas token processing, on the right, passes each individual input record to all token-reading views.

Note that to use Extract Time Summarization with piping, the pipe-reading views must process Standard Extract Format records rather than data written by a WRDT function; summarization only occurs in CT columns, which are only contained in Standard Extract Format records.
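The effect of summarization can be illustrated with a short Python sketch (hypothetical records; CT-column behavior is reduced here to a simple sum by sort key):

    from collections import OrderedDict

    # Hypothetical extract records: (sort key, CT-column amount).
    extract_records = [("A", 10), ("A", 5), ("A", 7), ("A", 3), ("B", 20)]

    # Pipe with Extract Time Summarization: records with like sort keys are
    # collapsed before the buffer is handed to the pipe-reading views.
    summarized = OrderedDict()
    for key, amount in extract_records:
        summarized[key] = summarized.get(key, 0) + amount
    pipe_buffer = list(summarized.items())   # [("A", 25), ("B", 20)]

    # Token processing: every individual record reaches the token-reading views,
    # because a token never holds more than one record.
    token_stream = list(extract_records)     # all five records, uncollapsed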

Slide 14

Workbench set-up for writing to, and reading from, a pipe is relatively simple. It begins with creating a pipe physical file and a view which will write to the pipe.

Begin by creating a physical file with a File Type of “Pipe”.

Create a view to write to the Pipe, and within the View Properties:

  • Select the Output Format of Flat File, Fixed-Length Fields,
  • Then select the created physical file as the output file

Slide 15

The next step is to create the LR for the Pipe Reader Views to use.

The LR used for views which read from the pipe must match exactly the column outputs of the View or Views which will write to the Pipe.  This includes data formats, positions, lengths and contents.

In this example, Column 1 of the view, "Order ID", is a 10-byte field beginning at position 1. In the Logical Record this value will be found in the Order_ID field, beginning at position 1 for 10 bytes.

Preliminary testing of the new Logical Record is suggested: make the view write to a physical file first, inspect the view output to ensure the data matches the Pipe Logical Record, and only then change the view to write to the pipe. Looking at an actual output file to examine positions is easier than using the trace to detect whether positions are correct, because in the trace the pipe only shows data referenced by a view, not the entire record written by the pipe-writing view.
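The need for an exact byte-for-byte match can be pictured with a small Python sketch of a fixed-position layout (the field names, positions, and lengths beyond Order ID are hypothetical):

    # Pipe Logical Record layout: (field name, offset, length).
    PIPE_LR = [("ORDER_ID", 0, 10), ("CUSTOMER_ID", 10, 10), ("TOTAL_AMOUNT", 20, 12)]

    def pipe_writer_record(order_id, customer_id, total):
        # The pipe-writing view's columns define the physical layout...
        return f"{order_id:<10}{customer_id:<10}{total:>12}"

    def pipe_reader_record(raw):
        # ...and the Pipe Logical Record must describe exactly that layout;
        # any mismatch in position or length shifts every downstream field.
        return {name: raw[start:start + length].strip()
                for name, start, length in PIPE_LR}

    raw = pipe_writer_record("0000000001", "CUST000042", "19.99")
    print(pipe_reader_record(raw))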

Slide 16

The view which reads the Pipe uses the Pipe Logical Record.  It also must read from a Logical File which contains the Pipe Physical File written to by the pipe writer view.

In this example, the view reads the Pipe Logical Record (LR ID 38) and the Pipe Logical File (LF ID 42), which contains the Physical File of type Pipe written to by the pipe-writing view. The view itself uses all three of the fields on the Pipe Logical Record.

Slide 17

This is the logic table generated for the piping views.  The pipe logic table contains no special logic table functions.  The only connection between the ES sets is the shared physical file entity.

The first RENX – ES set is reading Logical File 12.  This input event file is on disk.  The pipe writing view, view number 73, writes extracted records to Physical File Partition 51. 

The next RENX – ES set reads Logical File 42.  This Logical File contains the same Physical File partition (51) written to by the prior view.  The Pipe Reader view is view 74.

Slide 18

This is the trace for piped views. In the example on the left, the thread reading the event file, with a DD name of ORDER001, begins processing input event records. Because the pipe writer view, number 73, has no selection criteria, each record is written to the pipe by the WRDT function. All 12 records in this small input file are written to the pipe.

When all 12 records have been processed and Thread 1 has reached the end of the input event file, it stops processing and turns the pipe buffer over to Thread 2 for processing.

Thread 2, the Pipe Reader thread, then begins processing these 12 records. All 12 records are processed by the pipe reader view, view number 74.

As shown on the right, if the input event file had contained more records, enough to fill multiple pipe buffers, the trace still begins with Thread 1 processing, but after the first buffer is filled, Thread 2, the pipe-reading view, begins processing. From this point, the printed trace rows for Thread 1 and Thread 2 may be interspersed as both threads process records in parallel against differing pipe buffers. This randomness is highlighted by each NV function against a new event file record.

Slide 19

Workbench set-up for writing to, and reading from a token is very similar to the set-up for pipes. It begins with creating a physical file with a type of token, and a view which will write to the token.

Begin by creating a physical file with a File Type of “Token”.

Create a view to write to the token, and within the View Properties, select the Output Format of Flat File, Fixed-Length Fields.

Then, in the properties for the Logical Record and Logical File to be read, select the output Logical File and the created Physical File as the destination for the view.

Slide 20

Like pipe usage, the next step is to create the Logical Record for the Token Reader Views to use. In our example, we have reused the same Logical Record used in the piping example.  Again, the logical record which describes the token must match the output from the view exactly, including data formats, positions, lengths and contents.

Slide 21

Again, the view which reads the Token uses this new Logical Record.  It also must read from a Logical File which contains the Token Physical File written to by the token writer view.

Note that the same Logical File must be used for both the token writer and token reader; using the same Physical File contained in two different Logical Files causes an error in processing. 

Slide 22

This is the logic table generated for the token views.

Any views which read a token are contained within an RETK – Read Next Token / ET – End of Token set at the top of the logic table.

The token writer view or views are listed below this set in the typical threads according to the files they read; the only change in these views is the WRTK logic table function, which writes a token, shown at the bottom of the logic table in red.

Although the token-reading views are listed at the top of the logic table, they are not executed first in the flow. Rather, the regular threads process, and when they reach a WRTK Write Token logic table function, the views in the RETK thread are called and processed. Thus the RETK – ET set is like a subroutine, executed at the end of the regular thread processing.

In this example, view 10462 is reading the token written by view 10461.  View 10462 is contained within the RETK – ET set at the top of the logic table.  View 10461 has a WRTK function to write the token.  After execution of this WRTK function, all views in the RETK – ET set are executed.

Slide 23

This is the Logic Table Trace for the Token Views.

Note that, unlike the pipe example, which has multiple event file DD names within the trace, the event file DD name here is the same throughout, ORDER001 in this example, because tokens execute in one thread.

The WRTK Write Token function, on view 10461 in this example, which ends the standard thread processing, is immediately followed by the token-reading view or views, view 10462 in this example. The only thing that makes these views look different from the views reading the input event file is that they are processing against a different Logical File and Logical Record, Logical File ID 2025 and Logical Record ID 38 in this example.

Slide 24

The trace on the right has been modified to show the effect of having multiple token writing views executing at the same time. 

Note the multiple executions of View 10462, the token-reading view, processing the same event record (Record 1) from the same event DD name, ORDER001, after each of the token-writing views, 10460 and 10461.

Slide 25

The GVBMR95 control report shows the execution results from running both the Pipe and Token views at the same time.

  • At the top of the report, the input files listed include:
      • the disk file, which is the event file for both the pipe- and token-writing views,
      • the Token (for the token-reading views), and
      • the Pipe (for the pipe-reading views).
  • The bottom of the report shows the final output disposition: records were written to
      • an extract file (shared by the pipe- and token-reading views),
      • the Token (for the token-writing view), and
      • the Pipe (for the pipe-writing view).
  • The center section shows more details about each of the views.

Slide 26

The top of this slide shows the final run statistics from the GVBMR95 control report.

Note that a common error message may be encountered if view selection for the pass is not carefully constructed, for either pipes or tokens. Execution of a pass which contains only pipe writers and no pipe readers will result in this message; the opposite is also true, with no pipe writers and only pipe readers selected. The same applies to token processes. Pairs of views must be executed for either process, matched by Logical Files containing matching Physical Files, for the process to execute correctly.

Slide 27

This module described using piping and tokens. Now that you have completed this module, you should be able to:

  • Describe uses for the SAFR piping feature
  • Read a Logic Table and Trace with piping
  • Describe uses for the SAFR token feature
  • Read a Logic Table and Trace with tokens
  • Debug piping and token views

Slide 28

Additional information about SAFR is available at the web addresses shown here. This concludes Module 22, Using Pipes and Tokens.

Slide 29

Training Module 7: Column Formulas

The slides used in the video are shown below:

Slide 1

Welcome to the training course on IBM Scalable Architecture for Financial Reporting, or SAFR. This is Module 7, “Column Formulas.”

Slide 2

This module provides you with an overview of SAFR column formulas. Upon completion of this module, you should be able to:

  • Explain the difference between column assignment and record selection
  • Set column attributes and assign simple constant values
  • Use if-then-else logic in setting column values
  • Create column formulas

Slide 3

In contrast to record filtering, SAFR provides column assignment logic and column calculations, which also use SAFR logic text. Column logic is used to populate output columns with field values, constants, or calculations.

In the top example, record filtering excluded all customers with orders prior to January 2000. In the bottom example, columns are populated based upon customer type, either retail or commercial. Note that no value is assigned for the third record because this record refers to a government customer.

Slide 4

Remember that a constant can be applied to a column by clicking in the View Source cell and then selecting the Column Source Type constant in the Column Source Properties pane.

Slide 5

An alphanumeric value is entered “as is.” It is not included in single or double quotation marks. If spaces are assigned to the column, they will not be visible in this cell. Later you’ll see a better way of assigning a space to a column.

Slide 6

An alternative approach to assigning values to a column is to use a column formula. To enter a formula, select Formula in the Column Source Type field. Then double-click anywhere in the Column Source Value cell to display the icon, and click the icon.

Slide 7

The Logic Text Editor includes the Logic Text Helper, which enables you to place keywords, field names, and column numbers into logic for column assignment. These functions are similar to functions provided by record filtering.

Slide 8

A very simple “column equals” statement can be used to assign a constant value to a column. In this example, a constant value of a space is assigned to a column. Constants surrounded with double quotation marks create alphanumeric constants.

Slide 9

Field values and constants assigned in logic text must fit within the column output attributes that will be used to define the output. In this example, the value of ORDER_TOTAL_AMOUNT is written as a 4-byte binary field in the output file.

Slide 10

Column logic can include if-then-else logic as well. The column output will contain the result of the THEN or the ELSE conditions. Nested IFs are also permitted.

In this example, the value of ORDER_TOTAL_AMOUNT or the constant of 0 is written to the output file.

Slide 11

If-then-else logic can be written in several ways. In addition to the typical IF THEN ELSE, a value can be assigned to a column and then a test can be performed to see whether that value should be overwritten.

All three logic statements appear to produce the same output. However, the third statement, which has neither an explicit nor an implicit ELSE condition, is not recommended and may have unpredictable results. The second statement is also less efficient, because the initial assignment is overwritten whenever the IF condition is true.
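A rough Python analogue of the three variants (this is not SAFR logic text; it only illustrates why the missing ELSE is a problem):

    def order_total(record):
        return record["ORDER_TOTAL_AMOUNT"]

    # 1. Explicit IF / THEN / ELSE: the column always receives a value.
    def column_v1(record):
        return order_total(record) if order_total(record) > 0 else 0

    # 2. Assign first, then conditionally overwrite: the initial assignment
    #    acts as an implicit ELSE, but is wasted work when the IF is true.
    def column_v2(record):
        value = 0
        if order_total(record) > 0:
            value = order_total(record)
        return value

    # 3. IF with no ELSE at all: when the condition is false the column is
    #    never assigned, so its content is unpredictable.
    def column_v3(record):
        value = None                      # stands in for "whatever was there before"
        if order_total(record) > 0:
            value = order_total(record)
        return value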

Slide 12

In addition to using field names in logic text, you can use functions. This column uses the ISSPACES function, which tests whether a field value is equal to spaces. If true, the output column contains the text ***Error***; if not, it contains the store ID.

Slide 13

Column logic can be used to assign values to a prior column, which is called reassignment. Column logic is applied to columns starting from the left and moving to the right. Thus columns are populated left to right within SAFR, and previously assigned values can be overwritten by later columns.

Slide 14

Within the logic text, COLUMN= assigns a value to the current column, and COL.nn= refers to a prior column, where nn is the column number. Because columns are populated left to right, only prior columns can be referenced within column formulas. This feature means the same logic does not have to be repeated in multiple columns; all the columns that depend upon the logic statement can be assigned within one column.

In this example, the logic within column 6 assigns, first, a space to column 6, second, the value XYZ to column 4, and third, asterisks to column 5.
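A small Python sketch of the reassignment behavior (column numbers follow the example above; the earlier column values are hypothetical placeholders):

    def populate_columns(record):
        columns = {}
        # Columns are populated left to right, so columns 4 and 5 receive
        # their ordinary values first...
        columns[4] = record["FIELD_A"]         # hypothetical source field
        columns[5] = record["FIELD_B"]         # hypothetical source field
        # ...and the logic attached to column 6 then assigns column 6 itself
        # and reassigns the two prior columns.
        columns[6] = " "                       # COLUMN = " "
        columns[4] = "XYZ"                     # COL.04 = "XYZ"
        columns[5] = "********"                # COL.05 = "********"
        return columns

    print(populate_columns({"FIELD_A": "a", "FIELD_B": "b"}))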

Slide 15

Column logic allows calculations to be performed against numeric fields. Any calculation referencing a field name is performed at extract time because the Extract phase has visibility into all fields on the input LR.

Slide 16

Format phase column formulas are accessed by double-clicking anywhere in the Format-Phase Calculation cell to display the icon, and then clicking the icon.

Slide 17

Format calculations can be performed only on records that have made it through record selection, via the Extract phase. Format phase calculations refer to columns rather than field numbers. The only columns available for calculations are columns that are numeric, perform aggregation, or are the results of calculations.

In this example, column 4 is populated with the results of column 3 divided by column 2.

Slide 18

The following slides show the interaction of all four types of record and column filtering in the order they are applied by the Performance Engine.

The order is:

  • Extract phase record filtering
  • Extract phase column filtering and assignment
  • Format phase column calculations, and
  • Format phase output record filtering

Slide 19

The Extract phase performs input record filtering, determining which records should be selected for processing. In this example, record number 4 is not selected for further processing because its date is less than January 2000.

These examples of selected records are used on the next slide to illustrate column assignment.

Slide 20

Column assignment is performed on each record that passes record level filtering. It assigns specific values to the columns.

In this example, a zero is assigned to the column on the first and third rows, where the tested condition was false. The third column applies similar logic based upon commercial customers, whereas the first column is assigned a 1 on every row.

These examples of extracted columns are used on the next slide to illustrate column calculations.

Slide 21

The Format phase performs column calculations using column values assigned in the Extract process, and then performs record filtering, based upon the final records.

In this example, column 4 is added to the output in the Format phase, based upon the column calculation. Because column 4 subtracts the value in either column 2 or column 3, it ends up being a count of non-retail, non-commercial customers.

These examples of calculated columns are used on the next slide to illustrate Format phase output filtering.

Slide 22

The Format phase performs record filtering as the last step before writing to the output file. It works against aggregated records if the view summarizes records. Otherwise it works against detailed records, as in this example.

In this example, the filter tests column 4 and selects only non-zero records.

In conclusion, because of the combination of record filtering, column assignment, column calculation logic, and this final output record filter, the output file ends up being an extract of all non-retail, non-commercial customers.
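The four steps can be followed end to end in a short Python sketch (the customer names and dates are made up; the column logic mirrors the example):

    from datetime import date

    orders = [
        {"customer": "Philips",  "type": "Retail",     "order_date": date(2001, 3, 1)},
        {"customer": "Johnson",  "type": "Commercial", "order_date": date(2002, 6, 5)},
        {"customer": "Gonzales", "type": "Government", "order_date": date(2003, 1, 9)},
        {"customer": "Smith",    "type": "Retail",     "order_date": date(1999, 8, 2)},
    ]

    # 1. Extract phase record filtering: drop orders before January 2000.
    selected = [r for r in orders if r["order_date"] >= date(2000, 1, 1)]

    # 2. Extract phase column assignment: constant 1, retail flag, commercial flag.
    extracted = [{"customer": r["customer"],
                  "col1": 1,
                  "col2": 1 if r["type"] == "Retail" else 0,
                  "col3": 1 if r["type"] == "Commercial" else 0} for r in selected]

    # 3. Format phase column calculation: column 4 = column 1 - column 2 - column 3.
    for row in extracted:
        row["col4"] = row["col1"] - row["col2"] - row["col3"]

    # 4. Format phase output record filtering: keep only non-zero column 4,
    #    leaving the non-retail, non-commercial (government) customers.
    output = [row for row in extracted if row["col4"] != 0]
    print(output)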

Slide 23

This module provided an overview of SAFR column formulas. Now that you have completed this module, you should be able to:

  • Explain the difference between column assignment and record selection
  • Set column attributes and assign simple constant values
  • Use if-then-else logic in setting column values
  • Create column formulas

Slide 24

Additional information about SAFR is available at the web addresses shown here.

Slide 25

Slide 26

This concludes Module 7. Thank you for your participation.

Training Module 6: Record Filtering

The slides for this training video are shown below:

Slide 1

Welcome to the training course on IBM Scalable Architecture for Financial Reporting, or SAFR. This is module 6: “Record Filtering.”

Slide 2

This module provides you with an overview of SAFR record filtering. Upon completion of this module, you should be able to:

  • Use SAFR logic text facilities to create record filters
  • Read SAFR logic text and identify keywords, fields, lookup paths, and constants
  • Construct complex record filters
  • Interpret the results of processing

Slide 3

SAFR record filters use conditional processing to include or exclude records, based upon specific criteria. SAFR filters both input records and output records. Input record filtering is performed in the extract phase, and output record filtering is performed in the format phase.

Both input and output record filtering work on entire records. If no record filters are created, all records are selected for input, and are written to the output.

Record filters are written in SAFR logic text.

Slide 4

In contrast to record filtering, SAFR also provides column assignment logic and column calculations, which also use SAFR logic text. Column logic is used to populate output columns with field values, constants, or calculations.

In the top example, record filtering excluded all customers with orders before January 2000. In the bottom example, columns are populated based upon customer type, either retail or commercial. Note that no value is assigned for the third record because it is for a “government” customer.

Slide 5

The extract record filter is opened on the View Source Properties tab. First, open the View Source Properties tab with a single left-click in the blue cell. Open the Extract Record Filter tab by double-clicking anywhere in the record filter cell to display the icon, and then click the icon.

Slide 6

The Format phase record filter is opened on the Format Phase tab on the View Properties screen. Simply click the Edit button to work with output record filters.

Slide 7

SAFR logic text is constructed in the Logic Text Editor. The left panel displays the logic, and the right panel displays the Logic Text Helper, which lists keywords and values that can be used in logic text. Within logic text, comments are preceded by an apostrophe and are shown in green. Keywords are shown in blue. Field names are shown in black, and alphanumeric constants are shown in pink, enclosed in pink double quotation marks.

Slide 8

Because the Extract process has access to the original input record, extract record filters use field names and LR names. In SAFR metadata, for example, field names are enclosed in brackets. Format phase logic text filters refer only to column numbers, not to field names.

Slide 9

The logic text helper enables you to build logic text by selecting the text to include in the filter. Double-clicking any listed keyword or field places that value in the logic text. Constants, such as the pink zeros shown here, are the only portion of the logic text that requires manual entry.

Slide 10

The Logic Text Helper includes operators, such as these:

  • Logical operators (AND, OR, NOT)
  • Arithmetic operators
  • Comparisons such as GREATER THAN or LESS THAN, and
  • Keywords, such as LIKE and CONTAINS.

Slide 11

The SELECT and SKIP keywords tell the Performance Engine which records to select or skip. The logic shown in this example selects all records with an order date greater than January 1, 2000. All other records will be skipped.

Slide 12

The SELECTIF and SKIPIF statements provide a shorthand way of specifying the same condition. Note that the greater-than sign has been switched to a less-than-or-equal-to sign, so that both statements will select the same records.
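The equivalence can be expressed as two Python predicates (illustrative only; the date value comes from the example):

    from datetime import date

    CUTOFF = date(2000, 1, 1)

    def keep_with_select(record):
        # SELECT records with an order date greater than January 1, 2000;
        # everything else is skipped.
        return record["order_date"] > CUTOFF

    def keep_with_skipif(record):
        # SKIPIF the complementary condition (less than or equal to the date);
        # whatever is not skipped is kept, so both filters select the same records.
        return not (record["order_date"] <= CUTOFF)

    sample = {"order_date": date(2001, 5, 17)}
    assert keep_with_select(sample) == keep_with_skipif(sample)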

Slide 13

The Logic Text Helper includes a listing of all fields from the input logical record. To avoid syntax errors, place the cursor at the correct location in the SAFR keyword, and double-click to insert the selected field name.

Slide 14

The Logic Text Helper also shows all lookup paths and fields on LRs available through joins. When used in logic text, these field names are preceded by the lookup path name.

In this example, the logic tests the customer date-of-birth field, looking for records with specific dates to include or exclude.

Slide 15

Format filtering can be performed only on records that have made it through record selection, in the Extract phase. Format phase output record filtering refers to columns rather than field numbers. The only columns available for output filtering are columns that are numeric, perform aggregation, or are the results of calculations. Format filtering is applied after aggregation is performed.

In this example, only records where column 6 is greater than 0 are written to the final output file. The accumulated order count in column 6 for Johnson is 0, so this record is not written to the output file, but the Philips record, with a total of 3, is.
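A compact Python sketch of aggregation followed by the format filter (the record values are taken from the example; everything else is hypothetical):

    # Extracted rows: (customer sort key, order count destined for column 6).
    extracted_rows = [("Johnson", 0), ("Philips", 1), ("Philips", 2)]

    # Aggregation happens first: order counts are summed by sort key.
    totals = {}
    for customer, order_count in extracted_rows:
        totals[customer] = totals.get(customer, 0) + order_count

    # Format phase filtering is then applied to the aggregated records:
    # only rows where column 6 is greater than 0 are written to the output.
    output = {customer: total for customer, total in totals.items() if total > 0}
    print(output)   # {'Philips': 3} -- the Johnson record (total 0) is dropped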

Slide 16

Logic text functions test numerous conditions, including various date conditions, nulls, numeric values, and found and not-found conditions on joins. In this example, all records with spaces in the ORDER DATE field will be skipped.

Slide 17

Logic text supports nested IF statements. SAFR logic text follows the typical order of operations and logic constructs of other programming languages.

In this example, all order records for customer 0 at stores 1 and 2 are selected. In addition, all orders for customer 2 at any store, with an amount less than or equal to 0, are selected for processing.
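Expressed as a Python selection predicate (field names are hypothetical; the values follow the example):

    def keep(record):
        # Customer 0: select only orders placed at stores 1 and 2.
        if record["CUSTOMER_ID"] == 0:
            return record["STORE_ID"] in (1, 2)
        # Customer 2: select orders at any store with an amount of 0 or less.
        if record["CUSTOMER_ID"] == 2:
            return record["ORDER_AMOUNT"] <= 0
        return False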

Slide 18

The ISFOUND (is found) and ISNOTFOUND (is not found) functions test the results of joins. If a corresponding join record is found, the ISFOUND condition tests true. The parameter for these functions is simply the lookup path to be tested for a valid join.

In this example, order records are selected if a customer record is found using the order-to-customer lookup path.
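The behavior resembles testing whether a dictionary lookup succeeds, as in this Python sketch (the customer table and keys are hypothetical stand-ins for the order-to-customer lookup path):

    customers = {"C01": {"name": "Philips"}, "C02": {"name": "Johnson"}}

    def isfound(lookup_table, key):
        # Analogue of ISFOUND: true when the join locates a corresponding record.
        return key in lookup_table

    orders = [{"order_id": 1, "customer_id": "C01"},
              {"order_id": 2, "customer_id": "C99"}]   # C99 has no customer record

    selected = [o for o in orders if isfound(customers, o["customer_id"])]
    print(selected)   # only order 1 passes the ISFOUND test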

Slide 19

To view a quick syntax snapshot, hover over the Logic Text Helper keywords. SAFR online help provides full syntax help for all keywords. To access help, open the Help menu and select Logic Text.

Slide 20

Logic text syntax checking tests logic text for errors, such as mistyped keywords or invalid field names. Error messages are displayed at the bottom of the screen. Views cannot be activated until all errors are cleared.

Slide 21

Logic text syntax checking displays errors when syntax is not correct. The error message displays the line followed by the character number where the error was detected.

In this example, the syntax is missing a second right parenthesis after the constant “2.”

Slide 22

This module provided an introduction to record filtering. Now that you have completed this module, you should be able to:

  • Use SAFR logic text facilities to create record filters
  • Read SAFR logic text and identify keywords, fields, lookup paths, and constants
  • Construct complex record filters, and
  • Interpret the results of processing

Slide 23

Additional information about SAFR is available at the web addresses shown here.

Slide 24

Slide 25

This concludes Module 6. Thank you for your participation.

Training Module 1: Introduction to Views

The following slides are used in this online video:

Slide 1

Welcome to the training course on IBM Scalable Architecture for Financial Reporting, or SAFR. This is Module 1: Introduction to SAFR Views.

Slide 2

This module provides you with a broad overview of the SAFR product and the basics of its graphical user interface. By the end of this training, you should be able to:

  • Identify the two major software components of the SAFR product
  • Identify the major types of SAFR metadata
  • Navigate the SAFR Workbench, and
  • Create a simple view. 

Slide 3

SAFR is a software application development tool that solves high-volume data analysis and reporting problems. SAFR provides capabilities for data transformation, data mining, database query, and financial reporting.

Slide 4

SAFR consists of two software components: the PC-based Workbench and the mainframe-based batch process known as the Performance Engine. Developers use the Workbench to build applications that are stored in a metadata repository in an IBM®  DB2®  database. These applications are then run by the Performance Engine, which reads data from source files or databases, transforms it, and writes it to output files.

Slide 5

Several types of metadata make up a SAFR application. The most common are the environment definition, the physical file definition (or PF), the logical file definition (or LF), the logical record definition (or LR), the view definition, and the view folder. 

Note that, when discussing SAFR metadata, we often omit the word definition because it is usually clear from the context whether we mean the metadata or the entity it refers to. 

Slide 6

An environment definition describes a logical collection of metadata within the SAFR Workbench. Typical types of environments include development, production, or training environments. Access to an environment can be restricted to a certain set of users. 

Slide 7

A physical file definition, or PF, describes a data source. Examples include customer or order files. A logical file definition, or LF, describes a collection of one or more physical files. A logical record definition, or LR, describes a record layout. In COBOL programs, record layouts are often found in copybooks. In relational databases, they are found in table definitions.

Some examples of these metadata types are shown here. 

  • A logical record for a customer is used to map data to a customer logical file. The customer logical file refers to data in a customer physical file. 
  • An order logical record is used to map data from a logical file named ORDER_001, which refers to data in a single physical file named ORDER_001. 
  • The order logical record can also be used to map data from a logical file named ORDER_ALL. ORDER_ALL refers to a collection of order physical files.
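The relationships among these metadata types can be modeled in a few lines of Python (an illustrative model only, not actual SAFR Workbench objects):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PhysicalFile:            # PF: describes a data source
        name: str

    @dataclass
    class LogicalFile:             # LF: a collection of one or more physical files
        name: str
        physical_files: List[PhysicalFile] = field(default_factory=list)

    @dataclass
    class LogicalRecord:           # LR: a record layout, usable with several logical files
        name: str
        logical_files: List[LogicalFile] = field(default_factory=list)

    order_001 = LogicalFile("ORDER_001", [PhysicalFile("ORDER_001")])
    order_all = LogicalFile("ORDER_ALL", [PhysicalFile(f"ORDER_{n:03}") for n in (1, 2, 3)])
    order_lr = LogicalRecord("ORDER", [order_001, order_all])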

Slide 8

A view definition describes a data transformation. It is analogous to a program or a query. Views are the basic units of work that are performed by the Performance Engine. 

Views are often grouped together into view folders for ease of maintenance. View folders are often named for a particular developer or function. Security can be applied to view folders to prevent unauthorized access. 

Slide 9

The SAFR Workbench is used to add, change, and delete SAFR metadata. It contains a menu and toolbar, and consists of multiple display areas, or frames. 

  • The Navigator area displays the types of metadata available. 
  • The Metadata List area displays a list of items for the selected metadata type. 
  • And the Editor area is the part of the screen where you modify metadata items.

Slide 10

  • By expanding the View Folders item in the Navigator area, you can see a list of all view folders. 
  • The contents of the selected folder are displayed in the Metadata List area.
  • From there, you can select a view for editing and the view will be displayed in the Editor area.

Slide 11

View information is displayed on two separate screens:

  • The View Editor screen, where you can define specific data transformations.
  • The View Properties screen, where you can modify information that applies to the whole view. 

Slide 12

You use the General tab on the View Properties screen to specify the output format – flat file or hardcopy report – and other related information. 

This tab also displays information about when the view was created and last modified and by whom. 

Slide 13

In addition, the General tab displays the name of the view folder where the view is stored.  

Slide 14

You can access advanced features on the Extract Phase tab and the Format Phase tab. You can open these tabs by single-clicking them. 

Slide 15

You can toggle back and forth between the View Properties screen and the View Editor screen by clicking the first icon in the Editor area toolbar, or by pressing the F9 key.

Slide 16

In View Editor mode, the Workbench displays several frames of view information. 

Slide 17

The View Editor grid displays the characteristics of view output columns. These characteristics include the data type, the length, and the alignment, such as left, right, or center. 

Slide 18

You can display information about the data source for the view by right-clicking a blue cell in the View Editor grid. This information includes the logical record and the logical file. 

Slide 19

To open a frame showing the column source properties, you right-click a green cell. The source of a column’s data can be a field in the source file, a constant, a lookup value, or the result of a formula. 

Slide 20

The View Editor incorporates several functions, such as inserting a column or activating a view. You can run a View Editor function in several ways:

  • Select it from the Edit menu or the Action menu for the Workbench
  • Left-click the function icon on the View Editor toolbar
  • Right-click in the View Editor grid and select the function from the pop-up menu
  • Or press the appropriate key combination, which is noted on the Workbench menu and the pop-up menu. 

Choose whichever technique you prefer. 

Slide 21

To add a new view source, you right-click on the grid to display the pop-up menu, and then select Insert and View Source.

Slide 22

The Insert View Source window opens. You can select from a list of data sources in the window. 

Slide 23

Now let’s take what you’ve learned and create your own view. The following example is a simple data transformation, reading data from the ORDER001 file and writing out only the Order ID, Customer ID, and Total Amount fields. 

Slide 24

If we were to code a conventional program, we would:

  • Define the file attributes
  • Define the record layouts
  • Code the business logic
  • Compile the program
  • Link the program and
  • Run the program

Slide 25

With the SAFR tool, the first three steps are performed in the Workbench and the last three are performed for you by the Performance Engine. 

Slide 26

Defining files and records will be covered in the SAFR training module entitled “Creating Metadata.” The Performance Engine will be introduced in the “Performance Engine Overview” module. The topic of coding business logic will be introduced in the next few slides. 

Slide 27

In the example described in the following slides, metadata has been pre-populated in the Workbench for ease of instruction. 

To create a new view, click the Administration menu, select New and then select View.

Slide 28

The View Properties tab opens. 

Slide 29

Enter a descriptive name for the view, such as “Simple_Transformation_View.” Note that embedded spaces are not allowed in names, so you must use underscores to separate words. Next, clear the Format Phase check box; this feature is not needed for this simple view.

Slide 30

To display the View Editor Grid, select Show Grid or Properties from the Edit menu. (Alternatively, you can click the toolbar icon or press the F9 key.)

Slide 31

From the Edit menu, select Insert, and then select View Source. (Alternatively, you can right-click and select Insert and then View Source from the pop-up menu, or you can press the Shift key and the Insert key.) The Insert View Source window opens.

Slide 32

From the Logical Record list, select ORDER. Then, from the Logical File list, select ORDER_001.

Slide 33

From the Edit menu, select Add Column. (Alternatively, you can click the plus sign icon on the toolbar or press Alt and the Insert key.) A new column is added to the grid.

Slide 34

Click the green cell. The Column Source Properties frame opens on the right. 

Slide 35

In the Column Source Type field, click the list box and select Source File Field.

Slide 36

In the Column Source Value field, click the list box and select ORDER_ID.  

Slide 37

Repeat the previous steps to add columns for Customer ID and Order Total Amount. Then save the view by selecting Save from the File menu, or by clicking the Save icon in the Workbench toolbar, or by pressing Control and S. 

Slide 38

To activate the view, use any of these methods:

  • Select Activate from the Action menu
  • Click the Activate icon on the View Editor toolbar
  • Press F5. 

The view title bar now displays the word “active”. Save the view again to preserve this active state. The view is now ready to be run.

Slide 39

This module provided an overview of SAFR. Now that you have completed this module, you should be able to:

  • Identify the two major software components of the SAFR product
  • Identify the major types of SAFR metadata
  • Navigate the SAFR Workbench and
  • Create a simple view

Slide 40

Additional information about SAFR is available at the web addresses shown here.

Slide 41

Slide 42

This concludes Module 1. Thank you for your participation.