We have also provided leadership for broader OMP initiatives, such as the Open z/OS Enablement Project, based upon our experience with shared environments. You can learn more about our relationship to our sister projects in this blog post.
We’re not done making progress though. In the coming weeks and months we are working to:
Release the Workbench, Run Control, and Performance Engine code bases to GitHub.
Begin to convert the GenevaERS documentation to a new GitHub home.
Continue to explore deeper integration between GenevaERS and Apache Spark, as the Map on the Mainframe component.
Apache Spark Integration: “Map” on the Mainframe Phase
The team continues nearly weekly R&D efforts to explore tighter integration with Apache Spark. GenevaERS has many similarities to the Map-Reduce constructs of Apache Spark, preceding it by a decade or more.
The GenevaERS Extract Engine, GVBMR95, is a parallel-processing, machine-code-generating function that can resolve many queries or functions in one pass through the underlying data. The GenevaERS Summarization and Aggregation Engine, GVBMR88, closely resembles the reduce phase of Map-Reduce.
The team believes there may be distinct advantages to using the GenevaERS Extract Engine for the Map phase in Map-Reduce, coupled with Apache Spark for the Reduce phase, using its extended functionality and capabilities.
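The division of labor described above can be sketched in a few lines of plain Python (an illustration only, using invented record layouts and view names, not GenevaERS or Spark APIs): one "map" pass through the input resolves many extract views at once, and a separate "reduce" step aggregates each view's records.

```python
# Toy model of the proposed split: a GVBMR95-style single-pass extract
# feeding a GVBMR88/Spark-style reduce. All names here are illustrative.
from collections import defaultdict

def extract_pass(records, views):
    """One pass through the data; every view sees every record."""
    outputs = defaultdict(list)
    for rec in records:
        for name, (selector, transform) in views.items():
            if selector(rec):
                outputs[name].append(transform(rec))
    return outputs

def reduce_phase(extracted, key_fn, value_fn):
    """Aggregate the extracted records by key."""
    totals = defaultdict(int)
    for rec in extracted:
        totals[key_fn(rec)] += value_fn(rec)
    return dict(totals)

orders = [
    {"cust": "A", "amount": 10, "region": "east"},
    {"cust": "B", "amount": 25, "region": "west"},
    {"cust": "A", "amount": 5,  "region": "east"},
]
views = {
    "east_orders": (lambda r: r["region"] == "east", lambda r: r),
    "all_orders":  (lambda r: True,                  lambda r: r),
}
extracted = extract_pass(orders, views)
totals = reduce_phase(extracted["east_orders"],
                      lambda r: r["cust"], lambda r: r["amount"])
print(totals)  # {'A': 15}
```

The point of the sketch is the shape of the pipeline, not the implementation: the expensive scan of the underlying data happens once, no matter how many views are resolved.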
The team holds a weekly R&D session on most Fridays at 2:30 EST on Webex; you are welcome to join if you are interested.
Would you like to get more out of your valuable data on z/OS? Consider applying GenevaERS to the problem. Use the GenevaERS e-mail list to start a discussion of the potential use case.
A daily scrum call is held Monday through Thursday at 5:00 EST on Webex (not held on TSC meeting days, the 2nd and 4th Tuesdays). Our Onboarding Document will help you get connected to all the GenevaERS resources.
Additionally, here are opportunities, many marked in the GenevaERS repo as Good First Issue, to explore involvement in the GenevaERS community:
The GenevaERS project is teaming up with and providing support for two other Open Mainframe Project initiatives: Polycephaly and the Open z/OS Enablement (OzE) project.
Polycephaly is intended to be a key technology in expanding access to mainframes, marrying two different development life cycle methodologies: distributed and z/OS. Polycephaly requires minimal z/OS system programming and provides flexible development paths and options, moving from linear to non-linear development. It removes the need for separate development paths for distributed and z/OS workloads. Developers can develop on any platform, store source in Git, and deploy with Jenkins.
GenevaERS’s Performance Engine, which resolves scores of queries or processes in a single pass through a database, today is executed via standard JCL. Polycephaly opens the possibility for an updated execution engine, allowing use of Git and Jenkins commands to perform all the functions typically done within JCL. This may open up use of the Performance Engine to resources not skilled in z/OS commands and JCL.
Work to progress this investigation would include attempting to convert the GenevaERS model Performance Engine JCL to Polycephaly commands. Doing this work will expose the developer to a number of new and old technologies, building bridges in interesting ways.
Learn more about Polycephaly through this introductory video on GenevaTV.
The Open z/OS Enablement, or OzE, project grew out of the experience of establishing a community working environment for GenevaERS. The team found there are few environments in which to do open source community work on z/OS, and so proposed to the Open Mainframe Project an approach to help solve the problem.
The vision is to lower the bar for companies and individuals to make z/OS computing resources available more broadly. Lack of access to z environments is a major impediment to growth and innovation on the platform. Types of uses targeted include:
Open Source Communities and new software development efforts
Mentoring and new user growth, consistent with and attractive to those who use other public cloud learning opportunities
Experimentation and innovation on the edge of environment stability, like the Raspberry Pi model

Impediments to these types of environments include:

Critical knowledge and support from sysprogs for z systems
Cost and control of donated resources (MIPS, software, storage, etc.)
Security and access control
The project intends to create code, processes and techniques which reduce these impediments and enable broader use and development of the Z platform.
You can learn more about the Open z/OS Enablement Project by watching this episode of GenevaTV.
Organization of the project continues, but much progress has been made. Check out the Community Repository on GitHub, and its Governance and Technical Steering Committee Checklist to see what’s been happening.
But don’t stop there….
The first Technical Steering Committee (TSC) Meeting, open to all, is scheduled for Tuesday, August 11, 2020 at 8 PM US CDT, Wednesday August 12, 10 AM WAST/HK time.
Project Email List: To join the meeting or to be kept up to date on project announcements, join the GenevaERS Email List. You’ll receive an invitation as part of the calendar system. You must be on the e-mail list to join the meeting.
The POC will be composed of various runs of Spark on z/OS, using GenevaERS components in some, and ultimately a full native GenevaERS run.
The configurations to be run include the following:
The initial execution to produce the required outputs will be native Spark, on an open platform.
Spark on z/OS, utilizing JZOS as the IO engine. JZOS is a set of Java utilities for accessing z/OS facilities; they are C++ routines with Java wrappers, allowing them to be included easily in Spark. Execution of this process will be entirely on z/OS.
The first set of code planned for release for GenevaERS is a set of utilities that perform GenevaERS encapsulated functions, such as GenevaERS overlapped BSAM IO, and the unique GenevaERS join algorithm. As part of this POC, we’ll encapsulate these modules, written in z/OS Assembler, in Java wrappers, to allow them to be called by Spark.
If time permits, we’ll switch out the standard Spark join processes for the GenevaERS utility module.
The last test is native GenevaERS execution.
The results of this POC will start to contribute to this type of architectural choice framework, which will highlight where GenevaERS’s single pass optimization can enhance Apache Spark’s formidable capabilities.
Note that the POC scope will not test the efficiencies of the Map and Reduce functions directly, but will provide the basis upon which that work can be done.
Open source is all about people making contributions. Expertise is needed in all types of efforts to carry out the POC.
Data preparation. We are working to build a data set that can be used on either z/OS or open systems to provide a fair comparison for either platform, but with enough volume to test some performance. Work here includes scripting, data analysis, and cross platform capabilities and design.
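As one example of the kind of data-preparation scripting described above, the following Python sketch generates fixed-length records that could be written identically on open systems and z/OS. The field names, widths, and volumes are invented for illustration; they are not the actual POC layout.

```python
# Hypothetical sketch: generate reproducible fixed-length test records
# usable on either platform. Layout and names are illustrative only.
import random

RECORD_LAYOUT = [("order_id", 10), ("cust_id", 8), ("amount", 12)]

def make_record(order_id, cust_id, amount):
    fields = {"order_id": str(order_id),
              "cust_id": cust_id,
              "amount": f"{amount:.2f}"}
    return "".join(fields[name].rjust(width) for name, width in RECORD_LAYOUT)

random.seed(42)  # a fixed seed gives both platforms identical test data
records = [make_record(i, f"C{i % 100:04d}", random.uniform(1, 500))
           for i in range(1000)]

assert all(len(r) == 30 for r in records)  # every record is fixed length
# On z/OS the same bytes could be written in EBCDIC: r.encode("cp1047")
```

A real data set would of course be far larger and would need an agreed layout, but the principle (deterministic generation, fixed-length records, a single encoding step for z/OS) carries over.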
Spark. We want some good Spark code that uses the power of that system and makes the POC give real-world results. Expertise needed includes writing Spark functions, data design, and tuning.
Java JNI. Java Native Interface is the means by which one calls other language routines under Java, and thus under Spark. Assistance can be used in helping to encapsulate the GenevaERS utility function, GVBUR20 to perform fast IO for our test.
GenevaERS. We hope to be able to extract the configuration we create as GenevaERS VDP XML and provide it as a download for initial installation testing. A similar goal applies to the sample JCL that will be provided. GenevaERS expertise in these areas is needed.
Documentation, Repository Work, and on and on and on. At the end of drafting this blog entry, facing the distinct chance it will be released with typos and other problems, we recognize we could use help in many more areas.
The focus for our work is this Spark-POC repository. Clone, fork, edit, commit, create pull request, repeat.
On a daily basis there is an open scrum call on these topics held at this virtual meeting link at 4:00 PM US CDT. This call is open to anyone to join.
The following are some initial thoughts on where the next version of GenevaERS as an open source project might go:
Currently the only way to specify GenevaERS processes (called GenevaERS “views”) is through the Workbench, a structured environment allowing specification of column formats and the values used to populate those columns, including logic written in GenevaERS Logic Text.
GenevaERS developers have known for years that in some cases a simple language would be easier to use. The structured nature of the Workbench is useful for simple views, but becomes more difficult to work with for more complex views.
In response, we propose enhancements to the front-end of GenevaERS for the following:
Open up a new method of specifying logic besides the Workbench, initially Java.
This language would be limited to the subset of Java functions supported by the extract engines.
The current Workbench compiler would be modified to produce a GenevaERS logic table from the Java code submitted to it.
Develop plug-ins for major IDEs (Eclipse, IntelliJ) that highlight use of functions GenevaERS does not support in the language.
GenevaERS Performance Engine Processes should be able to construct a VDP (View Definition Parameter) file from a mix of input sources.
Doing this would allow:
Storage of GenevaERS process logic in source code management systems, enabling all the benefits of change management
Opening up to other languages; often extracts from GenevaERS repositories are more SQL-like because of the data normalization that happens within the repository
Taking in logic specified for Spark and other execution engines which conform to GenevaERS syntax, providing higher performance throughput for those processes
Begin to open the possibility of constructing “pass” specifications, rather than having them simply defined in execution scripting.
Perhaps creation of in-line “exit”-like functionality wherein the specified logic can be executed directly (outside the Logic Table construct).
The GenevaERS Performance Engine uses the Logic Table and VDP file to resolve the GenevaERS processes (GenevaERS “views”).
Proposed enhancements include:
Expanding today’s very efficient compiler to more functions, to support greater sets of things expressed in the languages. This would include things like greater control over looping mechanisms, temporary variables, greater calculation potential, and execution of in-line code and called functions within a view under the GenevaERS execution engine.
If there is interest and capacity, we may even move in the longer term toward an alternative Java execution engine. This would be a dynamic Java engine, similar to the GenevaERS extract engine today, not one statically created for specific business functions as discussed below.
View to Stand-alone Java Program: It is possible to consider creating utilities which translate from GenevaERS metadata to another language, each view becoming a stand-alone program which could simply be maintained as a custom process. This would provide a migration path to other tooling for very simple GenevaERS processes where performance is not important.
Multi-View Execution Java Program: A set of GenevaERS views (a VDP) could be converted to a single Java program which produces multiple outputs in a single pass of the input file, similar to what GenevaERS does today. In other words, it is possible to look at how GenevaERS performs the one-pass architecture, isolates differing business logic constructs from each other, performs joins, etc., and write new code to do these functions. This would also provide performance benefits from learning from the GenevaERS architecture.
Dynamic Java Program: Today’s GenevaERS compiler, which produces a Logic Table, could be converted to produce a Java (or other language) executable. This might add the benefit of making the processes dynamic, rather than static. This can benefit changing rules and functions in the GenevaERS Workbench, provide some sense of consistent performance for those functions, and potentially grow the community of GenevaERS developers for new functions.
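The static-versus-dynamic distinction above can be illustrated in miniature. The following toy Python sketch (invented logic-text syntax, nothing like real GenevaERS Logic Text) compiles a view's logic at run time into a callable function, rather than generating a fixed program per view:

```python
# Toy illustration of dynamic view compilation: logic text is compiled
# to an executable at run time, so changing the text changes behavior
# without regenerating a static program. Syntax here is invented.
view_logic = "amount * 2 if region == 'east' else amount"

def compile_view(logic_text):
    # compile once; evaluate per record with the record's fields as names
    code = compile(logic_text, "<view>", "eval")
    return lambda record: eval(code, {}, dict(record))

view_fn = compile_view(view_logic)
print(view_fn({"amount": 100, "region": "east"}))  # 200
print(view_fn({"amount": 100, "region": "west"}))  # 100
```

In a real engine the compile step would target a Logic Table or generated Java rather than Python bytecode, and the logic language would be validated against the supported function subset, but the run-time compile/execute split is the same idea.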
These ideas will be discussed at an upcoming GenevaERS community call to gauge interest and resources which might be applied.
Existing customers have been working on using GenevaERS metadata in related processes outside of GenevaERS. Our first contributor to open source, Sandy Peresie, took on the challenge of building a process which reads a GenevaERS metadata XML file and produces an output of selected elements in COBOL. This was an experimental project, the first of the GenevaERS open source projects.
She chose to use Python 3.8.
To execute the process:
Place the downloaded ASCII/CRLF VDP in the Data directory.
Change the file name in safrmain_poc.py.
Execute the file.
Design specifications for this work in summary were:
View Column Records can be used to produce the expected output from the view.
To produce an input structure from the referenced LRs, do the following:
Use a LogicalRecord from the tree. Read its LOGRECID field. You can also get the LR name from this section.
Then, in the LRField records, look for fields that have that LOGRECID (that is, treat it as a foreign key, a back reference). There you will find the LRFIELDID and name of each field, its start position within the record, and an ordinal number. There is a complication here if a field redefines another at the same position; look at the redefines. We suggest ignoring redefined fields as a first iteration.
Now that you have the LRFIELDID, you can use the LR-Field-Attribute section of the XML. There you will find the data type of the field (FLDFMTCD), its length, sign, number of decimal places, etc.
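The three steps above can be sketched with Python's standard XML library. The miniature document below is invented for illustration (real VDP XML element nesting and names may differ); only LOGRECID, LRFIELDID, and FLDFMTCD are taken from the design notes:

```python
# Hedged sketch of the LR / LRField / LR-Field-Attribute walk described
# above, against a made-up miniature of the VDP XML.
import xml.etree.ElementTree as ET

VDP_SAMPLE = """
<VDP>
  <LogicalRecord><LOGRECID>38</LOGRECID><NAME>ORDER_LR</NAME></LogicalRecord>
  <LRField><LRFIELDID>1</LRFIELDID><LOGRECID>38</LOGRECID>
    <NAME>ORDER_ID</NAME><FIXEDSTARTPOS>1</FIXEDSTARTPOS></LRField>
  <LRField><LRFIELDID>2</LRFIELDID><LOGRECID>38</LOGRECID>
    <NAME>CUST_ID</NAME><FIXEDSTARTPOS>11</FIXEDSTARTPOS></LRField>
  <LR-Field-Attribute><LRFIELDID>1</LRFIELDID>
    <FLDFMTCD>ALNUM</FLDFMTCD><MAXLEN>10</MAXLEN></LR-Field-Attribute>
</VDP>
"""

root = ET.fromstring(VDP_SAMPLE)

# Step 1: the LogicalRecord gives the LOGRECID and LR name.
lr = root.find("LogicalRecord")
logrec_id = lr.findtext("LOGRECID")

# Step 2: treat LOGRECID in LRField as a back reference to collect fields.
fields = [f for f in root.findall("LRField")
          if f.findtext("LOGRECID") == logrec_id]

# Step 3: join to LR-Field-Attribute on LRFIELDID for type and length.
attrs = {a.findtext("LRFIELDID"): a for a in root.findall("LR-Field-Attribute")}
for f in fields:
    fid = f.findtext("LRFIELDID")
    fmt = attrs[fid].findtext("FLDFMTCD") if fid in attrs else None
    print(f.findtext("NAME"), f.findtext("FIXEDSTARTPOS"), fmt)
```

From these pieces a COBOL copybook line per field (name, PIC clause from FLDFMTCD and length, position order) follows naturally.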
The following is a screen shot of the output she produced from her initial prototyping efforts focused on step 1.
Thanks for your work Sandy! You’ve started the ball rolling on GenevaERS Open Source Contributions!
The slides used in the following video are shown below:
Welcome to the training course on IBM Scalable Architecture for Financial Reporting, or SAFR. This is Module 22, Using Pipes and Tokens.
Upon completion of this module, you should be able to:
Describe uses for the SAFR piping feature
Read a Logic Table and Trace with piping
Describe uses for the SAFR token feature
Read a Logic Table and Trace with tokens
Debug piping and token views
This module covers three special logic table functions: the WRTK, which writes a token; the RETK, which reads the token; and the ET function, which signifies the end of a set of token views. These functions are analogous to the WR functions, the RENX function, and the ES function for non-token views.
Using SAFR Pipes does not involve special logic table functions.
This module gives an overview of the Pipe and Token features of SAFR.
SAFR views often either read data from or pass data to other views, creating chains of processes, each view performing one portion of the overall transformation.
Pipes and Tokens are ways to improve the efficiency with which data is passed from one view to another
Pipes and Tokens are only useful to improve efficiency of SAFR processes; they do not perform any new data functions not available through other SAFR features.
A pipe is a virtual file passed in memory between two or more views. A pipe is defined as a Physical File of type Pipe in the SAFR Workbench.
Pipes save unnecessary input and output. If View 1 outputs a disk file and View 2 reads that file, then time is wasted for output from View 1 and input to View 2. This configuration would typically require two passes of SAFR, one processing view 1 and a second pass processing view 2.
If there is a pipe placed between View 1 and View 2, then the records stay in memory, and no time is wasted and both views are executed in this single pass.
The pipe consists of blocks of records in memory.
Similarly a token is a named memory area. The name is used for communication between two or more views.
Like a pipe, it allows passing data in memory between two or more views.
But unlike pipes, which are virtual files and allow asynchronous processing, tokens operate one record at a time, synchronously between the views.
Because Pipes simply substitute a virtual file for a physical file, views writing to a pipe are an independent thread from views reading the pipe.
Thus for pipes, both threads are asynchronous.
Tokens though pass data one record at a time between views. The views writing the token and the views reading the token are in the same thread.
Thus for tokens there is only one thread, and both views run synchronously.
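The asynchronous-pipe versus synchronous-token distinction can be modeled in a few lines of plain Python (this is an illustration of the threading shapes only, not SAFR code):

```python
# Toy model: a pipe is a buffered hand-off between two threads; a token
# is a direct, same-thread call to the reader views per record written.
import queue
import threading

def pipe_writer(records, buf):
    for rec in records:
        buf.put(rec)          # blocks of records in a shared buffer
    buf.put(None)             # end of file marker

def pipe_reader(buf, out):
    while (rec := buf.get()) is not None:
        out.append(rec)       # runs asynchronously in a second thread

records = list(range(12))
buf, piped = queue.Queue(maxsize=4), []
t2 = threading.Thread(target=pipe_reader, args=(buf, piped))
t2.start()
pipe_writer(records, buf)     # thread 1 fills buffers as thread 2 drains
t2.join()

# Tokens: one record at a time, reader views called in the same thread.
tokened = []
def token_readers(rec):       # invoked after every token write
    tokened.append(rec)
for rec in records:
    token_readers(rec)        # synchronous "write token, then read" cycle

assert piped == tokened == records
```

Both paths deliver every record, which is the point: the features differ in scheduling and threading, not in data function.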
An advantage of pipes is that they include parallelism, which can decrease the elapsed time needed to produce an output. In this example, View 2 runs in parallel to View 1. After View 1 has filled block 1 with records, View 2 can begin reading from block 1. While View 2 is reading from block 1, View 1 is writing new records to block 2. This advantage is one form of parallelism in SAFR which improves performance. Without this, View 2 would have to wait until all of View 1 is complete.
Pipes can be used to multiply input records, creating multiple output records in the piped file, all of which will be read by the views reading the pipe. This can be used to perform an allocation type process, where a value on a record is divided into multiple other records.
In this example, a single record in the input file is written by multiple views into the pipe. Each of these records is then read by the second thread.
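An allocation of this kind can be sketched as follows (the accounts and percentages are invented; each split plays the role of one pipe-writing view):

```python
# Toy allocation: several "views" each write a derived record to the
# pipe for one input record; the second thread reads them all.
def allocate(record, splits):
    # each split behaves like a pipe-writing view producing one record
    return [{"account": acct, "amount": round(record["amount"] * pct, 2)}
            for acct, pct in splits]

splits = [("rent", 0.5), ("payroll", 0.3), ("supplies", 0.2)]
pipe = []
for rec in [{"amount": 100.0}, {"amount": 40.0}]:
    pipe.extend(allocate(rec, splits))   # thread 1: multiple writes per input

assert len(pipe) == 6                    # thread 2 reads every piped record
assert sum(r["amount"] for r in pipe) == 140.0
```

Two input records become six piped records, and the allocated amounts still total the original values.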
Multiple views can write tokens, but because tokens can only contain one record at a time, token reader views must be executed after each token write. Otherwise the token would be overwritten by a subsequent token write view.
In this example, on the left a single view, View 1, writes a token, and views 2 and 3 read the token. View 4 has nothing to do with the token, reading the original input record like View 1.
On the right, both views 1 and 4 write to the token. After each writes to the token, all views which read the token are called.
In this way, tokens can be used to “explode” input event files, like pipes.
Both pipes and tokens can be used to conserve or concentrate computing capacity. For example, CPU-intensive operations like look-ups, or lookup exits, can be performed one time in views that write to pipes or tokens, and the results added to the record that is passed on to dependent views. Thus the dependent, reading views will not have to perform these same functions.
In this example, the diagram on the left shows the same lookup is performed in three different views. Using piping or tokens, the diagram on the right shows how this lookup may be performed once, the result of the lookup stored on the event file record and used by subsequent views, thus reducing the number of lookups performed in total.
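The lookup-concentration pattern can be sketched like this (reference data and field names are invented; the counter stands in for the CPU cost being saved):

```python
# Toy model: one pipe-writing view performs the expensive lookup and
# stores the result on the record, so downstream views reuse it.
lookup_calls = 0
REFERENCE = {"C01": "East", "C02": "West"}

def expensive_lookup(cust_id):
    global lookup_calls
    lookup_calls += 1
    return REFERENCE[cust_id]

def enrich(rec):                       # the single pipe-writing view
    return {**rec, "region": expensive_lookup(rec["cust"])}

piped = [enrich(r) for r in
         [{"cust": "C01"}, {"cust": "C02"}, {"cust": "C01"}]]

# Two downstream views use the stored result; no further lookups occur.
east = [r for r in piped if r["region"] == "East"]
west = [r for r in piped if r["region"] == "West"]
assert lookup_calls == 3 and len(east) == 2 and len(west) == 1
```

With three reading views and three records, the naive layout on the left of the diagram would perform nine lookups; the concentrated layout performs three.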
A limitation of pipes is the loss of visibility to the original input record read from disk; because thread 1 and thread 2 execute asynchronously, the record being processed in thread 1 at any moment is unpredictable.
There are instances where visibility by the view using the output from another view to the input record is important. As will be discussed in a later module, the Common Key Buffering feature can at times allow for use of records related to the input record. Also, the use of exits can be employed to preserve this type of visibility. Token processing, because it is synchronous within the same thread, can provide this capability.
In this example, on the top the token View 2 can still access the original input record from the source file if needed, whereas the Pipe View 2 on the bottom cannot.
Piping can be used in conjunction with Extract Time Summarization, because the pipe writing views process multiple records before passing the buffer to the pipe reading views. Because tokens process one record at a time, they cannot use extract time summarization. A token can never contain more than one record.
In the example on the left, View 1, using Extract Time Summarization, collapses four records with like sort keys from the input file before writing this single record to the Pipe Buffer. Whereas Token processing on the right will pass each individual input record to all Token Reading views.
Note that to use Extract Time Summarization with Piping, the pipe reading views must process Standard Extract Format records, rather than data written by a WRDT function; summarization only occurs in CT columns, which are only contained in Standard Extract Format Records.
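The collapse on the left side of the example can be modeled simply (the keys and CT values are invented; the point is that summarization happens before the buffer is handed to the pipe readers, while tokens pass every record through):

```python
# Toy model of Extract Time Summarization feeding a pipe: records with
# like sort keys are collapsed in the CT (calculated) column.
from collections import OrderedDict

def extract_time_summarize(records):
    collapsed = OrderedDict()          # preserves sort-key order
    for key, ct_value in records:
        collapsed[key] = collapsed.get(key, 0) + ct_value
    return list(collapsed.items())

records = [("A", 10), ("A", 5), ("A", 1), ("A", 4), ("B", 7)]
pipe_buffer = extract_time_summarize(records)   # piping: 5 records -> 2
token_stream = records                          # tokens: all 5, one at a time

assert pipe_buffer == [("A", 20), ("B", 7)]
assert len(token_stream) == 5
```

The pipe readers see two summarized records, while token readers would be called once for each of the five originals.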
Workbench set-up for writing to, and reading from a pipe is relatively simple. It begins with creating a pipe physical file and a view which will write to the pipe.
Begin by creating a physical file with a File Type of “Pipe”.
Create a view to write to the Pipe, and within the View Properties:
Select the Output Format of Flat File, Fixed-Length Fields,
Then select the created physical file as the output file
The next step is to create the LR for the Pipe Reader Views to use.
The LR used for views which read from the pipe must match exactly the column outputs of the View or Views which will write to the Pipe. This includes data formats, positions, lengths and contents.
In this example, Column 1 of the View, ”Order ID” is a 10 byte field, and begins at position 1. In the Logical Record this value will be found in the Order_ID field, beginning at position 1 for 10 bytes.
Preliminary testing of the new Logical Record is suggested: make the view write to a physical file first, inspect the view output to ensure the data matches the Pipe Logical Record, and then change the view to actually write to a pipe. Looking at an actual output file to examine positions is easier than using the trace, because the trace shows only data referenced by a view, not the entire record produced by the pipe-writing view.
The view which reads the Pipe uses the Pipe Logical Record. It also must read from a Logical File which contains the Pipe Physical File written to by the pipe writer view.
In this example, the view reads the Pipe Logical Record (LR ID 38) and the Pipe Logical File (LF ID 42), which contains the Physical File of type Pipe written to by the pipe-writing view. The view itself uses all three of the fields on the Pipe Logical Record.
This is the logic table generated for the piping views. The pipe logic table contains no special logic table functions. The only connection between the ES sets is the shared physical file entity.
The first RENX – ES set is reading Logical File 12. This input event file is on disk. The pipe writing view, view number 73, writes extracted records to Physical File Partition 51.
The next RENX – ES set reads Logical File 42. This Logical File contains the same Physical File partition (51) written to by the prior view. The Pipe Reader view is view 74.
This is the trace for piped views. In the example on the left, the thread reading the event file, with a DD Name of ORDER001, begins processing input event records. Because the pipe writer view, number 73, has no selection criteria, each record is written to the Pipe by the WRDT function. All 12 records in this small input file are written to the Pipe.
When all 12 records have been processed, Thread 1 has reached the end of the input event file, it stops processing, and turns the pipe buffer over to Thread 2 for processing.
Thread 2, the Pipe Reader thread, then begins processing these 12 records. All 12 records are processed by the pipe reader view, view number 74.
As shown on the right, if the input event file had contained more records, enough to fill multiple pipe buffers, the trace would still begin with Thread 1 processing, but after the first buffer is filled, Thread 2, the pipe reading view, begins processing. From this point, the printed trace rows for Thread 1 and Thread 2 may be interspersed as both threads process records in parallel against differing pipe buffers. This randomness is highlighted by each NV function against a new event file record.
Workbench set-up for writing to, and reading from a token is very similar to the set-up for pipes. It begins with creating a physical file with a type of token, and a view which will write to the token.
Begin by creating a physical file with a File Type of “Token”.
Create a view to write to the token, and within the View Properties, select the Output Format of Flat File, Fixed-Length Fields.
Then, in the properties for the Logical Record and Logical File to be read, select the output Logical File and the created Physical File as the destination for the view.
Like pipe usage, the next step is to create the Logical Record for the Token Reader Views to use. In our example, we have reused the same Logical Record used in the piping example. Again, the logical record which describes the token must match the output from the view exactly, including data formats, positions, lengths and contents.
Again, the view which reads the Token uses this new Logical Record. It also must read from a Logical File which contains the Token Physical File written to by the token writer view.
Note that the same Logical File must be used for both the token writer and token reader; using the same Physical File contained in two different Logical Files causes an error in processing.
This is the logic table generated for the token views.
Any views which read a token are contained within an RETK – Read Next Token / ET – End of Token set at the top of the logic table.
The token writer view or views are listed below this set in the typical threads according to the files they read; the only change in these views is the WRTK logic table function, which Writes a Token shown at the bottom of the logic table in red.
Although the token reading views are listed at the top of the Logic Table, they are not executed first in the flow. Rather, the regular threads process, and when they reach a WRTK Write Token logic table function, the views in the RETK thread are called and processed. Thus the RETK – ET set is like a subroutine, executed at the end of the regular thread processing.
In this example, view 10462 is reading the token written by view 10461. View 10462 is contained within the RETK – ET set at the top of the logic table. View 10461 has a WRTK function to write the token. After execution of this WRTK function, all views in the RETK – ET set are executed.
This is the Logic Table Trace for the Token Views.
Note that, unlike the Pipe Example which has multiple Event File DD Names within the Trace, because Tokens execute in one thread, the Event File DD Name is the same throughout, ORDER001 in this example.
The WRTK Write Token function (on view 10461 in this example), which ends the standard thread processing, is immediately followed by the token-reading view or views (view 10462 in this example). The only thing that makes these views look different from the views reading the input event file is that they process against a different Logical File and Logical Record: Logical File ID 2025 and Logical Record ID 38 in this example.
The trace on the right has been modified to show the effect of having multiple token writing views executing at the same time.
Note the multiple executions of View 10462, the token-reading view, processing the same Event Record 1 from the same Event DD Name ORDER001, after each of the token-writing views, 10460 and 10461.
The GVBMR95 control report shows the execution results from running both the Pipe and Token views at the same time.
At the top of the report, the Input files listed include the
Disk File, the event file for both the pipe and token writing views
the Token (for the Token Reading views), and
the Pipe (for the Pipe Reading views)
The bottom of the report shows the final output disposition, showing records were written to
an Extract File (shared by the pipe and token reading views),
the Token (for the Token Writing View) and
the Pipe (for the Pipe Writing View)
The center section shows more details about each one of the Views.
The top of this slide shows the final run statistics about the run from the GVBMR95 control report.
Note that a common error message may be encountered if view selection for the pass is not carefully constructed, for either Pipes or Tokens. Execution of a pass which contains only Pipe Writers and no Pipe Readers will result in this message. The opposite is also true, with no Pipe Writers and only Pipe Readers selected. The same is also true for token processes. Pairs of views must be executed for either process, matched by Logical Files containing matching Physical Files for the process to execute correctly.
This module described using piping and tokens. Now that you have completed this module, you should be able to:
Describe uses for the SAFR piping feature
Read a Logic Table and Trace with piping
Describe uses for the SAFR token feature
Read a Logic Table and Trace with tokens
Debug piping and token views
Additional information about SAFR is available at the web addresses shown here. This concludes Module 22, The SAFR Piping and Token Functions.