Consolidated Data Supply Chain: 2016

The following slides were developed while consulting on financial systems at 12 of the top 25 banks in the world from 2013-2016. Although not specific to GenevaERS, they delineate the causes of the problems in our financial systems and the potential impact of GenevaERS-supported systems and the Event Based Concepts.

Additional details can be read in this white paper: A Proposed Approach to Minimum Costs for Financial System Data Maintenance, and further thinking on the topic is available in this blog and video presentation from 2020: Sharealedger.org: My Report to Dr. Bill McCarthy on REA Progress.

SAFR and A Smarter Planet for Financial Reporting: 2011

The following was written as part of a renewed sales initiative for SAFR, building upon the latest financial ledger projects.

This write-up, written for IBM sales efforts, was used as the summary of Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 2: The Problem, and Chapter 3: The Solution.

The Problem

The recent financial crisis has exposed a systemic problem: the world's largest financial institutions cannot adequately account for and report on liquidity, positions, currency exposure, credit, market, and interest rate risk, or on product, customer, and organizational performance. The CFO plays a critical role in correcting this problem by leveraging the financial data they already control, and by leveraging scale to take out cost. But even industry insiders do not realize that financial institutions suffer a unique set of domain problems when it comes to financial reporting.

Current financial reporting systems are antiquated and very inefficient. They were designed decades ago simply to track the flow of capital, revenue, and expenses at the company and department levels. The lack of transparency is evident in the increasing costs of the finance function, with few benefits to show for the investment. Sarbanes-Oxley and other regulations have proven ineffective at getting at the root of the problem, and the regulations that followed the financial meltdown may well prove similarly ineffective. These pressures create diseconomies of scale which affect the largest institutions the most.

For the most part, existing systems deliver accurate results in summary, but the increase in transparency requires line of sight to the underlying causes of those results. Consider if your personal bank account statement or credit card bill only presented the change in balance between periods but provided no list of transactions. When the statement is as expected, further detail may not be needed. But when the balance is in question, your first response is "why," and you immediately want to see the transaction detail. The same issues are at stake when managing the finances of the enterprise, with the associated costs and consequences considerably higher. A single instance of financial restatement has cost some organizations hundreds of millions of dollars to correct, not counting lost market valuation.

Currently 90% of the money supply in mature markets is represented by digital records of transactions, not hard currency. It's no wonder that the volume of electronic finance records being kept has exploded compared to when these systems were first created. Yet our approach to these demands has not been to automate the process of keeping and accessing the details of the transactions. Almost all employees in today's financial institutions are involved in capturing and coding financial details in some way, and a large number of non-finance employees are involved in the investigative process to locate the additional detail so often required. This manual intervention is incredibly inefficient and costly.

As we see all around us, computing capacity has increased by several orders of magnitude since these finance systems were designed. However, reporting processes have grown organically as a system of transaction summaries in order to continue to bridge multiple financial systems, but they have lacked a single unified approach. This has meant that, for the most part, the business of financial reporting has not benefited from the increase in computing capacity available today.

The Solution

A Smarter Planet is founded on financial markets that provide for greater transparency and comprehension of the financial reporting by bank and non-bank entities, allowing the markets to react to conditions in more informed, less reactionary ways. IBM has spent 25 years refining an approach to this for financial institutions. The IBM® Scalable Architecture for Financial Reporting™ (SAFR) system provides financial reporting that is built bottom-up from the individual business event transactions to provide the finest grained views imaginable.

By harnessing today's computing power and a straight-through processing approach, the details behind summary data can be made available in seconds rather than days or weeks. Providing nearly instant access to the highest quality financial data at any level of granularity will eliminate the duplicative reporting systems which tend to capture and produce summaries of the same business events for many stakeholders and reporting requirements.

More importantly, it will automate the hidden work of armies of people who are required to investigate details and attempt to explain results, or attempt to reconcile the disparate results of these many reporting systems, a truly wasteful activity caused by the systems themselves. Keeping the details in a finance system that can serve these needs allows for increased control, quality, and integrity of audit efforts rather than dissipating them.

Some may question how much detail is the right level of detail. Others may suggest this is too radical a change in a mature, understood, and tested set of systems. IBM's experience with some of the largest financial services companies suggests that building a finance system based on the requirement to instrument the most granular level of transaction detail immediately stems the tide of increasing costs, lowers a variety of risks, and can be a key driver in transforming the bank's ability to become more agile. In time this approach begins to provide economies of scale for reporting.

SAFR is: (1) an information and reporting systems theory, (2) refined by 25 years of practical experience in creating solutions for a select group of the world's largest businesses, (3) distilled into a distinctive method to unlock the information captured in business events, (4) through the use of powerful, scalable software for the largest organizations' needs, (5) in a configurable solution addressing today's transparency demands.

The Theory

Companies expend huge sums of money to capture business events in information systems. Business events are the stuff of all reporting processes. Yet executives report feeling like they are floating in rafts, crying "Data, data everywhere and no useful information." Focusing reporting systems on exposing business event combinations can turn data into information.

The Experience

Although analysis of business events holds the answers to business questions, they aren't to be trifled with, particularly for the largest organizations. Reporting processes, particularly financial reporting processes, accumulate millions and billions of business events. In fact, the balance sheet is an accumulation of all the financial business events from the beginning of the company! Such volumes mean unlocking the information embedded in business events requires fundamentally different approaches. The 25 years of experience of building SAFR in services engagements has exposed, principle by principle, piece by piece, and layer by layer, the only viable way.

The Method

This experience has been captured in a method of finding and exposing business events within the context of the existing critical reporting processes. It uses today's recognized financial data like a compass pointing north to constrain, inform, and guide identification of additional reporting details. It facilitates definition of the most important questions to be answered, and the configuration of repositories to provide those answers consistently. It also explains how to gradually turn on the system without endangering existing critical reporting processes.

The Software

The infrastructure software, a hard asset with hundreds of thousands of lines of source code and a feature set rivaling some of the best-known commercial software packages, is most often what is thought of when someone refers to SAFR.

The Scan Engine is the heart of SAFR, performing in minutes what other tools require hours or days to do. The Scan Engine is a parallel processing engine that generates IBM z/OS machine code. In one pass through a business event repository it creates many business event "views," providing rich understanding. Through its join processes, it categorizes business events orders of magnitude more efficiently than other tools. Built for business event analysis, it consistently achieves a throughput of a million records a minute. It is highly extensible to complex problems.
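
The one-pass, multiple-view idea can be pictured with a small sketch. The record layout, view definitions, and lookup table below are invented for illustration, and the real Scan Engine generates z/OS machine code rather than running interpreted logic like this:

```python
# Minimal sketch of one-pass, multiple-view processing over business events.
# The record layout, view definitions, and reference table are illustrative only.
from collections import defaultdict

events = [  # business event records (one pass is made over this repository)
    {"account": "1001", "cost_center": "CC1", "product": "P1", "amount": 150.0},
    {"account": "1001", "cost_center": "CC2", "product": "P2", "amount": -40.0},
    {"account": "4001", "cost_center": "CC1", "product": "P1", "amount": 900.0},
]

product_names = {"P1": "Auto", "P2": "Home"}  # lookup ("join") done in the same pass

views = {  # each "view" = a filter, group-by keys, and an aggregated amount
    "balance_by_account": {"filter": lambda e: True, "keys": ("account",)},
    "revenue_by_product": {"filter": lambda e: e["account"].startswith("4"),
                           "keys": ("product_name",)},
}

outputs = {name: defaultdict(float) for name in views}

for event in events:                      # ONE pass through the event repository
    event["product_name"] = product_names.get(event["product"], "Unknown")
    for name, view in views.items():      # every view is evaluated against each record
        if view["filter"](event):
            key = tuple(event[k] for k in view["keys"])
            outputs[name][key] += event["amount"]

for name, rows in outputs.items():        # each view becomes its own output file/report
    print(name, dict(rows))
```

Because every view is applied during the same scan, adding another report costs one more set of in-memory accumulators rather than another pass over the repository.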

SAFR Views are defined in the SAFR Developer Workbench, by rule-based processes in the SAFR Analyst Workbench, or in custom-developed applications. The Scan Engine, executed as a scheduled process, scans the SAFR View and Metadata Repository, selecting the views to be resolved at that time.

The Indexed Engine, a new SAFR component, provides one-at-a-time View resolution through on-line access to Scan Engine and other outputs. It uses Scan Engine performance techniques. Report structure and layout are defined dynamically in the Analyst Workbench. The Indexed Engine creates reports in a fraction of the time required by other tools. Its unique capabilities allow for a movement-based data store, dramatically reducing the data volumes required both in processing and to fulfill report requests.
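
One way to picture a movement-based store is that only the changes (movements) for each period are kept, and any balance is reconstructed by accumulating movements up to the date of interest. A minimal sketch, with hypothetical data:

```python
# Sketch of a movement-based store: only period movements are kept,
# and a balance as of any date is the sum of movements up to that date.
movements = [  # (arrangement_id, posting_date, amount) -- illustrative data only
    ("ARR-1", "2011-01-31",  500.0),
    ("ARR-1", "2011-02-28", -120.0),
    ("ARR-1", "2011-03-31",   75.0),
]

def balance_as_of(arrangement_id, as_of_date):
    """Accumulate movements up to and including as_of_date."""
    return sum(amount for arr, date, amount in movements
               if arr == arrangement_id and date <= as_of_date)

print(balance_as_of("ARR-1", "2011-02-28"))  # -> 380.0
```

Storing movements rather than full balances keeps data volumes proportional to activity, which is why it reduces both processing and report-fulfillment volumes.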

Upon entering Managed Insights, users select parameters to drill down to increasing levels of business event detail and perform multidimensional analysis through the Viewpoint Interfaces. The Insight Viewer enables discovery of business event meaning in an iterative development mode.

The Solution

The SAFR Infrastructure Software has been configured over 10 years for a number of clients to provide an incredibly scalable Financial Management Solution (FMS) for the largest financial services organizations.

The heart of FMS is the Arrangement Ledger (AL). An "arrangement" is a specific customer/contract relationship. The AL, a customer/contract sub-ledger, maintains millions of individual customer/contract level balance sheets and income statements. This incredibly rich operational reporting system supports, in summary, a nearly unbelievable swath of information provided by scores of legacy reporting systems, with the added benefit of being able to drill down to business event details if needed. Doing so allows reporting high quality financial numbers by customer, product, risk, counterparty, and other characteristics, all reconciled, audited, and controlled.

AL is fed daily business events, typically beginning with legacy general ledger entries and then transitioning to detailed product system feeds over time. The business events become complete journal entries at the customer-contract level, including reflecting the impact of time in the Accounting Rules Engine. Rules are under the control of Finance rather than embedded in programs in source systems, enabling Finance to react to changes in financial reporting standards, including International Financial Reporting Standards (IFRS).

The business event journal entries are posted by the Arrangement Ledger on a daily basis, while it simultaneously generates additional point-in-time journal entries based upon existing balances, including those for multi-currency intercompany eliminations, GAAP reclassification, and year-end close processing. It accepts and properly posts back-dated entries to avoid stranded balances, and summarizes daily activity to pass to the General Ledger. The General Ledger provides another control point for the traditional accounting view of the data. The Arrangement Ledger detects and performs reclassification, keeping the arrangement detail aligned with the summary General Ledger.
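
The posting flow described above can be sketched in simplified form. The journal record format and accounts below are invented for illustration, and the real Accounting Rules Engine and posting logic are far richer:

```python
# Sketch of daily Arrangement Ledger posting: journal entries are kept at the
# arrangement (customer/contract) level, back-dated entries are posted to their
# effective period so no balance is stranded, and a daily summary feeds the GL.
from collections import defaultdict

journals = [  # (arrangement_id, account, effective_period, amount) -- illustrative
    ("ARR-1", "loan_principal",  "2011-03", 1000.0),
    ("ARR-2", "loan_principal",  "2011-03",  250.0),
    ("ARR-1", "interest_income", "2011-02",   12.5),   # back-dated entry
]

arrangement_balances = defaultdict(float)   # detail kept by arrangement
gl_summary = defaultdict(float)             # summarized activity passed to the GL

for arr, account, period, amount in journals:
    arrangement_balances[(arr, account, period)] += amount   # post at detail level
    gl_summary[(account, period)] += amount                  # same entry, summarized

print(dict(arrangement_balances))
print(dict(gl_summary))  # the GL receives only the summary, but it reconciles to detail
```

Because the General Ledger summary is derived from the same entries as the arrangement detail, the two levels reconcile by construction, which is the control point the text describes.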

AL also accepts arrangement descriptive information, with hundreds of additional attributes to describe each customer-contract, as well as counterparty and collateral descriptive attributes, enabling production of trial balances by a nearly unlimited set of attributes, not just the traditional accounting code block. Extract processes produce various summaries, perhaps ultimately numbering in the hundreds or thousands, to support information delivery for not only traditional accounting but also statutory, regulatory, management, and risk reporting. The SAFR one-pass multiple-view capability allows AL to load data, generate new business events, and create extracts all in one process, including loading the incredibly information-rich Financial Data Store.

Information Delivery includes multiple ways of accessing the Arrangement Ledger and Financial Data Store.  The major window is through SAFR Managed Insights.  This parameter-driven Java application provides thousands of different permutations of the data.  It allows drill-down from summaries to lower and lower levels of data without impacting on-line response time.  It allows dynamic creation of new reports and multi-dimensional analysis of Financial Data Store data.  Extract facilities provide the ability to feed other applications with rules maintained by finance.  Other reports provide automated reconciliation and audit trails.

FMS can be tailored to work within an existing environment, including working within the existing security and reference data frameworks. FMS can often be a sub-component of an ERP implementation.

Conclusion

This is a financial system architecture for the 21st century. This is the reporting system architecture for the 21st century. Finance transformation starts with finance systems transformation.  Finance systems transformation starts with rejecting the legacy finance systems architecture that provides only summary results.  It is transforming the financial systems—the original enterprise data warehouse—into a system capable of supporting today’s information demands.

__________

Copyright ©2010, 2011, 2015, 2018 by Kip M. Twitchell.

All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the author except for the use of brief quotations in a book review or scholarly journal. Posted by permission.

Global Universal Bank Case Study: 2010

The following presentation was the last major presentation given by Rick Roth, one of the founders of Geneva, before he retired.

More about this time can be found in Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 57. The General Ledger through Chapter 63. Go Live.

Global Investment Bank Analyzes Trade Level Detail: 1999

[An investment bank arm], a division of [a major Swiss bank], chose Geneva ERS to manage its Detail Repository, a central component of the Global General Ledger SAP implementation. This repository provides trade-level detail analysis for internal and external global reporting. [Bank] management has referred to the global general ledger as "a cornerstone of [the bank]'s strategy and its successful realization globally is vital to all future plans."

Approximately 1.3 million detailed transaction data records from twenty-two different feeder systems are loaded into the Detail Repository nightly. These transactions are trade-level detail records from Europe, Asia Pacific, and North America. Geneva ERS scans the repository's 51 million records in 22 entities and 269 physical partitions. It extracts 20 million records that are aggregated into approximately 480,000 summary balances. These summary records are sent to SAP for balance sheet and summary profit and loss reporting. This process runs in approximately 3 hours of elapsed time and 5½ hours of CPU time and produces 30 different outputs.

A second Detail Repository process uses Geneva ERS and custom programs to satisfy intricate regulatory requirements. This system consists of 65 Geneva "Views" or programs, 4 custom programs, and 5 PERL scripts. Geneva is executed 19 times, with each execution handling a subset of the core business requirements. During this nightly process Geneva reads 71 million records in 40 gigabytes, extracts 59 million records in 30 gigabytes, and performs 229 million table joins. The output is created in 12 CPU hours and 8 wall clock hours. In comparison, legacy applications required 24 hours to complete a limited number of these processes.

Outputs from these processes are used for US tax and regulatory, Asia-specific regulatory and management, and Swiss regulatory reporting. They include information on:

  • Capital Allocations
  • Average Balancing and Multi-Currency Revaluation
  • Syndicated Loan Netting
  • Federal and Swiss Regulatory Collateral Allocation
  • Residual Maturity
  • Interest Rate Selection
  • Product Risk Weighting
  • Specific Reserve Allocation
  • Unutilized Facility Calculation.

The view outputs include files used in additional processing or to feed other systems, delimited files, tabular reports, and inputs to a sophisticated executive information system.  The executive information system allows users to select which report to view and for what period.  The user is presented with the highest level summaries.  The user can then drill down into specific areas of interest, select ranges of data, sort columns, view other dimensions of the same data, graph the data, and export to spreadsheets.  The executive information system is accessed by as many as 50 users throughout the world.

The Geneva Views are maintained in Sybase tables accessed by the Geneva ERS Visual Basic ViewBuilder front-end. The software maintains various types of metadata, including record layouts, field types, join relationships between record structures, and logical and physical file partition information, as well as the actual query selection, summarization, and formatting logic. The business logic contained within the views ranges from simple transform logic to the sophisticated, globally defined business rules that make up the global general ledger accounting model.

More about this time can be read in Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 20. Parallelism and Platform.

Financial Services Company Projects: 1998

The following citation, summarized at the top with more detail below, is for a large financial services firm. More about this time can be read in Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 21. ERP Reporting and Chapter 35. Model the Repository.

Insurance Company Manages Very Large Data Warehouse

Three years ago, a Fortune 100 Insurance Company embarked upon a "Data Warehouse Program" to "build a single consistent common source of financially balanced data to be used by multiple business users for decision support, data mining, and internal/external reporting." A major component of this information architecture is the construction of multiple Operational Data Stores, or ODSs. These ODSs may contain up to 10 years of event-level or detailed transaction history as the basis for populating other data warehousing applications. The company uses Geneva ERS to load and extract data from these data stores.

Data is loaded into these repositories nightly. In some cases Geneva ERS is used to extract data from operational source systems. "Source systems" vary from the PeopleSoft Journal Lines DB2 table to legacy flat files. The DB2 extract process reads 80 physical DB2 partitions in parallel, scanning over 80 million records in approximately 10 minutes.

This extracted data is edited using a combination of custom Geneva ERS processes and stand-alone custom programs.  After the data is edited it is loaded into the repositories in parallel by Geneva ERS.  The number of records loaded varies by repository, from 100,000 financial transactions (journal lines) to more than 1,000,000 policy transactions.  In total, approximately 2 million transactions a day are added to the repositories.

In the same pass of the data that loads the repositories, Geneva ERS also produces multiple reports and extract files for use by business users and other application systems.  One of the repositories is “backed up” by Geneva ERS at the same time.  This daily execution of Geneva ERS reads over 450 million records in 23 different entities and 220 physical partitions.  It writes over 900 million physical records to over 800 files.  This process has run in 1 hour 3 minutes of wall clock time and 3 hours of CPU time. 

Other executions of Geneva ERS extract data on a daily, weekly, monthly and annual basis.  The output from one execution creates an executive information drill down file accessed via the company Intranet.  This web site is accessible by over 300 business users.

The Geneva "Views" executed in all of the above processes are maintained within the Geneva ViewBuilder software. This includes record structures, field types, relationships between structures, and the actual queries themselves. Many queries approach programs in their sophistication and contain complex business logic; some have over 900 column definitions. These views also utilize custom code for accessing certain files and executing business logic that requires a programming language. Over 100 people have been trained on programming using Geneva ERS, and the company has had up to 10 testing environments at one time.

The most sophisticated use of Geneva ERS emulates a PeopleSoft financial cost allocation process. Custom programs were developed which generate over 6,000 Geneva ERS views based upon over 7,000 PeopleSoft rules. Geneva ERS executes these views to scan the financial repository, selecting records eligible for allocation. It then allocates these costs through four allocation layers, such as products and geographical units. At 1999 year-end, this process read over 50 million records, selecting nearly 3 million that were eligible for allocation. These 3 million records were exploded into over 290 million virtual allocation records, of which 186 million summarized records were written to physical files. The process runs in 7½ hours wall clock time and 28½ hours of CPU time.

Financial Services Company Simplifies Complex Processes

The Problem

Many financial service organizations were early adopters of computer technology.  They quickly constructed systems to automate numerous processes.  Programs were often added to the fringe of existing processes to solve new problems or provide an additional report or file for analysis. The need to keep multiple types of books and fulfill regulatory reporting requirements added to the complexity of some of these systems.  These systems grew before modularization, subsystem definition, or other computing concepts were developed.  Furthermore, the number of transactions generated by these organizations always challenged computing capacity.  This required creative and complex solutions. Over time these systems metamorphosed into closely intertwined, interconnected, and inflexible systems.

In 1996, a Fortune 100 Insurance Company determined their reporting system was so inflexible that it would become unsupportable in the near future. They decided "…to build a single, consistent source of financially balanced data to be used by multiple business users for decision support, data mining, and internal/external reporting." After analyzing their information needs, they determined that the best approach to satisfy the broadest number of users was to construct a comprehensive data warehouse environment, including extract transformation processes, Operational Data Stores (ODSs), reporting environments, and data marts. The company viewed the ODSs as the heart of the data warehousing effort. ODSs are designed to contain up to 10 years of detailed transactional history. By storing the transactional detail, the company can satisfy requests for individual transactions, summaries using any field from the transaction detail, or snapshots as of any point in time. This robust environment truly enables the "single, consistent source of financially balanced data."

Although such a data store simplifies and satisfies all types of information needs, it also means tremendous data volumes.  Managing such data volumes requires a creative, industrial strength solution.  It also requires people who know how to make it happen.  This company went looking for a tool and the people that were up to the challenge.  They chose PricewaterhouseCoopers and Geneva ERS.

The Solution

Geneva offers a robust data warehousing solution able to process tremendous amounts of data quickly.  It also provides for innovative, flexible solutions that are easily supported by company employees.  This company uses Geneva to do the following:

  • Source system Extract and Transform
  • ODS Load, Query, Interface production
  • Detailed Financial Allocation Process
  • Executive Information Delivery

Extract and Transform

Extracting from legacy systems can often present a challenge to the most robust tools, especially when source systems include complex data structures and high data volumes. Legacy systems were often designed to support monthly reporting cycles by accumulating and summarizing data before finally kicking out the hard copy report at the end of the month. Changing such systems to provide more recent information can make the difference between a successful information environment and simply a different format for the same old content. However, making sure the results are right can be a very expensive process.

PwC consultants used a new approach to attack this problem. For this client, they first created a program which understood these complex and unique legacy structures and which could be called by the Geneva ERS open API. This program opened up difficult legacy systems to the power of Geneva. Geneva was used to extract data from multiple sources. The Geneva development environment allows definition of source system data structures. Once defined to Geneva, business logic can be applied. The business logic is stored as a Geneva "View." The Views are organized by logical frequencies, like daily or weekly processes, or as on-request processes. Once the environment was created, they focused on understanding the data structures instead of tracing through the entire legacy process. They used an iterative prototyping approach to discover the source of all the data contained in legacy downstream files. They successfully proved that the system could be converted from a monthly to a daily system. They extracted the source system data and populated the ODSs. The iterative prototyping approach used by PwC consultants shaved months off the delivery cycle and hundreds of man-hours spent in legacy system research.
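
The pattern is a custom "read" routine that understands the proprietary legacy layout and hands back one flattened record at a time for the engine to apply view logic to. The sketch below is hypothetical: the record layout, field names, and function are invented for illustration and are not the actual Geneva ERS API.

```python
# Hypothetical sketch of a custom read routine invoked through an open API:
# it decodes a proprietary legacy structure and yields one record at a time.
import struct

def read_legacy_records(raw_bytes):
    """Decode a packed legacy structure (layout invented for illustration)."""
    record_size = 14                      # 10-byte policy id + 4-byte amount in cents
    for offset in range(0, len(raw_bytes), record_size):
        policy, amount_cents = struct.unpack(">10si", raw_bytes[offset:offset + record_size])
        yield {"policy": policy.decode().strip(), "amount": amount_cents / 100.0}

# The engine would call the routine for each record and apply view logic to it.
sample = struct.pack(">10si", b"POL0000001", 12345) + struct.pack(">10si", b"POL0000002", -500)
for record in read_legacy_records(sample):
    print(record)
```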

The power of Geneva is evident in the types of source systems from which data is extracted. In addition to the complex legacy structure noted above, a DB2 extract process reads 80 physical DB2 partitions in parallel, scanning over 80 million records in approximately 10 minutes. All processing is completed in a fraction of the time it would take another data warehousing tool.

Load, Query and Interface Production

PwC set out to help create an environment that would allow many users with diverse information needs to pull the desired information from the repositories. They created data and processing models that allowed all queries to be resolved in a single pass of the transaction detail. These models minimize data storage requirements by eliminating duplicate data from transactions while combining the data back together efficiently for report production.

In the same pass of the data that loads the repositories, Geneva ERS also produces multiple reports and extract files for use by business users and other application systems. Many output formats are possible, including reports, spreadsheets, and files. And because the ODSs contain transaction-level detail, users are able to choose what level of detail they wish to see. The data model also allows for use of Geneva's date-effective join capabilities. This enables users to create reports as of any point in time. Summaries can be created using organizational structures from any point in time.
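
A date-effective join matches each transaction to the reference row that was in effect on the transaction's date, which is what makes "as of any point in time" reporting possible. A minimal sketch, with hypothetical reference data:

```python
# Sketch of a date-effective join: each transaction is matched to the
# organizational structure row that was in effect on the transaction date.
org_history = [  # (department, effective_from, division) -- illustrative reference data
    ("D100", "1997-01-01", "Personal Lines"),
    ("D100", "1999-07-01", "Commercial Lines"),   # department reassigned mid-1999
]

transactions = [("D100", "1999-03-15", 200.0), ("D100", "1999-09-30", 350.0)]

def effective_division(department, as_of_date):
    """Pick the latest reference row whose effective date is on or before as_of_date."""
    rows = [(eff, div) for dept, eff, div in org_history
            if dept == department and eff <= as_of_date]
    return max(rows)[1] if rows else None

for dept, date, amount in transactions:
    print(date, amount, effective_division(dept, date))
# 1999-03-15 falls under "Personal Lines"; 1999-09-30 under "Commercial Lines".
```

The same transaction detail can therefore be summarized under the organizational structure of any chosen date simply by changing the as-of date used in the join.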

The client chose to construct an ODS to support the ERP General Ledger being installed. However, the coding structure for the new ERP package differed significantly from the historical organization and account coding structure. The ODS supported all interface production, translating from old to new and from new to old. Global utilities were constructed that were called from the Geneva API. Because of Geneva's ability to process the detailed transactions, all fields could be translated at the lowest level of detail. This enabled consistent answer set production for all interfaces.

The number of records loaded varies by repository, from 100,000 financial transactions (journal lines) to more than 1,000,000 policy transactions. In total, approximately 2 million transactions a day are added to the repositories. The single-pass architecture even produces backups using Geneva ERS in the same pass of the ODS. This daily execution of Geneva ERS reads over 450 million records in 23 different entities and 220 physical partitions. It writes over 900 million physical records to over 800 files. This process has run in 1 hour 3 minutes of wall clock time and 3 hours of CPU time.

Detailed Financial Allocation Process

The most sophisticated use of Geneva emulates an ERP financial cost allocation process. The insurance company recognized that with their volume of data, the ERP package could not handle the allocation process. They would have to change their business requirements or find another solution. They looked to PwC and Geneva to supply that solution. Client members and PwC consultants analyzed the allocation process and created custom programs which generate over 6,000 Geneva views based upon over 7,000 allocation rules. Geneva executes these views to scan the financial ODS, selecting records eligible for allocation. It then allocates these costs through four allocation layers, such as products and geographical units.

During the first year of implementation, the year-end allocation process read over 50 million records, selecting nearly 3 million that were eligible for allocation. These 3 million records were exploded into 186 million allocation results. The process runs in 7½ hours wall clock time. The Geneva system produces these results 250 times faster than the ERP package. Because of this innovative Geneva solution, the business users were able to have their data represented exactly as they wanted.
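
Greatly simplified, the allocation pattern is: each eligible source record is "exploded" into one output record per target in its rule, with the amount split by the rule's ratios. The rule format and data below are invented for illustration; the actual process generated thousands of views applied across four allocation layers.

```python
# Simplified sketch of a rule-driven cost allocation: each eligible record is
# exploded into one record per target, splitting the amount by the rule's ratios.
allocation_rules = {  # source cost center -> [(target product, share)] -- illustrative
    "CC-OVERHEAD": [("Auto", 0.6), ("Home", 0.3), ("Life", 0.1)],
}

source_records = [
    {"cost_center": "CC-OVERHEAD", "amount": 1000.0},
    {"cost_center": "CC-DIRECT",   "amount":  400.0},   # no rule: not eligible
]

allocated = []
for rec in source_records:
    rule = allocation_rules.get(rec["cost_center"])
    if rule is None:
        continue                                  # record not eligible for allocation
    for target, share in rule:                    # "explode" one record into many
        allocated.append({"product": target, "amount": rec["amount"] * share})

print(allocated)   # 1 eligible record became 3 allocation results
```

Applied through successive layers (for example cost center to product, then product to geography), each source record can multiply many times over, which is how roughly 3 million eligible records grew to 186 million allocation results.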

Geneva ERS Executive Information System

Providing users with on-line access into repositories holding hundreds of millions of detailed records is a significant challenge.  The PwC team developed an innovative approach to give users the access, but not compromise the performance of the ODS processes or require massive new processing capacity.  The result was the Geneva ERS Executive Information System.

This system uses Geneva to produce summary "views" of the detailed transactions. These summaries were developed within Geneva and can summarize by any field in the ODS. Approximately 20 different cuts of the data were developed. During the load and query processes, these queries are executed to refresh the summaries from the detail transactions. Because the summaries are always regenerated from the detail, no sophisticated update processes had to be developed, and they always contain the same consistent answer.

Users access the company's Intranet site and select which summary to view. The Geneva ERS Java applet allows users to drill down within the report to lower and lower levels of detail. Because of the unique structure of the Geneva output file, data access is very efficient. This web site is accessible by over 300 business users.

The Result

Most Geneva Views are created and maintained within the Geneva ViewBuilder software. This interface stores table structures, field types, relationships between structures, and the business logic to process the data. Geneva trainers trained over 100 company employees on site on the use of the ViewBuilder, covering everything from basic view construction to use of the Geneva API to key data warehousing concepts.

With PwC assistance the company implemented the first two ODSs. They have now moved on to developing additional warehouses and multiple data marts on their own. The result has been the ability to replace certain legacy systems with a much more flexible architecture. The company has made major strides in meeting its objectives of "…a single, consistent source of financially balanced data to be used by multiple business users for decision support, data mining, and internal/external reporting." They have been able to create business intelligence in an intelligent way.

State of Alaska Benefits from Data Warehouse High-Performance Solution: 1996

The following brochure described the Geneva solution for the State of Alaska. The origins of Geneva started in the early 1980s with a custom-developed financial system for Alaska. Around 1994 a newer version of the tool (version 3.0) was installed to enhance the reporting processes.

More about this time can be read in Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 18. Input/Output.

A Summary of Event-Driven System Concepts: 1995

by Richard K. Roth, Principal, Price Waterhouse LLP

Eric L. Denna, Warnick/Deloitte & Touche Faculty Fellow, Brigham Young University, Marriott School of Management

[More about this time can be read in Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 15. Operational Versus Informational.]

Introduction

Through employing events-based concepts in systems architecture, organizations can dramatically increase the value and role their central systems play in accomplishing the business mission. Events-based architectural concepts result in system designs that increase the timeliness and quality of information, as well as expand the scope of business functions addressed by a given set of computer programs. Compared to traditional designs, events-based designs result in systems that are better, faster, and cheaper — from both development and operational standpoints.

The concept of event-driven (or events-based) systems started with taking a broader view of the proper role and scope for accounting systems. So we have used the accounting system problem as the basis for our discussion here. However, the same broader view applied to other application areas leads to the same general conclusions. 

Events-based concepts discard the arbitrary limitations that historically have been used to narrowly define accounting. Under an events-based concept, owners of an accounting system become anyone in the organization with a need for information about events captured by the system, not just the accountants. While the accountants continue to be important users of the system, the financial statements and other reports accountants need are viewed as simply another byproduct of capturing and summarizing all the events of importance to the organization. Accountants use what they need to prepare balance sheets and cash flow analyses. Managers use some of the accounting data along with statistics about claims paid or policies issued to determine the number and type of adjusters and agents that should be assigned to various risks or locations.

Moving off the old accounting paradigm and its artifacts of invoices, journal vouchers, general and subsidiary ledger files, etc., to an events-based approach has the effect of simplifying the system architecture problem for accountants, other users, and system builders alike. In the traditional accounting system architecture, invoice and disbursement transactions are recorded in a payables subsystem, billing and cash receipt transactions are posted in a receivables subsystem, inventory receipts and issues are recorded in an inventory subsystem, and everything is posted twice in the general ledger subsystem. Special functions often have to be developed to record cash receipts that represent previous overpayments of invoices, or disbursements that represent refunds of credit balances in customer accounts. The effect of this traditional accounting system architecture is to create the multiple-system problems of redundant entry and reconciliation within what appears to be the same system. This is why traditional accounting systems usually seem more complex than a general understanding of the accounting problem would suggest they should be.

By viewing accounting events in the same context as all events, a different architecture is suggested, because it becomes impractical to build a new subsystem each time some new event needs to be captured. Information about vehicles inspected, miles of road striped, licenses issued, inventory counted, invoices received, invoices paid, bills issued, and cash deposited takes on a similar appearance when viewed in the broader context. An events-based architecture allows events to be summarized, counted, and compared at various levels of detail and for various time frames according to the needs of the particular user concerned. Trends in product sales per square foot of floor space are of interest to assortment planners in retail operations. Payroll costs divided by miles of road striped by month would be of interest to a highway maintenance engineer. Warranty claims by product and failure type are of interest to manufacturers. Policy claims by risk with policyholder overlays would be of interest to risk managers and accountants alike.

As these examples illustrate, we can still get the accounting information required for financial accounting purposes from an events-based architecture by comparing various classes of events without having to incur the overhead of developing, operating, maintaining, and (probably most of all) understanding separate subsystems just to keep track of these subtotals for us.
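
The point can be made concrete with a toy example: a single store of business events can answer both the accountant's question and the operational manager's question, with no separate subsystem or subtotal file for either. The event attributes below are invented for illustration, echoing the road-striping and license examples above.

```python
# Toy example: one store of business events serves both accounting and
# operational questions; no separate subsystems or subtotal files are kept.
events = [  # illustrative event records
    {"type": "road_striping",  "month": "1995-06", "miles": 42, "payroll_cost": 8400.0},
    {"type": "road_striping",  "month": "1995-07", "miles": 55, "payroll_cost": 9900.0},
    {"type": "license_issued", "month": "1995-06", "fee": 35.0},
]

# Accountant's view: total revenue from license fees (a financial subtotal).
fee_revenue = sum(e.get("fee", 0.0) for e in events if e["type"] == "license_issued")

# Engineer's view: payroll cost per mile of road striped, by month (an operational measure).
cost_per_mile = {e["month"]: e["payroll_cost"] / e["miles"]
                 for e in events if e["type"] == "road_striping"}

print(fee_revenue)      # same events, accounting answer
print(cost_per_mile)    # same events, operational answer
```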

Differences Illustrated

Exhibits I-I through I-IV have been included to contrast the differences between traditional and events-based system architectures.

Exhibit I-I

Exhibit I-I shows the basic architecture that has been used in accounting systems automation for the last 20 or so years.

Accounting events (which are a subset of all events a business would like to plan for, evaluate, or control) are captured by a set of computer programs collectively called a transaction processor. The transaction processor takes a subset of the information that was present on the accounting events as they were originally entered and posts the transactions to one or more summary files. The full detail is then posted to history files, which typically are archived or otherwise made relatively inaccessible. Report programs specific to the financial purpose being addressed are developed based on the specifics of the information that was posted to the summary files by the transaction processor.

To the extent that the purposes being addressed have been fully provided for in the summary files and report programs, information can be obtained from the system. However, to the extent that detail information is required that was lost during the summary posting process, or was not captured at all because it was considered outside the scope of what accounting events properly should include, these requirements cannot be accommodated by the system and must be addressed by a subsidiary system specially designed for that purpose.

Exhibit I-II

Exhibit I-II shows the extension of this result to the primary purposes that traditionally are included as part of the financial function. Financial reporting functions are provided for in a general ledger subsystem. Inventory control and reporting functions are provided for in an inventory subsystem. Fixed assets are a different type of inventory and are provided for separately again… and so on.

Each subsystem characteristically is designed to accept and process the specific events that are the appropriate subject matter for its corresponding area of the business. Event details are posted to inaccessible history files, and specially designed summary files become the basis for narrow-focus reporting requirements. Visibility to detail events captured and lost in the history files of one system becomes further removed in the context of the multiple subsystems of what we refer to as the Financial Subsystems Paradigm. Each subsystem has its own data structures designed to support its particular purposes and business rules for transaction processing and reporting. Maintaining and understanding these subsystems and their interrelationships becomes an area of specialization in its own right. Systematically ensuring that the various subsystems are in balance is a labor-intensive function that consumes much of the energy invested in accounting activities. The numerous inter-system relationships make flexibility practically unattainable.

Exhibit I-III

Exhibit I-III illustrates the real impact of inside-the-box thinking relative to the Financial Subsystems Paradigm. The mainstream events that make or break business missions often have financial effects, but there also are other information requirements that are above and beyond what accountants need to accomplish their mission. By defining accounting systems narrowly, the Financial Subsystems Paradigm has left most mainstream business requirements to be addressed by still separate subsystems that capture most of the same events needed by the accountants, but with more detailed information attached.

The separate subsystems also have the same basic architecture as traditional accounting systems, characterized by a transaction processor that archives detail and posts summary files that feed narrowly defined reporting processes. The result is that parallel systems are required to support accountants and mainstream business requirements, which in turn creates increasing layers of reconciliation, flexibility, system maintenance, and data visibility problems. The layers of complexity caused by traditional architecture, coupled with the Financial Subsystems Paradigm, have resulted in a virtually unworkable situation from an enterprise standpoint.

Traditional architecture was fit for purpose in the world of 20 years ago, when the economics of computing power forced designers to archive detail (much like manual accounting systems) and automated systems were few in number, limited in scope, and non-strategic in nature. A new paradigm is required as we attack integrated system requirements. The events-based architecture illustrated in Exhibit I-IV is a solution that is attractive both theoretically and practically.

Exhibit I-IV

Exhibit I-IV illustrates an architecture where the point of departure is a broad view of the business systems problem.

It views the proper subject matter for a system as all business events that an organization is interested in planning for, evaluating, or controlling. Rather than building separate transaction processors to capture, edit, and post payroll transactions, general ledger transactions, inventory transactions… and so on, one transaction processor can be designed to capture events according to their respective data structures and apply the respective business rules for editing, posting, and reporting in a generalized way. It explicitly recognizes that detail information should be accessible so that information requirements can be accommodated as they are identified over time. In addition, it recognizes that mainstream information requirements and financial information requirements should be supported through the same data capture, maintenance, and reporting facility.
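
A single generalized processor can be sketched as a dispatcher: each event carries its type, and the processor applies that type's editing and posting rules rather than routing the event to a dedicated subsystem. The rule definitions and account names below are invented for illustration.

```python
# Sketch of one generalized transaction processor: events of any type pass
# through the same capture/edit/post pipeline; only the per-type rules differ.
from collections import defaultdict

edit_rules = {   # per-event-type validation rules -- illustrative only
    "invoice":      lambda e: e["amount"] > 0 and "vendor" in e,
    "cash_receipt": lambda e: e["amount"] > 0 and "customer" in e,
}
posting_rules = {  # per-event-type posting: (debit account, credit account)
    "invoice":      ("expense", "accounts_payable"),
    "cash_receipt": ("cash", "accounts_receivable"),
}

balances = defaultdict(float)
event_store = []     # full detail is retained, not archived away

def process(event):
    if not edit_rules[event["type"]](event):      # generalized edit step
        raise ValueError(f"rejected: {event}")
    event_store.append(event)                     # detail stays accessible
    debit, credit = posting_rules[event["type"]]  # generalized posting step
    balances[debit] += event["amount"]
    balances[credit] -= event["amount"]

process({"type": "invoice", "vendor": "ACME", "amount": 120.0})
process({"type": "cash_receipt", "customer": "C-9", "amount": 75.0})
print(dict(balances), len(event_store))
```

Adding a new event type means adding rules, not building another subsystem, which is the consolidation argument the conclusion below draws out.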

Conclusion

The practical result of viewing systems problems in this broader architectural context is a consolidated design for the systems that get built.  A consolidated design means fewer programs to write, test, and maintain.  Fewer interrelationships must be defined, understood, explained, and reconciled.  The sensitivity to requirements defined at the outset is diminished substantially because views of the data captured are not limited by the summary data structures posted by the transaction processors. 

A consolidated architecture is better because of its flexibility in accommodating changing requirements.  A consolidated architecture is faster due to the reduced development life cycle emanating from the underlying simplicity of the concept.  It is also cheaper because redundant functions are eliminated and there are fewer objects to build and maintain in the application portfolio.

Original Images

These are images of the original paper.