The recent financial crisis has exposed the systemic problem that the world’s largest financial institutions cannot adequately account for and report on liquidity, positions, currency exposure, credit, market, and interest rate risk and product, customer and organizational performance. The CFO plays a critical role in correcting this problem by leveraging the financial data they already control, as well as leveraging scale to take out cost. But even industry insiders do not realize that financial institutions suffer a unique set of domain problems when it comes to financial reporting.
Current financial reporting systems are antiquated and inefficient. They were designed decades ago simply to track the flow of capital, revenue, and expenses at the company and department levels. The lack of transparency is evident in the rising cost of the finance function, with few benefits to show for the investment. Sarbanes-Oxley and other regulations have proven ineffective at getting at the root of the problem, and the regulations that emerge from the financial meltdown may well prove similarly ineffective. These pressures create diseconomies of scale which affect the largest institutions the most.
For the most part, existing systems deliver accurate results in summary, but increased transparency requires line of sight to the underlying causes of those results. Consider if your personal bank account statement or credit card bill only presented the change in balance between periods, but provided no list of transactions. When the statement is as expected, further detail may not be needed. But when the balance is in question, your first response is ‘why,’ and you immediately want to see the transaction detail. The same issues are at stake when managing the finances of the enterprise – with the associated cost and consequences considerably higher! A single instance of financial restatement has cost some organizations hundreds of millions of dollars to correct, not counting lost market valuation.
Currently 90% of the money supply in mature markets is represented by digital records of transactions, not hard currency. It’s no wonder that the volume of electronic finance records being kept has exploded compared to when these systems were first created. Yet our approach to these demands has not been to automate the process of keeping and accessing the details of the transactions. Almost all employees in today’s financial institutions are involved in capturing and coding financial details in some way, and a large number of non-finance employees are involved in the investigative process to locate the additional detail so often required. This manual intervention is incredibly inefficient and costly.
As we see all around us, computing capacities have increased by several orders of magnitude since these finance systems were designed. However, reporting processes have grown organically as a system of transaction summaries in order to continue to bridge multiple financial systems, but have lacked a single unified approach. This has meant that, for the most part, the business of financial reporting has not benefited from the increase in computing capacity available today.
A Smarter Planet is founded on financial markets that provide for greater transparency and comprehension of the financial reporting by bank and non-bank entities, allowing the markets to react to conditions in more informed, less reactionary ways. IBM has spent 25 years refining an approach to this for financial institutions. The IBM® Scalable Architecture for Financial Reporting™ (SAFR) system provides financial reporting that is built bottom-up from the individual business event transactions to provide the finest-grained views imaginable.
By harnessing today’s computing power and a straight-through processing approach, the details behind summary data can be made available in seconds rather than days or weeks. Providing nearly instant access to the highest quality financial data at any level of granularity will eliminate the duplicative reporting systems which tend to capture and produce summaries of the same business events for many stakeholders and reporting requirements.
More importantly, it will automate the hidden work of armies of people who are required to investigate details and attempt to explain results, or to reconcile the disparate results of these many reporting systems—a truly wasteful activity caused by the systems themselves. Keeping the details in a finance system that can serve these needs allows for increased control, quality, and integrity of audit efforts rather than dissipating them.
Some may question how much detail is the right level of detail. Others may suggest this is too radical a change to a mature, understood, and tested set of systems. IBM experience with some of the largest financial services companies suggests that building a finance system based on the requirement to instrument the most granular level of transaction detail immediately stems the tide of increasing costs, lowers a variety of risks, and can be a key driver in transforming the bank’s ability to become more agile. In time this approach begins to provide economies of scale for reporting.
SAFR is: (1) an information and reporting systems theory, (2) refined by 25 years of practical experience in creating solutions for a select group of the world’s largest businesses, (3) distilled into a distinctive method to unlock the information captured in business events, (4) through the use of powerful, scalable software for the largest organization’s needs, (5) in a configurable solution addressing today’s transparency demands.
Companies expend huge sums of money to capture business events in information systems. Business events are the stuff of all reporting processes. Yet executives report feeling like they are floating in rafts, crying “Data, data everywhere and no useful information.” Focusing reporting systems on exposing business event combinations can turn data into information.
Although analysis of business events holds the answers to business questions, those events aren’t to be trifled with, particularly for the largest organizations. Reporting processes—particularly financial reporting processes—accumulate millions and billions of business events. In fact, the balance sheet is an accumulation of all the financial business events from the beginning of the company! Such volumes mean unlocking the information embedded in business events requires fundamentally different approaches. The 25 years of experience building SAFR in services engagements has exposed, principle by principle, piece by piece, and layer by layer, the only viable way.
This experience has been captured in a method of finding and exposing business events within the context of the existing critical reporting processes. It uses today’s recognized financial data like a compass pointing north to constrain, inform, and guide identification of additional reporting details. It facilitates definition of the most important questions to be answered and configuration of repositories to provide those answers consistently. It also explains how to gradually turn on the system without endangering existing critical reporting processes.
The infrastructure software, a hard asset with hundreds of thousands of lines of source code and a feature set rivaling some of the best-known commercial software packages, is most often what is thought of when someone refers to SAFR.
The Scan Engine is the heart of SAFR, performing in minutes what other tools require hours and days to do. The Scan Engine is a parallel processing engine, generating IBM z/OS machine code. In one pass through a business event repository it creates many business event “views,” providing rich understanding. Through join processes, it categorizes business events orders of magnitude more efficiently than other tools. Built for business event analysis, it consistently achieves a throughput of a million records a minute. It is highly extensible to complex problems.
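The “one pass, many views” idea can be pictured with a small sketch: every view definition is applied to each business event during a single scan of the repository, so adding another view adds little additional I/O. The Python below is an illustration of the concept only, with invented data; SAFR itself generates z/OS machine code and resolves reference-data joins in the same pass.

```python
# Sketch of "one pass, many views": each view (a filter plus a summarization
# key, defined as data rather than code) is applied to every business event
# during a single scan of the event repository.
from collections import defaultdict

events = [
    {"account": "1010", "cost_center": "CC1", "currency": "USD", "amount": 125.00},
    {"account": "1010", "cost_center": "CC2", "currency": "USD", "amount": -40.00},
    {"account": "4010", "cost_center": "CC1", "currency": "EUR", "amount": 300.00},
]

views = {
    "balance_by_account": {"filter": lambda e: True,
                           "key": lambda e: (e["account"],)},
    "usd_by_cost_center": {"filter": lambda e: e["currency"] == "USD",
                           "key": lambda e: (e["cost_center"],)},
}

outputs = {name: defaultdict(float) for name in views}

for event in events:                     # single pass over the repository
    for name, view in views.items():     # every view sees every event
        if view["filter"](event):
            outputs[name][view["key"](event)] += event["amount"]

for name, rows in outputs.items():
    print(name, dict(rows))
```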
SAFR Views are defined in the SAFR Developer Workbench, through rule-based processes in the SAFR Analyst Workbench, or in custom-developed applications. The Scan Engine, executed as a scheduled process, scans the SAFR View and Metadata Repository, selecting the views to be resolved at that time.
The Indexed Engine, a new SAFR component, provides one-at-a-time View resolution through on-line access to Scan Engine and other outputs. It uses Scan Engine performance techniques. Report structure and layout are defined dynamically in the Analyst Workbench. The Indexed Engine creates reports in a fraction of the time required by other tools. Its unique capabilities allow for a movement-based data store, dramatically reducing the data volumes required both in processing and to fulfill report requests.
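A movement-based store can be pictured as holding only period deltas, with balances reconstructed on demand by accumulating movements up to the requested period. The sketch below uses invented account data to illustrate the idea; it is not the Indexed Engine’s actual storage format.

```python
# Sketch of a movement (delta) based store: only period movements are kept,
# and a balance as of any period is reconstructed by accumulating movements.
movements = {
    ("ACCT-1", "2009-01"): 1000.0,
    ("ACCT-1", "2009-02"): -250.0,
    ("ACCT-1", "2009-03"):   75.0,
}

def balance_as_of(account, period):
    """Accumulate all movements up to and including the requested period."""
    return sum(amount for (acct, per), amount in movements.items()
               if acct == account and per <= period)

print(balance_as_of("ACCT-1", "2009-02"))  # 750.0
print(balance_as_of("ACCT-1", "2009-03"))  # 825.0
```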
Upon entering Managed Insights, users select parameters to drill down to increasing levels of business events, and perform multidimensional analysis through the Viewpoint Interfaces. The Insight Viewer enables discovery of business event meaning in an iterative development mode.
The SAFR Infrastructure Software has been configured over 10 years for a number of clients to provide an incredibly scalable Financial Management Solution (FMS) for the largest financial services organizations.
The heart of FMS is the Arrangement Ledger (AL). An “arrangement” is a specific customer/contract relationship. The AL, a customer/contract sub-ledger, maintains millions of individual customer/contract level balance sheets and income statements. This incredibly rich operational reporting system supports, in summary, the nearly unbelievable swath of information provided by scores of legacy reporting systems, with the added benefit of being able to drill down to business event details when needed. Doing so allows reporting of high-quality financial numbers by customer, product, risk, counterparty, and other characteristics, all reconciled, audited, and controlled.
AL is fed daily business events, typically beginning with legacy general ledger entries and transitioning to detailed product system feeds over time. In the Accounting Rules Engine, the business events become complete journal entries at the customer-contract level, including the impact of time. Rules are under the control of Finance rather than embedded in programs in source systems, enabling Finance to react to changes in financial reporting standards, including International Financial Reporting Standards (IFRS).
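The point that posting rules are data maintained by Finance, rather than logic buried in source-system code, can be illustrated with a small sketch. The rule table, event, and account names below are invented for illustration and are not the actual Accounting Rules Engine configuration.

```python
# Sketch: posting rules held as data (maintained by Finance) turn product
# system business events into balanced journal entries at the
# customer-contract (arrangement) level. Rules and accounts are invented.
rules = {
    # event type -> (debit account, credit account)
    "interest_accrual": ("1700 Accrued Interest", "4100 Interest Income"),
    "fee_charged":      ("1200 Fees Receivable",  "4200 Fee Income"),
}

def to_journal_entries(event):
    debit_acct, credit_acct = rules[event["type"]]
    common = {"arrangement": event["arrangement"], "date": event["date"]}
    return [
        {**common, "account": debit_acct,  "dr": event["amount"], "cr": 0.0},
        {**common, "account": credit_acct, "dr": 0.0, "cr": event["amount"]},
    ]

event = {"type": "interest_accrual", "arrangement": "CUST42-LOAN7",
         "date": "2009-06-30", "amount": 310.25}
for line in to_journal_entries(event):
    print(line)
```

A change in accounting treatment, such as a new IFRS classification, then becomes an update to the rule table rather than a change to source-system programs.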
The business event journal entries are posted by the Arrangement Ledger on a daily basis, while it simultaneously generates additional point-in-time journal entries based upon existing balances, including those for multi-currency intercompany eliminations, GAAP reclassification, and year-end close processing. It accepts and properly posts back-dated entries to avoid stranded balances, and summarizes daily activity to pass to the General Ledger. The General Ledger provides another control point for the traditional accounting view of the data. The Arrangement Ledger detects and performs reclassifications, keeping the arrangement detail aligned with the summary General Ledger.
AL also accepts arrangement descriptive information with hundreds of additional attributes to describe each customer-contract, as well as counterparty or collateral descriptive attributes, enabling production of trial balances by a nearly unlimited set of attributes, not just the traditional accounting code block. Extract processes produce various summaries, perhaps ultimately numbering in the hundreds or thousands, to support information delivery for not only traditional accounting but also statutory, regulatory, management, and risk reporting. The SAFR one-pass multiple-view capability allows AL to load data, generate new business events, and create extracts all in one process, including loading the incredibly information-rich Financial Data Store.
Information Delivery includes multiple ways of accessing the Arrangement Ledger and Financial Data Store. The major window is through SAFR Managed Insights. This parameter-driven Java application provides thousands of different permutations of the data. It allows drill-down from summaries to lower and lower levels of data without impacting on-line response time. It allows dynamic creation of new reports and multi-dimensional analysis of Financial Data Store data. Extract facilities provide the ability to feed other applications with rules maintained by finance. Other reports provide automated reconciliation and audit trails.
FMS can be tailored to work within an existing environment, including working within the existing security and reference data frameworks. FMS can often be a sub-component of an ERP implementation.
This is a financial system architecture for the 21st century. This is the reporting system architecture for the 21st century. Finance transformation starts with finance systems transformation. Finance systems transformation starts with rejecting the legacy finance systems architecture that provides only summary results. It is transforming the financial systems—the original enterprise data warehouse—into a system capable of supporting today’s information demands.
Insurance Company Manages Very Large Data Warehouse
Three years ago, a Fortune 100 Insurance Company embarked upon a “Data Warehouse Program” to “build a single consistent common source of financially balanced data to be used by multiple business users for decision support, data mining, and internal/external reporting.” A major component of this information architecture is the construction of multiple Operational Data Stores, or ODSs. These ODSs may contain up to 10 years of event-level or detailed transaction history as the basis for populating other data warehousing applications. The company uses Geneva ERS to load and extract data from these data stores.
Data is loaded into these repositories nightly. In some cases Geneva ERS is used to extract data from operational source systems. “Source systems” vary from the PeopleSoft Journal Lines DB2 table to legacy flat files. The DB2 extract process reads 80 physical DB2 partitions in parallel, scanning over 80 million records in approximately 10 minutes.
This extracted data is edited using a combination of custom Geneva ERS processes and stand-alone custom programs. After the data is edited it is loaded into the repositories in parallel by Geneva ERS. The number of records loaded varies by repository, from 100,000 financial transactions (journal lines) to more than 1,000,000 policy transactions. In total, approximately 2 million transactions a day are added to the repositories.
In the same pass of the data that loads the repositories, Geneva ERS also produces multiple reports and extract files for use by business users and other application systems. One of the repositories is “backed up” by Geneva ERS at the same time. This daily execution of Geneva ERS reads over 450 million records in 23 different entities and 220 physical partitions. It writes over 900 million physical records to over 800 files. This process has run in 1 hour 3 minutes of wall clock time and 3 hours of CPU time.
Other executions of Geneva ERS extract data on a daily, weekly, monthly and annual basis. The output from one execution creates an executive information drill down file accessed via the company Intranet. This web site is accessible by over 300 business users.
The Geneva “Views” executed in all of the above processes are maintained within the Geneva ViewBuilder software. This includes record structures, field types, relationships between structures, and the actual queries themselves. Many queries are very similar to programs in their sophistication and contain complex business logic. Some have over 900 column definitions. These views also utilize custom code for accessing certain files and executing business logic requiring a programming language. Over 100 people have been trained on programming using Geneva ERS, and the company has had up to 10 testing environments at one time.
The most sophisticated use of Geneva ERS emulates a PeopleSoft financial cost allocation process. Custom programs were developed which generate over 6,000 Geneva ERS views based upon over 7,000 PeopleSoft rules. Geneva ERS executes these views to scan the financial repository, selecting records eligible for allocation. It then allocates these costs through four allocation layers, such as products and geographical units. At 1999 year-end, this process read over 50 million records, selecting nearly 3 million that were eligible for allocation. These 3 million records were exploded into over 290 million virtual allocation records, of which 186 million summarized records were written to physical files. The process runs in 7½ hours wall clock time and 28½ hours of CPU time.
Financial Services Company Simplifies Complex Processes
Many financial service organizations were early adopters of computer technology. They quickly constructed systems to automate numerous processes. Programs were often added to the fringe of existing processes to solve new problems or provide an additional report or file for analysis. The need to keep multiple types of books and fulfill regulatory reporting requirements added to the complexity of some of these systems. These systems grew before modularization, subsystem definition, or other computing concepts were developed. Furthermore, the number of transactions generated by these organizations always challenged computing capacity. This required creative and complex solutions. Over time these systems metamorphosed into closely intertwined, interconnected, and inflexible systems.
In 1996, a Fortune 100 Insurance Company determined their reporting system was so inflexible that it would become unsupportable in the near future. They decided “…to build a single, consistent source of financially balanced data to be used by multiple business users for decision support, data mining, and internal/external reporting.” After analyzing their information needs they determined that the best approach to satisfy the broadest number of users was to construct a comprehensive data warehouse environment, including extract transformation processes, Operational Data Stores (ODSs), reporting environments, and data marts. The company viewed the ODSs as the heart of the data warehousing effort. ODSs are designed to contain up to 10 years of detailed transactional history. By storing the transactional detail, the company can satisfy requests for individual transactions, summaries using any field from the transaction detail, or snapshots as of any point in time. This robust environment truly enables the “single, consistent source of financially balanced data.”
Although such a data store simplifies and satisfies all types of information needs, it also means tremendous data volumes. Managing such data volumes requires a creative, industrial strength solution. It also requires people who know how to make it happen. This company went looking for a tool and the people that were up to the challenge. They chose PricewaterhouseCoopers and Geneva ERS.
Geneva offers a robust data warehousing solution able to process tremendous amounts of data quickly. It also provides for innovative, flexible solutions that are easily supported by company employees. This company uses Geneva to do the following:
Source system Extract and Transform
ODS Load, Query, Interface production
Detailed Financial Allocation Process
Executive Information Delivery
Extract and Transform
Extracting from legacy systems can present a challenge to even the most robust tools, especially when source systems include complex data structures and high data volumes. Legacy systems often were designed to support monthly reporting cycles by accumulating and summarizing data before finally kicking out the hard copy report at the end of the month. Changing such systems to provide more recent information can make the difference between a successful information environment and simply a different format for the same old content. However, making sure the results are right can be a very expensive process.
PwC consultants used a new approach to attack this problem. For this client, they first created a program that understood these complex, unique legacy structures and could be called by the Geneva ERS open API. This program opened up difficult legacy systems to the power of Geneva. Geneva was used to extract data from multiple sources. The Geneva development environment allows definition of source system data structures. Once defined to Geneva, business logic can be applied. The business logic is stored as a Geneva “View.” The Views are organized by logical frequencies, such as daily, weekly, or on-request processes. Once the environment was created, they focused on understanding the data structures instead of tracing through the entire legacy process. They used an iterative prototyping approach to discover the source of all the data contained in legacy downstream files. They successfully proved that the system could be converted from a monthly to a daily system. They extracted the source system data and populated the ODSs. The iterative prototyping approach used by PwC consultants shaved months off the delivery cycle and hundreds of man-hours spent in legacy system research.
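The mechanism can be pictured as a “read exit”: a user-supplied routine that understands the unusual legacy record layout and hands normalized records to the engine one at a time, after which ordinary view logic applies. The interface and record layout below are hypothetical, written only to illustrate the idea; they are not the actual Geneva ERS open API.

```python
# Hypothetical illustration of a "read exit": a routine that decodes a
# fixed-length legacy record layout and yields normalized records for the
# query engine to process. NOT the actual Geneva ERS open API.
import struct

def legacy_read_exit(path):
    """Decode invented 20-byte records: 8-byte policy id, 4-byte amount in
    cents (big-endian integer), 8-byte effective date (YYYYMMDD).
    Assumes a well-formed file of fixed-length records."""
    with open(path, "rb") as f:
        while chunk := f.read(20):
            policy, cents, eff_date = struct.unpack(">8si8s", chunk)
            yield {"policy": policy.decode().strip(),
                   "amount": cents / 100.0,
                   "effective_date": eff_date.decode()}

def total_by_policy(records):
    """Once the read exit has normalized the records, view-style business
    logic (filtering, summarizing) can be applied without knowing the
    original legacy layout."""
    totals = {}
    for r in records:
        totals[r["policy"]] = totals.get(r["policy"], 0.0) + r["amount"]
    return totals
```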
The power of Geneva is evident in the types of source systems from which data is extracted. In addition to the complex legacy structure noted above, a DB2 extract process reads 80 physical DB2 partitions in parallel, scanning over 80 million records in approximately 10 minutes. All processing is completed in a fraction of the time it would take another data warehousing tool.
Load, Query and Interface Production
PwC set out to help create an environment that would allow many users with diverse information needs to pull the desired information from the repositories. They created data and processing models that allowed all queries to be resolved in a single pass of the transaction detail. These models minimize data storage requirements by eliminating duplicate data from transactions, while combining the data back together efficiently for report production.
In the same pass of the data that loads the repositories, Geneva ERS also produces multiple reports and extract files for use by business users and other application systems. Many output formats are possible, including reports, spreadsheets, and files. And because the ODSs contain transaction-level detail, users are able to choose what level of detail they wish to see. The data model also allows for use of Geneva’s date-effective join capabilities. This enables users to create reports as of any point in time. Summaries can be created using organizational structures from any point in time.
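A date-effective join resolves each transaction against the version of a reference record that was in force on the transaction date, which is what makes point-in-time reporting possible. The sketch below uses an invented organizational hierarchy to illustrate the idea.

```python
# Sketch of a date-effective join: pick the reference record version whose
# effective date covers the transaction date, so summaries can be rebuilt
# using the organizational structure from any point in time.
from bisect import bisect_right

# Department D7 moved from the East region to the Central region on 2000-01-01.
org_history = {
    "D7": [("1900-01-01", "East"), ("2000-01-01", "Central")],  # sorted by date
}

def region_as_of(dept, date):
    versions = org_history[dept]
    effective_dates = [d for d, _ in versions]
    return versions[bisect_right(effective_dates, date) - 1][1]

txn = {"dept": "D7", "date": "1999-11-15", "amount": 500.0}
print(region_as_of(txn["dept"], txn["date"]))   # East
print(region_as_of(txn["dept"], "2000-02-01"))  # Central
```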
The client chose to construct an ODS to support the ERP General Ledger being installed. However, the coding structure for the new ERP package differed significantly from the historical organization and account coding structure. The ODS supported all interface production, translating from old to new and from new to old. Global utilities were constructed that were called from the Geneva API. Because of Geneva’s ability to process the detailed transactions, all fields could be translated at the lowest level of detail. This enabled consistent answer set production for all interfaces.
The number of records loaded varies by repository, from 100,000 financial transactions (journal lines) to more than 1,000,000 policy transactions. In total, approximately 2 million transactions a day are added to the repositories. The single-pass architecture even produces backups using Geneva ERS in the same pass of the ODS. This daily execution of Geneva ERS reads over 450 million records in 23 different entities and 220 physical partitions. It writes over 900 million physical records to over 800 files. This process has run in 1 hour 3 minutes of wall clock time and 3 hours of CPU time.
Detailed Financial Allocation Process
The most sophisticated use of Geneva emulates an ERP financial cost allocation process. The insurance company recognized that with their volume of data, the ERP package could not handle the allocation process. They would have to change their business requirements or find another solution. They looked to PwC and Geneva to supply that solution. Client members and PwC consultants analyzed the allocation process and created custom programs which generate over 6,000 Geneva views based upon over 7,000 allocation rules. Geneva executes these views to scan the financial ODS, selecting records eligible for allocation. It then allocates these costs through four allocation layers, such as products and geographical units.
During the first year of implementation, the year-end allocation process read over 50 million records selecting nearly 3 million that were eligible for allocation. These 3 million records were exploded into 186 million allocation results. The process runs in 7½ hours wall clock time. The Geneva system produces these results 250 times faster than the ERP package. Because of this innovative Geneva solution, the business users were able to have their data represented exactly as they wanted.
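The allocation mechanics can be pictured as exploding each eligible cost record across the targets of each layer in turn, then summarizing the results. The layers, targets, and percentages below are invented; in the actual solution the explosion was driven by the thousands of generated Geneva views and ERP allocation rules described above.

```python
# Sketch of a layered cost allocation: each eligible record is "exploded"
# across the targets of every layer, then the results are summarized.
from collections import defaultdict

layers = [
    {"dimension": "product",   "targets": {"Crackers": 0.6, "Cookies": 0.4}},
    {"dimension": "geography", "targets": {"East": 0.7, "West": 0.3}},
]

def allocate(record):
    pieces = [record]
    for layer in layers:
        next_pieces = []
        for piece in pieces:
            for target, pct in layer["targets"].items():
                new_piece = dict(piece)
                new_piece[layer["dimension"]] = target
                new_piece["amount"] = piece["amount"] * pct
                next_pieces.append(new_piece)
        pieces = next_pieces
    return pieces  # one record explodes into 2 x 2 = 4 allocation results here

summary = defaultdict(float)
for piece in allocate({"cost_center": "CC9", "amount": 1000.0}):
    summary[(piece["product"], piece["geography"])] += piece["amount"]
print(dict(summary))   # the four pieces still sum to the original 1000.0
```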
Geneva ERS Executive Information System
Providing users with on-line access into repositories holding hundreds of millions of detailed records is a significant challenge. The PwC team developed an innovative approach to give users the access, but not compromise the performance of the ODS processes or require massive new processing capacity. The result was the Geneva ERS Executive Information System.
This system uses Geneva to produce summary “views” of the detailed transactions. These summaries were developed within Geneva and can summarize by any field in the ODS. Approximately 20 different cuts of the data were developed. During the load and query processes, these views are executed to refresh the summaries from the detail transactions. Because the summaries are always regenerated from the detail, no sophisticated update processes had to be developed, and the summaries always contain the same consistent answers.
Users access the company’s intranet site and select which summary to view. The Geneva ERS Java applet allows users to drill down within the report to lower and lower levels of detail. Because of the unique structure of the Geneva output file, data access is very efficient. This web site is accessible by over 300 business users.
Most Geneva Views are created and maintained within the Geneva ViewBuilder software. This interface stores table structures, field types, relationships between structures, and the business logic to process the data. Geneva trainers trained over 100 company employees on site on the use of the ViewBuilder, for everything from basic view construction to use of the Geneva API, to key data warehousing concepts.
With PwC assistance the company implemented the first two ODSs. They have now moved on to developing additional warehouses on their own and multiple data marts. The result has been the ability to replace certain legacy systems with a much more flexible architecture. The company has made major strides in meeting its objectives of “…a single, consistent source of financially balanced data to be used by multiple business users for decision support, data mining, and internal/external reporting.” They have been able to create business intelligence in an intelligent way.
Reasons for performance problems being encountered with operational reporting and data warehouse functions in medium- and high-volume SAP implementations are described. A generalized approach for estimating reporting workload early in an SAP project is provided along with throughput metrics that can be used to assess whether performance problems are unlikely, probable, or certain. The integrated ABAP/Geneva solution that was implemented to resolve the performance problems for Keebler is described, including comparative processing metrics for the alternative architectures.
This paper should be used in conjunction with the Data Warehouse Size Estimation (DWSE) model and instructions available for download from the IT Knowledge Net under the topic “Estimating Report Process Requirements.”
Experience inside and outside PW is demonstrating that the LIS, FIS, HIS, and EIS facilities of SAP are not adequate to support the full range of operational reporting or open information warehouse requirements for medium-to-large SAP implementations. Operational reporting and open information warehouse requirements in practice mean the ability to “slice and dice” the granular operational data captured by SAP applications for a variety of business purposes. Since our client base for SAP implementations consists of larger organizations, it is important for Price Waterhouse to identify up front when problems are likely to be encountered and be ready with solid recommendations on how requirements in this area should be accomplished.
“Data warehousing” frequently is assumed to be the solution for the functional void in reporting that needs to be filled. A problem with this terminology is that “data warehousing” has come to mean different things to different people. And in some quarters, lumping the problem of slicing and dicing granular SAP data under “data warehousing” has resulted in an inadvertent underscoping of the general reporting problem that needs to be solved.
Common concepts or topics that frame notions of data warehousing include: varying levels of data summarization, limitations on data content to the needs of particular business purposes (subject areas), time variant snapshots, focus on interactive usage and GUI presentation, performance measurement, limited scope pilot applications as a way to get started, and data warehousing software and hardware most widely discussed in the technology press. All of these things have their place under the right circumstances. But, approaching the problems being faced by big businesses with new SAP applications in these terms can be misleading with respect to the fundamental reporting problem that needs to be addressed.
Reporting Problem in General
Implementing a new SAP application means that the corresponding legacy system reporting framework being used to run the business is being replaced (presumably improved). Depending on the application, the old framework typically consists of hundreds to low single thousands of reports (on-line and hard copy). These reports were developed in response to business problems and ways of managing the enterprise that evolved over time. Certainly, a large portion of the legacy reports is obsolete, and a new concept and framework for reporting will consist of many fewer, more targeted reports. But, what is important here is that the mission critical reporting framework must be replaced as part of the SAP implementation. This is a general problem that should not be approached inadvertently with a set of limiting assumptions about what data warehousing is or is not.
Reporting problems come in varying sizes depending on the data basis and size of the reports in the mix. The data basis for a report means the number and size of data records that must be manipulated in order to produce the report. Report size means the number and length of the lines that constitute the final report output. By far the largest single factor in determining overall reporting load is data basis.
The most important aspect of data basis usually relates to the number and size of records that must be extracted then further manipulated to produce the report output. However, under certain circumstances, the number and size of records that must be scanned and evaluated as candidates for extract can be the primary determination of data basis. In cases where the extract data basis is small, but large sequential table scans must be executed in order to identify the extract records, input table size becomes the relevant data basis for projecting reporting load.
In general, the size of an overall reporting load can be determined by aggregating the data basis and report sizes across the number of reports and frequency with which they will be executed in a given period of time. By stratifying reporting requirements in categories of on-demand, daily, weekly, monthly, quarterly, and annual, an accurate picture of report processing load over time can be projected. It generally is a good practice to project detail reporting load separately from the summary reporting load.
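The aggregation described here is simple arithmetic: sum the data basis and report size across all reports, weighted by how often each runs in the period, and keep detail and summary reporting separate. A minimal sketch with invented figures:

```python
# Sketch of projecting a monthly reporting load: data basis and report size
# are aggregated across reports, weighted by execution frequency, with
# detail and summary reporting projected separately. Figures are invented.
RUNS_PER_MONTH = {"on-demand": 10, "daily": 22, "weekly": 4, "monthly": 1}

reports = [
    {"name": "open orders by rep",   "class": "detail",  "freq": "daily",
     "data_basis_mb": 0.5,  "report_mb": 0.1},
    {"name": "item sales vs budget", "class": "summary", "freq": "weekly",
     "data_basis_mb": 800,  "report_mb": 5},
    {"name": "period sales ranking", "class": "summary", "freq": "monthly",
     "data_basis_mb": 4500, "report_mb": 20},
]

load = {}
for r in reports:
    runs = RUNS_PER_MONTH[r["freq"]]
    totals = load.setdefault(r["class"], {"data_basis_mb": 0.0, "report_mb": 0.0})
    totals["data_basis_mb"] += r["data_basis_mb"] * runs
    totals["report_mb"]     += r["report_mb"] * runs

print(load)   # monthly detail load vs. summary load
```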
Until an attempt is made to project the reporting load in this way, it almost certainly will be underscoped. Fortunately, most reports are low data-basis operational reports, which are small in terms of both data basis and report size. As a class, these reporting requirements do not contribute greatly to overall reporting load and do not need to be evaluated in detail. However, high data-basis reports tend to be comparatively few in number but account for the bulk of the overall reporting load. Underscoping risk primarily is in the area of high data-basis operational reporting requirements.
Low data-basis operational reports tend to deal with business problems involving open orders or transactions for individual customers, or other small subsets of data constrained to a limited period of time. Very often the indices that support SAP transaction processing also support the access requirements of operational reporting. Consequently, limited table scans are required to access the few records in the data basis for these kinds of reports. In general, the majority of low data-basis operational reporting requirements can be satisfied with a data basis under a hundred records and most with under a thousand, making them good candidates for on-demand processing. Even a large number of people executing these kinds of reports frequently will not create an unmanageable reporting load.
High data-basis operational reports tend to include requirements for retrospective analyses of transaction processing over time. High data-basis operational reporting requirements come from all major aspects of a business whether they be marketing, sales, product development, financial, production, human resources, procurement or legal to name some major constituencies. And all of these constituencies are interested in the basic transactions captured and processed by the SAP modules for many of their analytical purposes.
Keebler High Data-Basis Reporting Problem
Although the number of high data-basis operational reports tends to be dramatically smaller than the number of low data-basis reports, the data basis and size for these reports tends to be dramatically larger. The experience at Keebler clearly bears this out: in the original analysis of high data-basis operational reporting requirements, only 45 sales analysis reports were identified, but the data basis for these reports turned out to be approximately 23 gigabytes per week on average (peak loads at end of period are much higher than the average indicates). Examples of these reports are included below:
FS Sales Force Ranking
FS Item Customer Pch: SIUs
FS Item Customer Pch: Cases
Food Service Ranking – ITEM
Food Service Ranking – Customer
Food Service Comparison CASES
Food Service Comparison $$$
Food Service Budget – PTD to HYTD
Item Sales to Budget (Div.)
Diversified Retail Sales – Final
SBU Item Sales for Period: (by Sales Org)
SBU Item Sales for Period: (SBU-Brand-Item) Corp Type
Item Sales to Budget (Store Door)
Budget Buster RANKED
Final Sales BT
Military Account Item
Account Item – Trend & YTD
Customer Purchase (Store Detail)
Gain & Loss, w/ Allow.
FS Cat. Grouping
As should be evident from the titles, these do not represent “nice to have” reporting requirements, but are blocking and tackling period-to-period comparisons of key volume statistics needed to run the company. The 23 gigabytes also does not include Keebler-specific requirements for data reorganization due to material, sales and other account hierarchy changes, legacy systems data integration, replacement of reporting currently outsourced to Synectics and substantial ad hoc requirements that are anticipated. Unless a detailed analysis of the high data-basis operational reporting requirements is done, it is easy to dismiss these kinds of basic reporting requirements as something that will be handled in background processes when time is available.
In order to understand whether a given reporting load could be handled in background when time is available, it is necessary to have some basic metrics about how fast this kind of processing can be done in a given environment. Assuming an HP T-500 8-Way world with EMC disk (the Keebler configuration), extract processing can be done at a rate of about 6 megabytes/minute (if table joins are required, calculations need to be adjusted to reflect processing at a rate of about 3,000 joins/minute). These numbers are consistent with information from SAP on performance in reporting and general Price Waterhouse experience with UNIX SMP machines running third-party relational data bases. In the Keebler case, this would mean approximately 65 hours per week of continuous processing just to complete the initial extract, assuming no table joins were required. Taking into account table joins would mean the basic extract processing would take about 260 hours of continuous single-threaded processing. Further, according to SAP, a general rule of thumb is that reports with a data basis of several hundred thousand records will require several hours to process (this is consistent with our Keebler observations). Considering that the 23 weekly Keebler gigabytes represent about 210 million extract records, something on the order of 2,000 processing hours per week would be a reasonable estimate (albeit a practical impossibility to implement) of the end-to-end processing load, given the Keebler environment.
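The arithmetic behind these estimates can be laid out explicitly. The sketch below reproduces the figures in the text from the stated rates; the rounding choices (binary megabytes per gigabyte, roughly 300,000 records and 3 hours per report for the SAP rule of thumb) are assumptions made here to show how the order-of-magnitude numbers fall out.

```python
# Reproducing the rough Keebler throughput arithmetic from the stated rates.
# These are planning estimates, not benchmarks.
weekly_data_basis_mb = 23 * 1024        # ~23 GB of extract data basis per week
extract_rate_mb_per_min = 6             # stated HP T-500 / EMC extract rate

extract_hours = weekly_data_basis_mb / extract_rate_mb_per_min / 60
print(f"initial extract: ~{extract_hours:.0f} hours/week")        # ~65 hours

extract_records = 210_000_000           # ~210 million extract records per week
# SAP rule of thumb: several hundred thousand records take several hours
# end to end; assume ~300,000 records and ~3 hours per report.
records_per_report, hours_per_report = 300_000, 3
end_to_end_hours = extract_records / records_per_report * hours_per_report
print(f"end-to-end load: ~{end_to_end_hours:,.0f} hours/week")    # ~2,000 hours
```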
Another consideration is that, according to SAP, report sizes in excess of 50 megabytes cannot be produced at all because of application server memory limitations (no Keebler attempt at this kind of reporting got far enough to verify this limitation). This would mean that reports would have to be split up (stacked), which in turn would increase the extract scanning load substantially.
* * *
To recap the Keebler experience, it was assumed at the outset that most reporting could be handled as a data warehouse problem through the SAP SIS and EIS facilities. Early experience with using these facilities highlighted the fundamental throughput problems and other alternatives were explored, including popular data warehouse tools. However, it was when the detailed analysis of the basic high data-basis operational reporting load was completed that the scope of the problem became apparent. No rationalization about data warehousing being limited to performance measurement or summary level numbers could eliminate the need to produce the reports or reduce the data basis required to generate them. Understanding the scope of the problem in terms of data basis was the key to focusing attention on architectural solutions that would work at scale.
Projecting Data Basis
Data basis is a function of:
Transaction volume;
Data structure complexity; and
Number of reports, level of detail, and content requirements.
Transaction volume. Transaction detail represents the richest source of information about financial and other events captured by a system. As part of transaction processing, information entered about individual events is augmented with data from master tables according to business rules that constitute the accounting model in place at that point in time. Depending on SAP module and configuration options implemented, detail transaction files (e.g., SD COPA CE1 table) are posted in addition to various summary balances in the SAP data model.
In principle, a cumulative inception-to-date transaction file plus the related effective-dated reference tables could be the source data for all types of operational reporting. Projecting data basis for this reporting architecture would be relatively easy: total gigabytes of transaction files that would have to be scanned for each report, taking into account the number of reference table joins that would have to be performed as part of processing, extended by the frequency for reports over a period of time, would be the data basis for this kind of reporting load.
Taking the Keebler SD case as an example, 120,000 1,608-byte CE1 transaction records per day comes out to about 4.5 gigabytes per month or 111 gigabytes for the two years of history. The 45 basic reports actually represent an average of about 5 alternate sequences per report, so taking into account the repetitive aspects of daily, weekly, and monthly cycles, there are about 700 elemental reports that have to be produced to accomplish the basic high data-basis monthly reporting requirement. Producing one set of all reports directly off the detail would mean a data basis of something like 2.8 terabytes, clearly an impractical load to process at a rate of several hours per gigabyte, or even at a rate of several gigabytes per hour.
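The volume figures follow directly from the record counts. In the sketch below, the posting-day count (about 24 per month) and the use of decimal gigabytes are assumptions made here so that the published figures fall out of the stated record size and daily volume.

```python
# Reproducing the CE1 volume arithmetic from the stated record counts.
# Assumed here (not stated in the text): ~24 posting days per month and
# decimal gigabytes (10**9 bytes).
records_per_day, record_bytes, posting_days = 120_000, 1_608, 24

gb_per_month = records_per_day * record_bytes * posting_days / 1e9
gb_two_years = gb_per_month * 24
print(f"~{gb_per_month:.1f} GB/month, ~{gb_two_years:.0f} GB over two years")
# -> ~4.6 GB/month, ~111 GB over two years

# ~700 elemental reports per month, each reading on the order of a month of
# detail, puts the monthly data basis in the low terabytes, the same order
# as the "something like 2.8 terabytes" figure above.
elemental_reports = 700
print(f"~{elemental_reports * gb_per_month / 1000:.1f} TB per monthly report set")
```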
For audit trail or archive reporting purposes, there is no way around facing up to the volumes associated with cumulative transaction files, unless these requirements simply are ignored. However, for most summary reporting purposes, effectively designed summary files can dramatically reduce the data basis required to accomplish these requirements. The degree to which data basis reductions can be realized through use of summary file schemes depends primarily on data structure complexity.
Data Structure Complexity. The simple physics of routinely reprocessing raw transaction data as the method for producing reports was the driver behind development of the manual accounting cycle. By determining the important summary totals required in advance, then classifying all transactions accordingly as they were journalized, detail could be archived period-by-period leaving a practical size data basis for report processing over time. In the manual world, this required maintaining a separate set of account structures and subsidiary ledgers for each summarization structure. General ledger accounting, cost accounting, revenue accounting, tax accounting, fixed asset accounting, inventory accounting, regulatory accounting, receivable accounting, payable accounting, and so on, all had needs for different sets of summary totals derived from various subsets and supersets of the same basic transactions. As automation of these reporting processes advanced through the ‘60s and ‘70s, the basic subsidiary ledger structure of the manual world was preserved in the subsystem architecture of computer-based applications.
As computers became faster at re-sequencing the same basic transaction set a variety of different ways, more granular account structures started to emerge in subsystem implementations. Product, organizational, and project dimensions started to be included in general ledger account structures by the early ‘80s simply because it became practical to extract, sort, and summarize a few hundred thousand to a million or so fully qualified summary accounts several different ways, something that was strictly impractical in a manual world. As a result, separate subsystems were no longer required for each business function that required a different roll-up of transaction detail. But, the determination of what could be consolidated and what had to be broken out in a separate subsystem continued to be arbitrated primarily based on the volume of fully qualified accounts implied by the dimensions included in the account classification structure.
SAP presents a highly integrated profile for transaction processing, which leaves the impression that highly integrated cross-dimensional views of the data captured also should be available. However, there are virtually an unlimited number of ways transaction-level data from SAP could be summarized taking into account all the possible cross-structural and time-delimited combinations. Fortunately, for a given implementation, patterns in the use of cross-structural combinations usually emerge that permit reporting data basis to be minimized by taking advantage of “locality” in the transactions with respect to account classification structures (i.e., maintaining summary files). Locality means that a large number of transactions create a much smaller set of combinations when summarized by a set of cross-structural dimensions. How substantial the locality effect is depends on transaction coding patterns, the cross-structural dimensions that are required given the patterns of report usage, and the number, breadth, and depth of the structures — data structure complexity.
The benefits of locality are usually substantial. But summary files are often assumed to be the “big” answer that makes the high data-basis operational reporting problem go away. The truth is that summary files quickly get larger than the detail files if a new summary file is created for each new use of the data that requires a different slice, which means that the summary files become a significant volume and maintenance problem unto themselves. In practice, summary files should be maintained to the point where the data basis reduction related to report production is greater than the data basis created by the need for summary file maintenance functions. In addition, summary files should be employed as sparingly as possible due to the substantial system administration requirements that attend their maintenance and use. In the Keebler case, only three (3) different cross-dimensional summary files maintained in multiple partitions by time-slice were required to satisfy these conditions, which is typical for data models of this type.
Modeling the degree of locality can be done fairly easily by using data from existing systems to calculate the collapse rate across structures. Three-to-six months of the two-to-four highest volume transaction types related to the structures in question usually is enough. Simply sorting the transaction files and calculating the average number of unique cross-structural key combinations by week, month and quarter will give a reasonably good picture of what kinds of collapse rates can be expected. The result usually is far less dramatic than is assumed in the absence of a detailed analysis.
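The collapse-rate measurement described here is straightforward to script: sort (or hash) the transactions by the candidate cross-structural key and count the unique combinations. The sketch below uses randomly generated transactions purely to illustrate the calculation; real collapse rates come from running it against actual transaction extracts.

```python
# Sketch of measuring "locality": how far a set of transactions collapses
# when summarized by a chosen combination of cross-structural dimensions.
import random

random.seed(0)
transactions = [
    {"customer": f"C{random.randint(1, 50)}",
     "product":  f"P{random.randint(1, 20)}",
     "org":      f"O{random.randint(1, 5)}"}
    for _ in range(100_000)
]

key_dims = ("customer", "product", "org")
unique_keys = {tuple(t[d] for d in key_dims) for t in transactions}

collapse_rate = len(transactions) / len(unique_keys)
print(f"{len(transactions):,} transactions -> {len(unique_keys):,} summary rows "
      f"(collapse rate ~{collapse_rate:.0f}:1)")
```

Repeating the count by week, month, and quarter shows how the collapse rate varies with the width of the time slice.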
Frequently, special reporting requirements will emerge that have not been (or cannot be) anticipated and are not supported by summary files. If the requirements relate to summary reports that will be produced with some frequency, the summary files can be modified and regenerated or new summary files can be created by reprocessing the detail transaction data. If the requirements relate to special requests with no pattern of usage, or if they are audit trail or archive requirements, or if requirements are that summary files reflect the current account structure hierarchies, there must be a way to process the transaction-level detail, or the requirements must be ignored. Said differently, unless all summary reporting requirements can be specified precisely at the outset, and audit trail, archive, or structure reorganization reporting is not a requirement, processing the transaction detail will be unavoidable. Summary files are not the “big” answer. They are just an important part of the solution.
Number of Reports, Level of Detail, and Content Requirements. Easily the biggest driver of data basis is the requirement to report at low levels of hierarchies in cross-structural combinations. Subsystem reporting requirements typically are limited to summarizing data one structure at a time. As a result, the number of fully qualified accounts that get created by even large volumes of transactions is limited to the number of accounts in the given structure. By avoiding the need to report by customer/product/organization, vendor/SKU/location, employee ID/general ledger account, and so on, subsystem architectures limit their reporting data basis exposure — and consequently their usefulness.
Reporting at low levels of cross-structural detail usually is important because that is the level at which day-to-day resource allocation decisions are made. Inventory replenishment, sales force management and compensation, promotional campaigns, and crew scheduling are examples of large dollar investments managed in atomized quantities of resources, location-by-location, vendor-by-vendor, days at a time. High-level performance measures that show out of whack inventory turnover rates or sales per employee do not help identify the out-of-stock problems that need to be fixed or the particular sales calls not getting made. The thousands of reports in legacy systems got created because figuring out what was going wrong required a specific cross-structural slice of the data to illuminate the drivers about which something could be done.
It is unlikely that thousands of reports are needed to run any business at a given point in time. But, over time, the number of problems that need to be solved certainly accumulates into the thousands. In the world of subsystem architectures, a new problem meant building a new reporting subsystem (which mostly never got retired). In the integrated SAP world, the underlying assumption is that whatever slice is needed will be available because the transactions were captured by an integrated process. But, as we have seen, availability of any slice means access to detail or maintenance of every summary total (a clearly impractical alternative).
Two basic approaches to defining reporting level of detail and content requirements can be taken:
Produce a pre-defined set of reports in case they are needed.
Provide a framework that can generate almost anything, but produce only what people ask for.
The first approach is easiest to manage from a project standpoint. But, limiting the set of reports to a practical number is difficult. And it results in an enormous amount of processing for generating numbers nobody looks at. Also, it is not what most clients have in mind when they embark on implementing a new integrated information system. The second approach is preferable, but more difficult to achieve. Something that takes both into account is required.
An exercise that can be done very early in the project that promotes achieving the right balance is as follows:
For the high-volume aspects of the modules being implemented, define the transaction-level data that constitutes the basis for operational reporting.
Define the lowest level of cross-structural reporting that will be supported.
Define the inter-period comparison (e.g., last year/this year, structure re-organizations) features that will be generally supported.
Define a reasonably inclusive set of cross-transaction accumulators (e.g., promoted/not promoted sales, gross sales, returns, statistic 1, statistic 2, etc.) that will be generally supported.
Define a basic set (20-40) of potentially high data-basis operational reports to be implemented that represent ranges in level of detail and content.
Design the cross-dimensional summary files that efficiently support producing those reports over a two-year time horizon.
Calculate the data basis for generating and maintaining the summary files and producing the reports.
(An Excel Data Warehouse Size Estimation (DWSE) model, with instructions, is available for download on the IT Knowledge Net to facilitate doing these data basis projections; see topic “Estimating Reporting Process Requirements.”)
The summary of results from the model for the current Keebler implementation is provided as follows:
(DWSE model summary table: for each report grouping the model totals Report Count, Table Joins – Total, Recs Read – Total, MB Read – Total, Records Extracted, and MB Extracted; the underlying figures are not reproduced here.)
Even though specific configuration details are not available early in the project, the drivers of data complexity, level of detail, and report content are sufficiently predictable based on existing business processes to yield very useful results. This exercise forces the scale issue to surface early so that informed decisions can be made about hardware and software platform. It illuminates the generalized nature of the reporting problem and promotes implementing a generalized solution rather than a one-program-one-report architecture.
Demonstrating a generalized solution helps with scope control because it is clear how new requirements can be accommodated as they evolve over time. It minimizes the tendency to define into the requirements everything the existing system does plus everything new that can be thought of. And it shows that so many possibilities exist that there is no hope of producing everything just in case somebody wants to use some of it.
Most importantly, this exercise makes it possible to make meaningful economic decisions about how much and what kind of reporting will be supported. The big drivers of cost in reporting are data basis and number of reports. By creating this model early in the project that can be exercised as new ideas are explored, expectations about reporting requirements and technology can be managed on a factual rather than emotional basis.
Components of a High Data-Basis Operational Reporting Solution
Given that SAP facilities alone are not adequate, something else is required. A basic assumption in the market is that some sort of data warehouse solution probably is the answer. In addition, the components of the architecture are widely assumed to be based around SMP or MPP hardware, a relational and/or multi-dimensional data base and some OLAP presentation-layer tools.
The problem is that the heavy data manipulation load associated with high data-basis operational reporting is not a process that works well on bus architecture machines using these kinds of access methods. Only by ignoring the high-volume aspects of reporting requirements can these kinds of solutions be expected to work in a cost-effective way.
Current SAP operational reporting/data warehouse strategies seem to fall into three major categories:
Use SAP summary level reporting features (SIS, LIS, EIS, etc.) and access the data directly from SAP data structures to support very low data-basis operational reporting requirements. General “slicing and dicing” of granular detail cannot be supported. Practical limits on summary table size are in the 10,000s of rows.
Extract SAP data and create a series of summary files in relational or multi-dimensional data bases for access by third-party data warehouse tools to support low-data basis operational reporting requirements. General “slicing and dicing” of granular detail cannot be supported. However, options for broader-based interactive access to pre-defined summary tables are available compared to native SAP facilities. Practical limits on summary table size are in the 100,000s of rows.
Extract SAP data and create a separate operational data store containing detail and summary files as required. Use Geneva or custom COBOL/Assembler programs for extraction and reduction processing on an MVS machine. Use third-party data warehouse tools or SAP/ABAP facilities for presentation-layer functions on MVS, UNIX, or other platforms as required.
In low data-basis situations, or where reporting requirements can be constrained, or where reporting requirements can be precisely defined at the outset and remain stable over time, a creative implementation of Strategies 1 or 2 may be adequate. However, for larger clients with even medium transaction volumes and moderately complicated data structures, including an MVS-based operational data store in the architecture (Strategy 3) is a very practical solution for dealing with the inherent data bandwidth problem. In addition, the MVS environment provides for managing and achieving high throughput for the multiple concurrent batch processes characteristic of operational reporting problems. Complex batch streams with contingent dependencies are not similarly well supported in the UNIX world.
The Strategy 3 architecture implemented to solve the Keebler problem successfully demonstrates the essential elements of a general solution:
In this solution, the heavy data manipulation processes and archive requirements were off-loaded from the SAP UNIX machine. This left the SAP R/3 environment free to be tuned for transaction processing, unencumbered by extensive summary file posting requirements and accumulating detail data volumes. High data-basis report generation takes place on an outsourced MVS machine, and answer sets are returned to the SAP/UNIX environment for access through an ABAP report viewer that appears to be a completely integrated part of the SAP application. It is an extremely cost-effective way to provide broad-based desktop access to high-volume operational data.
Typical CPU and wall times for end-to-end processing of the reporting load described above using the Geneva engine are as follows:
Frequency                   CPU Time   Wall Clock Time
Daily                       15 min     1 hour
Weekly                      150 min    3.5 hours
Period Summary File Build   300 min    8 hours
Period Reports              350 min    6 hours
(CPU/wall-clock ratios vary with tape processing requirements, the time of day the jobs are run, and whether the Geneva configuration is set up for single- or multi-threaded processing. The net result is cumulative processing time of approximately 26 CPU hours over the course of a month.)
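The stated monthly total reconciles with the table if one assumes roughly 20 business days and four weekly cycles per period; those two assumptions are ours, not part of the original figures.

    # Reconstruction of the ~26 CPU hour figure (20 business days and 4 weekly
    # cycles per month are assumptions, not part of the published numbers).
    daily_cpu      = 15 * 20    # 300 CPU minutes
    weekly_cpu     = 150 * 4    # 600 CPU minutes
    period_build   = 300        # summary file build, once per period
    period_reports = 350        # period reports, once per period
    total_minutes  = daily_cpu + weekly_cpu + period_build + period_reports
    print(total_minutes, round(total_minutes / 60, 1))   # 1550 minutes, ~25.8 hours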
A more general presentation for high-volume situations that accounts for the various tool sets that could be employed would look as follows:
Event-level data from SAP, legacy, and other operational systems would be extracted and transformed using various tool sets as appropriate under the circumstances. The transform process would primarily deal with data cleansing issues (i.e., gaps, inconsistencies, and errors). Data reduction (i.e., summarization level and content) would be left primarily to the ODS component managed by Geneva or custom programs. The primary advantage of this separation is that it provides a systematic framework for re-running reductions over time as new requirements emerge. Reductions can be in the form of end-user report files delivered directly through some presentation layer, as was done in the Keebler case with ABAP on the UNIX machine. Or they can be data sets for downstream processes, including replacements or updates to relational or multi-dimensional data bases, data mining applications that require pre-processed inputs, or interfaces to other systems.
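The heart of the ODS approach is that summary structures are disposable: they can be regenerated from retained event-level detail whenever requirements change. The Python sketch below illustrates the idea only; the field names and measures are invented, and the actual Geneva processing worked against flat files at far larger volumes.

    from collections import defaultdict

    def regenerate_summary(transactions, group_by, measures=("quantity", "net_sales")):
        """Re-reduce event-level detail into a new summary keyed on group_by fields."""
        summary = defaultdict(lambda: dict.fromkeys(measures, 0))
        for txn in transactions:
            key = tuple(txn[field] for field in group_by)
            for m in measures:
                summary[key][m] += txn[m]
        return dict(summary)

    detail = [
        {"customer": "C100", "item": "I-55", "week": 23, "division": "East",
         "quantity": 10, "net_sales": 125.0},
        {"customer": "C200", "item": "I-55", "week": 23, "division": "West",
         "quantity": 4, "net_sales": 50.0},
    ]
    # An existing summary by division, and a new one by customer/item/week,
    # both produced from the same retained detail.
    by_division       = regenerate_summary(detail, ("division",))
    by_cust_item_week = regenerate_summary(detail, ("customer", "item", "week"))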
Absent an ODS component in medium- and high-volume situations, requirements must be defined very precisely at the outset, must remain stable over time, and must be limited to what the assumed technology platform can practically accommodate. As early SAP implementation experience has shown, long-range requirements are difficult to define. And the volumes that can feasibly be accommodated are dramatically smaller than most imagine.
However, by incorporating an ODS on a high data-bandwidth platform for systematically reprocessing transaction-level data and maintaining/re-generating summary file structures, highly cost-effective and practical scale and flexibility can be designed into the architecture from the outset. SAP high data-basis operational reporting and data warehouse problems can be readily solved by using the right tools for the right job in medium- and high-volume situations.
 For example, a data basis of all invoices for the last year would be a large data basis when compared to only invoices for yesterday, which would constitute a relatively smaller data basis. Last year’s invoices or yesterday’s invoices could be summarized at a high organizational level such as division, which would result in a small report size, or either data basis could be summarized by customer, item and week, which would result in a relatively larger report size.
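To put rough numbers on the distinction, the comparison below uses the daily order-line volume cited later in this paper plus invented counts for divisions and business days; it is only meant to show how independently data basis and report size can move.

    # Data basis (rows that must be read) versus report size (rows produced).
    rows_per_day = 120_000                # order lines per day (cited volume)
    year_basis   = rows_per_day * 250     # ~250 business days assumed
    day_basis    = rows_per_day           # yesterday only
    print(f"data basis, last year:  {year_basis:,} rows")
    print(f"data basis, yesterday:  {day_basis:,} rows")
    print("report size, by division:            a dozen or so rows")
    print("report size, by customer/item/week:  potentially millions of rows")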
 Detail (audit trail) reporting is a class of analytical reports that fits this profile. The general problem is to enumerate the detail transactions that support summary numbers accumulated as part of transaction processing, or summary numbers generated on other operational and analytical reports. Unless an index is available that permits going directly to the small subset of transactions required, sequential table scans are needed to pick out the transactions that qualify for a given set of selection criteria. If the criteria require that table joins be executed as part of the processing, that additional load must be factored in as well.
While the extract data basis and consequent report sizes are small for the bulk of detail reporting requirements, the number of detail report requests can be substantial and the primary selection criteria can be based on any field in the data base, not just those with indices. The more robust the summary analytical reporting framework, the more demand there will be for detail reporting to substantiate the summary numbers. And the more likely it is that the robust summary numbers will be based on dimensions of the data base that do not have indices to facilitate selection processing.
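A small sketch of the audit-trail pattern follows. The point is that when the selection fields carry no index, every detail row must be examined even though the qualifying set is tiny; the record layout here is invented.

    detail_rows = [
        {"order": 1001, "division": "East", "week": 24, "net_sales": 125.0},
        {"order": 1002, "division": "West", "week": 24, "net_sales": 50.0},
    ]

    def audit_trail(rows, predicate):
        # Full sequential scan: every row is examined because the selection
        # fields are assumed to have no supporting index.
        return [row for row in rows if predicate(row)]

    # Substantiate the "East division, week 24" cell on a summary report.
    supporting = audit_trail(detail_rows,
                             lambda r: r["division"] == "East" and r["week"] == 24)
    print(supporting)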
 The nature of the processing necessary to produce reports also is an important factor. To the extent that the source data structures for a report are denormalized, or a report requires only a single table for input, data basis and size provide a good surrogate for describing reporting loads. To the extent that normalized structures are involved that force numerous table joins to be performed as part of extraction, calculation and formatting processes, projections of overall reporting load must be adjusted accordingly.
 The weekly data basis would have been over 20 times the 23 gigabytes if all processing had been done using transaction level data (basic transaction volume is 120,000 order lines per day @ 1,608 bytes per line). Based on the reporting requirements identified, summary file structures were defined that would reduce total data manipulation load necessary to produce the reports. As new reporting requirements emerged during development, summary files were repeatedly modified and regenerated from the transaction detail as necessary to support efficient production processing. It is expected that the summary file structures will continue to be modified and regenerated over time as new reporting requirements evolve.
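For scale, the raw detail behind those figures works out roughly as follows; the five-day production week is an assumption, and the twenty-times-plus multiple presumably reflects every report pass over accumulated history rather than just the week's new records.

    # Raw transaction volume per the cited figures (5-day week assumed).
    lines_per_day, bytes_per_line = 120_000, 1_608
    weekly_detail_gb = lines_per_day * bytes_per_line * 5 / 1e9
    print(f"{weekly_detail_gb:.2f} GB of new detail per week")   # ~0.96 GB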
 In some cases, obtaining detail transaction records from SAP is difficult for a variety of reasons. Since this is the richest source of information, as well as the basis for auditability of numbers generated by SAP, it is important that we resolve the open issues and develop consistent recommendations surrounding how all necessary transaction detail should be obtained from SAP, where it should be stored under varying volume situations, and how it should be accessed.
 Fortunately, audit trail reporting tends to involve large table scans but small extract sets, which means that processing subsequent to extraction is minimal.
 SAP highly integrates the interrelated aspects of transaction processing that traditionally have been disconnected by subsystem architectures. As transaction detail is captured in SAP, a very rich set of attributes about each business event is recorded. On a go-forward basis, business rules can be modified to accommodate new requirements or other changes over time. And SAP provides for posting a wide variety of summary records at transaction processing time to facilitate inquiry and operational reporting requirements. But the determination of what summary totals will be required for each aspect of managing the business remains part of the up-front configuration exercise. The ability to adapt to retrospective information requirements that were not foreseen up front, or new requirements that emerge over time, is very limited in the SAP world.
SAP will post only those summary totals that it is configured to post at the time a transaction is processed. No systematic facilities are provided for accessing or analyzing the rich detail that gets captured. There is no framework for mass regeneration of summary totals that are posted in error due to configuration problems. There is no general method for reorganizing data to reflect retrospective changes in account hierarchy relationships. SAP effectively addresses problems with transaction processing integration. But SAP does not effectively address integration problems related to high data-basis operational reporting.
In 1996 a major US cookie manufacturer installed SAP to manage all aspects of its manufacturing, HR and financial operations. The system captured data and provided adequate information for managing most operational aspects of the business. But SAP could not deliver the detailed sales analysis necessary to support the highly detailed, store-specific sales process. The company chose Geneva ERS to solve the problem in an integrated SAP environment.
The reporting requirements were substantial. The company required detailed information at the customer/line-item level over various time slices from daily to year-to-date, for over 60,000 customer stores and 2,000 items, selecting and summarizing by over 100 fields. The company sells over a million items per week, and it needed comparative information for each time period for at least the last two years. It also needed comparative data to be reorganized to reflect changes in organizational structure over time!
A Geneva ERS application was developed to deliver the full benefit of the SAP implementation, without burdening users with another tool. SAP’s Profitability Analysis module was configured to capture the required data. Programs were written to extract the sales items from SAP on a nightly basis. A series of Geneva ERS processes were developed to manage the data warehouse, including creation of various summary file structures. The completed data warehouse contains over 250 gigabytes of data in 15 entities, containing over 300 million records.
Geneva ERS was configured to produce executive information files, commonly called “information cubes.” The company defined Geneva ERS “Views” to create over 150 cubes, and 75 interface files. Geneva’s effective date lookup processing allowed for recasting the historical detail to reflect the current management structures.
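The effective-date lookup idea is straightforward to illustrate: each store (or other entity) carries a dated history of assignments, and historical detail is recast against whichever assignment is in effect on a chosen date. The sketch below is hypothetical; Geneva implemented the equivalent against large flat-file lookup structures rather than in-memory Python dictionaries.

    import bisect

    def build_lookup(assignments):
        """assignments: {store: [(effective_date, region), ...]}, any order."""
        return {store: sorted(rows) for store, rows in assignments.items()}

    def region_as_of(lookup, store, as_of_date):
        rows = lookup[store]
        dates = [d for d, _ in rows]
        i = bisect.bisect_right(dates, as_of_date) - 1
        return rows[i][1] if i >= 0 else None

    lookup = build_lookup({"store-0001": [("1996-01-01", "Northeast"),
                                          ("1998-07-01", "Mid-Atlantic")]})
    # Recast 1997 history under the structure in force at the end of 1999.
    print(region_as_of(lookup, "store-0001", "1999-12-31"))   # Mid-Atlantic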
The SAP Executive Report Viewer was also developed. Constructed within the SAP environment and fully integrated with the Profitability Analysis reporting, this analysis tool gives users a single source of information. Through this tool, users select a time period and then the respective cube. Users can specify selection criteria and drill down into reports to lower and lower levels of detail. They can export to Excel, view the data in a graphical format, format the report, and save a profile of the report.
Users have commented that the sales reporting system has been one of the most successful components of the SAP implementation. It has allowed the company to capitalize on its national sales force, which makes personal contact with stores nearly three times each week. The sales representatives are armed with the information they need. They can analyze customer and product information for daily, weekly, monthly and year-to-date periods, and spot trends from over three years of data. The system helped fuel nearly tenfold growth in net income from 1996 to 1999, and has scaled as the company has purchased and integrated three additional companies.
by Richard K. Roth, Principal, Price Waterhouse LLP
Eric L. Denna, Ph.D., Warnick/Deloitte & Touche Faculty Fellow, Brigham Young University, Marriott School of Management
This is reprinted from the February 1996 issue of Manufacturing Systems (C) Chilton Publications, All Rights Reserved. [More about this time can be read in Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 16. Data Warehousing.]
The concept of Decision Support Systems (DSS) driven by data warehouses is a powerful one in our “data rich but information poor” world. In any given situation, a wide range of computer hardware and software technologies potentially could be all or part of a solution. To avoid solutions that are unnecessarily costly, or solutions that simply do not work, guiding principles for allocating functions to computing-platform technology are important.
The wide range of architectural issues that must be addressed makes it easy to focus on some areas to the exclusion of critical considerations. This article presents a framework for defining DSS/data-warehouse requirements, with corresponding principles for making computing platform assignments, that help avoid the pitfalls. Some recommendations for getting started without getting derailed also are provided.
The Framework and the Problem
Decision Support Systems (DSS) can be thought of as having four basic components:
Data Sources (operational systems)
Data Storage and Retrieval Engine (data warehouse)
Data Analysis
Presentation Layer
The presentation layer very often gets the most attention at the outset of a DSS discussion or project. It is the easiest to touch and feel and provides the easiest visualization of how problems could be posed and end results communicated. However, the majority of the business purposes for a DSS are addressed by the first three components. And the majority of the cost (85% or more in Bill Inmon’s experience) is incurred in storing and accessing the detail data that drive the DSS applications — component #2. An effective presentation layer is important, because DSS accessibility by business analysts without the routine intervention of technical personnel makes a DSS more useful for a broader range of problems. But, for problems big enough to be worth solving, even the most cumbersome presentation layer is better than no solution at all.
In an extremely simple case, one piece of software (e.g., spreadsheet) running on one hardware platform (e.g., PC) could serve the requirements for all four components. However, most real-world problems require an integrated solution consisting of several hardware and software technologies. Choosing the right technologies depends on the nature of the problems being addressed. And the problems vary along the lines of the four basic DSS components described above. It is important to consider the requirements for each component separately, because lumping them under a single heading makes it difficult to choose between alternatives.
Data Sources
DSS applications may use data from a variety of internal or external sources. In most cases, the primary and high-volume data will come from operational systems used to run the business on a day-to-day basis. The detailed data at the bottom of operational systems (captured at great cost and then archived, by design) are the primary raw materials for understanding the patterns of interactions between suppliers, employees, business processes, customers, markets, products, services, pricing decisions, and financing alternatives.
The architecture of operational systems makes them good sources for data to drive DSS applications, but the same architecture makes them terrible models for DSS components 2-4. Operational systems tend to be functionally oriented and optimized for “getting today’s work off the table.” Operational data tends to be very detailed, dealing with individual transactions or series of related transactions that have a limited life cycle from an operational standpoint. Management information system components of operational systems tend to support the limited-life-cycle clerical and management aspects of their respective narrow functional areas. But they do not support a “retrospective” subject-area or enterprise view of transaction processing over time, which is the general goal for DSS.
Data in operational systems is usually spread over multiple technology platforms (including paper files), and is maintained according to different update cycles and classification structures. Part of developing a DSS includes rationalizing or converting the operational data so that it is comparable across sources where different aspects of the same or related underlying transactions are being represented. Regardless of the technical platform or architecture for the DSS, the task of rationalizing data from various sources is the same, and it should not be underestimated.
It usually will be necessary to capture some new data to support the goals of a DSS initiative. It is important to remember that capturing new information is inherently an operational problem, not a DSS issue per se. As such, it should be treated as a problem for integration with existing operational systems, if practical or logical, or as a new operational system. System capabilities to fulfill new data requirements should be designed using an operational application architecture appropriate for the business processes from which the new information will be captured. A common mistake is to mix new data requirements that surface as part of defining a DSS environment with the requirements for components 2-4, which tends to bias (confuse) hardware and software platform choices.
Data Storage and Retrieval Engine (data warehouse)
Following Bill Inmon’s model for a data warehouse, data is maintained in four major classes:
Current detail
Old detail
Lightly summarized data
Highly summarized data
Current and old detail corresponds most closely to the detailed transaction-level data from the operational systems discussed above. Lightly summarized data amount to reductions of the detail appropriate for the general requirements of a particular department’s DSS applications. Highly summarized data amount to further reductions of the lightly summarized data for specific DSS modeling requirements. The primary difference between each of the four classes of data is VOLUME. And volume is the primary (although not only) factor that determines the appropriate technology platform for storing and retrieving data in a warehouse.
Classes 1 and 2 represent the highest data volumes and the richest source of information. Since the volume of detail data is the primary cost driver for data warehouses, the largest impact on the cost (or cost avoidance) of a DSS environment can be achieved by making the right technology platform choices in this area.
Current detail usually is thought of as being roughly two years of history and maintained on magnetic disk devices. Old detail is roughly assumed to be up to 10 years of history and maintained on tape or other low cost storage devices. Depending on the scope of the warehouse, nature of the business and size of the enterprise, volumes in the multi-million/multi-billion record range should be anticipated for annual additions to enterprise-wide detail data. Storage space required can range from hundreds of gigabytes to terabytes.
Lightly summarized and highly summarized data can vary greatly in degree of volume reduction from the corresponding detail. Some DSS applications require routine reprocessing of large subsets of the detail (customer scoring applications would be an example). Probably most applications, however, can be satisfied with a very small subset of the data summarized at one or two orders of magnitude of reduction (modeling financial performance for individual products or categories of products over time would be an example). Because of the potential volume reductions, the platform options open up substantially when dealing with lightly and highly summarized data.
For any large volume of data (large could be as small as a few hundred thousand rows or as large as a few billion rows), the nature of the processing required, in combination with the absolute number of rows/records, is the driver for platform choice. Three basic hardware architectures and two basic file organizations are available:
1. Hardware architectures
A. Mainframe — Offers very high data bandwidth (4.6-17 megabytes per second on each of up to 256 channels per machine) in a shared memory, parallel processing environment. Excellent architecture for situations where large amounts of data on disk or tape must be evaluated for selection or re-sequencing purposes. Excellent environment for situations where enterprise-wide access is important. Drawbacks are: 1) the relatively high cost of disk storage (presently about 4:1 — but coming down quickly — compared with server technology) and processing power (MIPS); and 2) the relatively thin market for “friendly” presentation layer tool choices. This environment is often referred to as “data rich but MIPS poor”.
B. Server — Offers cheap processing power (MIPS) and memory, but relatively low data bandwidth (0.1-1.25 megabytes per second with only one channel per machine) in a single processor environment. Excellent for situations where all data can be loaded into memory before processing (e.g., personal applications like spreadsheets, word processing and graphics) or where relatively small amounts of data must be accessed for selection or re-sequencing purposes. The cheap processing power and memory make it cost-effective to consume these resources in the long instruction path lengths necessary for implementing “friendly” graphical user interfaces. As a result, the market is relatively rich in “friendly” presentation layer tool choices. This environment is often referred to as “MIPS rich but data poor.”
C. Symmetric Multi-Processors (SMP) — Uses cheap server technology running in a parallel processing environment (currently 4-20 processors per machine) to scale up the power of server technology described above. Excellent environment for accessing large stores of data on magnetic disks through indices where instruction path lengths are long but the amount of data that must be moved from the disks to the processors for selection or re-sequencing purposes is relatively small.
2. File organizations
A. Sequential — Primary method for storing old detail in operational systems and warehouses. Also the primary file structure used for transferring data from various platforms to the data warehouse and between subsystems of a DSS environment. Depending on the configuration, a mainframe can read sequential files at a rate of 1 to 20 or more gigabytes per minute. Server technology can read sequential files at rates of .002-.1 gigabytes per minute, a difference of 1 to 4 orders of magnitude, depending on specific machine architectures and configurations (a worked comparison of these rates follows this list). Excellent method for storing data warehouse information where many data elements can serve as the primary selection criteria and no practical amount of indexing will materially reduce the need for sequential table scans if the data were stored in a relational file organization.
B. Relational — Primary method for storing operational data to support on-line transaction processing. Also, the preferred method for storing lightly, and especially, highly summarized data in a warehouse application where usage tends to result in one or a few data elements being the primary selection criteria. The primary advantage of this file organization is that relational data can be accessed through SQL, which is the primary method through which most user “friendly” tools communicate with data warehouses. Disadvantages are that random access to data is very slow — on the order of 6-12 megabytes per minute on mainframe or server technology. In addition, the instruction path length is very long for resolving SQL instructions, which consumes substantial processing power (MIPS). As a result of the long SQL instruction path lengths, data warehouse applications where relational organizations are appropriate tend to be implemented on server or SMP technology where cheap MIPS are available.
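A worked comparison of the rates quoted above makes the platform implications concrete. The 250-gigabyte store and the specific rates chosen within each quoted range are illustrative only.

    # Minutes required to pass a 250 GB store at representative rates.
    store_gb = 250
    rates_gb_per_min = {
        "mainframe, sequential": 10.0,    # within the 1-20+ GB/min range
        "server, sequential":     0.05,   # within the .002-.1 GB/min range
        "relational, random":     0.009,  # ~9 MB/min, within the 6-12 MB/min range
    }
    for method, rate in rates_gb_per_min.items():
        print(f"{method:24s} {store_gb / rate:10,.0f} minutes")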
In summary, platform choice for the storage and retrieval engine depends on how much data there is and what will be done with it. Current and old detail tends to be high in volume and used to feed specialized departmental and individual warehouses. Detail data often will be accessed sequentially, because many elements can be used as the primary selection criteria. Therefore, mainframes tend to be good platforms for the detail portions of data warehouses. Departmental and individual warehouses tend to be lower-volume reduced answer sets of the detail data. These warehouses often are created to support a particular focus of analysis and, consequently, tend to be good candidates for relational storage and access through indices. Consequently, server/SMP technologies can be good platforms for the departmental and individual portions of data warehouses.
Data Analysis
Once a particular DSS problem is formulated, one or more analytical models may be appropriate. The most common DSS problem requires selecting an answer set from the warehouse that meets specified criteria. Then, according to other criteria, the preliminary answer set is augmented, transformed, re-sequenced, summarized/counted, formatted and displayed as the result. The result is usually in the form of a cross tabulation, or often just a single line. Basically, this kind of analytical framework is known as data reduction or, more simply, reporting. The storage and retrieval component of the data warehouse always has a reporting framework of some competency as an integral part of its functionality. However, the requirements of any particular DSS problem may require data reduction/reporting functions not available directly from the storage and retrieval engine.
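Expressed as code, the data reduction framework amounts to a select, augment, re-sequence, summarize and format pipeline. The sketch below is a toy version with invented fields, not a description of any particular engine.

    def reduce_and_report(rows, select, augment, sort_key, group_key, measure):
        """Select, augment, re-sequence and summarize detail rows into a small answer set."""
        answer = [augment(row) for row in rows if select(row)]
        answer.sort(key=sort_key)
        report = {}
        for row in answer:
            report[group_key(row)] = report.get(group_key(row), 0) + row[measure]
        return report

    rows = [{"region": "East", "units": 10, "price": 12.5},
            {"region": "West", "units":  3, "price":  9.0}]
    print(reduce_and_report(
        rows,
        select=lambda r: r["units"] > 0,
        augment=lambda r: {**r, "revenue": r["units"] * r["price"]},
        sort_key=lambda r: r["region"],
        group_key=lambda r: r["region"],
        measure="revenue"))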
Separate reporting and data analysis facilities may be necessary to accomplish particular objectives. More advanced data analysis requirements would include statistical and econometric models that employ regression, factor analysis or other trend, pattern recognition and exception analysis techniques. For small amounts of numerical data, predictive models can be implemented using work-station-based neural network technology. For large amounts of data with complicated business rules, the mathematics of fractal geometry can be employed to deal with data compression issues to facilitate storage and processing, usually on a mainframe. Using these kinds of techniques almost always requires transferring an answer set provided by the storage and retrieval engine’s reporting mechanism to a separate software package (which may be on a different platform).
It is tempting to bundle the storage and retrieval engine with the data analysis component if DSS is viewed as primarily a reporting problem. However, thinking about the data analysis problem in a broader context clarifies the difference between the two components and demonstrates the importance of separately defining DSS requirements for each area.
Presentation Layer
The presentation layer is the mechanism through which a DSS user supplies the criteria, parameters and instructions that the DSS components use in their processing. Also, the presentation layer is the delivery mechanism for results produced by the DSS programs.
The image that comes to mind of a DSS presentation layer includes a powerful workstation configured with a high resolution color monitor running a graphical user interface (GUI) displaying charts and numbers that can be traversed using point-and-click drill-down features. Ideally, the presentation layer would make use easy for non-technical people and would not require that they have a detailed understanding of the underlying data in the warehouse. The user interface would be intuitive so that learning and use without instruction manuals is facilitated. Further, it would be good if it provided fully integrated and seamlessly automated control over all DSS components, from access to data captured by operational systems through delivery of query results. Complete user control over DSS facilities from the presentation layer would mean that technical MIS personnel would not need to participate in operation of the system.
Notions of the “ideal” notwithstanding, it is important to remember that the result of a DSS query may be one or a few numbers. It may be a file used as automated input to some other application. Frequently, the output will be a lot of numbers that are more useful if printed in rows and columns on paper. Where non-technical users are concerned, they are more resourceful than they generally are given credit for. People with real problems to solve can deal with an interface more complicated than just anyone could use.
Reality is that very few things are totally integrated and automated — the current state of the art in DSS presentation layers in particular and information technology in general certainly are good examples. One reason is that increasing levels of integration and automation lead to increasing complexity and cost that simply is not worth it in many cases. Another reason is that DSS presentation layer technology is relatively new. We are a long way from the day when users who do not know what to ask for can “just get what they want” through a holistic interface. And MIS people will be in the middle of non-trivial DSS applications because the hardware and software components that make up the systems will remain a complicated specialty for the foreseeable future.
The primary reason for building a DSS is to provide a mechanism for understanding the patterns buried in otherwise inaccessible data. The objective is not to use client/server technology, give analysts a GUI, use SQL everywhere or automate and integrate everything in sight. Operational systems are capturing huge amounts of information at great cost that is potentially valuable but not being used. DSS applications that exploit that information can be developed relatively quickly as long as tools available are applied with common sense in proportion to the nature of the problems being addressed.
Presentation layer characteristics are probably the most diverting technical distractions that stop DSS solutions from getting implemented, or even started. Presentation layer desires often lead to platform decisions that shoehorn DSS functionality into technology not fit for the purpose — the computer equivalent of building skyscrapers out of tinker toys.
Requirements for the presentation layer should be defined after addressing the first three DSS components described above. This minimizes the tendency to get distracted from the primary objective and the temptation to allocate functionality to sub-optimal platform technologies.
Most executives are inherently comfortable with the notion of a data warehouse and DSS. It seems logical that all the money spent on capturing detail transaction-level data ought to pay off for something more than filling orders, collecting receivables and getting the bills paid. The idea that there is something to be learned from a thoughtful analysis of all that history is a natural one.
The first hurdle usually comes when trying to cost-justify the first DSS application. The history of MIS in general is one of great expectations, high cost, frequent problems, late delivery and mediocre results. The comfortable idea of a DSS sometimes becomes an anxious thought once a specific proposal gets formulated.
Anxiety notwithstanding, many companies have found that this is an idea that actually works. It is almost certain that some fundamental assumptions about customers, suppliers, markets, etc. will not be supported by the historical record. DSS applications can be the catalyst for identifying and justifying fundamental changes in strategy or tactics: product services organizations have fundamentally changed their maintenance agreement packages and pricing; manufacturers have identified subtle causes of out-of-bounds product failure rates; direct mail operations have quit sending promotions to unresponsive customer segments; financial institutions have developed radically more lenient credit exception policies; and retailers have been able to validate (or not) their view of fashion trends. It usually proves worth doing if the development pitfalls are avoided.
To get over the first hurdle and avoid the pitfalls, we recommend the following:
1. Pick a problem that is worth solving for which virtually all of the data necessary to drive the data analysis is available from one or a small number of existing operational systems. Do not burden the DSS project with a non-trivial operational system re-engineering project.
2. Prove the value of doing retrospective data analysis for the problem on the table. The data analysis component is where the value is delivered. The other three DSS components represent infrastructure that makes it possible to roll out DSS on a larger scale. Do not worry about metadata for source systems or the warehouse. Do not worry about the long-run architecture for the enterprise-wide detail warehouse. Do not worry about presentation-layer standards. View this problem as an individual or small departmental data warehouse case. Get a hard-dollar cost/benefit success for DSS that will relieve some anxiety in downstream cost-justification proposals.
3. Define the requirements and allocate functionality to hardware and software platforms according to the principles outlined above for the four major DSS components. It probably will not be possible to satisfy the requirements on one PC and a spreadsheet, but keep the number of platform technologies to a minimum consistent with the principles.
4. Constrain the requirements by the existing hardware and software portfolio in place where possible. Do not take on new learning curves unnecessarily. Avoid getting distracted by technology at all costs.
The Computer Conference Analysis Newsletter, May 18, 1993 n319 p12(1)
Inmon, W.H., Building the Data Warehouse, New York: John Wiley & Sons, 1993, p. 33.
Roth, R.K., and Denna, E.L. (1994) “A Summary of Event-Driven System Concepts,” Price Waterhouse LLP, White Paper
Theoretically, disk I/O bandwidth could be 8-20 megabytes per second, depending upon bus architecture, however, achievable throughput is dramatically lower as indicated.
The following was the first GenevaERS brochure, produced perhaps near the end of 1994. More about this time can be read in Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 23. Development.