Consolidated Data Supply Chain: 2016

The following slides were developed while consulting on financial systems at 12 of the top 25 world banks from 2013 to 2016. Although not specific to GenevaERS, they delineate the causes of the problems in our financial systems, and the potential impact of GenevaERS-supported systems and the Event Based Concepts.

Additional details can be read in this white paper: A Proposed Approach to Minimum Costs for Financial System Data Maintenance, and further thinking on the topic in this blog and video presentation from 2020: Sharealedger.org: My Report to Dr. Bill McCarthy on REA Progress.

SAFR and A Smarter Planet for Financial Reporting: 2011

The following was written as part of a renewed sales initiative for SAFR, building upon the latest financial ledger projects.

This write-up, written for IBM sales efforts, was used as the summary of Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 2: The Problem and Chapter 3: The Solution.

The Problem

The recent financial crisis has exposed the systemic problem that the world’s largest financial institutions cannot adequately account for and report on liquidity, positions, and currency exposure; credit, market, and interest rate risk; and product, customer, and organizational performance.  The CFO plays a critical role in correcting this problem by leveraging the financial data they already control, as well as leveraging scale to take out cost.  But even industry insiders do not realize that financial institutions suffer a unique set of domain problems when it comes to financial reporting.

Current financial reporting systems are antiquated and very inefficient. They were designed decades ago simply to track the flow of capital, revenue, and expenses at the company and department levels.  The lack of transparency is evident in the increasing costs of the finance function, with few benefits to show for the investment.  Sarbanes-Oxley and other regulations have proven ineffective at getting at the root of the problem, and the regulations resulting from the financial meltdown may well prove similarly ineffective.  These pressures create diseconomies of scale which affect the largest institutions the most.

For the most part, existing systems deliver accurate results in summary, but the increase in transparency requires line of sight to the underlying causes of those results. Consider if your personal bank account statement or credit card bill presented only the change in balance between periods but provided no list of transactions. When the statement is as expected, further detail may not be needed. But when the balance is in question, your first response is “why,” and you immediately want to see the transaction detail.  The same issues are at stake when managing the finances of the enterprise, with the associated costs and consequences considerably higher!  A single financial restatement has cost some organizations hundreds of millions of dollars to correct, not counting lost market valuation.

Currently 90% of the money supply in mature markets is represented by digital records of transactions, not hard currency. It’s no wonder that the volume of electronic finance records being kept has exploded compared to when these systems were first created. Yet our approach to these demands has not been to automate the process of keeping and accessing the details of the transactions. Almost all employees in today’s financial institutions are involved in capturing and coding financial details in some way, and a large number of non-finance employees are involved in the investigative process to locate the additional detail so often required. This manual intervention is incredibly inefficient and costly.

As we see all around us, computing capacities have increased by several orders of magnitude since these finance systems were designed. However, reporting processes have grown organically as a system of transaction summaries in order to continue to bridge multiple financial systems – but have lacked a single unified approach. This has meant that for the most part the business of financial reporting has not benefited from the increase of computing capacities available today.

The Solution

A Smarter Planet is founded on financial markets that provide greater transparency and comprehension of the financial reporting by bank and non-bank entities, allowing the markets to react to conditions in more informed, less reactionary ways. IBM has spent 25 years refining an approach to this problem for financial institutions. The IBM® Scalable Architecture for Financial Reporting™ (SAFR) system provides financial reporting built bottom-up from the individual business event transactions to provide the finest-grained views imaginable.

By harnessing today’s computing power and a straight-through-processing approach, the details behind summary data can be made available in seconds rather than days or weeks. Providing nearly instant access to the highest quality financial data at any level of granularity will eliminate the duplicative reporting systems that tend to capture and produce summaries of the same business events for many stakeholders and reporting requirements.

More importantly, it will automate the hidden work of the armies of people required to investigate details and attempt to explain results, or to reconcile the disparate results of these many reporting systems—a truly wasteful activity caused by the systems themselves. Keeping the details in a finance system that can serve these needs allows for increased control, quality, and integrity of audit efforts rather than dissipating them.

Some may question how much detail is the right level of detail.  Others may suggest this is too radical a change to a mature, understood, and tested set of systems.  IBM’s experience with some of the largest financial services companies suggests that building a finance system based on the requirement to instrument the most granular level of transaction detail immediately stems the tide of increasing costs, lowers a variety of risks, and can be a key driver in transforming the bank’s ability to become more agile.  In time this approach begins to provide economies of scale for reporting.

SAFR is: (1) an information and reporting systems theory, (2) refined by 25 years of practical experience in creating solutions for a select group of the world’s largest businesses, (3) distilled into a distinctive method to unlock the information captured in business events, (4) through the use of powerful, scalable software for the largest organization’s needs, (5) in a configurable solution addressing today’s transparency demands.

The Theory

Companies expend huge sums of money to capture business events in information systems.  Business events are the stuff of all reporting processes.  Yet executives report feeling like they are floating in rafts, crying “Data, data everywhere and no useful information.”  Focusing reporting systems on exposing business event combinations can turn data into information.

The Experience

Although analysis of business events holds the answers to business questions, business events aren’t to be trifled with, particularly for the largest organizations.  Reporting processes—particularly financial reporting processes—accumulate millions and billions of business events.  In fact, the balance sheet is an accumulation of all the financial business events from the beginning of the company!  Such volumes mean unlocking the information embedded in business events requires fundamentally different approaches.  The 25 years of experience building SAFR in services engagements has exposed, principle by principle, piece by piece, and layer by layer, the only viable way.

The Method

This experience has been captured in a method of finding and exposing business events within the context of the existing critical reporting processes.  It uses today’s recognized financial data like a compass pointing north to constrain, inform, and guide identification of additional reporting details.  It facilitates definition of the most important questions to be answered, and configuration of repositories to provide those answers consistently.  It also explains how to gradually turn on the system without endangering existing critical reporting processes.

The Software

The infrastructure software, a hard asset with hundreds of thousands of lines of source code and a feature set rivaling some of the best-known commercial software packages, is most often what is thought of when someone refers to SAFR.

The Scan Engine is the heart of SAFR, performing in minutes what other tools require hours or days to do.  The Scan Engine is a parallel processing engine that generates IBM z/OS machine code.  In one pass through a business event repository it creates many business event “views,” providing rich understanding.  Through its join processes, it categorizes business events orders of magnitude more efficiently than other tools.  Built for business event analysis, it consistently achieves a throughput of a million records a minute.  It is highly extensible to complex problems.
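
The single-pass, multiple-view principle can be pictured with a brief sketch. This is illustrative Python only, not the engine itself (which generates z/OS machine code); the view names, record fields, and summing logic are invented for the example:

    # Sketch of one-pass, multiple-view processing: each record is read
    # once and offered to every view. Names and fields are illustrative.
    views = {
        "balance_by_product": {
            "filter": lambda ev: ev["status"] == "posted",
            "key":    lambda ev: (ev["product"], ev["account"]),
        },
        "activity_by_customer": {
            "filter": lambda ev: ev["amount"] != 0,
            "key":    lambda ev: (ev["customer"],),
        },
    }

    def scan(business_events):
        """One pass over the event repository resolves every view at once."""
        results = {name: {} for name in views}
        for ev in business_events:             # single pass over the repository
            for name, view in views.items():   # every view sees each record
                if view["filter"](ev):
                    key = view["key"](ev)
                    results[name][key] = results[name].get(key, 0) + ev["amount"]
        return results

In the real engine the per-view logic is compiled into machine code rather than interpreted as it is here, which is one reason a single pass can sustain a million records a minute.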

SAFR Views are defined in the SAFR Developer Workbench, by rule-based processes in the SAFR Analyst Workbench, or in custom-developed applications.  The Scan Engine, executed as a scheduled process, scans the SAFR View and Metadata Repository, selecting the views to be resolved at that time.

The Indexed Engine, a new SAFR component, provides one-at-a-time View resolution through on-line access to Scan Engine and other outputs.  It uses Scan Engine performance techniques.  Report structure and layout are dynamically defined in the Analyst Workbench.  The Indexed Engine creates reports in a fraction of the time required by other tools.  Its unique capabilities allow for a movement-based data store, dramatically reducing the data volumes required both in processing and to fulfill report requests.
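
The movement-based idea can be sketched simply: store only period deltas (“movements”) and derive any balance on demand. A minimal illustration, with the dates, account name, and amounts assumed:

    from datetime import date

    # Sketch of a movement-based store: keep deltas, not balances.
    # Account names, dates, and amounts are illustrative only.
    movements = [
        (date(2011, 1, 31), "acct-1", 100.0),
        (date(2011, 2, 28), "acct-1", -25.0),
        (date(2011, 3, 31), "acct-1", 40.0),
    ]

    def balance_as_of(account, as_of):
        """Reconstruct an as-of balance by summing movements to that date."""
        return sum(amt for d, acct, amt in movements
                   if acct == account and d <= as_of)

    print(balance_as_of("acct-1", date(2011, 2, 28)))  # 75.0

Because a period’s movements are far fewer than the balances of every arrangement, both storage and the data moved to satisfy a report shrink accordingly.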

Upon entering Managed Insights, users select parameters to drill down to increasing levels of business events, and perform multidimensional analysis through the Viewpoint Interfaces.  The Insight Viewer enables discovery of business event meaning in an iterative development mode.

The Solution

The SAFR Infrastructure Software has been configured over 10 years for a number of clients to provide an incredibly scalable Financial Management Solution (FMS) for the largest financial services organizations.

The heart of FMS is the Arrangement Ledger (AL).  An “arrangement” is a specific customer/contract relationship.  The AL, a customer/contract sub-ledger, maintains millions of individual customer/contract-level balance sheets and income statements.  This incredibly rich operational reporting system supports a nearly unbelievable swath of the information that scores of legacy reporting systems provided in summary, with the added benefit of being able to drill down to business event details if needed.  Doing so allows reporting high-quality financial numbers by customer, product, risk, counterparty, and other characteristics, all reconciled, audited, and controlled.

AL is fed daily business events, typically beginning with legacy general ledger entries and then transitioning to detailed product system feeds over time.  The business events become complete journal entries at the customer-contract level, including reflecting the impact of time in the Accounting Rules Engine.  Rules are under the control of Finance rather than embedded in programs in source systems, enabling Finance to react to changes in financial reporting standards, including International Financial Reporting Standards (IFRS).
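
The point that posting rules live in Finance-maintained data rather than in source-system code can be sketched as a simple rule lookup. The event types, account names, and fields below are hypothetical, not the actual Accounting Rules Engine:

    # Sketch: posting rules held as data owned by Finance, not as code
    # embedded in source systems. All names here are hypothetical.
    RULES = {
        # event type      -> (debit account, credit account)
        "interest_accrual": ("interest_receivable", "interest_income"),
        "fee_charged":      ("customer_receivable", "fee_income"),
    }

    def to_journal_entry(event):
        """Turn one business event into a balanced customer-contract entry."""
        debit, credit = RULES[event["type"]]
        return [
            {"arrangement": event["arrangement_id"], "account": debit,
             "amount": event["amount"]},
            {"arrangement": event["arrangement_id"], "account": credit,
             "amount": -event["amount"]},
        ]

Under this arrangement, reacting to a new reporting standard means updating the rules table, not modifying and re-testing every source system.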

The business event journal entries are posted by the Arrangement Ledger on a daily basis, while it simultaneously generates additional point-in-time journal entries based upon existing balances, including those for multi-currency intercompany eliminations, GAAP reclassification, and year-end close processing.  It accepts and properly posts back-dated entries to avoid stranded balances, and summarizes daily activity to pass to the General Ledger.  The General Ledger provides another control point for the traditional accounting view of the data.  The Arrangement Ledger detects and performs reclassification, keeping the arrangement detail aligned with the summary General Ledger.

AL also accepts arrangement descriptive information, with hundreds of additional attributes describing each customer-contract, along with counterparty or collateral descriptive attributes, enabling production of trial balances by a nearly unlimited set of attributes, not just the traditional accounting code block.  Extract processes produce various summaries, perhaps ultimately numbering in the hundreds or thousands, to support information delivery for not only traditional accounting but also statutory, regulatory, management, and risk reporting.  The SAFR one-pass, multiple-view capability allows AL to load data, generate new business events, and create extracts all in one process, including loading the incredibly information-rich Financial Data Store.

Information Delivery includes multiple ways of accessing the Arrangement Ledger and Financial Data Store.  The major window is through SAFR Managed Insights.  This parameter-driven Java application provides thousands of different permutations of the data.  It allows drill-down from summaries to lower and lower levels of data without impacting on-line response time.  It allows dynamic creation of new reports and multi-dimensional analysis of Financial Data Store data.  Extract facilities provide the ability to feed other applications with rules maintained by finance.  Other reports provide automated reconciliation and audit trails.

FMS can be tailored to work within an existing environment, including working within the existing security and reference data frameworks.  FMS can often be a sub-component of an ERP implementation.

Conclusion

This is a financial system architecture for the 21st century. This is the reporting system architecture for the 21st century. Finance transformation starts with finance systems transformation.  Finance systems transformation starts with rejecting the legacy finance systems architecture that provides only summary results.  It is transforming the financial systems—the original enterprise data warehouse—into a system capable of supporting today’s information demands.

__________

Copyright ©2010, 2011, 2015, 2018 by Kip M. Twitchell.

All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the author except for the use of brief quotations in a book review or scholarly journal. Posted by permission.

Global Universal Bank Case Study: 2010

The following presentation was the last major presentation given by Rick Roth, one of the founders of Geneva, before he retired.

More about this time can be found in Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 57. The General Ledger through Chapter 63. Go Live.

Financial Management Solution (FMS): 2009

The following presentation was built after a major successful project using Geneva/SAFR at a large, international bank, moving from tooling to a fully formed business configuration system.

Additional details about this time period are located in Balancing Act: A Practical Approach to Business Event Based Insights, in Chapter 64. Promotion.

Solution Competitive Analysis: 2005

The following deck gives a sense of the internal analysis of where Geneva played in the market circa 2005. This was developed after (1) multiple start-up initiatives to commercialize the asset, (2) decisions by IBM not to make it a program product, and (3) an attempt to divest the product to other vendors. So this analysis characterized many of the unique properties of the asset.

More about this time can be read in Balancing Act: A Practical Approach to Business Event Based Insights. Specifically, look at Chapter 56. Walkabout.

GenevaERS Introduction to IBM: 2003

On October 1st, 2002, IBM closed its acquisition of the PricewaterhouseCoopers Consulting division, and with it gained access to GenevaERS.

This deck was built to introduce the product to IBM, wondering if it would stay a consulting asset or if it would move to be part of the Software Group offerings.

This is a very comprehensive (you might read that as “long”) view of the product and service offering, from a decade of work by the dedicated team.

More about this time can be read in Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 54. Crisis.

Digineer Solution Blueprint: 2002

The following are two solution blueprints from an additional attempted start-up to license Geneva. In this instance, the platform was to be ported to the Intel Itanium processor, putting a very powerful data analysis tool right under the desk of the analyst.

The vision was far-sighted, anticipating what happened with cloud and data analytics. But it was ravaged by the technology collapse that followed the Internet bubble.

More about this time can be read in Balancing Act: A Practical Approach to Business Event Based Insights, Chapter 53. Spin-offs, and Chapter 1. Introduction.

INTELLIGENCE ENGINE FROM PwC CONSULTING AND DIGINEER™

Business Challenge:

Consolidating massive amounts of data from multiple sources to gain an integrated view

Technology Solution:

  • OI5 DataEngine™ from Digineer™
  • Enterprise Hardware Platform
  • Scalable Intel® Architecture servers

Advanced Transaction Analysis for Business and Government

Crucial data that can help businesses and government operate more efficiently often lies buried within an overwhelming mass of transaction details scattered among numerous computers. Extracting this data using most data mining solutions presents many difficulties, since these solutions lack the flexibility, scope, and sheer processing power necessary to convert uncorrelated data into useful information. The Intelligence Engine solution combines technology expertise from PricewaterhouseCoopers (PwC Consulting), Digineer™, and Intel to help solve this issue. The Intelligence Engine solution spans data repositories across the world, using query-processing techniques to generate data intelligence from large numbers of events and transactions.

FACING THE CHALLENGE

Without the ability to look deep into operational processes, large-scale businesses are hindered in their decision-making and organizational goals––essentially running blind. Where data exists, it often resides on disparate servers in a variety of formats, resisting easy access or useful analysis. Collecting, consolidating, and correlating this data requires tools that can be scaled for massive volumes of data, and adapted easily to expose a wide range of information. Challenges to this end include:

  • Huge volumes of transaction data––Many conventional data warehousing and data mining solutions collapse under the weight of millions of daily event-level transactions. A robust solution must marry intelligent query processing with sufficient processing power to meet high levels of activity.
  • Data scattered throughout many repositories––Large organizations frequently store data in a variety of formats in data repositories spread throughout the country––or the world. Finding the key data—the proverbial needle in a haystack—demands a solution scaled to the magnitude of the task.
  • Need to quickly spot trends and patterns––The accelerating pace of modern business makes it necessary to detect and respond to trends and patterns in days rather than weeks. Knowledge-discovery tools must be flexible and adaptable for this purpose.
  • Evolving data requirements––Creating data models and report formats can be a very time-consuming and expensive proposition, and updating them can also require excessive time and effort. Tools to reduce this design effort and the associated maintenance help ensure that changing data requirements can be accommodated in minimal time.

MEETING THE CHALLENGE

Being able to accurately detect emerging patterns and trends in daily operations is invaluable to any organization seeking to remain competitive under the pressures of rapid change. The Intelligence Engine solution incorporates Digineer™ OI5 DataEngine™ deployed on Intel® Itanium® processor family-based servers to provide fast and flexible analytical data handling across multiple systems. By providing consolidated views of dispersed data in a variety of formats, this solution lets large organizations make fully informed decisions within their industry by assimilating data from every group or department––regardless of platform. The OI5 DataEngine™ successfully bridges dissimilar data structures and geographically separated servers to turn raw data into useful knowledge, providing a robust solution to help guide leaders of large organizations.

To accelerate the process of collecting and consolidating data, Digineer’s toolset simplifies query-processing setup and report generation. PricewaterhouseCoopers has successfully deployed this solution in a wide range of industries, including retail, financial, healthcare, and government.

SOLUTION BENEFITS

The Intelligence Engine solution lets any large organization benefit from improved visibility into the heavy volume of daily transactions and events specific to their business––whether retail sales tallies per product for a nationwide distributor, the weather conditions compared to yields at a juice producer’s orange orchards, or quality control data tracked by a steel manufacturer looking for ways to improve processes.

Clients adopting this solution enjoy improved operations management through rapid analysis of transaction data. They also gain deeper insights into key processes within their organization that may have been inaccessible in the past because of the difficulty in collecting and correlating the relevant data.

FEATURES : BENEFITS

Supports multiple data formats: The OI5 Data Engine™ provides support for a wide range of data formats, allowing flexible handling of data stored in repositories across many different platforms.

Designed for faster, high-volume transaction processing: The Intelligence Engine solution combines the intelligent pre-processing capabilities of the OI5 DataEngine™ with the architecture benefits of the Intel® Itanium® processor family. The result is a solution that scales well for the extremely large volumes of data involved in transaction processing––producing rapid responses to complex data queries.

Adapts easily to changing data requirements: The solution includes tools that simplify query design and report generation, encouraging companies to create new models for data collection and analysis, while meeting the core data extraction requirements for the organization.

Advanced Business Intelligence and Analytics Solution

Retailers, financial institutions, healthcare companies, and government agencies are collecting more data than ever before. This includes sales data from stores, warehouses, e-Commerce outlets, and catalogs, as well as clickstream data from Web sites. In addition, there is data gathered from back office sources––merchandising, buying, accounting, marketing, order processing, and much more. To pull together information from all these different sources and integrate, process, and analyze it, companies require advanced business intelligence and analytical solutions powered by a robust, high-performance infrastructure.

The Intelligence Engine solution addresses this very need by combining technological expertise from PricewaterhouseCoopers (PwC Consulting), Digineer™, and Intel®. The query-processing capabilities of the Digineer™ OI5 DataEngine™, combined with the processing power of the Intel® Itanium® processor family, enable PwC Consulting to provide companies with the ability to quickly consolidate and integrate massive amounts of data from multiple sources––giving key decision makers a single view of vital business information.

Optimized to run on Intel® Itanium® processor family-based platforms, the Digineer™ OI5 DataEngine™ delivers a reliable, high-performance mechanism for processing and querying vast quantities of raw, event-level data through a software algorithm that employs intelligent pre-processing for improved efficiency. The Explicitly Parallel Instruction Computing (EPIC) technology and 64-bit addressing capabilities found in the Intel® Itanium® architecture deliver new levels of performance when applied to data mining and knowledge discovery operations. Powerful Intel® Itanium® processor family-based servers provide the massive computing power needed to run and consolidate memory-intensive databases across channels. In addition, integrating different processes and organizations is easier and more cost effective with a modular, open environment based on industry standards and Intel® Architecture.

THE BUSINESS CHALLENGE

Today’s competitive advantage will go to companies that make the right decisions at the right time. To succeed, these companies must be able to quickly consolidate and integrate massive amounts of data from multiple sources into a single view of their business that can increase marketing and operational efficiency.

Turning Data into Dollars, a May 2001 Forrester report, indicates that executives charged with setting the strategic direction of their organization understand the value of business intelligence and analytics, yet are faced with the following challenges:

  • Mountains of data create processing bottlenecks—The sheer volume of data collected–– often involving terabyte-scale databases––creates the potential for processing bottlenecks in those systems not designed to effectively cope with this quantity of information. The massive data quantities produce such a heavy processing load that many existing solutions rapidly reach throughput limitations. Without sufficient processing power and highly optimized analytical tools, organizations must often relinquish access to important subsets of their data, or must add expensive hardware and software to increase system performance.
  • Analyzing information from scores of data repositories presents a challenge— Organizations cannot easily access and analyze their operational information because of the difficulty in extracting large quantities of data from multiple data repositories spread throughout the organization. Existing solutions lack the flexibility and robustness for effective data access and extraction.
  • Inflexible models fail to support evolving data streams—Organizations spend significant amounts of development time creating data models, collecting data, defining reports, and constructing consolidated data warehouses. In highly dynamic environments, these systems may rapidly lose relevancy and require ongoing adjustments at the core software or hardware level. Updating these core data model components to correspond with evolving knowledge discovery requirements necessitates a significant investment in time and effort.
  • New analytical views must be developed quickly and on demand—Different departments and groups within an organization have unique needs for the information mined from available data, and these needs change frequently, often on an urgent basis. A practical solution must have the capability of easily providing new analytical views from the data to respond to immediate needs from diverse groups.
  • Extracting key information is inordinately difficult—The ability to drill down through individual transactions and extract the useful patterns and trends is critical to knowledge discovery and analysis. Most existing systems do not adequately provide this capability when there are large quantities of data stored on multiple systems, or in one large system where data access is cumbersome.

The retail environment faces several challenges that can take advantage of a business intelligence solution like Intelligence Engine. For example, in a typical retail environment, data gathered from stores, Web sites, and catalogs, as well as from suppliers and warehouses, provides valuable kernels of knowledge that can increase profitability. Consolidating and analyzing massive, memory-intensive databases requires a robust infrastructure. As a result, many retailers mine only a subset of their available information. To recognize buying patterns over time, massive volumes of sales transactions must be analyzed and correlated by sales channel, region, individual store, product, and customer preference. Within the typical retail environment, this data resides in a number of separate systems.

THE SOLUTION OVERVIEW

The Intel® Architecture-based Intelligence Engine solution provides companies with a fast, affordable, and flexible way to consolidate, integrate, and analyze massive, memory-intensive data across multiple systems.

The Digineer™ OI5 DataEngine™, operating on a platform powered by the Intel® Itanium® processor family, performs high-speed analytical processing of massive amounts of event-level data. The data engine can simultaneously write large numbers of data extracts to downstream data warehouses, online and offline reporting tools, and real-time analytics applications. The 64-bit addressing capabilities of the Intel® Itanium® architecture provide the robust computing power needed to run and consolidate memory-intensive databases across channels. The extraordinary performance of the Intel® Itanium® processor family, coupled with the EPIC features, provides an architecture fully capable of adding capacity as needed and delivering heightened performance.

Historically, retailers have addressed the scale and performance problem by creating large data stores using conventional technology. As the volume of data that must be captured and processed increases––often exponentially––processing the massive volumes of data creates a bottleneck and prevents timely access to the detailed information required to run the business. Once this critical point has been reached, retailers must choose between adding expensive hardware to boost performance, or sacrificing the breadth and depth of captured data.

The OI5 DataEngine™ offers an attractive alternative to the scale and performance problem. Retailers use the solution to analyze key retail metrics, such as customer profitability, recency, frequency, volume analytics, supplier performance, and enterprise financials. The OI5 DataEngine™ derives these analytics from a collection of customer relationship management (CRM), supply chain management (SCM), and point-of-sale (POS) databases–– even if the data is distributed among several different machines. The performance provided in these kinds of implementations costs much less than equivalent systems.

KEY FEATURES:

The Digineer™ OI5 DataEngine™ delivers:

  • Speed—Performs fast and efficient processing of massive volumes of transactional data, resulting in a consolidated data store that accurately represents the organization’s operational dynamics.
  • Scale—Integrates data from multiple, disparate silos without requiring any modifications or enhancements to the existing data sources.
  • Depth—Includes powerful data query capabilities that efficiently and economically enable data mining of massive amounts of transaction-level event data.
  • Breadth—Provides an outstanding price and performance ratio, making the solution accessible to medium- and large-sized organizations.

PwC Consulting fully integrates the OI5 DataEngine™ into the client’s business processes and technologies. PwC Consulting’s background and experience in business intelligence and analytics produces a solution that enables organizations to rapidly collect, aggregate, manage, analyze, filter, customize, and distribute information throughout their value chains. By providing consolidated views of the dispersed data––rapidly and cost-effectively–– organizations can identify trends and quickly respond to changing needs and situations, thus improving overall business performance.

TECHNOLOGY

The Intelligence Engine solution runs efficiently on a 4-way Intel® Itanium® processor-based server. Supported operating systems include (Win64) Microsoft* Windows* 2000 Server (Q3’02), Linux* (Red Hat*, version 7.2 for the Itanium® processor), and HP-UX* version 11 (Q4’02).

Components of the Digineer™ OI5 DataEngine™ rely on the following technologies:

  • Core Data Engine™—C++ with multi-threading
  • Result Set Viewer—Java* using Swing*
  • MetaData Manager—Visual Basic*
  • MetaData Database—Any ODBC-compliant database

The OI5 DataEngine™ runs optimally on the Intel® EPIC architecture, taking maximum advantage of the Itanium® processor family’s many registers and massive on-chip resources. The 128 general and 128 floating-point registers excel at supporting the complex operations involved in analytics. Tuning the implementation to achieve maximum performance through parallelism, and tapping into the power of the Itanium® processor family’s multiple execution units, makes it possible to rapidly process and analyze the large volumes of data this solution requires.

PwC Consulting solutions employ a variety of technologies using Digineer™ OI5 DataEngine™. By being able to create data stores through the importing and exporting of data files, this solution makes it unnecessary to devise complex, custom integration schemes. Support for each of the following sources and targets is provided:

  • Source databases
    • Oracle*
    • IBM* DB2*
    • Sybase*
    • Microsoft* SQL Server*
    • IMS*
    • IDMS*
  • Target analysis engines
    • EMC* Symmetrix Manager*
    • SAP* portals
    • SAS*
    • IBM* Intelligent Miner*
    • Cognos* Powerplay* and Impromptu*
    • Custom applications built in Visual Basic, Java, C++, and HTML using Microsoft Internet Information Server* (IIS) and Transaction servers
  • Digineer™ OI5 DataEngine™ has been deployed for use with SAP* and PeopleSoft* ERP packages as sources for data and targets for additional analysis. Future plans include development of interfaces for MQSeries*, and clickstream data sources, scheduled for release by mid-2003.

WHO THE SOLUTION WILL BENEFIT

The Intelligence Engine solution provides rapid analysis of data from multiple sources to gain a better picture of emerging trends and patterns. For a strong competitive advantage, companies will need to develop a quantitative, as well as qualitative, understanding of their business. The Intelligence Engine solution complements existing data warehouse and business intelligence solutions. It can also maximize the performance and extend the functionality of systems already in place.

The key vertical integration areas in which the Intelligence Engine solution can provide significant value include:

  • Retail—Domestic and international (Western European) retailers with annual sales of more than $1 billion, and large-scale customer, supplier, point-of-sale, and operational databases.
  • Manufacturers—Mid-market manufacturers with annual sales of more than $500 million and one or more large silo databases.
  • Financial Services—Financial services organizations with multiple geographic locations and stringent operational and regulatory reporting requirements.
  • Healthcare—Pharmaceutical and biotechnology companies that depend on data from multiple systems to manage and optimize their operations.
  • Government—Agencies under pressure to provide more extensive reporting of data, over which the organization has little governance. In addition, data sources continue to propagate, grow in size, and are increasingly complex and divergent from each other.

SOLUTION BENEFITS

The PwC Consulting Intelligence Engine solution using Digineer™ OI5 DataEngine™ adapts well to environments where operational data is distributed among several different systems––sometimes in different data formats––and where the report requirements change frequently. Implementing the Digineer™ OI5 DataEngine™ on servers powered by the Intel® Itanium® processor family offers these benefits:

  • Provides faster high-volume transaction processing. Supplies the massive computing power needed to run and consolidate memory-intensive databases across channels. This enables key decision makers and business owners to obtain accurate business intelligence and analytics, and quickly gain deeper insights into the dynamics of their operations.
  • Supports multiple data formats. Supports multiple applications and loads required to consolidate and analyze data throughout the company. This enables the solution to easily access existing data stores and business intelligence solutions without modifications or retrofitting.
  • Adapts easily to changing data needs. Accommodates an organization’s evolving data needs and models so that new informational requirements can be met with minimal investment of time and money. Integrates different processes and organizes these in an easier and more cost-effective manner with a modular, open environment based on industry standards and Intel® Architecture. Solutions based on the OI5 DataEngine™ as deployed on servers powered by the Intel® Itanium® processor family provide a highly competitive price and performance ratio.

FUNCTIONAL BUSINESS CONCEPT

Maintaining a current and accurate view of an organization’s operational status requires the capability of locating and analyzing information from a very large data store in a short period of time. This problem can be compounded when data resides in disconnected data silos, or has been extracted from a variety of non-correlated reports. Reducing the scope of the data analyzed in order to save time or expedite processing can create an inaccurate view of current operations. Processing bottlenecks and inefficiencies often result from solutions implemented on platforms without a tuned architecture capable of handling data quantities that reach terabyte levels. While increasing the quantity and complexity of a solution’s hardware and software may improve performance, it also raises costs. A better approach is to deploy an implementation on a platform designed to accommodate massive volumes of data with processing power that is suitable to the task.

The Intelligence Engine solution using the OI5 DataEngine™ on a platform powered by the Intel® Itanium® processor family performs very high-speed analytical processing of massive amounts of event data. This event data can be dispersed across any number of data sources, including enterprise resource planning (ERP), SCM, CRM, and legacy systems––without reducing the effectiveness of the solution. Processed data can then feed downstream data repositories, analytical and business intelligence tools, and executive reporting systems, as shown in the following diagram.

The OI5 DataEngine™ uses a patented software approach that leverages the latest advances in microprocessor technologies featured in the Intel® Itanium® architecture, including 64-bit addressing and parallel processing capabilities.

The open architecture of the OI5 solution allows data to be extracted and transformed directly from legacy systems. OI5 provides a massively scalable, high-performance solution for the most demanding intelligence and analytics applications.

USER EXPERIENCE

Define Metadata

The first step to using OI5 is to define the metadata, including record layouts, files or tables, and relationships between tables. The OI5 design removes the complexities of understanding table relationships. Common keys and other relationships between tables only need to be defined once. Instead of having to understand the “where” statement in SQL to add department names to a sales report, OI5 lets users select the department name and then add it to their report. OI5 automatically locates the correct data.
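
The define-once relationship idea might look like the following sketch, where a common key is declared in metadata and then used implicitly. This is illustrative Python; the tables, columns, and helper function are invented for the example, not OI5’s actual interface:

    # Sketch: declare the sales-to-departments relationship once in
    # metadata; thereafter a user just picks "dept_name" and the engine
    # resolves the join. Table and column names are illustrative.
    relationships = {("sales", "departments"): "dept_id"}  # common key, defined once

    departments = {10: {"dept_name": "Hardware"}, 20: {"dept_name": "Garden"}}

    def add_lookup_field(sale, field):
        """Add a field from a related table without the user writing a join."""
        key_column = relationships[("sales", "departments")]
        return {**sale, field: departments[sale[key_column]][field]}

    print(add_lookup_field({"dept_id": 10, "amount": 19.95}, "dept_name"))
    # {'dept_id': 10, 'amount': 19.95, 'dept_name': 'Hardware'}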

Define Views

Next, define the views (queries); a sketch of a complete definition follows the list:

  • Select the transaction files to read
  • Specify the appropriate output format
  • Indicate the appropriate filters to apply
  • Indicate the data to sort or summarize
  • Design columns, and indicate any calculations to perform
  • OI5 does the rest
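
Taken together, those steps amount to a declarative view specification. One way such a definition might look in sketch form, with the source file, output target, fields, and calculation all assumed for illustration:

    # Sketch: the steps above expressed as one declarative view definition.
    # Source, output, fields, and the calculation are illustrative only.
    view = {
        "source":  "sales_transactions",          # transaction file to read
        "output":  "monthly_sales_by_dept.csv",   # output format
        "filter":  lambda r: r["amount"] > 0,     # filters to apply
        "sort":    ["dept_id", "month"],          # sort/summarize keys
        "columns": [                              # columns and calculations
            ("dept_id", lambda r: r["dept_id"]),
            ("month",   lambda r: r["date"][:7]),
            ("net",     lambda r: r["amount"] - r["discount"]),
        ],
    }

The engine, not the user, then worries about access paths, joins, and pass planning (“OI5 does the rest”).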

Run Processes

During data file output, OI5 produces multiple files in one execution of the process. This feature reduces the time needed to supply data to other systems. OI5 views can reformat data, perform logic operations, and make calculations. With the open API, custom functions can be created for other types of transformation. These features make OI5 a powerful data extraction and transformation tool, as well as a flexible application engine.

SOFTWARE ARCHITECTURE

The OI5 DataEngine™ helps large organizations meet the scalability and performance challenge of assessing key organizational operations when the number of events being measured and analyzed becomes too great. The solution supports queries directed to extremely large volumes of event-level data in a single pass.

Data exposed by the OI5 DataEngine™ can reside in different physical locations on multiple systems, including ERP, CRM, SCM, and POS systems. The front end of the OI5 DataEngine™ maps these “in-place” data sources into the data engine’s MetaData definition modules. The engine then uses the MetaData definition to intelligently pre-process the queries before generating the data views and outputting data stores for analytical processing. The data output can either be viewed with the OI5 Viewer or directed to other business intelligence and reporting systems.

The core, patented technology of OI5 is configured for refreshing and managing a large-scale operational data store (ODS) derived from multiple data sources. As shown in the following diagram, it includes three integrated modules that deliver the full power of the OI5 technology and its massive data throughput: OI5 MetaData Manager, OI5 View Manager, and OI5 Core Processor. The OI5 DataEngine™, as implemented on servers powered by the Intel® Itanium® processor, runs on Microsoft* Windows* 2000 Server (64-bit) and HP-UX systems.

The parallel query-processing capabilities of the system follow a defined software algorithm that is optimized for speed. The algorithm first pre-processes all the data queries based on the data engine’s knowledge of where and how the event data is structured and formatted (determined by the MetaData definition in combination with Read Exits), as well as the required output data stores (known as Views). Optimized code generated by this query pre-processing step gets sequenced into a logic table. When the logic table is processed, the OI5 DataEngine™ spawns parallel query instructions that minimize instruction path lengths to the available microprocessors. This explicitly parallel query-processing directly takes advantage of the EPIC architecture of the Itanium® processor family. The OI5 DataEngine™, using instruction path optimization and single-pass data queries, offers unmatched query-processing and exceptional handling of the data view output.
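
One way to picture the logic table is as a flattened instruction list compiled from all views, applied to each record in sequence. The sketch below shows the shape of that idea only; the operation names and view structure are assumptions, not the patented algorithm:

    # Sketch: compile every view into one flat "logic table" and drive
    # all queries from it in a single pass. Illustrative only.
    def compile_logic_table(views):
        table = []
        for name, view in views.items():
            table.append(("SELECT_IF", name, view["filter"]))
            table.append(("EXTRACT",   name, view["extract"]))
        return table

    def run(logic_table, events, outputs):
        for ev in events:                      # one pass over the event data
            selected = set()
            for op, name, fn in logic_table:   # sequenced instructions
                if op == "SELECT_IF" and fn(ev):
                    selected.add(name)
                elif op == "EXTRACT" and name in selected:
                    outputs[name].append(fn(ev))

In the real engine the logic table is turned into optimized, explicitly parallel instructions rather than being interpreted record by record as it is here.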

SYSTEMS ARCHITECTURE

The OI5 DataEngine™––the basis of this data warehouse analytics solution––is optimized for the Intel® Itanium® processor family. The following diagram shows how information from existing data warehouses is delivered to the OI5 DataEngine™ for advanced analytics processing. The OI5 MetaData and View Manager applications, running on a workstation powered by the Intel® Pentium® 4 processor, define the relations and views for the associated processing and analytics. Processed data can be directed to Web portal solutions, as well as custom applications and reports, or supplied in a form for re-entry into an existing data warehouse.

The Web portal is powered by the Intel® Xeon™ processor family. The OI5 DataEngine™ is powered by a 4-way Intel® Itanium® processor family-based server.

SUMMARY

PwC Consulting has used the OI5 DataEngine™ in numerous scenarios that have been designed and deployed for clients. This solution achieves favorable results in knowledge discovery applications that involve massive numbers of transactions. Running on servers powered by the Intel® Itanium® processor family, the underlying OI5 software data engine performs very high-speed, batch-analytical processing of large volumes of event data–– while simultaneously writing multiple data views to downstream data warehouses, online and offline reporting tools, and real-time analytical processing applications. When used in combination with Intel® Itanium® processor family-based platforms, the OI5 DataEngine™ offers an exceptional price and performance value.

LEARN MORE ABOUT THIS INNOVATIVE SOLUTION

For general information about the products described in this data sheet, visit: http://www.digineer.com

http://www.intel.com/go/solutionblueprints

If you have a specific question about implementing this solution within your organization, contact your Intel representative or e-mail us at:

solutionblueprints@intel.com

SOFTWARE PROVIDERS

Digineer™

Digineer, the Digineer logo and OI5 DataEngine are trademarks or registered trademarks of Digineer, Inc.

Copyright ©2002 Digineer. All Rights Reserved.

Intel, the Intel and Intel Inside logos, Pentium and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Copyright ©2002 Intel Corporation. All Rights Reserved.

*Other names and brands may be claimed as the property of others. Information regarding third party products is provided solely for educational purposes. Intel is not responsible for the performance or support of third party products and does not make any representations or warranties whatsoever regarding quality, reliability, functionality, or compatibility of these devices or products.

250790—001 and 250789—001

Airline Mines Data for Lost Revenue: 2000

An R&D statistical study at a Fortune 100 airline suggested that millions of dollars of revenue were being lost each month.  The causes ranged from simple ticketing mistakes and training issues to intentional fraud.  But the approach used in the R&D study had the following limitations:

  • Statistical sampling does not identify specific cases.  To take action, specific tickets must be identified.  Also, some tests require identifying patterns for individual employees or customers, which requires very large samples, preferably the entire database.
  • Sources other than the actual ticketing database were used because of its complexity and critical production system availability.  But working against anything less than the production ticketing database introduces the possibility that the results can be disputed.
  • Not all tickets could be inspected because of the volume of ticketing data.  Over 500 million records from approximately 40 entities, comprising 40 million tickets, needed to be scanned.  This had to be done for more than 10 different lost revenue detection tests.
  • Certain types of detection tests could not even be attempted in the study because of the complexity of the logic involved.

The airline agreed to put Geneva ERS to the test.  In a 14-week effort, a nine-member team performed the following:

  • The detection test business logic was defined, and data mapping from the business logic to the database performed,
  • Custom code was developed to scan the CA IDMS ticket database and execute other complex logic,
  • An architecture was developed, Geneva ERS installed, and the database structures defined within the tool,
  • Geneva ERS “views” or queries were created to produce four files (virtual and physical) and over 10 violation reports,
  • The queries were executed and refined dozens of times against test databases about 1/6th the size of the production database.

Executions against the production database required scanning the 500 million records in approximately 1½ to 3 hours of wall-clock time and 3 to 6 hours of CPU time.  The ticket database was scanned using 30 parallel processes, ultimately reading 170 different files.  All detection tests were resolved in one scan of the production database.
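
The pattern described here, partitioned parallel scans with every detection test applied in a single pass, can be sketched with Python’s multiprocessing. The tests, ticket fields, worker count default, and reader below are placeholders, not the airline’s actual logic:

    from multiprocessing import Pool

    # Sketch of the parallel-scan pattern: partition the database, apply
    # all detection tests in one pass per partition, then merge results.
    # Tests, fields, and the reader are placeholders.
    TESTS = {
        "duplicate_coupon": lambda t: t["coupon_count"] > t["expected_coupons"],
        "fare_mismatch":    lambda t: t["collected"] < t["published_fare"],
    }

    def read_tickets(partition_path):
        """Placeholder reader; the real scan read a CA IDMS ticket database."""
        return []  # would yield one dict per ticket record

    def scan_partition(partition_path):
        hits = {name: [] for name in TESTS}
        for ticket in read_tickets(partition_path):  # one pass per partition
            for name, test in TESTS.items():         # every test on each record
                if test(ticket):
                    hits[name].append(ticket["ticket_no"])
        return hits

    def scan_all(partitions, workers=30):            # e.g., 30 parallel processes
        with Pool(workers) as pool:
            partials = pool.map(scan_partition, partitions)
        merged = {name: [] for name in TESTS}
        for part in partials:
            for name, ids in part.items():
                merged[name].extend(ids)
        return merged

Because every test is evaluated during the same pass, adding another detection test costs little more than the test logic itself.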

The results validated the dollar values estimated in the R&D study, showing that over $6 million annually was being lost in one area alone, and millions more might be reported incorrectly.  It also provided insight into some areas that had never been investigated before.  But more importantly, Geneva ERS identified specific cases that could be investigated and collected.  The evidence was so solid that certain employees were dismissed as a result of the investigation.

The following deck proposed similar projects to other airlines.