Business Intelligence

Business Intelligence should not be hype; use it to address the four golden rules below. I have also included an explanation of the OBIEE design architecture and a step-by-step approach to building reports.

on Wednesday, 03 April 2013. Posted in Blog, Business Intelligence

BI should provide users with the data and tools to answer the questions that are important to running the part of the business you are responsible for:

  1. Determine if the business is on track
  2. Identify where things are going wrong
  3. Take and monitor corrective actions
  4. Spot trends

 

Below is the OBIEE standalone architecture and an overview of its components.

 OBIEE Architecture

  • Data Sources -- Contain the business data users want to analyze. OBI supports relational databases, OLAP databases, flat files, spreadsheets, XML files, etc.
  • OBI Web Server -- Provides the processing to visualize the information for client consumption.
  • Clients
    • Answers -- Provides a self-service analysis platform for analytical needs.
    • Dashboards -- Provide a way to organize related information for easy access.
    • Delivers -- Used to create iBots; delivers alerts to subscribed users. It is integrated with Dashboards and Answers.
  • BI Analytical Server -- Provides efficient query processing, generates dynamic SQL to query the databases in the data sources, and structures results to satisfy requests.
  • BI Scheduler -- Manages and executes jobs requesting data analytics, and schedules reports to be delivered to users at specified times.

Last week I received the following support email:

From: **********************************
Sent: March 25, 2013 2:29 PM
To: Alex Antonatos
Subject: OBIEE reports with Answers

Hi Alex,

I am a regular reader of your blog, great website and thank you for sharing your experiences.

We have installed OBIEE and are new to BI Answers, trying to understand how to create reports with Answers. Can you help us understand the steps required to deploy a procurement spend report with Oracle Answers?

Sincerely,

*******************

*******************

Step by Step approach in building BI reports using Answers

Oracle BI Answers is the Oracle BI interface used to query the organization’s data and provide answers to business questions. It allows you to explore and interact with information, and to present and visualize that information using different views.

Answers can be accessed using the default URL http://OBIServer/analytics/saw.dll?Answers

Step 1: The arrow indicates the subject areas accessible to the current user. In this example I am using Purchasing, to respond to the support email above.

 Step 1- BI Answers

Step 2: Clicking on the subject area displays its related tables and columns in the left-hand pane.

 Step 2- Bi Answers

Step 3: Select the required columns from the left-hand pane.

Step 3- Bi Answers

Step 4: Select the columns from the left side pane.

 Step 4- Bi Answers

Step 5: Save the request in the required folder.

 Step 5 - Bi Answers

You can then view the report in various formats: selecting a view type from the drop-down generates the report in that view.

 Sample BI Output
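Behind the scenes, each Answers request is translated into logical SQL that the BI Analytical Server turns into physical queries against the data sources. The sketch below shows roughly what a simple procurement spend request might look like; the subject area, presentation tables and column names are hypothetical and will differ in your own repository.

    -- Hedged illustration only: logical SQL of the kind Answers sends to the BI Server.
    -- "Purchasing", "Supplier", "Purchasing Facts" and the column names are assumptions.
    SELECT "Supplier"."Supplier Name"       saw_0,
           "Purchasing Facts"."Spend Amount" saw_1
    FROM   "Purchasing"
    WHERE  "Time"."Fiscal Year" = 2013
    ORDER BY saw_0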

As always, don’t hesitate to share your OBIEE experiences and tips with me and the website readers.

Use buzzwords wisely and work on your value proposition instead; here is one example that has worked well for me for the last 5 years

on Tuesday, 12 March 2013. Posted in Blog, Business Intelligence


 

Essentially, most of the really popular buzzwords have become clichés, which makes it easy for a customer to spot the sales pitch, and customers do not like to buy sales pitches. They like to buy services and products.

The verdict, then, is that buzzwords are an effective tool, but only when used in moderation. One or two unique, carefully selected phrases can catch a customer's attention and help them understand the benefits of the product or service. More than that will start to muddle the message and put the customer on guard.

As an employee or consultant you should always work on your value proposition.

Your value proposition is a statement that summarizes how you are differentiated from your competitors. It should answer the question: “Why should I choose to buy this product or service, and why now?” Here are some tips:

The best value proposition is clear: what is it, for whom and how is it useful? If those questions are answered, you’re on the right path. Always strive for clarity first.

If your value proposition makes people go “huh?”, you’re doing it wrong. If they have to read a lot of text to understand your offering, you’re doing it wrong. Yes, a sufficient amount of information is crucial for conversions, but you need to draw them in with a clear, compelling value proposition first.

What makes a good value proposition:

  1. Clarity! It’s easy to understand.
  2. It communicates the concrete results a customer will get from purchasing and using your products and/or services.
  3. It says how it’s different or better than the competitor’s offer.
  4. It avoids hype (like ‘never seen before amazing miracle product’), superlatives (‘best’) and business jargon (‘value-added interactions’, ‘synergies’).
  5. It can be read and understood in about 10 seconds.

 

Here is one example I have been offering clients for the last 5 years:

As a consultant, I have put in place multiple enterprise BI solutions. Over the years of implementing these products, I developed a product in Excel to help clients analyze and weigh their options to build or buy their BI applications.

The differentiators of my product are:

  1. Customizable to each clients project needs
  2. Fixed cost
  3. Quick turnaround and independent advice
  4. Simple to use
  5. No setup fee
  6. No long-term time and materials contract required
  7. Money-back guarantee if the product has not provided any value

 appsconsultant.com BI work effort estimator

Since 2007, 110 global clients have purchased the product and have been satisfied with the results.

I believe that everybody has some kind of talent in them; no matter what it is, everyone is able to do something better than some other people. Be aware of your talent and make it readable and understandable in around 10 seconds. Why 10 seconds? In 2013 most of us have developed a short attention span, and the main 4 reasons behind this are: 1. Internet, 2. Texting, 3. Portable music, 4. Television.

Here are some screenshots of the BI product:

appsconsultant.com BI budget estimates

 

appsconsultant.com BI Business rules

appsconsultant.com BI effort estimator

Most common data modeling errors

on Tuesday, 01 January 2013. Posted in Business Intelligence

I will describe mistakes that development teams may make during the development and implementation of a logical and physical data model in an application development project. I have encountered these situations many times over the last 13 years (and even made some of them myself in the earlier years). The goal of this article is to let you learn from other people’s experiences and increase the chances of success on the first try. My experience has been concentrated on Oracle databases, so most of the recommendations deal with techniques you can use with Oracle.

Even though many of the recommendations will seem obvious, you may recognize your own projects in some of these mistakes. But perhaps it will be comforting to know that you are in good company, as these errors are far too common, even though Oracle and other relational database systems have been available for years.

Oracle is a sophisticated relational database management system that supports complex data structures. While there is enormous flexibility in building a data model, many of the options in the Data Definition Language (DDL) have default choices, which encourages the temptation to just “build it and go,” focusing exclusively on the logical functionality of the application without planning for the physical operation of the database. However, the complexities also require some planning in advance, to allow the application to perform adequately as it scales up to thousands of users, millions of records, or hundreds of gigabytes of storage. The following table summarizes common mistakes and the problems that they can cause.

Mistake: Failure to address long-term growth
Impact: Severe
Symptoms: Chronic space shortages; DBAs in constant reaction mode; expense of acquiring storage space.
Benefit of correct action: Predictable cost for storage space; space is available before it is needed; time for maintenance tasks (backup, recovery, upgrade) can be predicted and scheduled.

Mistake: No routine way to clean out old data
Impact: Severe
Symptoms: Unbounded growth of old data; expense of acquiring storage space; degrading performance over time; difficulty dealing with ad hoc deletions on a massive scale; excessive production of redo logs; recovery time increases.
Benefit of correct action: Reduced cost of storage space; efficient methods for discarding old data; predictable steady-state operation.

Mistake: Overuse of synthetic keys
Impact: Minor to moderate
Symptoms: Server maintains unnecessary indexes; insert transactions serialize on an unnecessary sequence generator.
Benefit of correct action: Improved performance of high-insert transactions.

Mistake: Inadequate planning of indexes
Impact: Moderate to major
Symptoms: Server wastes time and space maintaining unnecessary indexes; suboptimal query plans.
Benefit of correct action: Improved performance.

Failure to Address Long-term Growth

Logically, all that is needed to construct a data model is CREATE TABLE, CREATE INDEX, and other related commands. Using these commands, your logical entities become your physical tables, and you’re done with the data model, right? But operations that work fine on small tables may perform inadequately when a table grows to production size. I have seen many databases for which someone just created tablespaces with a few gigabytes, and then created the tables and indexes; then these creators had to deal with endless crises as they found that the tablespaces were not big enough. Such crises are even more acute when solving that space shortage involves buying more disk drives, since the procurement and installation of disk is rarely fast enough to solve a crisis.

To avoid this, you must focus more attention on the physical storage of large tables during the design of the data model.

  • Estimate both the size of the table and its long-term growth pattern. Extreme precision is not required, but the estimate should be based on the actual needs of the table. A table exists to store some real-world information, so use the underlying business needs of the table to produce a simple model (perhaps in a spreadsheet) estimating the number of rows. The business domain experts should provide input to this process. How many customers, clients, or patients will there be? How will the population grow over time? How many accounts, orders, bills, payments, and so on, will a typical customer have over time? How many detail records will there be on average for a given parent key? How long will information be retained? Answering these questions may be difficult in the case of a new business or application, but frequently there is enough experience in the business to provide at least a crude estimate (of course, it is better to err on the high side than the low side).
  • Estimate the physical size of the tables. Again, extreme precision is not required, but the result should be based on reality. Something as simple as the sum of average column lengths times the number of rows, plus some overhead for PCTFREE, will yield at least a reasonable order of magnitude (see the SQL sketch after this list). Obviously, if the database already exists, then the actual tables can be used as an accurate gauge of the storage needs, though one should still extrapolate growth into the future.
  • Don’t forget to estimate the physical size of the indexes. This is more difficult than estimating table sizes, because the actual size can depend on how compactly Oracle stores keys in non-leaf blocks. But even a crude estimate is better than none at all. A good starting point would be the average length of the indexed columns times the number of rows, plus some overhead for PCTFREE. For bitmap indexes, a good estimate would be the number of unique values of the indexed column times the number of rows divided by bits per block. A word of caution: in my experience, indexes may consume up to twice as much space as these simple models would predict. You have to try it on your database to know for sure.
  • Use this information to estimate the need for disk space both now and several years in the future. It is not really helpful to know a table’s current size if it is going to triple in size in the next three years. Modeling the growth will allow the DBA to construct tablespaces of adequate size without having to react to ever more critical shortages of space. It also allows you to allocate adequate time for maintenance activities such as backups or upgrades, since the time required for these activities is proportional to the size of the database. It may also reveal that a table needs to be archived or cleaned out on a regular basis.
  • During development, validate the predictions by comparing predicted bytes per row with the actual row sizes of the tables and indexes in the development or test environments, and adjust the prediction model accordingly. This is especially important for the indexes, for which actual space usage may differ from the predictions by as much as a factor of two.
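If statistics have already been gathered in a development or test environment, a rough per-row figure can be pulled straight from the data dictionary. The following is a minimal sketch only; the table name, the 50-million-row target and the PCTFREE of 10 are assumptions you would replace with your own numbers.

    -- Rough table-size estimate: sum of average column lengths x expected rows,
    -- plus PCTFREE overhead. Assumes DBMS_STATS has populated AVG_COL_LEN.
    SELECT table_name,
           SUM(avg_col_len)                                  AS est_bytes_per_row,
           ROUND(SUM(avg_col_len) * 50000000                 -- assumed 50M rows
                 * (1 + 10 / 100)                            -- assumed PCTFREE 10
                 / 1024 / 1024 / 1024, 1)                    AS est_gb
    FROM   user_tab_columns
    WHERE  table_name = 'ORDERS'                             -- hypothetical table
    GROUP  BY table_name;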

No Routine Way to Clean Out Old Data

Some parts of your data will be dated, and will have a limited lifetime depending on the needs of the business. Perhaps the business needs to keep orders or payments online for three years, and user activity logs for the last three months. Frequently, programmers will focus on the routine operation of an application, and will pay great attention to the modules that insert the new records. But the application itself takes no specific action to “expire” the old data, so no one is concerned with doing this efficiently, if at all. Naturally, this leads to unbounded growth. On a large table, this can be a disaster. When there is no plan for cleaning out old data, database operations people will typically respond to emergency shortages of space either by scrambling to delete huge numbers of rows or by adding space on an ad hoc basis, neither of which are good long-term solutions.

In Oracle, the DELETE operation is intended for online transaction support, and as such it is most efficient only for small numbers of rows. For every row deleted, the table and index blocks containing the row must be read, recorded in a rollback segment, modified, written to the redo logs, and written back to the disk. All this disk I/O is essential for the integrity of the database (which is of course why you are using Oracle anyway), but it is not efficient for large-scale clean-out from a big table. Furthermore, the amount of rollback and redo produced will increase the number of log archives and thus increase the time required to recover from a backup.

Oracle’s solution to this problem is range partitioning, first introduced in version 8.0. A range-partitioned table is physically separated into many different partitions, and Oracle automatically adds rows to a particular partition based on the value of a specific column in the row (called the partition key). Each partition accepts values in a specific range (hence the name), and the ranges of the partitions are consecutive and non-overlapping. There are many advantages to partitioning large tables, including the following:

  • All rows in a given range can be eliminated with a single small, efficient DDL transaction by simply dropping or truncating a partition (see the sketch after this list).
  • For queries that have a condition limiting the value of the partition key, such as partition key BETWEEN :lower_bound AND :upper_bound, the query optimizer can skip partitions that will have no matching rows, reducing the I/O required to process the query; this is known as partition pruning.
  • Maintenance operations can proceed on individual partitions while the rest of the table is still available to support users.

More details about partitioning can be found in chapter 11 of Oracle 9i Database Concepts and chapter 17 of Oracle 9i Database Administrator's Guide, including specifics on how to set up and manage partitioned tables.
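As a minimal sketch (the table, column and partition names are hypothetical), a monthly range-partitioned log table and its routine clean-out might look like this:

    -- Range-partitioned table keyed on an activity date, one partition per month.
    CREATE TABLE user_activity_log (
        log_id        NUMBER,
        user_name     VARCHAR2(30),
        activity_date DATE NOT NULL,
        details       VARCHAR2(400)
    )
    PARTITION BY RANGE (activity_date) (
        PARTITION p_2013_01 VALUES LESS THAN (TO_DATE('2013-02-01','YYYY-MM-DD')),
        PARTITION p_2013_02 VALUES LESS THAN (TO_DATE('2013-03-01','YYYY-MM-DD')),
        PARTITION p_2013_03 VALUES LESS THAN (TO_DATE('2013-04-01','YYYY-MM-DD'))
    );

    -- Cleaning out an expired month is one small DDL operation instead of a massive DELETE:
    ALTER TABLE user_activity_log DROP PARTITION p_2013_01;
    -- or, to keep the partition definition but discard its rows:
    -- ALTER TABLE user_activity_log TRUNCATE PARTITION p_2013_01;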

In order to reap the benefits of partitioning, several choices must be made in advance:

  • Which column to use as the partition key.
  • Which intervals to use as the partition ranges.
  • Whether to partition the indexes, and if so, how to do so.
  • When to add new partitions and drop old ones.

There are plenty of resources to advise on these decisions, so I’ll not cover them in detail here. The main point to remember is that partitioning is by far the most efficient way to deal with large tables that must be cleaned out on a regular basis.

Overuse of Synthetic Keys

As you know, the primary key of a table is a set of one or more attributes such that every row in the table has a unique value. When these attributes are things that the business uses routinely, the key is called a “natural key.” Examples could include Social Security numbers, vehicle identification numbers, customer or patient IDs, account numbers, and so on. However, sometimes a table has no obvious natural key, or the natural key has many columns. Database designers frequently create a key using a sequence generator. This is called a “synthetic key,” because it serves the purpose of a primary key but has no real meaning to the business. Some database designers and reference materials use the term “surrogate key.”

I have seen data models in which every table in the whole schema was keyed by synthetic keys. While there are valid reasons to use a synthetic key, it is going overboard to extend the practice to every table in a data model, for these reasons:

  • There are tables for which a synthetic key simply serves no purpose. The primary key index is a waste of space, and the server must waste time updating the index during inserts and deletes. One example would be a table of log entries; it probably does not need a key at all. Another example would be the detail table in a master/detail relationship. Detail records are typically fetched by the master key; there is little benefit in giving the detail rows their own synthetic key.
  • Simultaneous transactions that insert new rows must serialize on the sequence generator as they each take turns getting new key values. If there are dozens or hundreds of such transactions running in parallel, as in an OLTP system, then such serializing will limit performance. (Sequence caching can mitigate this serializing effect in Oracle Parallel Server or Oracle Real Application Cluster, but it does not help at all on a single-instance database.)

So when should you use a synthetic key? I have encountered these situations for which a synthetic key was useful:

  • The natural key is something that might change over time, such as a car's license number. A synthetic key can remain fixed while the other attributes change.
  • The natural key has many columns, and there are many places in your application where a program must fetch a single row by key value. A synthetic key would allow the application to identify a row by only one value.
  • The natural key has many columns, and the table is the target of foreign references. It is generally impractical to repeat all of the natural key columns in the child tables; a synthetic key allows you to create foreign references with just one column.

You may find other reasons for using a synthetic key. But rather than habitually defining a synthetic key for every table, it is a good idea to make sure that the benefit of a single-column key outweighs the cost of indexing and the cost of generating sequence numbers.
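For illustration, here is a minimal sketch (all names are hypothetical) of a table where a sequence-backed synthetic key earns its keep because the natural key is long or changeable and the table is referenced by child tables:

    -- Synthetic key on VEHICLE so that child tables reference one NUMBER column
    -- rather than the full natural key.
    CREATE SEQUENCE vehicle_seq CACHE 100;   -- caching reduces contention on inserts

    CREATE TABLE vehicle (
        vehicle_id   NUMBER        PRIMARY KEY,       -- synthetic key
        vin          VARCHAR2(17)  NOT NULL UNIQUE,   -- natural key, still enforced
        plate_number VARCHAR2(10),                    -- may change over time
        make         VARCHAR2(30),
        model        VARCHAR2(30)
    );

    INSERT INTO vehicle (vehicle_id, vin, plate_number, make, model)
    VALUES (vehicle_seq.NEXTVAL, '1HGCM82633A004352', 'ABC 123', 'Honda', 'Accord');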

Inadequate Planning for Indexes

One of the most challenging tasks in creating a logical data model is to decide which columns to include in indexes. Unfortunately, there are no magic criteria that are guaranteed to tell you which indexes need to exist and which ones don’t, but there are some rules of thumb that can serve as a starting point. These generally require a decent understanding of the way the tables will be accessed by the application programs, report programs, and (if possible) ad hoc analysis. It is beyond the scope of this article to explore all of the situations for which an index is a good or a bad idea. The Oracle9i Application Developer’s Guide Fundamentals has some very good advice in a section titled, “Indexing the Correct Tables and Columns,” in chapter 5.

So why did I bring this up in a discussion of common mistakes? Because in many cases developers are tempted to apply simple rules of thumb globally across the entire data model without any consideration of whether the resulting indexes are really useful or not.

Foreign Key Indexes

Some data modeling tools have options to create indexes on all foreign keys, and some developers may be tempted just to turn on that option. After all, the rule of thumb says to index columns that are used in joins, right? In reality, a foreign key index is useful only if at least one of the following is true:

  • When a query plan in your application starts by fetching rows in the parent table, and then fetches rows in the child table by way of the foreign key, an index on that foreign key will help. Note that the join order and the search method are both important; if the plan starts with the child table first, or uses a full scan of the child table, or uses a different index on the child table’s primary key, your foreign key index is not helping.
  • When your application deletes rows in the parent table, or updates key columns in the parent table, Oracle can use the foreign key index on the child table to enforce the foreign key constraint.

If neither of these situations applies, then indexing the foreign key is a waste of space and time. Furthermore, if the foreign key is the leading column of an existing index on the child table (for example, its primary key index), then a separate index on just the foreign key would be redundant, and hence totally unnecessary.
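A minimal sketch, with hypothetical table names, of a foreign key index that would satisfy both conditions above:

    -- Hypothetical parent/child tables.
    CREATE TABLE customers (
        customer_id NUMBER PRIMARY KEY,
        name        VARCHAR2(100)
    );

    CREATE TABLE orders (
        order_id    NUMBER PRIMARY KEY,
        customer_id NUMBER NOT NULL REFERENCES customers (customer_id),
        order_date  DATE
    );

    -- Worthwhile only if queries drive from CUSTOMERS into ORDERS through this key,
    -- or if rows in CUSTOMERS are deleted or have key columns updated:
    CREATE INDEX orders_customer_fk_ix ON orders (customer_id);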

Low-cardinality Columns

For low-cardinality columns, if you really need an index, you might consider using a bitmap index, since bitmap indexes have many benefits when dealing with low-cardinality columns. (See Oracle 9i Concepts for a thorough discussion of bitmap indexes.) However, because a single block in a bitmap index can contain thousands of rows, concurrency can be a problem during inserts or deletes. The only way to know for sure is to try a realistic benchmark with and without the index.
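As a hedged illustration (hypothetical names again), a bitmap index on a low-cardinality status column would look like this; it is best confined to read-mostly or reporting tables because of the concurrency cost during inserts and deletes:

    -- Hypothetical reporting table with only a handful of distinct status values.
    CREATE BITMAP INDEX order_facts_status_bix ON order_facts (order_status);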

Order of Indexed Columns

The order of the columns in the index can also affect its usefulness, though this is less of an issue with the new skip-scanning feature of Oracle 9i. Generally speaking, high-cardinality columns (i.e., those with many distinct values in the table) that are used in WHERE clauses should come first in the index. The idea is to allow the query processor to find the rows you want by searching as few blocks as possible.

To understand this concept, consider a telephone book as an analogy. The entries are sorted by last name, first name, street name, and house number. When you are searching for a person by last name, the alphabetical organization allows you to proceed to the correct line very quickly. But it does not help at all if you are searching by street name, the third field in the sort order. Instead, you would need a book where street name is the primary sort field.
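In index terms, the telephone book analogy looks roughly like this (hypothetical names):

    -- The leading column is the one queries actually filter on.
    CREATE INDEX phone_book_name_ix
        ON phone_book (last_name, first_name, street_name);

    -- Served well by the index (leading column is constrained):
    --   SELECT * FROM phone_book WHERE last_name = :last_name;
    -- Not served well (only the third column is constrained; without a skip scan
    -- this degenerates into a full scan):
    --   SELECT * FROM phone_book WHERE street_name = :street_name;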

Reviewing Index Definitions

During development, it is important to figure out which indexes you will and will not need. Do not defer this activity until production deployment.

  • Determine which indexes you need by checking the query plans for important queries that your application uses. Make sure the indexes are adequate to support those queries.
  • Determine which indexes you don’t need by using Oracle 9i’s new index monitoring feature, which can find unused indexes for you (see the sketch after this list). See the ALTER INDEX command in Oracle 9i SQL Reference for the specifics on how to enable this feature. For this to be useful, your application must be running under realistic conditions that exercise all of the features that will be employed in production.
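A minimal sketch of the monitoring cycle (the index name is hypothetical):

    -- Start recording whether the index is used.
    ALTER INDEX orders_customer_fk_ix MONITORING USAGE;

    -- ... run the application under realistic, production-like conditions ...

    -- USED = 'NO' after a representative workload marks the index as a candidate
    -- for removal (V$OBJECT_USAGE shows indexes owned by the current user).
    SELECT index_name, used, monitoring
    FROM   v$object_usage
    WHERE  index_name = 'ORDERS_CUSTOMER_FK_IX';

    -- Stop monitoring when done.
    ALTER INDEX orders_customer_fk_ix NOMONITORING USAGE;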

The only purposes for indexes are to enforce constraints (primary key, unique, and foreign key) and to improve query performance. Any index that does not do one of these two things wastes space, and wastes the server’s time during DML operations.

You may have noticed that this discussion of indexes does not have many hard-and-fast rules that will tell you that a given index is or is not a good idea. But the main point of this piece is that you cannot apply such rules indiscriminately; you must test your database to verify that useful indexes exist and useless ones don’t.

Summary

We have seen a number of problems that a database project can encounter when developers focus exclusively on the logical functionality of a database without planning for performance or space management. Taking the time to plan for these problems during development can help you avoid crises in production, and avoid the time and expense of reorganizing your database later to repair the problems.

Data Architecture Guiding Principles

on Tuesday, 01 January 2013. Posted in Blog, Business Intelligence

 

 
These guiding principles provide direction and also serve as criteria for evaluating proposed solutions and opportunities:

One version of the truth

Create a single set of data used for analytic reporting that is correct and consistent across the enterprise. Look at MDM, with an emphasis on your critical data.

Work from a single logical vision

Build a single logical BI model/architecture that is agreed upon and provides a common goal for building or modifying physical systems.

Standardize on a set of BI tools

Minimize costs by avoiding duplicate licensing and environments.
Create a consistent experience at all levels (the tool itself is not important: SAP, Cognos, Oracle, BO; standardize).

Maximize flexibility with modular architecture

Minimize specific tool dependency issues.
Reduce future development and maintenance costs significantly.

Build, deploy and refine in iterations

Achieve incremental benefits.
Accelerate business adoption through reduced development time.

Base data transfer and storage on reporting needs

Avoid the transfer and storage of data that does not meet a current or planned analytic reporting need.
Meet business needs for data timeliness.

Designing your enterprise reporting strategy

on Tuesday, 01 January 2013. Posted in Blog, Business Intelligence

 

One key mistake that often surfaces in the design of your reporting strategy is misunderstanding the classification of reports.

Classification of Reports

Reports can broadly be classified as Operational reports, Tactical reports and Strategic reports. The category of reporting requirement drives the format in which the data needs to be stored to ensure adequate performance. Operational reports are typically run off relational databases while tactical (analysis/planning) and strategic reports are run off multi-dimensional databases.

 

1) Operational Reports

Operational reports are essential for the day-to-day operations of a business. They are typically either static or parameterized in terms of data selection. This class of reports has minimal analytical requirements and typically excludes complex analytical calculations or planning processes. Relational (OLTP) databases are sufficient to meet these requirements.

 

2) Tactical Reports

Tactical reports include planning and budgeting reports. Planning and budgeting reports are more analytic in nature and may often require complex data analysis, including past trends. Depending on the complexity of the reporting requirement, a combination of relational (OLTP) and multi-dimensional (OLAP) databases is required to support these needs.

 

3) Strategic Reports

Senior management and executives have a business need to obtain reporting figures at an overview level, as opposed to an analyst in a department who needs reporting data at a finer level of detail. Reports of this type may also require interactivity with charts, tabular metrics and graphs. Strategic reports may involve complex analysis. Multi-dimensional (OLAP) databases need to be used to avoid performance degradation.

The above mentioned reports may further be classified as follows:

 

  • Standard vs. Ad-hoc reports
  • Summary vs. Detail reports
  • Key Figures vs. Master Data reports
  • Internal vs. External reports

Standard vs. Ad-hoc Reports

Standard reports are executed not once, but many times. Therefore, they should not have to be recreated each time; instead, it should be possible to re-access templates and fill them automatically with new data. However, if users would like a general view of certain facts for which there is no pre-defined report, they need a tool that enables them to quickly compile this information themselves. It is important that they can quickly find the data they need. They need a tool that provides a business-operations view of the relevant data fields - characteristics, indicators and other attributes. This tool must be clear, and must help them find the fields they need by filtering and sorting.

Note: "Standard report" is a classification based on the reusability of the template or report definition, including layout and data fields. It does not refer to the Oracle standard reports.

 

Summary vs. Detail Reports

Summary reports have condensed data, and the main focus is on data that can be aggregated. Usually the top-down method is used; that is, users start at a relatively high level of condensation and then display successively more detail for important data (zoom in or drill down). Experts also change (navigate) between different views (slice and dice). The other approach is bottom-up. This is used mostly for reports that either contain hardly any data fields that can be aggregated, or for those in which the main focus from the start is on a detailed and complete overview of all selected data.

 

Key Figures vs. Master Data Reports

Key figures are numerical and can usually be condensed with simple formulas (but sometimes only with very complex formulas). For example, DSO (Days Sales Outstanding) is a key figure that indicates the number of days past due, and it can be measured at a product or line-of-business level. A master data report is an all-inclusive report that has all the data pertaining to a range selection.

Internal vs. External Reports

Based on the report destination, reports can be broadly classified as internal or external. An internal report is circulated within an organization, whereas an external report is sent to an outside agency or a government body.

Much too heavy an emphasis is placed on technology. Technology can be complex, but in most cases it is not the issue; it is always manageable.

To successfully deploy your enterprise reporting strategy, it is important that you understand how your organization plans to reach success. I recommend you use drivers (e.g., automating processes for cost savings, educating users and managers to utilise the full potential of ERP and CRM). In general, people drive an enterprise reporting system to success, not technology.

In spite of this advice, each time an organization starts an ERP (or CRM or SFA or BPM or whatever) selection and implementation project, the going-in assumption for reports  is that the software must handle the unique aspects of the business.

After having personally managed several large-scale system implementations (and consulted on numerous other projects), I developed a model to simplify business/IT reporting initiatives. The goal of this model is to improve the likelihood that we will make more rational decisions about where to accept "vanilla," or standard, reports and where it makes sense to customize a report. For an ERP implementation, I have found that using this approach reduces reporting project timelines by as much as 50% and budgets by as much as 40%. Statistics show that only 30-40% of the reports you design are ever used after go-live, so develop wisely.

Starting an OBI project

on Tuesday, 01 January 2013. Posted in Featured, Blog, Business Intelligence


The nuts and bolts of putting together a decent data warehouse are not that hard.

But in the corporation the reality is: you need to make sure you store the correct granularity of data, you have all of the right constraints in place (plus all those other helpful bits of metadata that the query optimizer uses), the right backup and recovery regime, some useful pre-built summaries, a fast ETL load process (preferably set-based), and enough grunt to deliver the data to the users. Not rocket science (but perhaps hard enough for people to want to use a consultant).

But the hard thing is to plan for those ad-hoc user queries created in a drag-and-drop query tool. Typically, the user picks columns from a palette of available data objects and drops them onto a canvas representing a report, graph, cross-tab or whatever. One thing you will probably encounter is that when the user tells you it is easy to create a report on the screen, it usually does not mean it is simple to get the information back from the data warehouse.

To get you started, I created a logical flow of deliverables that I think you should maintain during your data warehouse project.

P.S.: Right-click on the image and save it to your desktop; the format is 11x17.

Deliverable map for OBIEE

Designing your enterprise physical data model including sample deliverable

on Tuesday, 01 January 2013. Posted in Blog, Business Intelligence

 

The Physical Data Model defines the data mapping and migration path for data from the legacy system to the new application or system. This model should include the following information for each data element: description; source location; source data type; data transformation logic; target data type; target location.

To complete this deliverable, build the physical data model for the new application while considering the data model of the packaged application (e.g., Oracle, Salesforce, Kaba), if applicable. This data model defines the data architecture of the new application.

Below is a sample deliverable that contains a data mapping of a CRM to target system exercise.
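Each row of the data map ultimately describes transformation logic of roughly the shape below. This is a minimal sketch with hypothetical source and target names, not the actual mapping from the deliverable:

    -- Hypothetical mapping: legacy CRM contacts into the new application's customer table.
    -- Each selected column illustrates one data-map row: source location, transformation
    -- logic, and target location/data type (REGEXP_REPLACE assumes Oracle 10g or later).
    INSERT INTO target_app.customers (customer_id, full_name, phone, created_date)
    SELECT src.contact_id,
           TRIM(src.first_name) || ' ' || TRIM(src.last_name),   -- concatenate name parts
           REGEXP_REPLACE(src.phone_raw, '[^0-9]', ''),          -- strip phone formatting
           NVL(src.created_dt, SYSDATE)                          -- default missing dates
    FROM   legacy_crm.contacts src;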

The three important areas of this deliverable that I think must be documented are: 1. Data Map, 2. Relationships, 3. Version History (Traceability).

 

Data Map

physical_data_model 

 

Relationships

 

physical_data_model2

 

 

Version History (Traceability)

physical_data_model3

 

Copyright 2015 Appsconsultant.com. All rights reserved.