I recently had the opportunity to work with several of my colleagues at Decision First Technologies and SAPexperts to develop a comprehensive special report detailing the steps required to implement standalone SAP HANA using both SAP BusinessObjects 4.0 and SAP Data Services 4.0. Below are a summary of the article, a link to the official executive summary document, and a link to the article. To read the full article you must have an SAPexperts membership; if you do not have one, I highly recommend investing in it. An SAPexperts membership grants you access to information and content that will help you master the SAP tools.
Implementing SAP HANA, an End-to-End Perspective
by Jonathan Haun, Consulting Manager, Decision First Technologies; Christopher Hickman, Principal Consultant, Decision First Technologies; and Don Loden, Principal Consultant for BI, Decision First Technologies.
In this executive summary, learn tips about SAP HANA in the areas of creating a data model, the modeling process, and connecting SAP BusinessObjects to SAP HANA. This executive summary is an abbreviated version of a full, exclusive SAPexperts special report, “Implementing SAP HANA, an End-to-End Perspective,” available to SAPexperts subscribers. For more information about SAPexperts, go to www.sapexperts.com.
Download the full Executive Summary for free (PDF): SAP HANA Exec Summary
Link to the full article: View the Article
Follow the authors on Twitter:
@donloden – Don Loden
@chickman72 – Christopher Hickman
@jdh2N – Jonathan Haun
Follow SAPexperts on Twitter:
Jonathan,
Thank you very much for such a wonderful whitepaper. I thoroughly enjoyed it, and it left me with the questions below. Could you provide answers or helpful references?
Pg7: Are you suggesting we stick to mostly denormalized schemas, rather than star or snowflake schemas, because of the cost of joins?
Pg8: Using float types is a good idea, but how will this render with BW on HANA when we use VirtualProviders on HANA models to create BEx queries? That is, will it use BW data types or HANA data types?
Pg16: The date is stored as VARCHAR to leverage some functionality of HANA modeling, but I could not understand why. Could you help me find or understand the reason?
Pg17: After profiling, how do you pass the chosen correctable address records from Information Steward to Data Services jobs?
Pg21: Is schema space allocated individually for each DB user when the user is created, or is it the same for every user per the configuration? What happens to the views and data after a user is deleted?
Pg22: So hierarchies can be defined only at the attribute view level, not at the analytic or calculation view level?
Pg26: Could you provide a real-life example of time-based attributes and how they should be implemented?
Pg34: Do HANA views with referential joins act exactly like a universe, which strives to use only the relevant tables and joins based on the objects used in a query? Did you see any differences between these behaviors, or can you compare when they behave differently?
Pg35: You say that cardinality affected the activation of a view, but I would argue that is not correct behavior, since the HANA DB can check only the current data, and future loads might change the cardinality.
Pg37: Is the better performance of a calculation view based on a projected analytic view due to the referential joins and the smaller number of fields in the output?
Pg41: Do analytic privileges have to be created in the same package where the views were created?
Pg57: If a measure is mandatory for every report, then master data reports are not possible, even though we have the relevant dimension tables available in HANA?
Pg60: If data movement is going to happen in the case of an InfoSpace on a UNX, then even though it is based on a HANA view, it behaves like a UNX on a regular RDBMS? I am not sure how this compares to an OLAP UNV based on a BWA-backed BEx query.
Pg62: Is there any comparison of capabilities or performance between ODBC and JDBC connectivity?
Thanks again for such a wonderful article.
Vamsi
Thank you for your kind remarks. It will be difficult for me to answer all of your questions in detail; I make a living as a consultant, so giving away all the secrets for free is hard to stomach. In the spirit of the SAP community, though, I will answer the following questions.
Pg7: Are you suggesting we stick to mostly denormalized schemas, rather than star or snowflake schemas, because of the cost of joins?
The rule is not as black and white as you stated. The star schema is always a great starting point. However, when a dimension-to-fact join is subject to high cardinality, it is best to move the dimension values into the fact table to avoid the high cost of the join. We call this type of model a modified star schema.
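To make the idea concrete, here is a rough sketch in SQL. All table and column names are invented for illustration; they are not from the report:

    -- Classic star schema: SALES_FACT joins CUSTOMER_DIM on CUSTOMER_ID.
    -- When that join has high cardinality, carry the dimension attributes
    -- directly in the fact table instead:
    CREATE COLUMN TABLE SALES_FACT_MOD (
        ORDER_ID      INTEGER,
        CUSTOMER_ID   INTEGER,
        CUSTOMER_NAME VARCHAR(100), -- denormalized from CUSTOMER_DIM
        REGION        VARCHAR(40),  -- denormalized from CUSTOMER_DIM
        SALES_AMOUNT  DECIMAL(15,2)
    );
    -- Queries that filter or group on CUSTOMER_NAME or REGION now scan
    -- the columnar fact table without paying for the dimension join.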
Pg16: The date is stored as VARCHAR to leverage some functionality of HANA modeling, but I could not understand why. Could you help me find or understand the reason?
The primary key of the built-in SAP HANA date/time attribute is a numeric date in YYYYMMDD format stored as a VARCHAR value. We recommend that the foreign keys for dates be created as VARCHAR to support the built-in date attribute in SAP HANA. If you plan to create your own date dimension, you can revert to using INT types for dates.
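As a minimal sketch (the fact table and its columns are hypothetical; the generated time data in SAP HANA lives in _SYS_BI.M_TIME_DIMENSION, whose DATE_SAP column stores the date as a VARCHAR in YYYYMMDD form):

    -- Storing the date foreign key as VARCHAR(8), e.g. '20120131', lets it
    -- join directly to DATE_SAP in _SYS_BI.M_TIME_DIMENSION, the table
    -- behind the built-in time attribute.
    CREATE COLUMN TABLE SALES_FACT (
        ORDER_ID   INTEGER,
        ORDER_DATE VARCHAR(8),   -- joins to _SYS_BI.M_TIME_DIMENSION.DATE_SAP
        AMOUNT     DECIMAL(15,2)
    );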
Pg22: So hierarchies can be defined only at the attribute view level, not at the analytic or calculation view level?
Hierarchies can be defined within both attribute views and calculation views.
Thanks.