
Learn HANA in 1 Day By Krishna Rungta

Copyright 2016 - All Rights Reserved – Krishna Rungta ALL RIGHTS RESERVED. No part of this publication may be reproduced or transmitted in any form whatsoever, electronic, or mechanical, including photocopying, recording, or by any informational storage or retrieval system without express written, dated and signed permission from the author.

Table Of Contents

Chapter 1: Introduction
1. Introduction SAP HANA
2. Why to choose SAP HANA?
3. SAP HANA In-Memory Strategy
4. SAP HANA Advantages
5. SAP HANA Compared to BWA (Business Warehouse Accelerator)

Chapter 2: HANA Architecture
1. SAP HANA Architecture
2. SAP HANA Landscape
3. SAP HANA Sizing

Chapter 3: SAP HANA Studio
1. Pre-Requisites for SAP HANA Studio
2. Supported Platforms
3. Download & Install SAP HANA Studio
4. Add System in SAP HANA Studio
5. Work With SAP HANA Studio

Chapter 4: SQL Script, Data Type, Trigger, Sequence, Operator, Function, Expression, Identifiers
1. What is SAP HANA SQL
2. SAP HANA Identifiers
3. SAP HANA Data Types
4. SAP HANA Operators
5. SAP HANA SQL Functions
6. SAP HANA SQL Expressions
7. SAP HANA SQL Stored Procedures
8. SAP HANA Create Sequence
9. SAP HANA Create Trigger

Chapter 5: Data Provisioning
1. Overview of Replication Technology
2. SLT (SAP Landscape Transformation Replication Server)
3. SAP DS (SAP Data Services)
4. SAP HANA Direct Extractor Connection (DXC)
5. Flat File Upload to SAP HANA

Chapter 6: Modeling
1. SAP HANA Modeling Overview
2. Join Methods in SAP HANA
3. SAP HANA Best Practices for Creating Information Models
4. SAP HANA Attribute View
5. SAP HANA Analytic View
6. SAP HANA Calculation View
7. SAP HANA Analytic Privileges
8. SAP HANA Information Composer
9. SAP HANA Import and Export
10. SAP HANA Performance Optimization Techniques

Chapter 7: Security
1. SAP HANA Security Overview
2. SAP HANA Authentication
3. SAP HANA Authorization
4. SAP HANA User and Role Administration
5. SAP HANA License Management
6. SAP HANA Auditing

Chapter 8: Reporting
1. Reporting in SAP BI (Business Intelligence) Overview
2. Reporting in Webi of SAP BusinessObjects (BO) on HANA
3. Reporting in Crystal Reports
4. Reporting in SAP Lumira
5. Reporting in Microsoft Excel

Chapter 1: Introduction

SAP HANA is the latest in-memory database and platform from SAP, which can be deployed on-premises or in the cloud, and it has seen unprecedented adoption among SAP customers. SAP HANA is a combination of hardware and software that integrates different components such as the SAP HANA Database, the SAP SLT (System Landscape Transformation) Replication Server, the SAP HANA Direct Extractor Connection, and Sybase replication technology.

Introduction SAP HANA

SAP HANA has two parts:

1. SAP HANA Database – a hybrid in-memory database, and the heart of SAP's in-memory technology. In SAP HANA, database tables are of two types: row store and column store.
2. SAP HANA Platform – a development platform with an in-memory data store, which allows customers to analyze large volumes of data in real time. It provides the infrastructure and tools for building high-performance applications based on SAP HANA Extended Application Services (SAP HANA XS).

There are different SAP HANA editions; some of them are described below:

SAP HANA Platform Edition – provides the core database technology. It integrates SAP components such as the SAP HANA database, SAP HANA Studio, and the SAP HANA clients. It is for customers who want to use ETL-based replication and already have a license for SAP BusinessObjects Data Services.

SAP HANA Enterprise Edition – contains the data provisioning components (SLT, BODS, DXC) in addition to the core database technology. It is for customers who want to use either trigger-based or ETL-based replication and do not have all of the necessary licenses for SAP BusinessObjects Data Services.

SAP HANA Extended Edition – contains data provisioning (Sybase) features beyond the Platform and Enterprise editions. It is for customers who want to use the full potential of all available replication scenarios, including log-based replication.

The following diagram shows the differences between the editions –

Why to choose SAP HANA?
SAP HANA is a next-generation in-memory business platform. It accelerates analytics and applications on a single in-memory platform. Below are a few reasons to choose SAP HANA:

- Real Time – SAP HANA provides real-time data provisioning and real-time reporting.
- Speed – SAP HANA provides high-speed processing of massive data volumes thanks to its in-memory technology.
- Any Data/Source – SAP HANA can access various data sources, including structured and unstructured data from SAP and non-SAP sources.
- Cloud – the SAP HANA database and applications can be deployed in a cloud environment.
- Simplicity – SAP HANA reduces the effort behind ETL processes, data aggregation, indexing, and mapping.
- Cost – SAP claims that SAP HANA software can reduce a company's total IT cost.
- Choice – SAP HANA is supported by different hardware vendors and software providers, so users can choose the option that best matches their requirements.

SAP HANA In-Memory Strategy
SAP HANA has many processes running on the SUSE Linux server, and the operating system manages the reservation of memory for all of them. When SAP HANA starts up, the Linux OS reserves memory for the program code, the program stack, and static data. The OS can dynamically reserve additional data memory upon request from the SAP HANA server. SAP HANA creates a memory pool for managing and tracking memory consumption. The memory pool is used to store all the in-memory data and system tables, thread stacks, temporary computations, and all other data structures required for managing the database.

When more memory is required for table growth or temporary computations, the SAP HANA memory manager obtains it from this pool. For an overview, check the Memory Overview feature of SAP HANA Studio. To access it, right-click a system and choose Configuration and Monitoring -> Open Memory Overview from the context menu, as follows:

SAP HANA Advantages
The advantages of SAP HANA are:

- With in-memory technology, users can explore and analyze all transactional and analytic data in real time from virtually any data source.
- Data can be aggregated from many sources.
- Real-time replication services can be used to access and replicate data from SAP ERP.
- SQL and MDX interfaces provide support for third-party tools.
- It provides an information modeling and design environment.

SAP HANA Compared to BWA (Business Warehouse Accelerator)
SAP BW Accelerator: BWA is an in-memory accelerator for BW, focused on improving the query performance of SAP NetWeaver BW. It is specifically designed to accelerate BW queries, reducing data acquisition time by persisting copies of the InfoCubes.

SAP HANA: SAP HANA is an in-memory database and platform for high-performance analytic reports and applications. Data can be loaded into SAP HANA from SAP and non-SAP source systems through SLT, BODS, DXC, and Sybase, and can be viewed using SAP BO/BI, Crystal Reports, Excel, etc. SAP HANA can also serve as the in-memory database for SAP BW, and in this way it improves the overall performance of SAP NetWeaver BW.

Summary:
- SAP HANA is an in-memory database and application platform, which runs on SAP-certified hardware and software.
- SAP HANA has three editions: Platform, Enterprise, and Extended.
- SAP HANA can load data from SAP and non-SAP data sources through SLT, BODS, DXC, and Sybase.
- SAP HANA provides real-time provisioning and reporting.
- SAP HANA provides high-performance, real-time analytic reporting.
- SAP HANA reduces total IT cost.

Chapter 2: HANA Architecture
The SAP HANA database is a main-memory-centric data management platform. It runs on SUSE Linux Enterprise Server and is built in C++. The SAP HANA database can be distributed across multiple machines. Its advantages are:

- SAP HANA is very fast, because all data is held in memory and there is no need to load it from disk.
- SAP HANA can be used for both OLAP (online analytical processing) and OLTP (online transaction processing) on a single database.

The SAP HANA database consists of a set of in-memory processing engines. The calculation engine is the main in-memory processing engine in SAP HANA. It works with other processing engines such as the relational database engine (row and column engine) and the OLAP engine. Relational database tables reside in the column or row store. There are two storage types for SAP HANA tables:

1. Row-type storage (for row tables).
2. Column-type storage (for column tables).

Text data and graph data reside in the text engine and graph engine respectively. There are some more engines in the SAP HANA database; data may be stored in these engines as long as enough space is available.

SAP HANA Architecture
Data is compressed using different compression techniques (e.g. dictionary encoding, run-length encoding, sparse encoding, cluster encoding, indirect encoding) in the SAP HANA column store.

When the main-memory limit is reached in SAP HANA, whole database objects (tables, views, etc.) that are not in use are unloaded from main memory and saved to disk. Which objects are unloaded is governed by application semantics, and they are reloaded into main memory from disk when they are required again. Under normal circumstances, the SAP HANA database manages the unloading and loading of data automatically. However, the user can load and unload the data of an individual table manually by right-clicking the table in its schema in SAP HANA Studio and selecting "Unload" or "Load".

The SAP HANA server consists of:

1. Index Server
2. Preprocessor Server
3. Name Server
4. Statistics Server
5. XS Engine

1. SAP HANA Index Server
The index server is the main server of the SAP HANA database:

- It is the main SAP HANA database component.
- It contains the actual data stores and the engines for processing the data.
- It processes incoming SQL and MDX statements.

Below is the architecture of the index server.

SAP HANA Index Server overview

1. Session and Transaction Manager: the session component manages sessions and connections for the SAP HANA database, while the transaction manager coordinates and controls transactions.
2. SQL and MDX Processor: the SQL processor component queries data and sends it to the query processing engine (SQL / SQLScript / R / calc engine). The MDX processor queries and manipulates multidimensional data (e.g. analytic views in SAP HANA).
3. SQL / SQLScript / R / Calc Engine: this component executes SQL / SQLScript and converts calculation data into a calculation model.
4. Repository: the repository maintains the versioning of SAP HANA metadata objects (e.g. attribute views, analytic views, stored procedures).
5. Persistence Layer: this layer provides the built-in "disaster recovery" feature of the SAP HANA database. Backups are saved to it as savepoints in the data volume.

2. Preprocessor Server
This server is used in text analysis and extracts data from text when the search function is used.

3. Name Server
This server contains all information about the system landscape. In a distributed system, the name server holds information about each running component and about the location of data on each server.

4. Statistics Server
The statistics server is responsible for collecting data related to the status, resource allocation/consumption, and performance of the SAP HANA system.

5. XS Server
The XS server contains the XS engine. It allows external applications and developers to use the SAP HANA database via the XS engine client. External client applications can use HTTP to transmit data via the XS engine's HTTP server.

SAP HANA Landscape
"HANA" means High-Performance Analytic Appliance, a combination of a hardware and a software platform. Due to changes in computer architecture, more powerful machines are available in terms of CPU, RAM, and hard disk. SAP HANA is the solution for this performance bottleneck: all data is stored in main memory, so there is no need to frequently transfer data from disk I/O to main memory. Below are SAP HANA innovations in the field of hardware/software.

There are two types of relational data stores in SAP HANA: the row store and the column store.

Row Store
It works like a traditional database (e.g. Oracle, SQL Server). The only difference is that all data is stored in the row-storage area in SAP HANA memory, unlike a traditional database, where data is stored on the hard drive.

Column Store
The column store is the part of the SAP HANA database that manages data column-wise in SAP HANA memory. Column tables are stored in the column-store area. The column store is optimized for read operations while still providing good write performance, thanks to the two data structures described below.
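The storage type is chosen when a table is created. As a minimal sketch (table and column names are illustrative, not from the book):

```sql
-- Row table: suited to frequent single-row inserts and updates
CREATE ROW TABLE "DEMO_ROW" (
  "ID"   INTEGER,
  "NAME" NVARCHAR(50)
);

-- Column table: suited to analytics, scans, and compression
CREATE COLUMN TABLE "DEMO_COL" (
  "ID"   INTEGER,
  "NAME" NVARCHAR(50)
);
```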

Main Storage
Main storage contains the main part of the data. In main storage, a suitable data compression method (dictionary encoding, cluster encoding, sparse encoding, run-length encoding, etc.) is applied to compress the data, with the purpose of saving memory and speeding up searches.

Write operations on the compressed data in main storage would be costly, so write operations do not modify main storage directly. Instead, all changes are written to a separate area of the column store known as "delta storage." Delta storage is optimized for write operations and uses lighter compression. Write operations are not allowed on main storage but are allowed on delta storage; read operations are allowed on both storages. We can manually load data into main memory with the "Load into Memory" option and unload it with the "Unload from Memory" option, as shown below.
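The same load and unload can also be triggered from the SQL console; a sketch (the schema and table names are illustrative):

```sql
-- Load all columns of the table into main memory
LOAD "DHK_SCHEMA"."DEMO_COL" ALL;

-- Unload the table from main memory back to disk
UNLOAD "DHK_SCHEMA"."DEMO_COL";
```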

Delta Storage
Delta storage is used for write operations and uses basic compression. All uncommitted modifications to column-table data are stored in delta storage. When we want to move these changes into main storage, we use the "delta merge" operation in SAP HANA Studio, as below –

The purpose of the delta merge operation is to move the changes collected in delta storage into main storage. After a delta merge is performed on a column table, the content of main storage is saved to disk and the compression is recalculated.

Process of moving data from delta to main storage during a delta merge:
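A delta merge can also be requested in SQL; a sketch (the table name is illustrative):

```sql
-- Merge the table's delta storage into its main storage
MERGE DELTA OF "DHK_SCHEMA"."DEMO_COL";
```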

There is a buffer store (L1-delta) which is row storage, so in SAP HANA a column table briefly acts like a row store thanks to the L1-delta.

1. The user runs an update/insert query on the table (the physical operator is the SQL statement).
2. Data first goes to the L1-delta (uncommitted data).
3. Data then moves on to the L2-delta buffer, which is column-oriented (committed data).
4. When the L2-delta process is complete, the data goes to main storage.

So column storage is both write-optimized and read-optimized, due to the L1-delta and main storage respectively. The L1-delta contains all uncommitted data; committed data moves to the main store through the L2-delta, and from the main store data goes to the persistence layer (the arrow here is a physical operator that sends the SQL statement to the column store). After the SQL statement is processed in the column store, data goes to the persistence layer.

E.g. below is a row-based table –

Table data is stored on disk in a linear format. In the row store, the table is stored row by row, one record after another. In the column store, the values of each column are stored contiguously, so the data is stored column-wise in linear format on the disk. Column-wise data can be compressed well by compression techniques, so the column store has the advantage of memory saving.

SAP HANA Sizing
Sizing is the process of determining the hardware requirements of an SAP HANA system, such as RAM, hard disk, and CPU. The most important sizing component is memory, the second is CPU, and the third is disk, whose sizing depends entirely on memory and CPU. In an SAP HANA implementation, one of the critical tasks is to determine the right server size according to business requirements. SAP HANA DB differs in sizing from a normal DBMS in terms of –

- Main memory requirement for SAP HANA (memory sizing is determined by the metadata and transaction data in SAP HANA).
- CPU requirement for SAP HANA (the CPU forecast is an estimate, not exact).
- Disk space requirement for SAP HANA (calculated for data persistence and for logging data).

The application server CPU and application server memory remain unchanged. For the sizing calculation, SAP has provided various guidelines and methods; we can use the methods below:

1. Sizing using an ABAP report.
2. Sizing using a DB script.
3. Sizing using the Quick Sizer tool.

Using the Quick Sizer tool, the requirement will be displayed in the format below –

Chapter 3: SAP HANA Studio
SAP HANA Studio is an Eclipse-based integrated development environment (IDE) for the development and administration of the SAP HANA database in the form of a GUI tool. SAP HANA Studio runs on the client/developer machine and connects to the SAP HANA server; it can access local or remote SAP HANA databases. Using SAP HANA Studio, we can:

- Manage the SAP HANA database.
- Create and manage user authorizations.
- Create new, or modify existing, data models.

Pre-Requisites for SAP HANA Studio

Supported Platforms
SAP HANA Studio runs on the following platforms:

- Microsoft Windows x32 and x64 versions: Windows XP, Windows Vista, Windows 7, Windows 8.
- SUSE Linux Enterprise Server: x86, 64-bit; Red Hat Enterprise Linux 6.5.
- Mac OS 10.9 or higher.

System Requirements
Java JVM – during the installation and updating of SAP HANA Studio, a JVM is installed or updated.

SAP HANA Client – the software through which other databases and applications can connect to SAP HANA. The SAP HANA client can be installed on UNIX/Linux and Microsoft Windows, and also on the SAP HANA server host during server installation. The SAP HANA client is installed separately from SAP HANA Studio.

Download & Install SAP HANA Studio

Installation Path
The default installation paths by OS and version are:

- Microsoft Windows (32- & 64-bit): C:\Program Files\SAP\hdbstudio
- Linux x86, 64-bit: /usr/sap/hdbstudio
- Mac OS, 64-bit: /Applications/sap/hdbstudio.app

Software Download
You can download SAP HANA Studio and the SAP HANA client from the SAP website. Select the file to download according to your OS –

Installation on Microsoft Windows
Install SAP HANA Studio in the default directory (with administrator privileges) or in the user home folder (without administrator privileges). Click hdbsetup.exe to install SAP HANA Studio.

The SAP HANA Lifecycle Management screen appears.

The default installation folder is C:\Program Files\SAP\hdbstudio.

Step 1) Define Studio Properties

1. Select "Install new SAP HANA Studio".
2. Click the Next button.

The Select Features screen appears as below –

Step 2) Select Features

1. The Select Features screen is used to select features.
2. Select the features:
   - SAP HANA Studio Administration – toolset for various administration tasks, excluding transport.
   - SAP HANA Studio Application Development – toolset for developing SAP HANA native applications (XS and UI5 tools, excluding SAPUI5).
   - SAP HANA Studio Database Development – toolset for content development.
3. Click the Next button.

Step 3) Review and Confirm

1. The Review & Confirm screen appears.
2. A summary of the SAP HANA Studio installation is displayed.
3. Click the Install button.

Step 4 & 5) Install Software and Finish

1. The installation progress screen appears, and afterwards the installer goes to the finish page.
2. The message "You have successfully installed the SAP HANA Studio" is displayed.
3. Click the Finish button.

Run SAP HANA Studio
Now go to the default installation folder, "C:\Program Files\SAP\hdbstudio". It contains the hdbstudio.exe file; by right-clicking on it, you can create a shortcut on the desktop.

When you run "hdbstudio.exe", the Workspace Launcher screen shown below opens.

1. A workspace is selected by default; we can change the workspace location with the Browse option. The workspace is used to store studio configuration settings and development artifacts.
2. Select the "Use this as the default and do not ask again" option to prevent this screen from popping up for workspace selection every time SAP HANA Studio is opened.
3. Click the OK button.

The SAP HANA Studio Welcome screen appears.

On the Welcome screen, the different perspectives are displayed. The details of each perspective are as follows –

1. Administration Console Perspective

This perspective is used to configure, administer, and monitor the SAP HANA database. Several views and editors are available in the SAP HANA Administration Console. The Systems view toolbar is used for administration; it looks as below –

Below is a summary of the system-level editors and views available in the SAP HANA Administration Console.

Systems
Detail: The Systems view provides a hierarchical view of all the SAP HANA systems managed in SAP HANA Studio, with their contents (catalog, content, etc.).
Path: Window -> Show View -> Systems.

System Monitor
Detail: System Monitor is an editor that provides an overview of all SAP HANA databases on one screen. We can see the details of an individual system in System Monitor by drilling down.
Path: System Monitor button on the Systems view toolbar.

Administration
Detail: Used for performing administration and monitoring tasks.
Path: 1. From the Systems view toolbar. 2. By double-clicking on a system.

Administration (Diagnosis Mode)
Detail: This editor is used in an emergency to perform monitoring and operations on a system in diagnosis mode, when either no SQL connection is available or the SQL connection is overloaded.
Path: 1. From the Administration drop-down list icon. 2. Ctrl+Shift+O.

Backup
Detail: Used for performing backups and backup administration.
Path: Expand the system and choose Backup.

Security
Detail: This editor is used for managing: 1. Password policy. 2. Data volume encryption.
Path: Security option in the Security folder of the system.

SQL Console
Detail: Used for entering, executing, and analyzing SQL statements in the SQL console.
Path: From the Systems view toolbar, choose SQL Console.

2. Modeler Perspective
This perspective is used to create modeling objects and manage database objects in the SAP HANA system. It is used by modelers for the following activities:

- Create/modify tables, functions, indexes, views, sequences, synonyms, and triggers.
- Create modeling objects such as attribute views, analytic views, calculation views, analytic privileges, procedures, and decision tables.
- Data provisioning to the SAP HANA database from SAP/non-SAP sources through SLT, BODS, and DXC.

3. Development Perspective
This perspective is used to develop applications on HANA for the web environment. The programming technologies used in this perspective include JavaScript, jQuery, OData, etc.

4. Lifecycle Management Perspective
This perspective is used to install and update software for the SAP HANA database and SAP HANA Studio. Lifecycle management is also used to transport objects from one HANA system to another.

Add System in SAP HANA Studio
To work with the SAP HANA database, the user needs to connect to it from SAP HANA Studio. We build a connection to the SAP HANA database as follows –

Step 1) Click on the "Add System" icon on the Systems view toolbar, as below –

Step 2) Provide the following details –

1. Host Name – enter the SAP HANA database host name here.
2. Instance Number – the two-digit instance number.
3. Description – a description of the system, for better understanding.
4. Click the Next button.

A connection properties screen appears, in which we need to enter the SAP HANA database user and password.

1. Enter the user name and password for the SAP HANA database to access it from SAP HANA Studio.
2. Click the Finish button.

If there is no error, the connection is successful, and the system name is added in SAP HANA Studio under the Systems node.

Work With SAP HANA Studio
To log in to the SAP HANA database through SAP HANA Studio, follow the steps below:

1. Click on the added system, here "DB (HANAUSER)".
2. A popup screen asks for the user name/password. Enter the user name and password for the HANA database.
3. Click the OK button.

After logging in through SAP HANA Studio, we get the screen below for the selected HANA system.

In HANA Studio, the following sub-nodes exist under a HANA system:

Catalog
The SAP HANA Studio Catalog node represents the SAP HANA data dictionary, in which database objects (tables, views, procedures, indexes, triggers, synonyms, etc.) are stored in schema folders. When a user is created in SAP HANA, a schema of the same name is created in the SAP HANA database by default; this is the user's default schema when the user creates any database object. A schema is used to group database objects: it defines a container that holds objects such as tables, views, triggers, procedures, sequences, functions, indexes, and synonyms. A schema can be created in the SQL editor with the SQL below:

CREATE SCHEMA "SCHEMA_NAME" OWNED BY "USERNAME";

Here "SCHEMA_NAME" and "USERNAME" should be changed according to your requirements. After refreshing the Catalog node, the newly created schema will be displayed. The schema "DHK_SCHEMA" was created with this SQL.
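As a concrete sketch (the schema and owner names follow the book's examples; the verification query against the SYS.SCHEMAS system view is an addition):

```sql
-- Create a schema owned by the HANAUSER database user
CREATE SCHEMA "DHK_SCHEMA" OWNED BY "HANAUSER";

-- Verify it exists by querying the system view of schemas
SELECT SCHEMA_NAME, SCHEMA_OWNER
  FROM "SYS"."SCHEMAS"
 WHERE SCHEMA_NAME = 'DHK_SCHEMA';
```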

All database objects are stored in the respective folders of their schema, as below –

Provisioning
Provisioning is used for selecting source metadata and importing metadata and data into SAP HANA. There are two categories of provisioning tools:

1. SAP HANA built-in tools (flat file, Smart Data Access, smart data streaming, etc.).
2. External tools (SLT, BODS, DXC, etc.).

Under the SAP HANA Studio Provisioning node, SAP offers a built-in feature called "Smart Data Access", which combines data from heterogeneous data sources such as Hadoop, Teradata, Oracle, and Sybase. Data from these sources is represented in the SAP HANA database as "virtual tables". The restriction on virtual tables is that they can only be used to build calculation views in SAP HANA.

Content
The Content node is the design-time repository, which holds all information about data models in packages. All information views (attribute views, analytic views, calculation views, etc.) are created in a package under the Content node. A package is used for grouping related information objects in a structured way. A package can be created by right-clicking on the Content node -> New -> Package.

Security
The Security node in SAP HANA Studio contains three sub-nodes:

1. Security – used to create audit policies, password policies, etc.
2. Users – used to create/modify/delete users. Roles and privileges are also granted to users from this screen.
3. Roles – used to create/modify/delete roles. Privileges are added to or removed from roles here.

Chapter 4: SQL Script, Data Type, Trigger, Sequence, Operator, Function, Expression, Identifiers

Most RDBMS databases use SQL as the database language; the reasons for its popularity are that it is powerful, vendor-independent, and standardized. SAP HANA also supports SQL; in SAP HANA, SQL is the main database language.

What is SAP HANA SQL
SQL stands for Structured Query Language. It is a standard language for communicating with relational databases such as Oracle, MySQL, etc. SQL is used to store, retrieve, and modify data in the database. Using SQL in SAP HANA, we can perform the following jobs:

- Schema definition and use (CREATE SCHEMA).
- DML statements (SELECT, UPDATE, INSERT).
- DDL statements (CREATE, ALTER, DROP).
- DCL statements (GRANT, REVOKE).
- System management.
- Session management.
- Transaction management.

Comments in SQL
We can add comments to improve the readability and maintainability of SQL statements. Comments can be written in two ways:

- Single-line comment: double hyphens "--". The rest of the line is a comment.
- Multi-line comment: "/* */". All commented text is ignored by the SQL parser.
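A short sketch of both comment styles (the queries simply select constants from the built-in DUMMY table):

```sql
-- This is a single-line comment
SELECT 1 AS answer FROM DUMMY;  -- a comment may also follow a statement

/* This is a
   multi-line comment */
SELECT 2 AS answer FROM DUMMY;
```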

SAP HANA Identifiers
Identifiers are used to represent names in SQL statements (e.g. table names, view names, column names, index names, synonym names, procedure names, function names). There are two types of identifiers: delimited identifiers and undelimited identifiers.

- Delimited identifiers – enclosed in the delimiter, double quotes "". The identifier can then contain any character, including special characters.
- Undelimited identifiers – (table names, column names) must start with a letter and cannot contain any symbols other than digits or an underscore '_'.

There are two types of quotation marks used for delimiting:

- Single quotation marks (' ') – used to delimit strings.
- Double quotation marks (" ") – used to delimit identifiers.
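A sketch of the distinction (the table names are illustrative):

```sql
-- Undelimited identifier: letters, digits, and underscore only
CREATE COLUMN TABLE SALES_2016 ("ID" INTEGER);

-- Delimited identifier: double quotes allow special characters and case sensitivity
CREATE COLUMN TABLE "Sales-2016!" ("ID" INTEGER);

-- Single quotes delimit a string value, not an identifier
SELECT 'HANA' AS product FROM DUMMY;
```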

SAP HANA Data Types
In the SAP HANA database, the SQL data types are as below. Each entry shows the SQL data type, its column store type in parentheses, and its default format or description.

Date/Time Types
- DATE (CS_DAYDATE) – default format 'YYYY-MM-DD'.
- TIME (CS_SECONDTIME) – default format 'HH24:MI:SS'.
- SECONDDATE (CS_SECONDDATE) – default format 'YYYY-MM-DD HH24:MI:SS'.
- TIMESTAMP (CS_LONGDATE) – default format 'YYYY-MM-DD HH24:MI:SS.FFn'.

Numeric Types
- TINYINT (CS_INT) – 8-bit unsigned integer, range 0 to 255.
- SMALLINT (CS_INT) – 16-bit signed integer, range -32,768 to 32,767.
- INTEGER (CS_INT) – 32-bit signed integer, range -2,147,483,648 to 2,147,483,647.
- BIGINT (CS_FIXED(18,0)) – 64-bit signed integer, range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
- DECIMAL(p,s) (CS_FIXED(p-s,s)) – the precision p can range from 1 to 38; the scale s can range from 0 to p. If precision and scale are not specified, DECIMAL becomes a floating-point decimal number.
- SMALLDECIMAL (CS_SDFLOAT) – a floating-point decimal number. The precision and scale should be within the range 1 to 16 for precision and -369 to 368 for scale, depending on the stored value. SMALLDECIMAL is only supported for column store tables.
- REAL (CS_FLOAT) – single-precision 32-bit floating-point number.
- DOUBLE (CS_DOUBLE) – double-precision 64-bit floating-point number.
- FLOAT(n) (CS_DOUBLE) – a 32-bit or 64-bit real number, where n specifies the number of bits and should be in the range 1 to 53.

Boolean Type
- BOOLEAN (CS_INT) – TRUE, FALSE, and UNKNOWN (NULL).

Character String Types
- VARCHAR(n) (CS_STRING) – variable-length character string, where n specifies the maximum length in bytes and is an integer between 1 and 5000.
- NVARCHAR(n) (CS_STRING) – variable-length Unicode character string, where n indicates the maximum length in characters and is an integer between 1 and 5000.
- ALPHANUM(n) (CS_ALPHANUM) – variable-length alphanumeric characters, where n indicates the maximum length and is an integer between 1 and 127.
- SHORTTEXT(n) (CS_STRING) – variable-length character string that provides text search and string search features. This data type can be defined for column store tables, but not for row store tables.

Binary Types
- VARBINARY(n) (CS_RAW) – stores binary data of a specified maximum length in bytes, where n indicates the maximum length and is an integer between 1 and 5000.

LOB Types (Large Object Types)
- BLOB (CS_RAW) – large amounts of binary data.
- CLOB (CS_STRING) – ASCII character data.
- NCLOB (CS_STRING) – large Unicode character object.
- TEXT (CS_STRING) – provides text search features. This data type can be defined for column store tables, but not for row store tables.
- BINTEXT (CS_STRING) – similar to TEXT and thus supports text search features, but it is possible to insert binary data. This data type can be defined for column tables, but not for row tables.

Multi-valued Types
- ARRAY – stores collections of values of the same data type, where each element is associated with exactly one position. Arrays can contain NULL values to represent the absence of a value.

SAP HANA Operator

SAP HANA operators can be used for calculation, value comparison or to assign values. SAP HANA contains the following operators:
Unary and Binary Operators
Arithmetic Operators
String Operators
Comparison Operators
Logical Operators
Set Operators

Unary and Binary Operators

Unary: A unary operator applies to one operand. Examples: unary plus operator (+), unary negation operator (-), logical negation (NOT).

Binary: A binary operator applies to two operands. Examples: multiplicative operators (*, /), additive operators (+, -), comparison operators (=, !=, <, >, <=, >=), logical operators (AND, OR).

Arithmetic Operators

Addition (+), Subtraction (-), Multiplication (*), Division (/).

String Operators

A string operator is a concatenation operator which combines two items, such as strings, expressions or constants, into one. Two vertical bars "||" form the concatenation operator.
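For example, the concatenation operator can be tried directly against the built-in DUMMY table (a minimal sketch; the column alias is illustrative):

```sql
SELECT 'SAP' || ' ' || 'HANA' AS PRODUCT_NAME
  FROM DUMMY;
-- Returns one row: PRODUCT_NAME = 'SAP HANA'
```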

Comparison Operators

Comparison operators are used to compare two operands. The comparison operators are:
Equal to (=)
Greater than (>)
Less than (<)
Greater than or equal to (>=)
Less than or equal to (<=)
Not equal (!=, <>)

Logical Operators

Logical operators are used in search criteria, e.g. WHERE condition1 AND / OR / NOT condition2. The logical operators are:
AND (e.g. WHERE condition1 AND condition2): the combined condition is true only if both condition1 and condition2 are true; otherwise it is false.
OR (e.g. WHERE condition1 OR condition2): the combined condition is true if condition1 or condition2 is true, and false only if both conditions are false.
NOT (e.g. WHERE NOT condition): the NOT condition is true if the condition is false.
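As a small illustration, assuming the TABLE1 sample table created in the Set Operators section below (holding the values P, Q, R, S, T):

```sql
-- TABLE1 holds the single-character values P, Q, R, S, T
SELECT ELEMENT
  FROM DHK_SCHEMA.TABLE1
 WHERE ELEMENT > 'Q' AND NOT (ELEMENT = 'T');
-- ELEMENT > 'Q' matches R, S, T; the NOT condition removes T,
-- so the query returns R and S
```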

Set Operators

UNION: Combines two or more select statements or queries without duplicates.
UNION ALL: Combines two or more select statements or queries, including all duplicate rows.
INTERSECT: Combines two or more select statements or queries, and returns all common rows.
EXCEPT: Takes the output from the first query and removes the rows selected by the second query.

For example, assume two tables (TABLE1, TABLE2) in which some values are common. We use the set operators (UNION, UNION ALL, INTERSECT, EXCEPT) on these two tables in SQL as below.

Table1 creation SQL script:

CREATE COLUMN TABLE DHK_SCHEMA.TABLE1 (
    ELEMENT CHAR(1),
    PRIMARY KEY (ELEMENT)
);
INSERT INTO DHK_SCHEMA.TABLE1 VALUES ('P');
INSERT INTO DHK_SCHEMA.TABLE1 VALUES ('Q');
INSERT INTO DHK_SCHEMA.TABLE1 VALUES ('R');
INSERT INTO DHK_SCHEMA.TABLE1 VALUES ('S');
INSERT INTO DHK_SCHEMA.TABLE1 VALUES ('T');

Table2 creation SQL script:

CREATE COLUMN TABLE DHK_SCHEMA.TABLE2 (
    ELEMENT CHAR(1),
    PRIMARY KEY (ELEMENT)
);
INSERT INTO DHK_SCHEMA.TABLE2 VALUES ('S');
INSERT INTO DHK_SCHEMA.TABLE2 VALUES ('T');
INSERT INTO DHK_SCHEMA.TABLE2 VALUES ('U');
INSERT INTO DHK_SCHEMA.TABLE2 VALUES ('V');
INSERT INTO DHK_SCHEMA.TABLE2 VALUES ('W');

Note: Here "DHK_SCHEMA" is a schema name; the user can change the schema name in the SQL accordingly. Set operator examples are as below.

UNION (combines the results of two or more queries with no duplicates):

SELECT * FROM (
    SELECT ELEMENT FROM DHK_SCHEMA.TABLE1
    UNION
    SELECT ELEMENT FROM DHK_SCHEMA.TABLE2
) ORDER BY ELEMENT;

UNION ALL (combines the results of two or more queries with all duplicates):

SELECT * FROM (
    SELECT ELEMENT FROM DHK_SCHEMA.TABLE1
    UNION ALL
    SELECT ELEMENT FROM DHK_SCHEMA.TABLE2
) ORDER BY ELEMENT;

INTERSECT (combines the results of two or more queries, returning all common rows):

SELECT * FROM (
    SELECT ELEMENT FROM DHK_SCHEMA.TABLE1
    INTERSECT
    SELECT ELEMENT FROM DHK_SCHEMA.TABLE2
) ORDER BY ELEMENT;

EXCEPT (takes the output from the first query and removes the rows selected by the second query):

SELECT * FROM (
    SELECT ELEMENT FROM DHK_SCHEMA.TABLE1
    EXCEPT
    SELECT ELEMENT FROM DHK_SCHEMA.TABLE2
) ORDER BY ELEMENT;
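With the sample data above (TABLE1 holds P, Q, R, S, T and TABLE2 holds S, T, U, V, W), the four operators produce the following results; the expected rows are shown as comments, and the ORDER BY makes the row order deterministic:

```sql
SELECT * FROM (SELECT ELEMENT FROM DHK_SCHEMA.TABLE1
               UNION
               SELECT ELEMENT FROM DHK_SCHEMA.TABLE2) ORDER BY ELEMENT;
-- P, Q, R, S, T, U, V, W          (duplicates removed)

SELECT * FROM (SELECT ELEMENT FROM DHK_SCHEMA.TABLE1
               UNION ALL
               SELECT ELEMENT FROM DHK_SCHEMA.TABLE2) ORDER BY ELEMENT;
-- P, Q, R, S, S, T, T, U, V, W    (duplicates kept, 10 rows)

SELECT * FROM (SELECT ELEMENT FROM DHK_SCHEMA.TABLE1
               INTERSECT
               SELECT ELEMENT FROM DHK_SCHEMA.TABLE2) ORDER BY ELEMENT;
-- S, T                            (common rows only)

SELECT * FROM (SELECT ELEMENT FROM DHK_SCHEMA.TABLE1
               EXCEPT
               SELECT ELEMENT FROM DHK_SCHEMA.TABLE2) ORDER BY ELEMENT;
-- P, Q, R                         (rows of TABLE1 not in TABLE2)
```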

SAP HANA SQL FUNCTIONS

SAP HANA provides the following categories of SQL functions:

1. Data Type Conversion Functions: used to convert one data type to another. E.g. CAST, TO_ALPHANUM, TO_BIGINT, TO_BINARY, etc.
2. Date Time Functions: used to convert dates and times into different formats. E.g. ADD_DAYS, ADD_MONTHS, ADD_SECONDS, etc.
3. Fulltext Functions: used for text search. E.g. SCORE, etc.
4. Number Functions: take a numeric value, or a string with numeric characters, as input and return numeric values. E.g. ABS, ROUND, POWER, etc.
5. String Functions: take a string as input, process it and return a value according to the function. E.g. ASCII, CHAR, CONCAT, etc.
6. Window Functions: let the user divide the result set of a query into groups of rows called window partitions. E.g. RANK(), DENSE_RANK(), ROW_NUMBER(), etc.
7. Miscellaneous Functions: some further functions used for miscellaneous jobs. E.g. CONVERT_CURRENCY, CURRENT_SCHEMA, etc.
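A few of these categories can be combined in a single query against the built-in DUMMY table; a quick sketch (the column aliases are illustrative):

```sql
SELECT TO_BIGINT('42')           AS CONVERTED,   -- data type conversion function
       ADD_DAYS(CURRENT_DATE, 7) AS NEXT_WEEK,   -- date/time function
       ROUND(123.456, 2)         AS ROUNDED,     -- number function (123.46)
       CONCAT('SAP ', 'HANA')    AS PRODUCT      -- string function ('SAP HANA')
  FROM DUMMY;
```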

SAP HANA SQL EXPRESSIONS

An SQL expression is a clause that can be used to return values. There are 4 types of SQL expressions:

Case Expressions: the user can apply IF-THEN-ELSE logic without writing a procedure.

Function Expressions: SQL built-in functions can be used as expressions.

Aggregate Expressions: aggregate functions are used to calculate a single value from the values of multiple rows of a column. The aggregate functions are:

COUNT: Counts the number of rows returned by the query.
MIN: Returns the minimum value of the expression.
MAX: Returns the maximum value of the expression.
SUM: Returns the sum of the expressions.
AVG: Returns the arithmetic mean of the expressions.
STDDEV: Returns the standard deviation of the given expressions, as the square root of the VARIANCE function.
VAR: Returns the variance of the expressions, as the square of the standard deviation.

Subqueries in Expressions: a subquery is a select statement enclosed in parentheses and used as input in a main select statement.
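The case, aggregate and subquery expressions can be sketched against the TABLE1 sample table from the Set Operators section (the bucket labels are purely illustrative):

```sql
-- Case expression: IF-THEN-ELSE logic inside a query
SELECT ELEMENT,
       CASE WHEN ELEMENT < 'S' THEN 'EARLY' ELSE 'LATE' END AS BUCKET
  FROM DHK_SCHEMA.TABLE1;

-- Aggregate expressions: one value computed from all rows of a column
SELECT COUNT(*)     AS ROW_COUNT,    -- 5
       MIN(ELEMENT) AS FIRST_VALUE,  -- 'P'
       MAX(ELEMENT) AS LAST_VALUE    -- 'T'
  FROM DHK_SCHEMA.TABLE1;

-- Subquery used as input to the main select statement
SELECT ELEMENT
  FROM DHK_SCHEMA.TABLE1
 WHERE ELEMENT = (SELECT MAX(ELEMENT) FROM DHK_SCHEMA.TABLE1);  -- 'T'
```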

SAP HANA SQL Stored Procedure

A procedure is a unit/module that performs a specific task. Procedures can be combined to form larger programs; this essentially forms the 'modular design'. A procedure can be invoked by another procedure, which is called the calling program. Procedures are reusable processing blocks with a specific sequence of data transformations. A procedure can have multiple input/output parameters and can be created as read-only or read-write. An SQL procedure can be created at:
Schema level (Catalog node)
Package level (Content node)

The stored procedure syntax in SAP HANA is as shown below.

SYNTAX

CREATE PROCEDURE <proc_name> [(<parameter_clause>)]
    [LANGUAGE <lang>]
    [SQL SECURITY <mode>]
    [DEFAULT SCHEMA <default_schema_name>]
    [READS SQL DATA [WITH RESULT VIEW <view_name>]]
AS
    {BEGIN [SEQUENTIAL EXECUTION] <procedure_body> END | HEADER ONLY}

The CREATE PROCEDURE statement creates a procedure using the mentioned programming language <lang>.

SYNTAX ELEMENTS

<proc_name>: The procedure name.

<parameter_clause>: The parameters are defined here. Each parameter is marked using one of the keywords IN, OUT or INOUT:
IN: used to pass a value to the procedure as input. It is a read-only parameter.
OUT: used to return a value from the procedure as output.
INOUT: used to pass a value to, and return a value from, the procedure via the same parameter.

LANGUAGE <lang>: Defines the programming language used in the procedure. Default: SQLSCRIPT.

SQL SECURITY <mode>: Specifies the security mode of the procedure. Default: DEFINER.
DEFINER: specifies that the procedure is executed with the privileges of the definer of the procedure.
INVOKER: specifies that the procedure is executed with the privileges of the invoker of the procedure.

DEFAULT SCHEMA <default_schema_name>: Defines the schema for unqualified objects in the procedure body. If nothing is defined, the current schema of the session is used.

READS SQL DATA: Marks the procedure as read-only, meaning the procedure does not modify the database data or its structure and does not contain DDL or DML statements. Such a procedure only calls other read-only procedures.

WITH RESULT VIEW <view_name>: Defines the result view to be used as the output of a read-only procedure. If a result view is specified for a procedure, it can be called by an SQL statement in the same way as a table or view.

SEQUENTIAL EXECUTION: Forces sequential execution of the procedure logic; no parallelism takes place.

<procedure_body>: Defines the main body of the procedure based on the programming language selected.

HEADER ONLY: If HEADER ONLY is used, only the procedure properties are created, with OID.
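Putting the syntax elements together, a minimal read-only procedure might look like this (the procedure name and parameter names are illustrative; TABLE1 is the sample table from the Set Operators section):

```sql
CREATE PROCEDURE DHK_SCHEMA.COUNT_ELEMENTS (
    IN  IV_FROM  CHAR(1),     -- IN: read-only input value
    OUT OV_COUNT INTEGER      -- OUT: value returned to the caller
)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER          -- run with the caller's privileges
READS SQL DATA                -- read-only: no DDL/DML allowed
AS
BEGIN
    SELECT COUNT(*) INTO OV_COUNT
      FROM DHK_SCHEMA.TABLE1
     WHERE ELEMENT >= :IV_FROM;
END;
```

The procedure is invoked with CALL DHK_SCHEMA.COUNT_ELEMENTS('S', ?), where ? receives the OUT parameter.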

SAP HANA Create Sequence

A sequence is a database object that automatically generates an incremented list of numeric values according to the rules specified in the sequence definition. For example, to insert an employee number automatically into a column (EMPLOYEE_NO) of a table each time a new record is inserted, we use a sequence. Sequence values are generated in ascending or descending order. Sequences are not associated with tables; they are used by the application. A sequence provides two values:
CURRVAL: provides the current value of the sequence.
NEXTVAL: provides the next value of the sequence.

SYNTAX

CREATE SEQUENCE <sequence_name> [<sequence_parameter_list>] [RESET BY <subquery>]

SYNTAX ELEMENTS

<sequence_name>: The name of the sequence.

[<sequence_parameter_list>]: Specifies one or more sequence parameters.

START WITH <start_value>: Describes the starting sequence value.

INCREMENT BY <increment_value>: Specifies the value added to the last value assigned each time a new sequence value is generated. The default is 1.

MAXVALUE <max_value>: Specifies the maximum value which can be generated by the sequence. <max_value> can be between -4611686018427387903 and 4611686018427387902.

NO MAXVALUE: When NO MAXVALUE is specified, the maximum value for an ascending sequence is 4611686018427387903 and the maximum value for a descending sequence is -1.

MINVALUE <min_value> / NO MINVALUE: <min_value> specifies the minimum value that a sequence can generate and can be between -4611686018427387904 and 4611686018427387902. When NO MINVALUE is used, the minimum value for an ascending sequence is 1.

CYCLE: The CYCLE directive specifies that the sequence number will be restarted after it reaches its maximum or minimum value.

NO CYCLE: Default option. The NO CYCLE directive specifies that the sequence number will not be restarted after it reaches its maximum or minimum value.

CACHE <cache_size>: The cache size specifies the range of sequence numbers cached in a node. <cache_size> must be an unsigned integer.

NO CACHE: Default option. The NO CACHE directive specifies that sequence numbers will not be cached in a node.

RESET BY <subquery>: Specifies that during a restart of the database, the database automatically executes the <subquery> and the sequence value is restarted with the returned value.

Example – We will create a sequence named DHK_SCHEMA.EMP_NO, which generates a value incremented by +1 each time the sequence is used.

Sequence script:

CREATE SEQUENCE DHK_SCHEMA.EMP_NO START WITH 100 INCREMENT BY 1;

Here we use the sequence object in the example below to increment the employee number by +1 each time the select query is executed. In the query, NEXTVAL can be used for serial number generation or similar requirements.

Use of the sequence:

SELECT DHK_SCHEMA.EMP_NO.NEXTVAL FROM DUMMY;

OUTPUT – 100, 101, 102 ... and so on, on every execution of the above select query.
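The same sequence can also feed a surrogate key during INSERT; a sketch assuming a hypothetical EMPLOYEE table (EMP_NO is the sequence created above):

```sql
-- Hypothetical table to illustrate sequence usage
CREATE COLUMN TABLE DHK_SCHEMA.EMPLOYEE (
    EMPLOYEE_NO INTEGER PRIMARY KEY,
    NAME        NVARCHAR(50)
);

INSERT INTO DHK_SCHEMA.EMPLOYEE
VALUES (DHK_SCHEMA.EMP_NO.NEXTVAL, 'First employee');
-- EMPLOYEE_NO receives the next sequence value
-- (100 if the sequence has not been used before)

SELECT DHK_SCHEMA.EMP_NO.CURRVAL FROM DUMMY;
-- current value of the sequence in this session
```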

SAP HANA Create Trigger

A trigger is also a stored procedure, one that automatically executes when an event happens on a given table or view. Only database users having the TRIGGER privilege for the given table or view are allowed to create a trigger on it. The CREATE TRIGGER command defines a set of statements that are executed when a given operation (INSERT/UPDATE/DELETE) takes place on a given subject table or subject view.

Syntax

CREATE TRIGGER <trigger_name> <trigger_action_time> <trigger_event>
    ON <subject_table_name>
    [REFERENCING <transition_list>]
    [<for_each_row>]
BEGIN
    [<trigger_decl_list>]
    [<proc_stmt_list>]
END

SYNTAX ELEMENTS

<trigger_name>: Specifies the name of the trigger to be created, with an optional schema name.

<trigger_action_time> (BEFORE | AFTER | INSTEAD OF):
BEFORE: specifies that the trigger is executed before the DML operation on a table.
AFTER: specifies that the trigger is executed after the DML operation on a table.
INSTEAD OF: specifies that the trigger is executed instead of the DML operation on a view. A view with an INSTEAD OF trigger becomes updatable.
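A minimal AFTER INSERT trigger, sketched against the TABLE1 sample table from the Set Operators section (the log table and trigger name are hypothetical):

```sql
-- Hypothetical audit table to record inserts
CREATE COLUMN TABLE DHK_SCHEMA.TABLE1_LOG (
    ELEMENT    CHAR(1),
    CHANGED_AT TIMESTAMP
);

CREATE TRIGGER DHK_SCHEMA.TR_TABLE1_AFTER_INS
AFTER INSERT ON DHK_SCHEMA.TABLE1
REFERENCING NEW ROW NEW_ROW      -- transition variable for the inserted row
FOR EACH ROW
BEGIN
    -- record each inserted value with a timestamp
    INSERT INTO DHK_SCHEMA.TABLE1_LOG
    VALUES (:NEW_ROW.ELEMENT, CURRENT_TIMESTAMP);
END;
```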

SAP HANA SQL DATA Profiling

Data profiling is the process of analyzing the data available in an existing data source and collecting statistics and information about that data. The SQL data profiling task is used to understand and analyze data from different data sources. Through the data profiling process, the user can remove incorrect and incomplete data before loading it into the data warehouse. The advantages of SQL data profiling are as below:
It helps to understand the source data.
Data can be analyzed effectively.
Incorrect and incomplete data can be removed, improving data quality.
It improves the ability to search the data by adding keywords and descriptions.
Data challenges are understood early in the project; finding data problems late in the project can lead to delays and cost overruns.
Through data profiling, the implementation cycle of major projects may be shortened.

SAP HANA SQL SCRIPT

SQL Script is a collection of extensions to SQL. It can be used in stored procedures in place of plain SQL, and it comprises functional and procedural extensions. In SQL Script, the user can define local variables for structures and tables that are primarily used in the creation of stored procedures. SQL Script can also be used in calculation views. In SQL Script, there are two different logic containers:
Procedures (a procedure describes a sequence of data transformations on data passed as input and on database tables).
User-Defined Functions (the user-defined function container is separated into scalar user-defined functions and table user-defined functions).

The SQL Script language elements are as below:

Declarative SQL Script logic (functional extension): allows the definition of table types without referencing database tables; typical statements such as SELECTs; Calculation Engine (CE) functions.

Orchestration SQL Script logic (functional extension): orchestration logic implements data flows using DDL, DML and SQL query statements, and control flow logic using imperative language constructs such as loops and conditionals; Data Definition Language statements (e.g. CREATE SCHEMA); Data Manipulation Language statements (e.g. INSERT).

Imperative SQL Script logic (procedural extension): imperative logic splits the logic among several data flows, e.g. IF, ELSEIF, ELSE, CASE, FOR (loop) and exceptions.

Importance of SQL Script

Only SQL Script provides the necessary elements to migrate data-intensive logic or operations from the application server to the database server. Key points of SQL Script:
SQL Script is executed and processed in the calculation engine within the HANA database.
SQL Script is able to perform complex calculations.
In SQL Script, a local variable can be declared to hold an interim result.
A SQL Script procedure can return multiple results by using OUTPUT parameters, while a normal SQL procedure can return only one.
In SQL Script, you can define global or local table types which can be used as parameters.
By using SQL Script, parallel processing can be achieved.
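A short SQLScript procedure illustrating local table variables that hold interim results (the names are illustrative; TABLE1 and TABLE2 are the sample tables from the Set Operators section):

```sql
CREATE PROCEDURE DHK_SCHEMA.COMMON_ELEMENTS (
    OUT ET_COMMON TABLE (ELEMENT CHAR(1))
)
LANGUAGE SQLSCRIPT
READS SQL DATA
AS
BEGIN
    -- Local table variables hold interim results; independent
    -- assignments can be evaluated in parallel by the engine
    lt_t1 = SELECT ELEMENT FROM DHK_SCHEMA.TABLE1;
    lt_t2 = SELECT ELEMENT FROM DHK_SCHEMA.TABLE2;

    -- Consume the interim results via :variable references
    ET_COMMON = SELECT T1.ELEMENT
                  FROM :lt_t1 AS T1
                 INNER JOIN :lt_t2 AS T2
                    ON T1.ELEMENT = T2.ELEMENT;
END;
```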

SAP HANA Calculation View CE Functions

CE functions, also known as Calculation Engine Plan Operators (CE operators), are an alternative to SQL statements. There are two types of CE functions:

Data Source Access Functions: these functions bind a column table or a column view to a table variable. Some data source access functions are:
CE_COLUMN_TABLE
CE_JOIN_VIEW
CE_OLAP_VIEW
CE_CALC_VIEW

Relational Operator Functions: by using relational operators, the user can bypass the SQL processor during evaluation and communicate with the calculation engine directly. Some relational operator functions are:
CE_JOIN (used to perform an inner join between two sources and read the required columns/data).
CE_RIGHT_OUTER_JOIN (used to perform a right outer join between the two sources and display the queried columns in the output).
CE_LEFT_OUTER_JOIN (used to perform a left outer join between the sources and display the queried columns in the output).
CE_PROJECTION (displays specific columns from the source and applies filters to restrict the data; it also provides column name alias features).
CE_CALC (used to calculate additional columns based on business requirements; this is the same as a calculated column in graphical models).

Below is a list of SQL queries with their equivalent CE functions:

Select query on a column table:
SQL: SELECT C, D FROM "COLUMN_TABLE";
CE function: CE_COLUMN_TABLE("COLUMN_TABLE", [C, D])

Select query on an attribute view:
SQL: SELECT C, D FROM "ATTRIBUTE_VIEW";
CE function: CE_JOIN_VIEW("ATTRIBUTE_VIEW", [C, D])

Select query on an analytic view:
SQL: SELECT C, D, SUM(E) FROM "ANALYTIC_VIEW" GROUP BY C, D;
CE function: CE_OLAP_VIEW("ANALYTIC_VIEW", [C, D])

Select query on a calculation view:
SQL: SELECT C, D, SUM(E) FROM "CALCULATION_VIEW" GROUP BY C, D;
CE function: CE_CALC_VIEW("CALCULATION_VIEW", [C, D])

Where / Having:
SQL: SELECT C, D, SUM(E) FROM "ANALYTIC_VIEW" WHERE C = 'value';
CE function:
var1 = CE_COLUMN_TABLE("COLUMN_TABLE");
CE_PROJECTION(:var1, [C, D], "C" = 'value')
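Inside a SQLScript procedure body, the CE operators from the table above chain through table variables; a minimal sketch (the table and column names "COLUMN_TABLE", C and D are placeholders, as in the table):

```sql
-- Equivalent of: SELECT C, D FROM "COLUMN_TABLE" WHERE C = 'value'
var1 = CE_COLUMN_TABLE("COLUMN_TABLE", ["C", "D"]);   -- bind table to a variable
var2 = CE_PROJECTION(:var1, ["C", "D"],               -- project the two columns
                     '"C" = ''value''');              -- filter expression string
```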

Summary:
SAP HANA supports SQL. SQL Script is used for better performance, and SQL Script is an extension of SQL.
SAP HANA has its own data types.
SAP HANA provides CE functions, which are executed in the calculation engine.
SAP HANA also supports SQL functions, e.g. aggregate functions.

Chapter 5: Data Provisioning

Data provisioning is the process of creating, preparing, and enabling a network to provide data to its users. Data needs to be loaded into SAP HANA before it reaches the user via a front-end tool. All these processes are referred to as ETL (Extract, Transform, and Load), the details of which are as below:

Extract – The first and sometimes most difficult part of ETL, in which data is extracted from the different source systems.
Transform – In the transformation part, a series of rules or functions is defined for the data extracted from the source system, for loading the data into the target system.
Load – The load phase loads the data into the target system.

Overview of Replication Technology

SAP HANA supports two types of provisioning tools:

1. SAP HANA built-in provisioning tools:
   1. Flat File
   2. Smart Data Streaming
   3. Smart Data Access (SDA)
   4. Enterprise Information Management (EIM)
   5. Remote Data

2. External tools supported by SAP HANA:
   1. SAP Landscape Transformation
   2. SAP BusinessObjects Data Services
   3. SAP Direct Extractor Connection
   4. Sybase Replication Server

At present, the main methods of data provisioning for SAP HANA are:

SLT: SLT ("SAP Landscape Transformation Replication Server") runs on the SAP NetWeaver platform. SLT is an ideal solution for real-time and scheduled replication from SAP and non-SAP source systems.

SAP Data Services: SAP Data Services is a platform for designing ETL processes with a graphical user interface.

DXC: DXC stands for Direct Extractor Connection; it is a batch-driven ETL tool.

Flat File Upload: this option is used to upload files (.csv, .xls, .xlsx) to SAP HANA.

The SAP HANA SLT road map is as below. Data provisioning through SLT requires an RFC/DB connection to the SAP/non-SAP source system and a DB connection to the SAP HANA database. Mapping and transformation are defined on the SAP SLT server. Below is a roadmap for data provisioning through SLT.

SLT (SAP Landscape Transformation Replication Server)

SLT is SAP's first ETL tool that allows you to load and replicate data in real time, or on a schedule, from SAP and non-SAP source systems into the SAP HANA database. The SAP SLT server uses a trigger-based replication approach to pass data from the source system to the target system. The SLT server can be installed on a separate system or on the SAP ECC system. The benefits of SLT are as below:
Allows real-time or scheduled data replication.
While replicating data in real time, data can be migrated into SAP HANA format.
SLT handles cluster and pool tables.
It automatically supports non-Unicode and Unicode conversion during load/replication. (Unicode is a character encoding system similar to ASCII but covering many more characters.)
It is fully integrated with SAP HANA Studio.
SLT has table settings and transformation capabilities.
SLT has monitoring capabilities with SAP HANA Solution Manager.

An architecture overview of the SAP SLT server with SAP / non-SAP systems is as below:

SAP SLT connection architecture overview between an SAP system and SAP HANA: the SAP SLT Replication Server transforms all metadata table definitions from the ABAP source system to SAP HANA. For an SAP source, the SLT connection has the following features:
When a table is replicated, the SAP SLT Replication Server creates logging tables in the source system.
The read engine is created in the SAP source system.
The connection between SAP SLT and the SAP source is established as an RFC connection.
The connection between SAP SLT and SAP HANA is established as a DB connection.
A database user with the same authorization as the user "SYSTEM" can create a connection between SAP SLT and the SAP HANA database.

SAP SLT connection between an SAP system and the SAP HANA database: the SAP SLT server automatically creates the DB connection for the SAP HANA database (when we create a new configuration via transaction LTR). There is no need to create it manually.

Configure SAP SLT Server for an SAP Source System

First, we need to configure the SAP SLT Replication Server for the connection between the SAP source and the SAP HANA database. Transaction code LTR is used for creating the connection between the SAP source and SAP SLT.

Step 1) Log in to the SAP SLT server and call transaction "LTR" from the SAP SLT Replication Server.

A Web Dynpro pop-up screen will appear for logging in to the SAP SLT server:
1. Enter client / user id / password.
2. Click on the logon tab.

A pop-up screen for configuration will appear as below.

Click on the "New" button for a new configuration.

Step 2) In this step:
1. Enter the configuration name and description.
2. Select SAP System as the source system.
3. Enter the RFC connection for the SAP system.
4. Enter the username / password / host name and instance number.
5. Enter the job option details: number of data transfer jobs, number of calculation jobs.
6. Select the replication option as Real Time.
7. Once all the settings are maintained, click on 'OK' to create a new schema in SLT.

A configuration named "SLTECC" will be added and active.

After configuring the SAP SLT server successfully, the SAP SLT server automatically creates the DB connection for the SAP HANA database (when we create a new configuration via transaction LTR). There is no need to create it manually. In the next step, we import data into SAP HANA from the SAP source.

Import SAP Source Data to SAP HANA through SLT

Once we have successfully configured the SAP SLT server, a schema named after the configuration above is created in the SAP HANA database. This schema contains the following objects:

1. 1 schema: SLTECC.
2. 1 user: SLTECC.
3. 1 set of privileges.
4. 8 tables:
   DD02L (SAP table names)
   DD02T (SAP table texts)
   RS_LOG_FILES
   RS_MESSAGE
   RS_ORDER
   RS_ORDER_TEXT
   RS_SCHEMA_MAP
   RS_STATUS
5. 4 roles:
   SLTECC_DATA_PROV
   SLTECC_DATA_POWER_USER
   SLTECC_DATA_USER_ADMIN
   SLTECC_DATA_SELECT
6. 2 procedures:
   RS_GRANT_ACCESS
   RS_REVOKE_ACCESS

All configuration is complete; now we load a table from SAP ECC (ERP Central Component).

Step 1) To load tables from SAP ECC into the SAP HANA database, follow the steps below:
1. Go to Data Provisioning from the Quick View.
2. Select the SAP HANA system.
3. Click on the Finish button.

Step 2) A screen for SLT-based table data provisioning will be displayed. There are 5 options for data provisioning, as below:

Load (Full Load): This is a one-time event which starts an initial load of data from the source system.

Replicate (Full Load + Delta Load): It starts an initial load (if not done earlier) and also considers delta changes. A database trigger and related logging table are created for each table.

Stop: It stops the current replication process for a table. It removes the database trigger and replication logging table completely.

Suspend: It pauses a running replication process for a table. The database trigger is not deleted from the source system, and the recording of changes continues. The related information is stored in the related logging table in the source system.

Resume: Resume restarts the replication for a suspended table. After resume, the suspended replication process resumes.

We use the first option from the table, "Load", for the initial load of data from the table (LFBK) from the source into an SAP HANA table. The steps are as below:
1. The source and target system details are selected according to the SAP SLT configuration.
2. Click on the Load button and select the table (LFBK) which we need to load/replicate into SAP HANA.
3. The table (LFBK) will be added to the Data Load Management section with action "Load" and status "Scheduled".

After the data load, the status will change to "Executed". The table will be created in the "SLTECC" schema with data.

Step 3) Check the data in the table (LFBK) via Data Preview from the schema "SLTECC" as below:
1. Log in to the SAP HANA database through SAP HANA Studio and select the SAP HANA system HDB (HANAUSER).
2. Select the table (LFBK) under the Tables node.
3. Right-click on the table (LFBK) and click on the "Open Data Preview" option.
4. The data loaded through the SLT process will be displayed in the Data Preview screen.

We have now successfully loaded data into the table "LFBK". We will use this table later in modelling.

SAP SLT Connection between a Non-SAP System and SAP HANA

The SAP SLT Replication Server transforms all metadata table definitions from the non-ABAP source system to SAP HANA. For a non-SAP source, the SLT connection has the following features:
When a table is replicated, the SAP SLT Replication Server creates logging tables in the source system.
The read engine is created in the SAP SLT Replication Server.
The connection between SAP SLT and the SAP source / SAP HANA is established as a DB connection.

SAP SLT can only do the simplest transformations, so for complex transformations we need another ETL tool such as SAP Data Services.

SAP DS (SAP DATA Services)

SAP Data Services is an ETL tool which gives a single enterprise-level solution for data integration, transformation, data quality, data profiling and text data processing from heterogeneous sources into a target database or data warehouse. We can create applications (jobs) in which data mapping and transformation are done using the Designer. (The latest version of SAP BODS is 4.2.)

Features of Data Services:
It provides high-performance parallel transformations.
It has comprehensive administrative tools and a reporting tool.
It supports multiple users.
SAP BODS is very flexible with web-service-based applications.
It allows a scripting language with rich sets of functions.
Data Services can integrate with SAP LT Replication Server (SLT) using the trigger-based technology. SLT adds delta capabilities to every SAP or non-SAP source table, which allows capturing changes and transferring the delta data of the source table.
Data validation with dashboards and process auditing.
Administration tool with scheduling capabilities and monitoring/dashboards.
Debugging and built-in profiling and viewing of data.
SAP BODS supports a broad set of sources and targets:
Any applications (e.g. SAP).
Any databases, with bulk loading and change data capture.
Files: fixed width, comma delimited, COBOL, XML, Excel.

Components of Data Services

SAP Data Services has the following components:

1. Designer: a development tool with which we can create, test, and execute a job that populates a data warehouse. It allows the developer to create objects and configure them by selecting an icon in a source-to-target flow diagram. It can be used to create an application by specifying workflows and data flows. To open the Data Services Designer, go to Start Menu -> All Programs -> SAP Data Services (4.2 here) -> Data Services Designer.

2. Job Server: an application that launches the Data Services processing engine and serves as an interface to the engine and the Data Services suite.

3. Engine: the Data Services engine executes the individual jobs which are defined in the application.

4. Repository: a database that stores designer-predefined objects and user-defined objects (source and target metadata, transformation rules). Repositories are of two types:
Local repository (used by the Designer and Job Server).
Central repository (used for object sharing and version control).

5. Access Server: the Access Server passes messages between web applications, the Data Services Job Server and the engines.

6. Administrator: the web administrator provides browser-based administration of Data Services resources, as below:
Configuring, starting and stopping real-time services.
Scheduling, monitoring and executing batch jobs.
Configuring Job Server, Access Server, and repository usage.
Managing users.
Publishing batch jobs and real-time services via web services.
Configuring and managing adapters.

Data Services Architecture

The Data Services architecture has the following components:
Central repository: used for repository configuration for job servers, security management, version control and object sharing.
Designer: used to create projects, jobs, workflows and data flows, and to run them.
Local repository: here you can create, change and start jobs, workflows and data flows.
Job Server & engine: manages the jobs.
Access Server: used to execute the real-time jobs created by developers in the repositories.

In the image below, the relationships between Data Services and its components are shown.

SAP BODS Architecture

Designer Window Detail: First, we look into the first component of SAP Data Services, the Designer. The details of each section of the Data Services Designer are as below:
1. Tool bar (used for Open, Save, Back, Validate, Execute, etc.).
2. Project area (contains the current project, which includes the job, workflow, and data flow; in Data Services, all entities are objects).
3. Workspace (the application window area in which we define, display, and modify objects).
4. Local object library (it contains local repository objects, such as transforms, jobs, workflows, data flows, etc.).
5. Tool palette (buttons on the tool palette enable you to add new objects to the workspace).

Object Hierarchy

The diagram below shows the hierarchical relationships for the key object types within Data Services.

Note: workflows and conditionals are optional.

The objects used in SAP Data Services are described below:

Project: A project is the highest-level object in the Designer window. Projects provide you with a way to organize the other objects you create in Data Services. Only one project is open at a time (where "open" means "visible in the project area").

Job: A job is the smallest unit of work that you can schedule independently for execution.

Scripts: A subset of lines in a procedure.

Workflow: A workflow is the incorporation of several data flows into a coherent flow of work for an entire job. A workflow is optional; a workflow is a procedure. Workflows are used to:
Call data flows.
Call another workflow.
Define the order of steps to be executed in your job.
Pass parameters to and from data flows.
Specify how to handle errors that occur during execution.
Define conditions for executing sections of the project.

Dataflow: A data flow is the process by which source data is transformed into target data. A data flow is a reusable object. It is always called from a workflow or a job. Data flows are used to:
Identify the source data that you want to read.
Define the transformations that you want to perform on the data.
Identify the target table to which you want to load data.

Datastore: A logical channel that connects Data Services to source and target databases. A datastore:
Must be specified for each source and target database.
Is used to import metadata for source and target databases into the repository.
Is used by Data Services to read data from source tables and load data into target tables.

Target: A table or file into which Data Services loads data from the source.

Data Services example: load data from an SAP source table
Everything in Data Services is an object. We need a separate datastore for each source and target database. Loading data from an SAP source table involves the following steps, in which we create datastores for source and target and map between them:
1. Create Data Store between Source and BODS
2. Import the metadata (structures) to BODS
3. Configure Import Server
4. Import the metadata to the HANA system
5. Create Data Store between BODS and HANA
6. Create Project
7. Create Job (Batch/Real time)
8. Create Workflow
9. Create Dataflow
10. Add Objects in Dataflow
11. Execute the Job
12. Check the Data Preview in HANA

Step 1) Create Data Store between SAP Source and BODS
1. To load data from an SAP source to SAP HANA through SAP BODS, we need a datastore. So we create a datastore first as shown below – Project -> New -> Datastore.

2. A pop-up for Create New Datastore will appear; enter details as below:
1. Enter datastore name "ds_ecc".
2. Select datastore type as "SAP Applications".
3. Enter the database server name.
4. Enter user name and password.
5. Click on the "Apply" button.
6. Click on the "OK" button.

3. The datastore will be created; to view the created datastore:
1. Go to Local Object Library.
2. Select the Datastore tab.
3. Datastore "ds_ecc" will be displayed.

Step 2) Import Metadata (Structure) to BODS Server. We have created a data store for ECC to BODS; now we import metadata from ECC into BODS. To import follow below steps 1. Select Datastore "ds_ecc" and right click. 2. Select Import by Name option.

A pop-up for Import by Name will be displayed. Enter details as below:
1. Select Type as Table.
2. Enter the name of the table we want to import; here we are importing the KNA1 table.
3. Click on the "Import" button. The KNA1 table will appear under the table node of the "ds_ecc" data source.

Table Metadata will be imported, in datastore ds_ecc as below –

Step 3) Configure Import Server

So far we have imported a table into the datastore "ds_ecc" created for the ECC to SAP BODS connection. To import data into SAP HANA, we need to configure the Import Server.
1. To do this, go to Quick View -> Configure Import Server as below -

2. A pop-up for Select System will appear, Select SAP HANA (HDB here) System as below-

3. Click on the "Next" button. Another pop-up for data service credentials will appear; enter the following details:
1. SAP BODS server address (here BODS:6400).
2. SAP BODS repository name (the HANAUSER repository name).
3. ODBC data source (ZTDS_DS).
4. Default port for the SAP BODS server (8080).

Click on the "Finish" button.
Step 4) Import the metadata to the HANA system
1. So far we have configured the Import Server; now we will import metadata from the SAP BODS server.
1. Click the Import option in Quick View.
2. A pop-up for the Import option will be displayed. Select the "Selective Import of Metadata" option.

Click on the "Next" button.
2. A pop-up for "Selective Import of Metadata" will be displayed, in which we select the target system.
1. Select the SAP HANA system (HDB here).

Click on the "Next" button.
Step 5) Create Data Store between BODS and HANA
As we know, in BODS we need to create separate datastores for source and target. We have already created a datastore for the source; now we need to create a datastore for the target (between BODS and HANA). So, we create a new datastore with the name "DS_BODS_HANA".
1. Go to Project -> New -> Datastore.

2. A screen for Create New Datastore will appear as below:
1. Enter datastore name (DS_BODS_HANA).
2. Enter datastore type as Database.
3. Enter database type as SAP HANA.
4. Select the database version.
5. Enter the SAP HANA database server name.
6. Enter the port for the SAP HANA database.
7. Enter username and password.
8. Tick "Enable automatic data transfer".

Click on "Apply" and then "OK" button. Data store "DS_BODS_HANA" will be displayed under datastore tab of Local Object Library as Below-

3. Now we import table in data store "DS_BODS_HANA". 1. Select data store "DS_BODS_HANA" and right click. 2. Select Import By Name.

4. A pop-up for Import by Name will appear as below:
1. Select Type as Table.
2. Enter Name as KNA1.
3. Owner will be displayed as Hanauser.
4. Click on the Import button.

The table will be imported into the "DS_BODS_HANA" datastore. To view data in the table, follow the below steps:
1. Click on table "KNA1" in datastore "DS_BODS_HANA".
2. Data will be displayed in tabular format.

Step 6) Define Project: A project groups and organizes related objects. A project can contain any number of jobs, workflows, and dataflows.
1. Go to the Designer Project menu.
2. Select the New option.
3. Select the Project option.

A POP-UP for New Project Creation appears as below. Enter Project Name and Click on Create Button. It will create a project folder in our case BODS_DHK.

Step 7) Define Job: A job is a reusable object. It contains workflows and dataflows. Jobs can be executed manually or on a schedule. To execute the BODS process we need to define a job. We create a job named JOB_Customer.
1. Select the project (BODS_DHK) created earlier, right click, and select "New Batch Job".

2. Rename it to "JOB_Customer".
Step 8) Define Workflow:
1. Select job "JOB_Customer" in the project area.
2. Click the workflow button on the tool palette, then click on the blank workspace area. A workflow icon will appear in the workspace.
3. Change the name of the workflow to "WF_Customer".

Click the name of the workflow; an empty view for the workflow appears in the workspace.

Step 9) Define Dataflow:
1. Click on workflow "WF_Customer".
2. Click the dataflow button on the tool palette, then click on the blank workspace area. A dataflow icon will appear in the workspace.
3. Change the name of the dataflow to "DF_Customer".
4. The dataflow also appears in the project area on the left, under the job name.

Step 10) Add Objects in Dataflow: Inside the data flow, we can provide instructions to transform source data into the desired form for the target table. We will create the following objects:
- An object for the source.
- An object for the target table.
- An object for the Query transform. (The Query transform maps the columns from source to target.)
Click on the dataflow DF_Customer. A blank workspace will appear as below -

1. Specify the object for the source – go to datastore "ds_ecc", select table KNA1, and drag and drop it onto the blank data flow screen as in the screen below.
2. Specify the object for the target – select datastore "DS_BODS_HANA" from the repository and select table KNA1.
3. Drag and drop it onto the workspace and select the "Make Target" option. There will be two tables, one for the source and one for the target. Here we define which table is source and which is target.

4. Query Transform – This is a tool used to retrieve data based on the input schema for user-specific conditions and to transport data from source to target.
1. Select the Query Transform icon from the tool palette and drag and drop it between the source and target objects in the workspace as below.
2. Link the Query object to the source.
3. Link the Query object to the target table.

4. Double click on the Query icon. Here we map columns from the input schema to the output schema. When we click on the Query icon, a window for mapping appears, in which we do the following steps:
1. The source table KNA1 is selected.
2. Select all columns from the source table, right click, and select Map to Output.
3. The target output is selected as Query, and the columns will be mapped.
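Conceptually, the Query transform above is equivalent to a plain SQL insert-select that copies the mapped columns from the source KNA1 table to the target. The sketch below is illustrative only: the source schema name ECC_SOURCE is an assumption, and only a subset of KNA1 columns is shown.

```sql
-- Illustrative SQL equivalent of the Query transform mapping (not generated by BODS).
-- ECC_SOURCE is a hypothetical schema name standing in for the extracted source data.
INSERT INTO "HANAUSER"."KNA1" ("KUNNR", "NAME1", "LAND1")
SELECT "KUNNR",   -- customer number
       "NAME1",   -- customer name
       "LAND1"    -- country key
FROM "ECC_SOURCE"."KNA1";
```

In practice BODS generates and executes this data movement itself; the SQL is only a mental model of what "map to output" produces.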

5. Save and validate the project.
1. Click on the Validate icon.
2. A pop-up for validation success appears.

Step 11) Execute Job – To execute the job, follow the below path:
1. Select the Project Area icon to open the project, and select the created project.
2. Select the job and right click.
3. Select the Execute option to execute the job.

1. After executing the job, a Job Log window is displayed, in which all messages regarding the job will be shown.
2. The last message will be "Job < > is completed successfully."

Step 12) Validate / Check Data in the SAP HANA Database.
1. Log in to the SAP HANA database through SAP HANA Studio, and select the HANAUSER schema.
2. Select the KNA1 table in the Table node.
3. Right click on table KNA1 and select Open Data Preview.
4. The table (KNA1) data loaded by the BODS process above will be displayed in the data preview screen.

SAP HANA Direct Extractor Connection (DXC) SAP HANA DXC uses the existing ETL (Extract, Transform, and Load) method of the SAP Business Suite application via an HTTPS connection. SAP HANA DXC is a batch-driven data replication technique, i.e. it executes after a time interval. In SAP Business Suite application content, DataSource extractors have long been available for data modeling and data acquisition for SAP Business Warehouse. SAP DXC uses these DataSource extractors to deliver data directly to SAP HANA.

Advantages of SAP DXC
- SAP DXC requires no additional server or application in the system landscape.
- It reduces the complexity of data modelling in SAP HANA, as it sends data to SAP HANA after applying all business extractor logic in the source system.
- It speeds up SAP HANA implementation timelines.
- It extracts semantically rich data from SAP Business Suite and provides it to SAP HANA.

Limitations of SAP DXC
- The DataSource must have a pre-defined ETL method; if not, we need to define one.
- SAP DXC requires a Business Suite system based on NetWeaver 7.0 or higher (e.g. ECC), with SP level Release 700 SAPKW70021 (SP stack 19, from Nov 2008).
- A procedure with a key field defined must exist in the DataSource.

Configure SAP DXC Data Replication
Step 1) Enable the XS Engine and ICM Service
Enabling the XS Engine: Go to SAP HANA Studio -> Select System -> Configuration -> xsengine.ini.

Set the instance value to 1 in the Default field.
Enabling the ICM Web Dispatcher service: This enables the ICM Web Dispatcher service in the HANA system. The Web Dispatcher uses the ICM method for reading and loading data in the HANA system. Go to SAP HANA Studio -> Select System -> Configuration -> webdispatcher.ini.

Set the instance value to 1 in the Default column.
Step 2) Set up the SAP HANA Direct Extractor Connection
Set the DXC connection in SAP HANA – to create a DXC connection we need to import a delivery unit into SAP HANA, as below:
- Import the delivery unit: download the DXC delivery unit from SAP into the SAP HANA database. You can import the unit from the location "/usr/sap/HDB/SYS/global/hdb/content" using the Import dialog in the SAP HANA Content node.
- Configure the XS application server to utilize DXC: modify the application container value to libxsdxc (if a value already exists, append to it).
- Test the DXC connection: verify that DXC is working. We can check the DXC connection by using the below path in Internet Explorer – http://<hana_host>:80/sap/hana/dxc/dxc.xscfunc – and entering a user name and password to connect. The user and schema need to be defined in HANA Studio.
- The HTTP connection to HANA needs to be defined in SAP BW through SM59, so create an HTTP connection in SAP BW using T-code SM59. The input parameters will be:
  - RFC Connection = name of the RFC connection
  - Target Host = HANA host name
  - Service Number = 80
  On the Logon & Security tab, maintain the DXC user created in HANA Studio, which uses the basic authentication method.
- DataSources in BW need to be configured to replicate their structure to the defined HANA schema. We need to set up the following parameters in BW using program SAP_RSADMIN_MAINTAIN (T-code SE38 or SA38). The parameter list contains the values that are passed to the call screen.

PSA_TO_HDB: This parameter takes one of three values:
- GLOBAL – used to replicate all DataSources to HANA.
- SYSTEM – specifies the clients that use DXC.
- DATASOURCE – specifies individual DataSources; only the specified ones can be used.
PSA_TO_HDB_DATASOURCETABLE: Here we give the name of the table that holds the list of DataSources used for DXC. In the VALUE field, enter the name of the table you created.
PSA_TO_HDB_DESTINATION: Here we specify where to move the incoming data (give the value we created in SM59, here XC_HANA_CONNECTION_HANAS).
PSA_TO_HDB_SCHEMA: Specifies the schema to which the replicated data is assigned.

Data Source Replication
Install the DataSource in ECC using RSA5. We have taken DataSource 0FI_AA_20 (FI-AA: Transactions and Depreciation). First we need to replicate the metadata using the specified application component (the DataSource needs to be version 7.0; if we have a 3.5 version DataSource, we need to migrate it first). Then activate the DataSource in SAP BW. Once the DataSource is loaded and activated in SAP BW, it creates the following tables in the defined schema:
/BIC/A<DataSource>00 – IMDSO active table
/BIC/A<DataSource>40 – IMDSO activation queue
/BIC/A<DataSource>70 – record mode handling table
/BIC/A<DataSource>80 – request and packet ID information table
/BIC/A<DataSource>A0 – request timestamp table
RSODSO_IMOLOG – IMDSO-related table; stores information about all DataSources related to DXC.
Data is successfully loaded into table /BIC/A0FI_AA_2000 once it is activated, and we can preview the data from table /BIC/A0FI_AA_2000 in SAP HANA Studio.

Flat file Upload to SAP HANA
SAP HANA supports uploading data from a file without ETL tools (SLT, BODS, and DXC). This is a new feature of HANA Studio Revision 28 (SPS04). SAP HANA supports the following file types, which must be available on the client system:
- .CSV (comma-separated value files)
- .XLS
- .XLSX
Prior to this option, the user needed to create a control file (.CTL file). To upload data into SAP HANA, the table needs to exist in SAP HANA. If the table exists, records will be appended at the end of the table; if the table is not present, it needs to be created. The application suggests column names and data types for new tables. The steps to upload data from a flat file to SAP HANA are:
Create the table in SAP HANA
Create a file with data on our local system
Select the file

Manage the mappings Load the data

Create the table in SAP HANA
If the table is not present in SAP HANA, we can create it either with a SQL script or during this process by selecting the "NEW" option. We will use the "NEW" option to create a new table.
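If you prefer to create the target table up front with a SQL script instead of the "NEW" option, a sketch might look like the following. The table name SALES_ORG and the column lengths are assumptions chosen to fit the sample data, not values prescribed by the tool; only the schema DHK_SCHEMA appears elsewhere in this book.

```sql
-- Hypothetical target table for the SalesOrg upload; names and lengths are assumptions.
CREATE COLUMN TABLE "DHK_SCHEMA"."SALES_ORG"
( "SALESORG"  NVARCHAR (4) PRIMARY KEY,
  "NAME"      NVARCHAR (40),
  "CURRENCY"  NVARCHAR (3),
  "COCODE"    NVARCHAR (4),
  "ADDRESS"   NVARCHAR (40),
  "COUNTRY"   NVARCHAR (10),
  "REF_SORG"  NVARCHAR (4) );
```

With the table created this way, you would pick the "Existing" option later in the import wizard instead of "New".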

Create a file with data on our local system
We are going to upload Sales Organization master data, so we create a .csv file and an .xls file for it on the local system. We are going to upload the SalesOrg.xlsx file into SAP HANA, so we have created a file SalesOrg.xlsx on the local system with the following content:

SalesOrg | Name            | Currency | CoCode | Address  | Country | Ref_Sorg
1000     | ABC Pvt. Ltd.   | USD      | 1000   | NEW YORK | USA     | 1000
2000     | ABC Enterprises | INR      | 2000   | INDIA    | INDIA   | 2000

Select the file
Step 1) Open the Modeler perspective -> 'Main Menu' -> 'Help' -> 'Quick View' as shown below.

A Quick View screen appears as below-

Select 'Import' option from Quick View. A pop-up for import option will be displayed.

A pop-up for the import screen is displayed. Go to SAP HANA Content -> 'Data from Local File'.

Click Next. A pop-up for file selection will be displayed; follow the below steps to select the file.

1. Select SalesOrg.xls file. 2. Click on "Open" button.

A screen for file selection for import will be displayed, in which we can select a file for loading data from the local system to the SAP HANA database. The available options can be categorized into three main areas:
- Source File section
- File Details section
- Target Table
Step 2) In this step we have to enter the following details:
1. Select File – the selected file path will be displayed here.
2. Header Row Exists – tick this option if the SalesOrg.xls file has a header (column names); ours does, so we have ticked it.
3. Import All Data – tick this option to import all data from the file; otherwise mention a start line and end line to load specific data from the file.
4. Ignore leading and trailing whitespace – tick this option to ignore leading and trailing whitespace in the file.
5. Target Table – this section has two options:
- New – if the table is not present in SAP HANA, choose this option and provide an existing schema name and the name of the table to be created.
- Existing – if the table exists in SAP HANA, choose this option and select the schema name and table. Data will be appended to the end of the table.
6. Click on the "Next" button.

Manage the Mappings
A mapping screen is used for performing the mapping between source and target columns. Two different types of mapping are available; when we click on the mapping option we get two choices:
One to One: map column to column based on the sequence. This option can be used if we know all the columns are in sequence.
Map by Name: map the source and target columns based on their names. This can be used if we know that the column names are the same.
Mapping of Source to Target – here we map the source file columns to the target table, and we can also change the target table definition.
1. Proposed table structure from the source file – the table column names are proposed from the Excel file column names (header).
2. Target table structure – the target table store type is selected as column store by default.
3. Click a file column name and drag it to the target field; the field will be mapped. Fields can be mapped automatically with the One to One or Map by Name options; if a column name cannot be mapped with those options, we can map it manually using drag and drop.
4. In the File Data section of the same screen, we can also see how the data looks in the source file for all columns. The File Data section displays the data of the SalesOrg file.
5. Click on the "Next" button.

A window for import data from the local file will appear.

Load the data
This is the final screen before we start the loading process. It displays the data that already exists in the table, as well as information about the schema and table to which we are going to load the data.
1. Detail section: displays the selected source file name, target table name, and target schema name.
2. Data from File: displays the data extracted from the file.
3. Once the displayed data in the Data from File section has been verified, click 'Finish' to start loading the data into the table.

After the import completes successfully, we should see an entry in the Job Log view with status 'Completed Successfully.'

Once the data import job is successful,
1. We can see the data in the table by selecting the table in the respective schema, then right clicking on the table -> 'Data Preview' as shown below.
2. The data of the table will be displayed in the Data Preview screen as below.

Summary: We have learned the following things for SAP HANA:
- SAP HANA data provisioning method overview.
- Loading data into a SAP HANA table through SAP System Landscape Transformation, with an example.
- Loading data into SAP HANA from SAP ECC through BODS, with an example.
- SAP DXC overview.
- Loading data from a flat file into a SAP HANA table.

Chapter 6: Modeling
SAP HANA Modelling is an activity by which we create information views. An information view is similar to a dimension, cube, or InfoProvider in BW. These information views are used for creating a multi-dimensional data model.

SAP HANA Modeling
Modelling is an activity in which the user refines or slices data in database tables by creating information views based on a business scenario. These information views can be used for reporting and decision-making purposes. An information view is made from various combinations of content data to create a model for a business scenario. Content data in an information view is of two types:
Attribute: descriptive and non-measurable data, e.g. Vendor ID, Vendor Name, City.
Measure: data that can be quantified and calculated, e.g. Revenue, Quantity Sold, and Counters. Measures are derived from analytic and calculation views; a measure cannot be created in an attribute view.

Types of Attributes
SAP HANA supports three types of attributes:

Type of Attribute | Activities
Simple Attribute | Derived from the data foundation.
Calculated Attribute | Derived from one or more existing attributes and constants, e.g. an arithmetic calculation or deriving the full name from the first and last name.
Local Attribute | Used inside modelling views (analytic view / calculation view) to customize the behavior of an attribute; it is local to the modelling view and cannot be accessed from outside it.

Types of Measures
SAP HANA supports four types of measures:

Type of Measure | Activities
Simple Measure | Derived from the data foundation.
Calculated Measure | Derived from one or more existing measures, constants, and functions, e.g. an arithmetic calculation.
Restricted Measure | Used to filter values based on user-defined rules for attribute values.
Counter | A special type of column that displays the unique count for attribute columns (analytic view / calculation view). It is used to count one or more attribute columns.
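Although these measure types are normally defined graphically inside an information view, their semantics can be sketched in plain SQL. The SALES table and its columns below are hypothetical, chosen only to illustrate the four definitions above:

```sql
-- Hypothetical SALES table; each expression illustrates one measure type.
SELECT
   REGION,
   SUM(QUANTITY * PRICE)                           AS REVENUE,        -- calculated measure
   SUM(CASE WHEN SALES_YEAR = 2016
            THEN QUANTITY ELSE 0 END)              AS QTY_2016,       -- restricted measure
   COUNT(DISTINCT CUSTOMER_ID)                     AS CUSTOMER_COUNT  -- counter
FROM SALES
GROUP BY REGION;   -- QUANTITY alone would be a simple measure
```

The modelling tools generate equivalent column-engine expressions for you; the SQL is only a mental model of what each measure type computes.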

Information views are of three types:
Attribute View – used for master data context.
Analytic View – used for creating fact tables; similar to a Cube in BW.
Calculation View – used for creating complex views; similar to a MultiProvider in BW.
In order to work in SAP HANA, the user requires privileges; below are the privileges required for SAP HANA modelling.

Privileges Required for Modelling

Privileges provide security to the SAP HANA database, so that an authorized user can access authorized content only.
Object Privileges – SQL privileges used to provide read/write access on database objects. The following object privileges are required for modelling:
- SELECT privilege on the _SYS_BI schema.
- SELECT privilege on the _SYS_BIC schema.
- EXECUTE privilege on REPOSITORY_REST (SYS).
- SELECT privilege on the table schema.
Package Privileges – required to authorize actions on individual packages. The following package privileges are required for data modelling:
- REPO.MAINTAIN_NATIVE_PACKAGES privilege on the root package.
- REPO.READ, REPO.EDIT_NATIVE_OBJECTS & REPO.ACTIVATE_NATIVE_OBJECTS on the packages used for content objects.
Analytic Privileges – required to access a SAP HANA information view:
- For full data access to all information views in the SAP HANA system, the "_SYS_BI_CP_ALL" analytic privilege is required.
- For restricted data access, analytic privileges need to be created and assigned to the user.
Other Privileges:
- Grant on your own schema to the _SYS_REPO user: GRANT SELECT ON SCHEMA "<schema name>" TO _SYS_REPO WITH GRANT OPTION;
- REPO.MAINTAIN_DELIVERY_UNITS for creating delivery units.
- REPO.IMPORT, REPO.EXPORT for import/export of delivery units.
- REPO.WORK_IN_FOREIGN_WORKSPACES for working in foreign workspaces.
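As a sketch, the object-privilege grants listed above could be issued like this. The user name MODEL_USER is an assumption; _SYS_BI, _SYS_BIC, _SYS_REPO, and DHK_SCHEMA come from this chapter:

```sql
-- Hypothetical modelling user MODEL_USER; privilege names follow the list above.
GRANT SELECT ON SCHEMA "_SYS_BI"  TO MODEL_USER;
GRANT SELECT ON SCHEMA "_SYS_BIC" TO MODEL_USER;
-- Allow the repository user to read our data schema when activating views:
GRANT SELECT ON SCHEMA "DHK_SCHEMA" TO _SYS_REPO WITH GRANT OPTION;
```

Without the _SYS_REPO grant on the data schema, activating an information view over tables in that schema typically fails, which is why it appears under "Other Privileges".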

SAP HANA Join Types
Joins in SAP HANA are used to join tables and information views and select values as per the requirement. The following join types are available for joining SAP HANA tables:

Join Type | Uses | Comment
INNER | Inner join selects the set of records that match in both tables. |
LEFT OUTER JOIN | Left outer join selects the complete set of records from the first table, with matching records from the second table (if available). | If there is no match in the second table, null values are selected from the second table.
RIGHT OUTER JOIN | Right outer join selects the complete set of records from the second table, with matching records from the first table (if available). | If there is no match in the first table, null values are selected from the first table.
FULL OUTER JOIN | Full outer join selects all records from both tables. |
REFERENTIAL JOIN | Same as an inner join, assuming that referential integrity is maintained between the two tables. | It is available only for attribute views and analytic views.
TEXT JOIN | Text joins are used to select language-specific texts. | It is used for getting the description of a column.
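The difference between the inner and left outer joins in the table above can be seen in plain SQL, using the PRODUCT and PRODUCT_DESC tables created later in this chapter (schema DHK_SCHEMA):

```sql
-- Inner join: only products that have a matching description row.
SELECT P."PRODUCT_ID", D."PRODUCT_NAME", P."PRICE"
FROM "DHK_SCHEMA"."PRODUCT" P
INNER JOIN "DHK_SCHEMA"."PRODUCT_DESC" D
   ON P."PRODUCT_ID" = D."PRODUCT_ID";

-- Left outer join: all products; PRODUCT_NAME is NULL when no description exists.
SELECT P."PRODUCT_ID", D."PRODUCT_NAME", P."PRICE"
FROM "DHK_SCHEMA"."PRODUCT" P
LEFT OUTER JOIN "DHK_SCHEMA"."PRODUCT_DESC" D
   ON P."PRODUCT_ID" = D."PRODUCT_ID";
```

With the sample data in this chapter, every product has a description, so both queries return the same five rows; delete one PRODUCT_DESC row and the two results diverge as described in the table.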

SAP HANA Best Practices for Creating Information Models

SAP HANA best practices are standards for creating objects in the SAP HANA database. Below are the best practices per object type:
PACKAGE:
- Create a top-level package like "Development" for development work.
- Create a sub-package under the top-level package for each developer.
- More sub-packages can also be created, if required.
SCHEMA:
- Design your schema layout before starting the project, e.g. DS_SCHEMA, SLT_SCHEMA, FI_SCHEMA, SD_SCHEMA, etc.
- Custom tables should be in a separate schema.
TABLES:
- Tables that will be used in reporting or OLAP should be column store type.
- Tables that will be used in transactions or OLTP should be row store type.
- Give proper comments/descriptions for table and column names, for clarity.
NAMING CONVENTION:

OBJECTS | Format | Description
ATTRIBUTE VIEWS | AT_PRODUCT | AT_... means Attribute View
ANALYTIC VIEWS | AN_SALES | AN_... means Analytic View
CALCULATION VIEWS | CA_SALES | CA_... means Calculation View
ANALYTIC PRIVILEGES | AP_REST_AT (Attribute View), AP_REST_AN (Analytic View), AP_REST_CA (Calculation View) | AP_... means Analytic Privileges
HIERARCHY | HI_BNAME_PC (Parent Child), HI_BNAME_LV (Level) | HI_... means Hierarchy
PROCEDURE | SP_PROCEDURENAME | SP_... means Stored Procedure
INPUT PARAMETERS | IP_PARA_NAME | IP_... means Input Parameter
VARIABLES | VA_VNAME | VA_... means Variable Name

Creating a Package in SAP HANA Studio
Package: It is a container that holds all the information about a model (attribute view, analytic view, calculation view, etc.) as a group.
Types of package: Packages are of two types:

Type | Description
Structural | In a structural package, only sub-packages can be created. No information views (attribute view, analytic view, etc.) can be created in a structural package. Examples of structural packages: sap, system-local, system-local.generated, system-local.private.
Non-Structural | A non-structural package can contain information objects and sub-packages. This is the default package type.

Uses of packages: Packages group all information models together and make transporting models easier. Both package types can be used in transporting.
Steps for creating a Structural Package in SAP HANA Studio:
STEP 1) In this step,

1. Select Hana System, here it is HDB. 2. Go to Content folder.

STEP 2) In this step, 1. Select New. 2. Select Package option.

STEP 3) In this step,
1. Enter the package name, e.g. "DHK_SCHEMA".
2. Enter a description for the package.
3. Original Language and Person Responsible are selected by default.

A non-structural package with the name "DHK_SCHEMA" will be created in the Content node as below.

STEP 4) Now, convert Non-Structural Package to Structural Package. 1. Select package "DHK_SCHEMA" and right click on it. 2. Go to edit option for the package.

STEP 5) In this step, 1. Select "Yes" in for Structural Options field. 2. Click on OK Button.

When our "DHK_SCHEMA" package is changed from non-structural to structural, its icon style changes. This is an indication that the non-structural package has been converted to a structural package.

Steps for creating a Non-Structural package under a Structural package as a sub-package.
A package is created by default as non-structural. In a non-structural package, other packages and information objects can be created. It is better to first create a structural package, and then create sub-packages in it.
STEP 1) In this step,
1. Select the structural package "DHK_SCHEMA" and right click on it.
2. Select New -> Package.

STEP 2) In this step, 1. Enter Sub-package name in Name field. 2. Enter description for it. 3. Click on "OK" Button.

A new non-structural package will be created as a sub-package under the DHK_SCHEMA package.

SAP HANA Attribute View
Attribute View: An attribute view acts like a dimension. It joins multiple tables and acts as master data. Attribute views are reusable objects. An attribute view has the following advantages:
- An attribute view acts as master data context, providing text or descriptions for key/non-key fields.
- Attribute views can be reused in analytic views and calculation views.
- Attribute views are used to select a subset of columns and rows from a database table.
- Attributes (fields) can be calculated from multiple table fields.
- There is no measure or aggregation option.

Attribute View Types: Attribute views are of 3 types:

Attribute View Type | Description
Standard | A standard attribute view, created from table fields.
Time | A time attribute view, based on the default time tables. For calendar type Gregorian: M_TIME_DIMENSION, M_TIME_DIMENSION_YEAR, M_TIME_DIMENSION_MONTH, M_TIME_DIMENSION_WEEK. For calendar type Fiscal: M_FISCAL_CALENDAR.
Derived | An attribute view derived from another existing attribute view. A derived attribute view opens in read-only mode; the only editable field is its description. Copy From – when you want to define an attribute view by copying an existing attribute view, you can use the "Copy From" option.

Note: The difference between Derived and Copy From is that in the case of Derived, you can only edit the description of the new attribute view, while in the case of Copy, you can modify everything.

Create Standard Attribute View
Standard attribute view creation has the following pre-defined steps:

Table Creation for Attribute View
Here we are going to create a standard attribute view for the product table, so first we create the "PRODUCT" and "PRODUCT_DESC" tables. The SQL scripts for table creation are shown below.
Product table script:
CREATE COLUMN TABLE "DHK_SCHEMA"."PRODUCT"
( "PRODUCT_ID" NVARCHAR (10) PRIMARY KEY,
  "SUPPLIER_ID" NVARCHAR (10),
  "CATEGORY" NVARCHAR (3),
  "PRICE" DECIMAL (5,2) );
INSERT INTO "DHK_SCHEMA"."PRODUCT" VALUES ('A0001','10000','A', 500.00);
INSERT INTO "DHK_SCHEMA"."PRODUCT" VALUES ('A0002','10000','B', 300.00);
INSERT INTO "DHK_SCHEMA"."PRODUCT" VALUES ('A0003','10000','C', 200.00);
INSERT INTO "DHK_SCHEMA"."PRODUCT" VALUES ('A0004','10000','D', 100.00);
INSERT INTO "DHK_SCHEMA"."PRODUCT" VALUES ('A0005','10000','A', 550.00);
Product description table script (note: no trailing comma after the last column):
CREATE COLUMN TABLE "DHK_SCHEMA"."PRODUCT_DESC"
( "PRODUCT_ID" NVARCHAR (10) PRIMARY KEY,
  "PRODUCT_NAME" NVARCHAR (10) );
INSERT INTO "DHK_SCHEMA"."PRODUCT_DESC" VALUES ('A0001','PRODUCT1');
INSERT INTO "DHK_SCHEMA"."PRODUCT_DESC" VALUES ('A0002','PRODUCT2');
INSERT INTO "DHK_SCHEMA"."PRODUCT_DESC" VALUES ('A0003','PRODUCT3');
INSERT INTO "DHK_SCHEMA"."PRODUCT_DESC" VALUES ('A0004','PRODUCT4');
INSERT INTO "DHK_SCHEMA"."PRODUCT_DESC" VALUES ('A0005','PRODUCT5');
Now the tables "PRODUCT" and "PRODUCT_DESC" are created in schema "DHK_SCHEMA".

Attribute View Creation
STEP 1) In this step,
1. Select the SAP HANA system.
2. Select the Content folder.
3. Select the non-structural package Modelling under package DHK_SCHEMA in the content node, and right click -> New.
4. Select the Attribute View option.

STEP 2) Now in the next window,
1. Enter the attribute name and label.
2. Select the view type, here Attribute View.
3. Select the subtype "Standard".
4. Click on the Finish button.

STEP 3) The information view editor screen will open. The details of each part of the information editor are as below:
1. Scenario Pane: In this pane the following nodes exist:
Semantics
Data Foundation
2. Details Pane: In this pane the following tabs exist:
Column
View Properties
Hierarchies
3. Semantics (Scenario Pane): This node represents the output structure of the view. Here it is Dimension.
4. Data Foundation (Scenario Pane): This node represents the tables that we use for defining the attribute view.
5. Here we drop tables for creating the attribute view.
6. Tabs (Columns, View Properties, Hierarchies) for the details pane will be displayed.
7. Local: Here all local attribute details will be displayed.
8. Show: Filter for local attributes.
9. Details of the attribute.
10. This is a toolbar for performance analysis, find column, validate, activate, data preview, etc.

STEP 4) To include database tables for creating the attribute view, click on the Data Foundation node and follow the instructions step by step as below:
1. Drag tables "PRODUCT" and "PRODUCT_DESC" from the Table node under DHK_SCHEMA.
2. Drop "PRODUCT" and "PRODUCT_DESC" on the Data Foundation node.
3. Select fields from the "PRODUCT" table as output in the details pane. The field icon color changes from grey to orange.
4. Select fields from the "PRODUCT_DESC" table as output in the details pane. The field icon color changes from grey to orange.
5. The fields selected as output from both tables appear under the Column list in the Output pane.

Join the "PRODUCT" table to "PRODUCT_DESC" on the "PRODUCT_ID" field. STEP 5) Select the join path, right-click on it and choose the Edit option. A screen for Edit Join Condition will appear.
1. Select Join Type "Inner".
2. Select cardinality "1..1".

After selecting the join type, click on the "OK" button. In the next step, we select the columns and define a key for the output. STEP 6) In this step, we will select the columns and define the key for the output:
1. Select the Semantics panel.
2. The Column tab will appear under the Detail pane.
3. Select "PRODUCT_ID" as Key.
4. Check the Hidden option for field PRODUCT_ID_1 (the PRODUCT_DESC table field).
5. Click on the Validate button.
6. After successful validation, click on the Activate button.

The Job Log for the validation and activation activity is displayed at the bottom of the screen on the same page, i.e. the Job Log section, as below -

STEP 7) An attribute view with the name "AT_PRODUCT" will be created. To view it, refresh the Attribute View folder.
1. Go to DHK_SCHEMA -> MODELLING Package.
2. The AT_PRODUCT attribute view is displayed under the Attribute View folder.

STEP 8) To view data in the attribute view,
1. Select the Data Preview option from the toolbar.
2. There will be two options for viewing data from the attribute view -
Open in Data Preview Editor (this will display data with analysis options).
Open in SQL Editor (this will display the output as plain SQL query output).

STEP 9) To see attribute view data in the Data Preview editor, there are 3 options - Analysis, Distinct Values and Raw Data.

Analysis: This is a graphical representation of the attribute view.
1. By selecting the Analysis tab, we select attributes for the label and value axes.
2. Drag and drop an attribute onto the label axis; it will display on the label axis (X-axis).
3. Drag and drop an attribute onto the value axis; it will display on the value axis (Y-axis).
4. The output will be available in the format of Chart, Table, Grid, and HTML.

Distinct Values: The distinct values of a column can be displayed here. This shows the total number of records for the selected attribute.

Raw Data tab: This option displays the data of the attribute view in table format.
1. Click on the Raw Data tab.
2. It will display the data in table format.

STEP 10) View attribute view data from the SQL editor as below -

This option displays data through an SQL query on the column view under the "_SYS_BIC" schema. A column view with the name "AT_PRODUCT" is created after activation of the attribute view. It can be used to see the SQL query used for displaying data from the view.
1. Display the SQL query for data selection.
2. Display the output.

When an attribute view is activated, a column view under the _SYS_BIC schema is created. So, when we run Data Preview, the system selects data from the column view under the _SYS_BIC schema. A screenshot of the column view "AT_PRODUCT" under the "_SYS_BIC" schema of the catalog node is as below -
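As a sketch, the generated column view can also be queried directly in the SQL console. The exact name depends on your package path; "DHK_SCHEMA.MODELLING" is assumed here:

```sql
-- Query the column view generated on activation (package path assumed)
SELECT * FROM "_SYS_BIC"."DHK_SCHEMA.MODELLING/AT_PRODUCT";
```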

SAP HANA Analytic View

The SAP HANA Analytic view is based on STAR schema modelling, and it represents OLAP/multi-dimensional modelling objects. In an SAP HANA Analytic view, dimension tables are joined with a fact table that contains transaction data. A dimension table contains descriptive data (e.g. Product, Product Name, Vendor, Customer, etc.). A fact table contains both descriptive data and measurable data (Amount, Tax, etc.). The SAP HANA Analytic view forms a cube-like structure, which is used for analysis of data. An Analytic view is mainly used in scenarios where we need aggregated data from the underlying tables.

Example: Here we create an analytic view for purchase orders based on the earlier created attribute view "AT_PRODUCT". We use the Purchase Order Header and Purchase Order Detail tables for it.

SQL script to create table "PURCHASE_ORDER" in "DHK_SCHEMA":

CREATE COLUMN TABLE "DHK_SCHEMA"."PURCHASE_ORDER" (
PO_NUMBER NVARCHAR(10) PRIMARY KEY,
COMPANY NVARCHAR(4),

PO_CATEGORY NVARCHAR(2),
PRODUCT_ID NVARCHAR(10),
VENDOR NVARCHAR(10),
TERMS NVARCHAR(4),
PUR_ORG NVARCHAR(4),
PUR_GRP NVARCHAR(3),
CURRENCY NVARCHAR(5),
QUOTATION_NO NVARCHAR(10),
PO_STATUS VARCHAR(1),
CREATED_BY NVARCHAR(20),
CREATED_AT DATE
);
INSERT INTO "DHK_SCHEMA"."PURCHASE_ORDER" VALUES(1000001,1000,'MM','A0001','V000001','CASH',1000,'GR1','INR',1000011,'A','HANAUSER','2016-01-07');

INSERT INTO "DHK_SCHEMA"."PURCHASE_ORDER" VALUES(1000002,2000,'MM','A0002','V000001','CASH',1000,'GR1','INR',1000012,'A','HANAUSER','2016-01-06');

INSERT INTO "DHK_SCHEMA"."PURCHASE_ORDER" VALUES(1000003,2000,'MM','A0003','V000001','CASH',1000,'GR1','INR',1000013,'A','HANAUSER','2016-01-07');
INSERT INTO "DHK_SCHEMA"."PURCHASE_ORDER" VALUES(1000004,2000,'MM','A0004','V000001','CASH',1000,'GR1','INR',1000014,'A','HANAUSER','2016-01-07');

SQL script to create table "PURCHASE_DETAIL" in "DHK_SCHEMA":

CREATE COLUMN TABLE "DHK_SCHEMA"."PURCHASE_DETAIL" (
PO_NUMBER NVARCHAR(10) PRIMARY KEY,
COMPANY NVARCHAR(4),
PO_CATEGORY NVARCHAR(2),
PRODUCT_ID NVARCHAR(10),
PLANT NVARCHAR(4),
STORAGE_LOC NVARCHAR(4),
VENDOR NVARCHAR(10),
TERMS NVARCHAR(4),
PUR_ORG NVARCHAR(4),
PUR_GRP NVARCHAR(3),
CURRENCY NVARCHAR(5),
QUANTITY SMALLINT,
QUANTITY_UNIT VARCHAR(3),
ORDER_PRICE DECIMAL(8,2),
NET_AMOUNT DECIMAL(8,2),

GROSS_AMOUNT DECIMAL(8,2), TAX_AMOUNT DECIMAL(8,2) );

INSERT INTO "DHK_SCHEMA"."PURCHASE_DETAIL" VALUES(1000001,1000,'MM','A0001',1000,'SL01','V000001','CASH',1000,'GR1','INR',10,'UNIT',50000.00,40000.00,50000.00,10000.00);
INSERT INTO "DHK_SCHEMA"."PURCHASE_DETAIL" VALUES(1000002,2000,'MM','A0002',1000,'SL01','V000002','CASH',1000,'GR1','INR',10,'UNIT',60000.00,48000.00,60000.00,12000.00);
INSERT INTO "DHK_SCHEMA"."PURCHASE_DETAIL" VALUES(1000003,2000,'MM','A0003',1000,'SL01','V000001','CASH',1000,'GR1','INR',20,'UNIT',40000.00,32000.00,40000.00,8000.00);
INSERT INTO "DHK_SCHEMA"."PURCHASE_DETAIL" VALUES(1000004,2000,'MM','A0004',1000,'SL01','V000002','CASH',1000,'GR1','INR',20,'UNIT',20000.00,16000.00,20000.00,4000.00);

With these scripts, two tables, "PURCHASE_ORDER" and "PURCHASE_DETAIL", are created with data.
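The fact-table join that the analytic view will perform in Step 5 can be previewed with plain SQL; a sketch using columns from the scripts above:

```sql
-- Header joined to detail on PO_NUMBER, as the analytic view will do
SELECT H."PO_NUMBER", H."PRODUCT_ID", H."PO_STATUS", H."CREATED_BY",
       D."QUANTITY", D."GROSS_AMOUNT", D."TAX_AMOUNT"
FROM "DHK_SCHEMA"."PURCHASE_ORDER" AS H
INNER JOIN "DHK_SCHEMA"."PURCHASE_DETAIL" AS D
   ON H."PO_NUMBER" = D."PO_NUMBER";
```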

SAP HANA Analytic View Creation

We are going to create an SAP HANA Analytic view with the name "AN_PURCHASE_ORDERS", using the already created attribute view "AT_PRODUCT" and the tables "PURCHASE_ORDER" and "PURCHASE_DETAIL".

STEP 1) In this step,
1. Select the Modelling sub-package under the DHK_SCHEMA package.
2. Right-click -> New.

3. Select Analytic View option.

STEP 2) The Information View editor will be displayed for the Analytic view -
1. Enter the Analytic view name "AN_PURCHASE_ORDERS" and a label for it.
2. Select the View type "Analytic View".

Once done, click on the Finish button. The Information View editor will be displayed for the analytic view.

STEP 3) Add tables from the schema in the Data Foundation node under the Scenario pane. There will be three nodes under the Scenario pane:
1. Semantics: This node represents the output structure of the view.
2. Star Join: This node creates joins in order to join attribute views with the fact table.
3. Data Foundation: In this node, we add the FACT table for the Analytic view. Multiple tables can be added, but measures can be selected from only one table.
4. Drag and drop the tables "PURCHASE_ORDER" and "PURCHASE_DETAIL" from DHK_SCHEMA to the Data Foundation node of the Scenario pane.

STEP 4) Add the attribute view in the Star Join node.

1. Select the "AT_PRODUCT" attribute view from the Modelling package.
2. Drag and drop the attribute view onto the Star Join node.

STEP 5) In the same window, in the detail panel, do as directed:
1. Click on the data foundation node. The tables added in the data foundation node will be displayed in the Detail section.
2. Join the table "PURCHASE_ORDER" to the table "PURCHASE_DETAIL" on the "PO_NUMBER" field.
3. Enter the join type and cardinality.

Click on the OK button. STEP 6) In the same window,
1. Select PO_NUMBER, COMPANY, PO_CATEGORY, PRODUCT_ID, PLANT, STORAGE_LOC from the "PURCHASE_DETAIL" table.
2. Select the CURRENCY column from the "PURCHASE_DETAIL" table.
3. Select GROSS_AMOUNT, TAX_AMOUNT.
4. Select the PO_STATUS, CREATED_BY, CREATED_AT columns from the "PURCHASE_ORDER" table.

All selected columns (orange color) will be displayed in the output of the analytic view. STEP 7) Now we join the attribute view to our fact table (data foundation). Click on the Star Join node in the Scenario pane, as below -

The attribute view and fact table will be displayed in the detail pane. Now we join the attribute view to the fact table as below: join the attribute view with the Data Foundation on the "PRODUCT_ID" column.

Click on the join link; a pop-up for Edit Join will be displayed. Define the Join type as "Referential" and Cardinality "1..1".

Click on the OK button. STEP 8) In this step, we define the attributes, measures and key for the view.
1. Select the Semantics node in the Scenario pane.
2. Select the Columns tab under the Details pane.
3. Define the column types as attribute or measure; here all columns are defined as attributes except "GROSS_AMOUNT", which is defined as a measure.

STEP 9) Validate and activate the analytic view.
1. Validate the analytic view.
2. Activate the analytic view.

Now the analytic view "AN_PURCHASE_ORDERS" will be created and activated in the Analytic View folder of the Modelling sub-package as -

STEP 10) Preview data in the analytic view.
1. Go to the toolbar section and click on the "Data Preview" icon.
2. Select Open in Data Preview Editor.

Again we have 3 options to see data in the Data Preview editor -
1. Analysis - In this tab, we drag and drop attributes and measures onto the label axis and value axis. We can see the output in Chart, Table, Grid, and HTML format.

2. Distinct Values - Shows the distinct values for the selected attribute. We can select only one attribute at a time.

3. Raw Data - It will show the data in table format in the Raw Data tab as below -

Note: An SAP HANA Analytic view can contain only attribute views and does not support Union.
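Once activated, the analytic view can also be queried with aggregation in the SQL console; a sketch (the column-view name under _SYS_BIC depends on your package path, which is assumed here):

```sql
-- Aggregate the GROSS_AMOUNT measure per product (package path assumed)
SELECT "PRODUCT_ID", SUM("GROSS_AMOUNT") AS "TOTAL_GROSS"
FROM "_SYS_BIC"."DHK_SCHEMA.MODELLING/AN_PURCHASE_ORDERS"
GROUP BY "PRODUCT_ID";
```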

SAP HANA Calculation View

The SAP HANA calculation view is a powerful information view. In an SAP HANA Analytic view, measures can be selected from only one fact table. When more than one fact table is needed in an information view, the calculation view comes into the picture. The calculation view supports complex calculations. The data foundation of a calculation view can include tables, column views, analytic views and calculation views. We can create joins, unions, aggregations, and projections on data sources. A calculation view can contain multiple measures and be used for multidimensional reporting, or contain no measures and be used for list-type reporting.

Characteristics of the SAP HANA Calculation view:
Supports complex calculations.
Supports OLTP and OLAP models.
Supports client handling, language, currency conversion.
Supports Union, Projection, Aggregation, Rank, etc.

SAP HANA Calculation views are of two types:
1. SAP HANA Graphical Calculation views (created with the SAP HANA Studio graphical editor).
2. SAP HANA Script-based Calculation views (created with SQL scripts in SAP HANA Studio).

SAP HANA Graphical Calculation View

In an SAP HANA Analytic view, we can select measures from one table only. So when a view requires measures from different tables, it cannot be achieved with an analytic view, but it can with a calculation view. In this case, we can use two different analytic views, one for each table, and join them in a calculation view.

We are going to create a graphical calculation view "CA_FI_LEDGER" by joining two analytic views, "AN_PURCHASE_ORDERS" and "AN_FI_DOCUMENT". CA_FI_LEDGER will display finance document details related to a purchase order.

STEP 1) In this step,
1. Go to the package (here Modelling) and right-click.
2. Select the New option.
3. Select Calculation View.

A Calculation View editor will be displayed, in which the Scenario panel displays as below -

The details of the Scenario panel are as below -

Palette: This section contains the below nodes that can be used as sources to build our calculation views. There are 5 different types of nodes:
1. Join: This node is used to join two source objects and pass the result to the next node. The join types can be inner, left outer, right outer and text join. Note: We can only add two source objects to a join node.
2. Union: This is used to perform a union-all operation between multiple sources. The source can be any number of objects.
3. Projection: This is used to select columns, filter the data and create additional columns before we use it in the next nodes, such as a union, aggregation or rank. Note: We can only add one source object to a projection node.
4. Aggregation: This is used to perform aggregation on specific columns based on the selected attributes.
5. Rank: This is the exact replacement for the RANK function in SQL. We can define the partition and order-by clause based on the requirement.

STEP 2)
1. Click a Projection node in the palette and drag and drop it to the scenario area for the purchase order analytic view. Rename it to "Projection_PO".
2. Click a Projection node in the palette and drag and drop it to the scenario area for the FI document analytic view. Rename it to "Projection_FI".
3. Drag and drop the analytic views "AN_PURCHASE_ORDERS" and "AN_FI_DOCUMENT" from the Content folder onto the "Projection_PO" and "Projection_FI" nodes respectively.
4. Click a Join node in the palette and drag and drop it to the scenario area.
5. Join the Projection_PO node to the Join_1 node.
6. Join the Projection_FI node to the Join_1 node.
7. Click an Aggregation node in the palette and drag and drop it to the scenario area.
8. Join the Join_1 node to the Aggregation node.

We have added two analytic views for creating a calculation view.

STEP 3) Click on the Join_1 node under Aggregation and you can see the detail section displayed.
1. Select all columns from the Projection_PO node for output.
2. Select all columns from the Projection_FI node for output.
3. Join the Projection_PO node to the Projection_FI node on column Projection_PO.PO_NUMBER = Projection_FI.PO_NO.

STEP 4) In this step,
1. Click on the Aggregation node and the detail will be displayed on the right side of the pane.
2. Select columns for output from Join_1, displayed on the right side in the detail window.

STEP 5) Now, click on Semantics Node.

The detail screen will be displayed as below. Define the attribute or measure type for each column, and also mark the keys for this output.
1. Define attributes and measures.
2. Mark PO_NUMBER and COMPANY as keys.
3. Mark ACC_DOC_NO as a key.

STEP 6) Validate and Activate calculation View, from the top bar of the window.

1. Click on Validate Icon. 2. Click on Activate Icon. Calculation View will be activated and will display under Modelling Package as below –

Select the calculation view and right-click -> Data Preview. We have added two analytic views and selected the measures (TAX_AMOUNT, GROSS_AMOUNT) from both analytic views. The Data Preview screen will be displayed as below -
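The activated calculation view can likewise be queried directly in the SQL console; a sketch (package path assumed):

```sql
-- Aggregate both measures from the calculation view (package path assumed)
SELECT "PO_NUMBER", SUM("GROSS_AMOUNT") AS "GROSS", SUM("TAX_AMOUNT") AS "TAX"
FROM "_SYS_BIC"."DHK_SCHEMA.MODELLING/CA_FI_LEDGER"
GROUP BY "PO_NUMBER";
```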

SAP HANA Analytic Privileges

Analytic Privileges restrict users to viewing only the data they are authorized to see. SAP HANA Analytic Privileges are used for security purposes. SQL privileges provide authorization at the object level, not at the record level, so to provide record- or row-level authorization, "Analytic Privileges" are used. SAP HANA Analytic Privileges are used to provide authorization on the below information views -

Attribute View
Analytic View

Calculation View

Now we are going to create an Analytic Privilege and assign it to user "ABHI_TEST"; with this Analytic Privilege we restrict the user to viewing data only for the company with value 1000.

Step 1) Go to the option as below:
Modelling package (right-click) -> New -> Analytic Privilege.

Step 2) The New Analytic Privilege popup appears.
1. Enter the Analytic Privilege name / label.
2. The package name is automatically selected.
3. In the selection options, choose "Create New".

Click on the 'OK' button; in the next step the Analytic Privileges editor will be displayed for adding and editing privileges. Step 3) The Analytic Privileges editor will open as below -

1. In the General section, the name and label are displayed.
2. Click on the "Add" button in the Reference Models section.
3. Select the calculation view (CA_FI_LEDGER) which we created earlier.
4. Click on the Add button to create the validity of the privilege.
5. Assign the privilege validity.
6. Click on the Add button to select the attribute on which the restriction is assigned.
7. Select the attribute COMPANY.
8. Click on the Add button to assign a value to the attribute for restriction.
9. Assign the value by selecting type/operator and value. Here we want to restrict the user to seeing data of the calculation view for only company 1000.

Validate and activate the Analytic Privilege; an analytic privilege is created in the Analytic Privileges folder under the Modelling package as below -
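For reference, an equivalent restriction can also be expressed in SQL as a structured privilege. This is only a sketch: the privilege name is hypothetical, the package path is assumed, and the graphical editor normally generates the definition for you:

```sql
-- Row-level restriction on COMPANY = 1000 (privilege name is hypothetical)
CREATE STRUCTURED PRIVILEGE "AP_COMPANY_1000"
  FOR SELECT ON "_SYS_BIC"."DHK_SCHEMA.MODELLING/CA_FI_LEDGER"
  WHERE "COMPANY" = '1000';
```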

Step 4) Now we are going to assign the Analytic Privilege to user "ABHI_TEST". The "ABHI_TEST" user has privileges to access the Modelling package. Double-click on user "ABHI_TEST" from Security -> Users.
1. Select the Analytic Privileges tab.
2. Click on the "+" button.
3. A pop-up to select Analytic Privileges will be displayed. Enter the name of the Analytic Privilege which we created earlier.
4. Select the Analytic Privilege.
5. Click on the OK button.

The Analytic Privilege is added as below -

Step 5) Now, we deploy these changes to the user by clicking the Deploy button.

A message is displayed as below - User 'ABHI_TEST' changed.

Step 6) Check whether the Analytic Privilege assigned to user "ABHI_TEST" is working or not. Log in as the "ABHI_TEST" user by selecting the HANAUSER system as below -
1. Select the HDB (HANAUSER) current system and right-click.
2. Select "Add System with Different User", and enter the user name/password for user "ABHI_TEST".
3. A system HDB (ABHI_TEST) will be added to the system list.

The ABHI_TEST user does not have full access to the data of the calculation view created by HANAUSER, as HANAUSER created an analytic privilege restricting this calculation view to company 1000 and assigned it to the ABHI_TEST user.

So, go to the Content folder -> select the package -> calculation view (CA_FI_LEDGER) -> right-click -> Data Preview. The Data Preview screen will be displayed as below: data in the calculation view is restricted to company code 1000.

SAP HANA Information Composer

The SAP HANA Information Composer is a web application that allows us to do modelling and upload local data to the SAP HANA database. It is a modelling environment for non-technical people, such as end users. The Information Composer works in a similar way to the SAP HANA Modeler and is used by business users with less technical knowledge. A large amount of data (up to 5 million cells) can be uploaded using the Information Composer.

Roles required to work with the SAP HANA Information Composer:
IC_MODELLER: Allows the user to work with the Information Composer, load data and create information views.
IC_PUBLIC: Allows the user to work with the Information Composer, and see workbooks and information views.

SAP HANA Import and Export

The Import and Export options of SAP HANA provide features to move tables, information views and landscapes to another system or within the same system.

Export: STEP 1) Go to the File menu -> choose Export.

A pop-up for Export will be displayed. There are two export option groups for SAP HANA objects:

1. SAP HANA
Catalog Objects: Used to export catalog objects (table, view, procedure, etc.).
Landscape: Used to export a landscape from one system to another.
2. SAP HANA Content
Change and Transport System (CTS): Used to export information views with an ABAP program.
Delivery Unit: A delivery unit is a single unit. This option is used to export multiple packages which are mapped to a single delivery unit.
Developer Mode: This option can be used to export individual objects to a location in the local system.
SAP Support Mode: This can be used to export objects with their data for SAP support purposes.

Import: STEP 1) Go To File Menu->Choose Import.

A pop-up for the Import options will be displayed. There are two import option groups for SAP HANA objects:

1. SAP HANA
Catalog Objects: Used to import catalog objects (table, view, procedure, etc.).
ESRI Shapefiles: Environmental Systems Research Institute, Inc. (ESRI) shapefile formats are used to store geometry data and attribute information for the spatial features in a data set.
Landscape: Used to import a landscape from one system to another.
2. SAP HANA Content
Data from Local File: Used to import data from a .csv, .xls or .xlsx file into a table.
Delivery Unit: A delivery unit is a single unit. This option is used to import multiple packages which are mapped to a single delivery unit.
Developer Mode: This option can be used to import individual objects from a location in the local system.
Mass Import of Metadata: This can be used to import the metadata of many objects.
SAP NetWeaver BW Models: This can be used to import BW models into SAP HANA.
Selective Import of Metadata: This can be used to import the metadata of single objects into SAP HANA.

SAP HANA Performance Optimization Techniques

The following rules apply for performance optimization:

All information views and table views should be used with a projection node. Projection nodes improve performance by narrowing the column set.
Apply filters at projection nodes.
Avoid JOIN nodes in calculation views; use UNION instead.
Use input parameters / variables to restrict the dataset within analytic / calculation views.
Calculations should be done before aggregation.
Hierarchies need to be re-defined in calculation views; hierarchies of an attribute view are not visible in a calculation view. Hierarchies of an attribute view are visible in an analytic view.
The labels of attributes and the descriptions of measures defined in attribute views and analytic views are not displayed in a calculation view; we need to re-map them.
Do not mix CE functions and SQL script in an information model.

Summary: In this tutorial of SAP HANA, we have learned about modelling concepts (joins, packages, information views, Analytic Privileges, users and roles, etc.). Apart from these, we have learned in detail about:

SAP HANA Modelling.
Different types of joins in SAP HANA.
SAP HANA modelling best practices.
Creating packages (structural / non-structural) in SAP HANA.
SAP HANA Attribute view creation for the Product table.
SAP HANA Analytic view creation for purchase orders.
SAP HANA Calculation view creation on two analytic views.
SAP HANA Analytic Privilege creation and assignment to another user.
SAP HANA Information Composer overview.
Different methods of SAP HANA import and export.
How to use SAP HANA performance optimization techniques.

Chapter 7: Security

Security in SAP HANA means protecting important data from unauthorized access and ensuring that the security standards and compliance requirements adopted by the company are met.

SAP HANA Security: Overview

SAP HANA provides a facility, the multitenant database, in which multiple databases can be created on a single SAP HANA system. This is known as a multitenant database container. SAP HANA provides all security-related features for each multitenant database container.

SAP HANA provides the following security-related features:
User and Role Management
Authorization
Authentication
Encryption of data in the Persistence Layer
Encryption of data in the Network Layer

SAP HANA User and Role
SAP HANA user and role management configuration depends on the architecture, as below:
1. 3-Tier Architecture. SAP HANA can be used as a relational database in a 3-tier architecture. In this architecture, security features (authorization, authentication, encryption, and auditing) are installed on the application server layer. SAP applications (ERP, BW, etc.) connect to the database only with the help of a technical user or database administrator (Basis person). End users cannot directly access the database or database server.

2. 2-Tier Architecture. SAP HANA Extended Application Services (SAP HANA XS) is based on a 2-tier architecture, in which the application server, web server and development environment are embedded in a single system.

SAP HANA Authentication

A database user identifies who is accessing the SAP HANA database. This is verified through a process named "authentication". SAP HANA supports many authentication methods. Single Sign-On (SSO) is used to integrate several authentication methods.

SAP HANA supports the following authentication methods:

Kerberos: It can be used in the following cases -
Directly from JDBC and ODBC clients (SAP HANA Studio).
When HTTP is used to access SAP HANA XS.

User Name / Password: When the user enters their database user name and password, the SAP HANA database authenticates the user.

Security Assertion Markup Language (SAML): SAML can be used to authenticate SAP HANA users who access the SAP HANA database directly through ODBC/JDBC. It is a process of mapping an external user identity to an internal database user, so the user can log in to the SAP database with the external user id.

SAP Logon and Assertion Tickets: The user can be authenticated by logon or assertion tickets, which are configured and issued to the user.

X.509 Client Certificates: When SAP HANA XS is accessed via HTTP, client certificates signed by a trusted Certification Authority (CA) can be used to authenticate the user.

SAP HANA Authorization

SAP HANA authorization is required when a user uses a client interface (JDBC, ODBC, or HTTP) to access the SAP HANA database. Depending on the authorization provided, the user can perform database operations on database objects. These authorizations are called "privileges". Privileges can be granted to a user directly or indirectly (through roles). All privileges assigned to a user are combined as a single unit.

When a user tries to access any SAP HANA database object, the HANA system performs an authorization check on the user through their roles and directly granted privileges. When the requested privilege is found, the HANA system skips further checks and grants access to the requested database object.

In SAP HANA, there are the following privilege types:

System Privileges: These control normal system activity. System privileges are mainly used for:
Creating and deleting schemas in the SAP HANA database
Managing users and roles in the SAP HANA database
Monitoring and tracing of the SAP HANA database
Performing data backups
Managing licenses
Managing versions
Managing audits
Importing and exporting content
Maintaining delivery units

Object Privileges: Object privileges are SQL privileges that are used to give authorization to read and modify database objects. To access a database object, a user needs object privileges on the database object or on the schema in which the database object exists. Object privileges can be granted on catalog objects (table, view, etc.) or non-catalog objects (development objects). Object privileges are as below:
CREATE ANY
UPDATE, INSERT, SELECT, DELETE, DROP, ALTER, EXECUTE
INDEX, TRIGGER, DEBUG, REFERENCES
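Granting and revoking object privileges is done with plain SQL; a minimal sketch using names from this tutorial:

```sql
-- Grant read access on the whole schema, then on a single table
GRANT SELECT ON SCHEMA "DHK_SCHEMA" TO ABHI_TEST;
GRANT SELECT, INSERT ON "DHK_SCHEMA"."PRODUCT" TO ABHI_TEST;
-- Revoke a previously granted privilege
REVOKE INSERT ON "DHK_SCHEMA"."PRODUCT" FROM ABHI_TEST;
```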

Analytic Privileges: Analytic privileges are used to allow read access to data of SAP HANA information models (attribute views, analytic views, calculation views). This privilege is evaluated during query processing. Analytic privileges grant different users access to different parts of the data in the same information view based on the user's role. Analytic privileges are used in the SAP HANA database to provide row-level data control, so that individual users see only their part of the data in the same view.

Package Privileges: Package privileges are used to provide authorization for actions on individual packages in the SAP HANA repository.

Application Privileges: Application privileges are required in SAP HANA Extended Application Services (SAP HANA XS) to access applications. Application privileges are granted and revoked through the GRANT_APPLICATION_PRIVILEGE and REVOKE_APPLICATION_PRIVILEGE procedures in the _SYS_REPO schema.
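A sketch of such a call ("com.acme.app::execute" is a hypothetical privilege name for illustration):

```sql
-- Grant a hypothetical XS application privilege to a user
CALL "_SYS_REPO"."GRANT_APPLICATION_PRIVILEGE"('"com.acme.app::execute"', 'ABHI_TEST');
```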

Privileges on Users: These are SQL privileges which users can grant on their own user. ATTACH DEBUGGER is the only privilege that can be granted to a user.

SAP HANA User Administration and Role Management

To access the SAP HANA database, users are required. Depending on the security policy, there are two types of users in SAP HANA:

1. Technical User (DBA User): A user who works directly with the SAP HANA database with the necessary privileges. Normally, these users do not get deleted from the database. These users are created for administrative tasks such as creating objects and granting privileges on database objects or applications. The SAP HANA database system provides the following standard users by default:
SYSTEM
SYS
_SYS_REPO

2. Database or Real User: Each person who wants to work on the SAP HANA database needs a database user. Database users are the real people who work on SAP HANA. There are two types of database users:

Standard User: This user can create objects in their own schema and read data in system views. A standard user is created with the "CREATE USER" statement. The PUBLIC role is assigned for reading system views.

Restricted User: A restricted user has no full SQL access via the SQL console and is created with the "CREATE RESTRICTED USER" statement. If privileges are required for the use of any application, they are provided through roles. Restricted users:
cannot create database objects;
cannot view data in the database;
connect to the database through HTTP only. ODBC/JDBC access for client connections must be enabled with an SQL statement; the RESTRICTED_USER_ODBC_ACCESS or RESTRICTED_USER_JDBC_ACCESS role is required for full ODBC/JDBC functionality.
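Both user types can also be created in the SQL console; a minimal sketch (the passwords and the restricted user's name are placeholders):

```sql
-- Standard user: full SQL console access to own schema
CREATE USER ABHI_TEST PASSWORD "Welcome1";
-- Restricted user: no SQL console access by default
CREATE RESTRICTED USER REST_USER PASSWORD "Welcome1";
-- Enable JDBC client connections for the restricted user
GRANT RESTRICTED_USER_JDBC_ACCESS TO REST_USER;
```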

An SAP HANA user administrator has access to the following activities:
1. Create/delete users.
2. Define and create roles.
3. Grant roles to users.
4. Reset user passwords.
5. Re-activate / de-activate users according to requirements.

1. Create User in SAP HANA: Only a database user with the USER ADMIN (for users) and ROLE ADMIN (for roles) system privileges can create users and roles in SAP HANA. Step 1) To create a new user in SAP HANA Studio, go to the Security tab as shown below and follow these steps:
1. Go to the security node.
2. Select Users (right-click) -> New User.

Step 2) A user creation screen appears.
1. Enter the user name.
2. Enter the password for the user.
3. These are the authentication mechanisms; by default, user name / password is used for authentication.

By clicking on the Deploy button, the user will be created.

2. Define and Create Role

A role is a collection of privileges that can be granted to other users or roles. A role includes privileges for database objects and applications, depending on the nature of the job. It is the standard mechanism for granting privileges; privileges can also be granted directly to the user. There are many standard roles (e.g. MODELLING, MONITORING, etc.) available in the SAP HANA database. We can use a standard role as a template for creating a custom role. A role can contain the following privileges:

System privileges for administrative and development tasks (CATALOG READ, AUDIT ADMIN, etc.)
Object privileges for database objects (SELECT, INSERT, DELETE, etc.)
Analytic privileges for SAP HANA information views
Package privileges on repository packages (REPO.READ, REPO.EDIT_NATIVE_OBJECTS, etc.)
Application privileges for SAP HANA XS applications.

Privileges on users (for debugging of procedures).

Role Creation
Step 1) In this step,
1. Go to the Security node in the SAP HANA system.
2. Select the Roles node (right-click) and select New Role.

Step 2) A role creation screen is displayed.

1. Give the role name under the New Role block.
2. Select the Granted Roles tab, and click the "+" icon to add a standard role or an existing role.
3. Select the desired role (e.g. MODELLING, MONITORING, etc.).

STEP 3) In this step,
1. The selected role is added in the Granted Roles tab.
2. Privileges can be assigned to the role directly by selecting System Privileges, Object Privileges, Analytic Privileges, Package Privileges, etc.
3. Click on the deploy icon to create the role.

Tick the option "Grantable to other users and roles" if you want to allow this role to be assigned to other users and roles.

3. Grant Role to User
STEP 1) In this step, we will assign the role "MODELLING_VIEW" to another user, "ABHI_TEST".
1. Go to the User sub-node under the Security node and double-click it. The user window will show.
2. Click on the Granted Roles "+" icon.
3. A pop-up will appear; search for the role name which will be assigned to the user.

STEP 2) In this step, the role "MODELLING_VIEW" will be added under Granted Roles.

STEP 3) In this step,
1. Click on the Deploy button.
2. A message "User 'ABHI_TEST' changed" is displayed.

4. Resetting User Password

If a user password needs to be reset, go to the User sub-node under the Security node and double-click it. The user window will show.

Step 1) In this step,

1. Enter the new password.
2. Enter the password again to confirm it.

Step 2) In this step,

1. Click on the Deploy button.
2. A message "User 'ABHI_TEST' changed" is displayed.
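The password reset can also be done in SQL; a sketch (the password shown is a placeholder):

```sql
-- Reset the user's password from the SQL console
ALTER USER ABHI_TEST PASSWORD "NewSecret1";
```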

5. Re-Activate/De-activate User

Go to the User sub-node under the Security node and double-click it. The user window will show. There is a De-Activate User icon; click on it.

A confirmation pop-up will appear. Click on the 'Yes' button.

A message "User 'ABHI_TEST' deactivated" will be displayed. The De-Activate icon changes its name to "Activate User", so we can re-activate the user from the same icon.
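Deactivation and re-activation are also available as SQL statements; a sketch using the same example user:

```sql
-- Deactivate the user (equivalent of the De-Activate User icon)
ALTER USER ABHI_TEST DEACTIVATE USER NOW;

-- Re-activate the user
ALTER USER ABHI_TEST ACTIVATE USER NOW;
```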

SAP HANA License Management

A license key is required to use the SAP HANA database. A license key can be installed and deleted using SAP HANA Studio, the SAP HANA HDBSQL command-line tool, or the HANA SQL query editor. The SAP HANA database supports two types of license key –

Permanent License Key: Permanent license keys are valid until their expiration date. We need to request and apply a new license key before the current one expires. If a permanent license key expires, a temporary license key is automatically installed for 28 days, during which a new permanent key can be requested.

Temporary License Key: This is automatically installed with a new SAP HANA database installation. It is valid for 90 days, within which a permanent key can be requested from SAP.

Authorization for License Management

The "LICENSE ADMIN" system privilege is required for license management.
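From the HANA SQL query editor, the license state can be inspected and keys managed with the statements below; a sketch, with the license key text left as a placeholder:

```sql
-- Check the currently installed license key(s) and their validity
SELECT * FROM M_LICENSE;

-- Install a permanent license key (the key text comes from the SAP license file)
-- SET SYSTEM LICENSE '<contents of the license key file>';

-- Delete all installed license keys
-- UNSET SYSTEM LICENSE ALL;
```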

SAP HANA Auditing

The SAP HANA auditing feature allows you to monitor and record actions performed in the SAP HANA system. This feature must be activated for the system before creating an audit policy.

Authorization for SAP HANA Auditing

The "AUDIT ADMIN" system privilege is required for SAP HANA auditing.
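Activating auditing and creating a policy can be done in SQL as well; a sketch (the policy name is an example, and the statements assume the AUDIT ADMIN privilege):

```sql
-- Activate global auditing for the system
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('auditing configuration', 'global_auditing_state') = 'true'
  WITH RECONFIGURE;

-- Create and enable a simple audit policy (policy name is hypothetical)
CREATE AUDIT POLICY "AUDIT_GRANTS"
  AUDITING SUCCESSFUL GRANT PRIVILEGE
  LEVEL INFO;
ALTER AUDIT POLICY "AUDIT_GRANTS" ENABLE;
```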

Summary: In this tutorial, we have covered the following topics –

SAP HANA Security overview.
SAP HANA Authentication in detail.
SAP HANA Authorization in detail.
SAP HANA User Administration.
SAP HANA Role Administration.
SAP HANA License Management.
SAP HANA Auditing.

Chapter 8: Reporting

So far we have loaded data from SAP, non-SAP, and flat-file sources through SAP SLT and SAP BODS, and created information views (Attribute View, Analytic View, and Calculation View). Now we will consume these information views in reporting tools. The drivers (JDBC, ODBC, and ODBO) used by reporting tools are part of the SAP HANA Client, so installing the SAP HANA Client software (*.exe file) makes all the drivers needed to connect reporting tools to SAP HANA available on the PC. We will use SAP BO, SAP Lumira, and Microsoft Excel to access SAP HANA information views in this tutorial. The choice of reporting tool depends on the type of reports that are required.

Reporting in SAP BI (Business Intelligence) Overview

SAP BI is a data warehousing and reporting tool. In BI (Business Intelligence), raw data is cleaned, business logic is applied, and the data is processed to provide meaningful information to the user. BI is an SAP product that provides a user-friendly environment. SAP BI supports many databases; when we migrate the source data to SAP HANA, the architecture is as below.

SAP BI has a three-tier architecture –

1. Database Server – Data is physically stored here in PSA, ODS, and InfoCube objects.
2. Application Server – It accesses data from the database server and processes it for the presentation server.
3. Presentation Server – It displays the data to the user.

SAP BEx Query Designer (a component of SAP BI) can access a SAP HANA view as an InfoProvider and display its data in BEx.

Reporting in WebI of SAP Business Objects (BO) from HANA

SAP BusinessObjects Web Intelligence (SAP BO WebI) is part of the SAP BusinessObjects Platform (SBOP) client tools family. SAP BusinessObjects tools use JDBC/ODBC drivers to connect to the source system.

The features of SAP BO WebI are as below –

WebI is an ad-hoc reporting tool.
WebI is used for detail-level reports.
WebI displays results in tabular or graphical formats.
Users can create/modify their own queries for the report.

After installing the SAP HANA client, the JDBC/ODBC drivers are installed on the PC. These drivers act as an intermediary between SAP HANA and the client reporting tools when presenting data to the user.

SAP BO Server and SAP BO Platform (SBOP) client tools.

Create Universe using Information Design Tool

Step 1) In this step, we will create a universe using the Information Design Tool (IDT). HANA can be accessed via ODBC and JDBC drivers, and its tables can be defined and queried with the SQL language. Tables are managed with a tool called HANA Studio.

1. Launch IDT by navigating to Start Menu -> SAP Business Intelligence -> SAP Business Objects BI platform 4 Client Tools -> Information Design Tool

The Information Design Tool screen will appear. To create a universe we need a project in IDT.

2. Navigate to the Project option as below –

1. Click on File.
2. Click on the New option.
3. Select the Project option.

Or

1. Click on the New File icon.
2. Select Project.

A pop-up for New Project will appear. In this pop-up, enter the following details:

1. Enter the name of the project.
2. Click on the Finish button.

The project name "WEBI_DHK_HANA" will appear under the Local Projects tab as below.

A project is a local workspace where you store the resources used to build one or more universes.

There are two connection types available; the detail of each is as below –

1. Relational Connection – To access data from tables in a regular RDBMS, use a Relational connection.
2. OLAP Connection – To access data from an application (SAP, Oracle, Microsoft, SAP BO) and data stored in a cube / information view, use an OLAP connection.

Step 2) Now we create a Relational connection. Go to Project -> New -> Relational Connection.

A pop-up for the new relational connection's resource name appears –

1. Enter the resource name.
2. Click on the Next button.

A pop-up for Database Middleware Driver Selection will appear –

1. Select the JDBC Drivers option under SAP HANA Database 1.0.
2. Click on the Next button.

A pop-up for the New Relational Connection parameters will be displayed; enter the following details into it.

1. Authentication Mode will be selected as "Use Specified User Name and Password".
2. Enter the HANA user name.
3. Enter the password.
4. Select Single Server in the Data Source section, and enter the host name and port (e.g. <hostname>:30015).
5. Enter the instance number.
6. Click on the Test Connection button to verify the connection.

A pop-up for Test Connection successful will be displayed.

After a successful connection, a connection named SAPHANA.cnx will be created.

Step 3) To consume the universe in Web Intelligence, Dashboards, or Crystal Reports for Enterprise, we need to publish the connection. So, now we publish the connection SAPHANA.cnx.

1. Select the SAPHANA.cnx connection under the project "WEBI_DHK_HANA" and right-click it.
2. Select Publish Connection to a Repository.
3. A published connection named SAPHANA.cns will be created after a successful publish to the repository.

A pop-up for publishing the connection to the repository appears –

1. Select Business IDT.
2. Click on the Finish button.

A pop-up "The connection was published successfully" will be displayed.

Now we create a universe by using the SAP HANA Business Layer.

Step 4) Create the universe. (A universe is a business representation of your data warehouse or transactional database. A universe allows users to interact with data without needing to know the complexities of the database.)

To create the universe, we use the SAP HANA Business Layer as below.

SAP HANA Business Layer

From SAP BOBI 4.1, SAP provides a new option, "SAP HANA Business Layer", for creating a universe with the Information Design Tool. Before SAP BOBI 4.1, we needed to create a derived table while building the data foundation and map the variables and input parameters manually. The SAP HANA Business Layer automatically creates a data foundation and business layer based on the selected SAP HANA views, and it automatically detects input parameters and variables.

We create a universe through the SAP HANA Business Layer as below –

1. Select the created project "WEBI_DHK_HANA".
2. Right-click the project and select the New option.
3. Select the option "SAP HANA Business Layer".

A pop-up for the SAP HANA Business Layer appears –

1. Enter the business layer name.
2. Enter the data foundation name.
3. Enter a description.
4. Click on the Next button.

A pop-up to select the SAP HANA connection is displayed –

1. Tick the connection "SAPHANA.cnx".
2. Click on the Next button.

A pop-up for selecting the SAP HANA information model will be displayed.

1. Select the Analytic View (AN_PURCHASE_ORDERS) created under the HANAUSER package.
2. Click on the Finish button.

The analytic view will open in the Information Design Tool.

Step 5) The detail of the Information Design Tool is as below –

1. Under the project, all relational connections, business layers, and data foundations will be displayed.
2. In the repository resources section, the objects (connections, business layers, etc.) stored in the repository will be displayed.
3. The business layer shows the business view of the analytic view. It is for functional users.
4. The data foundation shows the table and column names. It is for technical users.

Step 6) In this step, select the Business Layer section and go to the folder with the name of the analytic view. Now follow the points below –

1. Select the Data Foundation section.
2. Drag and drop columns to the analytic view.
3. It will display the objects under the analytic view.
4. The dragged fields will be displayed under the analytic view (AN_PURCHASE_ORDER).

Step 7) In this step, save all objects. Go to File and click on the "Save All" option to save all objects.

After that, follow the steps below.

Step 8) Create a query and view the output.

1. Go to the Queries option.
2. A query panel will be displayed; select the fields you want to include in the query.
3. Drag and drop them to the "Result Objects for Query #1" section.
4. Click on the Refresh button in the result set section.
5. The result will be displayed.

Reporting in SAP Crystal Reports

SAP Crystal Reports helps us design, explore, visualize, and deliver reports, which can run in web or enterprise applications. With SAP Crystal Reports we can create simple or complex reports.

There are two types of SAP Crystal Reports –

1. Crystal Reports 2011/13/16: This will be used when –

You want to call a HANA stored procedure from Crystal Reports.
You want to create SQL expressions.
You want to execute a view with parameters or variables and submit non-default values.

2. Crystal Reports for Enterprise: This will be used when a universe is available or needed.

We will use Crystal Reports for Enterprise.

Step 1) Log in to Crystal Reports for Enterprise.

SAP Crystal Reports for Enterprise will be displayed as below –

1. The report formatting section, which provides different tools for formatting.
2. Icons for the windows (Data Explorer, Outline, Group Tree, Find).
3. The detail of Data Explorer, Outline, Group Tree, Find, etc. will be displayed here.
4. Report page formatting options.
5. The work area for the report.

Step 2) Now we create a connection for the data source.

Click on Choose data Source option from Data Explorer-

A pop-up to choose a data source type will be displayed –

Select the Browse option from the SAP HANA Platform section.

A pop-up for connecting to the server will be displayed. Click on "New Server" Button.

A window for Server connection will be displayed as below-

1. Click on the Add button.
2. Enter a connection display name (saphana).
3. The connection name will appear in the connection list.
4. Enter the HANA server name.
5. Enter the HANA server instance name.
6. Enter the HANA user name.
7. Click on the "Test Connection" button.
8. A pop-up for the test connection logon appears; enter the password for the SAP HANA user.
9. Click on the OK button.

A message for connection successful will be displayed.

Click on the OK button. A pop-up for connecting to the server will be displayed.

Step 3) In this step, we will do the following –

1. Select the server "saphana".
2. Click on the OK button.

A pop-up for entering a password will be displayed, enter the password and then click OK.

One more pop-up will appear for selecting HANA View.

Step 4) In this step, we select the SAP HANA view.

1. Select the HANA view (Analytic View AN_PURCHASE_ORDERS).
2. Click on the OK button.

Step 5) In this step, a window for the query will open. Follow the points below to create a query –

1. Select the required columns from the list for the query.
2. Drag and drop the required fields into the query.
3. Click on the Refresh button.
4. The result set will be displayed.
5. Click on the button to generate the report output.

Report Output will be displayed as below-

Reporting in SAP Lumira

SAP Lumira is newer SAP software for analyzing and visualizing data. With SAP Lumira, users can create beautiful and interactive maps, infographics, and charts. SAP Lumira can import data from Excel and other sources, and it can access SAP HANA information views directly to perform visual BI analysis using dashboards.

In SAP Lumira, the following steps need to be done to visualize data.

Now we will visualize a SAP HANA view in SAP Lumira, so first we log in to SAP Lumira by clicking the SAP Lumira Client icon on the desktop as below –

Step 1) Create a document and acquire a dataset.

After opening SAP Lumira, a screen for working in SAP Lumira opens. The detail of this screen is as below –

1. Application Toolbar – It contains menus like File, Edit, View, Data, and Help.
2. Home Link – Using this option we can go to the home screen.
3. My Documents sections –

1. Documents
2. Visualizations
3. Datasets
4. Stories

4. SAP Lumira Cloud – Using this option, we can use the cloud options.
5. Connections – We can see all connections here.

So, click on the Document option in the My Items section to create a document –

1. Go to the File option on the application toolbar and click it.
2. Select the New option.

SAP Lumira supports the datasets below –

Microsoft Excel
Text
Copy from Clipboard
Connect to SAP HANA
Download from SAP HANA
Universe
Query with SQL
Connect to SAP Business Warehouse
SAP Universe Query Panel

Step 2) Connect to SAP HANA. Here we will connect to SAP HANA and access a SAP HANA information view.

1. Select Connect to SAP HANA.
2. Click on the Next button.

A pop-up for the SAP HANA server credentials will be displayed as below –

1. Enter the SAP HANA server name.
2. Enter the SAP HANA instance number.
3. Enter the SAP HANA user name.
4. Enter the SAP HANA password.
5. Click on the "Connect" button to connect to the SAP HANA server.

After clicking on the Connect button, we are connected to SAP HANA and able to access SAP HANA views.

Step 3) Access the SAP HANA Analytic View.

A window for selecting the SAP HANA view will appear as below –

1. Select the SAP HANA view ("AN_PURCHASE_ORDERS" here).
2. Click on the Next button.

Step 4) Define measures and dimensions.

The next window, for selecting measures and dimensions, will be displayed –

1. All measures will be grouped under the Measures section.
2. All dimensions will be grouped under the Dimensions section.
3. Click on the Create button.

Step 5) Visualize the SAP HANA Analytic View in SAP Lumira.

When an information view is activated in SAP HANA, a column view with the same name is created under the "_SYS_BIC" schema in the SAP HANA catalog node. When we need to access a SAP HANA information view from outside SAP HANA, we can access it only through this "_SYS_BIC" schema.

A Visualize screen will appear, which selects the column view under the "_SYS_BIC" schema –

1. Different chart types can be selected from the Chart Builder section.

1. Click on the Chart Builder icon.
2. Select the Column Chart option.

2. Click on the "+" icon in front of the MEASURES section to add measures on the Y axis.
3. We have added the "GROSS_AMOUNT" and "TAX_AMOUNT" measures.
4. Click on the icon in front of the DIMENSIONS section. A list of all available dimensions appears.
5. Select "CATEGORY", "PRODUCT_ID", and "PRODUCT_NAME" from the dimension list to display on the X axis.

The SAP HANA Analytic View will be displayed in the Visualize tab of SAP Lumira, where we have different screen areas as below –

1. Tools for ascending/descending sort, ranking, clear, refresh, etc.
2. This can be used for filtering.
3. The output of the SAP HANA Analytic View in SAP Lumira.
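Because every activated information view is exposed as a column view in "_SYS_BIC", the same data that Lumira visualizes can also be read with plain SQL. A sketch, using the package and view names from the examples in this chapter (the aggregation shown is illustrative):

```sql
-- Query the column view generated for the analytic view.
-- "_SYS_BIC" object names take the form "<package path>/<view name>".
SELECT "PRODUCT_ID",
       SUM("GROSS_AMOUNT") AS "GROSS_AMOUNT",
       SUM("TAX_AMOUNT")   AS "TAX_AMOUNT"
FROM "_SYS_BIC"."HANAUSER/AN_PURCHASE_ORDERS"
GROUP BY "PRODUCT_ID";
```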

Reporting in Microsoft Excel

Microsoft Excel has powerful built-in reporting options; we can create reports quickly using pivot tables and charts. MS Office uses the MDX (MultiDimensional Expressions) language to access data from SAP HANA. MDX is the language reporting tools use to access data from multidimensional objects in a database environment, and only MDX queries can access SAP HANA hierarchies. We can access only those SAP HANA information views that have the 'CUBE' property in their semantics, so we cannot access attribute views from MS Excel.

Connecting Drivers – MS Excel uses the ODBO (OLE DB for OLAP) driver to connect to the SAP HANA database.

Now we will access the SAP HANA database from MS Excel as shown in the steps below –

Step 1) Connect Excel to SAP HANA –

1. Open MS Excel. Go to the Data tab.
2. Click on the "From Other Sources" icon.
3. Select From Data Connection Wizard.

A screen for the Data Connection Wizard will be displayed as below –

1. Select the "Other/Advanced" option.
2. Click on the Next button.

A window for "Data Link Properties" will open as below –

1. Select "SAP HANA MDX Provider" under the Provider tab.
2. Click on the Next button.

A window for the data link properties will be displayed – enter the following details –

1. Enter the host name of the SAP HANA database.
2. Enter the instance number of the SAP HANA database.
3. Enter the user name/password for the SAP HANA database.
4. Enter the language name.
5. Click on "Test Connection" to test the connection to the SAP HANA database from Excel.

A message "Test connection succeeded" will pop up.

Click on the OK button.

Step 2) So far we have created a connection from Excel to SAP HANA and tested it. Now we access a SAP HANA information view from Excel. A window for the "Data Connection Wizard" will be displayed.

1. Select the package which contains the SAP HANA information view.
2. Select the information view (Analytic View, Calculation View).
3. Click on the Next button.

A new window for the data connection file will appear; enter the following details.

1. Give the file name (AV_SALES).
2. Tick the option "Save password in file" to avoid entering a password each time the Excel file is opened.
3. A pop-up about the security implications of saving the password will be displayed.
4. Click on the Finish button.

Step 3) Now the SAP HANA information view will be displayed in Excel as a pivot table as below –

Summary: In this tutorial, we have covered the topics below –

Reporting in SAP BI overview.
Reporting in SAP BO WebI with an example of a SAP HANA information view.
Reporting in SAP Crystal Reports for Enterprise with an example of a SAP HANA information view.
Reporting in SAP Lumira with an example of a SAP HANA information view.
Reporting in Microsoft Excel by consuming a SAP HANA information view.