CDS

CDS view core annotations

Story Highlights
  • Catalog/Persistency
  • End User UI texts
  • Data Aging
  • Aggregation
  • Metadata Handling
  • Hierarchies
  • Client Handling
  • Requirements
  • Client Handling in (Open) QL
  • ABAP CDS Legacy Annotations
  • Column Views

Predefined Core Annotations

Core annotations allow application developers to specify additional metadata that influences the core infrastructure or is relevant for all kinds of scenarios.

Scope of Annotations

Annotations can be restricted to annotate only certain constructs within a data model, i.e. types, entities, views or elements thereof. If an annotation is used in a way that violates this restriction, parsing SHOULD fail.

The way to define the restriction is by using the built-in annotation @Scope as shown in the following example:

@Scope: #ENTITY 

annotation Weight { ... };

@Scope: [#ENTITY, #VIEW, #ELEMENT] 

annotation Description { ... };

The annotation @Scope is defined like this:

annotation Scope : String(22) enum {

OBJECT; TYPE; SIMPLE_TYPE; STRUCT_TYPE; ENTITY; VIEW; CONTEXT; 
ELEMENT; ASSOCIATION; ANNOTATION; PARAMETER; JOIN; ANY; EXTEND_TYPE; EXTEND_ENTITY; EXTEND_VIEW; EXTENSION;

} [1..*] default ANY;

The enumeration symbols are interpreted as follows:

  • OBJECT subsumes TYPE, ENTITY, VIEW
  • TYPE subsumes SIMPLE_TYPE, STRUCT_TYPE

If an annotation is not annotated with @Scope, or if the scope is defined as ANY, it is not restricted to any specific construct.
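For illustration, a minimal sketch (using a simplified Weight annotation and made-up entities) of one usage that respects the declared scope and one that violates it:

@Scope: #ENTITY
annotation Weight : Integer;

@Weight: 5            // allowed: the annotation is used at entity level
entity Box {
    key ID : String(10);
};

entity Product {
    // not allowed: Weight is restricted to #ENTITY, so annotating
    // an element with it should make parsing fail
    @Weight: 5
    key ID : String(10);
};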

See More: CDS Annotations

Metamodel References

In some scenarios there is a need to specify generic references to metamodel artifacts like entity or element. This is for example the case for some annotation definitions, like

@Scope: #ENTITY
annotation Catalog {
    Index : list[] of {
        …
        elementNames [1..*] : elementRef;
    };
};

To allow editors or compilers to check whether the developer has defined an allowed value for such kinds of annotations, we need some way to indicate these metamodel references. In the example above it could be checked that the Index definition refers to existing elements within the entity.

The corresponding annotations are defined as follows:

@Scope: [#SIMPLE_TYPE, #ENTITY]  
annotation EntityRef  : boolean default true;

@Scope: [#SIMPLE_TYPE, #ENTITY]  
annotation TypeRef  : boolean default true;

@Scope: [#SIMPLE_TYPE, #ELEMENT] 
annotation ElementRef  : boolean default true;

@Scope: [#SIMPLE_TYPE, #ELEMENT] 
annotation SubElementRef  : boolean default true;

Examples / Usage:

Continuing the above-mentioned use case for annotations, we need to offer predefined data types that define these metamodel references. The following data types should be defined:

@EntityRef
type entityRef : String;

@TypeRef
type typeRef : String;

@ElementRef
type elementRef : String;

@SubElementRef
type subElementRef : String;

The type elementRef can contain either “local names” (to reference other elements in the same signature) or “path expressions” (to reference elements that can be accessed via an association). For “legacy reasons” an elementRef can, in some exceptional cases, also contain “fully-qualified” element names.
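As a sketch (based on the Catalog.Index definition above; the entity and element names are made up), an elementRef-typed annotation value can use a local name or a path expression:

@Catalog.Index: [{ elementNames: [ 'name',                   // local element name
                                   'address.postalCode' ] }] // path expression via association
entity Customer {
    key ID  : String(8);
    name    : String(80);
    address : Association to Address;
};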

Catalog/Persistency

These annotations will mainly be used in cases where a developer needs to refine the definition of the underlying table (e.g. by specifying the delivery class or table type) or wants to leverage DB-related optimizations (like indices).

Further usages are scenarios where entities are created based on existing tables. This is the case either in replication scenarios or when a developer wants to migrate an existing application into the “new programming model”. In both cases the developer wants to leverage the new concepts like associations, support of structured types, semantic data types, etc. In these scenarios the usage of the Catalog annotations must be limited as the “main source” for defining the “catalog specifics” is still the DB table.

See More: SAP ABAP CDS Joins and Unions

The Catalog annotations are defined as follows:

@Scope: [#CONTEXT, #ENTITY, #VIEW, #TYPE]
annotation Schema : String(128);

@Scope: [#ENTITY, #VIEW]
annotation Catalog {
    tableType : String(20) enum {ROW='ROW'; COLUMN='COLUMN';
                                 GLOBAL_TEMPORARY='GLOBAL_TEMPORARY';} default COLUMN;
    deliveryClass : String(1) enum {A='A'; C='C'; L='L'; G='G'; S='S';} default A;
    viewEnhancementCategory : array of String(20) enum {NONE; PROJECTION_LIST;
                                                        GROUP_BY; UNION;};
    entityEnhancementCategory : array of String(20) enum {NONE; SIGNATURE;};
};

@Scope: #ELEMENT
annotation KeyGenerationPolicy {
    uuid : boolean default true;
    sequence : {
        name : String;
        noCycle : Boolean default true;
        minValue : Integer;
        maxValue : Integer;
        incrementBy : Integer;
        startWith : Integer;
    };
    numberRange : String;
    custom : String;
};

Examples:

We need to explicitly differentiate the following scenarios (already in the syntax) as they have different implications on the infrastructure:

Create a new persistency (catalog object) based on the entity-definition:

  • In this case the “catalog specifics” are defined via annotations on entity level.
  • It needs to be ensured that changes to the underlying persistency structure can only be made on the level of the entity (in order to avoid “surprises”).
  • Example for the Syntax:
type Amount {
    value : Decimal;
    currency : like Currency.code;
 }

@Catalog: { deliveryClass: #A, enhancementType: #ANY }
entity customer {
    @KeyGenerationPolicy.sequence: { name: 'customerID', startWith: 1 }
    key ID : UUID;
    firstName : String(77);
    name : String(77);
    revenue : Amount;    // Amount is a structured type
}

Expose existing tables as entities (via a non-canonical mapping):

  • It needs to be ensured that changes to the underlying persistency structure can only be made on the level of the table (in order to avoid “surprises”) and that these changes are propagated to the entity definition. For example, adding a new element needs to be done by adding a new DB column, and indices need to be created on the level of the DB table.
  • So the exposure of the table signature as an entity is more or less an “aliasing”, together with some structural changes (like combining two flat DB columns into a structured type, e.g. Amount).

 

  • Example for the Syntax:
type Amount {
  value : Decimal;
  currency : like Currency.code;
}

entity customer as SELECT from KNA1 {   // 'KNA1' represents the table name
    @KeyGenerationPolicy.uuid
    key KNR as ID,
    FKN_FNAME as firstName,
    DATUM as entryDate : dateFrom,
    KN_NAME as Name,
    {
        AMOUNT_VAL as value,
        CURRKEY as currency
    } as revenue : Amount    // This is an example of combining two flat DB
                             // columns into a structured type
}

 

Analysis:

On context level the following annotation is offered:

  • Schema – defines the DB Schema into which the artifacts, which are defined in the context, are generated. For HANA/XS this is mandatory. For ABAP this annotation is not offered to the developer as the ABAP Container manages the DB Schemas transparently.

The annotations on entity level define additional metadata for the underlying DB table. The following aspects can be defined:

  • tableType – specifies the table type being created in the database. The allowed values are ROW, COLUMN and GLOBAL_TEMPORARY. The semantics of the types are as follows:

ROW, COLUMN

If the majority of access is through a large number of tuples but with only a few selected attributes, COLUMN-based storage should be used. If the majority of access involves selecting a few records with all attributes selected, ROW-based storage is preferable. The SAP HANA Database uses a combination to enable storage and interpretation in both forms. You can define the type of organization for each table. The default value is COLUMN.

GLOBAL TEMPORARY:

The table definition is globally available, while the data is visible only to the current session. The table is truncated at the end of the session. Metadata in a global temporary table is persistent, meaning it exists until the table is dropped and is shared across sessions. Data in a global temporary table is session-specific: only the owner session of the global temporary table is allowed to insert, read or truncate the data, the data exists for the duration of the session, and it is automatically dropped when the session is terminated. A global temporary table can be dropped only when it does not contain any records.
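To make this concrete, a minimal, non-normative sketch (the entity name and elements are made up) of an entity whose data only needs to live for the duration of a session:

@Catalog.tableType: #GLOBAL_TEMPORARY
entity SessionScratch {
    key ID  : String(32);
    payload : String(1000);
};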

  • deliveryClass – controls the transport of table data when installing, upgrading, or performing a tenant copy, and when transporting between customer systems. The allowed values for delivery classes are
    • A – Application table (master and transaction data).
    • C – Customer table, data is maintained by the customer only.
    • L – Table for storing temporary or local data.
    • G – Customer table, SAP may insert new data records, but may not overwrite or delete existing data records.
    • S – System table, data changes have the same status as program changes.
  • viewEnhancementCategory / entityEnhancementCategory – defines how the view/entity (including the underlying table) that has been defined by SAP or partners can be enhanced subsequently by industries, partners or customers. The allowed values of the annotations have the following meaning:
    • NONE – defines that the view can’t be enhanced at all
    • PROJECTION_LIST – allows structural changes to the projection list of a view, like adding additional elements
    • SIGNATURE – allows structural changes to the signature of an entity, like adding additional elements
    • GROUP_BY – Allows adding new “non-aggregated” elements to a “base view” that uses aggregation by extending the GROUP BY-clause of the “base view”. Extensions of the GROUP BY-clause have to be explicitly enabled as they change the cardinality of the result set, which on “pure SQL level” is an incompatible change. Applications that require this kind of change have to be prepared so that they can deal with different cardinalities of the result set.
    • UNION – Allows adding new elements to a “base view” that contains one or more UNIONs. Extensions of views with UNIONs have to be explicitly enabled as they change the cardinality of the result set, which on “pure SQL level” is an incompatible change. Applications that require this kind of change have to be prepared so that they can deal with different cardinalities of the result set.

If the viewEnhancementCategory-annotation or the entityEnhancementCategory-annotation is not specified explicitly, the infrastructure assumes that it is possible to extend the projection list of the view or the signature of the entity, respectively. In other words, the behavior is equivalent to scenarios where the annotation is specified as follows:

@Catalog.viewEnhancementCategory : [#PROJECTION_LIST] or
@Catalog.entityEnhancementCategory : [#SIGNATURE]
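For illustration, a sketch (the view and element names are made up) of a view that explicitly allows extensions of its projection list and of its GROUP BY-clause:

@Catalog.viewEnhancementCategory: [#PROJECTION_LIST, #GROUP_BY]
define view SalesTotals as select from SalesOrderItem {
    CurrencyCode,
    sum(GrossAmount) as TotalGross
} group by CurrencyCode;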

The annotations on element level define additional metadata for individual Table Columns. The following aspects can be defined:

  • KeyGenerationPolicy – defines which technique should be used to generate the values for the key elements. The allowed values are:
    • UUID – the infrastructure generates a unique ID. This technique is recommended for technical keys.
    • Sequence – defines the DB Sequence that will be used to generate the values. Via this metadata the infrastructure can ensure that the defined DB Sequence object exists in the database once the entity is activated.
      This technique is recommended for technical keys. When specifying a DB Sequence, the following attributes are relevant:

      • name – Defines the name of the Sequence object.
      • noCycle – If noCycle is set to true, the sequence number will not be reused after it reaches its maximum or minimum value.
      • minValue – The minimum value of the sequence, specified after MINVALUE, must be between 0 and 4611686018427387903. When MINVALUE is not specified (the value is set to NULL), the minimum value for an ascending sequence is 1 and the minimum value for a descending sequence is -4611686018427387903.
      • maxValue – Defines the largest value generated by the sequence and must be between 0 and 4611686018427387903. When MAXVALUE is not specified (the value is set to NULL), the maximum value for an ascending sequence is 4611686018427387903 and the maximum value for a descending sequence is -1.
      • incrementBy – Defines the amount by which the next sequence value is incremented from the last value assigned. The default is 1. Specify a negative value to generate a descending sequence. An error is returned if the INCREMENT BY value is 0.
      • startWith – Defines the starting value of the sequence. If you do not specify a value for the START WITH clause, MINVALUE is used for ascending sequences and MAXVALUE is used for descending sequences.

  • numberRange – defines that a number range object should be used to generate the values. This technique is recommended for semantic keys. When number ranges are used, the name of the number range object is specified (see the sketch below). The definition of the number range objects is described within the specification of Domain-specific Annotations.
  • custom – allows specifying the name of an application-specific routine that generates the values.
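A minimal sketch (the entity and the number range object name 'SO_NR' are made up) of a key element whose values are drawn from a number range object:

entity SalesOrder {
    @KeyGenerationPolicy.numberRange: 'SO_NR'
    key OrderID : String(10);
    orderDate : Date;
};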

 See More: CDS Conversion Functions (CAST, Unit Conversion, Currency Conversion)

End User UI texts

To allow an intuitive consumption of the data model in (End User) UIs, further metadata needs to be defined which helps the end user to understand the semantics of the underlying data model artifacts. Depending on the concrete context different types of UI texts are required. The UI text is displayed on the screen in the logon language of the user (if the text was translated into this language).

The corresponding annotation is defined as follows:

@Scope: #ANY
@MetadataExtension.usageAllowed : true
annotation EndUserText {
    @LanguageDependency.maxLength: 40
    label : String;         // for field labels or column headers
    @LanguageDependency.maxLength: 65
    quickInfo : String;     // for quick info, accessibility hints or mouse over
    @LanguageDependency.maxLength: 35
    heading : String;       // defines the 'header text' of lists
    documentation : Boolean default true;
};

 

Examples:

@EndUserText: { label: 'Sales Order Header',
                quickInfo: 'Sales Order Header that contains data relevant for all items',
                documentation: true }
entity SalesOrderHeader {
      ...
      @EndUserText.label: 'Date of Order'    // default language is still 'EN'
      orderDate : Date;
      ...
};

 Analysis:

End User UI texts must be translated. Therefore the infrastructure needs to extract them from the RDL resource and to transfer the extracted texts to the concrete translation infrastructure of the corresponding container (HANA/XS, ABAP, …).

The following types of UI texts will be distinguished:

  • label – defines a human-readable text that is displayed beside input fields or as column headers.
  • quickInfo – defines a human-readable text that provides additional information compared to the label text. The quickInfo is used for accessibility hints or the “mouse over” function.
  • heading – defines a human-readable text that is displayed as column headers.
  • documentation – defines if a dedicated online documentation is available (or needs to be available). This documentation contains a comprehensive explanation of the semantics and is displayed in a formatted way.
  • $docuRef – contains the link (URL) to the online documentation. This attribute can’t be explicitly set by the application developer who is defining his DDL artifact (as the concrete link depends on the customer-specific landscape). Nevertheless it is defined here in order to specify how the documentation can be accessed programmatically (e.g. within a UI infrastructure). This attribute only exists if documentation has been set to true.

In the ABAP-Stack the following mapping between the existing “data element texts” and the EndUserText-annotations apply:

  • Label:
    • Take the data element long label if it is not empty and its length is <= 20
    • Otherwise take the data element medium label if it is not empty (the length of the medium label is always <= 20)
    • Otherwise take the data element short label if it is not empty
    • And as a last fallback take the data element long label even if its length is > 20
  • quickInfo: will be filled with the „short description“-value of the data element
  • heading: will be filled with the „heading“-value of the data element

Data Aging

Data Aging is not only supposed to optimize the data storage according to access performance and cost, it is also supposed to keep all data under full control of the database and SQL – without needing the resource-consuming data transfer between different systems (database and archive) through an application layer. In a nutshell, Data Aging is a Suite-tailored concept for reducing expensive storage – in the case of HANA, the memory footprint.

As a new data volume management capability, Data Aging

  • supports the exclusive loading of operationally relevant (“HOT”) data, whereas the other (“COLD”) data may primarily remain on “unlimited” (less expensive but slower) storage, not affecting HOT data performance; yet the “COLD” data remains accessible via SQL on request
  • has a lower TCD (easier implementation) and TCO (less admin effort) than archiving and current archive access

The basic Idea of Aging

The application knows which business objects are closed and may hence be moved to “COLD” partitions. It shall actively trigger the movement of corresponding rows by updating a field in a temperature column. Since the table is partitioned by this temperature column, the rows are automatically moved to a “COLD” partition.

As the application moves objects based on business logic, the application shall also control whether it is required for a given query to read from “COLD” partitions or whether the “HOT” partition is sufficient.

From an application perspective (at least for OLTP-like applications), the default access to the database is to “HOT” data only.

General Database Behavior

The default behavior of the database shall remain the way it is in standard SQL: if nothing is specified (i.e. in the WHERE clause), everything has to be read (“HOT” and “COLD” partitions).

This means that the application/client has to set explicitly that the database shall only read a subset of the existing partitions (for example only the “HOT” partition or some “COLD” partitions up to a requested date).

For certain application scenarios (e.g. sFIN) it is required to define Views that proactively overwrite the above mentioned default behavior. Therefore the following CDS-Annotation is required:

@Scope: #VIEW
annotation DataAging {
    noAgingRestriction : boolean default true;
};

Example:
@DataAging.noAgingRestriction
define view orders_in_de as select from snwd_bpa
{}

Semantics of the “DataAging.noAgingRestriction” Annotation:

When the annotation DataAging.noAgingRestriction is specified (and set to “true”) the annotated CDS-View has an “including COLD” query scope, that is, no syntax extension (to indicate the partition) will be created regardless of the temperature context that is given when the view is used during query execution time.

This view semantics should primarily be handled within ABAP. So if such a view is part of the SQL query, no syntax extension is added to the SELECT-statement towards the database. It is accepted that this behavior only applies to Open SQL. If the view is selected with native SQL, this logic does not apply.

Aggregation

When purely relying on standard SQL, the developer/provider of a view needs to decide explicitly within the view definition, if the view should deliver aggregated results or single records. This definition cannot be overwritten by the consumer of the view.

In some scenarios a more flexible behavior is desired, in a sense that the view definition doesn’t restrict its invocation either to delivering single records or aggregated results. This decision should be left to the consumer of the view who has explicit means in the CDS Query Language to express which behavior should be applied. This approach also allows avoiding the usage of the implicit aggregation behavior of the Calc Engine.

To support those kinds of scenarios, the view developer

  • defines the “core” view definition (= the parts being described via a QL-Query) in a way that it returns single records (when the view is consumed via “aggregation-free” QL-statements) and
  • adds dedicated annotations to the elements in the projection list that define which aggregation behavior should be applied when the consumer wants to get aggregated results from the view.

Based on such a view definition, the following queries are possible:

  • SELECT <measure> FROM <view> → No aggregation is applied (= “SQL semantics”)
  • SELECT SUM(<measure>) FROM <view> → Returns the sum of the “measure” (independent of which aggregation behavior has been defined via the annotation within the view definition) (= “SQL semantics”)
  • SELECT AGG(<measure>) FROM <view> → Aggregates based on the aggregation behavior that has been defined via the dedicated annotation (= “Calc Engine semantics”)

The corresponding annotation to specify this aggregation behavior is defined as follows:

@Scope: [#ELEMENT, #SIMPLE_TYPE]
annotation Aggregation {
    default : String(30) enum { NONE; SUM; MIN; MAX; AVG; COUNT_DISTINCT; NOP;
                                FORMULA; };
    referenceElement : array of elementRef;
};

 

For compatibility reasons the following annotation is still supported:

@Scope: [#ELEMENT, #SIMPLE_TYPE]
annotation DefaultAggregation : String(30)
      enum { NONE; SUM; MIN; MAX; AVG; COUNT; COUNT_DISTINCT; FORMULA;};

 Examples:

The aggregation behavior for a simple type can be defined as follows:

@Aggregation.default : #SUM
type sales : decimal;

The aggregation behavior for elements of an entity / view is defined as follows:

entity Customer {
     @Aggregation.default: #NONE
     key CustomerID : String(8);
     @Aggregation.default: #SUM
     Amount : Amount;
     @Aggregation.default: #SUM
     Quantity : Quantity;
     @Aggregation.default: #FORMULA
     AveragePrice = Amount / Quantity;
     @Aggregation.default: #NONE
     CustomerName : String(90);
};

Analysis:

When the Aggregation-Annotation has been specified for a simple type or directly for an element, the corresponding elements are used as so-called “measures” (= elements that can be aggregated) both in standard CDS-QL scenarios and in analytical scenarios (where the processing is executed by a specialized analytical engine).

  • default – defines which aggregation semantics should be applied:
    • The values “SUM”, “MAX”, “MIN”, “COUNT” and “COUNT_DISTINCT” determine the default aggregation of the measure.
    • “NOP” returns a value, if it is unique; otherwise it returns a special error value
    • The value “FORMULA” indicates, that the element is a formula which has to be calculated after the operands have been determined by aggregation or calculation. It should never be aggregated. If the element is not a formula, then this value must not be used.
      Example: Margin := Revenue / Cost. If in a report Margin should be shown per OrgUnit, then first the aggregates of Revenue and Cost have to be determined per OrgUnit and then the Margin has to be calculated per OrgUnit. The Margin for the company is not the aggregate of the Margin per OrgUnit but has to be calculated separately by dividing the Revenue for all OrgUnits by the Costs for all OrgUnits.
    • “NONE” indicates that the element is not a measure. Usually these elements are used in filters and GROUP BY-statements.
  • referenceElement – For certain scenarios where the element that represents the result of the aggregation has to differ from the element(s) that are aggregated (e.g. due to different required types), it is possible to specify the elements to be aggregated via the “referenceElement” annotation. This is usually the case in COUNT (DISTINCT) scenarios, as sketched below.
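For illustration, a sketch (the element names are made up) of a distinct-count measure whose aggregation is based on a different element:

entity SalesFact {
    key ItemID : String(10);
    CustomerID : String(8);
    @Aggregation: { default: #COUNT_DISTINCT, referenceElement: ['CustomerID'] }
    NumberOfCustomers : Integer;
};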

 

Remarks:

1) If the element is a formula which has to be calculated before the aggregation (on record level) and aggregated via a standard aggregation, then Aggregation.default should be set to “SUM”, “MAX”, “MIN”, “COUNT” or “COUNT_DISTINCT”.

2) If no Aggregation.default-annotation is assigned to an element, the engine assumes the aggregation behavior “NONE” (so no aggregation takes place).

3) By default the annotated aggregation is inherited along the usage hierarchy of an element or simple type. So, for example, if

  • an element of “View A” is annotated with @Aggregation.default: #FORMULA and
  • this element is exposed to the projection view “View B” (being defined on top of “View A”),

then the element of “View B” also has @Aggregation.default: #FORMULA.

It is possible to overwrite the inherited aggregation on every usage level. So in the previous example it is possible to change the aggregation of the element within “View B” from “FORMULA” to, for example, “SUM”; see the sketch below.
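A possible sketch of this override (the view and element names are made up):

define view ViewA as select from SalesOrderItem {
    @Aggregation.default: #FORMULA
    GrossAmount / Quantity as AveragePrice
};

define view ViewB as select from ViewA {
    @Aggregation.default: #SUM    // overrides the inherited #FORMULA
    AveragePrice
};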

Metadata Handling

A central building block of the CDS concept is to attach additional metadata to CDS artifacts by using CDS annotations. To efficiently address the corresponding requirements, the CDS infrastructure needs to offer additional concepts besides specifying those annotations locally in the DDL file, like

  • extending CDS definitions with metadata that follows a different lifecycle (via using CDS Metadata Extensions / Facets)
  • Propagating annotations along usage relationships to allow reuse of metadata.

Those concepts need dedicated control by developers, which is realized by the “@Metadata” annotation domain.

The metadata annotations are defined as follows:

@Scope: [#ENTITY, #VIEW, #TYPE, #ANNOTATE_VIEW, #ANNOTATE_TYPE]
annotation Metadata {
      // Scope: [#ENTITY, #VIEW, #TYPE]
      allowExtensions : boolean default true;
      // Scope: [#VIEW]
      ignorePropagatedAnnotations : boolean default true;
      // Scope: [#ANNOTATE_VIEW, #ANNOTATE_TYPE]
      layer : String(20) enum {FOUNDATION; APPLICATION; INDUSTRY;
                               PARTNER; CUSTOMER;};
};

 

Examples:

TBD.

Analysis:

  • allowExtensions – defines if a “Metadata Extension” can be defined for the corresponding DDL artifact or not.
    Default Behavior: If the @Metadata.allowExtensions-annotation is not specified explicitly, it is assumed by the infrastructure that it is not possible to create metadata extensions for the corresponding DDL artifact.
  • ignorePropagatedAnnotations – if this annotation is set, the “active / effective” annotations for the corresponding view are computed without propagating Element-annotations from its underlying data sources (views or tables/entities). In other words the “active / effective” annotations of this view are computed as follows:
    • View-level annotations:
      Those have to be specified directly for the view (either in the View-DDL or in an associated metadata extension / facet)
    • Parameter-annotations:
      They are derived from the “active /effective” type annotations of the corresponding data type of this parameter. Those annotations are overwritten by the direct parameter annotations which are specified either in the View-DDL or in an associated metadata extension / facet.
    • Element-annotations:
      For each element they are derived from the “active /effective” type annotations of the corresponding data type of this element. This logic is independent from the fact if the data type is propagated from the underlying data source or locally defined via a “direct CAST”.
      Those annotations are overwritten by the direct element annotations which are specified either in the View-DDL or in an associated metadata extension / facet.
  • layer – allows assigning a metadata extension to a semantic layer (which is technically identified by a Layer-ID). This is a constant which is internally mapped to a number allowing for an easy sorting of the layers. Currently the following semantic layers are defined: FOUNDATION = 1500, APPLICATION = 2500, INDUSTRY = 3500, PARTNER = 4500, CUSTOMER = 5500.

All metadata extensions which are related to one core object are ordered according to the layer ID. The annotations are then evaluated from the highest layer ID to the lowest layer ID. As soon as an annotation value is found for a given annotation key, the remaining metadata extensions are ignored for that key; see the sketch below.
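For illustration, a non-normative sketch (assuming an “annotate view” syntax for metadata extensions; the view and element names are made up) of a base view that allows extensions and a customer-layer metadata extension overriding a label:

@Metadata.allowExtensions: true
define view SalesOrder as select from snwd_so {
    key so_id,
    gross_amount
};

@Metadata.layer: #CUSTOMER
annotate view SalesOrder with {
    @EndUserText.label: 'Order Number'
    so_id;
};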

 

Hierarchies

Via hierarchy annotations the developer is able to specify the hierarchies he wants to make explicitly accessible in his data model together with the structure that defines these hierarchies.

The infrastructure takes this metadata and generates optimized access structures for these hierarchies that provide the flexibility for use case-specific features (like operations to work with complete sub-trees of a hierarchy, etc.) and performance improvements. In HANA dedicated Hierarchy Views are generated based on this metadata. In addition the Query Language will offer dedicated statements/operators for hierarchy-specific operations.

Generally the three hierarchy types “Leveled”, “Parent-Child” and “External” can be distinguished. The “Leveled” and the “Parent-Child” hierarchy are based directly on the master data entities. The hierarchies are time dependent if the master data entity is time dependent. Currently we assume that for “External” hierarchies no dedicated Annotations are required but that they can be modelled via a “Parent-Child” hierarchy on a CDS-View that combines all relevant hierarchy attributes.

The hierarchy annotations are defined as follows:

@Scope: [#CONTEXT, #ENTITY, #VIEW]
annotation Hierarchy {
    leveled : array of {
        name : String(127);
        label : String;
        defaultMember : String;
        multipleParents : boolean default true;
        nodeStyle : String(20) enum {LEVEL_NAME; NAME_ONLY; NAME_PATH;};
        levels : array of {
            element : elementRef;
            order {
                by : elementRef;
                direction : String(4) enum {ASC='ASC'; DESC='DESC';}
                                      default ASC;
            };
        };
        rootNode {
            visibility : String(25) enum { ADD_ROOT_NODE;
                                           DO_NOT_ADD_ROOT_NODE; }
                                           default ADD_ROOT_NODE;
            // TODO: How to filter/set a root node during runtime ?
        };
        orphanedNode {
            handling : String(20) enum {ROOT_NODES; ERROR; IGNORE;
                                        STEPPARENT_NODE;} default ROOT_NODES;
            stepParentNodeId : String;
        };
    };
    parentChild : array of {
        name : String(127);
        label : String;
        defaultMember : String;
        multipleParents : boolean default true;
        recurseBy : elementRef;    // to be used if the hierarchy is defined
                                   // via an association
        recurse : {                // to be used if the hierarchy is defined
                                   // via "normal" elements
            parent : array of elementRef;
            child : array of elementRef;
        };
        siblingsOrder : array of {
            by : elementRef;
            direction : String(4) enum {ASC='ASC'; DESC='DESC';}
                                  default ASC;
        };
        rootNode {
            visibility : String(25) enum { ADD_ROOT_NODE_IF_DEFINED;
                                           ADD_ROOT_NODE; DO_NOT_ADD_ROOT_NODE; }
                                           default ADD_ROOT_NODE_IF_DEFINED;
            // TODO: How to filter/set a root node during runtime ?
        };
        orphanedNode {
            handling : String(20) enum {ROOT_NODES; ERROR; IGNORE;
                                        STEPPARENT_NODE;} default ROOT_NODES;
            stepParentNodeId : array of String;
        };
        directory : associationRef;
    };
};

 Examples:

Leveled Hierarchy:

@Analytics : { dataCategory : #DIMENSION, replicationEnabled }
@Hierarchy.leveled : [{ name : 'Location', label : 'Hierarchy across Locations',
    levels : [ { element : 'Country' },
               { element : 'Region' },
               { element : 'ID' } ]
}]
entity Customer {
    key ID : String(8);
    Country : Association to Country;
    Region  : Association to Region WHERE region.country = self.country;
    ...
};

Parent Child Hierarchy for Entities/Views with Associations:

@Analytics : { dataCategory : #DIMENSION, replicationEnabled }
@Hierarchy.parentChild : [ { name : 'Organisation', recurseBy : 'Manager' } ]
entity Employee {
    key ID : String(8);
    Manager : Association to this;
    ...
};

Parent Child Hierarchy for Entities/Views without Associations:

@Analytics : { dataCategory : #DIMENSION, replicationEnabled }
@Hierarchy.parentChild : [ { name : 'Organisation',
    recurse : { parent : [ 'Manager' ],
                child  : [ 'ID' ] }
} ]
entity Employee {
    key ID : String(8);
    Manager : String(8);
    ...
};

Hierarchy Directory Association:

@Analytics : { dataCategory : #HIERARCHY }
@Hierarchy.parentChild : {
    recurseBy : 'ParentNode',
    directory : 'CostCenterHierarchyDirectory'
}
entity CostCenterHierarchyNode {
    key CostCenterHierarchyDirectory : Association to CostCenterHierarchyDirectory;
    key HierarchyNode : CostCenterHierarchyNode;
    ParentNode : Association to this;
    ...
};

 Analysis:

Leveled Hierarchy:

A Leveled Hierarchy is defined through a list of levels. Each level is defined via a reference to an Element of the Entity.

An example of a leveled hierarchy is time: days, months and years.

On entity level the following metadata can be defined:

  • name – Technical name of the hierarchy. Via this name the hierarchy is later on accessed in Queries or program logic.
  • label – Description of the hierarchy being displayed in End User UIs.
  • defaultMember – Specifies which node should be the default member in MDX. If nothing is specified, the root node is used as DefaultMember. If there are several root nodes, it is not specified which one will be chosen as DefaultMember.
  • multipleParents – This flag indicates that multiple parents might occur in the hierarchy. E.g. in a geographic hierarchy you might want to assign the country Turkey to the continents Europe and Asia. We need the flag to distinguish the above use case from the following use case: In a Time hierarchy YEAR, QUARTER, MONTH January under 2011 and January under 2012 are not the same member with multiple parents, they are different Januaries.
  • nodeStyle – Defines how the individual hierarchy nodes are composed. Currently valid values are:
    • “nameOnly” – the result set value / node name is taken directly without any further decoration, for example “B2”
    • “levelName” – (default) the unique node ID is composed of level name and node name, for example “[Level 2].[B2]”
    • “namePath” – the unique node ID is composed of the result node name and the names of all ancestors apart from the (single physical) root node, for example “[A1].[B2]”
  • levels – defines which elements specify the levels of the hierarchy. The sequence in which the elements are specified within the levels-annotation also defines on which level the elements appear in the hierarchy. This means: the first element in the sequence specifies “level 1”, the second specifies “level 2”, etc. The “levels”-annotation contains the following attributes:
    • element – contains the name of the element within the projection list of the view, that defines the concrete hierarchy level
    • order – defines how values on the same hierarchy level are ordered. Via the “by”-attribute the element name is specified which contains the values to be ordered. Via the “direction”-attribute it can be specified if the sort order is “ascending” or “descending”.
  • rootNode – via the “rootNode”-Annotation dedicated metadata how to handle root node(s) in the hierarchy can be defined. The annotation contains the following attributes:
    • visibility – Specifies the handling of the root node aka the “all member” in MDX terminology. It can have the following values:
      • ‘ADD_ROOT_NODE_IF_DEFINED’ (default): The system will add the root node of the hierarchy if it is explicitly defined, but the system will not add an extra artificial root node.
      • ‘ADD_ROOT_NODE’: The system will always add an artificial single root node to the hierarchy. All other nodes are descendants of this node.
      • ‘DO_NOT_ADD_ROOT_NODE’: The system will not add an artificial single root node to the hierarchy.
        Example 1: Assume we have the following simple table with a parent-child hierarchy defined on top:

          PREDECESSOR    SUCCESSOR
          A              B
          A              C

        Then we have:
        ADD_ROOT_NODE_IF_DEFINED, ADD_ROOT_NODE: Results in a 2-level hierarchy with A at the root having B, C as children.
        DO_NOT_ADD_ROOT_NODE: Results in a 1-level hierarchy with B and C as independent root nodes.

        Example 2: Assume we have the following simple table with a parent-child hierarchy defined on top:

          PREDECESSOR    SUCCESSOR
          null           A
          A              B
          A              C

        Then we have:
        ADD_ROOT_NODE_IF_DEFINED, DO_NOT_ADD_ROOT_NODE: Results in a 2-level hierarchy with A at the root having B, C as children.
        ADD_ROOT_NODE: Results in a 3-level hierarchy with null as root node having A as child and A having B, C as children.

  • orphanedNode – Defines how nodes without a parent (more precisely with a parent that doesn’t occur as a child) are processed. More details can be found here: https://wiki.wdf.sap.corp/wiki/display/wikihana/Hierarchies#Hierarchies-Orphanednodes
    • handling – The following values are supported:
      ROOT_NODES (default): Treat them as root nodes.
      ERROR: Stop processing and show an error.
      IGNORE: Ignore them; they are removed from the hierarchy.
      STEPPARENT_NODE: Put them under a stepparent node.
    • stepParentNodeId – In case of handling = STEPPARENT_NODE this field contains the node ID(s) of the stepparent node(s).
      In case of leveled hierarchies this attribute contains the stepparent node ID according to the nodeStyle – e.g. [Level 2].[B2] for node style = “levelName” or [2012].[Feb].[08] for node style = “namePath”.

Parent-Child Hierarchy:

In contrast to leveled hierarchies (where each level needs to be specified explicitly), a parent-child hierarchy is defined by exactly one parent element. This parent element describes a self-referencing relationship within the master data entity and will usually be defined via an association. Only one level needs to be assigned to a parent-child hierarchy, because the levels present in the hierarchy are drawn from the parent-child relationships between members associated with the parent element.

Within the same master data entity one or more parent-child hierarchies could be defined.

A simple example of a Parent-Child Hierarchy is the “Employee” master data. A “Manager” is again an “Employee” and almost every “Employee” is assigned to a “Manager”.

On entity level the following metadata can be defined:

  • name – Technical name of the hierarchy. Via this name the hierarchy is later on accessed in Queries or program logic.
  • label – Description of the hierarchy.
  • defaultMember – Specifies which node should be the default member in MDX. If nothing is specified, the root node is used as DefaultMember. If there are several root nodes, it is not specified which one will be chosen as DefaultMember.
  • multipleParents – This flag indicates that multiple parents might occur in the hierarchy. See in leveled hierarchies.
  • recurseBy – This annotation is used to define the “parent-child” relationship in case the view definition contains an association which expresses this relationship. So in this case only the name of this association needs to be specified here.
  • recurse – In case the underlying view definition doesn’t contain an association defining the “parent-child” relationship, but “normal” elements define the “parents” and the “children”, the “recurse” annotation has to be used. This annotation has the following attributes:
    • parent – via the “parent”-attribute the element names defining the key of the “parent” are specified.
    • child – via the “child”-attribute the element names defining the key of the “child” are specified.
  • siblingsOrder – defines how values on the same hierarchy level are ordered. The annotation contains the following attributes:
    • by – Via the “by”-attribute the element name is specified which contains the values to be ordered.
    • direction – Via the “direction”-attribute it can be specified if the sort order is “ascending” or “descending”.
  • rootNode – via the “rootNode”-Annotation dedicated metadata how to handle root node(s) in the hierarchy can be defined. The annotation contains the following attributes:
    • visibility – Specifies the handling of the root node aka the “all member” in MDX terminology. The values and their semantics are exactly the same as for the corresponding annotation for “leveled hierarchies”.
  • orphanedNode See in leveled hierarchies.
    • handling – See in leveled hierarchies.
    • stepParentNodeId – In case of handling = STEPPARENT_NODE this field contains the node ID(s) of the stepparent node(s).
      In case of parent-child hierarchies one needs to define a stepparent node ID for each component, i.e. each parent-child combination.
  • directory – For external hierarchies, the view of hierarchy nodes often contains the nodes for multiple alternative hierarchies. A user is supposed to choose a single hierarchy for display, and her selection is used as a filter when selecting from the hierarchy node view. The directory annotation identifies an association, the hierarchy directory association, from the hierarchy node view to a view, the so-called hierarchy directory, providing all available alternative hierarchies for this hierarchy node view. The hierarchy directory is used as value-help for user input, and a chosen hierarchy directory entry is used to filter the nodes via the hierarchy directory association.

Client Handling

The main purpose of the client handling as it is specified within this document is to provide a client separation on the CDS language level. That is, if views or entities are accessed from a host language, only the content of the current client should be visible.

Requirements

Client handling in CDS should be as declarative as possible. The developer solely has to express his intent, i.e. whether an entity, table function or view is client-dependent or not. At the same time, the client attribute should not be visible in the data model on the entity or view level, as it does not carry any business-related meaning.

At the same time it has to be possible to perform queries including cross-client access.

In the case of view definitions, different client handling algorithms can be distinguished. The developer should have the opportunity to choose the appropriate algorithm for his use case.

Specification of Client Dependency and Client Handling

The following sections distinguish between a CDS entity and a CDS view definition as it is specified by the CDS specification.

Entity Definitions

When defining a “client-dependent” entity, the developer needs to explicitly indicate this by annotating the entity definition via the following annotation:


@Scope: [#ENTITY]
annotation ClientHandling {
    type : String(30) enum { CLIENT_DEPENDENT; CLIENT_INDEPENDENT; INHERITED; };
};

When this annotation is set to CLIENT_DEPENDENT, it indicates to the CDS-DDL/-QL Compiler and Runtime that the transparent enrichments for the client-dependent behavior need to be done as described in the course of this section. In other words, the infrastructure ensures that the developer doesn’t need to specify any additional client-dependent metadata/logic in case he wants to leverage the “default behavior” (= filtering of the result set according to the “current client”).[1]

The impact of this annotation differs depending on whether the entity (and the corresponding table) is newly created or the entity is defined on top of an existing (DDIC) table:

  • For newly created entities the infrastructure will transparently add the required support for client-handling into the underlying definitions. Concretely it will enrich the DB table by adding a dedicated field to store the “client” information.[2] This field will always have the name “MANDT” and is always of type CHAR(3).[3] Furthermore the field “MANDT” will be added to the Primary Key.

Proposal:

Based on the entity definition

@ClientHandling.type : #CLIENT_DEPENDENT
entity E1 {
    key f1 : String;
    f2 : String;
}

The following DB table will be created:

CREATE TABLE E1 (
    MANDT NVARCHAR(3) NOT NULL,
    f1 NVARCHAR NOT NULL,
    f2 NVARCHAR,
    PRIMARY KEY (MANDT, f1)
)

 

  • When exposing existing (DDIC-) tables as entities, the infrastructure needs to ensure that the developer doesn’t explicitly add the existing “client-field” of the table to the entity signature. Instead the infrastructure adds the information which field of the underlying table contains the client to its internal metadata of the entity definition.

The algorithm to identify the “client-field” within an underlying DDIC table is as follows: It must be the first field in the table, it must be part of the primary key and it must have the DDIC data type CLNT.[4]

Default Behavior

If no @ClientHandling.type is explicitly given, #CLIENT_DEPENDENT is used as a default value. If the entity is derived from an existing table, #INHERITED is used as default.

View Definitions

Client dependency and client handling are specified by the following annotation:

@Scope: [#VIEW, #TABLE_FUNCTION]
annotation ClientHandling {
    // @Scope: #VIEW
    type : String(30) enum { CLIENT_DEPENDENT; CLIENT_INDEPENDENT; INHERITED; }
                       default INHERITED;
    // @Scope: [#VIEW, #TABLE_FUNCTION]
    // For #TABLE_FUNCTION only #NONE and #SESSION_VARIABLE are allowed
    algorithm : String(20) enum { NONE; AUTOMATED; SESSION_VARIABLE; };
};

For view definitions the attribute @ClientHandling.algorithm specifies the algorithm how the client handling should be performed in this view. Depending on the specific view, an automated client handling might not be possible with every client handling algorithm.

Client Dependency:

  • If @ClientHandling.type : #CLIENT_DEPENDENT is specified, the defined view is client-dependent. If the given handling algorithm is not applicable, an error is raised. If the view contains no client-dependent data source, an error is raised.
  • Likewise, if @ClientHandling.type : #CLIENT_INDEPENDENT is specified, the view at hand is not client-dependent and only @ClientHandling.algorithm : #NONE is allowed. If the view contains a client-dependent data source, an error is raised.
  • If @ClientHandling.type : #INHERITED is specified, the view is client-dependent if one of the underlying data sources is client-dependent. The rules for the different client handling algorithms are applied accordingly. If no @ClientHandling.type annotation is specified, #INHERITED is the default value.

Different client handling algorithms:

  • @ClientHandling.algorithm : #NONE
    No Client handling at all.

    • If @ClientHandling.type : #CLIENT_DEPENDENT is set and the underlying data source is client dependent, a manual client handling inside the view has to be implemented.
    • If @ClientHandling.type : #CLIENT_INDEPENDENT is set, only @ClientHandling.algorithm : #NONE is allowed.
    • If @ClientHandling.type : #INHERITED is set, an error is raised. If client handling is inherited, some sort of algorithm has to be applied.

Nevertheless, the ON-Condition of the JOIN within the view definition is transparently expanded to compare the values of the “client elements” in the underlying data sources. This is required as for client dependent entities that are created “top-down”, the developer doesn’t have any access to the client element in the underlying DB table and is therefore not able to manually specify the ON-condition.

Note: Currently no valid use case for the option #NONE exists. Therefore, this option will not be implemented.

  • @ClientHandling.algorithm : #AUTOMATED
    Each ON-Condition of a JOIN is transparently expanded to compare the values of the “client elements” in the underlying data sources. If the underlying data source is a join itself, one of the client columns is used in the on clause of the surrounding join. If the nested join is a left outer join, the client column of the right data sources is propagated up to the next surrounding level (for right outer joins inversely).
    For the native database view generated during the activation of the CDS view in the database, a client column (name ‘CLIENT’ or ‘MANDT’) is added at position one to the select list. If necessary, the GROUP BY and the ORDER BY clause have to be extended as well.
    This algorithm is only applicable if @ClientHandling.type is set to #CLIENT_DEPENDENT or #INHERITED.
    Note: if a left outer join has a client-independent data source on the left side and a client-dependent data source on the right side, an automated client handling is not possible as described above (for right outer joins inversely). Due to arbitrary ON clauses of the left outer join, the client column propagated to the select list or to surrounding joins could contain NULL values. When the client handling in the host language (like in ABAP) applies a filtering condition for the current client by comparing this propagated client column with the current client, this comparison is always false for NULL values, even though the data is retrieved from the current client.
    If such a situation occurs (left side of the outer join client-independent, right side client-dependent), the left data source of the left outer join is replaced by a cross join of the data source itself with the table containing all possible clients (in ABAP, table T000). By adding a cross join with the table of all clients, the left hand side of the outer join is artificially client-dependent and the further client handling works as expected. A conceptual sketch of the native view generated by the #AUTOMATED algorithm is shown after this list.
  • @ClientHandling.algorithm : #SESSION_VARIABLE
    Instead of propagating a client column to the accessing host language and relying on the client filtering there, the client handling and filtering is implemented inside the CDS View itself. As in the previous algorithms, each On-clause of a join in the view definition is enhanced by comparing the client columns of the underlying data sources. For each client dependent data source, the where clause or the on clause is enhanced by an additional comparison of the client column with the session variable containing the current client set by the accessing host language (in ABAP, sy-mandt). As before, in the native database view generated during the activation of the CDS view in the database, a client column (name ‘CLIENT’ or ‘MANDT’) is added at position one to the select list.
    This algorithm is only applicable if @ClientHandling.type is set to #CLIENT_DEPENDENT or #INHERITED.
    Note: Cross-client access will be possible in Open SQL if the USING CLIENT clause is used; it is not possible to access several different clients in one SELECT.
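For illustration, a conceptual sketch (table, view and column names are made up; the exact generated SQL may differ) of the native database view that the #AUTOMATED algorithm might produce for a join of two client-dependent tables:

-- join of two client-dependent tables TAB_A and TAB_B
CREATE VIEW V_ORDERS AS
  SELECT A.MANDT AS MANDT,       -- client column added at position one
         A.ORDER_ID,
         B.ITEM_ID
    FROM TAB_A AS A
    INNER JOIN TAB_B AS B
      ON  A.ORDER_ID = B.ORDER_ID
      AND A.MANDT    = B.MANDT   -- ON-condition transparently extended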

As for entities, CDS Views do not expose the client column as part of their signature.

See More: SAP HANA Partitioning, Data Replication, Memory algorithms, Data Layout in the Main Memory

Default Behavior

The default behavior is applied if no annotation is explicitly given.

  • @ClientHandling.type: If no @ClientHandling.type is given, the client dependency is derived from the underlying data sources and the value #INHERITED is taken as default. That is, the view is client-dependent if one of the underlying data sources is client-dependent. If @ClientHandling.algorithm is given, the specified value has to be compliant with the client dependency of the underlying data sources as described in the previous section regarding the different client handling algorithms.
  • @ClientHandling.algorithm: If no @ClientHandling.algorithm is explicitly given, the default algorithm is defined as follows:
    • If @ClientHandling.type : #CLIENT_INDEPENDENT is set, the default algorithm #NONE is assumed.
    • If @ClientHandling.type : #CLIENT_DEPENDENT is set, @ClientHandling.algorithm: #AUTOMATED is assumed.
    • If no @ClientHandling.type annotation is explicitly given, the client dependency is derived from the underlying data sources; the default type therefore is #INHERITED and the handling strategy is #AUTOMATED.

Table Functions

The client dependency of table functions can be specified in the same way as for table entities:

@Scope: [ #ENTITY ]
annotation ClientHandling {
 type : String(30) enum { CLIENT_DEPENDENT; CLIENT_INDEPENDENT; }
 };

However, the value #INHERITED cannot be used for table functions.

Default Behavior

If no @ClientHandling.type is explicitly given, #CLIENT_DEPENDENT is used as a default value.

Client Handling in (Open) QL

For the access of the CDS-View-Definition in the program code via a SELECT-statement, the following extensions are required:

  • The QL Compiler/-Runtime needs to transparently enrich the WHERE-Condition to filter the content of the “CLNT”-element based on the “current client”.
    → Syntax proposal (extension is marked in bold):
SELECT <field1>, <field2>, … FROM <view> WHERE CLNT = <SY-MANDT>
  • When the consumer wants to explicitly select the data from a different client than the “current client”, dedicated (new) keywords USING CLIENT and USING CLIENT IN[5] need to be supported in the SELECT-statement.
    → Syntax proposal (extension is marked in bold):
SELECT <field1>, … FROM <view> USING CLIENT <client_number> …

SELECT <field1>, … FROM <view> USING CLIENT ALL

SELECT <field1>, … FROM <view>
        USING CLIENT IN (<client1>, <client2>, …)

When data from all clients should be selected, the USING CLIENT statement is extended by the keyword ALL (with the consequence that after the ALL keyword no further concrete clients can be specified). For this scenario an additional pseudo variable $client is required to distinguish the records in the result set. This pseudo variable is also automatically included into the projection list when the view is accessed via “SELECT *” in combination with USING CLIENT, as sketched below.
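A sketch (the view and field names are made up) of a cross-client query using the pseudo variable:

SELECT $client, so_id, gross_amount
  FROM SalesOrderView
  USING CLIENT ALL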

See More: HANA Row vs Column storage? Columnar Dictionary Compression; Inverted index?

ABAP CDS Legacy Annotations

The annotation @ClientDependent: true as it is being used right now in ABAP CDS triggers a client handling as it is described in the previous section “Default Behavior” for a view. A view is client dependent if one of the underlying data sources is client dependent. Client handling is performed according to the description regarding @ClientHandling.algorithm : #AUTOMATED. The existing annotation @ClientDependent: false could now be translated to @ClientHandling.algorithm : #NONE.

The current annotation @ClientDependent and the new annotation @ClientHandling cannot be used together in one CDS view definition.

Column Views

Via dedicated annotations additional metadata is specified which on the one hand can’t be expressed via standard (S)QL statements and on the other hand doesn’t fulfill the criteria to become a language extension.

Most annotations are only applicable to Column Views; nevertheless, some annotations also apply to (S)QL Views, introducing SAP-specific behavior (like client handling).

The required “(Column) View” annotations are defined as follows:

@Scope: #VIEW
annotation ColumnView {
    useCalcEngine : boolean default true;
    engineHint : String(10) enum { OLAP; JOIN; SQL; };
    alwaysAggregateResult : boolean default false;
    executionHints : array of NameValuePair;
    runWithInvokerPrivileges : boolean default true;
    fixedClient : Integer;
    countStarElement : elementRef;
    defaultSchema : String;
    applyPrivilegeType : String(30) enum { NONE; ANALYTIC_PRIVILEGE;
                                           SQL_ANALYTIC_PRIVILEGE; };
};

type NameValuePair {
    name : String;
    value : String;
};

@Scope: #ELEMENT
annotation ColumnViewElement {
    dimension : boolean default false;
    keep : boolean default true;
    hidden : boolean default true;
    expressionLanguage : String(10) enum { SQL; COLUMN_STORE; } default SQL;
};

ClientDependent is no longer defined as a Column View-specific annotation. Instead, the same annotation as in the client handling chapter is used, because it has the same meaning:

If this flag is checked then the “client elements” are detected and a filter for the session (logon) client is automatically applied for them.

Analysis

On View-level the following metadata can be defined:

  • useCalcEngine – When this annotation is set to “true”, the view will be activated into the Calculation Engine using the optimized calculation engine execution semantics (the so-called “TEMPLATE semantics”). This means, for example, that all fields that are not part of the SQL query are removed from the model during instantiation.
  • engineHint – Indicates which concrete HANA engine is to be used for the view. Eventually this property will become obsolete, once the optimizer can decide this completely internally. However, currently it is still required as a hint to keep the good performance of certain models.
    When the value “SQL” is chosen, the CalcEngine will push down the execution (more precisely: as much of the execution as possible) to the SQL engine. For some models this results in a better performance.
  • alwaysAggregateResult: Switching on this flag changes the aggregation behavior when querying the view such that measures are always aggregated even if not specified in the SQL. This flag is only evaluated, if the top node of the view is of type aggregation.
    Example: If you do a select A,B,C from MyView and A is defined as measure in the root aggregation node of the calc scenario, the query will be implicitly transformed into select sum(A),B,C from calcScenario group by B,C. In other words: the final result will always be aggregated.
    An effect of setting the flag is that a where condition doesn’t change the result in case the SQL query specifies no aggregation and no group by.
  • executionHints: This set of name value pairs allows for more dynamic hints for the engine execution. With this we can experiment with new engine hints more flexibly by just providing them as name value pairs. So a new hint can be tested and prototyped without any model / syntax change.
  • runWithInvokerPrivileges – Only relevant for SQL script views. Defines whether the script is executed with invoker privileges (true) or with definer/owner privileges (false)
  • fixedClient – By specifying fixedClient (integer) one can set a fixed client instead of the session client that is taken as the default.
  • countStarElement – If such an element is provided the count(*) query is calculated by running a query on that element and taking over the result as the count(*) result.
  • defaultSchema – The default schema is used for the look up in the currency tables and to specify unqualified table names in scripts.
  • applyPrivilegeType – Defines if analytic privileges are applied when the view is queried. It also defines which type of privileges is to be applied. The currently supported types are:
    • ANALYTIC_PRIVILEGE: The classical analytic privileges
    • SQL_ANALYTIC_PRIVILEGE: SQL-based analytic privileges

On element level the following Column View-specific metadata can be defined:

  • dimension – This only makes sense in views having the DataCategory = 'CUBE' and can be applied to elements of type “CDS Association”. It indicates that the association target is used as a dimension, which means that in a multi-dimensional client tool and in MDX all columns and hierarchies of that view appear below the dimension and not in a flat manner on the top view level.
    If the target of the association already has the dataCategory “Dimension”, the annotation doesn’t need to be set.
  • keep – This flag can only be set for attributes in aggregation nodes and only for those that do not have an aggregationType set. The flag is ignored, if the overall “executionSemantics” of the View Definition is RELATIONAL. It indicates that the attribute is kept during instantiation (no aggregation happens along this attribute) and execution of the underlying calc scenario and that it is passed to the upper nodes even if it is not requested by the query.
  • hidden – When an element is flagged as hidden, it is not returned to external consumers but can be used to store, for example, subtotals.
  • expressionLanguage – Defines if the specified expression is specific to the SQL engine or the Calc Engine. This annotation is needed as long as the expression languages in the two engines are not harmonized. In case a given expression is specific to the Calc Engine, this annotation must be specified with the value “COLUMN_STORE”. In all other cases the infrastructure interprets the expression as a (S)QL expression.
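For illustration, a sketch (the view and element names are made up) combining several of the Column View annotations described above:

@ColumnView: { useCalcEngine: true,
               engineHint: #OLAP,
               applyPrivilegeType: #SQL_ANALYTIC_PRIVILEGE }
define view SalesCube as select from SalesOrderItem {
    key so_item_id,
    currency_code,
    @ColumnViewElement.hidden: true
    net_amount,
    gross_amount
};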
[1] In case this annotation is not set, the infrastructure doesn’t apply any transparent enrichment.

[2] Enriching the DB table via this explicit “client-field” ensures that standard SQL clients or -providers (like the other SAP-supported DBs besides HANA) can deal with the client-handling in a standard compliant way.

[3] In future version of the CDS specification a dedicated semantic type CLNT should be introduced.

[4] In case a platform container is not able to automatically derive this metadata, the developer must explicitly state which field of the DB table represents the “client field”.

[5] USING CLIENT IN allows specifying a list of clients for which the results should be retrieved.

  • aggregateAllNodes ??