Saturday, 2 January 2016

APO Monitoring & Support needs

As we move ahead in the project life cycle, we move closer to supporting the APO solution once it is in use.

Two things (of course, after design and testing!) are most important in keeping SAP APO a usable solution - monitoring and master data maintenance.

Monitoring : APO solutions usually need to be available to users all day, five days a week, with up-to-date data in the live cache at all times. This makes it very hard to fit in all the calculations while keeping the system hardware configuration optimal.

One of my earlier clients needed to run the APO instance 24x7, not because the business needed to plan around the clock, but due to geographical and cultural constraints: some countries start their week on Sunday (UAE, Egypt, et al.) while others end their week on Sunday (Australia, Singapore, et al.).

The single APO server was used by countries spanning time zones from UTC-1 to UTC+12:00. Given the server configuration and sizing, this left very little time to run and adjust the calculations.

Projects now spend a significant amount of time analysing dependencies and timings to make the data available to planners when they need it most.

Constant, active monitoring of the system and its resources plays a huge role here; it is as important as the system itself. Without up-to-date knowledge of APO system utilization, an organization will not be able to meet user expectations with the existing APO configuration, and any change needed to address planners' requirements will then have to be met with more costly hardware upgrades.

System utilization is really a trade-off between the functional utilization and the material utilization of the system. Functional utilization asks: if APO has proposed correct planning values, is it really making planners' lives easier? Is it proposing solutions based on business needs? Are we adjusting the planning results too often? How much human intervention is there? Do we have the right amount of data in the system? Is our master data correctly maintained? How much time are planners spending on updating the master data?

Material utilization asks: is the hardware utilization of the system approaching its critical limits? How many dumps are we getting each day, and are these dumps related to heavy transactions or voluminous data? Is there a need to increase server sizing? Do we have good response times for all the critical transactions?

Master data maintenance : Master data maintenance is one of the most time-consuming activities in any standard APO solution. Keeping a health check on your master data is a must, not only for APO but for any automated system.

Master data needs updating as your business grows: the existing portfolio needs updates to meet planning needs, new data needs to be introduced, and older data needs to be cleansed. Not only does the portfolio change, but staff keep churning, and with them the knowledge - all of which has a direct impact on APO master data maintenance.

Monitoring should be in place to verify that the values maintained in critical and important master data fields are valid and make sense from a business perspective.

For one Fortune 500 company that used APO heavily to plan its supply chain, we implemented a tool to keep a check on the master data and raise an alert when a master data field was missing or wrongly maintained. We had tools in place to detect outdated master data, and cleansing mechanisms to remove the garbage. We also had tools to monitor whether APO was actually being used by planners, and their percentage compliance with the values proposed by APO.
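As a rough illustration of the kind of check such a tool performs, here is a Python sketch; the field names, rules and thresholds are invented for the example and are not the actual tool:

```python
from datetime import date

# Rough sketch of a master data health check (invented fields and rules):
# alert when a critical field is missing or implausible, or when a record
# has not been touched for so long that it is a cleansing candidate.
RULES = {
    "lot_size": lambda v: v is not None and v > 0,
    "lead_time_days": lambda v: v is not None and 0 < v <= 365,
}
STALE_AFTER_DAYS = 730  # assumed threshold: untouched for ~2 years

def check_record(rec, today):
    alerts = []
    for field, is_ok in RULES.items():
        if not is_ok(rec.get(field)):
            alerts.append(f"{rec['product']}: field '{field}' missing or implausible")
    if (today - rec["last_changed"]).days > STALE_AFTER_DAYS:
        alerts.append(f"{rec['product']}: stale master data, cleansing candidate")
    return alerts

record = {"product": "P-100", "lot_size": 0, "lead_time_days": 14,
          "last_changed": date(2013, 1, 10)}
alerts = check_record(record, date(2016, 1, 2))  # two alerts for this record
```

Running such checks on a schedule and routing the alerts to the data owners is what turns master data maintenance from a firefight into a routine.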

Overall, active monitoring brings very significant value to your supply chain.

Wednesday, 2 December 2015

Put Time series to DB and read back


Macros are used primarily to perform calculations in the planning grid; these calculations may be situational and can be used to highlight a planning state.

The functions used in macros perform special operations on the values given as their arguments. TS_GET and TS_SET are used primarily to transfer live cache data to the database and to extract it from the database back to the live cache. Each data set transferred to the database is associated with four attributes which identify the data set uniquely.

In the rest of this document, we will see how to use these functions to save time series data to the database and how to retrieve it back.

These functions can only send time series values to the database, not order series, as they are not capable of storing order type information.

How to use TS_SET:


TS_SET can only be used to transfer time series values from the live cache to the database. This function takes 5 arguments and stores an area of time series from the live cache to the database. All these time series values are associated with four different attributes, whose values are set via the TS_SET arguments at the time the area of time series is specified. The following section takes you through the syntax of TS_SET.

Syntax


Following is the syntax:

TS_SET( ARG1, … ARG5)

Where

ARG1 – First identifier; can be a string of 10 characters.

ARG2 – Second identifier; can be a string of 10 characters.

ARG3 – Third identifier; can be a string of 10 characters.

ARG4 – Fourth identifier; can be a string of 10 characters.

ARG5 – Area of time series of a key figure to be saved to the database.
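The semantics of TS_SET can be illustrated with a small sketch. This is Python, purely illustrative (APO macros are configured in the macro workbench, not coded); the key values and key figure name are invented:

```python
# Illustrative sketch of TS_SET semantics (not real APO code): save an area
# of time series values from the live cache to a database table, keyed by
# four identifier strings of up to 10 characters each.
database = {}  # stands in for table /SAPAPO/ADV_SERI

def ts_set(key1, key2, key3, key4, ts_area):
    """Store a copy of the time series area under a unique 4-part key."""
    for key in (key1, key2, key3, key4):
        if len(key) > 10:
            raise ValueError(f"identifier '{key}' longer than 10 characters")
    # copy, so later live-cache changes do not alter the saved snapshot
    database[(key1, key2, key3, key4)] = list(ts_area)

# live-cache values of a key figure (e.g. Promotion) for five weekly buckets
promotion = [120.0, 95.5, 0.0, 310.0, 87.0]
ts_set("SERIES1", "ACT_USER", "KEY2ANYKEY", "KEYVAL3ANY", promotion)
```

The point of the four-part key is that the same macro can be reused for many users and selections, as long as the combination of identifiers stays unique.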

Example of TS_SET



 

In the above example, we have called the function with the following values:

Argument 1: ‘Identifier1’

Argument 2: ACT_USER

Argument 3: ‘KEY2ANYKEY’

Argument 4: ‘KEYVAL3ANY’

Argument 5: Area of key figure Promotion starting from 19.11.2012 until 15.12.2014 (104 weeks).

Result:


Once the macro executes the step containing the TS_SET function, the time series values can be seen in table /SAPAPO/ADV_SERI.





Figure 1: Time series saved in the database

 

How to use TS_GET:


The data we have saved can be read back into a time series key figure using the macro function TS_GET. This function is called with six arguments and can only write the data into a time series key figure (or auxiliary key figures). We have the option to restrict the range of columns to be extracted from the database.

Syntax:


TS_GET will have six arguments as follows:

TS_GET (ARG1, … ARG6). Where

ARG1: First identifier; refers to the Series ID column of table /SAPAPO/ADV_SERI.

ARG2: Second identifier; refers to the Key1 column of table /SAPAPO/ADV_SERI.

ARG3: Third identifier; refers to the Key2 column of table /SAPAPO/ADV_SERI.

ARG4: Fourth identifier; refers to the Key3 column of table /SAPAPO/ADV_SERI.

ARG5: Start column. Values will be picked up from the DB into the key figure starting from this column.

ARG6: End column. Values will be picked up from the DB into the key figure up to and including this column.
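The read-back side can be sketched the same way. Again this is illustrative Python, not APO code, with invented key values and key figure names:

```python
# Illustrative sketch of TS_GET semantics (not real APO code): read values
# saved under a 4-part key back into a key figure, restricted to a range
# of columns (1-based, inclusive on both ends).
database = {
    # stands in for table /SAPAPO/ADV_SERI: 4-part key -> saved values
    ("SERIES1", "ACT_USER", "KEY2ANYKEY", "KEYVAL3ANY"):
        [120.0, 95.5, 0.0, 310.0, 87.0, 42.0],
}

def ts_get(key1, key2, key3, key4, start_col, end_col):
    """Return saved values from start_col to end_col (1-based, inclusive)."""
    saved = database[(key1, key2, key3, key4)]
    return saved[start_col - 1:end_col]

# copy the first five columns into the target key figure (e.g. Impact)
impact = ts_get("SERIES1", "ACT_USER", "KEY2ANYKEY", "KEYVAL3ANY", 1, 5)
```

Note that the four identifiers must match the ones used when saving; a mismatch in any of them means the data set will not be found.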

Example:



 

 

In the above example, we have called the function with the following values:

Argument 1: ‘Identifier1’

Argument 2: ACT_USER

Argument 3: ‘KEY2ANYKEY’

Argument 4: ‘KEYVAL3ANY’

Argument 5: We want to extract the data starting from column 1.

Argument 6: Up to and including column 5.

Result:


Once the macro is executed, the first 5 entries of key figure Promotion that were copied to the DB will have been extracted and copied into key figure Impact.

Monday, 12 January 2015

Releasing demand on Saturday in APO DP to SNP


Forecast release using Fiscal year variant


This scenario comes into the picture when:

1.       You do not need your forecast numbers to be released on a Monday, and your week definition is not coherent across the horizon.

2.       Your week definition is coherent across locations, but you do not want to release the forecast on a Monday while using the period split profile.

3.       You want to release the forecast at the start of the week, but your week definition is not coherent across locations.

 

Having said that, assume you are already using the FYV and your requirement is to release the demand at the end of the period, that is, on Sunday.

 

The following design will help you address this requirement without any customization:

1.       You need to define the FYV in such a way that each period ends one day before the day on which you want to see your forecast.





 


                i.e. if the APO week normally ends on 16th Dec 2012, you should have an FYV period ending on 15th Dec 2012.

 

2.       You plan in DP as per the normal APO week (i.e. Monday to Sunday).

3.       Instead of releasing the forecast directly from DP, take a backup of the forecast figures in a cube.

4.       The backup cube must have both Calweek and the FYV as periodicities.

5.       Now release the demand from this InfoCube to SNP. While releasing the demand to SNP, you have to use the FYV as the periodicity.


 

6.       Once the forecast is released, it will land on the first day of each FYV period. That is a Sunday.
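The date arithmetic behind this design can be sketched: shifting each period end back by one day turns the Mon-Sun APO week into a Sun-Sat FYV period, so the first day of each FYV period (where the released forecast lands in SNP) is a Sunday. A hypothetical Python sketch, assuming standard ISO Mon-Sun weeks:

```python
from datetime import date, timedelta

def apo_week_end(d):
    """Sunday ending the standard Mon-Sun APO week that contains d."""
    return d + timedelta(days=6 - d.weekday())  # weekday(): Mon=0 .. Sun=6

def fyv_period_end(d):
    """FYV period end: one day before the APO week end, i.e. a Saturday."""
    return apo_week_end(d) - timedelta(days=1)

def fyv_period_start(d):
    """First day of the FYV period containing d: a Sunday, which is where
    the released forecast lands in SNP."""
    end = fyv_period_end(d)
    if d > end:                      # d is the Sunday after this period end
        end += timedelta(days=7)
    return end - timedelta(days=6)

# Example from the text: the APO week ending Sunday 16 Dec 2012 gets an
# FYV period ending Saturday 15 Dec 2012, so releases land on a Sunday.
week_end = apo_week_end(date(2012, 12, 12))         # Sunday 16 Dec 2012
period_end = fyv_period_end(date(2012, 12, 12))     # Saturday 15 Dec 2012
release_day = fyv_period_start(date(2012, 12, 16))  # Sunday 16 Dec 2012
```

The same shift, applied with a different offset, is what gives the mid-period flexibility mentioned in the advantages below.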


 




 

Advantages of using the above design:

1.       No customizing is required.

2.       Flexibility in defining the FYV periods, hence the flexibility to set the forecast not only at the end of the period but also in the middle if need be.

3.       Your forecast will also match the demand in DP, with a one-week lag.


 


 

Let me know your views on this :)

 

APO DP Authorisations


APO planning books are by far a very robust tool for restricting access to the data we want to display to various types of users in an organization. This is one feature of APO that sells itself very well.

This post sheds light only on the ways to restrict access to planning book functions; it does not cover the restrictions that can be placed at the InfoObject level.

 

Restrictions at planning book and data view level:

The very first level of restriction that can be placed is at the data view level. This restriction can be conceptualized in the design phase. For example, quantitative market intelligence figures can be entered by a team sitting at headquarters, in addition to the figures entered by the demand planning team, to arrive at a final forecast. This can be achieved by exposing the "Market Intelligence" key figure in a separate view and giving access to this view to the central team.

We can restrict access to the planning book using the authorization object C_APO_PB, giving values as required to the following fields:

ACTVT – Activity

APO_DVIEW – Name of the data view

APO_PLBK2 – Name of the planning book

This is a very top-level access restriction and is found very commonly across projects. There are other kinds of requirements found in APO implementations, a few of which are detailed below.

General requirements in Demand planning:

1.       You want a KF to be open for change only for a few users, and display-only for everyone else. There are many ways to achieve this; a few are discussed below:

a.       We can design separate planning books and data views and set the KF as input or output per view, then restrict access to these planning books.

b.      You can restrict access to display-only for a KF by using the authorization object C_APO_IOBJ with the fields ACTVT, APO_IOBJNM and APO_PAREA.

 

2.       You do not want all users to load data at brand level; only a few can do it, and fewer still can edit at brand level. However, the demand planning authorization objects do not support authorizations based on characteristic values. There are two ways to handle this:

a.       Planning based on selections : You do not allow users to open the shuffler and select objects themselves. Instead, you assign selections, and a user can only load the selections assigned to them. To implement this, we need to restrict shuffler access and restrict planners from choosing selections. You can grey out the selection button by removing the object C_SELCTION from the role, and by removing C_SELORG you can prevent the user from calling the selection organizer. This simply means the user can only work with the selections assigned to them.

b.       Another way to restrict users from loading planning objects based on characteristic values is to use a custom enhancement. You can use method SELECTION_CHECK of BAdI /SAPAPO/SDP_SELECTOR to implement the restrictions.

In demand planning, the data can be displayed at any level of aggregation allowed by the planning book characteristics. However, one may wish to restrict access to display-only at all levels. You can set the planning book mode to display by setting user parameter /SAPAPO/SDP94_D_MODE to 'A'; after this, the pencil icon will not be visible in the planning book, even after data is loaded into it.

SAP APO Demand Planning - Macros


SAP APO demand planning is basically data analysis and calculation. Macros are an important part of demand planning and are used mainly for the calculation part, for both interactive and background mass calculations and processing. Macros perform calculations on key figure values and on key figure cell, row and column attributes, e.g. calculating the sum of KF values, or identifying and highlighting specific values on the interactive planning board.

Collective Macro Drop Box

Macros can be executed in interactive planning as well as via background jobs (//MC8D).

Demand Planning Macros Classification:

Broadly, macros can be of two types:
1. Single independent macros
2. Collective macros

A collective macro is nothing but a collection of single independent macros. Collective macros are used to execute the single independent macros in a defined sequence.
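This relationship can be sketched as follows (illustrative Python, not the APO macro workbench; the macro steps and key figure names are invented):

```python
# Sketch: a collective macro is just an ordered list of independent macro
# steps, executed in sequence against the planning table.
def sum_rows(table):
    # step 1: TOTAL = FORECAST + PROMO, bucket by bucket
    table["TOTAL"] = [f + p for f, p in zip(table["FORECAST"], table["PROMO"])]

def highlight_zeros(table):
    # step 2: flag buckets whose total is zero (depends on step 1's result)
    table["ALERT"] = [v == 0 for v in table["TOTAL"]]

collective_macro = [sum_rows, highlight_zeros]  # the defined sequence matters

table = {"FORECAST": [10, 0, 5], "PROMO": [2, 0, 1]}
for step in collective_macro:
    step(table)
```

The sequence matters because later steps often consume what earlier steps produce, exactly as in the sketch above.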

If we go by events, macros can be of the following four types:
1. Default
2. Level Change
3. Start
4. Exit

Default macros are executed on any event that takes place in interactive planning, for example Enter or Save. Level change macros are executed automatically whenever any structural aggregation or disaggregation takes place, e.g. disaggregating from location-product level to product level. Start macros are executed whenever a selection is loaded. Exit macros are executed in interactive planning when you leave a selection.

Then, of course, we have macros created with standard functions and macros created with custom functions.


You can find the information about the macro GUID name in the following popup: