Tight deadlines often rule the workday of controllers and similar information workers. Particularly during month-end and quarter-end closing activities, speed matters - while expectations regarding the precision and reliability of the results remain as high as ever. Balancing the tension between these two targets can be a real challenge.
In our role as solution architects for controlling applications, we were onboarded as team members on SAP's internal ERP on HANA adoption project - a project which led to the replacement of the existing ERP database with our new HANA platform. This was an important milestone and a solid foundation for a closer integration with our CRM and BW systems, which had already been upgraded to HANA.
The ERP on HANA adoption was part of the "SAP runs SAP" initiative, which aims to showcase SAP's solution portfolio and positions SAP itself as an early adopter of its own solutions. Before any customer goes live, solid business process and system setup scenarios can be developed, and products can be guided up the crucial steps of the learning curve - an additional and important proof point before the rollout to our customers.
Graphic: Controlling Profitability Analysis (CO-PA), Acceleration by ERP on HANA / Integrated Platform
In the following list we would like to share, in brief, some remarkable results we were able to achieve in the CO-PA area after the upgrade to ERP on HANA:
CO-PA experiences an impressive general acceleration. In particular, reporting and mass-data-based analytical transformation processes gained an immense performance boost.
CO-PA reports can be built on a line-item basis - no need for summarizations and aggregations anymore.
We were able to set up virtual InfoProviders for real-time reporting in BW, which enable us to access ERP line items directly as they occur. Time-consuming extractions can thus be substituted - a blessing during closing activities.
No index definitions or adjustments are necessary anymore to set up new processes, which means free development and less impact on competing processes in the module environment.
The new, enhanced table selection possibilities allow complex and detailed ad hoc content analysis.
HANA as a platform provides almost unrestricted opportunities for additional, customer-specific process implementations and optimizations.
It is obvious that, in line with the graphic above, those results emphasize the integration, process design and execution, and analytical reporting aspects of the ERP on HANA implementation - from a controlling perspective. A homogeneous platform, as drafted in the graphic, is a big advantage for any kind of end-to-end optimization and growth strategy. The technological boundaries effectively disappear, and you can set your primary focus on process implementation, integration and reporting - as you normally do as an application consultant.
We think those results also show that, as a consequence of using the HANA platform, the company as a whole benefits - both the business and the IT environment. The technology impacts everybody involved in running, analyzing, designing and modelling the existing system and process landscape. It provides flexibility and accelerates the business transformation process.
I woke up philosophical. I am a Woody Allen fan and very often watch his old flicks. Yesterday I saw 'Love and Death' - a nice movie, I recommend it. I guess you are wondering: what is the relation between a Woody Allen movie and Actual Costing?
Last week I began a new project, and one of my first activities is always to explain the costing methods for materials. This definition is hard and complicated, and very often depends on the managers - and the managers are not part of the project team. Why explain the way the system works? Because if you don't explain things, other people 'assume'.
I think the principal functionality of the Material Ledger (ML) is Actual Costing. Most companies want 'actual costs'. The problem is: what is 'actual cost' from the point of view of a company? The words 'actual cost' are subjective. Let's see a definition of subjectivity:
Subjectivity is the condition of being a subject: i.e., the quality of possessing perspectives, experiences, feelings, beliefs, desires, and/or power. Subjectivity is used as an explanation for what influences and informs people's judgments about truth or reality. It is the collection of the perceptions, experiences, expectations, personal or cultural understanding, and beliefs specific to a person. It is often used in contrast to the term objectivity, which is described as a view of truth or reality which is free of any individual's influence.
I don't like to assume; I like to test, to watch and feel, to confirm. And of course, I like my users to work this way. My first rule is 'don't assume'.
Based on this, a long time ago I decided not to use the words 'actual cost' in my projects, because they are subjective. When someone asks me about the costing methods in SAP, my response is the following (just a summary):
V/2 - moving average in SAP: it is just a daily moving average.
S/2 - standard in SAP: a fixed standard price. The difference between the standard and the total cost incurred goes to a P&L account. Inventory and COGS are not revaluated correctly.
S/3 - Actual Costing in SAP: a 'monthly average actual cost'. The system calculates the average of the costs incurred in the month plus the actual cost of the beginning stock. At period-end closing, COGS and stock are revaluated. The monthly average considers all the costs you want to include (full cost absorption), but we have to define the way (or the drivers) to absorb the indirect costs. We have to define a 'fair' way to do the absorption... OK, you can say 'fair' and 'justice' are subjective too.
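To make the difference concrete, here is a minimal Python sketch (not SAP code - the quantities and prices are invented for illustration) contrasting a daily moving average with ML's periodic unit price:

```python
def moving_average(receipts, opening_qty=0.0, opening_value=0.0):
    """Price control V: the unit price is recalculated
    immediately after every goods receipt."""
    qty, value = opening_qty, opening_value
    prices = []
    for r_qty, r_price in receipts:
        qty += r_qty
        value += r_qty * r_price
        prices.append(value / qty)
    return prices

def monthly_average(receipts, opening_qty=0.0, opening_value=0.0):
    """ML price determination 3: one average for the whole period,
    computed once at period-end closing (the periodic unit price)."""
    qty = opening_qty + sum(q for q, _ in receipts)
    value = opening_value + sum(q * p for q, p in receipts)
    return value / qty

# Two receipts in the month: 100 kg @ $10, then 100 kg @ $14
print(moving_average([(100, 10.0), (100, 14.0)]))   # [10.0, 12.0]
print(monthly_average([(100, 10.0), (100, 14.0)]))  # 12.0
```

With V/2, every issue in between is valuated at whatever the average happens to be that day; with S/3, everything is revaluated to the single monthly average at close.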
Well, this is just a summary; the conversations about this topic are long and philosophical - hours of questions and answers. Why do I spend so much time on this? Because the team must share the same knowledge, and not assume.
To end, check this extract from Love and Death, discussing the existence of God and subjectivity.
Recursion is a natural phenomenon. It exists in nature, it exists everywhere. If you studied science, I guess you know the Fibonacci sequence. If you studied photography, I guess you know the Rule of Thirds or the Golden Ratio. If you look at a sunflower, you are watching recursion. And of course, if you work with SAP Costing, there are some recursive processes.
A typical example is a production order where the header material is also a component.
OK, now let's think about the ML close. There is a step named 'Multilevel Price Determination'. In this step, the lower-level price differences go to the upper levels. Sounds easy! But if a material is both a lower and an upper level of itself, what happens?
Let's think about this scenario:
1. You buy 1,000 kg at $100/kg and it arrives at your storage location.
2. That night, a storm floods the storage location. 300 kg look damaged, but the QM team is not sure.
3. The next day, QM tells PP to create an order to select the good product. PP posts a goods issue of 300 kg and finds that only 100 kg are good. This is my MM Kardex (t-code MB51):
C. If I assume the labor is free (today I don't want to make things difficult), the ending stock is valuated at 100,000 (no additional costs) and the unit cost is
100,000 / 800 = 125. That was easy!
D. Now think about the consumption. You consume 300 kg, and I guess you expect $30,000 - this is the value in the preliminary valuation. Now we ask: what are these $7,500 of price difference? At first sight this value doesn't make sense. Here come two questions:
D.1 Why is the actual cost of the consumption 37,500 and not 30,000?
D.2 How does the system calculate this value?
E. The response to D.1 is easy if you understand the concepts of ML. You valuate the ending stock at the monthly average. The PP order is not revaluated (so sad); the PRD accounts are cleared directly in FI. I am not going to explain the detail of the FI postings (if any of you wants the explanation, I can do it).
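The arithmetic behind E can be replayed in a few lines of Python (a sketch of the numbers above, not of the actual ML algorithm):

```python
# Scenario from the Kardex above; labor is assumed free
purchase_qty, std_price = 1000, 100.0   # 1,000 kg bought at $100/kg
consumed_qty = 300                      # goods issue to the PP order
received_qty = 100                      # good product received back

total_value = purchase_qty * std_price                      # 100,000: the only cost
ending_qty  = purchase_qty - consumed_qty + received_qty    # 800 kg on hand
pup = total_value / ending_qty                              # monthly average: 125

prelim_consumption = consumed_qty * std_price   # 30,000 at standard
actual_consumption = consumed_qty * pup         # 37,500 at the monthly average
price_difference   = actual_consumption - prelim_consumption   # the 7,500
print(pup, actual_consumption, price_difference)
```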
F. This is the screenshot of CKM3N at the time of the single-level step.
H. Now, the response to D.2: how does the system calculate this value? Could you calculate it yourself? It's not difficult - there are lots of ways to solve the equation. If you studied computer science at university, I guess you have faced this kind of problem. I am going to solve the equation in Excel with an iterative algorithm, calculating the Actual Value column; calculating the receipts from lower levels is the same stuff - and yes, in Excel.
I did 12 iterations, with 2-decimal precision, to arrive at the exact values of my CKM3N. My algorithm is the following:
+ Procurement is constant, because there is no price difference - you just buy from the vendor.
+ Production cost(i) = consumption cost(i-1)
+ Cumulated stock(i) = procurement(i) + production costs(i)
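The same iteration can be written as a small fixed-point loop in Python instead of Excel (a sketch using the scenario's numbers; ML solves the same equation internally, just not this way):

```python
def multilevel_pup(procurement_qty, procurement_value,
                   consumed_qty, received_qty, start_price,
                   tol=0.01, max_iter=50):
    """Resolve the cycle: the consumption cost depends on the average
    price, which depends on the production cost, which is the
    consumption cost of the previous iteration."""
    price = start_price
    for i in range(1, max_iter + 1):
        production_cost = consumed_qty * price           # production cost i = consumption cost i-1
        cum_qty = procurement_qty + received_qty         # procurement is constant
        cum_value = procurement_value + production_cost  # cumulated stock value i
        new_price = cum_value / cum_qty
        if abs(new_price - price) < tol:
            return new_price, i
        price = new_price
    return price, max_iter

# 1,000 kg bought for 100,000; 300 kg consumed; 100 kg received back;
# start from the standard price of 100
price, iterations = multilevel_pup(1000, 100_000.0, 300, 100, 100.0)
print(round(price, 2), iterations)
```

The loop converges to the 125 seen in CKM3N; the exact iteration count depends on the tolerance and the starting price.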
If we want to valuate an in-house product through the costing run in a way that is not provided by the SAP standard (i.e. by any of the delivered priorities / strategies), we need to use the user exit EXIT_SAPLCK21_002.
A product costing of this nature is detailed here; we tried to gather the requirements and how they could be achieved.
The following parameters need to be achieved when the costing run takes place:
A. For any indigenous bought-out item:
- The system should look into all POs containing that material.
- Such POs should be restricted to those whose delivery date falls within the last 365 days before the costing run date.
- The system will consider only released POs whose line item is not deleted.
- The system will then pick the highest price from the POs; the price is the net effective price.
- If the purchasing unit of the material differs from the BOM unit, the price needs to be converted to the BOM unit.
- This net effective price will be compared to the MAP of the material, and the higher of the two will be taken into valuation.
- In case the system cannot find any such PO for the above-mentioned period, the MAP will be picked up for the cost estimate.
B. For any imported bought-out item:
- To handle the high seas sale issue, where the PO contains only the basic price, the system will find the basic price of all import POs; the highest basic price will then be picked up.
- All MODVAT-able items will be deducted from the effective price to arrive at the basic price.
For the exchange rate:
- For all imported bought-out items, the above-mentioned price will be multiplied by the exchange rate as of the most recent date before the costing run date.
- To achieve this, a separate exchange rate type will be maintained in the exchange rate table; it must be different from the classical "M" or "P".
To achieve point A:
We need to join the EKKO and EKPO tables to get the POs relevant for the material and plant.
The standard SAP join view WB2_V_EKKO_EKPO will be used.
The delivery date comes from EKET-EINDT; it should be restricted to the last 365 days before the costing run date. This can be achieved by calling a function module.
The effective price comes from EKPO-EFFWR; it has to be divided by EKPO-MENGE (the quantity) to get the unit price. The currency comes from EKKO-WAERS.
So the POs are first selected by material number (EKPO-MATNR); then the highest price is picked.
To convert the price from the purchasing UOM to the BOM UOM, a function module will be called.
This price will be compared with the MAP (field MBEW-VERPR) of the material master.
The MAP is set as second priority, in case the system fails to find any PO in the above period.
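Before the ABAP, the selection logic of point A can be summarized in a short Python sketch (the field names and the dictionary structure are simplified stand-ins for illustration, not the real EKKO/EKPO fields):

```python
from datetime import date, timedelta

def valuation_price(purchase_orders, map_price, costing_date):
    """Highest net effective unit price among released, non-deleted POs
    delivered within the last 365 days, compared against the MAP; the
    MAP is the fallback when no PO qualifies. Prices are assumed to be
    already converted to the BOM unit."""
    cutoff = costing_date - timedelta(days=365)
    prices = [po["effective_value"] / po["qty"]
              for po in purchase_orders
              if po["released"] and not po["deleted"]
                 and po["delivery_date"] >= cutoff]
    return max(max(prices), map_price) if prices else map_price

pos = [{"delivery_date": date(2014, 6, 1), "released": True,
        "deleted": False, "effective_value": 1000.0, "qty": 100.0},  # 10/unit
       {"delivery_date": date(2013, 1, 1), "released": True,
        "deleted": False, "effective_value": 5000.0, "qty": 100.0}]  # too old

print(valuation_price(pos, 9.0, date(2014, 7, 1)))  # 10.0: PO price beats MAP
print(valuation_price([], 9.0, date(2014, 7, 1)))   # 9.0: fallback to MAP
```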
To achieve point B:
The system will recognize a high seas sale by a unique purchasing group "XXX".
For all import POs, the landing charges, the MODVAT-able CVD and the additional duty of customs will be deducted from the net effective price to arrive at the basic price.
This basic price will be inflated by, say, a factor of 1.1205 to arrive at the net effective price (this 1.1205 factor is applicable for India).
To achieve the exchange rate point:
As mentioned, the system will check the currency and multiply by the rate of the unique exchange rate type maintained.
A new exchange rate type Y1 is configured for quotation costing; only this exchange rate type and its corresponding exchange rates will be read in the user exit.
The relevant fields are TCURR-GDATU (the most recent date before the costing run date) and TCURR-UKURS (the exchange rate on that date).
The ABAP code is detailed below:
FUNCTION exit_saplck21_002.
* The actual logic lives in the exit include:
  INCLUDE zxckau08.

* ---------------- Include ZXCKAU08 ----------------
TYPES: BEGIN OF lty_ekpo,
         ebeln  TYPE ebeln,
         ebelp  TYPE ebelp,
         bukrs  TYPE bukrs,
         waers  TYPE waers,
         matnr  TYPE matnr,
         menge  TYPE bstmg,
         effwr  TYPE effwr,
         bprme  TYPE bprme,
         knumv  TYPE knumv,
         ekgrp  TYPE ekgrp,
         mengec TYPE bstmg,      " net effective unit price (BOM UOM)
         eindt  TYPE eindt,
       END OF lty_ekpo.
TYPES: BEGIN OF lty_eket,
         ebeln TYPE ebeln,
         ebelp TYPE ebelp,
         eindt TYPE eindt,
       END OF lty_eket.
TYPES: BEGIN OF lty_ins,
         menge TYPE bstmg,
       END OF lty_ins.

DATA: lt_ekpo TYPE STANDARD TABLE OF lty_ekpo,
      lw_ekpo TYPE lty_ekpo,
      lt_eket TYPE STANDARD TABLE OF lty_eket,
      lw_eket TYPE lty_eket,
      lw_ins  TYPE lty_ins.
DATA: lv_date  TYPE p0001-begda,
      lv_verpr TYPE verpr,
      lv_vprsv TYPE vprsv,
      lv_value TYPE ukurs_curr,
      lv_meins TYPE meins,
      tkwert   TYPE kwert,
      tkschl   TYPE kschl.

REFRESH: lt_eket, lt_ekpo.
CLEAR: lv_date, lv_verpr, lv_vprsv, lv_value.

* Base unit, MAP and price control of the material
SELECT SINGLE meins INTO lv_meins
  FROM mara
  WHERE matnr = f_matbw-matnr.
SELECT SINGLE verpr vprsv INTO (lv_verpr, lv_vprsv)
  FROM mbew
  WHERE matnr = f_matbw-matnr
    AND bwkey = f_matbw-werks.

IF lv_vprsv = 'V'.
* Released (FRGKE = A or C), non-deleted PO items of company code 1000
  SELECT ebeln ebelp_i bukrs waers matnr_i menge_i
         effwr_i bprme_i knumv ekgrp
    FROM wb2_v_ekko_ekpo2
    INTO TABLE lt_ekpo
    WHERE bukrs   = '1000'
      AND matnr_i = f_matbw-matnr
      AND ( frgke = 'A' OR frgke = 'C' )
      AND loekz_i = ' '.
  IF sy-subrc = 0.
    SORT lt_ekpo BY ebeln ebelp.
  ENDIF.

  IF lt_ekpo IS NOT INITIAL.
*   Delivery date of the first schedule line
    SELECT ebeln ebelp eindt
      FROM eket
      INTO TABLE lt_eket
      FOR ALL ENTRIES IN lt_ekpo
      WHERE ebeln = lt_ekpo-ebeln
        AND ebelp = lt_ekpo-ebelp
        AND etenr = '0001'.
    IF sy-subrc = 0.
      SORT lt_eket BY ebeln ebelp.
    ENDIF.

    IF lt_eket IS NOT INITIAL.
      LOOP AT lt_ekpo INTO lw_ekpo.
*       High seas sale (purchasing group 946): inflate the basic price
        IF lw_ekpo-ekgrp = '946'.
          lw_ekpo-effwr = lw_ekpo-effwr * 11205 / 10000.
        ENDIF.
*       Imports: deduct landing charges, CVD and additional customs
*       duty (condition types ZLC1 / ZCV1 / JADC)
        IF lw_ekpo-waers NE 'INR'.
          SELECT kwert kschl INTO (tkwert, tkschl)
            FROM konv
            WHERE knumv = lw_ekpo-knumv
              AND kposn = lw_ekpo-ebelp.
            IF tkschl = 'ZLC1' OR tkschl = 'ZCV1' OR tkschl = 'JADC'.
              lw_ekpo-effwr = lw_ekpo-effwr - tkwert.
            ENDIF.
            CLEAR: tkwert, tkschl.
          ENDSELECT.
        ENDIF.

*       Delivery date for the 365-day restriction
        READ TABLE lt_eket INTO lw_eket
             WITH KEY ebeln = lw_ekpo-ebeln
                      ebelp = lw_ekpo-ebelp
             BINARY SEARCH.
        IF sy-subrc = 0.
          lw_ekpo-eindt = lw_eket-eindt.
        ENDIF.

*       Net effective unit price = effective value / quantity
        lw_ekpo-mengec = lw_ekpo-effwr / lw_ekpo-menge.

*       Convert foreign-currency prices with exchange rate type Y1
        IF lw_ekpo-waers NE 'INR'.
          CALL FUNCTION 'READ_EXCHANGE_RATE'
            EXPORTING
              client           = sy-mandt
              date             = sy-datum
              foreign_currency = lw_ekpo-waers
              local_currency   = 'INR'
              type_of_rate     = 'Y1'
            IMPORTING
              exchange_rate    = lv_value
            EXCEPTIONS
              no_rate_found    = 1
              no_factors_found = 2
              no_spread_found  = 3
              derived_2_times  = 4
              overflow         = 5
              zero_rate        = 6
              OTHERS           = 7.
          IF sy-subrc <> 0.
            MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                    WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
          ENDIF.
          lw_ekpo-mengec = lw_ekpo-mengec * lv_value.
        ENDIF.

        MODIFY lt_ekpo FROM lw_ekpo.
        CLEAR: lw_ekpo, lw_eket.
      ENDLOOP.

*     Keep only POs delivered within the last 365 days
      CALL FUNCTION 'RP_CALC_DATE_IN_INTERVAL'
        EXPORTING
          date      = sy-datum
          days      = '00'
          months    = '00'
          signum    = '-'
          years     = '01'
        IMPORTING
          calc_date = lv_date.
      IF lv_date IS NOT INITIAL.
        DELETE lt_ekpo WHERE eindt LT lv_date.
      ENDIF.

*     Highest unit price first; one entry per material
      SORT lt_ekpo BY mengec DESCENDING.
      DELETE ADJACENT DUPLICATES FROM lt_ekpo COMPARING matnr.

      READ TABLE lt_ekpo INTO lw_ekpo INDEX 1.
      IF sy-subrc = 0.
*       Take the higher of the highest PO price and the MAP
        IF lv_verpr GE lw_ekpo-mengec.
          lw_ins-menge = lv_verpr.
        ELSE.
          lw_ins-menge = lw_ekpo-mengec.
        ENDIF.
        exp_preis = lw_ins-menge.
      ELSE.
*       No PO found in the period: fall back to the MAP
        exp_preis = lv_verpr.
      ENDIF.
      CLEAR lv_verpr.
    ENDIF.
  ENDIF.
ENDIF.
In my previous post, I briefly gave an idea about the current / non-current bifurcation in compliance with the revised Schedule VI requirements. In this article, I try to give an illustrative example of how to solve this problem in a simple way.
The current portion of non-current liabilities is grouped under this head:
Current liabilities
(c) Other current liabilities...
Likewise, the current portion of non-current assets is grouped under this head:
Current assets
(f) Other current assets...
The above grouping is arrived at by deducting the current portion from the non-current portion. In order to fulfil this requirement, manual intervention is unavoidable.
Based on the above structure, any bifurcation required at the quarter end can be posted to a GL code starting with "C" followed by the original GL code.
For example, a term loan @ 9% of Rs. 10 crore repayable on 31.12.2014.
Entry to be made on 31.3.2014:
Term Loan 9% : 206000 DR. 10,00,00,000
C206000 CR. 10,00,00,000
Now the current liability "Term Loan 9%" sits on GL C206000: 10,00,00,000.
On 1.4.2014 this entry can be reversed and the amount placed back in the original GL code:
C206000 Dr. 10,00,00,000
206000 CR. 10,00,00,000
The reversal can easily be done through the recurring entry programme at the beginning of the financial year.
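The pair of postings can be sketched as follows (a plain Python illustration of the entry logic, not an SAP program; the GL code and amount are taken from the example above):

```python
def quarter_end_reclass(gl_code, amount):
    """Build the period-end entry that moves the current portion of a
    non-current item to its 'C'-prefixed mirror account, plus the
    reversal to be posted on the first day of the next period."""
    reclass  = [(gl_code, "DR", amount), ("C" + gl_code, "CR", amount)]
    reversal = [("C" + gl_code, "DR", amount), (gl_code, "CR", amount)]
    return reclass, reversal

entry, reversal = quarter_end_reclass("206000", 10_00_00_000)
print(entry)     # [('206000', 'DR', 100000000), ('C206000', 'CR', 100000000)]
print(reversal)  # [('C206000', 'DR', 100000000), ('206000', 'CR', 100000000)]
```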
This way, the entire current portion of the non-current items is cleared and posted to a single GL for control purposes. I request the experts to give their valuable comments on this topic. N.Selvakumar.
One of my client's Controlling modules is designed with a focus on getting correct sale-order-wise profitability, based on the actual cost of materials consumed for the respective orders, the activity costs incurred during processing, and the allocable overheads defined. The objective is to get the correct post-execution profitability of each order and to compare it with the plan costs, so as to find the plan vs. actual variance.
The same data will be collated and compared with customer-wise profitability to support long-term business strategies and critical business decisions, based on the cost of producing to the customised order requirements of each customer.
NCO (Non-Conforming to Order) Cost
While manufacturing against orders to customer specifications (especially for high-value items in OE, appliances etc.), there is a high chance of rejection / diversion of coils from an order due to quality rejections ("seconds"). These materials are declared as not matching the high-end quality required by the customer in the specific order, but they can be used for other lower-quality orders / commercial qualities.
Once detached from the SO, the coil is available as unattached SFG or FG (as the case may be), which will automatically be attached to a new order through the APO run, based on the characteristics available.
Business Impact
As these coils are high-cost, generated against a high-value order, they carry their accumulated cost when detached from the original SO over to the new SO, which actually requires a low-cost material. This eventually results in:
The original SO not absorbing the cost of quality, and thus showing a wrong profitability.
The second SO, which actually requires a low-cost material, absorbing extra material cost that is not relevant to it, again resulting in wrong profitability records.
To achieve correct SO profitability and customer profitability records in SAP, we need to overcome the above limitation.
I kindly request all SAP gurus: if anyone has a solution for the above requirement, please let me know.
Many global companies these days are using offshore SAP support teams. Quite often the offshore SAP support organization accounts for over 100 people, and the rotation rate within the team can be relatively high. This can create a certain disconnect between the business and the support team. The knowledge transfer process and the on-boarding and off-boarding procedures for the support team are not necessarily state of the art. Global consulting companies can easily move dozens of people overnight from one big account to another.
The same disconnect is often visible even inside the same support organization, between the onshore and offshore teams. It starts with the language barrier and goes on to technological challenges, like noise on the line or the network being slow or down. And sometimes this time gap of 10 or 15 minutes to communicate a proper action can be of vital importance. I will not go into a deep analysis of the various types of potential issues created by a "split architecture" support structure, but will illustrate the problem with just one basic example.
This example is about different approaches to problem solving. One approach is rather technical (and that can often prove to be wrong); the other is based on knowledge of the business process. Unfortunately, the second approach can take much longer to learn.
Imagine that the client creates an incident ticket because an employee is observed to have a wrong hourly rate. Taking the technical approach, the solution might be to go into KP26 and change the rate to the correct value. But the same activity type can be used by many other employees, and that might make things even worse if the rate was in fact correct for that activity type. Taking the business approach, you may find out that the employee was assigned a wrong activity type in infotype 0315, for example, and a different activity type has to be assigned there. And we do not know whether it will be changed with or without delimiting, and with what date (so once again the support consultant needs not only cross-functional SAP knowledge, but also has to understand the business process).
Another similar example: a person reports a mistake in time reporting with CATS. The technical solution can be found on the master data side, and the person will be enabled with a function. On the business process side, the person belongs to a department that should not be doing any time reporting at all, and his/her HR number was selected by mistake or by typo.
To summarize: my point is not against offshore support teams by any means, but about a proper learning cycle that should include extensive knowledge transfer focusing specifically on the business process side. Not taking this seriously can gradually have a negative impact on the consultancy's reputation as a trusted service provider.
To address this risk, consulting companies can take a more serious approach to knowledge transfer as part of the on-boarding and off-boarding process for offshore team members. At the same time, there is a clear conflict of interest for the company. On one hand, the whole idea behind the offshore team is cost reduction. On the other hand, educating technical people in customer-specific business processes costs both time and money, and naturally the consulting company will want to transfer that cost to the client. At the end of the day, it might raise the bill for the offshore team to match certain on-site team members.
It also leaves a big question on the customer side: choosing between short-term cost savings and paying a premium for long-term quality and reliability.
Product costing is one of the key areas in manufacturing and process industries. It is used for estimating and valuating the internal cost of a product. Product cost planning is used for estimating/predicting the cost incurred in producing a finished product. This estimate is also used for budgeting purposes. When the material is actually produced, the actual cost is incurred, and the variance between the planned cost and the actual cost is calculated. Based on the magnitude of the variance, a decision is taken whether to re-estimate the cost of the materials.
The standard cost estimate is the basis of product cost planning. It is nothing but the estimation of the cost of a particular product being manufactured.
Before getting into SAP terminology, let's relate it to a very simple practical scenario, so that it is clearer and more understandable.
Suppose a company is manufacturing glass containers. The components used are
The Glass
The Cap
The above two components are raw materials, and the end product, the container, is the finished good. The diagram below represents the entire production process.
The above line defines the routing, and the machine used is the work center (t-code CR01/02/03). The exact activity or work carried out is the activity type (KL01/02/03). The t-code for the routing is CA01/02/03.
KP26 : Activity cost planning
There is a source maintained to serve as the basis for the activity price, so that, based on the number of hours spent, it adds up to the estimated cost of the material being costed. That source is KP26.
As can be seen above, 1 hour of the machine activity costs 200 units. In our case the number of hours is 2, hence the cost is 400 units.
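The arithmetic of the activity valuation is simply rate times quantity; here it is as a tiny Python sketch (only the 200-per-hour rate and the 2 hours come from the example above, the component prices are invented for illustration):

```python
activity_rate = 200.0    # KP26: price of 1 hour of the machine activity
routing_hours = 2.0      # hours maintained in the routing operation

machine_cost = activity_rate * routing_hours   # 400.0 units

# Component costs picked per the valuation strategy (illustrative values)
component_prices = {"GLASS": 120.0, "CAP": 30.0}
cost_estimate = machine_cost + sum(component_prices.values())
print(machine_cost, cost_estimate)
```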
2. Cost estimate creation
Now that we are done with the master data part, let's look into the cost estimate creation. CK11N is the t-code to create the cost estimate. Let's talk about the different parameters involved.
Costing Variant
As can be seen in the screenshot below, which is the initial screen of CK11N, costing variant PPC1 is used. The costing variant is the basis of cost estimate creation. It holds important parameters such as whether or not the standard price will be updated, the source from which the price will be picked, the default validity period of the cost estimate, any reference costing variant, etc.
The configuration of the costing variant is one of the most important configurations in product cost planning. The costing variant PPC1 used below is a standard costing variant; custom costing variants can also be created.
In the bottom grid, we see the itemization part. This is nothing but all the items that constitute the costing result. Based on the valuation strategy, the costs are picked from the constituent materials. In the example above, we see the component material cost (picked from the component material master as per the valuation strategy). We can switch between the itemization and the cost component split.
Cost Component Split
Clicking on the highlighted part below takes us to the cost component split. The cost component split is the split of the entire cost estimate among various cost buckets.
Once the cost estimate is saved, it is marked and released. After that, the material's standard price is updated.
The above explanation intends to simplify the basic understanding of a cost estimate. This understanding can serve as the first step towards understanding the entire configuration pertaining to standard cost estimates.
If the performance issue is reproducible, that is the best case, since we can start an ST12 trace for the job. The performance data collected will be more accurate and easier to analyze.
–If ABAP takes the most time, it is likely a program problem, which should be analyzed by the team that owns the program. But for long-running performance issues, around 90% are caused by the database.
–So most of the time the database will take the most time - normally above 75% of the total. In fact, this is the result you will usually get.
If you want to continue the analysis:
–First, sort descending by Net time; the first line then shows the call that takes the most time. Normally this is the cause of the performance problem.
–As you can see in the example, the access to table KEKO takes the most time. So first you can try to search for SAP Notes with the key words "program name" + "table name" + "performance".
–When you look up businesses in a Yellow Pages telephone book, you probably never consider reading the entire telephone book from cover to cover. You more likely open the book somewhere in the middle and zero in on the required business by searching back and forth a few times, using the fact that the names are sorted alphabetically.
–One way for the database to deal with an unsorted table is a time-consuming sequential read, record by record. To save the database from having to perform a sequential read for every SQL statement, each database table has a corresponding primary index.
Then we need to understand the difference between Primary Index and Secondary Index:
–We can consider the combination of a table's key fields to be its primary index. You can check the key fields of each table in SE11.
–A primary index is always unique; that is, for each combination of index fields, there is only one table entry.
–Sometimes the primary index is not sufficient, and we need to define a secondary index that contains the fields we need. You can also check the secondary indexes in SE11 by clicking the "Indexes..." button.
–For example, Material Number is not a key field of table KEKO. If we want to search by material number, we can define a secondary index on that field. Checking in SE11, KEKO~A is the secondary index we talked about.
Solution: define a new secondary index for the table.
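The telephone-book analogy is easy to demonstrate. The following Python toy (invented data, not KEKO's real structure) contrasts a sequential read with an access via a "secondary index" on the material number:

```python
# A toy "table": 10,000 records keyed by document number, with a
# non-key material number column
rows = [{"docnr": i, "matnr": "MAT%03d" % (i % 100), "value": i * 1.5}
        for i in range(10_000)]

def sequential_read(table, matnr):
    """Full table scan: every record is touched, like reading the
    whole telephone book cover to cover."""
    return [r for r in table if r["matnr"] == matnr]

# Build the secondary index once: material number -> row positions
index = {}
for pos, row in enumerate(rows):
    index.setdefault(row["matnr"], []).append(pos)

def indexed_read(table, idx, matnr):
    """Index access: jump straight to the matching records."""
    return [table[pos] for pos in idx.get(matnr, [])]

# Both return the same rows; the indexed read touches only the matches
assert sequential_read(rows, "MAT042") == indexed_read(rows, index, "MAT042")
```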
Wrong index is used
–Sometimes, although the correct index exists, the optimizer still chooses the wrong one for some reason.
o This might be caused by obsolete table statistics. When there are changes to the table indexes, the table statistics need to be updated manually.
o With some databases (e.g. MS SQL Server), the execution plan does not change for each call; sometimes it follows the last call's execution plan, so the wrong index might be used.
–Normally the issue happens on one or two tables: when you keep refreshing the screen, the table remains the same. Then you should find the statement and check its execution plan.
–Via the menu Administration -> Program -> Debugging, you can start debugging and find the statement that is currently being read.
You can get error C+048 when running material ledger period-end closing (CKMLCP).
You can also get it when posting goods movements.
Cause
The cause is an inconsistency between the data in the material ledger and the data in the material master.
The material ledger is a special ledger that records every goods movement and writes it to the material ledger tables. The data in the material ledger and the material master are assumed to be consistent; otherwise, error C+048 is raised.
You can check the inconsistency with CKMC.
Annoying
When you get this error, the only recommended way to solve it is to report an incident to SAP for correction. The annoying thing is that it is always difficult to reproduce the error, and very difficult to identify the root cause.
Root cause
Based on experience, KBA 2149876 was created to identify the most common root causes.
There, the root cause "Incorrect implementation of BAPI: BAPI_GOODSMVT_CREATE" is very common.
Many customers use BAPI_GOODSMVT_CREATE in their Z-programs or interfaces to create goods movements.
But the way BAPI_GOODSMVT_CREATE is used is critical. If it is used in the wrong way, inconsistencies will happen before you know it.
Below I paste two examples: one incorrect, and one correct.
Sometimes you will find tables beginning with K9R*, which were generated by old derivation rules. These old derivation rules are no longer available in the system, but the tables remain. Or, when you try to delete some unused characteristics in CO-PA, these K9R* tables will prevent you from doing so.
If you really want to delete unused characteristics from the CO-PA field catalogue, you first have to delete all those old K9R* derivation tables that were generated in the past for such characteristics but are currently no longer used in derivation. You can delete each such UNUSED K9R* derivation table by executing function module KED0_DELETE_DERIVATION_TABLE for this table.
For example, for each generated K9R* table that is part of the where-used list of the corresponding data element RKEG_WWxxx, you have to check whether the table is still used in table TKEDRS for a currently existing derivation rule. If it is not used, you can delete it with function module KED0_DELETE_DERIVATION_TABLE. This is a little laborious (running the function module for quite a lot of old K9R* tables that are no longer used in derivation), but if the derivation rules had been deleted correctly in customizing transaction KEDR, the corresponding generated K9R* tables and RKEAK9R* access routines would have been deleted automatically by the system.
Sometimes you open transaction CO03 (Display Production Order) and navigate from the menu via Goto -> Costs -> Analysis to the report result. There you find that no "Total target costs" have been updated for your production order. If you check report KKBC_ORD, you get the same result.
The possible reasons are:
- Variance calculation has not been done yet
- The order is not set to DLV/TECO
- There is no valid standard cost estimate
Then what should you check?
1st: Check whether your order has the status "VCAL" (Variances calculated). If it does not, variance calculation has not been done yet.
In fact the target costs are calculated and written to the database only at the point of variance calculation.
Before that point, you can still see an 'online' calculation of the target costs, for example in the CO03 cost analysis report 'Target/Actual Comparison', but only if goods receipts have taken place. This is for information purposes only, since no variance calculation has run yet.
So you should perform variance calculation in KKS2 and check the result again.
2nd: Production orders with full settlement are taken into account in the variance calculation if the status is "Technically completed" or "Delivered". If the status has not been set, the system generates message KV011 "Order does not have status DLV or TECO".
The order has to have status TECO or DLV to be relevant for variance calculation. The DLV status has priority over TECO; if DLV is not active, TECO becomes relevant. You can check the DLV and TECO dates as described in Note 530563 (WIP calculation: Status DLV and TECO): the DLV date is stored in AFPO-LTRMI and the TECO date in AUFK-IDAT2.
3rd: Another prerequisite for variance calculation is that a standard cost estimate for the order material must exist with a validity period (valid-from to valid-to dates) that includes the DLV or TECO date of the order.
If the released cost estimate was valid on a different date, check whether an adjustment of the cost estimate dates is still possible. If so, you can re-run variance calculation in KKS2 afterwards.
If you wish to change the existing released cost estimate, you can reorganize the material with CKR1, which deletes the released cost estimate. Alternatively, you can use CKR1 with OK code 'REDO' as outlined in Note 410619 (CKR1: Cancellation of the release via OK code = REDO possible), which revokes the released status.
However, this is not always feasible, since it may have side effects on the Material Ledger and other areas.
The target costs are typically calculated based on the itemization of the used cost estimate. The calculation then is:
Target costs = planned costs / planned output quantity or lot size * actual output quantity
Target quantity = planned quantity / planned output quantity or lot size * actual output quantity.
If there are lot size independent items in the itemization, the calculation is:
Target costs = planned costs
Target quantity = planned quantity
This also depends on the target costs version. The above relates to target cost version 0 (most commonly used).
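A minimal worked example of the formulas above, with invented figures (lot size 100 pieces, 80 pieces actually delivered):

```python
# Target cost formula for target cost version 0, sketched with invented
# numbers. Lot-size-independent items (e.g. setup) carry their planned
# value unchanged; all other items are scaled to the actual output.
def target_cost(planned_cost, lot_size, actual_output, lot_size_independent=False):
    if lot_size_independent:
        return planned_cost  # target costs = planned costs
    return planned_cost / lot_size * actual_output

# Itemization of a cost estimate with lot size 100 pieces; 80 delivered.
items = [
    {"name": "raw material", "planned_cost": 500.0, "fixed": False},
    {"name": "machine time", "planned_cost": 200.0, "fixed": False},
    {"name": "setup",        "planned_cost": 100.0, "fixed": True},  # lot-size independent
]
total = sum(target_cost(i["planned_cost"], 100, 80, i["fixed"]) for i in items)
print(total)  # 500/100*80 + 200/100*80 + 100 = 660.0
```

The target quantities scale by exactly the same factor (actual output / lot size) for lot-size-dependent items.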
In the case of valuated sales order stock, the cost estimate that was used to valuate the sales order stock is used. See Note 520000 (FAQ: Valuated special stock).
Slightly different from the above is the case in which the production order cost estimate is used, which is quite common with valuated sales order stock. Here the itemization must be created on the fly during variance calculation and then standardized to the output quantity: first the input quantities are valuated with the corresponding prices, then the itemization is standardized to the output quantity as described above.
Steps for Allocation Cycle – Segment – Cost Center mapping
Business Purpose:
Cost allocation is a very common tool used within organizations to collect cost for a specific purpose. This cost is then spread across other cost centers, departments, or inventory units after a certain period. After several periods, financial controllers and auditors often struggle to trace the origin and destination of cost allocations in SAP. This whitepaper details the step-by-step process to derive the sending and receiving cost center relationships from several allocation cycles.
Scenario:
The business may need a list of all sending and receiving cost centers from different cycles and years in the past in order to analyze cost allocations.
SAP Technical Steps:
1. Go to Cost center master data KS03 > Environment > Where Used List > Cycles in CO-OM and download the list. Record Cycle Names and Segment Numbers.
2. Go to table T811K (Allocations: Key Fields) and input “Controlling Area name & Cycle concatenated” in Cycle field with cycle names obtained from KS03 data.
Identify Sending Cost Centers:
3. Get sending cost center from field “From value (and to value if any)” where field = “KOSTL” and “I (Type of Set)” = 2 (Sending cost center) from T811K data.
4. Get the sending cost center group from field “Set Identification” from T811K data.
5. Get the related set nodes for the current set. Remove the set class and controlling area number prefix from the sending cost center group ID.
6. Go to SETNODE table and input Set Class = 0101, Org Unit = 0003 (Controlling area), Set Name from T811K data. Extract and get subset ID for further process.
7. Go to SETLEAF table and input Set Class = 0101, Org Unit = 0003 (Controlling area), Set Name = Subset ID from SETNODE Table data.
8. Rerun the T811K sets in SETLEAF, as some cost center hierarchies have no nodes but have cost centers assigned directly.
Identify Receiving Cost Centers:
9. For the receiving cost centers, get the "Set Identification" from the T811K data for entries where field = "KOSTL" and "I (Type of Set)" = 3 (receiver set).
10. Get the related set nodes for the current set. Remove the set class and controlling area number prefix from the receiving cost center group ID.
11. Go to the SETNODE table and input Set Class = 0101, Org Unit = 0003 (controlling area), and the Set Name from step 9. Extract the subset IDs for further processing.
12. Go to SETLEAF table and input Set Class = 0101, Org Unit = 0003 (Controlling area), Set Name = Subset ID from SETNODE Table data.
13. Rerun the T811K sets in SETLEAF, as some cost center hierarchies have no nodes but have cost centers assigned directly.
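The SETNODE/SETLEAF traversal in the steps above amounts to a recursive set expansion. Here is a toy sketch with invented set names and cost center intervals:

```python
# Toy model of the SETNODE/SETLEAF lookups from the steps above.
# setnode maps a set to its subsets; setleaf maps a set to its directly
# assigned cost center intervals. All names are invented sample data.
setnode = {"ZSENDERS": ["ZSUB1", "ZSUB2"]}
setleaf = {
    "ZSUB1": [("1000", "1000")],  # single cost center
    "ZSUB2": [("2000", "2002")],  # interval of cost centers
    "ZFLAT": [("3000", "3000")],  # group with no nodes (steps 8 and 13)
}

def resolve(set_name):
    """Recursively expand a set into its cost center intervals."""
    intervals = list(setleaf.get(set_name, []))
    for subset in setnode.get(set_name, []):
        intervals.extend(resolve(subset))
    return intervals

print(resolve("ZSENDERS"))  # [('1000', '1000'), ('2000', '2002')]
print(resolve("ZFLAT"))     # [('3000', '3000')]
```

The "rerun" in steps 8 and 13 corresponds to the SETLEAF lookup also being applied to sets that have no SETNODE entries at all.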
Here is a demonstration of an indirect activity allocation using activity category 2 (indirect determination, indirect allocation). This type of activity quantity allocation involves more calculation steps than the other activity allocation types, since the activity quantity is determined indirectly and allocated indirectly based on tracing factors; as we shall see, the activity quantity is never entered directly by the user for any cost center. Here is a practical scenario where this is applicable:
Our aim is to determine the activity "Tester Hours" for the cost center "Quality Control" and then allocate it to the receiver cost centers where goods receipts and finished products are tested. We do not know in advance how many tester hours Quality Control will use, so we derive them from the number of items being tested: 0.4 hours are needed per piece, and 4,000 goods receipt items and 6,000 finished product items are to be tested. From this information we determine the activity quantity (indirect determination) and allocate it to the cost centers where the goods receipts and finished products are tested (indirect allocation).
Let us jump right into it with the below master data ready:
Tester Hours: ZACT3 (name of activity type, category 2 indirect determination and indirect allocation)
Sender quality control cost center: 53001 (name of cost center, where ZACT3 quantity will be indirectly determined and it is the sender during indirect allocation)
Receiving goods receipt cost center: 53003 (name of cost center, where goods receipt testing takes place and the receiver during indirect allocation)
Receiving finished product cost center: 53004 (name of cost center, where finished product testing takes place and the receiver during indirect allocation)
Statistical Key figure: 5ZSKF3 (name of the statistical key figure which contains the number of tester items at the cost center 53003 and 53004)
Here is the activity type master data screenshot (transaction KL02):
The screenshot below displays the statistical key figure entered for cost centers 53004 and 53003; note that activity allocation has not yet taken place. The SKF unit 'AU' is slightly misleading; think of it as 'number of items'. Using transaction S_ALR_87013618 we can view the SKF entries as shown below.
Creation of a plan indirect determination and indirect allocation cycle is done using tcode: KSC7
Kindly note the selection rule "Quantities calculated inversely"; the receiving tracing factor is the plan statistical key figure.
The sender is cost center 53001, while the receivers are the cost center group ZCCGRP1, which contains cost centers 53001, 53002, 53003, and 53004. We could also enter only cost centers 53003 and 53004 as receivers for this scenario. Please note that the activity quantities on the sender and receivers are still zero at this step.
In the sender values tab, the weighting factor is 40/100 = 0.4, as per our requirement of 0.4 hrs/piece. In the receiver tracing factor tab, the receiving tracing factor is the plan statistical key figure 5ZSKF3 entered in version 912.
Viewing the plan line items in KSCB, we can confirm that the Quality Control cost center 53001 has been the sender for the receiver cost centers 53003 and 53004. Although the ZACT3 activity quantity was never entered manually at any step, the quantities have been indirectly determined for the Quality Control cost center 53001 and allocated to the goods receipt cost center 53003 and the finished product cost center 53004.
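The indirectly determined quantities in this example can be verified with a few lines (figures taken from the scenario above):

```python
# Quantities for the example above: 0.4 tester hours per piece, and the
# statistical key figure 5ZSKF3 holds the item counts per receiver.
weighting_factor = 40 / 100           # 0.4 hrs per piece (sender values tab)
skf = {"53003": 4000, "53004": 6000}  # items to test per receiving cost center

# "Quantities calculated inversely": receiver quantities are derived
# from the tracing factors, and the sender quantity is their total.
receiver_qty = {cc: items * weighting_factor for cc, items in skf.items()}
sender_qty = sum(receiver_qty.values())
print(receiver_qty)  # {'53003': 1600.0, '53004': 2400.0}
print(sender_qty)    # 4000.0 tester hours determined for cost center 53001
```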
Similar steps are followed using transactions KSC1 and KSC5 to create and execute the actual indirect activity allocation cycle. Note that up to this point the cost of activity ZACT3 has not been dealt with. Since the cost per unit of activity (5 USD/hr) is known in this scenario, we can run an assessment cycle based on the plan activity units; that is, activity-dependent costs can be entered for the respective cost centers with activity quantities as tracing factors. I hope this blog has been useful for understanding indirect activity allocation with activity category 2.
I just want to share this in case anyone needs it. It is a complete scenario, from configuration through planning and actual posting to final revaluation. I used some comments and posts from the forum that I found valuable and clear.
1. Master data & set up
1.1 KL01-Create activity type
Set the 'Price indicator' to "1" (Plan price, automatically based on activity) in the 'Allocation default values' section so that the plan prices are calculated automatically. This means that the cost center budget entered in transaction KP06 will be divided by the quantity entered in transaction KP26 when you run the plan price calculation in transaction KSPI.
You need to set the 'Act. price indicator' to "5" (Actual price, automatically based on activity) in the 'Variance Values for Actual Allocation' section so that the actual prices are calculated automatically. This means that the actual costs posted to a cost center will be divided by the actual quantity consumed in the cost center when you run the actual price calculation in transaction KSII.
You can decide the method used to calculate activity prices by setting the appropriate indicators in the controlling area settings. To do this, go to transaction OKEV, highlight the appropriate version, and double-click the folder "Settings for each fiscal year". Then double-click the appropriate year and go to the "Price Calculation" tab.
In the 'Plan' section, you can choose the method for calculating the price: "Periodic" (the budget of each period is divided by the quantity of that period) or "Average" (the budget of all periods is divided by the quantity of all periods, and the result is assigned to every period).
In the ‘Actual’ section you also choose one of the above mentioned methods.
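A short calculation, with invented budget and quantity figures, illustrating the difference between the two methods:

```python
# Periodic vs. average price calculation, sketched with invented figures
# for one cost center/activity type over 3 periods (budget as entered
# in KP06, activity quantity as entered in KP26).
budget = [1200.0, 900.0, 900.0]  # cost per period
quantity = [100.0, 100.0, 50.0]  # activity quantity per period

# "Periodic": each period's budget divided by that period's quantity.
periodic = [b / q for b, q in zip(budget, quantity)]

# "Average": total budget over total quantity, same price every period.
avg = sum(budget) / sum(quantity)
average = [avg] * len(budget)

print(periodic)  # [12.0, 9.0, 18.0]
print(average)   # [12.0, 12.0, 12.0]
```

The same two methods apply to the actual price calculation in KSII, just with actual costs and actually consumed quantities.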
In addition, you need to specify whether the difference between the actual and plan price should be updated under the same business transaction that was originally posted or under a different transaction. You do this by selecting the relevant indicator in the "Revaluation" field.
For actual activity price calculation, the cost object that the cost center’s quantities have been consumed in can be revalued based on the “Revaluation” setting mentioned above. In order for the revaluation to take place however, you need to run a separate transaction for the cost object (after you have run the actual price calculation transaction KSII). This transaction will depend on the type of cost object that is being revalued. See the below table for the cost objects and their corresponding revaluation transactions.
Cost Object             | Revaluation Transaction (Individual) | Revaluation Transaction (Collective)
Production Orders       | CON1 | CON2
Process Orders          | CON1 | CON2
Product Cost Collectors | CON1 | CON2
QM Orders               | CON1 | CON2
Networks                | CON1 | CON2
Internal Orders         | KON1 | KON2
Note that if you use the Material Ledger to calculate actual costs of your materials, then you do not need to run the above revaluation transactions. Instead, you can set the 'Activity Update Indicator' to "2" (Activity update relevant to price determination) in the 'Activate Actual Costing' transaction (SPRO -> Controlling -> Product Cost Controlling -> Actual Costing/Material Ledger -> Actual Costing -> Activate Actual Costing). You would then also need to set up the general ledger account that the cost center will be credited with by going to transaction OBYC and entering the account in transaction key/general modification GBB/AUI.
Price 568 = total cost 11,360 / total activity 20, which is correct.
In this example, a document is generated when KSII is run because one of the internal orders has the status 'Closed'; that IO can no longer be revaluated, so the system posts to a cost center instead.
KON1: Revaluation at actual price (KON2 for collective processing).
You may face the error "Inappropriate status" if an internal order has the status Closed. The IO can then no longer be revaluated, and the system posts to another cost object or cost center right after KSII runs, as seen above.
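The arithmetic behind the KSII example above, plus a hypothetical revaluation amount (the plan price and consumed quantity below are assumptions for illustration, not taken from the screenshots):

```python
# KSII actual price from the figures above: total actual cost on the
# cost center divided by the total activity quantity.
total_actual_cost = 11360.0
total_activity_qty = 20.0
actual_price = total_actual_cost / total_activity_qty  # 568.0, as in the example

# Revaluation a receiver would get: price difference times the quantity
# it consumed. Plan price and consumed quantity are invented here.
plan_price = 500.0   # assumed plan price from KSPI
consumed_qty = 4.0   # assumed quantity consumed by one receiver order
revaluation = (actual_price - plan_price) * consumed_qty
print(actual_price, revaluation)  # 568.0 272.0
```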
A budget is only as powerful as it is customized. Given your organization's unique business structure and requirements, set up Funds Management in SAP PSM to keep your costs and expenses on track. First set up your master data and define a budgetary structure; then integrate budgetary management, actuals and commitments updates, and AVC processes.
Configure SAP PSM-FM for your budget needs
Set up a budget structure that reflects your organization
Customize budgetary management, actuals and commitment updates, and availability controls
I hope you will find this guide helpful, and I'll be glad to receive your feedback. It will be a great contribution both to myself, in my new role as an SAP guides author, and, more importantly, to SCN in its mission to support and evolve SAP products.
When we execute a costing run for semi-finished (SFG) or finished goods (FG) via transaction CK11N or CK40N, we cannot judge how accurate our costing is. We can therefore maintain a planned price manually in the material master and compare it against the estimated cost using the process below.
Menu path
Accounting -> Controlling -> Product Cost Controlling -> Product Cost Planning -> Information System -> Summarized Analysis -> Analyze Costing Run (S_ALR_87099930)
Transaction code: S_ALR_87099930
Before releasing the cost estimate, we should verify it against the material's predefined planned price through transaction S_ALR_87099930 in order to get an accurate price.