Executive Summary
This report examines the problem of returned goods and sets out a plan to redesign the organization's learning and development planning, with the goal of eliminating returns of sold units. Several approaches are applied, most of them relating to the learning process and the weight given to each of its stages.
The approach is experiment based: we observe the process, develop a conceptual learning framework and finally test it against samples of manufactured units.
The statistical sampling methodology describes how to divide the production data into smaller samples, apply a chi-square analysis and select units in order to find the most probable faulty patterns; if all of these checks pass, the chance of a return is minimal. The report closes with recommendations based on the inferences drawn.
Introduction
Supply chain management and testing form the backbone of any organization. Keeping returns of manufactured units to a minimum is essential, since returns directly affect a company's revenue.
XYZ Foods is a manufacturing company that has kept production and packaging entirely in-house, while the supply chain is handled by an outsourced unit.
The problem faced by the organization is that returned units cause a loss of revenue, and the requirement is to apply the right effort to reach a no-return condition. To approach this result, we will examine the development methodology and carry out the necessary analysis.
To deliver flawless units ready for distribution, production quality must be improved, so effort has to be directed at the production process itself. The following steps will be taken to achieve this.
Learning and Development
Sustained improvement is now a widely known concept and is the desired outcome of every learning and development plan. According to David A. Kolb, the following cycle should be followed so that development takes place naturally.
Experiencing - Reflecting - Generalising - Applying
Kolb's model is based on gaining experience through close observation, forming abstract concepts from it, and then testing those concepts in practice to achieve the desired results. The same cycle will be applied to the manufacturing team to identify flaws and carry out sample-based testing.
To understand the role of learning in detail, we first set goals that help us deduce the reasons for failure. Learning activities are then planned around an analysis of the reasons why a product is returned: each returned sample is analysed to identify the type of problem, and self-assessment is used to find a solution, which is then implemented. Inserting this process into the production cycle will certainly reveal the learning areas, but it will also affect the cycle time, so we also need to work on our time management technique.
Time therefore has to be reviewed from an analytical perspective. We need to examine how time is currently used and meet the same timelines as before, while improving performance so that the units delivered are not returned. To do this, the whole process will be broken into short-term targets, which lets us assess the time taken at each step; if any step is delayed, the following steps can be accelerated by adding resources so that production stays on schedule. Testing is an extra step introduced to raise the performance level, so, in line with good time management practice, some time must be set aside in advance for testing.
As discussed above, sample-based testing for the anticipated errors is the new addition that will help develop the production cycle and send the right product to market. A sound testing methodology is essential so that errors can be eliminated while the development process stays under control.
Sample-based product testing using data mining
As discussed earlier, modern development and learning techniques involve sample-based testing. As complexity has grown, sampling methods have improved in step, with statistics and computers widely used to make the checks far more efficient and effective.
Statistical correlation links two numerical data sets through a single correlation coefficient. The data sets analysed in this form of product testing are diverse and complex, mixing numeric and non-numeric structures such as graphs, ordered sets, schemas, texts and plans, and the correlation between two data sets can help to identify a faulty pattern of units.
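As a simple illustration (not drawn from the report's data), the sketch below computes a single Pearson correlation coefficient between two hypothetical per-batch measurements; the variable names and figures are invented for the example, and real product-testing data would be far more complex.

```python
# A minimal sketch: correlating two hypothetical numeric series recorded for
# the same batches, e.g. a process reading and the share of returned units.
from math import sqrt

process_reading = [4.1, 3.8, 5.2, 4.9, 6.0, 5.5]        # hypothetical per-batch metric
return_rate     = [0.02, 0.01, 0.04, 0.03, 0.06, 0.05]  # hypothetical returned share

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length numeric series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(f"correlation coefficient = {pearson(process_reading, return_rate):.3f}")
```

A coefficient close to +1 or -1 would point to a strong linkage between the process reading and the return behaviour, which is the kind of faulty-pattern signal described above.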
There are newer additions to these testing techniques, such as hybrid correlation, which we assess in this section by showing how first-order logic and probabilistic semantic inference (PSI) are used to detect such patterns. Data-analysis-based testing applies statistics to discover the units that are not up to the mark and can be improved before they reach the distribution vendor.
The assumption that helps us discover these patterns is that a highly probable pattern should be normal. In more formal terms, the main hypothesis (MH) of this approach is: if Q is a highly probable pattern (e.g., P(Q) > 0.9), then Q constitutes a normal pattern, and Q with a negated conclusion can constitute a suspicious (abnormal) pattern.
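As a small illustration of MH (the pattern strings and probability figures below are invented, not taken from the report), a pattern whose estimated probability exceeds the threshold is kept as normal and its negated-conclusion counterpart is flagged as suspicious:

```python
# A minimal sketch of the main hypothesis (MH): highly probable patterns are
# treated as normal; their negated-conclusion counterparts are suspicious.
T = 0.9  # probability threshold

# Hypothetical candidate patterns and their estimated probabilities
candidate_patterns = {
    "A1 & A2 => A3": 0.96,   # highly probable: treated as a normal pattern
    "A1 & A4 => A5": 0.62,   # not probable enough to count as normal
}

for rule, p in candidate_patterns.items():
    if p > T:
        premise, conclusion = rule.split(" => ")
        negated = f"{premise} => not {conclusion}"
        print(f"normal pattern: {rule} (P = {p:.2f}); "
              f"suspicious counterpart: {negated} (P = {1 - p:.2f})")
    else:
        print(f"not treated as normal: {rule} (P = {p:.2f})")
```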
Let us now walk through the testing model and see how a practical combination of computing and statistics helps to find the product units that need improvement.
From the simplified data as received, RFA + RSA transactions are faulty. Taking the total number of units as 10,000, the faulty share is therefore (RFA + RSA)/10,000 of all transactions. Identifying the patterns requires a computational approach supported by statistics, and the first step is to set out an algorithm for the sampling analysis. The main hypothesis discussed above is applied at each step.
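For example, with hypothetical counts of RFA = 120 and RSA = 80 returned transactions, the faulty share would be (120 + 80)/10,000 = 0.02, i.e. 2% of all transactions.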
Step 1: The data is assembled as given above and the units are grouped into pairs according to their production batch, with equally sized samples.
Step 2: The patterns are discovered as Horn clauses over these pairs, e.g. A1 & A2 & … & An-1 ⇒ An, and the probability of each pattern is calculated, giving P = {P1, P2, …, Pn}.
Step 3: Comparing the actual units with the expected ones gives the result shown below:
P(An | A1 & A2 & … & An-1) = N(A1 & A2 & … & An-1 & An) / N(A1 & A2 & … & An-1), where N(X) is the number of produced units for which the condition X (including the fault attribute An) holds true.
Step 4: Statistical significance is tested with a chi-square test to decide whether the sample data represent a normal pattern; the condition that must be satisfied is
P(A1 & A2 & … & An-1 ⇒ An) > T = 0.9
Step 5: The probability of the negated-conclusion rule is calculated from the formula
P(A1 & A2 & … & An-1 ⇒ ¬An) = 1 − P(A1 & A2 & … & An-1 ⇒ An)
Step 6: All pairs of transactions in the database are searched for those satisfying the rules with the negated conclusion deduced in the previous steps.
Step 7: Units are classified: those satisfying A1 & A2 & … & An-1 & An follow the statistically normal pattern (error-free), while those satisfying A1 & A2 & … & An-1 & ¬An are the faulty cases.
Hence, once it is deduced that, in a given set of transactions, conditions A & B result in C, and the data sample is statistically normal, the negated conclusion can be used for comparison to separate good-quality units from faulty ones, as illustrated in the sketch below.
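The following is a minimal, self-contained sketch of Steps 3 to 7 under stated assumptions: units are represented by hypothetical boolean attributes (A1, A2, A3 and the candidate rule A1 & A2 ⇒ A3 are illustrative, not from the report), and the chi-square check is reduced to a hand-computed 2x2 statistic compared against the 5% critical value.

```python
# A minimal sketch of the sample-testing steps described above.
T = 0.9                 # probability threshold from Step 4
CHI2_CRITICAL = 3.841   # chi-square critical value for df = 1, alpha = 0.05

# Hypothetical unit records: each unit is a dict of boolean attributes,
# where A3 stands for the checked outcome (names and values are illustrative).
units = [
    {"A1": True,  "A2": True,  "A3": True},
    {"A1": True,  "A2": True,  "A3": True},
    {"A1": True,  "A2": True,  "A3": False},   # premise holds, conclusion violated
    {"A1": False, "A2": True,  "A3": True},
    # ... in practice the full batch, e.g. 10,000 units
]

def holds(unit, literals):
    """True when every (attribute, expected value) pair holds for the unit."""
    return all(unit[attr] == val for attr, val in literals)

def rule_probability(units, premise, conclusion):
    """Step 3: P(conclusion | premise) = N(premise & conclusion) / N(premise)."""
    n_premise = sum(holds(u, premise) for u in units)
    n_both = sum(holds(u, premise + [conclusion]) for u in units)
    return (n_both / n_premise if n_premise else 0.0), n_premise

def chi_square_2x2(units, premise, conclusion):
    """Step 4: chi-square statistic of the 2x2 table premise x conclusion."""
    a = b = c = d = 0
    for u in units:
        p, q = holds(u, premise), holds(u, [conclusion])
        if p and q:       a += 1
        elif p and not q: b += 1
        elif q:           c += 1
        else:             d += 1
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom

# Candidate rule (hypothetical): A1 & A2 => A3
premise = [("A1", True), ("A2", True)]
conclusion = ("A3", True)

p_rule, n_premise = rule_probability(units, premise, conclusion)
chi2 = chi_square_2x2(units, premise, conclusion)

if p_rule > T and chi2 > CHI2_CRITICAL:
    p_negated = 1.0 - p_rule                      # Step 5: negated-conclusion rule
    suspicious = [u for u in units                # Steps 6-7: premise holds, conclusion negated
                  if holds(u, premise) and not holds(u, [conclusion])]
    print(f"normal pattern (P = {p_rule:.2f}); "
          f"{len(suspicious)} suspicious unit(s), negated-rule P = {p_negated:.2f}")
else:
    print(f"rule not accepted as a normal pattern (P = {p_rule:.2f}, chi2 = {chi2:.2f})")
```

On a real batch of 10,000 units, the same loop would be run over many candidate rules, and only those passing both the probability threshold and the significance test would be used to flag suspicious units.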
Recommendations
Based on the inferences drawn in this report, producing flawless units that are not returned requires the learning and development methodologies described above, together with the following additions:
The learning procedure should follow the four-step cycle, with an experiment- and observation-based report used to create the learning path and support the final development.
A time-based approach must be adopted: the new post-development steps should be planned so that time is released from other processes and the new ones can be accommodated effectively. Statistical sampling can then pinpoint the production slots most likely to contain a faulty unit, so that corrective effort is focused there rather than spread across the whole production division. Lastly, employees should receive an incentive for every set of units sold with no return, which will reinforce the effect of learning and development.
Bibliography
Boud, D., & Garrick, J. (2001). Understanding Learning at Work. London: Routledge.
Cook, A. B., & Lewis, R. (1996). The Independent Learner: Developing Independence in Learning. London: Hull University of Humberside.
FEDA (Further Education Development Agency). (1995). Learning Styles. FEDA.
Hindle, K. (2002). A Critical Evaluation of the Standardisation Potential of Business Plan Evaluation Aids (BPEA) Used in Venture Capital Investment Decision-Making. Hawthorn: Swinburne University of Technology.
Kolb, D. A. (1984). Experiential Learning: Experience as the Source of Learning and Development. London: Prentice Hall.
Kovalerchuk, B. (2003). Correlation of complex evidence in forensic accounting using data mining. Washington.
Lewis, R. (1993). How to Manage your Study Time. Collins and NEC.
Seymour, V., Jha, A., & Sima, S. (2004). Evaluation of the Development Gateway Final Report. London: London School of Economics.
(2004). The Handbook of Work Based Learning. London: Gower.