1.0 Introduction
The Institute of Electrical and Electronics Engineers (IEEE) defines testing as an activity carried out to evaluate product quality, and to enhance it, by identifying defects and problems. The definition of testing may vary from one area of study to another, but in whichever area it is applied, testing is performed for the purposes of verification and validation of the product in question. The areas of study considered here include model-based testing, data mining and marketing tests, software testing, network testing, power-aware testing, cloud testing, and the automatic generation of test cases from the Unified Modeling Language (UML) (Utting and Legeard 2010).
The testing techniques and tools used also differ depending on the area of study. Testing can be carried out on the finished product or at various stages of development; the stages at which most products are tested include the design stage, the implementation stage, and the post-implementation stage, among others. Errors and defects in a product are among the factors that force designers to carry out testing in order to verify and validate the product before it is approved for use. A failure is an undesired behaviour observed at the time of testing the product; a fault, on the other hand, is the cause of the failure. An error in, say, software is the result of a human mistake made in the specification, design, or any later stage (Calderón and Ruiz 2014).
Because testing is done at various stages, there are many kinds of testing, and a product can be tested at different scales ranging from small units up to the whole product or system. For software testing in particular, white-box tests are designed from the implementation code and black-box tests from the system requirements. Functional and robustness tests of a product are designed by means of black-box techniques. Many systems use white-box coverage metrics to check for parts of the implementation that have not yet been tested as expected, so that the tests for those cases can be redesigned. Model-based testing is a form of black-box testing, since tests are produced from a model constructed from the requirements document. Although testing processes differ between areas of study as mentioned earlier, the principles and theory of testing are quite similar across them (Utting and Legeard 2010).
2.0 About automatic testing
a) Discussion overview
Testing can be explained in several ways. Apart from the IEEE definition, there are other definitions of testing that give rise to slightly varied meanings; in other words, there is no single definition of testing even within the same field of study, such as software testing. According to the Oxford English Dictionary, the term test has its origin in the Latin word testum, an earthenware pot that the Romans and their contemporaries used to assess the quality of materials such as precious-metal ores. Likewise, software or product testing is about evaluating quality through verification and validation.
Testing definitions and interpretations
Let us consider at least three different definitions. To begin with, testing can be defined as the process of executing a program or system with the intent of finding defects. This definition represents a form of negative testing that is less intuitive and in which the requirements of the system are not fully considered: it introduces the notion of actively searching for defects outside the scope of the software requirements, which in an ideal situation can be any problem or defect. In practice, however, testing combines elements of both positive and negative tests, checking that the system meets its requirements while also attempting to find errors that may compromise the successful functioning or usefulness of the system or product (Watkins and Mills 2010; Lemos 2011).
Secondly, testing can be defined as any activity aimed at evaluating an attribute or capability of a program, system, or product and determining that it meets its required results. This definition represents positive testing and reflects the traditional approach, which is meant to find out whether the system conforms to its requirements. It is an intuitive perspective on testing: we have a number of statements concerning how the system ought to behave, and we verify that those requirements are met (Watkins and Mills 2010).
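The contrast between the two definitions can be sketched in code. The following is a minimal, hypothetical example (the `withdraw` function and its rules are invented for illustration): the first test is a positive test that confirms a stated requirement, while the remaining tests are negative tests that actively probe for defects outside the happy path.

```python
def withdraw(balance, amount):
    """Withdraw `amount` from `balance`; reject invalid requests."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Positive test: verify that the stated requirement is met.
def test_valid_withdrawal():
    assert withdraw(100, 30) == 70

# Negative tests: search for defects beyond the stated requirements.
def test_rejects_negative_amount():
    try:
        withdraw(100, -5)
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_rejects_overdraft():
    try:
        withdraw(50, 80)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_valid_withdrawal()
test_rejects_negative_amount()
test_rejects_overdraft()
```

In practice, as noted above, a test suite mixes both kinds: the positive tests establish conformance, and the negative tests hunt for failures the requirements never mention.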
The third definition: testing is the process by which we explore and understand the status of the benefits and risks associated with the release of a software system. Under this definition, the responsibility of the tester is to manage or alleviate the risk of failure of the system and the undesirable effects it may have on the user. It thus offers the tester an additional way of approaching the testing of the system: the tester is fully involved in the analysis of the software in order to identify high-risk areas that need to be tested. This is arguably the most popular definition, as it gives the tester more opportunity to understand the product by getting involved in the analysis and helps the tester to identify the inherent risks more easily at the time of testing (Watkins and Mills 2010; Lemos 2011).
Considering the first two definitions, and many more from other texts, it may be difficult for the staff involved in planning and designing tests to recognize the particular risks of a specific application under test (AUT), especially when they are not familiar with the operational domain of the software. As a result, various risks have to be taken into account for successful testing: the business, safety, or security criticality of the AUT; the commercial or public visibility of the AUT; experience of testing similar or related systems; experience of testing earlier versions of the same AUT; the views of the users of the AUT; and the views of the analysts, designers, and implementers of the application under test (Watkins and Mills 2010).
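Risk-based test planning of this kind is often operationalized by scoring each area of the AUT. The sketch below is illustrative only; the application areas, the 1-5 scales, and the simple risk = likelihood × impact formula are assumptions, not taken from the cited sources.

```python
# Hypothetical risk scores for areas of an application under test (AUT).
# Each area gets a likelihood-of-failure and a business-impact score
# (1-5); tests are planned first for the highest risk = likelihood * impact.
areas = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "report export",      "likelihood": 3, "impact": 2},
    {"name": "login/security",     "likelihood": 2, "impact": 5},
    {"name": "help pages",         "likelihood": 1, "impact": 1},
]

for a in areas:
    a["risk"] = a["likelihood"] * a["impact"]

# Order the test plan from highest to lowest risk.
plan = sorted(areas, key=lambda a: a["risk"], reverse=True)
for a in plan:
    print(f"{a['name']}: risk {a['risk']}")
```

Under this scheme, the payment-processing area (risk 20) would be tested first and the help pages (risk 1) last, which matches the intuition behind the third definition of testing.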
b) Argument between studies
i) Testing in Software engineering and IT
In model-based testing, one option is to use a dozen full-time testers to design tests manually, record them on paper, and perform them by hand every time the system changes. Another option is to design a group of tests manually and then use automated test-execution tools to rerun the tests after every change and report the tests that fail. A third option employs state-of-the-art tools that produce tests automatically from a model of the requirements; this makes it possible to regenerate updated test suites every time the requirements change and to report exactly which requirements have and have not been tested. In all cases, model-based testing can take advantage of fresh technology to obtain quicker results at lower cost than traditional approaches, and it ensures more systematic, less ad hoc coverage of the failures in the program. This third approach, model-based testing, uses a fresh breed of test-generation tools to improve testing practice while reducing the overall cost of product testing (Utting and Legeard 2010).
Testing tools
The model-based testing process takes five steps: modelling the system under test (SUT), generating abstract tests from the model, concretizing (transforming) the abstract tests into executable test scripts, executing them, and finally analysing the results, in that order. The testing tools used include the test-case generator and the test-script generator. These model-based testing tools generate a group of test cases from the model; the overall test-design time is reduced, and one can generate a variety of test suites from the same model simply by using different test-selection criteria (Utting and Legeard 2010).
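The five steps above can be sketched in miniature. The following is a toy illustration, not a real MBT tool: the finite-state model of a lockable switch, its actions, and the `Switch` implementation are all invented for the example. The model is a transition table, abstract tests are action sequences derived from it (one per transition), and each abstract test is then concretized into calls on the SUT and checked against the model's expected state.

```python
from collections import deque

# Step 1: the model of the SUT -- a finite-state machine mapping
# (state, action) to the next state.
MODEL = {
    ("off", "press"): "on",
    ("on", "press"): "off",
    ("on", "lock"): "locked",
    ("locked", "unlock"): "on",
}

def generate_abstract_tests(model, start="off"):
    """Step 2: derive abstract tests -- one action sequence per
    transition, reached by breadth-first search from the start state."""
    tests = []
    for (state, action) in model:
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            s, path = frontier.popleft()
            if s == state:
                tests.append(path + [action])
                break
            for (s2, a), nxt in model.items():
                if s2 == s and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [a]))
    return tests

class Switch:
    """The SUT: an independent implementation of the switch."""
    def __init__(self):
        self.on, self.locked = False, False
    def apply(self, action):
        if action == "press" and not self.locked:
            self.on = not self.on
        elif action == "lock" and self.on:
            self.locked = True
        elif action == "unlock":
            self.locked = False
    @property
    def state(self):
        return "locked" if self.locked else ("on" if self.on else "off")

def run_concrete_test(actions):
    """Steps 3-5: concretize each abstract action into a call on the
    SUT, execute it, and compare the outcome against the model."""
    expected, sut = "off", Switch()
    for a in actions:
        expected = MODEL[(expected, a)]
        sut.apply(a)
    return sut.state == expected

results = [run_concrete_test(t) for t in generate_abstract_tests(MODEL)]
```

Changing the model regenerates the whole suite, which is exactly the maintenance advantage claimed for the third approach above; a different test-selection criterion (e.g. all transition pairs instead of all transitions) would yield a different suite from the same model.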
Testing Techniques/approaches
The testing techniques used include white-box and black-box testing. The black-box approach treats the system under test (SUT) as a black box, designing tests from the system requirements that describe its anticipated behaviour; the white-box approach employs the implementation code as the framework for test design. Other kinds of testing include system tests, integration tests, unit tests, functional tests, robustness tests, and usability tests. Model-based testing automates both the choice of test inputs and the generation of oracles; an oracle is the mechanism that decides whether the observed behaviour of the system constitutes success or failure. Exhaustive testing is not possible for most real programs, so equivalence-class and boundary-value testing are used as informal strategies to help decide which tests are likely to be more effective. A number of these informal testing strategies form the basis of the test-selection algorithms in model-based testing tools (Utting and Legeard 2010).
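Equivalence-class and boundary-value selection can be made concrete with a small sketch. The validator and its 18-65 range are hypothetical; the point is how few inputs the two strategies select from a large input space.

```python
# Hypothetical validator accepting ages in the range 18..65 inclusive.
LOW, HIGH = 18, 65

def is_valid_age(age):
    return LOW <= age <= HIGH

# Equivalence classes: below range, in range, above range --
# one representative value per class.
equivalence_reps = [10, 40, 90]

# Boundary values: each edge of the valid range plus its neighbours,
# where off-by-one defects are most likely to hide.
boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

expected = {10: False, 40: True, 90: False,
            17: False, 18: True, 19: True,
            64: True, 65: True, 66: False}

results = {v: is_valid_age(v) for v in equivalence_reps + boundary_values}
assert results == expected
```

Nine inputs stand in for the whole integer domain, which illustrates why these informal strategies underlie the test-selection algorithms mentioned above.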
Testing Challenges
The key challenge of testing is selecting the tests that are most likely to expose failures in the system, which is difficult to do. A small number of tests must be chosen so that testing can be completed within the available time (Utting and Legeard 2010).
ii) Data mining and marketing tests
Marketing tests are an important part of marketing analysis, just like data mining, and the two complement each other: marketing tests are very significant in appreciating whether data-mining efforts are working. Two things must be tested when data mining is used for a marketing treatment: first, whether the marketing message is working, and second, whether the data-mining model is working. Four groups, namely the target group, the control group, the holdout group, and the modelled holdout group, are employed to separate these two factors. Both the target group and the control group receive the treatment (Linoff and Berry 2011).
However, the target group has model scores that predict response, while the control group is selected either at random or on the basis of lower model scores. Neither the holdout group nor the modelled holdout group receives the treatment; the holdout group is selected either at random or on the basis of lower model scores, while the modelled holdout group has model scores that predict response. Data mining offers feedback into the various processes that may require changing. The results of data mining also change over time, as models expire and become less useful: data ages very fast, and markets and customers change fast as well (Linoff and Berry 2011).
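The four-group design can be sketched as a simple assignment rule. The customer data, the 0.7 score cutoff, and the 50/50 treatment split below are synthetic assumptions made for illustration; only the group structure follows the description above.

```python
import random

# Synthetic customers with made-up model scores in [0, 1).
random.seed(0)
customers = [{"id": i, "score": random.random()} for i in range(1000)]

CUTOFF = 0.7  # hypothetical threshold for "model predicts response"

# High-score customers split into target (treated) and modelled holdout
# (untreated); the rest split into control (treated) and holdout
# (untreated).
groups = {"target": [], "modeled_holdout": [], "control": [], "holdout": []}
for c in customers:
    high = c["score"] >= CUTOFF
    treated = random.random() < 0.5
    if high and treated:
        groups["target"].append(c)
    elif high:
        groups["modeled_holdout"].append(c)
    elif treated:
        groups["control"].append(c)
    else:
        groups["holdout"].append(c)

assert sum(len(g) for g in groups.values()) == len(customers)
```

Comparing response rates across the four cells then separates the two effects: target versus control isolates the model's contribution among treated customers, and target versus modelled holdout isolates the message's contribution among high-scoring customers.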
Testing techniques/tools in data mining and marketing
Decision trees are used to derive rules for classifying existing customers as either likely or unlikely to respond. The tool is trained on thousands of examples of customers who had or had not obtained the product; once the rules have been discovered, the resulting model is employed to add an attribute to every future record. The sequential-pattern-finding technique is employed to discover sequences of events that have frequently preceded successful solicitations in the past, while the clustering technique is used to segment customers into groups with similar attributes (Linoff and Berry 2011).
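The rule-derivation step can be illustrated with a toy example. A one-level "decision stump" stands in here for a full decision-tree learner, and the eight customer records and the age attribute are invented; the idea of training a rule on labelled examples and then scoring future records is the same.

```python
# Labelled training examples: (age, responded_to_offer).
customers = [
    (22, False), (25, False), (31, False), (38, True),
    (42, True), (47, True), (55, True), (61, False),
]

def train_stump(data):
    """Pick the age threshold that best separates responders from
    non-responders -- a one-level stand-in for a decision tree."""
    best_thr, best_acc = None, 0.0
    for thr in sorted({age for age, _ in data}):
        correct = sum((age >= thr) == label for age, label in data)
        acc = correct / len(data)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr, best_acc

threshold, accuracy = train_stump(customers)

# The derived rule is then applied to add a predicted-response
# attribute to every future record.
predict = lambda age: age >= threshold
```

On this data the learned rule is "age >= 38 implies likely responder", correct on 7 of the 8 training examples; a real decision tree would combine many such splits over many attributes.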
iii) Very large scale integration (VLSI) testing
VLSI testing comprises the whole spectrum of test methods, approaches, and techniques, together with test structures embedded within the system-on-chip (SOC), aimed at ensuring the quality of manufactured devices at manufacturing-test time. The test methods employed normally include fault simulation and test generation, which help to ensure that quality test patterns can be supplied to every device. The test structures often use particular design-for-testability techniques such as scan design and built-in self-test (BIST); these techniques are used to test the digital-logic portions of the device. Logic testing, memory testing, and SOC testing for low-power device applications are some of the most recent advances in design for testability (DFT). These approaches are important in improving product quality as well as reducing the defect rate and the test cost of manufactured devices; in addition, they simplify test debugging and diagnosis tasks (Girard, Nicolici, and Wen 2010).
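Fault simulation, the first of the methods named above, can be sketched on a tiny example. The two-gate circuit c = (a AND b) OR (NOT a) and its single-stuck-at fault list are invented for illustration; real fault simulators work the same way at vastly larger scale, grading each test pattern by the set of modelled faults whose injected behaviour differs from the fault-free output.

```python
from itertools import product

def circuit(a, b, faults=()):
    """Evaluate c = (a AND b) OR (NOT a), forcing any faulty net to
    its stuck-at value at the point the net is driven."""
    stuck = dict(faults)
    def drive(name, value):
        return stuck.get(name, value)
    va = drive("a", a)
    vb = drive("b", b)
    n1 = drive("n1", va & vb)   # AND gate output
    n2 = drive("n2", 1 - va)    # NOT gate output
    return drive("c", n1 | n2)  # OR gate output

# Single stuck-at-0 / stuck-at-1 fault on every net.
FAULTS = [(n, v) for n in ("a", "b", "n1", "n2", "c") for v in (0, 1)]

def detected_by(pattern):
    """Faults whose injected output differs from the fault-free output."""
    good = circuit(*pattern)
    return {f for f in FAULTS if circuit(*pattern, faults=[f]) != good}

# Grade every input pattern by the faults it detects.
coverage = {p: detected_by(p) for p in product((0, 1), repeat=2)}
```

For this circuit the four patterns together detect all ten modelled faults, and the pattern (1, 0) alone detects five of them; test generation then amounts to choosing a small pattern set whose detected-fault sets cover the whole fault list.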
c) Main points around the topics of study
The three topics of study discussed, namely software testing, data mining and marketing tests, and VLSI testing, share common main points. In all cases, testing is aimed at product verification and validation. Testing in any area of study is also aimed at uncovering defects that may exist in a product before it is handed over to its final users. It is also evident from the discussion above that every area of study has tools and techniques that must be used for successful automatic testing of the product and/or system. The final result of testing in any area of study is either the approval or disapproval of the product under test.
d) Charts/tables for further analysis
The following illustrations support further understanding and analysis of testing in the areas of study discussed above.
3.0 Conclusion
The testing techniques and tools discussed in this paper are based on only three areas of study, namely software engineering and/or IT, data mining and marketing, and very-large-scale integration testing. However, there are other areas of study to which automatic testing applies; mobile-application testing, cloud testing, and network testing are some of the areas not considered here, among many others. The mobile area, for example, has become increasingly complex, and particular development tools and frameworks, along with cost-effective testing techniques and tools, will be necessary to guarantee the growth of secure, high-quality mobile applications (Amalfitano et al. 2012). Nevertheless, it is expected that the testing techniques and management tools used may vary from one area of study to another, while having the same final aim of verifying and validating the product being subjected to test.
The different definitions of testing from different sources, as demonstrated in this paper, also show that there is no universally accepted standard definition of testing. The definitions thus tend to vary, and the processes or steps of testing vary with them in each case.
References
Amalfitano, Domenico, et al. "Using GUI ripping for automated testing of Android applications." Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering. ACM, 2012.
Calderón, Alejandro, and Mercedes Ruiz. "Bringing real-life practice in software project management training through a simulation-based serious game." 6th International Conference on Computer Supported Education, CSEDU 2014. 2014.
Girard, Patrick, Nicola Nicolici, and Xiaoqing Wen. Power-Aware Testing and Test Strategies for Low Power Devices. Springer, 2010.
Lemos, R. D. Software engineering for self-adaptive systems: A second research roadmap (draft version of May 20, 2011). Tech. Rep. (October 2010), 2011.
Linoff, Gordon S., and Michael J. A. Berry. Data Mining Techniques: For Marketing, Sales, and Customer Relationship Management. John Wiley & Sons, 2011.
Utting, Mark, and Bruno Legeard. Practical model-based testing: a tools approach. Morgan Kaufmann, 2010.
Watkins, John, and Simon Mills. Testing IT: an off-the-shelf software testing process. Cambridge University Press, 2010.