Automated Testing in Software Development

xiaoxiao · 2021-03-06

Most automated test tools are functional-testing tools for software (also known as record/playback tools), such as Rational Robot and Mercury WinRunner. The shortcomings of record/playback mean we should not depend on it too heavily during testing. A record/playback tool captures the keystrokes and mouse movements made as a user interacts with the application, records them into a script, and then "plays back" that script during test execution. Although this approach is useful in particular situations, raw record/playback exploits only a small fraction of such a tool's full capability: a recorded script almost always must be modified after it is first generated. These functional tests drive the application through the GUI, which is why they are also called black-box tests. Scripts created directly through record/playback carry serious limitations and drawbacks:

1. Hard-coded values. A record/playback tool generates a script from the user's interactions, including any data entered into or received from the user interface. "Hard-coding" these values into the script creates problems for future maintenance. If the application's user interface or other aspects change, the hard-coded values can invalidate the script. For example, when a script is generated during recording, input values, window coordinates, window titles, and other values may all be captured into the generated script code. If any of these values later changes in the application, the fixed values become the culprit: the test script interacts with the application incorrectly, or fails outright. Another hard-coded value that can cause problems is a date stamp. If a test engineer records the current date during a test and then runs the script again a few days later, it fails because the hard-coded date in the script no longer matches the current date.

2. Non-modular, hard-to-maintain scripts.
Scripts generated by a record/playback tool are usually not modular and are very difficult to maintain. For example, many test procedures reference a particular URL in a web application. If that URL is hard-coded in the scripts, a change to the URL invalidates a large number of scripts. With a modular approach, the URL is encapsulated in a single function that every script calls, so any change to the URL needs to be made in only one place.

3. Lack of reusability standards. Reusability is one of the most important topics in test-procedure development. If the test group establishes standards that explicitly require developing reusable automated test procedures, the group's productivity will improve greatly. If interactions with the user interface are encapsulated in modular, reusable script files that other scripts can call, script maintenance is greatly reduced as the user interface keeps changing. When creating a reusable function library, it is best to separate functions such as data reading, writing, and verification, navigation, logic, and error checking into different script files. Guidelines for automated test development should borrow from the principles followed in good software development. A sound practice is to follow the coding standards of the language that the test tool's generated scripts resemble: if the tool generates C-like scripts, follow C coding standards; if it generates Basic-like scripts, follow Basic coding standards. It is a significant fact that test scripts produced solely by record/playback are difficult to maintain and reuse.
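The URL-encapsulation idea above can be sketched in a few lines. This is a minimal illustration, not any particular tool's API; the base address and page paths are hypothetical.

```python
# Centralizing the application URL: every script calls app_url() instead of
# hard-coding the address, so a URL change is a one-line edit.
# (The base address and path names below are hypothetical.)

BASE_URL = "https://app.example.com"

def app_url(path=""):
    """Return the full URL for a page of the application under test."""
    return f"{BASE_URL}/{path.lstrip('/')}" if path else BASE_URL

# Test scripts reference the function, never the literal string:
def open_login_page():
    return app_url("login")

def open_reports_page():
    return app_url("reports/daily")
```

If the application moves to a new host, only `BASE_URL` changes; every script that calls `app_url()` keeps working unmodified.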
Although in a few cases recorded scripts can be used without modification, in most cases a script that is not modified after recording must be re-recorded by the test engineer whenever the application under test changes. Teams drawn in by the potential benefits of record/playback tools therefore end up reworking their test scripts anyway. This creates strong frustration among testers and dissatisfaction with the automated test tool. To avoid the problems of unmodified record/playback scripts, a policy of developing reusable test scripts should be established.
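The hard-coded-value problem described earlier (fixed window titles, coordinates, and date stamps) can be contrasted with a data-driven style in a short sketch. All names and values here are hypothetical; the "recorded" function stands in for what a record/playback tool would generate.

```python
from datetime import date

# What a record/playback tool typically produces: values frozen at record time.
def recorded_test():
    window_title = "Order Entry - v2.3"   # breaks when the title changes
    click_at = (412, 318)                 # breaks when the layout shifts
    order_date = "2021-03-06"             # breaks the day after recording
    return window_title, click_at, order_date

# A maintainable script reads configuration from one place and computes
# volatile values (like dates) at run time.
CONFIG = {"window_title": "Order Entry - v2.3", "submit_button": "submit"}

def data_driven_test():
    window_title = CONFIG["window_title"]     # one place to update
    target = CONFIG["submit_button"]          # logical name, not coordinates
    order_date = date.today().isoformat()     # always matches "today"
    return window_title, target, order_date
```

The recorded script fails as soon as the date (or the window layout) changes; the data-driven version isolates each volatile value behind configuration or run-time computation.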

Unmodified record/playback scripts do not amount to effective automated testing.

Workaround: develop a custom test tool. To escape the limitations of off-the-shelf automated test tools and to test core components in more depth, you can develop your own test tool. Such a customized tool is typically written in a robust programming language, for example as a stand-alone C or Java program. Custom test tools usually run faster than scripts generated by an automated test tool, and they are more flexible, because generated scripts are bound to the specific environment of the test tool.

Here is an example of a testing task suited to a custom tool. Suppose an application performs calculations based on information provided by the user and generates a report. The calculation may be complex and sensitive to different combinations of input parameters. There may be millions of potential variations, each producing different results, so only a comprehensive test of the calculation can ensure its correctness. Developing and verifying a large number of computational test cases by hand is very wasteful, and in most cases executing a large number of tests through the user interface is also very slow. A more efficient approach is to develop a test tool that exercises the application's core components directly, below the user interface.

Another use for a custom test tool is comparing a new component or system against a legacy one. The two systems commonly use different data storage formats, and their user interfaces are implemented with different technologies. To run the same test cases on both systems and generate comparison reports, an off-the-shelf automated test tool would need a special mechanism for duplicating the automated test scripts.
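Driving the core calculation directly, below the user interface, can be sketched as follows. The `calculate` function and its parameters are hypothetical stand-ins for the application's core component; the point is that combinations of inputs are generated and executed programmatically, with no GUI in the loop.

```python
import itertools

# Hypothetical core calculation, invoked directly instead of through the GUI.
def calculate(quantity, unit_price, discount):
    """The application's core computation: order total after discount."""
    return round(quantity * unit_price * (1 - discount), 2)

def run_combinations():
    """Exercise the core logic across every combination of the chosen
    parameter values, recording each input tuple with its result so the
    results can be verified (or compared against a baseline) later."""
    quantities = [0, 1, 10, 1000]
    prices = [0.0, 0.01, 9.99, 250.0]
    discounts = [0.0, 0.05, 0.5, 1.0]
    results = {}
    for q, p, d in itertools.product(quantities, prices, discounts):
        results[(q, p, d)] = calculate(q, p, d)
    return results
```

Even this small grid yields 64 cases; real parameter spaces with millions of variations are only practical to cover when the tool calls the calculation directly like this, rather than typing each case into the user interface.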
In the worst case, no single test tool is compatible with both systems at once, and two sets of test scripts must be developed with two different automated test tools. A better alternative is to build a customized automated test tool that encapsulates the differences between the two systems in independent modules, so that tests for both systems can be designed at the same time. The custom tool can then automatically verify the new system's results by comparing them against the results generated by the legacy system and reporting the differences between the two sets.

One way to achieve this is the custom-tool adapter pattern. A custom test-tool adapter is a module that, through conversion or transformation, makes a system under test compatible with the custom test tool, so that the tool can execute a predefined set of test cases against either system through its adapter and store the results in a common format that can be compared automatically. For each system, an adapter must be developed that interacts directly with that system and executes the test cases against it. Testing two systems with the custom tool therefore requires two different adapters and two independent invocations of the tool, one per system; the results of the two runs are then compared. The illustration depicts a custom test tool executing test cases against both the legacy system and the new system: by using a different adapter for each system, the same test cases can be applied to multiple systems. The adapter for the legacy system produces a set of baseline results, which are used for comparison with the results of the new system. To accomplish its task, a custom test-tool adapter first obtains a set of test cases, then executes them so as to test each system's logic directly, bypassing the user interface.
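The adapter arrangement described above can be sketched as follows. The two system classes and their APIs are hypothetical; what matters is that each adapter translates a common test case into calls on one system and normalizes the output into one shared result format.

```python
# Hypothetical legacy system: dict-based API with its own field names.
class LegacySystem:
    def compute(self, record):
        return {"TOTAL": record["qty"] * record["price"]}

# Hypothetical new system: a different API for the same business logic.
class NewSystem:
    def total(self, qty, price):
        return qty * price

class LegacyAdapter:
    """Translates common test cases into legacy-system calls."""
    def __init__(self, system):
        self.system = system
    def run(self, case):
        out = self.system.compute({"qty": case["qty"], "price": case["price"]})
        return {"case": case["id"], "total": float(out["TOTAL"])}

class NewAdapter:
    """Translates the same test cases into new-system calls."""
    def __init__(self, system):
        self.system = system
    def run(self, case):
        total = self.system.total(case["qty"], case["price"])
        return {"case": case["id"], "total": float(total)}

def run_suite(adapter, cases):
    """Execute every test case through one adapter; both adapters emit
    results in the same format, so the two runs can be compared directly."""
    return [adapter.run(c) for c in cases]
```

A typical use is two invocations, one per system: the legacy run produces the baseline, and the new-system run is compared against it case by case.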
Bypassing the user interface optimizes performance, so the throughput of test cases is maximized. It also yields higher stability: if the custom test tool depended on the user interface, any change to that interface (and during the development life cycle the user interface is often modified many times) could cause the tool to report spurious defects, and checking such results would waste a great deal of valuable time. The execution result of each test case is stored in one or more result files, all in the same format, independent of the system under test. Result files are saved so they can be compared with the results generated by subsequent test runs. The comparison can be performed by a custom results-comparison tool, which reads and evaluates the result files according to certain rules and outputs all errors or differences it finds.
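The result-file and comparison step can be sketched as follows. The one-value-per-line file layout is an assumption chosen for the sketch, not a format prescribed by the article; any shared, system-independent format would serve.

```python
# Each run writes one result file in a shared, system-independent format
# (here, "case_id=value" lines); a comparator then reports every difference.

def write_results(path, results):
    """Store test-case results in the common format, one case per line."""
    with open(path, "w") as f:
        for case_id, value in sorted(results.items()):
            f.write(f"{case_id}={value}\n")

def read_results(path):
    """Load a result file back into a case-id -> value mapping."""
    out = {}
    with open(path) as f:
        for line in f:
            case_id, value = line.strip().split("=", 1)
            out[case_id] = value
    return out

def compare_results(baseline_path, new_path):
    """Return a human-readable line for every case whose results differ,
    including cases present in only one of the two files."""
    baseline, new = read_results(baseline_path), read_results(new_path)
    diffs = []
    for case_id in sorted(set(baseline) | set(new)):
        b, n = baseline.get(case_id), new.get(case_id)
        if b != n:
            diffs.append(f"{case_id}: baseline={b} new={n}")
    return diffs
```

Because both systems' adapters write the same format, the comparator never needs to know which system produced a file; it only evaluates the stored values against each other and lists the discrepancies.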

When reposting, please credit the original source: https://www.9cbs.com/read-65101.html
