Testing Tools Material
Every profession has its own vocabulary. To learn a profession, the first and crucial step is to master its vocabulary; the entire knowledge of a profession is compressed into its vocabulary. Take our own software testing profession: while communicating with our colleagues, we frequently use terms like 'regression testing' and 'system testing'. Now imagine communicating the same to a person who is not in our profession or who doesn't understand our testing vocabulary. We would need to explain each and every term in detail, and communication becomes difficult and painful. To speak the language of testing, you need to learn its vocabulary. Find below a large collection of testing vocabulary.
Affinity Diagram: A group process that takes large amounts of language data, such as that developed by brainstorming, and divides it into categories.
Audit: An inspection/assessment activity that verifies compliance with plans, policies and procedures, and ensures that resources are conserved.
Baseline: A quantitative measure of the current level of performance.
Benchmarking: Comparing your company's products, services or processes against best practices or competitive practices, to help define superior performance of a product, service or support process.
Black-box Testing: A test technique that focuses on testing the functionality of the program, component or application against its specifications, without knowledge of how the system is constructed.
Boundary Value Analysis: A data selection technique in which test data is chosen from the 'boundaries' of the input or output domain classes, data structures and procedure parameters. Choices often include the actual minimum and maximum boundary values, the maximum value plus or minus one, and the minimum value plus or minus one (illustrated in the sketch following these entries).
Branch Testing: A test method that requires that each possible branch on each decision point be executed at least once.
Brainstorming: A group process for generating creative and diverse ideas.
Bug: A catchall term for all software defects or errors.
Certification Testing: Acceptance of software by an authorized agent after the software has been validated by the agent, or after its validity has been demonstrated to the agent.
Checkpoint (or Verification Point): Expected behaviour of the application, which must be validated against the actual behaviour after a certain action has been performed on the application.
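As a small illustration of boundary value analysis, here is a Python sketch that derives test values for a hypothetical input field; the valid range of 1 to 100 is invented for the example:

# Hypothetical input domain: valid values are 1..100.
MIN_VALID, MAX_VALID = 1, 100

def accepts(value):
    # The behaviour under test: accept only in-range values.
    return MIN_VALID <= value <= MAX_VALID

# Boundary value analysis picks the boundaries themselves plus the
# values one step inside and one step outside each boundary.
boundary_values = [
    MIN_VALID - 1,  # just below the minimum (expect rejection)
    MIN_VALID,      # the minimum itself (expect acceptance)
    MIN_VALID + 1,  # just inside the minimum (expect acceptance)
    MAX_VALID - 1,  # just inside the maximum (expect acceptance)
    MAX_VALID,      # the maximum itself (expect acceptance)
    MAX_VALID + 1,  # just above the maximum (expect rejection)
]

for value in boundary_values:
    print("input =", value, "-> accepted =", accepts(value))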
Client: The customer who pays for the product and receives the benefit from its use.
Condition Coverage: A white-box testing technique that measures the number or percentage of decision outcomes covered by the test cases designed. 100% condition coverage would indicate that every possible outcome of each decision had been executed at least once during testing.
Configuration Management Tools: Tools that are used to keep track of changes made to systems and all related artifacts. These are also known as version control tools.
Configuration Testing: Testing of an application on all supported hardware and software platforms. This may include various combinations of hardware types, configuration settings and software versions.
Completeness: A product is said to be complete if it has met all requirements.
Consistency: Adherence to a given set of rules.
Correctness: The extent to which software is free from design and coding defects. It is also the extent to which software meets the specified requirements and user objectives.
Cost of Quality: Money spent above and beyond expected production costs to ensure that the product the customer receives is a quality product. The cost of quality includes prevention, appraisal, and correction or repair costs.
Conversion Testing: Validates the effectiveness of data conversion processes, including field-to-field mapping and data translation.
Customer: The individual or organization, internal or external to the producing organization, that receives the product.
Cyclomatic Complexity: The number of decision statements plus one (a worked example follows these entries).
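To make the 'decision statements plus one' rule concrete, here is a short worked example on a hypothetical function; the shipping rates are invented for illustration:

# A hypothetical function containing three decision statements.
def shipping_cost(weight, express):
    if weight <= 0:          # decision 1
        raise ValueError("weight must be positive")
    if express:              # decision 2
        rate = 10.0
    else:
        rate = 4.0
    if weight > 20:          # decision 3
        rate *= 1.5
    return weight * rate

# Cyclomatic complexity = decision statements + 1 = 3 + 1 = 4,
# suggesting at least four test cases to cover the independent paths.
print(shipping_cost(5, express=False))   # light parcel, standard delivery
print(shipping_cost(25, express=True))   # heavy parcel, express delivery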
Debugging: The process of analysing and correcting syntactic, logic and other errors identified during testing.
Decision Coverage: A white-box testing technique that measures the number or percentage of decision directions executed by the test cases designed. 100% decision coverage would indicate that all decision directions had been executed at least once during testing. Alternatively, each logical path through the program can be tested.
Decision Table: A tool for documenting the unique combinations of conditions and associated results in order to derive unique test cases for validation testing (see the sketch below).
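To show how a decision table drives test case derivation, the sketch below enumerates every combination of conditions for a hypothetical discount rule; the rule itself is invented for the example:

from itertools import product

# Hypothetical rule: a discount applies only when the customer is a
# member AND the order total exceeds 100.
def discount_applies(is_member, total_over_100):
    return is_member and total_over_100

# Each (is_member, total_over_100) pair is one column of the decision
# table, and each column becomes one test case.
for is_member, total_over_100 in product([True, False], repeat=2):
    result = discount_applies(is_member, total_over_100)
    print("member =", is_member, "| total > 100 =", total_over_100,
          "| discount =", result)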
Defect Tracking Tools: Tools for documenting defects as they are found during testing and for tracking their status through to resolution.
Desk Check: A verification technique conducted by the author of the artifact to verify the completeness of their own work. This technique does not involve anyone else.
Dynamic Analysis: Analysis performed by executing the program code. Dynamic analysis executes or simulates a development phase product, and it detects errors by analyzing the response of the product to sets of input data.
Entrance Criteria: Required conditions and standards for work product quality that must be present or met for entry into the next stage of the software development process.
Equivalence Partitioning: A test technique that utilizes a subset of data that is representative of a larger class. This is done in place of undertaking exhaustive testing of each value of the larger class of data (see the sketch following these entries).
Error or Defect: 1. A discrepancy between a computed, observed or measured value or condition and the true, specified or theoretically correct value or condition. 2. Human action that results in software containing a fault (e.g., omission or misinterpretation of user requirements in a software specification, or incorrect translation or omission of a requirement in the design specification).
Error Guessing: A test data selection technique for picking values that seem likely to cause defects. This technique is based upon the theory that test cases and test data can be developed based on the intuition and experience of the tester.
Exhaustive Testing: Executing the program through all possible combinations of values for program variables.
Exit Criteria: Standards for work product quality which block the promotion of incomplete or defective work products to subsequent stages of the software development process.
Flowchart: A pictorial representation of data flow and computer logic. It is frequently easier to understand and assess the structure and logic of an application system by developing a flowchart than by attempting to understand narrative descriptions or verbal explanations. Flowcharts for systems are normally developed manually, while flowcharts of programs can be generated.
Force Field Analysis: A group technique used to identify both driving and restraining forces that influence a current situation.
Formal Analysis: A technique that uses rigorous mathematical methods to analyze the algorithms of a solution for numerical properties, efficiency, and correctness.
Functional Testing: Testing that ensures all functional requirements are met without regard to the final program structure.
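A minimal sketch of equivalence partitioning, assuming a hypothetical age field that accepts values 18 to 60; one representative value stands in for each partition instead of testing every possible age:

# Hypothetical rule: an age field is valid for 18..60.
def is_valid_age(age):
    return 18 <= age <= 60

# Three equivalence classes: below range, in range, above range.
# One representative per class replaces exhaustive testing of the class.
representatives = {
    "below range (invalid)": 10,
    "in range (valid)": 35,
    "above range (invalid)": 75,
}

for partition, value in representatives.items():
    print(partition, "-> age =", value, "-> valid =", is_valid_age(value))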
Histogram: A graphical description of individually measured values in a data set, organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set, along with information regarding the average and variation.
Inspection: A formal assessment of a work product conducted by one or more qualified independent reviewers to detect defects, violations of development standards, and other problems. Inspections involve authors only when specific questions concerning deliverables exist. An inspection identifies defects, but does not attempt to correct them. Authors take corrective actions and arrange follow-up reviews as needed.
Integration Testing: This test begins after two or more programs or application components have been successfully unit tested. It is conducted by the development team to validate the interaction or communication/flow of information between the individual components which will be integrated.
Life Cycle Testing: The process of verifying the consistency, completeness, and correctness of software at each stage of the development life cycle.
Pass/Fail Criteria: Decision rules used to determine whether a software item or feature passes or fails a test.
Path Testing: A test method satisfying the coverage criterion that each logical path through the program be tested. Often, paths through the program are grouped into a finite set of classes, and one path from each class is tested.
Performance Test: Validates that both the online response times and batch run times meet the defined performance requirements.
Policy: Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).
Population Analysis: Analyzes production data to identify, independently of the specifications, the types and frequency of data that the system will have to process/produce. This verifies that the specs can handle the types and frequency of actual data, and it can be used to create validation tests.
Procedure: The step-by-step method followed to ensure that standards are met.
Process: 1. The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures. 2. A statement of purpose and an essential set of practices (activities) that address that purpose.
Proof of Correctness: The use of mathematical logic techniques to show that a relationship between program variables assumed true at program entry implies that another relationship between program variables holds at program exit.
Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meeting requirements. From a customer's perspective, quality means 'fit for use.'
Quality Assurance (QA): Deals with 'prevention' of defects in the product being developed. It is associated with a process. The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved to produce products that meet specifications and are fit for use.
Quality Control (QC): Its focus is defect detection and removal. Testing is a quality control activity.
Quality Improvement: To change a production process so that the rate at which defective products (defects) are produced is reduced. Some process changes may require the product to be changed.
Recovery Test: Evaluates the contingency features built into the application for handling interruptions and for returning to specific points in the application processing cycle, including checkpoints, backups, restores, and restarts. This test also assures that disaster recovery is possible.
Regression Testing: Testing of a previously verified program or application, following program modification for extension or correction, to ensure no new defects have been introduced.
Risk Matrix: Shows the controls within application systems used to reduce the identified risk, and in what segment of the application those risks exist. One dimension of the matrix is the risk, the second dimension is the segment of the application system, and within the matrix at the intersections are the controls. For example, if a risk is 'incorrect input' and the system segment is 'data entry,' then the intersection within the matrix would show the controls designed to reduce the risk of incorrect input during the data entry segment of the application system.
Scatter Plot Diagram: A graph designed to show whether there is a relationship between two changing variables.
Standards: The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.
Statement of Requirements: The exhaustive list of requirements that define a product.
Statement Testing: A test method that executes each statement in a program at least once during program testing.
Static Analysis: Analysis of a program that is performed without executing the program. It may be applied to the requirements, design, or code.
Stress Testing: This test subjects a system, or components of a system, to varying environmental conditions that defy normal expectations; for example, high transaction volume, large database size, or restart/recovery circumstances. The intention of stress testing is to identify constraints and to ensure that there are no performance problems.
Structural Testing: A testing method in which the test data is derived solely from the program structure.
Stub: Special code segments that, when invoked by a code segment under test, simulate the behavior of designed and specified modules not yet constructed.
System Test: During this event, the entire system is tested to verify that all functional, information, structural and quality requirements have been met.
Test Case: Test cases document the input, expected results, and execution conditions of a given test item.
Test Plan: A document describing the intended scope, approach, resources, and schedule of testing activities. It identifies test items, the features to be tested, the testing tasks, the personnel performing each task, and any risks requiring contingency planning.
Test Scripts: A tool that specifies an order of actions that should be performed during a test session. The script also contains expected results. Test scripts may be manually prepared using paper forms, or may be automated using capture/playback tools or other kinds of automated scripting tools.
Test Suite Manager: A tool that allows testers to organize test scripts by function or other grouping.
Unit Test: Testing individual programs, modules, or components to demonstrate that the work package executes per specification, and to validate the design and technical quality of the application. The focus is on ensuring that the detailed logic within the component is accurate and reliable according to predetermined specifications. Testing stubs or drivers may be used to simulate the behavior of interfacing modules (a short sketch follows this group of entries).
Usability Test: The purpose of this event is to review the application user interface and other human factors of the application with the people who will be using the application. This is to ensure that the design (layout and sequence, etc.) enables the business functions to be executed as easily and intuitively as possible. This review includes assuring that the user interface adheres to documented User Interface standards, and should be conducted early in the design stage of development. Ideally, an application prototype is used to walk the client group through various business scenarios, although paper copies of screens, windows, menus, and reports can be used.
User Acceptance Test: User Acceptance Testing (UAT) is conducted to ensure that the system meets the needs of the organization and the end user/customer. It validates that the system will work as intended by the user in the real world, and is based on real-world business scenarios, not system requirements. Essentially, this test validates that the right system was built.
Validation: Determination of the correctness of the final program or software produced from a development project with respect to the user needs and requirements.
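To make the Stub and Unit Test entries concrete, here is a minimal Python sketch; the payment gateway scenario and all names in it are invented for illustration:

import unittest

# Hypothetical module under test: order logic that depends on a payment
# gateway which, in this scenario, has not been built yet.
def place_order(amount, gateway):
    if gateway.charge(amount):
        return "confirmed"
    return "declined"

# The stub stands in for the unbuilt payment gateway and returns a
# canned answer, so the order logic can be tested in isolation.
class PaymentGatewayStub:
    def __init__(self, will_succeed):
        self.will_succeed = will_succeed

    def charge(self, amount):
        return self.will_succeed

class PlaceOrderUnitTest(unittest.TestCase):
    def test_order_confirmed_when_charge_succeeds(self):
        self.assertEqual(place_order(10.0, PaymentGatewayStub(True)), "confirmed")

    def test_order_declined_when_charge_fails(self):
        self.assertEqual(place_order(10.0, PaymentGatewayStub(False)), "declined")

if __name__ == "__main__":
    unittest.main()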
Verification: 1. The process of determining whether the products of a given phase of the software development cycle fulfill the requirements established during the previous phase. 2. The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements.
Walkthroughs: During a walkthrough, the producer of a product 'walks through' or paraphrases the product's content, while a team of other individuals follows along. The team's job is to ask questions and raise issues about the product that may lead to defect identification.
White-box Testing: A testing technique that assumes that the path of the logic in a program unit or component is known. White-box testing usually consists of testing paths, branch by branch, to produce predictable results. This technique is usually used during tests executed by the development team, such as Unit or Component testing.
Can't we define a quality product as one that contains no bugs/defects?
Quality is much more than the absence of defects/bugs. Consider this: even though a product may have zero defects, if its usability is poor, i.e. it is difficult to learn and operate, then it is not a quality product.

If the product has some defects, can it still be called a quality product?
It depends on the nature of those bugs. In some cases, even though a product has bugs, it can still be called a quality product. Unless the product is very critical, aiming for zero defects is not always cost effective. We should aim for 100% defect 'detection', but given budget, time and resource constraints, we may still release the product with some unfixed or open bugs. If the open bugs cause no loss to the customer, then it can still be called a quality product.

Is quality only the tester's responsibility?
No. Quality is everybody's responsibility, including the customer's. We testers identify the deviations and report them; that's it. There are many factors that impact quality, such as maintainability, reusability, flexibility and portability, which testers can't validate. Testers can only validate the correctness, reliability, usability and interoperability of a product and report the deviations.

When is the right time to catch a bug?
As soon as possible. The cost of fixing a bug keeps increasing exponentially as product development progresses. For example, the cost of fixing a design bug identified in system testing is much higher than fixing it had it been identified during the design phase itself, because now you not only have to rectify the design but also the code, the corresponding documents, and the code that depends on it.

Are there any other quality control practices apart from testing?
Yes: inspections, design and code walkthroughs, reviews, etc.

What are software quality factors?
Software quality factors are attributes of the software that, if they are wanted and not present, pose a risk to the success of the software. There are 11 main factors, and their definitions are given below. The priority and importance of these attributes changes from product to product; for example, if the product being developed needs to be changed quite frequently, then the flexibility and reusability of the product need to be given priority. The following are the quality factors:
Correctness: Extent to which a program satisfies its requirements.
Reliability: Extent to which a program can be expected to perform its intended function with required precision.
Efficiency: The amount of computing resources and code required by a program to perform a function.
Integrity: Extent to which access to software or data by unauthorized persons can be controlled.
Usability: Effort required to learn, operate, prepare input for, and interpret output of a program.
Maintainability: Effort required to locate and fix an error in an operational program.
Testability: Effort required to test a program to ensure that it performs its intended function.
Flexibility: Effort required to modify an operational program.
Portability: Effort required to transfer software from one configuration to another.
Reusability: Extent to which a program can be used in other applications; related to the packaging and scope of the functions that programs perform.
Interoperability: Effort required to couple one system with another.

How to reduce the amount spent to ensure and build quality? Or, how to reduce the cost of quality?
The cost of quality includes the total amount spent on preventing errors and on identifying and correcting errors. To reduce this cost, try to build a product that has few or no defects even before it goes to the testing phase; to achieve this, you should spend more money and effort on trying to prevent errors from entering the product. You must concentrate on building efficient and effective processes and keep improving them continuously by identifying their weaknesses. You may not reap great benefits immediately, but over the long run you can make significant savings by reducing the cost of quality.

How to reduce the cost of fixing a bug?
Catch it as early as possible. As the development process progresses, the cost of fixing a bug keeps increasing exponentially. Practice life cycle testing.
Coding Phase:
Verify that the design is correctly translated to code.
Verify that coding is as per the company's standards and policies.
Verification Techniques: code walkthroughs, code inspections.
Validation Techniques: unit testing and integration testing.

System Testing Phase:
Execute test cases.
Log bugs and track them to closure.
User Acceptance Phase:
Users validate the applicability and usability of the software in performing their day-to-day operations.

Maintenance Phase:
After the software is implemented, any changes to the software must be thoroughly tested, and care should be taken not to introduce regression issues.
Life cycle testing is also called V testing. The project's Do and Check procedures slowly converge from start to finish (see the figure above), which indicates that as the Do team attempts to implement a solution, the Check team concurrently develops a process to minimize or eliminate the risk. If the two groups work closely together, the high level of risk at a project's inception will decrease to an acceptable level by the project's conclusion.
Unit testing - A unit is the smallest compilable component. A unit typically is the work of one programmer. The unit is tested in isolation with the help of stubs or drivers, typically by the programmer and not by testers.
Incremental integration testing - Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Integration testing - Testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing - Black-box testing aimed at validating the functional requirements of an application; this type of testing should be done by testers.
System testing - Black-box testing that is based on the overall requirements specifications; covers all combined parts of a system.
End-to-end testing - Similar to system testing, but involves testing of the application in an environment that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. Even the transactions performed mimic the end users' usage of the application.
Sanity testing - Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Smoke testing - The general definition (related to hardware) of smoke testing is: a safe, harmless procedure of blowing smoke into parts of the sewer and drain lines to detect sources of unwanted leaks and sources of sewer odors. In relation to software, the definition is: smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with the finer details.
Static testing - Test activities that are performed without running the software are called static testing. Static testing includes code inspections, walkthroughs, and desk checks.
Dynamic testing - Test activities that involve running the software are called dynamic testing.
Regression testing - Testing of a previously verified program or application, following program modification for extension or correction, to ensure no new defects have been introduced. Automated testing tools can be especially useful for this type of testing.
Acceptance testing - Final testing based on the specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
Load testing - A test whose objective is to determine the maximum sustainable load the system can handle. Load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay (a minimal sketch follows this list).
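As a minimal illustration of load testing, the Python sketch below ramps up the number of concurrent users against a hypothetical process_transaction function. The function and timings are invented; real load tests would use a dedicated tool, but the ramp-and-measure principle is the same:

import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical unit of work standing in for a real transaction.
def process_transaction(txn_id):
    time.sleep(0.01)  # simulate work

# Ramp the number of concurrent users up and time each batch; in a real
# load test you would stop ramping once response times exceed the limit.
for concurrent_users in (1, 5, 10, 20):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        list(pool.map(process_transaction, range(concurrent_users * 10)))
    elapsed = time.perf_counter() - start
    print(concurrent_users, "users -> batch completed in", round(elapsed, 2), "s")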
Stress testing - Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, MIPS, interrupts) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.
Alpha testing - Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by users within the development team.
Beta testing - Testing when development and testing are essentially completed and final bugs and problems need to be found before the final release. Typically done by end-users or others, not by programmers or testers.
Mutation testing - A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
Cross-browser testing - The application is tested with different browsers, for usability and compatibility testing.
Concurrent testing - Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. It identifies and measures the level of locking, deadlocking and use of single-threaded code, locking semaphores, etc.
Negative testing - Testing the application for fail conditions; negative testing is testing the application with improper inputs, for example entering special characters in a phone number field (a small sketch follows).
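A small sketch of negative testing, assuming a hypothetical phone-number validator that accepts exactly ten digits; each test feeds improper input and expects rejection rather than a crash:

import re
import unittest

# Hypothetical validator for the example: accepts exactly ten digits.
def is_valid_phone(value):
    return bool(re.fullmatch(r"\d{10}", value))

class PhoneNumberNegativeTests(unittest.TestCase):
    # Each case is an improper input that the application must reject.
    def test_rejects_improper_inputs(self):
        for bad_input in ["", "abc", "123-456", "!@#$%^&*()", "12345678901"]:
            with self.subTest(bad_input=bad_input):
                self.assertFalse(is_valid_phone(bad_input))

if __name__ == "__main__":
    unittest.main()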
Approach 1: This approach requires that you have a fixed number of test cases ready before the test execution cycle. In each testing cycle you execute all test cases. You stop testing when all the test cases pass, or when the failure percentage is very low in the latest testing cycle.

Approach 2: Make use of the following metrics (a worked example follows).
Mean Time Between Failures: The average operational time it takes before a software system fails.
Coverage metrics: The percentage of instructions or paths executed during tests.
Defect density: Defects related to the size of the software, such as 'defects/1000 lines of code'.
Open bugs and their severity levels.
If the coverage of code is good, the mean time between failures is quite large, the defect density is very low and not many high-severity bugs are still open, then 'maybe' you should stop testing. 'Good', 'large', 'low' and 'high' are subjective terms and depend on the product being tested. Finally, the risk associated with moving the application into production, as well as the risk of not moving forward, must be taken into consideration.
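As a small worked example of these metrics, the sketch below computes defect density and mean time between failures from made-up figures (all numbers are invented for illustration):

# Invented figures for the example.
lines_of_code = 48_000
defects_found = 60
operational_hours = 500.0
failures_observed = 4

# Defect density expressed per 1000 lines of code (KLOC).
defect_density = defects_found / (lines_of_code / 1000)

# Mean Time Between Failures: operational time divided by failure count.
mtbf_hours = operational_hours / failures_observed

print("Defect density:", round(defect_density, 2), "defects/KLOC")
print("MTBF:", round(mtbf_hours, 1), "hours")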
CHAPTER =
Testing Strategy: In this section, specify the different testing types used to test the product. Tools needed to execute the strategy are also specified.
Testing Schedule: In this section, specify first the entire project schedule and then the detailed testing schedule.
Resources: In this section, specify all the resources needed to execute the plan successfully.
Communication Approach: In this section, specify how the testing team will report bugs to the development team, how it will report testing progress to management, and how it will report issues and concerns to higher-ups.
outline document is 'Not Approved', then either the scenarios mentioned are not sufficient or the scenarios are in very bad shape (not in a state to be reviewed), etc.
Document References: Any additional documents that will help in better understanding the test outline document, like design documents or the requirements document.
Projects Covered in Test Outline: Projects can be features of the product, or modules, which are covered in the test outline document.
Traceability Matrix: This matrix is filled in after all scenarios in the outline have been written. This is to ensure that all requirements or features are sufficiently covered by the test cases and none are missing. So you map the requirement, or feature and subfeature, to the test case that will cover it (illustrated in the sketch following this section). The following IDs uniquely identify the requirements, or feature and subfeature; you can add your own IDs based on need.
REQ_ID = Requirement ID from the SRS document
DD_ID = Detailed Design ID from the Detailed Design document
Setup Requirements: Any setup that has to be done in the application being tested, prior to executing this test case, should be mentioned here. For example, if the test case needs certain login IDs with certain settings to begin, which are not created as part of the test case, then such things need to be mentioned in this section.
Test Objectives: Specify, at a very high level, what the test case is intended to achieve or verify.
Test Case Limitations: Does the test case achieve the above-mentioned test objective completely, or are there any exceptions? These exceptions need to be specified in this section. For example, if the test case has to verify 'something' on type A, type B and type P, but for some reason it could NOT verify that 'something' on type P, then that is a limitation.
Test Case Dependencies / Assumptions: Prior to executing this test case, do any other test cases need to be run? All those dependencies need to be mentioned here.
Process Flow: In this section, we specify at a high level what the flow of the test case is. Suppose there are multiple users in the test case; then a process flow can look like:
user1: does something
user2: does something else
user1: does again something
user2: says good bye
Test Outline Table column - 'User': Who has to perform the action. Suppose in an application there are two roles, 'Buyer' and 'Supplier'; then the user can be one of those role names.
Test Outline Table column - 'Action': Under Action you specify the following:
Flow Name - A high-level name given to the action performed by the user. Suppose the Buyer has to create certain purchase orders in the application; then the flow name can be 'Create Purchase Orders'.
Description - The following things should be mentioned here at a high level: a description of what actions should be performed, the type or characteristics of the data to be used, and what should be verified or checked after performing the action.
Effort Estimates: In this section you specify the effort needed to write each test case and the effort needed to execute them.
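To illustrate the traceability matrix idea in code, here is a minimal Python sketch; the REQ_IDs and test case IDs are invented, and the check flags any requirement left uncovered:

# Invented requirement IDs (from a hypothetical SRS) mapped to the
# test cases that cover them; an empty list means a coverage gap.
traceability = {
    "REQ_001": ["TC_101", "TC_102"],
    "REQ_002": ["TC_103"],
    "REQ_003": [],  # not yet covered by any test case
}

for req_id, test_cases in traceability.items():
    if test_cases:
        print(req_id, "covered by:", ", ".join(test_cases))
    else:
        print(req_id, "has NO covering test case -- coverage gap!")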
Test Case: The actual test case begins in section 5, which can be further divided into subsections as convenient and needed. For example, if the test case is for an integrated application, then every time we log in to a new application, we can have a new subsection. The following is an example of how a test case looks:
Step Num: 1
Step Description: check login
Path and Action: Enter user name, enter pwd, click Login
Test Data: abcd, abcd
Expected Results: Verify an error message is thrown that the username and password entered are wrong
Appendix: This section contains any additional data that the test case refers to. For example, if your test case has large amounts of 'Test Data' which are difficult to put under the column 'Test Data' for each step, then you can use the appendix section to hold the data, and in the test case you can give a reference to the appendix.
Test Case Review Template: This template can be used by the reviewers to provide their review comments. They can classify the comments based on their severity. The Test Engineer who incorporates the comments in the test case should specify the action taken in the template and then 'Close' the comment.
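The same login check could be automated. Below is a minimal sketch, assuming a hypothetical login(username, password) function; the credentials and messages are invented for illustration:

import unittest

# Hypothetical application code for the example: only one account exists.
def login(username, password):
    if username == "admin" and password == "secret":
        return "welcome"
    return "username and password entered are wrong"

class LoginTestCase(unittest.TestCase):
    # Step 1: check login -- enter user name, enter pwd, click Login.
    # Test data 'abcd'/'abcd' is invalid, so an error message is expected.
    def test_invalid_credentials_show_error(self):
        self.assertEqual(login("abcd", "abcd"),
                         "username and password entered are wrong")

if __name__ == "__main__":
    unittest.main()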
understand the bug and fix it.
List of Bug Statuses:
Lifecycle of some types of bugs:
Analysis of Bugs: Bugs logged during a testing phase are an invaluable source for improving the existing testing processes.
Description of problem cause:
Description of fix:
Code section/file/module/class/method that was fixed:
Date of fix:
Version of the file that contains the fix:
Analysis of Bugs
Bugs logged during a testing phase are an invaluable source for improving the existing testing processes. The holy grail for any testing team is zero customer bugs. Once a product is released, the majority of the customer bugs come within 6 months to 1 year of product usage. But immediately after testing of the product is over, the following can be done:
- The testing team should analyze all the invalid/duplicate/could-not-be-reproduced bugs and come up with measures to reduce their count in future testing efforts.
Once customer bugs start pouring in, the following can be done:
- The testing team should analyze each and every customer bug, find out why it was missed in the testing effort, and take appropriate measures.