FrugalTesting
1. Consent to bond:
Yes, I have received the stipend and CTC details. The stipend during the internship
period is 10K per month for the first 3 months and 15K per month for the following 9 months,
and the CTC post-internship is 8,00,000 per annum.
3. Willingness to relocate:
My motivation in software testing comes from my interest in ensuring product quality and
finding ways to improve the user experience in terms of usability. I find this work
interesting because it gives one satisfaction from the real problems uncovered and helps
guarantee that the final product is as reliable and efficient as possible. I believe there is
always a need to test, so that customers receive high-quality software that meets clients'
expectations.
It is exciting for me to be a part of Frugal Testing considering its strong standing in the
industry and its innovation in software quality assurance. I believe that the company's
commitment to high-quality testing solutions aligns well with my passion for flawless
products. What's more, I believe there is much to be learnt from your team; that is why I am
excited to be a part of Frugal Testing.
SECTION A
Task 2: OrangeHRM Navigation:
https://drive.google.com/file/d/1Fw1C5NDE86_5spB0cA6j7iJkDorQo7bB/view?usp=drivesdk
SECTION B
3.
3.1
In the case study, the key challenge was ensuring the accuracy and reliability of health
data collected from IoT devices like wristbands and blood pressure monitors, along with
overcoming device compatibility and integration issues. Our team addressed this by
creating detailed test scenarios and setting up a robust environment with over 170 test
cases, simulating real-world conditions using both physical devices and virtual
environments. We conducted extensive functional and compatibility testing to ensure
the application worked smoothly across different systems, validating data accuracy and
seamless device interactions. This approach not only improved the user experience but
also boosted the client’s reputation by ensuring reliable health monitoring and accurate
real-time alerts. This method could be applied to other IoT systems, like smart home
automation, by focusing on device interoperability, real-time responsiveness, and
smooth data integration.
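As an illustration of the kind of data-accuracy check this involved, here is a minimal Python/pytest sketch of validating simulated wristband readings against expected physiological ranges. The simulate_reading() helper and the threshold values are assumptions made up for illustration, not part of the actual project's test suite.

    # Hypothetical sketch: checking simulated wearable readings with pytest.
    import random
    import pytest

    def simulate_reading():
        # Stand-in for a reading arriving from a physical or virtual device.
        diastolic = random.randint(60, 90)
        return {
            "heart_rate": random.randint(50, 170),
            "diastolic": diastolic,
            "systolic": diastolic + random.randint(20, 60),
        }

    @pytest.mark.parametrize("run", range(20))
    def test_reading_within_physiological_range(run):
        reading = simulate_reading()
        assert 30 <= reading["heart_rate"] <= 220          # plausible heart-rate band
        assert reading["systolic"] > reading["diastolic"]  # basic consistency check
        assert reading["systolic"] <= 200                  # flag implausible spikes

In a real suite, the simulated source would be replaced by actual device or emulator output, with the same assertions applied.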
3.2
The article highlights the importance of software testing in ensuring that applications
meet user expectations in terms of functionality, reliability, and performance. Here are
the key takeaways:
2. Common Testing Types: From unit testing to integration, system, and acceptance
testing, each type addresses different aspects of the software lifecycle. These ensure
thorough validation of an application's functionality and robustness (a small sketch at the
end of this answer illustrates the unit/integration distinction).
3. Strategies for Entry-Level Testers: New testers are encouraged to focus on developing
problem-solving skills, learning debugging methods, and understanding test cases.
Practical knowledge in automation and manual testing is essential to progress in this
field.
Real-World Application:
In real-world projects, these insights can significantly enhance software quality. For
example:
- Collaboration between testers and developers in agile teams can speed up the
identification and resolution of bugs.
- Advanced techniques like test automation and continuous testing improve efficiency,
reduce manual errors, and ensure frequent releases.
- Dedicated tools can streamline testing workflows, allowing for better test case
management, load testing, and regression testing, especially in large-scale
applications.
By integrating these practices, projects can achieve faster delivery cycles with fewer
defects, ultimately leading to a more robust and user-friendly product.
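To make the distinction between test levels concrete, here is a small, purely illustrative Python sketch; the apply_discount function and Cart class are invented for this example and do not come from the article.

    # Illustrative only: a unit test targets one function in isolation,
    # while an integration-style test exercises two components together.
    def apply_discount(total, percent):
        return round(total * (1 - percent / 100), 2)

    class Cart:
        def __init__(self):
            self.items = []
        def add(self, price):
            self.items.append(price)
        def total(self):
            return sum(self.items)

    def test_apply_discount_unit():
        assert apply_discount(200, 10) == 180.0

    def test_cart_with_discount_integration():
        # Cart and apply_discount exercised together
        cart = Cart()
        cart.add(100)
        cart.add(100)
        assert apply_discount(cart.total(), 10) == 180.0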
4.
Another key takeaway was the importance of clear communication with stakeholders,
especially on sensitive issues like user data and compliance. Working closely with legal
teams and other developers ensured that our technical solution adhered to regulations
without compromising functionality.
Developing Problem-Solving Skills as an SDET:
As an SDET (Software Development Engineer in Test), this experience sharpened my
problem-solving skills across multiple areas. I applied critical thinking and testing
strategies to ensure that software features were implemented securely and met privacy
standards. Automated testing and real-world scenario simulations helped identify
potential edge cases early in development. This project taught me to design secure,
scalable solutions focused on performance, reliability, and user experience while
adhering to legal requirements.
5.
When an unexpected bug or issue arises during a project, I follow a systematic approach
to identify and resolve it:
Reproducing the Problem: The first step is to replicate the issue in a controlled
environment to confirm its existence and identify the specific conditions under which it
occurs.
Log Analysis: I then examine the application logs, which often provide valuable
information on errors or irregularities that occurred leading up to the bug. This helps
pinpoint the root cause.
Debugging: Using debugging tools, I go through the code line by line, inspecting
variables and the application’s state. This helps isolate the problematic part of the code
and gain a clearer understanding of the issue.
Root Cause Analysis (RCA): I perform an RCA to fully understand why the bug happened,
ensuring that it won’t reappear by addressing the underlying cause.
Automated Testing: I introduce unit, integration, and regression tests that specifically
target the areas of code related to the bug (a short sketch of such a test follows this
answer). This ensures that future changes won't inadvertently reintroduce the issue.
Code Review: Regular code reviews help catch potential bugs before they occur. Peer
review promotes quality assurance and helps identify vulnerabilities early in the
development process.
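As an example of the Automated Testing step above, a regression test pinned to the fixed behaviour might look like the minimal Python sketch below; the parse_age function and the original empty-input bug are hypothetical.

    # Hypothetical regression test: suppose empty input used to crash parse_age.
    # After the fix, a test is added so the defect cannot silently return.
    def parse_age(value):
        if value is None or str(value).strip() == "":
            return None              # the fix: handle empty input gracefully
        return int(value)

    def test_parse_age_regression_empty_input():
        assert parse_age("") is None     # raised ValueError before the fix

    def test_parse_age_normal_path_still_works():
        assert parse_age("42") == 42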
6.
6.1 In the project using the machine learning library "Prismia" for detecting image
contents, I applied several important skills that contributed to its success. My
knowledge of machine learning, especially in image recognition, was crucial in building
and training models to identify image contents accurately. I also leveraged my
experience with image-processing tools, such as OpenCV and PIL, to preprocess images,
improving the model's accuracy. Strong Python programming skills allowed me to
implement the machine learning models using libraries like TensorFlow, Keras, and
PyTorch, while efficiently handling data pipelines. My ability to manage large datasets,
clean and label images, and my experience in model training and evaluation helped
fine-tune the performance of the system. Additionally, I contributed to solving technical
challenges, such as underfitting and overfitting, by adjusting the model architecture and
applying techniques like hyperparameter tuning and data augmentation. I also worked on
integrating the model into the application, ensuring a smooth integration and deploying
it for real-world use. Overall, my skills in machine learning, programming, data handling,
and problem-solving were key to the project's success.
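As a rough illustration of the model-building and augmentation work described above, here is a minimal Keras sketch; the image size, class count, and architecture are placeholder assumptions, not the actual Prismia-based pipeline.

    # Minimal sketch (assumed 64x64 RGB images, 5 content classes) showing
    # data augmentation plus a small CNN; not the project's real architecture.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 3)),
        tf.keras.layers.RandomFlip("horizontal"),   # augmentation layers help
        tf.keras.layers.RandomRotation(0.1),        # reduce overfitting
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed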
6.2 During my project using the machine learning library "Prismia" to detect image
contents, I encountered several obstacles. One major challenge was handling
data-related issues, such as sourcing a sufficiently large and diverse dataset and dealing
with imbalanced data, which impacted the accuracy of my model. Additionally, finding
the right model architecture to avoid overfitting or underfitting proved tricky, as
balancing accuracy and generalization is often difficult in machine learning projects. I
also faced limitations in computational resources, making it hard to train complex
models or process large datasets efficiently. Hyperparameter tuning required careful
experimentation to optimize the model's performance, which was quite time-consuming.
When it came to integrating the model into the application, I had to ensure smooth
communication between the machine learning component and the frontend, which
presented its own set of technical difficulties. Real-time performance was another issue,
as I needed to balance speed and accuracy while working with large image datasets.
Finally, deploying the model for real-world use and ensuring it could scale effectively
posed challenges, especially since I was still getting familiar with cloud deployment
tools and services.
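One common way to tackle the imbalanced-data issue mentioned above is to weight the classes during training; the snippet below is a generic scikit-learn sketch with made-up labels, not the project's actual data.

    # Generic sketch: computing class weights for an imbalanced label set.
    import numpy as np
    from sklearn.utils.class_weight import compute_class_weight

    y_train = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])   # toy, heavily imbalanced labels
    classes = np.unique(y_train)
    weights = compute_class_weight(class_weight="balanced", classes=classes, y=y_train)
    class_weight = dict(zip(classes, weights))
    # The rarer class 1 receives a larger weight (2.5 here, vs 0.625 for class 0);
    # this dict can then be passed to Keras via model.fit(..., class_weight=class_weight).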
Additionally, effective communication was a challenge due to our remote work setup,
leading to occasional misunderstandings about task assignments. To improve
collaboration, I initiated regular stand-up meetings, which enhanced accountability and
alignment among team members.
We also encountered integration issues between the machine learning model and the
application interface, causing delays. I facilitated troubleshooting sessions with both
frontend and backend developers, helping to identify and resolve discrepancies in data
formats.
Engaging with my faculty has further deepened my understanding of the various testing
methodologies and best practices. I plan to stay engaged and continuously improve in
this field by keeping up with the latest trends and tools in automation testing,
performance testing, and security testing. I intend to participate in relevant online
communities, attend webinars and conferences, and pursue certifications such as ISTQB
alongside testing frameworks like Selenium. Additionally, I will continue experimenting with
real-world projects and seek mentorship from industry professionals to refine my skills
and knowledge.
10.
LinkedIn: www.linkedin.com/in/nasir-ansari
Resume: https://drive.google.com/file/d/1t8iDdgBwrIA66XAVIAFPjP9tuDn3Xcdo/view?usp=drivesdk
Project: https://github.com/nxsir/crypto-portfolio-tracker
11.
12. How do you use ChatGPT to solve problems in your daily work?
I use ChatGPT as a resource to quickly clarify concepts, debug code, and explore
different problem-solving approaches. For example, when I encounter issues while
writing code or testing software, I can ask specific questions related to the error or the
logic and get suggestions on how to resolve it. ChatGPT also helps me generate test
case scenarios or understand complex documentation, which saves time and boosts
productivity. Additionally, it assists in exploring alternative ways to structure algorithms
or database queries, streamlining the development process.
13. What are the most effective ways to ensure quality in a software project?
Ensuring quality in a software project involves several best practices:
1. Comprehensive Testing: Incorporating unit, integration, system, and acceptance tests
to cover all aspects of the software.
2. Automated Testing: Using automated tests to run regression tests quickly and
consistently across new builds.
3. Code Reviews: Having peer reviews to catch errors and improve code quality before
merging.
4. Continuous Integration/Continuous Delivery (CI/CD): Automating the build, test, and
deployment processes to ensure that each code change is validated and safely
delivered.
5. Test-Driven Development (TDD): Writing tests before the actual code to ensure that the
code meets its requirements from the outset (a small sketch follows this list).
6. User Acceptance Testing (UAT): Engaging end-users to verify the software in real-world
scenarios before the final release.
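To illustrate point 5, here is a minimal test-first sketch in Python; the slugify function is an invented example rather than code from any particular project.

    # TDD in miniature: the test is written first (and fails), then just enough
    # code is written to make it pass. slugify() is a hypothetical example.
    def test_slugify_replaces_spaces_and_lowercases():
        assert slugify("Frugal Testing Rocks") == "frugal-testing-rocks"

    def slugify(text):
        # Implementation added only after the failing test defined the requirement.
        return "-".join(text.lower().split())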
15. What does exploratory testing mean, and how would you explain it using a daily life
example?
Exploratory testing is a hands-on, unscripted testing approach where the tester actively
explores the application, identifying bugs or issues without predefined test cases. It's
about using creativity, intuition, and experience to find hidden issues that structured
testing might miss.
A daily life example: Imagine you’ve just bought a new phone, and instead of reading the
manual, you start exploring its features. You open different apps, press random buttons,
change settings, and test out functions like the camera, calling, and messaging to see if
everything works as expected. This process of "exploring" the phone, discovering any
potential bugs or quirks on your own, is similar to exploratory testing in software.
16. What is the purpose of a bug report, and how would you explain it to someone who’s
not in tech?
The purpose of a bug report is to document an issue or error in a software application so
that developers can understand and fix it. Think of it like writing a detailed note when
something in your house breaks, for example, a light bulb. You'd explain what went
wrong, where the issue occurred, and any clues that might help fix it, like "the light
flickers when I switch it on." Similarly, a bug report provides all the information needed
to diagnose and resolve a problem in the software.
17. How can ChatGPT or similar AI tools enhance test case generation, bug reporting, or
test result analysis?
ChatGPT can enhance these areas by automating and accelerating tasks:
Test Case Generation: AI can help by generating test cases based on application
requirements or user inputs, suggesting edge cases that testers might overlook (a small
prompt sketch follows this list).
Bug Reporting: AI tools can assist in drafting bug reports by analyzing error logs,
summarizing issues, and ensuring all critical information is included.
Test Result Analysis: ChatGPT can help analyze test outcomes, detect patterns in test
failures, and offer suggestions for improvements based on historical data or best
practices in the field.
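As one concrete, purely illustrative way to apply the first point, a tester might assemble a structured prompt from a requirement and hand it to an AI assistant; the requirement text and prompt format below are assumptions, and no particular API is implied.

    # Illustrative prompt construction for AI-assisted test case generation.
    # The requirement is made up; the prompt could be pasted into ChatGPT or
    # sent through whatever client the team already uses.
    requirement = ("Users must be able to reset their password via an emailed "
                   "link that expires in 30 minutes.")

    prompt = (
        "You are a QA engineer. Generate test cases for the requirement below.\n"
        "For each case give: title, preconditions, steps, expected result, "
        "and whether it is a positive, negative, or edge case.\n\n"
        f"Requirement: {requirement}"
    )
    print(prompt)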
18. How do you decide which test cases to automate and which to leave manual?
Test cases that are repetitive, time-consuming, and need to run across multiple
environments are ideal for automation. Automation is also suitable for regression tests,
performance tests, and tests that need to be run frequently. Manual testing, on the other
hand, is preferred for exploratory tests, complex test cases that require human intuition,
or one-time test cases where automation would be inefficient or costly.
19. How would you explain the importance of regression testing to a non-technical
person?
Regression testing ensures that when something new is added to or changed in an
application, it doesn’t break existing features. Think of it like renovating a room in your
house—after the renovation, you want to check that the plumbing, wiring, and structure
of the entire house are still working fine. Regression testing is that careful check to make
sure everything still works as it should after changes.
When it comes to improving the quality of final products, most fields are integrating
Artificial Intelligence, and software quality assurance testing is no exception. New
AI-driven tools in the field of testing have created additional opportunities for automation,
effectiveness, and accuracy. Nevertheless, to make the most of the opportunities
presented by AI in testing, certain challenges must be met. The following article analyses
the evolution of software testing with the use of AI and the potential problems specialists
face when adopting these technologies.
For example, Testim and Applitools are tools that can create tests automatically based on
users' interactions with the application, using machine learning to shorten the effort
needed to write and maintain automation scripts.
There is always a chance of human error in manual testing. AI technology addresses that
problem by making it possible to run the same tests consistently every time. For
example, AI is capable of anticipating visual bugs or pixel mismatches, which are
prevalent in visually focused systems such as web and mobile applications.
Example: Visual AI technologies can accurately capture every visual revision made in
the app, identifying any problems that may go unnoticed by human testers.
Example: SonarQube is a free and open-source platform that performs static code
analysis to suggest code improvements and surface security issues, flagging potential
bugs before they reach production.
More often than not, automated test scripts fail because of changes made to the user
interface or its flows. Here, AI enables self-healing test automation: test cases adapt
dynamically as the application changes, which reduces how often the scripts themselves
have to be rewritten.
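A very simplified way to picture the self-healing idea is a locator that falls back to alternative strategies when the primary one breaks; the Selenium sketch below uses invented locators and is only a toy approximation of what commercial AI-driven tools do with learned element signatures.

    # Toy approximation of "self-healing" element lookup with Selenium:
    # if the primary locator breaks after a UI change, known fallbacks are tried.
    from selenium.common.exceptions import NoSuchElementException
    from selenium.webdriver.common.by import By

    LOGIN_BUTTON_LOCATORS = [            # hypothetical locators for one element
        (By.ID, "login-btn"),
        (By.CSS_SELECTOR, "button[data-test='login']"),
        (By.XPATH, "//button[normalize-space()='Log in']"),
    ]

    def find_with_fallback(driver, locators):
        for by, value in locators:
            try:
                return driver.find_element(by, value)
            except NoSuchElementException:
                continue                 # try the next candidate locator
        raise NoSuchElementException(f"No locator matched: {locators}")

    # usage (driver assumed to be an initialised selenium.webdriver instance):
    # button = find_with_fallback(driver, LOGIN_BUTTON_LOCATORS)
    # button.click()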
With the help of such AI-driven tools, tests can be executed at a speed no manual tester
could match, because they can analyse volumes of data far beyond what a person could
review by hand. This makes it possible to cover edge cases and multiple environments
across a software product and to broaden regression coverage.
Although there is much to be gained from AI, setting up AI models for testing purposes is
a daunting task. The algorithms must be configured properly and supplied with the right
data for the model to produce useful output. If the models are structured incorrectly,
they are bound to produce false negatives or false positives, which means the testing,
however broad, will not be effective.
AI is only as intelligent as the data it is trained on. Like other machine learning systems,
if the training data provided to the AI models used for prediction is inadequate, biased,
or outdated, the predictions ultimately suffer. Hence, the data needs to be managed
effectively so that AI does not make illogical deductions.
The statement that AI will take away the work of many testers is equally mistaken.
Certainly, AI has replaced manual effort and shortened the testing lifecycle for most
routine tasks; however, complex testing, including exploratory testing and usability
testing, still depends on humans. Judgement calls arising from scenarios too complex for
even AI to expedite will continue to need human testers.
Many top firms have incorporated Artificial Intelligence into their testing processes.
For example, some implement AI-driven tooling for deep regression testing across their
vast codebases, while at Facebook AI is used to conduct visual testing of applications
to maintain usability.
In the future, as AI develops further, so will its applications in software testing. Some of
these advancements may include:
AI-Driven Test Oracles: Tools that decide on their own, based on complex criteria,
whether a test case has passed or failed, without a human making the judgement.
NLP-Based Test Generation: Natural language processing will enable automated test
generation directly from business requirements, so that writing test cases by hand from
specifications becomes a thing of the past.
Realistic Test Simulation: AI will enhance the way applications are tested under realistic
conditions, even without running full-scale load tests.
While there will be no substituting human testers, this technology will augment their
skills, allowing them to conduct tests more creatively, quickly, and efficiently. The
integration of AI with manual testing will bring more balance, efficiency, and reliability to
the software development life cycle.
Final thoughts:
AI is going to change many routines in this area by enhancing and automating test
execution, expanding test coverage, and striving for greater accuracy. However, it also
has certain drawbacks, such as high configuration costs, complexity, and bias, when it
comes to implementing AI-based tools. Nonetheless, to take full advantage of AI,
companies have to consider these issues when incorporating it, so that AI complements
rather than replaces human testers. With the right guidance and structure in place, the
tools and applications created with AI will add great value.