This is my personal blog, where you will find helpful articles related to software testing.
STAREAST 2012 - Software Testing Conference - STAREAST Tutorials - Register Early and Save!
ISTQB's Advanced Technical Test Analyst - A Technically Enlightened Way to Test Systems
- Structure the tasks defined in the test strategy in terms of technical requirements
- Analyze the internal structure of the system in sufficient detail to meet the expected quality level
- Evaluate the system in terms of technical quality attributes such as performance, security, etc.
- Prepare and execute the appropriate activities, and report on their progress
- Conduct technical testing activities
- Provide the necessary evidence to support evaluations
- Implement the necessary tools and techniques to achieve the defined goals
Performance Testing on Advanced Web Sites - On Demand Webinar
Onlife Health will discuss how they've used HP TruClient technology to develop their online portal, how it has helped them reduce their application test cycles, and how it has helped them identify and eliminate potential problems throughout the application lifecycle. Onlife Health will also describe how they've achieved faster scripting time (by at least 50%) and share real-world best practices.
During this web seminar, you'll learn how to:
- Test critical end-user-facing Web 2.0 and Ajax applications accurately and efficiently, even with beginner or less technical scripters
- Reduce hardware and software costs by predicting application scalability and capacity, and lower the cost of defects by testing earlier
- Pinpoint end-user, system-level, and code-level bottlenecks rapidly and with ease
Presenters
Ron Foster, Senior Systems Engineer, Onlife Health
Priya Kothari, Senior Product Manager, HP Software
Is testing about checking against system requirements, or is it about exploring the software?
Elisabeth shows that the following claim is not always true.
Many years ago in a hallway conversation at a conference, a test manager and I were discussing our respective approaches to testing.
"If they can't tell me what the software is supposed to do, I can't test it," Francine, the test manager, scowled. "So, I tell them that I won't start testing until they produce a detailed requirements document."
My eyebrows shot up through my hairline. At the time, I was working for a Silicon Valley software vendor that made consumer applications. If I waited for a comprehensive specification before I started testing, I'd be waiting forever. And, I'd be fired for failing to contribute meaningfully to the project. I said something to that effect, and Francine just shook her head at me. She couldn't imagine not having detailed specifications. I couldn't imagine holding the project hostage until I got documentation.
A few more useful quotes:
In the past, I was firmly on the side of using exploratory approaches. For most of my career, I worked for organizations that preferred lightweight documentation, so we didn't usually produce detailed test scripts. Even if those organizations had wanted binders full of step-by-step test cases, I agreed with James Bach that sticking to a testing script is like playing a game of Twenty Questions where you have to ask all the questions in advance.
However, my perspective on this debate has shifted in the past several years as I started working with agile teams that value testing in all forms. I have come to realize that the old discussion of whether "good testing" involves predefined, detailed test scripts or exploratory testing is like pitting salt against pepper, glue against staples, or belts against suspenders.
It is a false dilemma and a pointless debate.
Click here to read the complete article, "The two sides of testing".
Agile Test Automation - Training
Successful Automation in an Agile Environment
December 13–14, 2011
Don't miss out on this popular new course! Make plans to join SQE for the final Agile Test Automation course of 2011!
In this interactive tutorial, Janet Gregory describes how to use automation early and guide development, what tests should be automated, and how to work through ways to overcome common barriers to automation.
Are your automated tests effective and easy to maintain?
Janet will use examples to illustrate how to design automated tests for maximum effectiveness and ease of maintenance. Find out different approaches for evaluating and implementing automated test tools, shortening feedback cycles, creating realistic test data, and evaluating your automation efforts.
Do you ever question how to deliver good quality when you have to release so often?
By combining a collaborative team approach with appropriate tools and design approaches, over time you can not only automate your regression tests but also use automation to enhance exploratory testing.
Do you worry about testing lagging behind coding?
By the end of this session, you'll understand how to fit automation activities within each iteration so that testing "keeps up" with coding.
Here's what one recent attendee had to say about Agile Test Automation:
"Excellent content, excellent instructor, convenient format. Great combo!" William Krebs, Allscripts
Click here to register for the training.
Performance Testing on the Cloud
Date: 17-Nov-2011
Time: 4:00 pm to 5:30 pm IST
Webinar Link: https://www2.gotomeeting.com/register/314865266
Detailed Content:
- How will you know the performance of a hosted SaaS application?
- Measure performance and availability of cloud applications.
- Load test from Amazon cloud.
- Load test from real end user machines, from various cities, various countries.
- Analyze performance measurements.
You can register for this webinar by clicking the GoToMeeting link above; the webinar is completely free of charge.
Smart Phone Testing – Are You "Smart" Enough?
Mobile phone usage has exploded over the last few years as the device transitions from its traditional role as a communications medium to becoming a multi-purpose personal gadget. This expansion, driven by a flurry of technological advancements across a variety of device models, complicates the product development and rollout process for device manufacturers and application developers.
The more daunting task now becomes application quality testing across operating systems, device platforms, and networks to ensure wide acceptance and proper usage. Non-functional testing, including usability, security, and adaptability, is as important as functional testing. Effective testing enables device makers and application developers to collect appropriate metrics that help improve product quality.
In this web seminar, Cognizant explores industry best practices on mobile testing and demonstrates effective ways of managing mobile application quality. From this web seminar, you’ll take away:
- A clear understanding of different aspects of mobile application testing and effective execution of appropriate testing approaches
- An automation approach to accelerate any mobile testing cycle
- How to establish a mobile testing lab
- New techniques to emulate, simulate, and handle multiple browsers, operating systems, platforms, networks, and languages
Presenter: Pradeep Kumar, Head of Mobile Testing Practice, Cognizant
Click the link below to view this on-demand webinar:
https://goo.gl/v1iEk
Visual Studio Quality Assurance and Testing Tools - Case Study
- 26% reduced development-to-test cycle time
- 91% increase in defects discovered
Read this white paper from Pique Solutions to learn how you can get these types of improvements, and the business case to support adopting Microsoft Visual Studio quality assurance and testing tools.
Download this report from Pique Solutions today!
Requirements Management Fundamentals
Learn four fundamentals of requirements management.
Too often projects fail due to issues with requirements. Today, more than ever, it's important for everyone involved in a project to clearly understand the scope of what it is the team is building and why. In this whitepaper, we'll cover the significance of requirements management, as well as four fundamental concepts that are valuable for all stakeholders to understand:
- Planning good requirements
- Collaboration and getting buy-in
- Traceability and change management
- Quality assurance
Click Here to download the free whitepaper - https://www.jamasoftware.com/
Performance Testing Across the Lifecycle
They'll explore how to:
- Build comprehensive test plans that involve Development, Test and Operations
- Create iterative performance tests and execute them across the development cycle
- Integrate functional and performance testing in fast-paced environments
- Integrate performance testing into continuous build frameworks
- Report on "Performance Regression" and "Performance Coverage"
When Testers Abuse Authority: Q&A with Michael Bolton
A software expert's heuristic for regression testing
Regression testing can be a bundle of work. Regression testing is testing designed to revisit existing aspects of an application or product to ensure the application is still working after changes have been made within a product or new features have been added. By definition, regression testing can be expansive because we may want to ensure nearly every aspect of a product is retested. Recognizing that regression tests are typically previously-created tests means that the labor of regression testing is not in test creation as much as test execution time. Planning what to regression test is the first challenge. So, how do you choose what to regression test?
I devised a heuristic to plan regression testing called RCRCRC. It stands for:
- Recent
- Core
- Risky
- Configuration sensitive
- Repaired
- Chronic
Read more at: https://searchsoftwarequality.techtarget.com/tip/A-software-experts-heuristic-for-regression-testing
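The heuristic above can be sketched as a simple prioritization scheme. This is a hypothetical illustration (the tag names and test inventory are invented, not from the article): score each regression test by how many RCRCRC criteria it matches, and run the highest-scoring tests first.

```python
# Hypothetical sketch: score regression tests against the RCRCRC
# heuristic and run the highest-priority ones first.
RCRCRC = ("recent", "core", "risky", "config", "repaired", "chronic")

def priority(test_tags):
    """Count how many RCRCRC criteria a test case matches."""
    return sum(tag in test_tags for tag in RCRCRC)

# Invented test inventory, tagged by which criteria each test covers.
tests = {
    "login_flow":    {"core", "repaired"},
    "export_report": {"recent", "risky", "config"},
    "about_dialog":  set(),
}

# Run order: most RCRCRC matches first.
run_order = sorted(tests, key=lambda t: priority(tests[t]), reverse=True)
print(run_order)  # ['export_report', 'login_flow', 'about_dialog']
```

In practice the tags would come from your test management tool or version control history (e.g., "repaired" from recent bug-fix commits), not a hand-written dictionary.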
Tips for Better User Acceptance Testing
The theory of user acceptance testing (UAT) is straightforward: User acceptance testing is conducted by users of the product. Users test a product to determine whether the product meets their needs, expectations, and/or requirements. But the distance between the theory of UAT and the reality of what takes place in UAT can be a mighty big gap.
The user acceptance test cycle can be one of the vaguest and most poorly planned segments of the whole product development lifecycle. Confusion may abound about exactly what UAT is and who is responsible for running it. One of the larger pain points of UAT is how late in the cycle this testing takes place. Typically UAT is one of the last efforts before product launch. The late timeframe of the testing adds to frustration, leaving some users and product team members wondering, "What's the point of UAT?"
Read more at:
https://www.informit.com/articles/article.aspx?p=1431821
Top 10 Qualities of a Project Manager
Inspires a Shared Vision
An effective project leader is often described as having a vision of where to go and the ability to articulate it. Visionaries thrive on change and being able to draw new boundaries. It was once said that a leader is someone who "lifts us up, gives us a reason for being and gives the vision and spirit to change." Visionary leaders enable people to feel they have a real stake in the project. They empower people to experience the vision on their own. According to Bennis "They offer people opportunities to create their own vision, to explore what the vision will mean to their jobs and lives, and to envision their future as part of the vision for the organisation." (Bennis, 1997)
Don't Discard Test-driven Development in the Cloud
Writing software for the cloud can be very different from writing software that runs on a single server. It can make test-driven development (TDD) more complicated, but it is still well worth doing. For the purposes of this article, I'll consider two types of software development in the cloud: cloud hosting and distributed computing.
In cloud hosting, you are still writing the same type of software that you have always written. A simple example is a website developed in PHP, Java, Ruby on Rails, or .NET. You are not developing anything out of the ordinary, and the only impact cloud computing makes on your architecture is that it is easier for you to scale the web UI of your system as traffic grows.
For cloud-hosting scenarios, nothing has changed with regards to TDD. The typical xUnit frameworks will provide all that you need to write solid software using good XP practices.
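As a minimal sketch of that point, here is an ordinary xUnit-style test case using Python's unittest; the `greet` handler is a hypothetical stand-in for cloud-hosted application logic, tested exactly as it would be on a single server:

```python
# Minimal sketch: for cloud-hosted web apps, plain xUnit-style tests
# (here, Python's unittest) drive development exactly as on one server.
# The greet() handler below is a hypothetical example, not a real API.
import unittest

def greet(name):
    """Hypothetical request-handler logic under test."""
    if not name:
        raise ValueError("name required")
    return f"Hello, {name}!"

class GreetTest(unittest.TestCase):
    def test_greets_by_name(self):
        self.assertEqual(greet("Ada"), "Hello, Ada!")

    def test_rejects_empty_name(self):
        with self.assertRaises(ValueError):
            greet("")
```

Run it with `python -m unittest` as usual; nothing about cloud hosting changes the red-green-refactor loop.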
Distributed computing is different. For the purposes of this article, I will define it as software that is designed to scale horizontally across many servers in order to improve some combination of reliability or speed or simply to spread the computational requirements of complex algorithms across many servers.
The use of clouds for distributed computing is more complicated and less common than the more straightforward cloud hosting scenario. However, more teams are being called on to develop these types of applications, and there are many open source projects that are making it easier to tap into the more advanced powers of cloud computing.
Read more at:
https://www.stickyminds.com/testandevaluation.asp?Function=FEATUREDETAIL&ObjectId=17177&ObjectType=COL
Hypothesis Testing
Say I hand you a coin. How would you tell if it's fair? If you flipped it 100 times and it came up heads 51 times, what would you say? What if it came up heads 5 times instead? In the first case you'd be inclined to say the coin was fair, and in the second case you'd be inclined to say it was biased towards tails. How certain are you? Or, more specifically, how likely is it that the coin is actually fair in each case?
Read more at:
https://20bits.com/articles/hypothesis-testing-the-basics/
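The coin example can be made concrete with a small calculation. This sketch (standard statistics, not taken from the article) computes an exact two-sided binomial p-value from first principles: under the null hypothesis of a fair coin, sum the probabilities of all outcomes at least as extreme as the one observed.

```python
# Exact two-sided binomial test for coin fairness, from first principles.
from math import comb

def two_sided_p(heads, flips=100):
    """Two-sided p-value under the null hypothesis of a fair coin."""
    p_observed = comb(flips, heads) * 0.5**flips
    # Sum probabilities of every outcome no more likely than the observed one.
    return sum(comb(flips, k) * 0.5**flips
               for k in range(flips + 1)
               if comb(flips, k) * 0.5**flips <= p_observed)

print(round(two_sided_p(51), 3))  # 0.92 -> no evidence against fairness
print(two_sided_p(5) < 1e-15)     # True -> overwhelming evidence of bias
```

So 51 heads is entirely consistent with a fair coin, while 5 heads would essentially never happen by chance.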
A Pragmatic Strategy for NOT Testing in the Dark
- Discover the product's requirements, to know what testing needs to be done;
- Define what quality means to the project, to know how much time and effort we can apply to testing;
- Define a test plan, including release criteria, to check out different people's understanding of what's important about the product, and to know when we're ready to ship.
Discover the Requirements
Metrics for Software Testing: Managing with Facts: Part 2: Process Metrics
In the next three articles in the series, we’ll look at specific types of metrics. In this article, we will take up process metrics. Process metrics can help us understand the quality capability of the software engineering process as well as the testing capability of the software testing process. Understanding these capabilities is a pre-requisite to rational, fact-driven process improvement decisions. In this article, you’ll learn how to develop and understand good process metrics.
Read more at: https://www.rbcs-us.com/images/documents/Metrics-Article2-0711.pdf
Checklist for Windows Compliance Testing
Software QA Series - Principles of Quality - Free Webinar
Best Practices in Performance Testing to Ensure Success
On September 28 at Noon EDT, Neotys invites you to a webinar: "Best Practices in Performance Testing to Ensure Success".
In this live webinar with leading retailer The Bon-Ton Stores, you'll learn how to optimize the performance of your web applications while improving your responsiveness to the business, with ease and without any special skills. Dan Gerard, Divisional VP of Technical & Web Services, and Will Esclusa, Manager of Web Services & Technologies at The Bon-Ton Stores, will join me, Rebecca Clinard, Technology Strategist at Neotys, to discuss:
- Meeting the challenge of establishing your own in-house performance testing
- How you can better meet the urgent and changing needs of the business
- Overcoming the challenges of load testing a complex Web 2.0 eCommerce site
- Achieving the "10-minute Test Script"
- The right way to handle the squeeze of tight timeframes
- How to improve test productivity and efficiency for resource-constrained technology teams
Best Practices Webinar - Wednesday, September 28, Noon EDT (9 a.m. PDT)
Register for "Best Practices in Performance Testing to Ensure Success" today.
Register here - https://www.sdtimes.com/
Top Ten Risks When Leading an Offshore Test Team (Part 2)
Key Principles of Test Design
Effective Management of Test Automation Failures
When an automated test fails, there are typically three possible causes:
- There is an error in the automated test itself
- The application under test (AUT) has changed
- The automation has uncovered a bug in the AUT
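A hypothetical triage sketch (the function and its inputs are invented for illustration) shows how each failure might be routed to one of those three causes before anyone starts debugging:

```python
# Hypothetical triage of an automated-test failure into the three causes
# listed above. The inputs stand in for evidence you would gather first:
# a re-run result and a diff of the AUT against its last known-good state.
def triage(test_passes_on_rerun, ui_changed, expected_behavior_changed):
    """Map an automation failure to one of three likely causes."""
    if test_passes_on_rerun:
        # Intermittent failure points at the test, not the product.
        return "script error: fix the automated test itself"
    if ui_changed or expected_behavior_changed:
        return "maintenance: the AUT has changed, update the script"
    return "defect: the automation has uncovered a bug in the AUT"

print(triage(False, True, False))
# -> maintenance: the AUT has changed, update the script
```

Making this classification explicit keeps "automation is always broken" complaints honest: only the third outcome is a product bug, and only the first is the automation's fault.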
Getting Automated Testing Under Control
A great post by Hung Q. Nguyen, CEO and President, LogiGear Corporation
Global Software Test Automation - Book Review
Mobile Application Testing: Process, Tools and Techniques
The market for mobile applications increases every day and is becoming more and more demanding as technology grows. In a new study, Yankee Group predicts a $4.2 billion “Mobile App Gold Rush” by 2013 which includes:
- Estimated number of smartphone users: 160 million
- Estimated number of smartphone app downloads: 7 billion
- Estimated revenue from smartphone app downloads: $4.2 billion
At Organic, our goal is to stay on the cutting edge of emerging platforms by launching new and diverse applications. We keep this goal in mind when developing mobile web applications. We utilize some of the same styles of programming used for developing web applications, and we follow the same testing methodology employed for web development when testing our mobile applications.
Read More at: https://www.logigear.com/july-issue-2011/1059-mobile-application-testing-process-tools-and-techniques.html
Agile Test Automation - Truth, Oxymoron or Lie?
It can be confusing for everyone on an agile team to understand when or what to test when there isn't a test phase or any formal documented requirements. Whatever your agile methodology, projects require a change in the way QA and development work together. The use of technology and automation is much more difficult, and finding a practical approach to testing is critical for successful agile projects.
George Wilson explores how testing in agile is different and gives pragmatic advice to ensure that application quality, within an agile environment, isn't compromised. Discussions on the techniques for quickly getting control of manual testing and progressing to automated testing in agile will leave you with fresh thinking to resolve or prevent any testing dysfunctions in your agile teams.
Download the presentation from here - https://www.origsoft.com/webinars/agile_testing/agile_test_automation.pdf
Watch recorded webinar - https://vzaar.com/videos/676465
Source: https://www.origsoft.com/webinars/agile_testing/
TestMaker - Open Source Software Test Platform
Why is software testing so critical?
Required: Software Testers - Performance Testing | Job Location: Gurgaon
Qualification: B.Tech/B.E., Bachelor of Computer Applications (B.C.A.), Master of Computer Applications (M.C.A.), Master of Technology (M.Tech/M.E.)
Experience: Min 2 years, Max 5 years
Job Description: QA Engineers with 2-5 years of experience in performance analysis, performance monitoring, and automated scripting for performance data logging preferred. Exposure to load management tools and hands-on skills in adapting open source tools will be an asset.
Functional Area: IT / Telecom - Software
Location: Gurgaon
Country: India
Apply: [email protected] (with the subject "Resume for Performance Tester")
Required: Software Tester (Quality Engineer) in Gurgaon
Cvent is looking for a talented quality engineer to join our Technology team, which designs, develops and operates large-scale, Web-based applications. The Quality Engineer ensures the quality and integrity of the application. This position plays an integral role in application usability, providing product feedback at all stages of the development life cycle. This is an entry-level position with significant opportunity for career growth. Star performers in this profile receive a unique opportunity to visit the Cvent headquarters office in the US for 3 months of further learning and career development.
Position Duties:
- Write and execute test plans for the application
- Review and assist in the development of test plans being prepared by others
- Document defects and work with engineers to resolve issues
- Provide usability feedback to the product team
- Assist in applying application standards
Candidate Requirements:
- B.E./B.Tech/MCA (must)
- 0-1 years of relevant experience
- Excellent problem solving and analytical skills
- Superior attention to detail
- Interest in technology and a hunger for learning
- Ability to work independently and as part of a team
- Knowledge of relational databases and Microsoft Office required
- Knowledge of SQL, the Software Development Lifecycle, HTML, XML, and automated testing tools a plus
Email your resume to: [email protected]
Crowd Sourced Testing
(By Rajini Padmanaban, Director of Engagement, Global Testing Services)
Given the global distribution of software and how the internet is bringing the world together, community-based testing activities have gained a lot of momentum in recent years. Such activities could be forum discussions, beta testing efforts, crowd sourced testing, etc. Of specific interest in this blog is what crowd sourced testing is and when this model can be leveraged to yield success.
In simple terms, crowd sourced testing is leveraging the community at large to test a given product. This community spans people from diverse cultures, geographies, languages, and walks of life, who put the software to use under very realistic scenarios that a tester on the core test team may not be able to think of, given his or her limited bounds of operation.
Read more at: https://www.qainfotech.com/blog/2011/06/crowd-sourced-testing-is-it-really-for-you/
A Note on Globalization Testing
The goal of globalization testing is to detect potential problems in application design that could inhibit globalization. It makes sure that the code can handle all international support without breaking functionality that would cause either data loss or display problems. Globalization testing checks proper functionality of the product with any of the culture/locale settings, using every type of international input possible. Proper functionality of the product assumes both a stable component that works according to the design specification, regardless of international environment settings or cultures/locales, and the correct representation of data.
The following must be part of your globalization-testing plan: decide the priority of each component. To make globalization testing more effective, assign a testing priority to all tested components. Components that should receive top priority:
- Support text data in the ANSI (American National Standards Institute) format
- Extensively handle strings (for example, components with many edit controls)
- Use files for data storage or data exchange (e.g., Windows metafiles, security configuration tools, and Web-based tools)
Source: https://www.software-testing-india.info/globalization-testing.html
A Note on Usability Testing
Visitors to your company's website may have a wide range of Internet experience and, consequently, different expectations that must be fulfilled to win them over. While experienced users look for implementation of industry norms, newcomers need guidance to surf through the unfamiliar Web environment. Failure to cater to such expectations is likely to result in lost sales, as visitors are unable to locate what they are looking for or unable to complete transactions. Usability testing starts by identifying specific demographic groups within the target audience, taking into account their age, profession, cultural background, level of Internet exposure, and many other relevant factors.
Goals of usability testing
Usability testing is a black-box testing technique. The aim is to observe people using the product to discover errors and areas of improvement. Usability testing generally involves measuring how well test subjects respond in four areas: efficiency, accuracy, recall, and emotional response. The results of the first test can be treated as a baseline or control measurement; all subsequent tests can then be compared to the baseline to indicate improvement.
1. Performance - How much time, and how many steps, are required for people to complete basic tasks? (For example, find something to buy, create a new account, and order the item.)
2. Accuracy - How many mistakes did people make? (And were they fatal or recoverable with the right information?)
3. Recall - How much does the person remember afterwards, or after periods of non-use?
4. Emotional response - How does the person feel about the tasks completed? Is the person confident or stressed? Would the user recommend this system to a friend?
Emerging Trends in Security Testing
(by APP Labs)
Today the application security testing space is not what it used to be. There are several trends that are affecting the development and testing of next-generation applications from a security perspective. The three towering facets that are rewriting the conventional path taken for security testing are Cloud, Mobile, and Rich Internet Application (RIA) platforms.
RIAs challenge traditional application security testing tools, which tend to focus on testing the web server side of the application. With RIA, the client side of the application logic has become equally important, if not more so, and has to be tested as well. This is bringing in new tides of challenges.
Cloud platforms will require application security testing tools to evolve to support the testing of applications built for specific cloud platforms, and built using cloud-specific languages and frameworks. The other disruption cloud platforms are driving is the demand for testing to support XML-based APIs used to reach out and consume cloud-based services.
Read more: https://blog.applabs.com/index.php/2011/01/emerging-trends-in-security-testing/
System and User Acceptance Testing
System testing usually refers to the testing of a specific system in a controlled environment to ensure that it will perform as expected and as required. From a systems development perspective, the term System Testing refers to the testing performed by the development team (programmers and other technicians) to ensure that the system works module by module (unit testing) and also as a whole. System testing should ensure that each function of the system works as expected and that all errors (bugs) are detected and analysed. It should also ensure that interfaces for export and import routines function as required. After meeting the criteria of the test plan, the software moves to the next phase of quality check and undergoes User Acceptance Testing.
User Acceptance Testing: UAT refers to the test procedures which lead to formal 'acceptance' of new or changed systems. User Acceptance Testing is a critical phase of any project and requires significant participation of 'end users'. An Acceptance Test Plan is also developed, detailing the means by which 'acceptance' will be achieved. The final part of the UAT can also include a parallel run to compare the new system against the current one.
The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order to provide realistic and adequate exposure. The testing can be based upon the User Requirements Specifications to which the system should conform. However, problems will continue to arise, and it is important to determine what the expected and required responses will be from the various parties concerned, including users, the project team, vendors, and possibly consultants/contractors.
Don't Measure All Software Defects Equally
(By App Labs)
Quality cannot simply be built into a software application right before it gets launched; it must be part of the software life cycle right from the requirements phase to the production phase. With the growing prominence of software quality, most enterprises are investing in advanced tools, processes, and people, and more so in testing and quality assurance. But the growing need to develop and update applications faster to stay ahead of the competition, along with stringent project deadlines, constrains enterprises from accommodating enough time for testing. While resolving all defects is quite important, the effort can be varied based on the priority of the defects. All defects may not have the same impact on the application, and hence smarter testing based on defect severity is what enterprises need to improve quality while meeting their timelines.
Read more at: https://blog.applabs.com/index.php/2011/03/dont-measure-all-software-defects-equally/
Web Testing with Automation Anywhere
Businesses and applications today are increasingly moving to web-based systems. Time tracking systems, CRM, HR and payroll systems, financial software, materials management, order tracking systems and report generation: everything is web based. Automation Anywhere can automate all web-based processes without any programming, from simple online form-filling to more complicated tasks like data transfer, web data extraction, image recognition or process automation.
SMART Automation Technology from Automation Anywhere offers over 180 powerful actions for web automation. Automation Anywhere works with any website, even complex websites using Java, JavaScript, AJAX, Flash or iFrames. Agent-less remote deployment allows automated tasks to be run over various machines on the network. Our advanced Web Recorder ensures accurate re-runs, taking into account website changes.
Automation Anywhere offers two easy options to automate web tasks: use our powerful Web Recorder, or use the editor with point-and-click wizards to automate tasks in minutes.
Web Recorder: Use the 'Record' button to simply record your actions. The Web Recorder tool uses SMART Automation Technology to account for website changes or web control position changes, ensuring that recorded tasks continue to run smoothly.
Watch the demo video here: https://www.automationanywhere.com/lrn/keyFeat/webRecorder.htm?r=examples
Types of Testing Tools with Examples
- Test Project Management - MS Project and Test Director
- Defect Management - Test Director, PVCS Defect Tracker, Bugzilla, and Rational ClearQuest
- Regression Test Automation - WinRunner, Rational Robot, QuickTest Pro and QES Architect
- Coverage Management - Rational RequisitePro and Mercury Test Director
- Performance Testing - Mercury LoadRunner, Rational Performance Studio and Compuware QA
- Configuration Management - Visual SourceSafe and PVCS
- Test Data - Thinksoft Test Data Manager
- Dynamic Code Coverage - Rational PureCoverage
Open Source Solutions Have Fewer Software Flaws (by CIOL)
The debate on the usage of open source technologies in security products is growing day by day. While most companies are using open source security products, a few companies are still evaluating whether to use open source applications to protect their IT products. In an interaction with Abhigna NG of CIOL, Rahul Kopikar, Head of Business Development at Seclore, shared his views on the adoption of open source products by enterprises and the best practices developers need to follow while developing open source applications. Excerpts:
CIOL: How safe is it to use open source applications with the increase in malware attacks?
Rahul Kopikar: It is pretty safe as long as the software has gone through stringent QC and testing. The notion that open source is prone to malware attacks is wrong. On the contrary, proprietary software is more prone to malware attacks because it undergoes limited testing, whereas with open source software the whole worldwide community contributes and tests the system.
Read more at: https://www.ciol.com/Developer/Open-Source/Feature/Open-source-solns-have-lesser-software-flaws/152619/0/
Automation Test Tool Selection | Which automation tool is good for use?
Before choosing an automation tool, make sure the tool offers the following:
- Significant reduction in the time taken per testing iteration for future application releases
- Savings in manpower and associated costs, owing to reduced manual testing effort (potentially up to 80%)
- Improved regression test coverage within short time frames
- Flexibility, as individual modules can be tested independently
- Radical improvement in the consistency and uniformity of the testing process
- Easy modification of reusable components; once created and benchmarked, the automation suite is flexible, repeatable, and stable
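One hedged way to turn a checklist like the one above into a decision is a simple weighted scoring matrix: rate each candidate tool per criterion and compare weighted totals. The criteria names, weights, and ratings below are invented for illustration and do not recommend any particular tool.

```python
# Hypothetical weighted scoring matrix for automation tool selection.
# Weights reflect how much each criterion matters; ratings are 1-5.

CRITERIA = {                      # weight per criterion
    "cycle_time_reduction": 3,
    "regression_coverage": 3,
    "modularity": 2,
    "consistency": 2,
    "reusable_components": 2,
}

def total_score(ratings):
    """Weighted sum of one candidate tool's 1-5 ratings."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

candidates = {
    "Tool A": {"cycle_time_reduction": 4, "regression_coverage": 5,
               "modularity": 3, "consistency": 4, "reusable_components": 4},
    "Tool B": {"cycle_time_reduction": 5, "regression_coverage": 3,
               "modularity": 4, "consistency": 3, "reusable_components": 3},
}
best = max(candidates, key=lambda t: total_score(candidates[t]))
print(best, total_score(candidates[best]))   # -> Tool A 49
```

The value of writing the matrix down is less the arithmetic than forcing stakeholders to agree on the weights before arguing about tools.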
Test Automation Framework by DST Worldwide Services?
New testing platform supports quality and cost management associated with test scripts

KANSAS CITY, Mo., June 29, 2011 /PRNewswire/ -- DST Worldwide Services (DSTWS) has launched a new test automation framework, a leading-edge platform designed for automating functional and regression testing in system environments. The solution was created using industry-standard process frameworks to provide comprehensive automation capabilities and address the key challenges of traditional test automation approaches.

The framework enables repeatability, re-usability, and faster development of test scripts, supporting increased quality at decreased cost. This results in speedier time to market and lets subject matter experts spend more time testing complex system functionality. It includes comprehensive logging and reporting capabilities, supports multiple sets of data, offers scheduling of test scenarios, and integrates easily with testing tools such as QuickTest Professional, SilkTest, Selenium, and Quality Center.

Read more at: https://www.prnewswire.com/news-releases/dst-worldwide-services-launches-test-automation-framework-124707638.html
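The press release does not show internals, but the core data-driven idea it describes -- one reusable script replayed against multiple sets of data, with each run logged -- can be sketched as follows. The function under test and the data rows are invented for illustration; a real framework would pull the rows from a spreadsheet or database.

```python
# Minimal data-driven test sketch: one script, many data sets, with logging.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def discounted_price(price, pct):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (100 - pct) / 100, 2)

# Each row is one data set the same test script is replayed with.
DATA = [
    {"price": 100.0, "pct": 10, "expected": 90.0},
    {"price": 59.99, "pct": 0,  "expected": 59.99},
    {"price": 20.0,  "pct": 50, "expected": 10.0},
]

def run_suite():
    results = []
    for i, row in enumerate(DATA, 1):
        actual = discounted_price(row["price"], row["pct"])
        ok = actual == row["expected"]
        logging.info("case %d: expected=%s actual=%s %s",
                     i, row["expected"], actual, "PASS" if ok else "FAIL")
        results.append(ok)
    return results

print(run_suite())   # -> [True, True, True]
```

Adding a new test case is a one-line data change rather than a new script, which is where the "re-usability and faster development" claim comes from.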
System Thinking required for Developing Testing Workforce?
By Pradeep C | CEO - Edista Testing Institute

QAI participated in SofTec 2011, held at the NIMHANS Convention Centre, Bangalore, on 2nd July 2011, and presented on the need for a systems-thinking approach to developing a testing workforce that can meet the growing demand for skilled testers. The conference promotes the sharing of software test experiences by bringing together software test professionals, practitioners, experts, academicians, and service/product vendors to share techniques, methodologies, frameworks, experiences, and case studies for performing, managing, and automating software testing.

Speaking on the occasion, Mr. Pradeep C, Founder & CEO, emphasized the challenges in the current state of practice for workforce selection and the need for innovative workforce strategies for creating a successful test organization. He led a highly interactive leadership-track session on "Workforce strategies for creating successful test organizations", which questioned the existing paradigms for capability and capacity development and highlighted how a change of perspective can provide innovative answers to the problems currently faced by testing heads. His presentation focused on the need for structured assessments to determine and focus on areas of improvement at both the individual and the organizational level, with the assessments also identifying role-specific capabilities for an individual using adaptive, intelligent assessment methods.

Read more at: https://www.prsafe.com/new_press_releases/view/3675
Simple Factors for Risk Based Software Testing?
(by Rex Black) We work with a number of clients to help them implement risk-based testing. It's important to keep the process simple enough for broad-based participation by all stakeholders. A major part of doing so is simplifying the assessment of the level of risk associated with each risk item. To do so, we recommend that stakeholders assess two factors for each risk item:
- Likelihood: upon delivery for testing, how likely is the system to contain one or more bugs related to the risk item?
- Impact: if such bugs were not detected in testing and were delivered into production, how bad would the impact be?

Read more at: https://goo.gl/rR4c0
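Black's two-factor scheme is commonly implemented by rating each factor on a small ordinal scale and multiplying the two ratings to get a priority number that orders the test effort. A minimal sketch, assuming 1-5 ratings and made-up risk items:

```python
# Risk-based test prioritization: priority = likelihood x impact.
# Risk items and their 1-5 ratings below are illustrative.

def risk_priority(likelihood, impact):
    """Combine the two stakeholder ratings into one priority number."""
    return likelihood * impact

# (name, likelihood, impact)
items = [
    ("login security", 4, 5),
    ("report layout",  3, 2),
    ("payment flow",   5, 5),
]

# Test the highest-priority risk items first.
ranked = sorted(items, key=lambda it: risk_priority(it[1], it[2]),
                reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: {risk_priority(likelihood, impact)}")
# -> payment flow: 25, login security: 20, report layout: 6
```

Keeping the scale small is deliberate: stakeholders can agree on "4 out of 5" far faster than on a precise probability, which is what keeps the process simple enough for broad participation.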