I have recently been looking into an awesome tool called Selenium. What is Selenium? It's a suite of tools that you can use to automate web application testing across many platforms. I recently attended a Selenium Users Group meeting at Google's London head offices, where the many uses of Selenium were discussed. Why would you want to use it? Imagine you had a web application that needed to behave exactly the same after every new deployment: you would need to regression test the entire front end of the application. With Selenium, you could automate all those tests and run them at the click of a button. Not only that, but you could integrate this into your build process! One of the presentations shown to us during the meeting described how Mozilla uses Selenium to test Firefox. Previously they had hundreds of front-end tests to run by hand; once they automated the process, a full run took a matter of minutes! Read More at: https://goo.gl/tnuy9 (by Dean Hume)
This is my personal blog, where you will find helpful articles related to software testing.
Automated Testing with Selenium 2 and NUnit
Tips for Easy Unit Testing - Webinar
Typemock is organising a webinar on Tuesday, July 26, 2011, from 10:00 AM to 11:00 AM EDT on the topic "Tips for Easy Unit Testing". Do you struggle with legacy code? With code quality? Have you heard about unit testing but are not sure how it can help you or your development team? This webinar will walk viewers through some of the common problems with unit testing and how to solve them. According to the event's site, a giveaway of a free license for Isolator for .NET and two free "Legalize Unit Testing" t-shirts is also planned. Typemock is a unit testing company with quite a few interesting products:
- Isolator .NET – a breakthrough isolation framework for .NET
- Isolator++ for Windows – a breakthrough isolation framework for C++
- Isolator++ for Linux – a breakthrough isolation framework for C++
- Test Lint – an automatic guide for writing unit tests
- TeamMate – measures your unit testing adoption rate
Register here: https://www2.gotomeeting.com/register/545851563
Testing Oracle Forms with OpenScript
A guide to using Oracle OpenScript to record a functional test with Oracle EBS/Forms. With the release of Oracle Application Test Suite (OATS) 9.00 comes a new scripting tool called OpenScript. It is based on the popular Eclipse IDE and allows users/testers to record testing scripts (both load testing and functional testing) for a number of popular technologies, including Oracle EBS/Forms, Siebel, and Web. In this article we will concentrate on using OpenScript to record a functional test with Oracle EBS/Forms. (NOTE: For testing Siebel applications, or Java applications in general, see the blog written by Alex Amat.) The Forms environment: before we can proceed with testing the Forms application, the first stage of this process involves making sure that the application can run from a normal web browser, and that the JInitiator environment is configured properly. First, you should install JInitiator. We used the 10g (10.1.2.0.2) Demos environment for this white paper. Read More at: https://goo.gl/9yQCC
Forex Tester - Testing of Forex trading
The Forex Tester is specialized software designed exclusively for accurately simulating Forex trading. It is the best tool for quickly improving your trading skills, testing out new strategies and developing confidence without risking any real money. The Tester has over 10 years of historical data that you can use to test any strategy. Depending on the time frame selected, you can simulate trading years of data in just a few hours. It is a standalone program that comes with its own extensive selection of indicators, including Moving Averages, Bollinger Bands, MACDs, Pivot Points, Parabolic SAR, RSI, Stochastic, Keltner Channels, Heiken Ashi candles, and many more. Choose from 18 different currencies to test, including gold and silver. It is also a very effective and powerful tool for testing out your automated strategy. The results are accurate and can be trusted as a viable set of statistics that give you the information you need to determine whether your strategy is robust. Why use a Forex simulator? The first step in refining your trading skills and strategies is to accept that this is a process that involves repetition, practice and reinforcement. This is where a Forex trading simulator is very useful: it will train you, save you money and ensure that you don't start out a loser. Download at: https://www.forextester.com/
Easy C/C++ Unit Testing with Powerful Mocking
Unit testing in C/C++ has proven value, finding as much as 90% of defects. But it is very hard. We know that; we've been there. Using a test runner is just not enough. With Isolator++, it can be easy. In unit testing C/C++ code we want to test a single feature, but then we hit a major problem: our code calls other classes, databases, services and 3rd-party libraries. To solve this, we would need to change our code, but changing it is risky: we are likely to introduce new bugs and then find ourselves rewriting our unit tests instead of writing new code. With Isolator++, all these C/C++ unit testing issues go away. Isolator++ can simulate any dependency by observing the running code and diverting function calls to automatically generated mocks.
- Afraid of changing your code just in order to test it? Isolator++ can unit test any C/C++ code without modification.
- Does changing other developers' code freak you out? Isolator++ unit tests protect you as you change your C/C++ code.
- Wasting time rewriting C/C++ unit tests? Isolator++ tests are made to be shorter, clearer and resistant to production code changes.
Isolator++ works on both Windows and Linux and with any test runner, including CppUnit, GoogleTest, Boost and UnitTest++. Download at: https://www.typemock.com/isolatorpp-product-page
The Importance/Value of Software Testing
1. The value of software testing is often taken for granted.
2. The E57.04 Data Interoperability Sub-Committee wants to assist vendors with the testing phase.
3. A high level of end-user demand for this standard is going to be needed to motivate the vendors to adopt it.
I used to sell one of the leading geographic coordinate software components, called the Geographic Calculator. One of the main reasons that companies purchased it was that it had been tested by major oil and gas exploration customers around the world. Could these very wealthy companies have developed the software on their own? Absolutely, but the cost of testing it would have been more than what they paid us to license the component, and in some cases that was a significant amount. The issue of testing the software implementations came up today at the ASTM E57.04 Data Interoperability Sub-Committee meeting. We would like to provide guidelines to assist the vendors in determining whether their readers and writers are in fact producing the correct format, but when you begin to look at the scope of this effort you quickly realize that this is going to be a real challenge. We discussed the possibility of providing test data suites, the need for vendors to test all of the possible combinations of readers and writers, and the issues associated with custom implementations. There were no simple solutions. In order for the new data standard to be a success, the vendors are going to have to make an investment in software testing. This may require more effort than the development of the code itself, and it will only happen if the end users create the demand for the standard. Source: https://blog.lidarnews.com/the-importancevalue-of-software-testing
TestLab: Test Process Management
A software testing management system for registering manual and automatic test cases and test sets. Users and developers can submit new test cases. Download: https://sourceforge.net/projects/testlab/
Quality Assurance meets Quality Control
By Robin F. Goldsmith
Students and clients often ask what the difference is between Software Quality Assurance, Software Quality Control and Software Testing. The classic distinction is that Quality Assurance (QA) addresses the processes that produce software, whereas Quality Control (QC) deals with the products of those processes. Testing is the primary method of QC for software.
That said, these definitions routinely are interpreted in a variety of ways that often are inconsistent at best. For example, most "QA" software groups actually only do testing.
For those relatively few QA groups that are not just doing testing, probably the most common distinction organizations make is that QA reviews requirements and/or designs whereas QC/Testing executes the developed software programs.
It should be evident that this definition of QA as reviewing requirements/designs does not actually address the software process. Review is a form of static testing, albeit typically of intermediate development products earlier in the software process. Executing code is dynamic testing of the end product of software development. QA vs. QC distinctions based on what is tested blur further when it’s the code itself that is reviewed.
Read More at: https://goo.gl/3WCMY
Software Testing Automation Framework (STAF)
The Software Testing Automation Framework (STAF) is a framework designed to improve the level of reuse and automation in test cases and test environments. The goal of STAF is to provide a complete end-to-end automation solution for testers.
Download STAF346-setup-win32.exe (89.9 MB) - https://goo.gl/bf4O5
User Comments:
STAF is invaluable to us for testing our broadcast system; it's a really elegant solution for distributed automation and control.
posted by petejefferies 2010-01-22
For More Details, go to - https://sourceforge.net/projects/staf/
The Benefits of Software Testing Tools
In this post we look at some of the benefits of implementing the right software testing tools. In particular we’re going to take test automation tools as an example and list reasons why it’s worthwhile investing in automation. We’ll also look at the reasons why many companies shy away from implementing test automation. Many test teams look at test automation tools and then back off once the reality of what is involved becomes clear. With the right approach though the implementation of an automation process need not be as daunting as it may first appear.
If you’ve looked at automation solutions and then decided not to proceed, it’s probable that you hit one of these issues:
1. During the trials of your selected products, you realised that the learning curve for becoming productive with a tool is far steeper than expected. As a result, just for the trial, you found yourselves investing significant amounts of time trying to get even a basic automated test working.
2. After starting your trial, your team was overtaken by day-to-day events and never had the time to invest in a comprehensive software testing tool evaluation. As a result, you keep promising yourself that you'll pick up the trial again in one or two months' time.
Read More at: https://goo.gl/v4CFn
Distributed Continuous Quality Assurance System
By Adam Porter & Atif Memon - Skoll DCQAS
Software engineers increasingly emphasize agility and flexibility in their designs and development approaches. They increasingly use distributed development teams, rely on component assembly and deployment rather than green field code writing, rapidly evolve the system through incremental development and frequent updating, and use flexible product designs supporting extensive end-user customization. While agility and flexibility have many benefits, they also create an enormous number of potential system configurations built from rapidly changing component implementations. Since today's quality assurance (QA) techniques do not scale to handle highly configurable systems, we are developing and validating novel software QA processes and tools that leverage the extensive computing resources of user and developer communities in a distributed, continuous manner to significantly improve software quality.
Psychology of Testing
Excellent notes on how to write clean, testable code, and on the psychology of testing
(or why our intuition about testing is wrong)
By Miško Hevery
View the presentation here - https://docs.google.com/present/view?id=d449gch_277fc6wwc9s
Context-Driven Testing Explained
WHAT IS CONTEXT-DRIVEN TESTING?
(Article by Dr. Cem Kaner)
James, Bret and I published our definition of context-driven testing at https://www.context-driven-testing.com/. Some people have found the definition too complex and have tried to simplify it, attempting to equate the approach with Agile development or Agile testing, or with the exploratory style of software testing. Here's another crack at a definition:

Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. The essence of context-driven testing is project-appropriate application of skill and judgment. The Context-Driven School of testing places this approach to testing within a humanistic social and ethical framework.

Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply "best practices," we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.
Read More: https://goo.gl/NrZH8
The negatives of "negative testing"
Before you proceed reading this post, please take a piece of paper or a notepad and write down what you mean by "negative testing".

Are you done with it?

I spent time investigating what many Indian testers say about "negative testing" by browsing online forums and Google search, and I also sought help from testers who were online in my buddy list. What you read below, till you see a "Meep", are views of testers who responded to me over IM when I asked them: What do you mean by "negative testing"? Or, when someone says "negative testing", what occurs to you? Or, what do you think about "negative testing"?

"I infer that some of the people are trying hit on negative tests (the probality of user acting the same is very less) o begin with"

"Testing carried out to test expected behaviour from product by providing wrong (negative) inputs"

"A negative test would be the program not delivering an error when it should or delivering an error when it should not. Negative Testing = (Showing error when not supposed to) + (Not showing error when supposed to)"
Read More: https://goo.gl/x4QWm (By Pradeep)
Load Testing with LoadComplete
Before you launch any new Web application, test it the way your users will experience your website, with LoadComplete on-premise load testing software. After all, the last thing anyone wants is for an application or server to bog down, or worse, crash, when it is needed most, as users are waiting. With LoadComplete you can quickly determine:
- What is the maximum number of users my website can handle?
- Will my application fail or create too many errors with x visitors?
- Will I meet my specific quality of service goals?

LoadComplete is a load, performance, stress and scalability testing tool that lets you see how your web application handles the load of hundreds and thousands of concurrent users. It replays realistic application usage scenarios and lets you monitor the application performance and key infrastructure metrics in real time. This lets you easily identify performance problems as they happen and determine what causes them.

And best of all, we've designed LoadComplete to be affordable to companies of all sizes, with flexible licensing options. LoadComplete provides flexible self-service set-up and execution of load test scenarios to:
- Replicate realistic user experiences in a pre-production environment to identify performance issues before they can negatively impact visitors.
- Analyze load tests to find and repair problems with software, hardware, or 3rd-party resources when you test your entire Web application in production prior to launch.
- Improve operational quality, both inside and outside your network, by pinpointing and resolving bandwidth and configuration issues that slow pages down to a crawl.
- Ensure that quality of service metrics are being met during scalability testing.
- Test as often as you like to align load testing with your Agile sprints, without having to pay extra for access to cloud resources.
- Create multiple real-life load test scenarios to match your testing to how visitors use your website.
Book Review - Software Testing as a Service
In today’s unforgiving business environment where customers demand zero defect software at lower costs—it is testing that provides the opportunity for software companies to separate themselves from the competition. Providing a fresh perspective on this increasingly important function, Software Testing as a Service explains, in simple language, how to use software testing to improve productivity, reduce time to market, and reduce costly errors.
The book explains how the normal functions of manufacturing can be applied to commoditize the software testing service to achieve consistent quality across all software projects. This up-to-date reference reviews different software testing tools, techniques, and practices and provides succinct guidance on how to estimate costs, allocate resources, and make competitive bids.
Replete with examples and case histories, this book shows software development managers, software testers, testing managers, and entrepreneurs how proper planning can lead to the creation of software that proves itself to be head and shoulders above the competition.
Effective Software Testing: 50 Specific Ways to Improve Your Testing - Book Review
by Elfriede Dustin
Addison-Wesley, 2003
ISBN 0-201-79429-2
Cover price: US$34.99
304 pages
Some books are great reads, and some are chock-full of useful information. Sometimes you find a book that is both, but unfortunately, this book fits neither category.
Judging by the title, you might expect to discover between the covers a lot of tips and techniques to really help you and your team get better at testing. Unfortunately, if you have more than a little experience as a tester, you will find little here that is new to you. Conversely, if you are relatively new to testing, many techniques in Dustin's list of fifty might be new to you, but for the most part, you won't get enough detail to really learn them. For me, reading this book was about as helpful as reading the menu at my favorite take-out restaurant: I've seen all the choices before, and Dustin did not have much new to say about them.
Most of the suggestions about ways to improve your testing, which Dustin calls items in the book, are broad statements that, to an experienced tester, simply represent common sense. She describes some of these items in no more than a page, and the coverage is frustratingly skimpy.
Take Item 24, for example: Utilize System Design and Prototypes. An experienced tester might read this and say, "Okay that makes sense. Give me some details." But in the page devoted to this item, you will find only the following advice:
- Prototypes can be helpful in detecting inconsistencies in requirements.
- Prototyping high-risk and complex areas early allows the appropriate testing mechanism (e.g., a test harness) to be developed and refined early in the process.
- Designs and prototypes are helpful in refining test procedures, providing a basis for additions and modifications to requirements. They thus become a basis for creating better, more-detailed test procedures.
- Prototypes are also useful in the design of automated tests using functional testing tools.
As you can see, Dustin just keeps describing the high-level benefits of utilizing system design and prototypes -- she never gets down to specific actions a practitioner should take.
All-Pairs Testing Technique
As per wiki: "All-pairs testing or pairwise testing is a combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by 'parallelizing' the tests of parameter pairs."
In All-Pairs test design, we are concerned with variables of a system, and the possible values each variable could take. We will generate testcases by pairing values of different variables. Don’t re-read that last sentence -- generating testcases using All-Pairs is easier than it sounds. It’s like learning a new card game – at first you have to learn the object of the game, all the rules, and all the exceptions to the rules, and then the tips and strategies. But after you’ve played a few times, it seems naturally easy to play. It’s the same here. The best way to explain how it works is with an example, so keep reading…
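Here is a minimal sketch of greedy pairwise generation in Python. The parameter names and values are invented for illustration, and real pairwise tools use more sophisticated covering algorithms, but the idea of pairing values of different variables is the same:

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy pairwise test-case generation (a sketch, not an optimal cover).

    params: mapping of variable name -> list of possible values.
    """
    names = list(params)

    def pairs_of(case):
        # every (variable, value, variable, value) pair one test case exercises
        return {(a, case[a], b, case[b]) for a, b in combinations(names, 2)}

    # every value pair of every variable pair must be covered at least once
    uncovered = set()
    for a, b in combinations(names, 2):
        uncovered |= {(a, va, b, vb) for va in params[a] for vb in params[b]}

    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    cases = []
    while uncovered:
        # greedily take the candidate covering the most still-uncovered pairs
        best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
        cases.append(best)
        uncovered -= pairs_of(best)
    return cases

# made-up example: 3 variables x 3 values = 27 exhaustive combinations
params = {"browser": ["Firefox", "Chrome", "IE"],
          "os": ["Windows", "Linux", "Mac"],
          "db": ["Oracle", "MySQL", "Postgres"]}
suite = all_pairs(params)
print(len(suite), "pairwise cases instead of", 3 * 3 * 3, "exhaustive ones")
```

The greedy loop always terminates, because any uncovered pair belongs to at least one remaining candidate, so every iteration shrinks the uncovered set.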
Software Testing Magazine - July 2011 Issue - Free Download
This issue has lots of lovely pages, jam-packed and written entirely by the software testing community. Content includes:
1. Combating “Tester Apathy” – by Michael Larsen
2. Building a community, STC style – by Rosie Sherry
3. Alternative Paths for Self Education in Software Testing – by Markus Gärtner
4. Controversies in Software Testing – by Matt Heusser
5. Show Stoppers – the community answers your questions
6. Education for Testers – survey infographic
7. A Little Track History that goes a Long Way – by Jesper L Ottosen
8. How to successfully measure the tester’s performance – by Andy Glover
9. The art of bug reporting – by Andy Glover
10. I am a Tester and also a Gardener – by Luisa Baldaia
11. Pen Testing – by Ovidiu Lutea
12. Priority vs Severity… – by David Greenlees
13. A Look Inside EuroSTAR 2011
14. A brief history in testing times… recent news and community highlights
15. Book Excerpt: The Problems with Testing – by Rob Lambert
SQL Database Testing Interview Questions - Part 3
28. What is the use of the DROP option in the ALTER TABLE command?
It is used to drop constraints specified on the table.
29. What operator tests column for the absence of data?
IS NULL operator.
30. What are the privileges that can be granted on a table by a user to others?
Insert, update, delete, select, references, index, execute, alter, all.
31. Which function is used to find the largest integer less than or equal to a specific value?
FLOOR.
32. Which is the subset of SQL commands used to manipulate Oracle Database structures, including tables?
Data Definition Language (DDL).
33. What is the use of DESC in SQL?
DESC has two purposes. It is used to describe a schema object, and also to retrieve rows from a table in descending order.
Explanation :
The query SELECT * FROM EMP ORDER BY ENAME DESC will display the output sorted on ENAME in descending order.
34. What command is used to create a table by copying the structure of another table?
CREATE TABLE .. AS SELECT command
Explanation:
To copy only the structure, the WHERE clause of the SELECT command should contain a condition that is always false, as in the following.
CREATE TABLE NEWTABLE AS SELECT * FROM EXISTINGTABLE WHERE 1=2;
If the WHERE condition is true, all rows satisfying the condition will be copied to the new table.
35. TRUNCATE TABLE EMP;
DELETE FROM EMP;
Will the outputs of the above two commands differ?
Both will result in deleting all the rows in the table EMP.
36. What is the output of the following query SELECT TRUNC(1234.5678,-2) FROM DUAL;?
1200.
37. What are the wildcards used for pattern matching.?
_ for single character substitution and % for multi-character substitution.
38. What is the parameter substitution symbol used with INSERT INTO command?
&
39. What is the sub-query?
Sub-query is a query whose return values are used in filtering conditions of the main query.
40. What is correlated sub-query?
Correlated sub-query is a sub-query, which has reference to the main query.
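A small SQLite sketch of a correlated sub-query (table and data invented for illustration): the inner query re-runs for each outer row, using that row's deptno, to find employees earning above their department's average.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, deptno INTEGER, sal REAL)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    ("SMITH", 10, 800), ("ALLEN", 10, 1600),
    ("WARD", 20, 1250), ("JONES", 20, 2975)])

# the sub-query references e.deptno from the outer query's current row
rows = conn.execute("""
    SELECT ename FROM emp e
    WHERE sal > (SELECT AVG(sal) FROM emp WHERE deptno = e.deptno)
    ORDER BY ename
""").fetchall()
print(rows)  # [('ALLEN',), ('JONES',)]
```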
41. Explain CONNECT BY PRIOR?
CONNECT BY PRIOR retrieves rows in hierarchical order, e.g.:
SELECT empno, ename FROM emp START WITH mgr IS NULL CONNECT BY PRIOR empno = mgr;
42. Difference between SUBSTR and INSTR?
INSTR(string1, string2 [, n [, m]]) returns the position of the m-th occurrence of string2 in string1; the search begins from the n-th position of string1.
SUBSTR(string1, n [, m]) returns a character string of length m from string1, starting at the n-th position of string1.
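SQLite implements SUBSTR and a two-argument INSTR, so the basic semantics can be checked from Python (Oracle's optional start-position and occurrence arguments to INSTR are not available in SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
one = lambda sql: conn.execute(sql).fetchone()[0]

# SUBSTR(string, start, length): 4 characters starting at position 3
print(one("SELECT SUBSTR('database', 3, 4)"))  # taba

# INSTR(string1, string2): position of the first occurrence (1-based)
print(one("SELECT INSTR('database', 'a')"))    # 2
```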
43. Explain UNION, MINUS, UNION ALL and INTERSECT?
INTERSECT - returns all distinct rows selected by both queries.
MINUS - returns all distinct rows selected by the first query but not by the second.
UNION - returns all distinct rows selected by either query.
UNION ALL - returns all rows selected by either query, including all duplicates.
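These operators can be tried in SQLite from Python; note that SQLite spells Oracle's MINUS as EXCEPT (the tables here are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (x INTEGER);
    CREATE TABLE b (x INTEGER);
    INSERT INTO a VALUES (1), (2), (2), (3);
    INSERT INTO b VALUES (2), (3), (4);
""")
q = lambda sql: [r[0] for r in conn.execute(sql)]

print(q("SELECT x FROM a UNION SELECT x FROM b ORDER BY x"))      # distinct rows of both
print(sorted(q("SELECT x FROM a UNION ALL SELECT x FROM b")))     # duplicates kept
print(q("SELECT x FROM a INTERSECT SELECT x FROM b ORDER BY x"))  # distinct rows in both
print(q("SELECT x FROM a EXCEPT SELECT x FROM b ORDER BY x"))     # in a but not b (MINUS)
```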
44. What is ROWID?
ROWID is a pseudo-column attached to each row of a table. It is 18 characters long; the block number and row number are among its components.
45. What is the fastest way of accessing a row in a table?
Using ROWID.
CONSTRAINTS
46. What is an integrity constraint?
Integrity constraint is a rule that restricts values to a column in a table.
47. What is referential integrity constraint?
Maintaining data integrity through a set of rules that restrict the values of one or more columns of the tables based on the values of primary key or unique key of the referenced table.
48. What is the usage of SAVEPOINTS?
SAVEPOINTS are used to subdivide a transaction into smaller parts. It enables rolling back part of a transaction. Maximum of five save points are allowed.
49. What is ON DELETE CASCADE?
When ON DELETE CASCADE is specified Oracle maintains referential integrity by automatically removing dependent foreign key values if a referenced primary or unique key value is removed.
50. What are the data types allowed in a table?
CHAR, VARCHAR2, NUMBER, DATE, RAW, LONG and LONG RAW.
Components of a Test Strategy
TEST STRATEGY:
This is a company-level document, developed by quality analysts or project-manager-level people; it defines the overall "testing approach".
Components:
a) Scope & Objective: definition and purpose of testing in the organization
b) Business Issues: budget control for testing
c) Test Approach: mapping between development stages and testing issues
d) Test Deliverables: testing documents to be prepared
e) Roles & Responsibilities: names of the jobs in the testing team and their responsibilities
f) Communication & Status Reporting: required negotiation between the testing and development teams during test execution
g) Automation & Testing Tools: purpose of automation and possibilities for test automation
h) Testing Measurements & Metrics: QAM, TTM, PCM
i) Risks & Mitigation: possible problems that may arise in testing, and solutions to overcome them
j) Change & Configuration Management: how to handle change requests during testing
k) Training Plan: training sessions required for the testing team before the testing process starts
SQL Database Testing Interview Questions - Part 2
14. Which system table contains information on constraints on all the tables created?
USER_CONSTRAINTS.
15. What is the difference between TRUNCATE and DELETE commands?
TRUNCATE is a DDL command whereas DELETE is a DML command. Hence, DELETE can be rolled back, but a TRUNCATE operation cannot be rolled back. A WHERE clause can be used with DELETE but not with TRUNCATE.
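SQLite has no TRUNCATE statement, but the DELETE half of the distinction can be sketched with Python's sqlite3 module: because DELETE is DML, the change is logged and can be rolled back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions explicitly
conn.execute("CREATE TABLE emp (ename TEXT)")
conn.execute("INSERT INTO emp VALUES ('SMITH')")

conn.execute("BEGIN")
conn.execute("DELETE FROM emp")      # removes the row inside the transaction
conn.execute("ROLLBACK")             # ...and the deletion is undone

print(conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0])  # prints 1
```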
16. State true or false. !=, <>, ^= all denote the same operation?
True.
17. State true or false. EXISTS, SOME, ANY are operators in SQL?
True.
18. What will be the output of the following query?
SELECT REPLACE(TRANSLATE(LTRIM(RTRIM('!! ATHEN !!','!'), '!'), 'AN', '**'),'*','TROUBLE') FROM DUAL;
' TROUBLETHETROUBLE ' (RTRIM and LTRIM strip the '!' characters, TRANSLATE maps 'A' and 'N' to '*', and REPLACE expands each '*' to 'TROUBLE').
19. What does the following query do?
SELECT SAL + NVL(COMM,0) FROM EMP;
This displays the total earnings (salary plus commission) of each employee. Null values in the commission column are replaced by 0 and added to the salary.
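The same query can be tried in SQLite from Python, using the portable COALESCE in place of Oracle's NVL (the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, sal REAL, comm REAL)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [("SMITH", 800, None), ("ALLEN", 1600, 300)])

# COALESCE(comm, 0) plays the role of Oracle's NVL(comm, 0)
rows = conn.execute(
    "SELECT ename, sal + COALESCE(comm, 0) FROM emp ORDER BY ename").fetchall()
print(rows)  # [('ALLEN', 1900.0), ('SMITH', 800.0)]
```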
20. What is the advantage of specifying WITH GRANT OPTION in the GRANT command?
The privilege receiver can further grant the privileges he/she has obtained from the owner to any other user.
21. Which command executes the contents of a specified file?
START or @.
22. What will be the values of sal and comm after executing the following query, if the initial value of sal is 10000?
UPDATE EMP SET SAL = SAL + 1000, COMM = SAL*0.1;
sal = 11000, comm = 1000.
23. Which command displays the SQL command in the SQL buffer, and then executes it?
RUN.
24. What command is used to get back the privileges offered by the GRANT command?
REVOKE.
25. What will be the output of the following query? SELECT DECODE(TRANSLATE('A','1234567890','1111111111'), '1','YES', 'NO' );
NO.
Explanation : The query checks whether a given string is a numerical digit.
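The check can be mimicked in plain Python for a one-character input (TRANSLATE and DECODE are Oracle functions; this helper is only an illustration of what the query does):

```python
def is_digit_oracle_style(s):
    # TRANSLATE(s, '1234567890', '1111111111'): map every digit to '1'
    mapped = "".join("1" if ch in "1234567890" else ch for ch in s)
    # DECODE(mapped, '1', 'YES', 'NO'): compare the mapped string with '1'
    return "YES" if mapped == "1" else "NO"

print(is_digit_oracle_style("A"), is_digit_oracle_style("7"))  # NO YES
```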
26. Which date function is used to find the difference between two dates?
MONTHS_BETWEEN.
27. What operator performs pattern matching?
LIKE operator.
SQL / Database Testing Interview Questions - Part 1
SQL and PL/SQL Interview questions-I
1. The most important DDL statements in SQL are:
CREATE TABLE - creates a new database table
ALTER TABLE - alters (changes) a database table
DROP TABLE - deletes a database table
CREATE INDEX - creates an index (search key)
DROP INDEX - deletes an index
2. Operators used in SELECT statements.
= Equal
<> or != Not equal
> Greater than
< Less than
>= Greater than or equal
<= Less than or equal
BETWEEN Between an inclusive range
LIKE Search for a pattern
3. SELECT statements:
SELECT column_name(s) FROM table_name
SELECT DISTINCT column_name(s) FROM table_name
SELECT column FROM table WHERE column operator value
SELECT column FROM table WHERE column LIKE pattern
SELECT column,SUM(column) FROM table GROUP BY column
SELECT column,SUM(column) FROM table GROUP BY column HAVING SUM(column) condition value
Note that text values must be enclosed in single quotes, and numeric values should not be enclosed in quotes. Double quotes may be acceptable in some databases.
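The GROUP BY ... HAVING form above can be tried with Python's built-in sqlite3 module (table and column names are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("acme", 100), ("acme", 250), ("globex", 75)])

# HAVING filters the groups after aggregation, unlike WHERE which filters rows
rows = conn.execute("""
    SELECT customer, SUM(amount) FROM orders
    GROUP BY customer HAVING SUM(amount) > 200
""").fetchall()
print(rows)  # [('acme', 350)]
```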
4. The SELECT INTO Statement is most often used to create backup copies of tables or for archiving records.
SELECT column_name(s) INTO newtable [IN externaldatabase] FROM source
SELECT column_name(s) INTO newtable [IN externaldatabase] FROM source WHERE column_name operator value
5. The INSERT INTO Statements:
INSERT INTO table_name VALUES (value1, value2,....)
INSERT INTO table_name (column1, column2,...) VALUES (value1, value2,....)
6. The Update Statement:
UPDATE table_name SET column_name = new_value WHERE column_name = some_value
7. The Delete Statements:
DELETE FROM table_name WHERE column_name = some_value
Delete All Rows:
DELETE FROM table_name or DELETE * FROM table_name
8. Sort the Rows:
SELECT column1, column2, ... FROM table_name ORDER BY columnX, columnY, ..
SELECT column1, column2, ... FROM table_name ORDER BY columnX DESC
SELECT column1, column2, ... FROM table_name ORDER BY columnX DESC, columnY ASC
9. The IN operator may be used if you know the exact value you want to return for at least one of the columns.
SELECT column_name FROM table_name WHERE column_name IN (value1,value2,..)
10. BETWEEN ... AND
SELECT column_name FROM table_name WHERE column_name BETWEEN value1 AND value2
The values can be numbers, text, or dates.
11. What is the use of CASCADE CONSTRAINTS?
When this clause is used with the DROP command, a parent table can be dropped even when a child table exists.
12. Why does the following command give a compilation error?
DROP TABLE &TABLE_NAME; Table names must begin with a letter. Here the name starts with an '&' symbol, which is not a valid identifier (in SQL*Plus, '&' marks a substitution variable).
13. Which system tables contain information on privileges granted and privileges obtained?
USER_TAB_PRIVS_MADE, USER_TAB_PRIVS_RECD
A Lifecycle Approach to Systems Quality?
... Because you can’t test in quality at the end.
Summary
Let’s face it—for many years in many companies, a de facto wall has existed between development and testing organizations. Requirements analysis, design and development have happened first, with testing done in a silo—at the end of the development process. This approach has never been ideal, but it has usually worked for getting passable products to the marketplace.
When products get smarter, not only do the opportunities for quality problems multiply, but customer expectations in all respects—including quality, timely delivery and cost—tend to increase. As you go from specifications through analysis, architecture, design and implementation, the cost of finding and fixing a problem grows. Finding an issue in the field can be especially costly to the bottom line and the brand.
This white paper examines the changing role of quality assurance and looks at approaches to effectively extend application lifecycle management (ALM) to include quality management and how this improves project outcomes.
Download Here - https://goo.gl/XK0rA
Performance Testing’s Value Proposition?
Title: Performance Testing’s Value Proposition to the Business
Speaker: Dan Downing, Principal Consultant, Mentora Group LLC
Date: July 26, 2011
Time: 10:00 AM-11:00 AM PT
Session Description
Performance testing can provide a great deal of value in a wide variety of business areas. Most of this value is rarely achieved because few people understand the various value propositions for performance testing, let alone which of those are achievable for their team or organization. This segment by Dan Downing will show you both how much more value you can get from your performance testing if you simply look or ask for it, and what information you may be asking for that could actually be limiting the overall value your team's performance testing provides.
About Dan Downing
Dan Downing is a co-founder and Principal Consultant at Mentora Group, Inc., a testing and managed hosting company. Dan is the author of the 5-Steps of Load Testing, which he taught at Mercury Education Centers, and of numerous presentations, white papers and articles on performance testing. He teaches load testing and over the past 13 years has led hundreds of performance projects on applications ranging from eCommerce to ERP and companies ranging from startups to global enterprises. He is a regular presenter at STAR, HP Software Universe, Software Test Professionals Conference, and Workshop on Performance and Reliability (WOPR) conferences.
Head Towards: https://goo.gl/Q02Dk
Ensuring Code Quality in Multi-threaded Applications
- by Rutul Dave
The Technology Behind Static Code Analysis for Identifying Concurrency Defects
Most developers would agree that consumers of software today continually demand more from their applications. Because of its pace of evolution to date, the world now anticipates a seemingly endless expansion of capabilities from their software, regardless of where that software is applied. For the last 40 years of computing, Moore’s Law has held true, with the number of transistors available in an integrated circuit doubling approximately every two years.
This constant pace of innovation provided software developers with critical hardware resources, enabling them to expand the features and functionalities of their applications without concern for performance. However, while Moore’s Law still is holding steady, single-core processing has reached its performance ceiling. In response, hardware manufacturers have developed new multi-core (or multi-processor) devices to accomplish the speed gains to which the world had grown accustomed.
Today, the world of software development is presented with a new challenge. To fully leverage this new class of multi-core hardware, software developers must change the way they create applications.
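A sketch of the kind of concurrency defect the article is about, a data race on shared state, together with its lock-based fix, written here in Python purely for illustration:

```python
import threading

# A classic data race: a read-modify-write on shared state without a lock.
counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1       # not atomic: load, add, store can interleave

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:         # the lock makes the read-modify-write atomic
            counter += 1

def run(worker, n=100_000, nthreads=4):
    """Run `worker` in several threads and return the final counter."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,))
               for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(safe_increment))    # always 400000
print(run(unsafe_increment))  # may fall short of 400000 if updates are lost
```

This is exactly the class of defect that is hard to reproduce by testing, since the interleaving is timing-dependent, which is why the article argues for static analysis to find it.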
Read More at: https://goo.gl/MuSJL
What can be done if requirements are changing continuously?
This is a common problem for organizations where there are expectations that requirements can be pre-determined and remain stable. If these expectations are reasonable, here are some approaches:
Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.
It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.
If the code is well-commented and well-documented this makes changes easier for the developers.
Use some type of rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
Balance the effort put into setting up automated testing with the expected effort required to refactor them to deal with changes.
Try to design some flexibility into automated test scripts.
Focus initial automated testing on application aspects that are most likely to remain unchanged.
Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans)
Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).
If this is a continuing problem, and the expectation that requirements can be pre-determined and remain stable is NOT reasonable, it may be a good idea to figure out why the expectations are not aligned with reality, and to refactor an organization's or project's software development process to take this into account. It may be appropriate to consider agile development approaches.
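One concrete way to act on the advice above about designing flexibility into test cases is to keep test data and expected results in a table separate from the test logic, so a requirements change means editing data rather than code. A minimal data-driven sketch (the discount function and its cases are illustrative):

```python
# Data-driven testing: the cases live in a table, the logic stays generic.
# When a requirement changes, only the table needs updating.

def discount(total):
    """Illustrative function under test: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

CASES = [
    # (input, expected) - edit this table when requirements change
    (50, 50),
    (100, 90.0),
    (200, 180.0),
]

def run_cases(fn, cases):
    """Return the list of failing cases; an empty list means all passed."""
    return [(x, fn(x), want) for x, want in cases if fn(x) != want]

print(run_cases(discount, CASES))  # [] when every case passes
```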
Reference: https://www.softwareqatest.com/qat_lfaq1.html
Overview of Load Testing Tools?
This document has been designed to give an overview of existing software and applications that deal with load testing in all its aspects. It was written from the perspective of creating a new application in this domain, either from scratch or from existing parts. Its role is to highlight what is possible, how the existing applications are designed, which parts we may benefit from, and which parts we may drop.
After a broad (though not exhaustive, given the necessary trade-off between the huge number of platforms and the time available for this preliminary study) search of existing software, we restricted the list to the products that appeared most interesting to us. We selected both open source and commercial software, since we can take good ideas from commercial software and reuse parts of code from open source applications.
This document analyses these applications, identifying what is specific to each one, what it makes possible, and finally its advantages and drawbacks.
OpenSTA
Abstract
OpenSTA is open source software developed in C++ and released under the GPL licence. It is an HTTP load-testing application with scenario creation and monitoring, which can be distributed across different computers. The version evaluated below is v1.4.1 from July 2002.
More information can be found on the web site home page: https://opensta.org/
Scenario
OpenSTA provides a scripting language that lets you simulate the activity of a user (the sequence of accessed files, form filling, the delay between each user interaction (think time), and so on). This language describes HTTP/S scenarios only, so with OpenSTA you can build load tests exclusively for web applications.
You can write scripts with the editor provided with the application, or use a module that records a browser's requests and responses and generates a script automatically. To simulate a large number of users, the script is executed by each virtual user at the same time.
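The core idea, many virtual users replaying a scripted sequence with think time between steps, can be sketched in a few lines of Python (send_request is a stub standing in for a real HTTP call; this is not OpenSTA's actual script language):

```python
import threading
import time

# Sketch of a load-test runner: N virtual users each replay a scripted
# sequence of requests, pausing between steps to simulate think time.

results = []
results_lock = threading.Lock()

def send_request(url):
    """Stub standing in for a real HTTP request."""
    time.sleep(0.001)          # stand-in for network latency
    return 200                 # stand-in for an HTTP status code

def virtual_user(user_id, script, think_time=0.01):
    for url in script:
        status = send_request(url)
        with results_lock:     # record each step's outcome
            results.append((user_id, url, status))
        time.sleep(think_time) # think time between user interactions

script = ["/login", "/search", "/logout"]
users = [threading.Thread(target=virtual_user, args=(i, script))
         for i in range(5)]
for u in users:
    u.start()
for u in users:
    u.join()

print(len(results))  # 5 users x 3 steps = 15 recorded outcomes
```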
Download the following document to read more about load testing tools such as
DIESELTEST, TESTMAKER, GRINDER, JMETER, LOADRUNNER (LOADTEST), and RUBIS.
Reference: https://clif.objectweb.org/load_tools_overview.pdf
Best Ways to choose a test automation tool?
It's easy to get caught up in enthusiasm for the 'silver bullet' of test automation, where the dream is that a single mouse click can initialize thorough unattended testing of an entire software application, bugs will be automatically reported, and easy-to-understand summary reports will be waiting in the manager's in-box in the morning.
Although that may in fact be possible in some situations, it is not the way things generally play out.
In manual testing, the test engineer exercises software functionality to determine if the software is behaving in an expected way. This means that the tester must be able to judge what the expected outcome of a test should be, such as expected data outputs, screen messages, changes in the appearance of a User Interface, XML files, database changes, etc. In an automated test, the computer does not have human-like 'judgement' capabilities to determine whether or not a test outcome was correct. This means there must be a mechanism by which the computer can do an automatic comparison between actual and expected results for every automated test scenario and unambiguously make a pass or fail determination. This factor may require a significant change in the entire approach to testing, since in manual testing a human is involved and can:
1. make mental adjustments to expected test results based on variations in the pre-test state of the software system
2. often make on-the-fly adjustments, if needed, to data used in the test
3. make pass/fail judgements about results of each test
4. make quick judgments and adjustments for changes to requirements.
5. make a wide variety of other types of judgements and adjustments as needed.
Reference: https://www.softwareqatest.com/
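The "automatic comparison between actual and expected results" described above is, at its simplest, a stored expected value and an unambiguous comparison. A minimal sketch (the tax function and the cases are illustrative):

```python
# Simplest automated pass/fail mechanism: capture the actual output,
# compare it to a stored expected value, and record an unambiguous verdict.

def tax(amount):
    """Illustrative function under test: flat 20% tax, rounded to cents."""
    return round(amount * 0.20, 2)

def check(name, actual, expected):
    verdict = "PASS" if actual == expected else "FAIL"
    return f"{verdict}: {name} (actual={actual!r}, expected={expected!r})"

report = [
    check("tax of 100", tax(100), 20.0),
    check("tax of 19.99", tax(19.99), 4.0),
]
print("\n".join(report))
```

Unlike a human tester, the comparison cannot exercise judgement, so the expected value must be exact and the pre-test state must be controlled.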
Important Considerations for Test Automation?
Reference: https://goo.gl/HOi8C (Author - Bazman)
Often when a test automation tool is introduced to a project, the expectations for the return on investment are very high. Project members anticipate that the tool will immediately narrow down the testing scope, meaning reducing cost and schedule. However, I have seen several test automation projects fail - miserably.
The following very simple factors largely influence the effectiveness of automated testing and, if not taken into account, the result is usually a lot of lost effort and very expensive ‘shelfware’.
Scope - It is not practical to try to automate everything, nor is there the time available generally. Pick very carefully the functions/areas of the application that are to be automated.
Preparation Timeframe - The preparation time for automated test scripts has to be taken into account. In general, the preparation time for automated scripts can be two to three times longer than for manual testing. In reality, chances are that initially the tool will actually increase the testing scope. It is therefore very important to manage expectations. An automated testing tool does not replace manual testing, nor does it replace the test engineer. Initially, the test effort will increase, but when automation is done correctly it will decrease on subsequent releases.
Return on Investment - Because the preparation time for test automation is so long, I have heard it stated that the benefit of test automation only begins to occur after approximately the third time the tests have been run. When is the benefit to be gained? Choose your objectives wisely, and seriously think about when and where the benefit is to be gained. If your application is changing significantly and regularly, forget about test automation - you will spend so much time updating your scripts that you will not reap many benefits. [However, if only disparate sections of the application are changing, or the changes are minor - or if there is a specific section that is not changing - you may still be able to successfully utilise automated tests.] Bear in mind that you may only ever be able to do a complete automated test run when your application is almost ready for release - i.e. nearly fully tested! If your application is very buggy, then the likelihood is that you will not be able to run a complete suite of automated tests, due to the failing functions encountered.
The Degree of Change – The best use of test automation is for regression testing, whereby you use automated tests to ensure that pre-existing functions (e.g. functions from version 1.0, i.e. not new functions in this release) are unaffected by any changes introduced in version 1.1. And, since proper test automation planning requires that the test scripts are designed so that they are not totally invalidated by a simple GUI change (such as renaming or moving a particular control), you need to take into account the time and effort required to update the scripts. For example, if your application is changing significantly, the scripts from version 1.0 may need to be completely re-written for version 1.1, and the effort involved may be prohibitive at worst, and underestimated at best! However, if only disparate sections of the application are changing, or the changes are minor, you should be able to successfully utilise automated tests to regress these areas.
Test Integrity - how do you know (measure) whether a test passed or failed? Just because the tool returns a ‘pass’ does not necessarily mean that the test itself passed. For example, just because no error message appears does not mean that the next step in the script successfully completed. This needs to be taken into account when specifying test script pass/fail criteria.
Test Independence - Test independence must be built in so that a failure in the first test case won't cause a domino effect and either prevent, or cause to fail, the rest of the test scripts in that test suite. However, in practice this is very difficult to achieve.
Debugging or "testing" of the actual test scripts themselves - time must be allowed for this, and to prove the integrity of the tests themselves.
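Test independence is typically built in by giving every test a fresh fixture, so one test's side effects or failures cannot cascade into the next. A minimal sketch using Python's unittest (the Cart class is illustrative):

```python
import unittest

class Cart:
    """Illustrative class under test."""
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)

class CartTests(unittest.TestCase):
    def setUp(self):
        # A fresh Cart per test method: no state leaks between tests,
        # so one failure cannot domino into the rest of the suite.
        self.cart = Cart()

    def test_starts_empty(self):
        self.assertEqual(self.cart.items, [])

    def test_add(self):
        self.cart.add("book")
        self.assertEqual(self.cart.items, ["book"])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CartTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```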
Record & Playback - DO NOT RELY on record & playback as the SOLE means to generate scripts. The idea is great. You execute the test manually while the test tool sits in the background and remembers what you do. It then generates a script that you can run to re-execute the test. It's a great idea - that rarely works (and proves very little).
Maintenance of Scripts - Finally, there is a high maintenance overhead for automated test scripts - they have to be continuously kept up to date, otherwise you will end up abandoning hundreds of hours work because there has been too many changes to an application to make modifying the test script worthwhile. As a result, it is important that the documentation of the test scripts is kept up to date also.
Maintenance of Scripts - Finally, there is a high maintenance overhead for automated test scripts - they have to be continuously kept up to date, otherwise you will end up abandoning hundreds of hours' work because there have been too many changes to an application to make modifying the test scripts worthwhile. As a result, it is important that the documentation of the test scripts is kept up to date also.