September 2012

What is Bucket Testing?

September 17, 2012 Kapil




Bucket testing is also called split testing or A/B testing. In this context, a "bucket" is simply a group into which incoming users (or their page requests) are placed: each bucket is served a different version of the page, and the behavior of the buckets is then measured and compared. The methodology is used mainly for website testing rather than for general software testing.

Bucket testing is a methodology for gauging the impact of different product designs on a web site's metrics. The basic premise is to run two versions of a single page (or set of pages) simultaneously and measure the difference in clicks, traffic, transactions, and so on between the two. Bucket testing provides a safe way to send a small amount of traffic (usually less than 5%) to a new user interface, so that the bottom line is not badly hurt if the new design has unintended negative consequences.

Bucket testing is often considered a market-testing methodology that supports QA within an organization. It compares a baseline control sample against test samples in which a single variable has been changed, with the aim of improving response rates across the whole site. It is a classic direct-mail strategy that has been adapted to the interactive space, where it is applied to landing pages, emails, banner ads, and so on.
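The bucket assignment described above can be sketched in a few lines. The sketch below is illustrative (the function and experiment names are hypothetical, not from any particular A/B-testing library): hashing the user id together with the experiment name deterministically maps each user to a value in [0, 1), and users below the chosen test fraction (here 5%) see the new variant.

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, test_fraction: float = 0.05) -> str:
    """Deterministically assign a user to the 'control' or 'test' bucket.

    The same user always lands in the same bucket for a given experiment,
    so each visitor sees a consistent version of the page.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    value = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to a value in [0, 1]
    return "test" if value < test_fraction else "control"

# A returning visitor is bucketed consistently:
assert assign_bucket("user42", "new-homepage") == assign_bucket("user42", "new-homepage")
```

Because the assignment is a pure function of the user id and experiment name, no per-user state needs to be stored, and roughly 5% of traffic ends up in the test bucket.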


The Art of BUG Reporting

September 13, 2012 Kapil



Verification vs Validation

Kapil




Verification and Validation: Definition, Differences, Details:

The terms ‘Verification’ and ‘Validation’ are frequently used in the software testing world, but their meanings are often vague and debatable. You will encounter (or have encountered) all kinds of usages and interpretations of these terms, and it is my humble attempt here to distinguish between them as clearly as possible.

Definition
  • Verification: The process of evaluating the work-products (not the actual final product) of a development phase to determine whether they meet the specified requirements for that phase.
  • Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies the specified business requirements.

Objective
  • Verification: To ensure that the product is being built according to the requirements and design specifications. In other words, to ensure that work products meet their specified requirements.
  • Validation: To ensure that the product actually meets the user’s needs, and that the specifications were correct in the first place. In other words, to demonstrate that the product fulfills its intended use when placed in its intended environment.

Question
  • Verification: Are we building the product right?
  • Validation: Are we building the right product?

Evaluation Items
  • Verification: Plans, Requirement Specs, Design Specs, Code, Test Cases
  • Validation: The actual product/software.

Activities
  • Verification: Reviews, Walkthroughs, Inspections
  • Validation: Testing

It is entirely possible that a product passes when verified but fails when validated. This can happen when, say, a product is built as per the specifications but the specifications themselves fail to address the user’s needs.
  • Trust but Verify.
  • Verify but also Validate.



BUGS HAVE FEELINGS TOO

September 11, 2012 Kapil



Dimensions of Software Quality

Kapil



Below are some dimensions of quality:
  • Accessibility: The degree to which software can be used comfortably by a wide variety of people, including those who require assistive technologies like screen magnifiers or voice recognition.
  • Compatibility: The suitability of software for use in different environments like different Operating Systems, Browsers, etc.
  • Concurrency: The ability of software to service multiple requests to the same resources at the same time.
  • Efficiency: The ability of software to perform well or achieve a result without wasted energy, resources, effort, time or money.
  • Functionality: The ability of software to carry out the functions as specified or desired.
  • Installability: The ability of software to be installed in a specified environment.
  • Localizability: The ability of software to be used in different languages, time zones, etc.
  • Maintainability: The ease with which software can be modified (adding features, enhancing features, fixing bugs, etc.)
  • Performance: The speed at which software performs under a particular load.
  • Portability: The ability of software to be transferred easily from one location to another.
  • Reliability: The ability of software to perform a required function under stated conditions for a stated period of time without any errors.
  • Scalability: The measure of software’s ability to increase or decrease in performance in response to changes in software’s processing demands.
  • Security: The extent of protection of software against unauthorized access, invasion of privacy, theft, loss of data, etc.
  • Testability: The ability of software to be easily tested.
  • Usability: The degree of software’s ease of use.
When someone says “This software is of a very high quality.”, you might want to ask “In which dimension of quality?”
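Some of these dimensions can be probed mechanically. As an illustrative sketch (the unit under test, `handle_request`, is a hypothetical stand-in for a real entry point), the Performance and Concurrency dimensions could be checked with a small harness:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload: str) -> str:
    # Hypothetical unit under test; replace with the real entry point.
    return payload.upper()

def measure_performance(n_requests: int = 1000) -> float:
    """Time n sequential requests (Performance dimension)."""
    start = time.perf_counter()
    for i in range(n_requests):
        handle_request(f"request-{i}")
    return time.perf_counter() - start

def check_concurrency(n_workers: int = 8, n_requests: int = 100) -> bool:
    """Issue simultaneous requests (Concurrency dimension) and verify
    that every one of them is serviced correctly."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(handle_request,
                                (f"r-{i}" for i in range(n_requests))))
    return all(r.startswith("R-") for r in results)
```

Each quality dimension tends to need its own kind of check, which is exactly why "high quality" is meaningless without naming the dimension.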


Software Testing HILARIOUS facts!

September 10, 2012 Kapil



The following jokes related to software testing have been compiled from forwarded emails and internet resources. Thanks to the ones who thought of them first.

Testing Definition
To tell somebody that he is wrong is called criticism. To do so officially is called testing.

Sign On Testers’ Doors
Do not disturb. Already disturbed!

Experience Counts

There was a software tester who had an exceptional gift for finding bugs. After serving his company for many years, he happily retired. Several years later, the company contacted him regarding a bug in a multi-million-dollar application which no one in the company was able to reproduce. They tried for many days to replicate the bug but without success.
 
In desperation, they called on the retired software tester and after much persuasion he reluctantly took the challenge.
   
He came to the company and started studying the application. Within an hour, he provided the exact steps to reproduce the problem and left. The bug was then fixed.

Later, the company received a bill for $50,000 from the software tester for his service. The company was stunned with the exorbitant bill for such a short duration of service and demanded an itemized accounting of his charges.

The software tester responded with the itemization:
  • Bug Report: $1
  • Knowing where to look: $49,999

Signs that you’re dating A Tester
  • Your love letters get returned to you marked up with red ink, highlighting your grammar and spelling mistakes.
  • When you tell him that you won’t change something he has asked you to change, he’ll offer to allow you two other flaws in exchange for correcting this one.
  • When you ask him how you look in a dress, he’ll actually tell you.
  • When you give him the “It’s not you, it’s me” breakup line, he’ll agree with you and give the specifics.
  • He won’t help you change a broken light bulb because his job is simply to report and not to fix.
  • He’ll keep bringing up old problems that you’ve since resolved just to make sure that they’re truly gone.
  • In the bedroom, he keeps “probing” the incorrect “inputs”.




Proudly powered by Kapil Saxena.