Describe briefly the questionnaire method of collecting primary data. State the essentials of a good questionnaire.
Introduction
The questionnaire method is one of the most effective and widely used techniques for collecting primary data in research. It involves asking a set of predefined questions to respondents, who provide their answers either in written or electronic formats. This method is particularly useful in gathering data from large groups, allowing researchers to collect information directly from individuals about their opinions, behaviors, or experiences. Questionnaires can be administered through multiple channels such as online surveys, mail, or in-person distribution. They are essential in social sciences, business research, healthcare studies, and more.
A questionnaire offers several advantages, including cost-effectiveness, standardized data collection, and ease of analysis. However, its success largely depends on how well the questionnaire is designed and executed. Poorly framed questions can lead to biased responses, misunderstanding, or non-response, ultimately affecting the quality of data. Therefore, it is essential to follow best practices in structuring questionnaires to ensure data accuracy and validity. This paper discusses the concept of the questionnaire method, its types, and guidelines for crafting a well-designed questionnaire.
Concept & Analysis
The questionnaire method is a structured approach to primary data collection, involving a list of questions aimed at obtaining specific information from respondents. These questions can take various forms, such as open-ended, closed-ended, multiple-choice, rating scales, or Likert scales. Researchers use questionnaires to collect both quantitative (numerical) and qualitative (descriptive) data, depending on the study’s objectives.
Types of Questions in a Questionnaire:
- Open-ended Questions:
- Allow respondents to express their thoughts freely.
- Useful for collecting detailed, subjective insights but can be harder to analyze.
- Closed-ended Questions:
- Provide a limited set of responses (e.g., Yes/No or Multiple Choice).
- Easier to analyze and suitable for quantitative research.
- Scaled Questions:
- Use rating scales (e.g., 1-5 or Strongly Agree to Strongly Disagree) to measure attitudes or opinions.
- Demographic Questions:
- Collect information about respondents’ background (e.g., age, gender, occupation) for analysis.
Administration of Questionnaires:
Questionnaires can be administered in various ways, including online surveys, face-to-face distribution, email or postal mail, and telephone interviews. Online platforms have become increasingly popular due to convenience, low cost, and automation of data collection.
Advantages of the Questionnaire Method:
- Cost-Effective: Less expensive than other data collection methods such as interviews.
- Time-Saving: Can collect data from a large population within a short timeframe.
- Standardization: The same questions are asked of all respondents, ensuring consistency.
- Easy Analysis: Responses from structured questions can be quantified and analyzed using statistical tools.
- Anonymity: Encourages honest answers, especially for sensitive topics.
Challenges and Limitations:
- Response Bias: Participants may provide socially desirable answers instead of honest opinions.
- Non-response Issues: Some individuals may not participate or leave the questionnaire incomplete.
- Limited Depth: Closed-ended questions may not capture detailed insights.
- Misinterpretation: Poorly framed questions can confuse respondents and lead to inaccurate responses.
Essentials of a Good Questionnaire:
To design an effective questionnaire, the following elements are crucial:
- Clear Objectives:
- The questionnaire must align with the research goals, ensuring all questions contribute meaningfully to the study.
- Simple and Unambiguous Language:
- Questions should be easy to understand, avoiding jargon, complex terms, or double-barreled questions.
- Logical Structure and Sequence:
- Start with general questions to engage the respondent and move to more specific or sensitive questions later.
- Balanced and Neutral Questions:
- Avoid leading or biased questions that may sway responses. For example, “Don’t you think our service is excellent?” should be replaced with a neutral statement.
- Conciseness:
- Keep the questionnaire brief to maintain respondent engagement. Long questionnaires increase the likelihood of non-completion.
- Pre-testing (Pilot Testing):
- Conduct a small-scale trial to identify any issues and refine the questionnaire.
- Clear Instructions:
- Ensure respondents know how to answer, especially for rating scales and complex formats.
- Confidentiality and Anonymity:
- Reassure participants that their responses will remain confidential to encourage truthful answers.
- Variety in Question Types:
- Use a mix of open-ended, closed-ended, and scaled questions to capture both numerical and descriptive data.
- Ease of Data Analysis:
- Structure questions in a way that facilitates statistical analysis, such as using predefined response categories.
Conclusion
The questionnaire method is a practical and efficient approach for collecting primary data, especially when dealing with large populations. Its flexibility allows researchers to gather both quantitative and qualitative data, providing valuable insights across various fields such as business, healthcare, and social sciences. However, the effectiveness of this method depends heavily on the design and execution of the questionnaire. Poorly crafted questions can lead to biased responses or misinterpretations, compromising the validity of the research findings.
A good questionnaire follows essential principles, including clear objectives, simple language, logical flow, and neutral questions. It must also ensure brevity to keep respondents engaged, with clear instructions and assurances of confidentiality to encourage honest participation. Pre-testing is critical to identifying and resolving any issues before the questionnaire is distributed on a larger scale. When these guidelines are followed, the questionnaire method becomes a powerful tool for reliable data collection, supporting informed decision-making and research outcomes.
Discuss the importance of measuring variability for managerial decision-making.
Introduction
In managerial decision-making, understanding and analyzing data is crucial for making informed choices. One essential aspect of data analysis is measuring variability, which refers to the degree to which data points in a dataset differ from each other or from the mean. Variability helps managers assess the extent of fluctuations or dispersion within a set of data, whether related to sales performance, production output, financial returns, or customer feedback.
Knowing only the average or mean value can often be misleading, as it does not capture the distribution of data points. For instance, two products with the same average sales may differ significantly in consistency—one may experience highly fluctuating sales while the other maintains stable performance. This makes variability a key metric for managers seeking to minimize risk, improve efficiency, and make accurate forecasts.
By understanding variability, managers can assess business uncertainties, measure process stability, and develop contingency plans. Key statistical measures of variability, such as range, variance, standard deviation, and coefficient of variation, allow organizations to determine how consistent processes or outcomes are, thus supporting better decisions in areas like budgeting, forecasting, inventory control, and resource allocation.
Concept & Analysis
Variability refers to the extent to which individual data points in a dataset differ from each other or from the central value (e.g., mean or median). In business contexts, variability provides insights into the consistency and reliability of performance, helping managers assess how much outcomes deviate from expectations. The greater the variability, the higher the level of uncertainty and risk.
Measures of Variability
- Range:
- The simplest measure of variability, representing the difference between the highest and lowest values in a dataset.
- Useful for quick comparisons but sensitive to extreme values (outliers).
Example: If sales vary between 100 and 500 units across weeks, the range is 400 units.
- Variance:
- Measures the average of the squared deviations of each data point from the mean.
- Larger variances indicate higher dispersion or inconsistency.
- Formula: σ² = Σ(xᵢ − μ)² / N, where xᵢ is each observation, μ is the mean, and N is the number of observations.
- Standard Deviation (SD):
- The square root of the variance, providing a measure of spread in the same units as the original data.
- A smaller SD indicates data points are closer to the mean, whereas a larger SD signals greater dispersion.
Example: In financial returns, SD can show the volatility of stock prices over time.
- Coefficient of Variation (CV):
- Expresses the standard deviation as a percentage of the mean, helping managers compare variability across datasets with different scales.
- Formula: CV = (σ / x̄) × 100, where σ is the standard deviation and x̄ is the mean.
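To make these four measures concrete, the short Python sketch below computes each of them for a hypothetical series of weekly sales figures (the data values are illustrative only):

```python
# Computing range, variance, standard deviation and coefficient of variation
# for a hypothetical set of weekly sales figures (in units).
import statistics

weekly_sales = [100, 250, 300, 450, 500]

data_range = max(weekly_sales) - min(weekly_sales)   # Range: 500 - 100 = 400 units
variance = statistics.pvariance(weekly_sales)        # Average squared deviation from the mean
std_dev = statistics.pstdev(weekly_sales)            # Square root of the variance
mean = statistics.mean(weekly_sales)
cv = (std_dev / mean) * 100                          # SD expressed as a percentage of the mean

print(f"Range: {data_range} units")
print(f"Variance: {variance:.1f}")
print(f"Standard deviation: {std_dev:.1f} units")
print(f"Coefficient of variation: {cv:.1f}%")
```

A low coefficient of variation would indicate relatively stable weekly sales, while a high value signals volatile demand that may warrant buffer stock or closer planning.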
Importance of Measuring Variability in Managerial Decision-Making
- Risk Assessment and Control:
- Variability helps managers evaluate the risks involved in processes or investments.
- For example, analyzing sales variability can help predict demand fluctuations, allowing businesses to manage inventory levels effectively and reduce stockouts or overstock situations.
- Forecasting Accuracy:
- Understanding past variability aids in making more accurate predictions about future trends.
- In financial planning, knowing the variability of cash flows helps companies develop realistic budgets and avoid liquidity issues.
- Process Improvement and Quality Control:
- In manufacturing, tracking variability ensures that processes stay within acceptable limits.
- Tools such as Six Sigma use standard deviation to identify variations and reduce defects, ensuring consistent quality.
- Performance Evaluation and Benchmarking:
- Managers can use variability to assess employee or product performance.
- For example, low variability in a sales team's performance indicates consistent effort, while high variability suggests uneven performance that may require attention.
- Inventory Management and Resource Allocation:
- Variability in demand patterns can influence decisions regarding inventory levels.
- If demand variability is high, businesses may adopt flexible inventory strategies, such as holding safety stock, to mitigate risks (a brief worked illustration follows this list).
- Customer Satisfaction Analysis:
- Measuring variability in customer feedback helps businesses identify inconsistencies in service delivery.
- For example, highly variable customer ratings might signal issues with service consistency, requiring targeted improvements.
- Decision-making under Uncertainty:
- In uncertain environments, understanding variability helps managers develop contingency plans and hedge against risks.
- For example, a business facing fluctuating exchange rates might use variability measures to decide on hedging strategies.
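As a brief illustration of the safety-stock point above, one common formulation (assuming variable daily demand and a fixed lead time; the figures are hypothetical) is: safety stock = z × σd × √L, where z is the service-level factor, σd is the standard deviation of daily demand, and L is the lead time in days. For example, at a 95% service level (z ≈ 1.645), with σd = 20 units per day and a 4-day lead time, safety stock ≈ 1.645 × 20 × √4 ≈ 66 units.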
Limitations of Variability in Decision-Making:
- Overemphasis on Variability: Focusing too much on variability may overlook other critical aspects, such as trends or seasonality.
- Misinterpretation: Managers may misinterpret variability measures if they lack statistical expertise, leading to flawed decisions.
- Dependency on Quality Data: Accurate measurement of variability requires high-quality data; otherwise, the results may be misleading.
Conclusion
In managerial decision-making, measuring variability is essential for identifying uncertainties, managing risks, and optimizing processes. It provides insights into how much outcomes deviate from expectations, supporting better planning and control across various business functions. Tools such as standard deviation, variance, range, and coefficient of variation allow managers to analyze and compare the consistency of performance across different areas, whether related to sales, production, financial planning, or customer service.
By understanding variability, managers can make more informed decisions, develop realistic forecasts, improve quality control, and manage inventories efficiently. Furthermore, variability measures help businesses evaluate employee performance and customer satisfaction, enabling timely interventions where needed. However, it is important to balance the focus on variability with other factors such as trends, seasonality, and external market conditions to avoid making incomplete or biased decisions.
In conclusion, variability plays a critical role in business analytics, helping organizations maintain stability and minimize risks. Managers who incorporate variability analysis into their decision-making processes are better equipped to navigate uncertainties, maintain consistency, and achieve strategic objectives effectively.
An investment consultant predicts that the odds against the price of a certain stock will go up during the next week are 2:1 and the odds in favour of the price remaining the same are 1:3. What is the probability that the price of the stock will go down during the next week?
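Solution
A brief worked computation from the stated odds:
- Odds against the price going up are 2:1, so P(price goes up) = 1 / (2 + 1) = 1/3.
- Odds in favour of the price remaining the same are 1:3, so P(price stays the same) = 1 / (1 + 3) = 1/4.
- The three outcomes (up, same, down) are mutually exclusive and exhaustive, so:
P(price goes down) = 1 − 1/3 − 1/4 = (12 − 4 − 3) / 12 = 5/12 ≈ 0.42.
Hence the probability that the price of the stock will go down during the next week is 5/12.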



In practice, we find situations where it is not possible to make any probability assessment. What criterion can be used in decision-making situations where the probabilities of outcomes are unknown?
Introduction
Decision-making under uncertainty is a common challenge in business environments where outcomes cannot be predicted with certainty, and probabilities are either unknown or impossible to assess. In such situations, managers need to rely on systematic approaches to select the best alternative, minimizing risks and maximizing potential gains. These non-probabilistic decision-making techniques provide structured ways to choose among competing alternatives when no historical data or probabilistic information is available. The objective is to mitigate risks through various frameworks, ensuring decisions align with the organization’s goals and the decision-maker’s risk tolerance.
Different decision criteria are used based on the manager’s outlook—whether optimistic, pessimistic, or neutral. Some criteria focus on maximizing potential payoffs, while others emphasize minimizing risks or regrets. Approaches like the Maximax and Maximin criteria align with optimism and pessimism, respectively. Meanwhile, the Minimax Regret criterion addresses opportunity loss, and the Hurwicz criterion offers a balanced view. The Laplace criterion assumes equal likelihood for all outcomes, making it a useful strategy when no clear probabilities can be assigned. These methods help decision-makers develop robust strategies in uncertain environments, ensuring that the chosen course of action fits the organization’s risk appetite and long-term objectives.
Concept and Analysis
1. Maximax Criterion (Optimistic Approach)
The Maximax criterion is ideal for optimistic managers who believe the best-case scenario will occur. This approach identifies the maximum payoff for each option and selects the one with the highest possible gain. The decision-maker focuses on achieving the highest return, even if it involves taking substantial risks. This method is suitable for high-growth, risk-tolerant strategies such as investing in volatile markets or launching innovative projects with uncertain outcomes. However, the downside is that this approach ignores potential losses and uncertainties.
Example: An entrepreneur launching a new product focuses on the highest projected profits, disregarding other potential risks.
2. Maximin Criterion (Pessimistic Approach)
In contrast, the Maximin criterion caters to risk-averse decision-makers who prefer to minimize losses. This approach assumes that the worst-case scenario will happen, and the decision-maker selects the option with the highest minimum payoff. This method ensures that even in the worst outcome, the decision will have the least negative impact. It is commonly used in industries where stability and loss prevention are critical, such as insurance or utilities.
Example: A company opts for a low-risk investment with a smaller but more guaranteed return rather than risking higher losses on an uncertain project.
3. Minimax Regret Criterion
The Minimax Regret criterion focuses on minimizing regret or opportunity loss—the difference between the payoff from the selected option and the best possible payoff. A regret matrix is constructed to identify the regret values for each decision. The option with the smallest maximum regret is selected to ensure the decision-maker does not experience significant regret from making the wrong choice. This approach is beneficial when avoiding large opportunity losses is more critical than achieving the maximum gain.
Steps to Apply:
- Calculate the best possible payoff for each outcome.
- Subtract each option’s payoff from the best payoff to find the regret.
- Select the option with the lowest maximum regret.
Example: A manager chooses a stable project to minimize regrets if the more aggressive project performs poorly.
4. Hurwicz Criterion (Criterion of Realism)
The Hurwicz criterion balances optimism and pessimism by assigning a coefficient of optimism (α, between 0 and 1) to the best payoff of each alternative and (1 − α) to its worst payoff. The alternative with the highest weighted score is selected. This approach suits decision-makers who are neither fully optimistic nor fully pessimistic, since α can be tuned to reflect their risk attitude.
Example: A manager with α = 0.6 gives 60% weight to each project's best-case payoff and 40% to its worst-case payoff before comparing the projects.
5. Laplace Criterion (Equal Likelihood)
The Laplace criterion assumes that, in the absence of any probability information, all states of nature are equally likely. The average payoff of each alternative is computed, and the one with the highest average is chosen. It provides a neutral, unbiased basis for comparison when no outcome can reasonably be favoured over another.
Example: A firm evaluating three expansion plans under complete uncertainty treats each demand scenario as equally probable and selects the plan with the best average payoff.
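To make the criteria concrete, the short Python sketch below applies all five of them to a hypothetical payoff matrix (the project names, payoff values, and the coefficient of optimism α = 0.6 are illustrative assumptions, not part of the question):

```python
# Applying the five criteria to a hypothetical payoff matrix.
# Rows are alternatives; columns are states of nature (e.g., demand scenarios).
payoffs = {
    "Project A": [40, 10, -5],
    "Project B": [25, 20, 15],
    "Project C": [60, 5, -20],
}

# Maximax: choose the alternative with the largest best-case payoff.
maximax = max(payoffs, key=lambda a: max(payoffs[a]))

# Maximin: choose the alternative with the largest worst-case payoff.
maximin = max(payoffs, key=lambda a: min(payoffs[a]))

# Minimax regret: build the regret matrix, then minimise the maximum regret.
best_per_state = [max(col) for col in zip(*payoffs.values())]
regrets = {a: [best - p for best, p in zip(best_per_state, row)]
           for a, row in payoffs.items()}
minimax_regret = min(regrets, key=lambda a: max(regrets[a]))

# Hurwicz: weight best and worst payoffs by the coefficient of optimism alpha.
alpha = 0.6
hurwicz = max(payoffs,
              key=lambda a: alpha * max(payoffs[a]) + (1 - alpha) * min(payoffs[a]))

# Laplace: treat all states as equally likely and compare average payoffs.
laplace = max(payoffs, key=lambda a: sum(payoffs[a]) / len(payoffs[a]))

print(f"Maximax: {maximax}, Maximin: {maximin}, Minimax regret: {minimax_regret}, "
      f"Hurwicz: {hurwicz}, Laplace: {laplace}")
```

With these illustrative figures the criteria disagree (Maximax and Hurwicz favour Project C, Maximin and Laplace favour Project B, Minimax regret favours Project A), which underlines how the choice of criterion reflects the decision-maker's risk attitude.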
Conclusion
In decision-making scenarios where probabilities are unknown, managers must rely on non-probabilistic criteria to guide their choices. Each criterion reflects a specific approach to uncertainty, tailored to the decision-maker’s attitude towards risk. The Maximax criterion supports optimistic managers aiming for maximum returns, while the Maximin criterion suits those focused on minimizing potential losses. The Minimax regret approach helps mitigate opportunity loss, ensuring decisions are made with minimal regret. Hurwicz’s criterion offers a balanced method by weighing both optimism and pessimism, and the Laplace criterion provides a neutral, unbiased strategy by treating all outcomes as equally likely.
These frameworks offer valuable guidance for managers in uncertain environments, helping them choose the most suitable course of action despite the lack of probabilistic information. The selection of a particular criterion depends on the nature of the decision, the organizational context, and the risk appetite of the decision-maker. By applying these methods, businesses can develop strategies that are both thoughtful and adaptive, ensuring long-term resilience and effective resource allocation amidst uncertainty.
A purchase manager knows that the hardness of castings from any supplier is normally distributed with a mean of 20.25 and SD of 2.5. He picks up 100 samples of castings from any supplier who claims that his castings have heavier hardness and finds the mean hardness as 20.50. Test whether the claim of the supplier is tenable.
Introduction
In decision-making situations, especially in quality control and purchasing, managers often need to verify claims made by suppliers to ensure consistency with desired product specifications. One statistical tool for such verification is hypothesis testing, which helps determine whether observed differences are due to random chance or significant enough to reject a supplier’s claim. In this case, the purchase manager seeks to test whether a supplier’s claim of producing castings with heavier hardness holds true. The manager knows that hardness for castings follows a normal distribution with a mean (μ) of 20.25 and a standard deviation (σ) of 2.5. After collecting a sample of 100 castings, the manager finds the sample mean to be 20.50.
The purpose of the hypothesis test is to see if the sample mean of 20.50 is significantly different from the population mean of 20.25, or if this difference is due to chance. Using statistical tools like the z-test, we can determine whether the difference in means is large enough to reject the null hypothesis. This type of testing ensures sound decision-making, reducing the likelihood of accepting products that do not meet quality standards or rejecting ones that meet specifications.
Concept & Analysis
1. Formulation of Hypotheses
- Null hypothesis (H0): μ = 20.25 (the mean hardness of the castings is the standard 20.25).
- Alternative hypothesis (H1): μ > 20.25 (the castings have heavier hardness, as the supplier claims).
2. Level of Significance
A 5% significance level (α = 0.05) is used. Since the alternative hypothesis is one-sided (right-tailed), the critical value is z = 1.645.
3. Test Statistic
The population standard deviation is known and the sample size is large (n = 100), so the z-test applies: z = (x̄ − μ) / (σ / √n).
4. Computation
z = (20.50 − 20.25) / (2.5 / √100) = 0.25 / 0.25 = 1.
5. Decision Rule
Reject H0 if the calculated z exceeds 1.645. Here z = 1 < 1.645, so H0 cannot be rejected.
6. Conclusion of the Hypothesis Test
Based on the z-test results, the sample evidence does not provide sufficient support for the supplier’s claim that the castings have heavier hardness. The observed difference of 0.25 in mean hardness could have occurred by chance, and the supplier’s claim cannot be accepted at the 5% significance level (i.e., with 95% confidence).
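For completeness, here is a minimal Python sketch (purely illustrative, using only the standard library) that reproduces the test statistic, the one-tailed critical value, and the corresponding p-value:

```python
# Reproducing the one-tailed z-test with Python's standard library.
from math import sqrt
from statistics import NormalDist

mu0, sigma, n, x_bar = 20.25, 2.5, 100, 20.50   # values from the problem

z = (x_bar - mu0) / (sigma / sqrt(n))            # test statistic
z_crit = NormalDist().inv_cdf(0.95)              # right-tail critical value at 5% significance
p_value = 1 - NormalDist().cdf(z)                # one-tailed p-value

print(f"z = {z:.2f}, critical value = {z_crit:.3f}, p-value = {p_value:.4f}")
# z = 1.00 < 1.645 and p ≈ 0.1587 > 0.05, so H0 is not rejected.
```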
Conclusion
In this case, the purchase manager aimed to verify the supplier’s claim that their castings have greater hardness than the standard mean of 20.25. After conducting a z-test using a sample of 100 castings, the calculated z-value was found to be 1, which is less than the critical value of 1.645 at a 5% significance level. As a result, the null hypothesis (H0) that the mean hardness remains at 20.25 cannot be rejected.
This result highlights the importance of statistical testing in managerial decision-making, particularly in supplier evaluation and quality control. The test helps ensure that decisions are data-driven and objective, reducing the risk of accepting supplier claims that are not statistically valid. It also reinforces the value of hypothesis testing as a tool for maintaining quality standards. Since the supplier’s claim was not supported by the sample data, the purchase manager may choose to either reject the supplier’s offering or request further evidence of improved hardness.