Before big data analytics, real-time dashboards and user-friendly survey software, the vice president to whom I reported taught me one of the most important lessons of my career. He asked, “How do you know?” I was sitting across from him and had just outlined some urgent steps I thought we needed to take to respond to constituent concerns. My morning had been filled with calls and emails from dissatisfied constituents. As I began to enumerate who had contacted me and share their concerns, I realized (which had been his point) that even if their views matched those of a larger share of our constituency, the group I had heard from was not representative. Though they were vocal, I had no way of telling whether their sentiments reflected a predominant view. Making changes based solely on such a small sample would be unwise. More information was needed.

The lesson, of course, was to have answers the next time I was asked, “How do you know?” From that day on, I viewed collecting data and insights as central to decision-making and program design.

No matter your field, understanding your marketplace is essential to your success. What you know about people’s beliefs, how they behave and what they value is paramount to making sound decisions. However, before you can answer how you know what you know, you have to make sure that the information you gather is both useful and defensible. I am going to stick to the basics here, but you can start with these guiding questions:
- What do you want to know? Whether you are assessing quality, trying to answer a specific question or seeking insights, clearly define your purpose.
- What data will you need? Save yourself time and headaches by making sure that the data you plan to collect will serve your purpose.
- How will you use the data you collect? How you plan to use the data will help determine which methods you choose and how rigorous your process needs to be.
- Are you certain your collection methods are sound? For your data to be trustworthy, you have to consider many factors, including validity, reliability, neutrality and user-centered design.
Recently, I took two surveys, and both were poorly designed. The first followed a visit to a national chain. My experience at the store was positive, but the survey required that I answer every question, even those that did not apply to me. That company’s market researchers are gathering data, but if they are using that information to assess operations, they are doing so based on flawed information.

The second survey followed a customer service call with a national provider. My interaction with the customer advocate had been positive, but before I even spoke to someone I was on hold for 30 minutes. The survey I completed after the call did not ask about wait time, nor did it include any means of raising additional concerns. Again, this company is collecting data, but it could have a completely false sense of how it is doing.
Avoid making these common mistakes. Instead, design flexibility into your research instrument.
- Questions should be optional unless the answers are absolutely necessary for your data to be credible. Offering a “not applicable” option is a simple solution.
- Provide an opportunity for respondents to express additional concerns. Letting them tell you what they want you to know may provide your most useful feedback.
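If you are building a survey in code rather than in an off-the-shelf tool, the two fixes above can be sketched as a minimal question model. This is only an illustration with hypothetical names (`Question`, `validate_response`, `NOT_APPLICABLE`), not any particular survey platform’s API:

```python
from dataclasses import dataclass
from typing import Optional

NOT_APPLICABLE = "N/A"

@dataclass
class Question:
    text: str
    required: bool = False           # optional unless credibility demands an answer
    allow_not_applicable: bool = True

def validate_response(question: Question, answer: Optional[str]) -> bool:
    """Accept blank answers for optional questions and 'N/A' where allowed."""
    if answer == NOT_APPLICABLE:
        return question.allow_not_applicable
    if answer is None or answer == "":
        return not question.required
    return True

# A survey applying both fixes: an N/A option on a question that may not
# apply, and an open-ended field so respondents can raise their own concerns.
survey = [
    Question("How long did you wait before speaking to someone?", required=True),
    Question("How satisfied were you with the in-store pharmacy?"),
    Question("Is there anything else you would like us to know?"),
]
```

The point of the sketch is the default: a question is optional unless someone deliberately marks it required, which is the opposite of how the two surveys I took were built.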
Most of all, be prepared to answer the question, “How do you know?” Your decisions and proposals will be more credible.
Closing notes: I invite you to contact me for further conversation about gaining insights through data collection or how I might help you develop your organization’s strategic capability through learning. http://www.georgiannehewett.com