Validating Product Ideas During the COVID-19 Period: A Case Study, Part 1

Shirley, Wang Xinling
IxD Stories

--

Intro

If you design for an enterprise product, what fills most of your days might be piles of business and product requirements, with complicated front-end and back-end logic to decipher… Unlike consumer products, enterprise products usually come with clear-cut business goals and demand high productivity or strict security standards. Usability improvements are often deprioritized in favour of requirements tied directly to business or revenue growth.

This series is a case study of a design-driven feature that was successfully pitched and validated in an enterprise project we have been working on. The validation was done swiftly with the online prototyping tool Figma and Google Forms, together with our users, who are located remotely across 8 offices.

We hope this story inspires designers of all proficiency levels, from junior upwards, to realize that pitching an idea is more than just showing your work, yet it does not have to be daunting.

This series is divided into three parts, covering the following topics.

Part 1 (This Article)

  • Background of the project
  • Why validating the idea in the early stages is vital
  • The limitations of conducting normal user studies
  • What we expected from the study

Part 2

  • How to prepare the prototype for testing
  • The welcome message of the user survey
  • Tips for composing the body of the survey
  • How we collected participants’ contact information

Part 3

  • The results we got from the survey
  • The conclusions we drew from the results
  • Lessons from a remote design process

Disclaimer

This article is co-authored by Xinling Wang and Anne Hwarng about an internal project we completed at Shopee in late 2020. To comply with the company’s NDA, this article will not cover any sensitive content or exact numbers. All views are our own.

Background: how the idea emerged

A demo of the skeleton of the quality check page’s main UI

The enterprise product we worked on is a quality check platform serving over a thousand internal users across 8 operating regions. From late 2019 to 2020, we revamped the product and achieved increases in individual productivity ranging from 6% to 50%. We also received positive feedback from local business teams. Nevertheless, we knew the best was yet to come, and we kept asking ourselves: can we do even better?

An illustration of 8 regions and 7 sub-teams
The coverage of our product: 8 operating regions x 7 sub-teams

Although the quality check platform targeted 7 business teams in each of the 8 operating regions, it still used an interface quite similar to the original one, inherited and redesigned from the old platform during phase 1 of our revamp project.

We soon discovered that, although it earned a higher overall satisfaction rating than the old platform, the one-for-all layout was not well received by everyone. In the post-launch user tests and interviews, we noticed a wide array of opinions on the very same modules: some users appreciated a module while others found no use for it at all. This puzzled us at first, but soon an idea popped up: if we couldn’t come up with the best layout for everyone, why not let users tailor it for themselves?

An illustration of two layouts displaying the same paragraph, indicating how different audiences’ views diverge.
For example, even preferences for displaying a paragraph of product description varied: some preferred it condensed, others preferred it shown in full. The differences may stem from the differing nature of the products, the viewers’ interests, etc.

The customization feature

The next afternoon, Xinling came up with a low-fidelity mockup of this customization feature.

She synthesized the major divergences from the user surveys and broke the content of the quality check items down into sub-modules, each with customizable options that users could configure themselves.

We then gathered the business, product and development teams. Among us there was a unanimous understanding that this could be useful to users who bring different focuses to their quality checks. Yet we were still uncertain whether users would favour the configuration we provided, so we needed more affirming input from our users to validate the product idea. Our company was expanding its business scale and user base at the pace of a start-up, and resources were scarce; we could not afford to judge a product idea only after it was actually built. Early validation was therefore vital: a few sketches done in an afternoon could stand in for half a year of development.

The limitations of conducting normal user studies

Before COVID-19, the design and product teams visited major local offices to conduct field studies or interviews. However, many countries were under lockdown last year, and business travel had ceased for quite some time.

Meanwhile, as one of the only product designers on this product, Xinling was also running multiple projects, which left little spare time to collect adequate responses from the different team representatives. (Think 8 regions, with 7 teams in each!)

A photo of a user test with one participant and one facilitator
In moderated user tests, each participant is accompanied by at least one facilitator. (Source: Nielsen Norman Group)

The solution: an online test and a user survey

Given the limitations mentioned above, we went for an online prototype test supplemented by a satisfaction questionnaire, done fully remotely!

The survey was conducted with Google Forms and Figma, both of which are the de facto collaboration tools within our organization.

An illustration of the Figma and Google Forms logos

In the first part of the survey, participants were guided through the Figma prototype, a configuration page for their content page. The survey then asked them to fill in their own preferences as if they were setting up the page for themselves.

Furthermore, participants were prompted to evaluate each of the modules on the following four aspects (derived from Arnie Lund’s Usefulness, Satisfaction, and Ease of Use framework; a rough scoring sketch follows the figure below):

  • whether the proposed feature is useful to them
  • whether the interaction is easy for them
  • whether the proposed function and interaction are easy to learn
  • whether they are satisfied with the proposed function
An illustration including four keywords: usefulness, ease of learning, ease of use and satisfaction
The four measuring aspects of usability tailored for the user survey, derived from Arnie Lund’s Usefulness, Satisfaction, and Ease of Use framework
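To make the scoring concrete, here is a minimal Python sketch of how such 1–5 ratings could be averaged per module across the four aspects. The data shape, field names and module name below are hypothetical, not our actual analysis pipeline.

# Toy example: average 1-5 Likert ratings per module across the four aspects.
from statistics import mean

ASPECTS = ("usefulness", "ease_of_use", "ease_of_learning", "satisfaction")

# One dict per (participant, module) rating; names and numbers are made up.
responses = [
    {"module": "Product Description", "usefulness": 5, "ease_of_use": 4,
     "ease_of_learning": 4, "satisfaction": 5},
    {"module": "Product Description", "usefulness": 2, "ease_of_use": 3,
     "ease_of_learning": 4, "satisfaction": 2},
]

def aggregate(responses):
    """Return the mean rating for every (module, aspect) pair."""
    by_module = {}
    for r in responses:
        by_module.setdefault(r["module"], []).append(r)
    return {module: {a: mean(r[a] for r in rows) for a in ASPECTS}
            for module, rows in by_module.items()}

print(aggregate(responses))  # per-module means, e.g. usefulness averages 3.5 here

Aggregating per module (rather than one overall score) is what lets diverging opinions on the same module surface instead of cancelling out.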

We expected the results to reveal users’ perspectives and help us ascertain:

  • Whether the proposed customization feature would be a welcome option for our users (by analyzing the satisfaction scores)
  • Whether all the options provided were necessary (by analyzing whether the chosen options diverged enough; see the sketch after the figure below)
  • Whether any unforeseen usability flaws had escaped the previous user studies (by providing open questions and reviewing the responses)
An illustration of a pyramid with three layers, bottom to top: usefulness of the proposal, scope of the proposal, usability
The three layers of answers we sought
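For the second point, one lightweight way to quantify “enough divergence” is the normalized Shannon entropy of the options participants chose for each setting. This is only an illustrative sketch with made-up option names, not the method we actually used: a score near 0 means nearly everyone picked the same option (so the alternatives may be unnecessary), while a score near 1 means preferences genuinely diverge.

# Normalized Shannon entropy of chosen options, in [0, 1].
from collections import Counter
from math import log2

def choice_divergence(choices, n_options):
    """Entropy of the chosen option labels, normalized by the
    n_options available choices (n_options >= 2)."""
    counts = Counter(choices)
    total = len(choices)
    entropy = sum((c / total) * log2(total / c) for c in counts.values())
    return entropy / log2(n_options)

# Hypothetical: how 5 participants chose to display the description module.
print(choice_divergence(["condensed", "full", "full", "condensed", "full"], 2))  # ~0.97: diverged
print(choice_divergence(["full"] * 5, 2))  # 0.0: unanimous, option set may be unnecessary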

In the next episode, we will share how we prepared the prototype and composed the survey; the findings, how they impacted the final decision-making, and the lessons we learned will follow in Part 3.

About the authors

What’s on your mind?

💭 Comment and let us know your thoughts, doubts, feedback!
👋 Connect with us on LinkedIn: Xinling’s Homepage · Anne’s Homepage
