Tree testing is a usability technique for evaluating the findability of topics in a website.[1] It is also known as reverse card sorting or card-based classification.[2]

A large website is typically organized into a hierarchy (a "tree") of topics and subtopics.[3][4] Tree testing provides a way to measure how well users can find items in this hierarchy.[5][6]

Unlike traditional usability testing, tree testing is not done on the website itself; instead, a simplified text version of the site structure is used.[1] This ensures that the structure is evaluated in isolation, nullifying the effects of navigational aids, visual design, and other factors.[7]

Basic method

In a typical tree test:[8]

  1. The participant is given a "find it" task (e.g., "Look for men's belts under $25").[9]
  2. They are shown a text list of the top-level topics of the website.
  3. They choose a heading, and are then shown a list of subtopics.
  4. They continue choosing (moving down through the tree, backtracking if necessary) until they find a topic that satisfies the task (or until they give up).
  5. They do several tasks in this manner, starting each task back at the top of the tree.
  6. Once several participants have completed the test, the results are analyzed.
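
The procedure above can be captured as simple structured data: the tree as nested headings, each task as a prompt plus one or more acceptable destinations, and each attempt as the path of headings the participant clicked. The following Python sketch is purely illustrative; the tree, task, and path are invented examples and do not come from any real study or tool.

  # Illustrative only: a tiny site tree, one "find it" task, and one
  # recorded participant path. None of this is data from a real study.

  site_tree = {
      "Home": {
          "Clothing": {
              "Men": {"Belts": {}, "Shoes": {}},
              "Women": {"Belts": {}, "Shoes": {}},
          },
          "Electronics": {"Phones": {}, "Laptops": {}},
      }
  }

  task = {
      "prompt": "Look for men's belts under $25",
      "correct": [("Home", "Clothing", "Men", "Belts")],  # acceptable destinations
  }

  # The sequence of headings one participant clicked, ending where they stopped.
  participant_path = ("Home", "Clothing", "Men", "Belts")

  def path_exists(tree, path):
      """Check that a sequence of headings is a valid route through the tree."""
      node = tree
      for label in path:
          if label not in node:
              return False
          node = node[label]
      return True

  success = path_exists(site_tree, participant_path) and participant_path in task["correct"]
  print(f"{task['prompt']!r} -> success: {success}")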

Analyzing the results

The analysis typically tries to answer these questions:

  • Could users successfully find particular items in the tree?
  • Could they find those items directly, without having to backtrack?
  • If they couldn't find items, where did they go astray?
  • Could they choose between topics quickly, without having to think too much?
  • Overall, which parts of the tree worked well, and which fell down?
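
Two of these questions, overall success and directness, are straightforward to quantify once each attempt is recorded as the sequence of headings visited and the destination chosen. The Python sketch below is a hedged illustration using invented records; real tree-testing tools compute such measures in their own ways.

  # Illustrative only: invented result records and two simple measures,
  # overall success rate and direct success (success with no backtracking).

  from dataclasses import dataclass

  @dataclass
  class TaskResult:
      task: str
      visited: list        # every heading clicked, in order
      destination: str     # heading where the participant ended the task
      correct: set         # heading(s) that count as a correct answer

  def is_success(r):
      return r.destination in r.correct

  def is_direct(r):
      # Direct means success without revisiting any heading (no backtracking).
      return is_success(r) and len(r.visited) == len(set(r.visited))

  results = [
      TaskResult("men's belts", ["Home", "Clothing", "Men", "Belts"], "Belts", {"Belts"}),
      TaskResult("men's belts", ["Home", "Electronics", "Home", "Clothing", "Men", "Belts"],
                 "Belts", {"Belts"}),
      TaskResult("men's belts", ["Home", "Electronics", "Phones"], "Phones", {"Belts"}),
  ]

  success_rate = sum(is_success(r) for r in results) / len(results)
  direct_rate = sum(is_direct(r) for r in results) / len(results)
  print(f"success: {success_rate:.0%}, direct: {direct_rate:.0%}")

The remaining questions, such as where participants went astray, are typically answered by inspecting the recorded paths themselves, for example by noting the first heading chosen that leads away from every correct destination.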

Tools

Tree testing was originally done on paper (typically using index cards), but can now also be conducted using specialized software.[10]

References

  1. Hanington, Bruce; Martin, Bella (2019). Universal Methods of Design, Expanded and Revised. Beverly, MA: Rockport Publishers. p. 232. ISBN 9781631597497.
  2. Spencer, Donna (April 2003). "Card-Based Classification Evaluation".
  3. Chesnut, Donald; Nichols, Kevin (2014). UX For Dummies. West Sussex, England: Wiley. p. 141. ISBN 9781118852781.
  4. Palade, Vasile (2003). Knowledge-based Intelligent Information and Engineering Systems. Springer Nature. p. 250. ISBN 978-3-540-23318-3.
  5. Elleithy, Khaled; Sobh, Tarek (2006). Advances in Systems, Computing Sciences and Software Engineering: Proceedings of SCSS 2005. Dordrecht: Springer. p. 232. ISBN 9781402052620.
  6. Paraguacu, Fabio; Gouarderes, Guy; Cerri, Stefano A. (2002). Intelligent Tutoring Systems: 6th International Conference, ITS 2002, Biarritz, France and San Sebastián, Spain, June 2–7, 2002: Proceedings. Berlin; London: Springer. p. 743. ISBN 978-3-540-43750-5.
  7. Desai, Sandeep; Srivastava, Abhishek (2016). Software Testing. PHI Learning. p. 310. ISBN 9788120352261.
  8. Frick, Tim; Eyler-Werve, Kate (2014). Return on Engagement : Content Strategy and Web Design Techniques for Digital Marketing. Oxford: CRC Press. pp. 78–87. ISBN 9781135012939.
  9. Sharon, Tomer; Gadbaw, Benjamin (2016). Validating Product Ideas: Through Lean User Research. Brooklyn, NY: Rosenfeld Media. p. 275. ISBN 978-1-4571-9077-3.
  10. Soares, Marcelo M.; Rosenzweig, Elizabeth; Marcus, Aaron (2022). Design, User Experience, and Usability: UX Research, Design, and Assessment. Cham: Springer International Publishing AG. p. 84. ISBN 9783031058967.

