Content Quality Program at Wayfair
Case study
Problem
The content strategy team at Wayfair developed content for five different end-user groups (customers, suppliers, sales and customer service agents, field workers, and corporate). Quality for this UX and informational content was measured subjectively by each content creator, and individual contributors lacked a shared, organization-specific definition of "good content."
Why it mattered
High-quality content is essential for building trust and engagement with users; it also lets teams evaluate content objectively and demonstrate improvement over time. High-quality source content improves the quality, cost, and speed of translations for global expansion, a key initiative for the organization. Translation quality cannot be measured consistently if the source input is never measured.
Solution and impact
We built a two-phase content quality control program, spanning all five end-user groups and the five Wayfair brands (Wayfair, Perigold, Joss & Main, Birch Lane, and Wayfair Professional), to create objective standards, establish a measured baseline, and improve quality.
Phase 1: A Content Scorecard program for content designers, technical writers, instructional designers, and content strategists. Content Scorecards were implemented as a short-term manual step to define content quality across multiple domains by scoring content consistently and setting a minimum quality baseline for publishing and translation.
Phase 2: Transition from a manual quality process to an automated one by using AI-powered software tools such as Acrolinx, Congree, or Writer.
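To make the manual-to-automated transition concrete, here is a minimal sketch of what a Phase 2 automated quality gate might look like. The rules, thresholds, and function names below are illustrative assumptions for this sketch, not Wayfair's actual scorecard criteria or any vendor's (Acrolinx, Congree, Writer) API.

```python
# Hypothetical sketch of an automated content quality gate.
# All rules and thresholds are assumptions for illustration only.
import re

MAX_SENTENCE_WORDS = 25                  # assumed readability threshold
BANNED_TERMS = {"utilize", "leverage"}   # assumed plain-language rule

def score_content(text: str) -> dict:
    """Score content against simple, illustrative rules.

    Returns per-rule pass/fail results and an overall score (fraction
    of rules passed), mimicking a scorecard's minimum publishing baseline.
    """
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    long_sentences = [s for s in sentences
                      if len(s.split()) > MAX_SENTENCE_WORDS]
    words = {w.lower() for w in re.findall(r"[A-Za-z]+", text)}
    banned_found = sorted(words & BANNED_TERMS)

    checks = {
        "sentence_length": not long_sentences,  # no overlong sentences
        "plain_language": not banned_found,     # no banned jargon
    }
    score = sum(checks.values()) / len(checks)
    # "publishable" models the minimum quality baseline idea:
    # content must pass every rule before publishing or translation.
    return {"checks": checks, "score": score, "publishable": score >= 1.0}

result = score_content("Use the filter to narrow results. It works fast.")
print(result["publishable"])  # prints True
```

In a real deployment, rules like these would be encoded in the purchased tool's configuration and run automatically at authoring time, replacing the manual scorecard step.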
My roles and process
- Project lead
- Workshop facilitator
- User researcher
- User experience designer
- Content strategist
This cross-domain project required understanding the commonalities and differences among the five end-user groups (audiences) to create a set of standards that could be applied globally while adhering to overall brand guidelines and best practices.
Following design thinking methodology, I developed and facilitated a workshop with 30+ content creators across the content organization to agree on which metrics were the most important quality indicators. From there, I led a representative team to refine and test the workshop results in an iterative process. Finally, we tested the scorecards against sample content, made revisions, and developed a rollout program.
Once content creators had concrete quality guidelines that matched business needs, they could objectively measure and improve content for themselves and within their teams.
Standards from the Content Scorecards were also used to populate the requirements for Phase 2 software procurement.