Principle-focused evaluation, ops and lessons learnt

A guide to streamline the team process for principle-focused evaluation.

A principle-focused evaluation can help the team uncover potential gaps and opportunities in people’s experience with your product by examining it against a set of principles. The principles can be selected according to your objectives and the stage in the product lifecycle:

  • If your product needs to “pass the bar” in terms of usability, the principles can be a set of known usability, accessibility and design guidelines.
  • If your product already follows good usability standards but you want to uncover potential gaps in how enjoyable and engaging it is, the principles can stem from cognitive psychology, such as known biases and behavioural principles.

Whether you are starting out with a prototype, an MVP, or have an established product with a large customer base, well-formulated principle-focused evaluations give you a big perk: you can bundle and shortcut the process of discovering areas of concern, without expensive commitments.

However, as with any evaluative research method, you need to reduce the bias of a single evaluator in order to ensure findings are not skewed. That’s why it’s a process best done through teamwork.

Disclaimer: To the best of my knowledge, principle-focused evaluation is not actually defined in the books, at least not under this specific term. The only reason I use it is that the approach does not strictly follow the well-known heuristic evaluation method. It’s rather a more agnostic framework, since it can be based on a different or more extensive set of principles than the “heuristics”.

Principle-focused evaluation ops

This is a draft guide on the team ops needed for running principle-focused evaluations, along with the lessons learnt so far.

Set the plan

In order to set the plan in motion, you’ll need to define the focus lens and assemble the right team.

Define the focus lens
You have already discussed with the team the need to identify issues and opportunities, so at this point you need to define the focus lens for the evaluation. Depending on the current priorities in your product’s strategy, you may want to target usability, consistency, accessibility or even delightfulness and engagement.

You can choose to focus on all aspects, but remember to prioritise what’s important. If your product is just starting out, you want to make sure people complete their main tasks without struggling before you invest in discovering gaps in how engaging the product is.

Get your stakeholders
You want to make sure that at the end of the evaluation, the findings won’t be shoved in some drawer to be dealt with once “work slows down”. Without business, product and tech stakeholders involved in the initiative, next steps won’t move forward. Get buy-in and let them know early on that you will run a couple of activities together to review and prioritise what was found and decide on next steps.

Assemble the team
Ideally, recruit three to five evaluators in order to uncover a fair number of problems and opportunities. Try not to go below three.

Set a facilitator
Appoint a person to manage time, resolve potential blockers for evaluators, communicate with stakeholders and coordinate the overall evaluation process.

Check the stage

Before the evaluation begins, you need to make sure that the team has access to a stable staging environment and identify potential blockers that may stall the process.

Get access to a stable staging
Ask the engineering team if there is a staging environment that will not be undergoing changes during the evaluation. This is really important to make sure that findings are not derived from obsolete work. Since evaluators may run the process in different timeframes, findings from different versions can cause confusion, and trying to replicate issues will lead to dead ends.

Run a pre-evaluation walkthrough
Make sure to run a walkthrough to uncover potential blockers, e.g. bugs that completely block access to certain product functionality. Ask if there is time to resolve them in order to move forward, or decide to disregard them. Keep in mind that blockers may come up even during the evaluation; the facilitator is responsible for communicating them to the engineering team to see if they can be resolved and unblock the evaluator team.

Align on scope and tools

Evaluating a product as a team is not as simple as running the process alone. You and your team should spend time building common ground in order to define the scope and tools.

Define the scenarios
Identify the scenarios to be evaluated and describe them by listing the various steps a user would take to perform a sample set of realistic tasks. This way you ensure the team will examine the same targeted flows.
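
To make the format concrete, here is a minimal sketch of a scenario captured as a structured record, in Python purely for illustration; the field names are hypothetical, not a prescribed schema.

from dataclasses import dataclass, field

@dataclass
class Scenario:
    id: str    # short handle for cross-referencing findings later
    goal: str  # the realistic task the user wants to achieve
    steps: list[str] = field(default_factory=list)  # ordered user actions

checkout = Scenario(
    id="checkout",
    goal="Buy a single item as a returning customer",
    steps=[
        "Search for the item",
        "Open the product page and add it to the cart",
        "Proceed to checkout and confirm the saved address",
        "Pay and reach the order confirmation screen",
    ],
)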

Agree on the depth
Defining a depth for the evaluation is not easy, let alone predicting the number of issues you are going to uncover. However, agreeing on general lower and upper thresholds helps the team adapt the level of detail with which they will examine the product. Keep in mind, however, that evaluators may focus on finding cosmetic issues just to pass the defined threshold. A good strategy is to discuss beforehand and share benchmarks of the issue types you are aiming to find.

Select the documentation tools
When documenting findings you need to think about the next steps ahead: clustering, ranking and presenting to stakeholders. Agree on a tool that the team can easily use to add note records, attach additional evidence such as screenshots or screen recordings, add labels and tags, and sort, filter and search.

Run the evaluation

The team starts examining the scenarios against a set of principles related to the selected focus lens.

Evaluate individually
Each evaluator goes through the selected scenarios and documents issues and opportunities based on the principles they believe are violated. The evaluation should ideally run in a common timeframe for the whole team, to ensure everyone is examining the same product version.

Map evaluation trails
When the focus lens requires evaluators to have a clear overview of the flow, e.g. when you want to detect consistency issues, map evaluation trails so you can visualise different scenarios in parallel. Evaluation trails can also help others see how an evaluator came to their conclusions, through point-by-point documentation of every part of the examination process.

Document findings

While running the evaluation, each evaluator will need to make sure that the identified issues are documented in a way that is easy for the team to read and understand.

Follow a structure
Depending on the focus lens, define a documentation structure that guides evaluators on the information they need to include. For each finding, it’s important to have a clear explanation: the violated principle should be clearly cited and related to the design, so that any fix addresses the underlying issue and the same mistake is avoided elsewhere as well.
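
As a rough sketch of what such a structure can look like, here is an illustrative finding record; the fields are assumptions on my part and should follow whatever template your team agrees on.

from dataclasses import dataclass, field

@dataclass
class Finding:
    scenario_id: str   # which scenario the finding belongs to
    step: str          # the step in the flow where it surfaced
    principle: str     # the violated principle, clearly cited
    description: str   # how the violation relates to the design
    evidence: list[str] = field(default_factory=list)  # links to screenshots or recordings
    labels: list[str] = field(default_factory=list)    # emerging theme tags

The evidence and labels fields anticipate the next two points below.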

Include evidence
Make sure that each finding is retraceable to the specific scenario it was identified in. Include screenshots or screen recordings if you can, to help the team quickly identify the problem you are describing.

Annotate with labels
Besides citing the violated principle in each finding, try to detect any emerging patterns as you proceed with the evaluation. Label the findings based on the themes you notice, since this will help later with the clean-up and clustering process.

Clean up and cluster findings

Once the team has completed the evaluation, you’ll need to clean up the findings, remove duplicates and define common themes.

Remove duplicates
Go through all findings and merge identical ones. Evaluators tend to uncover overlapping issues and opportunities to some degree, so expect the overall number of findings to decrease.
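
One simplistic way to surface duplicate candidates, using plain dicts for brevity (the same works with attribute access on the finding records sketched earlier), is to group findings that cite the same principle within the same scenario and review each group by hand:

from collections import defaultdict

def duplicate_candidates(findings):
    # Group findings by (scenario, principle) to surface likely duplicates.
    groups = defaultdict(list)
    for finding in findings:
        groups[(finding["scenario_id"], finding["principle"])].append(finding)
    # Only groups with more than one finding need a manual review.
    return {key: group for key, group in groups.items() if len(group) > 1}

Note that the grouping only proposes candidates; a human still decides what actually gets merged.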

Define themes
As you go through the clean-up process, try to finalise the themes that come up and cluster the findings accordingly. This will be useful when presenting to stakeholders and prioritising next actions.

Rank findings

Once the findings are cleaned up, ask the team to review them and rank them individually.

Review and rank individually
A common approach for ranking evaluation findings is to rate the severity of the principle violation. Remember, however, that severity ratings are subjective. The best way to make them effective is to take the mean of the individual ratings. An additional ranking dimension to consider is potential impact: an issue may be highly severe but not part of a core scenario. Potential impact lets the ranking also reflect the company’s best interests, and helps later discussions with stakeholders. Notice, however, the word “potential”: the ranking is just a hypothesis by the team and will need to be validated later on.
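
As an illustrative sketch, assuming each evaluator scores severity on a 1-5 scale, you can take the mean per finding and use the spread (max minus min) to flag disagreements worth discussing, which leads into the next step:

from statistics import mean

def summarise_ratings(ratings_by_finding, conflict_threshold=2):
    # ratings_by_finding maps a finding id to the list of severity
    # scores (e.g. 1-5) given by the individual evaluators.
    summary = {}
    for finding_id, scores in ratings_by_finding.items():
        spread = max(scores) - min(scores)
        summary[finding_id] = {
            "mean_severity": round(mean(scores), 2),
            "needs_discussion": spread >= conflict_threshold,
        }
    return summary

# Three evaluators rated two findings; F2 shows a wide spread,
# so the team should review it together.
print(summarise_ratings({"F1": [4, 4, 5], "F2": [1, 3, 5]}))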

Resolve potential conflicts
For cases where rankings show a wide range, the team reviews the findings together in order to resolve potential conflicts.

Prioritise and decide

This is the point where you involve stakeholders in order to review the issues and opportunities, prioritise them and decide on the next steps.

Present critical findings
Gather stakeholders and present the most severe and potentially impactful findings. This will help them ease into the discussion and understand the structure of the findings. Ask them to review the full documentation list in their own time and select the top issues that they believe will have the most impact if resolved.

Decide next steps
Once stakeholders have completed their selection, gather them once again to define the priorities, only this time, besides impact, also take into consideration the potential effort needed for an issue to be resolved or an opportunity to be explored. By the end of the workshop, you should have a list of the most important findings and be able to plan your next steps.
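
A naive sketch of that impact-versus-effort trade-off, assuming both are scored on a simple 1-5 scale during the workshop; the ratio below is only meant to seed the discussion, not replace it:

def prioritise(findings):
    # Each finding is a (name, impact, effort) tuple with 1-5 scores;
    # sort by a naive impact-to-effort ratio, highest first.
    return sorted(findings, key=lambda f: f[1] / f[2], reverse=True)

shortlist = [
    ("Checkout button mislabelled", 5, 1),
    ("Inconsistent empty states", 3, 3),
    ("Rework onboarding flow", 4, 5),
]
for name, impact, effort in prioritise(shortlist):
    print(f"{name}: impact={impact}, effort={effort}")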

Any thoughts you want to share?

This post is actually a first iteration (ok, I mean a first readable one), so in case you want to contribute with your feedback on the ideas, send it my way on LinkedIn.