
How Logrus IT's Quality Evaluation Portal helps to improve Online Storefronts (Case study)




In today's competitive e-commerce landscape, a well-functioning online storefront is essential for business success. However, ensuring a positive user experience for the target audience can be a challenge, especially when dealing with multiple languages and platforms. Logrus IT's Quality Evaluation Portal is a powerful tool that helps businesses streamline content and process evaluation. It provides a customizable, structured framework for properly identifying, classifying, describing, grading, and analyzing issues within any context, and it delivers an efficient, seamless customer experience.

Task. The client had expanded into a major European market, which required them to add a localized version of their online storefront. They needed to evaluate the sentiment of the target audience (TA; online shoppers between 20 and 45 years of age, focused mainly on women's fashion) and check the usability of the localized storefront. The latter covered typical scenarios such as:

  • adding items to the cart,
  • paying for them,
  • returning items,
  • receiving marketing and transactional emails,
  • using the search functionality.

In this case, the client wanted to collect, average, and analyze feedback from multiple reviewers who tested the product in various browsers on both desktop and mobile platforms.


Challenge. One of the biggest traps in such cases is collecting unstructured feedback from reviewers in an ad hoc manner. While each collected impression or comment can make sense on its own (like a user complaint), such feedback is almost impossible to structure, process, or quantify. This simple approach works well when the goal is to find and eliminate the most glaring issues, but it fails when the goal shifts to producing finer, multi-faceted evaluations of how the TA perceives the storefront and how it performs, and to generating clear, structured improvement recommendations for multiple areas.

Spoiler alert. This task was a perfect match for the Logrus IT Quality Evaluation Portal. The portal already had all required functionality, including:

  • The ability to develop custom, multidimensional, holistic metrics that make it possible to evaluate the quality of any content, both original and localized, against multiple criteria.
    • A metric can comprise any number of quality categories, each one covering a different, independent aspect or facet, such as language, tone/voice, content structure, accompanying graphics and/or videos, etc.
    • One or more quality evaluation scales are an integral part of any metric; they make it possible for reviewers to grade each quality category for each object (piece of content, procedural step, etc.) in a consistent, uniform manner and improve objectivity.
  • Spawning multiple subprojects within each project to cover scenarios where one and the same content or procedure needs to be evaluated by several reviewers and/or on different platforms.
  • Quickly creating projects with links to each of the content or software pieces or components to be reviewed as well as numerous relevant metadata fields by simply importing Excel files.
  • Adding reviewers, each of whom can only access parts of projects explicitly assigned to them using either user credentials or individual links with preset expiration dates.
  • Providing clients with portal access making it possible for them to monitor project progress in real time, develop or modify metrics, access project cost estimates, export results, etc.
  • Displaying and calculating basic project stats, such as averaged evaluations for each quality category or project component.
  • Exporting project instructions, data, feedback, results entered by reviewers, and stats along with metric details as Excel files with multiple tabs. Exported files provide almost unlimited options for further analysis, including major project-wide summary conclusions and structured improvement recommendations for each area.
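The "basic project stats" mentioned above boil down to averaging reviewer grades per quality category. A minimal sketch of that kind of aggregation, using hypothetical exported rows (this is an illustration, not the portal's actual code):

```python
# Illustrative sketch: averaging reviewer grades per quality category,
# the kind of basic stat the portal displays. Data rows are hypothetical.
from collections import defaultdict

# Hypothetical exported rows: (reviewer, quality_category, grade on a 1-5 scale)
rows = [
    ("R1", "language", 4), ("R2", "language", 5),
    ("R1", "informativeness", 3), ("R2", "informativeness", 4),
]

def average_by_category(rows):
    """Group grades by quality category and return the mean for each."""
    grades = defaultdict(list)
    for _, category, grade in rows:
        grades[category].append(grade)
    return {cat: sum(g) / len(g) for cat, g in grades.items()}

print(average_by_category(rows))
# e.g. {'language': 4.5, 'informativeness': 3.5}
```

The same grouping generalizes to averaging per project component or per platform; the exported Excel files allow any such slicing downstream.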

Logrus IT's Quality Evaluation Portal

Solution. The client provided a randomized, representative selection of product listings from the localized version of the storefront, along with a list of the other areas to be evaluated (home page, FAQ, category pages, store search, typical purchase, payment, and return procedures, and transactional and marketing emails).

The Logrus IT team developed several custom quality metrics (including all components, such as the list of evaluated quality categories, quality evaluation scales, and acceptance thresholds), one for each distinct area of interest (product listings, emails, home/category pages, store search, and typical purchase/payment/return procedures).

This is typically the most complex and important part of the whole project because subject matter areas as well as client expectations and budgets vary dramatically. Each combination of the above calls for a separate, project-specific metric. Selecting the right number of quality categories that are most important to the members of the TA within the context, and providing quality scales with clearly described, distinct grades are both essential for project success.


For example, adding too many and/or overly vague quality categories typically results in overlapping definitions and inevitable reviewer disorientation, not to mention growing costs. At the same time, with too few categories, essential information may be lost. Unclear definitions of quality scale grades (when reviewers cannot clearly distinguish between SATISFACTORY and GOOD, etc.) make the whole evaluation process much less consistent and reliable.


The Logrus IT team is fully aware of the crucial importance of the metric development stage. All metrics went through several iterations as they were carefully developed, discussed with, and approved by the client. In the case of this online storefront, the metrics included up to 9 measured quality categories, such as clear structure (colors and sizes available, product fit and material, etc.), informativeness (sufficient, but not excessive, information is provided), and language (adequate for the target audience).

Our team also helped the client shortlist the most popular browsers for each platform (desktop and mobile), selected a team of 6 in-country reviewers representing the TA, and developed customized project instructions and step-by-step scripts for them.

Finally, we launched several quality evaluation projects, one for each area (product listings, search, typical scenarios, etc.). Each project used its own customized metric and comprised numerous subprojects representing all possible reviewer/browser/platform combinations for the same content or procedure. The number of reviewers varied between 3 and 6, depending on the relative browser and platform popularity in the target market.
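The subproject structure described above is essentially a Cartesian product of reviewers and per-platform browser shortlists. A quick sketch with hypothetical names (the actual shortlist and reviewer counts came from market data):

```python
# Sketch of enumerating subprojects as reviewer/browser/platform combinations.
# Reviewer and browser names are hypothetical placeholders.
from itertools import product

reviewers = ["R1", "R2", "R3"]
platforms = {"desktop": ["Chrome", "Firefox"], "mobile": ["Chrome", "Safari"]}

subprojects = [
    (reviewer, platform, browser)
    for platform, browsers in platforms.items()
    for reviewer, browser in product(reviewers, browsers)
]
print(len(subprojects))  # 3 reviewers x 4 platform/browser pairs = 12
```

In the real project, the number of reviewers per combination was weighted by browser and platform popularity rather than being uniform, as noted above.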

LQE Results

Results. Upon project completion, the Logrus IT team provided the client with the following:

  • Summarized high-level feedback and concrete recommendations instrumental for improving the user experience.
  • Processed stats for each area averaged for all platforms and reviewers (including quantitative evaluations for each area).
  • All data produced by each reviewer, exported in Excel format for validation or alternative in-depth analysis.

General feedback. Overall, the experience proved positive across most areas: website design, professional look, product information, typical user actions, etc. At the same time, widespread minor localization issues (capitalization, punctuation, spelling, and grammar) permeating all areas negatively affected the general impression.

More detailed conclusions touched upon improvements for specific areas. For example, marketing emails were considered not informative enough, and reviewers would have preferred better targeting. Search sometimes returned irrelevant results (specific examples were provided to the client).


Interestingly, the reviewers representing the TA (who are neither software nor standards experts) produced diverse opinions on the correct way to present prices in euros, defying existing locale standards. They were far from unanimous on the best price format (10 EUR or 10 €) and on whether a space is required between the number and the currency symbol (10EUR vs. 10 EUR; 10€ vs. 10 €).
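For reference, CLDR locale data for most continental European locales places the euro sign after the amount, separated by a (non-breaking) space, e.g. "10 €". A tiny illustrative helper enumerating the variants the reviewers debated (the function is hypothetical, not a formatting recommendation):

```python
# Illustrative helper contrasting the euro price formats the reviewers debated.
# Note: per CLDR data, most continental European locales use "amount SPACE €";
# this function merely enumerates variants and is not the portal's code.
def format_price(amount, symbol="€", symbol_after=True, space=True):
    sep = " " if space else ""
    if symbol_after:
        return f"{amount}{sep}{symbol}"
    return f"{symbol}{sep}{amount}"

variants = [
    format_price(10, "EUR", space=False),  # "10EUR"
    format_price(10, "EUR"),               # "10 EUR"
    format_price(10, "€", space=False),    # "10€"
    format_price(10, "€"),                 # "10 €"
]
```

The disagreement among native-speaking reviewers shows why locale conventions are worth enforcing centrally rather than left to individual judgment.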


The range and clear structure of the feedback provided, including concrete recommendations for the product in general as well as for each distinct area based on stats (rather than gut feeling or anecdotal evidence), allowed the client not only to improve the product but also to address bottlenecks and pain points in the globalization process. You are welcome to try it with your product or content, whatever it may be.

