Global Legal Review (GLR)

Fjord created a digital MVP to simplify a complex, global legal approval process within Accenture.

What made this project interesting was my blended role as content lead, interaction designer, and researcher. It required me to wear many hats and lead across disciplines, because content was the foundation of everything. The project also gave me the opportunity to work alongside developers to build an MVP quickly.

Context

The GLR is a messy legal process within Accenture for approving new tools and technologies for use by employees globally. Its purpose is to ensure that individuals' personal data rights remain protected under the law once new tools are deployed.

Different countries have different laws, and there are dozens of necessary actors involved. The foundation of the process was a long, inefficient, and confusing questionnaire, designed by lawyers but used by non-lawyers. Work was done entirely manually, none of the process was digitized or automated, and there was no consolidated home for information. This made approving a global tool extremely complex, complex enough that, in the current state, it could take years.

My team’s mission was to change this.

Who are the key actors?

Though there are countless actors involved, for the MVP, we focused on the experience of two:

(1) The project manager (PM) who completes the questionnaire and is seeking to approve a new tool

(2) The LAC (lawyer) who reviews and owns the case and carries it through the process

My role

As content design lead and interaction designer, I led the redesign of the questionnaires serving as the foundation of the GLR process. I worked in collaboration with a researcher and business designer on the process side, as well as a visual designer and interaction designer on the UX side.

The Challenge

How might we digitize the questionnaire forming the foundation of the GLR process to make the approval experience faster, more efficient, and less painful for the key actors?

Research

My work began with understanding user perspectives through the lens of the questionnaire, which formed the foundation of the overall process.

Surveys

I designed two surveys, one for each actor, each prompting respondents to provide insight on every question in the questionnaire.

For LACs, I sought to understand:

  • What is the law corresponding to each question?

  • What is the conditional impact of a particular answer to each question?

  • Rank the legal significance, and explain

For PMs:

  • Do you understand each question? If not, why not?

  • Who do you go to for help?

  • Rank difficulty, and explain

Interviews

I also held qualitative interviews to understand each actor's perspective at a higher level and to clarify my understanding of the survey responses once they were complete.

Insights

My challenge was to address the pain points that emerged from the perspective of each actor.

PMs

  • Complex legal language makes it difficult for PMs to know how to answer questions

  • No categorization or sensible ordering of content

  • No transparency into where PMs are in the process, what still remains, and the estimated time to approval

  • No control over the results; it is unclear what the impact of answering questions in particular ways will be, or how to make choices that ensure the best chance (and speed) of legal approval

LACs

  • Follow-up questions are identified entirely manually, relying on the LAC's own knowledge

  • Follow-up questions add additional steps and exchanges and further slow down the process

  • Skipped questions slow LACs down

Content strategy

Based on my research, I began strategizing a redefined content approach.

Categorization: I categorized questions that were related, which would aid in creating a sensible order.

Elimination: I removed questions that weren’t necessary.

Content Logic: Some questions, I found, were only sometimes necessary. I flagged the questions that were clearly conditional triggers and mapped them to their child questions (a rough sketch of this mapping follows this list).

New Questions: I identified areas where questions needed to be broken down into multiple questions to ensure clarity for PMs.

Copywriting: I reworked questions that were confusing or commonly misunderstood.
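
To illustrate the content logic, here is a minimal sketch in TypeScript of how a conditional trigger could be modeled as data. The question text, IDs, and field names are hypothetical placeholders, not the actual GLR data model.

```typescript
// Hypothetical model of a conditional question (illustrative only).
type QuestionId = string;

interface Question {
  id: QuestionId;
  text: string;
  // Maps a specific answer value to the child questions it triggers.
  triggers?: Record<string, QuestionId[]>;
}

const dataTransferQuestion: Question = {
  id: "q-12",
  text: "Will personal data be transferred outside its country of origin?",
  triggers: {
    Yes: ["q-12a", "q-12b"], // follow-ups appear only when "Yes" is selected
  },
};

// Given an answer, return the follow-up questions to display.
function triggeredQuestions(question: Question, answer: string): QuestionId[] {
  return question.triggers?.[answer] ?? [];
}
```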

As a strategy started to take shape, I began drafting an information flow map.

My primary goal was to reduce the time required by the PM to understand questions, digest information, and provide the information the LAC needs. In one current-state example, 30 clicks were required to answer a single question.

Iteration

Over weeks, I interacted with LACs and stakeholders to test the content logic through a variety of methods.

Sample case tests: testing the question flow against real cases with real lawyers; the selected cases spanned all key scenarios

Feedback sessions: regular sessions for continually sharing iterations and giving all actors the opportunity to poke holes, ask questions, and give feedback

Mapping questions to process: ensuring crucial questions were asked at the right times, in alignment with the redesigned process

Examples of these conversations are included below.

“Actually, question X should also be asked if question Y is answered “Yes”, because I recently had a case where...”

— Julietta, Lead LAC

“Doesn’t question B need to be asked before question A, so that question A can be tailored based on the response to question B, and...”

— Julietta, Lead LAC

The team held collaborative workshop sessions to map GLR questions to steps in the GLR process. This enabled us to design a solution where the process a user needed to go through was in-sync with the questions being asked at different points in the journey.

My final solution structured 101 questions across 2 sections, 9 sub-sections, and 21 pages, substantially reducing the number of questions while improving efficiency and clarity, and in itself defining the information architecture for the tool.
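
As a rough illustration of that hierarchy, the structure could be expressed along these lines in TypeScript; the titles and question IDs below are placeholders, not the real GLR content.

```typescript
// Illustrative outline of the information architecture:
// sections contain sub-sections, sub-sections contain pages,
// and pages contain questions. All names are placeholders.
interface Page {
  title: string;
  questionIds: string[];
}

interface SubSection {
  title: string;
  pages: Page[];
}

interface Section {
  title: string;
  subSections: SubSection[];
}

const questionnaire: Section[] = [
  {
    title: "Basics",
    subSections: [
      {
        title: "Tool overview",
        pages: [{ title: "About the tool", questionIds: ["q-1", "q-2"] }],
      },
    ],
  },
  {
    title: "Details",
    subSections: [
      {
        title: "Data handling",
        pages: [{ title: "Data transfers", questionIds: ["q-12"] }],
      },
    ],
  },
];
```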

Interaction Design

In parallel with working out the content redesign, we held multiple brainstorm sessions to explore how to address key pain points and structure the experience. To identify concepts to move forward with, the team voted on their favorite approaches with stickers on the wall.

We faced many challenges as we translated the content design into its supporting interaction design.

Can questions be skipped?

Because skipping questions would support PMs but frustrate LACs, we were unsure how to decide on a solution that would satisfy both sets of user needs.

🟢 The pros for skipping:

Enable PMs to move forward with the questions they can answer rather than being held back and stuck on the ones they can’t.

🔴 The cons:

Skipping would frustrate LACs, because they would be missing key pieces of information they needed to evaluate and process the case.

Ultimately, we decided on a compromise: allow users to skip a question and continue, but warn them that they will need to complete it before they can submit the section to the LAC.
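
A minimal sketch of that compromise, assuming a hypothetical section state and helper names (illustrative, not the MVP's actual code):

```typescript
// Skip-and-warn compromise (illustrative): a PM can skip a question and keep
// moving, but the section cannot be submitted to the LAC until every
// required question has an answer.
interface SectionState {
  answers: Map<string, string>; // questionId -> answer
  skipped: Set<string>;         // questionIds the PM has skipped so far
}

function skipQuestion(state: SectionState, questionId: string): void {
  state.skipped.add(questionId);
}

function canSubmitToLAC(state: SectionState, requiredIds: string[]): boolean {
  return requiredIds.every((id) => state.answers.has(id));
}

function submissionWarning(state: SectionState, requiredIds: string[]): string | null {
  const missing = requiredIds.filter((id) => !state.answers.has(id));
  return missing.length > 0
    ? `${missing.length} question(s) still need an answer before you can submit to the LAC.`
    : null;
}
```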

We explored how to design for complex questions that allowed users to select all that apply OR none of the above as a response.

We considered two main approaches:

1️⃣ All checkboxes

2️⃣ Checkboxes for the select all that apply, and a radio button for none of the above

We spoke with users and determined that option one was preferred: all checkboxes, combined with clear use of language and visual spacing, including a dividing line to separate the "select all" choices from "none of the above".

However, there was an outlier use case in which selecting an option would trigger a modal. For this instance, we used a button, signaling to the user that such a transition might occur.
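
One plausible way to wire up the "all checkboxes" approach is sketched below: checking "none of the above" clears every other choice, and checking any other option clears "none of the above". The names are illustrative assumptions, not the shipped logic.

```typescript
// "None of the above" is mutually exclusive with every other checkbox.
const NONE_OF_THE_ABOVE = "none-of-the-above";

function toggleOption(selected: Set<string>, option: string): Set<string> {
  const next = new Set(selected);
  if (option === NONE_OF_THE_ABOVE) {
    // Toggling "none" either clears everything or selects only "none".
    return next.has(NONE_OF_THE_ABOVE)
      ? new Set<string>()
      : new Set<string>([NONE_OF_THE_ABOVE]);
  }
  next.delete(NONE_OF_THE_ABOVE); // any concrete choice clears "none"
  if (next.has(option)) {
    next.delete(option);
  } else {
    next.add(option);
  }
  return next;
}
```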

We grappled with how, when, and where to display questions that had been conditionally triggered.

A modal came to mind, but we immediately eliminated this option, knowing it would be obtrusive.

That left three remaining options:
1️⃣ Display the conditional trigger(s) at the bottom of the current page
2️⃣ On a separate page entirely
3️⃣ Immediately after the current question in context.

Displaying them on a separate page would have introduced an added layer of complexity for the technical team, and we also found that it would confuse or frustrate PMs to see new pages spontaneously added to the progress bar (when they thought they were close to the finish line)!

We chose to display the questions immediately following the question in context, so the user can see the trigger happen in real-time and understand why the trigger took place, and so content remained contextual.

Sometimes, conditionally triggered questions would themselves add or remove further conditionals. When this occurred, we kept everything within the same container to preserve simplicity.
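
To make the inline behavior concrete, here is a hedged sketch of how the visible question list could be computed, reusing the Question type and triggeredQuestions helper from the earlier sketch; nested triggers expand recursively so they stay with their parent.

```typescript
// Compute the questions to render on a page, in order: each triggered
// follow-up appears immediately after the question that triggered it,
// and nested triggers expand recursively within the same container.
function visibleQuestionIds(
  pageQuestionIds: QuestionId[],
  questionsById: Map<QuestionId, Question>,
  answers: Map<QuestionId, string>
): QuestionId[] {
  const result: QuestionId[] = [];

  const expand = (id: QuestionId): void => {
    result.push(id);
    const question = questionsById.get(id);
    const answer = answers.get(id);
    if (!question || answer === undefined) return;
    for (const childId of triggeredQuestions(question, answer)) {
      expand(childId);
    }
  };

  pageQuestionIds.forEach((id) => expand(id));
  return result;
}
```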

There are nearly 200 countries to choose from, and often only some are involved in a case. Given that, we had to explore how to support users in specifying geographies with ease.

Initially, we explored solutions such as predictive text entry, as well as the more standard alphabetized dropdown.

However, from talking to LACs, we found that the most common use cases are global cases and region-specific cases, so we wanted our solution to accommodate those easily.

We settled on a selection module that enabled group selections. This accommodated the most common scenarios with ease, and supported edge cases through clear categorization by region, along with the familiar alphabetization within each region.
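
A rough sketch of that grouping in TypeScript is below; the region names and country lists are truncated placeholders, and the helpers are assumptions rather than the MVP's actual implementation.

```typescript
// Region-grouped country selection (illustrative): selecting a region
// toggles every country in it, and "Global" selects everything.
const REGIONS: Record<string, string[]> = {
  Americas: ["Argentina", "Brazil", "Canada", "United States"],
  Europe: ["France", "Germany", "Italy", "Spain"],
  // ...remaining regions, alphabetized within each group
};

function toggleRegion(selected: Set<string>, region: string): Set<string> {
  const countries = REGIONS[region] ?? [];
  const next = new Set(selected);
  const allSelected = countries.every((country) => next.has(country));
  // If the whole region is already selected, deselect it; otherwise select it all.
  for (const country of countries) {
    if (allSelected) {
      next.delete(country);
    } else {
      next.add(country);
    }
  }
  return next;
}

function selectGlobal(): Set<string> {
  return new Set(Object.values(REGIONS).flat());
}
```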

We also grappled with the design of the questionnaire progress bar, and how to communicate to users where they are in the journey and what remains, while staying true to our information architecture.

Fill: We denoted completed pages with the color red, current pages with white, and future pages with gray.

Size: The larger circles indicated sub-sections — groups of pages related to the same higher topic. The smaller circles indicated pages within a sub-section.

Icon: Beside section headers, a checkmark icon indicated completion of the section, and a disabled, grey lock icon indicated a future section that was not yet accessible.

White space: For categories with more than one page, we displayed only the name of the current sub-page, to reduce user overwhelm and show only what's relevant.
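
For illustration, the page states driving the fill colors could be derived along these lines; the status names and color strings simply mirror the description above and are not the actual design tokens.

```typescript
// Progress bar page states (illustrative): pages before the current one are
// completed, the current page is highlighted, and later pages are upcoming.
type PageStatus = "completed" | "current" | "future";

function pageStatus(pageIndex: number, currentIndex: number): PageStatus {
  if (pageIndex < currentIndex) return "completed";
  if (pageIndex === currentIndex) return "current";
  return "future";
}

// Fill colors as described above: red for completed, white for current, gray for future.
const FILL_COLOR: Record<PageStatus, string> = {
  completed: "red",
  current: "white",
  future: "gray",
};
```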

I collaborated with two front-end developers to build an MVP.

I communicated the conditional logic, information architecture, and UX/visual design, and sat side by side with them to answer their questions, brainstorm design alternatives to technical challenges, and test the logic and correct bugs in their work.

Our solution addressed pain points felt by both groups of key actors.

Guiding screens help PMs feel supported and informed about what they're doing and why.

Categorization and hierarchy of content makes sense to users.

Progress bar makes clear to users where they are in the process and how much remains until the next step.

Question types vastly improve efficiency and reduce time to complete. A single question in our redesign can cover 30+ questions from the old questionnaire!

Legal text explains the why behind the process.

Case dashboard creates a consolidated home for case management.

Contextual triggers involving choice inform users of the impact of decisions.

Contextual help aids PMs when they don’t understand a question.

Notifications & comments keep exchanges of information within the tool and automate changes to cases.

Explore our screens & prototypes for a closer look.

"Basics" Screens

"Detail" Screens

Prototypes

MVP Micro Experiences

The MVP was built to inspire Accenture to invest in building a new tool, and we succeeded in our goal. The first release of a new GLR tool, inspired by our work, was built immediately after our project’s completion, followed by incremental releases to grow towards our vision.

Learnings

1. Don’t be afraid to wear multiple hats

My ability to lead both the content design and the interaction design of the solution, and marry my unique knowledge of each discipline, was crucial to this project’s success.

2. Communicate often

When different stakeholders and actors with vastly different opinions and perspectives are involved, they must be communicated with often and given a voice at the design table. The greatest human need is to feel heard and understood, and as a designer I learned the importance and impact of making everyone involved feel that way.

3. Never stop advocating for the simplification of the complex

Our key actors were lawyers: highly intellectual and firm in their belief that complex information must remain complex and fixed, because that's just the way it is. We had to hold their hands and be patient as we transformed what they saw as their one and only source of truth into something radically different that still satisfied their core needs.