Background
The objective was to design a solution that would let studio members collaborate, and seek and provide assistance and resolution, in the absence of a shared social workspace. The outcome was a digital Q&A and discussion platform hosted on the studio intranet, which we envisioned, prototyped and evaluated within the span of a month.
The success of any organization rests upon its practices, which in turn depend on the culture that thrives among its people. As a newborn studio under the Deloitte umbrella, one thing we learned to carry forward from our counterparts right from the very start was to welcome openness and promote inquiry sans judgement or retaliation. Asking and addressing questions often helps us understand and improve the social dynamics within an organization, which in turn helps its culture evolve in a direction that naturally breeds great practices.
As members of a studio in its nascency, being in the office helped us ground our problems in a shared physical context. The space also afforded us a shared environment to collaborate and find solutions to those very problems, which meant immediate resolution of the foundational queries that every associate who joined us anew would inevitably raise.
While remote work was already an established practice at Deloitte, the COVID-19 pandemic and the subsequent lockdowns brought us face to face with a new remote-first world order. A remote-first setup meant that we could no longer leverage the advantages a shared workspace afforded us. Consequently, the people who joined us anew, freshers and lateral hires alike, had fewer points of contact to reach out to. As a result, the queries that would otherwise have distributed themselves across the studio floor were now concentrating in hot pockets around the senior associates, who were few and far between. This left us scrambling to find better avenues to make collaboration more seamless and help colleagues close out their workday in an environment no longer bound by mutually shared working hours.
We needed a solution that would make knowledge equitably available and accessible across the organization on demand, without overwhelming its natural leaders.
Planning and Discovery
Validating our hypothesis was of utmost importance to ensure that we had not set out to solve a problem that did not exist. My secondary research numerically reinforced the hypothesis. We surveyed the studio practitioners, and later ran interviews and focus group discussions with them to frame the problem in the context of the studio.
While I had experienced the impact of the shift personally, I wanted to ensure that mine wasn't an isolated experience. So I decided to test the hypothesis.
I started out by looking up how organizations at the forefront had been coping with this new and abrupt shift, and tried to learn about the practices followed in culturally analogous organizations that had always been 100% remote. I chanced upon the "State of Remote Work 2021", a report that Buffer publishes every year, and it was precisely what I had been looking for. While the Buffer report reinforced my hypothesis, it was equally important to establish a direct correlation between what I had personally observed at Deloitte and what the report described as the status quo of remote work.

So I pitched the idea to one of the senior managers heading the studio, and set out to conduct a survey to understand how people's means of collaboration had shifted under the new normal and what impact the shift had on their approach to seeking and providing solutions. The manager sponsored the pilot phase of the initiative and helped me build a team of associates: two UX designers and a business experience designer (BXD), with whom I worked to identify the key parameters and constraints that affected remote work, and accordingly, to draft the survey questionnaire.
We collected a total of 187 responses over a workweek, spanning the major disciplines across all Deloitte USI studios. As we analyzed the survey results, certain statistics began to clearly reveal themselves, demarcating our problem and solution spaces. Here are a few key metrics that stood out.

Equipped with this knowledge, we proceeded to better understand the nuances of the challenges the practitioners faced on a day-to-day basis. We conducted 5 one-on-one interviews with senior practitioners across the five major disciplines to understand the nature of queries they addressed, and a further 4 focus group discussions with the cohorts of seekers we had identified from the survey results.

Once the interviews and focus group discussions were over, we dumped our collective knowledge onto a FigJam board in the form of "How might we" questions, one question per sticky note, grouping similar ones together to identify broad themes that defined our problem space. The objective was to identify opportunity areas arising out of the challenges called out by the interview participants.

Now that we had a good grasp of our problem space, it was time to draft a strategy that would drive our solution in the right direction, and serve as our guiding light in case we were to hit an impasse.
Literature Review
While the seekers would naturally reach the platform in search of solutions, we needed a good engagement model to ensure that practitioners returned to the platform even when not actively seeking solutions. To achieve this, I reviewed the behavioral psychology literature available on knowledge and curiosity and tied it back to our How Might We questions to come up with 6 strategic principles to guide our decisions.
In any business-as-usual engagement, the business model of the client's product or service would have guided the engagement model of the prospective solution. But given that this was a homegrown initiative with equitable access to information at its heart, we were bound to ask ourselves: why does one seek knowledge?
Conventional wisdom would suggest that the answer is pretty simple—curiosity. But then, what is curiosity, and further, what makes us curious?
In his paper titled "A Theory of Human Curiosity", the British-Canadian behavioral psychologist Daniel Berlyne defined curiosity as "a drive which is reduced by the reception and subsequent rehearsal of knowledge." He went on to distinguish this form of curiosity, which he labelled Epistemic Curiosity, from Perceptual Curiosity. He argued that while lower animals also exhibit arousal (a perceptual curiosity drive) upon exposure to novel stimuli, "in an animal as well endowed in learning and remembering as the human adult, exploration is bound to leave a stock of permanent traces in the form of symbolic representations, which are manifestations of what we call knowledge."
He further elaborated on the notion of curiosity in his book "Conflict, Arousal, and Curiosity", where he distinguished between two types of exploratory behavior: Diversive Exploration (seeking stimulation regardless of source or content) and Specific Exploration (detailed investigation of novel stimuli to acquire new information). In his paper "Novelty, Complexity and Hedonic Value", he expanded upon Wilhelm Wundt's theory of optimal levels of stimulation (the relationship between stimulus intensity and hedonic tone) to posit that an intermediate arousal potential is needed for curiosity to be aroused. Too little stimulation might result in boredom and disinterest, whereas too much might lead to anxiety, aversion and withdrawal.

Building upon Berlyne's body of work, George Loewenstein of Carnegie Mellon University postulated the "Information Gap Theory" of curiosity. Grounding his work in the domain of specific epistemic curiosity, he argued that "curiosity arises when attention becomes focussed on the gap in one's knowledge." An information gap can be defined by two quantities: what one knows (objective), and what one wants to know (subjective). Curiosity arises when "one's informational reference point in a certain domain (what one wants to know) becomes elevated above one's current level of knowledge."
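To keep ourselves honest about what "elevated above" meant, it helps to reduce the theory to notation. The symbols below are our own shorthand for the two quantities, not Loewenstein's:

```latex
% A minimal formalization of the information gap (notation ours):
% k = one's current level of knowledge in a domain (objective)
% r = one's informational reference point in that domain, i.e.
%     what one wants to know (subjective)
\[
  G = r - k, \qquad \text{curiosity is aroused when } r > k .
\]
```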
Employing the above research, we proceeded to map the region of knowledge our solution would ideally serve. To achieve this, we revisited our discovery notes and plotted the challenges, problems and the practitioners' respective approaches to address them on the knowledge graph to map out the solution space and the nature of problems it must cater to. Understanding curiosity and information gap as described above gave us a benchmark to evaluate our problems and their respective solution approaches against.

Based on our consensus on the solution space, we defined a list of six strategic principles that would help us address the needs and desires underlying our How Might We questions. While our aim was to breed cultural openness across the studios, given that the knowledge occupying the major share of the solution space was primarily specific epistemic in nature (followed by specific perceptual), ensuring the truthfulness of the information available on the platform was paramount. Simultaneously, retaining a certain amount of flexibility in the form of discussions and shared stories allowed us to account for immediacy while inviting conversations.

With the above 6 principles guiding our product strategy, we followed up with lightning demo rounds to seek, share and scrutinize features spanning a variety of platforms across the web for inspiration. These inspirations would later help us remix and extend certain ideas to accommodate our use cases during ideation, while helping the BXDs draft the feature roadmap.
Ideation and Execution
We explored ideas and finalized features by evaluating them against our strategic principles. We drafted a storyboard illustrating a bird's-eye view of our user's journey, starting at the external point of entry and traversing the key screens. We chose a launch announcement article on the studio intranet as our starting point.
Building upon our inspiration and keeping the principles in mind, we independently roughed out some solution sketches. We later congregated to discuss, refine and mash our ideas together into the final storyboard for our pilot product. Given that the platform was to be integrated into the studio intranet, we chose the global navigation and the launch announcement banner as the points of entry on our storyboard. Our objective was to ensure that our target journey felt as authentic as possible.

An instantly accessible search interface was key to ensuring that the practitioners found the information they were looking for as quickly as possible. To make this possible, we chose to leverage the tech savviness of our target audience and assigned easily learnable keyboard shortcuts for key interactions such as search and the Ask a question CTA.
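As a rough sketch of what such shortcuts involve on the web, a global key listener might look like the following. The specific keys and element ids here are placeholders, not our actual bindings:

```typescript
// Minimal sketch of global keyboard shortcuts (keys and element ids
// are hypothetical placeholders, not the platform's real bindings).
document.addEventListener("keydown", (event: KeyboardEvent) => {
  const target = event.target as HTMLElement;
  // Don't hijack keystrokes while the user is typing in a field.
  if (["INPUT", "TEXTAREA"].includes(target.tagName)) return;

  if (event.key === "/") {
    event.preventDefault(); // keep "/" from landing in the page
    document.querySelector<HTMLInputElement>("#global-search")?.focus();
  } else if (event.key === "a") {
    document.querySelector<HTMLButtonElement>("#ask-question")?.click();
  }
});
```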

One crucial decision we made was to allow questions to have multiple tags instead of binding them to exclusive categories. Given that the studio has a diverse audience, this gave us the flexibility to let conversations on intersectional topics thrive.
Another feature we placed specific emphasis on was the sharing interface. Users could be tagged on questions, which would notify them elsewhere on the intranet and/or trigger email notifications, while the option to copy a link allowed one to share the question on other platforms.

One principle we placed special emphasis on was keeping the conversations on the platform driven towards deriving outcomes. To ensure this, we segregated answers from discussions. This afforded the user a better understanding of the question's context while keeping the interface decluttered by progressively disclosing only what was necessary.

While giving the user free rein to use any tag would have afforded immense flexibility, the resulting plethora of tags would spell disaster for our information scheme. Hence we decided to let the user pick from a curated set of tags using a type-ahead filter on the new question form.
But the success of the information scheme could not come at the cost of the users' natural course of interaction, and restricting tags to a limited list would mean just that. It was therefore imperative for us to answer two key questions:
What would happen to a similar-meaning tag that doesn't exist on the tag list (UX and User Experience, for example)? And what would be the course of action when the user enters a tag that doesn't exist on the list?
Thankfully, the lightning demos had led us to a feature on Stack Overflow that let us seamlessly address the first half of the problem: synonyms. Synonyms are tags clubbed under a parent tag that exemplifies and encompasses the scope of the synonym tag.
To address the latter part of the problem, we let the user add a tag that didn't belong to the tag list, subject to an administrator's approval. Until approved, only the user who added the tag would be able to see it. The administrator would have three options to choose from: approve the tag, reject it, or synonymize it, i.e., merge it under a parent tag.
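A minimal sketch of how this tag model could hang together, with the moderation states described above (the names and shapes are illustrative, not our production schema):

```typescript
// Illustrative tag model with synonym links and moderation states
// (field and function names are hypothetical).
type TagStatus = "approved" | "pending" | "rejected";

interface Tag {
  name: string;
  status: TagStatus;
  synonymOf?: string; // parent tag name, if synonymized
  createdBy: string;  // pending tags are visible only to this user
}

// Resolve a tag to its canonical parent by following synonym links,
// so "User Experience" and "UX" index the same questions.
function canonicalize(name: string, tags: Map<string, Tag>): string {
  let tag = tags.get(name);
  while (tag?.synonymOf) {
    tag = tags.get(tag.synonymOf);
  }
  return tag?.name ?? name;
}

// The administrator's three choices on a pending tag: approve it,
// reject it, or synonymize it under an existing parent tag.
function review(
  tag: Tag,
  decision: "approve" | "reject" | "synonymize",
  parent?: string
): void {
  if (decision === "synonymize" && parent) {
    tag.status = "approved";
    tag.synonymOf = parent;
  } else {
    tag.status = decision === "approve" ? "approved" : "rejected";
  }
}
```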

To ensure that the user didn't have to invest too much effort just to get started, which would otherwise likely have led to extensive drop-offs, we chose to start the user off with a few tags to follow based on their discipline cohort, which they could edit and reorder later. Allowing users to edit tags also puts the IKEA effect, the cognitive bias by virtue of which people tend to place higher value on a product they invest effort into, to good use.
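For illustration, seeding followed tags from the discipline cohort could be as simple as a lookup with a safe fallback. The cohort names and tags below are invented for the sketch:

```typescript
// Hypothetical sketch: seed a new user's followed tags from their
// discipline cohort; the user can edit and reorder the list later.
const starterTags: Record<string, string[]> = {
  "ux-design": ["user-experience", "figma", "accessibility"],
  "business-design": ["strategy", "research", "workshops"],
  "engineering": ["frontend", "apis", "devops"],
};

function seedFollowedTags(cohort: string): string[] {
  // Return a copy so later edits don't mutate the shared defaults.
  return [...(starterTags[cohort] ?? [])];
}
```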

We went on to prototype our solution in high fidelity on Figma, and put our ideas to the test the following week with 5 participants shortlisted based on their respective responses to our initial survey.
Usability Evaluation
We evaluated the solution for usability and discoverability with 5 participants. The onboarding flow was a standout success, while participants repeatedly faced minor discoverability challenges with the discussion flow. We revised the design to accommodate the feedback in the first release of our MVP.
We hosted the task analysis sessions over Zoom, with two of us running each session with an individual participant while the rest of the team observed and took notes. After a few introductory exchanges and setting the context for our participants, we discussed a few hypothetical scenarios dotted with activities for them to perform on the prototype. We encouraged the participants to walk us through their approach and watched them react as they performed the various activities. We further probed them to understand why they did what they did, and assisted them with hints whenever they got stuck.
All the participants found the one-click onboarding flow to be a breeze. The practitioners appreciated how it was limited to what was relevant and yet got them started as quickly as it did.

The search field saw some mixed reactions as the participants interacted with it. One participant actively used the keyboard shortcut to access the search bar, while another tried to click on the search wildcard hints in an attempt to append them to their search query.

The practitioners found the structure of the question feed page convenient to navigate. Yet there seemed to be a slight tension in their reactions as they navigated and interacted with the list of questions. When probed, one participant mentioned that they would have preferred to see the question details upfront, as the question titles alone didn't furnish sufficient context.

One standout among the challenges the users faced was discovering the discussions tab on the question page. The task at hand was to seek clarification regarding the question. While the expectation was that users would switch to the discussions tab to add a comment with their query, only one user completed the task unaided. But once guided towards the tab, they immediately understood its intent. Ergo, making discussions discoverable was an immediate necessity.

For the most part, the changes required based on the feedback were pretty straightforward. We fixed the search wildcard hints by changing their appearance to make them look less clickable. Adding the question details to the feed question card made sense, especially given that we did not have a lot of content to start with. So we updated the information architecture of the feed question to include the question details up to 300 characters. We made a side note to revisit the page design once we had a significant influx of questions, to allow users to choose between compact and detailed views.
But these usability issues paled in comparison to the discussion discoverability issue. Discussion was vital to driving conversations, and consequently outcomes, on the platform. Hence, we redesigned the question detail page to better accommodate discussions and drive people to participate in conversations before attempting to answer a question.
To achieve this, we incorporated a discussion snippet right after the question to help users discover the discussion section and scan the ongoing conversations at a glance. Further, in order not to drown out any accepted answer, we ensured that the accepted answer took precedence over the discussion section, keeping discussions focussed towards driving outcomes.

Adding the top comments right after the question would ensure that users took stock of the conversations going on in the context of the question. But how would users discover discussions when there were no comments yet? So we designed a banner making users aware of the feature in its empty state.

With these corrections in place, we handed the designs off to be developed and tested before the scheduled launch. As the first release rolled out, we ran an email announcement and internal communications campaign to spread awareness among the studio audiences regarding our new platform, and assessed the analytics data to gauge its success.
Outcomes
We gathered analytics data and evaluated the performance of the platform against our expectations. Following some early wins, we identified opportunity areas to work on in successive releases to enhance engagement on the platform. Overall, the platform showed great promise in its adoption metrics, while much potential remained untapped in driving user engagement.
As the platform went live, practitioners started pouring in, and so did the data. But instead of reading too much into the early influx of users, and consequently the activity, we let the numbers settle after the initial spike. We took stock of the analytics data 100 working days after the launch of the platform to gather our initial insights. As we had expected, we crossed the first 100 active users mark within the first week of the platform's launch.
A much more crucial number, one that better represented the vitals of our platform, was the first 100 questions mark, which took us 71 days to reach.
The platform nonetheless had clear traction, as revealed by the adoption rate and the daily active users over those 100 days. So we delved deeper to understand what was fundamentally causing the slowdown. While upvotes per question stood at a healthy mark (especially given that questions averaged fewer than one answer each), a portion of the questions were going unanswered.

We dug deeper to break down the unanswered questions by their respective crafts, and talked to the SPOCs assigned to those crafts on the platform to understand what was causing the questions to go unanswered. The challenge, apparently, was not one of intent but of availability. The specialists wished to answer the questions, but they were usually unavailable when a question popped up in their inbox, and it would effectively slip their mind by the time they became available.
So as our next step, we plan to introduce scheduled notification digests, among other features, which would allow users to schedule when they want to be notified about the day's open questions based on their availability.
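A back-of-the-napkin sketch of what such a digest could look like, with hypothetical field names and a filter over the day's open questions:

```typescript
// Hypothetical sketch of the planned notification digest: each user
// picks an hour, and the digest collects the day's open questions
// matching the tags they follow.
interface Question {
  title: string;
  tags: string[];
  answerCount: number;
}

interface DigestPreference {
  userId: string;
  sendAtHour: number;     // local hour (0-23) chosen by the user
  followedTags: string[];
}

function buildDigest(
  pref: DigestPreference,
  openQuestions: Question[]
): Question[] {
  // Keep only unanswered questions that overlap the user's tags.
  return openQuestions.filter(
    (q) =>
      q.answerCount === 0 &&
      q.tags.some((t) => pref.followedTags.includes(t))
  );
}
```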
In conclusion, my team and I succeeded in building a platform and gaining traction and returning users among those who had sought a new avenue to seek answers and discuss ideas. Now that we have the seekers congregated in one place, our immediate priority is to tweak our engagement model to drive answerers to actively engage and participate on the platform.