There’s a lot of great content out there about product decision-making frameworks and strategy. This is not one of those.
Instead, this post shows how we apply many of the principles found in those frameworks to improve our product in a fast, iterative way.
We’ll share the “process”, tools, and metrics that inform our decisions. We hope this helps—whether it gives you new ideas or shows you what not to do (we hope it’s the former).
We want to give a special thanks to our friends at Mixpanel and Canny—two of the most important tools we use in this process—for their feedback and comments on this post.
The process: Release and improve
At Visily, we prioritize speed over perfection in all we do. Our processes are living, breathing workflows that can be amended, accelerated, or discarded when they’ve outlived their usefulness.
In a nutshell, our product development process is simply to release features, observe how users engage with them, and then improve them¹. To do this, there are 5(ish) steps we follow², outlined below. In most cases, this work is done in "squads" of a few folks from different disciplines (prod, eng, design, etc.).
1. Identify the problem
No problem can withstand the assault of sustained thinking. – Voltaire
Identifying the right problems to solve is both an art and a science (or maybe just magic), requiring a combination of intuition, user feedback, and usage data. This is the most important part of the process.
We begin with an insight about either a problem our software should solve (new feature) or one we’ve tried to solve but haven’t (feature improvement). At times, we stumble upon this insight while investigating an underperforming metric, but just as often, we discover the insight while actually using our own product. Regardless, everything we release is a response to a problem that we identify.
Choose the “right” problems to solve
Translating a product vision into a concrete product plan is really hard (so much so that a full blog series could be dedicated to the topic…). To keep us from veering off course, we have less of a "roadmap" and more of a "checkpoint" system for our product: we focus less on a fixed set of features and more on the capabilities our product must have to "unlock" the next phase of problems Visily can solve.
We debate these capability requirements internally and then often take our informal shortlist to users. This process is a safeguard against simply building whatever a user requests: we start with the pieces we believe are necessary to reach the next milestone and then vet that list with users, against submitted feedback, etc.
Getting users to talk
The easiest way to get user feedback is to actively solicit it within the product. To organize and monitor feature requests, we rely heavily on Canny—not just for feature request upvotes but for understanding why users request things.
Sometimes it really is as simple as building something many people have requested. More often than not, however, we have to seek out more clarity on what users are requesting and why. In our experience, failing to fully understand the user's request is the best way to build features that don't get used.
Tactically, we make liberal use of the “Comments” section in Canny to surface the underlying “Jobs To Be Done”. Sometimes seemingly unrelated requests are simply two different paths to the same destination, and the only way to figure that out is to engage with users further.
The only time it’s a good idea to check out the comments section.
The comments section in Canny is a great way to have one-to-many conversations to solicit more information. When the team leaves a comment, voters get emails to participate in the discussion. This allows the team to dig into use cases and better understand “Jobs to be Done” at scale.
– Sarah Hum, CEO @ Canny
2. Develop Hypotheses
We make improvements to existing features as a result of something we see in our metrics or from dog-fooding the feature: adoption is below expectations, user ratings are subpar, an experience feels confusing, etc. Often, we can formulate a working theory on the “why” of the problem with a cursory review of user sessions. This can be done in Mixpanel or another product analytics or session replay tool.
Other times, metrics or user sessions don’t immediately surface why the feature is underperforming. To explore possible explanations, we use a few different approaches:
Watch segmented user sessions: A common complaint of session replay is that it’s time-consuming to do. Because we know the specific contexts where we expect a user to engage with the feature in question, it’s easier to pinpoint specific sessions to watch. It’s amazing how quickly usage patterns emerge after watching just a handful of sessions. We find it’s most helpful to watch together in a group, as each person brings different perspectives based on their area of expertise.
Engage super users: This is often the last resort for us, simply because it involves a lot of coordination and back-and-forth correspondence. Because time is of the essence, we try to find the users who are most likely to respond to us. Generally, this is done by either (1) pulling feedback related to the feature from our Canny board or (2) running a feature usage frequency report in Mixpanel to surface the top users of the feature (a rough sketch of the latter follows this list).
Collaborate in a project board: We create a project board in Visily where we can put all the different notes, screenshots, flowcharts, etc. We find a canvas tool works best because it’s easy to paste in any type of artifact, not just text. While Visily is great for this, you can do the same in any number of collaboration tools.
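For the Mixpanel route in option (2), here's a rough sketch of what that usage-frequency pull could look like. It assumes Mixpanel's raw event export API and a hypothetical "Feature Used" event name; treat it as a starting point rather than our exact setup.

```typescript
// Sketch: surface a feature's heaviest users via Mixpanel's raw export API.
// "Feature Used" and the date range below are hypothetical placeholders.
const API_SECRET = process.env.MIXPANEL_API_SECRET ?? "";

async function topFeatureUsers(event: string, from: string, to: string, limit = 10) {
  const params = new URLSearchParams({
    from_date: from,
    to_date: to,
    event: JSON.stringify([event]), // the export API expects a JSON array of event names
  });
  const res = await fetch(`https://data.mixpanel.com/api/2.0/export?${params}`, {
    headers: { Authorization: `Basic ${Buffer.from(`${API_SECRET}:`).toString("base64")}` },
  });
  // The endpoint returns newline-delimited JSON, one event per line.
  const counts = new Map<string, number>();
  for (const line of (await res.text()).split("\n")) {
    if (!line.trim()) continue;
    const { properties } = JSON.parse(line);
    counts.set(properties.distinct_id, (counts.get(properties.distinct_id) ?? 0) + 1);
  }
  // Rank users by event frequency and keep the top N: these are the
  // "super users" most worth reaching out to.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, limit);
}

topFeatureUsers("Feature Used", "2024-01-01", "2024-01-31").then(console.table);
```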
Once we’ve done a deeper dive into the issue, we develop a theory about the problem—first, individually and then together as a group. The value in doing this individually first is that it allows each person to formulate their unique ideas, independent of the groupthink or power dynamics that sometimes go undetected.
From here, we're typically able to agree on a hypothesis and are ready to create potential solutions.
3. Design solution(s)
At Visily (the company), anyone can propose an idea, independent of their specific domain or expertise; with Visily (the product), anyone can visualize it—regardless of design skill (sorry, I have to shill a little bit here). While it's not a requirement that everyone create their own version of a solution, it's empowering that anyone can do it. Some of our best ideas have come from the most unconventional places.
To ensure anyone can quickly prototype a solution, we capture key screens, flows, or components with the Visily Chrome Extension and import them to our project board. Screenshots taken with the extension are converted in Visily into components that anyone can share or modify, meaning everyone can prototype a solution (not just designers).
Depending on the complexity of the problem, we may design different solutions or collaborate on one. Whatever we do, we aim to get it in front of real users as soon as possible.
4. Validate the solution
For all solutions, our internal team forms the first "validation" layer by default, since we QA each release and use the product internally ourselves. Beyond that, solution validation takes different forms depending on the feature's importance and complexity. Below is the rough heuristic we use to determine which validation path to take. As you'll see, these categories aren't mutually exclusive, so the validation rules are flexible:
Big, important features → Live user sessions: “big” features are ones that are either core to our product, have a lot of dependencies, or will require a lot of work. These are the ones where the Silicon Valley mantra “gO fAsT AnD bReAK tHiNgS” does not apply. We solicit direct feedback from relevant users (typically the ones from the cohort/segments previously mentioned). Often, we schedule 15-minute calls to take them through the prototypes.
Low-dependency features → Launch: We often launch or improve things that, while important to a subset of users, are relatively disentangled from the core app experience. Examples might include Chrome Extension updates or new Smart Components. These are often easier to update because they’re more modular than core infrastructure, so we “launch” them—that is, release to production and heavily promote—to get user engagement with them. That engagement creates the feedback that kicks this whole process off again!
Speculative features → Quiet release: When working out the kinks of a brand new feature, it’s often easier to either (1) quietly push it to production without fanfare or (2) create a feature flag and monitor how users interact with it. This is a safer route than launching because it allows us to collect real user interaction data, albeit at a smaller scale than a “launched” feature, without the risk of promoting a feature that doesn’t live up to its purported benefits (which we’ve done before. Didn’t feel great.)
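For the feature-flag flavor of a quiet release, a minimal sketch looks something like the following. The flag name, rollout percentage, and analytics hook are all illustrative assumptions, not our actual implementation.

```typescript
import { createHash } from "crypto";

// Deterministically bucket a user into a percentage rollout: hashing
// flag + userId gives each user a stable bucket per flag, so the same
// user always sees the same experience across sessions.
function isFlagEnabled(flag: string, userId: string, rolloutPercent: number): boolean {
  const digest = createHash("sha256").update(`${flag}:${userId}`).digest();
  return digest.readUInt32BE(0) % 100 < rolloutPercent;
}

function renderEditor(userId: string) {
  if (isFlagEnabled("smart-components-v2", userId, 10)) { // hypothetical flag, 10% rollout
    // track("Flag Exposed", { flag: "smart-components-v2" }); // hypothetical analytics call
    // ...render the quietly released experience
  } else {
    // ...render the existing experience
  }
}
```

Monitoring interaction events from the flagged cohort then tells us whether to widen the rollout or pull the feature back.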
5. Measure & monitor
By definition, problems worth solving are reflected in your data (if they’re not, they’re likely not worth solving). When shipping a new feature or improvement, we define what success looks like and set up a tracking board in Mixpanel. To start, we use the same basic measurement for almost all features:
Discovery — How many first-time users did the feature have?
Utilization — How many repeat users did the feature have?
Impact — What measurable effect did it have on our core business KPIs?
Initially, measuring anything beyond these feels like overkill. Why? Because most useful metrics are derivatives of these three. Limiting our initial analysis to three metrics keeps us from falling down the metric rabbit hole.
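On the instrumentation side, all three can be derived from one consistently named event. Here's a minimal sketch using the mixpanel-browser SDK; the "Feature Used" event name and its properties are examples made up for illustration, not our actual schema.

```typescript
import mixpanel from "mixpanel-browser";

mixpanel.init("YOUR_PROJECT_TOKEN"); // placeholder project token

// One event powers all three metrics: Discovery (unique users whose first
// "Feature Used" falls in the window), Utilization (users with repeat uses),
// and Impact (correlating this event against core KPI events in reports).
export function trackFeatureUse(feature: string, context: Record<string, unknown> = {}) {
  mixpanel.track("Feature Used", { feature, ...context });
  // Stamp the user's first-ever touch of the feature, so Discovery can also
  // be read straight off the user profile.
  mixpanel.people.set_once({ [`first_used_${feature}`]: new Date().toISOString() });
}

// e.g., trackFeatureUse("chrome_extension_import", { source: "project_board" });
```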
Post-release, we closely watch the three primary metrics, only delving deeper if the feature is underperforming. Rather than immediately create more reports, we review user sessions of:
Users who interacted with the feature, and
Users who should have interacted with it but didn't³.
This is typically the fastest path to uncover friction points and errors that matter most. Every issue we discover gets documented for fast follow-up. These items become the potential inputs for the next round of this cycle.
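In code terms, that second group is just a set difference: users who reached the feature's expected context (see footnote 3) minus users who actually used it. A toy sketch, with made-up cohorts:

```typescript
// The "should have used it but didn't" cohort is the set difference between
// users who hit the feature's expected context and users who used the feature.
function missedUsers(reachedContext: Set<string>, usedFeature: Set<string>): string[] {
  return [...reachedContext].filter((id) => !usedFeature.has(id));
}

// Toy data: in practice, the sets would come from an analytics export,
// e.g. the Mixpanel export sketch earlier in this post.
const openedProjectBoard = new Set(["u1", "u2", "u3", "u4"]);
const importedScreenshot = new Set(["u2", "u4"]);
console.log(missedUsers(openedProjectBoard, importedScreenshot)); // ["u1", "u3"]
```

The resulting IDs become the session-replay watchlist for that second group.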
A perpetual work in progress
Our process isn’t about following a specific framework, it’s about maintaining momentum and realizing our greater vision for Visily. As our team and product evolve, so too will this process. And that’s the point: we’ll uncover better, faster ways to build a product that solves the problems we care about. In the end, that’s what matters most.
1. We differentiate between "release" and "launch": the former is simply pushing an update to production without necessarily any additional fanfare (no emails, social posts, etc.); in contrast, "launching" something at Visily means there's active promotion of it: it appears in release notes, in-app pop-ups, etc. This distinction is important for us.
2. Because our process is informal, we don't often declare internally when we're moving from one step to the next. As such, it's not always clear where one step ends and another begins; it's a fluid, organic experience.
3. Because solutions are a response to issues that specific types of users have, we have a good idea of the contexts in which the feature should be used.