One morning, I was reviewing a Hotjar report and discovered a user session lasting 45 minutes. That's not long for some products, but we had just launched version 1.0 of Collimator, and it was the longest session I had seen by far. Curious to see how someone might be using our tool for real work, I watched the session playback. At first I saw progress, but as the session continued, the user's actions became erratic: searching, hesitating, backtracking. Despite having built something, they never clicked the "Run Simulation" button. Why not? And how many others were having similar issues?
Our product at Collimator is a modeling and simulation tool that lets engineers translate equations into visual block diagrams representing a system to be built. To validate their models, engineers run simulations and analyze the outputs. Running a simulation is the crucial step: if users aren't running simulations, they're either distracted, stuck on modeling, or unable to figure out how to run one.
I reviewed additional sessions and found many more users who weren't running any simulations. I worked with the engineering team to check our analytics, which revealed that most sessions fell into this group. We decided to track this more closely, but our existing metrics failed to paint a complete picture, so I turned to hypothesis testing.
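The check itself was simple in spirit: bucket each session by whether it ever recorded a simulation run. A minimal sketch of that segmentation, assuming a flat event log; the field and event names here (`sessionId`, `"run_simulation"`) are hypothetical stand-ins for our actual instrumentation:

```typescript
// Sketch of the session segmentation over a flat event log.
// Names are illustrative, not our real analytics schema.
interface AnalyticsEvent {
  sessionId: string;
  name: string; // e.g. "add_block", "run_simulation"
  timestamp: number;
}

function sessionsWithoutSimulation(events: AnalyticsEvent[]): Set<string> {
  const allSessions = new Set<string>();
  const ranSimulation = new Set<string>();
  for (const e of events) {
    allSessions.add(e.sessionId);
    if (e.name === "run_simulation") ranSimulation.add(e.sessionId);
  }
  // Keep only the sessions that never fired a simulation run.
  return new Set([...allSessions].filter((id) => !ranSimulation.has(id)));
}
```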
I had designed the "Run Simulation" button after the "Play" button of a video player; we eventually wanted to allow real-time playback, pause, rewind, and so on of simulation outputs. Early tests with experienced engineers had shown the icon to be intuitive and in line with competitors' approaches, but maybe the metaphor wasn't clear enough for others.
Among the sessions I had reviewed was a group of users who logged in, explored our block library, and quickly left, likely because the specific blocks they needed weren't available yet. This suggested that some sessions weren't relevant to the simulation issue but pointed to other needs. I noted the issue and continued my analysis.
I also considered that users might be struggling to run simulations because they couldn't find clear instructions. A quick review showed that although the documentation was comprehensive and accessible through multiple pathways, it still might not have been meeting user needs. Unfortunately, our analytics again failed to show how effectively users were finding and using these resources.
I reviewed my collection of hypotheses and ranked them by likelihood of solving the problem and by level of effort. I determined that hypothesis 2 was probably unrelated: users who logged in and clicked or scrolled around a bit but didn't build anything were having issues other than the one I was trying to address. I made a note to investigate further later.
The next simplest thing to try was changing the icon on the "Visualize Outputs" toggle. The existing icon was meant to resemble both an oscilloscope screen and a speech bubble, and in the rush to launch I hadn't fully validated its effectiveness. To test it, I quickly sketched alternative icon concepts and presented them to our engineers, asking questions like, "What does this icon look like to you? If you saw it on a button, what action would it suggest?" None of the alternatives proved more intuitive than the original design, so I moved on to the last hypothesis.
If the icon wasn't the issue, could better guidance help? I implemented context-sensitive prompts that guided users through the simulation process exactly when they needed it. Delivered as tooltips and short video explanations, the prompts appeared after a short delay when users seemed stuck, nudging them toward actions like running a simulation. This adjustment led to a 4x increase in simulation runs over the following two weeks. It wasn't a complete solution, but it was enough to deprioritize the issue for the time being.
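Mechanically, the trigger amounted to an inactivity timer gated on model state: if a user had placed blocks but hadn't run a simulation within some idle window, show a prompt. A rough sketch along those lines, with hypothetical names (`installStuckNudge`, `showNudge`, `IDLE_MS`) standing in for the real implementation:

```typescript
// Rough sketch of the stuck-user nudge: an inactivity timer that fires a
// contextual prompt. All names here are illustrative stand-ins.
const IDLE_MS = 30_000; // how long without interaction counts as "stuck"

interface AppState {
  hasPlacedBlocks: boolean;  // the user has built something
  hasRunSimulation: boolean; // the user has clicked Run Simulation
}

function installStuckNudge(
  appState: AppState,
  showNudge: (message: string) => void,
) {
  let timer: ReturnType<typeof setTimeout> | undefined;

  const reset = () => {
    clearTimeout(timer);
    timer = setTimeout(() => {
      // Only nudge users who built a model but never ran it.
      if (appState.hasPlacedBlocks && !appState.hasRunSimulation) {
        showNudge("Ready to test your model? Click Run Simulation.");
      }
    }, IDLE_MS);
  };

  // Any interaction restarts the idle clock.
  for (const evt of ["mousemove", "keydown", "click"]) {
    window.addEventListener(evt, reset);
  }
  reset();
}
```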
A few weeks later, I revisited the metrics and found that the problem of users struggling to run simulations had largely disappeared, even though our onboarding hints had broken (which I promptly reported as a bug). It turned out that user-generated tutorials on YouTube had filled the gap, becoming the go-to resource for new users. We reached out to several of these creators to form ongoing partnerships. Not only did we help improve the quality of their content, we also received valuable feedback on our product and reports of unmet needs in the community. It also signaled that even though our platform was young, people were already talking about it, and that we should be too.